Wayfair: Foundational Data Architect, Service Center
4 Copley Place - Floor 7
Boston, MA 02116


Wayfair Analytics is the engine that powers an enterprise obsessed with data. We move fast, iterating quickly on big business problems. We work smart, applying technology to unlock insights and provide outsized value to our customers. We swing big, knowing our customers won't benefit from micro-optimizations. Leveraging the largest data set for products sold in the Home space, this team treats data as an asset and determines how to maximize its business value and extend our competitive advantage.

The Foundational Data Architect is a highly impactful and strategic position aligned to support the growth of our Customer Service organization. This role will architect and build the core Foundational Data for this strategic business unit.

You will be responsible for driving the Wayfair Customer Service Transformation. You will be instrumental in building out the underlying data layer that ensures Wayfair captures everything involved in the customer experience, both on-site and during a support call or chat: the full life cycle of an incident, what that incident entails, and ultimately its resolution.

What you'll do:

  • Design the optimal data architecture for our Customer Service Foundational Data Layer.
  • Build, schedule, and manage data movement from application origin through batch and streaming systems to make it available for key business decisions.
  • Develop a robust, sustainable plan for the data area going forward, including projecting space requirements, procuring technology, and partnering with engineering on improvements to the data; experience with datasets of 100TB+ is highly desired.
  • Ensure data products are aligned with the rapidly evolving needs of a multi-billion-dollar business.
  • Provide consulting to application and data engineering organizations on best practices for designing applications to enable easy analytics; be an expert on large-scale data processing.

Who you are:

  • A true expert on big data, comfortable working with datasets of varying latencies and sizes across disparate platforms.
  • Excited about unlocking the valuable data hidden in inaccessible raw tables and logs.
  • Attentive to detail, with a relentless focus on accuracy.
  • Excited to collaborate with partners in business reporting and engineering to determine the source of truth for key business metrics.
  • Familiar with distributed data storage systems and the tradeoffs inherent in each.

What you'll need:

  • Skills required: data modeling, extensive experience with SQL and Python, and exposure to cloud computing (AWS, Azure, or Google Cloud).
  • Experience with one or more higher-level JVM-based data processing tools such as Beam, Dataflow, Spark or Flink.
  • Experience designing and implementing different data warehousing technologies and approaches, such as RDBMS vs. NoSQL and Kimball vs. Inmon, and knowing when to apply each.
  • Experience scheduling, structuring, and owning data transformation jobs that span multiple systems and have high requirements for volume handled, duration, or timing.
  • Prior projects optimizing storage and access of high-volume, heterogeneous data on distributed systems such as Hadoop, including familiarity with various data storage media and the tradeoffs of each.
  • Prior data infrastructure experience in support of a service-driven organization is a plus but not essential.
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Analytics, Mathematics, Statistics, Information Systems, Economics, Management, or another quantitative field, with a strong academic record.