TVision is the leader in TV engagement metrics. We measure what was previously unmeasurable: how people actually watch TV. We enable the media industry - advertisers, networks, and technology partners alike - to reduce waste and drive greater, more efficient marketing results.
Utilizing cutting-edge technology, TVision goes beyond traditional TV data to measure presence in the room, co-viewing, and attention, producing best-in-class TV data. This allows us to provide critical data that informs the decision-making of a $100B/year industry.
Our growth and innovation have been recognized by The New York Times, Advertising Age, AdWeek, Business Insider, MediaPost, and Forbes. We were selected as a Best Place to Work 2019 by Built in Boston, and were named one of the top companies to watch in advertising technology by Business Insider in December 2019.
Measurement and data analysis lie at the foundation of TVision's data products, but those measurements need to be assembled with context to turn them into actionable insights for our customers. In this role, you will work on scalable, high-performance systems that collect and analyze data from thousands of sensors in our panel households across the US and around the world.
On any given day, projects you might work on include:
- bringing a new TV metadata provider into our data pipeline;
- extending our backend APIs to support new measurement sensors;
- speeding up critical database queries in the data warehouses that generate our product;
- adding a refined analysis or a completely new model to one of the Spark jobs that do our computational heavy lifting;
- improving the monitoring and devops automation that let us do all these things and still finish work on time!
We primarily develop in Python and Haskell on the backend, use Postgres and Redshift as our main database technologies, and rely on Spark clusters to apply complex algorithms to large datasets in a timely fashion. All of our backend infrastructure resides in AWS. However, regardless of your specific technology background, we hire great engineers who love to build and are willing to learn.
What we're looking for:
- BS/MS in Computer Science or a closely related discipline (math, computer engineering).
- A passion for writing good, clean, and reliable code.
- Substantial experience with a broad range of database systems.
- Experience with any of the following specific technologies is a plus, but not a requirement:
  - Apache Spark
  - Column-oriented SQL data warehouses such as Snowflake or Redshift
  - Workflow orchestration tools such as Apache Airflow
  - Machine learning frameworks
  - AWS devops tools and techniques
- Strong communication skills with both technical and non-technical team members.
- Collaborative and enthusiastic approach to software development.
- Strong sense of project ownership and personal responsibility.
What we offer:
- Competitive pay and stock options
- Your choice of comprehensive health benefits for you and your family (health, dental, vision)
- Short- and long-term disability, life, and AD&D insurance
- FSA/HSA accounts
- 401(k) retirement plan options
- Pre-tax commuter benefits
- Monthly phone reimbursement
- Unlimited PTO and paid holidays
- Gym membership discounts
- Financial support for ongoing professional development
- Casual dress and fun office atmosphere