Starry is proud to be an Equal Opportunity workplace. Just like the internet service we provide, we do not discriminate. We welcome people from all over the world to share their knowledge and perspectives. At Starry, you can discover the many careers and opportunities that are made possible when you connect people to the limitless possibilities of the internet.
Our mission focuses on two things. First, we're making the experience of accessing the internet simple, transparent, and delightful. Second, we're bringing that experience to underserved communities around the world. We approach our mission with cutting-edge wireless technology, customer service designed to delight, and a culture of innovation and intellectual curiosity.
About the Data Team:
The Data Engineering team is part of a larger Analytics organization that is responsible for encouraging data-driven decision making throughout Starry. Within Analytics, the Data Engineering team ensures that tooling and processes that support data pipelines, internal and external applications, and analytical workloads are easy-to-use, efficient, and highly performant.
Some of our more exciting projects include building data pipelines that model signal strength across the United States, making Airflow and Spark available to the rest of Starry as tools, and designing a streaming data platform that can provide real-time information about our network. We have a particular focus on geographic information system (GIS)-specific tooling, and we care about running our own applications and understanding and managing our own AWS infrastructure.
What you will be working on:
While our work and toolkit are ever-shifting, you should have experience with or be interested in using:
- An object-oriented programming language like Python or Java
- Shell scripting, Git, jq, awk, and a variety of other Unix utilities
- Various flavors of SQL, particularly those with GIS functionality
- Data warehouse and data lake architecture and design
- Workflow management tools like Airflow, or data processing frameworks like Beam, to manage ETL
- Stream processing platforms like Kafka or services like Kinesis
- Containerization with Docker or an equivalent
- Spark, dbt, and other data tools
- Data engineering-focused DevOps, observability, CI/CD, etc.
- API and backend application development
- Unit and integration testing data pipelines
Qualifications:
- 1+ years of data engineering experience, or 3+ years of software engineering experience
- Familiarity with SQL, profiling query performance, and designing database schemas
- Experience with batch ETL or backend bulk data operations
- Familiarity with CI/CD, Docker or similar, and testing data-intensive processes
Bonus points if you have...
- Streaming data experience with Kafka, Kinesis, RabbitMQ, etc.
- Orchestration experience with frameworks like Airflow
- Container experience with tools like Docker
- Geographic information systems (GIS) experience
- Experience contributing to open source software
Benefits:
- 100% employer-paid low-deductible health plan, dental plan, vision plan, AD&D, and life insurance
- 401(k) retirement plan and stock options
- 12 weeks of 100% paid parental leave for new mothers and fathers after six months of continuous employment
- Professional development assistance after six months of employment
- Catered meals on a weekly basis for employees working in the office
- Casual dress, community clubs, annual fitness reimbursement, stocked kitchen and other perks and discounts
Qualified applicants must be legally authorized for employment in the United States and must not require employer-sponsored work authorization now or in the future.
Disclaimer: This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee.