Manages and is responsible for the successful delivery of large-scale data structures, pipelines, and efficient Extract/Transform/Load (ETL) workflows. Acts as the data engineering team lead for large and complex projects involving multiple resources and tasks, providing individual mentoring in support of company objectives.
Designs and develops complex, large-scale data structures and pipelines to organize, collect, and standardize data, generating insights and addressing reporting needs.
Writes complex ETL (Extract/Transform/Load) processes, designs database systems, and develops tools for real-time and offline analytic processing.
Develops frameworks, standards, and reference material for architecture and associated products.
Designs data marts and data models to support Data Science and other internal customers.
Mentors junior team members and provides technical advice.
Applies knowledge of Aetna systems and products to consult and advise on additional efforts across multiple domains spanning the broader enterprise.
Collaborates with the data science team to transform data and integrate algorithms and models into highly available production systems.
Uses in-depth knowledge of Hadoop architecture and HDFS commands, along with experience designing and optimizing queries, to build scalable, modular, and efficient data pipelines.
Uses advanced programming skills in Python, Java, or another major language to build robust data pipelines and dynamic systems.
Integrates data from a variety of sources, ensuring adherence to data quality and accessibility standards.
Experiments with available tools and advises on new tools to determine the optimal solution given the requirements dictated by the model/use case.
Strong collaboration and communication skills within and across teams.
Ability to communicate technical ideas and results to non-technical clients in written and verbal form.
Proven ability to create innovative solutions to highly complex technical problems.
Ability to leverage multiple tools and programming languages to analyze and manipulate large data sets from disparate data sources.
Ability to understand and build complex systems and solve challenging analytical problems.
Advanced knowledge of Java, Python, Hive, Cassandra, Pig, MySQL, NoSQL, or similar technologies.
Advanced knowledge of Hadoop architecture and HDFS commands, and experience designing and optimizing queries against data in the HDFS environment.
Experience building and implementing data transformation and processing solutions.
In-depth knowledge of large-scale search applications and experience building high-volume data pipelines.
Experience with Bash shell scripts, UNIX utilities, and UNIX commands.
7 or more years of progressively complex related experience.
Master's degree or PhD preferred.
Bachelor's degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline.
At CVS Health, we are joined in a common purpose: helping people on their path to better health. We are working to transform health care through innovations that make quality care more accessible, easier to use, less expensive and patient-focused. Working together and organizing around the individual, we are pioneering a new approach to total health that puts people at the heart.
We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an equal opportunity and affirmative action employer. We do not discriminate in recruiting, hiring or promotion based on race, ethnicity, sex/gender, sexual orientation, gender identity or expression, age, disability or protected veteran status or on any other basis or characteristic prohibited by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities.