You will join a newly forming team charged with building our new product in the Provider Data Management space. This is an incredible opportunity to help solve a significant problem in the healthcare industry. Customer empathy and user-centric design will be a priority. The ideal candidate will be excited about working on new product development, comfortable pushing the envelope and challenging the status quo, able to set high standards for themselves and others, and effective working with ambiguity.
HealthEdge is in search of a Principal Data Engineer to join our Cloud Data Engineering Team. The role focuses on designing, implementing, and testing cutting-edge technical data solutions. The Data Engineering team collaborates with business and technical partners to establish and maintain high-value data.
As a Principal Data Engineer, you will be responsible for implementing data pipelines supporting our Cloud SaaS application. You will analyze, design, and develop data pipeline jobs, data flows, and data mappings, and configure data quality and validation rules. This role requires an understanding of integration architecture design patterns and experience implementing real-time and batch integrations, as well as configuring and maintaining integration jobs.
What you will do:
- Contribute to the technical design, building, and testing of cloud data services, ensuring they meet performance, scalability, reliability, and security requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with data asset-related technology approved by the architecture team to move, transform, and validate data between systems and the Azure cloud.
- Create designs following established data integration patterns, and establish new patterns to meet business needs.
- Provide data integration subject matter expertise to the team covering ETL/ELT, replication, streaming, and virtualization; explain complex concepts to others and establish the business value of solutions.
- Conduct design, code, and test case reviews to set the bar for delivery quality, and provide guidance to the engineering team in designing data integration applications.
- Maintain data pipeline Agile stories with delivery dates to ensure Agile processes are adhered to.
- Contribute to a positive culture of continuous improvement and operational excellence by identifying and implementing process improvements where appropriate.
- Collaborate with architects, business users, system analysts, developers, and ecosystem partners to bring innovative ideas from conception to launch at scale.
- Work in an iterative/Agile environment and be a strong team player.
- Adopt emerging technologies and launch products / services faster with rapid prototyping & iterative methods.
- Take prototypes to production, following engineering best practices.
What you will bring:
- In-depth knowledge of database structures and data warehousing principles, and experience with SQL across different platforms, including structured and unstructured data from various enterprise sources.
- Understanding of batch and real-time data integration patterns, including data pipeline experience (Informatica Data Engineering or Cloud, Talend, AWS Glue, Azure Data Factory, etc.), data replication, data streaming, and virtualization.
- Experience writing and executing complex SQL queries.
- Working knowledge of enterprise data architecture concepts and frameworks.
- Proven technical skills in data modeling, data management, data pipelines, ETL/ELT and data retention concepts and practices.
- Extensive hands-on experience implementing data migration and data processing using Azure services: Serverless Architecture, Azure Storage, Azure SQL DB/DW, Data Factory, Azure Stream Analytics, Synapse Analytics, HDInsight, Databricks, Cosmos DB, ML Studio, AI/ML, Azure Functions, ARM Templates, Azure DevOps, CI/CD etc.
- Extensive hands-on experience implementing and tuning Confluent Kafka and related services.
- Extensive knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores, and experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
- Experience in end-to-end data engineering solutions.
- A strong sense of ownership of your deliverables.
- Initiative and passion for finding solutions to ambiguous problems and ability to work in difficult situations and across organizational boundaries.
- Strong written and verbal communication skills, with the ability to convey results to audiences at all levels.
- Familiarity with the latest technologies, trends, standards, and products, and their applicability.
- Ability to effectively adapt to rapid technological and business change while maintaining enthusiasm and displaying sound judgment.
- B.S. in computer science or equivalent experience.
- 7+ years of hands-on experience in data engineering or similar role.
- 3+ years of healthcare domain experience, preferably with expertise in provider data.
- Familiarity with the HL7 FHIR data standard.
- Experience with Java and client-server applications.