Let me tell you about an artificial intelligence (AI) company that you probably have never heard of, but you will. As a matter of fact, this company is the world’s largest independent AI company and it’s headquartered in the Boston area. We operate at the intersection of customer experience and AI – two of today’s hottest and most dynamic industries.
Interactions’ mission is to create amazing customer experiences by advancing AI technology that understands and engages on a human level. The human element of what we do not only relates to how we differentiate our AI technology; more importantly, it informs and guides our focus on our most valuable asset: our employees. We endeavor to create opportunities for our employees to advance their skills, their interests, their passions, their careers, and their lives. Like all companies, we’re not perfect, but we are committed to continually improving our employee value proposition, one that centers on competitiveness, flexibility, and an appreciation for individuality.
For prospective employees, if that sounds challenging and exciting, we’d love to talk to you.
The Senior Database Engineer is responsible for designing, implementing, maintaining and managing the big data infrastructure for our Virtual Assistant Platforms.
Essential Job Functions:
- Design, implement, and install distributed big data infrastructure for high-volume, high-velocity, multi-tiered data storage with high availability and fault tolerance.
- Install and maintain clustered big data environments based on Hadoop and related technologies.
- Design, construct, implement, and support data warehouse databases for optimal reporting and analytics.
- Optimize database operations for maximum performance.
- Work with engineering teams on near-real-time data pipeline and streaming technologies.
- Work with the Operations team to perform routine and periodic database maintenance, and develop and implement automated maintenance, compliance, and archival strategies.
- Design and implement security strategies that ensure role-based access, auditing, and authenticated access to data for internal users, external users, and automated tools.
- Maintain the database infrastructure across multiple locations to handle the high-throughput use cases that are critical to the Platform.
Other Duties and Responsibilities:
Ability to demonstrate Interactions Values of:
- Being passionate about customer service
- Obsessing over our customers’ success
- Respecting each other
- Creating opportunity
- Embracing disruption
- Doing what we say we will do
Preparation, Knowledge, Skills and Abilities:
- Bachelor’s Degree in Computer Science or equivalent.
- Experience managing Hadoop clusters, including provisioning new nodes, managing alerts, tuning performance, and managing security.
- Expertise with Hadoop ecosystem internals (HDFS, Hive, HBase, Oozie, Pig, Sqoop, etc.), including storage, tuning, and replication.
- Solid understanding of query optimization in database technologies built on top of Hadoop and related technologies.
- Experience delivering data ingestion/ETL solutions at scale within a Hadoop environment, including logging, monitoring, debugging, and security.
- Experience with distributed processing technologies such as MapReduce and Spark, including internals like scheduling and resource management.
- Experience in data modeling and schema design.
- Practical experience implementing data warehouse architectures.
- Experience with data warehouse front-end and reporting tools.
- Experience with database sizing, server specification, and network architecture specification.
- Virtualization and data migration experience.
- Experience designing and architecting for high availability, redundancy, and fault tolerance.
- Basic systems administration skills on Linux-based systems.
- Scripting experience with Bash, Perl, or Python.
- Prior experience managing big data platforms in both colocation and public cloud environments.
- Prior experience with PostgreSQL 9.x is desirable.