Embrace this pivotal role as an essential member of a high-performing team dedicated to reaching new heights in data engineering. Your contributions will be instrumental in shaping the future of one of the world’s largest and most influential companies.
As a Senior Lead Data Engineer at JPMorgan Chase within the Commercial and Investment Bank's Markets Tech Team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. Leverage your deep technical expertise and problem-solving capabilities to drive significant business impact and tackle a diverse array of challenges spanning multiple data pipelines, data architectures, and other data consumers.
Job responsibilities
- Design and build hybrid on-prem and public cloud data platform solutions
- Design and build end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
- Develop and own data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers
- Implement and manage modern data lake and Lakehouse architectures, including Apache Iceberg table formats
- Implement interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
- Establish and maintain end-to-end data lineage to support observability, impact analysis, and regulatory requirements
- Implement data quality validation and monitoring using frameworks such as Great Expectations
- Provide recommendations and insight on data management and governance procedures and intricacies applicable to the acquisition, maintenance, validation, and utilization of data; advise junior engineers and technologists
- Design and deliver trusted data collection, storage, access, and analytics platform solutions in a secure, stable, and scalable way
- Define database back-up, recovery, and archiving strategies and approve data analysis tools and processes
- Create functional and technical documentation supporting best practices; evaluate and report on access control processes to determine the effectiveness of data asset security
Required qualifications, capabilities, and skills
- Formal training or certification on computer science concepts or equivalent, and 5+ years of applied experience
- Hands-on experience building and operating batch and streaming data pipelines at scale
- Experience with Apache Iceberg and modern table formats in Lakehouse environments
- Strong proficiency with Databricks, Snowflake, Amazon Redshift, and AWS data services such as Glue and Lake Formation
- Experience implementing data lineage, data quality, and data observability frameworks
- Working experience with both relational and NoSQL databases
- Advanced understanding of database back-up, recovery, and archiving strategies
- Experience presenting and delivering visual data