Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.
As a Lead Data Engineer at JPMorganChase within the Payments Technology team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.
Job Responsibilities
- Lead cross-functional collaboration: Partner with stakeholders across all JPMorgan lines of business and functions to deliver robust software and data engineering solutions.
- Drive innovation and architecture: Spearhead the experimentation, design, development, and production deployment of advanced data pipelines, data services, and data platforms that directly support business objectives.
- Architect scalable data solutions: Design and implement highly scalable, efficient, and reliable data processing pipelines, leveraging advanced analytics to generate actionable business insights and optimize outcomes.
- Integrate security architecture: Proactively address opportunities to unify physical, IT, and data security architectures, ensuring comprehensive access management and data protection.
Required Qualifications, Capabilities, and Skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Extensive experience in data technologies, with formal training or certification in large-scale technology program management.
- Advanced programming expertise: Proficient in Java and Python, with a strong track record of building and optimizing data frameworks and solutions.
- Comprehensive data lifecycle knowledge: Deep experience in architecting and managing data frameworks, including data lakes, and overseeing the full data lifecycle.
- Batch and real-time processing: Proven expertise in developing batch and real-time data processing solutions using Spark or Flink.
- Cloud data processing: Hands-on experience with AWS Glue and EMR for scalable data processing tasks.
- Databricks proficiency: Demonstrated ability to leverage Databricks for advanced analytics and data engineering.
- Service development and deployment: Skilled in building services using Spring Boot or Flask, and deploying them on AWS EKS or Kubernetes.
- Database management: Strong working knowledge of both relational and NoSQL databases, with experience in ETL pipeline development for batch and real-time processing, data warehousing, and NoSQL solutions.
Preferred Qualifications, Capabilities, and Skills
- Cloud and containerization: Expertise in Amazon Web Services (AWS), Docker, and Kubernetes for cloud-native and containerized data solutions.
- Big data technologies: Advanced experience with Hadoop, Spark, and Kafka for distributed data processing and streaming.
- Distributed systems: Proven ability to design and develop distributed systems for large-scale data engineering applications.