You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.
As a Software Engineer II - Big Data/PySpark at JPMorgan Chase within the Consumer and Community Banking - Customer Identity and Authentication team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
Implement ETL transformations on big data platforms, utilizing NoSQL databases like MongoDB, DynamoDB, and Cassandra.
Utilize Python for data processing and transformation tasks, ensuring efficient and reliable data workflows.
Work hands-on with Spark to manage and process large datasets efficiently (see the sketch after this list).
Implement data orchestration and workflow automation using Apache Airflow.
Apply understanding of Event-Driven Architecture (EDA) and Event Streaming, with exposure to Apache Kafka.
Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure.
Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS.
Implement AWS enterprise solutions, including Redshift, S3, EC2, Data Pipeline, and EMR, to enhance data processing capabilities.
Develop and optimize data models to support business intelligence and analytics requirements.
Work with graph databases to model and query complex relationships within data.
Create and maintain interactive and insightful reports and dashboards using Tableau to support data-driven decision-making.
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
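As an illustration of the hands-on Spark and ETL work listed above, below is a minimal PySpark sketch that reads raw events from S3, applies basic cleansing, and writes a partitioned, curated dataset. It is illustrative only, not part of the role description; the bucket paths, column names, and transformations are hypothetical.

```python
# Minimal illustrative PySpark ETL sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-events-etl").getOrCreate()

# Read raw JSON events from a hypothetical S3 landing zone.
raw = spark.read.json("s3://example-bucket/raw/customer_events/")

# Basic cleansing and enrichment: drop incomplete records, normalize the
# timestamp, and derive a date column for partitioning.
curated = (
    raw.dropna(subset=["customer_id", "event_type"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write the curated dataset as partitioned Parquet for downstream analytics.
(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/customer_events/"))
```

On EMR, a job of this kind would typically be packaged as a script and submitted via spark-submit; the sketch is purely indicative of the type of work involved.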
Required qualifications, capabilities and skills
Formal training or certification on software engineering concepts and 2+ years of applied experience
Strong programming skills in Python, with basic knowledge of Java
Experience with Apache Airflow for data orchestration and workflow management (a minimal DAG sketch follows this list)
Familiarity with container orchestration platforms such as Kubernetes (EKS) and Amazon ECS
Experience with Terraform for infrastructure as code and cloud resource management
Proficiency in data modeling techniques and best practices
Exposure to graph databases and experience in modeling and querying graph data
Experience in creating reports and dashboards using Tableau
Experience with AWS enterprise implementations, including Redshift, S3, EC2, Data Pipeline, and EMR
Hands-on experience with Spark and managing large datasets
Experience in implementing ETL transformations on big data platforms, particularly with NoSQL databases (MongoDB, DynamoDB, Cassandra)
Understanding of Event-Driven Architecture (EDA) and Event Streaming, with exposure to Apache Kafka
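To make the Airflow orchestration requirement above concrete, here is a minimal sketch of a DAG that schedules a Spark job such as the ETL step sketched earlier. The DAG id, schedule, connection id, and script path are hypothetical, and the SparkSubmitOperator assumes the apache-airflow-providers-apache-spark package is installed.

```python
# Minimal illustrative Airflow 2.x DAG (hypothetical ids and paths).
from datetime import datetime

from airflow import DAG
# Requires the apache-airflow-providers-apache-spark package.
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="customer_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the PySpark ETL script; the connection and script location
    # are placeholders, not actual infrastructure details.
    run_etl = SparkSubmitOperator(
        task_id="run_customer_events_etl",
        application="/opt/jobs/customer_events_etl.py",
        conn_id="spark_default",
    )
```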
Preferred qualifications, capabilities and skills
Strong analytical and problem-solving skills, with attention to detail
Ability to work independently and collaboratively in a team environment
Good communication skills, with the ability to convey technical concepts to non-technical stakeholders
A proactive approach to learning and adapting to new technologies and methodologies