We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III – PySpark Developer at JPMorgan Chase within the Consumer and Community Bank – Cards Technology Team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You will leverage your deep technical expertise and problem-solving capabilities to drive significant business impact and tackle a diverse array of challenges spanning multiple data pipelines, data architectures, and other data consumers. You are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Designs and delivers trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
- Defines database backup, recovery, and archiving strategy
- Designs and develops data pipelines to ingest, store, and process data from multiple sources (see the PySpark sketch after this list)
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, opportunity, inclusion, and respect
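As an illustration of the pipeline work described above, here is a minimal PySpark sketch of an ingest-transform-store flow. It is a sketch only, not code from the team: the bucket names, paths, and column names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("card-transactions-ingest").getOrCreate()

# Ingest raw JSON events from a hypothetical S3 landing zone
raw = spark.read.json("s3://example-landing-zone/card-transactions/")

# Light cleansing: drop malformed rows, derive a partition date
cleaned = (
    raw.dropna(subset=["transaction_id", "amount"])
       .withColumn("txn_date", F.to_date("event_timestamp"))
)

# Store as partitioned Parquet in a curated zone for downstream analytics
(cleaned.write
        .mode("append")
        .partitionBy("txn_date")
        .parquet("s3://example-curated-zone/card-transactions/"))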
Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 3+ years of applied experience
- Proficiency in coding in Java and PySpark, and experience with AWS cloud technologies, including S3
- Experience with SQL-based technologies (e.g., MySQL, Oracle)
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Cloud implementation experience with AWS, including:
  - AWS data services: proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Data Catalog, Athena, Kinesis or MSK, and Airflow or Lambda + Step Functions + EventBridge
  - Data de/serialization: expertise in at least two of the following formats: Parquet, Iceberg, Avro, JSON-LD (a short sketch follows this list)
  - AWS data security: good understanding of security concepts such as Lake Formation permissions, IAM, service roles, encryption, KMS, and Secrets Manager
- Experience with statistical data analysis, the ability to determine appropriate tools and data patterns to perform analysis, and proficiency in automation and continuous delivery methods
- Overall knowledge of the Software Development Life Cycle and a solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile)
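To make the de/serialization point above concrete, the following sketch writes the same DataFrame in two of the named formats, Parquet and Avro. Paths are hypothetical, and writing Avro assumes the spark-avro package is on the classpath (e.g., launched with --packages org.apache.spark:spark-avro_2.12:<version>).

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "approved"), (2, "declined")],
    ["transaction_id", "status"],
)

# Parquet: columnar, well suited to analytical scans (Athena, Glue, EMR)
df.write.mode("overwrite").parquet("s3://example-bucket/demo/parquet/")

# Avro: row-oriented with schema evolution, common on streaming paths (Kinesis/MSK)
df.write.mode("overwrite").format("avro").save("s3://example-bucket/demo/avro/")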
Preferred qualifications, capabilities, and skills
- Snowflake knowledge or experience
- In-depth knowledge of the financial services industry and its IT systems
- Experience building data lakes, data platforms, and data frameworks, and building/designing Data as a Service APIs (see the sketch after this list)
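For the Snowflake item above, this is a hedged sketch of reading a table from PySpark with the Snowflake Spark connector. Every connection value is a placeholder, and the connector and JDBC driver JARs are assumed to be on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-read").getOrCreate()

# All values below are hypothetical; in practice, pull credentials from
# AWS Secrets Manager rather than hard-coding them.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "example_user",
    "sfPassword": "example_password",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "COMPUTE_WH",
}

df = (spark.read
          .format("net.snowflake.spark.snowflake")
          .options(**sf_options)
          .option("dbtable", "CARD_TRANSACTIONS")  # hypothetical table
          .load())
df.show()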