Columbus, OH, United States
Lead Software Engineer: Data Engineer/PySpark/AWS/Databricks

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As a Lead Data Engineer at JPMorgan Chase within the Corporate Sector, you will be part of an agile team dedicated to enhancing, designing, and delivering the software components of the firm’s cutting-edge technology products in a secure, stable, and scalable manner. In your role as an emerging member of a software engineering team, you will execute software solutions by designing, developing, and technically troubleshooting various components within a technical product, application, or system, while acquiring the skills and experience necessary for growth in your position.

Job responsibilities

Execute creative software solutions, design, development, and technical troubleshooting, thinking beyond conventional approaches to build effective solutions and resolve technical challenges.
Develop secure, high-quality production code, and review and debug code written by others to ensure optimal performance and security.
Identify opportunities to eliminate or automate the remediation of recurring issues, enhancing the overall operational stability of software applications and systems.
Lead evaluation sessions with internal teams to assess architectural designs, technical credentials, and their applicability within existing systems and information architecture.
Lead communities of practice across Software Engineering to promote awareness and adoption of new and cutting-edge technologies.
Collaborate with business stakeholders to understand requirements and design appropriate solutions, producing architecture and design artifacts for complex applications.
Implement robust monitoring and alerting systems to proactively identify and address data ingestion issues, optimizing performance and throughput.
Implement data quality checks and validation processes to ensure the accuracy and reliability of data.
Design and implement scalable data frameworks to manage end-to-end data pipelines for Financial Risk data analytics.
Share and develop best practices with Platform and Architecture teams to enhance data pipeline frameworks and modernize the finance data analytics platform.
Gather, analyze, and synthesize large, diverse data sets to continuously improve capabilities and user experiences, leveraging data-driven insights.

Required qualifications, capabilities, and skills

Formal training or certification on software engineering concepts and 5+ years of applied experience.
Comprehensive understanding of all aspects of the Software Development Life Cycle.
Advanced proficiency in data processing frameworks and tools, including Parquet, Iceberg, PySpark, Databricks, Glue, Lambda, EMR, ECS, and Aurora.
Advanced knowledge of agile methodologies, including CI/CD, application resiliency, and security practices.
Proficiency in programming languages, and experience in Apache Spark for data processing and application development.
Experience with scheduling tools such as Autosys or Airflow to automate and manage job scheduling for efficient workflow execution.
Hands-on experience in system design, application development, testing, and ensuring operational stability.
Demonstrated expertise in software applications and technical processes within specialized disciplines such as cloud computing, artificial intelligence, and machine learning.
Proficiency in automation and continuous delivery methods, enhancing efficiency and reliability.
In-depth understanding of the financial services industry and its IT systems.

Preferred qualifications, capabilities, and skills

Expertise in relational databases such as Oracle or SQL Server.
Skilled in writing Oracle SQL queries utilizing DML, DDL, and PL/SQL.
Possession of an AWS certification, demonstrating cloud expertise.
Familiarity with Databricks for advanced data analytics and processing.