London, United Kingdom
Lead Software Engineer - Data Engineering

Following the successful launch of Chase UK in 2021, we’re a new team with a new mission. We’re creating products that solve real-world problems and put customers at the centre, all in an environment that nurtures skills and helps you realise your potential. Our team is key to our success. We’re people-first. We value collaboration, curiosity and commitment.

As a hands-on Lead Software Engineer at JPMorgan Chase within the International Consumer Bank, you are at the heart of this venture, focused on getting smart ideas into the hands of our customers. You have a curious mindset, thrive in collaborative squads, and are passionate about new technology. By nature you are also solution-oriented, commercially savvy and have a head for fintech. You'll work in tribes and squads that focus on specific products and projects, and depending on your strengths and interests, you'll have the opportunity to move between them.

While we’re looking for professional skills, culture is just as important to us. We understand that everyone is unique, and that diversity of thought, experience and background is what makes a good team great. By bringing people with different points of view together, we can represent everyone and truly reflect the communities we serve. This way, there's scope for you to make a huge difference: to us as a company, and to our clients and business partners around the world.

Job responsibilities

Architecture and implementation: Design and develop scalable and secure distributed architectures and solutions, focusing on data ingestion and processing, utilising appropriate cloud-native technologies and services.

Data pipeline development: Design, implement, and maintain data pipelines that efficiently collect, process, and store large volumes of data from various sources, ensuring data timeliness, quality, and completeness.

Security and compliance: Ensure that data solutions comply with relevant data residency and privacy regulations, and implement best practices for securing data at rest and in transit, in compliance with financial regulations and firm-wide policies.

Required qualifications, capabilities, and skills

Programming: Comfortable with Python and at least one JVM language (Java/Kotlin/Scala), including sound testing and code-review practices.

SQL expertise: Joins, aggregations, subqueries and window functions.
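
To illustrate the level expected, a minimal sketch of that kind of query using Python's built-in sqlite3 module; the payments table, columns and figures are invented for the example.

    # Aggregates, a subquery and a window function over an in-memory table.
    # Requires SQLite 3.25+ (bundled with recent Python) for window functions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE payments (customer_id INTEGER, paid_at TEXT, amount REAL);
        INSERT INTO payments VALUES
            (1, '2024-01-01', 10.0),
            (1, '2024-01-05', 25.0),
            (2, '2024-01-02', 40.0);
    """)

    # Each payment with the customer's running total, restricted to customers
    # whose overall spend exceeds the global average.
    rows = conn.execute("""
        SELECT customer_id,
               paid_at,
               amount,
               SUM(amount) OVER (PARTITION BY customer_id ORDER BY paid_at)
                   AS running_total
        FROM payments
        WHERE customer_id IN (
            SELECT customer_id
            FROM payments
            GROUP BY customer_id
            HAVING SUM(amount) > (SELECT AVG(amount) FROM payments)
        )
        ORDER BY customer_id, paid_at
    """).fetchall()

    for row in rows:
        print(row)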

Data pipelines: Design, build, and optimise production ETL/ELT pipelines (batch and streaming) using a popular framework (Spark, Flink, Dataflow, etc.).
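
As an illustration only, a compact batch ETL sketch in PySpark; the bucket paths, fields and business rules are placeholders, not a description of our pipelines.

    # Minimal extract-transform-load: raw JSON in, partitioned Parquet out.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_transactions_etl").getOrCreate()

    # Extract: read raw JSON events landed in object storage.
    raw = spark.read.json("s3a://example-landing/transactions/2024-06-01/")

    # Transform: basic cleansing and a daily aggregate per account.
    clean = (
        raw.dropDuplicates(["transaction_id"])
           .filter(F.col("amount").isNotNull())
           .withColumn("event_date", F.to_date("event_timestamp"))
    )
    daily = clean.groupBy("account_id", "event_date").agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )

    # Load: write partitioned Parquet for downstream consumers.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3a://example-curated/daily_account_totals/"
    )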

Streaming: Hands-on with Kafka (topics, keys, partitions, consumer groups), at-least-once semantics, and schema registry basics.
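
A small at-least-once consumer sketch using the kafka-python client; topic, group and broker names are placeholders, and the schema-registry piece is omitted.

    # At-least-once: process a record first, commit the offset only afterwards,
    # so a crash re-delivers rather than loses messages.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "payments.events",                  # topic
        bootstrap_servers="localhost:9092",
        group_id="payments-enricher",       # consumer group
        enable_auto_commit=False,           # commit manually after processing
        value_deserializer=lambda v: v.decode("utf-8"),
    )

    def process(record):
        # Placeholder for idempotent downstream processing (e.g. upsert by key).
        print(record.partition, record.offset, record.value)

    for record in consumer:                 # runs until interrupted
        process(record)
        consumer.commit()                   # commit-after-process = at-least-once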

Warehousing/lakehouse: Data modelling, partitioning and clustering. Hands-on with one of BigQuery, Snowflake, Databricks, etc., and with cloud storage or HDFS.
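
By way of example, a sketch of creating a day-partitioned, clustered table with the google-cloud-bigquery client; the project, dataset and field names are invented.

    # Partition by day on event_date and cluster by account_id so queries that
    # filter on both prune partitions and scan less data.
    from google.cloud import bigquery

    client = bigquery.Client()

    table = bigquery.Table(
        "example-project.analytics.daily_account_totals",
        schema=[
            bigquery.SchemaField("account_id", "STRING"),
            bigquery.SchemaField("event_date", "DATE"),
            bigquery.SchemaField("total_amount", "NUMERIC"),
        ],
    )
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="event_date",
    )
    table.clustering_fields = ["account_id"]

    client.create_table(table, exists_ok=True)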

Cloud: Production experience with at least one major cloud provider (GCP/AWS) using native data services and IAM basics. FinOps-aware with cost-effective design.

Reliability: Data quality checks, backfills, and SLIs incorporated into observability and reporting.
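
A minimal, framework-free sketch of the kind of post-load quality checks that could feed an SLI; the thresholds and field names are illustrative.

    # Run simple checks on one loaded batch and report pass/fail per check.
    def check_batch(rows, expected_min_rows=1000, max_null_rate=0.01):
        """Return a dict of data quality results for one loaded batch."""
        total = len(rows)
        nulls = sum(1 for r in rows if r.get("amount") is None)
        results = {
            "row_count_ok": total >= expected_min_rows,
            "null_rate_ok": (nulls / total if total else 1.0) <= max_null_rate,
        }
        results["batch_ok"] = all(results.values())
        return results

    # A failing batch would be flagged for investigation or backfill rather
    # than silently published downstream.
    print(check_batch([{"amount": 1.0}, {"amount": None}], expected_min_rows=2))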

Data integration: Kafka Connect (sources/sinks), change data capture (CDC), and schema evolution strategies.
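
For illustration, a sketch of registering a CDC source connector through the Kafka Connect REST API; the Debezium Postgres settings shown are assumptions and vary by connector version, and all hostnames and credentials are placeholders.

    # Submit a connector definition to the Kafka Connect REST endpoint.
    import requests

    connector = {
        "name": "accounts-cdc",
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "database.hostname": "db.example.internal",
            "database.port": "5432",
            "database.user": "cdc_reader",
            "database.password": "change-me",
            "database.dbname": "accounts",
            "topic.prefix": "accounts",            # prefix for change topics
            "table.include.list": "public.accounts",
        },
    }

    resp = requests.post(
        "http://connect.example.internal:8083/connectors",
        json=connector,
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())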

Orchestration: Orchestrators (Airflow/Dagster/Flyte/Prefect/Argo Workflows) and workflow patterns (dependencies, idempotency, retries, SLAs).
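
An illustrative Airflow DAG (assuming Airflow 2.4 or later) showing retries, a task SLA and an idempotent, date-parameterised load that backfills cleanly; all identifiers are made up.

    # A daily DAG whose single task overwrites one date partition per run.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_partition(ds, **_):
        # Idempotent by design: overwrite the partition for the logical date
        # `ds`, so retries and backfills produce the same result.
        print(f"(re)loading partition for {ds}")

    with DAG(
        dag_id="daily_account_totals",
        start_date=datetime(2024, 1, 1),
        schedule="0 3 * * *",
        catchup=False,
        default_args={
            "retries": 2,
            "retry_delay": timedelta(minutes=5),
            "sla": timedelta(hours=2),
        },
    ) as dag:
        PythonOperator(task_id="load_partition", python_callable=load_partition)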

Lakehouse tooling: Lakehouse platforms, open table formats (Delta/Iceberg/Hudi), file formats (Avro/Parquet), and time travel.
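
A short time-travel sketch with the Delta table format in PySpark, assuming the delta-spark package is available on the cluster; the path, version and timestamp are placeholders.

    # Read the current table state, then the same table as of an earlier
    # version or timestamp (e.g. to audit a figure or rebuild a report).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta_time_travel").getOrCreate()

    path = "s3a://example-curated/daily_account_totals_delta/"

    current = spark.read.format("delta").load(path)

    as_of_version = spark.read.format("delta").option("versionAsOf", 3).load(path)
    as_of_time = (
        spark.read.format("delta")
             .option("timestampAsOf", "2024-06-01 00:00:00")
             .load(path)
    )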

Governance: Security/RBAC, PII handling, and data governance basics.

Preferred qualifications, capabilities, and skills

AWS/GCP Certifications 

 
