London, United Kingdom
Mid-Level Machine Learning Engineer - Data Engineer II - Chase

At Chase UK, we’re redefining digital banking by harnessing cutting-edge technology to deliver seamless, intuitive experiences for our customers. Our engineering team operates with a start-up mindset, empowered to shape the future of banking through scalable, reliable, and innovative solutions. If you’re passionate about operationalizing advanced machine learning—including large language models (LLMs) and generative AI—this is the place for you.

As a Mid-Level ML Engineer within the International Consumer Bank at JPMorgan Chase, you’ll work alongside ML scientists, data engineers, and software engineers to build, deploy, and maintain sophisticated machine learning solutions in production. You’ll play a hands-on role in implementing ML pipelines, deploying models (including LLMs), and developing the supporting infrastructure that keeps our AI-driven products robust and scalable.

Job Responsibilities:

- Build, automate, and maintain ML pipelines for deploying advanced models, including large language models (LLMs), at scale.
- Collaborate with data engineers, scientists, and product owners to operationalize workflows for reliable, seamless model deployment and monitoring.
- Implement monitoring, logging, and alerting for AI services, ensuring performance, security, and compliance in production environments.
- Write clean, maintainable, and efficient Python code for ML tooling, orchestration, and infrastructure.
- Develop and maintain infrastructure as code (IaC) using tools such as Terraform or CloudFormation.
- Work with containerization and orchestration technologies (e.g., Docker, Kubernetes) to support scalable and repeatable deployments of AI services.
- Apply robust software engineering best practices (version control, CI/CD, code reviews, testing, and automation) to all aspects of the ML lifecycle.
- Troubleshoot and optimize ML workflows, from initial development through deployment and production support.
- Engage in cross-functional squads, participating in technical discussions, design reviews, and continuous improvement initiatives.
- Contribute to team growth by sharing knowledge and mentoring junior engineers as needed.

Required Qualifications, Capabilities and Skills:

- Strong software engineering background, with deep proficiency in Python (and optionally Go or Java).
- Demonstrated experience deploying and maintaining LLMs (e.g., GPT, Llama) in production environments.
- Familiarity with frameworks and tooling for LLMs and generative AI (e.g., Transformers, LangChain, Haystack, OpenAI, Vertex AI).
- Experience operationalizing ML solutions in cloud-native environments (AWS, GCP, Azure).
- Proficiency with containerization and orchestration (Docker, Kubernetes, or similar) for scalable model deployment.
- Practical experience with infrastructure as code (Terraform, CloudFormation, etc.).
- Understanding of concurrency, distributed systems, and scalable API development for ML-powered applications.
- Experience with version control (Git) and CI/CD pipelines.
- Strong problem-solving skills, attention to detail, and a collaborative, growth-focused mindset.
- Experience working in agile, product-driven engineering teams.

Preferred Qualifications:

- Exposure to Retrieval-Augmented Generation (RAG) pipelines, vector databases (e.g., Pinecone, Weaviate, Milvus), and knowledge bases, with familiarity in integrating them with LLMs.
- Experience with advanced model monitoring, observability, and governance of LLMs and generative AI systems.
- Experience with data engineering or analytics platforms.
- Understanding of AI safety, security, and compliance best practices in production.
- Enthusiasm for learning and adopting the latest MLOps and AI technologies.

#ICB #ICBEngineering