San Francisco, CA, 94103, USA
Staff Software Engineer (Backend), Uber AI Solutions
**About the Role**

At Uber, our mission is to be the platform of choice for flexible earning opportunities. We are expanding this vision through **Uber AI Solutions (UAIS)**, a fast-growing team operating like a startup within Uber. We are building **foundational model data infrastructure** for the next generation of AI systems, where human intelligence and machine learning models work together to produce **Model Ready Datasets**. Our platform serves frontier labs, cognitive research teams, and model infrastructure organizations operating across **Generative AI** and **Physical AI**.

As models are pushed into multimodal, interactive, and real-world use cases, high-quality, well-structured, and well-understood data becomes a system-level requirement. We are creating a cutting-edge, AI-powered, scalable, human-in-the-loop platform that integrates expert knowledge workers directly into the data lifecycle. From dataset design and evaluation to feedback-driven iteration and responsible data production, our infrastructure enables teams to move faster without compromising rigor or reliability.

We operate in more than 30 countries, with knowledge workers actively engaged in preparing datasets for a variety of AI initiatives. We believe breakthroughs in AI will be driven not only by advances in models, but by the strength of the data systems that support them. Our mission is to build the infrastructure that makes frontier research and production possible at scale.

We are seeking a **Staff Engineer** to provide technical direction and lead platform architecture for Uber AI Solutions, a fast-growing, startup-like organization within Uber. In this role, you will own and drive the design of foundational data infrastructure that enables frontier AI systems to transition from research to production with speed, rigor, and reliability. You will build scalable platforms that integrate expert human input with machine learning to produce high-quality, model-ready datasets for multimodal and real-world AI use cases.

**What You'll Do**

+ Architect and evolve core systems that span multiple teams, ensuring scalability, performance, and long-term maintainability of critical platform services.
+ Provide technical leadership across teams, driving alignment on design patterns, service interfaces, and shared infrastructure investments.
+ Mentor and develop senior engineers, elevating technical depth, decision-making, and design rigor across the broader group.
+ Champion the adoption of AI-assisted development tools and modern engineering practices to improve code quality, reliability, and delivery speed across teams.
+ Influence hiring and talent development, helping shape team composition and maintaining a high engineering bar across multiple teams.

**Basic Qualifications**

+ Bachelor's (or Master's) degree in Computer Science, Engineering, or a related discipline (or equivalent experience).
+ Expertise in at least one major backend or infrastructure technology (languages, frameworks, distributed systems, data pipelines) and comfort influencing architecture across teams.
+ Strong record of mentoring and developing engineers, setting technical standards, and driving impact beyond a single team.
+ Excellent communication and collaboration skills; able to engage with multiple teams and stakeholders and to articulate vision and trade-offs.
+ Experience participating in hiring and helping build out engineering teams or capabilities.
**Preferred Qualifications**

+ 8+ years of professional software engineering experience, with substantial experience designing, building, and operating large-scale systems across multiple teams.
+ Deep understanding of MLOps ecosystems, LLM or ML model lifecycle management, and large-scale data processing frameworks (e.g., Kubeflow, Airflow, Ray, Spark).
+ Proven experience architecting systems for data labeling, translation, or human-in-the-loop workflows supporting high-volume ML applications.
+ Strong familiarity with GenAI, Physical AI, and LLM infrastructure: model hosting, fine-tuning, evaluation, and integration into production services.
+ Experience mentoring engineers and leading technical initiatives that apply AI/ML to complex business or operational domains (e.g., logistics, physical AI, robotics).

For San Francisco, CA-based roles: The base salary range for this role is USD $223,000 per year - USD $248,000 per year.

For Sunnyvale, CA-based roles: The base salary range for this role is USD $223,000 per year - USD $248,000 per year.

For all US locations, you will be eligible to participate in Uber's bonus program and may be offered an equity award and other types of compensation. You will also be eligible for various benefits. More details can be found at the following link: https://www.uber.com/careers/benefits.

Uber is proud to be an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please let us know by completing this form: https://docs.google.com/forms/d/e/1FAIpQLSdb_Y9Bv8-lWDMbpidF2GKXsxzNh11wUUVS7fM1znOfEJsVeA/viewform