We are now looking for a Senior Deep Learning Performance Architect! NVIDIA is seeking outstanding Performance Architects to help analyze and develop the next generation of architectures that accelerate AI and high-performance computing applications. Intelligent machines powered by artificial intelligence, computers that can learn, reason, and interact with people, are no longer science fiction. GPU deep learning has provided the foundation for machines to learn, perceive, reason, and solve problems. NVIDIA's GPUs run the AI algorithms that simulate human intelligence and act as the brains of computers, robots, and self-driving cars that can perceive and understand the world. Come join our Deep Learning Architecture team, where you can help build the real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field!
What you’ll be doing:
Develop innovative hardware architectures to extend the state of the art in parallel computing performance, energy efficiency, and programmability.
Build the mathematical frameworks required to reason about system availability and workload goodput at massive scales.
Reason about overall Deep Learning workload performance under various scheduling, parallelization, and resiliency strategies.
Conduct "what-if" studies on hardware configurations, infrastructure knobs, and workload strategies to identify optimal system-level trade-offs.
Work closely with wider architecture and product teams to guide the hardware/software roadmap using data-driven performance and reliability projections.
Build and refine high-level simulators in Python to model the interaction between knobs that impact performance and resiliency.
What we need to see:
MS or PhD in Computer Science, Computer Engineering, or Electrical Engineering, or equivalent experience.
6+ years of relevant industry or research work experience.
Strong background in analytical and probabilistic modeling.
2+ years of experience in parallel computing architectures, distributed systems, or interconnect fabrics.
A strong understanding of distributed deep learning workload scheduling in large-scale systems.
Proficiency in Python for building performance and reliability models.
Ways to stand out from the crowd:
Direct experience managing or troubleshooting large-scale jobs—you understand how jobs actually fail and recover in production.
Experience working with large-scale operational datasets (e.g., scheduler or hardware telemetry).
Knowledge of how orchestrators (e.g., Slurm, Kubernetes, PyTorch) manage workload recovery and job scheduling under failures.
Ability to simplify and communicate rich technical concepts to a non-technical audience.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD to 287,500 USD. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until February 8, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.