Senior Site Reliability Engineer
NVIDIA
NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s motivated by outstanding technology and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing, an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. NVIDIA is at the forefront of generative AI models, from language to images. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, encouraging environment where everyone is inspired to do their best work.
NVIDIA is looking for a Senior Site Reliability Engineer (SRE) to join its cloud services team to support, triage, and build generative AI-powered visual applications. Because SREs are responsible for the big picture of how our systems relate to each other, we use a breadth of tools and approaches to tackle a broad spectrum of problems. We practice the SRE principles that are key to product quality, such as limiting time spent on reactive operational work, blameless postmortems, proactive identification of potential outages, and iterative improvement, all of which make for exciting and multifaceted day-to-day work. The person in this position will be responsible for service response and workflow, and will drive tools and service development to maintain and improve service SLOs. We partner with service owners to drive the reliability of their services.
What you will be doing:
Support and work on groundbreaking Generative AI inferencing workloads running in a globally-distributed heterogeneous environment spanning all major cloud service providers. Ensure the best possible performance and availability on current and next-generation GPU architectures.
Collaborate closely with the service-owner, architecture, research, and tools teams at NVIDIA to deliver the best results for the AI problems at hand.
Monitor and support critical high-performance, large-scale services running across multiple clouds.
Participate in the triage and resolution of complex infrastructure-related issues.
Maintain services once live by measuring and monitoring availability, latency, and overall system health using metrics, logs, and traces.
Scale systems sustainably through mechanisms like automation and evolve systems by pushing for changes that improve reliability and velocity.
Practice balanced incident response and blameless postmortems.
Be part of an on-call rotation to support production systems, and lead significant production improvements in tooling, automation, and process.
Architect, design, and code using your expertise to optimize, deploy, and productize services.
What we need to see:
8+ years of experience operating and owning end-to-end availability and performance of mission-critical services in a live-site production environment, either as an SRE or a service owner.
3+ years executing incident management and participating in an on-call rotation.
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent experience.
Solid understanding of containerization and microservices architecture, along with an excellent understanding of the Kubernetes (K8s) ecosystem and its best practices.
Ability to dissect complex problems into simple sub-problems and use available solutions to resolve them.
Technical leadership beyond development, including scoping, requirements gathering, and leading and influencing multiple teams of engineers on broad development initiatives.
Experience leading significant production activities, including change management, postmortem reviews, workflow processes, software design, and delivering software automation in languages such as Python or Go and technologies such as CI/CD, auto-remediation, and alert correlation.
Deep understanding of SLOs/SLIs, error budgets, and KPIs, and experience defining and configuring them for highly sophisticated services.
Experience with the ELK and Prometheus stacks as a power user and administrator.
Excellent understanding of cloud environments and technologies, especially AWS, Azure, GCP, or OCI.
Proven strength in identifying, mitigating, and root-causing issues while continuously seeking ways to improve optimization, efficiency, and cost.
Ways to stand out from the crowd:
Exposure to containerization and cloud-based deployments for AI models.
Excellent coding skills in Python, Go, or a similar language.
Prior experience driving resolution of production issues and providing on-call support, along with an understanding of deep learning, machine learning, and AI.
Experience with CUDA, PyTorch, TensorRT, TensorFlow, and/or Triton, as well as experience with StackStorm or similar automation platforms, is a bonus.
Understanding of observability instrumentation techniques and best practices, including OpenTelemetry.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you.