Site Reliability Engineering (SRE) is the engineering discipline of designing, building, and maintaining large-scale production systems with high efficiency and availability, combining software and systems engineering practices. It is a highly specialized domain that demands knowledge across systems, networking, coding, databases, capacity management, continuous delivery and deployment, and open-source cloud-enabling technologies such as Kubernetes and OpenStack. The SRE team at NVIDIA ensures that our internal and external-facing GPU cloud services deliver the reliability and uptime promised to users, while enabling developers to make changes to existing systems through careful preparation and planning, keeping an eye on capacity, latency, and performance. SRE is also a mindset and a set of engineering approaches to running and optimizing better production systems. Much of our software development focuses on eliminating manual work through automation, performance tuning, and improving the efficiency of production systems.
As SREs responsible for the big picture of how our systems relate to each other, we use a breadth of tools and approaches to tackle a broad spectrum of problems. Practices such as limiting time spent on reactive operational work, blameless postmortems, and proactive identification of potential outages factor into the iterative improvement that is key to both product quality and exciting, dynamic day-to-day work. SRE's culture of diversity, intellectual curiosity, problem solving, and openness is important to its success. Our organization brings together people with a wide variety of backgrounds, experiences, and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to build an environment that provides the support and mentorship needed to learn and grow.
What you will be doing:
Design, implement and support large scale Kubernetes clusters with monitoring, logging and alerting.
Engage in and improve the whole lifecycle of services—from inception and design, through deployment, operation and refinement.
Support services before they go live through activities such as system design consulting, developing software platforms and frameworks, capacity management and launch reviews.
Maintain services once they are live by measuring and monitoring availability, latency and overall system health.
Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
Practice sustainable incident response and blameless postmortems.
Be part of an on-call rotation to support production systems.
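The SLO-driven monitoring work described above can be illustrated with a minimal sketch. The 99.9% availability target, probe data, and function names below are purely hypothetical assumptions for illustration, not a description of any NVIDIA system:

```python
# Minimal sketch of availability and error-budget tracking from
# synthetic probe results. The 99.9% SLO is an illustrative assumption.

def availability(probe_results):
    """Fraction of successful probes (True = success)."""
    if not probe_results:
        return 1.0
    return sum(probe_results) / len(probe_results)

def error_budget_remaining(probe_results, slo=0.999):
    """Remaining error budget as a fraction of the total budget.

    1.0 means no budget consumed; 0 or below means the SLO is breached.
    """
    budget = 1.0 - slo
    burned = 1.0 - availability(probe_results)
    return (budget - burned) / budget

# Example: 2 failed probes out of 10,000 against a 99.9% SLO.
probes = [True] * 9998 + [False] * 2
print(round(availability(probes), 4))            # 0.9998
print(round(error_budget_remaining(probes), 2))  # 0.8
```

In practice, an error-budget number like this is what lets a team decide objectively whether to keep shipping changes or pause and invest in reliability.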
What we need to see:
3+ years of hands-on experience in the setup, administration, and maintenance of multiple large (100+ node) Kubernetes clusters, both on-premises and on cloud service providers such as AWS, Azure, GCP, and OCI.
Strong coding experience in one or more of the following languages: Go, Python, Perl, Java, C, C++, Ruby.
At least 2 years of hands-on system administration experience in large-scale UNIX production environments, with proven debugging and troubleshooting skills.
Ability to maintain platform SLAs through timely and accurate incident resolution.
Outstanding teammate who can collaborate and influence in a multifaceted environment.
Demonstrable experience working with algorithms, data structures, complexity analysis, and software design.
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent experience.
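The algorithms and data-structures expectation above (choosing structures whose complexity holds up at cluster scale) can be illustrated with a short, hypothetical sketch; the node names and heartbeat scenario are invented for illustration:

```python
# Hypothetical sketch: finding cluster nodes that missed a heartbeat.
# A set difference runs in O(n) time, versus O(n * m) for nested list
# scans, which matters at 100+ node scale with frequent health checks.

def missing_heartbeats(all_nodes, reporting_nodes):
    """Return the sorted list of nodes that did not report."""
    return sorted(set(all_nodes) - set(reporting_nodes))

nodes = [f"node-{i:03d}" for i in range(100)]
reported = nodes[:97]  # three nodes went silent
print(missing_heartbeats(nodes, reported))
# ['node-097', 'node-098', 'node-099']
```

The point of the exercise is the complexity analysis, not the code: recognizing when a linear-time structure replaces a quadratic scan is exactly the kind of reasoning the requirement describes.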
Ways to stand out from the crowd:
Experience in using or running large private and public cloud systems based on Kubernetes, OpenStack and Docker.
Demonstrated ability to automate routine tasks, debug and optimize existing code.
Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
Hands-on experience with network and storage administration, and a habit of making unit testing and benchmarking an integral part of your code.
Ability to reason about and choose the best possible algorithm to meet scaling and availability challenges, and to decompose complex requirements into simple tasks, reusing available solutions to implement most of them.