Senior GenAI Algorithms Engineer
NVIDIA
We are now looking for a Senior GenAI Algorithms Engineer! NVIDIA is seeking engineers to design, develop, and optimize Artificial Intelligence solutions to diverse real-world problems. If you have a strong understanding of deep learning, and in particular of large language models and their multimodal variants, then this role may be a great fit for you! You will collaborate with internal partners, users, and members of the open source community to analyze, define, and implement highly optimized AI algorithms. The scope of these efforts includes implementing new algorithms, performance/accuracy tuning and analysis, defining APIs, and analyzing functionality coverage to build larger, coherent toolsets and libraries. The ability to work in a multifaceted, product-centric environment, along with excellent interpersonal skills, is required to be successful in this role.
What you’ll be doing:
Contribute to the cutting-edge open source NeMo framework
Develop and maintain SOTA GenAI models (e.g., large language models (LLMs), multimodal LLMs)
Tackle large-scale distributed systems capable of performing end-to-end AI training and inference deployment (data fetching, pre-processing, orchestrating and running model training and tuning, and model serving)
Analyze, influence, and improve AI/DL libraries, frameworks and APIs according to good engineering practices
Research, prototype, and develop effective tools and infrastructure pipelines
Publish innovative results on GitHub and in scientific publications
What we need to see:
A PhD or Master's degree in Computer Science, AI, Applied Math, or a related field (or equivalent experience), and 5+ years of industry experience
Strong mathematical fundamentals and skills or experience with AI/DL algorithms
Excellent programming, debugging, performance analysis, test design and documentation skills
Experience with AI/DL frameworks (e.g., PyTorch, JAX)
Excellent Python programming skills
Ways to stand out from the crowd:
Prior experience with Generative AI techniques applied to LLMs and multimodal variants (image, video, speech, etc.)
Exposure to large-scale AI training, an understanding of compute system concepts (latency/throughput bottlenecks, pipelining, multiprocessing, etc.), and related performance analysis and tuning
Hands-on experience with inference and deployment environments (e.g., TRT, ONNX, Triton) would be an asset
Knowledge of GPU/CPU architecture and related numerical software
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.