
NCX Engineer, AI Accelerator

Nvidia
📍 2 Locations 📅 Posted May 6, 2026
Apply on Nvidia’s website →

About this role

NVIDIA is seeking an NCX Engineer, AI Accelerator to join our AI Accelerator team. You will work closely with strategic customers to implement and optimize cutting-edge AI workloads, provide hands-on technical support for advanced AI deployments and complex distributed systems, and ensure customers achieve efficient performance from NVIDIA's AI platform across varied environments. We partner with the world's most innovative AI companies to solve their most challenging technical problems.

What you will be doing:

In this role, you will develop innovative solutions that advance AI infrastructure capabilities. You will directly influence customer success with breakthrough AI initiatives.

• Build and deploy custom AI solutions on NCP and Neo Cloud platforms, including distributed training, inference optimization, and MLOps pipelines constructed on NVIDIA reference architectures.

• Act as the main technical contact for strategic NCPs, offering remote and on-site support, troubleshooting complex production problems, and guiding partner engineering teams on NVIDIA platform guidelines.

• Deploy and manage AI workloads across DGX Cloud, NCP data centers, and major CSP environments using Kubernetes, containers, and GPU scheduling systems aligned to NCP builds.

• Profile and tune large-scale training and inference workloads on NCP platforms. Implement observability and SLO/SLA monitoring. Lead detailed efforts to reduce latency, cost, and operational risk.

• Implement and expand NVIDIA reference architectures on partner platforms, develop integrations with partner control planes and customer environments, and ensure smooth API, data pipeline, and enterprise software connectivity.

• Build detailed implementation guides, runbooks, and post‑mortem documentation that codify standard methodologies for running NVIDIA AI workloads at scale on NCP platforms.

What we need to see:

• BS, MS, or Ph.D. in Computer Science, Computer/Electrical Engineering, or a related technical field, or equivalent experience.

• 8+ years of experience in customer-facing technical roles such as Solutions Engineering, DevOps, Site Reliability, or ML Infrastructure Engineering, ideally supporting large‑scale cloud or service provider environments.

• Strong expertise in Linux systems, distributed computing, Kubernetes, containers, and GPU scheduling on multi-tenant or service-provider platforms.

• Demonstrated AI/ML experience supporting large‑scale training and inference workloads (e.g., LLMs, generative models, recommendation systems) in production or mission-critical environments.

• Solid programming skills in Python/Go, with hands‑on experience using frameworks such as PyTorch or TensorFlow for training and serving.

• Demonstrated ability to collaborate with customer and partner engineering teams in fast-paced environments, lead complex technical investigations, and drive issues to root cause and resolution.

• Excellent communication and technical presentation skills, with the ability to clearly articulate architectures, trade‑offs, and recommendations to both engineering and leadership audiences.

Ways to stand out from the crowd:

• Experience with the NVIDIA ecosystem, including DGX systems, CUDA, NeMo, Triton, NIM, and NVIDIA networking technologies such as InfiniBand and RoCE.

• Direct experience collaborating with NVIDIA Cloud Partners, hyperscale CSPs, or managed AI cloud platforms, including implementation of NVIDIA reference architectures for AI infrastructure.

• Deep familiarity with MLOps and cloud‑native practices: containerization, CI/CD pipelines, observability stacks (Prometheus, Grafana, OpenTelemetry), and GitOps workflows.

• Background in infrastructure as code (Terraform, Ansible, or similar) for repeatable deployment and configuration of GPU‑accelerated clusters and NCP building blocks.

• Experience integrating AI platforms with enterprise systems such as Salesforce, ServiceNow, or other ITSM/CRM platforms to support end‑to‑end customer solutions and managed services.

NVIDIA offers competitive salaries and a generous benefits package, and is recognized as one of the technology world's most desirable employers. We have some of the most innovative and dedicated people working here, and our outstanding teams are expanding rapidly. Join us and make a lasting impact on the world!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 9, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

This listing was aggregated by Perik.ai from Nvidia’s public job board. Click the button above to view the full job description and apply directly.