AI/ML Engineer
About this role
Draup is a Series A-funded agentic AI company building the intelligence layer for how global enterprises make workforce and go-to-market decisions. We work with 250+ enterprise clients — including 5 of the Fortune 10 — processing 1B+ job descriptions, 850M+ professional profiles, and signals from 100+ labor databases.
We are now building our Silicon Valley engineering team — a small, senior group focused on next-generation AI research and product.
What you'll do
• Build and maintain production-grade LLM pipelines and agentic workflows.
• Design and optimize RAG architectures using vector databases (Pinecone, FAISS, Weaviate) at scale.
• Implement agentic systems using LangGraph, LlamaIndex, or equivalent: tool use, multi-agent coordination, and reasoning loops.
• Own prompt engineering, model versioning, evaluation (RAGAS, DeepEval), and LLMOps instrumentation.
• Integrate AI features into large-scale data pipelines; maintain observability and guardrails in production.
What we require
• BS/MS in Computer Science, Machine Learning, or a related field.
• 3–5 years of AI/ML engineering experience, including at least 2 years building LLM-powered systems shipped to production.
• Strong Python; PyTorch or Hugging Face Transformers; AWS or GCP; Docker/Kubernetes.
• Portfolio of shipped AI work required — agentic pipelines, RAG systems, or fine-tuned models.
• No visa sponsorship. Candidates must be authorized to work in the US without current or future employer sponsorship.