
AI/ML Engineer

Hophr
📍 San Francisco, CA · 📅 Posted April 22, 2026
Apply on Hophr’s website →

About this role

Draup is a Series A-funded agentic AI company building the intelligence layer for how global enterprises make workforce and go-to-market decisions. We work with 250+ enterprise clients — including 5 of the Fortune 10 — processing 1B+ job descriptions, 850M+ professional profiles, and signals from 100+ labor databases.
We are now building our Silicon Valley engineering team — a small, senior group focused on next-generation AI research and product.
What you'll do
• Build and maintain production-grade LLM pipelines and agentic workflows.
• Design and optimize RAG architectures using vector databases (Pinecone, FAISS, Weaviate) at scale.
• Implement agentic systems using LangGraph, LlamaIndex, or equivalent: tool use, multi-agent coordination, and reasoning loops.
• Own prompt engineering, model versioning, evaluation (RAGAS, DeepEval), and LLMOps instrumentation.
• Integrate AI features into large-scale data pipelines; maintain observability and guardrails in production.
What we require
• BS/MS in Computer Science, Machine Learning, or related field.
• 3–5 years of AI/ML engineering; minimum 2 years building LLM-powered systems shipped to production.
• Strong Python; PyTorch or Hugging Face Transformers; AWS or GCP; Docker/Kubernetes.
• Portfolio of shipped AI work required — agentic pipelines, RAG systems, or fine-tuned models.
• No visa sponsorship. Must be authorized to work in the US without current or future employer sponsorship.

This listing was aggregated by Perik.ai from Hophr’s public job board. Click the button above to view the full job description and apply directly.

