Forward Deployed Engineer (AI)
About OPAQUE
OPAQUE is the Confidential AI company. Born from UC Berkeley’s RISELab, we solve the core challenge blocking AI adoption at scale: security concerns about data leaks and compliance violations. OPAQUE provides verifiable privacy and governance for AI so organizations can safely run models, agents, and workflows on their most sensitive data. Our Confidential AI platform delivers verifiable runtime governance, backed by cryptographic proof that data, models, and agent actions remain private, governed, and compliant with approved policies throughout every AI workflow. This extends traditional data governance tools with true runtime verification, enabling teams to responsibly deploy AI on their most valuable proprietary data and move from pilot to production 4-5X faster. Customers and partners include ServiceNow, Anthropic, Encore Capital, Accenture, and leaders across high tech, financial services, insurance, and healthcare.
Learn more at Opaque.co
Read about our values at Opaque.co/about
ABOUT THIS ROLE
As a Forward Deployed Engineer (AI) at OPAQUE Systems, you are the technical bridge between our enterprise customers and OPAQUE's confidential AI platform. You will work directly with customer engineering teams — from initial architecture review through production deployment — helping them install and deploy OPAQUE in their environment, and build AI solutions (agents / agentic workflows, RAG pipelines, LLM integrations) that run on OPAQUE's confidential AI platform.
This is a hands-on technical role. You will write code, run POCs, architect solutions, and own the customer's path from "interested" to "getting value." You will also be a critical signal source for the product team — the patterns you see in the field shape what OPAQUE builds next.
KEY RESPONSIBILITIES
• Develop custom AI / agentic solutions tailored to customer needs.
• Lead technical discovery and architecture scoping with enterprise customers — translate vague AI / agentic aspirations into concrete implementation plans.
• Design and build production-grade solutions for agentic workflows, RAG pipelines, and LLM integrations on OPAQUE's platform.
• Partner with customer engineering teams through production deployment — unblock integration issues, debug failures, and ensure the solution is maintainable.
• Run technical whiteboard sessions, architecture reviews, and POC demonstrations with CTO, CISO, and engineering leader audiences.
• Synthesize field observations into product feedback — use case patterns, integration friction, objections, and feature gaps.
• Build reusable artifacts: integration guides, POC templates, architectural reference designs, and objection playbooks.
• Stay ahead of the AI landscape — agentic frameworks, RAG techniques, LLM capabilities, enterprise AI tooling.
QUALIFICATIONS
• 5+ years of software engineering experience working on production AI/ML systems.
• Hands-on experience building agentic workflows: tool use, multi-agent orchestration, memory architecture, agent reliability patterns.
• Deep hands-on experience with RAG pipeline design: chunking strategies, embedding models, vector databases, retrieval quality evaluation, re-ranking.
• Strong Python skills; experience with AI frameworks (LangChain, LlamaIndex, LangGraph, CrewAI, or similar).
• Experience integrating with LLM APIs (OpenAI, Anthropic, Gemini) in enterprise production environments.
• Ability to work directly with enterprise customer teams — comfortable with executive-level technical conversations.
• Strong understanding of enterprise deployment constraints: IAM, data residency, network topology, compliance requirements (SOC 2, HIPAA, FedRAMP).
• Security-aware engineering mindset — understands trust boundaries, data flow, and why cryptographic attestation matters for regulated industries.
• Bonus: Experience with confidential computing, TEEs, or privacy-preserving AI.