
Software Engineer, Artificial Intelligence/LLM (Multiple Seniority Levels)

Beaconai
📍 San Carlos - Hybrid 📅 Posted May 5, 2026
Apply on Beaconai’s website →

About this role

ABOUT BEACON AI

We’re a fast-moving team of aviators, engineers, and operators building an AI platform to make flying safer, more efficient, and more capable. Backed by top investors, we’ve secured a dozen Department of Defense contracts and partnered with major airlines to deliver mission-critical systems. We operate without silos or heavy processes. Small, focused teams own what they build, ship quickly, and learn fast, pushing the boundaries of how humans and AI work together in aviation.

You will ship LLM-powered product features end-to-end. That means designing retrieval and tool-calling flows, writing the services that run them, building evals and guardrails, and watching cost, latency, and quality in production. You’ll partner with ML/infra teammates on embeddings, indexing, and model hosting, and with product teammates on user experience and outcomes. We move fast, and we care about reliability in a safety-critical domain.

We’re hiring across levels. Senior engineers own features and services. Staff engineers own systems, standards, and cross-team technical direction.

WHAT YOU’LL DO

Build user-facing LLM features

- Design and implement retrieval-augmented generation and tool-calling flows using frameworks like LangChain, or simpler equivalent primitives where they suffice.

- Deliver robust JSON and schema-bound outputs with validation, retries, and fallbacks.

- Add function calling to integrate with internal tools, search, routing, and data services.
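To make the validation/retry/fallback bullet concrete, here is a minimal Python sketch of the pattern. The `call_llm` stub, the field names, and the malformed first response are hypothetical stand-ins, not part of our actual stack:

```python
import json

def call_llm(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a real model call; it returns malformed
    # JSON on the first attempt so the retry path is exercised.
    if attempt == 0:
        return "sorry, here is the plan: ..."
    return '{"route": "SFO-LAX", "fuel_kg": 4200}'

REQUIRED_KEYS = {"route", "fuel_kg"}

def structured_query(prompt: str, max_retries: int = 2, fallback=None):
    """Ask the model for JSON, validate the shape, retry on failure,
    then degrade to a fallback instead of surfacing garbage."""
    for attempt in range(max_retries + 1):
        raw = call_llm(prompt, attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: try again
        if REQUIRED_KEYS.issubset(data):
            return data  # schema-valid: done
    return fallback  # retries exhausted: fail closed

print(structured_query("Summarize the flight plan as JSON"))
# → {'route': 'SFO-LAX', 'fuel_kg': 4200}
```

In production you would typically validate against a real schema (e.g. Pydantic) rather than a key set, but the shape of the loop is the same.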

Own the service layer

- Ship APIs and workers in Python or TypeScript with clear contracts, streaming, and backoff.

- Add caching, request shaping, prompt templates, and context packing to control latency and cost.

- Integrate with AWS Bedrock, OpenAI, Anthropic, or self-hosted endpoints as needed.
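The backoff half of the service-layer contract can be sketched like this; using `TimeoutError` as the transient failure and these delay constants are illustrative assumptions, not any provider's specifics:

```python
import random
import time

def with_backoff(fn, max_attempts: int = 4, base_delay: float = 0.5):
    """Call fn, retrying transient failures with exponential backoff
    plus jitter so concurrent clients don't retry in lockstep."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Double the delay each attempt, randomized +/-50%.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

The same wrapper works around any provider SDK call; a real version would also cap total delay and distinguish retryable status codes from hard failures.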

Retrieval and data prep

- Collaborate with infrastructure teammates to develop chunking, embeddings, and indexing capabilities for documents, time series, and multimedia.

- Choose and tune vector backends such as OpenSearch, pgvector, or Pinecone.

- Keep knowledge bases fresh with data syncs from S3, Aurora, DynamoDB, and external sources.
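As one simple instance of the chunking step above, a fixed-window splitter with overlap; the window sizes are arbitrary placeholders, and production chunkers usually respect sentence or section boundaries:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping fixed-size windows for embedding.
    Overlap keeps context that straddles a boundary retrievable."""
    assert 0 <= overlap < size
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # final window already covers the tail
    return chunks

doc = "".join(f"{i % 10}" for i in range(500))
pieces = chunk_text(doc)
print(len(pieces))  # → 3
```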

Evaluation and quality

- Create offline evals and golden sets for prompts, retrievers, and tools.

- Stand up online metrics for task success, hallucination rate, retrieval precision/recall, p95 latency, and cost per request.

- Run A/B tests and prompt/version rollouts with guardrails and canaries.
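For the retrieval precision/recall metric mentioned above, the per-query computation is straightforward; the document IDs here are made up:

```python
def precision_recall(retrieved: list[str], relevant: set[str]) -> tuple[float, float]:
    """Precision: fraction of retrieved docs that are relevant.
    Recall: fraction of relevant docs that were retrieved."""
    hits = len(set(retrieved) & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(["doc1", "doc2", "doc3", "doc4"],
                        {"doc2", "doc4", "doc9"})
print(p, r)  # → 0.5 0.6666666666666666
```

An offline eval then averages these over a golden set of queries and tracks the numbers across prompt and retriever versions.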

Safety, privacy, and compliance

- Implement content and policy checks, PII detection and redaction, access controls, and auditing.

- Design human-in-the-loop paths for sensitive actions.

- Handle aviation data with care and follow internal security standards.

Operate what you build

- Add tracing, logs, and dashboards for model calls, token usage, errors, and saturation.

- Debug tricky failures across retrieval, prompts, tools, and providers.

WHAT WILL MAKE YOU SUCCESSFUL

- Shipped LLM apps: You’ve put LLM features in front of users and improved them with data.

- Strong builder: Comfortable writing production code, tests, and docs. You keep things simple and observable.

- RAG and tools depth: You understand embeddings, chunking, vector search tradeoffs, and function calling.

- Quality mindset: You design evals, define success metrics, and iterate based on evidence.

- Cost and latency aware: You track p95, hit SLAs, and reduce cost without hurting quality.

- Clear communicator: You explain tradeoffs and align partners across product, infra, and security.

NICE TO HAVE

- Experience with Bedrock, OpenSearch Serverless, pgvector, Pinecone, or Weaviate.

- Prompt versioning, guardrails, and provider routing in production.

- Multimodal work with time series or video.

- Familiarity with GPU inference, Triton, or TensorRT-LLM.

- Aviation or other safety-critical domain exposure.

- DevOps basics for CI/CD, IaC, and secure secrets handling.

EXAMPLE PROBLEMS YOU MIGHT TACKLE IN MONTH ONE

- Transform an internal knowledge base into a low-latency RAG service, complete with explicit schemas and evaluations.

- Add tool-calling to automate a repetitive cockpit or ops workflow with guardrails and audit trails.

- Reduce the cost per request through improved chunking, caching, and prompt refactoring, while maintaining task success rates.

Work Location
This is a hybrid role based in San Carlos, CA, with 3+ days per week onsite and the option to work remotely on remaining days.

Perks & Benefits (Full-Time Employees)

- Healthcare: 100%* of employee medical premiums covered; 25% for dependents

- Time Off: 3 weeks PTO plus 13+ paid company holidays

- Stipends: Monthly phone and wellness benefits

- 401(k): Offered (no current employer match, but we are committed to enhancing this benefit in the future).

Due to U.S. export control regulations, we can only hire U.S. Persons (U.S. citizens, Green Card holders, lawful permanent residents, or individuals granted asylum or refugee status). We are unable to provide visa sponsorship or support visa transfers. All work must be performed in the United States.

Beacon AI is an equal opportunity employer and does not discriminate based on race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, veteran status, or any other protected characteristic. We prohibit harassment or discrimination of any kind in the workplace and comply with all applicable federal, state, and local employment laws.

This listing was aggregated by Perik.ai from Beaconai’s public job board. Click the button above to view the full job description and apply directly.