Perik.ai See who’s hiring. Apply before everyone else.

Software Engineer, Cloud Infrastructure (Multiple Seniority Levels)

Beacon AI
📍 San Carlos, CA - Hybrid 📅 Posted May 5, 2026
Apply on Beacon AI’s website →

About this role

ABOUT BEACON AI

We’re a fast-moving team of aviators, engineers, and operators building an AI platform to make flying safer, more efficient, and more capable. Backed by top investors, we’ve secured a dozen Department of Defense contracts and partnered with major airlines to deliver mission-critical systems. We operate without silos or heavy processes. Small, focused teams own what they build, ship quickly, and learn fast, pushing the boundaries of how humans and AI work together in aviation.

ROLE OVERVIEW

We are seeking skilled Cloud and ML Infrastructure Engineers to lead the buildout of our AWS foundation and our LLM platform. You will design, implement, and operate services that are scalable, reliable, and secure.

Given the broad scope of the role, prior experience in LLM/ML infrastructure or IoT infrastructure is a strong plus. On the ML infrastructure side, you will build the stack that powers retrieval-augmented generation (RAG) and application workflows built with frameworks like LangChain. Experience with AWS IoT services is also a plus.

You will work closely with other engineers and product management. The ideal candidate is hands-on, comfortable with ambiguity, and excited to build from first principles.

KEY RESPONSIBILITIES

- Cloud Infrastructure Setup and Maintenance
  - Design, provision, and maintain AWS infrastructure using IaC tools such as AWS CDK or Terraform.
  - Build CI/CD and testing for applications, infrastructure, and ML pipelines using GitHub Actions, CodeBuild, and CodePipeline.
  - Operate secure networking with VPCs, PrivateLink, and VPC endpoints. Manage IAM, KMS, Secrets Manager, and audit logging.

- LLM Platform and Runtime
  - Stand up and operate model endpoints using Amazon Bedrock and/or SageMaker; evaluate when to use ECS/EKS, Lambda, or Batch for inference jobs.
  - Build and maintain application services that call LLMs through clean APIs, with streaming, batching, and backoff strategies.
  - Implement prompt and tool-execution flows with LangChain or similar, including agent tools and function calling.

- RAG Data Systems and Vector Search
  - Design chunking and embedding pipelines for documents, time series, and multimedia. Orchestrate with Step Functions or Airflow.
  - Operate vector search using OpenSearch Serverless, Aurora PostgreSQL with pgvector, or Pinecone. Tune recall, latency, and cost.
  - Build and maintain knowledge bases and data syncs from S3, Aurora, DynamoDB, and external sources.

- Evaluation, Observability, and Cost Governance
  - Create offline and online evaluation harnesses for prompts, retrievers, and chains. Track quality, latency, and regression risk.
  - Instrument model and application telemetry with CloudWatch and OpenTelemetry. Build token-usage and cost dashboards with budgets and alerts.
  - Add guardrails, rate limits, fallbacks, and provider routing for resilience.

- Safety, Privacy, and Compliance
  - Implement PII detection and redaction, access controls, content filters, and human-in-the-loop review where needed.
  - Use Bedrock Guardrails or policy services to enforce safety standards. Maintain audit trails for regulated environments.

- Data Pipeline Construction
  - Build ingestion and processing pipelines for structured, unstructured, and multimedia data. Ensure integrity, lineage, and cataloging with Glue and Lake Formation.
  - Optimize bulk data movement and storage across S3, Glacier, and tiered storage classes. Use Athena for ad hoc analysis.

- IoT Deployment Management
  - Manage infrastructure that deploys to and communicates with edge devices. Support secure messaging, device identity, and over-the-air updates.

- Analytics and Application Support
  - Partner with product and application teams to integrate retrieval services, embeddings, and LLM chains into user-facing features.
  - Provide expert troubleshooting for cloud and ML services with an emphasis on uptime and performance.

- Performance Optimization
  - Tune retrieval quality, context-window use, and caching with Redis or Bedrock Knowledge Bases.
  - Optimize inference with model selection, quantization where applicable, GPU/CPU instance choices, and autoscaling strategies.

WHAT WILL MAKE YOU SUCCESSFUL

- End-to-End Ownership: Drives work from design through production, including on-call and continuous improvement.

- LLM Systems Experience: Shipped or operated LLM-powered applications in production. Familiar with RAG design, prompt versioning, and chain orchestration using LangChain or similar.

- AWS Depth: Strong with core AWS services such as VPC, IAM, KMS, CloudWatch, S3, ECS/EKS, Lambda, Step Functions, Bedrock, and SageMaker.

- Data Engineering Skills: Comfortable building ingestion and transformation pipelines in Python. Familiar with Glue, Athena, and event-driven patterns using EventBridge and SQS.

- Security Mindset: Applies least privilege, secrets management, network isolation, and compliance practices appropriate to sensitive data.

- Evaluation and Metrics: Uses quantitative evals, A/B testing, and live metrics to guide improvements.

- Clear Communication: Explains tradeoffs and aligns partners across product, security, and application engineering.

BONUS POINTS

- 4+ years working with serverless or container platforms on AWS.

- Experience with vector databases, OpenSearch, or pgvector at scale.

- Hands-on with Bedrock Guardrails, Knowledge Bases, or custom policy engines.

- Familiarity with GPU workloads, Triton Inference Server, or TensorRT-LLM.

- Experience with big data tools for large-scale processing and search.

- Background in aviation data or other safety-critical domains.

- DevOps or DevSecOps experience automating CI/CD for ML and app services.

Work Location
This is a hybrid role based in San Carlos, CA, with 3+ days per week onsite and the option to work remotely on remaining days.

Perks & Benefits (Full-Time Employees)

- Healthcare: 100%* of employee medical premiums covered; 25% for dependents

- Time Off: 3 weeks PTO plus 13+ paid company holidays

- Stipends: Monthly phone and wellness benefits

- 401(k): Offered (no current employer match, but we are committed to enhancing this benefit in the future).

Due to U.S. export control regulations, we can only hire U.S. Persons (U.S. citizens, lawful permanent residents/Green Card holders, or individuals granted asylum or refugee status). We are unable to provide visa sponsorship or support visa transfers. All work must be performed in the United States.

Beacon AI is an equal opportunity employer and does not discriminate based on race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, veteran status, or any other protected characteristic. We prohibit harassment or discrimination of any kind in the workplace and comply with all applicable federal, state, and local employment laws.

This listing was aggregated by Perik.ai from Beacon AI’s public job board. Click the button above to view the full job description and apply directly.

