
Member of Technical Staff, ML Research Engineer

Arcada
📍 San Francisco 📅 Posted April 24, 2026
Apply on Arcada’s website →

ABOUT

AI systems are getting better on benchmarks, but still fail in real-world use.

At Arcada Labs, we build products used by millions of people around the world, giving us direct access to real human preference and judgment. That lets us evaluate models on what people actually care about, not just what benchmarks happen to measure.

Our products have reached millions of users across 190+ countries and are already used by frontier labs. We’ve collaborated on model release announcements with OpenAI, xAI (https://x.com/elonmusk/status/2019164163906629852?s=20), Meta, Google DeepMind (https://x.com/OfficialLoganK/status/1990826955730489733?ref=notes.designarena.ai), and more.

Whoever defines the evaluations defines what models become good at. We create the evolutionary pressure that pushes models toward what people actually want.

We’re a small, deeply technical team with people from Harvard, Berkeley, Apple, Microsoft, Amazon, and Meta, backed by Index Ventures, YC, Conviction, SV Angel, BoxGroup and others.

ABOUT THE ROLE

We’re looking for an ML Research Engineer to help us build better ways to evaluate and understand real AI capabilities.

You’ll design and run experiments that turn millions of human preference judgments into reliable signals about what makes models useful, trustworthy, and capable in practice (design taste, agent behavior, multi-step tasks, reasoning, etc.). Your work will shape our public leaderboards and the evaluation tools we share with frontier labs.

You’ll work at the intersection of engineering, ML, and research: deciding what to evaluate, how to evaluate it (using real human preference data and other signals), and how to turn those results into better rankings and insights.

WHAT YOU’LL OWN

- Design and run large-scale evaluations that measure how frontier models perform in real-world workflows

- Turn human preference votes and interaction traces into reliable signals about model capability, taste, reasoning, robustness, and agent behavior

- Develop ranking systems, analysis pipelines, and experimental methods for comparing models

- Identify where models fail, why they fail, and what those failures reveal about the next frontier of capability

- Work with engineers to turn research findings into user-facing products, leaderboards, and tools for frontier labs

- Contribute to internal research reports, external publications, and customer-facing analyses
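To give a flavor of the ranking work described above: pairwise preference votes are commonly turned into a model leaderboard with a rating system such as Elo or Bradley-Terry. The sketch below is a minimal, illustrative Elo-style update; the function names, parameters, and data format are hypothetical and are not Arcada’s actual pipeline.

```python
from collections import defaultdict

def elo_ratings(votes, k=32.0, base=1500.0):
    """Compute Elo-style ratings from pairwise preference votes.

    votes: iterable of (winner, loser) model-name pairs, one per vote.
    k:     update step size; larger values react faster to new votes.
    base:  rating assigned to a model before it has any votes.
    """
    ratings = defaultdict(lambda: base)
    for winner, loser in votes:
        # Expected probability that the winner beats the loser,
        # given current ratings (standard logistic Elo formula).
        expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
        delta = k * (1.0 - expected)
        ratings[winner] += delta
        ratings[loser] -= delta
    return dict(ratings)

# Hypothetical votes: model_a is preferred twice, model_b once.
votes = [("model_a", "model_b"), ("model_a", "model_b"), ("model_b", "model_a")]
leaderboard = sorted(elo_ratings(votes).items(), key=lambda kv: -kv[1])
```

In practice, Elo is order-dependent; evaluation pipelines often fit a Bradley-Terry model over all votes at once instead, but the pairwise-votes-to-ranking idea is the same.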

WHAT WE’RE LOOKING FOR

- Experience training, fine-tuning, or evaluating models, including LLMs, reward models, preference models, or RLHF/DPO-style systems

- Prior research experience, publications, open-source work, or hands-on work with frontier models

- Strong familiarity with modern AI systems, model evaluation, agentic workflows, and frontier model behavior

- Ability to turn vague real-world problems into concrete evaluation tasks, experiments, and measurable systems

- Strong experimental judgment, including confidence with noisy human preference data, statistical rigor, and imperfect real-world signals

- Good taste for what matters in model behavior, and a strong desire to advance model progress

This listing was aggregated by Perik.ai from Arcada’s public job board. Click the button above to view the full job description and apply directly.

Perik.ai is an AI & tech job board that aggregates the latest openings from top companies — updated daily so you can apply before everyone else.
