Principal Database Architect, Multi-Region Data Platform
About this role
Solvd Inc. is a rapidly growing AI-native consulting and technology services firm delivering enterprise transformation across cloud, data, software engineering, and artificial intelligence. We work with industry-leading organizations to design, build, and operationalize technology solutions that drive measurable business outcomes.
Following the acquisition of Tooploox, a premier AI and product development company, Solvd now offers true end-to-end delivery—from strategic advisory and solution design to custom AI development and enterprise-scale implementation. Our capability centers combine deep technical expertise, proven delivery methodologies, and sector-specific knowledge to address complex business challenges quickly and effectively.
Solvd is delivering a multi-year cloud modernization for a Tier-1 US payments platform: migrating a single-region AWS footprint to a multi-region, ultimately active-active architecture across three phases. The data layer is on the critical path; it spans Aurora MySQL, Cassandra, ElastiCache (Redis / Valkey), DynamoDB, DocumentDB, OpenSearch, and Neptune.
We're hiring a Principal Database Architect to own the data-layer strategy for this program and lead a team of database engineers across both AWS regions. This role is the senior design authority for cross-database DR strategy, RTO/RPO tradeoffs, consistency models, replication patterns, cutover planning, and executive-level technical decision-making.
WHAT YOU'LL DO
- Own the multi-region data architecture across six database engines: Aurora MySQL, Cassandra, ElastiCache (Redis / Valkey), DynamoDB, DocumentDB, and Neptune. For each, define the cross-region replication pattern (Aurora Global Database, Cassandra multi-DC replication, ElastiCache Global Datastore or MemoryDB Multi-Region, DynamoDB Global Tables, DocumentDB Global Clusters, Neptune Global Database) and the operational model that supports it.
- Define and own the per-tier RTO and RPO commitments (Tier 0 / Tier 1 / Tier 2), author the cutover and failback runbooks, and validate them through AWS Fault Injection Service experiments and Game Days (a validation sketch follows this list).
- Sequence the data-layer cutover across all three program phases. For each workload, make the strong-consistency vs. eventual-consistency call, and determine where active-active is feasible versus where region-sharded ledgers or a single writer with replicated read models is the right pattern.
- Lead a team of Database Engineers across the engines. Set the technical bar, review designs, and unblock implementation. Partner with the client's DB and SRE leadership on architecture decisions and Operational Readiness Reviews.
- Translate data-layer constraints into the application decomposition strategy. Where the monolith is being broken up into microservices, advise on data ownership boundaries (DDD-aligned), strangler-fig extraction patterns, CDC pipelines (DMS, DynamoDB Streams, Kafka Connect; a CDC sketch also follows this list), and the event-based contracts that decouple services from the monolith.
- Produce the audit-grade artifacts the client and their regulators expect: replication runbooks, DR test evidence, RTO and RPO measurement methodology, and Well-Architected Reliability Pillar alignment.
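To make the runbook validation above concrete, here is a minimal sketch (Python with boto3) of one Game Day step for an Aurora Global Database tier: confirm cross-region replication lag is inside the tier's RPO budget, then trigger a managed failover to the secondary region. The cluster identifiers, regions, and RPO budget are hypothetical placeholders, and a real runbook step would also capture timings as RTO/RPO evidence.

```python
"""Game Day step sketch: lag check + managed Aurora Global Database failover.

All identifiers, regions, and budgets below are hypothetical placeholders.
"""
from datetime import datetime, timedelta, timezone

import boto3

GLOBAL_CLUSTER_ID = "payments-ledger-global"      # hypothetical global cluster
SECONDARY_CLUSTER_ID = "payments-ledger-usw2"     # hypothetical secondary cluster
SECONDARY_CLUSTER_ARN = (
    "arn:aws:rds:us-west-2:123456789012:cluster:payments-ledger-usw2"  # hypothetical
)
RPO_BUDGET_MS = 1_000  # illustrative Tier 0 budget

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")
rds = boto3.client("rds", region_name="us-east-1")  # primary region


def replication_lag_ms() -> float:
    """Max AuroraGlobalDBReplicationLag (ms) for the secondary over the last 5 minutes."""
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="AuroraGlobalDBReplicationLag",
        Dimensions=[{"Name": "DBClusterIdentifier", "Value": SECONDARY_CLUSTER_ID}],
        StartTime=now - timedelta(minutes=5),
        EndTime=now,
        Period=60,
        Statistics=["Maximum"],
    )
    points = stats["Datapoints"]
    return max(p["Maximum"] for p in points) if points else float("inf")


def planned_failover() -> None:
    """Promote the secondary region only if replication lag is within the RPO budget."""
    lag = replication_lag_ms()
    if lag > RPO_BUDGET_MS:
        raise RuntimeError(f"Replication lag {lag:.0f} ms exceeds RPO budget; aborting.")
    rds.failover_global_cluster(
        GlobalClusterIdentifier=GLOBAL_CLUSTER_ID,
        TargetDbClusterIdentifier=SECONDARY_CLUSTER_ARN,
    )


if __name__ == "__main__":
    planned_failover()
```

In practice a step like this would sit alongside a Fault Injection Service experiment that injects the failure the runbook is meant to survive.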
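For the CDC bullet, here is a similarly minimal sketch of one hop in a DynamoDB Streams pipeline: a Lambda handler that republishes change records as domain events so extracted services can consume changes without reading the monolith's tables. The event bus name, event source, and detail type are hypothetical; a DMS or Kafka Connect path would replace the EventBridge call with its own sink.

```python
"""CDC hop sketch: DynamoDB Streams records republished as domain events.

Bus name, source, and detail type are hypothetical placeholders.
"""
import json

import boto3

events = boto3.client("events")
EVENT_BUS = "payments-domain-events"  # hypothetical bus


def handler(event, context):
    """Lambda handler invoked with a batch of DynamoDB Streams records."""
    entries = []
    for record in event.get("Records", []):
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue  # deletes would flow through a separate tombstone path (assumption)
        change = record["dynamodb"]
        entries.append({
            "EventBusName": EVENT_BUS,
            "Source": "payments.monolith.cdc",      # hypothetical source
            "DetailType": "PaymentRecordChanged",   # hypothetical contract name
            "Detail": json.dumps({
                "keys": change["Keys"],
                "newImage": change.get("NewImage", {}),
                "sequenceNumber": change["SequenceNumber"],
            }),
        })
    # PutEvents accepts at most 10 entries per call, so publish in batches.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i:i + 10])
    return {"published": len(entries)}
```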
WHAT YOU'LL BRING (REQUIRED)
- 12+ years in database engineering and architecture, with at least 5 years in a Principal or Staff-level architect role on production systems at fintech scale.
- Demonstrated experience designing and shipping multi-region active-passive AND active-active data architectures on AWS, including at least one full transition from single-region to multi-region in production, ideally for financial services or similarly regulated/high-availability environments.
- Deep, production experience with Aurora MySQL (including Global Database) and at least three of: Cassandra, DynamoDB (Global Tables), ElastiCache or MemoryDB (Redis / Valkey), DocumentDB, Neptune. Working knowledge of the remaining engines.
- Real experience with monolith decomposition from the data side: extracting bounded contexts out of a shared relational database, owning CDC and dual-write patterns, and managing the transitional period where both monolith and microservices read and write the same data.
- Fintech or payments domain experience: idempotent write patterns (see the sketch after this list), ledger correctness, transaction reconciliation, and the regulatory posture that comes with PCI-DSS, SOX, or equivalent.
- Hands-on with IaC (Terraform or CDK) and observability tooling (Datadog, New Relic, CloudWatch).
- Strong written communication skills.
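As a concrete reference for the idempotent-write pattern listed above, here is a minimal sketch using a DynamoDB conditional put keyed on a client-supplied idempotency key, so a retried request cannot double-apply a ledger entry. The table name, key, and attributes are hypothetical.

```python
"""Idempotent write sketch: conditional put keyed on an idempotency key.

Table and attribute names are hypothetical placeholders.
"""
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
TABLE = "ledger-entries"  # hypothetical table; partition key: idempotency_key


def record_payment(idempotency_key: str, amount_cents: int, currency: str) -> bool:
    """Apply the write once; return False if the same request was already applied."""
    try:
        dynamodb.put_item(
            TableName=TABLE,
            Item={
                "idempotency_key": {"S": idempotency_key},
                "amount_cents": {"N": str(amount_cents)},
                "currency": {"S": currency},
            },
            # Succeeds only if no item with this idempotency key exists yet.
            ConditionExpression="attribute_not_exists(idempotency_key)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # duplicate request: acknowledge without re-applying
        raise
```

The same shape carries over to the relational side, where a unique constraint on the idempotency key plays the role of the condition expression.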
NICE TO HAVE
- Prior consulting experience delivering an AWS modernization program against an RFP-defined SOW.
- Experience with AWS Application Recovery Controller (ARC) Region Switch.
- Familiarity with cell-based architecture patterns and per-cell sharding for blast-radius reduction.
- AWS certifications (Solutions Architect Professional, Database Specialty) are a plus, not a filter.
When you join Solvd, you'll…
- Shape real-world AI-driven projects across key industries, working with clients from startup innovation to enterprise transformation.
- Be part of a global team with equal opportunities for collaboration across continents and cultures.
- Thrive in an inclusive environment that prioritizes continuous learning, innovation, and ethical AI standards.
Ready to make an impact?
If you're excited to build things that matter, champion responsible AI, and grow with some of the industry's sharpest minds, apply today and let's innovate together.
Solvd is an equal opportunity employer.