Finance Staff Data Engineer, AI Native
Accountabilities:
• Architect and evolve scalable data ingestion, transformation, and egress frameworks for financial data systems.
• Design and maintain robust pipelines ensuring high data quality, reliability, and observability across all workflows.
• Build and enhance CI/CD pipelines, improving testing, deployment automation, and developer velocity.
• Develop and optimize data infrastructure across AWS, Databricks, and related cloud environments.
• Define and enforce data security, governance, and SOX compliance controls across systems.
• Implement distributed data processing systems using Spark and modern cloud data architectures.
• Improve developer experience through tooling, automation, and AI-assisted engineering workflows.
• Establish ingestion standards, monitoring systems, and recovery mechanisms to ensure system resilience.
• Collaborate with analytics engineers and finance stakeholders to support downstream reporting and modeling.
• Leverage AI/LLM tools to accelerate development, debugging, and system optimization while maintaining ownership of code quality.
• Identify architectural risks, dependencies, and scalability challenges, providing clear technical direction.
• Mentor engineers and contribute to raising engineering standards across the team.
Requirements:
• 8+ years of experience building and operating large-scale distributed data systems in production environments.
• Strong expertise with cloud platforms, ideally Databricks, and AWS infrastructure/services.
• Advanced proficiency in Python, SQL, and Spark for large-scale data processing.
• Strong experience with CI/CD pipelines, Terraform, and modern DevOps practices.
• Solid understanding of dbt, data modeling, and analytical data architecture.
• Experience with data orchestration tools such as Airflow.
• Deep knowledge of data ingestion challenges including networking, APIs, and cross-cloud integration.
• Strong systems design skills with experience in scalable and event-driven architectures.
• Proven ability to use AI/LLM tools (e.g., Cursor, Claude, GitHub Copilot-style tools) to enhance engineering productivity.
• Experience implementing data quality controls, validation frameworks, and observability systems.
• Ability to independently scope ambiguous technical problems and drive them to completion.
• Strong communication skills across technical and non-technical stakeholders.
• Bachelor’s degree in Computer Science, Engineering, Mathematics, or equivalent experience.
Benefits:
• Competitive compensation package including base salary and equity opportunities.
• Comprehensive health benefits (medical, dental, vision, life, and disability coverage).
• Retirement savings plans with company contributions (401(k)/RRSP equivalents depending on location).
• Flexible paid time off and company-wide holidays.
• Fully remote-first working environment with home office equipment support.
• Learning and development programs to support technical growth.
• Mental health and employee assistance programs.
• Access to advanced AI tools and modern engineering environments.
How Jobgether works:
We use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team.
We appreciate your interest and wish you the best!
Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre-contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.
#LI-CL1