Senior Software Engineer, Data Engineering
About this role
You will design and evolve scalable data architectures and engineering frameworks that support analytics, AI/ML, and business operations. Your work will focus on building robust, efficient, and secure data pipelines and ensuring high-quality data availability across the organization.
Accountabilities
• Design, develop, and maintain scalable and secure data architectures and systems
• Build and optimize ETL/ELT pipelines, ensuring efficient and reliable data movement from diverse sources
• Develop and maintain data models for both OLTP and OLAP use cases
• Integrate data from APIs, databases, and third-party systems into unified data platforms
• Improve performance, scalability, and reliability of data pipelines and processing systems
• Implement data quality checks, governance frameworks, and monitoring processes
• Collaborate with data scientists, analysts, and engineers to deliver data solutions aligned with business needs
• Support production systems and ensure adherence to SLAs and operational reliability standards
• Contribute to technical leadership, documentation, and cross-team engineering best practices
Requirements
You bring strong experience in data engineering and distributed systems, with the ability to design and maintain large-scale data pipelines in cloud environments. You are comfortable working across the full data lifecycle, from ingestion to modeling and production optimization.
• 5+ years of experience in data engineering, building and maintaining scalable data pipelines
• 3+ years working with workflow orchestration tools such as Airflow or equivalent Python-based systems
• Strong programming skills in Python, Java, or Scala
• Deep proficiency in SQL and experience with relational and analytical databases (e.g., PostgreSQL, MySQL, Redshift, BigQuery, Snowflake)
• Experience designing data models (3NF) and working with data modeling tools
• Hands-on experience with cloud platforms, particularly AWS
• Experience building and optimizing ETL/ELT pipelines and distributed data systems
• Familiarity with big data technologies such as Spark, Hadoop, or Kafka
• Strong understanding of data governance, security practices (including PHI/PII), and data quality frameworks
• Excellent problem-solving, communication, and cross-functional collaboration skills
• Bachelor’s degree in Computer Science or related field preferred
Benefits
• Competitive base salary with annual cash bonus eligibility
• Equity grants as part of total compensation
• Remote-first work environment
• Flexible Time Off policy
• Generous parental leave
• Comprehensive health, dental, and vision insurance with strong employer contributions
• 401(k) retirement savings plan
• Lifestyle Spending Account (LSA)
• Mental health and wellness support programs
• Additional stipends and development resources
How Jobgether works:
We use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team.
We appreciate your interest and wish you the best!
Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre-contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.
#LI-CL1