Senior Data Engineer
About this role
At Techwave, we constantly strive to foster a culture of growth and inclusivity. We ensure that everyone associated with the brand is challenged at every step and given the opportunities they need to excel. People are at the core of everything we do.
Who are we?
Techwave is a leading global IT and engineering services and solutions company revolutionizing digital transformations. We believe in enabling clients to maximize their potential and reach a wider market with a broad array of technology services, including, but not limited to, Enterprise Resource Planning, Application Development, Analytics, Digital, and the Internet of Things (IoT).
Founded in 2004 and headquartered in Houston, TX, USA, Techwave leverages its expertise in Digital Transformation, Enterprise Applications, and Engineering Services to enable businesses to accelerate their growth.
Plus, we're a team of dreamers and doers who are pushing the boundaries of what's possible.
And we want YOU to be a part of it.
Job Description
Role: Senior Data Engineer
Experience: 3–7 years
Location: Hyderabad
Summary
The Data Engineer (Senior Associate) leads the design and implementation of scalable, reliable data pipelines and data models that enable enterprise analytics and data products. This role owns complex data engineering solutions, drives best practices for quality and performance, and mentors junior engineers while partnering with stakeholders to deliver high-impact data capabilities.
Responsibilities
• Design, build, and maintain robust batch and streaming data pipelines, and lead the development of reusable data pipeline components, templates, and utilities aligned to established engineering standards, patterns, and reference architectures.
• Lead the onboarding of new data sources by performing source data profiling, documenting assumptions, validating data completeness and quality, and recommending integration and modeling approaches.
• Implement and optimize data transformations and analytical models to deliver high‑quality, analytics‑ready datasets that support reporting, advanced analytics, and downstream data products.
• Own schema evolution and change management processes, proactively assessing downstream impacts and coordinating changes to ensure data stability and continuity.
• Collaborate closely with data analysts, data architects, product teams, and business stakeholders to translate data requirements into scalable, well‑designed technical solutions.
• Design and enforce data quality checks, validation rules, and monitoring frameworks; lead investigation and resolution of complex data issues and defects, identifying root causes and preventative actions.
• Identify and drive optimization opportunities across pipelines, queries, and data processing patterns to improve performance, reliability, and cost efficiency.
• Create and maintain comprehensive documentation for pipelines, data models, business logic, and operational procedures to support transparency, reuse, and long‑term maintainability.
• Actively participate in and contribute to code reviews, testing strategies, and CI/CD pipelines, promoting engineering best practices and consistency across the team.
• Play a key role in production operations, including incident triage, root cause analysis, corrective actions, and post‑incident improvements, in close partnership with Data Ops.
• Design, implement, and maintain dashboards, metrics, and alerts that proactively surface data reliability, freshness, and quality issues before they impact consumers.
• Apply and advocate for governance‑by‑design principles, including implementation of data security, privacy controls, role‑based access, encryption standards, and data classification policies.
• Lead or contribute to metadata management practices, including dataset documentation, data lineage, ownership definitions, and discoverability.
• Design and execute unit, integration, and regression tests for data pipelines to validate transformations, business rules, and expected outcomes.
• Actively participate in sprint planning, backlog refinement, and technical estimation, providing guidance on effort, dependencies, risks, and delivery sequencing.
• Mentor and support Associate‑level engineers through technical guidance, code reviews, and knowledge sharing.
Skills & Capabilities
• Advanced proficiency in SQL and strong experience with at least one programming language such as Python for data processing and automation.
• Strong working knowledge of cloud‑native data services and architectural concepts, including storage layers, compute separation, scalability, and cost‑aware design.
• Hands‑on experience integrating data from diverse sources, including APIs, CSV, JSON, XML, Dataverse, and multiple database technologies into centralized data platforms.
• Strong understanding of ETL / ELT patterns, data modeling techniques (dimensional, analytical, and domain‑oriented models), and the end‑to‑end data lifecycle.
• Practical experience working with modern data platforms such as Snowflake or Microsoft Fabric, across data warehouse, data lake, and Lakehouse architectures.
• Experience designing or supporting semantic layers and analytics consumption patterns, including BI tools, metrics definitions, and curated datasets.
• Solid experience with version control, branching strategies, and CI/CD practices for data pipelines and analytics workloads.
• Strong analytical thinking and problem‑solving skills, with the ability to troubleshoot complex technical and data quality issues.
• Proven ability to collaborate effectively in cross‑functional, Agile delivery teams while working independently on complex engineering tasks.
• Good understanding of data privacy regulations and secure data handling practices in enterprise environments.
• Strong written and verbal communication skills, with the ability to clearly document technical decisions and explain data concepts to both technical and non‑technical stakeholders.
Education / Professional Experience / Qualifications
• Bachelor’s degree in Computer Science, Engineering, Information Systems, Analytics, or equivalent practical experience.
• 4–7 years of relevant experience in data engineering, analytics engineering, or related technical roles, with demonstrated progression in responsibility and technical ownership.
• 5+ years of Data Analyst or Business Data Analyst experience in enterprise data environments.
• Preferred: Experience in professional services, accounting industry, or client service/consultative technology roles.
• Experience with ETL/ELT tools (e.g., Informatica, dbt).
• Experience with BI/reporting tools (Power BI, Tableau, Excel).
• Experience with data modeling methodologies (dimensional modeling, data marts, data warehousing, star/snowflake schemas).
• Familiarity with cloud data platforms (Azure, AWS, GCP).
• Exposure to API testing tools (Postman, Swagger).
• Agile/Scrum delivery exposure.