Data Engineer - Shenzhen
Job Description
• Design, develop, maintain, and optimize scalable data pipelines and integration workflows across CASETiFY’s core business systems and data platforms.
• Build and support data ingestion, transformation, validation, and delivery processes for structured and semi-structured data from multiple source systems.
• Work closely with BI, analytics, and business stakeholders to understand reporting and analytical needs and translate them into reliable data engineering solutions.
• Develop and maintain curated datasets, data models, data marts, and reusable data assets to support business intelligence, operational reporting, self-service analytics, and management dashboards.
• Support data integration across key systems such as eCommerce platforms, ERP, OMS, WMS, CRM, marketing systems, finance systems, customer operations platforms, and other enterprise applications.
• Ensure data pipelines and datasets are accurate, complete, timely, and well governed through strong engineering practices, validation controls, monitoring, and reconciliation mechanisms.
• Collaborate with product, engineering, and platform teams to ensure data solutions are scalable, secure, maintainable, and aligned with enterprise architecture and business priorities.
• Support the implementation of data quality standards, metadata management, lineage, documentation, and data governance practices.
• Monitor and troubleshoot pipeline failures, data issues, and performance bottlenecks, and drive timely resolution and continuous improvement.
• Improve engineering efficiency through automation, standardization, reusable frameworks, and best practices in data development and deployment.
• Support the enablement of AI, machine learning, and advanced analytics use cases by preparing high-quality and sustainable data foundations.
• Participate in data platform enhancement, architecture discussions, release activities, and cross-functional delivery planning while maintaining clear technical documentation and operational procedures.
Requirements
• Solid hands-on experience in data engineering, ETL/ELT development, and enterprise data integration.
• Good understanding of data warehousing, data modeling, pipeline orchestration, data transformation, and data lifecycle management.
• Practical experience in building and maintaining data pipelines for analytics, reporting, and operational use cases.
• Strong SQL skills and hands-on experience with modern data platforms, cloud data environments, and related engineering tools.
• Experience working with structured and semi-structured data from multiple business systems and platforms.
• Good understanding of data quality controls, reconciliation, validation, monitoring, and troubleshooting practices.
• Experience in supporting BI and analytics use cases through curated datasets, semantic consistency, and well-structured data models.
• Familiarity with version control, automation, deployment processes, and engineering best practices in data environments.
• Good problem-solving and analytical skills with the ability to identify data issues and translate business needs into structured technical solutions.
• Able to work collaboratively with BI, analytics, engineering, product, and business stakeholders in cross-functional environments.
• A strong advocate for reliability, data accuracy, structured thinking, and continuous improvement.
• 4-6 years of relevant experience in data engineering, data platform development, or related roles.
• Experience in eCommerce, retail, omnichannel, supply chain, finance, or other data-intensive environments is preferred.
• Familiarity with enabling data foundations for AI, machine learning, or advanced analytics is a plus.
• Experience in fast-paced, multicultural environments is preferred.
• Proficient in spoken and written English; Chinese is a plus.