Fabric Data Lead
About this role
At Techwave, we work continuously to foster a culture of growth and inclusivity. We ensure that everyone associated with the brand is challenged at every step and given all the opportunities they need to excel. People are at the core of everything we do.
Who are we?
Techwave is a leading global IT and engineering services and solutions company revolutionizing digital transformations. We believe in enabling clients to maximize their potential and achieve greater market reach with a wide array of technology services, including, but not limited to, Enterprise Resource Planning (ERP), Application Development, Analytics, Digital, and the Internet of Things (IoT).
Founded in 2004 and headquartered in Houston, TX, USA, Techwave leverages its expertise in Digital Transformation, Enterprise Applications, and Engineering Services to enable businesses to accelerate their growth.
Plus, we're a team of dreamers and doers who are pushing the boundaries of what's possible.
And we want YOU to be a part of it.
Job Description
Key Responsibilities:
Architecture & Strategy
• Define and implement enterprise data fabric strategy to unify on-prem, cloud, and SaaS data sources.
• Design scalable, high-performance architectures to process large volumes of structured and unstructured data.
• Collaborate with enterprise architects to align data fabric initiatives with business goals.
Data Integration & Engineering
• Lead data ingestion, ETL/ELT pipelines, and real-time streaming solutions that can handle millions to billions of records daily.
• Implement data virtualization, data cataloging, and metadata management for massive datasets.
• Ensure data quality, governance, and lineage across large-scale data sources.
Team Leadership
• Lead and mentor a team of data engineers, data architects, and analysts working on high-volume, high-complexity data pipelines.
• Drive adoption of Agile and DevOps practices in large-scale data engineering projects.
Technology & Tools
• Hands-on expertise in tools such as:
  • Azure Data Factory, Azure Databricks, Synapse, Data Lake Storage
  • Data fabric platforms: Informatica, Talend, IBM Cloud Pak for Data, Microsoft Fabric
  • Spark, SQL, Python, or Scala for distributed processing of large datasets
• Implement and optimize batch and streaming pipelines for large data volumes.
• Ensure high availability, performance, and cost efficiency of enterprise-scale data fabric solutions.
Required Skills & Qualifications:
• 10–12 years of experience in data engineering, data architecture, or data integration, including handling large-scale datasets.
• Proven experience in data fabric implementation or enterprise data integration platforms.
• Expertise in cloud data platforms (Azure preferred, AWS/GCP also valuable).
• Strong programming skills: Python, Scala, SQL, Spark, or other distributed processing frameworks for big data.
• Experience with data lakes, data warehouses, and lakehouse architectures handling large data volumes.
• Strong understanding of ETL/ELT processes, batch and streaming pipelines for high-volume datasets.
• Experience with Agile methodologies and DevOps practices.
Preferred Skills:
• Experience with Power BI, Tableau, or other visualization tools for large-scale data.
• Familiarity with AI/ML integration in big data pipelines.
• Knowledge of regulatory compliance standards such as GDPR, HIPAA, or SOC 2.
• Certifications in Azure Data Engineer, Azure Solutions Architect, or equivalent.
Key Success Metrics:
• Successful implementation of enterprise-wide data fabric initiatives for large datasets.
• Efficient, scalable, and cost-optimized high-volume data pipelines.
• Improved data accessibility, quality, and governance across the enterprise.
• High adoption of data fabric solutions by business and technical teams.