Forward-Deployed Engineer – Data Engineering Expert
About Us
At Viaduct, we use patented AI to discover hidden patterns in complex time series data, so manufacturers and operators of connected equipment can deliver transformative business results from their data, fast. On our platform we deliver solutions across the equipment lifecycle, from manufacturing productivity and quality to service operations and fleet management.
Who You Are
You are a thoughtful engineer. You are energized by working on projects with direct user impact. You thrive on context-switching across a variety of projects. You understand the complexities of distributed systems and how to triage and solve the issues that arise with them. You keep scalability top of mind when designing systems and writing code. You believe building a better ETL system requires close collaboration with the machine learning and data science teams. You actively leverage agentic AI tools to increase your productivity.
What We Are Looking For
We are looking for a well-rounded data engineer who ideally has the following skills and expertise.
Key Responsibilities
• Operate as the data engineering expert on the Internal Efficiencies team, collaborating closely with other engineers, data scientists, solution/product experts, and customer success leaders to deliver on user and customer commitments
• Contribute to defining data engineering deliverables from inception to completion, ensuring timely delivery and high-quality outputs
• Leverage agentic AI tools on a daily basis to deliver code and improve your productivity
• Develop and maintain strong relationships with all Internal Efficiencies team members as well as engineering subject-matter experts across Viaduct
• Work with customer success and solution / product team members to scope user/customer needs and translate them into actionable analytics solutions
• Deliver with quality and speed, contributing new features to solutions and products, refactoring code, improving tests, automating manual processes, and owning other engineering tasks
• Stay current with relevant trends and technologies and apply them to improve our software
Security and Privacy Responsibilities
• Follow our policy and procedure documents related to security and privacy
• Follow the guidelines in the Employee Handbook
• Follow the OWASP Top 10 guidelines when implementing and reviewing code
• Participate in new hire and annual training for security and privacy
• Treat data security and privacy as one of your primary job responsibilities
• Report any security incidents you discover as bugs
• Get approval from the Security Team before adding new 3rd party software to our codebase
• Explicitly consider security implications when doing PR reviews
Minimum Qualifications
• Experience owning a project from design to implementation
• 3+ years working as a full-time data engineer
• Proficiency in Python/Scala/Java/C++ and SQL
• Comfortable exploring and answering questions with data and SQL
• Experience leveraging agentic AI tools to deliver code (e.g., Claude Code, Codex, Cursor)
• Track record of experience with many of the following: (a) Spark or equivalent technologies, (b) workflow schedulers (Airflow, Prefect, Argo, etc.), (c) distributed file systems (HDFS, S3, etc.), (d) tools in the open-source data ecosystem (Apache, CNCF, etc.), (e) incremental or real-time processing (Delta Lake, Apache Hudi, Kafka Streams, Spark Streaming, etc.)
• Familiarity with automating data engineering tasks and processes using command-line or GUI tools (e.g., Linux, Gumloop, Airbyte, ADF)
• Business English fluency and aptitude for working in an international environment across geographies and time zones
• Bachelor’s degree (or equivalent) in a related field (e.g., Computer Science)
Preferred Qualifications
• Experience as a tech lead or mentor
• 3+ years of experience with Spark or equivalent technologies
• 2+ years of experience with a workflow scheduler (Airflow, Prefect, Argo, etc.)
• 2+ years of experience with distributed file systems (HDFS, S3, etc.)
• Track record of success in an environment with direct user/customer interaction or on projects with a strong user/customer-feedback component
• Track record of success in a fast-changing environment where projects and tasks change rapidly
• Personal enthusiasm and track record of success in leveraging agentic AI tools to deliver both code and solutions (e.g., Claude Code, Codex, Cursor)
• Advanced / graduate degree (or equivalent) in a related field (e.g., Computer Science)
Position Details
• Compensation: Competitive base salary, short-term incentive (annual bonus) and long-term incentive (long-term upside)
• Travel: 0-20%, with occasional international travel
• Employment Type: Full Time
• Work site: Flexible (Ljubljana office and/or work from home)
• Team: Internal Efficiencies