Perik.ai See who’s hiring. Apply before everyone else.

Data Engineer (Mid-Level)

Platform Science
📍 Londrina 📅 Posted April 27, 2026
Apply on Platform Science’s website →

About this role

Londrina, Brazil

At Platform Science, we’re working to connect everything that moves.

Founded in 2015, we are an open IoT platform that partners with innovative fleets, application developers, vehicle manufacturers, and equipment providers in the transportation industry to deliver revolutionary solutions to supply chain professionals across the globe.

Our employees are an engaging, diverse group of people who believe in the power of great ideas. We hire people with different experiences and perspectives to build a company culture that fuels growth through innovation.

We value thoughtful actions and empathy for others. We approach challenges with resilience and creativity, while encouraging transparency because, no matter our backgrounds or responsibilities, we are one team.

What You’ll Do:

• Serve as a developer for our data analytics platform, informing technical direction, standards, and long-term strategy for data ingestion, storage, processing, and consumption.

• Partner with business stakeholders, product owners, engineers, analysts, and data scientists to design and develop highly scalable, resilient, and performant software and data architectures for both batch and streaming data pipelines.

• Collaborate closely with product and engineering teams to define and enforce governance around data models that enable complex analysis, visualization, and data science across the organization.

• Leverage our modern data stack (e.g., Snowflake and dbt) to continue building out, modeling, cost-optimizing, and managing our data warehouse.

• Partner with Data Scientists to design, build, and productionize robust Machine Learning pipelines and feature stores.

• Establish data quality standards and processes to ensure data is properly filtered, cleaned, and transformed from a variety of sources to enable accurate and trustworthy analysis.

• Work directly with management and executive teams to prioritize and scope platform initiatives that align with the company's highest business and information needs.

• Mentor and coach team members on best practices, complex system design, and career growth, serving as an authoritative voice for data engineering excellence.

What We’re Looking For:

• 3+ years of progressive experience as a Data Engineer, focusing on building high-scale, production-grade data analytics platforms.

• Proficiency (3+ years) in software development, including the ability to write expert-level, maintainable, and robust code using Python and SQL.

• 3+ years designing, managing, and maintaining data warehouses (especially modern columnar databases), including expertise in dimensional data modeling and maintaining complex ETL/ELT pipelines.

• Experience working with both OLTP/OLAP relational databases and distributed data systems.

• 3+ years working with AWS services such as EC2, Lambda, S3, RDS, ECR, EKS, IAM, and IoT, specifically in the context of data infrastructure.

• 3+ years of experience working with streaming data technologies such as Kinesis, Kafka, or Storm, and designing real-time data processing architectures.

• Advanced SQL skills including performance tuning, window functions, and complex query optimization.

Great to have:

• Proven ability to design and manage CI/CD pipelines using tools like Jenkins, Travis CI, or GitLab Runners for data platform code deployment.

• Demonstrated experience using Terraform (or similar IaC tool like CloudFormation) to provision, manage, and scale cloud infrastructure, data pipelines, and platform resources.

• Hands-on experience with AWS Database Migration Service (DMS) for large-scale database migrations and continuous data replication from various data sources.

• Demonstrated technical leadership regarding all things data: mining, modeling, transformation, cleansing, validation, security, and governance.

• Exceptional communication and presentation skills, with the ability to articulate complex technical trade-offs to both technical and non-technical audiences.

• Experience building and managing a workflow management system such as Airflow, Luigi, or Prefect.

• Expertise with containerization and orchestration using Docker/Kubernetes/EKS.

• Proven experience building and optimizing serverless data pipelines (e.g., AWS Lambda, Step Functions).

• Experience with Data Build Tool (dbt) for transformation and modeling.

• Experience with Snowflake or other major columnar/cloud data warehouses.

• BS/MS in Computer Science, Engineering, or equivalent practical experience.

• Experience with Feature Store architectures (e.g., Feast, Tecton) and developing standardized feature engineering pipelines.

This listing was aggregated by Perik.ai from Platform Science’s public job board. Click the button above to view the full job description and apply directly.

Perik.ai is an AI & tech job board that aggregates the latest openings from top companies — updated daily so you can apply before everyone else.
