As a Data Engineer, you will design and build data infrastructure, ensure data continuity, modernize data systems, and collaborate with teams to optimize data workflows.
At Wave, we help small businesses thrive so the heart of our communities beats stronger. We work in an environment buzzing with creative energy and inspiration. No matter where you are or how you get the job done, you have what you need to be successful and connected. The mark of true success at Wave is the ability to be bold, learn quickly and share your knowledge generously.
Reporting to the Manager, Data Engineering, as a Data Engineer you will be building tools and infrastructure to support the efforts of the Data Products and Insights & Innovation teams, and the business as a whole.
We’re looking for a talented, curious self-starter who is driven to solve complex problems and can juggle multiple domains and stakeholders. This highly technical individual will collaborate with all levels of the Data & AI team as well as the various engineering teams to develop data solutions, scale our data infrastructure and advance Wave to the next stage in our transformation as a data-centric organization.
This role is for someone with proven experience in complex product environments. Strong communication skills are a must to bridge the gap between technical and non-technical audiences with varying levels of data maturity.
Here's How You Make an Impact:
- You’re a builder - Design, build, and deploy components of a modern data platform, including CDC-based ingestion using Debezium and Kafka, a centralized Hudi-based data lake, and a mix of batch, incremental, and streaming data pipelines (a PySpark sketch of this CDC flow follows this list).
- You ensure continuity while driving modernization - Maintain and enhance the Amazon Redshift warehouse and legacy Python ELT pipelines, while driving the transition to a Databricks- and dbt-based analytics environment that will replace the current stack (a short dbt model sketch also follows below).
- You balance innovation with operational excellence - Build fault-tolerant, scalable, and cost-efficient data systems, and continuously improve observability, performance, and reliability across both legacy and modern platforms.
- You collaborate to deliver impact - Partner with cross-functional teams to design and deliver data infrastructure and pipelines that support analytics, machine learning, and GenAI use cases, ensuring timely and accurate data delivery.
- You thrive in ambiguity and take ownership - Work autonomously to identify and implement opportunities to optimize data pipelines and improve workflows under tight timelines and evolving requirements.
- You keep the platform reliable - Respond to PagerDuty alerts, troubleshoot incidents, and proactively implement monitoring and alerting to minimize incidents and maintain high availability.
- You’re a strong communicator - Provide technical guidance to colleagues, clearly communicating complex concepts and actively listening to build trust and resolve issues efficiently.
- You’re customer-minded - Assess existing systems, improve data accessibility, and deliver practical solutions that enable internal teams to generate actionable insights and enhance the experience of our external customers.
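To make the CDC bullet above concrete, here is a minimal PySpark sketch that reads a hypothetical Debezium topic from Kafka (MSK) and upserts the change records into a Hudi table on S3. The broker address, topic name, schema, and S3 paths are all placeholders, and the Debezium envelope is simplified; a real pipeline would pull schemas from the connector configuration and handle deletes and tombstones.

```python
# Minimal sketch: Debezium CDC events from Kafka, upserted into Hudi on S3.
# All brokers, topics, schemas, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = (
    SparkSession.builder.appName("cdc-ingest-sketch")
    # Assumes the Hudi Spark bundle is already on the classpath.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Simplified row image for a hypothetical `customers` table; real Debezium
# payloads nest this under payload.after alongside before/op/ts fields.
row_schema = StructType([
    StructField("id", LongType()),
    StructField("email", StringType()),
    StructField("updated_at", LongType()),
])
envelope = StructType([StructField("after", row_schema)])

changes = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "msk-broker:9092")   # placeholder
    .option("subscribe", "dbserver.public.customers")       # placeholder
    .option("startingOffsets", "latest")
    .load()
    .select(from_json(col("value").cast("string"), envelope).alias("msg"))
    .select("msg.after.*")
)

hudi_options = {
    "hoodie.table.name": "customers",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.operation": "upsert",
}

(
    changes.writeStream.format("hudi")
    .options(**hudi_options)
    .option("checkpointLocation", "s3://example-bucket/checkpoints/customers")
    .outputMode("append")
    .start("s3://example-bucket/lake/customers")
    .awaitTermination()
)
```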
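And for the Databricks/dbt bullet: dbt models are usually SQL, but dbt (1.3+) also supports Python models on Databricks, which keeps this sketch in one language. The upstream `stg_customers` model and its columns are hypothetical.

```python
# models/marts/fct_active_customers.py
# Minimal dbt Python model sketch (dbt >= 1.3 on Databricks).
# The upstream `stg_customers` model and its columns are hypothetical.

def model(dbt, session):
    dbt.config(materialized="table")
    customers = dbt.ref("stg_customers")  # a Spark DataFrame on Databricks
    # Keep only active customers; a real model would add tests and docs
    # in the accompanying YAML.
    return customers.filter(customers.is_active)
```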
You Thrive Here by Possessing the Following:
- Data Engineering Expertise: 3+ years of experience building data pipelines and managing a secure, modern data stack, including CDC streaming ingestion (e.g., Debezium) into data warehouses that support AI/ML workloads.
- AWS Cloud Proficiency: At least 3 years of experience working with AWS cloud infrastructure, including Kafka (MSK), Spark / AWS Glue, and infrastructure as code (IaC) using Terraform.
- Data Modelling and SQL: Fluency in SQL and a strong understanding of data modelling principles and data storage structures for both OLTP and OLAP.
- Databricks experience: Experience developing or maintaining a production data system on Databricks is a significant plus.
- Strong Coding Skills: Experience writing and reviewing high-quality, maintainable code to improve the reliability and scalability of data platforms, using Python, SQL, and dbt, and leveraging third-party frameworks as needed (see the incremental-load sketch after this list).
- Data Lake Development: Prior experience building data lakes on S3 using Apache Hudi with Parquet, Avro, JSON, and CSV file formats.
- CI/CD Best Practices: Experience developing and deploying data pipeline solutions using CI/CD best practices to ensure reliability and scalability.
- Data Governance Knowledge: Familiarity with data governance practices, including data quality, lineage, and privacy, and experience using data cataloging tools to support discoverability and compliance.
- Data Integration Tools: Working knowledge of tools such as Stitch and Segment CDP for integrating diverse data sources into a cohesive ecosystem.
- Analytical and ML Tools Expertise: Experience with Athena, Redshift, or SageMaker Feature Store for analytics and ML workflows is a plus.
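As a rough illustration of the maintainable pipeline code this role calls for, below is a watermark-driven incremental load in Python. The `etl_state` and `orders` tables, columns, and connections are hypothetical, and Redshift specifics (staging plus MERGE rather than row inserts) are only noted in a comment; treat it as a pattern sketch, not a production job.

```python
"""Minimal sketch of a watermark-driven incremental load.

All table names, columns, and connections are hypothetical; a production
job would add retries, logging, batching, and schema handling.
"""
from datetime import datetime, timezone

import psycopg2  # assumes Postgres/Redshift-compatible source and target


def get_watermark(conn, pipeline: str) -> datetime:
    """Read the last successfully loaded timestamp for this pipeline."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT last_loaded_at FROM etl_state WHERE pipeline = %s",
            (pipeline,),
        )
        row = cur.fetchone()
    return row[0] if row else datetime(1970, 1, 1, tzinfo=timezone.utc)


def load_increment(src, dst, pipeline: str = "orders") -> int:
    """Copy only rows changed since the stored watermark."""
    watermark = get_watermark(dst, pipeline)
    with src.cursor() as cur:
        cur.execute(
            "SELECT id, amount, updated_at FROM orders WHERE updated_at > %s",
            (watermark,),
        )
        rows = cur.fetchall()
    if not rows:
        return 0
    with dst.cursor() as cur:
        # Redshift has no ON CONFLICT; a real job would COPY to a staging
        # table and MERGE so that replayed windows stay idempotent.
        cur.executemany(
            "INSERT INTO orders_stage (id, amount, updated_at) "
            "VALUES (%s, %s, %s)",
            rows,
        )
        cur.execute(
            "UPDATE etl_state SET last_loaded_at = %s WHERE pipeline = %s",
            (max(r[2] for r in rows), pipeline),
        )
    dst.commit()
    return len(rows)
```

Updating the watermark in the same transaction as the load keeps the job restart-safe: a failure before commit simply re-extracts the same window on the next run.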
At Wave, we value diversity of perspective. Your unique experience enriches our organization. We welcome applicants from all backgrounds. Let’s talk about how you can thrive here!
Wave is committed to providing an inclusive and accessible candidate experience. If you require accommodations during the recruitment process, please let us know by emailing [email protected]. We will work with you to meet your needs.
Please note that we use AI-assisted note-taking in interviews for transcription purposes only. This helps ensure interviewers can remain fully present and engaged throughout the discussion.
Top Skills
Amazon Redshift
Apache Hudi
Avro
AWS Glue
CI/CD
Databricks
dbt
Debezium
JSON
Kafka
Parquet
Python
SQL
Terraform