Senior Data Engineer - LATAM
This position offers you the opportunity to join a fast-growing technology organization that is redefining productivity paradigms in the software engineering industry. Thanks to our flexible, globally distributed operating model and the high caliber of our experts, we have enjoyed triple-digit growth over the past five years, creating amazing career opportunities for our people. If you want to accelerate your career working with like-minded subject matter experts, solving interesting problems, and building the products of tomorrow, this opportunity is for you.
You will join the data platform team, developing the data infrastructure that supports our next-generation products and data initiatives.
In this group, you will be responsible for building, operationalizing, and optimizing our data and data pipeline architecture, and for improving data flow and collection for cross-functional teams.
Do you believe testing should be at the heart of any engineering effort? Do you believe that ruthless simplification and refactoring are the soul of any engineering effort? Do you want to help establish an engineering culture with these fundamentals? We are looking for individuals hungry to drive change across an already very profitable business. Technology is changing rapidly, and that pace is only accelerating; advancements in GPT (Generative Pre-trained Transformer) models and Retrieval-Augmented Generation (RAG) techniques are just a couple of examples. We are expanding our engineering teams to leverage these innovative technologies and modernize our entire Data Management Platform. Come join us to learn and apply modern technologies and prepare our platforms for the many growth opportunities that lie ahead.
Key Responsibilities
- Develop and maintain data pipelines using Medallion Architecture
- Write and optimize Spark jobs using Python/PySpark and Scala
- Work with the Apache Iceberg table format for efficient data lake management
- Use Trino for SQL querying and Cypher for graph database operations
- Manage and optimize AWS cloud infrastructure for data processing and storage
- Design and maintain PostgreSQL databases
- Collaborate with cross-functional teams in two-week sprints
- Participate in on-call rotation during business hours
- Contribute to continuous improvement of data engineering practices
Required Skills and Experience
- 5+ years of experience in data engineering or related field
- Strong proficiency in Python, PySpark, and Scala
- Experience with AWS cloud services and PostgreSQL
- Familiarity with Medallion Architecture and the Iceberg table format
- Knowledge of Trino for SQL querying
- Experience with agile methodologies and Jira
- Excellent communication skills and ability to work autonomously
What you'll bring to us:
- Ability to work well within an agile development process, including Scrum, unit testing, and continuous build and integration
- Proficiency in programming languages, especially Python
- Experience in building and optimizing "big data" pipelines and data sets, including designing, constructing, installing, testing, and maintaining highly scalable data management systems
- Understanding of data structures and algorithms, skills in distributed computing, and the ability to develop procedures for data mining, data modeling, and data production
- Experience with cloud services such as AWS, Google Cloud, or Azure, which are widely used for data storage and processing
Some of the benefits you’ll enjoy working with us:
- The chance to work on innovative projects with leading brands that use the latest technologies to fuel transformation.
- The opportunity to be part of an amazing, multicultural community of tech experts.
- The opportunity to grow and develop your career with the company.
- A flexible and remote working environment.
Come and join our #ParserCommunity.
Follow us on LinkedIn