Technology is our how. And people are our why. For over two decades, we have been harnessing technology to drive meaningful change.
By combining world-class engineering, industry expertise and a people-centric mindset, we consult and partner with leading brands from various industries to create dynamic platforms and intelligent digital experiences that drive innovation and transform businesses.
From prototype to real-world impact - be part of a global shift by doing work that matters.
Job Description

Our data team has expertise across engineering, analysis, architecture, modeling, machine learning, artificial intelligence, and data science. This discipline is responsible for transforming raw data into actionable insights, building robust data infrastructures, and enabling data-driven decision-making and innovation through advanced analytics and predictive modeling.
Responsibilities:
- Work closely with the Data Analyst/Data Scientist to understand evolving needs and define the data processing flows or interactive reports.
- Discuss with stakeholders from other teams to better understand how data flows are used within the existing environment.
- Propose solutions for the cloud-based architecture and deployment flow.
- Design and build processes, data transformations, and metadata to meet business requirements and platform needs.
- Design and propose solutions for the relational and dimensional models based on platform capabilities.
- Develop, maintain, test, and evaluate big data solutions.
- Focus on the production status and data quality of the data environment.
- Pioneer initiatives around data quality, integrity, and security.

Qualifications:
- 5+ years working with the GCP ecosystem.
- 5+ years of experience in Data Engineering.
- Proficiency in Apache Spark.
- Proficiency in Python.
- Some experience leading IT projects and managing stakeholders.
- Experience implementing ETL/ELT processes and data pipelines.
- Experience with Snowflake.
- Strong SQL scripting experience.
- Background and experience with cloud data technologies and tools.
- Familiarity with data tools and technologies such as Spark, Hadoop, Apache Beam, Dataproc, or similar.
- Experience with BigQuery, Redshift, or other data warehouse tools.
- Experience building real-time pipelines with Kinesis or Kafka.
- Experience with batch processing.
- Experience with serverless processing.
- Strong analytical skills for working with structured and unstructured data.
- Cloud certifications such as Associate Cloud Engineer are an asset.