Data Engineer Opportunity
Stable and secure permanent full-time role with a competitive salary and benefits.
Flexibility and access to professional development.
Immediate start with a salary range of $70k - $85k p.a. plus Superannuation.
About Us
We are a Victoria-based IT consulting and services company focusing on Data Analysis, Data Visualization, Big Data, and Cloud services.
Role Description
We have an opportunity for a Data Engineer to work on a large, greenfield data project.
You will be working closely with the business and other stakeholders as you develop a business-critical analytics and data platform.
Key Responsibilities:
Build data lakes, data pipelines, big data solutions, and data warehousing solutions.
Develop and deploy key cloud-based infrastructure and maintain data pipelines.
Translate business requirements into efficient technical designs.
Collaborate with cross-functional teams and work with a range of stakeholders.
Work with the Lead Data Engineer to manage and maintain a global data warehouse environment/ETL framework and database integrations.
Analyse problems, improve processes, and resolve issues efficiently.
Design and implement technology best practices, guidelines, and repeatable operational processes.
Skills and Experience:
To meet the challenges of this role, you will ideally hold a bachelor's degree in Computer Science, Engineering, or a related field and possess the following skills and experience:
Excellent working experience in Python and SQL.
Excellent working experience with AWS Redshift.
Experience with Docker and Kubernetes.
Experience with AWS, including any or all of EC2, S3, IAM, EKS, and RDS.
Good understanding of Data Warehousing/ETL concepts.
Experience with CI/CD tools.
5+ years of experience building cloud data infrastructure that scales readily with traffic.
Experience with cloud platforms such as AWS, Azure, and GCP.
Experience with DevOps/Continuous Integration frameworks and configuration management tools such as Puppet (Chef, Salt, and Docker are also relevant).
Experience with automation and orchestration concepts and tools.
Good knowledge of data management, data governance, and data storage principles.
Proficiency in PySpark for data processing and analytics.
Ability to work closely with functional teams to design and implement a Customer Data Platform (CDP) on the cloud.
Ability to ensure data solutions adhere to best practices in security, scalability, and performance.
Experience developing, managing, and maintaining ETL frameworks, data modeling, and analysis using the Kimball dimensional modeling methodology.
Ability to work both individually and collaboratively, and to take ownership of a project.
Infrastructure as Code: Terraform, ARM Templates, CloudFormation, AWS CDK, Serverless Framework, and Pulumi.
Build/Release Tools: GitHub, Azure DevOps, Bitbucket, and TeamCity.
Containers: Docker, Kubernetes.
Project management capabilities (Scrum, Agile, and Waterfall) and good communication skills.
Job Type: Full-time
Salary: $70,000.00 – $85,000.00 per year
Benefits: Hybrid work
Schedule: 8-hour shift
Ability to commute/relocate: Melbourne VIC: Reliably commute or plan to relocate before starting work (Preferred)
Work Authorisation: Australia (Preferred)
Date Posted: 15 Oct 2024
Work Location: Hybrid remote in Ballarat, VIC 3350