Gridsight is a rapidly growing Grid/CleanTech startup that is accelerating global electrification and decarbonisation. Our AI-powered software generates insights that help electrical utilities maximise reliability, optimise operations, and integrate renewable energy sources seamlessly into the grid. Having recently raised our Series A round, we're excited to supercharge our team in our quest to modernise, optimise and decentralise the global electricity grid as fast as possible.
The Opportunity
As a Graduate Data Engineer at Gridsight, you'll combine your passion for distributed renewables with hands-on experience building and owning data pipelines to enhance and scale our platform to hundreds of thousands of meters across the globe. You'll design, develop, and optimise data pipelines and architectures for electricity distribution networks, playing a critical role in transforming the way utilities enable the decentralisation and decarbonisation of the grid. Join us in our mission to enable a future-proof grid powered by decentralised resources and renewables.
Core Responsibilities
- Create scalable and efficient production-quality ETL pipelines to handle large volumes of data from various sources
- Integrate and transform a variety of meter and GIS data sources into common Gridsight schemas
- Manage the end-to-end execution of customer data pipelines and remediate any failures
- Ensure data accuracy, consistency, and integrity by implementing robust data validation and governance practices
- Monitor and optimise data pipelines and queries to ensure high performance and low latency
- Discover, develop, and productionise high-value features and Machine Learning models
- Work closely with data scientists, customer success engineers, and other key stakeholders to understand ongoing data needs and provide solutions that support their requirements
Your Skills & Qualifications
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related discipline
- Experience with SQL relational databases; exposure to data engineering tools such as dbt preferred
- Hands-on experience building production-quality ETL pipelines
- Proficiency with big data frameworks (Spark preferred)
- Familiarity with at least one major cloud computing provider (AWS preferred)
- Fluency in Python, the command line, and Git
- Previous experience as a Data Engineering intern, preferably delivering projects to external stakeholders or clients
- Strong analytical and problem-solving skills, with a proactive attitude towards identifying and resolving technical challenges
- Self-starter mentality: you're able to independently prioritise tasks and manage time effectively with minimal oversight
- Excellent communication skills and the ability to collaborate effectively within a start-up environment
Ways We Work
- Remote-first, office-optional. Do you work your best from home, in the office, or a mix of both? Either way, we've got you covered.
- Work flexibly, communicate frequently. No hard and fast core working hours, and asynchronous communication is embraced (unless you have client commitments, of course); transparency is the name of the game.
- Travel occasionally, banter continuously. Attend in-person working weeks once per quarter and a retreat once per year; working together IRL may be infrequent, but the chats (and emojis) never stop.
What We Offer
- Competitive base salary
- Equity incentives
- Well-being + WFH allowances