Senior Data Engineer

Piper Maddox
City of London
3 days ago
Data Engineer

Asset Performance & Renewable Energy Analytics


The opportunity

An established renewable energy and digital solutions business is expanding its Asset Performance Management (APM) technology team and is hiring an experienced Data Engineer to support large-scale operational renewable assets.


This role sits within a product-focused engineering group responsible for building and scaling data platforms used to monitor, optimise, and improve the performance of wind, solar, and energy storage assets globally.


You will work closely with software engineers, data scientists, and platform teams to design and operate high-quality data pipelines that directly underpin operational decision-making and analytics for live energy assets.


Key responsibilities

  • Design, build, and maintain scalable data pipelines using Databricks (including Delta Live Tables).
  • Develop robust ETL/ELT workflows ingesting data from operational, telemetry, and third-party systems.
  • Optimise pipeline performance, reliability, and cost efficiency in cloud environments.
  • Ensure data quality, lineage, governance, and documentation across production systems.
  • Collaborate cross-functionally with analytics, product, and platform teams.
  • Support CI/CD automation for data pipeline deployment.
  • Contribute to reusable frameworks and engineering best practices within the team.
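To make the pipeline and data-quality responsibilities above concrete, here is a minimal, self-contained Python sketch of the kind of quality gate a Delta Live Tables expectation expresses. The field names, thresholds, and `TurbineReading` type are illustrative assumptions, not taken from any specific APM platform's data model.

```python
from dataclasses import dataclass

# Illustrative record type: field names and ranges are assumptions,
# not drawn from Power Factors, Bazefield, or GPM schemas.
@dataclass
class TurbineReading:
    asset_id: str
    power_kw: float
    wind_speed_ms: float

def is_valid(reading: TurbineReading) -> bool:
    """Data-quality gate, analogous to a DLT expectation:
    reject records with missing IDs or out-of-range telemetry."""
    return (
        reading.asset_id != ""
        and 0.0 <= reading.power_kw <= 10_000.0   # assumed utility-scale upper bound
        and 0.0 <= reading.wind_speed_ms <= 60.0  # physically plausible wind speed
    )

def clean(readings: list[TurbineReading]) -> list[TurbineReading]:
    """Cleansing step of an ETL/ELT flow: keep only readings
    that pass the quality gate before they reach analytics."""
    return [r for r in readings if is_valid(r)]
```

In a production Databricks pipeline the same rule would typically be declared as a table expectation (e.g. drop-on-violation) rather than a hand-rolled filter, so lineage and quality metrics are tracked by the platform.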

Essential experience

Candidates must have prior, hands-on experience working with at least one of the following APM platforms:

  • Power Factors
  • Bazefield
  • GPM

This experience is critical, as the role involves working directly with data models, integrations, and operational outputs from these platforms.


Technical requirements

  • Proven experience as a Data Engineer in production environments.
  • Strong Python and SQL skills.
  • Hands-on Databricks experience (DLT, Delta Lake; Unity Catalog desirable).
  • Solid understanding of data modelling, data warehousing, and distributed systems.
  • Experience with cloud data platforms (Azure preferred; AWS or GCP acceptable).
  • Familiarity with Git-based workflows and CI/CD pipelines.
  • Exposure to analytics or ML-driven use cases is beneficial.

Nice to have

  • Databricks certifications (Associate or Professional).
  • Experience supporting asset-heavy or industrial environments.
  • Background in energy, utilities, or infrastructure data platforms.

Why this role

  • Work on live, utility-scale renewable assets rather than abstract datasets.
  • High-impact role within a mature but fast-evolving digital platform.
  • Strong engineering culture with real ownership and technical influence.
  • Long-term stability combined with ongoing platform growth and investment.



