Senior Data Engineer

Data Freelance Hub
Glasgow
1 month ago
Applications closed


This role is a Senior Data Engineer position on a 24-month fixed-term contract; the pay rate is not specified. The position is remote and requires 5+ years of data engineering experience, advanced expertise in Databricks, and a Databricks Certified Data Engineer Professional certification.


Location: Glasgow, Scotland, United Kingdom (Remote)


Overview

Do you want to work to make Power for Good? We're the world's largest independent renewable energy company, driven by a simple yet powerful vision: to create a future where everyone has access to affordable, zero-carbon energy. We know that achieving our ambitions would be impossible without our people. Because we're tackling some of the world's toughest problems, we need the very best people to help us. They're our most important asset, so we continually invest in them.

RES is a family with a diverse workforce, and we are dedicated to the personal and professional growth of our people, no matter what stage of their career they're at. We can promise you rewarding work that makes a real impact, the chance to learn from inspiring colleagues across a growing global network, and opportunities to grow personally and professionally.

Our competitive package offers rewards and benefits including pension schemes, flexible working, and a top-down emphasis on better work-life balance. We also offer private healthcare, discounted green travel, 25 days' holiday with options to buy/sell days, enhanced family leave, and four volunteering days per year so you can make a difference somewhere else.


Position

We are looking for a Senior Data Engineer with advanced expertise in Databricks to lead the development of scalable data solutions within our asset performance management software, part of our Digital Solutions business.


This role involves architecting complex data pipelines, mentoring junior engineers, and driving best practices in data engineering and cloud analytics. You will play a key role in shaping our data strategy, which is the backbone of our software, and in enabling high-impact analytics and machine-learning initiatives.


Accountabilities

  • Design and implement scalable, high-performance data pipelines.
  • Work with the lead cloud architect on the design of data lakehouse solutions leveraging Delta Lake and Unity Catalog.
  • Collaborate with cross‑functional teams to define data requirements, governance standards, and integration strategies.
  • Champion data quality, lineage, and observability through automated testing, monitoring, and documentation.
  • Mentor and guide junior data engineers, fostering a culture of technical excellence and continuous learning.
  • Drive the adoption of CI/CD and DevOps practices for data engineering workflows.
  • Stay ahead of emerging technologies and Databricks platform updates, evaluating their relevance and impact.

Knowledge

  • Deep understanding of distributed data processing, data lakehouse architecture, and cloud‑native data platforms.
  • Optimization of data workflows for performance, reliability, and cost‑efficiency on cloud platforms (particularly Azure, though experience with AWS and/or GCP would be beneficial).
  • Strong knowledge of data modelling, warehousing, and governance principles.
  • Knowledge of data privacy and compliance standards (e.g., GDPR, HIPAA).
  • Understanding of OLTP and OLAP workloads and the scenarios in which to deploy each.
  • Understanding of incremental processing patterns.
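The posting doesn't elaborate on what it means by incremental processing, but the core idea is to process only records newer than a stored watermark rather than reprocessing everything. A minimal, illustrative Python sketch (the function name and tuple layout are our own, not from the posting; on Databricks this role is typically played by Auto Loader, Delta Live Tables, or a Delta Lake MERGE):

```python
from datetime import datetime

def incremental_load(records, watermark):
    """Return records newer than the watermark, plus the advanced watermark.

    records: iterable of (event_time: datetime, payload) tuples.
    watermark: event_time of the last successfully processed record.
    """
    # Select only records that arrived after the last processed point.
    new = [r for r in records if r[0] > watermark]
    # Advance the watermark; if nothing new arrived, keep the old one.
    new_watermark = max((r[0] for r in new), default=watermark)
    return new, new_watermark
```

Persisting the watermark between runs (e.g., in a checkpoint table) is what makes the pipeline restartable without duplicating work.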

Skills

  • Strong proficiency in Python and SQL. Experience working with Scala would be beneficial.
  • Proven ability to design and optimize large‑scale ETL/ELT pipelines.
  • Building and managing pipeline orchestrations.
  • Excellent oral and written communication, both within the team and with our stakeholders.
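At its core, orchestration means running dependent tasks in the right order. As a rough sketch of the pattern (illustrative only; in this role the heavy lifting would be done by Databricks Workflows or dbt), using Python's standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run callables in dependency order.

    tasks: {name: zero-arg callable}.
    deps:  {name: set of prerequisite task names}.
    Returns the execution order and each task's result.
    """
    # static_order() yields names so every task runs after its prerequisites.
    order = list(TopologicalSorter(deps).static_order())
    results = {name: tasks[name]() for name in order}
    return order, results
```

A real orchestrator adds scheduling, retries, and alerting on top of exactly this dependency-resolution step.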

Experience

  • 5+ years of experience in data engineering, with at least 2 years working extensively with Databricks and orchestrated pipelines (e.g., dbt, Delta Live Tables, or Databricks Workflows jobs).
  • Experience with Delta Lake and Unity Catalog in production environments.
  • Experience with CI/CD tools and version control systems (e.g., Git, GitHub Actions, Azure DevOps, Databricks Asset Bundles).
  • Experience with both batch and real‑time (streaming) data processing.
  • Experience working on machine learning workflows and integration with data pipelines.
  • Experience leading data engineering projects with distributed teams, ideally in a cross‑functional environment.

Qualifications

  • Databricks Certified Data Engineer Professional or equivalent certification.

Tags

  • #GitHub
  • #Data Strategy
  • #Strategy
  • #Data Processing
  • #Delta Lake
  • #Python
  • #SQL (Structured Query Language)
  • #GDPR (General Data Protection Regulation)
  • #Databricks
  • #AWS (Amazon Web Services)
  • #Data Quality
  • #dbt (data build tool)
  • #Compliance
  • #Git
  • #Azure DevOps
  • #Documentation
  • #Version Control
  • #Data Lake
  • #Scala
  • #Monitoring
  • #Batch
  • #Automated Testing
  • #Observability
  • #Azure
  • #ML (Machine Learning)
  • #Data Lakehouse
  • #Data Engineering
  • #Cloud
  • #Data Pipeline
  • #GCP (Google Cloud Platform)
  • #ETL (Extract, Transform, Load)



Industry Insights

Discover insightful articles, industry insights, expert tips, and curated resources.

How Many Data Science Tools Do You Need to Know to Get a Data Science Job?

If you’re trying to break into data science — or progress your career — it can feel like you are drowning in names: Python, R, TensorFlow, PyTorch, SQL, Spark, AWS, Scikit-learn, Jupyter, Tableau, Power BI…the list just keeps going. With every job advert listing a different combination of tools, many applicants fall into a trap: they try to learn everything. The result? Long tool lists that sound impressive — but little depth to back them up. Here’s the straight-talk version most hiring managers won’t explicitly tell you: 👉 You don’t need to know every data science tool to get hired. 👉 You need to know the right ones — deeply — and know how to use them to solve real problems. Tools matter, but only in service of outcomes. So how many data science tools do you actually need to know to get a job? For most job seekers, the answer is not “27” — it’s more like 8–12, thoughtfully chosen and well understood. This guide explains what employers really value, which tools are core, which are role-specific, and how to focus your toolbox so your CV and interviews shine.

What Hiring Managers Look for First in Data Science Job Applications (UK Guide)

If you’re applying for data science roles in the UK, it’s crucial to understand what hiring managers focus on before they dive into your full CV. In competitive markets, recruiters and hiring managers often make their first decisions in the first 10–20 seconds of scanning an application — and in data science, there are specific signals they look for first. Data science isn’t just about coding or statistics — it’s about producing insights, shipping models, collaborating with teams, and solving real business problems. This guide helps you understand exactly what hiring managers look for first in data science applications — and how to structure your CV, portfolio and cover letter so you leap to the top of the shortlist.

The Skills Gap in Data Science Jobs: What Universities Aren’t Teaching

Data science has become one of the most visible and sought-after careers in the UK technology market. From financial services and retail to healthcare, media, government and sport, organisations increasingly rely on data scientists to extract insight, guide decisions and build predictive models. Universities have responded quickly. Degrees in data science, analytics and artificial intelligence have expanded rapidly, and many computer science courses now include data-focused pathways. And yet, despite the volume of graduates entering the market, employers across the UK consistently report the same problem: Many data science candidates are not job-ready. Vacancies remain open. Hiring processes drag on. Candidates with impressive academic backgrounds fail interviews or struggle once hired. The issue is not intelligence or effort. It is a persistent skills gap between university education and real-world data science roles. This article explores that gap in depth: what universities teach well, what they often miss, why the gap exists, what employers actually want, and how jobseekers can bridge the divide to build successful careers in data science.