Senior Data Engineer / Power BI

Glasgow

Lead Data Engineer - Azure & Databricks Lakehouse

Glasgow (3/4 days onsite) | Exclusive Role with a Leading UK Consumer Business

A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone and Databricks Lakehouse.
They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog, and Azure Data Factory, and this role sits right at the heart of that transformation.
This is a rare opportunity to join early, influence architecture, and help define engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care.
If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You'll Be Doing

Lakehouse Engineering (Azure + Databricks)

Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).

Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.

Apply Lakeflow expectations for data quality, schema validation and operational reliability (see the sketch below).
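
To give a flavour of what this looks like in practice, here is a minimal sketch of a declarative pipeline with expectations, written against the Delta Live Tables-style Python API (import dlt) that Lakeflow Declarative Pipelines supports. The subscriptions feed, landing paths and column names are purely illustrative and not taken from the role.

```python
# Minimal Lakeflow Declarative Pipelines sketch using the Delta Live Tables-style
# Python API. The subscriptions feed, landing paths and columns are hypothetical.
# `spark` is provided by the pipeline's notebook context.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Bronze: raw subscription events ingested incrementally with Auto Loader.")
def bronze_subscriptions():
    return (
        spark.readStream.format("cloudFiles")              # Auto Loader incremental file ingestion
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/landing/_schemas/subscriptions")
        .load("/mnt/landing/subscriptions/")
    )


@dlt.table(comment="Silver: validated, deduplicated subscription events.")
@dlt.expect_or_drop("valid_customer_id", "customer_id IS NOT NULL")  # hard rule: drop violating rows
@dlt.expect("recent_event", "event_ts >= '2020-01-01'")              # soft rule: record violations, keep rows
def silver_subscriptions():
    return (
        dlt.read_stream("bronze_subscriptions")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .dropDuplicates(["subscription_id", "event_ts"])
    )
```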

Curated Data Layers & Modelling

Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).

Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets (an SCD Type 2 sketch follows this list).

Apply governance, lineage and fine-grained permissions via Unity Catalog.
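
As one illustration of the SCD handling mentioned above, the sketch below closes out changed rows and appends new current versions of a customer dimension using a Delta MERGE. The table names, keys and the row_hash change-detection column are hypothetical, and surrogate key generation and full column alignment with the target schema are omitted for brevity.

```python
# Hypothetical SCD Type 2 upsert into a customer dimension; names and columns are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customer_changes")         # latest batch of changed customer records
dim = DeltaTable.forName(spark, "silver.dim_customer")    # target dimension with is_current / valid_to

# Step 1: close out the current version of any customer whose tracked attributes changed.
(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.row_hash <> u.row_hash",              # only rows whose attributes actually differ
        set={"is_current": "false", "valid_to": "u.effective_date"},
    )
    .execute()
)

# Step 2: append new current versions for new customers and for customers just closed out.
still_current = spark.table("silver.dim_customer").where("is_current = true")
new_versions = (
    updates.join(still_current, "customer_id", "left_anti")  # no open row means a new version is needed
    .withColumn("valid_from", F.col("effective_date"))
    .withColumn("valid_to", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
)
new_versions.write.format("delta").mode("append").saveAsTable("silver.dim_customer")
```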

Orchestration & Observability

Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.

Implement monitoring, alerting, SLAs/SLIs, runbooks and cost optimisation across the platform (a simple freshness check is sketched below).
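
As a simple example of the kind of SLA check this covers, the sketch below measures the age of the latest load into a hypothetical Gold table and fails the task when an illustrative freshness target is breached, so the workflow's failure notifications surface the issue.

```python
# Hypothetical SLA / freshness check for a Gold table. The table name, the
# _ingested_at audit column and the 4-hour target are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

SLA_HOURS = 4  # illustrative freshness target

freshness = (
    spark.table("gold.fct_deliveries")
    .agg(F.max("_ingested_at").alias("last_load"))
    .withColumn(
        "age_hours",
        (F.unix_timestamp(F.current_timestamp()) - F.unix_timestamp("last_load")) / 3600,
    )
    .collect()[0]
)

if freshness["age_hours"] is None or freshness["age_hours"] > SLA_HOURS:
    # Failing the task lets the workflow's alerting pick up the breach; a real
    # implementation might also push to Azure Monitor or a Teams webhook.
    raise RuntimeError(
        f"gold.fct_deliveries is stale: last load {freshness['last_load']} "
        f"breaches the {SLA_HOURS}h freshness SLA"
    )
```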

DevOps & Platform Engineering

Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.

Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault.

Contribute to platform standards, design patterns, code reviews and future roadmap.

Collaboration & Delivery

Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.

Influence architecture decisions and uplift engineering maturity within a growing data function.

Tech Stack You'll Work With

Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses

Azure: ADLS Gen2, Data Factory, Key Vault, VNets & Private Endpoints

Languages & tooling: PySpark, Spark SQL, Python, Git

DevOps: Azure DevOps Repos, Pipelines, CI/CD

Analytics: Power BI, Fabric

What We're Looking For

Experience

5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.

Strong PySpark/Spark SQL and distributed data processing expertise.

Proven Medallion/Lakehouse delivery experience using Delta Lake.

Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.

Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.

Mindset

Strong grounding in secure Azure Landing Zone patterns.

Comfort with Git, CI/CD, automated deployments and modern engineering standards.

Clear communicator who can translate technical decisions into business outcomes.

Nice to Have

Databricks Certified Data Engineer Associate

Streaming ingestion experience with Auto Loader, structured streaming and watermarking (see the sketch after this list)

Subscription/entitlement modelling experience

Advanced Unity Catalog security (RLS, ABAC, PII governance)

Terraform/Bicep for IaC

Fabric Semantic Model / Direct Lake optimisation
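
As a rough sketch of the streaming ingestion mentioned above, the example below uses Auto Loader with a watermark-bounded windowed aggregation. The storage paths, table names and window/watermark sizes are placeholders rather than anything specified in the role.

```python
# Hypothetical streaming ingestion with Auto Loader plus a watermarked windowed aggregation.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

landing = "abfss://landing@examplestorage.dfs.core.windows.net"  # placeholder storage account

events = (
    spark.readStream.format("cloudFiles")                  # Auto Loader: incremental file discovery
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", f"{landing}/_schemas/orders")
    .load(f"{landing}/orders/")
    .withColumn("event_ts", F.to_timestamp("event_ts"))
)

# The watermark bounds how long state is kept for late-arriving events: anything more
# than 30 minutes behind the latest observed event_ts is dropped from the aggregation.
orders_per_window = (
    events.withWatermark("event_ts", "30 minutes")
    .groupBy(F.window("event_ts", "10 minutes"), "region")
    .count()
)

(
    orders_per_window.writeStream
    .option("checkpointLocation", f"{landing}/_checkpoints/orders_per_window")
    .outputMode("append")                                   # windows are emitted once the watermark passes
    .toTable("bronze.orders_per_window")
)
```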

