
Data Engineer

Somerset Bridge
Newcastle upon Tyne
1 week ago

Data Engineer

Application Deadline: 27 June 2025
Department: [SBSS] Enterprise Data Management
Employment Type: Permanent - Full Time
Location: Newcastle
Reporting To: Mike Jolley
Compensation: £55,000 - £68,500 / year

Description
We're building something special, and we need a talented Data Engineer to help bring our Azure data platform to life.

This is your chance to work on a greenfield Enterprise Data Warehouse programme in the insurance sector, shaping data pipelines and platforms that power smarter decisions, better pricing, and sharper customer insights.

The Data Engineer will design, build, and optimise scalable data pipelines within Azure Databricks, ensuring high-quality, reliable data is available to support pricing, underwriting, claims, and operational decision-making. This role is critical in modernising SBG’s cloud-based data infrastructure, ensuring compliance with FCA/PRA regulations, and enabling AI-driven analytics and automation.

By leveraging Azure-native services, such as Azure Data Factory (ADF) for orchestration, Delta Lake for ACID-compliant data storage, and Databricks Structured Streaming for real-time data processing, the Data Engineer will help unlock insights, enhance pricing accuracy, and drive innovation. The role also includes optimising Databricks query performance, implementing robust security controls (RBAC, Unity Catalog), and ensuring enterprise-wide data reliability.
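To illustrate the merge-based ingestion pattern that Delta Lake's ACID guarantees make safe, here is a minimal plain-Python sketch of an idempotent upsert by key. In the actual role this would be a Delta `MERGE INTO` running on Databricks; the table contents and the `policy_id` key below are hypothetical examples.

```python
def merge_upsert(target_rows, incoming_rows, key="policy_id"):
    """Idempotent upsert: update rows with matching keys, insert new ones.

    Mirrors the semantics of a Delta Lake MERGE INTO statement, which
    applies the matched updates and unmatched inserts atomically.
    """
    merged = {row[key]: dict(row) for row in target_rows}
    for row in incoming_rows:
        if row[key] in merged:
            merged[row[key]].update(row)      # WHEN MATCHED THEN UPDATE
        else:
            merged[row[key]] = dict(row)      # WHEN NOT MATCHED THEN INSERT
    return sorted(merged.values(), key=lambda r: r[key])


target = [{"policy_id": 1, "premium": 320.0}, {"policy_id": 2, "premium": 450.0}]
incoming = [{"policy_id": 2, "premium": 475.0}, {"policy_id": 3, "premium": 290.0}]
result = merge_upsert(target, incoming)
# policy 2 is updated, policy 3 is inserted, policy 1 is untouched
```

Because the operation is keyed and idempotent, re-running a batch after a partial failure cannot create duplicates, which is the practical benefit of ACID merges over append-only loads.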

Working closely with Data Architects, Pricing Teams, Data Analysts, and IT, this role will ensure our Azure Databricks data ecosystem is scalable, efficient, and aligned with business objectives. Additionally, the Data Engineer will contribute to cost optimisation, governance, and automation within Azure’s modern data platform.

Key Responsibilities

Data Pipeline Development – Design, build, and maintain scalable ELT pipelines using Azure Databricks, Azure Data Factory (ADF), and Delta Lake to automate real-time and batch data ingestion.
Cloud Data Engineering – Develop and optimise data solutions within Azure, ensuring efficiency, cost-effectiveness, and scalability, leveraging Azure Synapse Analytics, ADLS Gen2, and Databricks Workflows.
Data Modelling & Architecture – Implement robust data models to support analytics, reporting, and machine learning, using Delta Lake and Azure Synapse.
Automation & Observability – Use Databricks Workflows, dbt, and Azure Monitor to manage transformations, monitor query execution, and implement data reliability checks.
Data Quality & Governance – Ensure data integrity, accuracy, and compliance with industry regulations (FCA, Data Protection Act, PRA) using Databricks Unity Catalog and Azure Purview.
Collaboration & Stakeholder Engagement – Work closely with Data Scientists, Pricing, Underwriting, and IT to deliver data-driven solutions aligned with business objectives.
Data Governance & Security – Implement RBAC, column-level security, row-access policies, and data masking to protect sensitive customer data and ensure FCA/PRA regulatory compliance.
Innovation & Continuous Improvement – Identify and implement emerging data technologies within the Azure ecosystem, such as Delta Live Tables (DLT), Structured Streaming, and AI-driven analytics to enhance business capabilities.
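The data-masking responsibility above can be sketched in a few lines of plain Python. This is only an illustration of the behaviour a masking policy provides; on the platform itself it would be expressed as a Unity Catalog column mask or dynamic view, and the account-number format shown is a made-up example.

```python
def mask_pii(value: str, visible: int = 4) -> str:
    """Mask all but the last `visible` characters of a sensitive field,
    so analysts can join and spot-check records without seeing raw PII."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]


masked = mask_pii("GB29NWBK60161331926819")
# only the final four characters remain readable
```

A column mask like this is typically applied per role, so privileged users see the raw value while everyone else sees the masked form.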

Skills, Knowledge and Expertise

Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks.
Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation.
Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing.
Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics.
Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks.
Strong Python (PySpark) skills for big data processing and automation.
Experience with Scala (optional but preferred for advanced Spark applications).
Experience working with Databricks Workflows & Jobs for data orchestration.
Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference.
Experience with data modelling techniques to support analytics and reporting.
Familiarity with real-time data processing and API integrations (e.g., Kafka, Spark Streaming).
Proficiency in CI/CD pipelines for data deployment using Azure DevOps, GitHub Actions, or Terraform for Infrastructure as Code (IaC).
Understanding of MLOps principles, including continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning models.
Experience with performance tuning and query optimisation for efficient data workflows.
Strong understanding of query optimisation techniques in Databricks (caching, partitioning, indexing, and auto-scaling clusters).
Experience monitoring Databricks workloads using Azure Monitor, Log Analytics, and Databricks Performance Insight.
Familiarity with cost optimisation strategies in Databricks and ADLS Gen2 (e.g., managing compute resources efficiently).
Problem-solving mindset – Ability to diagnose issues and implement efficient solutions.
Experience implementing Databricks Unity Catalog for data governance, access control, and lineage tracking.
Understanding of Azure Purview for data cataloging and metadata management.
Familiarity with object-level and row-level security in Azure Synapse and Databricks.
Experience working with Azure Event Hubs, Azure Data Explorer, or Kafka for real-time data streaming.
Hands-on experience with Databricks Structured Streaming for real-time and near-real-time data processing.
Understanding of Delta Live Tables (DLT) for automated ELT and real-time transformations.
Analytical thinking – Strong ability to translate business needs into technical data solutions.
Attention to detail – Ensures accuracy, reliability, and quality of data.
Communication skills – Clearly conveys technical concepts to non-technical stakeholders.
Collaboration – Works effectively with cross-functional teams, including Pricing, Underwriting, and IT.
Adaptability – Thrives in a fast-paced, agile environment with evolving priorities.
Stakeholder management – Builds strong relationships and understands business requirements.
Innovation-driven – Stays up to date with emerging technologies and industry trends.
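Among the optimisation techniques listed above, partitioning is the easiest to demonstrate. The sketch below, in plain Python with hypothetical claims data, shows why a filter on the partition column only has to scan a fraction of the data; on Databricks the same effect comes from partition pruning on a Delta table.

```python
from collections import defaultdict


def partition_by(rows, column):
    """Group rows by a column's value, the way a Delta table partitioned
    on that column lays data out in separate directories."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[column]].append(row)
    return dict(parts)


def query_with_pruning(parts, value):
    """A filter on the partition column reads only the matching partition,
    skipping every other one entirely."""
    return parts.get(value, [])


rows = [
    {"claim_date": "2025-06-01", "amount": 120},
    {"claim_date": "2025-06-01", "amount": 80},
    {"claim_date": "2025-06-02", "amount": 200},
]
parts = partition_by(rows, "claim_date")
hits = query_with_pruning(parts, "2025-06-02")  # scans 1 of 2 partitions
```

The same idea scales: a year of daily partitions means a single-day query touches roughly 1/365th of the files, which is where most of the query-cost savings come from.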

Our Benefits

Hybrid working – 2 days in the office and 3 days working from home
25 days annual leave, rising to 27 days over 2 years’ service and 30 days after 5 years’ service. Plus bank holidays!
Discretionary annual bonus
Pension scheme – 5% employee, 6% employer
Flexible working – we will always consider applications for those who require less than the advertised hours
Flexi-time
Healthcare Cash Plan – claim cashback on a variety of everyday healthcare costs
Electric vehicle – salary sacrifice scheme
Hundreds of exclusive retailer discounts
Professional wellbeing, health & fitness app - Wrkit
Enhanced parental leave, including time off for IVF appointments
Religious bank holidays – if you don’t celebrate Christmas and Easter, you can use these annual leave days on other occasions throughout the year.
Life Assurance - 4 times your salary
25% Car Insurance Discount
20% Travel Insurance Discount
Cycle to Work Scheme
Employee Referral Scheme
Community support day
