Senior Data Engineer

Phoenix Group Holdings
Edinburgh
3 days ago

We have an incredible opportunity to join us here at Phoenix Group as a Senior Data Engineer in our Engineering & Delivery team within Group IT.


Job Type: Permanent - Specialist Band 2


Location: This role could be based in either our Birmingham, Telford or Edinburgh offices with time spent working in the office and at home.


Flexible working: All of our roles are open to part-time, job-share and other types of flexibility. We will discuss what is important to you and balancing this with business requirements during the recruitment process. You can read more about Phoenix Flex here.


Closing Date: 19/01/2026


Salary and benefits: £45,000 - £60,000, plus 16% bonus (up to 32%), private medical cover, 38 days annual leave, excellent pension, 12x salary life assurance, career breaks, income protection, 3x volunteering days and much more.


Who are we?

We want to be the best place that any of our 6,600 colleagues have ever worked.


We’re Phoenix Group, we’re a long-term savings and retirement business. We offer a range of products across our market-leading brands, Standard Life, SunLife, Phoenix Life and ReAssure. Around 1 in 5 people in the UK has a pension with us. We’re a FTSE 100 organisation that is tackling key issues such as transitioning our portfolio to net zero by 2050, and we’re not done yet.


The role

We are seeking a Senior Data Engineer to join the Engineering and Delivery function within Group IT. This is a pivotal role for candidates with a strong background in data and engineering who want to shape how data drives every aspect of a modern pensions business. From operational efficiency and digital transformation to regulatory compliance and customer engagement, your work will influence decisions and enable change across the organisation.


As a Senior Data Engineer, you will be responsible for designing, implementing, and optimizing data solutions on cloud platforms, with a strong emphasis on Databricks. Beyond analytics, you will help embed data capabilities into core business processes, supporting areas such as operations, digital services, risk management, accounting and actuarial. You will collaborate with cross-functional teams—including data scientists, analysts, product owners, and operational leaders—to ensure data is a trusted, integrated asset powering innovation and business outcomes.


Key Responsibilities

  • Design and implement end-to-end data engineering solutions across multiple platforms, including Azure, Databricks, SQL Server, and Salesforce, enabling seamless data integration and interoperability.
  • Architect and optimize Delta Lake environments within Databricks to support scalable, reliable, and high-performance data pipelines for both batch and streaming workloads.
  • Develop and manage robust data pipelines for operational, analytical, and digital use cases, leveraging best practices for data ingestion, transformation, and delivery.
  • Integrate diverse data sources—cloud, on-premises, and third-party systems—using connectors, APIs, and ETL frameworks to ensure consistent and accurate data flow across the enterprise.
  • Implement advanced data storage and retrieval strategies that support operational data stores (ODS), transactional systems, and analytical platforms.
  • Collaborate with cross-functional teams (data scientists, analysts, product owners, and operational leaders) to embed data capabilities into business processes and digital services.
  • Optimize workflows for performance and scalability, addressing bottlenecks and ensuring efficient processing of large-scale datasets.
  • Apply security and compliance best practices, safeguarding sensitive data and ensuring adherence to governance and regulatory standards.
  • Create and maintain comprehensive documentation for data architecture, pipelines, and integration processes to support transparency and knowledge sharing.
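To give a flavour of the pipeline work described above, here is a minimal, purely illustrative sketch of a batch ingest → transform → deliver flow. It uses only the Python standard library and invented names (a real pipeline in this role would run on Databricks against Delta Lake tables rather than in-memory strings); the bronze/silver/gold layering mirrors common lakehouse practice.

```python
# Illustrative batch pipeline sketch: ingest -> transform -> deliver.
# All names and data are hypothetical; a production pipeline would use
# Spark/Delta Lake on Databricks, not local stdlib processing.
import csv
import io
import json

RAW = """policy_id,holder,monthly_contribution
P001,Alice,250
P002,Bob,
P003,Carol,310
"""

def ingest(raw: str) -> list[dict]:
    """Read raw CSV rows as-is (the 'bronze' layer)."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Validate and normalise records (the 'silver' layer):
    drop rows with a missing contribution and cast numeric fields."""
    out = []
    for row in rows:
        if not row["monthly_contribution"]:
            continue  # in production, route to a quarantine table instead
        out.append({
            "policy_id": row["policy_id"],
            "holder": row["holder"],
            "monthly_contribution": int(row["monthly_contribution"]),
        })
    return out

def deliver(rows: list[dict]) -> str:
    """Serialise curated records for downstream consumers (the 'gold' layer)."""
    return json.dumps(rows, indent=2)

clean = transform(ingest(RAW))
print(deliver(clean))
```

The same three-stage shape carries over directly to Databricks, where each stage typically reads from and writes to a Delta table and can serve both batch and streaming workloads.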

Qualifications

  • Proven experience in enterprise-scale data engineering, with a strong focus on cloud platforms (Azure preferred) and cross-platform integration (e.g. Azure, Salesforce, SQL Server).
  • Deep expertise in Databricks and Delta Lake architecture, including designing and optimizing data pipelines for batch and streaming workloads.
  • Strong proficiency in building and managing data pipelines using modern ETL/ELT frameworks and connectors for diverse data sources.
  • Hands‑on experience with operational and analytical data solutions, including ODS, data warehousing, and real‑time processing.
  • Solid programming skills in Python, Scala, and SQL, with experience in performance tuning and workflow optimization.
  • Experience with cloud-native services (Azure Data Factory, Synapse, Event Hubs) and integration patterns for hybrid environments.

We want to hire the whole version of you.

We are committed to ensuring that everyone feels accepted, and we welcome applicants from all backgrounds. If your experience looks different from what we’ve advertised and you believe that you can bring value to the role, we’d love to hear from you.


If you require any adjustments to the recruitment process, please let us know so we can help you to be at your best.


Please note that we reserve the right to remove adverts earlier than the advertised closing date. We encourage you to apply at the earliest opportunity.



