Senior Data Engineer

Pantheon
City of London
1 day ago

Pantheon has been at the forefront of private markets investing for more than 40 years, earning a reputation for providing innovative solutions covering the full lifecycle of investments, from primary fund commitments to co‑investments and secondary purchases, across private equity, real assets and private credit.


We have partnered with more than 650 clients, including institutional investors of all sizes as well as a growing number of private wealth advisers and investors, with approximately $65 billion in discretionary assets under management (as of December 31, 2023).


Leveraging our specialized experience and global team of professionals across Europe, the Americas and Asia, we invest with purpose and lead with expertise to build secure financial futures.


Pantheon is undergoing a multi‑year programme to build out a new best‑in‑class Data Platform using cloud‑native technologies hosted in Azure. We require an experienced, passionate and hands‑on Senior Data Engineer to design and implement new data pipelines and adapt them to business and/or technology changes. This role will be integral to the success of this programme and to establishing Pantheon as a data‑centric organisation.


You will be working with a modern Azure tech stack; proven experience of ingesting and transforming data from a variety of internal and external systems is core to the role.


You will be part of a small and highly skilled team, and you will need to be passionate about providing best‑in‑class solutions to our global user base.


Key Responsibilities

  • Design, build, and maintain scalable, secure, and high‑performance data pipelines on Azure, primarily using Azure Databricks, Azure Data Factory, and Azure Functions.
  • Develop and optimise batch and streaming data processing solutions using PySpark and SQL to support analytics, reporting, and downstream data products.
  • Implement robust data transformation layers using dbt, ensuring well‑structured, tested, and documented analytical models.
  • Collaborate closely with business analysts, QA teams, and business stakeholders to translate data requirements into reliable technical solutions.
  • Ensure data quality, reliability, and observability through automated testing, monitoring, logging, and alerting.
  • Lead on performance tuning, cost optimisation, and capacity planning across Databricks and associated Azure services.
  • Implement and maintain CI/CD pipelines using Azure DevOps, promoting best practices for version control, automated testing, and deployment.
  • Enforce data governance, security, and compliance standards, including access controls, data lineage, and auditability.
  • Contribute to architectural decisions and provide technical leadership, mentoring junior engineers and setting engineering standards.
  • Produce clear technical documentation and contribute to knowledge sharing across the data engineering function.

Knowledge & Experience Required
Essential Skills & Experience

  • Python and PySpark for large‑scale data processing.
  • SQL (advanced querying, optimisation, and data modelling).
  • Azure Data Factory (pipeline orchestration and integration).
  • Azure DevOps (Git, CI/CD pipelines, release management).
  • Azure Functions / serverless data processing patterns.
  • Data modelling (star schemas, data vault, or lakehouse‑aligned approaches).
  • Data quality, testing frameworks, and monitoring/observability.
  • Strong problem‑solving ability and a pragmatic, engineering‑led mindset.
  • Experience in an Agile software development environment.
  • Excellent communication skills, with the ability to explain complex technical concepts to both technical and non‑technical stakeholders.
  • Leadership and mentoring capability, with a focus on raising engineering standards and best practices.
  • Significant commercial experience (typically 5+ years) in data engineering roles, with demonstrable experience designing and operating production‑grade data platforms.
  • Strong hands‑on experience with Azure Databricks, including cluster configuration, job orchestration, and performance optimisation.
  • Proven experience building data pipelines with Databricks and Azure Data Factory, including integration with Azure‑native services (e.g., Data Lake Storage Gen2, Azure Functions).
  • Advanced experience with Python for data engineering, including PySpark for distributed data processing.
  • Strong SQL expertise, with experience designing and optimising complex analytical queries and data models.
  • Practical experience using dbt in a production environment, including model design, testing, documentation, and deployment.
  • Experience implementing CI/CD pipelines using Azure DevOps or equivalent tooling.
  • Solid understanding of data warehousing and lakehouse architectures, including dimensional modelling and modern analytics patterns.
  • Experience working in agile delivery environments and collaborating with cross‑functional teams.
  • Exposure to cloud security, data governance, and compliance concepts within Azure.

Desired Experience

  • Power BI and DAX
  • Business Objects Reporting

This job description is not to be construed as an exhaustive statement of duties, responsibilities, or requirements. You may be required to perform other job‑related duties as reasonably requested by your manager.


Pantheon is an Equal Opportunities employer; we are committed to building a diverse and inclusive workforce, so if you're excited about this role but your past experience doesn't perfectly align, we'd still encourage you to apply.


