Data Engineer

London
4 days ago

We are Data Services. Our mission is to unlock the value of data by delivering high-quality, reliable, and secure data services that are accessible, understandable, and actionable. We continuously evolve our offerings, leveraging modern cloud-based technologies and fostering strong partnerships, to help our colleagues in the Bank navigate the complexities of a data-driven world and achieve their strategic objectives.

Active SC Clearance required

Job Description:

The world of data in Central Banking is evolving rapidly. With the rise of detailed data collection in financial regulation and the swift advancements in cloud-native data technologies, the demand for visionary data engineers is growing. We’re seeking a senior Data Engineer to join our Data Engineering team and play a pivotal role in shaping the Bank’s strategic cloud-first data platform.

As a senior member of the team, you will play a key role in designing and delivering robust, scalable data solutions that support the Bank’s core responsibilities around monetary policy, financial stability, and regulatory supervision. You’ll contribute to technical design decisions, mentor engineers, and collaborate across teams to ensure our data infrastructure continues to evolve and meet future demands.

Role Responsibilities

  • Lead the design, development, and deployment of scalable, secure, and cost-effective distributed data solutions using Azure services (e.g., Azure Databricks, Azure Data Lake Storage, Azure Data Factory).

  • Architect and implement advanced data pipelines using Databricks, Delta Lake, Python and Spark, ensuring performance, reliability, and maintainability across cloud and on-prem environments.

  • Champion data quality, governance, and observability, ensuring data is accurate, timely, and fit-for-purpose for analytics, BI, and operational use cases.

  • Drive the modernization of legacy systems, leading the migration of data infrastructure to Azure with minimal disruption and long-term scalability.

  • Act as a technical authority on Azure-native data engineering, guiding best practices and setting standards across the team.

  • Mentor and coach junior and mid-level engineers, fostering a culture of continuous learning, innovation, and technical excellence.

  • Collaborate with architects, analysts, and stakeholders to align data engineering efforts with strategic business goals and enterprise data strategy.

  • Evaluate and introduce emerging technologies, tools, and methodologies to enhance the Bank’s data capabilities.

  • Own the end-to-end delivery of complex data solutions, from requirements gathering to production deployment and support.

  • Contribute to the development of reusable frameworks, templates, and patterns to accelerate delivery and ensure consistency across projects.
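To give candidates a flavour of the data-quality work described above, here is a minimal, standalone Python sketch of a batch "quality gate" that flags incomplete or stale records before they reach downstream analytics. Field names such as `ingested_at` and the 24-hour freshness threshold are illustrative assumptions, not the Bank's actual checks:

```python
from datetime import datetime, timedelta, timezone

def quality_gate(rows, required_fields, max_age_hours=24):
    """Check a batch of records for completeness and timeliness.

    Returns a dict of failure counts; an empty dict means the batch passed.
    Field names and the freshness threshold are illustrative only.
    """
    failures = {"missing_field": 0, "stale_record": 0}
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    for row in rows:
        # Completeness: every required field must be present and non-empty.
        if any(row.get(f) in (None, "") for f in required_fields):
            failures["missing_field"] += 1
        # Timeliness: records older than the cutoff are flagged as stale.
        ts = row.get("ingested_at")
        if ts is not None and ts < cutoff:
            failures["stale_record"] += 1
    return {k: v for k, v in failures.items() if v}
```

In practice the same checks would run at scale as Spark/Delta Lake expectations rather than a Python loop, but the principle of gating data on accuracy and timeliness is the same.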

Minimum Criteria

  • Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.

  • Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimization and distributed processing.

  • Proven experience in CI/CD practices using industry-standard tools (e.g., GitHub Actions, Azure DevOps).

  • Strong understanding of data architecture principles and cloud-native design patterns.

Essential Criteria

  • Demonstrated ability to lead technical delivery, mentor engineering teams and collaborate with stakeholders to ensure alignment between data solutions and business strategy.

  • Proficiency in Linux/Unix environments and shell scripting.

  • Deep understanding of source control, testing strategies, and agile development practices.

  • Self-motivated with a strategic mindset and a passion for driving innovation in data engineering.

Desirable Criteria

  • Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.

  • Familiarity with Apache Airflow.

  • Experience with data modelling and metadata management.

  • Experience influencing enterprise data strategy and contributing to architectural governance.


