Senior Data Engineer

Stott and May
City of London

Job Title: Senior Data Engineer

Location: London (Hybrid – minimum 2 days per week in the office)

Day Rate: Market rate (Inside IR35)

Contract Duration: 6 months


Role Overview

We are seeking an experienced Senior Data Engineer to design, develop and maintain scalable data pipelines that ensure high-quality, reliable data is available for business decision-making. You will work closely with data architects, product teams, analysts and data scientists to deliver robust data solutions that power analytics, reporting and advanced data products across our retail organisation.


This role requires strong hands-on experience with modern cloud data platforms and tools, including Snowflake, DBT and AWS/Azure, alongside expertise in data modelling, ETL/ELT processes and pipeline orchestration. You will also act as a technical mentor within a collaborative and innovative data engineering team.


Key Responsibilities

  • Design, develop, optimise and maintain scalable ETL/ELT data pipelines using modern cloud technologies.
  • Monitor, troubleshoot and enhance production data pipelines to ensure performance, reliability and data integrity.
  • Write and optimise complex SQL queries to support high-performance analytics workloads.
  • Implement flexible Data Vault models in Snowflake to support enterprise-scale analytics and business intelligence.
  • Build and maintain scalable cloud-based data solutions using Snowflake, DBT, AWS and/or Azure.
  • Support infrastructure and deployment automation using Terraform and CI/CD platforms.
  • Implement and enforce data quality controls and governance processes to ensure accurate and consistent data flows.
  • Support data governance using tools such as Alation and ensure adherence to regulatory and organisational standards.
  • Provide technical leadership, mentor junior engineers and promote engineering best practice.
  • Collaborate with data scientists to deploy analytical and AI models in production environments.
  • Engage with business and product stakeholders to translate requirements into scalable technical solutions.
  • Contribute to architecture discussions and recommend improvements based on emerging data engineering trends.


Essential Skills and Experience

  • Strong experience building and maintaining ETL/ELT pipelines using DBT, Snowflake, Python, SQL, Terraform and Airflow.
  • Proven experience designing and implementing data solutions on cloud-based architectures.
  • Experience working with cloud data warehouses and analytics platforms such as Snowflake and AWS or Azure.
  • Proficiency using GitHub for version control, collaboration and release management.
  • Experience implementing data governance frameworks, including data quality management and compliance practices.
  • Advanced SQL skills including complex query writing, optimisation and analytics-focused database design.
  • Strong communication and stakeholder engagement skills, with the ability to present technical concepts clearly.
  • Excellent problem-solving skills and the ability to translate business requirements into technical solutions.

Languages: Python (primary), SQL, Bash

Cloud: Azure, AWS

Tools: Airflow, DBT

Data Platforms: Snowflake, Delta Lake, Redis, Azure Data Lake

Infrastructure and Operations: Terraform, GitHub Actions, Azure DevOps, Azure Monitor


Desirable Skills and Experience

  • Experience with enterprise data platforms such as Snowflake and Azure Data Lake.
  • Understanding of monitoring, model performance tracking and observability best practices.
  • Familiarity with orchestration tools such as Airflow or Azure Data Factory.
