Senior Data Engineer

WGSN
London
The role

We are looking to hire a Senior Data Engineer to join our Data team in London.


This role is based in our London office.


Working at WGSN
Together, we create tomorrow

A career with WGSN is fast-paced, exciting and full of opportunities to grow and develop. We're a team of consumer and design trend forecasters, content creators, designers, data analysts, advisory consultants and much more, united by a common goal: to create tomorrow.


WGSN's trusted consumer and design forecasts power outstanding product design, enabling our customers to create a better future. Our services cover consumer insights, beauty, consumer tech, fashion, interiors, lifestyle, food and drink forecasting, data analytics and expert advisory. If you are an expert in your field, we want to hear from you.


Role overview

WGSN is expanding its AI & Data capability and strengthening its data foundation. As a Senior Data Engineer, you will play a key role in building, optimising, and maintaining the data pipelines, models, and infrastructure that power our classification systems, AI workflows, forecasting models, TikTok insights, and consumer intelligence products.


You will work closely with senior data scientists, analysts, and engineers, particularly within the TikTok and Pulse pods, ensuring high-quality, well-modelled, reliable data flows across Snowflake, Databricks, and downstream systems. This role is ideal for someone with strong technical depth, advanced SQL and data modelling capabilities, and a passion for building scalable, efficient data systems. This is a hands-on, senior individual-contributor role requiring at least five years of experience in data engineering and the ability to mentor junior engineers when needed.


Key accountabilities
Data Architecture & Modelling

  • Design, develop, and maintain scalable data architectures across Snowflake, Databricks, and cloud environments.
  • Lead schema design, dimensional modelling, and query optimisation to support high-performance analytics and AI workloads.
  • Collaborate with senior data scientists to structure data for classification, forecasting, embedding generation, and multimodal workflows.

Advanced SQL & Performance Optimisation

  • Own complex SQL development and performance tuning across DS&E and DPS teams.
  • Optimise costly queries, improve warehouse efficiency, and ensure best-practice SQL standards across shared codebases.

Pipeline Development (Batch & Near-Real-Time)

  • Build robust ETL/ELT pipelines for ingestion, transformation, validation, and delivery.
  • Develop resilient ingestion workflows for external APIs, including rate limiting, retries, schema drift handling, and monitoring.
  • (Future-facing) Support design of streaming or near-real-time data flows as product needs evolve.

Snowflake & Databricks Expertise

  • Implement pipelines using Snowpark, PySpark, and distributed compute environments.
  • Apply Snowflake performance optimisation, cost governance, RBAC, and platform best practices.
  • Support compute scaling across cloud platforms (AWS, GCP) and distributed cluster environments.

Data Quality, Contracts & Observability

  • Implement data validation frameworks (e.g., Great Expectations) and enforce data contracts.
  • Build monitoring, alerting, and lineage visibility for pipelines (e.g., dbt tests, metadata tracking).
  • Ensure high standards of data accuracy, completeness, and reliability.

DataOps & CI/CD for Data

  • Build automated CI/CD workflows for data using GitHub Actions, CircleCI, or similar.
  • Develop automated unit tests, integration tests, and quality gates for data pipelines.
  • Partner with DataOps & Platform Engineering to improve observability, documentation, and deployment workflows.

Workflow Orchestration

  • Build and maintain orchestration workflows using Airflow, Prefect, Dagster, or equivalent.
  • Optimise DAGs for performance, reliability, and clarity, while ensuring operational excellence.

Cloud Infrastructure, Containers & Runtime Management

  • Run, log, monitor, and debug workloads across VMs, Docker containers, and cloud compute environments.
  • Improve reliability and maintainability of containerised workloads powering AI and data pipelines.

Cross-Functional Collaboration

  • Translate analytical and AI requirements into scalable engineering solutions.
  • Document pipelines, decisions, runbooks, and architecture clearly and consistently.

Mentorship & Capability Building

  • Provide guidance to junior engineers and contribute to building team-wide engineering maturity.

This list is not exhaustive and there may be other activities you are required to deliver.


Skills, experience & qualifications required
Experience

  • 5+ years of hands-on experience as a Data Engineer.
  • Proven success designing and scaling production-grade data pipelines in cloud environments.
  • Experience mentoring junior engineers or contributing to capability uplift across teams.

Technical Skills

  • Expert-level SQL: complex queries, optimisation, performance tuning, analytical SQL.
  • Advanced data modelling (star schemas, normalisation, dimensional modelling).
  • Strong Python skills, including Pandas, NumPy, and PySpark/Snowpark.
  • Experience with Snowflake (performance optimisation, cost management, RBAC, governance).
  • Experience with Databricks, distributed compute, and PySpark.
  • Data pipeline orchestration (Airflow, Dagster, Prefect).
  • Data validation frameworks (e.g., Great Expectations).
  • Strong familiarity with cloud platforms (AWS or GCP).
  • Experience building resilient API ingestion pipelines.
  • Understanding of Docker, Linux servers, and cloud VMs.
  • DataOps & DevOps: CI/CD workflows for data pipelines (GitHub Actions, CircleCI).
  • Logging, monitoring, and observability for data workflows.

Soft Skills

  • Excellent communication across technical and non-technical teams.
  • Ability to work within and contribute to cross-functional pods (DS + DE + Product + Content).
  • Strong problem-solving skills and ownership mindset.

What we offer

Our benefits and wellbeing package offers flexible options you can tailor to your personal needs, including:



  • 25 days of holiday per year – with an option to buy/sell up to 5 days
  • Pension, Life Assurance and Income Protection – Flexible benefits platform with options including Private Medical, Dental Insurance & Critical Illness
  • Employee assistance programme, season ticket loans and cycle to work scheme
  • Volunteering opportunities and charitable giving options
  • Great learning and development opportunities.

More about WGSN

WGSN is the global authority on consumer trend forecasting.


We help brands around the world create the right products at the right time for tomorrow’s consumer.


Our values
We Are Everywhere

The future is everything, and it happens everywhere. WGSN is the world-leading forecaster because we track and analyse consumer behaviours, product innovation, design and creativity, everywhere.


We Are Future Focused

We utilise our global resources and intelligence to research, source and analyse quantitative and qualitative data to produce our forecasts. Everything we do is focused on working with our customers to create a successful and positive tomorrow.


We Are Rigorous

We source, review and assess quantitative and qualitative data to produce robust, actionable forecasts. To provide credible insights and design solutions for our clients, it is essential that rigour runs through everything we do.


Our culture

An inclusive culture is one of our key priorities. We want our people to truly be themselves and thrive. We love having a diverse team of people who bring new ideas, different strengths and perspectives, and who reflect the global audience we work with.


Sustainability

We are committed to supporting the environment and sustainability, including ensuring our pension plan defaults to sustainable options and striving to be net zero by 2030.


Recognising great performance

Our awards schemes recognise and reward the brilliant achievements of our people.


We offer a flexible working environment with a wide range of flexible, hybrid and agile working arrangements. Conversations about flexible working have always been—and will continue to be—actively encouraged here, but we do not offer full remote working.


We want to ensure everyone has the opportunity to perform their best when interviewing, so if you require any reasonable adjustments that would make you more comfortable during the process, please let us know so that we can do our best to support you.


A Note for Applicants

We use AI to help our team screen applications and identify candidates whose skills and experience match the role. This technology removes personal information to promote a fair and unbiased process. We believe this tool helps us find the best talent while maintaining transparency and fairness.


A Note for Recruiters

Thank you so much for your interest in working with us at WGSN! Our internal Talent Acquisition team takes care of all our recruitment efforts. When we need some extra help, we partner with agencies on our Preferred Supplier List (PSL) that truly understand our business, culture and ways of working together. Since we focus on these established partnerships, we’re unable to respond to unsolicited contacts or CVs from outside our PSL. But don’t worry! If we decide to explore new partnerships, we’ll be sure to reach out.

