Senior Staff Data Engineer

Warner Bros. Discovery
London

Welcome to Warner Bros. Discovery… the stuff dreams are made of.
Who We Are…
When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next…

From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.

Warner Bros. has been entertaining audiences for more than 90 years through the world’s most-loved characters and franchises. Warner Bros. employs people all over the world in a wide variety of disciplines. We're always on the lookout for energetic, creative people to join our team.

Your New Role...

We are seeking an exceptional Senior Staff Data Engineer to lead the design, development, and scaling of the data and platform systems that power our experimentation, adaptive optimisation, and automated decisioning ecosystem. This is a high-impact, hands-on technical leadership role that will shape how experimentation data is collected, processed, served, and operationalised across millions of users worldwide.

This role will act as a force multiplier for all Labs initiatives and, in particular, will support the development and productionization of the new Canvas Optimisation system.

As a senior technical leader, you will define the architecture and long-term strategy for experimentation data infrastructure, ensure reliable and cost-efficient data pipelines, and partner closely with Data Science, Engineering, Product, and Analytics teams to scale our platform from hundreds to thousands of concurrent experiments and bandits.

This role will be tasked with reducing Labs data processing costs by ~25% over the year through architectural optimisation, storage strategy improvements, compute efficiency, and intelligent data lifecycle management.

Your Role Accountabilities...

Scale Experimentation Data Platforms
Architect and lead the design of scalable, reliable data pipelines supporting large-scale A/B testing, multivariate testing, and adaptive experimentation.

Build and maintain systems that support real-time and batch experiment telemetry ingestion, feature logging, exposure tracking, and outcome measurement.

Design data models and storage strategies optimised for:
Experiment analysis latency

Cost efficiency

Long-term reproducibility

Governance and auditability

Enable production-grade pipelines for statistical methods such as CUPED, regression adjustment, and other variance-reduction workflows (in partnership with Data Science).
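
Purely for illustration, a minimal CUPED adjustment might look like the sketch below (Python, with hypothetical synthetic data; not WBD's pipeline). CUPED subtracts the component of the experiment metric explained by a pre-experiment covariate, shrinking variance without biasing the treatment effect.

    import numpy as np

    def cuped_adjust(y, x_pre):
        # theta = Cov(X, Y) / Var(X) is the variance-minimising coefficient.
        theta = np.cov(x_pre, y, ddof=1)[0, 1] / np.var(x_pre, ddof=1)
        return y - theta * (x_pre - x_pre.mean())

    # Hypothetical data: in-experiment metric correlated with its pre-period value.
    rng = np.random.default_rng(0)
    x_pre = rng.normal(10.0, 2.0, 10_000)      # pre-experiment engagement
    y = x_pre + rng.normal(0.5, 1.0, 10_000)   # in-experiment metric
    print(np.var(y), np.var(cuped_adjust(y, x_pre)))  # adjusted variance is lower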

Enable Adaptive & Bandit Systems at Scale
Build data infrastructure that supports multi-armed bandit decisioning systems, including:
Low-latency reward signal pipelines

Feature and context streaming

Policy logging and replay data stores

Partner with scientists to productionize bandit frameworks (e.g., Thompson Sampling, epsilon-greedy, UCB) via reliable data services and APIs.
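
For illustration only, a minimal Thompson Sampling loop for Bernoulli rewards is sketched below; the arm count, click-through rates, and names are hypothetical, not the production framework. Each arm keeps a Beta posterior over its reward rate; the loop samples from each posterior, plays the best draw, and updates with the observed reward.

    import numpy as np

    rng = np.random.default_rng(42)
    true_ctr = [0.04, 0.05, 0.06]   # hypothetical per-variant click-through rates
    alpha = np.ones(3)              # Beta posterior successes per arm
    beta = np.ones(3)               # Beta posterior failures per arm

    for _ in range(10_000):
        sampled = rng.beta(alpha, beta)        # one plausible CTR per arm
        arm = int(np.argmax(sampled))          # play the arm with the best draw
        reward = rng.random() < true_ctr[arm]  # simulated impression outcome
        alpha[arm] += reward
        beta[arm] += 1 - reward

    print(alpha / (alpha + beta))  # posterior means concentrate on the best arm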

Design systems enabling off-policy evaluation (OPE), replay simulation datasets, and long-term policy evaluation storage.
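
As a rough sketch of what such systems consume, an inverse-propensity-scoring (IPS) estimator over logged bandit data might look like the following; the field names are hypothetical and assume logged rewards plus action propensities under both the logging policy and the candidate policy:

    import numpy as np

    def ips_value(rewards, logged_propensities, target_propensities):
        # Reweight each logged reward by how much more (or less) likely the
        # candidate policy was to take the logged action than the logging policy.
        weights = target_propensities / logged_propensities
        return float(np.mean(weights * rewards))

    # Hypothetical logged events.
    rewards = np.array([1.0, 0.0, 0.0, 1.0])
    logged_p = np.array([0.50, 0.50, 0.25, 0.25])
    target_p = np.array([0.90, 0.10, 0.20, 0.80])
    print(ips_value(rewards, logged_p, target_p))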

Canvas Optimisation & Personalisation Infrastructure
Lead data system architecture supporting the Canvas Optimisation platform, including:

Artwork / creative performance telemetry

Impression → engagement attribution pipelines

Near real-time reward computation

Global rollout observability and monitoring

Ensure high availability, correctness, and explainability of decisioning-support data feeds.

Cost Optimisation & Efficiency Leadership
Drive initiatives to reduce overall Labs data processing costs by ~25%, including:
Query and job optimisation across compute platforms (Databricks, Spark, etc.)

Storage tiering and retention policy optimisation

Data compaction, partitioning, and indexing strategies (see the sketch after this list)

Eliminating redundant or low-value pipeline stages
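
For a flavour of what such strategies look like in practice, here is a minimal sketch assuming a Databricks / Delta Lake environment; the table paths and column names are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("compaction-sketch").getOrCreate()

    # Partition exposure events by date so queries prune partitions they
    # do not need (hypothetical paths and columns).
    (spark.read.format("delta").load("/lake/raw/exposures")
          .write.format("delta")
          .partitionBy("event_date")
          .mode("overwrite")
          .save("/lake/curated/exposures"))

    # Compact small files and co-locate rows on the most common filter key.
    spark.sql("OPTIMIZE delta.`/lake/curated/exposures` ZORDER BY (experiment_id)")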

Establish cost observability dashboards and cost-to-value monitoring frameworks.

Platform, Reliability & Developer Experience
Build reusable data services, libraries, and APIs that:
Simplify experiment onboarding

Standardize telemetry schemas

Enable self-service data access

Define and enforce SLAs for critical experimentation and decisioning datasets.

Lead data quality frameworks including anomaly detection, reconciliation, and automated validation.

Technical Leadership & Mentorship
Serve as a technical thought leader on data platform architecture for experimentation and optimisation.

Mentor data engineers and platform engineers on distributed systems design, pipeline reliability, and performance optimisation.

Influence cross-org roadmaps spanning experimentation science, platform engineering, and product personalization systems.

Qualifications and Experience...

BS/MS in Computer Science, Engineering, or related field (or equivalent industry experience).

10+ years building and operating large-scale distributed data systems.

Deep experience with:
Streaming and batch data architectures

Experiment telemetry systems

Data modelling for analytics and decisioning

Hands-on experience with modern data stack technologies (e.g., Spark, Airflow, Redshift, Databricks, Snowflake, Delta Lake, etc.).

Strong programming mindset with a focus on building and enforcing standards that ensure code is maintainable, readable, and extensible over time.

Strong communication skills and ability to partner across DS, Engineering, Product, and Analytics.

Preferred Qualifications
Experience supporting experimentation or personalisation platforms at scale.

Familiarity with adaptive experimentation and bandit system data requirements.

Experience with cost optimisation of large-scale cloud data platforms.

Experience operating global, multi-region data systems.

Hybrid Working - This role is advertised as a Hybrid work model that combines remote and in-office work, following our current company policy and as agreed with your Line Manager. Subject to any applicable laws, WBD / your Line Manager reserves the right to change this working agreement where this is essential to business needs and upon reasonable notice to you.

How We Get Things Done…

This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day-to-day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law.

If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
