Lead Data Engineer

Legal 500
London
2 months ago
Applications closed


Technology - London (Fleet Street) - Hybrid - Full Time


About Legal 500

Legal 500 was founded by John Pritchard in 1987 as the original clients’ guide to law firms, the first of its kind. It is now a data-driven, AI-optimised research platform which benchmarks, informs and connects providers and users of legal services in over 100 countries worldwide.


Our research and data are trusted and relied upon by corporate clients globally as an essential part of the process of instructing law firms with new mandates and of reviewing existing mandates or panels.


We exist to empower buyers and sellers in the international legal marketplace to make better decisions and achieve better outcomes for their organisations. We do this by combining a trusted, comprehensive research process with a vast, proprietary and constantly updated set of client-supplied data that is unrivalled in the market.


On the supply side of the legal market, every year Legal 500’s team of over 150 researchers, technologists, data analysts, journalists and content specialists collates and reviews 60,000+ data submissions from law firms and conducts interviews with thousands of leading law firm partners. On the demand side, Legal 500 analyses confidential data from 300,000+ commercial law firm clients to benchmark law firms and lawyers by practice area, industry and jurisdiction, as well as by proprietary client satisfaction metrics, NPS®, and other qualitative and quantitative criteria.


Legal 500 is the only source of this depth of global research and data on law firms, lawyers and their clients.


The Role

As our Lead Data Engineer, you’ll be the senior technical voice in the data team and a critical partner to the Head of Data. You’ll design, build, and improve our Snowflake and dbt-driven platform, establish engineering best practices, and support the growth of our data capabilities as the organisation scales.


You’ll work in a Microsoft-first environment, using Azure cloud services to orchestrate, automate, and deliver robust, production-ready data pipelines.


This is a hands-on leadership role with real ownership and the opportunity to bring modern engineering discipline into a growing function.


What You’ll Be Doing

Platform Ownership & Architecture

  • Lead the architectural direction of our Snowflake-based data platform
  • Design scalable ELT pipelines and transformation layers using dbt
  • Build high-quality data models across staging, intermediate, and marts layers
  • Make architectural decisions around modelling approaches and data lifecycle design


Data Engineering & Delivery

  • Develop and optimise transformations in dbt and SQL
  • Use Azure services (e.g., Azure Data Factory, Azure Functions, Azure Storage) to orchestrate and deliver pipelines
  • Introduce CI/CD, testing, code quality, observability, and documentation best practices
  • Improve performance, cost efficiency, and reliability of the platform


Leadership & Continuous Improvement

  • Set engineering standards, patterns, and technical guidelines for the team
  • Mentor and guide engineers and analysts
  • Partner closely with product, software engineering, and research teams
  • Drive a culture of ownership, collaboration, and delivery excellence


What We’re Looking For

Technical Skills

  • Strong experience with Snowflake (performance tuning, warehouses, modelling, optimisation)
  • Deep experience with dbt (tests, macros, documentation, project structure)
  • Excellent SQL skills and strong data modelling foundations
  • Experience designing and building ELT pipelines
  • Experience using Azure cloud services for data workflows (e.g., Data Factory, Azure Functions, ADLS)
  • Solid understanding of version control, CI/CD, testing, and engineering best practices


Leadership & Ownership

  • Experience leading or steering data engineering initiatives
  • Ability to introduce structure, standards, and long-term thinking
  • Comfortable influencing cross-functional teams and mentoring others
  • Pragmatic, delivery-focused approach with strong communication skills


Interview Process

  • Screening Call with Talent Partner
  • 1st Stage with the Head of Data
  • Technical Test – Take Home
  • Final Interview
