Senior Data Engineer - Darwin

Direct Line Group
London

DLG is evolving. Across every facet of our business, our teams are embracing new opportunities and putting customers at the heart of everything they do. By joining them, you'll not just be recognised for your skills but encouraged to build on them and empowered to do your absolute best.


At Darwin Insurance, a brand within DLG, you’ll help build the next generation of our low‑latency, cloud‑native data platforms — the systems that power real‑time pricing, machine learning workflows, and operational intelligence. You’ll own the design of resilient ETL/ELT pipelines, distributed data processing, and scalable AWS infrastructure, collaborating closely with a tight‑knit team of engineers, architects, and product leaders who care deeply about doing things the right way.


We engineer like a startup: Clean Code, TDD, CI/CD, Infrastructure‑as‑Code, full ownership, and a bias for shipping well‑designed, well‑tested systems. You’ll have a genuine voice in architectural direction, mentor strong engineers, and help raise the technical bar across the team.


This role is ideal for someone who wants meaningful impact — someone who’s excited by the blend of startup autonomy and big‑company reach, and who wants to shape the future of Darwin’s data platform as we scale our ambitions.


What You’ll Be Doing

  • Collaborate with cross‑functional teams to understand business requirements and communicate complex data concepts clearly to non‑technical stakeholders, supporting informed decision‑making across the organisation.
  • Design robust data transformation solutions using ETL and ELT methodologies and maintain comprehensive documentation for all data pipelines and integration procedures to ensure clarity and effective management.
  • Proactively monitor data systems for performance optimisation and implement data quality checks to maintain high standards of accuracy and reliability in data processing.
  • Develop and enforce security protocols to protect data assets and ensure all data handling complies with legal and regulatory standards, supporting robust data governance.
  • Stay abreast of advancements in data engineering technologies and practices, incorporating innovative tools and methods to enhance the organisation's data capabilities.
  • Mentor junior data engineers, fostering an environment of growth and continuous learning, while leading by example in technical expertise and basic project management.
  • Work closely with data architects and other engineers to refine data storage and architecture, ensuring the infrastructure supports scalable solutions and future technological developments.

What You’ll Need

  • Significant experience, gained in a professional services setting, designing and implementing complex data transformations using both ETL and ELT methodologies to support scalable data architecture in high‑demand environments.
  • Excellent understanding of distributed data processing, ideally working with Apache Spark.
  • Extensive experience (5+ years) with advanced SQL techniques and database management, including proficiency with database systems like PostgreSQL, MySQL, and Amazon Redshift, optimising database performance and integrity in large‑scale environments.
  • In‑depth knowledge of AWS cloud services including Glue, S3, KMS, Lambda, and Kinesis, demonstrating capability in deploying and managing scalable, secure cloud infrastructure.
  • Advanced proficiency in Python programming, with demonstrated experience in developing, automating, and enhancing data processing tasks and analytical applications.
  • Strong command of Unix/Linux operating environments, with the ability to manage and optimise system performance for data applications.
  • Solid understanding and practical experience in Agile development processes, skilled in translating strategic goals into executable stories, with a focus on incremental and iterative delivery.
  • Strong advocate for Software Craftsmanship and Clean Code, with a commitment to maintaining high standards in code quality and system reliability.
  • Demonstrated experience and passion for Test‑Driven Development (TDD), ensuring robust and reliable software solutions through comprehensive testing practices.
  • Excellent communication skills, capable of effectively articulating technical concepts to both technical and non‑technical stakeholders, facilitating clear understanding and cooperative problem‑solving.
  • Clear understanding of DevOps practices and Infrastructure as Code, with hands‑on experience in automating and optimising infrastructure deployment and management.
  • Over 5 years of experience managing clustered systems, ensuring high availability and performance scalability in distributed computing environments.
  • Expert knowledge of various database technologies, including SQL and NoSQL, and their appropriate use cases and trade‑offs, guiding optimal database selection and use.
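To give a concrete flavour of the TDD, Clean Code, and data‑quality practices named above, here is a minimal, illustrative Python sketch of a test‑first data‑quality transformation. The record shape, field names, and function are hypothetical examples for illustration only, not taken from Darwin's codebase:

```python
from decimal import Decimal, InvalidOperation


def clean_premiums(rows):
    """Parse raw premium strings (e.g. "£412.50") into Decimal,
    dropping rows whose premium cannot be parsed.

    `rows` is an iterable of dicts with a 'premium' key — a
    hypothetical record shape used purely for illustration.
    """
    cleaned = []
    for row in rows:
        raw = str(row.get("premium", "")).replace("£", "").replace(",", "").strip()
        try:
            premium = Decimal(raw)
        except InvalidOperation:
            continue  # data-quality rule: skip unparseable premiums
        cleaned.append({**row, "premium": premium})
    return cleaned


# A TDD-style test, written against behaviour rather than implementation,
# of the kind a test-first workflow would produce before the code above.
def test_clean_premiums_drops_bad_rows():
    raw = [
        {"id": 1, "premium": "£412.50"},
        {"id": 2, "premium": "n/a"},
        {"id": 3, "premium": "1,020.00"},
    ]
    out = clean_premiums(raw)
    assert [r["id"] for r in out] == [1, 3]
    assert out[0]["premium"] == Decimal("412.50")


test_clean_premiums_drops_bad_rows()
```

In a real pipeline this kind of validation step would typically run inside a distributed job (e.g. a Spark transformation) with the tests executed in CI, but the behaviour‑first shape of the test is the same.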

Benefits

  • 9% employer‑contributed pension
  • 50% off home, motor and pet insurance plus Green Flag breakdown cover
  • Additional optional Health and Dental insurance
  • Up to 10% AIP Bonus
  • EV car scheme allowing all colleagues to lease a brand‑new electric or plug‑in hybrid car in a tax‑efficient way.
  • Generous holidays
  • Buy‑as‑you‑earn share scheme
  • Employee discounts and cashback
  • Plus many more.

We want everyone to get the most out of their time at DLG. Which is why we’ve looked beyond the financial rewards and created an offer that takes your whole life into account. Supporting our people to work at their best – whatever that looks like – and offering real choice, flexibility, and a greater work‑life balance that means our people have time to focus on the things that matter most to them. Our benefits are about more than just the money you earn. They’re about recognising who you are and the life you live.


Be yourself

Direct Line Group is an equal opportunity employer, and we think diversity of background and thinking is a big strength in our people. We're delighted to feature as one of the UK's Top 50 Inclusive Employers and are committed to making our business an inclusive place to work, where everyone can be themselves and succeed in their careers. We know you're more than a CV, and the things that make you, you, are what bring potential to our business. We recognise and embrace people that work in different ways so if you need any adjustments to our recruitment process, please speak to the recruitment team who will be happy to support you.


Location: London, England, United Kingdom.


