Data Engineer – GCP/DSS

Hammersmith Broadway
1 month ago
Applications closed


Job Title: Data Engineer – GCP/DSS

Department: Enabling Functions

Location: Hybrid, London

Type: Contract (Inside IR35) or Permanent

Salary: Competitive; dependent on experience and open to discussion

Purpose of Job

What you will be working on

While our broker platform is the core technology behind our success, this role will focus on supporting the middle- and back-office operations that lay the foundations for further, sustained success.

We're a multi-disciplinary team, bringing together expertise in software and data engineering, full-stack development, platform operations, algorithm research, and data science. Our squads focus on delivering high-impact solutions; we favour a highly iterative, analytical approach.

You will design and develop complex data processing modules and reporting using BigQuery and Tableau. You will also work closely with the Infrastructure/Platform Team, which is responsible for architecting and operating the core of the Data Analytics platform.

Principal Accountabilities

Work with business teams (initially finance and actuarial), data scientists and engineers to design, build, optimise and maintain production-grade data pipelines and reporting from an internal data warehouse solution based on GCP/BigQuery.

Work with finance, actuaries, data scientists and engineers to understand how we can make best use of new internal and external data sources.

Work with our delivery partners at EY/IBM to ensure robustness of design and engineering of the data model/MI and reporting which can support our ambitions for growth and scale.

BAU ownership of data models, reporting and integrations/pipelines.

Create frameworks, infrastructure and systems to manage and govern data assets.

Produce detailed documentation to allow ongoing BAU support and maintenance of data structures, schemas, reporting, etc.

Work with the broader Engineering community to develop our data and MLOps capability infrastructure.

Ensure data quality, governance, and compliance with internal and external standards.

Monitor and troubleshoot data pipeline issues, ensuring reliability and accuracy.
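To give candidates a concrete sense of the data quality and pipeline monitoring work described above, here is a minimal sketch in Python. It is purely illustrative, not part of the actual platform: the function name, threshold and field names are assumptions, and in production a check like this would more likely run as a dbt test or scheduled job against BigQuery.

```python
# Illustrative data-quality gate for a pipeline run. All names here
# (check_null_rate, MAX_NULL_RATE, policy_id) are hypothetical examples,
# not references to any real system.

MAX_NULL_RATE = 0.05  # fail the run if more than 5% of a key column is null


def check_null_rate(rows, column):
    """Return (null_rate, passed) for a key column across extracted rows."""
    if not rows:
        # An empty extract is itself a reliability signal, so fail it.
        return 0.0, False
    nulls = sum(1 for row in rows if row.get(column) is None)
    rate = nulls / len(rows)
    return rate, rate <= MAX_NULL_RATE


# Example batch: one missing policy_id out of four rows.
batch = [
    {"policy_id": "P-001", "premium": 120.0},
    {"policy_id": "P-002", "premium": 310.5},
    {"policy_id": None, "premium": 87.2},
    {"policy_id": "P-004", "premium": 54.9},
]
rate, passed = check_null_rate(batch, "policy_id")
print(rate, passed)  # 0.25 False -- a 25% null rate breaches the 5% threshold
```

The design choice to fail closed on an empty extract reflects the accountability above: a pipeline that silently delivers nothing is as much a reliability incident as one that delivers bad data.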

Regulatory Conduct and Rules

  1. Act with integrity

  2. Act with due skill, care and diligence

  3. Be open and co-operative with Lloyd’s, the FCA, the PRA, and other regulators

  4. Pay due regard to the interests of customers and treat them fairly

  5. Observe proper standards of market conduct

Education, Qualifications, Knowledge, Skills and Experience

  • Experience designing data models and developing industrialised data pipelines.

  • Strong knowledge of database and data lake systems.

  • Hands-on experience with BigQuery, dbt and GCP Cloud Storage.

  • Proficient in Python, SQL and Terraform.

  • Knowledge of Cloud SQL, Airbyte, Dagster.

  • Comfortable with shell scripting in Bash or similar.

  • Experience provisioning new infrastructure in a leading cloud provider, preferably GCP.

  • Proficient with Tableau Cloud for data visualisation and reporting.

  • Experience creating DataOps pipelines.

  • Comfortable working in an Agile environment, actively participating in approaches such as Scrum or Kanban.

Desirable Skills

  • Experience of streaming data systems and frameworks would be a plus.

  • Experience working in a regulated industry, especially financial services, would be a plus.

  • Experience creating MLOps pipelines is a plus.

The applicant must also demonstrate the following skills and abilities:

  • Excellent communication skills (both oral and written).

  • Pro-active, self-motivated and able to use own initiative.

  • Excellent analytical and technical skills.

  • Ability to quickly comprehend the functions and capabilities of new technologies.

  • Ability to offer a balanced opinion regarding existing and future technologies.

How to Apply

If you are interested in the Data Engineer – GCP/DSS position, please apply here.
