Senior BigQuery Data Engineer - Contract

Augustinus Bader
City of London
4 days ago

Role title

Senior BigQuery Data Engineer / Contract


Contract type

Day rate contractor

Initial term: four to five months


Context

The business is consolidating a growing number of data sources into BigQuery as a core enterprise data platform. Initial focus has been on DTC and ecommerce, with planned expansion across finance, operations, logistics, marketing and others.

Current data sources include ecommerce platforms, subscription systems, customer service tools, personalisation platforms, and marketplace integrations. Data is actively consumed via SQL and AI-assisted analysis to power internal reporting applications built in Laravel.

The role is required to stabilise, structure, and future-proof the BigQuery environment so it can support scale, governance, and enterprise-wide adoption. Infrastructure as Code (IaC), such as Terraform, is the preferred way of managing the environment.
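
As a rough illustration of the dataset-level structure the role would codify (in practice this would more likely be expressed as Terraform resources rather than ad hoc scripts), the sketch below uses the google-cloud-bigquery Python client. The project, dataset, and group names are hypothetical.

```python
from google.cloud import bigquery

# Hypothetical project, dataset, and group names used purely for illustration.
client = bigquery.Client(project="example-project")

dataset = bigquery.Dataset("example-project.golden_ecommerce")
dataset.location = "EU"
dataset.labels = {"domain": "ecommerce", "tier": "golden"}
dataset.description = "Curated, business-validated ecommerce tables."

# Create the dataset idempotently so repeated runs are safe.
dataset = client.create_dataset(dataset, exists_ok=True)

# Grant analysts read-only access; write access stays with the owning team.
entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry("READER", "groupByEmail", "analysts@example.com"))
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```

The same structure would normally be captured in Terraform (for example as google_bigquery_dataset resources) so that labelling and access changes go through code review rather than console edits.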

Primary objectives

BigQuery architecture and data model ownership

  • Review the current BigQuery structure, ingestion patterns, and table design.
  • Design and implement a scalable, well-governed data architecture suitable for a global enterprise business.
  • Define and implement golden datasets with clear ownership, access rules, and change control.
  • Introduce appropriate schema and field-level controls to prevent uncontrolled changes and data drift (see the sketch after this list).
  • Ensure the data model supports downstream analytics, AI-driven querying, and application-level reporting.
  • Produce clear documentation explaining the architecture, data model, and usage patterns for both technical and non-technical stakeholders.

Delivery oversight and operating model support

  • Work alongside the existing data engineering resource to review current data pipelines, models, and delivery practices.
  • Assess the effectiveness of current ways of working, technical approaches, and delivery processes against current and future business needs.
  • Provide an evidence-based view on strengths, gaps, and areas for improvement across data engineering capability and the operating model.
  • Make pragmatic recommendations on role scope, process improvements, upskilling opportunities, and resourcing required to support the target state.
  • The required tech stack is Google BigQuery, plus Infrastructure as Code / dbt (to be confirmed).
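
As a minimal sketch of what the schema-level controls above could look like in practice (explicit, reviewed field definitions plus partitioning and clustering), again using the Python client; the project, dataset, and field names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# An explicit, version-controlled schema prevents silent field additions and type drift.
schema = [
    bigquery.SchemaField("order_id", "STRING", mode="REQUIRED",
                         description="Unique order identifier"),
    bigquery.SchemaField("customer_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("order_value", "NUMERIC", mode="NULLABLE",
                         description="Order value in GBP"),
    bigquery.SchemaField("order_ts", "TIMESTAMP", mode="REQUIRED"),
]

table = bigquery.Table("example-project.golden_ecommerce.orders", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="order_ts"
)
table.clustering_fields = ["customer_id"]

# Idempotent create; subsequent schema changes go through review rather than ad hoc edits.
client.create_table(table, exists_ok=True)
```

True column-level access control in BigQuery is typically layered on top of this via Data Catalog policy tags, which is outside the scope of this sketch.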

Key deliverables

  • Documented target state BigQuery architecture and data model.
  • Defined and implemented golden tables with clear ownership and governance.
  • Standards for data ingestion, transformation, and consumption.
  • A practical roadmap for scaling BigQuery usage across additional business functions.
  • Clear documentation that enables confident use of data across the organisation.

Required experience

  • Strong hands-on experience designing and operating BigQuery environments at scale.
  • Deep understanding of data modelling, analytics architecture, and data governance.
  • Experience working with complex, multi-source data environments, ideally including ecommerce and subscription data.
  • Experience with data pipeline orchestration tools such as Cloud Composer, Airflow, or equivalent (a minimal example follows this list).
  • Comfort working in fast-moving environments with imperfect starting points.
  • Ability to balance best practice with pragmatism and delivery speed.
  • Strong communication skills and ability to explain complex concepts clearly.
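
To ground the orchestration point above, a minimal Airflow 2.x sketch of a scheduled BigQuery transformation might look like the following; the DAG name, project, and SQL are hypothetical, and in practice the transformation logic would come from dbt or other version-controlled models.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical daily refresh of a curated (golden) table from a raw staging table.
with DAG(
    dag_id="refresh_golden_orders",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    refresh_orders = BigQueryInsertJobOperator(
        task_id="refresh_orders",
        configuration={
            "query": {
                "query": (
                    "CREATE OR REPLACE TABLE `example-project.golden_ecommerce.orders` AS "
                    "SELECT * FROM `example-project.raw_ecommerce.orders_staging`"
                ),
                "useLegacySql": False,
            }
        },
    )
```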

Nice to have

  • Experience supporting AI-driven analytics or natural language querying of data.
  • Experience working closely with application teams consuming data directly in products or dashboards.
  • Background in DTC, retail, or consumer brands.

Working style

  • Hands-on and delivery-focused.
  • Pragmatic and outcome-driven.
  • Two days per week in the office (Central London).
  • Comfortable operating with autonomy.
  • Able to challenge existing approaches constructively.
  • Focused on clarity, documentation, and long term sustainability.

Success looks like

  • BigQuery is trusted as a scalable, governed enterprise data platform.
  • Golden datasets (curated, business-validated tables that serve as the single source of truth) are clearly defined, locked down, and actively used.
  • The business is unblocked to expand data usage across finance, operations, and other functions.

  • There is clear visibility of the current operating model and what is required to support future growth.

