Data Architect

Stott and May
Bristol
1 week ago

Location: Bristol (Hybrid – 2 days per week in the office)

Contract Duration: 6 months

The Role

We are seeking an experienced GCP Data Architect to play a critical role in architecting and modernising cloud-native data platforms on Google Cloud Platform within a major financial services environment. The successful candidate will design scalable, secure, resilient and cost-optimised data solutions supporting high-volume, highly regulated workloads.

This role offers exposure to large-scale transformation programmes, modern engineering practices, mature data governance frameworks and advanced GCP services, with the opportunity to influence enterprise-wide data strategy.

Key Responsibilities
  • Architect end-to-end data solutions using GCP services including BigQuery, Dataflow, Pub/Sub, Dataproc, GCS and Composer.
  • Design conceptual, logical and physical data models across complex risk, operations, analytics and regulatory domains.
  • Build scalable ingestion and transformation frameworks with strong emphasis on data quality, lineage, metadata and auditability.
  • Identify appropriate cloud architecture patterns for workload deployment and ensure governance standards are upheld.
  • Define and enforce security best practices, IAM policies and data protection standards.
  • Develop and deploy data pipelines using multiple GCP services.
  • Manage data lineage and data quality aspects of data products.
  • Define and implement SRE principles including SLIs, SLOs and SLAs for workloads.
  • Lead cost optimisation initiatives and support FinOps practices for live workloads.
  • Review and create FinOps dashboards to monitor cloud spend and efficiency.
  • Provide architectural governance, reusable frameworks and technical oversight to engineering teams.
  • Support Agile feature teams across the lab environment to optimise performance and reliability.
  • Drive cloud modernisation and legacy-to-cloud migration initiatives.
  • Conduct proof of concepts and evaluate emerging tools to enhance cloud data capabilities.
  • Ensure compliance with regulatory, audit and enterprise data governance requirements.
  • Create custom monitoring and alerting solutions using Dynatrace.
  • Integrate Looker and enable reporting solutions for end-users within defined guardrails.
  • Implement TDD (test-driven development) for unit testing and BDD (behaviour-driven development) for functional testing.
Required Skills and Experience
  • Strong experience architecting high-volume ingestion, transformation and analytics pipelines on GCP.
  • Deep knowledge of data governance, lineage, metadata management and regulatory controls.
  • Proven experience identifying appropriate architecture patterns and enforcing governance standards.
  • Experience managing data lineage and quality across enterprise data platforms.
  • Strong understanding of SRE principles and workload reliability engineering.
  • Experience supporting FinOps optimisation and cloud cost management initiatives.
  • Hands‑on experience developing and deploying data pipelines using GCP services.
  • Experience implementing TDD and BDD testing approaches.
  • Strong stakeholder engagement and ability to influence technical direction within cross‑functional Agile teams.
  • Experience delivering cloud migration or platform modernisation programmes.
  • Excellent documentation and communication skills.
Desirable
  • GCP Professional Data Engineer or Cloud Architect certification.
  • Experience within BFSI domains such as AML, Fraud, Risk, Finance or Regulatory Reporting.
  • Exposure to real‑time streaming, MLOps and advanced analytics architectures.
  • Experience with observability tooling and cloud cost management frameworks.



