Data Architect - Bristol - Hybrid Opportunity

Your new company

They are a specialist insurance and risk solutions provider, supporting clients with tailored coverage and expert advice across a range of sectors. The business is known for its client‑focused approach, strong market relationships and commitment to delivering practical, dependable solutions.

With a collaborative culture and a focus on professional development, they offer a supportive environment where people are trusted, valued and encouraged to grow their careers within a forward‑thinking organisation.

Your new role

As a Data Architect, you'll play a key role in shaping how data is designed, managed and used across the business. You'll set the architectural direction for our data estate - from the point data first lands on the platform, through the Bronze, Silver and Gold layers of our Medallion Architecture, and all the way to analytics, AI and self-service reporting.

Working within the Microsoft Azure and Databricks ecosystem, you'll help build a data platform that's scalable, flexible and built to last. Your work will directly support high‑impact use cases, including advanced analytics, pricing models, AI/ML solutions and regulatory reporting - ensuring teams across the business can trust and use data with confidence.
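
To make the Medallion flow concrete, here is a minimal PySpark/Delta sketch of data moving through Bronze, Silver and Gold. Every table, schema and column name in it (bronze.raw_policies, silver.policies_clean, gold.policy_kpis and so on) is an illustrative assumption, not a detail of this role's platform.

```python
# Minimal sketch of a Medallion (Bronze/Silver/Gold) flow on Databricks.
# All table, schema and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw data as-is, preserving source fidelity for audit/replay.
raw = spark.read.json("/landing/policies/")
raw.write.format("delta").mode("append").saveAsTable("bronze.raw_policies")

# Silver: deduplicate, conform types and enforce basic quality rules.
silver = (
    spark.table("bronze.raw_policies")
    .dropDuplicates(["policy_id"])
    .withColumn("premium", F.col("premium").cast("decimal(12,2)"))
    .filter(F.col("policy_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.policies_clean")

# Gold: business-ready aggregates for analytics, pricing and ML consumption.
gold = (
    spark.table("silver.policies_clean")
    .groupBy("product_line")
    .agg(F.sum("premium").alias("total_premium"),
         F.count("policy_id").alias("policy_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.policy_kpis")
```

The point of the layering is that each layer carries a distinct contract: Bronze preserves the source, Silver cleanses and conforms, and Gold serves consumers directly.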

Data Architecture & Modelling

Define and own the architectural principles, standards and policies governing SBG's data estate from the landing zone through to the Gold layer.

Design and govern the Medallion Architecture (Bronze / Silver / Gold), ensuring every layer is built for analytics, AI/ML and self-service consumption.

Own data modelling standards - conceptual, logical and physical - and ensure models are fit for both regulatory reporting and AI-driven insight.

Define Unity Catalog structure, metadata standards and data lineage governance across the estate.

Data Ingestion & Processing

Define ingestion standards and data contracts for data arriving from the landing zone into the Bronze layer, working in partnership with the Development and Application Management team.

Design and optimise ETL/ELT pipeline frameworks using Databricks, Delta Lake and Azure Data Factory.

Ensure Silver and Gold layer data products are fit for purpose for analytics, pricing, AI and ML model consumption.

Optimise data pipelines for efficiency, cost-effectiveness and high performance, leveraging Databricks for big data processing and machine learning - one common pattern is sketched after this list.
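
To ground this, here is a hedged sketch of one incremental ingestion-and-optimisation pattern: upserting changed Bronze records into Silver with a Delta Lake MERGE, then compacting the table for read performance. The table and column names (bronze.raw_claims, silver.claims, claim_id, ingest_date, claim_date) are illustrative assumptions, not details from the role.

```python
# Hypothetical incremental pattern: upsert today's Bronze changes into the
# Silver table with a Delta MERGE, then compact files for read performance.
# All table and column names are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Pick up only the records that arrived in today's ingestion window.
updates = spark.table("bronze.raw_claims").filter("ingest_date = current_date()")

# Upsert by business key so Silver stays deduplicated and current.
(DeltaTable.forName(spark, "silver.claims")
    .alias("t")
    .merge(updates.alias("s"), "t.claim_id = s.claim_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Compact small files and co-locate rows on a frequently filtered column.
spark.sql("OPTIMIZE silver.claims ZORDER BY (claim_date)")
```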

Governance & Standards

Act as the architectural authority for the data estate - reviewing designs, enforcing standards and preventing platform fragmentation as SBG scales.

Ensure all data architecture decisions align with regulatory requirements - FCA, GDPR, Solvency II, IFRS 17 and BCBS 239.

Define and maintain data architecture policies and guidelines ensuring long-term scalability and sustainability - a governance-as-code sketch follows this list.
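
One way such policies stay enforceable as the platform scales is to express them as code. The sketch below shows illustrative Unity Catalog statements for tagging personal data and scoping access; the catalog, schema, table, column and group names are assumptions, and the mapping to regulations such as GDPR is deliberately simplified.

```python
# Illustrative governance-as-code in Unity Catalog: tag a PII column so it
# surfaces in lineage/discovery tooling, and grant read access narrowly.
# Catalog, schema, table, column and group names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Flag a personal-data column for classification and lineage tooling.
spark.sql("""
    ALTER TABLE insurance.silver.policyholders
    ALTER COLUMN national_insurance_no SET TAGS ('classification' = 'pii')
""")

# Grant read access to one analytics group rather than blanket permissions.
spark.sql(
    "GRANT SELECT ON TABLE insurance.gold.policy_kpis TO `analytics-readers`"
)
```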

Analytics & AI Enablement

Design the Gold layer to ensure data products are structured, documented and accessible for self-service analytics and AI/ML model consumption.

Collaborate with MLOps and Data Science teams to define data product standards and feature engineering patterns - one such pattern is sketched after this list.

Evaluate and lead adoption of emerging Azure and Databricks capabilities - including Microsoft Fabric, OneLake and Direct Lake - where they advance the data architecture.

Drive innovation by evaluating and implementing emerging cloud-based data technologies to enhance SBG's competitive advantage.
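
As one example of the feature engineering patterns mentioned in this list, here is a possible way to publish a keyed, documented feature table to the Gold layer using the classic databricks.feature_store client (newer workspaces may prefer the FeatureEngineeringClient). All names are illustrative assumptions.

```python
# One possible Gold-layer feature table for pricing/fraud models, published
# via the classic Databricks Feature Store client. Names are illustrative.
from databricks.feature_store import FeatureStoreClient
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Derive per-policyholder claim features from the Silver layer.
features = (
    spark.table("silver.claims")
    .groupBy("policyholder_id")
    .agg(F.count("claim_id").alias("claim_count"),
         F.avg("claim_amount").alias("avg_claim_amount"))
)

# Register a keyed, documented feature table for discoverable ML consumption.
fs = FeatureStoreClient()
fs.create_table(
    name="gold.policyholder_features",
    primary_keys=["policyholder_id"],
    df=features,
    description="Per-policyholder claim features for pricing and fraud models.",
)
```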

What you'll need

Strong stakeholder management across business, IT and compliance teams.

Excellent communication, collaboration and influencing skills at all levels of an organisation.

Experience leading data architecture and engineering teams in an enterprise environment.

Ability to define and implement a data strategy aligned with business objectives.

Proven track record of delivering enterprise-scale data solutions with a focus on performance, security and scalability.

Experience in regulated financial services, ensuring compliance with industry standards.

Deep expertise in data modelling - conceptual, logical and physical.

Data warehousing and data lake architecture for high-performance analytics.

ETL/ELT pipeline development and optimisation to support large-scale data processing.

Data integration across structured and unstructured sources, ensuring high availability.

Metadata management and governance to maintain data quality and lineage.

Experience defining data contracts and ingestion standards between source delivery teams and the data estate.

Deep expertise in Microsoft Azure cloud services - ADF, ADLS, Synapse, Purview.

Databricks - Delta Lake architecture, optimisation and advanced data processing.

Apache Spark for large-scale distributed computing and performance tuning.

Microsoft Fabric - OneLake and Direct Lake integration.

Azure Synapse Analytics for enterprise-scale data warehousing.

Infrastructure-as-Code (Terraform or Azure Bicep) to automate cloud deployments.

CI/CD pipelines with Azure DevOps or GitHub Actions for automated deployment of data pipelines.

MLOps best practices - MLflow, Databricks Model Serving, Feature Store.

Knowledge of IFRS 17, BCBS 239, UK Data Protection Act and Solvency II compliance.

Experience with pricing models, claims processing and fraud detection in the insurance sector.

Strong problem-solving skills and ability to translate business needs into technical solutions.

Ability to document and present complex data architectures to technical and non-technical stakeholders.

What you'll get in return

Hybrid working - 2 days in the office and 3 days working from home

25 days annual leave, rising to 27 days over 2 years' service and 30 days after 5 years' service. Plus bank holidays!

Discretionary annual bonus

Pension scheme - 5% employee and 6% employer contributions - plus many more benefits

What you need to do now

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV.

If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and as an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)

