Technical Lead / Data Architect

YunoJuno
City of London

Start date: ASAP

Duration: 6 months

Location: London, flexible working


*Please note: you must be registered as a freelance contractor (Ltd Co or Sole Trader), or work via an umbrella company, for this assignment.*


ABOUT THE ROLE


The Technical Lead / Data Architect owns the end-to-end technical architecture, engineering organisation, platform maturity evolution, and multi-layered replatforming agenda for the VWG DTO ecosystem. This senior role reports directly to the DTO Overall Lead and operates as a peer to the Head of Governance & Strategy and the Enablement Lead. The role is accountable for delivering a modern, scalable, AI-ready, governance-aligned data ecosystem spanning ingestion, transformation, modelling, storage, sharing, observability, and reporting.


The remit includes working with engineering teams (Ingestion, ETL/Transformation, BI/Viz, QA, DevOps, Automation), embedding governance and compliance expectations into the architecture, and driving both platform transitions: Starburst→Redshift and Airbyte→Adverity.


ROLE SUMMARY


The Technical Lead / Data Architect is accountable for:

• End-to-end technical architecture across the full stack.

• Engineering leadership across ingestion, transformation, BI, QA, DevOps.

• Replatforming One Reporting’s analytical substrate to Redshift, enabling zero-copy data sharing, unified lineage and AI/LLM workloads.

• Replatforming the ingestion and integration layer from Airbyte→Adverity to support harmonised vendor management, higher observability and stronger schema governance.

• Meeting 24-hour latency expectations and 0–1% discrepancy tolerances (see the sketch after this list).

• Embedding automation, observability and controlled backfills across all pipelines.

• Translating expanded client expectations (2025→2026) into scalable architectural patterns.
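
For illustration, a minimal sketch of the 0–1% discrepancy gate referenced above, assuming a simple relative-difference definition of discrepancy; the figures and gate logic are illustrative, not the programme's actual QA implementation.

```python
# Minimal sketch of a discrepancy-tolerance check between a source system
# and the warehouse. All figures are hypothetical illustrations.

TOLERANCE = 0.01  # 0–1% discrepancy tolerance from the role brief


def within_tolerance(source_total: float, warehouse_total: float,
                     tolerance: float = TOLERANCE) -> bool:
    """Return True when the warehouse figure is within tolerance of source."""
    if source_total == 0:
        return warehouse_total == 0
    discrepancy = abs(source_total - warehouse_total) / abs(source_total)
    return discrepancy <= tolerance


# Example: a market whose spend reconciles within 1% passes the QA gate.
assert within_tolerance(100_000.0, 99_400.0)      # 0.6% discrepancy: pass
assert not within_tolerance(100_000.0, 98_000.0)  # 2.0% discrepancy: fail
```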


KEY RESPONSIBILITIES


ARCHITECTURE


1. Own the end-to-end technical architecture covering ingestion → harmonisation → modelling → metadata → storage → sharing → BI.


2. Lead architectural redesign driven by 2025 expansions (multi-source reconciliation, dual-environment support, URL governance, finance reconciliation).


3. Deliver the 2026 requirements: creative-level granularity, lineage versioning, governed metadata history, automation-first operations, AI-ready data layers.


4. Define canonical modelling standards, entity relationships, lineage contracts, and schema evolution frameworks (see the sketch after this list).


5. Embed governance frameworks (taxonomy, CSREF/URL rules, tagging, QA rules) directly into the platform architecture.
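
As a sketch of what a schema evolution framework can enforce, the check below assumes an additive-only policy (new nullable columns allowed; removals and type changes rejected); both the column names and the policy itself are illustrative assumptions rather than the project's actual rules.

```python
# Sketch of an additive-only schema evolution rule: new columns are allowed,
# removing or retyping existing columns is flagged as a breaking change.

def breaking_changes(current: dict[str, str], proposed: dict[str, str]) -> list[str]:
    """Return a list of breaking changes between two schema versions."""
    problems = []
    for column, col_type in current.items():
        if column not in proposed:
            problems.append(f"column removed: {column}")
        elif proposed[column] != col_type:
            problems.append(f"type changed: {column} {col_type} -> {proposed[column]}")
    return problems  # additions are tolerated; removals and retypes are not


current = {"campaign_id": "varchar", "spend": "decimal(18,2)"}
proposed = {"campaign_id": "varchar", "spend": "decimal(18,2)", "creative_id": "varchar"}
assert breaking_changes(current, proposed) == []  # additive change: allowed
```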


ENGINEERING LEADERSHIP


1. Work with all engineering teams: Ingestion, Transformation/ETL, BI/Viz, QA, DevOps, and Automation.

2. Drive consolidation under a single engineering leadership structure.

3. Ensure deterministic, auditable, highly reliable pipelines supporting 80+ markets.

4. Establish modern software engineering practice: code reviews, CI/CD, IaC, automated QA gates, telemetry-driven operations.

5. Partner with Governance to ensure rule-driven, compliant, “Right First Time” execution across markets.


REPLATFORMING


Starburst → Redshift

• Lead end-to-end replatforming from federated Starburst architecture to Redshift analytical substrate.

• Deliver native zero-copy data sharing for the client (see the sketch after this list).

• Establish unified compute+storage lineage, enabling audit-ready transparency.

• Enable SQL-native AI/LLM workloads.
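
A rough sketch of how Redshift's native data sharing could be provisioned through the Redshift Data API; the cluster, database, schema, and share names are hypothetical placeholders, and the DDL follows Redshift's documented datashare syntax.

```python
# Sketch of Redshift zero-copy data sharing set up via the Redshift Data API.
# All identifiers below are hypothetical illustrations.
import boto3

client = boto3.client("redshift-data")

statements = [
    "CREATE DATASHARE one_reporting_share;",
    "ALTER DATASHARE one_reporting_share ADD SCHEMA reporting;",
    "ALTER DATASHARE one_reporting_share ADD ALL TABLES IN SCHEMA reporting;",
    # Grant to the client's consumer namespace (placeholder GUID).
    "GRANT USAGE ON DATASHARE one_reporting_share TO NAMESPACE '<consumer-namespace-guid>';",
]

for sql in statements:
    client.execute_statement(
        ClusterIdentifier="dto-redshift-cluster",  # hypothetical cluster name
        Database="analytics",                      # hypothetical database
        DbUser="admin",                            # hypothetical admin user
        Sql=sql,
    )
```

On the consumer side, the client would create a database from the share, which grants query access without any data being copied between clusters.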


Airbyte → Adverity

• Lead migration of ingestion layer to Adverity to consolidate connectors, improve vendor-supported reliability, and reduce ingestion failure domains.

• Deliver a unified ingestion governance layer: schema validation, drift detection, automated reprocessing rules, and lineage tagging at source (see the sketch after this list).

• Support increasing platform complexity (additional local DSPs, retailer platforms, local publishers, custom feeds).
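
A minimal sketch of source-level drift detection of the kind described above, assuming a governed field contract per vendor feed; the field names are hypothetical.

```python
# Sketch of source-level schema drift detection: compare the fields a vendor
# actually delivered against the governed contract and classify the drift.
# Contract contents are hypothetical illustrations.

EXPECTED_FIELDS = {"date", "campaign_id", "impressions", "clicks", "spend"}


def detect_drift(delivered_fields: set[str]) -> dict[str, set[str]]:
    """Classify drift so reprocessing rules can decide: quarantine or ingest."""
    return {
        "missing": EXPECTED_FIELDS - delivered_fields,     # breaks downstream models
        "unexpected": delivered_fields - EXPECTED_FIELDS,  # new vendor fields to review
    }


drift = detect_drift({"date", "campaign_id", "impressions", "clicks", "cost"})
assert drift["missing"] == {"spend"} and drift["unexpected"] == {"cost"}
```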


PLATFORM MATURITY


1. Deliver automated, rule-driven, end-to-end QA with 0–1% tolerance.

2. Implement full observability: SLA/SLO telemetry, heartbeat checks, freshness monitoring, discrepancy detection, error patterning.

3. Build governed, deterministic backfill mechanisms (sketched after this list).

4. Create AI-ready, metadata-rich, versioned, machine-consumable data layers.

5. Ensure the platform “explains itself” to audits, client pipelines and LLM-based validation systems.
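
A sketch of the governed, deterministic backfill item 3 calls for, assuming daily partitions that are replaced atomically so reruns always converge to the same state; the function names are hypothetical.

```python
# Sketch of a governed, deterministic backfill: reprocess an explicit window
# of daily partitions in order, returning an audit log of what was touched.
from datetime import date, timedelta


def backfill(start: date, end: date, reprocess_partition) -> list[date]:
    """Rebuild every daily partition in [start, end], inclusive, in order."""
    processed = []
    day = start
    while day <= end:
        reprocess_partition(day)  # delete-and-reinsert the partition: idempotent
        processed.append(day)
        day += timedelta(days=1)
    return processed  # audit log of exactly which partitions were rebuilt


# Example: rebuild one week after a late-arriving vendor correction.
log = backfill(date(2025, 1, 1), date(2025, 1, 7), lambda d: None)
assert len(log) == 7
```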


PEOPLE LEADERSHIP


• Lead, mentor and develop engineering leads and multi-disciplinary technical teams.

• Build an accountable, proactive engineering culture.

• Create clear KPIs, SLAs, maturity models and progression paths.

• Forecast capability needs and drive hiring aligned to 2026 requirements.

• Provide documentation, architectural decisions, and transparency to audits and client councils.


IDEAL CANDIDATE PROFILE


• Deep experience in cloud data architecture (AWS, Redshift, Glue, S3, Lambda, Bedrock).

• Strong expertise in ingestion frameworks (Airbyte, Adverity), schema governance and pipeline orchestration.

• Hands-on understanding of BI, modelling, lineage, metadata and harmonisation.

• Strong understanding of data governance, taxonomy, ID hygiene and compliance.

• Excellent communication and client-facing leadership capability.

• Strong proficiency in SQL and analytical modelling for high-volume datasets.

• Hands-on experience with dbt / dbt Cloud for modular transformations and testing.

• Experience with pipeline orchestration tools such as Airflow (a minimal DAG sketch follows this list).

• Proficiency in DevOps/DataOps practices, including CI/CD, Git, environment automation and deployment strategies.

• Experience with Infrastructure-as-Code (Terraform, CloudFormation).

• Exposure to containerisation (Docker, ECS, Kubernetes).

• Familiarity with observability stacks (Datadog, CloudWatch, Grafana) including SLA/SLO telemetry.

• Experience with AI-ready data architectures and embedding LLM workflows into warehouse layers.

• The contractor will not need to undertake team lead responsibilities, but hands-on architecture and development experience is essential.
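
To make the orchestration expectations concrete, here is a minimal Airflow 2.x DAG sketch chaining ingestion, a dbt run, and an automated QA gate; the commands, paths, and schedule are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal Airflow 2.x DAG sketch: ingest, transform with dbt, then QA gate.
# Task commands and the dbt project path are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",  # aligned with the 24-hour latency expectation
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_vendor_feeds",
        bash_command="python ingest.py --date {{ ds }}",  # placeholder script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",  # placeholder path
    )
    qa_gate = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt",  # fails the run on a QA breach
    )

    ingest >> transform >> qa_gate
```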


KPIs & SUCCESS MEASURES


• Starburst→Redshift replatforming delivered successfully and adopted.

• Airbyte→Adverity ingestion migration completed with improved reliability.

• 24h latency achieved consistently across platforms.

• 0–1% discrepancy tolerance achieved across reporting.

• Reduction in manual remediation and engineering intervention.

• Market satisfaction and client audit performance.
