Data Architect

Computer Futures
Manchester
2 weeks ago
Core Responsibilities
1. Modernising an Existing Data Platform

  • Take ownership of redesigning a legacy analytical/data environment so it can interact cleanly with new microservice‑driven upstream systems and a large, frequently changing set of data models.
  • Establish methods for unifying older historical datasets with redesigned structures in a way that protects current reporting and user experiences.
  • Identify and recommend upgrade paths that allow the system to evolve without disrupting ongoing operations.

2. Architecture for Data Ingestion, Processing & Integration

  • Define updated patterns for collecting, transforming, and storing data coming from distributed microservice ecosystems.
  • Safeguard the reliability, lineage, and accuracy of data delivered to downstream consumers as schemas and interfaces evolve.
  • Create migration and transitional approaches that allow features to be introduced gradually and with minimal delivery risk.

3. Infrastructure, Access, Security & Connectivity

  • Shape the move from specialist UC devices and private network access to standardised DWP equipment using Zscaler‑based connectivity.
  • Set out the required networking, security, and infrastructure controls that enable compliant, secure communication between workloads running in AWS and environments hosted within Azure.
  • Ensure all movements of sensitive information across cloud boundaries follow strict principles of confidentiality, integrity, and controlled access.

4. Technology Evolution & Alignment to Enterprise Standards

  • Review current technology selections and propose pathways that transition services toward tools and languages approved across the wider organisation.
  • Produce architectural recommendations, structured options papers, and supporting decision artefacts.
  • Promote a reduction of complexity, alignment with enterprise blueprints, and removal of accumulated technical debt.

Skills & Experience

Essential

  • Demonstrated experience guiding architecture in complex, high‑risk brownfield environments.
  • Strong background in data platform design: ingestion patterns, schema lifecycle management, ETL/ELT approaches, and integration with microservice ecosystems.
  • Solid multi‑cloud architectural knowledge, specifically around secure data movement between AWS and Azure.
  • Familiarity with regulatory, government, or similarly controlled operating environments.
  • Experience acting as technical authority, including security governance and operational/SRE readiness.
  • Background in maintaining high‑availability services throughout platform modernisation initiatives.
  • Comprehensive knowledge of infrastructure, identity, networking, and security patterns (including Zscaler, VPC design, VPN, etc.).
  • Ability to work within organisational standards and design frameworks.
  • Hands‑on experience re‑engineering or standardising technology stacks.
  • Strong communication and leadership qualities, especially in ambiguous or multi‑stakeholder environments.
  • Proven ability to produce high‑quality technical documentation using enterprise‑standard tools.

Desirable

  • Familiarity with event‑driven integrations and contract‑based microservice communication.
  • Awareness of Kotlin and migration paths toward standard languages used in government (e.g., Java or Python).
  • Understanding of business intelligence tools or emerging AI‑assisted analytics.
  • Relevant certifications across AWS and/or Azure.

