GDPR Data Architect

Glasgow
13 hours ago

Data Architect
6 month contract
Glasgow, 2-3 days per week
£700 per day
 
We are seeking an experienced Data Architect to define, govern, and deliver GDPR-compliant enterprise data architectures across relational (Oracle), NoSQL, unstructured, and data warehouse platforms. The ideal candidate will have strong expertise in data integration, migration, and centralisation initiatives, ensuring privacy by design across complex, distributed data landscapes.
 
Key Responsibilities:

Data Architecture & Modelling:

   - Define enterprise data architecture aligned with GDPR and organisational data strategy.
   - Design conceptual, logical, and physical data models for relational databases (e.g., Oracle), NoSQL databases (e.g., MongoDB), and unstructured data.
   - Establish standards for data classification, metadata, and ownership across structured and unstructured data.
   - Ensure data models support traceability, lineage, and regulatory reporting.
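The classification, ownership, and lineage standards above can be sketched as a minimal metadata model. This is an illustrative in-house sketch in Python, not a specific catalogue product; all class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ColumnMetadata:
    name: str
    classification: str      # e.g. "public", "internal", "pii"
    owner: str               # accountable data owner
    source: str = ""         # upstream column, recorded for lineage

@dataclass
class TableMetadata:
    name: str
    columns: list[ColumnMetadata] = field(default_factory=list)

    def pii_columns(self) -> list[str]:
        """Columns flagged as personal data, used to scope DSAR and erasure rules."""
        return [c.name for c in self.columns if c.classification == "pii"]

customer = TableMetadata("customer", [
    ColumnMetadata("customer_id", "internal", "crm-team"),
    ColumnMetadata("email", "pii", "crm-team", source="legacy.contacts.email"),
    ColumnMetadata("segment", "internal", "analytics-team"),
])

print(customer.pii_columns())  # the columns that drive masking and erasure policy
```

In practice this kind of model is held in a governance catalogue (e.g. Collibra) rather than code, but the shape of the record is the same: every column carries a classification, an owner, and a lineage pointer.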

Data Governance:

   - Architect privacy by design and privacy by default solutions.
   - Define data retention, archival, anonymisation, and deletion strategies across all data stores.
   - Design architectures supporting data subject access requests (DSARs), the right to erasure, and consent management.
   - Define security architecture, including role-based access control (RBAC), column- and row-level security, data masking, tokenisation, and encryption.
   - Ensure compliance across cloud, hybrid, and on-premises environments.
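To illustrate the masking and tokenisation techniques listed above, here is a minimal Python sketch. The key and the masking rule are assumptions for illustration; note that deterministic tokenisation is pseudonymisation under GDPR, not anonymisation, since values remain linkable for anyone holding the key.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-kms-managed-secret"  # illustrative only

def tokenize(value: str) -> str:
    """Replace a PII value with a stable, non-reversible token (HMAC-SHA256)."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def mask_email(email: str) -> str:
    """Partial masking for display: keep the domain, hide the local part."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain

# Deterministic: the same input always yields the same token, so joins still work.
assert tokenize("alice@example.com") == tokenize("alice@example.com")
print(mask_email("alice@example.com"))  # a***@example.com
```

Determinism is the design trade-off here: it preserves joinability across tokenised stores, at the cost of being reversible by dictionary attack if the key leaks, which is why the key must live outside the data platform.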
 
GDPR & Regulatory Compliance:

   - Architect solutions fully aligned to GDPR and applicable regulatory frameworks.
   - Implement and operationalise key compliance capabilities: data retention and archival policies, right to erasure and other data subject rights, data minimisation, security and access controls, and auditability, traceability, and data lineage.
   - Work with Risk and Compliance teams to translate regulatory needs into technical design.
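The subject-rights capabilities above (DSAR export, erasure, auditability) can be sketched as a small workflow. The store names, in-memory dicts, and audit-log shape are illustrative assumptions, not a real system design.

```python
import datetime

# Registered data stores holding records keyed by data-subject id (illustrative).
stores = {
    "crm": {"s-1": {"email": "alice@example.com"}, "s-2": {"email": "bob@example.com"}},
    "marketing": {"s-1": {"consents": ["newsletter"]}},
}
audit_log: list[tuple[str, str, str]] = []  # (date, action, subject_id)

def export_subject(subject_id: str) -> dict:
    """DSAR: gather every record held for one data subject, with an audit entry."""
    found = {name: data[subject_id] for name, data in stores.items() if subject_id in data}
    audit_log.append((datetime.date.today().isoformat(), "export", subject_id))
    return found

def erase_subject(subject_id: str) -> int:
    """Right to erasure: delete the subject's records from every registered store."""
    removed = sum(1 for data in stores.values() if data.pop(subject_id, None) is not None)
    audit_log.append((datetime.date.today().isoformat(), "erase", subject_id))
    return removed

print(export_subject("s-1"))  # records from both crm and marketing
print(erase_subject("s-1"))   # 2 -- removed from two stores
```

The point of the sketch is the architecture, not the storage: erasure only works if every store holding personal data is registered, which is why the classification and lineage standards earlier in this role matter.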
 
Data Integration & Enterprise Connectivity:

   - Design and govern enterprise data integration architectures using ETL/ELT frameworks, API-based integration, event-driven and streaming integration, message queues and middleware.
   - Define integration patterns for real-time, near real-time, and batch processing.
   - Ensure consistent data movement while preserving lineage, quality, and compliance.
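As a concrete illustration of batch integration that preserves lineage and compliance, here is a minimal extract-transform-load sketch. Field names and the allow-list are assumptions; the idea is that every landed row carries its source and extraction time, and data minimisation is enforced in the transform step.

```python
from datetime import datetime, timezone

def extract(source_rows: list[dict], source_name: str) -> list[dict]:
    """Pull rows and stamp each with its source system and extraction time (lineage)."""
    ts = datetime.now(timezone.utc).isoformat()
    return [{**row, "_source": source_name, "_extracted_at": ts} for row in source_rows]

def transform(rows: list[dict]) -> list[dict]:
    """Data minimisation: drop fields the target has no basis to hold."""
    allowed = {"customer_id", "country", "_source", "_extracted_at"}
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

def load(rows: list[dict], target: list[dict]) -> None:
    target.extend(rows)

warehouse: list[dict] = []
legacy = [{"customer_id": 1, "country": "UK", "email": "alice@example.com"}]
load(transform(extract(legacy, "legacy_crm")), warehouse)

print(warehouse[0]["_source"])   # legacy_crm
print("email" in warehouse[0])   # False -- minimised out before landing
```

The same three stages apply to streaming and event-driven patterns; only the trigger changes, from a scheduled batch to a per-message hook.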
 
Data Migration & Centralisation:

   - Lead data migration initiatives from legacy and distributed systems to centralised platforms such as enterprise data warehouses, data lakes, and master data management (MDM) hubs.
   - Define migration strategies including data profiling and cleansing, PII identification and remediation, and incremental and phased migrations.
   - Ensure minimal business disruption and full GDPR compliance during migration.
   - Validate post-migration data quality, security, and accessibility.
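The profiling and PII-identification step above can be sketched as a simple column scanner. The regexes here are deliberately crude and purely illustrative; a real migration would use a profiling tool with much richer detectors.

```python
import re

EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def profile_pii(rows: list[dict]) -> dict[str, set[str]]:
    """Map column name -> kinds of likely PII found in its sample values."""
    findings: dict[str, set[str]] = {}
    for row in rows:
        for col, value in row.items():
            text = str(value)
            if EMAIL.search(text):
                findings.setdefault(col, set()).add("email")
            if PHONE.search(text):
                findings.setdefault(col, set()).add("phone")
    return findings

sample = [
    {"id": 1, "contact": "alice@example.com", "note": "call +44 7700 900123"},
    {"id": 2, "contact": "bob@example.org", "note": "no phone"},
]
print(profile_pii(sample))  # flags 'contact' as email and 'note' as phone
```

Columns flagged this way are then remediated (masked, tokenised, or excluded) before cutover, and the same scan can be re-run post-migration as part of validation.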
 
Collaboration & Oversight:

   - Provide architectural leadership and guidance to Data Engineers and Platform teams.
   - Work closely with Legal, Compliance, Security, and Risk teams.
   - Support GDPR audits, regulatory assessments, and architectural reviews.
   - Define and maintain architecture standards, patterns, and documentation.
 
Required Skills & Experience:

  • 15+ years of overall IT experience, with at least five years of hands-on experience in enterprise data architecture, design, governance, and implementation.
  • Strong experience designing and implementing enterprise-grade ETL/ELT data pipelines, with hands-on experience in tools such as Talend, Informatica, IBM DataStage, or equivalent platforms; SQL and NoSQL datastores; and governance tooling such as Collibra (including data lineage) and Alfresco.
  • Proven experience acting as an enterprise or lead data architect within a large, multinational organisation across the disciplines of metadata management, data quality, data warehousing, reporting & analytics, and data governance.
  • Strong knowledge of data modelling techniques (conceptual, logical, and physical).
  • Excellent communication and stakeholder management skills.
  • Solid understanding of SDLC methodologies including Agile, DevOps, rapid prototyping, and iterative delivery models.
  • Ability to assess and estimate the financial impact of technology decisions.
  • Capability to evaluate emerging technologies and understand their impact on business models.
