Senior Data Architect

ELLIOTT MOSS CONSULTING PTE. LTD.
Penarth
1 day ago
Job Description

We are seeking an experienced Senior Data Architect / Databricks Architect to lead the design and implementation of scalable lakehouse-based data architectures using the Databricks platform.

The role focuses on delivering enterprise-grade data solutions, implementing Unity Catalog governance, and enabling end-to-end data lifecycle management across the ingestion, processing, storage, and analytics layers.

The ideal candidate will have strong expertise in Databricks, Apache Spark, Delta Lake, and cloud data platforms, along with the ability to collaborate with project teams to design high-performance, secure, and scalable data ecosystems.

Key Responsibilities

End-to-End Data Architecture

  • Collaborate with Databricks Professional Services and project stakeholders to design comprehensive end-to-end data architectures on the Databricks platform.
  • Define scalable data ingestion strategies integrating structured and unstructured data from multiple source systems.
  • Architect scalable lakehouse storage solutions using Delta Lake and modern data platform best practices.
  • Develop robust data processing frameworks leveraging Apache Spark and Databricks workflows.
  • Design data consumption layers that support analytics, reporting, AI/ML, and operational workloads.
  • Ensure seamless data movement and lifecycle management across the ingestion, transformation, storage, and consumption layers.

Governance, Security & Compliance

  • Implement data governance frameworks leveraging Unity Catalog for centralized governance.
  • Configure metastore, catalog, and schema structures, and implement access control policies.
  • Design and enforce data security, role-based access control, and data protection strategies.
  • Ensure compliance with regulatory requirements and enterprise data governance standards.
  • Implement data lineage, monitoring, audit logging, and observability for the data platform.
  • Optimize system performance through cluster configuration, workload management, and query tuning.
  • Define and implement data quality frameworks and validation processes.

Data Modelling & Design

  • Design business-aligned data models supporting enterprise analytics and operational use cases.
  • Implement dimensional modelling, normalized models, and data vault architectures.
  • Design optimized Delta table structures to improve scalability and query performance.
  • Implement the medallion architecture (Bronze, Silver, and Gold layers) for structured data refinement.
  • Develop data schemas that support both BI analytics and machine learning workloads.
  • Maintain data dictionaries, metadata documentation, and model specifications.

Technical Leadership & Collaboration

  • Lead technical workshops with the project team, stakeholders, and cross-functional teams to gather and refine requirements.
  • Provide architectural guidance and best practices for Databricks-based data engineering teams.
  • Collaborate with Infrastructure, Applications, and Cybersecurity teams to deliver integrated enterprise solutions.
  • Mentor data engineers, architects, and platform specialists on modern lakehouse architectures.
  • Present architecture strategies, solution designs, and technical recommendations to leadership and stakeholders.

Solution Implementation

  • Lead the implementation of Databricks-based solutions from architecture design to production deployment.
  • Oversee proof-of-concept (POC) initiatives and pilot programs to validate technical feasibility.
  • Ensure solutions meet scalability, reliability, security, and performance requirements.
  • Conduct architecture reviews and governance checkpoints aligned with enterprise standards.
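To make the data quality responsibility above concrete, here is a minimal, purely illustrative sketch of a rule-based validation step. The record fields and rule names are hypothetical; on Databricks this kind of check would typically run over Spark DataFrames or be expressed as Delta Live Tables expectations rather than plain Python.

```python
# Hypothetical rule-based validation: split a batch into valid and
# rejected records, recording which rules each rejected record failed.

def validate_records(records, rules):
    """Apply each named rule to every record; quarantine failures."""
    valid, rejected = [], []
    for record in records:
        failures = [name for name, check in rules.items() if not check(record)]
        if failures:
            rejected.append({"record": record, "failed_rules": failures})
        else:
            valid.append(record)
    return valid, rejected

# Example rules: a customer ID must be present and amounts non-negative.
rules = {
    "customer_id_present": lambda r: r.get("customer_id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

batch = [
    {"customer_id": "C1", "amount": 120.0},
    {"customer_id": None, "amount": 40.0},
    {"customer_id": "C2", "amount": -5.0},
]

valid, rejected = validate_records(batch, rules)
```

The same pattern scales to a governed platform: the rejected set lands in a quarantine table for audit, while only validated records flow downstream.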

Required Technical Skills

Databricks & Data Platform

  • Strong hands-on experience with the Databricks platform, including workspace administration, cluster configuration and optimization, workflow orchestration, and Unity Catalog.
  • Experience implementing Unity Catalog for unified data governance, including metastore configuration, catalog and schema design, and access control and policy management.

Data Engineering & Architecture

  • Expertise in data modelling approaches including dimensional modelling, Data Vault, and lakehouse architecture.
  • Deep knowledge of Delta Lake features, including ACID transactions, time travel, and performance optimization techniques.
  • Strong proficiency in Apache Spark (Spark SQL, DataFrames, performance tuning).

Programming & Cloud

  • Strong coding experience in Python, SQL, and Scala.
  • Hands-on experience with at least one major cloud platform: Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

Additional Technical Skills

  • Data pipeline development and ETL/ELT architecture.
  • Metadata management and data governance frameworks.
  • CI/CD implementation for data platforms.
  • Data quality monitoring and validation frameworks.
  • Performance optimization and troubleshooting.
  • Knowledge of data security, compliance, and regulatory standards.

Professional Experience

  • 8–10+ years of experience in data architecture, data engineering, or advanced analytics roles, including 3–5+ years of hands-on Databricks platform experience.
  • Proven experience implementing Unity Catalog in enterprise-scale environments.
  • Demonstrated success designing large-scale enterprise data models and lakehouse architectures.
  • Experience working with Databricks Professional Services or partner ecosystems is highly desirable.
  • Experience across multiple industries such as Public Sector, Financial Services, Healthcare, or Retail is advantageous.
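As a rough illustration of the medallion (Bronze, Silver, Gold) refinement pattern referenced in this role, the sketch below uses plain Python structures. In a real Databricks lakehouse each layer would be a Delta table and the transforms would be Spark jobs; the field names and rules here are invented for the example.

```python
# Bronze: raw ingested events, kept exactly as received (strings and all).
bronze = [
    {"order_id": "1", "amount": "19.99", "country": "UK"},
    {"order_id": "2", "amount": "bad-value", "country": "UK"},
    {"order_id": "3", "amount": "5.00", "country": "DE"},
]

def to_silver(rows):
    """Silver: clean and type the raw rows, dropping records that fail parsing."""
    silver = []
    for row in rows:
        try:
            silver.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # a production pipeline would quarantine these for audit
    return silver

def to_gold(rows):
    """Gold: aggregate cleaned rows into a business-level summary per country."""
    totals = {}
    for row in rows:
        totals[row["country"]] = totals.get(row["country"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

The point of the pattern is the same at any scale: raw data is preserved untouched in Bronze, validated and conformed in Silver, and served in analytics-ready form from Gold.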

Preferred Certifications

  • Databricks Certified Associate Developer for Apache Spark.
  • Databricks Data Engineer certification.
  • Professional Cloud certifications such as Azure Data Engineer Associate, AWS Data Analytics Specialty, Google Professional Data Engineer, or other relevant data management or analytics certifications.



If you’re applying for data science roles in the UK, it’s crucial to understand what hiring managers focus on before they dive into your full CV. In competitive markets, recruiters and hiring managers often make their first decisions in the first 10–20 seconds of scanning an application — and in data science, there are specific signals they look for first. Data science isn’t just about coding or statistics — it’s about producing insights, shipping models, collaborating with teams, and solving real business problems. This guide helps you understand exactly what hiring managers look for first in data science applications — and how to structure your CV, portfolio and cover letter so you leap to the top of the shortlist.