Senior Azure Data Engineer

Brompton Bicycle
Greenford
2 weeks ago
Applications closed

About Us

Brompton Bicycle is a leading producer of folding bicycles, renowned for our commitment to quality and innovation. Our mission is to revolutionise urban living and ensure the finest experience for our customers. To aid this, we've assembled a Data & Analytics team and are on the lookout for a skilled Senior Azure Data Engineer to join our ranks.


Job Overview

In this role you will take the lead in the conception, planning and execution of Brompton Bicycle's data infrastructure.


You will be working directly with diverse data sets from multiple systems, orchestrating their seamless integration and optimisation to enable our business to derive valuable insights. This process will encompass everything from the initial development of data pipelines to their ongoing management and optimisation using all tools available in the Azure Cloud.


A significant aspect of your role will be the migration of our existing on‑premises databases to the Azure Cloud: a complex project requiring a deep understanding of cloud architecture and database management, as well as change management to ensure data continues to be delivered as a product to your stakeholders throughout a smooth infrastructure transition.


As a vital part of our team you will collaborate with diverse departments across our organisation (Finance, Planning, Commercial, etc.), ensuring that the data solutions you architect are finely attuned to our unique business needs. Your work will directly support the creation of data‑driven strategies that yield pivotal insights, bolstering decision making and strategic planning.


You will have the chance to contribute directly to our mission of revolutionising urban living by ensuring that our data management and analysis processes are as efficient, reliable and insightful as possible.


Responsibilities

  • Develop, construct, test and maintain data architectures within large‑scale data processing systems.
  • Migrate existing on‑premises databases to the Azure Cloud.
  • Develop and manage data pipelines using Azure Data Factory, Delta Lake and Spark, ensuring all data sets are secure, reliable and accessible.
  • Utilise Azure Cloud architecture knowledge to design and implement scalable data solutions.
  • Utilise Spark SQL, Python, R and other data frameworks to manipulate data and gain a thorough understanding of the datasets’ characteristics. This role requires the ability to comprehend the business logic behind the data’s creation with the aim of enhancing the data modelling process.
  • Interact with API systems to query and retrieve data for analysis.
  • Work closely with Business Analysts, IT Ops and other stakeholders to understand data needs and deliver on those needs.
  • Ensure understanding and compliance with data governance and data quality principles.
  • Design robust data models that enhance data accessibility and facilitate deeper analysis.
  • Implement and manage Unity Catalog for centralised data governance and unified access controls across Databricks.
  • Maintain technical documentation for the entirety of the code base.
  • Take end‑to‑end ownership of the Data Engineering Lifecycle.
  • Implement and manage Fivetran for efficient and reliable ETL processes.

Requirements

  • Bachelor's degree in computer science, engineering, or equivalent experience.
  • Extensive experience as a Senior Data Engineer / Cloud Data Architect or similar role.
  • Deep knowledge of Azure Cloud architecture and Azure Databricks, DevOps and CI/CD.
  • Extensive experience migrating on‑premises data warehouses to the cloud.
  • Proficiency with Spark SQL, Python, R and other data engineering development tools.
  • Experience with metadata‑driven pipelines and SQL serverless data warehouses.
  • Extensive knowledge of querying API systems.
  • Excellent problem‑solving skills and attention to detail.
  • Extensive experience building and optimising ETL pipelines using Databricks.
  • Excellent communication and change‑management skills with the ability to explain complex technical concepts to non‑technical stakeholders.
  • Understanding of data governance and data quality principles.
  • Experience with implementing and managing Unity Catalog for data governance.
  • Familiarity with Fivetran ETL tool for seamless data integration.

Desirable Skills

  • Master's degree in a relevant field.
  • Experience with data visualisation tools such as Power BI or similar.
  • Familiarity with agile methodologies.
  • Certifications in Azure or other cloud platforms.

Required Experience

Senior IC


Key Skills

  • Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type: Full‑Time


Vacancy: 1


