Senior Azure Data Engineer (SC Cleared) - Permanent - London, UK

Jobleads
London
1 week ago

Job Description

Job Summary:

We are seeking a highly skilled and experienced Senior Data Engineer to join our team and contribute to the development and maintenance of our cutting-edge Azure Databricks platform for economic data. This platform is critical for our Monetary Analysis, Forecasting, and Modelling activities. The Senior Data Engineer will be responsible for building and optimising data pipelines, implementing data transformations, and ensuring data quality and reliability. This role requires a strong understanding of data engineering principles, big data technologies, cloud computing (specifically Azure), and experience working with large datasets.

Key Responsibilities:

Data Pipeline Development & Optimisation:

  • Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading data from various sources (eg, APIs, databases, financial data providers) into the Azure Databricks platform.
  • Optimise data pipelines for performance, efficiency, and cost-effectiveness.
  • Implement data quality checks and validation rules within data pipelines.
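As a flavour of the in-pipeline quality checks described above, here is a minimal pure-Python sketch; rule definitions and field names (`series_id`, `value`) are hypothetical, and in production this logic would typically run inside a PySpark job on Databricks.

```python
# Illustrative in-pipeline data quality check. Rules, field names, and the
# error-rate threshold are hypothetical; production code would use PySpark.

def validate_row(row):
    """Return a list of rule violations for one ingested record."""
    errors = []
    if row.get("series_id") in (None, ""):
        errors.append("missing series_id")
    if not isinstance(row.get("value"), (int, float)):
        errors.append("non-numeric value")
    return errors

def run_quality_checks(rows, max_error_rate=0.05):
    """Split rows into clean/rejected and flag whether the batch passes."""
    clean, rejected = [], []
    for row in rows:
        (rejected if validate_row(row) else clean).append(row)
    error_rate = len(rejected) / max(len(rows), 1)
    return clean, rejected, error_rate <= max_error_rate
```

A failing batch can then trigger an alert or quarantine step rather than loading bad data downstream.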

Data Transformation & Processing:

  • Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies.
  • Develop and maintain data processing logic for cleaning, enriching, and aggregating data.
  • Ensure data consistency and accuracy throughout the data life cycle.
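The clean, enrich, and aggregate steps above can be sketched as follows. This is a plain-Python illustration with invented field names and lookup data; the production equivalent would use PySpark DataFrames.

```python
# Pure-Python sketch of the clean -> enrich -> aggregate pattern.
# Field names and the region lookup are illustrative only.
from collections import defaultdict

def clean(records):
    # Drop records with missing values and normalise country codes.
    return [
        {**r, "country": r["country"].upper()}
        for r in records
        if r.get("value") is not None and r.get("country")
    ]

def enrich(records, region_lookup):
    # Attach a region from a hypothetical reference table.
    return [{**r, "region": region_lookup.get(r["country"], "UNKNOWN")}
            for r in records]

def aggregate(records):
    # Average value per region.
    sums, counts = defaultdict(float), defaultdict(int)
    for r in records:
        sums[r["region"]] += r["value"]
        counts[r["region"]] += 1
    return {region: sums[region] / counts[region] for region in sums}
```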

Azure Databricks Implementation:

  • Work extensively with Azure Databricks, including Unity Catalog, Delta Lake, Spark SQL, and other relevant services.
  • Implement best practices for Databricks development and deployment.
  • Optimise Databricks workloads for performance and cost.
  • Program in SQL, Python, R, YAML, and JavaScript as required.

Data Integration:

  • Integrate data from various sources, including relational databases, APIs, and streaming data sources.
  • Implement data integration patterns and best practices.
  • Work with API developers to ensure seamless data exchange.

Data Quality & Governance:

  • Use Azure Purview hands-on to support data quality and data governance.
  • Implement data quality monitoring and alerting processes.
  • Work with data governance teams to ensure compliance with data governance policies and standards.
  • Implement data lineage tracking and metadata management processes.

Collaboration & Communication:

  • Collaborate closely with data scientists, economists, and other technical teams to understand data requirements and translate them into technical solutions.
  • Communicate technical concepts effectively to both technical and non-technical audiences.
  • Participate in code reviews and knowledge sharing sessions.

Automation & DevOps:

  • Implement automation for data pipeline deployments and other data engineering tasks.
  • Work with DevOps teams to build and maintain CI/CD pipelines for environment deployments.
  • Promote and implement DevOps best practices.
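As one hedged illustration of the CI/CD automation described above, a Databricks deployment stage in an Azure DevOps YAML pipeline might look like the fragment below; stage names, the test path, and the bundle target are assumptions, not a prescribed setup.

```yaml
# Hypothetical Azure DevOps pipeline fragment for a Databricks project.
# Stage names, paths, and the bundle target are illustrative.
trigger:
  branches:
    include: [main]

stages:
  - stage: Test
    jobs:
      - job: UnitTests
        pool: { vmImage: ubuntu-latest }
        steps:
          - script: pip install -r requirements.txt && pytest tests/
            displayName: Run pipeline unit tests

  - stage: DeployDev
    dependsOn: Test
    jobs:
      - job: Deploy
        pool: { vmImage: ubuntu-latest }
        steps:
          - script: databricks bundle deploy --target dev
            displayName: Deploy Databricks assets to dev
```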

Essential Skills & Experience:

  • 10+ years of experience in data engineering, including at least 3 years of hands-on experience with Azure Databricks.
  • Strong proficiency in Python and Spark (PySpark) or Scala.
  • Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
  • Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
  • Experience working with large datasets and complex data pipelines.
  • Experience with data architecture design and data pipeline optimisation.
  • Proven expertise with Databricks, including hands-on implementation experience and certifications.
  • Experience with SQL and NoSQL databases.
  • Experience with data quality and data governance processes.
  • Experience with version control systems (eg, Git).
  • Experience with Agile development methodologies.
  • Excellent communication, interpersonal, and problem-solving skills.
  • Experience with streaming data technologies (eg, Kafka, Azure Event Hubs).
  • Experience with data visualisation tools (eg, Tableau, Power BI).
  • Experience with DevOps tools and practices (eg, Azure DevOps, Jenkins, Docker, Kubernetes).
  • Experience working in a financial services or economic data environment.
  • Azure certifications related to data engineering (eg, Azure Data Engineer Associate).
