Data Engineer - Newcastle

Accenture UK & Ireland
Newcastle upon Tyne
2 weeks ago

Data Engineer
Role: Data Engineer
Location: Newcastle upon Tyne
Salary: TBC – Depending on experience
Levels: Senior Analyst, Specialist
Hybrid Working: 3 days per week in our Newcastle office at Cobalt Business Park
Please Note: Any offer of employment is subject to satisfactory BPSS and SC security clearance, which requires five years' continuous UK address history (typically including no period of 30 consecutive days or more spent outside the UK) and, at the point of application, holding a British or EU passport or Indefinite Leave to Remain in the UK.
Note: The above information relates to a specific client requirement.


About The Team

Our Advanced Technology Centre is a hub of innovation where we deliver high‑quality data and technology services to clients across both the public and private sectors. You’ll join a collaborative culture that values diverse thinking, continuous learning, and opportunities for career growth within a global network of experts.


Role Overview

As a Data Engineer, you will design, build, and maintain scalable data solutions that enable analytics, AI, and operational insights. You’ll work alongside client and internal teams to create robust data pipelines, ensure data reliability, and support cloud‑based architectures that power intelligent decision‑making.


Key Responsibilities
Data Pipeline Development

  • Build, optimise, and maintain scalable data pipelines using Java (primary), with exposure to Python, Flink, Kafka, or Spark.
  • Develop and support real‑time streaming pipelines and event‑driven integrations.
  • Integrate data from multiple sources (streaming, batch, APIs) using AWS managed services (e.g., Kinesis, MSK, Lambda, Glue).
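The pipeline work above would typically be written in Java against Flink or Kafka; as a dependency-free illustration of the batch-plus-streaming integration pattern, here is a minimal Python sketch (names such as `batch_source` and `stream_source` are hypothetical stand-ins, not client code):

```python
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class Record:
    source: str
    key: str
    value: float


def batch_source() -> Iterable[Record]:
    # Stands in for e.g. a Glue/S3 batch extract.
    yield from [Record("batch", "a", 1.0), Record("batch", "b", 2.0)]


def stream_source() -> Iterable[Record]:
    # Stands in for e.g. a Kinesis or Kafka (MSK) consumer.
    yield from [Record("stream", "a", 3.0)]


def merge(*sources: Iterable[Record]) -> Iterator[Record]:
    # Normalise multiple sources into a single record flow.
    for src in sources:
        yield from src


def aggregate(records: Iterable[Record]) -> dict:
    # Toy downstream step: total value per key.
    totals: dict = {}
    for r in records:
        totals[r.key] = totals.get(r.key, 0.0) + r.value
    return totals


print(aggregate(merge(batch_source(), stream_source())))  # {'a': 4.0, 'b': 2.0}
```

The point of the sketch is the shape, not the libraries: sources with different delivery semantics are adapted to one record type, then composed through shared downstream steps.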

Data Architecture & Standards

  • Contribute to data modelling, data architecture best practices, and modern patterns (e.g., medallion architecture).
  • Ensure data quality, lineage, governance, and security controls are applied consistently.
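The medallion pattern referenced above organises data into progressively refined bronze (raw), silver (validated and conformed), and gold (business-ready) layers. A minimal, hypothetical Python sketch of the idea (field names and validation rules are invented for illustration, not a real client schema):

```python
# Bronze: raw records exactly as ingested, including bad data.
bronze = [
    {"id": "1", "amount": "10.5", "country": "uk"},
    {"id": "2", "amount": "bad", "country": "UK"},   # malformed amount
    {"id": "3", "amount": "4.5", "country": "uk"},
]


def to_silver(rows):
    """Silver: typed, validated, conformed records."""
    out = []
    for r in rows:
        try:
            out.append({"id": r["id"],
                        "amount": float(r["amount"]),
                        "country": r["country"].upper()})
        except ValueError:
            continue  # a real pipeline would quarantine the row, not drop it
    return out


def to_gold(rows):
    """Gold: business-level aggregate ready for analytics."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals


print(to_gold(to_silver(bronze)))  # {'UK': 15.0}
```

Each layer has a clear contract, which is what makes lineage, quality checks, and governance controls enforceable per layer rather than per pipeline.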

DevOps & Deployment

  • Deploy and maintain data applications using CI/CD tooling (Azure DevOps, GitHub Actions, Jenkins).
  • Use Infrastructure as Code (e.g., Terraform, CloudFormation) to manage cloud environments.
  • Work with container technologies such as Docker and run Kubernetes‑based workloads.

Collaboration

  • Work closely with analytics, ML/AI, and product teams to deliver clean, well‑structured datasets.
  • Participate in code reviews and internal knowledge‑sharing sessions.
  • Provide guidance to junior engineers where needed.

Skills & Experience
Core Data Engineering

  • Strong programming proficiency in Java (preferred) or Python.
  • Hands‑on experience with at least one of: Kafka, Flink, Spark (Flink/Kafka preferred for streaming).
  • Solid understanding of stream processing concepts (e.g., event time, state, backpressure).
  • Understanding of software engineering best practices: testing, design patterns, CI/CD, Git.
  • Experience building ETL/ELT or streaming data pipelines.
  • Exposure to microservices and distributed system concepts.
  • Experience working with cloud platforms, ideally AWS, but Azure/GCP also acceptable.
  • Understanding of distributed compute, large‑scale data systems, and performance considerations.
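The stream-processing concepts listed above (event time, keyed state, out-of-order arrival) can be illustrated without a streaming framework. The hedged Python sketch below counts events per key per tumbling event-time window, so a late-arriving event still lands in the window its timestamp belongs to (window size and event shapes are invented for illustration):

```python
from collections import defaultdict

# Events carry their own event-time timestamp and may arrive out of order.
events = [
    {"ts": 5, "key": "a"},
    {"ts": 12, "key": "a"},
    {"ts": 7, "key": "b"},   # arrives after ts=12, i.e. late in processing time
]

WINDOW = 10  # tumbling window size in event-time units


def window_counts(events):
    """Count events per (key, event-time window), regardless of arrival order."""
    state = defaultdict(int)  # plays the role of the operator's keyed state
    for e in events:
        window_start = (e["ts"] // WINDOW) * WINDOW  # assign by event time
        state[(e["key"], window_start)] += 1
    return dict(state)


print(window_counts(events))
# {('a', 0): 1, ('a', 10): 1, ('b', 0): 1}
```

A real Flink or Kafka Streams job adds what this sketch omits: watermarks to decide when a window can close, fault-tolerant state backends, and backpressure so slow operators throttle upstream sources instead of being overwhelmed.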

DevOps & Engineering Practices

  • Experience with CI/CD tools (Azure DevOps, GitHub Actions, Jenkins, etc.).
  • Infrastructure‑as‑Code (Terraform preferred).
  • Experience with containerisation (Docker) and orchestration platforms (Kubernetes/EKS).

Certifications & Tools

  • Exposure to enterprise data platforms (Databricks, Snowflake, BigQuery, or similar).
  • Cloud certifications (AWS, Azure, GCP) are beneficial but not required.

Other Requirements

  • Minimum 3 years’ experience working on data engineering or large‑scale data solutions.
  • Comfortable working in Agile delivery teams.
  • Strong communication skills and ability to collaborate with technical and non‑technical stakeholders.

Desirable

  • Experience in client‑facing or consulting environments.
  • Professional cloud or data engineering certifications.
  • Experience mentoring or supporting junior engineers.
  • Background in designing or operating real‑time, low‑latency systems.


If you’re applying for data science roles in the UK, it’s crucial to understand what hiring managers focus on before they dive into your full CV. In competitive markets, recruiters and hiring managers often make their first decisions in the first 10–20 seconds of scanning an application — and in data science, there are specific signals they look for first. Data science isn’t just about coding or statistics — it’s about producing insights, shipping models, collaborating with teams, and solving real business problems. This guide helps you understand exactly what hiring managers look for first in data science applications — and how to structure your CV, portfolio and cover letter so you leap to the top of the shortlist.