AWS Data Engineer

Technopride Ltd
London, United Kingdom | Posted on 04/12/2025


We provide end-to-end IT solutions and services, including application services, data & analytics services, AI/ML technologies, and professional services across the UK and EU markets.


Job Description

(10+ years of experience required)


Role Overview

We are building a next-generation data platform and are looking for an experienced Senior AWS Data Engineer to help design, develop, and optimize large-scale data solutions. This role involves end-to-end data engineering, modern cloud-based development, and close collaboration with cross-functional stakeholders to deliver reliable, scalable, and high-quality data products.


Key Responsibilities

  • Design, develop, and maintain scalable, testable, and high-performance data pipelines using Python and Apache Spark (an illustrative sketch follows this list).
  • Orchestrate data workflows using cloud-native services such as AWS Glue, EMR Serverless, Lambda, and S3.
  • Apply modern engineering practices including modular design, version control, CI/CD automation, and comprehensive testing.
  • Support the design and implementation of lakehouse architectures leveraging table formats such as Apache Iceberg.
  • Collaborate with business stakeholders to translate requirements into robust data engineering solutions.
  • Build observability and monitoring into data workflows; implement data quality checks and validations.
  • Participate in code reviews, pair programming, and architecture discussions to promote engineering excellence.
  • Continuously expand domain knowledge and contribute insights relevant to data operations and analytics.
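
For illustration only, here is a minimal sketch of the kind of PySpark batch pipeline this role involves. All bucket names, paths, and column names are hypothetical placeholders, and a real job would run on Glue or EMR with the appropriate S3 connectors configured:

    # Minimal illustrative PySpark batch pipeline (hypothetical names throughout).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-trades-batch").getOrCreate()

    # Read raw events from S3 (placeholder bucket and path).
    raw = spark.read.parquet("s3://example-raw-bucket/trades/2025-12-04/")

    # Basic cleaning: deduplicate and drop rows with missing amounts.
    clean = raw.dropDuplicates(["trade_id"]).filter(F.col("amount").isNotNull())

    # Simple daily aggregation per symbol.
    daily = clean.groupBy("symbol").agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("trade_count"),
    )

    # Write the curated output back to S3 for downstream consumers.
    daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_trades/")

    spark.stop()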

What You’ll Bring

  • Strong ability to write clean, maintainable Python code using best practices such as type hints, linting, and automated testing frameworks (e.g., pytest); a short example follows this list.
  • Deep understanding of core data engineering concepts including ETL/ELT pipeline design, batch processing, schema evolution, and data modeling.
  • Hands-on experience with Apache Spark, or the willingness and ability to learn large-scale distributed data processing.
  • Familiarity with AWS data services such as S3, Glue, Lambda, and EMR.
  • Ability to work closely with business and technical stakeholders and translate needs into actionable engineering tasks.
  • Strong team collaboration skills, especially within Agile environments, emphasizing shared ownership and high transparency.
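
As a flavour of what clean, typed, tested Python means here, a small illustrative example; the function and test names are hypothetical:

    # Illustrative only: typed code with a pytest-style test (run with `pytest`).
    def normalise_symbol(symbol: str) -> str:
        """Return an exchange symbol in canonical upper-case form."""
        return symbol.strip().upper()

    def test_normalise_symbol() -> None:
        assert normalise_symbol("  aapl ") == "AAPL"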

Nice-to-Have Skills

  • Experience with Apache Iceberg or similar lakehouse table formats (Delta Lake, Hudi); see the brief sketch after this list.
  • Practical exposure to CI/CD tools such as GitLab CI, GitHub Actions, or Jenkins.
  • Familiarity with data quality frameworks such as Great Expectations or Deequ.
  • Interest or background in financial markets, analytical datasets, or related business domains.
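
For context on the Iceberg point above, a minimal sketch of writing a Spark DataFrame to an Iceberg table via the DataFrameWriterV2 API. The catalog and table names are hypothetical, and the snippet assumes a Spark session already configured with an Iceberg catalog (via the spark.sql.catalog.* settings described in Iceberg's Spark documentation):

    from pyspark.sql import SparkSession

    # Assumes an Iceberg catalog named "my_catalog" is configured on the session.
    spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

    df = spark.createDataFrame(
        [("AAPL", 10), ("MSFT", 5)], ["symbol", "trade_count"]
    )

    # DataFrameWriterV2: create or replace an Iceberg table from the DataFrame.
    df.writeTo("my_catalog.analytics.daily_trades").using("iceberg").createOrReplace()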




