Freelance Senior Data Engineer

Publicis Production
City of London
7 months ago
Applications closed

Senior Data Engineer


Start - ASAP


Duration - 3 months


Rates - TBC


Location - Chancery Lane


Hybrid - 3 days onsite / 2 days remote


We are seeking a proactive and self-motivated Senior Data Engineer with a proven track record of building scalable cloud-based data solutions across multiple cloud platforms, to support our work architecting, building and maintaining data infrastructure. The role will initially focus on GCP; however, experience with Snowflake and Databricks is also required.


As a senior member of the data engineering team, you will play a pivotal role in designing scalable data pipelines, optimising data workflows, and ensuring data availability and quality for production technology.


The ideal candidate brings deep technical expertise in AWS, GCP and/or Databricks, alongside essential hands-on experience building pipelines in Python, analysing data requirements with SQL, and applying modern data engineering practices. Your ability to work across business and technology functions, drive strategic initiatives, and solve problems independently will be key to success in this role.

Qualifications:


Experience:


  • 7+ years of experience in data engineering and solution delivery, with a strong track record of technical leadership.
  • Deep understanding of data modelling, data warehousing concepts, and distributed systems.
  • Excellent problem-solving skills and the ability to independently design, build and validate output data.
  • Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools.
  • Expertise in multiple cloud platforms (AWS, GCP and/or Azure) and in managing cloud-based data infrastructure.
  • Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle).


Desirable Skills:


  • Familiarity with machine learning pipelines and MLOps practices.
  • Additional experience with Databricks and specific AWS services such as Glue, S3 and Lambda.
  • Proficiency with Git, CI/CD pipelines, and DevOps tools (e.g., Azure DevOps).
  • Hands-on experience with web scraping, REST API integrations, and streaming data pipelines.
  • Knowledge of JavaScript and front-end frameworks (e.g., React).


Key Responsibilities:


  • Architect and maintain robust data pipelines (batch and streaming) integrating internal and external data sources (APIs, structured streaming, message queues etc.).
  • Collaborate with data analysts, scientists, and software engineers to understand data needs and develop solutions.
  • Gather requirements from operations and product teams to ensure data and reporting needs are met.
  • Implement data quality checks, data governance practices, and monitoring systems to ensure reliable and trustworthy data.
  • Optimise performance of ETL/ELT workflows and improve infrastructure scalability.

