Senior Data Engineer

TP ICAP
Greater London
7 months ago
Applications closed


Role Overview

This Senior Data Engineer role sits within the Brokerage & Pricing team in the TP ICAP Technology division. The Senior Data Engineer will join an Agile team alongside other engineers, working on the next generation of strategic back-office applications and ensuring solutions deliver maximum value to users.

The Brokerage & Pricing team's focus is to optimise the management of the brokerage data and calculations that drive all broking activity in our £1 billion+ revenue Global Broking organisation, and to carry out commercial analysis on that data to understand revenues and drive client commercial agreements.

The Senior Data Engineer is responsible for designing, developing, and maintaining data pipelines and ETL processes to support data integration and analytics. The role requires a deep understanding of data structures and content, ensuring high-quality data through rigorous testing and validation. The engineer collaborates with system owners and stakeholders to understand data requirements and deliver reliable, efficient data solutions. Attention to detail and a commitment to data quality are paramount in maintaining the integrity and reliability of data.

Role Responsibilities

Design, develop, and maintain data pipelines and ETL processes to support data integration and analytics. 

Code primarily in Python to build and optimise data workflows. 

Implement and manage workflows using Apache Airflow (MWAA). 

Ensure high-quality data through rigorous testing and validation processes. 

Produce data quality reports to monitor and ensure the integrity of data. 

Conduct thorough data exploration and analysis to understand data structure and content before developing ETL pipelines. 

Collaborate with system owners and stakeholders to understand data requirements and deliver solutions. 

Monitor and troubleshoot data pipelines to ensure reliability and track performance. 

Maintain detailed documentation of data processes, workflows, and system configurations. 

Apply knowledge of data lakes and their architecture when designing solutions.
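As an illustration of the pipeline work described in the responsibilities above, here is a minimal extract-transform-load sketch in plain Python. It uses an in-memory SQLite database for portability; the feed, table, and field names are hypothetical, and a production pipeline at this level would typically run as an Apache Airflow (MWAA) DAG against AWS services rather than as a standalone script:

```python
import csv
import io
import sqlite3

# Hypothetical raw brokerage feed; in production this would arrive
# from an upstream trading or back-office system.
RAW_TRADES = """trade_id,desk,brokerage
T001,Rates,1250.50
T002,Credit,980.00
T003,Rates,410.25
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and validate before loading."""
    out = []
    for row in rows:
        brokerage = float(row["brokerage"])
        if brokerage < 0:
            raise ValueError(f"negative brokerage on {row['trade_id']}")
        out.append((row["trade_id"], row["desk"], brokerage))
    return out

def load(conn: sqlite3.Connection, records: list[tuple]) -> None:
    """Load: write validated records to the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS brokerage "
        "(trade_id TEXT PRIMARY KEY, desk TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO brokerage VALUES (?, ?, ?)", records)

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_TRADES)))
total = conn.execute("SELECT SUM(amount) FROM brokerage").fetchone()[0]
print(f"Loaded brokerage total: {total:.2f}")
```

The separation into `extract`, `transform`, and `load` steps mirrors how each stage would map to a distinct Airflow task, with validation happening before anything reaches the target store.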

Experience / Competences

Strong experience as a Data Engineer, preferably in the finance sector. 

Strong understanding of ETL processes and data pipeline design. 

Extensive experience coding in Python. 

Hands-on experience with Apache Airflow (MWAA) for workflow management. 

Experience with AWS Athena/PySpark (Glue) for data querying and processing. 

Strong SQL and PL/SQL skills, particularly with MS SQL Server and Oracle databases, plus experience with both relational and NoSQL data stores.

Attention to detail and the ability to work accurately under pressure in complex environments.

Excellent problem-solving skills and the ability to think critically and creatively. 

Strong collaboration skills and the ability to communicate effectively with team members and stakeholders. 

Passion for data quality and a commitment to maintaining high standards of data engineering. 

Proficiency in using AWS Cloud services in the context of data processing.

Familiarity with data lakes, operational databases/data stores and their architecture. 

Fluent in using the AWS CDK for Python.

Familiarity with version control systems (e.g., Git) and backlog management tools (e.g., JIRA). 

Ability to write clear and concise documentation. 

Strong communication skills, both written and verbal. 

Ability to work effectively as part of a team and independently when required. 
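To ground the data-quality expectations listed above, the following is a minimal sketch of the kind of validation checks that might feed a data quality report. It is plain Python with no external dependencies; the record layout and the rule set (completeness, validity, uniqueness) are hypothetical examples, not the firm's actual checks:

```python
from collections import Counter

# Hypothetical brokerage records awaiting validation.
RECORDS = [
    {"trade_id": "T001", "desk": "Rates", "amount": 1250.50},
    {"trade_id": "T002", "desk": "", "amount": 980.00},        # missing desk
    {"trade_id": "T003", "desk": "Credit", "amount": -5.00},   # negative amount
    {"trade_id": "T001", "desk": "Rates", "amount": 1250.50},  # duplicate key
]

def quality_report(records: list[dict]) -> dict:
    """Run simple completeness, validity, and uniqueness checks."""
    issues = Counter()
    seen = set()
    clean = 0
    for rec in records:
        problems = []
        if not rec["desk"]:
            problems.append("missing_desk")       # completeness
        if rec["amount"] < 0:
            problems.append("negative_amount")    # validity
        if rec["trade_id"] in seen:
            problems.append("duplicate_trade_id") # uniqueness
        seen.add(rec["trade_id"])
        issues.update(problems)
        if not problems:
            clean += 1
    return {"total": len(records), "clean": clean, "issues": dict(issues)}

report = quality_report(RECORDS)
print(report)
```

In practice, a report like this would be emitted per pipeline run (for instance as an Airflow task downstream of the load step) so that stakeholders can track data integrity over time.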

Job Band & Level

Manager / Level 6

#LI-Hybrid #LI-MID

