Senior Data Engineer

TP ICAP
Greater London
8 months ago
Applications closed

Role Overview

This is a Senior Data Engineer role within the Brokerage & Pricing team in the TP ICAP Technology division. The Senior Data Engineer will join an Agile team alongside other engineers, working on the next generation of strategic back-office applications and ensuring solutions deliver maximum value to users. The Brokerage & Pricing team's focus is to optimise the management of the brokerage data and calculations that drive all broking activity in our £1 billion+ revenue Global Broking organisation, and to carry out commercial analysis on that data to understand revenues and drive client commercial agreements.

The Senior Data Engineer is responsible for designing, developing, and maintaining data pipelines and ETL processes to support data integration and analytics. The role requires a deep understanding of data structures and content, ensuring high-quality data through rigorous testing and validation. The engineer collaborates with system owners and stakeholders to understand data requirements and deliver reliable, efficient data solutions. Attention to detail and a commitment to data quality are paramount to maintaining the integrity and reliability of the data.

Role Responsibilities

Design, develop, and maintain data pipelines and ETL processes to support data integration and analytics. 

Code primarily in Python to build and optimise data workflows. 

Implement and manage workflows using Apache Airflow (MWAA). 

Ensure high-quality data through rigorous testing and validation processes. 

Produce data quality reports to monitor and ensure the integrity of data. 

Conduct thorough data exploration and analysis to understand data structure and content before developing ETL pipelines. 

Collaborate with system owners and stakeholders to understand data requirements and deliver solutions. 

Monitor and troubleshoot data pipelines to ensure reliability and track performance. 

Maintain detailed documentation of data processes, workflows, and system configurations. 

Work with data lakes, applying an understanding of their architecture.
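The validation and data-quality-reporting responsibilities above can be sketched in a few lines of Python. This is a minimal, hypothetical example: the `Trade` record, its field names, and the rejection rules are illustrative assumptions, not TP ICAP's actual schema or pipeline.

```python
from dataclasses import dataclass

# Hypothetical brokerage record; field names are illustrative only.
@dataclass
class Trade:
    trade_id: str
    desk: str
    brokerage_gbp: float

def validate(rows):
    """Split raw rows into clean records and rejects, and tally a simple quality report."""
    clean, rejects = [], []
    for row in rows:
        try:
            trade = Trade(
                trade_id=row["trade_id"],
                desk=row["desk"],
                brokerage_gbp=float(row["brokerage_gbp"]),
            )
            if trade.brokerage_gbp < 0:
                raise ValueError("negative brokerage")
            clean.append(trade)
        except (KeyError, ValueError) as exc:
            # Keep the bad row and the reason, so the quality report is auditable.
            rejects.append((row, str(exc)))
    report = {"total": len(rows), "clean": len(clean), "rejected": len(rejects)}
    return clean, rejects, report

raw = [
    {"trade_id": "T1", "desk": "FX", "brokerage_gbp": "125.50"},
    {"trade_id": "T2", "desk": "Rates", "brokerage_gbp": "-3.00"},
    {"trade_id": "T3", "desk": "FX"},  # missing brokerage field
]
clean, rejects, report = validate(raw)
print(report)  # {'total': 3, 'clean': 1, 'rejected': 2}
```

In a real pipeline the same validate-then-report step would typically run as an Airflow (MWAA) task between extraction and load, with the rejects routed to a quarantine location for investigation.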

Experience / Competences

Strong experience as a Data Engineer, preferably in the finance sector. 

Strong understanding of ETL processes and data pipeline design. 

Extensive experience coding in Python. 

Hands-on experience with Apache Airflow (MWAA) for workflow management. 

Experience with AWS Athena/PySpark (Glue) for data querying and processing. 

Strong SQL/PLSQL skills, particularly with MS SQL and Oracle databases. 

Highly proficient in SQL with experience in both relational and NoSQL databases. 

Attention to detail and the ability to work under pressure without being distracted by complexity. 

Excellent problem-solving skills and the ability to think critically and creatively. 

Strong collaboration skills and the ability to communicate effectively with team members and stakeholders. 

Passion for data quality and a commitment to maintaining high standards of data engineering. 

Proficiency in Python for data engineering tasks. 

Proficiency in using AWS cloud services for data processing. 

Familiarity with data lakes, operational databases/data stores and their architecture. 

Fluent in using the AWS CDK for Python. 

Familiarity with version control systems (e.g., Git) and backlog management tools (e.g., JIRA). 

Ability to write clear and concise documentation. 

Strong communication skills, both written and verbal. 

Ability to work effectively as part of a team and independently when required. 
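The SQL competences above amount to aggregating and reconciling brokerage data. A tiny illustration, using SQLite via Python's standard library purely for portability (the role itself names MS SQL and Oracle); the table and columns are hypothetical:

```python
import sqlite3

# In-memory database; the brokerage table and its columns are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brokerage (desk TEXT, amount_gbp REAL)")
conn.executemany(
    "INSERT INTO brokerage VALUES (?, ?)",
    [("FX", 125.5), ("FX", 74.5), ("Rates", 300.0)],
)

# Revenue per desk: the kind of commercial analysis the role describes.
rows = conn.execute(
    "SELECT desk, SUM(amount_gbp) FROM brokerage GROUP BY desk ORDER BY desk"
).fetchall()
print(rows)  # [('FX', 200.0), ('Rates', 300.0)]
```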

Job Band & Level

Manager / Level 6

#LI-Hybrid #LI-MID
