Experienced Business Analyst (Data Transformation, Investment Banking)

Boundaryless Ltd
Wembley
22 hours ago

Company Introduction


Boundaryless is a boutique consulting firm providing clients with deep expertise in advanced analytics and data forensics, agentic automation, and artificial intelligence solutions. We operate around the world, with offices in seven countries: the United Kingdom, France, the Netherlands, Switzerland, India, Sri Lanka, and the USA.


Boundaryless is an equal opportunity employer.  While we apply high standards to the experts we hire and the work we deliver, we also strive for diversity. Our team is a living example of this, with 13 nationalities and 50% female employees.


Due to growing demand from financial services organizations for automation services supporting data transformation and agentic process automation, we have open positions for Business Analysts who understand operational and regulatory reporting processes in (investment) banking.


Capital Markets knowledge and Finance experience are key components for this role.




Role Description


  • Support a data transformation and controls program for a top-tier banking client.
  • Translate business requirements into technical specifications.
  • Analyze, validate, and reconcile large-scale datasets used for operational and regulatory reporting.
  • Work across IT teams to define data requirements, quality rules, and controls for critical data elements.
  • Facilitate the UK issue remediation process by hosting conference calls amongst the SMEs required to drive items to closure.
  • Track, adjudicate, and report data quality issues.
  • Work with team members on data profiling, lineage validation, source-to-target mapping, and root-cause analysis for data issues.
  • Produce clear documentation and evidence to support UAT sign-off, audit reviews, and control attestations.
  • Ensure traceability from data requirement → transformation logic → quality checks → reporting outputs → audit evidence.




Location


  • The role supports one of our top-tier banking clients in London (Canary Wharf) and requires a minimum of three days on-site presence.
  • This is a permanent position based in the UK. We will only consider applicants who are eligible to work in the UK; we do NOT offer visa sponsorship for this role.




Experience Requirements & Qualifications


  • Minimum 5 years of relevant experience in data analytics, data quality, reporting controls, or data transformation programs (preferably in financial services).
  • Proficiency in SQL (advanced querying, performance tuning, reconciliation logic).
  • Strong proficiency in Python for data analysis and automation (pandas, data validation frameworks, scripting).
  • Experience supporting or validating ETL/ELT pipelines and data quality frameworks (rules, thresholds, exception handling).
  • Working knowledge of Autosys & Apache Airflow (monitoring schedules, reruns, failure triage).
  • Experience with CI/CD tooling: Git, Harness, UrbanCode Deploy (UCD), Red Hat OpenShift.
  • Familiarity with AWS S3 for large-scale data storage and dataset movement patterns.
  • Experience supporting Tableau dashboards (data validation, extract refresh checks, reconciliation to source).
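
The SQL and pandas requirements above center on reconciliation logic. As a purely illustrative sketch of the kind of check involved (all table and column names here are hypothetical, not from the client environment), a source-vs-target reconciliation in pandas might look like:

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame,
              key: str, value: str) -> pd.DataFrame:
    """Outer-join source and target on a key and return the breaks."""
    merged = source.merge(target, on=key, how="outer",
                          suffixes=("_src", "_tgt"), indicator=True)
    # A break is a key missing on either side, or a value mismatch.
    merged["break"] = (merged["_merge"] != "both") | (
        merged[f"{value}_src"] != merged[f"{value}_tgt"])
    return merged[merged["break"]]

# Hypothetical example: trade notionals pre- vs post-transformation.
src = pd.DataFrame({"trade_id": [1, 2, 3], "notional": [100.0, 200.0, 300.0]})
tgt = pd.DataFrame({"trade_id": [1, 2, 4], "notional": [100.0, 250.0, 400.0]})
breaks = reconcile(src, tgt, key="trade_id", value="notional")
print(breaks[["trade_id", "_merge"]])
```

In practice the same logic would typically run as SQL against the reporting store, with thresholds and exception logging around it; this sketch only shows the shape of the check.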




Nice-to-Have


  • Ideally, you have experience with tools including PySpark, Spark SQL, Hive, Impala, HDFS, Parquet, and Oracle databases.
  • Experience in regulated or enterprise data environments (investment banking, risk, finance, compliance).
  • Exposure to data governance concepts: critical data elements (CDEs), lineage, data quality dimensions, audit frameworks.
  • Experience working in Agile/Scrum delivery models.
  • Knowledge of monitoring/alerting tools for data pipelines and batch operations.
  • Familiarity with regulatory reporting domains or control frameworks (data controls, attestations, audit readiness).
  • Familiarity with Visio, as well as with DCRM tools used to manage, document, and monitor data quality issues.




Main Tasks and Responsibilities


  • Conduct discovery sessions to understand reporting objectives, key datasets, and edge cases.
  • Perform data profiling and produce findings on completeness, accuracy, consistency, and timeliness.
  • Design and execute reconciliation checks (source vs target, pre vs post transformation, report vs underlying data).
  • Support creation of data requirements artifacts (data dictionaries, source-to-target mapping support, business rules, quality rules).
  • Define and maintain data quality checks, exception logs, and analysis on recurring defects.
  • Lead or support UAT activities for reporting and controls; validate outcomes and document evidence.
  • Produce audit-ready documentation: test packs, control evidence, runbooks, and explainable results.
  • Partner with engineers to troubleshoot data issues and ensure analytical intent is preserved in implementation.
  • Track and communicate risks, dependencies, and changes impacting reporting and control outcomes.
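
To illustrate the profiling task above, a minimal sketch of per-column checks for completeness and rule pass rates, using a hypothetical dataset and invented quality rules (not the client's actual rule set):

```python
import pandas as pd

def profile(df: pd.DataFrame, rules: dict) -> pd.DataFrame:
    """Profile each column: completeness (share non-null) and rule pass rate."""
    rows = []
    for col in df.columns:
        completeness = df[col].notna().mean()
        rule = rules.get(col)
        # Apply the quality rule only to non-null values.
        pass_rate = df[col].dropna().apply(rule).mean() if rule else None
        rows.append({"column": col, "completeness": completeness,
                     "rule_pass_rate": pass_rate})
    return pd.DataFrame(rows)

# Hypothetical regulatory-reporting extract with simple quality rules.
data = pd.DataFrame({
    "isin": ["GB0001", None, "GB0003", "bad"],
    "notional": [100.0, -5.0, 300.0, 400.0],
})
rules = {
    "isin": lambda v: str(v).startswith("GB"),   # identifier format check
    "notional": lambda v: v > 0,                 # negative notionals are breaks
}
report = profile(data, rules)
print(report)
```

A real engagement would extend this with thresholds, exception logs, and trend analysis on recurring defects, as described in the bullets above.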




Your application


If you feel you fit this profile, please apply via 


Please send your CV as well as an overview of automation projects you have delivered.

Data science has become one of the most visible and sought-after careers in the UK technology market. From financial services and retail to healthcare, media, government and sport, organisations increasingly rely on data scientists to extract insight, guide decisions and build predictive models. Universities have responded quickly. Degrees in data science, analytics and artificial intelligence have expanded rapidly, and many computer science courses now include data-focused pathways. And yet, despite the volume of graduates entering the market, employers across the UK consistently report the same problem: Many data science candidates are not job-ready. Vacancies remain open. Hiring processes drag on. Candidates with impressive academic backgrounds fail interviews or struggle once hired. The issue is not intelligence or effort. It is a persistent skills gap between university education and real-world data science roles. This article explores that gap in depth: what universities teach well, what they often miss, why the gap exists, what employers actually want, and how jobseekers can bridge the divide to build successful careers in data science.