Financial Data Warehouse

Synechron
Glasgow
10 months ago
Applications closed


Greetings,


We have an immediate vacancy for a Financial Data Warehouse Developer with 7 years of experience at Synechron, based in Glasgow.


Job Role: Financial Data Warehouse Developer

Job Location: Glasgow


About Company:

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, serving an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,700+ and 58 offices in 21 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.


Diversity, Equity, and Inclusion


Synechron’s Diversity, Equity, and Inclusion (DEI) program, Same Difference, was developed because we believe in a culture of listening, respect, and opportunity.

We each bring unique backgrounds, thoughts, talents, and experiences with us to work every day, and we know that by embracing them, we are creating an even greater Synechron. The best way to build a strong team is to value individual differences. So, it doesn’t matter where you’re from or what you’ve had to do to get here – if you have the skills, enthusiasm, and drive to make your mark, we’ll support you like we support each other. Choose a career with us and let’s pursue innovation, together.


About the Department:

Treasury, Planning & Capital Technology delivers the technology products used to size and manage the firm's liquidity, unsecured funding, and capital resources. This includes intraday liquidity and cash management; calculation, forecasting and stress testing of liquidity and capital measures; resolution planning; budgeting and financial planning, including forecasting of revenue, expenses, and balance sheet; accessing unsecured funding markets; and delivering firmwide management reporting and analytics. Our key internal clients include Treasury, Capital Planning, Financial Planning & Analysis (FP&A), Cash Management Operations, Global Corporate Controllers (GCC) and the ISG, Wealth Management and Investment Management Finance Teams.


1. Overview of Current Challenges:

  • There is a collection of ad hoc requirements being built without a cohesive strategy.
  • The data ecosystem is fragmented, lacking integration across various components including subledgers, general ledgers, and forecasting.
  • Derivatives trading market values feed into approximately 30 different finance processes, with data sourced from 5 or 6 hubs used across different processes and platforms.


2. Finance Data Warehouse (FDW) Development:

  • Around five years ago, the initiative to develop a Finance Data Warehouse (FDW) was launched to create a consolidated entry point into finance.
  • The FDW serves as a distribution point for all downstream processes, allowing for data sourcing, control, adjustment, curation, and distribution.
  • A significant volume of data enters the FDW, with numerous consumers relying on it.
  • The data landscape is more varied and disparate than previously anticipated.


3. Data Challenges and Evolution:

  • Loans data has expanded from 5 use cases to 30 use cases, highlighting the need for a solid foundation.
  • The project initially focused on derivatives, which presented substantial challenges due to the complexity of 7 upstream systems and multiple asset classes.
  • The data model evolved significantly from being product-specific to a product-agnostic model that branches into more granular asset-class types.
  • Although the process of getting data into the warehouse was relatively straightforward, encouraging downstream consumers to adopt this data has proven very difficult.
  • The quality of data remains a critical issue, with poor data quality leading to economic impacts for the firm.
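The shift described above, from product-specific models to a product-agnostic core that branches into granular asset-class types, can be sketched roughly as follows. This is a hypothetical illustration: the type names (`Position`, `RatesDetail`, and so on) and fields are assumptions, not the FDW's actual schema.

```java
import java.math.BigDecimal;

public class FdwModel {

    // Asset-class-specific branches hang off a common marker interface.
    public interface AssetClassDetail {}

    public record RatesDetail(BigDecimal fixedRate, String floatingIndex) implements AssetClassDetail {}
    public record EquityDetail(String underlier, BigDecimal multiplier) implements AssetClassDetail {}
    public record CreditDetail(String referenceEntity, int spreadBps) implements AssetClassDetail {}

    // Product-agnostic core shared by every position: consumers that only
    // need market values never have to touch the asset-class branch.
    public record Position(String positionId,
                           String sourceSystem,   // one of the ~7 upstream systems
                           BigDecimal marketValue,
                           String currency,
                           AssetClassDetail detail) {}

    // Example of a downstream calculation that works across all products.
    public static BigDecimal totalMarketValue(Iterable<Position> positions) {
        BigDecimal total = BigDecimal.ZERO;
        for (Position p : positions) total = total.add(p.marketValue());
        return total;
    }
}
```

The benefit of this shape is that adding a new asset class extends the detail hierarchy without disturbing any consumer that works off the common core.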


4. Current Goals:

  • Goal 1: Accelerate the adoption of data within the FDW, ensuring that the purpose of FDW data is incorporated into downstream processes.
  • Goal 2: Position the FDW as a platform for effort and risk reduction, enhancing controls over finance and helping to manage data-related risks. The benefits of adoption from a control perspective have not met expectations, necessitating the development of automated controls to facilitate greater adoption.
  • Goal 3: Address regulatory focus on data governance. Recent audits (BCBS 239 and Co-Rep) have emphasized the need for transaction testing and data lineage. The FDW should align with increasingly stringent regulatory standards for governance and control.
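As one illustration of the automated controls mentioned in Goal 2, a completeness check could reconcile totals loaded into the FDW against upstream source totals and flag breaks for investigation. This is a sketch only; the class names, the per-feed structure, and the idea of a fixed tolerance are assumptions, not the firm's actual control framework.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class CompletenessControl {

    public record ControlResult(String feed, boolean passed, BigDecimal breakAmount) {}

    // Flag any feed whose FDW total drifts from its source total by more
    // than the tolerance, producing an auditable result per feed.
    public static List<ControlResult> check(Map<String, BigDecimal> sourceTotals,
                                            Map<String, BigDecimal> fdwTotals,
                                            BigDecimal tolerance) {
        List<ControlResult> results = new ArrayList<>();
        for (var entry : new TreeMap<>(sourceTotals).entrySet()) {
            BigDecimal fdw = fdwTotals.getOrDefault(entry.getKey(), BigDecimal.ZERO);
            BigDecimal diff = entry.getValue().subtract(fdw).abs();
            results.add(new ControlResult(entry.getKey(), diff.compareTo(tolerance) <= 0, diff));
        }
        return results;
    }
}
```

Running such a check automatically on every load is one way the FDW could demonstrate control coverage to consumers and auditors rather than relying on manual reconciliation.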


5. Discussion Points:

• Downstream Consumers:

  • The majority are end-user computing (EUC) applications and enterprise user applications (EUA), with a total of 12,000 EUCs in use.
  • The FDW acts as a pre-processed finance data platform, facilitating calculation and curation.
  • While some EUCs are directly connected to the FDW, most operate on industrial-strength systems.

6. Projects and Initiatives:

  • Pat is currently working on an EUC decommissioning program.
  • Preparations for the Comprehensive Capital Analysis and Review (CCAR) audit are underway, focusing on ensuring data lineage and governance controls.
  • The FDW is currently based on a Teradata database but is transitioning to Snowflake, with some early adopters already utilizing the new system. Pat's team is exclusively focused on cloud solutions, replicating and copying data into Snowflake, while managing a dual-environment situation.
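The dual-environment situation described above can be pictured as writes fanned out to both warehouses during the migration window. The sketch below is illustrative only: the interfaces and in-memory stand-in are assumptions so the example is self-contained, where a real implementation would sit on JDBC connections to Teradata and Snowflake.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DualWriter {

    public interface WarehouseSink {
        int write(String table, List<Map<String, String>> rows); // returns rows written
    }

    // Stand-in sink so the sketch runs without a database connection.
    public static class InMemorySink implements WarehouseSink {
        public final Map<String, List<Map<String, String>>> tables = new HashMap<>();
        public int write(String table, List<Map<String, String>> rows) {
            tables.computeIfAbsent(table, t -> new ArrayList<>()).addAll(rows);
            return rows.size();
        }
    }

    private final WarehouseSink legacy;     // e.g. the existing Teradata estate
    private final WarehouseSink snowflake;  // the migration target

    public DualWriter(WarehouseSink legacy, WarehouseSink snowflake) {
        this.legacy = legacy;
        this.snowflake = snowflake;
    }

    // Each load goes to both environments; returning both counts lets a
    // control verify that the two warehouses stay in agreement.
    public int[] load(String table, List<Map<String, String>> rows) {
        return new int[] { legacy.write(table, rows), snowflake.write(table, rows) };
    }
}
```

Keeping both sinks behind one interface is what lets early adopters point at Snowflake while everyone else stays on the legacy environment, with no change to the load path.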

7. Additional Notes:

  • Snowflake provides automatic lineage tracking for data, enhancing visibility from table to view and across the data stack.
  • The FDW is connected to the firm’s data program, which includes a firm-wide data catalogue detailing key data elements.

Skills Required

  • Java and/or Scala Backends
  • Distributed compute concepts: Spark, Databricks, Apache Beam, etc.
  • Full Stack experience
  • Azure Cloud Technologies
  • Angular or a similar front-end framework
  • Cloud DB / Relational DB: Snowflake / Sybase
  • Unix experience

Job Responsibilities

  • Designing and developing server-side components that meet the business requirements in an effective and efficient manner.
  • Directly interfacing with business users in understanding the requirement and providing solutions.
  • Assisting & guiding team members in design and development.
  • Partnering along with the leads to identify and mitigate risks and escalate issues as necessary.
  • Ensure code quality and automated testing standards, be a part of regular code reviews and ensure quality gates are upheld and enhanced.
  • Working on the cloud migration efforts to build the next-generation platform for Treasury.
Requirements

  • 7-10 years of work experience in software development.
  • Should have hands-on experience with Java, Scala, Spark, and SQL.
  • Strong knowledge of multi-threading and high-volume server-side development.
  • Experience with Snowflake, PowerBI and Cloud platforms is a plus.
  • Should possess good architectural knowledge and be aware of enterprise application design patterns.
  • Basic working knowledge of Unix/Linux.
  • Exposure to Test- and Behaviour-Driven Development in an agile setup is a huge plus.
  • Exposure to JavaScript frameworks like Angular is desirable.
  • Excellent communication, teamwork, and interpersonal skills.
  • Desire to learn the business domain and partner with stakeholders to specify new business features.
  • Strong analytical capability and problem-solving skills.
  • Bachelor of Science in Computer Science or relevant technical degree.

If you’re applying for data science roles in the UK, it’s crucial to understand what hiring managers focus on before they dive into your full CV. In competitive markets, recruiters and hiring managers often make their first decisions in the first 10–20 seconds of scanning an application — and in data science, there are specific signals they look for first. Data science isn’t just about coding or statistics — it’s about producing insights, shipping models, collaborating with teams, and solving real business problems. This guide helps you understand exactly what hiring managers look for first in data science applications — and how to structure your CV, portfolio and cover letter so you leap to the top of the shortlist.