Senior Data Engineer

BR-DGE
Edinburgh
3 weeks ago

Remote with the option to use the Edinburgh office - Permanent/Full Time


BR-DGE is an award-winning FinTech founded in Edinburgh. Our platform gives e-commerce and technology businesses the freedom and flexibility to redefine the way they handle payments.


Since our inception in 2018, we have been leading the way in the future of payment orchestration. Our products enable enterprise businesses to optimise their payment infrastructure and create frictionless digital payment experiences for their end users. Now with a global reach, our customer base is made up of incredible brands and household names from across the travel, retail and gambling sectors, and it’s growing fast! Our world-class partners include Visa and Worldpay, and we’re continuing to build a strong partner network with the biggest players in the payments industry. It’s an exciting time to be part of BR-DGE!


The journey so far has been incredible, but we’re just getting started and with ambitious growth plans, we’re now looking for more exceptional talent to join our team.


All BR-DGE Builders receive the following benefits:

  • Flexible and remote working
  • Remote working allowance
  • 33 days holiday including public holidays
  • Your birthday as a day off
  • Family healthcare
  • Life insurance
  • Employee assistance programme
  • A culture that champions rapid career progression
  • Investment in your learning and development
  • Regular team events & socials

Become a BR-DGE Builder

Why this role exists



  • Data is becoming a critical part of BR-DGE’s next growth phase, powering internal analytics and customer-facing insights and monitoring.
  • The data engineering space is largely greenfield. We need a production-grade data platform that can ingest, transform, validate, and monitor data from core systems and operational tooling.
  • The robustness, scalability, and governance of our data architecture impact our ability to grow safely and meet regulatory expectations.
  • This role owns the insights data platform, while partnering closely with Analytics, Product, and Engineering to ensure the platform delivers trusted datasets and timely signals.

What you will do

  • Design and ship a tiered data platform that supports multiple latency needs, including low-latency pipelines for operational monitoring and customer-facing insights, plus batch pipelines for reporting and deeper analysis.
  • Build and own end-to-end ingestion patterns across batch, micro-batch, and selected near-real-time use cases, with strong orchestration and dependency management.
  • Implement schema evolution, data contracts, and approaches for late-arriving and corrected data so consumers can trust the outputs.
  • Treat curated datasets as products that are well defined, documented, reliable, and safe to use for both internal and external consumers.
  • Set platform standards for idempotent ingestion, deduplication, data quality, lineage, and observability.
  • Ensure the platform meets regulated fintech and payments expectations for access control, security, and governance while staying cost-efficient as volumes grow.
  • Partner with Product and Engineering on event and domain modelling, deciding what data gets emitted and what latency and granularity are needed for analytics and product goals.
  • Support Data Science with reliable, feature-ready datasets and pragmatic collaboration, without owning reporting or business analysis.
  • Evolve the current lightweight tooling into a more observable, structured platform, improving standards without creating unnecessary complexity.
  • Automate data infrastructure and workflows using infrastructure as code and CI/CD practices.
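To make the idempotent ingestion and deduplication standards above concrete, here is a minimal sketch of the pattern. It uses the stdlib sqlite3 module purely so the example is self-contained; the table and column names (payments, event_id) are illustrative, not BR-DGE’s actual schema, and a production version would target PostgreSQL or an AWS warehouse.

```python
import sqlite3

# Idempotent ingestion sketch: replaying the same batch must not
# duplicate rows, and corrected records must overwrite earlier ones.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payments (
        event_id TEXT PRIMARY KEY,   -- natural key used for dedup
        amount_pence INTEGER NOT NULL,
        updated_at TEXT NOT NULL
    )
""")

batch = [
    ("evt-001", 1250, "2024-01-01T10:00:00Z"),
    ("evt-002", 9900, "2024-01-01T10:05:00Z"),
]

def ingest(rows):
    # Upsert keyed on event_id: late-arriving or corrected records
    # replace the earlier version instead of creating duplicates.
    conn.executemany(
        """
        INSERT INTO payments (event_id, amount_pence, updated_at)
        VALUES (?, ?, ?)
        ON CONFLICT(event_id) DO UPDATE SET
            amount_pence = excluded.amount_pence,
            updated_at = excluded.updated_at
        """,
        rows,
    )

ingest(batch)
ingest(batch)  # replay: row count unchanged
ingest([("evt-001", 1300, "2024-01-01T11:00:00Z")])  # correction

count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
amount = conn.execute(
    "SELECT amount_pence FROM payments WHERE event_id = 'evt-001'"
).fetchone()[0]
print(count, amount)  # 2 rows; evt-001 corrected to 1300
```

The same upsert-on-a-natural-key approach carries over to PostgreSQL’s `INSERT ... ON CONFLICT` and to merge statements in warehouse engines.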

What we are looking for
Must have

  • Proven experience designing, building, and operating production-grade data pipelines and platforms.
  • Strong SQL, specifically PostgreSQL, plus at least one programming language such as Python or Java.
  • Experience with data processing or orchestration tooling such as Spark, Airflow, or Kafka.
  • Experience designing data models for analytics and reporting workloads.
  • Practical knowledge of data quality, testing, observability, lineage, and governance patterns.
  • Strong experience with AWS-based data platforms, with hands-on use of services such as S3, Glue, Athena, Redshift, Kinesis, EMR, or MSK.
  • Infrastructure-as-code experience using Terraform or CloudFormation, and comfort operating systems in production.
  • Ability to collaborate across Engineering, Product, Analytics, and Data Science, and to drive standards through influence.

Nice to have

  • Experience building customer-facing data products where latency and correctness affect user outcomes.
  • Experience in regulated fintech or payments environments, especially around access control and auditability.
  • Experience with cost and performance optimisation at scale in AWS data stacks.

Tech context

This role will work across ingestion, orchestration, modelling, governance, and observability in an AWS-centric environment, with PostgreSQL and modern data tooling. Current tooling is intentionally lightweight, and the platform is evolving as BR-DGE grows. In some areas you do not need to be hands-on day to day, but you must be fluent enough to make strong technical decisions and review work.



What We Offer

  • 33 days holiday, including public holidays
  • Birthday off
  • Family healthcare
  • Life insurance
  • Employee assistance programme
  • Investment in learning and development
  • Regular team events and off‑sites
  • A collaborative culture where documentation is treated as a first‑class product


