Data Engineer

Above & Beyond - Climate Tech Recruitment
City of London


Remote or Hybrid

Based in London or Nairobi (must have the right to work)

London: £80,000-£100,000

Nairobi: KES 10M-15M


Above and Beyond Recruitment is proud to be partnering with ONE Data to recruit a Data Engineer to join their mission to build the world's first public finance and development data tool.


Who are we?

ONE Data is an initiative of The ONE Campaign focused on transforming how public finance and development data is accessed and used.


Our vision is a world where information asymmetries are collapsed and high-quality, evidence-based decisions lead to greater economic opportunity and healthier lives.


Our mission is to organise the world’s public finance and development data and make it universally accessible and useful - collapsing the time from raw data to actionable insight. By building open, interoperable data infrastructure and intuitive analytical tools, ONE Data strengthens transparency, accountability, and more effective investment in development.


In a system where data is fragmented, delayed, and difficult to interpret, ONE Data integrates disparate sources into trusted, policy-relevant insights that empower decision-makers, advocates, journalists, researchers, and partners globally.


The opportunity:

We are looking for a Data Engineer to help build the data infrastructure that powers ONE Data's products: the Knowledge Graph, APIs, and analytical platforms. This is a role with real ownership. You will shape foundational systems, help make architectural decisions, and see your work directly enable better policy decisions, research, and analysis.


ONE Data works with complex, fragmented public finance and development datasets, from aid flows and budget data to debt statistics and policy indicators. The Data Engineer designs the pipelines, models, and quality frameworks that transform these disparate sources into trusted, interoperable data that researchers, policymakers, and advocates can rely on.


The successful candidate will help shape a working foundation into a mature, well-documented, well-tested data platform. They will contribute to architectural decisions alongside the Senior Director for Data & Product, help establish engineering standards, and coordinate with external service providers for specialised data modelling and engineering work when the scope requires it.


You will focus on:

In the coming months, that means:

  • Building the Development Finance Observatory, designing and shipping the ETL pipelines and tools that integrate development finance datasets (e.g. OECD, IATI, World Bank, IMF, WHO) into a unified knowledge graph (see the sketch after this list).
  • Scaling the Knowledge Graph, including schema design, data integration, and optimisations.
  • Developing the data quality framework, implementing provenance tracking, quality indicators, coverage metrics, and automated testing so that every data point in our systems is trustworthy and well documented.
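
To give a flavour of this work, a minimal extract step against the public World Bank API, using httpx and pandas from the stack described below, might look like the following sketch. The indicator code is real; the function itself is illustrative, not ONE Data's production code.

    # Illustrative only: pull external debt stocks (a real World Bank
    # indicator) and flatten the response into a tidy table for loading.
    import httpx
    import pandas as pd

    WB_URL = "https://api.worldbank.org/v2/country/all/indicator/DT.DOD.DECT.CD"

    def fetch_indicator(url: str = WB_URL) -> pd.DataFrame:
        resp = httpx.get(url, params={"format": "json", "per_page": 20000})
        resp.raise_for_status()
        _meta, records = resp.json()  # the API returns [metadata, data]
        rows = [
            {
                "country": r["countryiso3code"],
                "year": int(r["date"]),
                "value": r["value"],
                "source": "World Bank",
            }
            for r in records
            if r["value"] is not None  # drop missing observations
        ]
        return pd.DataFrame(rows)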


You will also contribute to:

  • Shipping open-source data infrastructure, building pipelines and tools that the broader development data community can use and extend.
  • Designing APIs for data access, including RESTful APIs and an MCP server to provide programmatic access to our data (see the sketch after this list).
  • Coordinating with specialist partners and external data engineering service providers for deep domain work like concept modelling or high-volume data integration.
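
As a rough illustration of the API side, a read endpoint in FastAPI might look like the sketch below; the route shape, model fields, and in-memory store are hypothetical placeholders, not ONE Data's actual API.

    # Hypothetical read endpoint for structured data access. The in-memory
    # store stands in for real BigQuery/Spanner-backed lookups.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="Development Finance API (sketch)")

    class Observation(BaseModel):
        country: str  # ISO-3 country code
        indicator: str
        year: int
        value: float
        source: str

    STORE = {
        ("KEN", "external_debt", 2022): Observation(
            country="KEN", indicator="external_debt", year=2022,
            value=1.0e9, source="illustrative value, not real data",
        ),
    }

    @app.get("/observations/{country}/{indicator}/{year}",
             response_model=Observation)
    def get_observation(country: str, indicator: str, year: int) -> Observation:
        obs = STORE.get((country.upper(), indicator, year))
        if obs is None:
            raise HTTPException(status_code=404, detail="no observation found")
        return obs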


Tech stack:

  • Languages: Python (pandas, httpx, sdmx, pydantic, FastAPI, FastMCP, ADK), SQL (ISO GQL, the Graph Query Language, would be a plus)
  • Cloud: Google Cloud Platform (Cloud Run, Cloud Build, BigQuery, Spanner Graph, Cloud SQL, Cloud Storage)
  • Other: DuckDB, Terraform, Git


The infrastructure runs primarily on Google Cloud Platform, with the Knowledge Graph built on Spanner through the Data Commons infrastructure, alongside BigQuery for internal analytical workloads and MySQL for supporting services.
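
To make that split concrete, an internal analytical query against BigQuery via the official Python client might look like the sketch below; the project, dataset, and table names are hypothetical.

    # Illustrative only: an analytical aggregation via the BigQuery client.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT country, year, SUM(value) AS total_usd
        FROM `one-data.finance.aid_flows`  -- hypothetical table
        WHERE year >= 2015
        GROUP BY country, year
        ORDER BY total_usd DESC
    """
    for row in client.query(query).result():
        print(row.country, row.year, row.total_usd)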



Key responsibilities:

Data infrastructure and pipelines

  • Design, build, and maintain open-source ETL/ELT pipelines that ingest, clean, transform, and deliver development finance data from multiple sources.
  • Contribute to data modelling and schema design across ONE Data's infrastructure.
  • Help design, build and maintain APIs for structured data access, serving both internal products and external users.
  • Implement and maintain Infrastructure-as-Code for deployment, scaling, and monitoring.
  • Establish and maintain data lineage documentation across all systems.
  • Design and implement data quality frameworks, automated testing, and monitoring systems (see the validation sketch after this list).
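
As a minimal sketch, record-level quality checks with pydantic (v2-style validators) could look like the following; the field names and rules are hypothetical, not ONE Data's actual framework.

    # Hypothetical quality gate: invalid rows raise ValidationError and fail
    # loudly in the pipeline rather than silently reaching downstream systems.
    from pydantic import BaseModel, field_validator

    class FinanceRecord(BaseModel):
        country: str  # ISO-3 code
        year: int
        value: float
        source: str   # provenance: where this data point came from

        @field_validator("country")
        @classmethod
        def iso3(cls, v: str) -> str:
            if len(v) != 3 or not v.isalpha():
                raise ValueError("country must be a three-letter ISO code")
            return v.upper()

        @field_validator("year")
        @classmethod
        def plausible_year(cls, v: int) -> int:
            if not 1960 <= v <= 2100:
                raise ValueError("year outside plausible range")
            return v

    record = FinanceRecord(country="ken", year=2022, value=1.5e9, source="IMF")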


Knowledge graph and data architecture

  • Contribute to the development and evolution of ONE Data's deployment of the Data Commons Knowledge Graph on Spanner Graph, including schema design, data integration, and query optimisation (see the query sketch after this list).
  • Work within and extend the Data Commons infrastructure to support ONE Data's analytical and product needs.
  • Ensure interoperability and consistency across ONE Data’s systems, tools and products.
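
For illustration, a graph query against Spanner Graph from Python could look roughly like the sketch below; the graph name, labels, and instance/database IDs are hypothetical, and the exact query surface will depend on the Data Commons deployment.

    # Hypothetical GQL query against Spanner Graph via the standard client.
    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("one-data").database("knowledge-graph")

    GQL = """
        GRAPH FinanceGraph
        MATCH (d:Donor)-[f:FUNDS]->(r:Recipient {iso3: 'KEN'})
        RETURN d.name AS donor, f.amount_usd AS amount_usd
    """

    with database.snapshot() as snapshot:
        for donor, amount_usd in snapshot.execute_sql(GQL):
            print(donor, amount_usd)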


Collaboration and delivery

  • Support policy researchers, partners, and clients with data access and integration needs.
  • Help coordinate external data engineering service providers for specialised or high-volume data modelling work.
  • Participate in sprint planning, technical design reviews, and agile delivery cycles.
  • Contribute to open-source tooling and documentation.



Qualifications:

Education & Experience

  • Bachelor's degree (or higher) in computer science, data engineering, software engineering, or a related field.
  • 5+ years of experience in data engineering, back-end development, or a related technical role.
  • Experience working with open data, public finance, or international development datasets, including navigating the challenges of fragmented sources, inconsistent standards, and incomplete coverage that characterise this domain.
  • Experience contributing to data infrastructure decisions, with a desire to grow into architectural ownership.


Technical Expertise

  • Strong Python and SQL expertise for data engineering.
  • Experience designing and building scalable ETL/ELT pipelines and data architectures.
  • Experience with Google Cloud Platform services (BigQuery, Cloud Storage, Spanner, Cloud Run, etc).
  • Experience with API design and development for data access.
  • Familiarity with Infrastructure-as-Code (Terraform or similar), or willingness to learn.
  • Familiarity with graph databases or Knowledge Graph technologies is strongly preferred; willingness to learn and develop expertise in this area is essential.
  • Familiarity with data quality frameworks, automated testing, and monitoring.
  • Strong understanding of data modelling, schema design, and data governance principles.


Other attributes and culture fit:

  • Commitment to ONE Data's mission of making public finance and development data universally accessible and useful.
  • Belief that well-engineered data infrastructure is a public good.
  • Ability to operate effectively within a global matrix organisation.
  • Highly organised, analytical and self-motivated.
  • Collaborative mindset with strong interpersonal skills.
  • Comfortable navigating ambiguity and fast-moving priorities.
  • Remains positive under pressure and in high-stakes environments.
  • Independent problem solver with sound judgement.
  • Action-oriented and results-focused.
  • Flexible and resourceful approach to delivery.
  • Commitment to transparency, accountability and equity in development.


Languages:

Fluency in English required. Proficiency in additional languages relevant to ONE’s work (such as French or German) is a plus.


Travel:

Travel requirements vary by role but may include occasional domestic and international travel (up to 10%) to attend partner meetings, conferences, or team convenings.


Work environment:

Hybrid or remote work environment depending on location. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.



ONE is an equal opportunity employer and does not discriminate in its selection and employment practices. All qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, political affiliation, sexual orientation, gender identity, marital status, disability, protected veteran status, genetic information, age, or other legally protected characteristics.
