Senior Data Engineer - Pathogen

Ellison Institute of Technology Oxford

The Ellison Institute of Technology (EIT) tackles humanity's greatest challenges by turning science and technology into impactful global solutions. Focused on areas such as health, food security, sustainable agriculture, climate change, clean energy, and robotics in an era of artificial intelligence, EIT blends groundbreaking research with practical applications to deliver lasting results.


A cornerstone of EIT's mission is its upcoming 300,000-square‑foot research facility at the Oxford Science Park, set to open in 2027. This campus will feature advanced labs, an oncology and preventative care clinic, and collaborative spaces to strengthen its partnership with the University of Oxford. It will also host the Ellison Scholars, driving innovation for societal benefit.


The Pathogen Mission highlights EIT's transformative approach, using Whole Genome Sequencing (WGS) and Oracle's cloud technology to create a global pathogen metagenomics system. This initiative aims to improve diagnostics, provide early epidemic warnings, and guide treatments by profiling antimicrobial resistance. The goal is to deliver certified diagnostic tools for widespread use in laboratories, hospitals, and public health settings.


EIT fosters a culture of collaboration, innovation, and resilience, valuing diverse expertise to drive sustainable solutions to humanity's enduring challenges.


Key Responsibilities

  • Ensure data in the platform is acquired, processed, curated, and made accessible to scientists, digital analytics products, bioinformatics, and AI at a high standard of quality and availability.
  • Ensure data access adheres to the FAIR principles (Findable, Accessible, Interoperable, and Reusable).
  • Ensure data is secured and compliant with regulatory, legal, and data sharing requirements.
  • Ensure efficient, performant, and high‑quality pipelines for data ingestion into the platform.
  • Contribute to building data management components, including reference data management, de‑identification, data curation, pathogen and technical metadata catalogues, and data access controls.
  • Ensure efficient, secure, scalable, available, and performant data storage components, including genomic variant storage, clinical data stores, and clinical imaging.
  • Ensure robust ingest services capable of seamlessly integrating data from distributed sequencing devices, including real‑time telemetry streams.
  • Ensure data is processed to enable optimal access and consumption by digital analysis products, bioinformatics pipelines, and researchers/scientists.

Requirements
Essential Knowledge, Skills and Experience

  • Deep experience in building modern data platforms using cloud‑based architectures and tools.
  • Experience delivering data engineering solutions on cloud platforms, preferably Oracle OCI, AWS, or Azure.
  • Proficient in Python and workflow orchestration tools such as Airflow or Prefect.
  • Expert in data modelling, ETL, and SQL.
  • Experience with real‑time analytics from telemetry and event‑based streaming (e.g., Kafka).
  • Experience managing operational data stores with high availability, performance, and scalability.
  • Expertise in data lakes, lakehouses, Apache Iceberg, and data mesh architectures.
  • Proven ability to build, deliver, and support modern data platforms at scale.
  • Strong knowledge of data governance, data quality, and data cataloguing.
  • Experience with modern database technologies, including Iceberg, NoSQL, and vector databases.
  • Embraces innovation and works closely with scientists and partners to explore cutting‑edge technology.
  • Knowledge of master data, metadata, and reference data management.
  • Understanding of Agile practices and sprint‑based methodologies.
  • Active contributor to knowledge sharing and collaboration.

Desirable Knowledge, Skills and Experience

  • Familiarity with genomics and associated data standards.
  • Experience with healthcare clinical data and standards such as OMOP and SNOMED.
  • Familiarity with containerization tools such as Docker and Kubernetes.
  • Familiarity with Git and CI/CD workflows.

Key Attributes

  • Strong collaborator with excellent communication skills.
  • Comfortable working in a fast‑paced, dynamic environment.
  • Eagerness to learn and cross‑train in new technologies.
  • Proactive and hands‑on approach to exploring new tools and developing proof of concepts (POCs).

Benefits

  • Competitive salary on offer.
  • Enhanced holiday pay.
  • Pension.
  • Life Assurance.
  • Income Protection.
  • Private Medical Insurance.
  • Hospital Cash Plan.
  • Therapy Services.
  • Perkbox.
  • Electric Car Scheme.

Why work for EIT

At the Ellison Institute, we believe a collaborative, inclusive team is key to our success. We are building a supportive environment where creative risks are encouraged and everyone feels heard. Valuing emotional intelligence, empathy, respect, and resilience, we encourage people to be curious and to have a shared commitment to excellence. Join us and make an impact!


Terms of Appointment

  • You must have the right to work permanently in the UK with a willingness to travel as necessary.
  • You will live in Oxford, relocate to the area, or be within easy commuting distance.
  • During peak periods, longer hours may be required, and the global nature of the programme may involve working across multiple time zones.


