Principal Data Engineer

Oritain
London
2 months ago
Applications closed


Company Overview

Oritain is the global leader in product verification, with locations in Auckland, Dunedin, London, Singapore and Washington D.C.


Our Mission: Harnessing our data to protect people & planet

Our mission is to protect people and the planet by harnessing science, technology and services to build a community of origin verified suppliers and buyers.


Role: Principal Data Engineer

We are looking for a Principal Data Engineer to lead the transformation of our data platform. This is a critical leadership role responsible for defining, building and running the scalable, robust, and trustworthy data infrastructure that will underpin all future product development, scientific analysis and business operations.


The Opportunity: Founding the Data Platform

Reporting to the Head of Engineering, you will be the most senior technical voice for data platforms within the organisation. You will own the strategy, design, and initial implementation of the pipelines and architecture required to integrate complex scientific data with our commercial software applications.


You will act as a technical leader and mentor to the wider engineering team, ensuring that all data‑related systems meet the highest standards of reliability, performance, and security.


Key Responsibilities
Data Architecture & Strategy

  • Platform Leadership: Define and own the technical strategy and architecture for our entire data platform, covering ingestion, storage, processing, governance, and consumption, with use cases spanning Operations, Data Science, customer‑facing portals and Business Intelligence.
  • Pipeline Design: Design and implement highly scalable, performant, and reliable ETL/ELT data pipelines to handle diverse data sources, including complex scientific datasets and supply chain inputs alongside business information.
  • Technology Selection: Evaluate, recommend, and drive the adoption of new data services and modern data tools to ensure we have a future‑proof data ecosystem.
  • Data Modeling: Lead the design of canonical data models for our data warehouse and operational data stores, ensuring data quality, consistency, and integrity across the platform.
  • Single Source of Truth: Define and maintain canonical identifiers for clients, suppliers and transactions to ensure consistency across systems such as Salesforce, NetSuite, internal databases and portals.
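By way of illustration, the "Single Source of Truth" responsibility above often comes down to deriving stable, system‑agnostic identifiers from a natural key. This is a hypothetical sketch, not Oritain's actual scheme; the entity types and normalisation rule are assumptions:

```python
import hashlib

def canonical_id(entity_type: str, natural_key: str) -> str:
    """Derive a stable, system-agnostic identifier from a natural key.

    The same supplier or client yields the same ID regardless of whether
    the record arrived from Salesforce, NetSuite or an internal portal.
    """
    normalised = natural_key.strip().lower()
    digest = hashlib.sha256(f"{entity_type}:{normalised}".encode()).hexdigest()
    return f"{entity_type}-{digest[:12]}"

# The same supplier name, however it is cased or padded, maps to one ID,
# while different entity types never collide on the same key.
assert canonical_id("supplier", "Acme Textiles") == canonical_id("supplier", "  acme textiles ")
assert canonical_id("supplier", "acme") != canonical_id("client", "acme")
```

A real implementation would typically add survivorship rules for merging conflicting attributes, but the deterministic-ID idea is the core of keeping Salesforce, NetSuite and internal systems in agreement.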

Implementation & Technical Excellence

  • Hands‑on Development: Serve as the most senior, hands‑on developer, writing high‑quality, production‑grade code (primarily Python and/or Scala/Spark) to build initial pipelines and core data services.
  • Data Governance & Security: Architect data security and governance policies, ensuring compliance and best practices around data access, masking, and retention, especially for sensitive origin data.
  • Data Quality: Implement automated deduplication, conflict resolution and anomaly detection to maintain data integrity across ingestion sources.
  • Operational Health: Implement robust monitoring, logging, and alerting for all data pipelines and infrastructure, ensuring high data reliability and performance.
  • Infrastructure as Code (IaC): Work closely with the Infrastructure team to define and automate the provisioning of all Azure data resources using Terraform or similar IaC tools.
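To make the "Data Quality" bullet above concrete, here is a minimal sketch of last‑write‑wins deduplication across ingestion sources. The field names and conflict rule are illustrative assumptions, not the team's actual pipeline:

```python
def deduplicate(records, key="canonical_id", version_field="updated_at"):
    """Keep the most recent record per canonical key (last-write-wins).

    A minimal conflict-resolution rule; a production pipeline would log
    the discarded versions for audit rather than silently dropping them.
    """
    latest = {}
    for rec in records:
        k = rec[key]
        if k not in latest or rec[version_field] > latest[k][version_field]:
            latest[k] = rec
    return list(latest.values())

# Two sources report the same supplier; the newer version wins.
records = [
    {"canonical_id": "supplier-a", "updated_at": "2024-01-01", "name": "Acme"},
    {"canonical_id": "supplier-a", "updated_at": "2024-03-01", "name": "Acme Ltd"},
    {"canonical_id": "supplier-b", "updated_at": "2024-02-01", "name": "Beta"},
]
deduped = deduplicate(records)
```

Anomaly detection would sit downstream of this step, flagging records whose values fall outside expected ranges before they reach the warehouse.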

Cross‑Functional Leadership

  • Scientific Collaboration: Partner closely with the Science teams to understand the structure, complexity, and requirements of raw scientific data, ensuring accurate data translation and ingestion.
  • Mentorship: Provide technical guidance and mentorship to software engineers on best practices for interacting with and consuming data services.
  • Product Partnership: Collaborate with the Product Director to understand commercial and user‑facing data requirements, translating these needs into actionable data infrastructure features.

The Engineering Environment

  • Technology: We currently make extensive use of Microsoft Azure and its data services and are migrating to Databricks. This role is expected to be an authority across both.
  • Collaboration: You will be the technical data expert, integrating with the Software Engineering, Data Science and Product teams.
  • Work Style: London office, with a minimum requirement of three days per week on‑site to facilitate strategic planning and hands‑on collaboration.

Skills & Experience

  • Principal/Lead Expertise: Extensive experience (typically 7+ years) focused on data engineering, including significant time spent in a Principal, Lead, or Architect role defining data strategy from the ground up.
  • Databricks: Deep, practical, and architectural experience of the Databricks platform.
  • Azure Data Stack: Operational experience of building and running within the Microsoft Azure data ecosystem (e.g., Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, Azure SQL/Cosmos DB).
  • Coding Proficiency: Expert‑level proficiency in Python (or Scala) and SQL, with a strong focus on writing clean, tested, and highly performant data processing code.
  • Data Warehouse Design: Proven track record designing and implementing scalable data warehouses/data marts for analytical and operational use cases.
  • Pipeline Automation: Strong experience with workflow orchestration tools and implementing CI/CD for data pipelines.
  • Cloud Infrastructure: Familiarity with Infrastructure as Code (Terraform) and containerisation.
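The "Pipeline Automation" point above is, at heart, about expressing a pipeline as a dependency graph and running steps in a valid order. A toy sketch using Python's standard library (the step names are invented for illustration; real orchestrators such as Airflow or Databricks Workflows add scheduling, retries and observability on top of this idea):

```python
from graphlib import TopologicalSorter

# A hypothetical pipeline graph: each step maps to the set of upstream
# steps it depends on.
pipeline = {
    "ingest_salesforce": set(),
    "ingest_lab_results": set(),
    "conform_suppliers": {"ingest_salesforce"},
    "warehouse_load": {"conform_suppliers", "ingest_lab_results"},
    "bi_refresh": {"warehouse_load"},
}

# static_order() yields steps so that every dependency runs first.
order = list(TopologicalSorter(pipeline).static_order())
```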

Desirable Attributes

  • Experience processing scientific, geospatial or time‑series data.
  • Experience in the governance or compliance sector where data integrity is paramount.
  • Familiarity with streaming data technologies.

Company Benefits

  • Paid Leave – 35 days (inclusive of public holidays)
  • Birthday Off
  • Volunteering Leave Allowance
  • Enhanced Parental Leave
  • Life Insurance
  • Healthcare Cash Plan
  • Employee Assistance Programme (EAP)
  • Pension
  • Monthly Wellbeing Allowance
  • Breakfast, Snacks, Friday lunch & Barista Coffee Machine in the office
  • Learning Portal with over 100,000 assets available to support professional development
  • Hybrid working set‑up (Farringdon, London)
  • Plenty of friendly 4‑legged pets in the office

About Oritain

Oritain is a global leader in forensic origin verification. Using cutting‑edge science, advanced technology, and specialised services, we independently verify where products and raw materials come from – protecting brand integrity, supporting compliance, and strengthening supply chain trust and transparency. Our method is highly resistant to tampering, court‑admissible, and trusted by suppliers and manufacturers, brands and retailers, consumers, and regulators.


Driven by purpose, we are committed to advancing the scientific techniques and systems needed to identify the origin of the world’s most critical commodities – enabling more ethical, resilient, and accountable supply chains.


