Integrations & Data Engineer

Pilgrim's Europe
Exeter
1 week ago
Role Purpose

This role ensures that business data is collected, transformed, validated, stored, and distributed through resilient automated pipelines, reducing manual processes and enabling reliable analytics, reporting, and system integrations.


The position supports both Business as Usual (BAU) operations and project delivery activities. This includes maintaining the stability and performance of existing integrations and data pipelines, as well as designing and implementing innovative solutions in line with agreed project specifications and architecture standards.


The Integrations & Data Engineer will also be responsible for ensuring that their solutions adhere to modern, secure standards and are designed with scalability in mind. This includes following recognised frameworks and best practices to deliver solutions that remain robust, maintainable, and future‑proof, supporting the business's evolving needs and compliance requirements.


Key Responsibilities
Integration Engineering & Data Pipelines

  • Develop, implement, support, and oversee data integrations between platforms.
  • Develop and maintain scalable ETL/ELT pipelines to ensure reliable and structured data availability for consumption.
  • Produce and maintain comprehensive documentation of data schemas, conventions, and definitions.
  • Ensure a high level of data integrity through validation and comprehensive error handling measures.
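As an illustration of the validation and error-handling expectations above, a minimal Python sketch is shown below. The column names and rules are hypothetical, not taken from the posting; the point is that malformed rows are captured and reported rather than silently dropped or allowed to crash the pipeline.

```python
import csv
import io

# Hypothetical schema: required columns and checks are illustrative only.
REQUIRED = {"order_id", "quantity"}

def validate_rows(raw_csv: str):
    """Split incoming CSV rows into valid records and logged errors."""
    valid, errors = [], []
    reader = csv.DictReader(io.StringIO(raw_csv))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        # Fail fast on a schema mismatch before processing any rows
        raise ValueError(f"schema mismatch, missing columns: {missing}")
    for line_no, row in enumerate(reader, start=2):
        try:
            row["quantity"] = int(row["quantity"])  # type coercion
            if not row["order_id"]:
                raise ValueError("empty order_id")
            valid.append(row)
        except ValueError as exc:
            # Capture rather than crash: bad rows go to an error log
            errors.append((line_no, str(exc)))
    return valid, errors
```

In practice the error list would feed a dead-letter queue or alerting step rather than being returned to the caller.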

Process Automation & Workflow Development

  • Design, build and maintain robust data automation workflows to reduce manual processing and improve operational efficiency.
  • Select appropriate tools to support automated data processing at scale, while including logic to handle scheduling and retry mechanisms for reliability.
  • Undertake user studies to proactively identify opportunities for efficiency gains through automation.
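The retry mechanisms mentioned above can be sketched as a small decorator with exponential backoff. This is a generic pattern, not a reference to any specific tool the role uses; attempt counts and delays are illustrative defaults.

```python
import time
from functools import wraps

def with_retries(attempts: int = 3, base_delay: float = 0.1):
    """Retry a flaky workflow step with exponential backoff."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # give up after the final attempt
                    # Back off: 0.1s, 0.2s, 0.4s, ... between attempts
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator
```

Orchestration platforms such as Azure Logic Apps provide equivalent retry policies as configuration; the decorator shows the same logic where a step is implemented in code.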

Operational Support (BAU), Continuous Improvement, & Knowledge Sharing

  • Respond to & resolve escalated support tickets relating to data, integrations, and automations, providing root cause analysis (RCA) and explanations to the support team.
  • Proactively review & recommend potential improvements, reworks, or changes to existing workflows as part of a continuous improvement mindset.
  • Collaborate with suppliers, partners and external contractors during support and project delivery activities.
  • Write & maintain KB articles for technical documentation and end‑user guides.

Project Delivery & Stakeholder Collaboration

  • Provide technical expertise, effort estimation, and risk identification during the planning of projects.
  • Maintain consistent & audience‑appropriate communication with project stakeholders during development.
  • Provide timely progress updates, risks and potential blockers to stakeholders and the project manager.

Required Skills & Experience
Core Data & Integration Skills

  • Strong experience designing, building and maintaining integrations between systems, including working with RESTful APIs as both data sources and destinations.
  • Solid understanding of common data formats such as JSON, XML and CSV.
  • Strong ability to validate, cleanse, manipulate and transform data for quality, reliability and accuracy.
  • Experience developing and optimising ETL/ELT pipelines in analytical, reporting, and operational contexts.
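The ETL pattern referenced above can be sketched end to end in a few lines of Python using only the standard library. The table name, fields, and cleansing rules here are hypothetical examples, not details from the posting; SQLite stands in for whatever destination store the pipeline targets.

```python
import json
import sqlite3

def run_etl(raw_json: str, conn: sqlite3.Connection) -> int:
    """Extract records from a JSON payload, transform them, load into SQLite."""
    records = json.loads(raw_json)                      # extract
    rows = [
        (r["sku"].strip().upper(), float(r["price"]))   # transform: cleanse and type
        for r in records
        if r.get("sku")                                 # drop rows missing a key field
    ]
    conn.execute("CREATE TABLE IF NOT EXISTS products (sku TEXT, price REAL)")
    conn.executemany("INSERT INTO products VALUES (?, ?)", rows)  # load
    return len(rows)
```

A production pipeline would add the schema validation, error capture, and retry logic described elsewhere in this posting around each of the three stages.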

Database & Platform Expertise
Microsoft SQL Server

  • Advanced knowledge of Microsoft SQL Server, including writing DML queries, stored procedures, and functions.
  • Experience managing SQL Server instances (security, users, monitoring, maintenance).

SAP Business One

  • Understanding of core SAP B1 processes and objects (business partners, documents, etc.).
  • Experience customising the platform using add‑ons such as Boyum B1UP.
  • Familiarity with SAP B1’s underlying data structures.

Microsoft Fabric

  • Experience ingesting data from APIs or databases into Fabric.
  • Skilled in using Dataflows, Power Query, and Data Warehouse tooling to cleanse, transform, and prepare data.
  • Ability to publish structured, reliable datasets for analytical and reporting use cases.

Microsoft Azure

  • Hands‑on experience with Azure Logic Apps for workflow automation.
  • Experience using Azure Function Apps for high‑speed data manipulation and transfer operations.
  • Familiarity with additional Azure data and compute services beneficial to integration workloads.

Programming & Automation Skills

  • Ability to write and understand T‑SQL, VBScript, and Python.
  • Experience working with DAX, Node.js, PowerShell or Bash.
  • Experience with automation, workflow orchestration, scheduling, and error‑handling logic.
  • Familiarity with version control (e.g., Git), DevOps and CI/CD practices is desirable.

Teamwork, Service & Communication

  • Strong problem‑solving and critical thinking skills with the initiative to identify optimal solutions.
  • Excellent customer service mindset when supporting internal teams and franchise partners.
  • Ability to communicate and explain technical concepts clearly to both technical and non‑technical audiences.
  • Experience mentoring or guiding first‑line support staff through escalated incident resolution.
  • Confidence in identifying weaknesses in business or technical processes and recommending improvements.
  • Understanding of commercial and operational metrics, and how data solutions support wider business goals.

