Data Engineer Partner

Open Partners
Manchester
3 weeks ago
Role Purpose

The Data Engineer is responsible for developing and maintaining the infrastructure behind our data pipelines and data warehouses. You will lead the design and delivery of internal and external data engineering solutions, managing the development of transformation pipelines across our data architecture.


Role Responsibilities

  • Develop the tech stack and data architecture behind the agency's automated reporting infrastructure. Research, analyze, and help implement technical approaches for solving complex development and integration problems to support the strategy roadmap.


  • Build and maintain ETL/ELT pipelines to create high-performance data feeds for reporting suites across organic & paid media.


  • Lead troubleshooting of data feeds, diagnosing and resolving issues across various data platforms and technologies (e.g., API failures, data discrepancies).


  • Work closely with the AI & automation team to ensure that clean and structured data can be successfully imported from key systems and applications around the business.


  • Champion best practice, governance, and security in all data-led projects.


  • Support the AI team in scoping new opportunities to apply Artificial Intelligence in product development and engineering workflows.


  • Speak directly with clients regarding technical data requirements and manage high-level stakeholder expectations regarding data feasibility.



Your KPIs / Outputs

  • Design and maintain a scalable data architecture and tech stack to power automated reporting.


  • Ensure high-quality, normalized data availability by building and stabilizing ETL/ELT pipelines.


  • Minimize data feed downtime and troubleshooting requirements through robust engineering.


  • Champion data governance, security, and best practices across all data-led projects.


  • Enable faster delivery of client solutions by automating complex data integration from key platforms.


  • Drive the technical roadmap for integrating data pipelines & warehousing across the wider business.



Role Details

Reports to: Harry Smith (Data Senior Partner)


Responsible for: Architecture, development, and maintenance of our reporting pipelines, data warehousing, and database structures.


Location: Manchester & Hybrid.


Hours / Days: 37.5 hours, over 5 days per week.


Contract basis: Permanent.


To be successful in this role

  • 5+ years of experience in data engineering and building data pipelines/architecture.


  • A strong strategic thinker, able to undertake strategic planning for data infrastructure and roadmaps.


  • Able to take ownership of a function (Data Engineering) and make it your own, championing governance and security.


  • Self-motivated and proactive in researching and implementing new technical approaches.

  • Confident translating technical concepts, with the ability to communicate complex integration problems clearly to non-technical stakeholders.



Skills & Experience required

Must Have:



  • Advanced SQL & Python.


  • Cloud Engineering (e.g., AWS, Azure, GCP).


  • Database design and architecture principles.


  • Advanced Microsoft Excel / Google Sheets skills.


  • Intermediate AI/prompt-engineering knowledge.



Desirable:



  • Experience with Google Cloud Platform (GCP) and BigQuery.


  • Experience in marketing (offline and online) data.


  • Advanced coding/version-control skills (e.g., Git/GitHub).


  • Experience with marketing API management tools (e.g., Adverity, Fivetran, Funnel).



Expectations for all Open Partners Employees

  • Follow our Employee Handbook.


  • Live by our values:


    • Smarter - Aim high, train hard, embrace next


    • Faster - Learn fast, adapt fast, act fast


    • Better - Think, say and do what’s best


