Data Engineer

Catch Resource Management
Newcastle upon Tyne


Data Engineer – Data Engineer, data engineering, data, Python, Astronomer Airflow, Apache Airflow, ETL, workflows, data pipelines, DAG, SQL, data warehousing, Azure, Docker, data governance, privacy, security, API integration, data modelling – UK – Remote – Contract – £400-£475pd, outside IR35


Our client is currently looking for a Data Engineer with expertise in Python and Astronomer Airflow to support and enhance data pipelines for customer-facing digital applications. In this role you will be responsible for designing, building, supporting, and maintaining robust, scalable data pipelines that drive digital products. You will play a crucial role in ensuring that the applications receive timely and accurate data, directly impacting the user experience.


This is a 6-month, home-based contract. Candidates must be based in the UK.


Key Skills & Experience:


  • Bachelor’s degree in Computer Science, Data Engineering, or a related field, or equivalent practical experience.
  • 3+ years of experience as a Data Engineer, with a focus on supporting data pipelines for digital applications.
  • Proficiency in Python programming and experience with Astronomer Airflow or Apache Airflow, including DAG creation, workflow management, and scheduling (a minimal DAG sketch follows this list).
  • Solid understanding of ETL processes and data integration techniques.
  • Experience working with SQL, relational databases, and data warehousing solutions.
  • Familiarity with cloud platforms, specifically Microsoft Azure and its data-related services.
  • Understanding of data architecture principles and best practices for data management in customer-facing applications.
  • Strong problem-solving skills, with the ability to troubleshoot and resolve data pipeline issues quickly.
  • Experience working with data pipelines in a customer-facing digital environment, such as web or mobile applications.
  • Knowledge of data governance, privacy, and security best practices.
  • Experience with containerization and orchestration tools like Docker.
  • Understanding of data modelling and API integration for digital applications.
  • Attention to detail and a commitment to delivering high-quality work.
  • Excellent communication skills, with the ability to work effectively in a collaborative, cross-functional, team-oriented environment.
  • Experience working in a fast-paced, cross-functional environment, with strong organisational skills and the ability to handle multiple priorities and deliver to deadlines.
  • Broad experience across a number of IT disciplines.
  • Flexibility with respect to job responsibilities and a consistent drive to be an effective team member.
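
For context on the Airflow skills listed above, a minimal TaskFlow-style DAG (assuming Airflow 2.x) might look like the sketch below. The DAG name, schedule, and task logic are illustrative assumptions, not details of this engagement:

```python
# A minimal, illustrative TaskFlow DAG (assumes Airflow 2.x). The DAG name,
# schedule, and task bodies are invented for this sketch.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def customer_data_pipeline():
    @task
    def extract() -> list:
        # Stand-in for pulling records from an upstream API or database.
        return [{"id": 1, "value": 42}, {"id": 2, "value": None}]

    @task
    def transform(records: list) -> list:
        # Drop records that fail a basic completeness check.
        return [r for r in records if r["value"] is not None]

    @task
    def load(records: list) -> None:
        # Stand-in for writing cleaned records to a downstream store.
        print(f"loaded {len(records)} records")

    load(transform(extract()))


customer_data_pipeline()
```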


Main Responsibilities:


  • Design, develop, support and maintain data pipelines using Python and Astronomer Airflow to support data flow into customer-facing digital applications.
  • Collaborate with members of the delivery team and other stakeholders to understand data requirements for digital products and implement solutions accordingly.
  • Build ETL processes to extract, transform, and load data from various sources into data stores that feed into digital applications.
  • Ensure the reliability, scalability, and performance of data pipelines, with a focus on minimizing latency and maximizing data quality.
  • Monitor and maintain data pipelines, proactively identifying and resolving issues to ensure consistent data delivery to applications.
  • Implement data validation and quality checks to ensure data accuracy and integrity within our digital platforms (see the sketch after this list).
  • Continuously optimize and enhance data workflows and processes to support evolving product and business needs.
  • Document data engineering processes, including pipeline design, data flows, and operational procedures.
  • Stay up to date with the latest trends and best practices in data engineering, particularly in relation to digital applications, performing gap analysis to identify improvement opportunities.
  • Engage in Agile ceremonies, primarily within the delivery team as well as the wider IT Agile Release Train.
  • Participate in code reviews, providing constructive feedback to peers and ensuring high code quality.
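
The validation and quality-check responsibility above might, for example, take the form of a small row-level gate like the following sketch; the column names and the 10% failure threshold are assumptions made purely for illustration:

```python
# A hedged sketch of a row-level quality gate; the column names and the 10%
# failure threshold are assumptions made for illustration only.
import pandas as pd


def quality_check(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic integrity checks before rows are loaded downstream."""
    checked = df.dropna(subset=["customer_id"])   # required key must be present
    checked = checked[checked["amount"] >= 0]     # amounts must be non-negative
    if len(checked) < 0.9 * len(df):
        # Fail loudly rather than silently shipping a heavily filtered load.
        raise ValueError("more than 10% of rows failed validation; aborting load")
    return checked
```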


Location: UK-wide / Remote


Candidates must be eligible to work in the UK.


Catch Resource Management is a leading provider of Dynamics 365, JD Edwards, NetSuite and other ERP resources to both end users and product suppliers/authors.


Our consultants deliver a completely professional resourcing service, always backed up by our team of ERP specialists who are all experienced in full project life cycle implementation and support, thus ensuring that we fully understand our clients’ requirements and our candidates’ skills.


If you have the relevant skills and experience for this position, we would welcome your application. Please note, however, that we receive high levels of response to our advertisements, so we can only immediately respond to those that are a close match. If you are interested in hearing about similar positions, please register on our website: www.catchgroup.com.

