Senior Data Engineer

Domestic & General Service GmbH
London


For this role, senior data engineering experience is expected: building automated data pipelines on IBM DataStage & DB2, AWS and Databricks, from source systems through operational databases to the curation layer, using the latest modern cloud technologies. Experience delivering complex pipelines will be significantly valuable to how D&G maintains and delivers world-class data pipelines.

Job summary:

D&G is transforming into a technology-powered product business serving customers around the world. Our products and services rely heavily on compelling digital experiences and data-led journeys for our B2B and B2C clients and customers.

This is a key lead engineering role within D&G’s technology team, presenting a challenging and exciting opportunity that requires real enthusiasm and modern data engineering experience to stabilise, enhance and transform D&G’s operational customer databases as they move from legacy systems to new scalable cloud solutions across the UK, EU and US. The role requires an experienced data engineer with good knowledge of IBM DataStage & DB2 and AWS & Databricks pipelines, able to excel in challenging environments, with the confidence to help the teams steer the right course in the development of the data platform alongside supporting any required tooling decisions.

The role will enable D&G to deliver a modern data services layer, delivered as a product, which key service channels and stakeholders can then consume on demand.

Strategic Impact:

Quality customer data is the lifeblood of D&G’s operations, allowing us to serve our customers with outstanding propositions and outcomes. This role will be integral to supporting this through the following areas of delivery:

This role will initially help stabilise the existing on-prem customer data platforms to serve our customers and protect the one-billion-pound revenue across the UK and EU. Targets will be to reduce the merge and compliance incident backlog, promote more automation, and support onboarding of a third party to provide a managed break / fix service.

Support Data Growth in UK and US Markets

Supporting further growth in the UK / EU markets through enhancement of the on-prem IBM customer platforms, ensuring they remain available, robust and secure for growing data demands, whilst leading delivery of cloud-based solutions for the US pipelines and data platform.

Knowledge, Expertise, Complexity and Scope:

Knowledge in the following areas is essential:

  1. Databricks: Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases (a minimal pipeline sketch follows this list).
  2. AWS Cloud: Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM.
  3. IBM Skills: DB2, DataStage, Tivoli Workload Scheduler, UrbanCode.
  4. Programming Languages: Proficiency in Python and SQL.
  5. Data Warehousing & ETL: Experience with modern ETL frameworks and data warehousing techniques.
  6. DevOps & CI/CD: Familiarity with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog).
  7. Big Data Technologies: Familiarity with big data technologies such as Apache Spark, Hadoop, or similar.
  8. ETL/ELT Tooling: Experience with ETL/ELT tools and with creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores.
  9. Leadership & Strategy: Lead data engineering team(s) in designing, developing, and maintaining highly scalable and performant data infrastructures.
  10. Customer Data Platform Development: Architect and manage our data platforms using IBM (legacy platform) and Databricks on AWS technologies (e.g., S3, Lambda, Glacier, Glue, EventBridge, RDS) to support real-time and batch data processing needs.
  11. Data Governance & Best Practices: Implement best practices for data governance, security, and data quality across our data platform. Ensure data is well documented, accessible, and meets compliance standards.
  12. Pipeline Automation & Optimisation: Drive the automation of data pipelines and workflows to improve efficiency and reliability.
  13. Team Management: Mentor and grow a team of data engineers, ensuring alignment with business goals, delivery timelines, and technical standards.
  14. Cross-Company Collaboration: Work closely with business stakeholders at all levels, including data scientists, finance analysts, MI and cross-functional teams, to ensure seamless data access and integration with various tools and systems.
  15. Cloud Management: Lead efforts to integrate and scale cloud data services on AWS, optimising costs and ensuring the resilience of the platform.
  16. Performance Monitoring: Establish monitoring and alerting solutions to ensure the high performance and availability of data pipelines and systems, with no impact to downstream consumers.
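
To make the stack above concrete, here is a minimal sketch of the kind of batch pipeline the role describes: a Databricks (PySpark) job reading raw customer records from S3 and writing a curated Delta table. It is an illustration only; the bucket, paths, column names and table name are hypothetical placeholders, not D&G systems.

```python
# Minimal Databricks (PySpark) batch pipeline sketch: raw S3 data -> curated Delta table.
# All bucket, path, column and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("customer-curation").getOrCreate()

# Read raw customer records landed in S3 (assumes IAM / instance-profile access is configured).
raw = spark.read.json("s3://example-raw-bucket/customer/")

# Light cleansing plus deduplication to the latest record per customer.
latest_first = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
curated = (
    raw.filter(F.col("customer_id").isNotNull())
       .withColumn("email", F.lower(F.trim(F.col("email"))))
       .withColumn("_rn", F.row_number().over(latest_first))
       .filter(F.col("_rn") == 1)
       .drop("_rn")
)

# Publish a curated Delta table for downstream consumers (warehouse, marts, MI).
# Assumes the target schema already exists in the metastore.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.customer")
```

In production, a job like this would typically be scheduled via an orchestrator and wired into the monitoring and alerting described in point 16.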

Key Responsibilities:

  1. Manage outcomes for the on-prem customer platform break / fix service.
  2. Build and deliver automated, secure data pipelines that provision data for all business users and applications (both operational and insight).
  3. Work with the DevOps developer and testers to help support and create our AWS & Databricks infrastructure and continuous delivery pipelines.
  4. Ensure all developments are tested and deployed within the automated CI / CD pipeline where appropriate (an illustrative test sketch follows this list).
  5. Version and store all development artefacts in the agreed repository.
  6. Ensure all data are catalogued and that appropriate documentation is created and maintained for all ETL code and associated NFRs.
  7. Collaborate with the product owner (Data) and business stakeholders to understand the requirements and capabilities.
  8. Collaborate with the lead architect and CCoE to align with the best-practice delivery strategy.
  9. Participate in the team’s agile planning and delivery process to ensure work is delivered in line with the Product Owner’s priorities.
  10. Create low-level designs for Epics and Stories and, where required, support the lead architect in creating the designs that enable the realisation of the Data Lake, operational Customer DB, Warehouse and marts, while ensuring scalability, security by design, ease of use, and high availability and reliability.
  11. Identify the key capabilities needed for success, along with the technology choices, coding standards, testing techniques and delivery approach needed to deliver reliable data services.
  12. Learn emerging technologies to keep abreast of new or better ways of delivering the data pipeline.
  13. Welcome a challenge as a new opportunity to learn new things and make new friends, whilst always thinking of better techniques to solve problems.
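
As an illustration of point 4, the sketch below shows the kind of automated check that could run inside a CI / CD pipeline to gate a deployment. The transform under test (normalise_emails) is a hypothetical stand-in, not an actual D&G component.

```python
# Hypothetical pytest-style check that a CI/CD pipeline could run before deploying a change.
# The transform under test is an illustrative stand-in for real pipeline logic.

def normalise_emails(records):
    """Illustrative transform: trim and lower-case emails, drop rows with no customer_id."""
    return [
        {**r, "email": r["email"].strip().lower()}
        for r in records
        if r.get("customer_id") is not None
    ]

def test_normalise_emails_lowercases_and_drops_orphans():
    records = [
        {"customer_id": 1, "email": "  Alice@Example.COM "},
        {"customer_id": None, "email": "orphan@example.com"},
    ]
    assert normalise_emails(records) == [
        {"customer_id": 1, "email": "alice@example.com"}
    ]
```

A failing check like this would block the merge, keeping untested transformations out of the automated pipeline.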

At Domestic & General, we are proud of our 100-year legacy and excited about our future growth plans. We are expanding our horizons, entering new markets and territories internationally and we need your expertise to help us on the journey.

Remote

Full Time


