Data Engineer

RBC
Newcastle upon Tyne
Job Description

We have an exciting opportunity for a Data Engineer to join the team in Newcastle or London. You will work closely with business and technology teams across Wealth Management Europe (WME) to support the ongoing maintenance and evolution of the Data Lakehouse platform. The primary focus is the ingestion and modelling of new data, and the evolution of the platform itself, utilising new technologies to improve the performance and accuracy of the data.


RBC’s expectation is that all employees and contractors will work in the office with some flexibility to work up to 1 day per week remotely, depending on working arrangements.


What will you do?

  • Develop and maintain the Data Lakehouse platform infrastructure using the Microsoft Azure technology stack, including Databricks and Data Factory.


  • Architect, create, maintain and optimise data pipelines: the series of stages through which data flows, from data sources or endpoints of acquisition, through integration, to consumption for specific use cases. These pipelines must be kept running and tuned as workloads move from development to production, and they will be the data engineer's primary responsibility (a minimal, hypothetical sketch of such a pipeline follows this list).


  • Create new and modify existing Notebooks, Functions and Workflows to support efficient reporting and analytics for the business.


  • Create, maintain and develop the Dev, UAT and Production environments, ensuring consistency between them.


  • Use innovative, modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, minimising manual, error-prone processes and improving productivity.


  • Use GitHub (or other version control tooling) competently, including data and schema comparisons via Visual Studio.


  • Champion the DevOps process: ensure the latest techniques are used, that changes to new or existing source code and data structures follow the agreed development and release processes, and that all productionised code is adequately documented, reviewed and unit tested where appropriate.


  • Identify, design and implement internal process improvements as part of the end-to-end data lifecycle, automating manual processes and optimising data delivery for greater scalability.


  • Be curious and knowledgeable about new data initiatives and how to address them, applying your data and domain understanding to new data requirements, and propose appropriate (and innovative) data ingestion, preparation, integration and operationalisation techniques to address those requirements optimally.
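As a concrete illustration of the pipeline work described above, here is a minimal, hypothetical sketch of one bronze-to-silver stage of a Lakehouse ingestion pipeline on Databricks using PySpark and Delta Lake. The paths, schemas, table names and column names are invented for this example and are not taken from the posting; a real pipeline would typically be orchestrated by Data Factory or Databricks Workflows and parameterised per environment.

```python
# Hypothetical sketch only: one bronze-to-silver stage of a Lakehouse pipeline
# on Databricks (PySpark + Delta Lake). Paths, tables and columns are invented,
# and the bronze/silver schemas are assumed to exist already.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Ingest a raw CSV extract from a source system into a bronze Delta table.
raw = (
    spark.read.option("header", "true")
    .csv("/mnt/landing/client_positions/")            # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.client_positions")

# Clean and conform the data into a silver table for reporting and analytics.
silver = (
    spark.table("bronze.client_positions")
    .dropDuplicates(["position_id"])                   # hypothetical business key
    .withColumn("market_value", F.col("market_value").cast("decimal(18,2)"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.client_positions")
```

In practice each stage would be promoted from Dev to UAT to Production through the agreed release process, with configuration such as paths, schemas and secrets injected per environment rather than hard-coded.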



What do you need to succeed?
Must-have

  • Proven experience working within Data Engineering and Data Management architectures such as Data Warehouse, Data Lake and Data Hub, and with supporting processes such as Data Integration, Governance and Metadata Management.


  • Proven experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.


  • Strong experience with popular programming languages for relational databases (SQL, T-SQL).


  • Experience working on a cloud data platform such as Databricks or Snowflake.


  • Adept in agile methodologies, and capable of applying DevOps and DataOps principles to data pipelines.


  • Basic experience in working with data governance, data quality and data security teams.


  • Good understanding of datasets, Data Lakehouses, modelling, database design and programming.


  • Knowledge of Data Lakehouse techniques, solutions and methodologies (an illustrative sketch follows this list).


  • Strong experience supporting and working with cross-functional teams in a dynamic business environment.


  • Highly creative and collaborative, working closely with business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly.
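As a brief, illustrative example of the Lakehouse modelling and SQL skills listed above, the sketch below creates a hypothetical gold-layer dimension table and upserts into it with Spark SQL on Databricks. Every table and column name is invented; it indicates the kind of work involved rather than describing RBC's actual platform.

```python
# Illustrative only: a hypothetical gold-layer dimension table and a MERGE
# upsert expressed through Spark SQL on Databricks. All names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Define the dimension as a Delta table (assumes the gold schema exists).
spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.dim_client (
        client_key   BIGINT,
        client_name  STRING,
        domicile     STRING,
        valid_from   DATE
    ) USING DELTA
""")

# Upsert the latest client records from the silver layer into the dimension.
spark.sql("""
    MERGE INTO gold.dim_client AS tgt
    USING silver.clients AS src
    ON tgt.client_key = src.client_key
    WHEN MATCHED THEN UPDATE SET
        tgt.client_name = src.client_name,
        tgt.domicile    = src.domicile
    WHEN NOT MATCHED THEN INSERT
        (client_key, client_name, domicile, valid_from)
        VALUES (src.client_key, src.client_name, src.domicile, current_date())
""")
```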



Nice-to-have

  • Knowledge of Terraform or other Infrastructure‑as‑code tools.


  • Experience with advanced analytics tools and with object-oriented or functional scripting languages such as Python, Java, C++, Scala, R, and others.


  • Experience using automated unit testing methodologies (a brief example follows this list).
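To make the automated unit testing point above concrete, here is a small, hypothetical pytest example for a data-cleansing helper. The function, its name and the test data are invented for illustration and are not part of the role or any existing codebase.

```python
# Hypothetical example of an automated unit test for a small data-cleansing
# helper, written with pytest. The function and test data are invented.
import pytest


def normalise_isin(isin: str) -> str:
    """Strip whitespace and upper-case an ISIN, rejecting obviously bad input."""
    cleaned = isin.strip().upper()
    if len(cleaned) != 12:
        raise ValueError(f"Invalid ISIN length: {cleaned!r}")
    return cleaned


def test_normalise_isin_strips_and_uppercases():
    assert normalise_isin(" gb00b03mlx29 ") == "GB00B03MLX29"


def test_normalise_isin_rejects_short_codes():
    with pytest.raises(ValueError):
        normalise_isin("GB00")
```

Tests like these would typically run in the continuous integration part of the DevOps process described earlier, so productionised code is reviewed and unit tested before release.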



What is in it for you?

We thrive on the challenge to be our best - progressive thinking to keep growing and working together to deliver trusted advice to help our clients thrive and communities prosper. We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.



  • Leaders who support your development through coaching and managing opportunities.


  • Opportunities to work with the best in the field.


  • Ability to make a difference and lasting impact.


  • Work in a dynamic, collaborative, progressive, and high‑performing team.



Agency Notice

RBC Group does not accept agency résumés. Please do not forward résumés to our employees or to any other company location. RBC Group only pays fees to agencies with which it has entered into a prior agreement to do so and, in any event, does not pay fees related to unsolicited résumés. Please contact the Recruitment function for additional details.


Inclusion and Equal Opportunity Employment

At RBC, we believe an inclusive workplace that has diverse perspectives is core to our continued growth as one of the largest and most successful banks in the world. Maintaining a workplace where our employees feel supported to perform at their best, effectively collaborate, drive innovation, and grow professionally helps to bring our Purpose to life and create value for our clients and communities. RBC strives to deliver this through policies and programs intended to foster a workplace based on respect, belonging and opportunity for all.


Join our Talent Community

Stay in the know about great career opportunities at RBC. Sign up and get customised information on our latest jobs, career tips and recruitment events that matter to you.


Expand your limits and create a new future together at RBC. Find out how we use our passion and drive to enhance the well‑being of our clients and communities at jobs.rbc.com.


RBC is currently inviting candidates to apply for this vacancy. Applying to this posting allows you to express your interest in this career opportunity at RBC. Qualified applicants may be contacted to review their résumé in more detail.





