Senior Platform Engineer (Infrastructure)

uSwitch
London
1 year ago
Applications closed


Description

Hybrid - 2 days per week in office (London Bridge/Tower Bridge area)

The RVU London cloud infrastructure team

We are committed to Open Source software in order to build services that help millions of customers save money and make confident decisions. As well as helping our customers, we give back to the community by open sourcing interesting projects we build that might benefit others.

We’re looking for an experienced Platform/Infrastructure Engineer to join our infrastructure platform team, known internally as ‘Airship’.

Our goal as a team is to enable our development teams to deliver services quickly, reliably and securely. We do this by running multiple Kubernetes EKS and Fargate clusters in AWS, creating common tooling to aid development tasks, and running shared services such as OpenSearch, Envoy, Vault and Prometheus, to name a few. The team has also recently expanded its scope to simplify data engineering in the organisation, applying the same techniques we used to ease building web applications to data pipelines, leveraging Argo Workflows and Argo Events. We have also completed a migration to GitHub Actions.
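As an illustration of the kind of data pipeline such a platform runs (this is a hedged sketch, not taken from the posting; all names and images are hypothetical), a minimal Argo Workflow looks like:

```yaml
# Minimal, hypothetical Argo Workflow: a single-step pipeline.
# The workflow name, template name and container image are illustrative only.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-pipeline-
spec:
  entrypoint: extract
  templates:
    - name: extract
      container:
        image: python:3.12-slim
        command: [python, -c]
        args: ["print('extract step running')"]
```

Submitting manifests like this (e.g. with `argo submit`) and triggering them from Argo Events is the usual pattern for offering "pipelines as a service" on top of Kubernetes.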

Day to day tasks will include:

  • Planning and working on our infrastructure platform: from maintenance to designing system improvements and adopting new technologies
  • Working with product engineering and data teams to design, build and improve the scalability and reliability of their systems, with an emphasis on providing the best developer experience (DevEx)
  • Developing tooling to help our teams work more efficiently

Requirements

The ideal candidate will have some of the following skills:

  • Extensive experience running Kubernetes clusters in production
  • Knowledge of Golang, Helm and Terraform (some knowledge of Python is definitely a plus)
  • Production experience with Cilium and/or eBPF, and networking in general
  • Extensive experience monitoring systems and their performance
  • The ability to debug large and complex systems, and to solve problems that affect a wide user base in a simple way
  • Experience with image vulnerability scanning and patching strategies for large systems
  • Experience with AWS multi-account system designs and tools like Crossplane and Control Tower
  • Familiarity with Argo Workflows or similar "data pipelines as a service" tools
  • Familiarity working with a variety of Cloud Native projects
  • Familiarity with GitHub Actions
  • Familiarity with OpenTelemetry

Our team has been featured at several conferences, including CNCF events and PlatformCon.

We were also featured at the London AWS Summit 2023 for our contributions to the EKS tooling community.

We also hosted the HashiCorp Terraform User Group meetup in London in April.

Examples of some projects we have worked on:

Short lived database credentials

Our running services previously relied on long-lived credentials to access data, which were rarely, if ever, rotated. We wanted human and pod identity to be used to grant short-lived credentials based on policies. We used Vault to build a solution to this problem, creating tooling to make it as easy as possible for developers to use these credentials with their services.
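The posting doesn't spell out the exact mechanism, but one common pattern for consuming short-lived credentials from Vault's database secrets engine is the Vault Agent Injector, driven by pod annotations. A hedged sketch (role name, secret path and image are all hypothetical):

```yaml
# Hypothetical pod using the Vault Agent Injector to mount short-lived
# database credentials. The Vault role "example-app" and the secrets-engine
# path "database/creds/example-app" are illustrative, not from the posting.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "example-app"
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/example-app"
spec:
  containers:
    - name: app
      image: example-app:latest
```

With this in place the application reads fresh credentials from a file the agent keeps renewed, rather than baking long-lived secrets into configuration.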

A service that integrates AWS IAM with Kubernetes

We have a lot of existing AWS resources whose access is limited using IAM. We used Kube2IAM initially but experienced race conditions that would hand different role credentials to pods. We started work on a replacement and have worked with the community to get it used elsewhere.
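For flavour, tools in this family (Kube2IAM and its successors) typically select the IAM role to assume from a pod annotation. A hedged sketch, with an invented account ID and role name:

```yaml
# Illustrative only: a Kube2IAM-style pod annotation requesting an IAM role.
# The account ID, role ARN and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader
  annotations:
    iam.amazonaws.com/role: "arn:aws:iam::123456789012:role/s3-read-only"
spec:
  containers:
    - name: app
      image: example-app:latest
```

The intermediary intercepts calls to the EC2 metadata endpoint and hands each pod credentials for only its annotated role, which is exactly where race conditions become dangerous if roles get mixed up between pods.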

Yggdrasil: an Envoy control plane for multi-cluster load balancing

For some of our more important applications it was important that they survive a total cluster outage. This meant we needed a way to easily route traffic to an application spread across multiple clusters, so we created Yggdrasil, a tool that configures Envoy nodes to route traffic between clusters based on Ingress resources.
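To make the input concrete: a control plane like this watches ordinary Ingress resources in each cluster and aggregates those that share a host into one Envoy route. A hedged example of such an Ingress (host, service name and port are illustrative):

```yaml
# A plain Kubernetes Ingress of the kind a multi-cluster control plane can
# aggregate. Hostname and backend service are hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```

If the same host appears in Ingresses across several clusters, the Envoy fleet can load-balance between them and keep serving traffic when one cluster goes down entirely.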

More confidence in the status of your deployments

It tracks deployments as they roll out and posts useful status updates into Slack. It does this by watching the Kubernetes API for namespaces and deployments with the correct annotations. When a new deployment rollout begins, and again when it completes, updates are posted to Slack. Any errors during the rollout are captured and included in the Slack message, which can be very useful for quickly debugging a failing deployment.
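The posting doesn't name the tool or its annotation keys, so the following is purely illustrative: a Deployment carrying an opt-in annotation that such a watcher could use to decide where to post updates.

```yaml
# Hypothetical: the annotation key below is invented for illustration; the
# real tool's annotation scheme is not given in the text.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  annotations:
    deploy-notifier.example.com/slack-channel: "#deploys"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: example-app:1.2.3
```

A watcher following this pattern lists Deployments, filters on the annotation, and compares `status.updatedReplicas` against `spec.replicas` to decide when a rollout has begun, completed, or stalled.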

You can also check out our blog to see a number of posts on what we’ve been up to.

Our commitment to you

At RVU, we are dedicated to developing valuable, inclusive, and user-friendly products and services for all. To achieve this it’s essential that our teams reflect the diverse range of people in our community. We believe in being the change we wish to see in the world, by embracing our differences and holding ourselves accountable to being open and inclusive teammates and wider community members.

Benefits

What we’ll give back to you:

We want to give you a great work environment; contribute back to both your personal and professional development; and give you great benefits to make your time at RVU even more enjoyable. Some of these benefits include:

  • Employer matching pension up to
  • Hybrid approach of in-office and remote working, and a “Work from Home” budget to help contribute towards a great work environment at home
  • Excellent maternity, paternity and adoption leave policy, for those key moments in your life
  • 25 days holiday (increasing to 30 days) + 2 days “My Time” per year
  • Up to 30 days per year “working from anywhere”
  • A healthy learning and training budget, as well as the chance to go to conferences around the world every year
  • Electric vehicle scheme
  • In office gym
  • Free breakfast in the office daily
  • Health insurance
  • Access to the Calm and Peppy app for physical and mental health
  • Regular events - from team socials to company-wide events with insightful external speakers, we want to make sure our colleagues continue to feel connected
