Senior Data Analyst

The Lettings Hub
Peterborough

The Lettings Hub is a prop-tech company providing technology-driven products and services to manage lettings. We work with a national network of letting agents and provide a range of services to help our clients with the move-in process. This includes completing the necessary checks on potential tenants before they move into a landlord's property, as well as a range of additional property insurance products and services to support the letting agent, tenant and landlord.

Job Description

We are looking for a hybrid Senior Data Analyst and Data Engineer who will own the analytical foundations of the business. This is a hands-on role combining SQL engineering, data modelling, automation, and dashboard development. You will build and maintain the data pipelines and analytical layers that support product, operations, finance, and compliance across a multi-platform ecosystem.

This role suits someone who enjoys both analysis and engineering, who wants to shape how data flows through an organisation, and who is motivated by improving systems rather than maintaining static reporting.

You will design and optimise data structures, build automated workflows, integrate systems, and develop reliable analytical models used across the group. You will reduce manual work, consolidate reporting, and turn disconnected data sources into well-defined, reusable datasets.

This is the core data role for the organisation. You will set standards, improve our tooling, and have ownership over how our data environment evolves.

This is a hands-on technical role with future leadership potential and the scope to influence how our data stack evolves.

Analytics and Modelling
  • Build analytical layers and dashboards in our BI platform (currently Metabase) that serve as single sources of truth.
  • Develop metric definitions and modelling logic used across multiple teams.
  • Provide deeper analysis of product, operational, and financial performance.
  • Create reusable data models that support scalable analytics and automation.
  • Implement validation, monitoring, and alerting to ensure data reliability.
  • Design, build, and maintain ETL and ELT pipelines using SQL and automation platforms.
  • Optimise database queries, indexes, and schemas to improve performance.
Automation and System Integration
  • Replace manual reporting, especially Excel based processes, with automated workflows.
  • Build and maintain integrations between internal systems, BI tools, finance systems, and external APIs.
  • Improve data flow across the organisation by designing efficient, traceable automation paths.
Governance and Quality
  • Document data models, definitions, and lineage to maintain clarity and transparency.
  • Support compliance with GDPR, FCA requirements, and internal governance processes.
First 60 to 90 Days: Technical Priorities
  • Consolidate all finance reporting into accurate, consistent analytical models and automated dashboards.
  • Review manual reporting workflows and convert them into automated, repeatable processes.
  • Map data sources, identify key reliability issues, and produce a remediation plan.
  • Begin standardising core entities and metrics across systems and teams.
Communication and Collaboration
  • Able to communicate technical decisions clearly to engineering, product, and finance teams.
  • Comfortable owning problems end to end, from defining the requirements through to delivering an automated solution.
About the Team

As a prop-tech business, we are always looking to the future. What can we do next to make letting agents' jobs quicker and easier, give tenants a smoother journey, and give landlords peace of mind? So, although we have more than 100 years of industry experience across the team, we are not stuck in the past.

The only way to achieve this is having great people, but also a great environment at work. This is why we prioritise wellbeing and culture and ensure all of our colleagues have the space to share ideas and grow with the business. We have quarterly awards to celebrate together as a whole team, a free snack station for those afternoons when only chocolate will help, and monthly challenges set by the CEO that bring out the competitive side in all of us!

If you think this is the type of environment you would thrive in, make sure to apply.

  • Some travel to Peterborough and Nottingham is required during the onboarding phase, and from time to time thereafter where required.

