Data Engineer - sports analytics/betting
Salary: 50k-65k (plus a very lucrative bonus on top)
Location: London or Leeds (very relaxed about hybrid/remote working)
We are a proprietary sports pricing and product provider, specialising in the development of intricate, simulation-driven pricing and risk systems that empower leading sports brands. As pioneers in player-level, play-by-play simulations and forecasting, we deliver the group's most advanced pricing and risk capabilities, with a particular focus on the US market.
The purpose of this role is to implement and maintain the data infrastructure that supports data-driven decision-making, innovation, and operational efficiency, while ensuring the data pipeline is secure, reliable, and scalable. The role holder will build and maintain high-performance data systems that are foundational to driving business growth. The successful candidate will have a strong grasp of modern data modelling practices, analytics tooling, and interactive dashboard development in Power BI and Plotly/Dash (or similar).
Key responsibilities:
Design and implement scalable data architectures and systems to support business intelligence and analytics needs.
Develop, optimize, and maintain ETL pipelines for efficient data integration and transformation.
Oversee data storage solutions, including backup and recovery strategies to ensure data integrity and availability.
Write and manage SQL queries to extract, manipulate, and analyze data for reporting and decision-making.
Implement robust data security and privacy protocols in compliance with relevant regulations and best practices.
Collaborate with clients and end users to gather requirements, provide updates, and deliver tailored data solutions.
What we're looking for:
Proficiency in writing clean, efficient, and maintainable SQL and Python code, particularly for data transformation and analytics use cases.
Understanding of data modelling concepts and the ability to design data models optimised for different use cases.
Experience working with and designing relational databases.
Experience implementing data pipelines that run on Kafka or an equivalent distributed event store and stream-processing platform.
Ability to debug and optimize failing or slow data pipelines and queries.
Systems integration experience: networking, data migrations, API integration and design.
Experience working with AWS S3, Athena, ECS, CloudFormation, Lambda, and CloudWatch.
Familiarity with analytics tools such as Power BI, Plotly/Dash, or similar for building interactive and impactful visualisations.