Data Analytics Engineer

easyJet
Luton

Luton/Hybrid

COMPANY

When it comes to innovation and achievement there are few organisations with a better track record. Join us and you’ll play a big part in a highly successful, fast-paced business that opens up Europe so people can exercise their get-up-and-go. With almost 300 aircraft flying over 1,000 routes to more than 32 countries, we’re the UK’s largest airline, the fourth largest in Europe and the tenth largest in the world. Set to fly more than 90 million passengers this year, we employ over 10,000 people. It’s big-scale stuff and we’re still growing.

JOB PURPOSE

The Analytics Engineer plays a crucial role in data-driven teams, ensuring that data is fit for purpose, curated for consumption, and discoverable within and beyond their immediate domain. They operate within domain-specific product teams, leveraging their analytical expertise and knowledge of available data tools to inform product development. They perform the heavy lifting of data preparation, freeing Data Scientists and Business Intelligence Analysts to focus on their core responsibilities: delivering insight and decision support.

JOB ACCOUNTABILITIES

  1. Shapes and manages the future direction of easyJet’s Data Engineering practice, defining, designing and deploying critical analytical pipelines that support high-quality, reusable data assets.
  2. Transforms data into actionable insights to support data analytics, data science, machine learning and AI business use cases, specialising in preparing unified cross-functional data sets that feed reports and dashboards visualising complex data meaningfully across multiple business domains.
  3. Ensures data quality and integrity by implementing rigorous testing protocols and validation processes. Keeps data compliant with data-protection policies and optimises queries in line with best practice.
  4. Collaborates with BI Analysts and Data Scientists to refine analytical methodologies and enhance reporting capabilities. Curates data sets and develops production-ready code for feature engineering.
  5. Acts as a technical Data Steward, ensuring data governance and discoverability and promoting the utility of data for analytics, data science, and AI. Ensures that data and analytical pipelines are well documented with end-to-end lineage and observability.
  6. Takes business stakeholders on the analytics engineering journey, communicating technical concepts to non-technical audiences in an easy-to-understand way.
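To give a flavour of the production-ready feature-engineering work described above, here is a minimal sketch in Python with pandas. The schema (booking_date, departure_date, route, fare) and function name are illustrative assumptions, not easyJet's actual data model:

```python
import pandas as pd

def engineer_booking_features(bookings: pd.DataFrame) -> pd.DataFrame:
    """Derive simple analytical features from raw booking records.

    Assumes illustrative columns: booking_date, departure_date, route, fare.
    """
    out = bookings.copy()
    out["booking_date"] = pd.to_datetime(out["booking_date"])
    out["departure_date"] = pd.to_datetime(out["departure_date"])
    # Lead time between booking and departure, in days -- a common
    # input to demand and pricing models
    out["days_to_departure"] = (out["departure_date"] - out["booking_date"]).dt.days
    # Mean fare per route, broadcast back to every row of that route
    out["route_mean_fare"] = out.groupby("route")["fare"].transform("mean")
    return out
```

In practice a transformation like this would live in a tested, version-controlled model (e.g. in dbt or a Databricks job) rather than an ad-hoc script, so downstream consumers can rely on it.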

KEY SKILLS REQUIRED

  1. Proficiency in SQL for querying, transforming, and managing data within databases.
  2. Experience in developing and optimizing ETL/ELT pipelines and using DBT for data transformation and modelling.
  3. Knowledge of data modelling techniques, including star and snowflake schemas, to structure data for efficient analysis.
  4. Familiarity with cloud platforms such as AWS or GCP, including services like Databricks, Redshift, BigQuery, and Snowflake.
  5. Strong Python skills for data manipulation, scripting, and automating tasks using libraries like Pandas and NumPy.
  6. Expertise in managing data architecture and processing within data warehouses and lakehouse platforms like Databricks, Redshift and Snowflake.
  7. Experience using Git for version control and managing changes to code and data models.
  8. Ability to automate tasks and processes using Python or workflow orchestration tools like Apache Airflow.
  9. Skill in integrating data from various sources, including APIs, databases, and third-party systems.
  10. Ensuring data quality through monitoring and validating data throughout the pipeline.
  11. Strong troubleshooting skills for resolving data pipeline issues and optimizing performance.
  12. Familiarity with Tableau and ThoughtSpot for analytics and ensuring data compatibility with analytical platforms.
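The data-quality monitoring and validation named in the list above can be as simple as assertion-style checks run inside the pipeline. A minimal sketch in Python with pandas, using a hypothetical flights schema (flight_id, passengers, departure_airport):

```python
import pandas as pd

def validate_flights(df: pd.DataFrame) -> list:
    """Run lightweight data-quality checks and return a list of failures.

    Columns are illustrative: flight_id, passengers, departure_airport.
    """
    failures = []
    # Completeness: the key identifier must never be null
    if df["flight_id"].isna().any():
        failures.append("null flight_id")
    # Uniqueness: exactly one row per flight
    if df["flight_id"].duplicated().any():
        failures.append("duplicate flight_id")
    # Validity: passenger counts must be non-negative
    if (df["passengers"] < 0).any():
        failures.append("negative passenger count")
    # Conformity: IATA airport codes are three uppercase letters
    if not df["departure_airport"].str.fullmatch(r"[A-Z]{3}").all():
        failures.append("malformed airport code")
    return failures
```

In a real deployment the same checks would typically be expressed as dbt tests or orchestrated as a task in an Airflow DAG, failing the run (or raising an alert) when the returned list is non-empty.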

What you’ll get in return

  • Competitive base salary
  • Up to 20% bonus
  • 25 days holiday
  • BAYE, SAYE & Performance share schemes
  • 7% pension
  • Life Insurance
  • Work Away Scheme
  • Flexible benefits package
  • Excellent staff travel benefits

Apply
Complete your application on our careers site.
We encourage individuality, empower our people to seize the initiative, and never stop learning. We see people first and foremost for their performance and potential and we are committed to building a diverse and inclusive organisation that supports the needs of all. As such we will make reasonable adjustments at interview through to employment for our candidates.

