Data Engineer

Edelman
London
3 months ago
Applications closed


Edelman is a voice synonymous with trust, reimagining a future where the currency of communication is action. Our culture thrives on three promises: boldness is possibility, empathy is progress, and curiosity is momentum. At Edelman, we understand that diversity, equity, inclusion and belonging (DEIB) transform our colleagues, our company, our clients, and our communities. We are in relentless pursuit of an equitable and inspiring workplace that is respectful of all, reflects and represents the world in which we live, and fosters trust, collaboration and belonging.

We are currently seeking a Data Engineer with 3-5 years' experience. The ideal candidate will be able to work independently within an Agile environment and will have experience with cloud infrastructure, leveraging tools such as Apache Airflow, Databricks, and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous.

Why You'll Love Working with Us:

At Edelman, we believe in fostering a collaborative and open environment where every team member's voice is valued. Our data engineering team thrives on building robust, scalable, and efficient data systems to power insightful decision-making. We are at an exciting point in our journey, focusing on designing and implementing modern data pipelines, optimizing data workflows, and enabling seamless integration of data across platforms. You'll work with best-in-class tools and practices for data ingestion, transformation, storage and analysis, ensuring high data quality, performance, and reliability. Our data stack leverages technologies like ETL/ELT pipelines, distributed computing frameworks, data lakes, and data warehouses to process and analyze data efficiently at scale. Additionally, we are exploring the use of Generative AI techniques to support tasks like data enrichment and automated reporting, enhancing the insights we deliver to stakeholders. This role provides a unique opportunity to work on projects involving batch processing, streaming data pipelines, and automation of data workflows, with occasional opportunities to collaborate on AI-driven solutions. If you're passionate about designing scalable systems, building reliable data infrastructure, and solving real-world data challenges, you'll thrive here. We empower our engineers to explore new tools and approaches while delivering meaningful, high-quality solutions in a supportive, forward-thinking environment.

Responsibilities:

- Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases.
- Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
- Leverage Databricks and cloud-native solutions for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.
- Use PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference.
- Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.
- Document data engineering workflows diagrammatically.
- Collaborate with other Data Engineers, Product Owners, Software Developers and Machine Learning Engineers to implement new product features by understanding their needs and delivering in a timely manner.
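Purely for illustration (this is not part of the role description): a minimal PySpark sketch of the kind of batch transformation work described above, reading raw events, applying a simple cleaning step, and writing a Delta table. Every path, table name, and column here is a hypothetical placeholder, not a description of Edelman's actual pipelines.

```python
# Minimal batch-transformation sketch in PySpark. All paths, table names
# and columns are hypothetical stand-ins. Assumes a cluster with Delta
# Lake available (e.g. Databricks).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-engagement-batch").getOrCreate()

# Read raw JSON events from a (hypothetical) S3 landing zone.
raw = spark.read.json("s3://example-bucket/landing/engagement/2024-01-01/")

# Basic cleaning: drop records without an id, normalise timestamps,
# and deduplicate on the event id.
cleaned = (
    raw
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
)

# Write to a Delta table so downstream consumers get ACID guarantees
# and time travel.
cleaned.write.format("delta").mode("append").saveAsTable(
    "analytics.engagement_events"
)
```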

Qualifications:

- Minimum of 3 years' experience deploying enterprise-level, scalable data engineering solutions.
- Strong examples of data pipelines developed independently end-to-end: from problem formulation and raw data through implementation, optimization, and results.
- Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
- Proven track record of implementing and managing the AI model lifecycle in a production environment.
- Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines.
- Experience with Databricks (Delta format, Unity Catalog).
- Advanced SQL and Python knowledge with associated coding experience.
- Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).
- Experience wrangling structured and unstructured file formats (Parquet, CSV, JSON).
- Understanding and implementation of best practices within ETL and ELT processes.
- Experience implementing data quality best practices using Great Expectations.
- Real-time data processing experience using Apache Kafka (or equivalent) is advantageous.
- Ability to work independently with minimal supervision.
- Takes initiative and is action-focused.
- Mentors and shares knowledge with junior team members.
- Collaborative, with a strong ability to work in cross-functional teams.
- Excellent communication skills, with the ability to engage stakeholders across varying interest groups.
- Fluency in spoken and written English.
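Again purely as illustration: a minimal Apache Airflow DAG sketch showing the extract-validate-load orchestration pattern this role involves, with a data-quality gate in the middle (where a Great Expectations checkpoint might sit). It assumes Airflow 2.4+ for the `schedule` argument; all task bodies, names, and metrics are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch: extract -> validate -> load, run daily.
# Function bodies, dataset names and the quality metric are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the day's raw files into a staging area.
    print("extracting raw data for", context["ds"])


def validate(**context):
    # Placeholder for a data-quality gate (e.g. a Great Expectations
    # checkpoint); raising here fails the run before anything is loaded.
    row_count = 42  # stand-in metric
    if row_count == 0:
        raise ValueError("no rows extracted, aborting load")


def load(**context):
    # Placeholder: merge validated data into the warehouse (e.g. Snowflake).
    print("loading validated data")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Failures in validate stop the load from ever running.
    t_extract >> t_validate >> t_load
```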

We are dedicated to building a diverse, inclusive, and authentic workplace, so if you're excited about this role but your experience doesn't perfectly align with every qualification, we encourage you to apply anyway. You may be just the right candidate for this or other roles.

