
Senior Data Engineer

Wavendon

At Unisys, we are dedicated to building high-performance, secure digital solutions that drive innovation and transformation for our clients. With a legacy of over a century in technology leadership, we empower businesses and governments around the world to achieve new levels of efficiency, security, and customer satisfaction. Our team thrives on solving complex challenges with cutting-edge technologies, and we are proud to be a trusted partner to some of the world’s most respected organizations.

We are seeking an experienced, hands-on Senior Engineer with specialized application development skills and a flair for Generative AI and AI/ML technologies. The engineer will be responsible for:

• Quickly convert an idea into a working demonstration.

• Lead the design and development of Generative AI-based applications.

• Work within a Scrum team to develop rapid prototypes.

• Evaluate, select, and integrate AI tools and frameworks that are essential for generative AI development.

• Stay up to date with the latest advancements in generative AI and contribute to our research efforts in this field.

• Learn, evaluate, and leverage a variety of AI tools and technologies.

• Be a self-starter, able to work independently as well as within a team to deliver results.

• Provide guidance and mentorship to junior AI engineers, fostering a culture of continuous learning.

• Demonstrate a passion for quality and productivity through efficient development techniques, standards, and guidelines.

• Design, train, fine-tune, and evaluate ML and NLP models (including small/medium language models) using robust experimentation practices, clear success metrics, and offline/online evaluation.

• Build and maintain data pipelines for ML (ingestion, validation, feature engineering, labeling) with reproducibility, lineage, and governance.

• Implement retrieval-augmented generation (RAG) and hybrid search, including embeddings, vector stores, and re-ranking to improve quality and guardrails (see the retrieval sketch after this list).

• Optimize LLM/SLM inference (quantization, distillation, batching, caching) for latency, throughput, and cost; select and tune serving stacks (e.g., vLLM/TGI/Triton) and GPU/CPU targets.

• Operationalize models with MLOps/AIOps best practices: CI/CD for models, model/version registries, feature stores, automated testing, canary/blue–green rollouts, A/B tests, and rollback strategies.

• Deploy, scale, and monitor models on Kubernetes (Helm, Rancher, AKS, K3S) and cloud AI platforms; instrument for observability (metrics, logs, traces), data/label drift, bias, hallucination and safety events.

• Implement security, compliance, and privacy-by-design for AI systems (PII handling, policy enforcement, content moderation, prompt/response safety, secret management).

• Partner with Data Engineering and Platform teams to ensure capacity planning, cost controls, and reliability (SLOs/SLIs) for model serving in production.
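
To illustrate the RAG responsibility above, the following is a minimal sketch of the retrieval and re-ranking half of such a pipeline. The model names, the tiny in-memory corpus, and the numpy-based "vector store" are illustrative assumptions only; a production system would use a real vector database such as pgvector or Azure AI Search and pass the retrieved passages into the LLM prompt.

import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

# Bi-encoder for embeddings and cross-encoder for re-ranking (model names are illustrative).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

# A tiny in-memory corpus standing in for a real vector store (pgvector, Azure AI Search, ...).
documents = [
    "Unisys builds secure digital solutions for clients worldwide.",
    "LoRA is a parameter-efficient fine-tuning technique for large language models.",
    "Kubernetes schedules and scales containerised workloads.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Embed the query, take the top-k passages by cosine similarity, then re-rank them."""
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector          # cosine similarity, since vectors are normalised
    top_k = np.argsort(scores)[::-1][:k]
    candidates = [documents[i] for i in top_k]
    rerank_scores = reranker.predict([(query, doc) for doc in candidates])
    order = np.argsort(rerank_scores)[::-1]
    return [candidates[i] for i in order]        # these passages would be placed in the LLM prompt

print(retrieve("How are containers scaled?"))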

You will be successful in this role if you have:

• Proficient skills working with Large Language Models (LLMs).

• Proficiency in evaluating, deploying, and using on-premises open-source technology stacks.

• Proficiency in Azure Cognitive Services, which includes Azure Language Service, Azure Text Analytics, Azure Speech Service, and other AI-related offerings.

• A good understanding of OpenAI's GPT models and how they integrate with Azure services, including knowledge of GPT's capabilities, limitations, and available features.

• Strong programming skills in Python for working with Azure services and data manipulation.

• Good understanding of databases, e.g. Postgres.

• Proficient knowledge of back-end programming languages such as Node.js, Python, and/or Golang.

• Proficient knowledge of at least one front-end technology such as React or Angular.

• Hands-on technical skills in back-end development.

• Experience with AWS and GCP is an added advantage.

• Demonstrated cluster management knowledge and experience with platforms including Kubernetes, Rancher, Helm, and Docker.

• Familiarity with Figma for UI mockups.

• Cloud certification at a developer or equivalent level is an added advantage.

• Azure AI Certification or equivalent is an added advantage.

• Familiarity with data engineering and machine learning models.

• Excellent written and verbal communication skills, flexibility, and a good attitude.

• Experience working with open-source projects, third-party libraries, SDKs and APIs

• Practical experience training/fine-tuning SLMs/LLMs (e.g., Llama, Mistral) using techniques such as LoRA/QLoRA, PEFT, and prompt-tuning; knowledge of evaluation frameworks and benchmark design (see the LoRA sketch after this list).

• Strong grasp of ML fundamentals (supervised/unsupervised learning, bias/variance, cross-validation), NLP techniques, and vector embeddings; hands-on with PyTorch/TensorFlow and scikit-learn.

• Experience with LLM/RAG tooling and NLP stacks (Hugging Face Transformers/Datasets, LangChain or LlamaIndex) and vector databases (pgvector/Postgres, Pinecone, Milvus, or Azure AI Search).

• Proficiency with experiment tracking and model registries (MLflow, Weights & Biases) and feature stores; comfort with disciplined experiment design and reproducibility.

• Model serving and optimization expertise: vLLM, ollama, Text Generation Inference, Triton, ONNX Runtime; quantization (bitsandbytes/INT8/FP8, GPTQ/AWQ), distillation, and caching/batching strategies.

• MLOps/AIOps experience building CI/CD for ML (Jenkins/Azure DevOps), automated testing of data and models, canary releases, model rollbacks, and online A/B experimentation.

• Production monitoring and observability (Prometheus/Grafana/OpenTelemetry/ELK), data and concept drift detection, safety/hallucination monitoring, and feedback loops for continual learning.

• Data engineering proficiency with Spark/Databricks and/or Kafka for streaming features, plus robust data quality/lineage tooling.

• Understanding of GPU/accelerator concepts (CUDA, NCCL, memory/throughput tradeoffs) and capacity planning for cost-effective scaling.

• Knowledge of AI safety, security, and compliance practices (RBAC, secrets management, PII handling, red-teaming, prompt injection defenses, content moderation).

• Experience with managed AI platforms (Azure AI Studio/Model Catalog, Azure ML, SageMaker, Vertex AI) and multi-cloud deployments is a plus.
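
As a companion to the LoRA/PEFT requirement above, here is a minimal sketch of attaching LoRA adapters with Hugging Face PEFT before fine-tuning. The base model name, target modules, and hyperparameters are illustrative assumptions, not values prescribed by this role.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative base model; any causal LM from the Hugging Face Hub could be used here.
base_model = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% of weights are trainable

# Fine-tuning would then proceed with transformers.Trainer or TRL's SFTTrainer on an
# instruction dataset; only the small adapter weights need to be saved and deployed.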

