Cloud Data Engineer

ELLIOTT MOSS CONSULTING PTE. LTD.
Penarth
Job Description

The Cloud Data Engineer is responsible for designing, building, and maintaining scalable, secure, and governed cloud-based data platforms. The role involves working with AWS, Databricks, and Informatica IDMC to support data ingestion, transformation, analytics, and reporting while ensuring data quality, security, and compliance.


The candidate will collaborate with cross-functional teams to deliver reliable data solutions that support healthcare analytics and digital transformation initiatives.


Key Responsibilities

  • Design and implement cloud-based data storage solutions including data lakes, data warehouses, and databases using AWS services such as Amazon S3, RDS, Redshift, and DynamoDB, and Databricks Delta Lake.
  • Develop, manage, and optimize data pipelines using AWS Glue, AWS Lambda, AWS Step Functions, Databricks, and Informatica IDMC.
  • Integrate data from multiple internal and external sources while ensuring data quality, governance, and compliance.
  • Build and maintain ETL/ELT processes using Databricks (Spark) and Informatica IDMC.
  • Monitor and optimize data workflows and query performance to meet scalability and performance requirements.
  • Implement data security controls, encryption, and compliance with data protection regulations.
  • Automate data ingestion, transformation, and monitoring processes.
  • Maintain documentation for data architecture, pipelines, and configurations.
  • Collaborate with data scientists, analysts, and software engineers to deliver data solutions.
  • Troubleshoot and resolve data-related issues to ensure data availability and integrity.
  • Optimize cloud resource usage to control operational costs.
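As a concrete illustration of the data-quality and pipeline work the responsibilities above describe, here is a minimal, framework-free Python sketch of a record-validation step. In practice this logic would typically run inside a Databricks (Spark) job or an Informatica IDMC mapping; all field names and rules here are hypothetical, chosen only to show the shape of the work.

```python
# Hypothetical data-quality gate a pipeline might apply before loading
# records into a warehouse table. Field names ("patient_id", "event_ts")
# and rules are illustrative, not taken from the job description.

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in a single record."""
    issues = []
    if not record.get("patient_id"):
        issues.append("missing patient_id")
    if record.get("event_ts") is None:
        issues.append("missing event_ts")
    return issues

def split_valid_invalid(records: list[dict]):
    """Partition records into (valid, rejected-with-reasons) lists,
    so valid rows can be loaded and rejects routed to a quarantine area."""
    valid, rejected = [], []
    for r in records:
        issues = validate_record(r)
        if issues:
            rejected.append((r, issues))
        else:
            valid.append(r)
    return valid, rejected

records = [
    {"patient_id": "p1", "event_ts": "2024-01-01T00:00:00Z"},
    {"patient_id": "", "event_ts": None},
]
valid, rejected = split_valid_invalid(records)
```

The same partition-then-quarantine pattern scales to Spark by expressing the checks as DataFrame filters rather than per-record Python.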

Job Requirements

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Minimum 8 years of relevant experience in data engineering.
  • Hands‑on experience with AWS services, Databricks, and/or Informatica IDMC.
  • Proficiency in Python, Java, or Scala.
  • Strong knowledge of SQL and NoSQL databases.
  • Experience with data modeling, schema design, and complex data transformations.
  • Strong analytical, problem‑solving, and communication skills.

Preferred Skills

  • Experience with PySpark on Databricks.
  • Knowledge of data governance and data cataloging tools, especially Informatica IDMC.
  • Familiarity with Tableau or other data visualization tools.
  • Experience with Docker and Kubernetes.
  • Understanding of DevOps and CI/CD pipelines.
  • Experience using Git or other version control systems.

