Kafka Data Architect (Streaming and Payments)

IBU
London

We are seeking a Hands-On Data Architect to design, build, and operate a high-scale, event-driven data platform supporting payment and channel operations. This role combines strong data architecture fundamentals, deep streaming expertise, and hands-on engineering in a regulated, high-throughput environment.

You will lead the evolution from legacy data ingestion patterns to a modern AWS-based lakehouse and streaming architecture, handling tens of millions of events per day, while applying domain-driven design (DDD) and data-as-a-product principles.

This is a builder role, not a documentation-only architect position.

Key Responsibilities

Data Products & Architecture

  • Design and deliver core data products, including:
      • Channel Operations Warehouse (high-performance, ~30 days retention)
      • Channel Analytics Lake (long-term retention, 7+ years)
  • Define and expose data APIs and status/statement services with clear SLAs.
  • Architect an AWS lakehouse using S3, Glue, Athena, and Iceberg, with Redshift for BI and operational analytics.
  • Enable dashboards and reporting using Amazon QuickSight (or equivalent BI tools).

Streaming & Event-Driven Architecture

  • Design and implement real-time streaming pipelines using:
      • Kafka (Confluent or AWS MSK)
      • AWS Kinesis / Kinesis Firehose
      • EventBridge for AWS-native event routing
  • Define patterns for:
      • Ordering, replay, retention, and idempotency
      • At-least-once and exactly-once processing
      • Dead-letter queues (DLQs) and failure recovery
  • Implement CDC pipelines from Aurora PostgreSQL into Kafka and the lakehouse.
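To illustrate the consumer-side patterns listed above (idempotency, at-least-once delivery, and DLQ routing), here is a minimal, hedged Python sketch. The `Event` shape, in-memory `seen_ids` store, and `dead_letters` list are hypothetical stand-ins for a real Kafka consumer's durable state and a DLQ topic:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    # Hypothetical payment event; a real pipeline would use an Avro/Protobuf record.
    event_id: str
    payload: dict

@dataclass
class IdempotentProcessor:
    # In production, processed IDs would live in a durable store
    # (e.g. a database or compacted topic), not in process memory.
    seen_ids: set = field(default_factory=set)
    dead_letters: list = field(default_factory=list)
    results: list = field(default_factory=list)

    def handle(self, event: Event) -> None:
        if event.event_id in self.seen_ids:
            return  # duplicate delivery under at-least-once semantics: skip
        try:
            self.results.append(self._process(event))
            self.seen_ids.add(event.event_id)  # mark done only after success
        except Exception:
            self.dead_letters.append(event)  # route poison messages to a DLQ

    def _process(self, event: Event) -> dict:
        if "amount" not in event.payload:
            raise ValueError("malformed event")
        return {"id": event.event_id, "amount": event.payload["amount"]}

proc = IdempotentProcessor()
proc.handle(Event("e1", {"amount": 100}))
proc.handle(Event("e1", {"amount": 100}))  # redelivery: ignored as a duplicate
proc.handle(Event("e2", {}))               # malformed: routed to the DLQ
```

The same dedup-then-commit ordering is what makes replayed or redelivered events safe to consume.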

Event Contracts & Schema Management

  • Define and govern event contracts using Avro or Protobuf.
  • Manage schema evolution through Schema Registry, including:
      • Compatibility rules
      • Versioning strategies
      • Backward and forward compatibility
  • Align domain events with Kafka topics and analytical storage models.
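As a simplified illustration of the compatibility rules a Schema Registry enforces, the Python sketch below checks one backward-compatibility condition: fields added in a new schema version must carry defaults so records written under the old schema can still be read. The dict-based schema shape is an assumption for illustration, not the Avro wire format:

```python
def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    """Simplified check: a new reader schema stays backward compatible with
    data written under the old schema only if every field it adds has a default."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for f in new_schema["fields"]:
        if f["name"] not in old_fields and "default" not in f:
            return False  # added a required field: old records can't be decoded
    return True

# Hypothetical payment-event schema versions
v1 = {"fields": [{"name": "payment_id"}, {"name": "amount"}]}
v2_ok = {"fields": v1["fields"] + [{"name": "currency", "default": "GBP"}]}
v2_bad = {"fields": v1["fields"] + [{"name": "currency"}]}
```

A real registry applies richer rules (type promotion, forward and full compatibility), but the defaults-on-new-fields principle is the core of backward compatibility.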

Migration & Modernization

  • Assess existing "as-is" ingestion mechanisms (APIs, files, SWIFT feeds, Kafka, relational stores).
  • Design and execute migration waves, cutover strategies, and rollback runbooks.
  • Ensure minimal disruption during platform transitions.

Governance, Quality & Security

  • Apply data-as-a-product and data mesh principles:
      • Clear ownership
      • Quality SLAs
      • Access controls
      • Retention and lineage
  • Implement security best practices:
      • Data classification
      • KMS-based encryption
      • Tokenization where required
      • Least-privilege IAM
      • Immutable audit logging

Observability, Reliability & FinOps

  • Build observability for streaming and data platforms using:
      • CloudWatch, Prometheus, Grafana
  • Track operational KPIs:
      • Throughput (TPS)
      • Processing lag
      • Success/error rates
      • Cost per million events
  • Define actionable alerts, dashboards, and operational runbooks.
  • Design for high availability with multi-AZ / multi-region patterns, meeting defined RPO/RTO targets.
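Two of the KPIs above are simple enough to pin down in code. The Python sketch below is an assumed formulation (the metric definitions are not specified in this posting): cost per million events normalises spend across throughput levels, and processing lag measures produce-to-consume delay:

```python
def cost_per_million_events(total_cost_usd: float, events_processed: int) -> float:
    """FinOps KPI: platform spend normalised per million events processed."""
    return total_cost_usd / events_processed * 1_000_000

def processing_lag_seconds(produced_at: float, consumed_at: float) -> float:
    """Lag KPI: delay between event production and consumption (epoch seconds)."""
    return consumed_at - produced_at

# Example: $450/day of platform spend against 30M events/day
kpi = cost_per_million_events(450.0, 30_000_000)
```

Tracking cost per million events (rather than raw spend) is what lets alerts stay meaningful as daily volume fluctuates.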

Hands-On Engineering

  • Write and review production-grade code using:
      • Python, Scala, SQL
      • Spark / AWS Glue
      • AWS Lambda & Step Functions
  • Build infrastructure using Terraform (IaC).
  • Implement CI/CD pipelines (GitLab, Jenkins).
  • Enforce automated testing, performance profiling, and secure coding practices.

Required Skills & Experience

Streaming & Event-Driven Systems

  • Strong experience with Kafka (Confluent) and/or AWS MSK
  • Experience with AWS Kinesis / Firehose
  • Deep understanding of:
      • Event ordering and replay
      • Delivery semantics
      • Outbox and CDC patterns
  • Practical experience using EventBridge for event routing and filtering

AWS Data Platform

  • Hands-on experience with:
      • S3, Glue, Athena
      • Redshift
      • Step Functions and Lambda
  • Familiarity with Iceberg-based lakehouse architectures
  • Experience building streaming pipelines into S3 and Glue

Payments & Financial Messaging

  • Experience with payments data and flows
  • Knowledge of ISO 20022 messages:
      • PAIN, PACS, CAMT
  • Understanding of payment lifecycle, reconciliation, and statements
  • Exposure to API, file-based, and SWIFT-based integration channels

Data Architecture Fundamentals (Must-Have)

  • Logical data modeling (ER diagrams, normalization up to 3NF/BCNF)
  • Physical data modeling:
      • Partitioning strategies
      • Indexing
      • SCD types
  • Strong understanding of:
      • Transactional vs analytical schemas
      • Star schema, Data Vault, and 3NF trade-offs
  • Practical experience with:
      • CQRS and event sourcing
      • Event-driven architecture
      • Domain-driven design (bounded contexts, aggregates, domain events)
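The last three bullets can be tied together with a minimal event-sourcing sketch in Python. The `PaymentAggregate` and its events are hypothetical, but the mechanic is the general one: current state is derived purely by replaying immutable domain events, which is also what makes Kafka-topic replay and separate CQRS read models possible:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentInitiated:  # domain events are immutable facts
    payment_id: str
    amount: int

@dataclass(frozen=True)
class PaymentSettled:
    payment_id: str

class PaymentAggregate:
    """Write-side aggregate: state is rebuilt by replaying its event stream."""

    def __init__(self) -> None:
        self.amount = 0
        self.status = "new"

    def apply(self, event) -> None:
        # State transitions happen only in response to recorded events.
        if isinstance(event, PaymentInitiated):
            self.amount = event.amount
            self.status = "initiated"
        elif isinstance(event, PaymentSettled):
            self.status = "settled"

    @classmethod
    def replay(cls, events) -> "PaymentAggregate":
        agg = cls()
        for e in events:
            agg.apply(e)
        return agg

# Replaying the event log yields the aggregate's current state
log = [PaymentInitiated("p-1", 2500), PaymentSettled("p-1")]
payment = PaymentAggregate.replay(log)
```

Because the event log is the source of truth, any number of read models (warehouse tables, dashboards, status APIs) can be projected from the same stream independently of the write side.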
