Intermediate Java Developer (Big Data)

Global Relay
Norwich


Who we are

For over 20 years, Global Relay has set the standard in enterprise information archiving with industry-leading cloud archiving, surveillance, eDiscovery, and analytics solutions. We securely capture and preserve the communications data of the world’s most highly regulated firms, giving them greater visibility and control over their information and ensuring compliance with stringent regulations.


Though we offer competitive compensation, benefits, and all the other perks one would expect from an established company, we are not your typical technology company. Global Relay is a career-building company. A place for big ideas. New challenges. Groundbreaking innovation. It's a place where you can genuinely make an impact, and be recognized for it.


We believe great businesses thrive on diversity, inclusion, and the contributions of all employees. To that end, we recruit candidates from different backgrounds and foster a work environment that encourages employees to collaborate and learn from each other, completely free of barriers.


Your Role

Joining the Reporting product line, you would work as a member of a small, highly focused team responsible for delivering backend services for a highly scalable reporting and analytics platform, using leading-edge technologies. This is an opportunity to work in an environment that encourages creative thinking and autonomy. We encourage our developers to think beyond a single component and build complete system solutions. Challenge yourself by learning new technologies, and apply your skills across our different projects and application domains. If you are committed to code that is clean, well-tested, well-reviewed, performant, and secure, then you'll fit in around here.


Tech Stack

  • Microservices container platforms (OpenShift, Kubernetes, CRC, Docker)
  • File formats (Avro, Parquet, ORC)
  • Large-scale data processing (Kafka); a brief consumer sketch follows this list
  • Large-scale data platforms (Hadoop, Trino, Spark)
  • Dependency injection frameworks (Spring, Guice)
  • Log analysis and monitoring (Splunk)
  • CI/CD and build tools (Maven, Git, Jenkins)
  • Frameworks (Vert.x)
  • Text search engines (Lucene, Elasticsearch)
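
To give a flavour of this stack, here is a minimal sketch of a plain Java Kafka consumer. The broker address, consumer group, and topic name are illustrative assumptions for the example, not details of the role:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReportEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address, group id, and topic below are assumptions for illustration.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "reporting-service");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("report-events"));
            while (true) {
                // Poll the broker and print each record's coordinates and payload.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

In a production service this loop would typically run on a dedicated thread with graceful shutdown, offset-commit strategy, and error handling.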

Your Responsibilities

  • Develop ETL and ELT jobs and processes.
  • Support data analysis and design efforts within the wider team.
  • Migrate existing services to microservices, with the goal of reducing complexity at the design and architecture level.
  • Write unit and integration tests for your Java code (see the test sketch after this list).
  • Collaborate with testers on the development of functional test cases.
  • Develop deployment systems for Java-based services.
  • Collaborate with product owners on user story generation and refinement.
  • Monitor and support the operation of production systems.
  • Participate in knowledge sharing activities with colleagues.
  • Take part in pair programming and peer code reviews.
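
As a hedged illustration of the unit testing this role involves, here is a minimal JUnit 5 test. The ReportAggregator and MessageCount types are hypothetical and defined inline so the example is self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.junit.jupiter.api.Test;

class ReportAggregatorTest {

    // Hypothetical domain type and aggregator, defined inline for a runnable example.
    record MessageCount(String tenant, long count) {}

    static class ReportAggregator {
        // Sums message counts per tenant using a grouping collector.
        Map<String, Long> sumByTenant(List<MessageCount> counts) {
            return counts.stream().collect(
                    Collectors.groupingBy(MessageCount::tenant,
                            Collectors.summingLong(MessageCount::count)));
        }
    }

    @Test
    void sumsMessageCountsByTenant() {
        ReportAggregator aggregator = new ReportAggregator();
        Map<String, Long> totals = aggregator.sumByTenant(List.of(
                new MessageCount("acme", 3),
                new MessageCount("acme", 2),
                new MessageCount("globex", 7)));

        // Counts for the same tenant are summed; distinct tenants stay separate.
        assertEquals(5L, totals.get("acme"));
        assertEquals(7L, totals.get("globex"));
    }
}
```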

About You

Required Experience:



  • Minimum 3 years of Java development experience in an Agile environment, building scalable applications and services with a focus on big data solutions and analytics.
  • 2+ years' experience developing ETL/ELT processes using relevant technologies and tools.
  • Experience working with data lake and data warehouse platforms for both batch and streaming data sources.
  • Experience with ANSI SQL or other SQL dialects.
  • Experience processing unstructured, semi-structured, and structured data.
  • A good understanding of ETL/ELT principles, best practices, and common patterns.
  • Experience with big data technologies such as Hadoop, Spark, and other Apache projects (a brief Spark example follows this list).
  • Experience with RESTful services.
  • A passion for Test-Driven Development.
  • Experience with CI/CD pipelines.
  • Exposure to data visualisation and Business Intelligence solutions.
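
As a sketch of the kind of ETL/ELT work described above, here is a minimal Spark batch job in Java that reads Parquet, filters, and writes Avro. The input and output paths and column names are assumptions for the example, and writing Avro requires the spark-avro package on the classpath:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ArchiveEtlJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("archive-etl")
                .getOrCreate();

        // Input path and column names are illustrative assumptions.
        Dataset<Row> messages = spark.read().parquet("s3a://archive/messages/");

        // Keep only flagged messages and project the columns the report needs.
        Dataset<Row> flagged = messages
                .filter(messages.col("flagged").equalTo(true))
                .select("tenant", "messageId", "sentAt");

        // Writing Avro assumes the spark-avro package is available at runtime.
        flagged.write().format("avro").save("s3a://archive/flagged-messages/");

        spark.stop();
    }
}
```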

What you can expect

At Global Relay, there’s no ceiling to what you can achieve. It’s the land of opportunity for the energetic, the intelligent, the driven. You’ll receive the mentoring, coaching, and support you need to reach your career goals. You’ll be part of a culture that breeds creativity and rewards perseverance and hard work. And you’ll be working alongside smart, talented individuals from diverse backgrounds, with complementary knowledge and skills.


Global Relay is an equal‑opportunity employer committed to diversity, equity, and inclusion. We seek to ensure reasonable adjustments, accommodations, and personal time are tailored to meet the unique needs of every individual.



