
Senior Data Engineer (m/f/nb) - Big Data Platform

Conrad Electronic SE
Location: 92242 Hirschau, Germany
Published: Today
Full-time

At Team Conrad Big Data, we develop and operate a platform that serves as the foundation for data exchange, reporting, analysis, and AI. Topics such as high performance, scalability, central data storage, company-wide usability, data provision for internal and external apps, and self-service drive us forward every day. Agility, iterative approaches, continuous improvement, and open feedback are part of our daily routine.

Activities

  • Big Data Solution Design & Streaming: You will design and implement scalable big data solutions. A key focus will be on the architecture and operation of event streaming platforms (Confluent Kafka) to ensure real-time data processing; a minimal producer sketch in Java follows this list.
  • Software Engineering & AI Integration: You will use your in-depth knowledge of Java to develop complex data services. You will also drive our innovative strength by designing and implementing AI agents that automate processes and take our data usage to a new level.
  • Data Modeling & ETL Pipelines: You will be responsible for designing efficient data models and orchestrating ETL processes (Extract, Transform, Load). In doing so, you will combine the flexibility of code (Java) with the efficiency of tools such as Matillion ETL.
  • Data Integration & Quality: You will integrate heterogeneous data sources into our data warehouse and establish strict standards for data quality and consistency so that data scientists and business analysts always have access to reliable data.
  • Performance & Optimization: You will proactively monitor the performance of our platform (GCP/BigQuery), optimize queries, and ensure that our systems scale efficiently even with growing data volumes and high loads; see the query sketch after this list.
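
To give a concrete flavor of the streaming work described above, here is a minimal sketch of a Kafka producer using the standard Java client (Confluent platforms build on the Apache Kafka client API). The broker address, topic name, and payload are illustrative placeholders, not details of the actual platform.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address and topic name below are placeholders.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // Wait for acknowledgement from all in-sync replicas for durability.
            props.put("acks", "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("order-events", "order-42", "{\"status\":\"shipped\"}");
                // send() is asynchronous; get() blocks until the broker confirms the write.
                producer.send(record).get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }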
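
Likewise, a small sketch of the kind of query work this involves on GCP/BigQuery, using Google's official Java client library. The dataset, table, and column names are hypothetical; the point is that filtering on the date column of a partitioned table keeps scanned bytes, and therefore cost, under control.

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.QueryJobConfiguration;
    import com.google.cloud.bigquery.TableResult;

    public class DailyRevenueQuery {
        public static void main(String[] args) throws InterruptedException {
            // Uses application-default credentials for authentication.
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

            // Hypothetical dataset/table; restricting the date range limits
            // the bytes scanned on a partitioned table.
            String sql =
                    "SELECT order_date, SUM(amount) AS revenue "
                  + "FROM `my_dataset.orders` "
                  + "WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) "
                  + "GROUP BY order_date ORDER BY order_date";

            QueryJobConfiguration config = QueryJobConfiguration.newBuilder(sql).build();
            TableResult result = bigquery.query(config);
            result.iterateAll().forEach(row ->
                    System.out.println(row.get("order_date").getStringValue()
                            + " -> " + row.get("revenue").getDoubleValue()));
        }
    }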

Requirements

  • Several years of professional experience as a big data engineer, ideally with a focus on Google Cloud Platform (GCP).
  • In-depth experience in backend development with Java (must-have). Knowledge of Python is a welcome plus.
  • You have proven expertise in event streaming architectures, especially in working with Confluent Kafka.
  • You have already gained practical experience in the development and implementation of AI agents and understand how to integrate LLMs or agentic workflows into data products.
  • Solid experience in designing data models and ETL pipelines, preferably using Matillion ETL.
  • Strong analytical skills and the drive to independently solve complex technological challenges (from streaming to AI).

Application Process

We will start with a short screening call to find out whether it is a good fit for both sides. You will then receive a technical task to solve, which you will discuss with the team.