Data Engineer Kafka

Swisslinx

Views: 204

Updated: 26-03-2024

Location: Zürich, ZH

Category: IT - Software

Industry: Human resources and recruitment


Job content

As one of the top suppliers to our client, a prestigious bank in Basel, Swisslinx are looking for a highly motivated and technology-savvy Data Engineer with experience designing and implementing Apache Kafka as part of a Cloudera Data Platform distribution, to help implement the bank's next-generation data and analytics systems.

This is a rolling contract role in Basel, Switzerland, initially running until the end of March 2022, with the possibility of extension for up to five years.

Joining a highly skilled and motivated team of data and analytics professionals, you will implement, maintain and support shared data platforms and bespoke analytical systems, exploiting cutting-edge technologies and modern software development practices.

The role is to develop a data transfer backbone to support event processing and streaming, which in turn supports multiple projects that aim to modernise the Bank’s data management infrastructure, including:

  • Building a new modern data warehouse and data lake architecture
  • Offering innovative data lab environments for self-service analytics
  • Enhancing capabilities to support machine learning and AI use cases
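
To make the event-streaming backbone described above concrete, the following minimal Python sketch publishes a single event to Kafka using the confluent-kafka client. The broker address, topic name and payload are illustrative assumptions, not details of the role:

    # Minimal sketch: publish one event to a Kafka-based data backbone.
    # Broker address, topic name and payload are illustrative placeholders.
    import json

    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def delivery_report(err, msg):
        # Called once per message to confirm delivery or surface an error.
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

    event = {"trade_id": 42, "amount": 1000.0, "currency": "CHF"}
    producer.produce(
        "trades",  # hypothetical topic name
        key=str(event["trade_id"]),
        value=json.dumps(event).encode("utf-8"),
        callback=delivery_report,
    )
    producer.flush()  # block until all queued messages are delivered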

Responsibilities include:

  • Design, implementation and maintenance of an enterprise installation of Kafka
  • Participating in the creation and execution of data governance, including message design, schema validation, and versioning (a brief schema-registration sketch follows this list)
  • Working with developers from other business unit technical teams to assist them in implementing functional solutions using Kafka
  • Supporting the establishment of a platform SLA, including defining non-functional requirements
  • Working with the network, data centre and infrastructure teams to optimise hardware solutions for the Kafka installation
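
As a rough illustration of the message design, schema validation and versioning work mentioned above, this Python sketch registers a versioned Avro schema with a Confluent Schema Registry. The registry URL, subject name and schema are assumptions for the example:

    # Sketch: register a versioned Avro schema with Confluent Schema Registry.
    # Re-registering the same subject with a changed schema creates a new
    # version; the registry rejects changes that break the configured
    # compatibility rules. URL, subject and schema are placeholders.
    from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

    schema_str = """
    {
      "type": "record",
      "name": "Trade",
      "fields": [
        {"name": "trade_id", "type": "long"},
        {"name": "amount", "type": "double"},
        {"name": "currency", "type": "string", "default": "CHF"}
      ]
    }
    """

    client = SchemaRegistryClient({"url": "http://localhost:8081"})
    schema_id = client.register_schema("trades-value", Schema(schema_str, "AVRO"))
    print(f"Registered schema id: {schema_id}")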

This is a great chance to work in a highly collaborative environment, not just among the team but with expert economists, technologists, data scientists and statisticians – and counterparts in other international organisations and central banks.

We’re looking for a candidate who is passionate about data and analytics, able to think outside the box, and keen to introduce new ideas into the team.

To be considered for this role, you will need a broad set of data engineering skills, experience with agile software development methodologies, and the following:

  • Profound experience in designing, implementing and maintaining Apache Kafka as part of a Cloudera Data Platform distribution
  • Data pipeline and workflow management tools: Airflow, Rundeck, NiFi, etc.
  • Stream-processing systems: Kafka, Spark Streaming, etc. (a brief streaming sketch follows this list)
  • Experience in designing and developing high-volume, mission-critical transactional data integration solutions
  • Knowledge of message queuing, stream processing and highly scalable ‘big data’ data stores
  • Building processes supporting data transformation, data structures, metadata, dependency and workload management
  • Proven experience of designing middleware message governance, topic subscription and management, including schema validation and version compatibility
  • Agile methodologies like Scrum
  • Knowledge of service-oriented architecture and experience with API creation and management technologies (REST, SOAP, etc.)
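
For the stream-processing systems listed above, a minimal Spark Structured Streaming job that consumes a Kafka topic might look like the sketch below. It assumes the spark-sql-kafka connector is on the Spark classpath; the broker address and topic name are placeholders:

    # Sketch: count events from a Kafka topic with Spark Structured Streaming.
    # Requires the spark-sql-kafka connector on the Spark classpath; broker
    # and topic names are illustrative placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "trades")  # hypothetical topic
        .load()
    )

    # Kafka delivers key/value as binary; cast the payload to a string and
    # keep a single running count for brevity.
    counts = (
        raw.selectExpr("CAST(value AS STRING) AS payload")
        .agg(F.count("*").alias("events_seen"))
    )

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()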

The following skills are desirable:

  • Exposure to data modelling and business intelligence systems (dimensional modelling, data mining, predictive analytics)
  • Knowledge of standard data architecture patterns such as Data Vault and time-series data modelling
  • Any experience of manipulating, processing and extracting value from large disconnected datasets, or working with unstructured datasets
  • SQL knowledge and experience working with relational databases, including query authoring and working familiarity with a variety of database systems
  • Any experience with on-premise big data distributions such as Cloudera or Hortonworks
  • Relational SQL and NoSQL databases: SQL Server, Postgres, Cassandra, etc.
  • Big data technologies: Hadoop, Spark, etc.
  • Analytical tools: Dataiku, Tableau, Power BI, Hue
  • Object-oriented and functional programming languages: Python, Java, C++, Scala, etc.
  • Designing and developing data management and analytical systems
  • Building and optimising ‘big data’ data pipelines, architectures and data sets (a brief pipeline sketch follows this list)
  • Performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
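
For the pipeline-building and workflow-management skills above, here is a minimal Airflow 2.x DAG sketch in Python. The DAG id, schedule and task bodies are illustrative assumptions only:

    # Sketch: a two-step extract/transform pipeline as an Airflow 2.x DAG.
    # DAG id, schedule and task logic are illustrative placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw events from the streaming backbone")

    def transform():
        print("reshape events into warehouse tables")

    with DAG(
        dag_id="daily_trades_pipeline",  # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task  # run extract before transform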

Are you interested in working in an international environment at one of the leading financial companies in Switzerland? Then apply now! We look forward to receiving your full application.

Deadline: 10-05-2024
