Job type: Workload: 80-100% (also advertised as 90-100%)


Job content

About Swiss Re

Swiss Re is one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime.

At Swiss Re we combine experience with creative thinking and cutting-edge expertise to create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 13,000 employees across the world.

We offer a flexible working environment where curious and adaptable people thrive. Are you interested in joining us?


About the Role


We, the Stargate Platform Engineering team, are searching for an enthusiastic Big Data Software Engineer who is ready for an exciting career! We focus on optimizing data-intensive distributed systems, building and improving data pipelines, and making our platform more efficient, secure and reliable. Our team members range from PhDs in Mathematics to contributors to open-source Apache Big Data projects. Whether you are a fresh graduate or a more experienced professional, we look forward to getting to know you.


If you join us, you will be able to participate in a variety of interesting initiatives. For example, you will:


  • Help us improve the resilience of the world by modelling, interpreting and studying real-world data, from COVID-19 analytics to models for floods and earthquakes. Transforming terabyte-scale datasets using hundreds of cores is a normal day's work.
  • Write Big Data pipelines and analyse how data lineage is handled in a global company at petabyte scale.
  • Provide architecture and coding guidance in many other exciting projects in a collaborative environment with colleagues from Swiss Re’s international offices, building the data backbone of the company.
  • Work on a day-to-day basis with Palantir software engineers, using state-of-the-art concepts for solving technical problems.

About the Team


We are a diverse team and our passion to learn is what connects us. We are looking for a person who loves to learn and we will support you in this journey! We reserve time for studying, getting certificates, attending conferences, and we organize our own upskilling sessions as well. In addition to projects for our clients, we also work on innovation projects which allow us to learn new methodologies and gain fresh experience.


About You


Nobody is perfect or meets 100% of the requirements. However, if you meet some of the criteria below and are genuinely curious about the world of data engineering, we will be happy to meet you.


To be successful in the role, you need the following technical skills and knowledge:


  • Degree in Computer Science, Computer/Electronics Engineering, Applied Mathematics, Physics or related quantitative field. We welcome different levels, including graduates.
  • Deep knowledge of Python and/or Java. You define data pipelines in a functional programming style; flatMap operations and monoids are topics you are familiar with.
  • Knowledge of algorithms and scalability: you are passionate about optimizing your code and minimizing both run-time and space complexity. Identifying I/O bottlenecks or parallelisation overheads is business as usual for you.
  • You like to use the right tool for the job: modifying data pipelines to reduce data shuffling, changing algorithms to improve parallelism, or applying an event loop to reduce time spent on blocking code is not rocket science for you.
  • Fascinated by Big Data concepts and engineering, with a proven track record of experience with Apache Big Data projects (especially Spark).
  • An IT professional with solid practical experience with databases and Linux, and know-how in optimizing query planning, whether on Spark or on an RDBMS.
  • Excellent technical writing skills: you can distil complex technical issues into small, clear nuggets of information, and you communicate effectively with other technical parties.
  • Proficient English skills and a mature communication style are required.

The following items are a plus:


  • Background in Software Engineering.
  • Experience in distributed computing/databases.
  • Previous experience with high-performance computing (e.g. other Apache Big Data frameworks such as Flink, Kafka or Hadoop) and functional programming (e.g. Scala).
  • AWS, Azure, GCP, Palantir or Databricks certificates.
  • Experience with Spring/Spring Boot, Hexagonal Architecture, Domain-Driven Design, OpenAPI, or SOLID-based frameworks.

We look forward to receiving your application!

Our final offer to you will be set up fairly, considering the skills and experience that you bring to the Swiss Re Group.

You can look forward to extra rewards and benefits including an attractive performance-based bonus.

We are an equal opportunity employer, and we value diversity at our company. Our aim is to live visible and invisible diversity – diversity of age, race, ethnicity, nationality, gender, gender identity, sexual orientation, religious beliefs, physical abilities, personalities and experiences – at all levels and in all functions and regions. We also collaborate in a flexible working environment, providing you with a compelling degree of autonomy to decide how, when and where to carry out your tasks.


We provide feedback to all candidates via email. If you have not heard back from us, please check your spam folder.


Deadline: 09-06-2024

Apply