Join KEYONIQ and become part of a fast-growing HealthTech start-up. At the forefront of generative AI in HealthTech, we aim to revolutionize the pathway of aging. As a Junior Data Engineer, you will make a significant impact in a start-up working to redefine the status quo of how we age. We offer a competitive compensation package and a collaborative work environment characterized by innovation, bright minds, and opportunities for growth and career advancement. If you are passionate about AI/ML and HealthTech, enjoy a dynamic, challenging, and exciting environment, and want to be part of an innovative journey changing the way we think about aging, we encourage you to apply.

Responsibilities:

  • Develop and study graph AI algorithms for graph data analysis, graph mining, and reasoning
  • Perform data analysis on data collected from alternative data sources and existing databases
  • Support data-driven decision making
  • Identify project requirements, survey available technologies, and check feasibility regarding requirements and resources
  • Contribute hands-on development to the project, run quality feedback processes, and discuss and find solutions with the team
  • Develop ontologies and knowledge graphs
  • Develop back-end technologies for storing and querying knowledge graphs
  • Design domain ontologies in collaboration with domain experts
  • Use knowledge graph technologies to discover actionable intelligence

Requirements:

  • Knowledge and hands-on experience in graph representation learning
  • Strong programming skills in Python, with the ability to apply object-oriented and functional programming
  • Experience building data APIs and familiarity with big data technologies such as Hadoop, Spark, and SQL
  • Knowledge of data structures and algorithms
  • Experience managing and administering both relational and graph databases
  • Expertise in building and optimizing data pipelines, architectures, and data sets
  • Familiarity with stream processing platforms such as Spark Streaming, Storm, Kafka, Flink, and Beam
  • Strong understanding of RDBMS and NoSQL databases, including graph databases, with the ability to implement them from scratch
  • Familiarity with the Linux programming environment and experience with Hadoop/Spark or other similar big data platforms

Qualifications:

  • Minimum Bachelor’s degree in Computer Science, Software Engineering, or a related field, and/or a proven project track record
  • Practical experience and proficiency in Python
  • Experience with AI/ML technologies
  • Familiarity with medical data and terminology is a plus

Job Type: 100%

Schedule:

  • Monday to Friday

Ability to commute/relocate:

  • Baar, ZG: Reliably commute or planning to relocate before starting work (Required)

Work Location: In person

Deadline: 01-06-2024
