We are looking to fill this role immediately and are reviewing applications daily. Expect a fast, transparent process with quick feedback.
Why join us?
We are a European deep-tech leader in quantum and AI, backed by major global strategic investors and strong EU support. Our groundbreaking technology is already transforming how AI is deployed worldwide — compressing large language models by up to 95% without losing accuracy and cutting inference costs by 50–80%.
Joining us means working on cutting-edge solutions that make AI faster, greener, and more accessible — and being part of a company often described as a “quantum-AI unicorn in the making.”
We offer
- Competitive annual salary starting from €45,000, based on experience and qualifications.
- Two unique bonuses: a signing bonus when you join and a retention bonus at contract completion.
- Relocation package (if applicable).
- Contract of up to 9 months, ending in June 2026.
- Hybrid role and flexible working hours.
- The chance to be part of a fast-scaling Series B company at the forefront of deep tech.
- International exposure in a multicultural, cutting-edge environment.
As a Machine Learning Engineer you will
- Build data and model pipelines end‑to‑end: create, source, augment, and validate datasets; stand up training/fine‑tuning/evaluation flows; and ship models that meet product and customer requirements.
- Design rigorous evaluation frameworks to verify task competence and alignment; implement statistical testing, reliability checks, and continuous evaluation.
- Scale training and inference: make effective use of distributed compute, optimize throughput/latency, and identify opportunities for algorithmic or systems‑level speedups.
- Improve models post‑training: apply SFT and preference‑based or reinforcement learning methods to enhance helpfulness, safety, and reasoning.
- Optimize and specialize models: apply compression techniques to meet performance and footprint targets.
- Collaborate across research and engineering: partner with ML engineers, researchers, and software engineers on data curation, evaluation design, training runs, model serving, and observability.
- Contribute to our shared codebase: write clean, well‑tested Python; document decisions and artifacts; uphold engineering standards.
Required Qualifications
- Bachelor's degree in Computer Science, Math, Physics, Data Science, Operations Research, or a related field.
- Strong programming skills in Python and the modern ML stack (e.g., PyTorch), plus fluency with data tooling (NumPy/Pandas) and basic software practices (git, unit tests, CI).
- Solid grounding in language modelling concepts around training, evaluation, model architecture, and data.
- Comfort working with datasets at scale: collection, cleaning, filtering, labelling/annotation strategies, and quality controls.
- Experience using GPU resources and familiarity with containerized workflows (e.g., Docker) and job schedulers or cloud orchestration.
- Ability to read research papers, prototype ideas quickly, and turn them into reproducible, production‑ready code.
- Clear, pragmatic communication and a collaborative mindset.
Preferred Qualifications
- PhD in Computer Science, Math, Physics, Data Science, Operations Research, or a related field, or equivalent industry experience in machine learning, data science, or related roles, with demonstrated experience in NLP or LLMs.
- Experience building foundational LLMs from the ground up.
Preferred qualifications by focus area:
- Model Evaluation: track record building task‑grounded evals for LLMs, implementing or extending evaluation harnesses, and generating synthetic data for both evaluation and training; deep understanding of LLM quirks and their ties to architecture and training dynamics.
- Distributed Training: Hands‑on experience debugging multi‑node training, profiling/optimizing throughput and memory, and extending training frameworks to new architectures or optimizers; comfort diagnosing flaky cluster issues.
- Model Compression: Strong mathematical background and experience with pruning, quantization, and NAS; ability to formulate and solve constrained optimization problems for accuracy/latency/footprint trade‑offs and to integrate results into production.
- Post‑Training: Theoretical and practical familiarity with post-training and alignment techniques; experience with SFT and preference/RL‑based methods (e.g., DPO/GRPO, RLHF).
About Multiverse Computing
Founded in 2019, we are a well-funded, fast-growing deep-tech company with a team of 180+ employees worldwide. Recognized by CB Insights (2023 & 2025) as one of the Top 100 most promising AI companies globally, we are also the largest quantum software company in the EU.
Our flagship products address critical industry needs:
- CompactifAI → a groundbreaking compression tool for foundational AI models, reducing their size by up to 95% while maintaining accuracy, enabling portability across devices from cloud to mobile and beyond.
- Singularity → a quantum and quantum-inspired optimization platform used by blue-chip companies in finance, energy, and manufacturing to solve complex challenges with immediate performance gains.
You’ll be working alongside world-leading experts in quantum computing and AI, developing solutions that deliver real-world impact for global clients. We are committed to an inclusive, ethics-driven culture that values sustainability, diversity, and collaboration — a place where passionate people can grow and thrive. Come and join us!
As an equal opportunity employer, Multiverse Computing is committed to building an inclusive workplace. We welcome applicants of all backgrounds, regardless of age, citizenship, ethnic or racial origin, gender identity, disability, marital status, religion or ideology, or sexual orientation.