We are looking to fill this role immediately and are reviewing applications daily. Expect a fast, transparent process with quick feedback.
Why join us?
We are a European deep-tech leader in quantum and AI, backed by major global strategic investors and strong EU support. Our groundbreaking technology is already transforming how AI is deployed worldwide — compressing large language models by up to 95% without losing accuracy and cutting inference costs by 50–80%.
Joining us means working on cutting-edge solutions that make AI faster, greener, and more accessible — and being part of a company often described as a “quantum-AI unicorn in the making.”
We offer
- Competitive annual salary.
- Two unique bonuses: a signing bonus upon joining and a retention bonus upon contract completion.
- Relocation package (if applicable).
- Fixed-term contract ending in June 2026.
- Hybrid role with flexible working hours.
- A place in a fast-scaling Series B company at the forefront of deep tech.
- Guaranteed equal pay.
- International exposure in a multicultural, cutting-edge environment.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related technical field.
- 3+ years of experience in MLOps or Machine Learning Engineering, or 5+ years in DevOps roles supporting ML systems.
- Ability to translate business goals into MLOps strategy, aligning technical initiatives with product and research needs.
- Strong project management skills, including sprint planning, roadmap creation, and cross-functional coordination.
- Proven ability to build and scale high-performing MLOps teams, fostering collaboration, innovation, and continuous improvement.
- Excellent written and verbal communication skills with both technical and non-technical stakeholders.
- Strong proficiency with cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes, Helm).
- Proven experience managing CI/CD pipelines for ML workflows using tools such as GitLab CI/CD, Jenkins, or Argo Workflows.
- Expertise in infrastructure as code (IaC) using Terraform or CloudFormation.
- Deep understanding of model deployment, monitoring, and scaling in production environments.
- Hands-on experience with ML workflow orchestration tools (e.g., Flyte, Kubeflow, Airflow, MLflow).
- Solid foundation in GitOps principles and model registry management.
- Experience with observability and monitoring tools (Prometheus, Grafana, OpenTelemetry, etc.).
Preferred Qualifications
- Experience managing teams of MLOps or ML engineers in a production environment.
- Familiarity with GPU workload orchestration and performance optimization (e.g., vLLM, Ray, Triton Inference Server).
- Background in data governance, compliance, and security best practices for ML systems.
- Working knowledge of LLM deployment and optimization workflows, including quantization, fine-tuning, and model compression.
- Experience integrating model usage metering (e.g., OpenMeter) and API management (e.g., Kong).
- Contributions to open-source MLOps or DevOps frameworks.
- Advanced degree (MS/PhD) in a relevant field is a plus.
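For context on one of the techniques listed above: post-training quantization maps floating-point model weights to low-bit integers, trading a small amount of precision for a large reduction in size. The following is a minimal, illustrative sketch in pure Python of symmetric int8 quantization on a toy weight list; it is not CompactifAI's actual compression method.

```python
# Toy example of symmetric int8 post-training quantization.
# Illustrative only; not the compression technique used by CompactifAI.

def quantize_int8(weights):
    """Map float weights to int8 values using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.91, -0.44]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-weight error is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Each weight shrinks from a 32-bit float to an 8-bit integer plus one shared scale, a 4x reduction before any further compression such as tensor-network factorization or pruning is applied.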
About Multiverse Computing
Founded in 2019, we are a well-funded, fast-growing deep-tech company with a team of 180+ employees worldwide. Recognized by CB Insights (2023 & 2025) as one of the Top 100 most promising AI companies globally, we are also the largest quantum software company in the EU.
Our flagship products address critical industry needs:
- CompactifAI → a groundbreaking compression tool for foundation AI models, reducing their size by up to 95% while maintaining accuracy and enabling portability across devices, from cloud to mobile and beyond.
- Singularity → a quantum and quantum-inspired optimization platform used by blue-chip companies in finance, energy, and manufacturing to solve complex challenges with immediate performance gains.
You’ll be working alongside world-leading experts in quantum computing and AI, developing solutions that deliver real-world impact for global clients. We are committed to an inclusive, ethics-driven culture that values sustainability, diversity, and collaboration — a place where passionate people can grow and thrive. Come and join us!
As an equal opportunity employer, Multiverse Computing is committed to building an inclusive workplace. The company welcomes people from all different backgrounds, including age, citizenship, ethnic and racial origins, gender identities, individuals with disabilities, marital status, religions and ideologies, and sexual orientations to apply.
TECHNICAL & MARKET ANALYSIS | Appended by Quantum.Jobs
BLOCK 1 — EXECUTIVE SNAPSHOT
This senior engineering leadership role is critical for translating hybrid quantum-AI research into commercially viable, high-performance production systems. It sits at the confluence of deep learning optimization (CompactifAI) and quantum/quantum-inspired solvers (Singularity), ensuring that the company’s core intellectual property is operationalized with maximum efficiency, reliability, and cost-effectiveness. By architecting and managing the MLOps and cloud infrastructure layer, the position directly de-risks the deployment pipeline and enables scalable service delivery for enterprise clients adopting next-generation computational resources.
BLOCK 2 — INDUSTRY & ECOSYSTEM ANALYSIS
The quantum-AI software segment occupies a pivotal yet bottlenecked position in the quantum value chain. While quantum hardware (QPUs) remains at a low-to-mid Technology Readiness Level (TRL), application-layer companies like Multiverse Computing generate commercial value today by applying quantum-inspired algorithms and AI techniques to enterprise problems. The key constraint on scaling these solutions is not the quantum algorithm itself but the industrial-grade deployment of the complex classical machine learning infrastructure needed to host, optimize, and serve these models securely, compliantly, and cost-efficiently.

The market currently suffers from a significant workforce gap in personnel able to manage this hybrid operational complexity, particularly around Large Language Model (LLM) compression and high-performance GPU workload orchestration, both central to Multiverse’s offerings. The need for a dedicated MLOps and Infrastructure Manager marks the transition from R&D-focused deployment to mainstream productization, a critical inflection point for software firms competing for market share against major cloud providers and established AI vendors. Success depends on building an automated, observable, and fully scalable platform architecture that can ingest, process, deploy, and monitor quantum-inspired models across multiple cloud environments, mitigating the risk of vendor lock-in and ensuring platform-agnostic utility for global clients.
BLOCK 3 — TECHNICAL SKILL ARCHITECTURE
The required technical architecture hinges on a cohesive, automated delivery mechanism (CI/CD) governed by GitOps principles, ensuring that infrastructure state matches the configuration declared as code in Terraform or CloudFormation (IaC). Proficiency in container orchestration, specifically Kubernetes and Helm, is foundational for horizontal scalability and resilience across multi-cloud footprints (AWS, GCP, Azure). ML-specific workflow orchestration tools such as Kubeflow, Flyte, or MLflow are needed to programmatically manage the lifecycle of complex quantum-optimized AI models, from training through registry management to production serving. Finally, robust observability, driven by platforms like Prometheus and Grafana, is essential for preemptive anomaly detection and real-time performance monitoring of compressed LLMs and quantum-inspired solvers, guaranteeing the high-throughput, low-latency inference that underpins the customer value proposition. Together, these skills turn static code into dynamic, self-healing, production-ready computational services.
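To make the GitOps principle concrete: a controller continuously compares the desired state declared in version control against the observed state of the running system and applies whatever changes close the gap. The sketch below illustrates one reconciliation pass in pure Python; the state dictionaries and the in-place updates are illustrative stand-ins, not a real Kubernetes or Argo controller.

```python
# Toy GitOps reconciliation loop: converge observed state toward desired state.
# The dicts stand in for declarative manifests (e.g., Helm releases, Terraform plans).

def diff(desired, observed):
    """Compute the changes needed to bring observed in line with desired."""
    to_create = {k: v for k, v in desired.items() if k not in observed}
    to_update = {k: v for k, v in desired.items()
                 if k in observed and observed[k] != v}
    to_delete = [k for k in observed if k not in desired]
    return to_create, to_update, to_delete

def reconcile(desired, observed):
    """One reconciliation pass; a real controller runs this continuously."""
    to_create, to_update, to_delete = diff(desired, observed)
    observed.update(to_create)
    observed.update(to_update)
    for k in to_delete:
        del observed[k]
    return observed

desired = {"model-server": {"replicas": 3}, "registry": {"replicas": 1}}
observed = {"model-server": {"replicas": 1}, "stale-job": {"replicas": 1}}
observed = reconcile(desired, observed)
assert observed == desired  # the loop has converged to the declared state
```

The essential property is idempotence: running `reconcile` again against an already-converged state changes nothing, which is what makes the approach safe to run continuously.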
BLOCK 4 — STRATEGIC IMPACT
* Enables rapid commercialization cycles for quantum-inspired optimization products.
* Mitigates operational risk associated with deploying novel deep-tech models at enterprise scale.
* Accelerates time-to-value for global clients utilizing LLM compression technologies.
* Establishes internal platform standards necessary for long-term IP protection and governance.
* Drives cost optimization by efficiently managing high-demand GPU and cloud compute resources.
* Solidifies competitive positioning by providing superior deployment agility compared to legacy IT systems.
* Facilitates cross-functional synergy between research, product development, and customer success teams.
* Ensures regulatory compliance and data security across international deep-tech deployments.
* Creates a repeatable, auditable pathway for model updates, versioning, and rollback.
* Cultivates a high-leverage engineering culture through standardized automation and best practices.
* Supports the transition of quantum/AI research innovations into scalable SaaS/PaaS offerings.
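The auditable update-versioning-rollback pathway mentioned above can be sketched as a minimal model registry. This is an illustrative toy in pure Python; it is not the API of MLflow or any specific registry product, and the S3 URIs are hypothetical.

```python
# Toy model registry: an append-only version history with promotion and rollback.

class ModelRegistry:
    def __init__(self):
        self.versions = []           # append-only audit trail of versions
        self.production_index = None # index of the version serving production

    def register(self, artifact_uri, metrics):
        """Record a new immutable model version; returns its 1-based number."""
        self.versions.append({"uri": artifact_uri, "metrics": metrics})
        return len(self.versions)

    def promote(self, version):
        """Point production at a previously registered version."""
        if not 1 <= version <= len(self.versions):
            raise ValueError(f"unknown version {version}")
        self.production_index = version - 1

    def rollback(self):
        """Revert production to the immediately preceding version."""
        if self.production_index is None or self.production_index == 0:
            raise RuntimeError("no earlier version to roll back to")
        self.production_index -= 1

    def production(self):
        return self.versions[self.production_index]

registry = ModelRegistry()
registry.register("s3://models/llm-compressed/v1", {"accuracy": 0.91})
v2 = registry.register("s3://models/llm-compressed/v2", {"accuracy": 0.89})
registry.promote(v2)
registry.rollback()  # v2 regressed in accuracy, so revert to v1
assert registry.production()["metrics"]["accuracy"] == 0.91
```

Because versions are never mutated or deleted, the history doubles as an audit log: every production change is a recorded promotion or rollback against an immutable artifact.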
BLOCK 5 — FOOTER
Industry Tags: Quantum Computing, Machine Learning Operations (MLOps), Infrastructure as Code (IaC), Quantum-Inspired Algorithms, AI Model Compression, Container Orchestration, Cloud Architecture, Observability, Large Language Models (LLM), Continuous Delivery