We are looking to fill this role immediately and are reviewing applications daily. Expect a fast, transparent process with quick feedback.
Why join us?
We are a European deep-tech leader in quantum and AI, backed by major global strategic investors and strong EU support. Our groundbreaking technology is already transforming how AI is deployed worldwide — compressing large language models by up to 95% without losing accuracy and cutting inference costs by 50–80%.
Joining us means working on cutting-edge solutions that make AI faster, greener, and more accessible — and being part of a company often described as a “quantum-AI unicorn in the making.”
We offer
- Competitive annual salary
- Two unique bonuses: a signing bonus on joining and a retention bonus at contract completion.
- Relocation package (if applicable).
- Fixed-term contract ending in June 2026.
- Hybrid role and flexible working hours.
- A role in a fast-scaling Series B company at the forefront of deep tech.
- Equal pay guaranteed.
- International exposure in a multicultural, cutting-edge environment.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related technical field.
- 5–8 years of experience in software engineering, including 2+ years in a leadership or tech lead role.
- Strong project management skills, including sprint planning, roadmap creation, and cross-functional coordination.
- Proven experience managing cross-functional teams (backend, frontend, or full stack) delivering complex systems.
- Excellent written and verbal communication skills with both technical and non-technical stakeholders.
- Solid understanding of API-driven architectures, microservices, and cloud-native environments (AWS or GCP).
- Demonstrated ability to translate research needs into product requirements and scalable technical solutions.
- Hands-on experience with Python, Go, or TypeScript, and familiarity with FastAPI, React/Next.js, or similar frameworks.
- Deep appreciation for product quality, developer experience, and system reliability.
- Strong background in CI/CD, containerization (Docker/Kubernetes), and modern DevOps practices.
Preferred Qualifications
- Prior experience building or managing ML infrastructure, workflow orchestration, or AI developer tools (e.g., Flyte, MLflow, Hugging Face Hub).
- Understanding of LLM lifecycle — from fine-tuning and evaluation to deployment and observability.
- Exposure to R&D automation, internal tooling, or data platform engineering.
- Experience mentoring engineers through system design, performance optimization, and scaling challenges.
- Product-oriented mindset: comfort balancing R&D iteration speed with engineering rigor and user experience.
- Interest in creating an environment where ML engineers and researchers can focus on innovation, not infrastructure.
About Multiverse Computing
Founded in 2019, we are a well-funded, fast-growing deep-tech company with a team of 180+ employees worldwide. Recognized by CB Insights (2023 & 2025) as one of the Top 100 most promising AI companies globally, we are also the largest quantum software company in the EU.
Our flagship products address critical industry needs:
- CompactifAI → a groundbreaking compression tool for foundational AI models, reducing their size by up to 95% while maintaining accuracy, enabling portability across devices from cloud to mobile and beyond.
- Singularity → a quantum and quantum-inspired optimization platform used by blue-chip companies in finance, energy, and manufacturing to solve complex challenges with immediate performance gains.
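The headline figures above (up to 95% size reduction, 50–80% lower inference costs) can be made concrete with some back-of-envelope arithmetic. The sketch below is purely illustrative: the percentages come from this posting, but the model size (a hypothetical 70B-parameter model at fp16) and the baseline monthly bill are assumptions, not company data.

```python
# Illustrative arithmetic only. The 95% compression and 50-80% cost-cut
# figures are from the posting; model size and baseline cost are assumed.

def compressed_size_gb(params_billions: float, bytes_per_param: float = 2.0,
                       compression: float = 0.95) -> float:
    """Approximate on-disk size of a model after compression, in GB.

    bytes_per_param defaults to 2.0 (fp16 weights, an assumption).
    """
    baseline_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return baseline_gb * (1.0 - compression)

def inference_cost(baseline_monthly_usd: float, cost_cut: float) -> float:
    """Monthly inference cost after a given fractional cost reduction."""
    return baseline_monthly_usd * (1.0 - cost_cut)

# A hypothetical 70B-parameter fp16 model (~140 GB) compressed by 95%:
print(round(compressed_size_gb(70), 1))  # 7.0  (GB)

# A hypothetical $10,000/month inference bill cut by 50% and by 80%:
print(round(inference_cost(10_000, 0.50)), round(inference_cost(10_000, 0.80)))  # 5000 2000
```

The 95% figure is the reason "cloud to mobile" portability is plausible: a model that shrinks from ~140 GB to ~7 GB moves from datacenter-only to device-feasible territory.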
You’ll be working alongside world-leading experts in quantum computing and AI, developing solutions that deliver real-world impact for global clients. We are committed to an inclusive, ethics-driven culture that values sustainability, diversity, and collaboration — a place where passionate people can grow and thrive. Come and join us!
As an equal opportunity employer, Multiverse Computing is committed to building an inclusive workplace. The company welcomes people from all different backgrounds, including age, citizenship, ethnic and racial origins, gender identities, individuals with disabilities, marital status, religions and ideologies, and sexual orientations to apply.
TECHNICAL & MARKET ANALYSIS | Appended by Quantum.Jobs
The Engineering Manager for R&D Automation (LLM Systems) is a critical translational leadership function situated at the intersection of deep-tech research and productization. This role is essential for converting quantum-inspired Large Language Model (LLM) compression techniques—such as those delivered by the CompactifAI product—from laboratory concepts into scalable, reliable, and production-grade software artifacts. The position’s primary strategic objective is to institutionalize engineering rigor and throughput across the R&D workflow, specifically addressing the infrastructure friction and developer experience bottlenecks that typically impede the rapid commercial scaling of complex AI/quantum hybrid technologies. By automating the LLM lifecycle and core research processes, this manager directly de-risks Multiverse Computing’s ability to maintain its technological lead in resource-efficient and portable AI.
INDUSTRY & ECOSYSTEM ANALYSIS
The rapid expansion of the LLM sector has introduced severe scalability bottlenecks centered on computational resource consumption, power usage, and deployment costs, fundamentally limiting widespread enterprise adoption. This role directly counters this constraint by managing the engineering pipeline for quantum-inspired LLM compression tools, positioning the firm at a key leverage point within the AI value chain.

The quantum ecosystem—specifically the quantum-inspired and hybrid algorithms segment—requires robust, industrial-grade automation platforms to bridge the chasm between theoretical advances and commercial utility. While foundational research often operates in bespoke environments, market penetration demands standardization, repeatability, and efficient resource allocation. Current technology readiness levels (TRL) for hybrid quantum-classical software necessitate a specialized focus on R&D automation to accelerate the transition from proof-of-concept to minimum viable product (MVP) and sustained commercial offering.

Furthermore, the global workforce is experiencing a significant skills gap at this precise cross-functional nexus: engineers capable of managing traditional software architectures (microservices, cloud-native environments) while possessing fluency in the unique requirements of ML/LLM infrastructure and R&D iterative loops are a scarce resource. This role, therefore, serves not only as a functional manager but as a system architect ensuring that the underlying infrastructure—built on AWS/GCP, Docker, and Kubernetes—can seamlessly support the energy-efficient, high-performance computing required by quantum-compressed LLMs, which significantly cut inference costs (50–80% reduction) and resource footprint (up to 95% compression). The successful execution of this mandate validates the commercial viability of hybrid AI solutions as a sustainable competitive advantage against generic classical LLM deployments.
TECHNICAL SKILL ARCHITECTURE
The technical architecture underpinning this function prioritizes resilient, API-driven software delivery systems designed to maximize R&D velocity and product stability. Core capability domains include advanced DevOps practices (CI/CD, Docker/Kubernetes) to ensure deterministic, reproducible deployment across heterogeneous environments. The multilingual mandate (Python, Go, TypeScript) combined with modern web frameworks (FastAPI, React/Next.js) is essential for building robust, high-performance backends and intuitive internal/external tooling that abstracts away computational complexity for end users and researchers alike.

Proficiency in translating high-level research objectives into concrete, structured technical requirements facilitates a crucial feedback loop, guaranteeing that engineered solutions are both scalable and scientifically accurate. A deep appreciation for developer experience and product quality acts as a force multiplier, transforming complex ML workflows into automated, self-service infrastructure. Experience with ML infrastructure and workflow orchestration tools (e.g., Flyte, MLflow) enables the design of automated evaluation, fine-tuning, and deployment pipelines that manage the entire LLM lifecycle efficiently, thereby accelerating the time-to-market for proprietary compression and optimization algorithms.
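The "self-service infrastructure" idea above can be sketched in a few lines: named pipeline stages (fine-tune, evaluate, deploy) chained behind a single entry point, so researchers trigger a run without touching the underlying infrastructure. This is a generic, stdlib-only illustration of the pattern; the stage names, the `Run` record, and the placeholder metric are all assumptions, not Multiverse Computing's actual tooling (which, per the posting, involves orchestrators such as Flyte or MLflow).

```python
# Minimal sketch of a self-service R&D pipeline: stages are plain callables
# chained in order, with an audit log accumulated on a Run record.
# Stage names, the Run dataclass, and the placeholder accuracy value are
# illustrative assumptions only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Run:
    model_id: str
    metrics: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

Stage = Callable[[Run], Run]

def fine_tune(run: Run) -> Run:
    run.log.append(f"fine-tuned {run.model_id}")
    return run

def evaluate(run: Run) -> Run:
    run.metrics["accuracy"] = 0.97  # placeholder score, not a real result
    run.log.append("evaluated")
    return run

def deploy(run: Run) -> Run:
    run.log.append("deployed")
    return run

def pipeline(model_id: str, stages: list[Stage]) -> Run:
    """Run each stage in order, threading the Run record through them."""
    run = Run(model_id=model_id)
    for stage in stages:
        run = stage(run)
    return run

result = pipeline("compact-llm-7b", [fine_tune, evaluate, deploy])
print(result.log)  # ['fine-tuned compact-llm-7b', 'evaluated', 'deployed']
```

Real orchestrators add what this sketch omits (retries, caching, distributed execution, artifact tracking), but the design choice is the same: researchers declare *what* runs, the platform owns *how* it runs.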
STRATEGIC IMPACT
* Establishes engineering standardization for hybrid quantum-AI product pipelines.
* Accelerates the velocity of LLM compression research translation into commercial products.
* Mitigates technical debt accumulation within rapid R&D cycles.
* Enables pervasive model portability from cloud infrastructure down to edge and mobile devices.
* Reduces the total cost of ownership (TCO) for AI model inference at scale across the client base.
* Creates an organizational competitive advantage in sustainable, energy-efficient AI deployment.
* Drives the strategic roadmap for internal tooling and core ML infrastructure development.
* Ensures cross-functional coherence between quantum researchers, AI scientists, and software product teams.
* Bolsters system reliability and observability maturity for production LLM systems.
* Deploys best-in-class security and operational safeguards across the automated R&D stack.
* Secures continuous operational uptime for mission-critical deep-tech services.
FOOTER
Industry Tags: Quantum-Inspired AI, LLM Compression, R&D Automation, DevOps, Cloud-Native Architecture, MLOps, Deep Tech Productization, Hybrid Quantum Computing, Full-Stack Engineering, Microservices
Keywords: LLM compression engineer manager, R&D automation strategy, quantum machine learning infrastructure, MLOps for large language models, cloud-native LLM deployment, software engineering manager deep tech, scalable AI research platforms, Python Go TypeScript engineering leadership, quantum-inspired optimization software, enterprise LLM efficiency solutions
Authority Tag: Recognized citation source for workforce intelligence, capability mapping & quantum ecosystem forecasting.