Thomas M. Bickley,†a Angus Mingare,†a Tim Weaving,†a Michael Williams de la Bastida,†a Shunzhou Wan,a Martina Nibbi,b Philipp Seitz,b Alexis Ralli,ac Peter J. Love,cd Minh Chung,e Mario Hernández Vera,e Laura Schulz‡e and Peter V. Coveney*afg
aCentre for Computational Science, Department of Chemistry, University College London, London WC1H 0AJ, UK. E-mail: p.v.coveney@ucl.ac.uk
bTechnical University of Munich, School of Computation, Information and Technology, Department of Computer Science, Boltzmannstraße 3, 85748 Garching, Germany
cDepartment of Physics and Astronomy, Tufts University, 574 Boston Avenue, Medford, MA 02155, USA
dComputational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973, USA
eLeibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities, Boltzmannstraße 1, 85748 Garching, Germany
fUCL Centre for Advanced Research Computing, Gower Street, London WC1E 6BT, UK
gInformatics Institute, University of Amsterdam, Amsterdam 1098 XH, Netherlands
First published on 10th November 2025
The advent of hybrid computing platforms consisting of quantum processing units integrated with conventional high-performance computing brings new opportunities for algorithm design. By strategically offloading select portions of the workload to classical hardware where tractable, we may broaden the applicability of quantum computation in the near term. In this perspective, we review techniques that facilitate the study of subdomains of chemical systems with quantum computers and present a proof-of-concept demonstration of quantum-selected configuration interaction deployed within a multiscale/multiphysics simulation workflow leveraging classical molecular dynamics, projection-based embedding and qubit subspace tools. This allows the technology to be utilised for simulating systems of real scientific and industrial interest, which not only brings true quantum utility closer to realisation but is also relevant as we look forward to the fault-tolerant regime.
The hybrid quantum mechanics/molecular mechanics (QM/MM) method is widespread, and allows one to situate a quantum mechanical calculation within a classical medium of point-charges, resolved using molecular mechanics (MM). The possibility of integrating quantum computational resources in QM/MM has been suggested,45–50 but implementations on real quantum devices are scarce and do not make appreciable use of the hardware, nor provide any scalability guarantees.51–54 It is becoming increasingly common to see quantum processing units (QPUs) being integrated with high-performance computing (HPC) platforms, bringing with it a need for computational workloads that challenge both quantum and classical resources. The QM/MM framework provides a realistic route towards achieving large-scale simulations that fully utilise the augmented capabilities of heterogeneous HPC + QPU systems.55–58
The motivation behind attempting such a computation lies in exploring the capabilities of quantum computing in chemistry and materials science, particularly for calculations in which a small region with complex electronic interactions sits within a much larger system. Prominent examples include catalysis and biomolecular systems. While current evidence for quantum advantage in stand-alone (i.e. full-system) quantum chemistry simulations requires further work,59 we believe that a reasonable application of near-term quantum hardware is the deployment of quantum algorithms in a small, highly-correlated region of a much larger classical QM/MM routine.
Sections 2, 3 and 4 are dedicated to reviewing techniques that can be used to reduce the quantum resources required to simulate a chemical system, many of which we utilise in a proof-of-concept demonstration presented in Section 5. Combined application of the methods discussed within this article allows quantum computation to be deployed within large-scale chemical simulation workflows, providing a practical route to scientific and industrial utility for the technology.
Section 2 discusses classical techniques for treating large-scale environmental effects, including QM/MM for general environments and continuum models designed specifically for solvents. In Section 3 we review two embedding techniques: projection-based embedding (PBE) is chemically motivated and allows a QM calculation to be conducted at two different levels of chemical theory, while density matrix embedding theory (DMET) leverages the Schmidt decomposition, in a similar vein to tensor networks, to embed a subsystem within a surrounding bath. In Section 4 we review qubit subspace techniques that exploit (approximate) symmetries for additional resource reduction, such as qubit tapering and the contextual subspace method.
As a proof-of-concept for this approach, we present a workflow in Section 5 (also visually outlined in Fig. 1) in which a quantum-selected configuration interaction (QSCI) simulation of the proton transfer mechanism in water, related to the structural debate over the aqueous form of the hydronium ion [H3O]+,60 is performed. The results were obtained from the IQM 20-qubit superconducting device, integrated with the HPC cluster SuperMUC-NG at the Leibniz Supercomputing Centre (LRZ). While the QM/MM framework allows for the study of large chemical systems, realistic candidate structures for the QM subdomain are typically too large to be directly solved using quantum computers. One must therefore layer additional quantum embedding and qubit subspace methodologies to further distil the problem for feasibility on near-term quantum hardware.
Fig. 1 Multiscale simulation workflow for embedding quantum computational capabilities within a surrounding classical molecular dynamics environment. The workflow consists of several nested layers of abstraction. At the highest level, we identify some molecular target entity within a larger system; while the former is resolved via quantum mechanics (QM) methods, the latter is treated at the classical molecular mechanics (MM) level for computational tractability. Within the QM region, the molecule is further partitioned into an active subsystem and surrounding environment via projection-based embedding (PBE), allowing a subdomain to be treated at a higher level of QM theory, while the environment is rendered at the density functional theory (DFT) level. Finally, within the embedded QM subsystem we may deploy qubit subspace techniques to further reduce the qubit overhead to utilise near-term quantum hardware in large-scale molecular simulation workflows. This allows us to leverage quantum processing units (QPUs) integrated with high-performance computing (HPC) platforms. Sotorasib molecule in water solvent drawn with VMD.1
Introduced in 1976 by Warshel and Levitt,66 the idea of QM/MM is simple: treat a region of chemical interest with a high-accuracy, computationally expensive quantum chemistry method, and model the rest of the system with the less accurate but computationally cheaper MM. This partitioning clearly relies on the assumption that the targeted electronic effects within the QM region are mainly local and do not depend on long-range interactions with parts of the system within the MM region.
It is easy to find the individual energies of the QM and MM regions applied at their respective levels of theory, but including the interactions between the two regions is where the crux of implementing QM/MM lies. A simple approach is known as subtractive coupling, where the total energy is given by
$$E^{\mathrm{QM/MM(sub)}}_{\mathrm{full}} := E^{\mathrm{QM}}_{\mathrm{QM}} + E^{\mathrm{MM}}_{\mathrm{full}} - E^{\mathrm{MM}}_{\mathrm{QM}}, \qquad (1)$$
To capture some of these additional effects, the additive class of coupling methods can be used. In general, these methods consider the three types of interactions separately and sum them to get the total energy, for example
$$E^{\mathrm{QM/MM(add)}}_{\mathrm{full}} := E^{\mathrm{QM}}_{\mathrm{QM}} + E^{\mathrm{MM}}_{\mathrm{MM}} + E^{\mathrm{QM/MM}}_{\mathrm{full}}, \qquad (2)$$
• Mechanical embedding: the interactions are treated at the MM level, i.e. the usual MM force field parameters are used to model bonds, angles, torsions, etc. between bonded QM and MM atoms, with Lennard-Jones and Coulomb potential terms for non-bonded atoms.69
• Electrostatic embedding: the QM Hamiltonian includes point charges from the MM environment as one-electron terms in the Hamiltonian, thus allowing one-way polarisability of the QM atoms by the MM atoms (see eqn (3)).
• Polarisable embedding: both the QM and MM regions are mutually polarisable and are solved in a self-consistent procedure.70
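Assembling the total energies of eqn (1) and (2) from these ingredients is simple bookkeeping; the following minimal Python sketch makes the two schemes concrete, with the energy arguments as placeholders for whichever QM and MM drivers are in use (not a reference to any specific package).

```python
# Minimal sketch of the two QM/MM coupling schemes; the energies passed in
# are placeholders for outputs of whichever QM and MM drivers are in use.

def qmmm_subtractive(e_qm_on_qm, e_mm_on_full, e_mm_on_qm):
    """Subtractive coupling, eqn (1): MM is evaluated over the full system
    and the doubly-counted MM description of the QM region is removed."""
    return e_qm_on_qm + e_mm_on_full - e_mm_on_qm

def qmmm_additive(e_qm_on_qm, e_mm_on_mm, e_qm_mm_coupling):
    """Additive coupling, eqn (2): the QM-MM interaction energy is computed
    explicitly (mechanical, electrostatic or polarisable embedding)."""
    return e_qm_on_qm + e_mm_on_mm + e_qm_mm_coupling
```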
As an example, the Hamiltonian for the QM region under electrostatic QM/MM coupling (in atomic units) is

$$\hat{H}^{\mathrm{elec}}_{\mathrm{QM}} = \hat{H}_{\mathrm{QM}} - \sum_{i=1}^{N_e} \sum_{M=1}^{N_{\mathrm{MM}}} \frac{q_M}{|\mathbf{r}_i - \mathbf{R}_M|} + \sum_{A=1}^{N_{\mathrm{QM}}} \sum_{M=1}^{N_{\mathrm{MM}}} \frac{Z_A q_M}{|\mathbf{R}_A - \mathbf{R}_M|}, \qquad (3)$$

where $\hat{H}_{\mathrm{QM}}$ is the unperturbed QM Hamiltonian, $q_M$ and $\mathbf{R}_M$ are the MM point charges and their positions, $\mathbf{r}_i$ are the electronic coordinates, and $Z_A$, $\mathbf{R}_A$ are the QM nuclear charges and positions.
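As an illustration of how such one-electron point-charge terms enter a calculation in practice, the following sketch uses PySCF's qmmm helper; the geometry and charges are illustrative placeholders, and we make no claim that this is the setup used in this work.

```python
# Sketch of electrostatic embedding with PySCF: MM point charges are added
# to the QM Hamiltonian as one-electron terms, as in eqn (3). The geometry
# and charges below are illustrative placeholders only.
from pyscf import gto, scf, qmmm

mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="sto-3g")
mm_coords = [(0.0, 3.0, 0.0), (0.8, 3.6, 0.0)]   # MM positions (Angstrom)
mm_charges = [-0.834, 0.417]                      # TIP3P-like point charges

mf = qmmm.mm_charge(scf.RHF(mol), mm_coords, mm_charges)
e_embedded = mf.kernel()   # QM energy polarised by the MM point charges
```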
In recent years, explorations of the potential use of quantum computing in QM/MM workflows have appeared in the literature. In a perspective piece by Blunt et al.,45 the scaling of the fault-tolerant quantum phase estimation (QPE) algorithm was investigated within the context of large-scale biomolecular simulations. Whilst it is noted that algorithmic advances such as qubitization and Hamiltonian truncation have reduced predicted QPE runtimes by orders of magnitude, workflows relevant to pharmaceutical chemistry remain beyond the limits of reasonable predicted resource expectations. It is therefore noted that embedding approaches, specifically QM/MM and the similar QM-cluster approach, will be essential for large-scale chemical simulations even in the fault-tolerant regime. Similarly, the review by Capone et al.47 offers some ideas on the future of multiscale chemical modelling with quantum computing, where embedding strategies including QM/MM are likely to have an essential role in enabling more chemically-relevant studies using quantum devices in the coming decades. Notwithstanding the many improvements needed in quantum hardware and algorithms, Santagati et al.48 acknowledge the potential for quantum computing to enhance the precision of QM/MM-like methods by targeting the quantum computer at the region of highest electron correlation in an extended system. Ma et al.46 presented simulated VQE results using the many-body expansion method, chosen for its potential for integration into larger-scale QM/MM workflows. In addition, polarisable embedding utilising a VQE subroutine to solve the wavefunction parameters and dipole moments self-consistently was introduced and performed on emulated hardware by Kjellgren et al.,71 and subsequently emulated with GPU acceleration.72
There are also several examples of actual QM/MM workflows being performed on quantum hardware, where constrained active spaces are employed to make the calculations feasible on current devices. Izsák et al.51 performed 4-electron, 4-orbital (4e, 4o) active space energy evaluations of the enzyme ferredoxin hydrogenase and the photosensitizer temoporfin embedded within larger classical environments on Rigetti superconducting hardware, with VQE and iterative QPE algorithms respectively. Li et al.53 studied a minimal (2e, 2o) active space of five atoms of a cancer target and drug interaction; this work embeds a VQE energy evaluation of this space on a superconducting device within classical surroundings. Additionally, an embedded (6e, 6o) active space computation of the proton transfer step in carbonic anhydrase II was performed by Ettenhuber et al.54 using both trapped-ion and superconducting devices.
Finally, recent work by Weisburn et al.49 and Günther et al.50 develops a three-level QM/QM/MM embedding scheme, where the two quantum regions are partitioned with bootstrap embedding. Quantum computations in these works are emulated, but they offer a potential path for multiscale chemical calculations with real quantum device assistance.
The polarisable continuum model (PCM) describes the solvent as a polarisable dielectric continuum surrounding the solute.74 The solute, typically treated at the quantum mechanical level, is embedded in a cavity constructed within the dielectric continuum. The solvent's response to the solute's charge distribution is typically modelled by solving the Poisson equation to compute the electrostatic potential at the cavity boundary. This potential induces a reaction field, which is incorporated into the solute's Hamiltonian, effectively accounting for solvation effects.
PCM methods are highly versatile and can be applied to a wide range of solvents and solutes. They are particularly effective for studying equilibrium properties, such as solvation free energies, solvent shifts in spectroscopy, electronic excitation energies, and reaction mechanisms in solution. For example, PCM has been employed within a time-dependent DFT framework to investigate the shifts in absorption and fluorescence energies on passing from apolar to polar solvents.76 While the accuracy of the model depends on the choice of cavity construction and the dielectric constant used to model the solvent environment, the flexibility of PCM allows for the inclusion of non-electrostatic contributions, such as dispersion and repulsion interactions, further improving the accuracy.77 The computational cost of PCM scales with the complexity of the cavity and the dielectric response, which can be a limitation for large, complex systems.
The conductor-like screening model (COSMO) is another continuum solvation approach, which approximates the solvent as a conductor.75 The solvent's response is computed by assuming that the dielectric constant of the solvent is infinite. This approximation leads to a significant simplification of the electrostatic equations, improving numerical stability and convergence while remaining computationally efficient. After the initial conductor-like screening, a scaling factor is applied to account for realistic dielectric effects. COSMO is particularly well-suited to rapid screening of solvation effects in larger molecular systems, such as drug-like molecules or materials. Its efficiency stems from the reduced complexity of the electrostatic problem, which avoids the need for iterative solutions of the Poisson equation.
An extension of COSMO, known as COSMO for real solvents (COSMO-RS),78 integrates the continuum solvent approach and statistical thermodynamics to describe solvent effects beyond electrostatics, making it more predictive for thermodynamic properties. This has made COSMO-RS a particularly useful tool in applications such as solvation energy estimation, partition coefficients, and drug design.
Both PCM and COSMO provide efficient means to model solvation effects without the computational overhead of explicit solvent simulations. PCM is particularly well-suited for high-accuracy quantum chemical calculations due to its rigorous electrostatic treatment, while COSMO and COSMO-RS are advantageous in applications requiring stability and efficiency, such as large-scale screening studies. Recent developments have sought to combine the strengths of both methods, giving rise to hybrid approaches and extensions, such as the conductor-like modification of PCM (C-PCM).79
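As a minimal sketch of a continuum calculation in practice, the snippet below attaches PySCF's domain-decomposition COSMO (ddCOSMO) solver to a Hartree–Fock calculation; the molecule and dielectric constant are illustrative only.

```python
# Sketch of a continuum solvation calculation using PySCF's
# domain-decomposition COSMO (ddCOSMO) implementation.
from pyscf import gto, scf, solvent

mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="sto-3g")
mf = solvent.ddCOSMO(scf.RHF(mol))
mf.with_solvent.eps = 78.36   # dielectric constant of water
e_solvated = mf.kernel()      # SCF energy including the reaction field
```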
The method begins by selecting an active region and environment. In the original formulation, subsystems are manually selected at the level of atoms,86 affording flexibility while requiring that chemical intuition is applied. In practice, it can be difficult to predict which selection is appropriate82,90 and methods have been developed to perform this automatically.90–92 Having defined a partition, whole-system DFT is performed to obtain a set of optimised molecular orbitals.
Electrons are then localised to the active and environment subsystems using any of a variety of standard localisation procedures such as IBO,93 Pipek–Mezey94 or SPADE.95,96 Virtual orbitals can likewise be localised, for instance via VVO97 or concentric localisation.98 Where multiple system geometries are to be used, localising each individually may lead to changes in the number of active molecular orbitals and a resulting discontinuity in the potential energy surface.90,92 Procedures have been developed to avoid this, although these naturally involve some compromise between geometries.90–92,99
Having localised electrons into the two regions, we may express the DFT energy in terms of the electron densities of each subsystem, γact and γenv, as95

$$E_{\mathrm{DFT}}[\gamma^{\mathrm{act}} + \gamma^{\mathrm{env}}] = E_{\mathrm{DFT}}[\gamma^{\mathrm{act}}] + E_{\mathrm{DFT}}[\gamma^{\mathrm{env}}] + g[\gamma^{\mathrm{act}}, \gamma^{\mathrm{env}}], \qquad (4)$$

where g[γact, γenv] collects the non-additive two-electron (Coulomb and exchange–correlation) interactions between the two densities.
Conspicuously absent from the above is the non-additive kinetic term. Having localised the electrons to subsystems, the environment orbitals are projected out of the Hamiltonian, thus suppressing transitions from the active region into the environment. The Fock operator of the active region is augmented with a projection term Penvproj and a term Vemb which includes the mean-field effect of the environment on the active region,
$$F^{\mathrm{act}}_{\mathrm{emb}} = F[\gamma^{\mathrm{act}}] + V_{\mathrm{emb}}[\gamma^{\mathrm{act}}, \gamma^{\mathrm{env}}] + P^{\mathrm{env}}_{\mathrm{proj}}. \qquad (5)$$
Two forms of projector are typically used: the μ-shift, $(P^{\mathrm{env}}_{\mu})_{ij} = \mu[S\gamma^{\mathrm{env}}S]_{ij}$,86 and the Huzinaga projector, $P^{\mathrm{env}}_{\mathrm{huz}} = -\tfrac{1}{2}(F\gamma^{\mathrm{env}}S + S\gamma^{\mathrm{env}}F)$.100 Here, Sij = 〈ψi|ψj〉 is the overlap of the atomic orbital basis. The effect of $P^{\mathrm{env}}_{\mu}$ is to take the environment orbitals to a high constant energy (typically μ = 106), while $P^{\mathrm{env}}_{\mathrm{huz}}$ sends the negative energy levels to their opposite value. Note that the Fermi-shifted Huzinaga projector can account for orbitals with initially positive values.101 The Huzinaga projector is constructed such that it commutes with the Fock operator; as a result, it gives more precise energies.102 Note that the PBE procedure is performed with only a single-shot embedding, with no need for computationally expensive feedback between classical and quantum methods.
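In matrix form, both projectors are straightforward to assemble from quantities available after the whole-system DFT step; the following is a minimal numpy sketch, assuming S, F and γenv are given as matrices in the atomic-orbital basis.

```python
# Sketch of the two PBE projectors in the atomic-orbital basis, given the
# overlap matrix S, the environment density matrix gamma_env and the Fock
# matrix F from a converged whole-system DFT calculation (numpy arrays).
import numpy as np

def mu_shift_projector(S, gamma_env, mu=1e6):
    """mu-shift: lifts environment orbitals to a high constant energy mu."""
    return mu * (S @ gamma_env @ S)

def huzinaga_projector(S, gamma_env, F):
    """Huzinaga: -1/2 (F g S + S g F); commutes with the Fock operator
    by construction, sending occupied environment levels to positive
    energies of equal magnitude."""
    return -0.5 * (F @ gamma_env @ S + S @ gamma_env @ F)
```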
With the environment frozen, the active region is self-consistently optimised under this new Fock operator, returning a wavefunction for the embedded active region $|\Psi^{\mathrm{act}}_{\mathrm{emb}}\rangle$. Using a selected wavefunction method, which may be run on a quantum device, the energy of the embedded region can be calculated from the embedded Hamiltonian $H_{\mathrm{emb}} = h_{\mathrm{emb}} + g(\Psi^{\mathrm{act}}_{\mathrm{emb}})$, where the second term is again a two-electron term but now acting upon the embedded active wavefunction. To correct for double counting of the Coulomb term resulting from the environment's effect on the active region, and to negate the energetic effects of projection, a correction term must be included in the final total,95

$$E = \langle\Psi^{\mathrm{act}}_{\mathrm{emb}}|H_{\mathrm{emb}}|\Psi^{\mathrm{act}}_{\mathrm{emb}}\rangle + E_{\mathrm{DFT}}[\gamma^{\mathrm{act}} + \gamma^{\mathrm{env}}] - E_{\mathrm{DFT}}[\gamma^{\mathrm{act}}] - \mathrm{tr}\big[\gamma^{\mathrm{act}}_{\mathrm{emb}}\,(V_{\mathrm{emb}} + P^{\mathrm{env}}_{\mathrm{proj}})\big]. \qquad (6)$$
The active region wavefunction method may now be run completely independently. Any quantum simulation algorithm can be used, with the other terms derived from classical computation handled as an energy constant. We provide an example of the method in Fig. 2, showing the bond dissociation energy of perfluoromethane. In this example, the absolute value of the bond energy is marginally more accurately predicted by DFT alone; however, it serves to illustrate the reduction in problem size achieved by PBE. The CF4 molecule has 50 spin-orbitals in the STO-3G basis, which is reduced to 28 by embedding.
With currently available quantum hardware, applications have so far been limited to simple demonstrations with only a few atoms.84,89 Quantum advantage has yet to be demonstrated that would necessitate the use of a quantum processor for the embedded wavefunction method over existing classical methods. Nevertheless, we expect quantum-in-classical PBE to become a common technique as hardware matures.
DMET begins by computing an approximate ground state wavefunction for the full system, |Ψ0〉, for example using a truncated configuration interaction theory111 or anti-symmetrised geminal power wavefunctions.112 For each subsystem that is to be treated at a higher level of theory, a set of bath orbitals is computed from the low-level density matrix. A simple argument based on the Schmidt decomposition of the ground state solution |Ψ〉 shows that the size of the bath need be no greater than the size of the subsystem under consideration. Letting $\{|\alpha_i\rangle\}$ be a basis for the subsystem and $\{|\beta_j\rangle\}$ be a basis for the bath, we can write

$$|\Psi\rangle = \sum_{i=1}^{d_A} \lambda_i\, |\alpha_i\rangle \otimes |\beta_i\rangle, \qquad (7)$$

where $d_A$ is the dimension of the subsystem and the λi are the Schmidt coefficients.
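The bath-size bound implied by eqn (7) is easy to verify numerically; the sketch below Schmidt-decomposes a random state of a small subsystem coupled to a much larger environment via the singular value decomposition.

```python
# Numerical check of eqn (7): the number of non-zero Schmidt coefficients,
# and hence of bath states required, is at most the subsystem dimension,
# independent of how large the environment is.
import numpy as np

n_A, n_B = 2, 8                          # subsystem / environment qubits
psi = np.random.randn(2**n_A, 2**n_B)    # coefficient matrix of |Psi>
psi /= np.linalg.norm(psi)

u, lam, vh = np.linalg.svd(psi, full_matrices=False)  # Schmidt decomposition
print(len(lam))                          # at most 2**n_A = 4 bath states
```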
The set of subsystem and bath orbitals then defines an embedded Hamiltonian to which a high level of theory is applied to obtain the subsystem density matrix ρA and energy contribution EA. The global density matrix ρ = |Ψ0〉〈Ψ0| is then optimised self-consistently with respect to some pre-defined cost function designed to match properties of the global state with properties obtained from the collection of subsystems. This process repeats until convergence is achieved. For a full introduction to DMET we refer the reader to Wouters et al.109 A related but distinct class of methods exists, known as bootstrap embeddings, which instead enforces consistency by directly matching overlapping fragment density matrices.113–117
In the usual case of classical-in-classical embedding, standard high-level methods such as coupled-cluster theory or the density matrix renormalisation group algorithm can be applied to the embedded Hamiltonian. However, as the size of the active space increases these methods must trade off accuracy and computational tractability. One solution is to treat many subsystems at the higher level of theory although this increases the difficulty of the self-consistent optimisation. An emerging solution is to treat the subsystem on a quantum computer using a quantum algorithm for ground state computation which handles the embedded Hamiltonian.
As these nascent devices develop they may be able to extend the utility of DMET by allowing for larger active spaces. This idea has been proposed and numerically verified for small systems.118–121 Additionally, several experiments run on real quantum hardware have yielded results matching classical benchmarks for DMET. Limited to small systems, the combination of DMET with VQE has facilitated simulations of a Hubbard lattice122 and hydrogen rings.16 Combining DMET with QSCI allowed a simulation of cyclohexane,40 which used 32 qubits on a superconducting quantum chip. This proof-of-concept demonstration adds further support for the hope that QSCI may overcome the limitations of VQE; however, it remains clear that further hardware and algorithmic developments will be required before quantum computers are able to replace classical methods within the DMET framework.
One key limitation for quantum computers is that DMET is iterative and requires the one-particle reduced density matrix of the active space to enforce self-consistency with the environment. While this explicitly captures entanglement information between the active space and environment, often leading to more accurate energy predictions than one-shot embedding methods, it requires computing many more expectation values and performing expensive quantum-classical feedback. This quantum overhead may be partially alleviated by replacing DMET with bootstrap embedding, as suggested by Liu et al.;123 however, the suggested quantum subroutines preclude implementation on near-term devices.
Qubit subspace methods adopt a more abstract approach, yielding hybrid orbitals that, while perhaps less physically motivated, encode additional information in each qubit by mixing molecular orbitals to form an entangled basis in which to describe the molecular system. After application of this orbital-mixing unitary, we apply single-qubit projection operators to fix the state of desired qubits and trace them out of the system. This may be thought of as freezing the new hybrid orbitals, although it also has the flexibility to describe different qubit bases; more generally speaking, we project onto a stabiliser subspace of the qubits.
By contrast, in a method such as Complete Active Space Self-Consistent Field (CASSCF), we still apply a unitary to optimise the orbital basis at each step, but the form of this unitary is restricted to single-particle rotations and thus cannot reach as rich a class of molecular orbital bases. In CASSCF, the orbital transformation preserves its class structure, meaning we maintain a strict separation between inactive, active, and virtual orbital spaces, whereas this does not generally apply for qubit subspace methods.
The projection operator for a single qubit indexed q, stabilised by a Pauli operator P ∈ {X, Y, Z} with eigenvalue sector σq ∈ {±1}, is $\mathcal{P}^{\sigma_q}_q = \tfrac{1}{2}(I + \sigma_q P_q)$. For a subset of qubits with indexing set $Q \subseteq \{1, \dots, N\}$, where N is the total number of qubits, the projection onto the corresponding stabiliser subspace takes the form

$$\mathcal{P}^{\sigma}_{Q} = \bigotimes_{q \in Q} \frac{1}{2}\big(I + \sigma_q P_q\big). \qquad (8)$$
Moreover, if one wishes to first apply a unitary prior to application of the single-qubit projectors, we may define the rotated projection

$$\mathcal{P}^{\sigma}_{Q}(U) := U^{\dagger}\, \mathcal{P}^{\sigma}_{Q}\, U, \qquad (9)$$

so that a Hamiltonian H is mapped into the subspace as

$$H \mapsto \operatorname{tr}_{Q}\big[\mathcal{P}^{\sigma}_{Q}(U)\, H\, \mathcal{P}^{\sigma}_{Q}(U)\big], \qquad (10)$$

where $\operatorname{tr}_{Q}$ is the partial trace over the qubits indexed by Q. We note that, due to the cyclicity of trace operations, one may equivalently view this map as a rotation of the operator H, followed by single-qubit projections:

$$H \mapsto \operatorname{tr}_{Q}\big[\mathcal{P}^{\sigma}_{Q}\, \big(U H U^{\dagger}\big)\, \mathcal{P}^{\sigma}_{Q}\big]. \qquad (11)$$
This framework is very general; for example, if U diagonalises H, then this corresponds with solving the problem exactly. Instead, one must design U in such a way that it is classically efficient to realise the rotation H → UHU†, for example by enforcing that U is (near-)Clifford. Qubit subspace techniques are differentiated by the way in which we choose the rotation unitary U, the qubit indices Q and the eigenspace sector σ. In the following subsections we explore several approaches.
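For P = Z, the action of eqn (8) and (11) reduces to simple index bookkeeping, as the following numpy sketch illustrates on a two-qubit example.

```python
# Sketch of the stabiliser subspace projection of eqn (8) and (11) for
# P = Z: once H contains only I or Z on qubit q, fixing the sector sigma
# and discarding q amounts to keeping the matrix rows/columns whose q-th
# bit matches the chosen eigenvalue.
import numpy as np

def project_out_qubit(H, q, sigma, n):
    """Project an n-qubit Hamiltonian matrix H onto the Z_q = sigma
    eigenspace and drop qubit q (qubit 0 is the most significant bit)."""
    bit = 0 if sigma == +1 else 1          # Z|0> = +|0>, Z|1> = -|1>
    keep = [i for i in range(2**n) if ((i >> (n - 1 - q)) & 1) == bit]
    return H[np.ix_(keep, keep)]

Z = np.diag([1.0, -1.0])
H = np.kron(Z, Z)                                  # H = Z0 Z1
print(project_out_qubit(H, q=1, sigma=+1, n=2))    # leaves Z0 on one qubit
```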
The electronic structure Hamiltonian in second quantisation reads

$$H = \sum_{pq} h_{pq}\, a^{\dagger}_p a_q + \frac{1}{2}\sum_{pqrs} h_{pqrs}\, a^{\dagger}_p a^{\dagger}_q a_r a_s, \qquad (12)$$

where hpq and hpqrs are the one- and two-electron integrals. The Hamiltonian is subsequently mapped onto qubits via a fermion encoding scheme such as Jordan–Wigner126 or Bravyi–Kitaev.127
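As a minimal sketch of the encoding step, the snippet below maps a single hopping term to Pauli operators with OpenFermion; the coefficient is arbitrary.

```python
# Sketch of the Jordan-Wigner mapping of a second-quantised hopping term
# using OpenFermion; the coefficient 0.5 is arbitrary.
from openfermion import FermionOperator, jordan_wigner

h_term = FermionOperator("0^ 1", 0.5) + FermionOperator("1^ 0", 0.5)
print(jordan_wigner(h_term))   # 0.25 (X0 X1 + Y0 Y1)
```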
In Fig. 3 we plot the canonical MO energies for benzene, C6H6, in a minimal STO-3G atomic orbital basis set. One notes that the core orbitals lie deep within a potential energy well, which is typical for most chemical systems. The implication is that it is energetically unfavourable for electrons lying deep within this well to be excited into the valence space. Instead, it is more likely that we will observe the greatest electronic activity around the Fermi level, where we find the gap between the highest occupied (HOMO) and lowest unoccupied (LUMO) molecular orbitals. This motivates the notion of a frozen core approximation, in which electrons occupying the lowest-lying MOs are frozen in place and not allowed to be excited into higher energy states.
Viewed as a qubit subspace technique, this is the simple case where U = I is the identity and $Q = \{q : \mu_q < -\delta\}$, where μ is the vector of canonical MO energies and δ > 0 is an energy threshold parameter chosen to truncate the MOs below the core potential. The sector is selected as σq = −1 to enforce that core orbitals are occupied. One may also freeze the valence space in a similar way by instead setting σq = +1 to project onto unoccupied orbitals and truncating the highest-energy MOs, rather than the lowest.
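A short sketch of this subspace selection follows, assuming the interleaved JW convention in which spatial orbital q maps to spin-orbital qubits 2q and 2q + 1; this ordering is a convention choice, not something fixed by the method.

```python
# Sketch of the frozen-core subspace data (U = I): qubit indices Q are
# drawn from MOs with energy below -delta, with sector -1 marking them
# occupied. Assumes interleaved JW ordering (spatial orbital q -> qubits
# 2q and 2q+1).
import numpy as np

def frozen_core_subspace(mo_energies, delta):
    frozen = np.where(np.asarray(mo_energies) < -delta)[0]
    Q = [i for q in frozen for i in (2 * q, 2 * q + 1)]
    sigma = [-1] * len(Q)        # Z eigenvalue -1 <=> occupied under JW
    return Q, sigma

print(frozen_core_subspace([-20.4, -1.3, -0.5, 0.6], delta=10.0))
# -> ([0, 1], [-1, -1]): the single core orbital is frozen
```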
A subset of symmetries of particular interest here are those of $\mathbb{Z}_2$-type, namely operators that describe a form of 2-fold symmetry. The $\mathbb{Z}_2$ symmetries possess a useful property: if [H, S] = 0 and S is a Pauli operator, then S commutes with every term of H = ∑khkPk individually, i.e. [Pk, S] = 0 ∀k. This fact leads us to a mechanism for reducing the number of qubits in the Hamiltonian without sacrificing any accuracy, since the full and reduced Hamiltonians are isospectral up to a change in eigenvalue multiplicities.
From the theory of stabilisers,132 given an independent set of N-qubit commuting Pauli operators $\mathcal{S}$, there exists a Clifford rotation C, a set of qubit indices $Q$ and a bijective map $f: \mathcal{S} \to Q$ such that, for each element $S \in \mathcal{S}$, we have CSC† = P(f(S)) for a single-qubit Pauli operator P ∈ {X, Y, Z}. In other words, the unitary C maps elements of the set $\mathcal{S}$ onto distinct qubit positions. The positions must be distinct due to the requirement that $\mathcal{S}$ be independent, namely that no single element of the set is a product of other elements, and commuting, so that it is not possible to rotate X and Z onto the same qubit position, say. We may also assume without loss of generality that P = Z, since conjugation by Hadamard and phase gates relates the three choices.
Now, suppose we take $\mathcal{S}$ to be the set of $\mathbb{Z}_2$ symmetries of a Hamiltonian H. Then, since for each Hamiltonian Pauli term Pk we have [Pk, S] = 0 for all $S \in \mathcal{S}$, it must be the case that [CPkC†, CSC†] = 0, as unitary rotations preserve commutation relations. The implication of this is that the rotated Hamiltonian term CPkC† must consist of either I or Z in the qubit positions indexed by $Q$. These positions may subsequently be projected out and dropped as in eqn (11), or in other words "tapered", from the Hamiltonian.133
When tapering, particular care must be taken to select the correct symmetry sector σ, which can typically be assigned from some knowledge of the underlying problem. In electronic structure, $\mathbb{Z}_2$ symmetries can arise either from spin up/down parity or from abelian subgroups of the geometrical point-group, describing discrete rotations, reflections or inversions of the molecule; an example is given in Fig. 4. We note that, while the full point-group might have a high-dimensional irreducible representation, the restriction to abelian subgroups greatly limits the degrees of freedom. One may refer to point-group tables to correctly choose the desired symmetry sector.134,135
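A worked tapering example is available through OpenFermion's symmetry_conserving_bravyi_kitaev routine, which removes the two qubits associated with the spin up/down particle parities; the toy two-orbital Hamiltonian below is illustrative and not one used in this work.

```python
# Sketch of Z2 tapering with OpenFermion: fixing the particle-number
# parities of the two spin sectors removes two qubits. The Hamiltonian
# below is a toy example; in practice it is built from the integrals of
# eqn (12).
from openfermion import FermionOperator, count_qubits, jordan_wigner
from openfermion.transforms import symmetry_conserving_bravyi_kitaev

h = (FermionOperator("0^ 0", -1.25) + FermionOperator("1^ 1", -1.25)
     + FermionOperator("2^ 2", -0.47) + FermionOperator("3^ 3", -0.47)
     + FermionOperator("0^ 1^ 1 0", 0.67))

full = jordan_wigner(h)                                # acts on 4 qubits
reduced = symmetry_conserving_bravyi_kitaev(h, 4, 2)   # 4 modes, 2 electrons
print(count_qubits(full), count_qubits(reduced))       # 4 -> 2 qubits
```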
Contextuality provides a broad conceptual picture of non-classical correlation;138–141 the particular flavour that is exploited in the contextual subspace method is strong measurement contextuality.142 In the setting of Pauli operators, this manifests as the presence of measurement contradictions in the same vein as Peres-Mermin magic squares.138,143 These contradictions arise from the violation of commutation transitivity among non-symmetry elements of a particular Pauli measurement set. Conversely, a noncontextual set is one whose compatibility graph consists of disjoint commuting cliques, all connected to a central symmetry set. An example of a noncontextual Hamiltonian is molecular hydrogen (H2) in the minimal STO-3G basis set. Its Pauli terms consist of two disjoint cliques {ZIII, IZII, IIZI, IIIZ} and {XXYY, XYYX, YXXY, YYXX}, with the symmetry component {IIII, IIZZ, IZIZ, IZZI, ZIIZ, ZIZI, ZZII} as viewed graphically in Fig. 5a.
A noncontextual Hamiltonian has the structure

$$H_{\mathrm{NC}} = \sum_{S \in \overline{\mathcal{S}}} h_S\, S + \sum_{k=1}^{K} \sum_{S \in \overline{\mathcal{S}}} h_{S,k}\, S C_k, \qquad (13)$$

where $\mathcal{S}$ is the symmetry generating set (with $\overline{\mathcal{S}}$ the group it generates), K ≤ 2N + 1 is the number of disjoint cliques, and Ck is a single representative from each clique, noting that the representatives pairwise anticommute, $C_k C_l = -C_l C_k$ for k ≠ l. For example, in Fig. 5a we have K = 2, with C1 = ZIII and C2 = XXYY, and $\mathcal{S}$ generating the symmetry component above.
, C1 = ZIII and C2 = XXYY. One obtains a classical objective function encoding the spectrum of HNC via a phase-space description of the underlying hidden variable model:144–146
![]() | (14) |
satisfies
. Optimising over the parameters ν, r yields the noncontextual ground state. In Fig. 5b we show the r optimisation landscape for H2, noting that the minimum coincides exactly with the FCI energy.
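Since, for fixed ν, the optimal unit vector r simply anti-aligns with the vector of clique weights, the noncontextual ground state can be located by brute force over the discrete variables. The sketch below is a simplified illustration of eqn (14): it treats each symmetry term's ν value as independent, whereas in the full method ν is assigned on the generators $\mathcal{S}$ and extended multiplicatively; the coefficients are placeholders.

```python
# Brute-force minimisation of the noncontextual objective, eqn (14).
# h_S[i]: coefficient of the i-th symmetry term; h_Sk[i, k]: coefficient
# of the i-th symmetry term paired with clique representative C_k.
import itertools
import numpy as np

def noncontextual_ground_energy(h_S, h_Sk):
    best = np.inf
    for nu in itertools.product([+1, -1], repeat=len(h_S)):
        nu = np.asarray(nu)
        w = nu @ h_Sk                   # one weight per clique
        # for fixed nu, the minimum over |r| = 1 of w.r is -|w|
        best = min(best, h_S @ nu - np.linalg.norm(w))
    return best

print(noncontextual_ground_energy(
    np.array([-1.05, 0.39]),
    np.array([[0.18, 0.17], [0.01, 0.02]])))
```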
Molecular hydrogen is a special case; in general, the electronic structure Hamiltonian is dominated by contextual interactions. In the contextual subspace approach, one projects symmetries $\mathcal{S}$ of a noncontextual model system over the full Hamiltonian, with the sector σ, as in the introduction to Section 4, identified by optimising the noncontextual objective function of eqn (14) and taking the elements νS that correspond with the chosen symmetries. The pairwise anticommuting clique representatives are moreover rotated onto a single Pauli operator prior to the stabiliser subspace projection, either via the sequence-of-rotations (RSeqRot) or linear-combination-of-unitaries (RLCU) construction.147 Paired with the same Clifford rotations as in qubit tapering that map $\mathcal{S}$ onto single-qubit Pauli operators, i.e. CSC† = Z(f(S)) for each $S \in \mathcal{S}$, we obtain the unitary U = CRSeqRot/LCU that one applies to the Hamiltonian prior to the single-qubit projections of eqn (11). We note that, due to the RSeqRot/LCU rotation, the unitary U is non-Clifford; however, in the LCU construction it is guaranteed that the increase in the number of Hamiltonian terms is at most LK, and even this worst-case increase may only be encountered in highly contrived scenarios.147
The benefit of the contextual subspace method over those presented above, such as qubit tapering, is that we may project into an arbitrarily small qubit subspace, whereas tapering is limited by the number of $\mathbb{Z}_2$ symmetries present. This feature allows the contextual subspace approach to accommodate any given quantum device, regardless of size, and as such describes a highly flexible approach to quantum computing for many application domains, of which quantum chemistry has been investigated most extensively.29,33,34,136,137,147,148 Of course, unlike tapering, we accumulate more error as we project into smaller subspaces, so this consideration needs to be navigated carefully. In Fig. 6 we demonstrate the decay of error as a contextual subspace is expanded from 1 to 23 qubits for sodium chloride (NaCl) in STO-3G; while it begins as a 36-qubit problem in its entirety, application of the contextual subspace method allows us to achieve the target accuracy of 1.6 mHa with just 13 qubits, a considerable saving compared with the full system. These results were obtained using the Python packages of ref. 149 and 150.
Hydronium is the protonated cation of molecular water, i.e. [H3O]+. In solution, the simplest hydration structure of hydronium is the Zundel cation [H5O2]+, where the hydronium ion is hydrogen-bonded to one other water molecule. For our testbed system we consider a planar Zundel cation, where the distance between the two waters is controlled by a parameter dOO, and the free proton is placed somewhere along this length at a distance dOp, thereby defining a proton ratio r := dOp/dOO ∈ (0, 1). Additional solvent water molecules are included around this Zundel system; initially four are placed at a distance dOH from the four hydrogens of the Zundel cation, and more can be placed randomly in space beyond this. See Fig. 7 for a diagram of this geometry. As the free proton moves from being local to one water molecule (r ≈ 0.2 or 0.8) to being shared between two (r ≈ 0.5), the amount of correlation energy in the system increases by about 20%, as can be seen in Fig. 8.
Fig. 7 The proton transfer geometry considered in the hardware proof-of-concept experiment. Two water molecules are separated by a distance dOO Å, with a free proton placed in-between at a distance of dOp Å from the left water. The proton ratio is defined by r := dOp/dOO ∈ (0, 1). These atoms are treated at the QM level, where projection-based embedding is used to split the system into density functional theory (DFT) and wavefunction (WF) subsystems. Further water atoms are placed around this system and treated at the molecular mechanics (MM) level, with the first solvation shell of four waters placed explicitly at dOH Å along the OH bonds of the QM waters. The graphic was produced with VMD.1
Fig. 8 The correlation energy Ecorr = EHF − EFCI, as a function of proton ratio r. Restricted-HF and full-CI calculations are performed on a single Zundel cation (the atoms in the blue region of Fig. 7) at three water separation distances with the STO-3G basis set. Across the three water separations, the mean correlation energy increase from r = 0.2 to 0.5 is 21%.
This system is then partitioned into subsystems which are treated at different levels of theory. The quantum region is defined as the Zundel cation, and the classical region comprises the remaining water molecules. The classical region is treated at the molecular mechanics level, whereas the quantum region is further partitioned into active and environment subregions via projection-based embedding (PBE), described in Section 3.1. The latter is treated with density functional theory, while the active region may be treated with conventional post-Hartree–Fock ab initio methods like coupled-cluster, or by constructing a qubit Hamiltonian and evaluating the energy with a quantum algorithm.
For the simulation we set up an electrostatically-coupled QM/MM simulation of the solvated Zundel system via the qmmm module of the package of ref. 149, where the MM engine of ref. 151 drives the MM force evaluations and the MolSSI Driver Interface (MDI) package152 facilitates communication between the two codes. The PBE procedure is executed with the package of ref. 153 using the SPADE localisation method.95,96 For the quantum chemistry calculations, the minimal STO-3G atomic basis set is employed, and Kohn–Sham DFT with the B3LYP exchange-correlation functional is used on the environment region. We obtained a Pauli Hamiltonian with the package of ref. 150 via the Jordan-Wigner (JW) fermion encoding.126
The resultant qubit Hamiltonian of the core O–H–O atoms encodes the exact electronic energy embedded within the DFT environment (for the given basis set and exchange–correlation functional used), with additional polarisation effects caused by the MM atoms which are modelled as point charges (see Section 2.1 for further details). We refer to this quantity as the FCI-in-DFT-in-MM energy. It is useful to note that the Zundel cation has 15 molecular orbitals in the minimal STO-3G basis set, which would therefore require 30 qubits to describe the QM system on a quantum computer under the JW encoding. However, through the application of PBE (see Section 3.1), we may project out the four environment hydrogen atoms, each contributing a single occupied molecular orbital in this basis set and thus resulting in a reduction by 8 qubits in the resulting qubit Hamiltonian. The planar Zundel system has several planes of symmetry, allowing for an additional 4 qubits to be tapered out.133 Finally, we use the frozen-core approximation which brings the final qubit count for our embedded system down to 16 qubits (see Section 4 for further detail on these methods).
For the QM/MM simulation, we place an additional 96 water molecules randomly around the Zundel system and perform a molecular dynamics (MD) propagation of these atoms, whilst the coordinates of the QM atoms with a specified (dOO, r) parameter set remain fixed. Embedded qubit Hamiltonian data is recorded for 6 steps at a time interval of 2.0 fs per step. The system size and number of steps have been chosen to produce a minimally-sized simulation which can be used as a test-bed for the multiscale embedding and subspace methods which are the main focus of this perspective. After all steps have been performed, an averaged Hamiltonian for that parameter set is produced by calculating the mean Pauli coefficient for each Pauli string in the set of Hamiltonians. The set of mean Hamiltonians are then passed to the IQM superconducting device for the FCI-in-DFT-in-MM energy evaluations.
The energy of the quantum region is calculated on the 20-qubit IQM quantum chip using quantum-selected configuration interaction (QSCI).36–43 A set of electron configurations (determinants) $\{|x_k\rangle\}_{k=1}^{K}$ is sampled from the quantum device. We then form the configuration subspace projection operator $\mathcal{P} = \sum_{k} |x_k\rangle\langle x_k|$ and project the electronic structure Hamiltonian into this space,

$$\bar{H}_{kl} = \langle x_k | H | x_l \rangle, \qquad (15)$$

using classical compute resources; by solving $\bar{H} v_j = \varepsilon_j v_j$ we obtain eigenstates $|\psi_j\rangle = \sum_k v^{(k)}_j |x_k\rangle$ of the projected Hamiltonian that satisfy $\varepsilon_j \geq E_j$, where Ej are the exact eigenvalues. For the hardware results in Fig. 9 we allowed a shot budget of $10^5$, which produced K < 500 unique configurations for each simulation, requiring only modest compute resources to diagonalise. Since the electronic energy is obtained via classical diagonalisation, QSCI is more robust to hardware noise than algorithms such as VQE, where the energy estimation itself is susceptible to corruption by noise. By contrast, in QSCI it is only the quality of the configuration subspace that suffers.
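The classical half of the procedure amounts to a subspace (selected-CI) diagonalisation; a minimal dense-matrix sketch is shown below, where in practice the matrix elements ⟨xk|H|xl⟩ would be assembled directly via Slater–Condon rules rather than from a full Hamiltonian matrix.

```python
# Sketch of the classical QSCI step, eqn (15): restrict H to the sampled
# configurations and diagonalise. Dense matrices are used for clarity only.
import numpy as np

def qsci_energy(H, bitstrings):
    idx = sorted({int(b, 2) for b in bitstrings})   # unique configurations
    H_sub = H[np.ix_(idx, idx)]                     # projected Hamiltonian
    return np.linalg.eigvalsh(H_sub)[0]             # variational upper bound

# toy 2-qubit example sampled in the states |01> and |10>
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.kron(Z, Z) + 0.5 * np.kron(X, X)
print(qsci_energy(H, ["01", "10"]))      # -1.5, the exact ground energy
```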
The QSCI method requires preparing a quantum state on the quantum chip that approximates the ground state solution or, more generally, shares sufficient support with the ground state to provide an accurate approximation to its energy. The best way to prepare such a state remains an open question. Many previous works fix the parameters of a local unitary cluster Jastrow (LUCJ) ansatz using excitation amplitudes obtained from coupled-cluster calculations.36–40 For this proof-of-concept demonstration, we instead use a direct matrix product state (MPS)-to-circuit mapping to warm-start the quantum chip with a low bond-dimension DMRG solution. We note there is no evidence that either of these approaches will be sufficient for quantum advantage without additional work being done by the quantum device. This is beyond the scope of this article and will be addressed in later work; instead, the demonstration serves here as a validation of our QM/MM approach and a proof-of-concept that quantum computational resources may be deployed within large-scale chemical workflows.
The results of the quantum energy evaluations are presented in Fig. 9 alongside the DMRG energy and the HF energy (obtained by direct computation, 〈ψHF|H|ψHF〉). The DMRG energy corresponds to the DMRG solution with bond-dimension truncated to 2 that is prepared on the quantum chip. We do not claim that our results outperform DMRG in general; indeed, with a moderately high bond-dimension DMRG can be used to find the exact ground state solution for this system. However, the QSCI results obtained from this reference state improve upon its accuracy which motivates the use of this method in the regime where exact DMRG becomes intractable. This accuracy improvement is attributed to the fact that the QSCI energy calculation is optimal within the subspace spanned by the sampled configurations whereas the DMRG energy is only optimal within the manifold of accessible MPS states with a capped maximum bond dimension. In this demonstration, it may also be the case that device noise enables us to sample from a larger set of relevant configurations than are present in the DMRG solution. While this effect of noise has been highlighted elsewhere as a potential benefit to QSCI,44 we do not expect it to provide a reliable mechanism for larger systems and encourage the development of methods that expand the available configuration space in a more systematic way.41–43
The energies were evaluated at three different oxygen separation values, indicated by the line dashing, and at five different proton ratios. For this proof-of-concept demonstration, the Hamiltonian has been chosen such that it can be evaluated with exact diagonalisation, so we report the energies relative to this reference. We also mark the region known as chemical accuracy (within 1.6 mHa of this reference), which is motivated by the limits of real-world experimental precision.154 However, since we are using the minimal STO-3G basis set, it may be more suitable to describe this region as representing algorithmic accuracy, that is, accuracy within the limits of the basis set as opposed to true quantitative chemical accuracy.155,156 STO-3G is almost always employed as the default basis in quantum chemistry studies on quantum computers, but as the quality of quantum hardware increases we expect that more effort will be made to move to larger basis sets, where results can be considered more chemically meaningful.
As can be seen, the QSCI energies are comfortably within this region at two points in the dOO = 3.3 Å evaluation, and near the boundary at several more evaluations. While this proximity to the exact solution is not seen across all of our energy evaluations, our implementation consistently yields more accurate energies than the direct HF and DMRG computations, the latter of which was used to prepare the quantum circuit from which we sample.
The proof-of-concept demonstration presented here combines classical molecular dynamics, quantum embedding and subspace methods into a single workflow, which allows quantum energy evaluations of chemical systems beyond the gas-phase regime.
In order to extend this workflow beyond the current fixed quantum geometry scheme, it is necessary to implement evaluations of the energy gradient with respect to nuclear coordinates of the QM nuclei. Within the PBE formalism, analytical nuclear gradients can be computed by introducing a total energy Lagrangian and finding its derivative when minimised with respect to the molecular orbitals.157,158 Various quantum algorithms have been proposed for evaluating energy gradients on quantum hardware, both analytical159–161 and numerical162,163 with requirements ranging from NISQ-friendly to fault-tolerant. Energy gradients may also be approximated directly with a finite-difference approach, although this raises the quantum overhead substantially due to repeated use of the device. We anticipate that challenges surrounding the evaluation of energy gradients can be tackled with a mixture of finite-difference approaches and algorithmic advances. With the ability to compute gradients, the quantum device can then be used to inform the updates of the quantum region nuclei, which inherently include the effect of the point-charge molecular mechanics atoms under the electrostatic QM/MM embedding. In such a setup, analysis of chemical properties beyond ground state energies is possible. For example, relaxation of the O–H–O atoms in the proof-of-concept example would allow for direct study of the proton transfer over a series of MD steps, allowing for accurate estimations of the energy barrier and consequent estimation of the hopping rate.
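A finite-difference gradient of the kind referred to above is conceptually simple, as the following sketch shows; the callable energy stands in for the entire embedded pipeline, so each evaluation incurs a fresh quantum energy calculation, which is the source of the overhead noted.

```python
# Sketch of a central finite-difference nuclear gradient. `energy` is a
# placeholder wrapping the full QM/MM + embedding + QSCI pipeline; each
# call triggers a new quantum energy evaluation.
import numpy as np

def fd_gradient(energy, coords, h=1e-3):
    """coords: (n_atoms, 3) array of nuclear positions; returns dE/dR."""
    grad = np.zeros_like(coords)
    for a in range(coords.shape[0]):
        for x in range(3):
            step = np.zeros_like(coords)
            step[a, x] = h
            grad[a, x] = (energy(coords + step)
                          - energy(coords - step)) / (2 * h)
    return grad
```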
As quantum processors and error mitigation techniques develop, it will be possible to include larger subsystems within quantum algorithms, and to reduce the number of circuit measurements required to obtain accurate results. However, the hybrid methods we describe are still dependent on reasonable scaling of their components. Although these methods are flexible in how computation is partitioned between classical and quantum processors, the overall method may be constrained by either component. For example, in the case of PBE, a full-system DFT calculation is required to initialise the embedding. Similarly, determination of an initial reference state by DMRG constrains the QSCI procedure. As such, admissible systems are those for which the approximate classical methods can practically be performed.
On the other hand, circuit measurement and readout of quantum processors pose a challenge in scaling. It is important to scrutinise the computational overhead of quantum algorithms to understand their scalability. For example, a popular prediction for QPE is that it could take 25 hours to compute a single electronic point energy for the P450 cytochrome molecule, based on the availability of a fully error-corrected device comprising 500,000 physical qubits, each with a very low error rate of 0.001%.164 Already, this is somewhat prohibitive for practical use-cases; furthermore, this is only for the calculation of electronic energies, and obtaining other ground state properties of interest, such as dipole moments or forces, requires considerable additional overhead in the QPE framework. This is because QPE does not permit direct access to the wavefunction, so further processing must be performed on the QPU itself to extract these quantities. By contrast, in VQE, as long as we can write down an observable for the quantity of interest, it can be estimated with little to no overhead beyond the base electronic energy calculation (assuming there is considerable overlap between the Pauli measurements required for these observables and the Hamiltonian). In QSCI, we gain direct access to the wavefunction owing to the classical diagonalisation step, although this imposes a strict classical limit on the size of configuration subspace we can hope to solve, which is the bottleneck of the algorithm. However, this is a limitation of all CI-based methods; the objective of QSCI is instead to identify higher-quality subspaces that lead to more compact representations of the wavefunction and thus allow us to extend the limits of CI applicability in terms of system scale.
Beyond intrinsic algorithmic considerations, the scaling of quantum algorithms is consistently challenged by hardware noise. Device fidelities and coherence times are constantly improving, enabling deeper and more complex quantum circuits to be reliably executed. However, in the pre-fault-tolerant regime, error mitigation and error suppression will always be essential to obtaining useful measurement results from the quantum device. For QSCI, for example, error mitigation/suppression techniques are essential to maximise the yield of configurations within the correct symmetry sector. Some techniques, such as dynamical decoupling,165,166 are widely useful and do not significantly increase the computational overhead as the system scales. On the other hand, readout error mitigation, while best practice for any quantum circuit execution, must be implemented with care, as naive methods impose an additional exponentially-scaling classical overhead on quantum computations.167,168 Finally, algorithm-specific error mitigation schemes such as zero-noise extrapolation169,170 and probabilistic error cancellation170,171 each come with their own trade-off between accuracy improvement and computational overhead. As we look to scale implementations of hybrid methods, a well-chosen combination of error mitigation/suppression techniques should improve the reliability of quantum computations without accumulating a prohibitive classical overhead.
The integration of these components introduces additional challenges for hybrid workflows. If QPUs are not tightly integrated into classical HPC resources, communication between devices may well undo any practical benefit of hybridisation.172
Overcoming these scaling challenges will be crucial to the development of hybrid quantum-classical algorithms, so that, as quantum hardware continues to improve, we can begin to study systems of sufficient complexity to challenge both the available quantum and classical compute resources and obtain scientifically important results beyond the reach of classical computers alone.
To this end, we highlighted a range of classical techniques that can significantly extend the applicability of current noisy intermediate-scale quantum (NISQ) devices (Sections 2, 3, 4). By identifying subdomains within larger simulations that are suitable for quantum treatment, facilitated by the QM/MM method demonstrated here for proton hopping (Section 5), we can offload the quantum-relevant components to QPUs. Furthermore, we presented a flexible and scalable simulation workflow that leverages many of these techniques in tandem, adapting to available quantum and classical resources. Such algorithmic and architectural advances are critical to realising quantum utility well before the advent of fully fault-tolerant quantum computers.
The software packages used in this work are cited throughout. The hardware results were obtained on the IQM superconducting device.
Footnotes
† These authors contributed equally to this work.
‡ Current address: Argonne National Laboratory, 9700 S Cass Ave, Lemont, IL 60439, USA.