Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States

We review the recent successes of grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on low-rank representations of multidimensional functions and integral operators, have led to an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and of the two-electron integrals (TEI) in $O(n\log n)$ complexity, using rank-structured approximations of the basis functions, electron densities and convolution integral operators, all represented on 3D $n\times n\times n$ Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D ``density fitting'' scheme. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations, for the MP2 energy correction and for the Bethe-Salpeter excited states, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is related to recent attempts to develop a tensor-based Hartree-Fock numerical scheme for finite lattice-structured systems, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating such a potential sum on an $L\times L\times L$ lattice exhibits computational work linear in $L$, $O(L)$, instead of the usual $O(L^3 \log L)$ scaling of Ewald-type approaches.


Introduction
The numerical modeling of many-particle interactions in large molecular systems, lattice-structured metallic clusters and crystals, proteins and nanomaterials is among the most challenging tasks in modern computational physics and chemistry. The traditional approaches to these multidimensional problems are restricted to concepts which have well recognized limitations for computations with higher accuracy, for larger non-periodic molecular systems, and for the efficient calculation of excited states. Recent advances in numerical analysis for multidimensional problems and significant achievements in high performance computing suggest new creative approaches to these problems.
The Hartree-Fock (HF) equation, governed by a 3D integro-differential operator [66,28], is one of the basic models for the ab initio calculation of the ground state energy of molecular systems. It is a strongly nonlinear eigenvalue problem (EVP) in the sense that one should solve the equation self-consistently, since the integral part of the governing operator depends on the solution itself. Multiple strong singularities in the electron density of a molecule, due to the nuclear cusps, impose strong requirements on the accuracy of the calculations.
Commonly used numerical methods for the solution of the Hartree-Fock equation are based on the analytical computation of the arising two-electron integrals (convolution-type integrals in $\mathbb{R}^3$) in problem-adapted, naturally separable Gaussian-type bases [1], by using erf-function expansions. This rigorous approach has resulted in a number of efficient implementations which are widely used in computational quantum chemistry. The success of the analytical integration methods stems from the large amount of precomputed information based on physical insight, including the construction of problem-adapted atomic orbital basis sets and elaborate nonlinear optimization for the calculation of density fitting bases. The known limitations of this approach appear due to the strong dependence of the numerical efficiency on the size and quality of the chosen Gaussian basis sets, which may become critical for larger molecular clusters and heavier atoms.
The intention to replace or assist the analytical calculations for the Hartree-Fock problem by data-sparse grid-based numerical schemes has a long history. The first success was the grid-based numerical method for diatomic molecules in [65], though this approach did not extend to compact (3D) molecules. The wavelet multiresolution schemes [12], capable of the accurate representation of nuclear cusps, have been applied to electronic structure simulations since the seminal papers [27,62], and recently this approach was further advanced due to achievements in high performance supercomputing [62,8,19,17]. However, due to the extensive computational resources required, the entirely wavelet-based or sparse-grid approaches [23,73] have so far been limited to rather small atomic systems with few electrons [2]. Tensor hyper-contraction decompositions in density fitting schemes have been analyzed in [31,58].
The newly developed tensor-structured numerical methods, both the name and the concept, appeared during the work on the grid-based tensor approach to the solution of the 3D Hartree-Fock problem [37,38,41]. The central point is the representation of d-variate functions and integral/differential operators on large $n^{\otimes d}$ grids and their approximation in low-rank tensor formats, which allows numerical calculations in $O(dn)$ complexity instead of the $O(n^d)$ of conventional methods.
In this paper we summarize the main benefits of tensor numerical methods in electronic structure calculations, and discuss further prospects of this approach in several directions:
• algebraic directional density fitting and tensor factorization of the two-electron integrals, and their use in Hartree-Fock calculations;
• tensor decompositions in the MP2 energy correction, and in the calculation of excited states based on the Bethe-Salpeter equation;
• fast tensor summation of electrostatic potentials on large 3D lattices for the efficient calculation of the Fock matrix for lattice-structured systems;
• fast calculation of the interaction energy of long-range potentials on large 3D lattices.
Notice that the basic rank-structured tensor formats, such as the canonical (PARAFAC) and Tucker tensor decompositions, have long been used in computer science for the quantitative analysis of correlations in experimental multidimensional data arrays in data processing and chemometrics, see [49] and references therein. In 2006, the exceptional properties of the Tucker decomposition for discretized multidimensional functions were revealed in [35,37], where it was proven that for a class of function-related tensors the approximation error of the Tucker decomposition decays exponentially with respect to the Tucker rank. This gives the opportunity to represent discretized multidimensional functions and integral (convolution) operators in an algebraically separable form, and thus to reduce the numerical treatment of multidimensional transforms to 1D operations. It was shown that the number of vectors in such a representation depends only logarithmically on the size $O(n^d)$ of the d-dimensional grid used for the discretization of multivariate functions.
The above results led to the idea of calculating the Hartree potential and the Coulomb and exchange operators by numerical quadratures [38], using Gaussian-type basis functions discretized on 3D Cartesian grids and the fast tensor-product convolution [34]. An essential ingredient here was the efficient low-rank canonical tensor representation of the Newton kernel [7]. To reduce the initial rank of the electron density, which scales quadratically in the number of GTO basis functions, the canonical-to-Tucker tensor transform was invented [38,37], which made computations on large tensor grids and for extended molecules tractable (even in Matlab).
The initial version of the tensor-structured algorithms for solving the Hartree-Fock equation employed the 3D grid-based calculation of the Coulomb and exchange integral operators "on-the-fly", thus avoiding precomputation and storage of the TEI tensor [39,41]. In particular, it was justified that tensor calculus allows one to reduce the 3D convolution integrals to combinations of 1D convolutions, and 1D Hadamard and scalar products. Besides, these results promoted the spread of tensor-structured methods in the numerical analysis community [21,36,26], and the further development of tree-tensor formats like the tensor train [56] and hierarchical Tucker [25] representations.
Further development of tensor methods in electronic structure calculations was due to the fast algorithm for the grid-based computation of the TEI tensor [43] in $O(N_b^3)$ storage in the number of basis functions $N_b$. The fourth order TEI tensor is calculated in the form of a Cholesky factorization by using the algebraic 1D "density fitting" scheme, which applies to the products of basis functions. Imposing low-rank tensor representations of the product basis functions and of the Newton convolving kernel, all discretized on an $n\times n\times n$ Cartesian grid, the 3D integral transforms are calculated in $O(n\log n)$ complexity. Given the factorized TEI, the update of the Coulomb and exchange parts of the Fock matrix reduces to cheap algebraic operations. Further steps are the tensor calculation of the core Hamiltonian and the efficient MP2 energy correction scheme [44], which all together gave rise to the black-box Hartree-Fock solver [45].
Due to the grid representation of the basis functions, the basis sets are no longer restricted to Gaussian-type orbitals, but may consist of any well-separable functions defined on a grid. The tensor-based Hartree-Fock solver is competitive in computational time and accuracy with the standard packages based on the analytical calculation of integrals. High accuracy is attained owing to easy calculations on large 3D grids of up to $n^3 = 10^{18}$ entries, so that a resolution with mesh size h of the order of atomic radii, $10^{-4}$ Å, is possible.
Motivated by tensor decompositions in the MP2 scheme, a new approach to the calculation of excited states in the framework of the Bethe-Salpeter equation was recently introduced [6]; it employs the reduced basis method in combination with low-rank tensor approximations.
Further developments of tensor methods in multi-particle simulations are focused on large lattice structures in a box and on nearly periodic systems, for which the tensor approach may reduce the computational costs dramatically [47]. One of the challenging problems in the numerical treatment of crystalline-type molecular clusters is the summation of a large number of electrostatic potentials distributed on a finite (non-periodic) lattice. The novel grid-based method for the summation of long-range potentials in the canonical and Tucker formats [46,48] works on $L\times L\times L$ lattices with computational cost $O(L)$, instead of the $O(L^3\log L)$ of traditional Ewald-type methods [18]. The required precision is guaranteed by employing large 3D Cartesian grids for the representation of the potentials. The method remains efficient for lattices with non-rectangular geometries and in the presence of multiple defects [48,47].
The rest of the paper is organized as follows. In Section 2, we overview the basic tensor formats and show why the orthogonal Tucker tensor decomposition, originating from computer science and data processing, became useful for the treatment of multidimensional functions and operators in numerical analysis. In particular, §2.3 and §2.4 discuss the matrix product states (tensor train) formats and the novel quantics tensor approximation method, respectively, while §2.5 addresses the basic tensor-structured multilinear algebra operations. Section 3 describes the tensor calculus of the multidimensional convolution transform on the examples of the Hartree potential (§3.1) and the TEI tensor (§3.2), and recalls the main building blocks of the tensor-based black-box Hartree-Fock solver. Section 4 shows the benefits of the tensor approach in MP2 calculations and in the calculation of the lowest part of the excitation energies in the framework of the Bethe-Salpeter equation. Section 5 describes the benefits of tensor methods in applications to lattice-type molecular systems. In particular, we present a fast tensor method for the summation of long-range interaction potentials on a 3D lattice by using the assembled vectors of their canonical and Tucker tensor representations. The important application of this approach to the calculation of the interaction energy of Coulombic potentials on a lattice with sub-linear cost in the lattice size is described in detail. The Appendix discusses some computational details of the canonical-to-Tucker tensor transform, the Galerkin discretization scheme for the nonlinear Hartree-Fock equation, and the basics of the low-rank canonical tensor representation of the Newton kernel.

Rank-structured tensor representations of discretized functions and operators
In this section we briefly discuss why rank-structured tensors, which were traditionally used in experimental data processing, turn out to be useful for the separable representation of multivariate functions and operators on tensor product grids. The canonical and Tucker tensor decompositions have long been used in the computer science community for the quantitative analysis of correlations in experimental multidimensional data arrays in chemometrics and data processing, with rather weak requirements on the accuracy, see [63,49] and references therein. In the recent decade, the class of matrix product states type tensor formats became popular in the simulation of quantum spin systems and quantum molecular dynamics. We refer to [49,61,36] for recent literature surveys on commonly used tensor formats.

Canonical and Tucker tensor formats
We consider a tensor of order d as a real multidimensional array $A = [a_{i_1,\ldots,i_d}] \in \mathbb{R}^{n_1\times\cdots\times n_d}$, numbered by a d-tuple index set, with multi-index notation $i = (i_1,\ldots,i_d)$, $i_\ell \in \{1,\ldots,n_\ell\}$, $\ell = 1,\ldots,d$. It is an element of the linear vector space $\mathbb{R}^{n_1\times\cdots\times n_d}$ equipped with the Euclidean scalar product; Euclidean vectors and matrices are special cases of dth order tensors. For a general tensor, the required storage scales exponentially in the dimension, $n_1 n_2\cdots n_d$ (the so-called "curse of dimensionality"). To get rid of the exponential scaling in the dimension, one can apply rank-structured separable representations of multidimensional tensors. The simplest separable element is given by a rank-1 canonical tensor, $U = u^{(1)}\otimes\cdots\otimes u^{(d)}$, with entries $u_{i_1,\ldots,i_d} = u^{(1)}_{i_1}\cdots u^{(d)}_{i_d}$. A tensor in the R-term canonical format is defined by
$U = \sum_{k=1}^{R} c_k\, u^{(1)}_k \otimes \cdots \otimes u^{(d)}_k, \qquad (2.1)$
where $u^{(\ell)}_k \in \mathbb{R}^{n_\ell}$ are normalized vectors, and R is called the canonical rank of the tensor. It is convenient to introduce the so-called side matrices $U^{(\ell)} = [u^{(\ell)}_1,\ldots,u^{(\ell)}_R] \in \mathbb{R}^{n_\ell\times R}$, $\ell = 1,\ldots,d$. Now the storage cost is bounded by $dRn$. For $d \geq 3$, the computation of the canonical rank of a tensor U, i.e. the minimal number R in representation (2.1), and of the respective decomposition, is an NP-hard problem. In the case $d = 2$, the representation (2.1) is merely a rank-R matrix.
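As a minimal numpy sketch (our own illustration, with hypothetical function names, not code from the paper), a rank-R canonical tensor is stored via the weight vector $c$ and the side matrices, and is expanded to full format only for verification:

```python
import numpy as np

def canonical_to_full(c, factors):
    """Expand U = sum_k c_k u_k^(1) x u_k^(2) x u_k^(3) from the weight
    vector c (length R) and side matrices factors[l] of shape (n_l, R)."""
    assert len(factors) == 3  # hard-wired for 3rd order tensors
    # einsum contracts the shared rank index k of all side matrices
    return np.einsum('ik,jk,lk,k->ijl', *factors, c)

# rank-2 canonical tensor on a 4 x 5 x 6 grid:
# storage is R*(n1+n2+n3)+R = 32 numbers instead of 4*5*6 = 120
rng = np.random.default_rng(0)
c = np.array([1.0, -0.5])
factors = [rng.standard_normal((n, 2)) for n in (4, 5, 6)]
A = canonical_to_full(c, factors)
```

The full expansion is of course only feasible for tiny grids; the point of the format is that all later operations act on the factors directly.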
We say that a tensor V is represented in the rank-r orthogonal Tucker format with rank parameter $r = (r_1,\ldots,r_d)$ if
$V = \sum_{\nu_1=1}^{r_1}\cdots\sum_{\nu_d=1}^{r_d} \beta_{\nu_1,\ldots,\nu_d}\, v^{(1)}_{\nu_1} \otimes\cdots\otimes v^{(d)}_{\nu_d},$
where $\{v^{(\ell)}_{\nu_\ell} \in \mathbb{R}^{n_\ell}\}$ is a set of orthonormal vectors and $\beta = [\beta_{\nu_1,\ldots,\nu_d}] \in \mathbb{R}^{r_1\times\cdots\times r_d}$ is the Tucker core tensor. The storage cost for a Tucker tensor is bounded by $drn + r^d$, $r = \max_\ell r_\ell$. In the case $d = 2$, the orthogonal Tucker decomposition is equivalent to the singular value decomposition (SVD) of a rectangular matrix. Figure 2.1 visualizes the canonical and Tucker tensors in the case $d = 3$. The canonical tensor decomposition leads to an ill-posed problem and, to the best of our knowledge, there are no robust algebraic algorithms for the canonical approximation of an arbitrary tensor. For some classes of analytic functions, an explicit low-rank approximation can be constructed in analytic form by using sinc-quadrature approximations of the Laplace transform. The robust methods for Tucker decomposition are based on orthogonal projections using the higher order singular value decomposition (HOSVD) [15], which is a generalization of the SVD of matrices.
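The HOSVD projection just mentioned can be sketched for third order tensors as follows (a simplified numpy illustration under our own naming, not the full algorithm of [15]): the mode bases come from truncated SVDs of the unfoldings, and the core from an orthogonal projection.

```python
import numpy as np

def hosvd3(A, ranks):
    """Rank-(r1,r2,r3) orthogonal Tucker approximation of a 3rd order
    tensor by HOSVD: truncated SVD of each mode unfolding gives the
    orthonormal mode frames; the core is the orthogonal projection."""
    U = []
    for ell in range(3):
        # mode-ell unfolding: n_ell x (product of the other mode sizes)
        unfold = np.moveaxis(A, ell, 0).reshape(A.shape[ell], -1)
        u, _, _ = np.linalg.svd(unfold, full_matrices=False)
        U.append(u[:, :ranks[ell]])
    beta = np.einsum('ijk,ia,jb,kc->abc', A, U[0], U[1], U[2])
    return beta, U

def tucker_to_full(beta, U):
    """Contract the core with the orthonormal mode frames."""
    return np.einsum('abc,ia,jb,kc->ijk', beta, U[0], U[1], U[2])
```

With full ranks the reconstruction is exact; truncating the ranks gives the quasi-optimal Tucker approximation.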
The rank-structured decomposition (approximation) of multidimensional tensors provides the means for the separation of variables in the discretized representation of multivariate functions and operators, and thus the possibility to substitute multidimensional algebraic transforms by univariate operations. Notice that in the computer science community these possibilities were restricted to moderate-dimensional tensors of small mode size (with rather large rank parameters) obtained from experimental data sets.

2.2 Tucker decomposition for function-related tensors. Canonical-to-Tucker approximation
In 2006 it was first verified numerically, see [37], that the rank of the (fixed accuracy) Tucker approximation to some function-related tensors depends only logarithmically on the size of the discretization grid. For a given continuous function, we introduced the function-related 3rd order tensor obtained by its Galerkin discretization in a volume box using an $n\times n\times n$ 3D Cartesian grid. In particular, low-rank tensor approximations were calculated for functions of Slater type, $f(x) = e^{-\alpha\|x\|}$, and for the Helmholtz potential. Numerical tests demonstrated that the error of the rank-r (with $r = (r,r,r)$) Tucker approximation applied to these third order tensors decays exponentially with respect to the Tucker rank r. Moreover, for a fixed approximation error, the Tucker rank r scales as $O(\log n)$ for the above mentioned functions [34]. This application revealed a number of properties of the Tucker format which were not known before. In particular, it was shown that for tensors resulting from the discretization of physically relevant multidimensional functions on tensor grids, one can find algebraically a reduced subspace in each mode of the original tensor, thus approximating the function in separated variables using a tensor product of a relatively small number of vectors. Numerical experiments demonstrate that the exponential decay of the approximation error is nearly the same for a single potential and for a sum of such potentials distributed over the nodes of a cubic lattice. This property of the Tucker decomposition will be further gainfully applied to the fast lattice summation of interaction potentials. Motivated by these observations, we invented the canonical-to-Tucker (C2T) decomposition for function-related tensors [37], based on the reduced higher order singular value decomposition (RHOSVD) (Theorem 2.5, [38]). The canonical-to-Tucker algorithm (see Appendix), combined with the Tucker-to-canonical transform, serves for reducing the ranks of canonical tensors with large R. Figure 2.4, in its ALS part, shows the algorithmic step for ℓ = 1, which is repeated for every mode ℓ = 1, 2, 3 (see Appendix). The computational work of the multigrid tensor decomposition C2T algorithm introduced in [38] exhibits linear complexity scaling with respect to all input parameters, $O(Rnr)$.
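The key point of the RHOSVD, that the Tucker mode frames of a canonical input can be obtained from SVDs of its (weighted) side matrices without ever forming the full tensor, can be sketched as follows. This is our own simplified illustration of the initial-guess step only; the ALS refinement and the multigrid acceleration of [38] are omitted.

```python
import numpy as np

def rhosvd(c, factors, r):
    """Reduced HOSVD (initial guess for canonical-to-Tucker): the Tucker
    frame of each mode is a truncated SVD basis of the weighted side
    matrix, so the full n x n x n tensor is never assembled."""
    U = []
    for F in factors:
        u, _, _ = np.linalg.svd(F * c, full_matrices=False)  # scale columns by c_k
        U.append(u[:, :r])
    # project the canonical factors onto the reduced bases to get the core
    proj = [u.T @ F for u, F in zip(U, factors)]             # each r x R
    beta = np.einsum('ak,bk,ck,k->abc', proj[0], proj[1], proj[2], c)
    return beta, U
```

The cost is governed by the SVDs of the $n\times R$ side matrices, in line with the $O(Rnr)$-type scaling discussed above, rather than by any operation on $n^3$ data.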
The multigrid Tucker decomposition algorithm applied to full format tensors [38] leads to the complexity scaling $O(n^3)$, whereas its standard version (the commonly used HOSVD algorithm in computer science) scales as $O(n^4)$, becoming intractable even for moderate tensor sizes.
We conclude by noting that optimized Tucker and canonical tensor approximations can be computed by the alternating least squares (ALS) iteration with an initial guess obtained by the HOSVD/RHOSVD approximations.

Matrix Product States/Tensor Train format
The product-type representation of dth order tensors, called in the physics literature the matrix product states (MPS) decomposition (or, more generally, tensor network states models), was introduced and successfully applied in DMRG quantum computations [72,68,61] and, independently, in quantum molecular dynamics as the multilayer (ML) MCTDH methods [70,53]. MPS-type representations reduce the storage complexity to $O(dr^2 N)$, where r is the maximal rank parameter.
In recent years, various versions of the MPS-type tensor approximations were discussed and further investigated in the mathematical literature, including the hierarchical dimension splitting [35], the tensor train (TT) [56,13], the quantics-TT (QTT) [33], as well as the hierarchical Tucker representation [25], which belongs to the class of tensor network states models.
The TT format is the particular form of MPS factorization in the case of open boundary conditions. For a given rank parameter $r = (r_0,\ldots,r_d)$ with $r_0 = r_d = 1$, the rank-r TT format contains all elements $A = [a_{i_1,\ldots,i_d}] \in \mathbb{R}^{n_1\times\cdots\times n_d}$ which can be represented as contracted products of 3-tensors; in index notation,
$a_{i_1,\ldots,i_d} = \sum_{\alpha_1=1}^{r_1}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} A^{(1)}_{i_1,\alpha_1}\, A^{(2)}_{\alpha_1, i_2, \alpha_2}\cdots A^{(d)}_{\alpha_{d-1}, i_d},$
specified equivalently by the vector-valued $r_{\ell-1}\times r_\ell$ matrices $A^{(\ell)}(i_\ell)$. The latter representation is written in the matrix product form, explaining the notion MPS,
$a_{i_1,\ldots,i_d} = A^{(1)}(i_1)\, A^{(2)}(i_2)\cdots A^{(d)}(i_d).$
Figure 2.5 illustrates the TT representation of a 5th-order tensor, where each particular entry is factorized as a product of five matrices, $a_{i_1,i_2,\ldots,i_5} = A^{(1)}(i_1) A^{(2)}(i_2)\cdots A^{(5)}(i_5)$. The rank-structured tensor formats like canonical, Tucker and MPS/TT-type decompositions also apply to matrices. For example, the d-dimensional FFT matrix over an $N^{\otimes d}$ grid can be applied to a rank-k tensor with linear-logarithmic cost $O(dkN\log_2 N)$, due to the rank-1 factorized representation
$\mathcal{F}_d = F^{(1)}_N \otimes \cdots \otimes F^{(d)}_N,$
where $F^{(\ell)}_N \in \mathbb{C}^{N\times N}$ represents the univariate FFT matrix along mode $\ell$.
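The matrix product structure of a TT tensor can be made concrete in a few lines of numpy (our own sketch; core shapes follow the usual $(r_{\ell-1}, n_\ell, r_\ell)$ convention with boundary ranks 1):

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate a[i1,...,id] of a TT/MPS tensor as the matrix product
    A^(1)(i1) ... A^(d)(id); cores[l] has shape (r_{l-1}, n_l, r_l),
    with boundary ranks r_0 = r_d = 1."""
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]          # multiply by the i-th slice matrix
    return float(v[0, 0])

def tt_to_full(cores):
    """Contract all TT cores into the full tensor (small sizes only)."""
    A = cores[0]                    # shape (1, n_1, r_1)
    for G in cores[1:]:
        A = np.tensordot(A, G, axes=(-1, 0))
    return A.reshape(A.shape[1:-1])
```

A single entry costs $O(d r^2)$ matrix-vector work, which is what makes streaming access to TT tensors cheap.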

Quantics tensor approximation of functional vectors
In the case of large mode size, the asymptotic storage cost for a class of function-related $N^{\otimes d}$ tensors can be reduced to $O(d\log N)$ by using the quantics-TT (QTT) tensor approximation method [33]. The QTT-type approximation of an N-vector with $N = 2^L$, $L\in\mathbb{N}$, is defined as the tensor decomposition (approximation), in the canonical, TT or more general formats, applied to the tensor obtained by the dyadic folding (reshaping) of the target vector into an L-dimensional $2\times\cdots\times 2$ data array (tensor), which is viewed as an element of the L-dimensional quantized tensor space.
In the vector case, i.e. for $d = 1$, a vector $x = [x_i]\in\mathbb{R}^N$ is reshaped to its quantics image $Y = [y_{j_1,\ldots,j_L}]$, where for fixed i we set $y_{j_1,\ldots,j_L} := x_i$, with $j_\nu = j_\nu(i) = C_{\nu-1} + 1$ ($\nu = 1,\ldots,L$) defined via binary coding, i.e. the coefficients $C_{\nu-1}\in\{0,1\}$ are found from the binary representation of $i-1$,
$i - 1 = \sum_{\nu=1}^{L} C_{\nu-1}\, 2^{\nu-1}.$
The next figure visualizes the QTT approximation process. Suppose that the quantics image of an N-vector, i.e. an element of the L-dimensional quantized tensor space $\otimes_{j=1}^{L}\mathbb{R}^2$ with $L = \log_2 N$, can be represented (approximated) by a low-rank canonical or TT tensor of order L; this introduces the QTT approximation of the N-vector. Given rank parameters $\{r_k\}$ ($k = 1,\ldots,L$) with $r_k \le r$, the QTT approximation of an N-vector requires a number of representation parameters estimated by $2r^2\log_2 N \ll N$, providing log-volume complexity scaling in the size N of the initial vector. For $d > 1$ the construction is similar [33].
The power of the QTT approximation method is explained by its theoretical substantiation in [33], establishing perfect rank-r decompositions for a wide class of function-related tensors obtained by sampling continuous functions on uniform (or properly refined) grids:
• r = 1 for complex exponentials;
• r = 2 for trigonometric functions and plane waves;
• r ≤ m + 1 for polynomials of degree m;
• r is a small constant for wavelet basis functions, Gaussians, etc.,
all independently of the vector size N.
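The dyadic folding and the exact rank-1 structure of exponential vectors can be checked directly in numpy. One detail is our convention: with numpy's row-major `reshape`, the leading axis of the quantics image carries the most significant bit.

```python
import numpy as np

L = 10
N = 2 ** L                        # vector size N = 2^L
q = 1.01
x = q ** np.arange(N)             # exponential N-vector, x_i = q^i

# dyadic folding: reshape into an L-dimensional 2 x ... x 2 quantics image
T = x.reshape([2] * L)

# rank-1 structure: q^i = prod_k q^(b_k 2^k) for i = sum_k b_k 2^k, so the
# quantics image is an outer product of L vectors (1, q^(2^k)); with
# row-major reshape, axis 0 corresponds to the highest bit k = L-1
vecs = [np.array([1.0, q ** (2 ** k)]) for k in reversed(range(L))]
R1 = vecs[0]
for v in vecs[1:]:
    R1 = np.multiply.outer(R1, v)
```

Storing the `vecs` factors takes $2L = 20$ numbers instead of $N = 1024$, which is exactly the log-volume compression claimed above.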
Notice that the notion of quantics (or quantized) tensor approximation (with the shorthand QTT), originally introduced in 2009, see [33], is reminiscent of the "quantum of information", mimicking the minimal possible mode size (n = 2) of the quantized image.
Concerning the matrix case, it was first found in [55], by numerical tests, that in some cases the dyadic reshaping of an $N\times N$ matrix with $N = 2^L$ may lead to a small TT-rank of the resultant matrix rearranged into tensor form. The efficient low-rank QTT representation for a class of discrete multidimensional elliptic operators (matrices) and their inverses was proven in [32]. Moreover, based on the QTT approximation, important algebraic matrix operations like the FFT, convolution and wavelet transforms can be implemented by superfast algorithms of $O(\log_2 N)$ complexity, see the survey paper [36] and references therein.

Rank-structured tensor operations in 1D complexity
Rank-structured tensor representations reduce multilinear operations on multidimensional tensors to univariate (1D) operations, providing fast multilinear algebra with linear complexity scaling in the dimension d.
For given canonical tensors A and B as in (2.1), with ranks $R_a$ and $R_b$ respectively, their Euclidean scalar product can be computed by univariate operations,
$\langle A, B\rangle = \sum_{k=1}^{R_a}\sum_{m=1}^{R_b} c_k\, \tilde{c}_m \prod_{\ell=1}^{d} \langle u^{(\ell)}_k, v^{(\ell)}_m\rangle. \qquad (2.4)$
Summation of two tensors in the canonical format, C = A + B, is performed by a simple concatenation of their factor matrices; the rank of the resulting canonical tensor increases up to $R_c \le R_a + R_b$. In electronic structure calculations, the 3D convolution transform with the Newton kernel, $\frac{1}{\|x-y\|}$, is the most computationally expensive operation. The tensor method to compute this convolution on large $n\times n\times n$ Cartesian grids in $O(n\log n)$ complexity was introduced in [34]. Given canonical tensors A, B, their convolution product is represented by a sum of tensor products of 1D convolutions,
$A * B = \sum_{k=1}^{R_a}\sum_{m=1}^{R_b} c_k\, \tilde{c}_m\, (u^{(1)}_k * v^{(1)}_m) \otimes (u^{(2)}_k * v^{(2)}_m) \otimes (u^{(3)}_k * v^{(3)}_m),$
where $u^{(\ell)}_k * v^{(\ell)}_m$ denotes the univariate convolution product of n-vectors. The cost of the tensor convolution in both storage and time is estimated by $O(R_a R_b\, n\log n)$. The resulting algorithm considerably outperforms conventional 3D FFT-based approaches of complexity $O(n^3\log n)$, see numerics in [38].
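Both formulas above can be sketched and verified on tiny grids (our own numpy illustration; function names are hypothetical):

```python
import numpy as np

def canonical_inner(cA, UA, cB, UB):
    """Euclidean scalar product of canonical tensors via 1D operations:
    <A,B> = sum_{k,m} c_k c~_m prod_l <u_k^(l), v_m^(l)>  -- eq. (2.4)."""
    G = np.ones((len(cA), len(cB)))
    for FA, FB in zip(UA, UB):
        G *= FA.T @ FB              # mode-l Gram matrix, R_a x R_b
    return float(cA @ G @ cB)

def rank1_conv3(a, b):
    """3D convolution of rank-1 tensors a1 x a2 x a3 and b1 x b2 x b3
    as the tensor product of three 1D convolutions."""
    f1, f2, f3 = (np.convolve(ai, bi) for ai, bi in zip(a, b))
    return np.multiply.outer(np.multiply.outer(f1, f2), f3)
```

The Gram-matrix form of the scalar product makes the $O(d R_a R_b n)$ cost explicit; the convolution of general canonical tensors is the double sum of `rank1_conv3` terms.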
Sequences of rank-structured operations on matrices and vectors normally lead to an increase of the tensor ranks, which are usually multiplied or added after each operation. The necessary rank reduction in the Tucker and MPS type formats can be implemented by stable algorithms based on the higher order SVD. In the physics community, HOSVD-type algorithms have long been known as the Schmidt decomposition [69,68,61]. In the case of canonical tensors, the rank reduction can be performed by the RHOSVD algorithm based on the canonical-to-Tucker and then Tucker-to-canonical transforms described and analyzed in [37,38,48] (see §2.2 and Appendix), which has demonstrated stable behavior for most of the examples in the Hartree-Fock calculations we have considered so far. The stability conditions for the RHOSVD have been discussed in [38,48].
3 Tensor calculus for the Hartree-Fock equation

Calculation of multi-dimensional integrals
Tensor-structured calculation of the multidimensional convolution integral operators with the Newton kernel was introduced in [38,41,39], where, on the examples of the Hartree and exchange operators in the Hartree-Fock equation, it was shown that the calculation of the 3D and 6D convolution integrals can be reduced to a combination of 1D Hadamard products, 1D convolutions and 1D scalar products.
The Gaussian basis functions are separable, $g_k(x) = g_k^{(1)}(x_1)\, g_k^{(2)}(x_2)\, g_k^{(3)}(x_3)$, so that their discretization on an $n\times n\times n$ Cartesian grid samples the univariate factors $g_k^{(\ell)}(x_\ell)$, $\ell = 1, 2, 3$, yielding their rank-1 tensor representation,
$G_k = g_k^{(1)} \otimes g_k^{(2)} \otimes g_k^{(3)} \in \mathbb{R}^{n\times n\times n}. \qquad (3.1)$
Let us consider the tensor calculation of the Hartree potential,
$V_H(x) = \int_{\mathbb{R}^3} \frac{\rho(y)}{\|x - y\|}\, dy,$
and the corresponding Coulomb matrix, where the electron density is $\rho(x) = 2\sum_{a=1}^{N_{orb}} \psi_a(x)^2$, with $\psi_a$ the occupied molecular orbitals. Given the discrete tensor representation (3.1) of the basis functions, the electron density is approximated using 1D Hadamard products of rank-1 tensors (instead of products of Gaussians),
$\Theta = \sum_{k,m} D_{km}\, (g_k^{(1)} \odot g_m^{(1)}) \otimes (g_k^{(2)} \odot g_m^{(2)}) \otimes (g_k^{(3)} \odot g_m^{(3)}),$
where $D = [D_{km}]$ is the density matrix. Further, the representation of the Newton kernel $\frac{1}{\|x - y\|}$ by a canonical rank-$R_N$ tensor [7] is used (see Appendix for details),
$P_{R_N} = \sum_{q=1}^{R_N} p_q^{(1)} \otimes p_q^{(2)} \otimes p_q^{(3)}. \qquad (3.2)$
Since large ranks make tensor operations inefficient, the multigrid canonical-to-Tucker and Tucker-to-canonical algorithms should be applied to reduce the initial rank of $\Theta \to \Theta'$ by several orders of magnitude. Then the 3D tensor representation of the Hartree potential is calculated by the 3D tensor product convolution, which is a sum of tensor products of 1D convolutions,
$V_H = \Theta' * P_{R_N} = \sum_{k}\sum_{q} c_k\, (u_k^{(1)} * p_q^{(1)}) \otimes (u_k^{(2)} * p_q^{(2)}) \otimes (u_k^{(3)} * p_q^{(3)}),$
where $\sum_k c_k\, u_k^{(1)} \otimes u_k^{(2)} \otimes u_k^{(3)}$ denotes the reduced-rank canonical form of $\Theta'$.
The Coulomb matrix entries $J_{km}$ are obtained by 1D scalar products of $V_H$ with the Galerkin basis consisting of rank-1 tensors,
$J_{km} = \langle G_k \odot G_m,\, V_H \rangle.$
The cost of the 3D tensor product convolution is $O(n\log n)$, instead of $O(n^3\log n)$ for the standard benchmark 3D convolution using the 3D FFT. Next, the structured calculation of the 6D integrals in the exchange potential operator was introduced in [39]; the contributions from the a-th orbital are approximated by a tensor ansatz of the form
$K_{km}^{(a)} = \langle G_k \odot \Psi_a,\; P_{R_N} * (G_m \odot \Psi_a) \rangle,$
where $\Psi_a$ denotes the tensor representation of the a-th molecular orbital. Here, the tensor product convolution is first calculated for each a-th orbital, and then scalar products in the canonical format yield the contributions to the entries of the exchange Galerkin matrix from the a-th orbital. These algorithms were employed in the first tensor-structured solver using 3D grid-based evaluation of the Coulomb and exchange matrices in 1D complexity at every step of the SCF EVP iteration [40,41]. A sequence of dyadically refined 3D Cartesian grids was used to reduce the time of the first iterations, with an ε convergence criterion for switching to larger grids. This is a nonstandard computational scheme avoiding the calculation of the two-electron integrals. The accuracy for small molecules like H2O and CH4 was of the order of $10^{-4}$ Hartree. Though the time performance of this solver was not competitive with the standard Hartree-Fock packages, it was the first proof of concept for the tensor numerical methods.
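The tensor product convolution used for the Hartree potential can be sketched in numpy for canonical inputs of general rank (our own illustration with hypothetical names; for simplicity the output is kept on the full convolution support, whereas in practice the result is restricted back to the $n^3$ grid and rank-reduced first):

```python
import numpy as np

def canonical_conv3(c, U, p, P):
    """Tensor-product convolution of a canonical 'density' (weights c,
    factors U[l] of shape (n, R)) with a canonical 'kernel' (weights p,
    factors P[l] of shape (n, R_N)): a double sum over the two ranks of
    tensor products of 1D convolutions, as used for V_H."""
    n = U[0].shape[0]
    V = np.zeros((2 * n - 1,) * 3)
    for k in range(len(c)):
        for q in range(len(p)):
            f = [np.convolve(U[l][:, k], P[l][:, q]) for l in range(3)]
            V += c[k] * p[q] * np.multiply.outer(np.multiply.outer(f[0], f[1]), f[2])
    return V
```

Only $3\,R\,R_N$ one-dimensional convolutions are performed, which is the source of the $O(n\log n)$ scaling when each 1D convolution is done by FFT.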

3D grid-based calculation of the two-electron integrals.
The basic tensor-structured Hartree-Fock solver employs the factorized 3D grid-based calculation of the two-electron integrals tensor, $B = [b_{\mu\nu\kappa\lambda}]$,
$b_{\mu\nu\kappa\lambda} = \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{g_\mu(x)\, g_\nu(x)\, g_\kappa(y)\, g_\lambda(y)}{\|x - y\|}\, dx\, dy, \qquad (3.3)$
by using univariate tensor operations. Introduce the side matrices $G^{(\ell)}$ representing on the grid the full set of canonical vectors composing the products of the Gaussian basis functions, $\{G_\mu \odot G_\nu\}$; in most of our Hartree-Fock calculations, grids of size $n^3$ with $n = 32\cdot 10^3$ or $n = 64\cdot 10^3$ have been used. It was found that the large matrices $G^{(\ell)}$ of size $n\times N_b^2$ (e.g. $N_b^2 = 40000$ for the alanine amino acid) can be approximated to high accuracy by low-rank matrices with rank parameter bounded by $R_\ell \le N_b$ [43,44]. The corresponding low-rank factorization ("1D density fitting"), in the form
$G^{(\ell)} \approx U^{(\ell)}\, V^{(\ell)T}, \quad U^{(\ell)} \in \mathbb{R}^{n\times R_\ell}, \quad V^{(\ell)} \in \mathbb{R}^{N_b^2\times R_\ell} \quad (\ell = 1, 2, 3), \qquad (3.4)$
is computed by the truncated Cholesky decomposition of the symmetric, positive definite matrix $G^{(\ell)} G^{(\ell)T}$ (see [43,44] for more details). Based on the factorization (3.4), the number of 3D convolutions in (3.3) is reduced dramatically, from $O(N_b^2)$ to at most $N_b$ (say, from 40000 to 200). In fact, using the canonical factors from the rank-R canonical tensor $P_R$ representing the Newton kernel (3.2) (see (7.8) in the Appendix), we first precompute, for every space variable $\ell = 1, 2, 3$, the set of "convolution" matrices
$M^{(\ell)}_q = U^{(\ell)T}\, (p^{(\ell)}_q * U^{(\ell)}) \in \mathbb{R}^{R_\ell\times R_\ell}, \quad q = 1,\ldots,R,$
which involves convolution products with the $R_\ell \le N_b$ column vectors of the matrix $U^{(\ell)} \in \mathbb{R}^{n\times R_\ell}$, instead of $N_b^2/2$ convolutions in the initial formulation. Given the matrices $M^{(\ell)}_q$, the resulting 4th order TEI tensor is represented in matrix form,
$B \approx \sum_{q=1}^{R} \left(V^{(1)} M^{(1)}_q V^{(1)T}\right) \odot \left(V^{(2)} M^{(2)}_q V^{(2)T}\right) \odot \left(V^{(3)} M^{(3)}_q V^{(3)T}\right),$
where B is understood as an $N_b^2\times N_b^2$ matrix and $\odot$ denotes the entrywise (Hadamard) product. This nonstandard factorization of the TEI matrix B allows a dramatic reduction of the computational cost of the standard Cholesky factorization schemes [29,5] applied to the reduced-rank symmetric positive definite matrix B.
In fact, the low-rank Cholesky decomposition of B is calculated in the following sequence. First, the diagonal elements of B are calculated from the factorized representation above. Then the selected columns of the matrix B, required by the rank-truncated Cholesky factorization scheme, are computed on demand by fast tensor operations, leading to the representation of the matrix B in the form of an approximate rank-$R_B$ decomposition [43,44],
$B \approx L\, L^{T}, \quad L \in \mathbb{R}^{N_b^2\times R_B}.$
This algorithm is much faster than the direct Cholesky decomposition of the matrix B with on-the-fly computation of the required column vectors.
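The rank-truncated Cholesky step itself is generic: it only needs the diagonal and selected columns of the SPD matrix. A minimal numpy sketch in the spirit of this scheme (interface and names are ours; the actual solver computes the columns by the tensor operations described above):

```python
import numpy as np

def pivoted_cholesky(get_diag, get_col, tol=1e-10, max_rank=None):
    """Rank-truncated pivoted Cholesky B ~ L L^T of an SPD matrix given
    only callbacks for its diagonal and for single columns."""
    d = np.array(get_diag(), dtype=float)   # residual (Schur) diagonal
    n = d.size
    if max_rank is None:
        max_rank = n
    L = np.zeros((n, max_rank))
    for r in range(max_rank):
        piv = int(np.argmax(d))
        if d[piv] <= tol:                   # remaining pivots negligible
            return L[:, :r]
        col = np.array(get_col(piv), dtype=float) - L[:, :r] @ L[piv, :r]
        L[:, r] = col / np.sqrt(d[piv])
        d -= L[:, r] ** 2                   # update the residual diagonal
    return L
```

The iteration stops after $R_B$ steps once the largest residual pivot drops below the tolerance, so only $R_B$ columns of B are ever formed.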

Core Hamiltonian.
The Galerkin representation of the 3D Laplace operator in the nonlocal Gaussian basis $\{g_k(x)\}_{1\le k\le N_b}$, $x\in\mathbb{R}^3$, leads to the fully populated matrix $A_g = [a_{km}] \in \mathbb{R}^{N_b\times N_b}$. The tensor calculation of the matrix entries $a_{km}$ of the discrete Laplacian $A_g$ in the separable Gaussian basis is reduced to 1D matrix operations [45] involving the FEM Laplacian $\Delta_3$ defined on the $n\times n\times n$ grid,
$\Delta_3 = \Delta_1 \otimes I \otimes I + I \otimes \Delta_1 \otimes I + I \otimes I \otimes \Delta_1,$
where $\Delta_1 = \frac{1}{h}\,\mathrm{tridiag}\{-1, 2, -1\}$ and $I$ is the $n\times n$ identity matrix. Specifically, we have
$a_{km} = \langle \Delta_3 G_k,\, G_m \rangle,$
where $G_k$ is the tensor representation of the Gaussian basis functions using piecewise linear finite elements. In the case of large $n\times n\times n$ grids, this calculation can be implemented with logarithmic cost in n by using the low-rank QTT representation of the large matrix $\Delta_3$, see [32].
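The Kronecker-sum structure of $\Delta_3$ is easy to assemble and check on a small grid; the key consequence, that applying $\Delta_3$ to a separable (rank-1) vector needs only 1D matrix-vector products, is verified below (our own numpy sketch).

```python
import numpy as np

n, h = 6, 1.0 / 6
I = np.eye(n)
# 1D stiffness matrix (1/h) tridiag{-1, 2, -1}, as in the text
D1 = (1.0 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
# 3D Laplacian as a Kronecker sum of 1D matrices
D3 = (np.kron(np.kron(D1, I), I)
      + np.kron(np.kron(I, D1), I)
      + np.kron(np.kron(I, I), D1))

# applying D3 to a separable vector x (x) y (x) z needs only 1D products
rng = np.random.default_rng(6)
x, y, z = (rng.standard_normal(n) for _ in range(3))
v = np.kron(np.kron(x, y), z)
w = (np.kron(np.kron(D1 @ x, y), z)
     + np.kron(np.kron(x, D1 @ y), z)
     + np.kron(np.kron(x, y), D1 @ z))
```

In the solver, only the second (factorized) form is ever used; the $n^3\times n^3$ matrix `D3` is built here purely for verification.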
For tensor calculation of the nuclear potential operator we apply the rank-1 shift-and-windowing operator $\mathcal{W}_\alpha = \mathcal{W}_\alpha^{(1)}\otimes\mathcal{W}_\alpha^{(2)}\otimes\mathcal{W}_\alpha^{(3)}$ for shifting the reference Newton kernel $\mathbf{P}_R \in \mathbb{R}^{2n\times 2n\times 2n}$ according to the coordinates of the nuclei in a molecule (see Section 5 and the Appendix). The resulting nuclear potential, $\mathbf{P}_c \in \mathbb{R}^{n\times n\times n}$, is then obtained as a direct tensor sum of shifted potentials [45],
$$\mathbf{P}_c = \sum_{\alpha} Z_\alpha \sum_{q=1}^{R}\, \mathcal{W}_\alpha^{(1)} p_q^{(1)} \otimes \mathcal{W}_\alpha^{(2)} p_q^{(2)} \otimes \mathcal{W}_\alpha^{(3)} p_q^{(3)}.$$
This leads to the following representation of the Galerkin matrix, $V_c = [v_{km}]$, by tensor operations:

Black-box tensor solver.
The tensor-structured Hartree-Fock solver [45], based on the factorized calculation of the two-electron integrals [43], includes an efficient tensor implementation of the MP2 energy correction scheme [44]. Although it is so far implemented in Matlab, its performance in time and accuracy is comparable with that of the standard packages based on the analytical evaluation of the two-electron integrals. Due to the 1D complexity of all calculations, it enables 3D grids of size up to $10^{15}$, yielding a mesh size of the order of $10^{-4}$ Å. The next examples compare the results of the benchmark Molpro program with calculations by the tensor-based solver using the same Gaussian basis, but now discretized on a 3D Cartesian grid. For all molecules we use the "cc-pVDZ" Gaussian basis set. The core Hamiltonian part in these calculations is taken from Molpro. Note that since in the tensor solver the density fitting is included in the "blind" calculation of TEI, it is not easy to compare the CPU times of our calculations with those of the "ab initio" procedure in the standard programs, because the density fitting step there is usually considered as off-line pre-computing.
where $a,b \in I_v$, $i,j \in I_o$, with $I_o := \{1,\dots,N_{orb}\}$, $I_v := \{N_{orb}+1,\dots,N_b\}$, and $N_{orb}$ denoting the number of occupied orbitals. The straightforward computation of the matrix $V$ by the above representation makes the dominating impact on the overall numerical cost, of order $O(N_b^5)$. A method of complexity $O(N_b^4)$ based on a low-rank tensor decomposition of the matrix $V$ was introduced in [44]. Indeed, it can be shown that the rank-$R_B$, $R_B = O(N_b)$, approximation of the TEI matrix, $B \approx LL^T$, with the $N\times R_B$ Cholesky factor $L$, allows one to introduce a low-rank representation of the matrix $V$ (see [44] and [6]) and then to reduce the asymptotic complexity of the calculations to $O(N_b^4)$. Given the tensor $V = [v_{iajb}]$, the second-order MP2 perturbation to the HF energy is calculated by the standard expression in which the real numbers $\varepsilon_k$, $k = 1,\dots,N_b$, represent the HF eigenvalues.
Introducing the so-called doubles amplitude tensor $\mathbf{T}$, the MP2 perturbation takes the form of a simple scalar product of tensors, where $\mathbf{1}$ denotes the rank-1 all-ones tensor. Introducing the low $\varepsilon$-rank reciprocal "energy" tensor $\mathbf{E}$ and the partially transposed tensor (transposition in the indices $a$ and $b$) allows one to decompose the doubles amplitude tensor $\mathbf{T}$ accordingly. Notice that the denominator in (4.2) remains strongly positive if $\varepsilon_a > 0$ for $a \in I_v$ and $\varepsilon_i < 0$ for $i \in I_o$. The latter condition (a nonzero HOMO-LUMO gap) allows one to prove the low $\varepsilon$-rank decomposition of the tensor $\mathbf{E}$ [43,44].
Each term on the right-hand side of (4.4) can be treated separately by using rank-structured tensor decompositions of $V$ and $\mathbf{E}$, possibly combined with various symmetries and data sparsity. Numerical tests illustrating the tensor approach to the MP2 energy correction are presented in [44].
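The low $\varepsilon$-rank decomposition of the reciprocal energy tensor $\mathbf{E}$ rests on separable exponential-sum approximations of $1/x$ on a positive interval. A minimal numerical sketch (standard sinc quadrature applied to $1/x=\int_0^\infty e^{-xt}\,dt$; the interval $[1,100]$ and the quadrature step are our illustrative choices):

```python
import numpy as np

# sinc-quadrature exponential sum: 1/x ~ sum_k w_k exp(-t_k x), x in [1, 100],
# from 1/x = int_0^inf e^{-xt} dt with the substitution t = e^s (trapezoidal rule)
hq = 0.25
k = np.arange(-80, 41)
t = np.exp(k * hq)        # quadrature nodes
w = hq * t                # quadrature weights

x = np.linspace(1.0, 100.0, 500)
approx = np.exp(-np.outer(x, t)) @ w
rel_err = np.max(np.abs(approx - 1.0/x) * x)
print(rel_err)            # well below 1e-5

# Separability: with x = eps_a + eps_b - eps_i - eps_j > 0, each quadrature term
# e^{-t_k x} = e^{-t_k eps_a} e^{-t_k eps_b} e^{t_k eps_i} e^{t_k eps_j}
# factorizes over the four indices, giving a canonical low-rank tensor E.
```

The positivity assumption on the denominator is exactly the nonzero HOMO-LUMO gap condition mentioned above; the number of quadrature terms grows only logarithmically with the required accuracy and the spectral interval.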

Toward low-rank approximation of the Bethe-Salpeter equation for calculation of excited states
One of the commonly used approaches for the calculation of excited states in molecules and solids, along with time-dependent DFT, is based on the solution of the Bethe-Salpeter equation (BSE), see for example [11,60]. The BSE approach leads to the challenging computational task of solving an eigenvalue problem for the excitation energies $\omega_n$, governed by a large, fully populated matrix of size $2N_{ov}\times 2N_{ov}$ (with $N_{ov}$ the product of the numbers of occupied and virtual orbitals), so that the computation of the entire spectrum is prohibitively expensive. Here the large matrix blocks of size $N_{ov}\times N_{ov}$ are built from the diagonal "energy" matrix $\Delta\varepsilon$ together with the matrices $W = [w_{ia,jb}]$ and $\widetilde W = [\widetilde w_{ia,jb}]$, which are determined by index permutations of the so-called static screened interaction, via $w_{ia,jb} = w_{ij,ab}$ and $\widetilde w_{ia,jb} = w_{ib,aj}$, respectively. In turn, the fourth-order tensor $W = [w_{iajb}]$ is constructed by certain linear transformations of the tensor $V = [v_{iajb}]$, see [11,60].
A number of numerical methods for structured eigenvalue problems have been discussed in the literature [50,4,14,54].
The tensor approach to the solution of the partial BSE eigenvalue problem for equation (4.5) proposed in [6] is to compute a reduced basis set by solving a simplified eigenvalue problem with a low-rank plus diagonal approximation to the matrix blocks $A$ and $B$, and then to solve the spectral problem for the Galerkin projection of the initial system (4.5) onto this reduced basis. This procedure relies entirely on matrix-vector multiplications with the simplified BSE matrix.
It was demonstrated on examples of moderate-size molecules [6] that a small reduced basis set, obtained by separable approximation with rank parameters of about several tens, allows one to reveal several of the lowest excitation energies and the respective excited states with an accuracy of about 0.1 eV to 0.02 eV. Figure 4.1 illustrates the BSE energy spectrum of the NH$_3$ molecule (based on HF calculations with a cc-pVDZ 48-function GTO basis) for the lowest $N_{red} = 30$ eigenvalues vs. the rank truncation parameter $\varepsilon = 0.6$ and $0.1$, where the ranks of $V$ and of the BSE matrix block $W$ are 4, 5 and 28, 30, respectively. For the choices $\varepsilon = 0.6$ and $\varepsilon = 0.1$, the error in the first (lowest) eigenvalue for the solution of the problem in the reduced basis is about 0.11 eV and 0.025 eV, respectively. The CPU time of the laptop Matlab implementation for each example is about 5 sec.
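The reduced basis strategy can be mimicked on a synthetic diagonal-plus-low-rank matrix. In the sketch below (our own model problem, not the BSE data of [6]), the reduced basis is taken from the lowest eigenvectors of a cheap surrogate, and the full matrix is then Galerkin-projected onto it:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
# model matrix: dominant diagonal (orbital energy differences) plus a
# low-rank interaction part and a small unstructured remainder
d = np.sort(rng.uniform(1.0, 10.0, N))
U = rng.standard_normal((N, 8)) / np.sqrt(N)
E = 1e-4 * rng.standard_normal((N, N))
A_full = np.diag(d) + U @ U.T + (E + E.T) / 2

# step 1: cheap diagonal-plus-low-rank surrogate (known rank-8 part only)
A_simpl = np.diag(d) + U @ U.T

# step 2: reduced basis = lowest N_red eigenvectors of the surrogate
N_red = 30
_, V = np.linalg.eigh(A_simpl)
Q = V[:, :N_red]

# step 3: Galerkin projection of the full matrix onto the reduced basis
A_red = Q.T @ A_full @ Q
ev_red = np.linalg.eigvalsh(A_red)
ev_full = np.linalg.eigvalsh(A_full)
err = np.abs(ev_red[:5] - ev_full[:5]).max()
print(err)  # small: the reduced basis captures the lowest eigenstates
```

By the Rayleigh-Ritz property, the projected eigenvalues are upper bounds for the exact ones; the error is controlled by how well the surrogate subspace captures the exact lowest eigenvectors.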

Tensor approach to simulation of large crystalline clusters
In this section, we briefly discuss the generalization of the tensor-based Hartree-Fock solver to the case of large lattice-structured and periodic systems [46,47] arising in the numerical modeling of crystalline, metallic and polymer-type compounds.

Fast tensor calculation of a lattice sum of interaction potentials
One of the challenges in the numerical treatment of large molecular systems is the summation of long-range potentials placed on large 3D lattices. The conventional Ewald summation techniques, based on a separate evaluation of the contributions from the short- and long-range parts of the interaction potential, exhibit $O(L^3 \log L)$ complexity scaling for a cubic $L\times L\times L$ 3D lattice.
By contrast, the main idea of the novel tensor summation method introduced in [46,48] is to benefit from the low-rank tensor decomposition of the generating kernel approximated on the fine $n\times n\times n$ representation grid in the 3D computational box $\Omega_L$. This allows one to completely decouple the 3D sum into three independent 1D summations, thus drastically reducing the numerical expense. The resultant potential sum, which now requires only $O(n)$ storage and $O(nL)$ computational work, is represented by a few assembled canonical/Tucker $n$-vectors of complicated shape (see Figure 5.1), where $n$ is the univariate grid size for a cubic 3D lattice.
For ease of exposition, we consider the electrostatic potential of a single hydrogen-like nucleus, $V_c(x) = \frac{Z}{\|x\|}$. Define the scaled unit cell, $\Omega_0 = [0,b]^3$, of size $b\times b\times b$, and consider a sum of interaction potentials in a symmetric computational box $\Omega_L$. Recall that $b = nh$, where $h > 0$ is the fine mesh size, the same for all spatial variables, and $n$ is the number of grid points in each variable. We also define the accompanying domain $\widetilde\Omega_L$ obtained by scaling $\Omega_L$ with the factor 2, $\widetilde\Omega_L = 2\Omega_L$, and, similarly to (7.8), introduce the respective rank-$R$ reference (master) tensor
$$\widetilde{\mathbf{P}}_R = \sum_{q=1}^{R} \widetilde p_q^{(1)} \otimes \widetilde p_q^{(2)} \otimes \widetilde p_q^{(3)} \in \mathbb{R}^{2n\times 2n\times 2n}, \qquad (5.1)$$
approximating $\frac{1}{\|x\|}$ in $\widetilde\Omega_L$ on a $2n\times 2n\times 2n$ representation grid with mesh size $h$. Let us consider a sum of single Coulomb potentials on an $L\times L\times L$ lattice. The assembled tensor approach applies to the potentials defined on the $n\times n\times n$ 3D Cartesian grid: it reduces the sum over a rectangular 3D lattice, $\mathbf{P}_{c_L} \in \mathbb{R}^{n\times n\times n}$, to the summation of the directional vectors of the canonical decomposition of shifted single Newton kernels [46],
$$\mathbf{P}_{c_L} = \sum_{q=1}^{R} \Big(\sum_{k_1} \mathcal{W}_{k_1}\widetilde p_q^{(1)}\Big) \otimes \Big(\sum_{k_2} \mathcal{W}_{k_2}\widetilde p_q^{(2)}\Big) \otimes \Big(\sum_{k_3} \mathcal{W}_{k_3}\widetilde p_q^{(3)}\Big),$$
where $\mathcal{W}_{k_\ell}$ is the shift-and-windowing (onto $\Omega_L$) separable transform along the $k$-grid. Remarkably, the rank of the resulting sum is the same as that of the $R$-term canonical reference tensor (5.1) representing the single Newton kernel. The summation method in the canonical format was extended to Tucker tensors, which allows a principal generalization of this technique to rather complicated lattices with defects (Theorem 3.2, [48]), so that the resulting sum takes a similar assembled form in terms of the Tucker vectors $t^{(\ell)}_{m_\ell}$, $\ell = 1,2,3$, of the rank-$r$ master tensor approximating the Newton kernel $\frac{1}{\|x\|}$ in $\widetilde\Omega_L$ on a $2n\times 2n\times 2n$ representation grid. In the case of defective lattices, the increasing rank of the final sum of Tucker tensors can be reduced by stable ALS-based algorithms applicable to Tucker tensors (see [47]). Particular examples of lattice geometries suitable for our approach are presented in Figure 5.3; see [47] for a more detailed discussion.
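The key algebraic fact behind the assembled summation, namely that the lattice sum of shifted rank-$R$ canonical tensors is again a rank-$R$ canonical tensor whose factors are 1D sums of windowed vectors, can be verified numerically. Below is a self-contained numpy sketch with random canonical factors standing in for the Newton-kernel vectors (sizes and the lattice step are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, R = 24, 4, 5          # target grid size, lattice size, canonical rank
step = 4                    # lattice period in grid points
N = n + step * (L - 1)      # master vectors live on an extended grid

# rank-R canonical "master" factors of the (shifted) interaction kernel
P = [rng.standard_normal((R, N)) for _ in range(3)]   # one factor matrix per mode

def window(p, k):
    """Shift-and-window: extract the n-point slice for lattice offset k."""
    return p[:, k*step : k*step + n]

# direct summation: L^3 shifted rank-R tensors (O(L^3) tensor additions)
direct = np.zeros((n, n, n))
for k1 in range(L):
    for k2 in range(L):
        for k3 in range(L):
            direct += np.einsum('qi,qj,qk->ijk',
                                window(P[0], k1), window(P[1], k2), window(P[2], k3))

# assembled summation: sum the windowed 1D vectors per mode first --
# the result is canonical of the SAME rank R, at O(L) 1D work per mode
S = [sum(window(P[m], k) for k in range(L)) for m in range(3)]
assembled = np.einsum('qi,qj,qk->ijk', S[0], S[1], S[2])
print(np.allclose(direct, assembled))  # True
```

The identity holds because the shift in mode $\ell$ depends only on $k_\ell$, so the triple lattice sum factorizes mode by mode for every canonical term $q$.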
For the reduced Hartree-Fock equation, where the Fock operator is confined to the core Hamiltonian, a tensor-structured block-circulant representation of the Fock matrix was introduced in [47], which allows a special low-rank approximation of the matrix blocks. This opens the way to the numerical treatment of large eigenvalue problems with structured matrices arising in the solution of the Hartree-Fock equation for large crystalline systems with defects.

Interaction energy of long-range potentials on finite lattices
Given the nuclear charges $\{Z_k\}$, centered at points $x_k$, $k \in \mathcal{K}^3$, located on an $L\times L\times L$ lattice $\mathcal{L}_L = \{x_k\}$ with step size $b$, the interaction energy of the total electrostatic potential of these charges is defined by the lattice sum
$$E_L = \frac{1}{2}\sum_{j\in\mathcal{K}^3} Z_j \sum_{k\in\mathcal{K}^3,\; k\neq j} \frac{Z_k}{\|x_j - x_k\|}. \qquad (5.4)$$
The fast and accurate computation of the electrostatic interaction energy (as well as the related forces and stresses) is one of the difficult tasks in computer modeling of macromolecular structures such as finite crystals and biological systems.
The tensor summation scheme (5.3) can be directly applied to this computational problem. In what follows, we show that (5.4) can be treated as a particular case of the previous scheme used for the calculation of (5.2) on a fine spatial grid. For this discussion, we assume that all charges are equal, $Z_k = Z$.
First, notice that the rank-$R$ reference tensor $h^{-3}\widetilde{\mathbf{P}}$ defined in (5.1) approximates, with high accuracy $O(h^2)$, the Coulomb potential $\frac{1}{\|x\|}$ (and its shifted versions) in $\widetilde\Omega_L$ (for $\|x\| \ge b$, as required by the energy expression) on the fine $2n\times 2n\times 2n$ representation grid with mesh size $h$. Likewise, the tensor $h^{-3}\mathbf{P}_{c_L}$ approximates the potential sum $V_{c_L}(x)$ on the same fine representation grid, including the lattice points $x_k$.
We propose to evaluate the energy expression (5.4) by using tensor sums as in (5.3), but now applied to a small sub-tensor of the rank-$R$ canonical reference tensor $\widetilde{\mathbf{P}}$. This leads to a representation of the energy sum (5.4) with accuracy $O(h^2)$, in which the first term in brackets represents the full canonical tensor lattice sum restricted to the $k$-grid composing the lattice $\mathcal{L}_L$, while the second term introduces the correction at the singular points $x_j - x_k = 0$. Here $\mathbf{1} \in \mathbb{R}^{L\times L\times L}$ is the all-ones tensor. By using the rank-1 tensor $\mathbf{P}_{0_L} = \widetilde{\mathbf{P}}|_{x_k=0}\,\mathbf{1}$, the correction term can be represented by a simple tensor operation. Finally, the interaction energy $E_L$ allows a representation that can be implemented at $O(L^2) \ll L^3 \log L$ cost by tensor operations with rank-$R$ canonical tensors in $\mathbb{R}^{L\times L\times L}$. Table 5.1 illustrates the performance of the algorithm described above: we compare the exact value computed by (5.4) with the approximate tensor representation (5.5) computed on the fine representation grid with $n = n_0 L$, $n_0 = 128$, for lattices of hydrogen atoms with interatomic distance 2 bohr. The largest 3D lattice considered has one-dimensional size $L = 256$.
The presented approach for the fast calculation of the interaction energy can be extended to the case of non-uniform rectangular lattices and, under certain assumptions, to the case of non-equal nuclear charges $Z_k$. Moreover, it applies to many other types of spherically symmetric interaction potentials, for example, to shielded Coulomb interaction or van der Waals attraction sums, corresponding to the distance functions $\|x\|^{-2}$ and $\|x\|^{-6}$, respectively.

Conclusions
The goal of this paper is to attract the interest of specialists in computational quantum chemistry to recent results and open questions of the grid-based tensor approach in electronic structure calculations. Here we focus mostly on the description of the main mathematical and algorithmic aspects of the tensor decomposition schemes and demonstrate their benefits in some applications.
The scope of applications that can be regarded as well established ranges from the Hartree-Fock energy for moderate-size molecules, including the "blind" calculation of the TEI tensor with incorporated algebraic density fitting, to the calculation of excited states of molecules, and up to a uniquely fast method for calculating lattice potential sums and the interaction energy of long-range potentials on a lattice in a finite volume. The tensor approach allows one to treat the above problems using moderate computational facilities. All numerics given in the paper present implementations in Matlab.
The described numerical tools are not restricted to the applications presented here; they can be applied to various hard computational problems in (post-)Hartree-Fock calculations related to the accurate evaluation of multidimensional integrals and to the efficient storage of, and manipulation with, large multivariate data arrays.
The presented method for the summation of long-range potentials with sublinear computational cost has no analogue among the techniques currently used in computational quantum chemistry and appears promising. For example, it can be useful in the modeling of large finite molecular structures like nanostructures or quantum dots, where the periodic approach may be inadequate. Calculating a sum of several million lattice potentials on fine 3D grids takes only several seconds in Matlab on a laptop [46]. This method also gives the unique possibility to represent the summed 3D lattice potential in the whole computational region with very high accuracy. Integration and differentiation of this 3D potential can easily be performed on the representation grid due to the 1D computational costs.
The tensor approach is now being extended to the modeling of the electronic structure of finite crystalline-type molecular systems [47]. We hope that tensor numerical methods will have a good future in solving challenging multidimensional problems of computational quantum chemistry.
⊲ Reshape the tensor $U_1 \in \mathbb{R}^{n_1\times r_2\times r_3}$ into a matrix $M_{U_1} \in \mathbb{R}^{n_1\times (r_2 r_3)}$, representing the span of the optimized subset of mode-1 columns of the partially projected tensor $U_1$. Compute the SVD of the matrix $M_{U_1}$, $M_{U_1} = Z^{(1)} S^{(1)} {V^{(1)}}^T$, and truncate the set of left singular vectors, $Z^{(1)} \to Z^{(1)} \in \mathbb{R}^{n_1\times r_1}$, according to the restriction on the mode-1 Tucker rank, $r_1$.
⊲ Update the current approximation to the mode-1 dominating subspace, $Z^{(1)}_{r_1} \to Z^{(1)}$.
⊲ Implement the single loop of the ALS iteration for mode $\ell = 2$ and for $\ell = 3$.
⊲ End of the single ALS iteration step.
⊲ Repeat the complete ALS iteration $m_{max}$ times to obtain the optimized orthogonal Tucker side matrices $Z^{(1)}, Z^{(2)}, Z^{(3)}$ and the final projected image $U_3$.
(E) Project the final iterated tensor $U_3$ in (7.2) onto the resulting basis set in $Z^{(3)}$ to obtain the core tensor $\boldsymbol\beta \in \mathbb{R}^{r_1\times r_2\times r_3}$.
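The projection steps above can be sketched in a few lines for the simplest case: a plain SVD-based projection of the canonical side matrices without the ALS refinement (our own illustration; the canonical input is constructed so that its exact Tucker rank is $(r,r,r)$, which makes the truncated projection exact):

```python
import numpy as np

rng = np.random.default_rng(4)
n, R, r = 30, 20, 5
# canonical rank-R tensor whose side-matrix columns span only r-dimensional
# subspaces, so the exact Tucker rank is (r, r, r) although R > r
A = [rng.standard_normal((n, r)) @ rng.standard_normal((r, R)) for _ in range(3)]
T = np.einsum('iq,jq,kq->ijk', *A)

# mode-l step: SVD of the mode-l side matrix, truncate to the Tucker rank r
Z = []
for m in range(3):
    U, s, _ = np.linalg.svd(A[m], full_matrices=False)
    Z.append(U[:, :r])                     # orthogonal Tucker side matrix Z^(m)

# core tensor beta = T x_1 Z1^T x_2 Z2^T x_3 Z3^T, then reconstruct
beta = np.einsum('ia,jb,kc,ijk->abc', Z[0], Z[1], Z[2], T)
T_tucker = np.einsum('ia,jb,kc,abc->ijk', Z[0], Z[1], Z[2], beta)
print(np.allclose(T, T_tucker))   # True: exact rank-(r,r,r) Tucker form
```

In the full Canonical-to-Tucker algorithm, the initial SVD-based guess is then improved by the ALS sweeps over the modes, which matters when the truncation is not exact.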
The Canonical-to-Tucker algorithm can easily be modified to an $\varepsilon$-truncation stopping criterion. Notice that in the case of equal Tucker ranks, $r_\ell = r$, the maximal canonical rank of the core tensor $\boldsymbol\beta$ does not exceed $r^2$, see [38], which completes the Tucker-to-Canonical part of the total algorithm. (Further optimization of the canonical rank of the small core tensor $\boldsymbol\beta$ can be implemented by applying the ALS iterative scheme in the canonical format, see e.g. [49].)
2. The Hartree-Fock equation in an AO basis set.
Usually, the Hartree-Fock equation is approximated by the standard Galerkin projection of the initial problem (7.3) using physically justified reduced basis sets (say, GTO-type orbitals).
For a given finite Galerkin basis set $\{g_\mu\}_{1\le\mu\le N_b}$, $g_\mu \in H^1(\mathbb{R}^3)$, the occupied molecular orbitals $\psi_i$ are represented (approximately) as $\psi_i = \sum_{\mu=1}^{N_b} C_{\mu i}\, g_\mu$, $i = 1,\dots,N_{orb}$. To derive an equation for the unknown coefficient matrix $C = \{C_{\mu i}\} \in \mathbb{R}^{N_b\times N_{orb}}$, we first introduce the mass (overlap) matrix $S = \{S_{\mu\nu}\}_{1\le\mu,\nu\le N_b}$, given by $S_{\mu\nu} = \int_{\mathbb{R}^3} g_\mu g_\nu\, dx$, and the stiffness matrix $H = \{h_{\mu\nu}\}$ of the core Hamiltonian $\mathcal{H} = -\frac{1}{2}\Delta + V_c$ (the single-electron integrals).
3. Tensor approximation of the Newton kernel in 3D.
Methods of separable approximation of multivariate spherically symmetric functions by Gaussian sums have been addressed in the chemical and mathematical literature since [9] and [10,64], respectively.
In this section, we discuss for the reader's convenience the grid-based method for the low-rank canonical and Tucker tensor representations of a spherically symmetric function $p(\|x\|)$, $x \in \mathbb{R}^d$, in the particular case of the 3D Newton kernel $p(\|x\|) = \frac{1}{\|x\|}$, $x \in \mathbb{R}^3$, by its projection onto a set of piecewise constant basis functions; see [7] for more details.
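A standard route to such separable representations is sinc quadrature applied to the integral representation $\frac{1}{\rho} = \frac{2}{\sqrt{\pi}}\int_0^\infty e^{-\rho^2 t^2}\,dt$, which yields a sum of Gaussians that factorizes over the Cartesian coordinates. A minimal numerical sketch (the quadrature step and truncation bounds are illustrative choices of ours):

```python
import numpy as np

# 1/||x|| as a sum of separable Gaussians, via sinc quadrature on
#   1/rho = (2/sqrt(pi)) * int_0^inf exp(-rho^2 t^2) dt,  with t = e^s
hq = 0.25
s = hq * np.arange(-80, 41)
t = np.exp(s)
w = (2.0 / np.sqrt(np.pi)) * hq * t      # quadrature weights

def newton_sum(x, y, z):
    """Rank-R canonical approximation of 1/sqrt(x^2+y^2+z^2):
    every term is a product of three 1D Gaussians."""
    return sum(wq * np.exp(-tq**2 * x**2)
                  * np.exp(-tq**2 * y**2)
                  * np.exp(-tq**2 * z**2) for wq, tq in zip(w, t))

pts = np.random.default_rng(5).uniform(-3, 3, (1000, 3))
rho = np.linalg.norm(pts, axis=1)
mask = rho > 0.5                          # stay away from the singularity
approx = newton_sum(pts[:, 0], pts[:, 1], pts[:, 2])
rel = np.abs(approx[mask] - 1/rho[mask]) * rho[mask]
print(rel.max())                          # small uniform relative error
```

Sampling each 1D Gaussian factor on the grid then gives exactly the kind of rank-$R$ canonical tensor used throughout the paper; the rank grows only logarithmically with the accuracy and the ratio of the largest to smallest resolved distance.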

Figure 2.2: The Tucker approximation error in the Frobenius norm vs. the Tucker rank r for a single Slater function.

Figure 2.3: The Tucker approximation error in the Frobenius norm vs. the Tucker rank r for a sum of Slater potentials over the cubic 8 × 8 × 8 lattice (right).

Figure 2.4: Part of the Canonical-to-Tucker decomposition algorithm for mode $\ell = 1$.

Figure 3.1: Left: Glycine amino acid in a computational box. Right: approximation of a Gaussian-type basis function by a piecewise constant function.

The molecule is embedded in a certain fixed computational box $\Omega = [-b,b]^3 \subset \mathbb{R}^3$, as in Fig. 3.1, left. For a given discretization parameter $n \in \mathbb{N}$, we use the equidistant $n\times n\times n$ tensor grid $\omega_{3,n} = \{x_i\}$, $i \in \mathcal{I} := \{1,\dots,n\}^3$, with mesh size $h = 2b/(n+1)$. Then the Gaussian basis functions $g_k(x)$, $x \in \mathbb{R}^3$, are approximated by sampling their values at the centers of the discretization intervals, as in Fig. 3.1, right, using one-dimensional piecewise constant basis functions, $g_k(x) \approx \bar g_k(x) = \prod_{\ell=1}^{3} \bar g_k^{(\ell)}(x_\ell)$.

Figure 3.2, left, shows several vectors of the canonical representation of the Coulomb kernel along one of the variables. Figure 3.2, right, represents the cross-section of the resulting nuclear potential $\mathbf{P}_c$ for the C$_2$H$_5$OH molecule.
This ensures high accuracy of the calculations, which is controlled by the $\varepsilon$-ranks of the tensor truncation. The solver works in a black-box way: one inputs the grid-based basis functions and the coordinates of the nuclei in a molecule and starts the program. The calculation of TEI for H$_2$O on $32768^3$ grids takes two minutes on a laptop. The TEI computation with $n^3 = 131072^3$ for the alanine amino acid takes approximately one hour in Matlab, including the incorporated density fitting.

Figure 3.3 shows the convergence history of the ab initio iterations for the H$_2$O molecule (cc-pVDZ-41), where TEI is computed on the 3D grid of size $131072^3$, while Figure 3.3, right, presents the zoom of the left graphic over the last 20 iterations. The ground state energy from Molpro is shown by the black dashed line.

Figure 3.3: Left: SCF EVP iteration for H$_2$O. Right: convergence of the ground state energy over the final 20 iterations for H$_2$O.

Figure 3.4, left, shows the SCF EVP iteration for the Glycine amino acid (cc-pVDZ-170), where the dashed line indicates the convergence of the residual and the red solid line shows the convergence of the error with respect to the ground state energy from the MOLPRO package with the same basis set. Figure 3.4, right, illustrates the convergence of the ground state energy over the final 20 iterations for Glycine; the dashed line is the energy from Molpro. The second row in Table 3.3 represents the times (sec) for one SCF EVP iteration step.
obtained by tracing $\widetilde{\mathbf{P}}$ on the accompanying lattice of double size, $2L\times 2L\times 2L$, i.e. $\widetilde{\mathcal{L}}_L = \{x_k\}\cup\{x_{k'}\} \subset \widetilde\Omega_L$. Here $\widetilde{\mathbf{P}}|_{x_k}$ denotes the tensor entry corresponding to the $k$-th lattice point designating the atomic center $x_k$. We are interested in the computation of the rank-$R$ tensor $\mathbf{P}_{c_L} = [\mathbf{P}_{c_L}|_{x_k}]_{k\in\mathcal{K}} \in \mathbb{R}^{L\times L\times L}$, where $\mathbf{P}_{c_L}|_{x_k}$ denotes the tensor entry corresponding to the $k$-th lattice point on $\mathcal{L}_L$. The tensor $\mathbf{P}_{c_L}$ can be computed at the expense $O(L^2)$. In the Hartree-Fock setting, we use the definitions $\tau(x,y) := 2\sum_{i=1}^{N_{orb}} \psi_i(x)\psi_i(y)$, $\rho(x) := \tau(x,x)$.

$v_{\mu\nu} = \int_{\mathbb{R}^3} V_c(x)\, g_\mu g_\nu\, dx$, $1\le\mu,\nu\le N_b$. The core Hamiltonian matrix $H$ can be precomputed in $O(N_b^2)$ operations via the grid-based approach. Given the finite basis set $\{g_\mu\}_{1\le\mu\le N_b}$, $g_\mu \in H^1(\mathbb{R}^3)$, the associated fourth-order two-electron integrals (TEI) tensor, $B = [b_{\mu\nu\lambda\sigma}]$, is defined entrywise by (3.3), where $\mu,\nu,\lambda,\sigma \in \{1,\dots,N_b\} =: I_b$. In computational quantum chemistry, the nonlinear terms representing the Galerkin approximation of the Hartree and exchange operators are traditionally calculated by using the low-rank Cholesky decomposition of the matrix associated with the TEI tensor $B = [b_{\mu\nu\kappa\lambda}]$ defined in (3.3), which initially has computational and storage complexity of order $O(N_b^4)$. Introducing the $N_b\times N_b$ matrices $J(D)$ and $K(D)$,
$$J(D)_{\mu\nu} = \sum_{\kappa,\lambda=1}^{N_b} b_{\mu\nu,\kappa\lambda}\, D_{\kappa\lambda}, \qquad K(D)_{\mu\nu} = -\frac{1}{2}\sum_{\kappa,\lambda=1}^{N_b} b_{\mu\lambda,\nu\kappa}\, D_{\kappa\lambda},$$
where $D = 2CC^T \in \mathbb{R}^{N_b\times N_b}$ is the rank-$N_{orb}$ symmetric density matrix, one then represents the complete Fock matrix $F$ by $F(D) = H + J(D) + K(D)$. The resultant Galerkin system of nonlinear equations for the coefficient matrix $C \in \mathbb{R}^{N_b\times N_{orb}}$ and the respective eigenvalues $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_{N_{orb}})$ reads
$$F(D)\,C = S\,C\,\Lambda, \qquad C^T S C = I_{N_{orb}},$$
where the second equation represents the orthogonality constraints $\int_{\mathbb{R}^3} \psi_i\psi_j\, dx = \delta_{ij}$, and $I_{N_{orb}}$ denotes the $N_{orb}\times N_{orb}$ identity matrix.
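The resulting nonlinear system is usually solved by self-consistent field (SCF) iteration. The following minimal sketch uses synthetic, well-conditioned surrogate matrices of our own (not the grid-based integrals) and the standard whitening $X = S^{-1/2}$ to turn the generalized eigenvalue problem into a symmetric one:

```python
import numpy as np

rng = np.random.default_rng(6)
Nb, Norb = 10, 3
sym = lambda M: (M + M.T) / 2

# synthetic surrogates (illustration only): overlap S, core Hamiltonian H,
# and a PSD two-electron tensor built from symmetric Cholesky-like factors L_q
S = np.eye(Nb) + 0.05 * sym(rng.standard_normal((Nb, Nb)))
H = sym(rng.standard_normal((Nb, Nb)))
Lq = [0.05 * sym(rng.standard_normal((Nb, Nb))) for _ in range(4)]
b = sum(np.einsum('mn,kl->mnkl', Lm, Lm) for Lm in Lq)   # b[mu,nu,kappa,lam]

def fock(D):
    J = np.einsum('mnkl,kl->mn', b, D)                   # Coulomb term
    K = -0.5 * np.einsum('mlnk,kl->mn', b, D)            # exchange term
    return H + J + K

# whitening: F C = S C Lambda  ->  standard EVP for X F X with X = S^{-1/2}
s, U = np.linalg.eigh(S)
X = U @ np.diag(s**-0.5) @ U.T

D = np.zeros((Nb, Nb))
for _ in range(50):                                      # fixed-point SCF loop
    F = fock(D)
    eps, Ct = np.linalg.eigh(X @ F @ X)
    C = X @ Ct[:, :Norb]                                 # occupied orbitals (aufbau)
    D = 2 * C @ C.T
res = np.linalg.norm(fock(D) @ C - S @ C @ np.diag(eps[:Norb]))
print(res)   # small residual: self-consistency reached
```

A production solver would add DIIS acceleration and level shifting; the sketch only shows where the Fock build $F(D) = H + J(D) + K(D)$ and the generalized eigenproblem enter the loop.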
at the expense $O(dnR_aR_b)$. The Hadamard (entrywise) product of tensors $\mathbf{A}, \mathbf{B}$ is defined by $\mathbf{Y} = [y_{i_1,\dots,i_d}] := \mathbf{A} \odot \mathbf{B}$, where $y_{i_1,\dots,i_d} = a_{i_1,\dots,i_d}\, b_{i_1,\dots,i_d}$. For canonical tensors $\mathbf{A}$ and $\mathbf{B}$ given in the form (2.1), the Hadamard product is calculated in $O(dnR_aR_b)$ operations by 1D entrywise products of the factor vectors.
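This rank-$R_aR_b$ rule for the Hadamard product of canonical tensors is easy to verify numerically (numpy sketch with random factors; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, Ra, Rb = 12, 3, 4
A = [rng.standard_normal((n, Ra)) for _ in range(3)]   # canonical factors of A
B = [rng.standard_normal((n, Rb)) for _ in range(3)]   # canonical factors of B

TA = np.einsum('iq,jq,kq->ijk', *A)
TB = np.einsum('iq,jq,kq->ijk', *B)

# Hadamard product in canonical form: rank Ra*Rb, whose factors are all
# pairwise 1D entrywise products -- O(d n Ra Rb) work, no n^3 arrays needed
C = [np.einsum('ia,ib->iab', A[m], B[m]).reshape(n, Ra*Rb) for m in range(3)]
TC = np.einsum('iq,jq,kq->ijk', *C)
print(np.allclose(TA * TB, TC))   # True
```

The full tensors are formed here only to check the identity; in the tensor-structured algorithms one works with the 1D factors alone and follows the product with a rank truncation.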

Table 3.1: Times (sec) for the 3D tensor product convolution vs. the 3D FFT convolution.

Table 3.1 shows the CPU times (sec) for the Matlab computation of $V_H$ for the H$_2$O molecule [38] on a SUN station using 8 Opteron Dual-Core/2600 processors (times for the 3D FFT for $n \ge 1024$ are obtained by extrapolation). C2T shows the time for the canonical-to-Tucker rank reduction. In a similar way, the algorithm for the 3D grid-based tensor-…

Table 3.3: Time (sec) for one ab initio SCF EVP iteration and the accuracy of the ab initio solution with respect to the grid size in the TEI calculations. Matlab on an Intel Xeon X5650.

Given the set of Hartree-Fock molecular orbitals $\{C_p\}$ and the corresponding energies $\{\varepsilon_p\}$, $p = 1,2,\dots,N_b$, let $\{C_i\}$ and $\{C_a\}$ denote the occupied and virtual orbitals, respectively. First, one has to transform the TEI matrix $B = [b_{\mu\nu,\lambda\sigma}]$, corresponding to the initial AO basis set, to the one represented in the molecular orbital (MO) basis.
Table 5.1: Comparison of times for the standard ($T_s$) and tensor-based ($T_{tens}$) calculation of the interaction energy for the lattice electrostatic potentials. Matlab on an Intel Xeon X5650.