Data Assimilation and Reduced Modeling for High Dimensional Problems

CIRM, Luminy, France
July 19-August 27, 2021

Approximation and learning with tree tensor networks (Anthony Nouy, École Centrale de Nantes)

Many problems in computational and data science require the approximation of high-dimensional functions. Examples of such problems can be found in physics, stochastic analysis, statistics, machine learning, or uncertainty quantification. Approximating high-dimensional functions requires approximation tools that capture the specific features of these functions.

In this lecture, we will give an introduction to tree tensor networks (TNs), or tree-based tensor formats.

In the first part, we will introduce approximation tools based on TNs. We will see how low-dimensional functions can be identified with high-dimensional functions through a tensorization (or quantization) procedure, which can be interpreted as a particular extraction of features from the input variables. We will then present some results on the approximation power (or expressivity) of TNs and discuss the role of the tensorization and of the architecture of the TNs.
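As an illustrative sketch (not the lecture's material itself, and under the standard binary-digit construction), tensorization reshapes a function sampled on 2^d uniform grid points into a d-way tensor indexed by the binary digits of the grid index; for structured functions such as exp, the resulting tensor has low (here, rank-one) tensor-train ranks:

```python
import numpy as np

# Tensorization (quantization): a 1D function sampled on 2^d points is
# reshaped into a d-way tensor indexed by the binary digits of the grid
# index.  exp(x) = exp(sum_k b_k 2^-k) = prod_k exp(b_k 2^-k) is exactly
# rank one in this format.

d = 10                                 # number of binary digits
n = 2 ** d                             # grid size
x = np.arange(n) / n                   # uniform grid on [0, 1)
T = np.exp(x).reshape((2,) * d)        # tensorized samples of exp

def tt_ranks(T, tol=1e-10):
    """Tensor-train ranks via sequential left-to-right SVD truncation."""
    ranks = []
    M = T.reshape(2, -1)
    for _ in range(T.ndim - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))           # numerical rank
        ranks.append(r)
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * 2, -1)
    return ranks

print(tt_ranks(T))  # exp tensorizes with all d-1 ranks equal to 1
```

A less structured function would instead show ranks growing along the tree, which is one way the role of tensorization in expressivity can be observed experimentally.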

In the second part, we will discuss learning aspects. First, in a classical empirical risk minimization setting, we will present adaptive learning algorithms and a model selection strategy with a suitable choice of penalty derived from complexity estimates for TNs. In a least-squares setting, this procedure is shown to be minimax adaptive to a wide range of smoothness spaces. Finally, in an active learning setting, we will present algorithms based on principal component analysis and least-squares projections.
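As a minimal sketch of empirical risk minimization with a low-rank model (a toy stand-in for the lecture's TN algorithms; the rank-one separable model, the monomial features, and the target function are all assumptions made here for illustration), one can fit f(x, y) ≈ (a·φ(x))(b·ψ(y)) to random samples by alternating least squares:

```python
import numpy as np

# ERM sketch: fit a rank-one separated model to samples of a separable
# bivariate function by alternating least-squares sweeps.

rng = np.random.default_rng(0)
N = 500
X = rng.uniform(size=(N, 2))                       # random training points
y = np.exp(X[:, 0]) * np.sin(np.pi * X[:, 1])      # separable target

deg = 5
def feats(t):
    # monomial features 1, t, ..., t^deg
    return np.vander(t, deg + 1, increasing=True)

Phi, Psi = feats(X[:, 0]), feats(X[:, 1])
a = rng.standard_normal(deg + 1)
b = rng.standard_normal(deg + 1)
for _ in range(20):
    # fix b and solve a linear least-squares problem in a, then swap
    A = Phi * (Psi @ b)[:, None]
    a = np.linalg.lstsq(A, y, rcond=None)[0]
    B = Psi * (Phi @ a)[:, None]
    b = np.linalg.lstsq(B, y, rcond=None)[0]

pred = (Phi @ a) * (Psi @ b)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)  # small empirical risk: the target is exactly rank one
```

Each sweep is a plain least-squares solve because the model is linear in one factor when the other is frozen; the adaptive and model-selection aspects of the lecture (rank adaptation, penalized selection, optimally weighted sampling) go beyond this fixed-rank, fixed-feature sketch.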

[1] W. Hackbusch. Tensor spaces and numerical tensor calculus, volume 42 of Springer series in computational mathematics. Springer, Heidelberg, 2012.

[2] A. Nouy. Low-Rank Methods for High-Dimensional Approximation and Model Order Reduction. In Model Reduction and Approximation: Theory and Algorithms, chapter 4. SIAM, Philadelphia, PA, 2017.

[3] M. Ali and A. Nouy. Approximation with Tensor Networks. Part I: Approximation Spaces. arXiv preprint arXiv:2007.00118, 2020.

[4] M. Ali and A. Nouy. Approximation with Tensor Networks. Part II: Approximation Rates for Smoothness Classes. arXiv preprint arXiv:2007.00128, 2020.

[5] M. Ali and A. Nouy. Approximation with Tensor Networks. Part III: Multivariate Approximation. arXiv preprint arXiv:2101.11932, 2021.

[6] B. Michel and A. Nouy. Learning with tree tensor networks: complexity estimates and model selection. arXiv preprint arXiv:2007.01165, 2020.

[7] A. Nouy. Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats. Numerische Mathematik, 141(3):743–789, Mar 2019.

[8] C. Haberstich, A. Nouy, and G. Perrin. Active learning of tree tensor networks using optimal least-squares. arXiv preprint arXiv:2104.13436, 2021.