# Nesterov Acceleration of Alternating Least Squares for Canonical Tensor Decomposition: Momentum Step Size Selection and Restart Mechanisms

† This is a revised and extended version of paper arXiv:1810.05846v1 (2018).

###### Abstract

We present Nesterov-type acceleration techniques for Alternating Least Squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a momentum term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behaviour. This is because it is ALS, rather than gradient descent, that is being accelerated, and the problem is non-convex. Instead, we consider various restart mechanisms and suitable choices of momentum weights that enable effective acceleration. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than the stand-alone ALS or Nesterov accelerated gradient methods, when problems are ill-conditioned or accurate solutions are desired. The resulting methods are competitive with or superior to existing acceleration methods for ALS, including ALS acceleration by NCG, NGMRES, or LBFGS, and additionally enjoy the benefit of being much easier to implement. We also compare with Nesterov-type updates where the momentum weight is determined by a line search, which are equivalent or closely related to existing line search methods for ALS. On a large and ill-conditioned 71 × 1000 × 900 tensor consisting of readings from chemical sensors to track hazardous gases, the restarted Nesterov-ALS method shows desirable robustness properties and outperforms any of the existing methods we compare with by a large factor. There is clear potential for extending our Nesterov-type acceleration approach to optimization algorithms other than ALS and to other non-convex problems, such as Tucker tensor decomposition. Our Matlab code is available at https://github.com/hansdesterck/nonlinear-preconditioning-for-optimization.

Keywords: canonical tensor decomposition, alternating least squares, Nesterov acceleration, nonlinear acceleration, nonlinear preconditioning, nonlinear optimization

## 1 Introduction

Nesterov’s accelerated gradient descent method is a celebrated method for speeding up the convergence rate of gradient descent, achieving the optimal convergence rate obtainable for first order methods on convex problems [nesterov1983method]. Nesterov’s method is an extrapolation method that specifies carefully tuned weight sequences for the so-called momentum term which updates an iterate in the direction of the previous update. With those tailored weight sequences, optimal convergence rates can be proved for convex problems.

Recent work has seen extensions of Nesterov’s accelerated gradient method in several ways: either the method is extended to non-convex optimization problems [ghadimi2016accelerated, li2015accelerated], or Nesterov’s approach is applied to accelerate convergence of methods that are not directly of gradient descent-type, such as the Alternating Direction Method of Multipliers (ADMM) applied to convex problems[goldstein2014fast].

In this paper our goal is to extend Nesterov extrapolation to accelerating the Alternating Least Squares (ALS) method for computing the canonical approximation of a tensor by a sum of rank-one tensors — the so-called Canonical Polyadic (CP) decomposition of a tensor[kolda2009tensor]. As such, this paper attacks two challenges at the same time: we develop Nesterov-accelerated algorithms for a non-convex problem — the CP tensor decomposition problem — and we do this by accelerating ALS steps instead of gradient descent steps. Since we accelerate ALS instead of gradient descent for our non-convex CP decomposition problem, we cannot simply use the standard sequences for the Nesterov momentum weights that lead to optimal convergence of Nesterov’s accelerated gradient method applied to convex problems; in fact, we will illustrate that using these sequences leads to erratic or divergent convergence behaviour. Instead, we investigate in this paper choices for selecting the momentum step size combined with restarting mechanisms that lead to effective acceleration methods for ALS applied to tensor decomposition. Our approach is partially inspired by recent related work on acceleration methods for nonlinear systems[nguyen2018accelerated] and on restarting mechanisms that improve Nesterov convergence for convex problems[odonoghue2015adaptive, su2016differential, goldstein2014fast]. Our approach and goals are most closely related to (independent) recent work by Ang and Gillis[ang2019accelerating], who consider Nesterov-type acceleration for Nonnegative Matrix Factorization, and also propose step size and restart mechanisms, which are different from ours.

The Nesterov acceleration methods for ALS with our proposed step size and restart mechanisms are attractive because they are simple to implement, and we show in extensive numerical tests that they are competitive with other recently proposed nonlinear acceleration methods for ALS applied to CP tensor decomposition, most of which are much more involved in terms of implementation. We also believe our step size and restart strategies are of broader interest and may be applied to Nesterov-type acceleration of other simple optimization methods of alternating or (block) coordinate descent-type applied to potentially non-convex problems.

As another contribution of this paper, we also establish links between Nesterov-type acceleration of ALS and other existing acceleration methods for ALS. We establish links between Nesterov-type extrapolation and the well-known nonlinear acceleration methods of Anderson acceleration and the Nonlinear Generalized Minimal Residual Method (NGMRES) [washio1997krylov, oosterlee2000krylov, sterck2012nonlinear, brune2015composing]. In particular, we show that the Nesterov extrapolation formula is equivalent to Anderson acceleration with window size one, and closely related to NGMRES with window size one. In our numerical results, we compare with NGMRES acceleration of ALS. Furthermore, if the Nesterov momentum weight is determined by a line search, Nesterov acceleration of ALS is closely related to line search methods for ALS that go back to the 1970s and have more recently been enhanced[harshman1970foundations, rajih2008enhanced, chen2011new, sorber2016exact]. We note that acceleration of ALS by NGMRES, NCG and LBFGS also employs line searches[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly]. To explore the link with line search methods[harshman1970foundations, rajih2008enhanced, chen2011new, sorber2016exact], we also compare in our numerical results with Nesterov weights determined by a cubic line search. Line searches are common in acceleration methods for ALS, but are not commonly considered when Nesterov acceleration is applied to optimization algorithms in the literature, so we argue based on our numerical results that Nesterov momentum weights determined by a line search may be a useful algorithmic approach. 
Finally, we also explain how Nesterov acceleration of ALS can be interpreted as using ALS as a nonlinear preconditioner for Nesterov’s accelerated gradient formula, following a general framework for nonlinear preconditioning of optimization methods that was previously applied to the Nonlinear Conjugate Gradient (NCG) and Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) methods[sterck2018nonlinearly].

### 1.1 Nesterov’s Accelerated Gradient Method.

Consider the problem of minimizing a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$,

$$\min_{x \in \mathbb{R}^n} f(x). \tag{1}$$

Nesterov’s accelerated gradient descent starts with an initial guess $x_0$. For $k \geq 1$, given $x_{k-1}$ and $x_{k-2}$, a new iterate $x_k$ is obtained by first adding a multiple $\beta_{k-1}$ of the *momentum* $x_{k-1} - x_{k-2}$ to $x_{k-1}$ to obtain an auxiliary variable $y_{k-1}$, and then performing a gradient descent step at $y_{k-1}$. The update equations at iteration $k$ are as follows:

$$y_{k-1} = x_{k-1} + \beta_{k-1}\,(x_{k-1} - x_{k-2}), \tag{2}$$

$$x_k = y_{k-1} - \alpha_{k-1}\,\nabla f(y_{k-1}), \tag{3}$$

where the gradient descent step length $\alpha_{k-1}$ and the momentum weight $\beta_{k-1}$ are suitably chosen numbers, and $\beta_0 = 0$ so that the first iteration is simply gradient descent.

There are a number of ways to choose the $\alpha_k$ and $\beta_k$ so that Nesterov’s accelerated gradient descent converges at the optimal rate $\mathcal{O}(1/k^2)$ in function value for smooth convex functions. For example, when $f$ is a convex function with $L$-Lipschitz continuous gradient, by choosing $\alpha_k = 1/L$, and $\beta_k$ as

$$\lambda_0 = 1, \qquad \lambda_k = \frac{1 + \sqrt{1 + 4 \lambda_{k-1}^2}}{2}, \tag{4}$$

$$\beta_k = \frac{\lambda_{k-1} - 1}{\lambda_k}, \tag{5}$$

one obtains the following convergence rate:

$$f(x_k) - f(x^\ast) \;\leq\; \frac{2 L \, \| x_0 - x^\ast \|_2^2}{(k+1)^2}, \tag{6}$$

where $x^\ast$ is a minimizer of $f$. See, e.g., Su et al.[su2016differential] for more discussion on the choices of momentum weights.
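As a concrete illustration (our own sketch, not the paper's Matlab code; all names are ours), the following Python snippet implements the update equations with the momentum weight sequence above, applied to an ill-conditioned convex quadratic with known minimizer $x^\ast = 0$:

```python
import numpy as np

def nesterov_gd(grad, x0, L, iters):
    """Nesterov's accelerated gradient descent with step length 1/L and the
    lambda-based momentum weight sequence (beta_0 = 0)."""
    x_prev = x = np.asarray(x0, dtype=float)
    lam, beta = 1.0, 0.0                       # lambda_0 = 1, beta_0 = 0
    for _ in range(iters):
        y = x + beta * (x - x_prev)            # momentum step
        x_prev, x = x, y - grad(y) / L         # gradient descent step at y
        lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam**2)) / 2.0
        beta = (lam - 1.0) / lam_next          # weight for the next iteration
        lam = lam_next
    return x

# Ill-conditioned convex quadratic f(x) = 0.5 * sum(d * x^2), minimizer x* = 0.
d = np.array([1.0, 1e-3])
x = nesterov_gd(lambda z: d * z, np.array([1.0, 1.0]), L=d.max(), iters=500)
```

For this quadratic the iterates satisfy the $\mathcal{O}(1/k^2)$ function-value bound, whereas plain gradient descent with the same step length makes very slow progress in the ill-conditioned direction.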

It will be useful for later considerations in this paper to rewrite the expressions for Nesterov acceleration in Eqs. (2-3) solely in terms of $x$ or $y$. For the $x$ variables, we obtain

$$x_k = G\big(x_{k-1} + \beta_{k-1}\,(x_{k-1} - x_{k-2})\big), \tag{7}$$

where the function $G$ is introduced to represent the result of one steepest-descent step with step size $\alpha$:

$$G(x) = x - \alpha\, \nabla f(x). \tag{8}$$

Similarly, for the $y$ variables, we can write

$$y_k = G(y_{k-1}) + \beta_k\,\big(G(y_{k-1}) - G(y_{k-2})\big). \tag{9}$$

Recently, a number of works have applied Nesterov’s acceleration technique to non-convex problems. A modified Nesterov accelerated gradient descent method has been developed that enjoys the same convergence guarantees as gradient descent on non-convex optimization problems, and maintains the optimal first-order convergence rate on convex problems[ghadimi2016accelerated]. A Nesterov-accelerated proximal gradient algorithm was developed that is guaranteed to converge to a critical point, and maintains the optimal first-order convergence rate on convex problems[li2015accelerated].

Nesterov’s accelerated gradient method is known to exhibit oscillatory behavior on convex problems. An interesting discussion on this is provided by Su et al.[su2016differential], which formulates an ODE as the continuous-time analogue of Nesterov’s method. Such oscillatory behavior happens when the method approaches convergence, and can be alleviated by restarting the algorithm using the current iterate as the initial solution, usually resetting the sequence of momentum weights to its initial state close to 0. An explanation has been provided of why resetting the momentum weight to a small value is effective using the ODE formulation of Nesterov’s accelerated gradient descent[su2016differential]. The use of adaptive restarting has been explored for convex problems[odonoghue2015adaptive], and Nguyen et al.[nguyen2018accelerated] explored the use of adaptive restarting and adaptive momentum weights for nonlinear systems of equations resulting from finite element approximation of PDEs. Our work is the first study of a general Nesterov-accelerated ALS scheme.

### 1.2 Nesterov Extrapolation as a Generic Nonlinear Acceleration Scheme

We can now generalize the Nesterov extrapolation approach and apply it to accelerate optimization methods other than steepest descent. Considering a generic iterative optimization method with update formula

$$x_k = q(x_{k-1}),$$

we replace the steepest-descent operator $G$ by the generic update function $q$ in the Nesterov update formulas Eq. (7) or Eq. (9) to obtain

$$x_k = q\big(x_{k-1} + \beta_{k-1}\,(x_{k-1} - x_{k-2})\big), \tag{10}$$

$$y_k = q(y_{k-1}) + \beta_k\,\big(q(y_{k-1}) - q(y_{k-2})\big). \tag{11}$$

This extrapolation approach has been used previously to accelerate, for example, ADMM, where $q$ represents an ADMM update[goldstein2014fast]. Nesterov’s technique has also been used to accelerate an approximate Newton method[ye2017nesterov].

In this paper, we consider Nesterov-type acceleration of ALS for CP tensor decomposition. We write $P$ for the ALS update function, and obtain the accelerated formulas

$$y_k = P(y_{k-1}) + \beta_k\,\big(P(y_{k-1}) - P(y_{k-2})\big) \tag{12}$$

and

$$x_k = P\big(x_{k-1} + \beta_{k-1}\,(x_{k-1} - x_{k-2})\big). \tag{13}$$

Replacing gradient directions by update directions provided by ALS is essentially also an approach that has been taken to obtain nonlinear acceleration of ALS by NGMRES, NCG and LBFGS[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly]; in the case of Nesterov’s method the procedure is extremely simple and easy to implement.

In most of what follows, we will focus on the update formula in the $y$ variables, and we will simplify notation for the case of ALS acceleration by defining

$$\bar{y}_k = P(y_{k-1}) \tag{14}$$

and writing the update formula for Nesterov-accelerated ALS as

$$y_k = \bar{y}_k + \beta_k\,(\bar{y}_k - \bar{y}_{k-1}). \tag{15}$$

Recently, the application of Nesterov acceleration to ALS for canonical tensor decomposition was considered by Wang et al.[wang2018accelerating] However, they only tried the vanilla Nesterov technique with a standard Nesterov momentum sequence, without restarting or line search mechanisms, and, not surprisingly, they fail to obtain acceleration of ALS. The main goal of this paper is to investigate suitable choices for the momentum weights $\beta_k$ in Eq. (15), and for restart mechanisms that enable effective convergence patterns when applying the Nesterov-ALS method of Eq. (15).

### 1.3 Canonical Tensor Decomposition.

Tensor decomposition has wide applications in machine learning, signal processing, numerical linear algebra, computer vision, natural language processing, and many other fields[kolda2009tensor]. This paper focuses on the Canonical Polyadic (CP) decomposition of tensors[kolda2009tensor], which is also called the CANDECOMP/PARAFAC decomposition. CP decomposition approximates a given tensor $\mathcal{T}$ by a low-rank tensor composed of a sum of $R$ rank-one terms; for a third-order tensor $\mathcal{T} \in \mathbb{R}^{I \times J \times K}$, this sum takes the form $\sum_{r=1}^{R} a_r \circ b_r \circ c_r$, where $\circ$ is the vector outer product. Specifically, defining the factor matrices $A = [a_1, \ldots, a_R]$, $B = [b_1, \ldots, b_R]$, and $C = [c_1, \ldots, c_R]$, we minimize the error in the Frobenius norm by considering the objective function

$$f(A, B, C) = \frac{1}{2}\, \Big\| \mathcal{T} - \sum_{r=1}^{R} a_r \circ b_r \circ c_r \Big\|_F^2. \tag{16}$$
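As a small illustration (our own sketch with our own naming, assuming the conventional factor-1/2 scaling of the objective), the objective of Eq. (16) can be evaluated in a few lines of NumPy for a third-order tensor:

```python
import numpy as np

def cp_objective(T, A, B, C):
    """Evaluate f(A, B, C) = 0.5 * || T - sum_r a_r o b_r o c_r ||_F^2
    for a third-order tensor T and factor matrices A, B, C."""
    model = np.einsum('ir,jr,kr->ijk', A, B, C)   # sum of R rank-one terms
    return 0.5 * np.linalg.norm(T - model) ** 2
```

For a tensor built exactly from the same $R$ rank-one terms, the objective is zero.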

Finding efficient methods for computing tensor decomposition is an active area of research, but the alternating least squares (ALS) algorithm is still considered one of the most efficient algorithms for CP decomposition.[acar2011scalable] Alternative optimization methods for CP that have been considered in the literature include quasi-Newton methods such as NCG and LBFGS[acar2011scalable], nonlinear least-squares approaches using Gauss–Newton and Levenberg–Marquardt algorithms[paatero1997weighted, tomasi2006comparison, phan2013low], and stochastic gradient descent[sidiropoulos2017tensor].

ALS finds a CP decomposition in an iterative way. In each iteration, ALS sequentially updates one block of variables at a time by minimizing expression (16), while keeping the other blocks fixed: first $A$ is updated, then $B$, and so on. Updating a factor matrix is a linear least-squares problem that can be solved in closed form. The ALS update equations for the factor matrices can be derived by considering expressions for the gradient of $f$. For example, for a tensor of order 3, with factor matrices of sizes $I \times R$, $J \times R$, $K \times R$, expressions for the gradient of $f$ are given by Acar et al.[acar2011scalable]

$$\frac{\partial f}{\partial A} = -T_{(1)}\,(C \odot B) + A\,\big((C^T C) \ast (B^T B)\big), \tag{17}$$

$$\frac{\partial f}{\partial B} = -T_{(2)}\,(C \odot A) + B\,\big((C^T C) \ast (A^T A)\big), \tag{18}$$

$$\frac{\partial f}{\partial C} = -T_{(3)}\,(B \odot A) + C\,\big((B^T B) \ast (A^T A)\big), \tag{19}$$

where $\odot$ denotes the Khatri-Rao product (the ‘matching columnwise’ Kronecker product[kolda2009tensor, acar2011scalable]), $\ast$ denotes component-wise multiplication, and $T_{(n)}$ is the matricized version of tensor $\mathcal{T}$ in direction $n$.[kolda2009tensor, acar2011scalable] For example, in the first equation, $T_{(1)}$ is of size $I \times JK$, and $C \odot B$ is of size $JK \times R$. The matrix $T_{(1)}\,(C \odot B)$ is called a ‘matricized-tensor times Khatri-Rao product’ (often abbreviated as ‘MTTKRP’).

The update equations for ALS can be derived by setting each of the gradient components in (17)-(19) equal to zero[acar2011scalable]:

$$A\,\big((C^T C) \ast (B^T B)\big) = T_{(1)}\,(C \odot B), \tag{20}$$

$$B\,\big((C^T C) \ast (A^T A)\big) = T_{(2)}\,(C \odot A), \tag{21}$$

$$C\,\big((B^T B) \ast (A^T A)\big) = T_{(3)}\,(B \odot A). \tag{22}$$

In each ALS iteration, the factor matrices are sequentially updated by solving these systems in order for $A$, $B$, and $C$. It can be shown that these are the normal equations for minimizing $f$ in $A$, $B$, and $C$, respectively, while keeping the other factor matrices constant[acar2011scalable]. For example, Eq. (20) gives the normal equations for minimizing the least-squares functional $f$ with respect to $A$, keeping $B$ and $C$ fixed.

Collecting the matrix elements of the factor matrices in a vector $x$, we use $P(x)$ to denote the updated variables after performing one full ALS iteration starting from $x$.
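To make the sweep concrete, here is a minimal NumPy sketch (our own naming, not the paper's code) of one full ALS iteration for a third-order tensor, solving the three normal-equation systems (20)-(22) in turn; the matricization ordering is chosen to match the Khatri-Rao row ordering:

```python
import numpy as np

def khatri_rao(A, B):
    """Khatri-Rao product: columnwise Kronecker, (I*J) x R from I x R and J x R."""
    I, J, R = A.shape[0], B.shape[0], A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def als_sweep(T, A, B, C):
    """One full ALS iteration for a third-order tensor: solve the normal
    equations for A, B, C in turn, each an exact linear least-squares solve."""
    I, J, K = T.shape
    T1 = T.transpose(0, 2, 1).reshape(I, -1)   # columns ordered to match rows of C kr B
    T2 = T.transpose(1, 2, 0).reshape(J, -1)   # columns ordered to match rows of C kr A
    T3 = T.transpose(2, 1, 0).reshape(K, -1)   # columns ordered to match rows of B kr A
    A = np.linalg.solve((C.T @ C) * (B.T @ B), (T1 @ khatri_rao(C, B)).T).T
    B = np.linalg.solve((C.T @ C) * (A.T @ A), (T2 @ khatri_rao(C, A)).T).T
    C = np.linalg.solve((B.T @ B) * (A.T @ A), (T3 @ khatri_rao(B, A)).T).T
    return A, B, C
```

Since each block solve exactly minimizes the objective in that block, the objective value is non-increasing from sweep to sweep.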

When the CP decomposition problem is ill-conditioned, ALS can be slow to converge[acar2011scalable], and recently a number of methods have been proposed to accelerate ALS. This includes acceleration by previously mentioned methods such as NGMRES[sterck2012nonlinear], NCG[sterck2015nonlinearly], and LBFGS[sterck2018nonlinearly]. An approach has also been proposed based on the Aitken-Steffensen acceleration technique.[wang2018accelerating] These acceleration techniques can substantially improve ALS convergence speed when problems are ill-conditioned or an accurate solution is required.

### 1.4 Main Approach and Contributions of this Paper.

As explained above, our basic approach is to apply Nesterov acceleration to ALS in a manner that is equivalent to replacing the gradient update in the second step of Nesterov’s method, Eq. (3), by an ALS step. However, applying this procedure directly fails for several reasons. First, it is not clear to what extent the momentum weight sequence of Eq. (5), which guarantees optimal convergence for gradient acceleration in the convex case, applies at all to our case of ALS acceleration for a non-convex problem. Second, and more generally, it is well known that optimization methods for non-convex problems require mechanisms to safeguard against ‘bad steps’, especially when the solution is not close to a local minimum[nocedal2006numerical]. The main contribution of this paper is to propose and explore restart-based safeguarding mechanisms for Nesterov acceleration applied to ALS, along with momentum weight selection. This leads to a family of acceleration methods for ALS that are competitive with or outperform previously described highly efficient nonlinear acceleration methods for ALS. We also compare with choosing the momentum weight using line searches, which is another way to obtain a safeguarding mechanism and is equivalent to line search methods for ALS that go back to the 1970s[harshman1970foundations, rajih2008enhanced, chen2011new, sorber2016exact].

As further motivation for the problem we address and for our approach, Fig. 1 illustrates the convergence difficulties that ALS may experience for ill-conditioned CP tensor decomposition problems, and how nonlinear acceleration may remove these convergence difficulties. For the standard ill-conditioned synthetic test problem that is the focus of Fig. 1 (see Section 4 for the problem description), ALS converges slowly (black curve). It is known that standard gradient-based methods such as gradient descent (GD), NCG, or LBFGS that do not rely on ALS perform more poorly than ALS[acar2011scalable], so it is no surprise that applying Nesterov’s accelerated gradient method to the problem (for example, with the gradient descent step length determined by a standard cubic line search[acar2011scalable, sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly], cyan curve) leads to worse performance than ALS. Nonlinear acceleration of ALS, however, can substantially improve convergence[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly], and we pursue this using Nesterov acceleration in this paper. However, as expected, applying Nesterov acceleration (with the standard Nesterov momentum weight sequence) directly to ALS for our non-convex problem, by replacing the gradient step in the Nesterov formula by a step in the ALS direction, does not work and leads to erratic convergence behaviour (magenta curve).

As the main contribution of this paper, we consider restart mechanisms to stabilize the convergence behaviour of Nesterov-accelerated ALS, and we study how two key parameters, the momentum step size and the restart condition, should be set. The blue curves in Fig. 1 show two examples of the acceleration that can be provided by two variants of the family of restarted Nesterov-ALS methods we consider. One of these variants (Nesterov-ALS-RG-SN-D2) uses Nesterov’s sequence for the momentum weights, and another successful variant simply always uses momentum weight one (Nesterov-ALS-RG-S1-E). The naming scheme for the Nesterov-ALS variants that we consider will be explained in Section 4. Note that, for our non-convex problem, the cascading convergence pattern of our restarted variant with Nesterov’s sequence for the momentum weights, Nesterov-ALS-RG-SN-D2, is very similar to the convergence patterns observed in the work of Su et al.[su2016differential] on restart mechanisms for Nesterov acceleration (using Nesterov’s sequence) in the convex setting. Extensive numerical tests to be provided in Section 4 show that the best-performing Nesterov-ALS scheme is achieved when using the gradient ratio as momentum weight (as in Nguyen et al.[nguyen2018accelerated]), and restarting when the objective value increases. We also compare with determining the Nesterov momentum weight in each iteration using a cubic line search (LS) (red curve). The resulting Nesterov-ALS-LS method (which is similar to classical line search methods for ALS[harshman1970foundations, rajih2008enhanced, chen2011new, sorber2016exact]) is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS that use line searches, such as NGMRES-ALS (green curve), with the advantage that Nesterov-ALS-LS is much easier to implement. However, the line searches may require multiple evaluations of $f$ and its gradient and can be expensive.

The convergence theory of Nesterov’s accelerated gradient method for convex problems does not apply in our case due to the non-convex setting of the CP problem, and because we accelerate ALS steps instead of gradient steps. In fact, in the context of nonlinear convergence acceleration for ALS, few theoretical results on convergence are available[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly]. We will, however, demonstrate numerically, for representative synthetic and real-world test problems, that our Nesterov-accelerated ALS methods are competitive with or outperform existing acceleration methods for ALS. In particular, some of the Nesterov-ALS methods substantially outperform other acceleration methods for ALS when applied to a large real-world ill-conditioned 71 × 1000 × 900 tensor.

The remainder of this paper is structured as follows. Section 2 presents our general Nesterov-ALS scheme and discusses its instantiations, focusing on the choice of momentum weights and restarting mechanisms. Section 3 discusses and establishes links between the Nesterov-accelerated ALS methods we consider, and existing acceleration methods for ALS. In Section 4, we perform an extensive experimental study of our algorithm by comparing it with a number of acceleration schemes on several benchmark datasets. Section 5 concludes the paper.

## 2 Nesterov-Accelerated ALS Methods

We consider Nesterov-type acceleration of ALS as in Eq. (13), but a direct application of a standard Nesterov momentum weight sequence for convex problems, as in Eqs. (4)-(5), does not work. A typical behavior is illustrated by the magenta curve in Fig. 1, which suggests that the algorithm gets stuck in a highly suboptimal region. Such erratic behavior arises here, and not in Nesterov’s accelerated gradient descent for convex problems, because the ALS update we use for our non-convex problem is very different from gradient descent. Below we propose a general restart method to safeguard against bad steps, and investigate suitable choices for the momentum weights $\beta_k$.

### 2.1 Nesterov-ALS with Restart.

Our general Nesterov-ALS scheme with restart is shown in Algorithm 1. Besides incorporating the momentum term in the update rule (line 12), there are two other important ingredients in our algorithm: adaptive restarting (lines 5-7), and an adaptive momentum weight (line 9). The precise expressions we use for restarting and for computing the momentum weight are explained in the following subsections. In each iteration of the algorithm we compute a new update according to the update rule (12) with momentum term (line 12). Before computing the update, we check whether a restart is needed (line 5) due to a bad current iterate. When we restart, we discard the current bad iterate (line 6), and compute a simple ALS update instead (ALS always reduces $f$ and is thus well-behaved), by setting $\beta_k$ equal to zero (line 7) such that (line 12) computes an ALS update. Note that, when a bad iterate is discarded, we do not decrease the iteration index $k$ by one, but instead set the current iterate $x_k$ equal to the previously accepted iterate $x_{k-1}$, which then occurs twice in the sequence of iterates. We wrote the algorithm down this way because we can then use $k$ to count work (properly keeping track of the cost to compute the rejected iterate), but the algorithm can of course also be written without duplicating the previous iterate when an iterate is rejected. The index $\hat{k}$ keeps track of the number of iterates since restarting, which is used for some of our strategies to compute the momentum weight $\beta_k$, see Section 2.2. The condition $\hat{k} \geq 1$ is required in the restart check (line 5) to make sure that each restart (computing an ALS iteration) is followed by at least one other iteration before another restart can be triggered (because otherwise the algorithm could get stuck in the same iterate).

Various termination criteria may be used. In our experiments, we terminate when the gradient 2-norm, scaled by the problem size, reaches a set tolerance:

$$\frac{\| \nabla f(x_k) \|_2}{N} < \text{tol}.$$

Here $N$ is the number of variables in the low-rank tensor approximation.

The momentum weight $\beta_k$ and the restart condition need to be specified to turn the scheme into concrete algorithms. We discuss the choices used in Section 2.2 and Section 2.3 below.

### 2.2 Momentum Weight Choices for Nesterov-ALS with Restart.

Naturally, we can ask whether a momentum weight sequence that guarantees optimal convergence for convex problems is applicable in our case. We consider the momentum weight rule defined in Eq. (5), but adapted to take restart into account in Algorithm 1:

$$\beta_k = \frac{\lambda_{\hat{k}-1} - 1}{\lambda_{\hat{k}}}, \tag{23}$$

where $\lambda$ is defined in Eq. (4). Restart is taken into account by using $\hat{k}$, the number of iterates since the last restart, instead of $k$ as the index on the right-hand side.

Following Nguyen et al.[nguyen2018accelerated], we also consider using the *gradient ratio* as the momentum weight:

$$\beta_k = \frac{\| \nabla f(x_k) \|_2}{\| \nabla f(x_{k-1}) \|_2}. \tag{24}$$

This momentum weight rule can be motivated as follows[nguyen2018accelerated]. When the gradient norm drops significantly, that is, when convergence is fast, the algorithm performs a step closer to the ALS update, because momentum may not really be needed and may in fact be detrimental, potentially leading to overshoots and oscillations. When the gradient norm does not change much, that is, when the algorithm is not making much progress, acceleration may be beneficial, and a $\beta_k$ closer to 1 is obtained by the formula.

Finally, since we observe that Nesterov’s sequence Eq. (5) produces values that are always of the order of 1 and approach 1 steadily as $k$ increases, we can simply consider the choice $\beta_k = 1$ for our non-convex problems, where we rely on the restart mechanism to correct any bad iterates that may result, replacing them by an ALS step. Perhaps surprisingly, the numerical results to be presented below show that this simplest of choices for $\beta_k$ may work well, if combined with suitable restart conditions.

### 2.3 Restart Conditions for Nesterov-ALS.

One natural restarting strategy is *function restarting* (see, e.g., O’Donoghue and Candès[odonoghue2015adaptive] and Su et al.[su2016differential] for its use in the convex setting), which restarts when the algorithm fails to sufficiently decrease the function value. We consider the condition

$$f(x_k) > \eta\, f(x_{k-\delta}). \tag{25}$$

Here, we normally use $\delta = 1$, but $\delta > 1$ can be used to allow for *delay*. We normally take $\eta = 1$, but we have found that it sometimes pays off to allow for a modest increase in $f$ before restarting, and a value of $\eta > 1$ facilitates that. If $\delta = 1$ and $\eta = 1$, the condition guarantees that the algorithm will make some progress in each iteration, because the ALS step that is carried out after a restart is guaranteed to decrease $f$. However, requiring strict decrease may preclude accelerated iterates (the first accelerated iterate may always be rejected in favor of an ALS update), so either $\delta > 1$ or $\eta > 1$ allows a few accelerated iterates to initially increase $f$, after which $f$ may decrease in further iterations in a much faster way than under ALS, potentially resulting in substantial acceleration of ALS. While function restarting (with $\delta = 1$ and $\eta = 1$) has been observed to significantly improve convergence for convex problems, no theoretical convergence rate has been obtained[odonoghue2015adaptive, su2016differential].

Following Su et al.[su2016differential], we also consider the *speed restarting* strategy, which restarts when

$$\| x_k - x_{k-1} \|_2 < \| x_{k-1} - x_{k-2} \|_2. \tag{26}$$

Intuitively, this condition means that the speed along the convergence trajectory, as measured by the change $\| x_k - x_{k-1} \|_2$, drops. Su et al.[su2016differential] showed that speed restarting leads to guaranteed improvement in convergence rate for convex problems.

Another natural strategy is to restart when the gradient norm satisfies

$$\| \nabla f(x_k) \|_2 > \eta\, \| \nabla f(x_{k-1}) \|_2, \tag{27}$$

where, as above, $\eta$ can be chosen to be equal to or greater than one. This *gradient restarting* strategy (with $\eta = 1$) has been used in conjunction with the gradient ratio momentum weight by Nguyen et al.[nguyen2018accelerated], and a similar condition on the residual has been used for Nesterov acceleration of ADMM for convex problems by Goldstein et al.[goldstein2014fast].

When we use a value of $\eta > 1$ in the above restart conditions, we have found in our experiments that it pays off to allow for a larger $\eta$ immediately after the restart, and then decrease $\eta$ in subsequent steps. In particular, in our numerical tests below, we set $\eta = 1.25$ after a restart, and decrease $\eta$ in every subsequent step by 0.02, until $\eta$ reaches 1.15.
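The three restart conditions above can be collected in a small helper (our own sketch, with our own naming; histories are lists of past values, most recent last):

```python
import numpy as np

def should_restart(kind, hist_f=None, hist_x=None, hist_g=None, eta=1.0, delta=1):
    """Restart tests: 'function' compares objective values, 'speed' compares
    successive step lengths, 'gradient' compares successive gradient norms."""
    if kind == "function":
        return hist_f[-1] > eta * hist_f[-1 - delta]
    if kind == "speed":
        return (np.linalg.norm(hist_x[-1] - hist_x[-2])
                < np.linalg.norm(hist_x[-2] - hist_x[-3]))
    if kind == "gradient":
        return hist_g[-1] > eta * hist_g[-2]
    raise ValueError(kind)
```

A decaying threshold as described above would then be realized by starting `eta` at 1.25 after each restart and subtracting 0.02 per step, with a floor of 1.15.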

### 2.4 Nesterov-ALS with Line Search.

To compare our numerical results with existing acceleration methods for ALS that use a line search in the direction of the ALS update[harshman1970foundations, rajih2008enhanced, chen2011new, sterck2012nonlinear, sorber2016exact], we also consider an approach where the momentum weight $\beta_k$ in the Nesterov extrapolation formula (12) is determined by a line search. In Section 3.2 we explain the equivalence of this approach with existing line search methods to accelerate ALS[harshman1970foundations, rajih2008enhanced, chen2011new, sterck2012nonlinear, sorber2016exact]. The line search to determine the momentum weight safeguards against bad steps introduced by the momentum term, so an additional restart mechanism is not needed. (Note that ALS itself always reduces $f$ and is not prone to introducing bad steps; if the line search does not find a suitable extrapolation point, it simply returns $\beta_k = 0$ and the ALS step is accepted, which can be considered as a restart mechanism built into the line search.) For the line search approach, we determine $\beta_k$ in each iteration as an approximate solution of

$$\min_{\beta}\; f\big(\bar{y}_k + \beta\,(\bar{y}_k - \bar{y}_{k-1})\big). \tag{28}$$

We use the standard Moré-Thuente cubic line search that has been used extensively for tensor decomposition methods[acar2011scalable, sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly]. This inexact line search finds a value of $\beta_k$ that satisfies the Wolfe conditions, which impose a sufficient descent condition and a curvature condition. Each iteration of this iterative line search requires the computation of the function value $f$ and its gradient. As such, the line search can be quite expensive. In our numerical tests, we use standard values for the descent and curvature condition parameters, a starting search step length of 1, and a maximum of 20 line search iterations.

## 3 Relation with Existing Acceleration Methods

In this section we elucidate the relation of the Nesterov acceleration methods for ALS discussed in this paper with other existing convergence acceleration methods for ALS.

### 3.1 NGMRES and Anderson Acceleration

The Nonlinear Generalized Minimal Residual Method[sterck2012nonlinear, sterck2013steepest] (NGMRES) for minimizing $f(x)$ accelerates convergence of a nonlinear iteration $x_k = q(x_{k-1})$ by considering the $w$-step extrapolation formula

$$\hat{x}_k = \bar{x}_k + \sum_{i=1}^{w} c_i\,(\bar{x}_k - x_{k-i}), \tag{29}$$

where $\bar{x}_k = q(x_{k-1})$, in combination with a line search to stabilize the iteration when the iterate is far from a stationary point,

$$x_k = \bar{x}_k + \beta_k\,(\hat{x}_k - \bar{x}_k), \tag{30}$$

where $\beta_k$ is determined by a line search. The expansion coefficients $c_i$ are computed in each step by solving a small linear least-squares problem that minimizes a linearization of the gradient norm $\| \nabla f(\hat{x}_k) \|_2$. NGMRES was originally proposed as a convergence accelerator for solving a system of nonlinear algebraic equations $g(x) = 0$, and it was shown to be essentially equivalent to the GMRES method in the case of a linear system[washio1997krylov, oosterlee2000krylov]. Note that the nonlinear iteration $x_k = q(x_{k-1})$ can also be seen as an inner-iteration nonlinear preconditioner for NGMRES, see also Section 3.3 below.

NGMRES is closely related to the classical Anderson acceleration method for solving $g(x) = 0$[brune2015composing], which uses the expansion formula

(31)  $x_{k+1} = \sum_{i=k-m}^{k} \gamma_i \, G(x_i), \qquad \sum_{i=k-m}^{k} \gamma_i = 1,$

where $G(\cdot)$ is the fixed-point iteration associated with $g$, and the weights $\gamma_i$ are also determined to approximately minimize a linearized residual. Anderson acceleration can be viewed as a multi-secant method[fang2009two] and also reduces to GMRES in the case of linear systems[walker2011anderson].

It can be seen immediately that the form of the Nesterov extrapolation formula (15) is equivalent to the Anderson extrapolation formula (31) with one step ($m = 1$), and is also closely related to NGMRES with one step ($w = 1$ in Eq. (29)). The difference, of course, is that the extrapolation weight in Nesterov’s formula (15) is usually determined differently than for Anderson acceleration and NGMRES.
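The stated equivalence is easy to check numerically: a one-step Anderson combination of the two most recent inner-iteration outputs, with weights summing to one, is algebraically the same as the momentum form of the Nesterov-type update. A minimal sketch on plain Python lists (the parameterization $\beta = -\gamma$ relating the two forms is the point being verified):

```python
# One-step Anderson combination vs. momentum-form extrapolation. g_curr and
# g_prev play the roles of G(x_k) and G(x_{k-1}); values are arbitrary.

def anderson_one_step(g_curr, g_prev, gamma):
    # x_{k+1} = (1 - gamma)*G(x_k) + gamma*G(x_{k-1}), weights sum to 1
    return [(1 - gamma) * a + gamma * b for a, b in zip(g_curr, g_prev)]

def momentum_one_step(g_curr, g_prev, beta):
    # x_{k+1} = G(x_k) + beta*(G(x_k) - G(x_{k-1}))
    return [a + beta * (a - b) for a, b in zip(g_curr, g_prev)]

g_curr, g_prev, beta = [1.0, 2.0, -0.5], [0.8, 2.5, -0.4], 0.3
assert all(abs(u - v) < 1e-12 for u, v in
           zip(momentum_one_step(g_curr, g_prev, beta),
               anderson_one_step(g_curr, g_prev, -beta)))
```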

### 3.2 Line Search Methods for ALS

As early as 1970, Harshman[harshman1970foundations] described enhancing ALS convergence for CP tensor decomposition by an over-relaxation in the direction of the ALS update:

(32)  $x_{k+1} = x_k + \beta \bigl( \bar{x}_{k+1} - x_k \bigr),$

where $\bar{x}_{k+1} = \mathrm{ALS}(x_k)$, and a value between 1.2 and 1.3 was recommended for the over-relaxation parameter $\beta$.
This extrapolation step is of the same form as NGMRES with window size one, see Eq. (29), and is similar to (but not the same as) the Nesterov extrapolation formula (15) and Anderson acceleration with $m = 1$.
In later work, $\beta$ in Eq. (32) was determined in each step using a line search, obtaining a value of $\beta$ that approximately minimizes $f$.
In particular, the so-called *enhanced line search* approach[rajih2008enhanced] computes the optimal $\beta$ in Eq. (32) in an accurate manner, which is possible because, for finding the optimal CP decomposition in the 2-norm, the objective $f$ is a polynomial function of $\beta$.
Enhanced line search was later extended to broader tensor optimization problems and to exact plane search[sorber2016exact].
Chen et al.[chen2011new] consider one-step extrapolation according to the update formula

(33)  $x_{k+1} = \bar{x}_{k+1} + \beta \bigl( \bar{x}_{k+1} - \bar{x}_k \bigr),$

where again $\bar{x}_{k+1} = \mathrm{ALS}(x_k)$, which is of the same form as the Nesterov-type update we consider in this paper, Eq. (15) (and, thus, the same as Anderson acceleration with one step), but Chen et al.[chen2011new] determine $\beta$ using a line search. Chen et al.[chen2011new] also considered two variants of the two-step extrapolation formula

(34)  $x_{k+1} = \bar{x}_{k+1} + \beta_1 \bigl( \bar{x}_{k+1} - \bar{x}_k \bigr) + \beta_2 \bigl( \bar{x}_k - \bar{x}_{k-1} \bigr),$

where the coefficients $\beta_1$ and $\beta_2$ in the extrapolation are pre-determined based on a so-called geometric or algebraic ansatz. This approach is similar to NGMRES or Anderson acceleration with window size two, except that the latter methods determine optimal extrapolation coefficients by solving, in each iteration, a linear least-squares problem for a window size of $w$. NGMRES and Anderson thus adapt the extrapolation coefficients in each iteration to minimize the residual of the gradient, for general window size, whereas the methods of Chen et al.[chen2011new] are limited to two-step extrapolation and use extrapolation coefficients that are fixed over the iterations and determined in an ad-hoc way. It has been shown numerically[sterck2012nonlinear] that NGMRES with window size greater than one (i.e., multistep extrapolation) can be much faster than (enhanced) line search of the form (32) (i.e., one-step extrapolation); an NGMRES window size of $w = 20$ was found to be a good choice for efficient acceleration of ALS for CP tensor decomposition[sterck2012nonlinear]. Finally, we note that acceleration of ALS by NGMRES, NCG and LBFGS also employs line searches as a globalization mechanism to guard against erratic iterates[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly].
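To sketch the difference between pre-determined and least-squares extrapolation coefficients, the window-2 least-squares step can be computed from recent residual vectors as follows. This is a simplified illustration of the type of small least-squares problem NGMRES/Anderson solve each iteration; the function name and the normal-equations formulation are illustrative, not the cited implementations.

```python
# Choose window-2 extrapolation coefficients by minimizing the linearized
# residual, instead of fixing them a priori: solve
#   min_{a1,a2} || r_k - a1*(r_k - r_km1) - a2*(r_k - r_km2) ||_2
# via the 2x2 normal equations (Cramer's rule), on plain Python lists.

def ls_coefficients(r_k, r_km1, r_km2):
    d1 = [x - y for x, y in zip(r_k, r_km1)]
    d2 = [x - y for x, y in zip(r_k, r_km2)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    b1, b2 = dot(d1, r_k), dot(d2, r_k)
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-14:          # nearly parallel differences: fall back
        return 0.0, 0.0
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det
```

For residual differences that span the residual exactly, the computed coefficients annihilate it, which a fixed ansatz generally cannot do.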

While the main focus of this paper is on investigating momentum weights and restart mechanisms for the Nesterov update formula (15) for ALS, we also compare with a version of update formula (15) where $\beta$ is determined by a line search, similar to existing methods for accelerating ALS convergence that use line searches[harshman1970foundations, rajih2008enhanced, chen2011new, sterck2012nonlinear, sorber2016exact].

### 3.3 Nonlinear Preconditioning: Left Preconditioning and Right Preconditioning

Applying Nesterov’s acceleration approach to ALS as explained in Section 1.2 can also be interpreted in the context of nonlinear preconditioning[brune2015composing] for optimization methods. In recent work[sterck2018nonlinearly] a general framework for nonlinear preconditioning of optimization methods was formulated and applied to nonlinear preconditioning of NCG and LBFGS. This framework extends the concepts of linear preconditioning for linear systems $Ax = b$ to genuinely nonlinear preconditioners for nonlinear optimization.
In the context of linear systems, the concept of *left preconditioning* multiplies the linear system with a nonsingular preconditioning matrix $P^{-1}$, aiming to improve the convergence of iterative methods (such as GMRES) applied to the preconditioned system

$$P^{-1} A x = P^{-1} b.$$

Left preconditioning can be seen as taking linear combinations of the equations to be solved.
In the *right preconditioning* approach, on the other hand, a linear change of the variables, $x = R^{-1} y$, is considered by means of a nonsingular preconditioning matrix $R$, and the iterative method is then applied to the preconditioned system

$$A R^{-1} y = b.$$
We now explain how these ideas can be extended to minimizing nonlinear functions using genuinely nonlinear preconditioners, where the preconditioner is not given by a linear coordinate transformation that is encoded by a matrix multiplication, but by a genuinely nonlinear transformation.

#### 3.3.1 Nonlinear Left Preconditioning

In the linear case, left preconditioning of, for example, the GMRES method, can be understood as a combination of two methods: an inner preconditioning iteration

(35)  $x_{k+1} = x_k + P^{-1} \bigl( b - A x_k \bigr),$

where $P$ is the left preconditioning matrix, is combined with the outer GMRES iteration (where each iteration of the combined method typically uses one inner update and one outer update)[washio1997krylov, oosterlee2000krylov, sterck2012nonlinear]. One can say that inner iteration (35) preconditions the GMRES outer iteration, speeding up the convergence of GMRES, or, in an alternative view, one can say that GMRES is an outer iteration that accelerates the convergence of inner iteration (35). Possible choices for the preconditioning matrix $P$ include, for example, the lower triangular part of $A$ (Gauss-Seidel), or the constant diagonal preconditioner $P = (1/\alpha) I$ (Richardson iteration), with a suitably chosen $\alpha > 0$.

It is instructive to consider the case of solving a linear system $Ax = b$ where $A$ is Symmetric Positive Definite (SPD). In this case, solving $Ax = b$ is equivalent to minimizing the convex quadratic $f(x) = \tfrac12 x^T A x - b^T x$. The gradient of $f$ is given by

(36)  $\nabla f(x) = A x - b.$

In this case, it can be seen that inner iteration preconditioner (35) with the Richardson choice $P = (1/\alpha) I$ is equivalent to steepest descent with step length $\alpha$:

(37)  $x_{k+1} = x_k + \alpha \bigl( b - A x_k \bigr) = x_k - \alpha \, \nabla f(x_k).$

More generally, when using a preconditioning matrix $P$ in the case of an SPD system, the expression $P^{-1} \nabla f(x_k) = P^{-1} (A x_k - b)$ in (35) can be called the preconditioned gradient. Note that the preconditioned gradient can be obtained from the preconditioning iteration (35) by

(38)  $P^{-1} \nabla f(x_k) = -\bigl( x_{k+1} - x_k \bigr) = x_k - x_{k+1}.$

In words, the preconditioned gradient vector is given, up to sign, by the update vector of the linear preconditioning iteration.
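These identities can be checked directly on a concrete 2×2 SPD example. The following sketch (all matrix and vector values are illustrative) verifies that a Richardson inner iteration coincides with a steepest-descent step, and that the update vector recovers the preconditioned gradient up to sign:

```python
# For a small SPD system A x = b: one Richardson inner iteration
#   x+ = x + alpha*(b - A x),  i.e., P = (1/alpha)*I,
# equals a steepest-descent step on f(x) = 0.5 x'Ax - b'x, and
#   P^{-1} grad f(x) = alpha*(A x - b) = x - x+.

A = [[4.0, 1.0], [1.0, 3.0]]   # SPD matrix (illustrative)
b = [1.0, 2.0]
alpha = 0.1

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def grad_f(x):                 # grad f(x) = A x - b
    return [axi - bi for axi, bi in zip(matvec(A, x), b)]

x = [0.5, -0.25]
richardson = [xi + alpha * (bi - axi) for xi, bi, axi in zip(x, b, matvec(A, x))]
steepest   = [xi - alpha * gi for xi, gi in zip(x, grad_f(x))]
assert all(abs(u - v) < 1e-12 for u, v in zip(richardson, steepest))

# preconditioned gradient = minus the update vector
assert all(abs(alpha * g - (xi - xn)) < 1e-12
           for g, xi, xn in zip(grad_f(x), x, richardson))
```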

Conceptually, this can be generalized to nonlinear preconditioning as follows. Standard nonlinear optimization methods such as NCG or LBFGS use gradient directions to minimize a nonlinear function $f(x)$. However, if a simple nonlinear iterative method for minimizing $f$ is available that converges faster than gradient descent and, in a sense, provides better search directions than the gradient, we can use this iterative method as an inner preconditioning iteration for NCG or LBFGS, and make use of the more suitable search directions it provides. For example, in the case of ALS for canonical tensor decomposition, we can use the nonlinear iteration

(39)  $x_{k+1} = \mathrm{ALS}(x_k)$

as the inner iteration preconditioner of NCG or LBFGS. In analogy with the linear SPD case, we can interpret the direction provided by ALS as a nonlinearly preconditioned gradient direction, $\nabla_P f(x_k)$, given by

(40)  $\nabla_P f(x_k) = x_k - \mathrm{ALS}(x_k),$

in analogy with (38) and (35). To bring this idea into practice, we simply replace the gradient direction in the update formulas for NCG or LBFGS by the nonlinearly preconditioned gradient $\nabla_P f$, i.e., by the update vector (up to sign) that the nonlinear preconditioner (used as an inner iteration) provides. In a sense, standard NCG and LBFGS can be understood as using steepest descent (gradient directions) as the inner iteration. In left-nonlinearly preconditioned NCG and LBFGS, we replace the steepest-descent inner iteration by the nonlinear preconditioner iteration (e.g., ALS). Another way to understand this is that standard NCG and LBFGS are iterative methods that work on solving $\nabla f(x) = 0$, i.e., their update formulas drive the gradient to zero (equivalent to solving $Ax = b$ in the linear case). Nonlinearly preconditioned NCG or LBFGS work on solving the nonlinearly preconditioned equation $\nabla_P f(x) = 0$ (which is really the fixed-point equation $\mathrm{ALS}(x) = x$), i.e., their update formulas drive the nonlinearly preconditioned gradient to zero (equivalent to solving $P^{-1}(A x - b) = 0$ in the linear case). Further details and numerical results can be found in a recent paper establishing this general formalism of nonlinear left preconditioning for optimization[sterck2018nonlinearly]. In this formalism, we can say that the nonlinear preconditioner is used as an inner iteration for the outer-iteration NCG and LBFGS methods, or, alternatively, that NCG and LBFGS are used as nonlinear convergence accelerators for the inner iteration (e.g., ALS).
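A minimal numerical illustration of this idea follows, with one sweep of exact coordinate minimization on a small convex quadratic standing in for ALS. The names and the simple unit-step outer update are illustrative sketches, not the paper's NCG/LBFGS formulas:

```python
# Nonlinearly preconditioned gradient on f(x) = 0.5 x'Ax - b'x: the update
# vector of one exact-coordinate-minimization sweep (the ALS stand-in)
# defines g_P(x) = x - sweep(x); driving g_P to zero solves the fixed-point
# equation sweep(x) = x.

A = [[4.0, 1.0], [1.0, 3.0]]   # SPD matrix (illustrative)
b = [1.0, 2.0]

def sweep(x):
    """One Gauss-Seidel-like sweep: minimize f exactly in each coordinate."""
    x = list(x)
    for i in range(len(x)):
        x[i] = (b[i] - sum(A[i][j] * x[j]
                           for j in range(len(x)) if j != i)) / A[i][i]
    return x

x = [10.0, -10.0]
for _ in range(60):
    g_p = [xi - si for xi, si in zip(x, sweep(x))]   # preconditioned gradient
    x = [xi - gi for xi, gi in zip(x, g_p)]          # unit-step outer update

# the preconditioned residual has been driven to (numerical) zero
assert max(abs(xi - si) for xi, si in zip(x, sweep(x))) < 1e-10
```

With a unit step the outer update reduces to the inner iteration itself; NCG or LBFGS would instead feed the direction $-g_P$ into their own search-direction and step-length machinery.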

This nonlinear left preconditioning can in principle be applied to any nonlinear optimization method (not just NCG or LBFGS), and it can clearly also be applied to Nesterov’s accelerated gradient descent method as in Eq. (9), when, for example, using ALS as the nonlinear preconditioner for CP tensor decomposition. In fact, in the case of Nesterov’s method, the nonlinear left preconditioning procedure is very simple: it basically amounts to replacing the gradient step in Eq. (3) by the step provided by ALS, and it directly results in the nonlinearly preconditioned formulas of Eq. (10) or Eq. (11), so the Nesterov acceleration formula (13) for ALS can also be interpreted as using ALS as a nonlinear preconditioner for Nesterov’s accelerated gradient formula (9).

#### 3.3.2 Nonlinear Right Preconditioning – Transformation Preconditioning for Optimization

Similarly, nonlinear preconditioning techniques can be derived for optimization that rely on a nonlinear change of variables[sterck2018nonlinearly], inspired by right preconditioning in the linear case. In the so-called *transformation preconditioning* approach[sterck2018nonlinearly] (which is a form of right preconditioning), nonlinearly preconditioned versions of NCG and LBFGS are derived as follows. One first considers minimization of $f(x)$ using a linear change of variables $x = L y$, and then applies standard NCG and LBFGS (in the $y$ variable) to minimize $\hat{f}(y) = f(L y)$. Transforming the resulting update formulas back to the original variables using

(41)  $y = L^{-1} x$

introduces the linear preconditioning matrix

(42)  $P = L \, L^T$

in expressions like the scalar product of gradients:

(43)  $\nabla \hat{f}(y_i)^T \, \nabla \hat{f}(y_j) = \nabla f(x_i)^T \, P \, \nabla f(x_j).$

The resulting linearly preconditioned NCG and LBFGS methods are well-known[hager2006survey, luenberger1984linear]. However, in recent work this approach was extended to nonlinear preconditioning by replacing the linearly preconditioned gradient in expressions like (43) by the nonlinearly preconditioned gradient direction given by the update vector that is provided by the nonlinear preconditioning iteration (39), as in (38). This gives different nonlinearly preconditioned iterations for NCG and LBFGS than the nonlinear left preconditioning approach discussed above. An attractive feature of this nonlinear transformation preconditioning is that it reduces to well-known linear preconditioning techniques for NCG and LBFGS in the case of a linear change of variables[hager2006survey, luenberger1984linear]. In practice, both left and transformation nonlinear preconditioning may lead to dramatic improvements in convergence for NCG and LBFGS[sterck2018nonlinearly]. In this paper, we use the transformation preconditioning versions of NCG-ALS and LBFGS-ALS in the numerical results. In the case of Nesterov’s method, which is a simple iteration that does not involve scalar products of gradients, the two procedures of nonlinear left preconditioning and transformation preconditioning give the same result, i.e., both approaches lead to the nonlinearly preconditioned formulas of Eq. (10) or Eq. (11).

More broadly, nonlinear preconditioning has a long but not widely known history in computational science, and is currently an active area of ongoing research[brune2015composing], including in the optimization context[sterck2018nonlinearly]. In our numerical results, we compare Nesterov acceleration of ALS (i.e., Nesterov’s method with ALS as nonlinear preconditioner) with NCG and LBFGS acceleration of ALS (i.e., NCG and LBFGS with ALS as nonlinear preconditioner). Numerical results will show that Nesterov-ALS is often competitive with the more sophisticated LBFGS-ALS. In addition, Nesterov acceleration is attractive because it is much easier to implement than LBFGS acceleration.

## 4 Numerical Tests

We evaluated our Nesterov-ALS algorithms with various choices of momentum weights and restart mechanisms on a set of synthetic CP test problems that have been carefully designed and used in many papers, and on three real-world datasets of different sizes, originating from different applications. All numerical tests were performed in Matlab, using the Tensor Toolbox[TTB_Software] and the Poblano Toolbox for optimization[SAND2010-1422].

### 4.1 Naming Convention for Nesterov-ALS Schemes.

We use the following naming conventions for the restarting strategies and momentum weight strategies defined in Section 2. For the restarted Nesterov-ALS schemes, we append to Nesterov-ALS the abbreviations in Table 1 to denote the restarting scheme used and the choice for the momentum weight $\beta_k$.

For example, Nesterov-ALS-RF-SG means using restarting based on the function value (RF) and a momentum step based on the gradient ratio (SG). For most tests we do not use a delay in the restart condition of Eq. 25 or Eq. 27, and the threshold parameter in Eq. 25 or Eq. 27 is usually set to 1. Appending D or E to the name indicates, respectively, that a delay is used and that a modified threshold parameter is used. The line search Nesterov-ALS scheme we compare with is denoted Nesterov-ALS-LS.

Abbreviation | Explanation |
---|---|
RF | function restarting as in Eq. 25 |
RG | gradient restarting as in Eq. 27 |
RX | speed restarting as in Eq. 26 |
SN | Nesterov step as in Eq. 5 |
SG | gradient ratio step as in Eq. 24 |
S1 | constant step 1 |
D | delay in restart condition |
E | modified threshold parameter in restart condition |
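The restarted Nesterov-ALS loop abbreviated above can be sketched as follows, in a hedged form: one sweep of exact coordinate minimization on a small convex quadratic stands in for ALS, the function restart (RF) and capped gradient-ratio weight (SG) are simplified versions of the rules in Section 2, and the delay/threshold options are omitted.

```python
import math

# Simplified Nesterov-ALS-RF-SG loop on f(x) = 0.5 x'Ax - b'x (illustrative).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

f = lambda x: (0.5 * sum(x[i] * A[i][j] * x[j]
                         for i in range(2) for j in range(2))
               - sum(bi * xi for bi, xi in zip(b, x)))
grad = lambda x: [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
gnorm = lambda x: math.sqrt(sum(g * g for g in grad(x)))

def inner(x):                          # ALS stand-in: exact coordinate minimization
    x = list(x)
    for i in range(2):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]
    return x

x = [10.0, -10.0]
y_prev, f_prev, g_prev = x, f(x), gnorm(x)
for _ in range(60):
    y = inner(x)                       # inner (preconditioner) step
    if f(y) > f_prev or g_prev == 0.0:
        x = y                          # RF: function restart, discard momentum
    else:
        beta = min(1.0, gnorm(y) / g_prev)   # SG: gradient-ratio weight, capped
        x = [yi + beta * (yi - ypi) for yi, ypi in zip(y, y_prev)]
    y_prev, f_prev, g_prev = y, f(y), gnorm(y)

assert gnorm(x) < 1e-8                 # converged to the minimizer of f
```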

### 4.2 Baseline Algorithms.

We compare our proposed Nesterov-ALS schemes with the recently proposed nonlinear acceleration methods for ALS using NGMRES[sterck2012nonlinear], NCG[sterck2015nonlinearly], and LBFGS[sterck2018nonlinearly]. These methods are denoted in the result figures as NGMRES-ALS, NCG-ALS, and LBFGS-ALS, respectively.

### 4.3 Synthetic Test Problems and Results.

We use the synthetic tensor test problems considered in[acar2011scalable] and used in many papers as a standard benchmark for CP decomposition[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly]. As described in more detail elsewhere[sterck2012nonlinear], we generate six classes of random three-way tensors with highly collinear columns in the factor matrices. We add two types of random noise to the tensors generated from the factor matrices (homoscedastic and heteroscedastic noise[acar2011scalable, sterck2012nonlinear]), and then compute low-rank CP decompositions of the resulting tensors.

Due to the high collinearity, the problems are ill-conditioned and ALS is slow to converge[acar2011scalable]. All tensors have equal size in the three tensor dimensions. The six classes differ in their choice of tensor size ($s$), decomposition rank ($R$), and noise parameters $l_1$ (homoscedastic) and $l_2$ (heteroscedastic), in combinations that are specified in Table 2 of Appendix A.

To compare how various methods perform on these synthetic problems, we generate 10 random tensor instances with an associated random initial guess for each of the six problem classes, and run each method on each of the 60 test problems with a fixed convergence tolerance. We then present so-called $\tau$-plot performance profiles[sterck2012nonlinear], as explained below, to compare the relative performance of the methods over the test problem set.

*Optimal restarted Nesterov-ALS method*.
Our extensive experiments on both the synthetic and real-world datasets (as
indicated in tests below and in Appendix B) suggest that the optimal
restarted Nesterov-ALS method is the one using function restarting (RF) and
gradient ratio momentum steps (SG), i.e., Nesterov-ALS-RF-SG.
As a comparison, the study of Nguyen et al.[nguyen2018accelerated]
suggests that gradient restarting and gradient
ratio momentum weights work well for accelerating gradient descent
in the context of nonlinear system solving.

Fig. 2 shows the performance of our optimal Nesterov-ALS-RF-SG method on the synthetic test problems, with an ablation analysis that compares it with those variants obtained by varying one hyper-parameter of Nesterov-ALS-RF-SG at a time. In this $\tau$-plot, we display, for each method, the fraction of the 60 problem runs for which the method execution time is within a factor $\tau$ of the fastest method for that problem. For example, for $\tau = 1$, the plot shows the fraction of the 60 problems for which each method is the fastest. For $\tau = 2$, the plot shows, for each method, the fraction of the 60 problems for which the method reaches the convergence tolerance within a factor of two of the time of the fastest method for that problem, and so on. As such, the area between curves is a measure of the relative performance of the methods, with the curves at the top performing the best.
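The performance-profile construction just described amounts to a small computation over a table of runtimes. A minimal sketch (function and variable names are illustrative):

```python
# times[m][p] = runtime of method m on problem p. For a given factor tau,
# report for each method the fraction of problems it solves within a factor
# tau of the fastest method on that problem.

def performance_profile(times, tau):
    n_problems = len(times[0])
    best = [min(times[m][p] for m in range(len(times)))
            for p in range(n_problems)]
    return [sum(1 for p in range(n_problems) if times[m][p] <= tau * best[p])
            / n_problems for m in range(len(times))]

times = [[1.0, 2.0, 4.0],    # method 0
         [2.0, 1.0, 1.0]]    # method 1
assert performance_profile(times, 1.0) == [1/3, 2/3]   # fraction "fastest"
assert performance_profile(times, 2.0) == [2/3, 1.0]   # within factor 2
```

Plotting these fractions against increasing factors produces the curves shown in the figures, with curves toward the top indicating better relative performance.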

We can see that several variants have comparable performance to Nesterov-ALS-RF-SG, so the performance is not very sensitive to the choice of restart mechanism and momentum weight. For these tests, changing the delay parameter has the least effect on the performance. Interestingly, this is followed by changing the momentum weight to the constant 1. It thus appears that, for our non-convex problem, the simple choice of $\beta_k = 1$ combined with a suitable restart mechanism leads to a highly performant acceleration method. This is followed by changing function restarting to gradient restarting and speed restarting, respectively. More detailed numerical results comparing Nesterov-ALS-RF-SG with a broader variation of restarted Nesterov-ALS methods are shown in Appendix B, further confirming that Nesterov-ALS-RF-SG generally performs best among the family of restarted Nesterov-ALS methods we have considered for the synthetic test problems.

Fig. 3 compares Nesterov-ALS-RF-SG with the line search version Nesterov-ALS-LS (equivalent or similar to existing line search methods to accelerate ALS[harshman1970foundations, rajih2008enhanced, chen2011new, sorber2016exact]), and with several other existing accelerated ALS methods, namely, NGMRES-ALS, NCG-ALS, and LBFGS-ALS. For high accuracy (top panel), Nesterov-ALS-RF-SG performs similarly to the best performing of the existing methods we compare with, LBFGS-ALS[sterck2018nonlinearly]. It performs substantially better than Nesterov-ALS-LS (it avoids the expensive line searches). Nevertheless, Nesterov-ALS-LS is competitive with the existing NGMRES-ALS[sterck2012nonlinear], and superior to NCG-ALS[sterck2015nonlinearly].

For low accuracy (bottom panel), Nesterov-ALS-RF-SG and Nesterov-ALS-LS still perform very well. ALS is now more competitive, which is not unexpected, since ALS is often efficient at reducing the initial error quickly, but then may converge slowly later on for difficult problems.

### 4.4 The Enron Dataset and Results.

The Enron dataset is a subset of the corporate email communications that were released to the public as part of the 2002 Federal Energy Regulatory Commission (FERC) investigations following the Enron bankruptcy. After various steps of pre-processing[chi2012tensors], a sender × receiver × month tensor of size 105 × 105 × 28 was obtained. We perform rank-10 CP decompositions for Enron. Fig. 4 shows gradient norm convergence for one typical test run, and a $\tau$-plot for 60 runs with different random initial guesses, for a tight convergence tolerance (high accuracy, middle panel) and a looser tolerance (lower accuracy, bottom panel). For this well-conditioned problem, ALS converges fast and does not need acceleration. In fact, the acceleration overhead makes plain ALS faster than any of the accelerated methods. This is consistent with previously reported results[acar2011scalable, sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly] for well-conditioned problems.

### 4.5 The Claus Dataset and Results.

The claus dataset is a 5 × 201 × 61 tensor consisting of fluorescence measurements of 5 samples containing 3 amino acids, taken for 201 emission wavelengths and 61 excitation wavelengths. Each amino acid corresponds to a rank-one component[andersen2003practical]. We perform a rank-3 CP decomposition for claus. Fig. 5 shows gradient norm convergence for one test run, and a $\tau$-plot for 60 runs with different random initial guesses, for a tight convergence tolerance (high accuracy, middle panel) and a looser tolerance (low accuracy, bottom panel). For this medium-conditioned problem, substantial acceleration of ALS can be obtained if high accuracy is desired, and Nesterov-ALS-RF-SG performs as well as the best methods we compare with, but it is much easier to implement. For low accuracy, ALS is more competitive, but Nesterov-ALS-RF-SG still outperforms it.

### 4.6 The Gas3 Dataset and Results.

Gas3 is relatively large and has multiway structure. It is a 71 × 1000 × 900 tensor consisting of readings from 71 chemical sensors used for tracking hazardous gases over 1000 time steps[vergara2013performance]. There were three gases, and 300 experiments were performed for each gas, varying fan speed and room temperature. We perform a rank-5 CP decomposition for Gas3.

Fig. 6 shows gradient norm convergence for one typical test run, and a $\tau$-plot for 20 runs with different random initial guesses, for a tight convergence tolerance (high accuracy, middle panel) and a looser tolerance (low accuracy, bottom panel).

For this highly ill-conditioned problem, ALS converges slowly, and NGMRES-ALS, NCG-ALS and LBFGS-ALS behave erratically. For high accuracy, our newly proposed Nesterov-ALS-RF-SG very substantially outperforms all other methods (not only for the convergence profile shown in the top panel, but for the large majority of the 20 tests with random initial guesses). Nesterov-ALS-RF-SG performs much more robustly for this highly ill-conditioned problem than any of the other accelerated methods, and reaches high accuracy much faster than any other method. We were initially surprised that Nesterov-ALS-RF-SG performs so much better than the other accelerated ALS methods we compare with, and have so far not found a clear indication why this is the case. One possible explanation is that the line search employed in NGMRES-ALS, NCG-ALS and LBFGS-ALS may suffer from robustness issues due to the ill-conditioning of the problem. For low accuracy, ALS is clearly the fastest: it is efficient at reducing the initial error quickly, but converges slowly later on for this difficult problem.

The other tests in this paper indicate that our proposed Nesterov-ALS-RF-SG acceleration method is competitive with leading existing acceleration methods for ALS, and this result additionally gives some initial indication that Nesterov-ALS-RF-SG is surprisingly robust for highly ill-conditioned problems. This will need to be investigated further as these methods, which will be made available as part of the Poblano toolbox for optimization, are applied by us and others to other demanding CP decomposition problems.

### 4.7 Discussion on Per-Iteration Computational Cost

It is important to compare the computational cost per iteration of our Nesterov-accelerated ALS methods with plain ALS iterations. For simplicity, we discuss the case of dense order-3 tensors of size $N \times N \times N$ with decomposition rank $R \ll N$. In this regime, the dominant cost in each ALS step is the formation of the three MTTKRPs (matricized-tensor times Khatri-Rao products) on the left-hand side in Eqs. (20)-(22), which takes approximately $\mathcal{O}(R N^3)$ arithmetic operations. At the end of an ALS iteration, the function value can be computed at virtually no cost by noting that

$$f = \tfrac12 \,\|\mathcal{T}\|^2 \;-\; \bigl\langle \mathbf{T}_{(1)} (\mathbf{C} \odot \mathbf{B}),\, \mathbf{A} \bigr\rangle \;+\; \tfrac12 \,\bigl\| [\![ \mathbf{A}, \mathbf{B}, \mathbf{C} ]\!] \bigr\|^2,$$

where $\|\mathcal{T}\|^2$ is pre-computed, the MTTKRP $\mathbf{T}_{(1)} (\mathbf{C} \odot \mathbf{B})$ is re-used, and $\langle \cdot, \cdot \rangle$ is the dot product between two matrices (i.e., the sum of the entries of the element-wise multiplication of the matrices). The third term equals $\tfrac12 \, \mathbf{1}^T \bigl( \mathbf{A}^T\mathbf{A} \ast \mathbf{B}^T\mathbf{B} \ast \mathbf{C}^T\mathbf{C} \bigr) \mathbf{1}$, and, due to the structure of the rank-1 terms, it can be computed in $\mathcal{O}(N R^2)$ operations, which is negligible compared to the cost of the MTTKRPs in ALS. In fact, this efficient computation is the default algorithm for computing the function value in the ALS implementation of the Tensor Toolbox[TTB_Software].
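The cheap evaluation described above can be checked on a tiny dense example: the expanded form $\tfrac12\|\mathcal{T}\|^2 - \langle \text{MTTKRP}, \mathbf{A}\rangle + \tfrac12\|M\|^2$, with $\|M\|^2$ computed from $R \times R$ Gram matrices of the factors, must match the direct evaluation of $\tfrac12\|\mathcal{T} - M\|^2$. A pure-Python sketch (sizes and factor values are arbitrary illustrations):

```python
import random

random.seed(0)
N, R = 3, 2
A = [[random.random() for _ in range(R)] for _ in range(N)]
B = [[random.random() for _ in range(R)] for _ in range(N)]
C = [[random.random() for _ in range(R)] for _ in range(N)]
T = [[[random.random() for _ in range(N)] for _ in range(N)] for _ in range(N)]

model = lambda i, j, k: sum(A[i][r] * B[j][r] * C[k][r] for r in range(R))

# direct evaluation: 0.5 * || T - M ||^2
direct = 0.5 * sum((T[i][j][k] - model(i, j, k)) ** 2
                   for i in range(N) for j in range(N) for k in range(N))

# expanded evaluation
normT2 = sum(T[i][j][k] ** 2
             for i in range(N) for j in range(N) for k in range(N))
# mode-1 MTTKRP: entry (i, r) = sum_{j,k} T[i][j][k] * B[j][r] * C[k][r]
mttkrp = [[sum(T[i][j][k] * B[j][r] * C[k][r]
               for j in range(N) for k in range(N))
           for r in range(R)] for i in range(N)]
inner = sum(mttkrp[i][r] * A[i][r] for i in range(N) for r in range(R))
gram = lambda F: [[sum(F[i][r] * F[i][s] for i in range(N))
                   for s in range(R)] for r in range(R)]
GA, GB, GC = gram(A), gram(B), gram(C)
normM2 = sum(GA[r][s] * GB[r][s] * GC[r][s]
             for r in range(R) for s in range(R))
expanded = 0.5 * normT2 - inner + 0.5 * normM2

assert abs(direct - expanded) < 1e-10
```

The Gram-matrix route evaluates the model norm without ever forming the model tensor, which is what makes the function value essentially free once the MTTKRP is available.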

Computing the three MTTKRPs is also the dominant cost when calculating the gradient of $f$ according to Eqs. (17)-(19), and following a gradient computation the function value comes virtually for free, so the dominant cost of a combined function and gradient evaluation is also $\mathcal{O}(R N^3)$, the same cost as an ALS iteration.

In our Nesterov-accelerated ALS methods, we use the gradient and/or function value to compute the restart condition and/or the step length, so we perform one function + gradient evaluation per iteration in the implementation of all our methods, in addition to the cost of one ALS step per iteration. For this reason, each of our Nesterov-accelerated ALS iterations is about twice as expensive as a regular ALS iteration. Still, the numerical results in the previous sections show clearly that the Nesterov acceleration tends to reduce the number of iterations required for (accurate) convergence by much more than a factor of 2, making the accelerated methods clear winners over ALS (despite the doubled cost per iteration), for difficult problems or when accurate solutions are required.

Note that it is possible to avoid the doubling of the per-iteration cost by considering acceleration variants that only rely on function values for the restart mechanism, and determine the step length without using gradient information. In that respect, our accelerated method with function restart and constant momentum weight one is attractive. Note also that the restarted acceleration methods in Ang and Gillis[ang2019accelerating] for Nonnegative Matrix Factorization use more elaborate strategies for determining step lengths that also do not require gradient information.

Finally, it is interesting to observe that, due to the line searches, the per-iteration cost of our NGMRES-ALS, Anderson-ALS and LBFGS-ALS methods is typically 4 to 5 times the cost of 1 ALS iteration, and the per-iteration cost of NCG-ALS is typically 6 to 7 times the cost of 1 ALS iteration. Still, some of these methods achieve very large reductions in the number of iterations required for convergence, and are often among the most efficient, in particular, LBFGS-ALS and NGMRES-ALS/Anderson-ALS.

### 4.8 Further Comparison with Line Search Methods

It is interesting to further compare numerical results with the existing line search methods that were discussed in Section 3.2. Figure 7 compares convergence for synthetic test problem 2 from Table 2, for several methods. The top panel shows the gradient norm as a function of time, and the bottom panel the convergence of the objective, $f$, to its minimum value, $f^\ast$, as a function of time. We compute $f^\ast$ as the smallest value of $f$ obtained by any algorithm after performing sufficiently many iterations to make $f$ converge to machine accuracy.

One of our proposed Nesterov-accelerated methods with restart, Nesterov-ALS-RG-SG (green), is compared with several existing line search methods.

We first compare with NGMRES-ALS with window size 1, NGMRES-ALS-w1 (blue-dashed), which is equivalent to Harshman’s extrapolation with line search[harshman1970foundations, rajih2008enhanced]. We use the Moré-Thuente cubic line search rather than an exact line search[rajih2008enhanced]; the exact line search is known to be expensive since it requires many function evaluations[acar2011scalable, sterck2012nonlinear], whereas the cubic line search typically requires only about 2 or 3 function evaluations[acar2011scalable].

We also compare with Nesterov extrapolation with line search, Nesterov-ALS-LS (green-dashed), which is equivalent to Chen et al.’s 1-step extrapolation with line search[chen2011new], except that we do not use an exact line search. Finally, we compare with Anderson acceleration with window size 2, Anderson-ALS-w2 (red-dashed), which uses the same form of extrapolation as Chen et al.’s 2-step extrapolation[chen2011new], but determines optimal expansion coefficients by solving a least-squares problem in each step, whereas Chen et al.’s method makes an ad-hoc choice for the expansion coefficients that is the same in all iterations. As such, we can use Anderson-ALS-w2 as an (optimistic) proxy for estimating the performance of Chen et al.’s 2-step extrapolation[chen2011new].

The results in Figure 7 show that our proposed Nesterov-accelerated method with restart, Nesterov-ALS-RG-SG (green), converges much faster than the three existing line search methods (dashed). Also, compared to Harshman’s extrapolation with line search (blue-dashed, NGMRES-ALS-w1), the result for NGMRES-ALS-w20 (blue) shows that a window size for NGMRES greater than one leads to much faster convergence. Similarly, compared to Chen et al.’s 2-step extrapolation with line search (similar to Anderson-ALS-w2, red-dashed), the result for Anderson-ALS-w20 (red) shows that a window size for Anderson acceleration greater than 2 leads to much faster convergence. These convergence plots are for a specific example, but they are typical for the relative performance of the methods.

Note, finally, that the problem in Figure 7 is an ill-conditioned problem, so $f - f^\ast$ reaches machine accuracy before the gradient norm does.

## 5 Conclusion

We have proposed Nesterov-ALS methods with effective choices for momentum weights and restart mechanisms that are simple and easy to implement compared to several existing nonlinearly accelerated ALS methods, such as NGMRES-ALS, NCG-ALS, and LBFGS-ALS[sterck2012nonlinear, sterck2015nonlinearly, sterck2018nonlinearly]. The optimal variant, using function restarting and gradient ratio momentum weights, is competitive with or superior to stand-alone ALS, NGMRES-ALS, NCG-ALS, LBFGS-ALS, and line search ALS[harshman1970foundations, rajih2008enhanced, chen2011new, sorber2016exact].

Simple nonlinear iterative optimization methods like ALS and coordinate descent (CD) are widely used in a variety of application domains. There is clear potential for extending our approach to accelerating such simple optimization methods for other non-convex problems. A specific example is Tucker tensor decomposition[kolda2009tensor]. NCG, NGMRES and LBFGS acceleration have been applied to Tucker decomposition[Hans_Tucker_decomp, sterck2018nonlinearly], using a manifold approach to maintain the Tucker orthogonality constraints, and this approach can directly be extended to the Nesterov acceleration methods discussed in this paper.

More generally, we have formulated a Nesterov-type acceleration approach that can effectively accelerate optimization algorithms different from gradient descent (such as ALS) and for non-convex problems (such as CP tensor decomposition), using simple choices for momentum weights (as simple as setting them equal to one) and suitable restart mechanisms. The proposed methods are competitive in terms of performance with any existing acceleration methods for ALS and are very simple to implement, and initial tests on a challenging real-world problem indicate desirable robustness properties.

Matlab code implementing the proposed Nesterov acceleration methods is freely available at https://github.com/hansdesterck/nonlinear-preconditioning-for-optimization. The code includes implementations of all Nesterov-ALS variants, as well as NCG-ALS, NGMRES-ALS, LBFGS-ALS and Anderson-ALS. The implementations are generic in that any suitable nonlinear preconditioner can be provided, not just ALS, and the nonlinearly preconditioned methods can be applied to any suitable optimization problem, not just canonical tensor decomposition.

## Acknowledgments

The work of H. D. S. was partially supported by an NSERC Discovery Grant.

## References

## Appendix A Parameters for synthetic CP test problems

Table 2 lists the parameters for the standard ill-conditioned synthetic test problems used in the paper[acar2011scalable]. The specific choices of parameters for the six classes in Table 2 correspond to test problems 7-12 in De Sterck[sterck2012nonlinear]. All tensors have equal size $s$ in the three tensor dimensions, and have high collinearity $c = 0.9$. The six classes differ in their choice of tensor size ($s$), decomposition rank ($R$), and noise parameters $l_1$ and $l_2$.

problem | $s$ | $c$ | $R$ | $l_1$ | $l_2$ |
---|---|---|---|---|---|
1 | 20 | 0.9 | 3 | 0 | 0 |
2 | 20 | 0.9 | 5 | 1 | 1 |
3 | 50 | 0.9 | 3 | 0 | 0 |
4 | 50 | 0.9 | 5 | 1 | 1 |
5 | 100 | 0.9 | 3 | 0 | 0 |
6 | 100 | 0.9 | 5 | 1 | 1 |

## Appendix B Detailed comparisons for different restarting strategies

Figs. 8, 9 and 10 show $\tau$-plots for variants of the restarted Nesterov-ALS schemes, for the case of function restart (RF, Fig. 8), gradient restart (RG, Fig. 9), and speed restart (RX, Fig. 10), applied to the synthetic test problems.

For each of the restart mechanisms, several of the restarted Nesterov-ALS variants typically outperform ALS, NCG-ALS[sterck2015nonlinearly] and NGMRES-ALS[sterck2012nonlinear].

Several of the best-performing restarted Nesterov-ALS variants are also competitive with the best existing nonlinear acceleration method for ALS we compare with, LBFGS-ALS[sterck2018nonlinearly], and they are much easier to implement.

Among the restart mechanisms tested, function restart (Fig. 8) substantially outperforms gradient restart (Fig. 9), and, in particular, speed restart (Fig. 10).

The $\tau$-plots confirm that Nesterov-ALS-RF-SG, using function restarting and gradient ratio momentum weights, consistently performs as one of the best methods, making it our recommended choice for ALS acceleration.