Tensorlab 3.0 March 2016
In these release notes, all new features and updates are discussed in detail. Structured tensors, algorithms for the decomposition in multilinear rank-$(L_r,L_r,1)$ terms and tensorization are completely new. For the other topics, it is indicated which specific commands are new and which have been updated. Each topic consists of a short overview and a number of key items.
Structured tensors
Most optimization-based algorithms are able to exploit the efficient representation of structured tensors. To accommodate this, a large number of new methods have been implemented. This section gives an overview of the different formats supported and of the new algorithms.
Supported formats
CPD, BTD, LMLRA, TT, Hankel and Loewner.
The following formats are supported (a short construction sketch follows the list):
- Canonical polyadic decomposition: a cell of factor matrices.
- Block term decomposition: a cell of cells with factor matrices and core tensors.
- Low multilinear rank approximation: a cell containing a cell of factor matrices, and a core tensor.
- Tensor Trains: a cell containing a matrix, a number of third-order tensors, and a matrix.
- Hankel: a special struct.
- Loewner: a special struct.
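As an illustration, a minimal construction sketch of a few of these formats (the sizes below are arbitrary; Hankel and Loewner structs are produced by the tensorization methods discussed later):

    U  = {randn(10,4), randn(11,4), randn(12,4)};                 % CPD: factor matrices, rank 4
    V  = {{randn(10,3), randn(11,4), randn(12,5)}, randn(3,4,5)}; % LMLRA: factors and core
    tt = {randn(10,3), randn(3,11,4), randn(4,12)};               % TT: matrix, 3rd-order tensors, matrix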
Tensorlab can perform efficient computations on tensors given in the TT format, but does not offer algorithms to compute this decomposition; for that, we refer to specialized toolboxes. The ttgen command can be used to generate a full tensor from its TT representation.

Core computational routines
frob, inprod, mtkrprod and mtkronprod.
The routines frob, inprod, mtkrprod and mtkronprod are used extensively in the evaluation of objective functions and gradients for tensor decompositions. They now have specialized implementations that exploit each of the structured tensor formats.
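A brief usage sketch (a CPD representation stands in for any supported format; the data are hypothetical):

    U   = cpd_rnd([10 11 12], 5); % efficient CPD representation
    nrm = frob(U);                % Frobenius norm, computed without expanding U
    ip  = inprod(U, U);           % inner product of two (structured) tensors
    M   = mtkrprod(U, U, 1);      % matricized tensor Khatri-Rao product in mode 1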
Structure detection and validation
getstructure, isvalidtensor and detectstructure.
getstructure determines the type of a structured tensor. isvalidtensor verifies whether the efficient representation of a tensor is correct, e.g., whether all factor matrices in a CPD representation have $R$ columns. detectstructure detects (conjugate) symmetry and Hankel structure in a dense tensor. In the case of Hankel structure, the efficient representation is returned as well.
Extended ful method
Expand incomplete, sparse and structured tensors and create subtensors.
ful creates the full tensor from a given dense, incomplete, sparse or structured representation, or a subtensor if linear indices or subscripts are given. For example:

    U  = cpd_rnd([10,11,12], 5);
    T1 = ful(U);
    T2 = ful(U, 1:10);        % = T1(1:10)
    T3 = ful(U, 2:3, ':', 6); % = T1(2:3,:,6)
Auxiliary functions
getsize and getorder.
The functions getsize and getorder determine the size and order of a tensor regardless of the tensor format.
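For example (a hypothetical CPD representation):

    U = cpd_rnd([10 11 12], 5);
    getsize(U)      % [10 11 12]
    getorder(U)     % 3
    getstructure(U) % identifies the format, here a CPD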
Canonical polyadic decomposition
The high-level algorithm cpd
has an improved initialization and computation
strategy. A new large-scale algorithm cpd_rbs
, an improved algorithm for
incomplete tensors, and a method for computing the Cramér-Rao bound
(cpd_crb
) are introduced. Structured tensors are supported in most
optimization-based algorithms and a number of new options are added.
Improved high-level strategy for cpd
New compression strategy, support for structured tensors and complex decompositions.
- mlsvd_rsi is the new default compression method for dense, sparse and small structured tensors. For incomplete tensors and large structured tensors the previous default, lmlra_aca, is used. A structured tensor is small if its number of entries is smaller than the ExpandLimit option.
- The Complex option allows the user to compute a CPD over the complex field, even if the given tensor has real entries.
- Hankel structure is detected and exploited automatically if a dense tensor is given. The detection can be disabled using the ExploitStructure option.
- The frequently used options Display, MaxIter, TolX, TolFun, CGMaxIter and UseCPDI are passed automatically to the algorithm and refinement step (see the example below).
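A hedged usage sketch (the tensor and rank are hypothetical; only options documented above are shown):

    T    = ful(cpd_rnd([15 16 17], 5)); % hypothetical rank-5 tensor
    Uhat = cpd(T, 5, 'Display', 10, 'MaxIter', 200, 'TolFun', 1e-12);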
cpd_rbs
Large-scale algorithm using randomized block sampling.
In each iteration, the cpd_rbs algorithm randomly samples a small block from a large-scale tensor to update the current approximation. This way, large tensors that do not fit into the main memory can still be decomposed.
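A minimal, hedged sketch of a call (T and R are hypothetical; sampling options are omitted, see help cpd_rbs):

    U0   = cpd_rnd(getsize(T), R); % random initialization
    Uhat = cpd_rbs(T, U0);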
Improved performance for incomplete tensors
UseCPDI option for cpd_nls.
The cpd_nls algorithm now contains a specialized kernel for incomplete tensors which can be selected using the UseCPDI option. The kernel implements the exact Gramian of the Jacobian, instead of the approximation which is used by default. While the exact implementation is slower per iteration, it improves the convergence behavior, reducing the number of iterations needed to achieve convergence.
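A hedged sketch (T is an incomplete tensor and R an assumed rank):

    U0   = cpd_rnd(getsize(T), R);
    Uhat = cpd_nls(T, U0, 'UseCPDI', true); % exact Gramian kernel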
Cramér-Rao bound
Cramér-Rao bound for a CPD and additive i.i.d. Gaussian noise.
The cpd_crb function computes the diagonal of the Cramér-Rao bound matrix, which gives a lower bound on the variance of each variable. Two methods are available: a fast but less accurate block-diagonal approximation and an estimate based on the full Cramér-Rao bound matrix.
Structured tensor support
Most optimization-based algorithms, line and plane search methods exploit the efficient representation of structured tensors.
The optimization-based algorithms cpd_nls, cpd_minf, cpd_als and cpd_rbs, and the line and plane search functions cpd_els, cpd_aels and cpd_eps accept structured tensors as input. The efficient representation of structured tensors is exploited to reduce the computational cost. cpdres also accepts structured tensors, but it does not exploit the efficient representation. To compute frob(cpdres(T,U)), the new routine frobcpdres(T,U) can be used as it exploits the structure.
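For example, a relative error can be computed without expanding a structured tensor T (sketch):

    relerr = frobcpdres(T, U) / frob(T); % both calls exploit the structure of T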
cpd_gevd
Slices option added to use random slices.
cpd_gevd decomposes a tensor using a generalized eigenvalue decomposition of two of its slices. The slices can be generated using MLSVD compression (the default) or using random linear combinations.
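For example (mirroring the ll1_gevd syntax shown later; T and R are hypothetical):

    Uhat = cpd_gevd(T, R);                     % slices from MLSVD compression (default)
    Uhat = cpd_gevd(T, R, 'Slices', 'random'); % random linear combinations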
cpd_core
New computational kernel for optimization-based CPD algorithms.
The cpd_nls and cpd_minf algorithms are now wrappers for the cpd_core routine, and use the nonlinear least squares and the unconstrained nonlinear optimization, respectively.
Improved rankest
New options and defaults.
- The MaxRelErr option determines the first rank to try, instead of the MinRelErr option (see the sketch below). MinRelErr determines the last rank tried. MinR now has precedence if specified by the user.
- The Complex option allows rank estimation over the complex field, even if the tensor has real entries.
- The solver options TolFun and TolX are determined based on MinRelErr to allow rank estimation for noiseless tensors.
- Structure is neither detected nor exploited by default.
- A lower bound based on the multilinear singular values can also be computed for sparse tensors.
- rankest also works for structured tensors.
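A hedged usage sketch (T is hypothetical; the output form of rankest is assumed):

    R = rankest(T);                    % estimate the rank of T
    R = rankest(T, 'MaxRelErr', 1e-3); % first rank to try follows from MaxRelErr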
Extra options in cpd_rnd
Optimal scaling and fixed angles between vectors.
OptimalScaling = true scales each random rank-1 term such that the residual between the given tensor and the random initialization is minimal in the least squares sense. Angle fixes the angle between the factor vectors in one or more modes.
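For example (a hedged sketch; the angle value is arbitrary):

    U0 = cpd_rnd(T, 5, 'OptimalScaling', true); % scale terms to fit the given tensor T
    U0 = cpd_rnd([10 11 12], 5, 'Angle', pi/4); % fixed angle between factor vectors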
Various
Performance updates, bug fixes, new functions, etc.
- cpdgen now balances the Khatri-Rao product matrices to use less memory.
- cpdgen now also generates subtensors using linear or subscript indices:
    cpdgen(U, 1:3);
    cpdgen(U, 1, ':', 3:4);
- cpd_rnd no longer returns orthogonal factor matrices if a tensor is given instead of a size vector. Set the Orth option to true to obtain orthogonal factors.
- Fixed a convergence problem in cpd_nls for complex tensors with fewer than 100 variables. Due to this bug, the optimal decomposition could not be found for these small complex tensors.
- frobcpdres is a new function to compute the Frobenius norm of the residual between the tensor and a (C)PD. The function exploits the efficient representation of structured tensors.
Decomposition in multilinear rank-$(L_r,L_r,1)$ terms
A new family of algorithms for the decomposition in multilinear rank-$(L_r,L_r,1)$ terms has been developed. In addition to optimization-based algorithms, there are also algebraic methods (ll1_gevd), initialization routines (ll1_rnd), and a high-level decomposition algorithm that takes care of the initialization (ll1). As the decomposition in multilinear rank-$(L_r,L_r,1)$ terms is a special case of both the polyadic and the block term decomposition, both formats are supported.
ll1
A high-level algorithm.
ll1 computes a decomposition in multilinear rank-$(L_r,L_r,1)$ terms using one of the main optimization routines ll1_nls or ll1_minf. These algorithms are initialized automatically, e.g., with the result of ll1_gevd. To improve the computational performance, an extra compression step can be performed, and Hankel structure can be detected and exploited. For example,

    Ures = ll1(T, L);
The frequently used options Display, MaxIter, TolX, TolFun and CGMaxIter are passed automatically to the algorithm and refinement step.

ll1_minf and ll1_nls
The main optimization algorithms.
Both algorithms rely on the ll1_core routine to implement an unconstrained nonlinear optimization algorithm (ll1_minf) or a nonlinear least squares algorithm (ll1_nls). For example,

    Uinit = ll1_rnd(size_tens, L);
    Ures  = ll1_nls(T, Uinit, L);
Structured tensor support
All optimization-based routines support structured tensors.
The optimization-based algorithms ll1_nls and ll1_minf accept structured tensors as input. The efficient representation of structured tensors is exploited to reduce the computational cost. ll1res also accepts structured tensors, but it does not exploit the efficient representation. To compute frob(ll1res(T,U,L)), the new routine frobll1res(T,U,L) can be used as it exploits the structure.

ll1_gevd
An algebraic algorithm based on the generalized eigenvalue decomposition.
ll1_gevd decomposes a tensor using a generalized eigenvalue decomposition of two of its slices. The slices can be generated using MLSVD compression or using random linear combinations. For example,

    Ures = ll1_gevd(T, L);
    Ures = ll1_gevd(T, L, 'Slices', 'random');
Problem generation and error computation
ll1_rnd, ll1gen, ll1convert, ll1res, and frobll1res.
ll1_rnd creates pseudo-random factors. ll1gen expands factors to a full tensor or a subtensor using linear or subscript indices:

    % BTD format
    ll1gen(U, 1:3);
    ll1gen(U, 1, ':', 3:4);
    % CPD format
    ll1gen(U, L, 1:3);
    ll1gen(U, L, 1, ':', 3:4);

ll1convert converts the BTD format to the CPD format and vice versa. ll1res computes the residual between the tensor and an approximation. frobll1res computes the Frobenius norm of the residual and takes the efficient representation of structured tensors into account.
Multilinear singular value decomposition and low multilinear rank approximation
Two new algorithms for the MLSVD have been added: mlsvds
for sparse tensors
and mlsvd_rsi
for dense or sparse large-scale tensors. Extra options have been
added to existing MLSVD algorithms. An overview:
mlsvd
Added LargeScale and FullSVD options.
LargeScale = true uses the eigenvalue decomposition of $\mat{M}\mat{M}^{\T}$, in which $\mat{M}$ is a matricization of the tensor, to compute the factor matrices (instead of the SVD of $\mat{M}$). FullSVD = true uses the full SVD instead of the economy-size SVD.
mlsvds
MLSVD for sparse tensors.
mlsvds computes the truncated MLSVD of a sparse tensor by exploiting the sparsity.

mlsvd_rsi
Randomized MLSVD for dense and sparse tensors.
mlsvd_rsi uses randomized singular value decompositions in combination with subspace iteration to compute a (sequentially) truncated MLSVD. It is often as accurate as mlsvd and is (much) faster.

mlrank and mlrankest
Added support for sparse tensors.
The multilinear rank can now be computed (mlrank) or estimated (mlrankest) for sparse tensors.
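A hedged usage sketch (the truncated core size is an arbitrary choice):

    [U, S, sv] = mlsvd(T);              % factors, core and mode-n singular values
    [U, S]     = mlsvd_rsi(T, [5 5 5]); % truncated to a 5x5x5 core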
The LMLRA algorithms now support structured tensors, the resulting factors U and core tensor S are normalized by default, mlsvd_rsi is used as the default initialization method, and extra options have been added. A more detailed overview:
Updated lmlra algorithm
mlsvd_rsi is the new default initialization.
The mlsvd_rsi algorithm replaces the lmlra_aca algorithm as the default initialization method for dense, sparse and small structured tensors. Incomplete and larger structured tensors use lmlra_aca as the default initialization. A structured tensor is small if its number of entries is smaller than the ExpandLimit option. Structured tensors are fully supported by lmlra.
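For example, a minimal, hedged call (the core size [3 3 3] is an arbitrary choice):

    [U, S] = lmlra(T, [3 3 3]); % factors U and core S of a rank-(3,3,3) approximation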
lmlra_core
New computational kernel for optimization-based LMLRA algorithms.
The lmlra_nls and lmlra_minf algorithms are now wrappers for the lmlra_core routine, and use the nonlinear least squares and the unconstrained nonlinear optimization, respectively.
Normalization of results
All LMLRA algorithms now normalize the results.
The output factor matrices U and core tensor S of lmlra, lmlra_nls, lmlra_minf and lmlra_rnd are now normalized. This means the factor matrices U{n} have orthonormal columns, S is all-orthogonal and, for an $N$th-order tensor, the $(N-1)$th-order slices are sorted in order of non-increasing Frobenius norm.
lmlra_aca
Added support for structured tensors and FillCore option.
The lmlra_aca algorithm now accepts structured tensors as input. The FillCore option can be used to make sure that the resulting core tensor has the requested size, even if the requested size is larger than the multilinear rank of the tensor.
Structured tensor support
All optimization-based routines support structured tensors.
The optimization-based algorithms lmlra_nls, lmlra_minf and lmlra_aca accept structured tensors as input. The efficient representation of the structured tensors is exploited to reduce the computational cost. lmlrares also accepts structured tensors, but it does not exploit the efficient representation. To compute frob(lmlrares(T,U)), the new routine froblmlrares(T,U) can be used as it exploits the structure.
Various
lmlragen, lmlrares and froblmlrares.
lmlragen can now generate subtensors using linear or subscript indices:

    lmlragen(U, S, 1:3);
    lmlragen(U, S, 1, ':', 3:4);

lmlrares accepts structured tensors, but does not exploit the efficient representation. froblmlrares computes the Frobenius norm of the residual between a tensor and an approximation. This method can exploit the efficient representation of structured tensors.
Block term decomposition
A number of minor changes have been made to the block term decomposition algorithms:
btd_rnd
Normalization is turned on by default.
Each term is now normalized by default, i.e., all factor matrices have orthonormal columns, and the core tensor is all-orthogonal. For each $N$th-order core tensor, the $(N-1)$th-order slices are sorted in the order of non-increasing Frobenius norm.
Support for structured tensors
All optimization-based algorithms accept structured tensors.
The optimization-based algorithms btd_nls and btd_minf accept structured tensors as input. The efficient representation of the structured tensors is exploited to reduce the computational cost. btdres also accepts structured tensors, but it does not exploit the efficient representation. To compute frob(btdres(T,U)), the new routine frobbtdres(T,U) can be used as it exploits the structure.
btd_core
New computational kernel for optimization-based BTD algorithms.
The btd_nls and btd_minf algorithms are now wrappers for the btd_core routine, and use the nonlinear least squares and the unconstrained nonlinear optimization, respectively.
Various
btdgen, btdres and frobbtdres.
btdgen can now generate subtensors using linear and subscript indices:

    btdgen(U, 1:3);
    btdgen(U, 1, ':', 3:4);

btdres accepts structured tensors, but does not exploit the efficient representation. frobbtdres computes the Frobenius norm of the residual between a tensor and an approximation. This method can exploit the efficient representation of structured tensors.
Tensorization methods
Tensorization is defined as the transformation or mapping of lower-order data to higher-order data. A new family of deterministic (de)tensorization techniques is introduced, and some new stochastic tensorization techniques have been added.
Hankelization
hankelize and dehankelize.
hankelize constructs Hankel matrices or tensors. The user can indicate the mode along which the fibers are tensorized, as well as the order of the Hankelization and the size of the resulting Hankel matrices or tensors. Instead of a dense tensor, an efficient representation can be obtained as well. dehankelize extracts the original data from a Hankel matrix or Hankel tensor. The user can indicate the modes along which the detensorization is carried out, as well as the method to be used (single fiber extraction, mean, median, ...).
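A hedged sketch (the 'Order' and 'Method' option names are assumptions based on the description above):

    x = randn(100, 1);                    % hypothetical signal
    H = hankelize(x, 'Order', 2);         % map the signal to a Hankel matrix
    y = dehankelize(H, 'Method', 'mean'); % recover x by averaging anti-diagonals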
Löwnerization
loewnerize and deloewnerize.
loewnerize constructs Löwner matrices or tensors. The user can indicate the mode along which the fibers are tensorized, as well as the order of the Löwnerization and the size of the resulting Löwner matrices or tensors. Instead of a dense tensor, an efficient representation can be obtained as well. deloewnerize extracts the original data from a Löwner matrix or Löwner tensor. The user can indicate the modes along which the detensorization is carried out.
Segmentation
segmentize and desegmentize.
segmentize applies reshaping/folding to segment the given data. The user can indicate the mode along which the fibers are tensorized, as well as the order of the segmentation and the size of the resulting matrices or tensors. Furthermore, shifts can be passed to indicate the overlap between segments. Instead of a dense tensor, an efficient representation can be obtained as well. desegmentize extracts the original data from the segmented matrix or tensor. The user can indicate the dimensions which need to be detensorized, as well as the method to be used (single fiber extraction, mean, median, ...).
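A hedged sketch (the 'Segsize' option name is an assumption based on the description above):

    x = randn(100, 1);                % hypothetical signal
    M = segmentize(x, 'Segsize', 10); % segments of length 10, stacked as columns
    y = desegmentize(M);              % recover the original signal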
Decimation
decimate and dedecimate.
decimate applies reshaping/folding to decimate the given data. The user can indicate the dimension along which the fibers are tensorized, as well as the order of the decimation and the size of the resulting matrices or tensors. Furthermore, shifts can be passed to indicate whether segments are overlapping. Instead of a dense tensor, an efficient representation can be obtained as well. dedecimate extracts the original data from the decimated matrix or tensor. The user can indicate the modes along which the detensorization is carried out, as well as the method to be used (single fiber extraction, mean, median, ...).
New covariance method
dcov.
dcov computes covariance matrices along a specific dimension.
New and updated cumulant methods
cum4, xcum4 and stcum4.
cum4 has a reduced memory footprint. xcum4 computes the fourth-order cross-cumulant (new). stcum4 computes the fourth-order spatio-temporal cumulant (new).
Structured data fusion
Thanks to a new language parser (sdf_check
) it is easier to formulate SDF
models, to investigate them and to correct errors. Three new factorization
types have been added and the handling of incomplete tensors has been improved.
Two new solvers for symmetric and/or coupled CPDs are introduced, as well as
nine new transformations. Six transformations have been generalized. Finally,
new advanced language concepts are introduced.
More lenient language
Fewer braces are necessary to create models.
The new, dedicated language parser sdf_check automatically adds the braces { and } if this can be done unambiguously. Arrays are converted to cells if necessary. For example, the following model is now allowed:

    model = struct;
    model.variables = randn(3,2);
    model.factors = {1, @struct_nonneg};
    model.factorizations{1}.data = T;
    model.factorizations{1}.cpd = [1 1 1];
Error checking using sdf_check
A new dedicated language parser that helps find model errors.
The sdf_check routine parses every SDF model and locates lines that contain syntax errors. sdf_check(model) also tests the consistency of the model, e.g., it tests whether all transformations can be applied and whether all dimensions agree. All solvers run only when a model is free of syntax and consistency errors.

Investigating a model using sdf_check
sdf_check(model, 'print') creates a model overview with clickable links allowing the user to investigate every step.
The sdf_check method can print a model overview after the syntax and consistency have been tested. This overview contains clickable links allowing the user to view each factor separately. This way, intermediate results of transformations can be inspected. Clickable links are only supported in the Matlab Command Window.
Three new factorization types
ll1, lmlra and regL0 are dedicated factorization types.
The ll1 factorization type can be used instead of struct_LL1 to compute a decomposition in multilinear rank-$(L_r,L_r,1)$ terms:

    model.factorizations.myll1.data = T;
    model.factorizations.myll1.ll1 = {'A','B','C'};
    model.factorizations.myll1.L = L;

Similarly, the lmlra factorization type can be used for the low multilinear rank approximation. Finally, regL0 implements a smoothed L0 regularization term $\sum_i 1 - \exp(-\frac{x_i^2}{\sigma^2})$.
Improved performance for incomplete tensors
cpdi factorization type for sdf_nls.
The sdf_nls algorithm now has a specialized kernel for incomplete tensors which can be selected using the cpdi factorization type. The kernel implements the exact Gramian of the Jacobian, instead of the approximation which is used by default. While the exact implementation is slower per iteration, it improves the convergence behavior, reducing the number of iterations needed to achieve convergence. For example:

    model.factorizations{1}.data = T;
    model.factorizations{1}.cpdi = {'A', 'B', 'C'};
ccpd_nls and ccpd_minf
New solvers for coupled and/or symmetric CPDs.
The ccpd_nls and ccpd_minf solvers can be used to compute the CPD of one or more tensors that share factor matrices and/or have a (partially) symmetric decomposition. These dedicated solvers are often faster than the more general sdf_nls and sdf_minf solvers. The standard SDF syntax can be used, but no structure can be imposed on the factors. The ccpd_nls and ccpd_minf methods are able to exploit symmetry in the decomposition as well as in the given tensor(s). Both solvers rely on ccpd_core for the computational routines.
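A hedged sketch of a fully symmetric CPD using standard SDF syntax (names and sizes are hypothetical):

    model.variables.a = randn(10, 3);
    model.factors.A = 'a';
    model.factorizations.sym.data = T; % T: a symmetric 10x10x10 tensor
    model.factorizations.sym.cpd = {'A', 'A', 'A'};
    sol = ccpd_nls(model);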
issymmetric field
Exploit symmetry in data using the ccpd_minf and ccpd_nls algorithms.
The ccpd_minf and ccpd_nls algorithms are able to exploit symmetry in the decomposition as well as in the given tensor(s). By specifying issymmetric = true, the same symmetry as in the decomposition is considered. For example:

    model.factorizations.data = T;
    model.factorizations.cpd = [1 2 1];
    model.factorizations.issymmetric = true; % exploit that squeeze(T(:,j,:)) is symmetric
If issymmetric is not provided by the user, ccpd_minf and ccpd_nls automatically determine whether the data are symmetric.

Advanced new language constructs
The transform field, implicit factors and struct array factorizations facilitate the construction of models significantly.
The transform field applies a transformation to multiple variables in a single line. This simplifies the creation of dynamic models in which several factors have the same structure. For example,

    model.transform = {1:3, @struct_nonneg};

creates three factor matrices with non-negative entries using variables 1, 2 and 3.
The ImplicitFactors option allows factorizations to use variables directly as factors without the need to explicitly create these factors. This option is disabled by default.
The factorizations field can be a struct array. This allows the user to create many factorizations in a single line, which can be useful when many datasets are coupled. For example:

    data = {T1, T2};           % two tensors
    ind  = {[1 2 3], [3 4 5]}; % factor three is shared
    model.factorizations = struct('data', data, 'cpd', ind);
Nine new transformations
struct_kron, struct_kr, struct_fd, struct_nop, struct_const, struct_prod, struct_select, struct_cauchy and struct_exp.
- struct_kron implements the Kronecker product of $N$ vectors or matrices, e.g., $\mat{A}\kron\mat{B}\kron\mat{C}$.
- struct_kr implements the Khatri-Rao product of $N$ vectors or matrices, e.g., $\mat{A}\kr\mat{B}\kr\mat{C}$ (see the sketch after this list).
- struct_fd computes finite differences of order one or two of each column of a matrix.
- struct_nop passes all variables and options without doing anything, which can be useful for testing purposes.
- struct_const keeps the variable constant. A mask can be used to select which entries are constant.
- struct_prod multiplies a variable along a given mode. The result is then squeezed.
- struct_select allows an entry of a cell variable to be selected.
- struct_cauchy implements the structure of a Cauchy matrix.
- struct_exp models a factor matrix with exponentials as columns.
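As referenced above, a hedged sketch of struct_kr (assuming it acts on a cell variable holding the matrices):

    model.variables.x = {randn(3,2), randn(4,2)};
    model.factors.AB  = {'x', @struct_kr}; % a 12x2 factor equal to A kr B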
Six updated transformations
struct_plus, struct_sum, struct_times, struct_vander, struct_poly and struct_rational.
- struct_plus computes the element-wise sum of a variable and a constant, or the element-wise sum of multiple variables (and an optional constant).
- struct_sum sums a variable along a given mode. The result is now squeezed.
- struct_times computes the element-wise product of a variable and a constant, or the element-wise product of multiple variables (and an optional constant).
- struct_vander accepts a fourth argument to form a transposed Vandermonde matrix.
- struct_poly ensures that every column of a matrix is a polynomial. Different basis functions are now supported: monomial, Chebyshev of the first kind, Chebyshev of the second kind and Legendre. It is also possible to use barycentric interpolation instead of coefficients.
- struct_rational ensures that every column of a matrix is a rational function. The same basis functions as for struct_poly are supported, and barycentric interpolation can again be used instead of coefficients.
Two additional functions have been added to facilitate working with polynomials: transform_poly and genpolybasis.

Support for structured tensors
All optimization-based algorithms accept structured tensors.
The optimization-based algorithms sdf_nls, sdf_minf, ccpd_nls and ccpd_minf accept structured tensors as input. The efficient representation of structured tensors is exploited to reduce the computational cost.

Performance improvements
Speed improvements for simple models.
Overhead has been reduced in several specific cases to lower the computation time:
- Gradients and blocks in the Gramian of the Jacobian corresponding to constant factors are no longer computed.
- References are resolved faster for non-BTD types by removing cell conversions.
- Factors without transformations bypass large, expensive and unnecessary parts of the code.
weight and relweight
Setting the (relative) importance of different factorizations and/or regularization terms.
In Tensorlab 2, the importance of different factorizations and/or regularization terms could be set using the Weight and RelWeight options when calling sdf_nls or sdf_minf. In Tensorlab 3, (relative) weights can also be set in the model itself using the weight and relweight fields. For example:

    model.factorizations{1}.data = T;
    model.factorizations{1}.cpd = 1:3;
    model.factorizations{1}.weight = 10;
Persistent storage for transformations
Intermediate results can be cached for all iterations.
Transformations can make use of a state.persistent field. The contents of this field are computed only once for each call to the solver. This is useful for intermediate results that do not depend on the variables, e.g., a basis matrix for polynomials. Results that depend on the variables can be stored in the state output as before.
The (constant) keyword
Enabling scalar constants.
The (constant) keyword can be used to explicitly declare a (sub)factor to be constant. This way, it is possible to create a scalar constant, which was not possible before. The following example creates a factor lambda as the constant 1 followed by the variable a:

    model.factors.lambda = { {1, '(constant)'}, 'a' };
Absolute and relative error
Both errors are computed for each dataset.
All solvers compute the absolute and relative error for each factorization. For example:

    [sol, output] = sdf_nls(model);
    output.relerr
Extended documentation
Five chapters, from beginner to expert level.
The SDF documentation has been entirely rewritten. A first chapter introduces the language in a structured way. The different concepts are illustrated with many examples. In the second chapter, more elaborate examples are shown. In the third chapter, advanced modeling techniques are discussed. Chapter four explains how the user can implement a transformation. Finally, the full specification of the language is given in the fifth chapter.
Various
Visualization
New visualize function.
The visualize function allows the user to walk through a higher-order dataset while plotting first-, second- or third-order slices. visualize can, for instance, be used to compare the result of a decomposition with the original data. Support regions indicate where the model is valid. The dimensions can be formatted to match the underlying variable. See help visualize for all options. slice3 and voxel3 now support dimension transforms. The values of the sliders in slice3 and surf3 can be set externally using the Values option.

Flexible option parsing
Use key-value pairs or option structs.
Most methods use an input parser allowing the use of option structures as in previous versions of Tensorlab, as well as key-value pairs. Most options are now case insensitive. The following calls to cpd are equivalent:

    options = struct;
    options.Display = 1;
    options.Algorithm = @cpd_nls;
    cpd(T, R, options);
    cpd(T, R, 'display', 1, 'Algorithm', @cpd_nls);
Improved support for sparse tensors
tens2vec, tens2mat, vec2tens and mat2tens.
The unfolding routines tens2vec and tens2mat and the folding routines vec2tens and mat2tens now work for sparse tensors as well.
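For example (a dense tensor is shown; the same calls work for sparse tensors):

    T  = randn(3, 4, 5);
    M  = tens2mat(T, 1);          % mode-1 unfolding, a 3x20 matrix
    T2 = mat2tens(M, [3 4 5], 1); % fold back; T2 equals T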
noisy
Corrected SNR, extra distributions and large-scale version.
The noise added by noisy now has the correct signal-to-noise ratio (SNR). (Previously the SNR was a factor of two too large.) Different noise distributions are supported, and the noise tensor is now an output argument. Finally, a large-scale version has been implemented which uses less memory.
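For example (a hedged sketch; T is hypothetical):

    [Tn, N] = noisy(T, 20); % add noise at an SNR of 20 dB; N is the noise tensor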
Auxiliary functions for polynomials
genpolybasis and transform_poly.
genpolybasis and transform_poly are able to generate polynomial basis functions and convert coefficients from one basis and/or domain to another one.
New and updated auxiliary functions
kron, outprod, fmt, contract, fixedAngleVect, gevd_bal and kmeans.
- kron now computes the Kronecker product of an arbitrary number of tensors and is faster for vectors.
- outprod computes the outer product of vectors, matrices and/or tensors.
- fmt now ensures that the val, ind and sub fields are column vectors. For efficient representations of structured tensors, fmt checks whether the representation is valid using isvalidtensor.
- contract computes the contraction of a tensor with one or more vectors.
- fixedAngleVect uses the simplex method to generate vectors with a specified angle between them.
- gevd_bal implements a balanced method for the generalized eigenvalue decomposition.
- kmeans can no longer stay in an infinite loop because the number of iterations is now limited.
Absolute tolerance for optimization routines
A new stopping criterion.
The optimization algorithms now stop when the objective value is below a threshold TolAbs. (The existing threshold TolFun uses the relative objective function value.) By default, TolAbs = -inf for general optimization routines and TolAbs = 0 for nonlinear least squares algorithms.

Documentation, demos, website
HTML and PDF documentation, more practical demos and a new website.
The documentation has been rewritten and has been considerably expanded to include the new features, to clarify explanations and to bring all functionality to the user's attention. Both an HTML and a PDF version are available. A number of more practical demos are provided to get the user started right away. Finally, a new website has been designed.
Tensorlab 2.0 January 2014
Tensorlab 2.0
- New feature: full support for sparse and incomplete tensors.
- New feature: structured data fusion (SDF) allows you to define your own (coupled) tensor factorizations and impose structure on the factors with an intuitive domain specific language.
- New feature: SDF comes with 32 types of factor structure to choose from (orthogonality, nonnegativity, Toeplitz, ...).
- New feature: low multilinear rank approximation by adaptive cross-approximation (lmlra_aca).
- Improved complex optimization algorithms.
- Speed improvements across the board thanks to optimized kernels.
Tensorlab 2.01
- Removed a namespace clash when converting incomplete and sparse tensors to full tensors.
- Fixed a bug where factors consisting of both horizontal and vertical concatenation of subfactors resulted in an error.
Tensorlab 2.02
- Visualization with voxel3 now runs much more smoothly.
- Added cum3 for computing the third-order cumulant.
- Added new optimization algorithm: SR1 with CG-Steihaug.
- Fixed a memory leak in L-BFGS methods, significantly improving the speed of most _minf methods.
- Fixed many bugs related to SDF, rankest, cpd and cpd_gevd.
- Updated (de)serialize functions.
Tensorlab 1.0 February 2013
Tensorlab 1.01
- Performance improvements and bug fixes in bivariate polynomial system solving (polysol2 and polyval2).
- Bug fixes in cpd_gevd and lmlra_hooi.
- Reference updates.
Tensorlab 1.02
- New feature: alternating least squares algorithm for computing structured and symmetric tensor decompositions (cpds_als).
- New feature: it is now possible to select exact line or plane search for optimization-based algorithms.
- Improved robustness of polysol2.
- New feature: optionally display convergence progress in command window, including a hyperlink to summarize convergence in a plot.
- Improvement: line search function signature simplified. As a result, it is now easy to supply optimization methods with custom line searches.
- Improvement: more outputs are recorded, such as the relative step size.
- Small bug fixes and performance improvements.