Structured tensor computations: blocking, symmetries and Kronecker factorizations

2012 
In this thesis we explore how several ideas that have proven very successful in matrix computations extend to the rapidly maturing field of tensor computations. We focus mainly on the use of blocking techniques, the exploitation of various symmetries, and the development of new computational algorithms and factorizations.

In Chapter 2 we develop a novel method to embed a higher-order tensor in a larger, symmetric tensor. At the matrix level, such an embedding is well known: it has been used to derive important matrix algorithms and is one of several ways of connecting the concepts of eigenvalues and singular values. Our method for higher-order tensors generalizes the matrix case, and we use it to derive a previously unknown connection between tensor eigenvalues and tensor singular values. We also show how this symmetric tensor embedding can be used to generalize algorithms originally developed for symmetric tensors to arbitrary tensors while preserving their convergence properties.

Block tensors are becoming increasingly important within the field of numerical multilinear algebra. Accordingly, it is appropriate to develop an infrastructure that supports reasoning about block tensor computation. In Chapter 3 we establish concise notation suitable for the analysis and development of block tensor algorithms, prove several useful block tensor identities, and make precise the notion of a block tensor unfolding.

In Chapter 4 we define a new block-based tensor operation that generalizes the matrix Kronecker product. Using this operation, we introduce a new tensor decomposition that extends the Kronecker Product SVD and has many attractive properties. The block unfoldings introduced in Chapter 3 play a pivotal role in the development of these tensor Kronecker methods.

Chapter 5 covers two special topics related to the overall theme of this thesis. First, a tensor decomposition based on the matrix QR decomposition with column pivoting is introduced and its potential for low-rank approximation is explored. Then a power method that efficiently exploits the structure of partially symmetric tensors is proposed, and we investigate the singular value and singular vector properties of such tensors.

Overall, these results show how many ideas from matrix computations can be successfully extended to tensors. Just as matrix algorithms have increasingly been tuned to exploit structure such as blocking and symmetry, the same can be done in the tensor setting.
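For concreteness, the well-known matrix-level embedding referenced above is the symmetric matrix built from A and its transpose; writing it out makes the eigenvalue/singular value connection explicit (the notation sym(A) is used here only for illustration):

\[
\operatorname{sym}(A) \;=\;
\begin{bmatrix} 0 & A \\ A^{\mathsf T} & 0 \end{bmatrix},
\qquad
\begin{bmatrix} 0 & A \\ A^{\mathsf T} & 0 \end{bmatrix}
\begin{bmatrix} u_i \\ \pm v_i \end{bmatrix}
\;=\; \pm\,\sigma_i
\begin{bmatrix} u_i \\ \pm v_i \end{bmatrix},
\]

where (\sigma_i, u_i, v_i) are the singular triplets of A, so the nonzero eigenvalues of the embedding are exactly \pm\sigma_i. Chapter 2's construction plays the analogous role for higher-order tensors.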
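As background for the block unfoldings of Chapter 3, the ordinary mode-n unfolding arranges the mode-n fibers of a tensor as the columns of a matrix. A minimal NumPy sketch follows; the column ordering shown is one common convention among several, and the block-aware unfolding developed in the thesis is not reproduced here.

    import numpy as np

    def unfold(T, n):
        """Mode-n unfolding: move mode n to the front and flatten the
        remaining modes, so every column is a mode-n fiber of T."""
        return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

    T = np.arange(24).reshape(2, 3, 4)
    print(unfold(T, 1).shape)   # (3, 8): the 2*4 mode-1 fibers become columns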
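The matrix Kronecker Product SVD that Chapter 4 extends rests on a rearrangement idea: permuting the entries of A so that each block becomes a row turns "nearest Kronecker product" into an ordinary low-rank matrix problem. The sketch below illustrates the rank-one case; the function name nearest_kronecker and the block sizes are chosen only for illustration.

    import numpy as np

    def nearest_kronecker(A, m1, m2, n1, n2):
        """Best Frobenius-norm approximation of A (shape (m1*m2, n1*n2)) by a
        single Kronecker product B kron C, via the SVD of the rearranged
        matrix whose rows are the flattened (i, j) blocks of A."""
        R = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
        C = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2)
        return B, C

    # Sanity check on an exact Kronecker product: the recovery error is ~0.
    rng = np.random.default_rng(0)
    B0, C0 = rng.standard_normal((3, 4)), rng.standard_normal((5, 2))
    A = np.kron(B0, C0)
    B, C = nearest_kronecker(A, 3, 5, 4, 2)
    print(np.linalg.norm(A - np.kron(B, C)))   # close to machine precision

Keeping r singular triplets of the rearranged matrix instead of one gives an approximation A as a sum of r Kronecker products B_k kron C_k, which is the Kronecker Product SVD in its usual matrix form.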
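The first topic of Chapter 5 builds a tensor decomposition on top of the matrix QR decomposition with column pivoting, whose leading pivoted columns already give a simple low-rank approximation at the matrix level. A minimal SciPy sketch of that matrix-level idea (the test matrix and rank k are placeholders; the tensor-level decomposition from the thesis is not reproduced):

    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 30))   # rank 8
    k = 8

    Q, R, piv = qr(A, pivoting=True)    # A[:, piv] = Q @ R
    Ak = np.empty_like(A)
    Ak[:, piv] = Q[:, :k] @ R[:k, :]    # rank-k approximation, permutation undone
    print(np.linalg.norm(A - Ak))       # tiny, since A has numerical rank 8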
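The second topic of Chapter 5 is a power method tailored to partially symmetric tensors. Its generic, unsymmetric starting point is the alternating higher-order power method for the dominant singular value and singular vectors (equivalently, the best rank-one approximation) of a tensor. A minimal sketch for a third-order tensor follows; the structure-exploiting partially symmetric variant of the thesis is omitted.

    import numpy as np

    def hopm(T, iters=200, seed=0):
        """Alternating higher-order power method for a third-order tensor T:
        returns sigma and unit vectors (x, y, z) at which
        T(x, y, z) = sum_ijk T[i,j,k] x[i] y[j] z[k] is stationary
        (a local maximum in general, not necessarily the global one)."""
        rng = np.random.default_rng(seed)
        x, y, z = (rng.standard_normal(n) for n in T.shape)
        x, y, z = (v / np.linalg.norm(v) for v in (x, y, z))
        for _ in range(iters):
            x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
            y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
            z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
        sigma = np.einsum('ijk,i,j,k->', T, x, y, z)
        return sigma, x, y, z

    T = np.random.default_rng(1).standard_normal((4, 5, 6))
    sigma, x, y, z = hopm(T)
    print(sigma)   # dominant tensor singular value (local optimum in general)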