TensorLy 0.5 is out!¶
We have been working hard to bring lots of new functionality and improvements to TensorLy. This release is the result of a collaborative effort by the TensorLy developers. In particular, we are happy to now have a core community of amazing developers around the world.
Particular thanks to our core team, Aaron Meurer, Aaron Meyer, Jeremy Cohen, Julia Gusak, Marie Roald, Yngve Mardal Moe, Anima Anandkumar and Yannis Panagakis. Many thanks to Anthony Scopatz, Jim Crist-Harif, Hameer Abbasi and Scott Sievert for the sparse backend and to all our contributors for making TensorLy the go-to library for tensor learning in Python!
What is TensorLy?¶
TensorLy is a high-level API for tensor methods and deep tensorized neural networks in Python, designed to make tensor learning simple and accessible. It has a flexible backend system which allows you to seamlessly perform computations with NumPy, MXNet, PyTorch, TensorFlow, CuPy or JAX, and run methods at scale on CPU or GPU.
What's new in 0.5?¶
Well, a lot, it turns out! We are always looking to simplify the API and make using TensorLy easier. This release brings lots of new features but also quality of life improvements and bug fixes. We hope you like it!
New Tensor Algebra Backend¶
You can now choose to either use the regular tensor algebra functions or defer all tensor algebra to einsum. This flexible interface also allows users to quickly and easily implement their own efficient tensor algebra routines.
New JAX Backend¶
You can now use JAX as a backend and get all the TensorLy operations running in JAX!
Class API¶
We have added Scikit-Learn-like objects for manipulating decompositions, so you can now use these classes to compute the decompositions. The functional API is still there too.
New decompositions¶
We have several new decompositions:
- Parafac2
- CP via Robust tensor power iteration
- Symmetric CP through robust tensor power iteration
- Tensor-Train-Matrix: decompose a matrix by tensorizing it and expressing it in the TT-matrix format!
Decomposition improvements¶
CP now has options for line-search and l2 regularization! Both CP and Tucker now support fixed modes and masked decomposition, and CP now supports sparsity.
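For instance, a masked decomposition with one factor held fixed might look like the following minimal sketch (the mask and fixed_modes argument names of the functional parafac are assumptions here):
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

tensor = tl.tensor(np.random.random_sample((8, 9, 10)))

# Mask with the same shape as the tensor: 1 where values are observed, 0 where missing
mask = tl.tensor(np.random.binomial(1, 0.8, size=(8, 9, 10)).astype(float))

# Masked CP decomposition, keeping the factors of mode 0 fixed at their initialisation
cp_tensor = parafac(tensor, rank=3, mask=mask, init='random', fixed_modes=[0])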
Sparse decomposition¶
If you have not yet tried the sparse decomposition, you should! We have now added sparse symmetric robust tensor power iteration!
Efficient operations on decomposed tensors¶
You can now perform tensor algebraic operations (e.g. the n-mode product) directly on decomposed tensors, without reconstructing the full tensor first. CP has the most operations for now, but we eventually plan to add them for all decompositions!
Choosing the rank¶
New rank selection tools: validate_cp_rank, and the option to set rank='same' or rank to a float to automatically determine the rank.
API changes¶
We have unified the API: Kruskal-tensors have been renamed to cp_tensors, to be consistent with the decomposition, and matrix-product-state tensors have been renamed to tensor-train, the name under which they have been popularized in machine learning.
How do I get it?¶
With pip (recommended)¶
pip install -U tensorly
With conda¶
conda install -c tensorly tensorly
From Git (development)¶
# clone the repository
git clone https://github.com/tensorly/tensorly
# Move to that repository
cd tensorly
# Install in editable mode with `-e` or, equivalently, `--editable`
pip install -e .
Check that you have the correct version installed:¶
import tensorly as tl
print(tl.__version__)
Become a contributor!¶
One of the best things about open source is the community, and we encourage you to join TensorLy's! Don't be shy and don't worry: we are all learning and the core team is here to help you!
Have a look at the development guide and head over to GitHub to open your issues or pull-requests!
Overview of the new features in more detail¶
New Tensor Algebra Backend¶
from tensorly import tenalg
from tensorly import random
tucker_tensor = random.random_tucker(shape=(10, 11, 12), rank=(10, 11, 12), full=False)
By default, the tensor algebra backend is 'core', but you can explicitly set it:
tenalg.set_tenalg_backend('core')
%timeit tl.tucker_to_tensor(tucker_tensor)
Let's now try the new einsum backend:
tenalg.set_tenalg_backend('einsum')
%timeit tl.tucker_to_tensor(tucker_tensor)
New JAX Backend¶
JAX is now officially supported!
First, make sure you have JAX installed (see the installation guide). For CPU:
pip install --upgrade pip
pip install --upgrade jax jaxlib # CPU-only version
You can then set the backend to 'jax' and have it execute all your code:
tl.set_backend('jax')
tensor = random.random_tensor(shape=(10, 11, 12))
type(tensor)
from tensorly.decomposition import tucker
tucker_tensor = tucker(tensor, rank=(10, 11, 12))
Let's reconstruct the full tensor and compute the reconstruction error:
reconstruction = tl.tucker_to_tensor(tucker_tensor)
rec_error = tl.norm(reconstruction - tensor)/tl.norm(tensor)
print(f'Relative reconstruction error={rec_error}')
type(reconstruction)
Class API¶
We have added Scikit-Learn-like objects for manipulating decompositions. For instance, let's import CP:
from tensorly.decomposition import CP
tl.set_backend('numpy')
Let's create a small tensor to decompose and a CP instance to decompose it:
tensor = random.random_tensor(shape=(5, 6, 7))
cp = CP(rank=42, init='random', l2_reg=0.0001)
print(cp)
You can now fit the decomposition to a tensor:
kr = cp.fit_transform(tensor)
kr
rec = kr.to_tensor()
rec_error = tl.norm(tensor - rec)/tl.norm(tensor)
print(rec_error)
Parafac2¶
We have also added Parafac2! Let's import it and create a random tensor to decompose:
from tensorly.decomposition import Parafac2
tensor = random.random_tensor((10, 10, 10))
Let's create an instance of Parafac2
parafac2 = Parafac2(rank=10)
We can now fit our instance to the data and get the decomposition:
decomposed = parafac2.fit_transform(tensor)
rec = tl.parafac2_tensor.parafac2_to_tensor(decomposed)
rec_error = tl.norm(tensor - rec)/tl.norm(tensor)
print(rec_error)
CP via Robust Tensor Power Iterations¶
You can now use the robust tensor power iteration to [compute a CP decomposition](http://tensorly.org/dev/modules/generated/tensorly.decomposition.CPPower.html#tensorly.decomposition.CPPower)!
We have also added symmetric CP, computed via the symmetric robust tensor power iteration.
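Here is a minimal sketch of the non-symmetric version with the class API (we assume CPPower follows the same fit_transform pattern as the CP class shown below):
import tensorly as tl
from tensorly import random
from tensorly.decomposition import CPPower

tensor = random.random_tensor(shape=(8, 8, 8))

# CP decomposition computed via robust tensor power iteration
cp_power = CPPower(rank=3)
cp_tensor = cp_power.fit_transform(tensor)

# Relative reconstruction error
rec = tl.cp_to_tensor(cp_tensor)
print(tl.norm(tensor - rec) / tl.norm(tensor))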
TT-Matrix¶
We also now support the TT-Matrix format. This allows you to efficiently compress a matrix by tensorizing it and expressing it in the TT-matrix format.
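A minimal sketch might look like this (the tensor_train_matrix and tt_matrix_to_tensor names are assumptions based on the new TT-matrix module):
import tensorly as tl
from tensorly import random
from tensorly.decomposition import tensor_train_matrix

# A 16 x 16 matrix tensorized into a 4th-order tensor of shape (4, 4, 4, 4)
tensor = random.random_tensor(shape=(4, 4, 4, 4))

# Compress it in the TT-matrix format
tt = tensor_train_matrix(tensor, rank=2)

# Reconstruct the tensorized matrix and check the relative error
rec = tl.tt_matrix.tt_matrix_to_tensor(tt)
print(tl.norm(tensor - rec) / tl.norm(tensor))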
Manipulate decomposed tensors¶
We now have a simple API to directly manipulate decomposed tensors, expect the number of supported operations to grow quickly!
cp_tensor = random.random_cp(shape=(10, 11, 12), rank=10)
M = random.random_tensor(shape=(20, 11))
Before, to, say, perform an n-mode product between your CP tensor and the matrix M, you would have had to reconstruct the full tensor and perform the operation:
%timeit tenalg.mode_dot(tl.cp_to_tensor(cp_tensor), M, mode=1)
Now, you can do the operation directly on your decomposed tensor!
Option 1:¶
%timeit cp_tensor.mode_dot(M, mode=1, copy=True)
Option 2:¶
%timeit tl.cp_tensor.cp_mode_dot(cp_tensor, M, mode=1, copy=True)
Inplace operations¶
Note that if you do not set copy=True, the tensor will be modified inplace:
cp_tensor
tl.cp_tensor.cp_mode_dot(cp_tensor, M, mode=1, copy=False)
cp_tensor
Line-search¶
CP now has an option for line-search. Let's create a random tensor and a CP instance without line-search:
tensor = random.random_tensor((10, 10, 10))
cp = CP(rank=42, init='random', tol=0, linesearch=False)
print(cp)
Let's run CP with and without line-search several times and compare the average reconstruction error:
n_repeat = 10
errors = []
for i in range(n_repeat):
    kr = cp.fit_transform(tensor)
    rec = kr.to_tensor()
    rec_error = tl.norm(tensor - rec)/tl.norm(tensor)
    errors.append(rec_error)
print(tl.mean(errors))
Now let's consider the same CP but with line-search:
cp.linesearch = True
errors_ls = []
for i in range(n_repeat):
    kr = cp.fit_transform(tensor)
    rec = kr.to_tensor()
    rec_error = tl.norm(tensor - rec)/tl.norm(tensor)
    errors_ls.append(rec_error)
print(tl.mean(errors_ls))
ratio = tl.mean(errors)/tl.mean(errors_ls)
if ratio > 1:
    print(f'Relative reconstruction error with line search is {ratio:.2f} times better than with regular CP.')
else:
    print(f'Relative reconstruction error with regular CP is {1/ratio:.2f} times better than CP with line-search.')
L2 regularization¶
You can also use l2 regularization when performing the CP decomposition with ALS, simply by setting l2_reg to any value different from 0!
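For example, a minimal sketch with the functional API (assuming parafac exposes the same l2_reg parameter as the CP class above):
import tensorly as tl
from tensorly import random
from tensorly.decomposition import parafac

tensor = random.random_tensor(shape=(10, 10, 10))

# ALS-based CP decomposition with an l2 penalty on the factors
cp_tensor = parafac(tensor, rank=5, l2_reg=1e-3)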
Sparsity¶
Also in CP, you can now recover a sparse tensor in addition to the low-rank approximation. Simply set sparsity to either the desired fraction or the number of non-zero elements in the sparse component of the tensor!
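A minimal sketch (we assume that, when sparsity is set, the functional parafac returns both the CP tensor and the sparse component):
import tensorly as tl
from tensorly import random
from tensorly.decomposition import parafac

tensor = random.random_tensor(shape=(10, 10, 10))

# Low-rank plus sparse decomposition: keep 1% of the entries in the sparse component
cp_tensor, sparse_component = parafac(tensor, rank=5, sparsity=0.01)

# The reconstruction is the low-rank CP part plus the sparse component
rec = tl.cp_to_tensor(cp_tensor) + sparse_component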
Choosing the rank¶
Also new in 0.5, in most decompositions you can now automatically set the rank to preserve the number of parameters (rank='same'). This means that the decomposition will have (approximately) the same total number of parameters as the original tensor. Alternatively, you can set rank to a float to specify a fraction of the number of parameters of the original tensor. For instance, rank=0.5 will result in half the parameters, while rank=1.5 will result in 1.5 times more parameters!
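For example, with validate_cp_rank (mentioned above; the tensorly.cp_tensor import path is an assumption):
from tensorly.cp_tensor import validate_cp_rank

shape = (10, 11, 12)

# Rank giving a CP decomposition with (approximately) as many parameters as the full tensor
print(validate_cp_rank(shape, rank='same'))

# Rank giving a CP decomposition with roughly half the parameters of the full tensor
print(validate_cp_rank(shape, rank=0.5))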
Sparse Tensor Decomposition¶
If you have not yet tried our sparse backend, now is the time! We support sparse tensor decomposition using the PyData Sparse library! Check out the notebooks for sparse CP decomposition, sparse non-negative CP, sparse CP with missing values, sparse symmetric CP, and finally sparse Tucker on the NeurIPS Sparse Dataset from FROSTT.
API changes¶
We have unified the API: Kruskal-tensors have been renamed to cp_tensors, to be consistent with the decomposition, and matrix-product-state tensors have been renamed to tensor-train, the name under which they have been popularized in machine learning. kruskal_tensor and mps_tensor have been deprecated in favour of cp_tensor and tt_tensor respectively and will be removed in the next release.
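In practice this just means switching to the new names, e.g. (the deprecated kruskal_to_tensor alias is shown commented out for reference):
import tensorly as tl
from tensorly import random

cp_tensor = random.random_cp(shape=(10, 11, 12), rank=3)

# New, preferred name
full = tl.cp_to_tensor(cp_tensor)

# Old name, deprecated and scheduled for removal in the next release
# full = tl.kruskal_to_tensor(cp_tensor)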
That's all folks!¶
That's all... for now! We hope you like all the new features. If you have any feedback, please reach out, open an issue on GitHub or create a pull-request! We welcome contributions and are here to help if you don't know where to start!