Basic Usage: Python
Here we assume that the Python installation has been completed and validated. If you have not yet done that, then please visit the installation page and follow the instructions there to install and validate the tsnecuda module.
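If you just want a quick sanity check that the module loads and the GPU build works, the installation page describes a short smoke test along the following lines (the tsnecuda.test() helper is assumed here from those instructions):
>>> import tsnecuda
>>> tsnecuda.test()  # assumed helper from the installation instructions; runs a small built-in example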
Getting started with the tsnecuda library is straightforward. Currently, the Barnes-Hut method is supported only for projection into two dimensions; for any other number of output dimensions, the Naive method is used instead, which requires significantly more GPU and CPU memory.
A simple example is as follows:
>>> import numpy as np
>>> from tsnecuda import TSNE
>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> X_embedded = TSNE().fit_transform(X)
>>> X_embedded.shape
(4, 2)
As the example above shows, we generally follow the calling conventions of Sklearn, and almost all of the Sklearn options are supported. If we want to change the learning rate and perplexity, we can do so when we construct the TSNE object:
>>> import numpy as np
>>> from tsnecuda import TSNE
>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> X_embedded = TSNE(perplexity=64.0, learning_rate=270).fit_transform(X)
>>> X_embedded.shape
(4, 2)
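Because the interface mirrors Sklearn, switching between the two implementations is mostly a matter of changing the import. The following is a minimal sketch rather than a benchmark, assuming both packages are installed and using a larger random dataset (Sklearn requires the perplexity to be smaller than the number of samples):
>>> import numpy as np
>>> from sklearn.manifold import TSNE as skTSNE   # CPU implementation
>>> from tsnecuda import TSNE as cuTSNE           # GPU implementation
>>> X = np.random.rand(1000, 50)
>>> E_cpu = skTSNE(perplexity=30.0).fit_transform(X)
>>> E_gpu = cuTSNE(perplexity=30.0).fit_transform(X)
>>> E_cpu.shape, E_gpu.shape
((1000, 2), (1000, 2))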
If you want to project into more than two dimensions, and you are comfortable using the naive version of the code, which has O(N^2) complexity but still runs quickly, you can do so as follows:
>>> import numpy as np
>>> from tsnecuda import TSNE
>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> X_embedded = TSNE(method='naive', n_components=10).fit_transform(X)
>>> X_embedded.shape
(4, 10)
The full list of parameters, along with a more detailed overview of the API, can be found on Read the Docs.
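As a final illustration, a typical call on a larger dataset might tune several parameters at once. This is only a sketch: the parameter names n_iter, random_seed, and verbose are assumed here and should be checked against the documentation rather than taken as the definitive API.
>>> import numpy as np
>>> from tsnecuda import TSNE
>>> X = np.random.rand(5000, 128)
>>> tsne = TSNE(perplexity=50.0, learning_rate=200.0,
...             n_iter=2000,      # assumed iteration-count parameter; see the docs
...             random_seed=42,   # assumed seeding parameter for reproducible runs
...             verbose=1)        # assumed progress-logging flag
>>> X_embedded = tsne.fit_transform(X)
>>> X_embedded.shape
(5000, 2)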