Embedding

Decomposition

graspy.embed.select_dimension(X, n_components=None, n_elbows=2, threshold=None, return_likelihoods=False)[source]

Generates profile likelihoods from an array using the method of Zhu and Ghodsi [1]. Elbows correspond to optimal embedding dimensions.

Parameters:
X : 1d or 2d array-like

Input array to generate profile likelihoods for. If a 1d array, it should be sorted in decreasing order. If a 2d array, its shape should be (n_samples, n_features).

n_components : int, optional, default: None.

Number of components to embed. If None, defaults to floor(log2(min(n_samples, n_features))). Ignored if X is a 1d array.

n_elbows : int, optional, default: 2.

Number of likelihood elbows to return. Must be > 1.

threshold : float, int, optional, default: None

If given, only consider the singular values that are > threshold. Must be >= 0.

return_likelihoods : bool, optional, default: False

If True, returns all of the likelihoods associated with each elbow.

Returns:
elbows : list

Elbows indicate subsequent optimal embedding dimensions. Number of elbows may be less than n_elbows if there are not enough singular values.

sing_vals : list

The singular values associated with each elbow.

likelihoods : list of array-like

Arrays of the likelihoods corresponding to each elbow. Only returned if return_likelihoods is True.

References

[1] Zhu, M. and Ghodsi, A. (2006). Automatic dimensionality selection from the scree plot via the use of profile likelihood. Computational Statistics & Data Analysis, 51(2), pp. 918-930.
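
A minimal usage sketch, assuming a 1d array of singular values sorted in decreasing order; the random symmetric matrix below is only an illustrative stand-in for real data:

    import numpy as np
    from graspy.embed import select_dimension

    # Illustrative input: singular values of a random symmetric matrix.
    np.random.seed(0)
    A = np.random.rand(100, 100)
    A = (A + A.T) / 2
    sing_vals = np.linalg.svd(A, compute_uv=False)  # sorted in decreasing order

    # Find the first two elbows of the profile likelihood.
    elbows, elbow_vals = select_dimension(sing_vals, n_elbows=2)
    print(elbows)      # candidate embedding dimensions
    print(elbow_vals)  # singular values at those elbows
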
graspy.embed.selectSVD(X, n_components=None, n_elbows=2, algorithm='randomized', n_iter=5)[source]

Dimensionality reduction using SVD.

Performs linear dimensionality reduction using either full or truncated singular value decomposition (SVD). Full SVD is computed with SciPy's wrapper for LAPACK, while truncated SVD is computed with either SciPy's wrapper for ARPACK or scikit-learn's implementation of randomized SVD.

It also performs optimal dimensionality selection using the Zhu & Ghodsi algorithm [1] if the number of target dimensions is not specified.

Parameters:
X : array-like, shape (n_samples, n_features)

The data to perform svd on.

n_components : int or None, default = None

Desired dimensionality of output data. If algorithm='full', n_components must be <= min(X.shape); otherwise, n_components must be < min(X.shape). If None, the optimal number of dimensions is chosen by select_dimension using the n_elbows argument.

n_elbows : int, optional, default: 2

If n_components=None, compute the optimal embedding dimension using select_dimension. Otherwise, ignored.

algorithm : {'full', 'truncated', 'randomized' (default)}, optional

SVD solver to use:

  • 'full'
    Computes full svd using scipy.linalg.svd
  • 'truncated'
    Computes truncated svd using scipy.sparse.linalg.svds
  • 'randomized'
    Computes randomized svd using sklearn.utils.extmath.randomized_svd
n_iter : int, optional (default = 5)

Number of iterations for the randomized SVD solver. Not used by 'full' or 'truncated'. The default is larger than the default in randomized_svd to handle sparse matrices that may have a large, slowly decaying spectrum.

Returns:
U: array-like, shape (n_samples, n_components)

Left singular vectors corresponding to singular values.

D: array-like, shape (n_components)

Singular values in decreasing order, as a 1d array.

V: array-like, shape (n_components, n_samples)

Right singular vectors corresponding to singular values.

References

[1] Zhu, M. and Ghodsi, A. (2006). Automatic dimensionality selection from the scree plot via the use of profile likelihood. Computational Statistics & Data Analysis, 51(2), pp. 918-930.
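
A minimal usage sketch on a randomly generated matrix (illustrative only); when n_components is None, the embedding dimension is chosen automatically via select_dimension:

    import numpy as np
    from graspy.embed import selectSVD

    np.random.seed(0)
    X = np.random.rand(200, 50)

    # Let the Zhu & Ghodsi elbow selection pick the number of components.
    U, D, V = selectSVD(X, n_components=None, n_elbows=2, algorithm='randomized')
    print(U.shape, D.shape, V.shape)

    # Or request a fixed number of components explicitly.
    U3, D3, V3 = selectSVD(X, n_components=3)
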

Single graph embedding

class graspy.embed.AdjacencySpectralEmbed(n_components=None, n_elbows=2, algorithm='randomized', n_iter=5)[source]

Class for computing the adjacency spectral embedding of a graph.

The adjacency spectral embedding (ASE) is a k-dimensional Euclidean representation of the graph based on its adjacency matrix [1]. It relies on an SVD to reduce the dimensionality to the specified k, or if k is unspecified, can find a number of dimensions automatically (see graspy.embed.selectSVD).

Parameters:
n_components : int or None, default = None

Desired dimensionality of output data. If algorithm='full', n_components must be <= min(X.shape); otherwise, n_components must be < min(X.shape). If None, the optimal number of dimensions is chosen by select_dimension using the n_elbows argument.

n_elbows : int, optional, default: 2

If n_components=None, compute the optimal embedding dimension using select_dimension. Otherwise, ignored.

algorithm : {'full', 'truncated', 'randomized' (default)}, optional

SVD solver to use:

  • 'full'
    Computes full svd using scipy.linalg.svd
  • 'truncated'
    Computes truncated svd using scipy.sparse.linalg.svds
  • 'randomized'
    Computes randomized svd using sklearn.utils.extmath.randomized_svd
n_iter : int, optional (default = 5)

Number of iterations for the randomized SVD solver. Not used by 'full' or 'truncated'. The default is larger than the default in randomized_svd to handle sparse matrices that may have a large, slowly decaying spectrum.

Attributes:
latent_left_ : array, shape (n_samples, n_components)

Estimated left latent positions of the graph.

latent_right_ : array, shape (n_samples, n_components), or None

Estimated right latent positions of the graph. Only computed when the graph is directed or the adjacency matrix is asymmetric; otherwise None.

singular_values_ : array, shape (n_components)

Singular values associated with the latent position matrices.

indices_ : array, or None

If lcc is True, these are the indices of the vertices that were kept.

Notes

The singular value decomposition:

\[A = U \Sigma V^T\]

is used to find an orthonormal basis for a matrix, which in our case is the adjacency matrix of the graph. These basis vectors (in the matrices U or V) are ordered according to the amount of variance they explain in the original matrix. By selecting a subset of these basis vectors (through our choice of dimensionality reduction), we can find a lower-dimensional space in which to represent the graph.

References

[1] Sussman, D.L., Tang, M., Fishkind, D.E., Priebe, C.E. "A Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs," Journal of the American Statistical Association, Vol. 107(499), 2012.
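
A minimal usage sketch; the random binary adjacency matrices below are illustrative stand-ins for real graphs:

    import numpy as np
    from graspy.embed import AdjacencySpectralEmbed

    np.random.seed(0)

    # Undirected graph: symmetric binary adjacency matrix with no self-loops.
    A = (np.random.rand(100, 100) < 0.1).astype(float)
    A = np.triu(A, 1)
    A = A + A.T

    ase = AdjacencySpectralEmbed(n_components=2)
    X_hat = ase.fit_transform(A)  # shape (100, 2): one latent position per vertex

    # Directed graph: an asymmetric adjacency matrix yields a (left, right) tuple instead.
    A_dir = (np.random.rand(100, 100) < 0.1).astype(float)
    np.fill_diagonal(A_dir, 0)
    X_left, X_right = AdjacencySpectralEmbed(n_components=2).fit_transform(A_dir)
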
fit(graph)[source]

Fit ASE model to input graph

Parameters:
graph : array_like or networkx.Graph

Input graph to embed. See graspy.utils.import_graph.

Returns:
self : returns an instance of self.
fit_transform(graph)

Fit the model with graphs and apply the transformation.

n_dimension is either automatically determined or based on user input.

Parameters:
graph: np.ndarray or networkx.Graph
Returns:
out : np.ndarray, shape (n_vertices, n_dimension), or tuple of two such arrays

Both tuple elements have shape (n_vertices, n_dimension). A single np.ndarray holds the latent positions of an undirected graph, whereas a tuple holds the left and right latent positions of a directed graph.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:
self
class graspy.embed.LaplacianSpectralEmbed(form='DAD', n_components=None, n_elbows=2, algorithm='randomized', n_iter=5)[source]

Class for computing the Laplacian spectral embedding of a graph.

The Laplacian spectral embedding (LSE) is a k-dimensional Euclidean representation of the graph based on its Laplacian matrix [1]. It relies on an SVD to reduce the dimensionality to the specified k, or, if k is unspecified, can find a number of dimensions automatically.

Parameters:
n_components : int or None, default = None

Desired dimensionality of output data. If algorithm='full', n_components must be <= min(X.shape); otherwise, n_components must be < min(X.shape). If None, the optimal number of dimensions is chosen by select_dimension using the n_elbows argument.

n_elbows : int, optional, default: 2

If n_components=None, compute the optimal embedding dimension using select_dimension. Otherwise, ignored.

algorithm : {'full', 'truncated', 'randomized' (default)}, optional

SVD solver to use:

  • 'full'
    Computes full svd using scipy.linalg.svd
  • 'truncated'
    Computes truncated svd using scipy.sparse.linalg.svds
  • 'randomized'
    Computes randomized svd using sklearn.utils.extmath.randomized_svd
n_iter : int, optional (default = 5)

Number of iterations for the randomized SVD solver. Not used by 'full' or 'truncated'. The default is larger than the default in randomized_svd to handle sparse matrices that may have a large, slowly decaying spectrum.

Attributes:
latent_left_ : array, shape (n_samples, n_components)

Estimated left latent positions of the graph.

latent_right_ : array, shape (n_samples, n_components), or None

Estimated right latent positions of the graph. Only computed when the graph is directed or the adjacency matrix is asymmetric; otherwise None.

singular_values_ : array, shape (n_components)

Singular values associated with the latent position matrices.

indices_ : array, or None

If lcc is True, these are the indices of the vertices that were kept.

Notes

The singular value decomposition:

\[A = U \Sigma V^T\]

is used to find an orthonormal basis for a matrix, which in our case is the Laplacian matrix of the graph. These basis vectors (in the matrices U or V) are ordered according to the amount of variance they explain in the original matrix. By selecting a subset of these basis vectors (through our choice of dimensionality reduction), we can find a lower-dimensional space in which to represent the graph.

References

[1] Sussman, D.L., Tang, M., Fishkind, D.E., Priebe, C.E. "A Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs," Journal of the American Statistical Association, Vol. 107(499), 2012.
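
A minimal usage sketch analogous to AdjacencySpectralEmbed, using the default 'DAD' normalization; the random adjacency matrix is illustrative only:

    import numpy as np
    from graspy.embed import LaplacianSpectralEmbed

    np.random.seed(1)
    A = (np.random.rand(100, 100) < 0.2).astype(float)
    A = np.triu(A, 1)
    A = A + A.T  # symmetric binary adjacency matrix, no self-loops

    lse = LaplacianSpectralEmbed(form='DAD', n_components=2)
    X_hat = lse.fit_transform(A)  # shape (100, 2): latent positions from the normalized Laplacian
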
fit(graph)[source]

Fit LSE model to input graph

By default, uses the Laplacian normalization of the form:

\[L = D^{-1/2} A D^{-1/2}\]
Parameters:
graph : array_like or networkx.Graph

Input graph to embed. See graspy.utils.import_graph.

form : {'DAD' (default), 'I-DAD'}, optional

Specifies the type of Laplacian normalization to use.

Returns:
self : returns an instance of self.
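
For intuition, the 'DAD' normalization used by fit above can be written directly in NumPy. This is only an illustrative sketch of the formula, not graspy's internal implementation, and the helper name dad_laplacian is hypothetical:

    import numpy as np

    def dad_laplacian(A):
        # Sketch of L = D^{-1/2} A D^{-1/2} (the 'DAD' form).
        d = A.sum(axis=1)                      # vertex degrees
        d_inv_sqrt = np.zeros_like(d, dtype=float)
        nz = d > 0
        d_inv_sqrt[nz] = 1.0 / np.sqrt(d[nz])  # leave isolated vertices at zero
        return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    # The 'I-DAD' form would instead be np.eye(A.shape[0]) - dad_laplacian(A).
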
fit_transform(graph)

Fit the model with graphs and apply the transformation.

n_dimension is either automatically determined or based on user input.

Parameters:
graph: np.ndarray or networkx.Graph
Returns:
out : np.ndarray, shape (n_vertices, n_dimension), or tuple of two such arrays

Both tuple elements have shape (n_vertices, n_dimension). A single np.ndarray holds the latent positions of an undirected graph, whereas a tuple holds the left and right latent positions of a directed graph.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:
self

Multiple graph embedding

class graspy.embed.OmnibusEmbed(n_components=None, n_elbows=2, algorithm='randomized', n_iter=5)[source]

Omnibus embedding of arbitrary number of input graphs with matched vertex sets.

Given \(A_1, A_2, \ldots, A_m\), a collection of (possibly weighted) adjacency matrices of \(m\) undirected graphs with matched vertices, the \((mn \times mn)\) omnibus matrix \(M\) has \(ij\)-th block \(M_{ij} = \frac{1}{2}(A_i + A_j)\). The omnibus matrix is then embedded using adjacency spectral embedding.
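
For illustration, the omnibus matrix can be assembled block by block in NumPy; this sketch (with a hypothetical helper name) mirrors the definition above rather than graspy's internals:

    import numpy as np

    def omnibus_matrix(adjacencies):
        # Build the (m*n x m*n) omnibus matrix with blocks M_ij = (A_i + A_j) / 2;
        # the diagonal blocks therefore equal the individual adjacency matrices.
        m = len(adjacencies)
        n = adjacencies[0].shape[0]
        M = np.zeros((m * n, m * n))
        for i, A_i in enumerate(adjacencies):
            for j, A_j in enumerate(adjacencies):
                M[i * n:(i + 1) * n, j * n:(j + 1) * n] = (A_i + A_j) / 2
        return M
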

Parameters:
n_components : int or None, default = None

Desired dimensionality of output data. If algorithm='full', n_components must be <= min(X.shape); otherwise, n_components must be < min(X.shape). If None, the optimal number of dimensions is chosen by select_dimension using the n_elbows argument.

n_elbows : int, optional, default: 2

If n_components=None, compute the optimal embedding dimension using select_dimension. Otherwise, ignored.

algorithm : {'full', 'truncated', 'randomized' (default)}, optional

SVD solver to use:

  • 'full'
    Computes full svd using scipy.linalg.svd
  • 'truncated'
    Computes truncated svd using scipy.sparse.linalg.svds
  • 'randomized'
    Computes randomized svd using sklearn.utils.extmath.randomized_svd
n_iter : int, optional (default = 5)

Number of iterations for the randomized SVD solver. Not used by 'full' or 'truncated'. The default is larger than the default in randomized_svd to handle sparse matrices that may have a large, slowly decaying spectrum.

Attributes:
n_graphs_ : int

Number of graphs

n_vertices_ : int

Number of vertices in each graph

latent_left_ : array, shape (n_samples, n_components)

Estimated left latent positions of the graph.

latent_right_ : array, shape (n_samples, n_components), or None

Estimated right latent positions of the graph. Only computed when the graph is directed or the adjacency matrix is asymmetric; otherwise None.

singular_values_ : array, shape (n_components)

Singular values associated with the latent position matrices.

indices_ : array, or None

If lcc is True, these are the indices of the vertices that were kept.
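
A minimal usage sketch with two randomly generated undirected graphs on a shared vertex set (illustrative only; the helper random_undirected is hypothetical):

    import numpy as np
    from graspy.embed import OmnibusEmbed

    np.random.seed(2)

    def random_undirected(n, p):
        # Symmetric binary adjacency matrix with no self-loops.
        A = (np.random.rand(n, n) < p).astype(float)
        A = np.triu(A, 1)
        return A + A.T

    graphs = [random_undirected(50, 0.3), random_undirected(50, 0.3)]

    omni = OmnibusEmbed(n_components=2)
    Z_hat = omni.fit_transform(graphs)  # shape (2 * 50, 2): one block of rows per graph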

fit(graphs)[source]

Fit the model with graphs.

Parameters:
graphs : list of graphs, or array-like

List of array-like, (n_vertices, n_vertices), or list of networkx.Graph. If array-like, the shape must be (n_graphs, n_vertices, n_vertices)

Returns:
self : returns an instance of self.
fit_transform(graphs)[source]

Fit the model with graphs and apply the embedding on graphs. n_dimension is either automatically determined or based on user input.

Parameters:
graphs : list of graphs

List of array-like, (n_vertices, n_vertices), or list of networkx.Graph.

Returns:
out : array-like, shape (n_vertices * n_graphs, n_dimension) if the input graphs were undirected (symmetric)

If the graphs were directed, returns a tuple of two arrays of that shape, where the first holds the left latent positions and the second the right latent positions.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:
self

Dissimilarity graph embedding

class graspy.embed.ClassicalMDS(n_components=None, dissimilarity='euclidean')[source]

Classical multidimensional scaling (cMDS).

cMDS seeks a low-dimensional representation of the data in which the distances faithfully reflect the distances in the original high-dimensional space.

Parameters:
n_components : int, or None

Number of components to keep. If None, then it will run select_dimension to find the optimal embedding dimension.

dissimilarity : 'euclidean' | 'precomputed', optional, default: 'euclidean'

Dissimilarity measure to use:

  • 'euclidean':
    Pairwise Euclidean distances between points in the dataset.
  • 'precomputed':
    Pre-computed dissimilarities are passed directly to fit and fit_transform.
Attributes:
n_components : int

Equals the parameter n_components. If input n_components was None, then equals the optimal embedding dimension.

components_ : array, shape (n_components, n_features)

Principal axes in feature space.

singular_values_ : array, shape (n_components,)

The singular values corresponding to each of the selected components.

dissimilarity_matrix_ : array, shape (n_samples, n_samples)

Dissimilarity matrix.

References

Wickelmaier, Florian. "An introduction to MDS." Sound Quality Research Unit, Aalborg University, Denmark 46.5 (2003).
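
A minimal usage sketch with randomly generated data (illustrative only), showing both the default Euclidean mode and a precomputed dissimilarity matrix:

    import numpy as np
    from graspy.embed import ClassicalMDS

    np.random.seed(3)
    X = np.random.rand(30, 10)  # 30 samples in a 10-dimensional space

    cmds = ClassicalMDS(n_components=2)  # Euclidean dissimilarities by default
    X_new = cmds.fit_transform(X)        # shape (30, 2): embedded coordinates

    # Precomputed pairwise distances can be passed instead.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    X_new2 = ClassicalMDS(n_components=2, dissimilarity='precomputed').fit_transform(D)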

fit(X)[source]

Fit the model with X.

Parameters:
X : array_like

If dissimilarity=='precomputed', the input should be the dissimilarity matrix with shape (n_samples, n_samples). If dissimilarity=='euclidean', then the input should be 2d-array with shape (n_samples, n_features) or a 3d-array with shape (n_samples, n_features_1, n_features_2).

fit_transform(X)[source]

Fit the model with X and return the embedded coordinates.

Parameters:
X : nd-array

If dissimilarity=='precomputed', the input should be the dissimilarity matrix with shape (n_samples, n_samples). If dissimilarity=='euclidean', then the input should be array with shape (n_samples, n_features) or a nd-array with shape (n_samples, n_features_1, n_features_2, ..., n_features_d). First axis of nd-array must be n_samples.

Returns:
X_new : array-like, shape (n_samples, n_components)

Embedded input.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:
self