https://pytorch-geometric-signed-directed.readthedocs.io/en/latest/modules/model.html |
The MagNet model for node classification from the MagNet: A Neural Network for Directed Graphs paper.
num_features ( int ) – Size of each input sample.
hidden ( int , optional ) – Number of hidden channels. Default: 2.
K ( int , optional ) – Order of the Chebyshev polynomial plus 1, i.e., Chebyshev filter size \(K\) . Default: 2.
q ( float , optional ) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.
label_dim ( int , optional ) – Number of output classes. Default: 2.
activation ( bool , optional ) – Whether to use the activation function or not. (default: False)
trainable_q ( bool , optional ) – Whether to set q to be trainable or not. (default: False)
layer ( int , optional ) – Number of MagNetConv layers. Default: 2.
dropout ( float , optional ) – Dropout value. (default: False)
normalization ( str , optional ) – The normalization scheme for the magnetic Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A} \odot \exp(i \Theta^{(q)})\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \odot \exp(i \Theta^{(q)})\)
\(\odot\) denotes the element-wise multiplication.
cached ( bool , optional ) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
Making a forward pass of the MagNet node classification model.
real, imag (PyTorch Float Tensor) - Node features.
edge_index (PyTorch Long Tensor) - Edge indices.
edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.
log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
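A minimal usage sketch follows. The import path and class name are assumptions based on this page's listing (check your installed version of the package); the constructor and forward arguments follow the parameter list above:

    import torch
    # Assumed import path; adjust if your version exports it differently.
    from torch_geometric_signed_directed.nn import MagNet_node_classification

    num_nodes, num_features, num_classes = 100, 16, 4
    x = torch.randn(num_nodes, num_features)
    edge_index = torch.randint(0, num_nodes, (2, 500))  # random directed edges

    model = MagNet_node_classification(num_features=num_features, hidden=32,
                                       K=2, q=0.25, label_dim=num_classes,
                                       layer=2, dropout=0.5)
    # The forward pass takes separate real and imaginary node features;
    # a common choice is to feed the raw features as both parts.
    log_prob = model(x, x, edge_index)
    print(log_prob.shape)  # (num_nodes, num_classes) log class probabilities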
An implementation of the DiGCN model without inception blocks for node classification from the Digraph Inception Convolutional Networks paper.
num_features ( int ) – Dimension of input features.
hidden ( int ) – Hidden dimension.
label_dim ( int ) – Number of clusters.
dropout ( float ) – Dropout value. (Default: 0.5)
Making a forward pass of the DiGCN node classification model without inception blocks.
x (PyTorch FloatTensor) - Node features.
edge_index (PyTorch LongTensor) - Edge indices.
edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
An implementation of the DiGCN model with inception blocks for node classification from the Digraph Inception Convolutional Networks paper.
num_features ( int ) – Dimension of input features.
hidden ( int ) – Hidden dimension.
label_dim ( int ) – Number of clusters.
dropout ( float ) – Dropout value.
Making a forward pass of the DiGCN node classification model.
x (PyTorch FloatTensor) - Node features.
edge_index_tuple (PyTorch LongTensor) - Tuple of edge indices.
edge_weight_tuple (PyTorch FloatTensor, optional) - Tuple of edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
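A sketch of calling the inception variant, whose forward pass takes tuples of edge indices and weights. The class name and import path are assumptions; in practice the two edge sets come from the package's dataset preprocessing (first- and second-order proximities), and random placeholders are used here:

    import torch
    # Assumed import path and class name for illustration.
    from torch_geometric_signed_directed.nn import DiGCN_Inception_Block_node_classification

    num_nodes, num_features = 100, 16
    x = torch.randn(num_nodes, num_features)
    ei1 = torch.randint(0, num_nodes, (2, 400))  # placeholder proximity 1
    ei2 = torch.randint(0, num_nodes, (2, 400))  # placeholder proximity 2
    ew1 = torch.rand(ei1.size(1))
    ew2 = torch.rand(ei2.size(1))

    model = DiGCN_Inception_Block_node_classification(
        num_features=num_features, hidden=32, label_dim=4, dropout=0.5)
    log_prob = model(x, (ei1, ei2), (ew1, ew2))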
The directed graph clustering model from the DIGRAC: Digraph Clustering Based on Flow Imbalance paper.
num_features ( int ) – Number of features.
hidden ( int ) – Hidden dimensions of the initial MLP.
nclass ( int ) – Number of clusters.
dropout ( float ) – Dropout probability.
hop ( int ) – Number of hops to consider.
fill_value ( float ) – Value for added self-loops.
Making a forward pass of the DIGRAC node clustering model.
edge_index (PyTorch FloatTensor) - Edge indices.
edge_weight (PyTorch FloatTensor) - Edge weights.
features (PyTorch FloatTensor) - Input node features, with shape (num_nodes, num_features).
z (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*hidden).
output (PyTorch FloatTensor) - Log of the probability assignment matrix, with shape (num_nodes, num_clusters).
predictions_cluster (PyTorch LongTensor) - Predicted labels.
prob (PyTorch FloatTensor) - Probability assignment matrix of different clusters, with shape (num_nodes, num_clusters).
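A usage sketch for the clustering model, which returns the four outputs listed above. The import path is an assumption; the argument order follows the forward description:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import DIGRAC_node_clustering

    num_nodes, num_features, num_clusters = 100, 16, 5
    features = torch.randn(num_nodes, num_features)
    edge_index = torch.randint(0, num_nodes, (2, 400))
    edge_weight = torch.rand(edge_index.size(1))

    model = DIGRAC_node_clustering(num_features=num_features, hidden=32,
                                   nclass=num_clusters, dropout=0.5,
                                   hop=2, fill_value=0.5)
    z, log_prob, pred, prob = model(edge_index, edge_weight, features)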
An implementation of the DGCN node classification model from the Directed Graph Convolutional Network paper.
num_features ( int ) – Dimension of input features.
hidden ( int ) – Hidden dimension.
label_dim ( int ) – Output dimension.
dropout ( float , optional ) – Dropout value. Default: None.
improved ( bool , optional ) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)
cached ( bool , optional ) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
Making a forward pass of the DGCN node classification model.
x (PyTorch FloatTensor) - Node features.
edge_index (PyTorch LongTensor) - Edge indices.
edge_in, edge_out (PyTorch LongTensor) - Edge indices for input and output directions, respectively.
in_w, out_w (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
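A usage sketch for DGCN. The import path is an assumption; edge_in/edge_out and their weights are normally produced by the package's preprocessing utilities, so random placeholders stand in for them here:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import DGCN_node_classification

    num_nodes, num_features = 100, 16
    x = torch.randn(num_nodes, num_features)
    edge_index = torch.randint(0, num_nodes, (2, 400))
    edge_in = torch.randint(0, num_nodes, (2, 400))   # placeholder in-edges
    edge_out = torch.randint(0, num_nodes, (2, 400))  # placeholder out-edges
    in_w = torch.rand(edge_in.size(1))
    out_w = torch.rand(edge_out.size(1))

    model = DGCN_node_classification(num_features=num_features, hidden=32,
                                     label_dim=4, dropout=0.5)
    log_prob = model(x, edge_index, edge_in, edge_out, in_w, out_w)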
An implementation of the DiGCL model from the Directed Graph Contrastive Learning paper.
in_channels ( int ) – Dimension of input features.
activation ( str ) – Activation function to use.
num_hidden ( int ) – Hidden dimension.
num_proj_hidden ( int ) – Hidden dimension for projection.
tau ( float ) – Tau value in the loss.
num_layers ( int ) – Number of layers for encoder.
Semi-supervised loss function computed in batches. Space complexity: O(BN), compared with O(N^2) for semi_loss.
z1 (PyTorch FloatTensor) - Node features.
z2 (PyTorch FloatTensor) - Node features.
loss (PyTorch FloatTensor) - Loss.
Making a forward pass of the DiGCL model.
x (PyTorch FloatTensor) - Node features.
edge_index (PyTorch LongTensor) - Edge indices.
edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Embeddings for all nodes, with shape (num_nodes, out_channels).
The DiGCL contrastive loss.
z1, z2 (PyTorch FloatTensor) - Node hidden representations.
mean (bool, optional) - Whether to return the mean of loss values, default True, otherwise return sum.
batch_size (int, optional) - Batch size, if 0 this means full-batch. Default 0.
ret (PyTorch FloatTensor) - Loss.
Nonlinear transformation of the input hidden feature.
z (PyTorch FloatTensor) - Node features.
z (PyTorch FloatTensor) - Projected node features.
Semi-supervised loss function.
z1 (PyTorch FloatTensor) - Node features.
z2 (PyTorch FloatTensor) - Node features.
loss (PyTorch FloatTensor) - Loss.
Normalized similarity calculation.
z1 (PyTorch FloatTensor) - Node features.
z2 (PyTorch FloatTensor) - Node features.
z (PyTorch FloatTensor) - Node-wise similarity.
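A contrastive-training sketch for DiGCL. The import path and the accepted activation string are assumptions; a real pipeline would build the two views with directed graph augmentations rather than reusing the same inputs:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import DiGCL

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 400))

    model = DiGCL(in_channels=16, activation='relu', num_hidden=32,
                  num_proj_hidden=32, tau=0.5, num_layers=2)

    # Two "views" of the graph; here they are identical for brevity.
    z1 = model(x, edge_index)
    z2 = model(x, edge_index)
    # batch_size=0 means full-batch, per the loss description above.
    contrastive = model.loss(z1, z2, mean=True, batch_size=0)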
The MagNet model for link prediction from the MagNet: A Neural Network for Directed Graphs paper.
num_features ( int ) – Size of each input sample.
hidden ( int , optional ) – Number of hidden channels. Default: 2.
K ( int , optional ) – Order of the Chebyshev polynomial plus 1, i.e., Chebyshev filter size \(K\) . Default: 2.
q ( float , optional ) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.
label_dim ( int , optional ) – Number of output classes. Default: 2.
activation ( bool , optional ) – Whether to use the activation function or not. (default: True)
trainable_q ( bool , optional ) – Whether to set q to be trainable or not. (default: False)
layer ( int , optional ) – Number of MagNetConv layers. Default: 2.
dropout ( float , optional ) – Dropout value. (default: 0.5)
normalization ( str , optional ) – The normalization scheme for the magnetic Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A} \odot \exp(i \Theta^{(q)})\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \odot \exp(i \Theta^{(q)})\)
\(\odot\) denotes the element-wise multiplication.
cached ( bool , optional ) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
Making a forward pass of the MagNet link prediction model.
real, imag (PyTorch Float Tensor) - Node features.
edge_index (PyTorch Long Tensor) - Edge indices.
query_edges (PyTorch Long Tensor) - Edge indices for querying labels.
edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.
log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).
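A link-prediction sketch. The import path is an assumption, and the (num_queries, 2) layout of query_edges is assumed from the description above:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import MagNet_link_prediction

    num_nodes, num_features = 100, 16
    x = torch.randn(num_nodes, num_features)
    edge_index = torch.randint(0, num_nodes, (2, 500))
    # Node pairs whose link class we want to score (assumed layout).
    query_edges = torch.randint(0, num_nodes, (200, 2))

    model = MagNet_link_prediction(num_features=num_features, hidden=32,
                                   K=2, q=0.25, label_dim=2, layer=2,
                                   dropout=0.5)
    log_prob = model(x, x, edge_index, query_edges)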
An implementation of the DiGCN model without inception blocks for link prediction from the Digraph Inception Convolutional Networks paper.
num_features ( int ) – Dimension of input features.
hidden ( int ) – Hidden dimension.
label_dim ( int ) – The dimension of labels.
dropout ( float ) – Dropout value. (Default: 0.5)
Making a forward pass of the DiGCN link prediction model without inception blocks.
x (PyTorch FloatTensor) - Node features.
edge_index (PyTorch LongTensor) - Edge indices.
edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.
query_edges (PyTorch Long Tensor) - Edge indices for querying labels.
x (PyTorch FloatTensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).
An implementation of the DiGCN model with inception blocks for link prediction from the Digraph Inception Convolutional Networks paper.
num_features ( int ) – Dimension of input features.
hidden ( int ) – Hidden dimension.
num_clusters ( int ) – Number of clusters.
dropout ( float ) – Dropout value.
Making a forward pass of the DiGCN link prediction model with inception blocks.
x (PyTorch FloatTensor) - Node features.
edge_index_tuple (PyTorch LongTensor) - Tuple of edge indices.
query_edges (PyTorch Long Tensor) - Edge indices for querying labels.
edge_weight_tuple (PyTorch FloatTensor, optional) - Tuple of edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).
An implementation of the DGCN link prediction model from the Directed Graph Convolutional Network paper.
input_dim ( int ) – Dimension of input features.
filter_num ( int ) – Hidden dimension.
label_dim ( int ) – Output dimension.
dropout ( float , optional ) – Dropout value. Default: None.
improved ( bool , optional ) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)
cached ( bool , optional ) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
Making a forward pass of the DGCN link prediction model.
x (PyTorch FloatTensor) - Node features.
edge_index (PyTorch LongTensor) - Edge indices.
edge_in, edge_out (PyTorch LongTensor) - Edge indices for input and output directions, respectively.
in_w, out_w (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Logarithmic class probabilities for all nodes, with shape (num_nodes, num_classes).
The magnetic graph convolutional operator from the MagNet: A Neural Network for Directed Graphs paper, where \(\mathbf{\hat{L}}\) denotes the scaled and normalized magnetic Laplacian \(\frac{2\mathbf{L}}{\lambda_{\max}} - \mathbf{I}\).
in_channels ( int ) – Size of each input sample.
out_channels ( int ) – Size of each output sample.
K ( int ) – Order of the Chebyshev polynomial plus 1, i.e., Chebyshev filter size \(K\) .
q ( float , optional ) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.
trainable_q ( bool , optional ) – Whether to set q to be trainable or not. (default: False)
normalization ( str , optional ) – The normalization scheme for the magnetic Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A} \odot \exp(i \Theta^{(q)})\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \odot \exp(i \Theta^{(q)})\)
\(\odot\) denotes the element-wise multiplication.
cached ( bool , optional ) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs ( optional ) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
Making a forward pass of the MagNet Convolution layer.
x_real, x_imag (PyTorch Float Tensor) - Node features.
edge_index (PyTorch Long Tensor) - Edge indices.
edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.
lambda_max (optional, but mandatory if normalization is None) - Largest eigenvalue of Laplacian.
out_real, out_imag (PyTorch Float Tensor) - Hidden state tensor for all nodes, with shape (N_nodes, F_out).
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g., x_i and x_j.
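A layer-level sketch for MagNetConv. The import path is an assumption; the call signature follows the forward description above:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import MagNetConv

    conv = MagNetConv(in_channels=16, out_channels=32, K=2, q=0.25,
                      trainable_q=False, normalization='sym')

    x = torch.randn(50, 16)
    edge_index = torch.randint(0, 50, (2, 200))
    # The layer keeps separate real and imaginary channels; a common
    # initialization is to use the raw features for both parts.
    out_real, out_imag = conv(x, x, edge_index)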
The graph convolutional operator from the Digraph Inception Convolutional Networks paper. The spectral operation is the same as in Kipf's GCN. DiGCN preprocesses the adjacency matrix and does not require a normalization operation during the convolution operation.
in_channels ( int ) – Size of each input sample.
out_channels ( int ) – Size of each output sample.
cached ( bool , optional ) – If set to True, the layer will cache the adjacency matrix on first execution, and will use the cached version for further executions. Please note that all the normalized adjacency matrices (including undirected ones) are calculated in the dataset preprocessing to reduce time consumption. This parameter should only be set to True in transductive learning scenarios. (default: False)
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs ( optional ) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
Making a forward pass of the DiGCN Convolution layer.
x (PyTorch FloatTensor) - Node features.
edge_index (PyTorch LongTensor) - Edge indices.
edge_weight (PyTorch FloatTensor, optional) - Edge weights corresponding to edge indices.
x (PyTorch FloatTensor) - Hidden state tensor for all nodes.
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g., x_i and x_j.
Updates node embeddings in analogy to \(\gamma_{\mathbf{\Theta}}\) for each node \(i \in \mathcal{V}\). Takes in the output of aggregation as first argument and any argument which was initially passed to propagate().
An implementation of the inception block model from the Digraph Inception Convolutional Networks paper.
in_dim ( int ) – Dimension of input.
out_dim ( int ) – Dimension of output.
Making a forward pass of the DiGCN inception block model.
x (PyTorch FloatTensor) - Node features.
edge_index, edge_index2 (PyTorch LongTensor) - Edge indices.
edge_weight, edge_weight2 (PyTorch FloatTensor) - Edge weights corresponding to edge indices.
x0, x1, x2 (PyTorch FloatTensor) - Hidden representations.
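A sketch of the inception block. The import path and the exact argument order are assumptions (the listing above groups indices and weights without fixing an order); the final fusion of the three scales is a modeling choice, not mandated by the block:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import InceptionBlock

    num_nodes = 100
    x = torch.randn(num_nodes, 16)
    edge_index = torch.randint(0, num_nodes, (2, 300))
    edge_index2 = torch.randint(0, num_nodes, (2, 300))
    edge_weight = torch.rand(300)
    edge_weight2 = torch.rand(300)

    block = InceptionBlock(in_dim=16, out_dim=32)
    # Argument order assumed: features, then each edge set with its weights.
    x0, x1, x2 = block(x, edge_index, edge_weight, edge_index2, edge_weight2)
    out = x0 + x1 + x2  # simple additive fusion of the three scales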
The directed mixed-path aggregation model from the DIGRAC: Digraph Clustering Based on Flow Imbalance paper.
hop ( int ) – Number of hops to consider.
fill_value ( float , optional ) – The layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + \text{fill\_value} \cdot \mathbf{I}\). (default: 0.5)
Making a forward pass of DIMPA.
x_s (PyTorch FloatTensor) - Source hidden representations.
x_t (PyTorch FloatTensor) - Target hidden representations.
edge_index (PyTorch FloatTensor) - Edge indices.
edge_weight (PyTorch FloatTensor) - Edge weights.
feat (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*input_dim).
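A sketch of DIMPA as a standalone aggregator. The import path is an assumption; the call follows the forward description above:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import DIMPA

    num_nodes, hidden = 100, 16
    x_s = torch.randn(num_nodes, hidden)  # source-role representations
    x_t = torch.randn(num_nodes, hidden)  # target-role representations
    edge_index = torch.randint(0, num_nodes, (2, 400))
    edge_weight = torch.rand(edge_index.size(1))

    dimpa = DIMPA(hop=2, fill_value=0.5)
    feat = dimpa(x_s, x_t, edge_index, edge_weight)
    print(feat.shape)  # (num_nodes, 2 * hidden)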
An implementation of the graph convolutional operator from the Directed Graph Convolutional Network paper. The same as Kipf's GCN, but with the trainable weights removed.
improved ( bool , optional ) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)
cached ( bool , optional ) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
add_self_loops ( bool , optional ) – If set to False, will not add self-loops to the input graph. (default: True)
normalize ( bool , optional ) – Whether to add self-loops and compute symmetric normalization coefficients on the fly. (default: True)
**kwargs ( optional ) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
Making a forward pass of the graph convolutional operator.
x (PyTorch FloatTensor) - Node features.
edge_index (Adj) - Edge indices.
edge_weight (OptTensor, optional) - Edge weights corresponding to edge indices.
out (PyTorch FloatTensor) - Hidden state tensor for all nodes.
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g., x_i and x_j.
Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called in case it is implemented and propagation takes place based on a torch_sparse.SparseTensor.
The signed graph clustering model from the SSSNET: Semi-Supervised Signed Network Clustering paper.
nfeat ( int ) – Number of features.
hidden ( int ) – Hidden dimensions of the initial MLP.
nclass ( int ) – Number of clusters.
dropout ( float ) – Dropout probability.
hop ( int ) – Number of hops to consider.
fill_value ( float ) – Value for added self-loops for the positive part of the adjacency matrix.
directed ( bool , optional ) – Whether the input network is directed or not. (default: False)
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
Making a forward pass of the SSSNET.
edge_index_p, edge_index_n (PyTorch FloatTensor) - Edge indices for positive and negative parts.
edge_weight_p, edge_weight_n (PyTorch FloatTensor) - Edge weights for positive and negative parts.
features (PyTorch FloatTensor) - Input node features, with shape (num_nodes, num_features).
z (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*hidden) for undirected graphs and (num_nodes, 4*hidden) for directed graphs.
output (PyTorch FloatTensor) - Log of the probability assignment matrix, with shape (num_nodes, num_clusters).
predictions_cluster (PyTorch LongTensor) - Predicted labels.
prob (PyTorch FloatTensor) - Probability assignment matrix of different clusters, with shape (num_nodes, num_clusters).
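A clustering sketch for SSSNET. The import path and the exact interleaving of positive/negative arguments are assumptions based on the forward description; the positive and negative parts would normally come from splitting a signed adjacency matrix by edge sign:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import SSSNET_node_clustering

    num_nodes, num_features, num_clusters = 100, 16, 3
    features = torch.randn(num_nodes, num_features)
    edge_index_p = torch.randint(0, num_nodes, (2, 300))  # positive part
    edge_weight_p = torch.rand(300)
    edge_index_n = torch.randint(0, num_nodes, (2, 300))  # negative part
    edge_weight_n = torch.rand(300)

    model = SSSNET_node_clustering(nfeat=num_features, hidden=32,
                                   nclass=num_clusters, dropout=0.5,
                                   hop=2, fill_value=0.5, directed=False)
    # Argument order assumed: each edge set paired with its weights.
    z, log_prob, pred, prob = model(edge_index_p, edge_weight_p,
                                    edge_index_n, edge_weight_n, features)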
The signed graph link prediction model adapted from the SSSNET: Semi-Supervised Signed Network Clustering paper.
nfeat ( int ) – Number of features.
hidden ( int ) – Hidden dimensions of the initial MLP.
nclass ( int ) – Number of link classes.
dropout ( float ) – Dropout probability.
hop ( int ) – Number of hops to consider.
fill_value ( float ) – Value for added self-loops for the positive part of the adjacency matrix.
directed ( bool , optional ) – Whether the input network is directed or not. (default: False)
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
Making a forward pass of the SSSNET.
edge_index_p, edge_index_n (PyTorch FloatTensor) - Edge indices for positive and negative parts.
edge_weight_p, edge_weight_n (PyTorch FloatTensor) - Edge weights for positive and negative parts.
features (PyTorch FloatTensor) - Input node features, with shape (num_nodes, num_features).
query_edges (PyTorch Long Tensor) - Edge indices for querying labels.
log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).
The signed mixed-path aggregation model from the SSSNET: Semi-Supervised Signed Network Clustering paper.
hop ( int ) – Number of hops to consider.
fill_value ( float ) – Value for added self-loops for the positive part of the adjacency matrix.
directed ( bool , optional ) – Whether the input network is directed or not. (default: False)
Making a forward pass of SIMPA.
edge_index_p, edge_index_n (PyTorch FloatTensor) - Edge indices for positive and negative parts.
edge_weight_p, edge_weight_n (PyTorch FloatTensor) - Edge weights for positive and negative parts.
x_p (PyTorch FloatTensor) - Source positive hidden representations.
x_n (PyTorch FloatTensor) - Source negative hidden representations.
x_pt (PyTorch FloatTensor, optional) - Target positive hidden representations. Default: None.
x_nt (PyTorch FloatTensor, optional) - Target negative hidden representations. Default: None.
feat (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*input_dim) for undirected graphs and (num_nodes, 4*input_dim) for directed graphs.
The SDGNN model from the “SDGNN: Learning Node Representation for Signed Directed Networks” paper.
node_num ( int , optional ) – The number of nodes.
edge_index_s ( LongTensor ) – The edgelist with sign. (e.g., torch.LongTensor([[0, 1, -1], [0, 2, 1]]))
in_dim ( int , optional ) – Size of each input sample features. Defaults to 20.
out_dim ( int ) – Size of each hidden embeddings. Defaults to 20.
layer_num ( int , optional ) – Number of layers. Defaults to 2.
init_emb ( FloatTensor , optional ) – The initial embeddings. Defaults to None, which will use TSVD as initial embeddings.
init_emb_grad ( bool , optional ) – Whether to set the initial embeddings to be trainable. (default: False)
lamb_d ( float , optional ) – Balances the direction loss contributions of the overall objective. (default: 1.0)
lamb_t ( float , optional ) – Balances the triangle loss contributions of the overall objective. (default: 1.0)
Defines the computation performed at every call. Should be overridden by all subclasses. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
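A construction sketch for SDGNN using the documented edgelist format, one (source, target, sign) row per edge. The import path is an assumption, as is the no-argument forward call (the model stores its edge list at construction in this listing):

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import SDGNN

    # Signed directed edgelist in the documented (source, target, sign) format.
    edge_index_s = torch.LongTensor([[0, 1, 1], [1, 2, -1], [2, 0, 1]])

    model = SDGNN(node_num=3, edge_index_s=edge_index_s,
                  in_dim=20, out_dim=20, layer_num=2)
    # Call the Module instance (not forward()) so registered hooks run;
    # the return value is assumed to be the learned node embeddings.
    emb = model()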
The signed graph attention network model (SiGAT) from the “Signed Graph Attention Networks” paper.
node_num ( int ) – Number of nodes.
edge_index_s ( list ) – The edgelist with sign. (e.g., [[0, 1, -1]] )
in_dim ( int , optional ) – Size of each input sample features. Defaults to 20.
out_dim ( int ) – Size of each output embeddings. Defaults to 20.
init_emb ( FloatTensor , optional ) – The initial embeddings. Defaults to None, which will use TSVD as initial embeddings.
init_emb_grad ( bool , optional ) – Whether to set the initial embeddings to be trainable. (default: False)
Defines the computation performed at every call. Should be overridden by all subclasses. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
The signed graph convolutional network model from the “Signed Graph Convolutional Network” paper. Internally, the first part of this module uses the torch_geometric.nn.conv.SignedConv operator. We have made some modifications to the original model torch_geometric.nn.SignedGCN for the uniformity of model inputs.
node_num ( int ) – The number of nodes.
edge_index_s ( LongTensor ) – The edgelist with sign. (e.g., torch.LongTensor([[0, 1, -1], [0, 2, 1]]) )
in_dim ( int , optional ) – Size of each input sample features. Defaults to 64.
out_dim ( int , optional ) – Size of each output embeddings. Defaults to 64.
layer_num ( int , optional ) – Number of layers. Defaults to 2.
init_emb ( FloatTensor , optional ) – The initial embeddings. Defaults to None, which will use TSVD as initial embeddings.
init_emb_grad ( bool , optional ) – Whether to set the initial embeddings to be trainable. (default: False)
lamb ( float , optional ) – Balances the contributions of the overall objective. (default: 5)
norm_emb ( bool , optional ) – Whether to normalize embeddings. (default: False)
Defines the computation performed at every call. Should be overridden by all subclasses. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
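A matching construction sketch for SGCN, analogous to the SDGNN example above; the import path and the no-argument forward call are assumptions:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import SGCN

    edge_index_s = torch.LongTensor([[0, 1, 1], [1, 2, -1], [2, 0, 1]])
    model = SGCN(node_num=3, edge_index_s=edge_index_s,
                 in_dim=64, out_dim=64, layer_num=2, lamb=5)
    emb = model()  # assumed to return the learned node embeddings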
The signed graph attentional layers operator from the “Learning Signed Network Embedding via Graph Attention” paper.
:param node_num: The number of nodes.
:type node_num: int
:param edge_index_s: The edgelist with sign. (e.g., torch.LongTensor([[0, 1, -1], [0, 2, 1]]))
:type edge_index_s: LongTensor
:param in_dim: Size of each input sample features. Defaults to 64.
:type in_dim: int, optional
:param out_dim: Size of each output embedding. Defaults to 64.
:type out_dim: int, optional
:param layer_num: Number of layers. Defaults to 2.
:type layer_num: int, optional
:param init_emb: The initial embeddings. Defaults to None, which will use TSVD as initial embeddings.
:type init_emb: FloatTensor, optional
:param init_emb_grad: Optimize initial embeddings or not.
:type init_emb_grad: bool, optional
:param lamb: Balances the contributions of the overall objective. (default: 5)
:type lamb: float, optional
Defines the computation performed at every call. Should be overridden by all subclasses. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
The signed graph attentional layers operator from the “Learning Signed Network Embedding via Graph Attention” paper:
\[ \begin{align}\begin{aligned}\mathbf{h}_{i}^{\mathcal{B}(l)}=\tanh \left(\sum_{j \in \hat{\mathcal{N}}_{i}^{+}, k \in \mathcal{N}_{i}^{-}} \alpha_{i j}^{\mathcal{B}(l)} \mathbf{h}_{j}^{\mathcal{B}(l-1)} \mathbf{W}^{\mathcal{B}(l)} +\alpha_{i k}^{\mathcal{B}(l)} \mathbf{h}_{k}^{\mathcal{U}(l-1)} \mathbf{W}^{\mathcal{B}(l)}\right)\\\mathbf{h}_{i}^{\mathcal{U}(l)}=\tanh \left(\sum_{j \in \hat{\mathcal{N}}_{i}^{+}, k \in \mathcal{N}_{i}^{-}} \alpha_{i j}^{\mathcal{U}(l)} \mathbf{h}_{j}^{\mathcal{U}(l-1)} \mathbf{W}^{\mathcal{U}(l)} +\alpha_{i k}^{\mathcal{U}(l)} \mathbf{h}_{k}^{\mathcal{B}(l-1)} \mathbf{W}^{\mathcal{U}(l)}\right)\end{aligned}\end{align} \]
In case first_aggr is False, the layer expects x to be a tensor where x[:, :in_dim] denotes the positive node features \(\mathbf{X}^{(\textrm{pos})}\) and x[:, in_dim:] denotes the negative node features \(\mathbf{X}^{(\textrm{neg})}\).
in_dim ( int or tuple ) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
out_dim ( int ) – Size of each output sample.
first_aggr ( bool ) – Denotes which aggregation formula to use.
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs ( optional ) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g., x_i and x_j.
The signed graph convolutional operator from the “Signed Graph Convolutional Network” paper:
\[ \begin{align}\begin{aligned}\mathbf{x}_v^{(\textrm{pos})} &= \mathbf{\Theta}^{(\textrm{pos})} \left[ \frac{1}{|\mathcal{N}^{+}(v)|} \sum_{w \in \mathcal{N}^{+}(v)} \mathbf{x}_w , \mathbf{x}_v \right]\\\mathbf{x}_v^{(\textrm{neg})} &= \mathbf{\Theta}^{(\textrm{neg})} \left[ \frac{1}{|\mathcal{N}^{-}(v)|} \sum_{w \in \mathcal{N}^{-}(v)} \mathbf{x}_w , \mathbf{x}_v \right]\end{aligned}\end{align} \]
if first_aggr is set to True, and

\[ \begin{align}\begin{aligned}\mathbf{x}_v^{(\textrm{pos})} &= \mathbf{\Theta}^{(\textrm{pos})} \left[ \frac{1}{|\mathcal{N}^{+}(v)|} \sum_{w \in \mathcal{N}^{+}(v)} \mathbf{x}_w^{(\textrm{pos})}, \frac{1}{|\mathcal{N}^{-}(v)|} \sum_{w \in \mathcal{N}^{-}(v)} \mathbf{x}_w^{(\textrm{neg})}, \mathbf{x}_v^{(\textrm{pos})} \right]\\\mathbf{x}_v^{(\textrm{neg})} &= \mathbf{\Theta}^{(\textrm{neg})} \left[ \frac{1}{|\mathcal{N}^{+}(v)|} \sum_{w \in \mathcal{N}^{+}(v)} \mathbf{x}_w^{(\textrm{neg})}, \frac{1}{|\mathcal{N}^{-}(v)|} \sum_{w \in \mathcal{N}^{-}(v)} \mathbf{x}_w^{(\textrm{pos})}, \mathbf{x}_v^{(\textrm{neg})} \right]\end{aligned}\end{align} \]

otherwise.
In case first_aggr is False, the layer expects x to be a tensor where x[:, :in_dim] denotes the positive node features \(\mathbf{X}^{(\textrm{pos})}\) and x[:, in_dim:] denotes the negative node features \(\mathbf{X}^{(\textrm{neg})}\).
in_dim ( int or tuple ) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
out_dim ( int ) – Size of each output sample.
first_aggr ( bool ) – Denotes which aggregation formula to use.
norm_emb ( bool , optional ) – Whether to normalize embeddings. (default: False)
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs ( optional ) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
Defines the computation performed at every call. Should be overridden by all subclasses. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g., x_i and x_j.
Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called in case it is implemented and propagation takes place based on a torch_sparse.SparseTensor.
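To make the first_aggr=False input layout concrete, here is a plain-tensor sketch of how the positive and negative feature blocks are packed (no layer call, since constructor and call signatures vary across versions):

    import torch

    in_dim = 8
    x_pos = torch.randn(30, in_dim)           # X^(pos)
    x_neg = torch.randn(30, in_dim)           # X^(neg)
    # The layer with first_aggr=False expects both parts concatenated:
    x = torch.cat([x_pos, x_neg], dim=-1)     # shape (30, 2 * in_dim)
    assert torch.equal(x[:, :in_dim], x_pos)  # positive block
    assert torch.equal(x[:, in_dim:], x_neg)  # negative block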
The MSGNN model for link prediction from the MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian paper.
num_features ( int ) – Size of each input sample.
hidden ( int , optional ) – Number of hidden channels. Default: 2.
K ( int , optional ) – Order of the Chebyshev polynomial plus 1, i.e., Chebyshev filter size \(K\) . Default: 2.
q ( float , optional ) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.
label_dim ( int , optional ) – Number of output classes. Default: 2.
activation ( bool , optional ) – Whether to use the activation function or not. (default: True)
trainable_q ( bool , optional ) – Whether to set q to be trainable or not. (default: False)
layer ( int , optional ) – Number of MSConv layers. Default: 2.
dropout ( float , optional ) – Dropout value. (default: 0.5)
normalization ( str , optional ) – The normalization scheme for the signed directed Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \bar{\mathbf{D}} - \mathbf{A} \odot \exp(i \Theta^{(q)})\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \bar{\mathbf{D}}^{-1/2} \mathbf{A} \bar{\mathbf{D}}^{-1/2} \odot \exp(i \Theta^{(q)})\)
\(\odot\) denotes the element-wise multiplication.
cached ( bool , optional ) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
conv_bias ( bool , optional ) – Whether to use bias in the convolutional layers. (default: True)
absolute_degree ( bool , optional ) – Whether to calculate the degree matrix with respect to absolute entries of the adjacency matrix. (default: True)
Making a forward pass of the MSGNN link prediction model.
real, imag (PyTorch Float Tensor) - Node features.
edge_index (PyTorch Long Tensor) - Edge indices.
query_edges (PyTorch Long Tensor) - Edge indices for querying labels.
edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.
log_prob (PyTorch Float Tensor) - Logarithmic class probabilities for all query edges, with shape (num_query_edges, num_classes).
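A link-prediction sketch for MSGNN, analogous to the MagNet example above but with signed edge weights. The import path and the (num_queries, 2) layout of query_edges are assumptions:

    import torch
    # Assumed import path for illustration.
    from torch_geometric_signed_directed.nn import MSGNN_link_prediction

    num_nodes, num_features = 100, 16
    x = torch.randn(num_nodes, num_features)
    edge_index = torch.randint(0, num_nodes, (2, 500))
    # Signed weights are allowed; the magnetic signed Laplacian is built
    # from them internally.
    edge_weight = torch.randn(edge_index.size(1))
    query_edges = torch.randint(0, num_nodes, (200, 2))

    model = MSGNN_link_prediction(num_features=num_features, hidden=32,
                                  K=2, q=0.25, label_dim=2, layer=2,
                                  dropout=0.5)
    # Argument order follows the forward description above.
    log_prob = model(x, x, edge_index, query_edges, edge_weight)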
The MSGNN model for node classification from the MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian paper.
num_features ( int ) – Size of each input sample.
hidden ( int , optional ) – Number of hidden channels. Default: 2.
K ( int , optional ) – Order of the Chebyshev polynomial. Default: 2.
q ( float , optional ) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.
label_dim ( int , optional ) – Number of output classes. Default: 2.
activation ( bool , optional ) – Whether to use the activation function or not. (default: False)
trainable_q ( bool , optional ) – Whether to set q to be trainable or not. (default: False)
layer ( int , optional ) – Number of MSConv layers. Default: 2.
dropout ( float , optional ) – Dropout value. (default: False)
normalization ( str , optional ) – The normalization scheme for the signed directed Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \bar{\mathbf{D}} - \mathbf{A} \odot \exp(i \Theta^{(q)})\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \bar{\mathbf{D}}^{-1/2} \mathbf{A} \bar{\mathbf{D}}^{-1/2} \odot \exp(i \Theta^{(q)})\)
\(\odot\) denotes the element-wise multiplication.
cached ( bool , optional ) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
conv_bias ( bool , optional ) – Whether to use bias in the convolutional layers. (default: True)
absolute_degree ( bool , optional ) – Whether to calculate the degree matrix with respect to absolute entries of the adjacency matrix. (default: True)
Making a forward pass of the MSGNN node classification model.
real, imag (PyTorch Float Tensor) - Node features.
edge_index (PyTorch Long Tensor) - Edge indices.
edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.
z (PyTorch FloatTensor) - Embedding matrix, with shape (num_nodes, 2*hidden) for undirected graphs and (num_nodes, 4*hidden) for directed graphs.
output (PyTorch FloatTensor) - Log of the probability assignment matrix, with shape (num_nodes, num_clusters).
predictions_cluster (PyTorch LongTensor) - Predicted labels.
prob (PyTorch FloatTensor) - Probability assignment matrix of different clusters, with shape (num_nodes, num_clusters).
Magnetic Signed Laplacian Convolution Layer from the MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian paper.
in_channels ( int ) – Size of each input sample.
out_channels ( int ) – Size of each output sample.
K ( int ) – Order of the Chebyshev polynomial plus 1, i.e., Chebyshev filter size \(K\) .
q ( float , optional ) – Initial value of the phase parameter, 0 <= q <= 0.25. Default: 0.25.
trainable_q ( bool , optional ) – Whether to set q to be trainable or not. (default: False)
normalization ( str , optional ) – The normalization scheme for the magnetic Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \bar{\mathbf{D}} - \mathbf{A} \odot \exp(i \Theta^{(q)})\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \bar{\mathbf{D}}^{-1/2} \mathbf{A} \bar{\mathbf{D}}^{-1/2} \odot \exp(i \Theta^{(q)})\)
\(\odot\) denotes the element-wise multiplication.
cached ( bool , optional ) – If set to True, the layer will cache the __norm__ matrix on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
bias ( bool , optional ) – If set to False, the layer will not learn an additive bias. (default: True)
absolute_degree ( bool , optional ) – Whether to calculate the degree matrix with respect to absolute entries of the adjacency matrix. (default: True)
**kwargs ( optional ) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
Making a forward pass of the Signed Directed Magnetic Laplacian Convolution layer.
x_real, x_imag (PyTorch Float Tensor) - Node features.
edge_index (PyTorch Long Tensor) - Edge indices.
edge_weight (PyTorch Float Tensor, optional) - Edge weights corresponding to edge indices.
lambda_max (optional, but mandatory if normalization is None) - Largest eigenvalue of Laplacian.
out_real, out_imag (PyTorch Float Tensor) - Hidden state tensor for all nodes, with shape (N_nodes, F_out).
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g., x_i and x_j.
The complex ReLU layer from the MagNet: A Neural Network for Directed Graphs paper.
complex_relu ( real : torch.FloatTensor , img : torch.FloatTensor ) – Complex ReLU function.
real, imag (PyTorch Float Tensor) - Node features.
real, imag (PyTorch Float Tensor) - Node features after complex ReLU.
Making a forward pass of the complex ReLU layer.
real, imag (PyTorch Float Tensor) - Node features.
real, imag (PyTorch Float Tensor) - Node features after complex ReLU.
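A minimal re-implementation sketch of this complex ReLU, for illustration only: following the MagNet paper, both the real and imaginary parts are kept where the real part is non-negative and zeroed elsewhere.

    import torch

    def complex_relu(real: torch.Tensor, imag: torch.Tensor):
        # Mask both parts by the sign of the real part, as described above.
        mask = (real >= 0).to(real.dtype)
        return mask * real, mask * imag

    real = torch.randn(5, 4)
    imag = torch.randn(5, 4)
    out_real, out_imag = complex_relu(real, imag)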