megnet.layers.graph package¶
Submodules¶
Module contents¶
Graph layers implementations
- class CrystalGraphLayer(*args, **kwargs)[source]¶
Bases:
megnet.layers.graph.base.GraphNetworkLayer
The CGCNN graph implementation as described in the paper
Xie et al. PHYSICAL REVIEW LETTERS 120, 145301 (2018)
- call(inputs, mask=None)¶
the logic of the layer, returns the final graph
- compute_output_shape(input_shape)[source]¶
compute static output shapes, returns list of tuple shapes
- phi_v(b_e_p, inputs)[source]¶
Update the atom attributes using the aggregated result b_e_p from the previous step and all the inputs; returns v_p.
- Parameters
activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.
use_bias (bool) – Default: True. Whether to use the bias term in the neural network.
kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.
bias_initializer (str) – Default: ‘zeros’. Initialization function for the bias values.
activity_regularizer (str) – Default: None. The regularization function for the output
kernel_constraint (str) – Default: None. Keras constraint for kernel values
bias_constraint (str) – Default: None. Keras constraint for bias values
kwargs (dictionary) – additional keyword args
- build(input_shapes)[source]¶
Build the weights for the layer
- Parameters
input_shapes (sequence of tuple) – the shapes of all input tensors
- compute_output_shape(input_shape)[source]¶
Compute output shapes from input shapes
- Parameters
input_shape (sequence of tuple) – input shapes
- Returns
sequence of tuples of output shapes
- get_config()[source]¶
Part of keras layer interface, where the signature is converted into a dict
- Returns
configurational dictionary
- phi_e(inputs)[source]¶
Edge update function
- Parameters
inputs (tuple of tensor) – the graph input tensors
- Returns
output tensor
- phi_u(b_e_p, b_v_p, inputs)[source]¶
- Parameters
b_e_p (tf.Tensor) – edge/bond to global aggregated tensor
b_v_p (tf.Tensor) – node/atom to global aggregated tensor
inputs (Sequence) – list or tuple for the graph inputs
- Returns
updated global/state attributes
- phi_v(b_ei_p, inputs)[source]¶
Node update function
- Parameters
b_ei_p (tensor) – edge aggregated tensor
inputs (tuple of tensors) – other graph inputs
- Returns
updated node tensor
- rho_e_u(e_p, inputs)[source]¶
Aggregate edge to state
- Parameters
e_p (tensor) – edge tensor
inputs (tuple of tensors) – other graph input tensors
- Returns
edge aggregated tensor for states
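To make the data flow concrete, here is a minimal NumPy sketch of the CGCNN-style gated node update described in Xie et al.: each bond forms z_k = concat(v_i, v_j, e_k), passes through a sigmoid gate and a softplus core, and the gated messages are summed onto the receiving atoms with a residual connection. The function and weight names are illustrative stand-ins, not the layer's actual variables or the exact megnet implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))

def cgcnn_update(v, e, index1, index2, W_f, b_f, W_s, b_s):
    """Gated CGCNN-style node update (illustrative names, not megnet's).

    v: (N, Fv) atom features; e: (M, Fe) bond features;
    index1/index2: (M,) receiving/sending atom index for each bond.
    """
    # z_k = concat(v_i, v_j, e_k) for each bond k
    z = np.concatenate([v[index1], v[index2], e], axis=1)
    # gated message: sigmoid gate times softplus core
    msg = sigmoid(z @ W_f + b_f) * softplus(z @ W_s + b_s)
    # sum messages onto the receiving atoms; residual connection keeps v
    v_new = v.copy()
    np.add.at(v_new, index1, msg)
    return v_new

# tiny demo: 3 atoms, 2 bonds, zero weights so the update is deterministic
v = np.ones((3, 4))
e = np.ones((2, 2))
index1, index2 = np.array([0, 1]), np.array([1, 0])
W = np.zeros((10, 4))  # 4 + 4 + 2 input features -> 4 output features
b = np.zeros(4)
v_new = cgcnn_update(v, e, index1, index2, W, b, W, b)
```

Note that atom 2 has no bonds in the demo, so the residual connection leaves its features unchanged.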
- class GraphNetworkLayer(*args, **kwargs)[source]¶
Bases:
keras.engine.base_layer.Layer
Implementation of a graph network layer. The current implementation uses a neural network for each update function, and sum or mean for each aggregation function.
- Methods:
call(inputs, mask=None): the logic of the layer, returns the final graph
compute_output_shape(input_shape): compute static output shapes, returns list of tuple shapes
build(input_shape): initialize the weights and biases for each function
phi_e(inputs): update function for bonds, returns updated bond attribute e_p
rho_e_v(e_p, inputs): aggregate updated bonds e_p to per-atom attributes, b_e_p
phi_v(b_e_p, inputs): update the atom attributes using the results b_e_p from the previous step and all the inputs, returns v_p
rho_e_u(e_p, inputs): aggregate bonds to the global attribute
rho_v_u(v_p, inputs): aggregate atoms to the global attributes
get_config(): part of keras interface for serialization
- Parameters
activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.
use_bias (bool) – Default: True. Whether to use the bias term in the neural network.
kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.
bias_initializer (str) – Default: ‘zeros’. Initialization function for the bias values.
activity_regularizer (str) – Default: None. The regularization function for the output
kernel_constraint (str) – Default: None. Keras constraint for kernel values
bias_constraint (str) – Default: None. Keras constraint for bias values
kwargs – additional keyword args
- call(inputs: Sequence, mask=None) Sequence [source]¶
Core logic of graph network
- Parameters
inputs (Sequence) – input tensors
mask (tensor) – mask tensor
- Returns
output tensor
- get_config() Dict [source]¶
Part of keras layer interface, where the signature is converted into a dict
- Returns
configurational dictionary
- phi_e(inputs: Sequence) tensorflow.python.framework.ops.Tensor [source]¶
This is for updating the edge attributes: e_k’ = phi_e(e_k, v_rk, v_sk, u)
- Parameters
inputs (Sequence) – list or tuple for the graph inputs
- Returns
updated edge/bond attributes
- phi_u(b_e_p: tensorflow.python.framework.ops.Tensor, b_v_p: tensorflow.python.framework.ops.Tensor, inputs: Sequence) tensorflow.python.framework.ops.Tensor [source]¶
u’ = phi_u(bar e’, bar v’, u)
- Parameters
b_e_p (tf.Tensor) – edge/bond to global aggregated tensor
b_v_p (tf.Tensor) – node/atom to global aggregated tensor
inputs (Sequence) – list or tuple for the graph inputs
- Returns
updated global/state attributes
- phi_v(b_ei_p: tensorflow.python.framework.ops.Tensor, inputs: Sequence)[source]¶
Step 3. Compute updated node attributes v_i’ = phi_v(bar e_i, vi, u)
- Parameters
b_ei_p (tf.Tensor) – edge-to-node aggregated tensor
inputs (Sequence) – list or tuple for the graph inputs
- Returns
updated node/atom attributes
- rho_e_u(e_p: tensorflow.python.framework.ops.Tensor, inputs: Sequence) tensorflow.python.framework.ops.Tensor [source]¶
Let V’ = {v_i’}, i = 1:N^v, and E’ = {(e_k’, r_k, s_k)}, k = 1:N^e. Then bar e’ = rho_e_u(E’).
- Parameters
e_p (tf.Tensor) – updated edge/bond attributes
inputs (Sequence) – list or tuple for the graph inputs
- Returns
edge/bond to global/state aggregated tensor
- rho_e_v(e_p: tensorflow.python.framework.ops.Tensor, inputs: Sequence) tensorflow.python.framework.ops.Tensor [source]¶
This is for step 2, aggregating edge attributes per node: E_i’ = {(e_k’, r_k, s_k)} with r_k = i, k = 1:N^e
- Parameters
e_p (tf.Tensor) – the updated edge attributes
inputs (Sequence) – list or tuple for the graph inputs
- Returns
edge/bond to node/atom aggregated tensor
- rho_v_u(v_p: tensorflow.python.framework.ops.Tensor, inputs: Sequence) tensorflow.python.framework.ops.Tensor [source]¶
bar v’ = rho_v_u(V’)
- Parameters
v_p (tf.Tensor) – updated atom/node attributes
inputs (Sequence) – list or tuple for the graph inputs
- Returns
atom/node to global/state aggregated tensor
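The update/aggregation sequence above (phi_e, then rho_e_v, phi_v, rho_e_u, rho_v_u, and finally phi_u) can be sketched end-to-end in NumPy. This sketch replaces each learned sub-network with a plain concatenation and uses mean pooling for every rho, so it shows only the data flow, not the learned behaviour; all names are illustrative, not megnet's internals.

```python
import numpy as np

def graph_network_step(e, v, u, index1, index2):
    """One full graph-network pass with concatenation 'updates' and mean pooling.

    The real layer learns phi_e/phi_v/phi_u as neural networks; here each phi
    is a plain concatenation so only the data flow is visible.
    e: (M, Fe) bonds; v: (N, Fv) atoms; u: (Fu,) state;
    index1/index2: (M,) receiving/sending atom for each bond.
    """
    N, M = v.shape[0], e.shape[0]
    # phi_e: update each bond from (e_k, v_rk, v_sk, u)
    e_p = np.concatenate([e, v[index1], v[index2], np.tile(u, (M, 1))], axis=1)
    # rho_e_v: mean-aggregate the updated bonds onto their receiving atoms
    b_e_p = np.zeros((N, e_p.shape[1]))
    counts = np.zeros(N)
    np.add.at(b_e_p, index1, e_p)
    np.add.at(counts, index1, 1)
    b_e_p /= np.maximum(counts, 1)[:, None]
    # phi_v: update each atom from (b_e_p_i, v_i, u)
    v_p = np.concatenate([b_e_p, v, np.tile(u, (N, 1))], axis=1)
    # rho_e_u and rho_v_u: aggregate bonds and atoms to the state
    b_e_u, b_v_u = e_p.mean(axis=0), v_p.mean(axis=0)
    # phi_u: update the state from (bar e', bar v', u)
    u_p = np.concatenate([b_e_u, b_v_u, u])
    return e_p, v_p, u_p

# tiny demo graph: 3 atoms, 2 bonds, a 4-dimensional state
e_p, v_p, u_p = graph_network_step(
    np.ones((2, 2)), np.ones((3, 3)), np.ones(4),
    np.array([0, 1]), np.array([1, 2]))
```

Because each phi here is a concatenation, the output widths grow at every stage (2+3+3+4 = 12 for bonds, 12+3+4 = 19 for atoms, 12+19+4 = 35 for the state); in the actual layer, the sub-networks project these back to the configured hidden sizes.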
- class InteractionLayer(*args, **kwargs)[source]¶
Bases:
megnet.layers.graph.base.GraphNetworkLayer
The continuous-filter InteractionLayer in SchNet
Schütt et al. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions
- call(inputs, mask=None)¶
the logic of the layer, returns the final graph
- compute_output_shape(input_shape)[source]¶
compute static output shapes, returns list of tuple shapes
- phi_v(b_e_p, inputs)[source]¶
Update the atom attributes using the aggregated result b_e_p from the previous step and all the inputs; returns v_p.
- Parameters
activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.
use_bias (bool) – Default: True. Whether to use the bias term in the neural network.
kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.
bias_initializer (str) – Default: ‘zeros’. Initialization function for the bias values.
activity_regularizer (str) – Default: None. The regularization function for the output
kernel_constraint (str) – Default: None. Keras constraint for kernel values
bias_constraint (str) – Default: None. Keras constraint for bias values
- build(input_shapes)[source]¶
Build the weights for the layer
- Parameters
input_shapes (sequence of tuple) – the shapes of all input tensors
- compute_output_shape(input_shape)[source]¶
Compute output shapes from input shapes
- Parameters
input_shape (sequence of tuple) – input shapes
- Returns
sequence of tuples of output shapes
- get_config()[source]¶
Part of keras layer interface, where the signature is converted into a dict
- Returns
configurational dictionary
- phi_e(inputs)[source]¶
Edge update function
- Parameters
inputs (tuple of tensor) – the graph input tensors
- Returns
output tensor
- phi_u(b_e_p, b_v_p, inputs)[source]¶
- Parameters
b_e_p (tf.Tensor) – edge/bond to global aggregated tensor
b_v_p (tf.Tensor) – node/atom to global aggregated tensor
inputs (Sequence) – list or tuple for the graph inputs
- Returns
updated global/state attributes
- phi_v(b_ei_p, inputs)[source]¶
Node update function
- Parameters
b_ei_p (tensor) – edge aggregated tensor
inputs (tuple of tensors) – other graph inputs
- Returns
updated node tensor
- rho_e_u(e_p, inputs)[source]¶
Aggregate edge to state
- Parameters
e_p (tensor) – edge tensor
inputs (tuple of tensors) – other graph input tensors
- Returns
edge aggregated tensor for states
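A minimal NumPy sketch of a continuous-filter convolution in the spirit of SchNet: distances are expanded in radial basis functions, a filter-generating map turns them into per-pair filters, and each filter is applied element-wise to the sending atom's features before summing onto the receiving atom. The single linear map W_filter here stands in for SchNet's filter-generating network; all names are assumptions, not the layer's actual variables.

```python
import numpy as np

def cfconv(x, d, index1, index2, centers, W_filter, gamma=10.0):
    """Continuous-filter convolution sketch (illustrative, not megnet's code).

    x: (N, F) atom features; d: (M,) distance per atom pair;
    index1/index2: (M,) receiving/sending atom per pair;
    centers: (K,) radial-basis centers; W_filter: (K, F) filter map.
    """
    # radial basis expansion of the distances
    rbf = np.exp(-gamma * (d[:, None] - centers[None, :]) ** 2)  # (M, K)
    # filter-generating step: a single linear map stands in for
    # SchNet's filter-generating network
    filters = rbf @ W_filter  # (M, F)
    # apply each filter element-wise to the sending atom's features
    msg = x[index2] * filters
    # sum messages onto the receiving atoms with a residual connection
    x_new = x.copy()
    np.add.at(x_new, index1, msg)
    return x_new

# tiny demo: 3 atoms, one bonded pair in both directions, one RBF center
x = np.ones((3, 2))
x_new = cfconv(x, d=np.array([1.0, 1.0]),
               index1=np.array([0, 1]), index2=np.array([1, 0]),
               centers=np.array([1.0]), W_filter=np.ones((1, 2)))
```

In the demo, atoms 0 and 1 exchange messages while atom 2 has no pairs, so only the first two atoms' features change.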
- class MEGNetLayer(*args, **kwargs)[source]¶
Bases:
megnet.layers.graph.base.GraphNetworkLayer
The MEGNet graph implementation as described in the paper
Chen, Chi; Ye, Weike; Zuo, Yunxing; Zheng, Chen; Ong, Shyue Ping. Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals, 2018, arXiv preprint. [arXiv:1812.05055](https://arxiv.org/abs/1812.05055)
- call(inputs, mask=None)¶
the logic of the layer, returns the final graph
- compute_output_shape(input_shape)[source]¶
compute static output shapes, returns list of tuple shapes
- phi_v(b_e_p, inputs)[source]¶
Update the atom attributes using the aggregated result b_e_p from the previous step and all the inputs; returns v_p.
- Parameters
units_v (list of integers) – the hidden layer sizes for node update neural network
units_e (list of integers) – the hidden layer sizes for edge update neural network
units_u (list of integers) – the hidden layer sizes for state update neural network
pool_method (str) – ‘mean’ or ‘sum’, determines how information is gathered to nodes from neighboring edges
activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.
use_bias (bool) – Default: True. Whether to use the bias term in the neural network.
kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.
bias_initializer (str) – Default: ‘zeros’. Initialization function for the bias values.
activity_regularizer (str) – Default: None. The regularization function for the output
kernel_constraint (str) – Default: None. Keras constraint for kernel values
bias_constraint (str) – Default: None. Keras constraint for bias values
- build(input_shapes)[source]¶
Build the weights for the layer
- Parameters
input_shapes (sequence of tuple) – the shapes of all input tensors
- compute_output_shape(input_shape)[source]¶
Compute output shapes from input shapes
- Parameters
input_shape (sequence of tuple) – input shapes
- Returns
sequence of tuples of output shapes
- get_config()[source]¶
Part of keras layer interface, where the signature is converted into a dict
- Returns
configurational dictionary
- phi_e(inputs)[source]¶
Edge update function
- Parameters
inputs (tuple of tensor) – the graph input tensors
- Returns
output tensor
- phi_u(b_e_p, b_v_p, inputs)[source]¶
- Parameters
b_e_p (tf.Tensor) – edge/bond to global aggregated tensor
b_v_p (tf.Tensor) – node/atom to global aggregated tensor
inputs (Sequence) – list or tuple for the graph inputs
- Returns
updated global/state attributes
- phi_v(b_ei_p, inputs)[source]¶
Node update function
- Parameters
b_ei_p (tensor) – edge aggregated tensor
inputs (tuple of tensors) – other graph inputs
- Returns
updated node tensor
- rho_e_u(e_p, inputs)[source]¶
Aggregate edge to state
- Parameters
e_p (tensor) – edge tensor
inputs (tuple of tensors) – other graph input tensors
- Returns
edge aggregated tensor for states
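The pool_method parameter above chooses between ‘mean’ and ‘sum’ when gathering bond information to atoms. A small NumPy sketch of that pooling step (the function name and shapes are illustrative, not the layer's internals):

```python
import numpy as np

def pool_bonds_to_atoms(e_p, index1, n_atoms, pool_method="mean"):
    """Gather updated bond features onto atoms, as pool_method selects.

    e_p: (M, F) updated bond features; index1: (M,) receiving atom per bond.
    """
    out = np.zeros((n_atoms, e_p.shape[1]))
    np.add.at(out, index1, e_p)  # 'sum' pooling
    if pool_method == "mean":
        counts = np.zeros(n_atoms)
        np.add.at(counts, index1, 1)
        out /= np.maximum(counts, 1)[:, None]  # avoid dividing isolated atoms
    return out

# two bonds both pointing at atom 0; atom 1 receives nothing
e_p = np.array([[2.0, 2.0], [4.0, 4.0]])
index1 = np.array([0, 0])
mean_pooled = pool_bonds_to_atoms(e_p, index1, 2, "mean")
sum_pooled = pool_bonds_to_atoms(e_p, index1, 2, "sum")
```

The choice matters for atoms with very different coordination numbers: ‘sum’ scales with the number of neighboring bonds, while ‘mean’ does not.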