megnet.layers package

Module contents

Megnet layer implementations. This subpackage includes:

  1. Graph convolution layers

  2. Readout layers

class CrystalGraphLayer(*args, **kwargs)[source]

Bases: megnet.layers.graph.base.GraphNetworkLayer

The CGCNN graph implementation as described in the paper

Xie and Grossman, Phys. Rev. Lett. 120, 145301 (2018)

call(inputs, mask=None)

The layer logic; returns the updated graph.

compute_output_shape(input_shape)[source]

Compute static output shapes; returns a list of tuple shapes.

build(input_shape)[source]

Initialize the weights and biases for each update function.

phi_e(inputs)[source]

Update function for bonds; returns the updated bond attributes e_p.

rho_e_v(e_p, inputs)[source]

Aggregate the updated bonds e_p to per-atom attributes b_e_p.

phi_v(b_e_p, inputs)[source]

Update the atom attributes from the aggregated bonds b_e_p and all inputs; returns v_p.

rho_e_u(e_p, inputs)[source]

Aggregate bonds to global attributes.

rho_v_u(v_p, inputs)[source]

Aggregate atoms to global attributes.

get_config()[source]

Part of the Keras interface for serialization.

Parameters
  • activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.

  • use_bias (bool) – Default: True. Whether to use the bias term in the neural network.

  • kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.

  • bias_initializer (str) – Default: ‘zeros’. Initialization function for the layer biases.

  • activity_regularizer (str) – Default: None. The regularization function for the output.

  • kernel_constraint (str) – Default: None. Keras constraint for kernel values.

  • bias_constraint (str) – Default: None. Keras constraint for bias values.

  • kwargs (dictionary) – additional keyword arguments
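
Example: a minimal usage sketch. The layer takes megnet's flattened-graph input list of seven tensors (atom, bond and state features plus four index tensors), each with a batch axis of 1; the placeholder names and feature sizes below are illustrative assumptions.

```python
from tensorflow.keras.layers import Input
from megnet.layers import CrystalGraphLayer

atom = Input(shape=(None, 16))                  # atom features, [1, n_atoms, 16]
bond = Input(shape=(None, 16))                  # bond features, [1, n_bonds, 16]
state = Input(shape=(None, 2))                  # state features, [1, n_structs, 2]
index1 = Input(shape=(None,), dtype='int32')    # start atom of each bond
index2 = Input(shape=(None,), dtype='int32')    # end atom of each bond
atom_ind = Input(shape=(None,), dtype='int32')  # atom-to-structure assignment
bond_ind = Input(shape=(None,), dtype='int32')  # bond-to-structure assignment

# CGCNN-style convolution over the graph; output feature sizes follow the inputs
out = CrystalGraphLayer(activation='relu')(
    [atom, bond, state, index1, index2, atom_ind, bond_ind])
```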

build(input_shapes)[source]

Build the weights for the layer.

Parameters
  • input_shapes (sequence of tuple) – the shapes of all input tensors

compute_output_shape(input_shape)[source]

Compute output shapes from input shapes.

Parameters
  • input_shape (sequence of tuple) – input shapes

Returns

sequence of tuple output shapes

get_config()[source]

Part of the Keras layer interface; the constructor signature is converted into a dict.

Returns

configuration dictionary

phi_e(inputs)[source]

Edge update function.

Parameters
  • inputs (tuple of tensor) – graph input tensors

Returns

output tensor

phi_u(b_e_p, b_v_p, inputs)[source]
Parameters
  • b_e_p (tf.Tensor) – edge/bond to global aggregated tensor

  • b_v_p (tf.Tensor) – node/atom to global aggregated tensor

  • inputs (Sequence) – list or tuple for the graph inputs

Returns

updated global/state attributes

phi_v(b_ei_p, inputs)[source]

Node update function.

Parameters
  • b_ei_p (tensor) – edge aggregated tensor

  • inputs (tuple of tensors) – other graph inputs

Returns

updated node tensor

rho_e_u(e_p, inputs)[source]

Aggregate edges to states.

Parameters
  • e_p (tensor) – edge tensor

  • inputs (tuple of tensors) – other graph input tensors

Returns

edge aggregated tensor for states

rho_e_v(e_p, inputs)[source]

Reduce edge attributes to node attributes (Eqn. 5 in the paper).

Parameters
  • e_p – updated bond tensor

  • inputs – the whole input list

Returns

summed tensor

rho_v_u(v_p, inputs)[source]
Parameters
  • v_p (tf.Tensor) – updated atom/node attributes

  • inputs (Sequence) – list or tuple for the graph inputs

Returns

atom/node to global/state aggregated tensor

class GaussianExpansion(*args, **kwargs)[source]

Bases: keras.engine.base_layer.Layer

Simple Gaussian expansion. A vector of distances [d1, d2, d3, …, dn] is expanded into a matrix of shape [n, m], where m is the number of Gaussian basis centers.

Parameters
  • centers (np.ndarray) – Gaussian basis centers

  • width (float) – width of the Gaussian basis

  • **kwargs
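
Example: a minimal sketch of the expansion. Each distance d is mapped to exp(-(d - center)**2 / width**2) over all centers (assumed functional form); the values below are illustrative.

```python
import numpy as np
import tensorflow as tf
from megnet.layers import GaussianExpansion

centers = np.linspace(0, 5, 100)            # m = 100 Gaussian basis centers
layer = GaussianExpansion(centers=centers, width=0.5)

distances = tf.constant([[1.0, 2.3, 3.7]])  # shape [1, n] with n = 3
expanded = layer(distances)                 # shape [1, 3, 100]
```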

build(input_shape)[source]

Build the layer.

Parameters
  • input_shape (tuple) – tuple of int for the input shape

call(inputs, masks=None)[source]

The core logic function

Parameters
  • inputs (tf.Tensor) – input distance tensor, with shape [None, n]

  • masks (tf.Tensor) – bool tensor, not used here

compute_output_shape(input_shape)[source]

Compute the output shape; used in the older Keras API.

get_config()[source]

Get layer configurations

class InteractionLayer(*args, **kwargs)[source]

Bases: megnet.layers.graph.base.GraphNetworkLayer

The continuous-filter InteractionLayer in SchNet, as described in the paper

Schütt et al. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions

call(inputs, mask=None)

The layer logic; returns the updated graph.

compute_output_shape(input_shape)[source]

Compute static output shapes; returns a list of tuple shapes.

build(input_shape)[source]

Initialize the weights and biases for each update function.

phi_e(inputs)[source]

Update function for bonds; returns the updated bond attributes e_p.

rho_e_v(e_p, inputs)[source]

Aggregate the updated bonds e_p to per-atom attributes b_e_p.

phi_v(b_e_p, inputs)[source]

Update the atom attributes from the aggregated bonds b_e_p and all inputs; returns v_p.

rho_e_u(e_p, inputs)[source]

Aggregate bonds to global attributes.

rho_v_u(v_p, inputs)[source]

Aggregate atoms to global attributes.

get_config()[source]

Part of the Keras interface for serialization.

Parameters
  • activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.

  • use_bias (bool) – Default: True. Whether to use the bias term in the neural network.

  • kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.

  • bias_initializer (str) – Default: ‘zeros’. Initialization function for the layer biases.

  • activity_regularizer (str) – Default: None. The regularization function for the output.

  • kernel_constraint (str) – Default: None. Keras constraint for kernel values.

  • bias_constraint (str) – Default: None. Keras constraint for bias values.
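
Example: a minimal usage sketch, reusing the seven placeholder tensors from the CrystalGraphLayer example above; the input convention is assumed to be the same.

```python
from megnet.layers import InteractionLayer

# SchNet-style continuous-filter interaction over the flattened graph
out = InteractionLayer(activation='relu')(
    [atom, bond, state, index1, index2, atom_ind, bond_ind])
```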

build(input_shapes)[source]

Build the weights for the layer.

Parameters
  • input_shapes (sequence of tuple) – the shapes of all input tensors

compute_output_shape(input_shape)[source]

Compute output shapes from input shapes.

Parameters
  • input_shape (sequence of tuple) – input shapes

Returns

sequence of tuple output shapes

get_config()[source]

Part of the Keras layer interface; the constructor signature is converted into a dict.

Returns

configuration dictionary

phi_e(inputs)[source]

Edge update function.

Parameters
  • inputs (tuple of tensor) – graph input tensors

Returns

output tensor

phi_u(b_e_p, b_v_p, inputs)[source]
Parameters
  • b_e_p (tf.Tensor) – edge/bond to global aggregated tensor

  • b_v_p (tf.Tensor) – node/atom to global aggregated tensor

  • inputs (Sequence) – list or tuple for the graph inputs

Returns

updated global/state attributes

phi_v(b_ei_p, inputs)[source]

Node update function.

Parameters
  • b_ei_p (tensor) – edge aggregated tensor

  • inputs (tuple of tensors) – other graph inputs

Returns

updated node tensor

rho_e_u(e_p, inputs)[source]

Aggregate edges to states.

Parameters
  • e_p (tensor) – edge tensor

  • inputs (tuple of tensors) – other graph input tensors

Returns

edge aggregated tensor for states

rho_e_v(e_p, inputs)[source]

Reduce edge attributes to node attributes (Eqn. 5 in the paper).

Parameters
  • e_p – updated bond tensor

  • inputs – the whole input list

Returns

summed tensor

rho_v_u(v_p, inputs)[source]
Parameters
  • v_p (tf.Tensor) – updated atom/node attributes

  • inputs (Sequence) – list or tuple for the graph inputs

Returns

atom/node to global/state aggregated tensor

class LinearWithIndex(*args, **kwargs)[source]

Bases: keras.engine.base_layer.Layer

Sum or average the node/edge attributes to get a structure-level vector

Parameters
  • mode – (str) ‘mean’, ‘sum’, ‘max’ or ‘prod’

  • **kwargs
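
Example: a minimal sketch in eager mode, assuming the index tensor assigns each atom row to a structure; the values are illustrative.

```python
import tensorflow as tf
from megnet.layers import LinearWithIndex

# Three atoms with 2 features each, batch axis of 1
atom_features = tf.constant([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]])  # [1, 3, 2]
# Atoms 0 and 1 belong to structure 0, atom 2 to structure 1
atom_index = tf.constant([[0, 0, 1]])                                # [1, 3]

# One averaged vector per structure
readout = LinearWithIndex(mode='mean')([atom_features, atom_index])  # [1, 2, 2]
```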

build(input_shape)[source]

Build tensors.

Parameters
  • input_shape (sequence of tuple) – input shapes

call(inputs, mask=None)[source]

Main logic.

Parameters
  • inputs (tuple of tensor) – input tensors

  • mask (tensor) – mask tensor

Returns

output tensor

compute_output_shape(input_shape)[source]

Compute output shapes from input shapes.

Parameters
  • input_shape (sequence of tuple) – input shapes

Returns

sequence of tuple output shapes

get_config()[source]

Part of the Keras layer interface; the constructor signature is converted into a dict.

Returns

configuration dictionary

class MEGNetLayer(*args, **kwargs)[source]

Bases: megnet.layers.graph.base.GraphNetworkLayer

The MEGNet graph implementation as described in the paper

Chen, Chi; Ye, Weike; Zuo, Yunxing; Zheng, Chen; Ong, Shyue Ping. Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals, 2018, arXiv preprint. [arXiv:1812.05055](https://arxiv.org/abs/1812.05055)

call(inputs, mask=None)

The layer logic; returns the updated graph.

compute_output_shape(input_shape)[source]

Compute static output shapes; returns a list of tuple shapes.

build(input_shape)[source]

Initialize the weights and biases for each update function.

phi_e(inputs)[source]

Update function for bonds; returns the updated bond attributes e_p.

rho_e_v(e_p, inputs)[source]

Aggregate the updated bonds e_p to per-atom attributes b_e_p.

phi_v(b_e_p, inputs)[source]

Update the atom attributes from the aggregated bonds b_e_p and all inputs; returns v_p.

rho_e_u(e_p, inputs)[source]

Aggregate bonds to global attributes.

rho_v_u(v_p, inputs)[source]

Aggregate atoms to global attributes.

get_config()[source]

Part of the Keras interface for serialization.

Parameters
  • units_v (list of integers) – the hidden layer sizes for node update neural network

  • units_e (list of integers) – the hidden layer sizes for edge update neural network

  • units_u (list of integers) – the hidden layer sizes for state update neural network

  • pool_method (str) – ‘mean’ or ‘sum’, determines how information is gathered to nodes from neighboring edges

  • activation (str) – Default: None. The activation function used for each sub-neural network. Examples include ‘relu’, ‘softmax’, ‘tanh’ and ‘sigmoid’.

  • use_bias (bool) – Default: True. Whether to use the bias term in the neural network.

  • kernel_initializer (str) – Default: ‘glorot_uniform’. Initialization function for the layer kernel weights.

  • bias_initializer (str) – Default: ‘zeros’. Initialization function for the layer biases.

  • activity_regularizer (str) – Default: None. The regularization function for the output.

  • kernel_constraint (str) – Default: None. Keras constraint for kernel values.

  • bias_constraint (str) – Default: None. Keras constraint for bias values.
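
Example: a minimal usage sketch with the same seven-tensor flattened-graph input convention (a batch axis of 1); the feature dimensions and hidden-layer sizes below are illustrative.

```python
from tensorflow.keras.layers import Input
from megnet.layers import MEGNetLayer

n_atom_feature = 20
n_bond_feature = 10
n_global_feature = 2

x1 = Input(shape=(None, n_atom_feature))      # atom features
x2 = Input(shape=(None, n_bond_feature))      # bond features
x3 = Input(shape=(None, n_global_feature))    # state features
x4 = Input(shape=(None,), dtype='int32')      # start atom of each bond
x5 = Input(shape=(None,), dtype='int32')      # end atom of each bond
x6 = Input(shape=(None,), dtype='int32')      # atom-to-structure assignment
x7 = Input(shape=(None,), dtype='int32')      # bond-to-structure assignment

# units_v, units_e, units_u are the hidden layer sizes of the three update networks
out = MEGNetLayer([32, 16], [32, 16], [32, 16],
                  pool_method='mean',
                  activation='relu')([x1, x2, x3, x4, x5, x6, x7])
```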

build(input_shapes)[source]

Build the weights for the layer.

Parameters
  • input_shapes (sequence of tuple) – the shapes of all input tensors

compute_output_shape(input_shape)[source]

Compute output shapes from input shapes.

Parameters
  • input_shape (sequence of tuple) – input shapes

Returns

sequence of tuple output shapes

get_config()[source]

Part of the Keras layer interface; the constructor signature is converted into a dict.

Returns

configuration dictionary

phi_e(inputs)[source]

Edge update function.

Parameters
  • inputs (tuple of tensor) – graph input tensors

Returns

output tensor

phi_u(b_e_p, b_v_p, inputs)[source]
Parameters
  • b_e_p (tf.Tensor) – edge/bond to global aggregated tensor

  • b_v_p (tf.Tensor) – node/atom to global aggregated tensor

  • inputs (Sequence) – list or tuple for the graph inputs

Returns

updated global/state attributes

phi_v(b_ei_p, inputs)[source]

Node update function.

Parameters
  • b_ei_p (tensor) – edge aggregated tensor

  • inputs (tuple of tensors) – other graph inputs

Returns

updated node tensor

rho_e_u(e_p, inputs)[source]

Aggregate edges to states.

Parameters
  • e_p (tensor) – edge tensor

  • inputs (tuple of tensors) – other graph input tensors

Returns

edge aggregated tensor for states

rho_e_v(e_p, inputs)[source]

Reduce edge attributes to node attributes (Eqn. 5 in the paper).

Parameters
  • e_p – updated bond tensor

  • inputs – the whole input list

Returns

summed tensor

rho_v_u(v_p, inputs)[source]
Parameters
  • v_p (tf.Tensor) – updated atom/node attributes

  • inputs (Sequence) – list or tuple for the graph inputs

Returns

atom/node to global/state aggregated tensor

class Set2Set(*args, **kwargs)[source]

Bases: keras.engine.base_layer.Layer

For a set of vectors, the set2set neural network maps the set to a single vector. Order invariance is achieved by an attention mechanism. See Vinyals, Oriol, Samy Bengio, and Manjunath Kudlur. “Order matters: Sequence to sequence for sets.” arXiv preprint arXiv:1511.06391 (2015).

Parameters
  • T – (int) number of recurrent steps

  • n_hidden – (int) number of hidden units

  • activation – (str or object) activation function

  • activation_lstm – (str or object) activation function for lstm

  • recurrent_activation – (str or object) activation function for recurrent step

  • kernel_initializer – (str or object) initializer for kernel weights

  • recurrent_initializer – (str or object) initializer for recurrent weights

  • bias_initializer – (str or object) initializer for biases

  • use_bias – (bool) whether to use biases

  • unit_forget_bias – (bool) whether to use bias in the forget gate

  • kernel_regularizer – (str or object) regularizer for kernel weights

  • recurrent_regularizer – (str or object) regularizer for recurrent weights

  • bias_regularizer – (str or object) regularizer for biases

  • kernel_constraint – (str or object) constraint for kernel weights

  • recurrent_constraint – (str or object) constraint for recurrent weights

  • bias_constraint – (str or object) constraint for biases

  • kwargs – other inputs for keras Layer class
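
Example: a minimal usage sketch, assuming the layer takes the feature tensor together with an index tensor assigning each row to a structure (analogous to LinearWithIndex); the feature size is illustrative.

```python
from tensorflow.keras.layers import Input
from megnet.layers import Set2Set

features = Input(shape=(None, 16))                   # per-atom feature vectors
feature_index = Input(shape=(None,), dtype='int32')  # atom-to-structure assignment

# T recurrent attention steps produce one fixed-size vector per structure
out = Set2Set(T=3, n_hidden=10)([features, feature_index])
```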

build(input_shape)[source]

Build tensors.

Parameters
  • input_shape (sequence of tuple) – input shapes

call(inputs, mask=None)[source]

Main logic.

Parameters
  • inputs (tuple of tensor) – input tensors

  • mask (tensor) – mask tensor

Returns

output tensor

compute_output_shape(input_shape)[source]

Compute output shapes from input shapes.

Parameters
  • input_shape (sequence of tuple) – input shapes

Returns

sequence of tuple output shapes

get_config()[source]

Part of the Keras layer interface; the constructor signature is converted into a dict.

Returns

configuration dictionary

keras_layer_deserialize(config, custom_objects=None)

Instantiates a layer from a config dictionary.

Parameters
  • config – dict of the form {‘class_name’: str, ‘config’: dict}

  • custom_objects – dict mapping class names (or function names) of custom (non-Keras) objects to the corresponding classes/functions

Returns

Layer instance (may be Model, Sequential, Network, Layer…)

Example:

```python
# Configuration of Dense(32, activation='relu')
config = {
    'class_name': 'Dense',
    'config': {
        'activation': 'relu',
        'activity_regularizer': None,
        'bias_constraint': None,
        'bias_initializer': {'class_name': 'Zeros', 'config': {}},
        'bias_regularizer': None,
        'dtype': 'float32',
        'kernel_constraint': None,
        'kernel_initializer': {'class_name': 'GlorotUniform',
                               'config': {'seed': None}},
        'kernel_regularizer': None,
        'name': 'dense',
        'trainable': True,
        'units': 32,
        'use_bias': True,
    },
}
dense_layer = tf.keras.layers.deserialize(config)
```

mean_squared_error_with_scale(y_true, y_pred, scale=10000)[source]

The default Keras progress log shows only two decimal places, so here the MSE is multiplied by a factor to keep small losses fully visible in the progress bar.

Parameters
  • y_true – (tensor) training y

  • y_pred – (tensor) predicted y

  • scale – (int or float) factor to multiply with mse

Returns

scaled mse (float)
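
Example: a sketch of wiring this loss into model compilation; the model here is an illustrative placeholder.

```python
import tensorflow as tf
from megnet.layers import mean_squared_error_with_scale

# Illustrative placeholder model
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])

# The loss multiplies the usual MSE by `scale` (default 10000) so small
# losses remain visible in the progress bar
model.compile(optimizer='adam', loss=mean_squared_error_with_scale)
```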

softplus2(x)[source]

out = log(exp(x) + 1) - log(2)

A softplus function shifted so that it equals 0 at x = 0; the implementation is written to avoid overflow for large x.

Parameters

x – (Tensor) input tensor

Returns

(Tensor) output tensor
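
For reference, a numerically stable way to write this function (an equivalent sketch, not necessarily the package's exact implementation): for any x, log(exp(x) + 1) - log(2) = relu(x) + log(0.5 * exp(-|x|) + 0.5), which never exponentiates a large positive number.

```python
import tensorflow as tf

def softplus2_sketch(x):
    # log(exp(x) + 1) - log(2), rewritten so exp is only taken of -|x|
    return tf.nn.relu(x) + tf.math.log(0.5 * tf.math.exp(-tf.math.abs(x)) + 0.5)
```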

swish(x)[source]

out = x * sigmoid(x)

Parameters

x – (Tensor) input tensor

Returns

(Tensor) output tensor
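
For reference, a direct transcription of the formula (recent TensorFlow versions also ship this activation as tf.nn.silu):

```python
import tensorflow as tf

def swish_sketch(x):
    # out = x * sigmoid(x)
    return x * tf.math.sigmoid(x)
```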