Ternary layers of the ANN2SNN package

The following ternary layers are supported in the ANN2SNN package:

TernaryDense

Implements a fully connected layer, analogous to the Dense layer of the Keras library. When working with a fully connected layer, it is recommended to set the number of neurons in the layer (the units parameter) to 512 or less. If a neuron of a layer has more than 512 connections, that neuron is represented as several neurons on the AltAI-1 neuromorphic processor.
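
The following minimal sketch shows how such a layer might be constructed. The ann2snn import path is an assumption, and TernaryDense is assumed to mirror the constructor signature of keras.layers.Dense; consult the package documentation for the actual interface.

    # Sketch only: the import path below is an assumption.
    from ann2snn import TernaryDense

    # 512 or fewer units keeps each neuron within the recommended
    # connection budget of the AltAI-1 processor, so no neuron has to
    # be split into several hardware neurons.
    fc = TernaryDense(units=256)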

TernaryConv1D

Implements a one-dimensional convolutional layer, analogous to the Conv1D layer of the Keras library. When working with this layer, it is recommended to observe the following restrictions (a sketch follows the list):

  • Kernel size (the kernel_size parameter): no more than 20.
  • Number of filters (the filters parameter): no more than 64.
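
A minimal sketch within these limits, assuming the ann2snn import path and a Conv1D-compatible constructor:

    # Sketch only: import path and constructor signature are assumptions.
    from ann2snn import TernaryConv1D

    # filters <= 64 and kernel_size <= 20, per the recommendations above.
    conv1d = TernaryConv1D(filters=64, kernel_size=20)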

TernaryConv2D

Implements a two-dimensional convolutional layer, analogous to the Conv2D layer of the Keras library. When working with this layer, it is recommended to observe the following restrictions (a sketch follows the list):

  • Do not use large kernels (the kernel_size parameter). The optimal kernel sizes are [2, 2], [3, 3], and [4, 4].
  • Limit the size of input images to 128×128 and the number of channels to 10.
  • Limit the number of filters (the filters parameter) to 32.
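
A minimal sketch within these limits, assuming the ann2snn import path and a Conv2D-compatible constructor:

    # Sketch only: import path and constructor signature are assumptions.
    from ann2snn import TernaryConv2D

    # A small [3, 3] kernel and filters <= 32, per the recommendations
    # above; the input is expected to be at most 128x128 with up to
    # 10 channels.
    conv2d = TernaryConv2D(filters=32, kernel_size=(3, 3))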

TernaryLocallyConnected1D

Implements a one-dimensional locally connected layer, analogous to the LocallyConnected1D layer of the Keras library. When working with this layer, it is recommended to observe the following restrictions:

  • Kernel size (the kernel_size parameter): no more than 20.
  • Number of filters (the filters parameter): no more than 64.

TernaryLocallyConnected2D

Implements a two-dimensional locally connected layer, analogous to the LocallyConnected2D layer of the Keras library. When working with this layer, it is recommended to observe the following restrictions:

  • Do not use large kernels (the kernel_size parameter). The optimal kernel sizes are [2, 2], [3, 3], and [4, 4].
  • Limit the size of input images to 128×128 and the number of channels to 10.
  • Limit the number of filters (the filters parameter) to 32.

TernaryDepthwiseConv1D

Implements a one-dimensional depthwise convolutional layer, analogous to the DepthwiseConv1D layer of the Keras library. When working with this layer, it is recommended to observe the following restrictions:

  • Kernel size (the kernel_size parameter): no more than 20.
  • Number of filters (the filters parameter): no more than 64.

TernaryDepthwiseConv2D

Implements a two-dimensional depthwise convolutional layer, analogous to the DepthwiseConv2D layer of the Keras library. When working with this layer, it is recommended to observe the following restrictions:

  • Do not use large kernels (the kernel_size parameter). The optimal kernel sizes are [2, 2], [3, 3], and [4, 4].
  • Limit the size of input images to 128×128 and the number of channels to 10.
  • Limit the number of filters (the filters parameter) to 32.

MaxPool1D

The standard implementation of the MaxPool1D layer of the Keras library is supported. To reduce the amount of neuromorphic processor hardware required to run the neural network, it is recommended to avoid this layer by increasing the strides parameter of the convolutional layers instead. A sketch of this replacement is shown under MaxPool2D below.

MaxPool2D

The standard implementation of the MaxPool2D layer of the Keras library is supported. To reduce the amount of neuromorphic processor hardware required to run the neural network, it is recommended to avoid this layer by increasing the strides parameter of the convolutional layers instead (see the sketch below).
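
The sketch below contrasts the two approaches; the downsampling factor is the same, although the computed values differ because max pooling and a strided convolution are not numerically equivalent. The ann2snn import path is an assumption, as above.

    # Sketch only: the ann2snn import path is an assumption.
    from keras import layers
    from ann2snn import TernaryConv2D

    # Discouraged: convolution followed by a separate 2x2 max pooling
    # layer, which consumes additional AltAI-1 hardware.
    conv = TernaryConv2D(filters=16, kernel_size=(3, 3))
    pool = layers.MaxPool2D(pool_size=(2, 2))

    # Recommended: fold the downsampling into the convolution itself by
    # setting strides=(2, 2); no pooling layer is needed.
    conv_strided = TernaryConv2D(filters=16, kernel_size=(3, 3), strides=(2, 2))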

BatchNormalization

The standard implementation of the BatchNormalization layer of the Keras library is supported, except that the scale parameter is not supported and must be set to False.

The BatchNormalization layer must come after the Activation layer.
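
A sketch of the required ordering, using standard Keras layers:

    # Sketch only: the required ordering of the two layers.
    from keras import layers

    ordered = [
        layers.Activation("hard_sigmoid"),
        layers.BatchNormalization(scale=False),  # scale must be disabled
    ]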

InputLayer

The standard implementation of the InputLayer of the Keras library is supported. The input data must be binary (a binarization sketch follows).
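
One way to obtain binary input is to threshold the raw data, as in the sketch below; the threshold value 127 is an arbitrary illustrative choice, not a package requirement.

    import numpy as np

    def binarize(images: np.ndarray, threshold: int = 127) -> np.ndarray:
        # Map raw values (e.g. grayscale pixels in [0, 255]) to {0.0, 1.0}.
        return (images > threshold).astype(np.float32)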

Flatten

The standard implementation of the Flatten layer of the Keras library is supported.

Reshape

The standard implementation of the Reshape layer of the Keras library is supported.

Concatenate

The standard implementation of the Concatenate layer of the Keras library is supported.

To place the neural network on the AltAI-1 neuromorphic processor, specify the Heaviside function as the activation function of each layer. Because the derivative of the Heaviside function is zero almost everywhere, training with gradient descent uses a piecewise linear approximation of the sigmoid function (the hard sigmoid function) in its place.
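
Putting the recommendations together, the sketch below outlines a small network that stays within all of the stated limits. The ann2snn import path is an assumption, as is the use of the built-in Keras hard_sigmoid activation as the training-time stand-in for the Heaviside function; note also that the InputLayer argument name varies between Keras versions (shape in Keras 3, input_shape in Keras 2).

    # Sketch only: import paths and the activation identifier are assumptions.
    import keras
    from keras import layers
    from ann2snn import TernaryConv2D, TernaryDense

    model = keras.Sequential([
        layers.InputLayer(shape=(64, 64, 1)),   # binary input, within 128x128 and 10 channels
        TernaryConv2D(filters=16, kernel_size=(3, 3),
                      strides=(2, 2)),          # strides instead of MaxPool2D
        layers.Activation("hard_sigmoid"),      # training-time surrogate for Heaviside
        layers.BatchNormalization(scale=False), # after Activation, scale disabled
        layers.Flatten(),
        TernaryDense(units=256),                # within the 512-unit recommendation
        layers.Activation("hard_sigmoid"),
    ])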
