In the asset tree, select the neural network element that you want to edit.
A list of options appears on the right.
In the upper-right corner of the window, click the Edit button.
In the Name field, specify a new name for the ML model element.
In the Description field, specify a new description for the ML model.
If necessary, in the General element settings settings block, do the following:
In the Reminder period (sec) field, specify the period in seconds after which the ML model generates a repeated incident if anomalous behavior persists in each UTG node.
The default value of this setting is 0, which corresponds to no reminders.
In the Period of recurring alert suppression (sec) field, specify the period in seconds during which the ML model does not log repeated incidents for the same element.
The default value of this setting is 0 (repeated incidents are not suppressed).
In the Grid step (sec) field, specify the element's UTG period in seconds expressed as a decimal.
In the Incident status drop-down list, select a status to be automatically assigned to incidents logged by the ML model element.
In the Incident cause drop-down list, select the cause to be automatically set for incidents logged by the ML model element.
In the Color of incident dot indicators field, select the color of the indicator points of the incidents logged by the ML model element on the graphs in the Monitoring and History sections.
In the Detection threshold field, specify the prediction error threshold value; when this value is reached, an incident is logged.
In the Expert opinion field, specify the expert opinion to be automatically created for incidents logged by the ML model element.
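The Reminder period (sec) and Period of recurring alert suppression (sec) settings above can be pictured with a minimal sketch. This is an illustration only, not MLAD source code: the function name and the exact decision rules are assumptions made for the example.

```python
# Illustrative sketch (assumptions, not MLAD source code): how a reminder
# period and a suppression period could govern repeated incidents.

def should_log_incident(now, last_incident_at, anomaly_ongoing,
                        reminder_period=0, suppression_period=0):
    """Decide whether to register an incident at time `now` (seconds)."""
    if last_incident_at is None:
        return True                      # first incident is always logged
    elapsed = now - last_incident_at
    if anomaly_ongoing:
        # Reminder period 0 means no repeated incidents for an ongoing anomaly.
        return reminder_period > 0 and elapsed >= reminder_period
    # Suppression period 0 means repeated incidents are not suppressed.
    return elapsed >= suppression_period
```

For example, with a 60-second reminder period, an anomaly that is still ongoing 120 seconds after the last incident produces a repeated incident, while one checked after 30 seconds does not.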
Kaspersky MLAD supports the following ML model neural network element architectures: Dense, RNN, CNN, TCN, and Transformer.
If you need to change the architecture parameters of a neural network element and the power exponent and smoothing value of the cumulative prediction error, use the toggle switch to enable Advanced neural network settings.
If necessary, in the Main settings settings block, do the following:
In the Input tags drop-down list, select one or more tags that serve as the source data for predicting the values of the output tags.
In the Output tags drop-down list, select one or more tags whose behavior is predicted by the ML model element.
If Advanced neural network settings are enabled, use the MSE power exponent field to specify the power exponent of the cumulative prediction error in decimal format.
If Advanced neural network settings are enabled, use the Smoothing factor field to specify the smoothing value of the cumulative prediction error in decimal format.
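The MSE power exponent and Smoothing factor settings can be read as parameters of an exponentially smoothed, powered error series. The exact formula used by the product is not documented here; the sketch below assumes a common form (raise the instantaneous MSE to power `p`, then apply exponential smoothing with factor `alpha`) purely for illustration.

```python
# Illustrative sketch (assumed formula, not MLAD source code): a cumulative
# prediction error built from the instantaneous MSE raised to power `p`
# and smoothed exponentially with factor `alpha` in the range 0..1.

def cumulative_error(mse_values, p=2.0, alpha=0.9):
    """Return the smoothed, powered prediction error series."""
    acc = 0.0
    out = []
    for mse in mse_values:
        powered = mse ** p
        acc = alpha * acc + (1.0 - alpha) * powered   # exponential smoothing
        out.append(acc)
    return out
```

A larger `alpha` makes the cumulative error react more slowly, so brief prediction-error spikes are less likely to cross the detection threshold.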
If necessary, in the Window settings settings block, do the following:
In the Input window (steps) field, specify the size of the input value window, from which the ML model element predicts the output values.
In the Output window offset field, specify the number of steps by which the beginning of the output window will be shifted relative to the beginning of the input window.
In the Output window (steps) field, specify the length of the output tag prediction calculated from the input tags in the input window.
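The three window settings above can be pictured as slicing a tag series into samples: the input window supplies the source values, and the output window, shifted by the offset, marks the values to predict. The function below is a minimal sketch under that interpretation, not MLAD source code.

```python
# Illustrative sketch (assumptions, not MLAD source code): how the input
# window, output window offset, and output window carve a 1-D tag series
# into (input, target) samples.

def make_windows(series, input_window, output_offset, output_window):
    """Yield (input, target) slices over a list of tag values."""
    samples = []
    i = 0
    while (i + input_window <= len(series)
           and i + output_offset + output_window <= len(series)):
        x = series[i : i + input_window]                              # input window
        y = series[i + output_offset : i + output_offset + output_window]  # output window
        samples.append((x, y))
        i += 1
    return samples
```

For example, with an input window of 4, an offset of 4, and an output window of 2, the series 0..9 yields a first sample with input [0, 1, 2, 3] and target [4, 5]: the model predicts the two steps immediately after the input window.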
If you have selected a neural network element with a dense architecture, do the following:
In the Multipliers for calculating number of neurons per layer field, provide the multipliers, separated by a comma without spaces, by which to multiply the number of input tags to calculate the number of neurons in the ML model element layers.
In the Activation function per layer field, specify one of the following activation functions for each layer of the ML model element, separated by a comma without spaces:
relu: A non-linear activation function that converts an input value to a value between 0 and positive infinity.
selu: A monotonically increasing function that enables normalization based on the central limit theorem.
linear: A linear function that is a straight line proportional to the input data.
sigmoid: A non-linear function that converts input values to values between 0 and 1.
tanh: A hyperbolic tangent function that converts input values to values between -1 and 1.
softmax: A function that converts a vector of values to a probability distribution that adds up to 1.
The default value of this setting is relu,relu,relu.
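For reference, the activation functions listed above can be written out in plain Python. This is a standalone sketch of the standard mathematical definitions, not MLAD source code; the selu constants are the values from the published SELU formulation.

```python
import math

# Standard definitions of the activation functions listed above.
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def relu(x):    return max(0.0, x)          # 0 for x <= 0, else x
def linear(x):  return x                    # identity
def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))   # maps to (0, 1)
def tanh(x):    return math.tanh(x)         # maps to (-1, 1)

def selu(x):
    # Scaled exponential linear unit (self-normalizing networks).
    return SELU_SCALE * (x if x > 0 else SELU_ALPHA * (math.exp(x) - 1.0))

def softmax(xs):
    # Convert a vector to a probability distribution summing to 1.
    m = max(xs)                             # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```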
If you have selected a neural network element with an RNN architecture, do the following:
In the GRU neurons per layer field, specify the number of GRU neurons on the layers, separated by a comma without spaces.
The default value of this parameter is 40,40.
In the Number of neurons in TimeDistributed layer field, specify the number of time-distributed neurons on the layers of the decoder, separated by a comma without spaces.
The default value of this parameter is 40,20.
If you have selected a neural network element with a CNN architecture, do the following in the CNN architecture settings settings block:
In the Filter size per layer field, specify the size of the filters for each layer of the element, separated by a comma without spaces.
The default value of this parameter is 2,2,2.
In the Filters per layer field, specify the number of filters for each layer of the ML model element, separated by a comma without spaces.
The default value of this parameter is 50,50,50.
In the MaxPooling window size per layer field, specify the MaxPooling window size for each layer, separated by a comma without spaces.
The default value of this parameter is 2,2,2.
In the Number of neurons in decoder field, specify the number of neurons on the layers of the decoder.
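The per-layer filter size and MaxPooling window settings above jointly shrink the time dimension of the input window. The sketch below assumes 'valid' (no-padding) convolution followed by non-overlapping max pooling; the actual padding scheme used by the product is not stated here, so treat this as an illustration of the interaction, not MLAD source code.

```python
# Illustrative sketch (padding scheme assumed, not MLAD source code): how
# per-layer filter sizes and MaxPooling windows shrink the time dimension.

def cnn_output_length(input_len, filter_sizes, pool_sizes):
    """Track the sequence length through conv ('valid' padding) + max pooling."""
    length = input_len
    for f, p in zip(filter_sizes, pool_sizes):
        length = length - f + 1     # 'valid' convolution
        length = length // p        # non-overlapping max pooling
    return length
```

With the default values 2,2,2 for both settings, an input window of 100 steps shrinks to 11 steps after three layers, so deep stacks require an input window large enough to survive the pooling.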
If you have selected a neural network element with a TCN architecture, do the following:
In the Regularization field, specify the regularization coefficient in decimal format to prevent overfitting of the ML model element.
The default value of this parameter is 0.1.
In the Size of filters field, specify the size of the filters for the ML model element.
The default value of this parameter is 2.
In the Dilation per layer field, specify the dilation values for the convolutions on the layers, separated by a comma without spaces.
The default value of this parameter is 1,2,4.
In the Activation function drop-down list, select one of the following activation functions:
linear: A linear activation function whose result is proportional to the input value.
relu: A non-linear activation function that converts an input value to a value between zero and positive infinity. If the input value is less than or equal to zero, the function returns a value of zero; otherwise, the function returns the input value.
The default value of this parameter is linear.
In the Number of stacks of residual blocks field, specify the number of stacks of residual blocks (encoders).
The default value of this parameter is 1.
In the Decoder layer type field, select one of the following types of layer to precede the output layer:
TimeDistributedDense (default): A fully connected architecture layer.
GRU: A layer with a recurrent architecture.
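The Size of filters, Dilation per layer, and Number of stacks of residual blocks settings above together determine how far back in time the TCN can see. The sketch below uses the standard receptive-field formula for stacked dilated causal convolutions; how exactly the product combines these fields is an assumption here, not confirmed MLAD behavior.

```python
# Illustrative sketch (standard TCN formula, assumed to match the product):
# receptive field of stacked dilated causal convolutions.

def tcn_receptive_field(filter_size, dilations, stacks=1):
    """Approximate number of past steps visible to the last TCN layer."""
    per_stack = sum((filter_size - 1) * d for d in dilations)
    return 1 + stacks * per_stack
```

With the defaults (filter size 2, dilations 1,2,4, one stack), the receptive field is 8 steps; doubling the dilation list or the number of stacks is the usual way to cover longer input windows without enlarging the filters.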
If you have selected a neural network element with a Transformer architecture, do the following:
In the Encoder regularization field, specify the regularization coefficient in the encoder in decimal format.
The default value of this parameter is 0.01.
In the Number of attention heads field, specify the number of attention heads.
The default value of this parameter is 1.
In the Number of encoders field, specify the number of encoders.
The default value of this parameter is 1.
In the Multipliers for calculating number of neurons per layer field, provide the multipliers, separated by a comma without spaces, by which to multiply the number of input tags to calculate the number of neurons in the decoding layers.
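The multipliers setting above derives each layer's neuron count from the number of input tags. The sketch below shows that calculation under the assumptions that fractional results are rounded and that every layer keeps at least one neuron; the function name and those rounding rules are illustrative, not confirmed MLAD behavior.

```python
# Illustrative sketch (rounding rules assumed, not MLAD source code): deriving
# the number of neurons per layer from comma-separated multipliers and the
# number of input tags.

def neurons_per_layer(multipliers_csv, n_input_tags):
    """Parse "2,1,0.5"-style multipliers and scale them by the tag count."""
    multipliers = [float(m) for m in multipliers_csv.split(",")]
    return [max(1, round(m * n_input_tags)) for m in multipliers]
```

For example, with 8 input tags, the multipliers "2,1,0.5" give layers of 16, 8, and 4 neurons.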
In the upper-right corner of the window, click the Save button.