Recommended hardware
This section lists the hardware requirements for processing an incoming event stream in KUMA at various Events per Second (EPS) rates.
The table below lists the hardware and software requirements for installing the KUMA components, assuming that the ClickHouse cluster only accepts INSERT queries. Hardware requirements for SELECT queries are calculated separately for the particular database usage profile of the customer.
Recommended hardware for ClickHouse cluster storage
The configuration of the equipment must be chosen based on the system load profile. You can use the "All-in-one" configuration for an event stream of under 10,000 EPS and when using graphical panels supplied with the system.
KUMA supports Intel and AMD CPUs with SSE 4.2 and AVX instruction set support.
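To quickly verify this on a Linux host before installation, you can inspect the CPU flags exposed in /proc/cpuinfo. The sketch below is an illustrative check only, not part of the KUMA distribution.

```python
# Minimal sketch (Linux only): check whether the CPU advertises the
# sse4_2 and avx instruction set flags that KUMA requires.

def cpu_flags() -> set:
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

missing = {"sse4_2", "avx"} - cpu_flags()
print("OK" if not missing else f"Missing CPU features: {', '.join(sorted(missing))}")
```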
| | Up to 3,000 EPS | Up to 10,000 EPS | Up to 20,000 EPS | Up to 50,000 EPS |
|---|---|---|---|---|
| KUMA Core requirements | – | – | One device. Device characteristics: | One device. Device characteristics: |
| Collector requirements | – | – | One device. Device characteristics: | Two devices. Characteristics of each device: |
| Correlator requirements | – | – | One device. Device characteristics: | One device. Device characteristics: |
| Event router requirements | – | CPU: 3 cores, RAM: 240 MB. | CPU: 5 cores, RAM: 250 MB. | CPU: 5 cores, RAM: 7000 MB. |
| Keeper requirements | – | – | Three devices. Characteristics of each device: | Three devices. Characteristics of each device: |
| Storage requirements | – | – | Two devices. Characteristics of each device: The recommended transfer rate between ClickHouse nodes is at least 10 Gbps if the data stream is equal to or exceeds 20,000 EPS. | Four devices. Characteristics of each device: The recommended transfer rate between ClickHouse nodes is at least 10 Gbps if the data stream is equal to or exceeds 20,000 EPS. |
| Operating systems | | | | |
| TLS ciphersuites | TLS versions 1.2 and 1.3 are supported. Integration with a server that does not support the TLS versions and ciphersuites that KUMA requires is impossible. Supported TLS 1.2 ciphersuites: Supported TLS 1.3 ciphersuites: | | | |
Recommended configurations for different load levels
The table below describes the configurations and recommended resources for KUMA components depending on the EPS level, including storage requirements and sizing recommendations.
Recommended hardware configurations and resource consumption by KUMA depending on EPS
| Setting | Up to 3,000 EPS | Up to 10,000 EPS | Up to 20,000 EPS | Up to 50,000 EPS |
|---|---|---|---|---|
| Installation type | Installation on a single server | Installation on a single server | Distributed installation on multiple servers | Distributed installation on multiple servers |
| Number of servers and their purpose | One server that hosts all system components. | One server that hosts all system components. | One server for the KUMA Core. One server for the collector. One server for the correlator. Three dedicated servers with the keeper role. Two servers for the storage. | One server for the KUMA Core. Two servers for the collector. One server for the correlator. Three dedicated servers with the keeper role. Four servers for the storage. |
| Server hardware | At least 16 threads or 16 vCPUs. At least 32 GB of RAM. At least 500 GB in the /opt directory. Data storage type: SSD. Data transfer rate: at least 100 Mbps. | At least 24 threads or 24 vCPUs. At least 64 GB of RAM. At least 500 GB in the /opt directory. Data storage type: SSD. Data transfer rate: at least 100 Mbps. | The KUMA Core, collector, correlator, and keeper require at least 16 threads or 16 vCPUs and at least 64 GB of RAM per server. The storage requires fast SSDs, high IOPS, and at least 1 Gbps of network bandwidth. | Same specifications as for 20,000 EPS, but with twice the number of collector and storage servers. Higher requirements for network connectivity between all nodes: at least 1 Gbps of network bandwidth is recommended. |
| Storage configuration (ClickHouse) | One shard, one replica. Disk buffer enabled. | One shard, one replica. Disk buffer enabled. | Two shards, two replicas in each. If high availability is not required, one server with one replica per shard may be used. | Two shards, two replicas in each. If high availability is not required, two servers with one replica per shard may be used. |
Additional recommendations for specifying and sizing the configuration:
Resource usage for a single-server installation at different EPS levels
The table below lists the total resource usage of a single-server installation (collector + KUMA Core + storage and other services) rather than the usage of individual components. With all services deployed on one server, a single-server installation can process streams of up to 500,000 EPS by running multiple collectors (for example, 5 collectors at 100,000 EPS each). In distributed configurations, you can scale by adding collectors and servers with the storage role.
Resources required for a single-server installation depending on EPS
| Event stream (EPS) | Threads or vCPUs | RAM, GB |
|---|---|---|
| 1000 | 1 | 2 |
| 3000 | 2 | 2 |
| 5000 | 4 | 4 |
| 10,000 | 4 | 8 |
| 20,000 | 8 | 16 |
| 50,000 | 12 | 18 |
| 100,000 | 34 | 20 |
| 150,000 | 40 | 24 |
| 200,000 | 48 | 36 |
| 300,000 | 56 | 40 |
| 500,000 | 90 | 150 |
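For rough capacity planning, the figures in the table above can be expressed as a simple lookup. The sketch below is illustrative only: it picks the smallest documented tier that covers the expected event stream and returns the corresponding threads/vCPUs and RAM.

```python
# Sizing sketch based on the single-server table above.
# Each tuple is (eps, threads_or_vcpus, ram_gb), copied from the table.
TIERS = [
    (1000, 1, 2), (3000, 2, 2), (5000, 4, 4), (10_000, 4, 8),
    (20_000, 8, 16), (50_000, 12, 18), (100_000, 34, 20),
    (150_000, 40, 24), (200_000, 48, 36), (300_000, 56, 40),
    (500_000, 90, 150),
]

def single_server_resources(eps: int) -> tuple:
    """Return (threads_or_vcpus, ram_gb) for the smallest tier that covers `eps`."""
    for tier_eps, threads, ram_gb in TIERS:
        if eps <= tier_eps:
            return threads, ram_gb
    raise ValueError("Event stream exceeds the documented 500,000 EPS maximum")

print(single_server_resources(42_000))  # -> (12, 18), the 50,000 EPS tier
```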
Working in virtual environments
The following virtual environments are supported for installing KUMA:
Working in cloud environments
KUMA can work in a cloud infrastructure. The system can be installed on virtual machines following the IaaS (infrastructure-as-a-service) model.
For a cloud infrastructure, we recommend using the single-server configuration for 3000 EPS and 10,000 EPS. Virtual machines must satisfy the hardware and software requirements of a regular installation.
When choosing the disk subsystem of the server, use the "number of input/output operations (IOPS)" parameter as the reference. The recommended minimum value is 1000 IOPS.
Resource recommendations for the Collector component
Consider that for event processing efficiency, the CPU core count is more important than the clock rate. For example, eight CPU cores with a medium clock rate can process events more efficiently than four CPU cores with a high clock rate.
Consider also that the amount of RAM used by the collector depends on the configured enrichment methods (DNS, accounts, assets, enrichment with data from Kaspersky CyberTrace) and on whether aggregation is used: RAM consumption is affected by the data aggregation window setting, the number of fields used for aggregation, and the volume of data in the aggregated fields. The computational resources consumed by KUMA also depend on the type of events being parsed and the efficiency of the normalizer.
For example, with an event stream of 1000 EPS, event enrichment and event aggregation disabled, 5000 accounts, and 5000 assets per tenant, one collector requires 1 CPU core, 0.5 GB of RAM, and 1 GB of free disk space. To support 5 collectors that do not perform event enrichment, you must therefore allocate 5 CPU cores, 2.5 GB of RAM, and 5 GB of free disk space.
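As a rough illustration of this arithmetic, the sketch below multiplies the per-collector figures from the example above by the number of collectors. It is a planning aid only and does not account for enrichment, aggregation, or normalizer complexity.

```python
# Rough sizing sketch based on the example above: one collector without
# enrichment or aggregation at ~1000 EPS needs about 1 CPU core, 0.5 GB of RAM,
# and 1 GB of free disk space. Totals simply scale with the collector count.
PER_COLLECTOR = {"cpu_cores": 1, "ram_gb": 0.5, "disk_gb": 1}

def collector_resources(num_collectors: int) -> dict:
    """Total resources to allocate for `num_collectors` collectors."""
    return {name: value * num_collectors for name, value in PER_COLLECTOR.items()}

print(collector_resources(5))  # {'cpu_cores': 5, 'ram_gb': 2.5, 'disk_gb': 5}
```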
Kaspersky recommendations for storage servers
You must use high-speed protocols, such as Fibre Channel or iSCSI 10G, to connect the data storage system to storage servers. We do not recommend using application-level protocols such as NFS or SMB to connect data storage systems.
On ClickHouse cluster servers, we recommend using the ext4 file system.
If you are using RAID arrays, we recommend using RAID 0 for high performance, or RAID 10 for high performance and high availability.
To ensure high availability and performance of the data storage subsystem, we recommend making sure that all ClickHouse nodes are deployed strictly on different disk arrays.
If you are using a virtualized infrastructure to host system components, we recommend deploying ClickHouse cluster nodes on different hypervisors. You must prevent any two virtual machines with ClickHouse from running on the same hypervisor.
For high-load KUMA installations, we recommend installing ClickHouse on physical servers.
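As a quick way to confirm the file system recommendation on an existing server, the sketch below reads /proc/mounts and reports which file system backs a given directory (here /opt, since the requirements above size the /opt directory). It is an illustrative helper, not part of KUMA.

```python
# Minimal sketch (Linux only): report the file system type backing a data path.
import os

def fs_type(path: str) -> str:
    """Return the file system type of the mount that backs `path`, per /proc/mounts."""
    path = os.path.realpath(path)
    best_mount, best_type = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _, mount_point, fstype, *_ = line.split()
            if path == mount_point or path.startswith(mount_point.rstrip("/") + "/"):
                # Prefer the most specific (longest) matching mount point.
                if len(mount_point) > len(best_mount):
                    best_mount, best_type = mount_point, fstype
    return best_type

print(fs_type("/opt"))  # "ext4" is expected on a ClickHouse node prepared as recommended
```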
Requirements for agent devices
You must install agents on network infrastructure devices that will send data to the KUMA collector. Device requirements are listed in the following table.
| | Windows devices | Linux devices |
|---|---|---|
| CPU | Single-core, 1.4 GHz or higher | Single-core, 1.4 GHz or higher |
| RAM | 512 MB | 512 MB |
| Free disk space | 1 GB | 1 GB |
| Operating systems | | |
Hardware requirements for installing KUMA agents with WEC, ETW, and WMI transports
KUMA agents installed on Windows devices can use various types of transports to receive events: WEC (Windows Event Collector), ETW (Event Tracing for Windows), and WMI (Windows Management Instrumentation). The performance of the agents and their impact on the system depend on the limitations imposed by Windows and on the amount and type of events processed (see the table below).
Requirements for agents by transport type
| Transport type | Maximum performance (EPS) | Limitations imposed by Windows | Limitations imposed by KUMA | Recommended agent configuration | CPU and RAM usage |
|---|---|---|---|---|---|
| WEC | Up to 2500 events per second | When writing to system logs (Application, System, Hardware Events), Windows limits the rate to approximately 2500 events per second. When writing to multiple logs at the same time, performance is further reduced: up to 1100 EPS with 2 logs, up to 800 EPS with 3 logs, and up to 500 EPS with 4 logs. | The agent can process events in the range of 1500–2500 EPS without lag. | CPU: 1 core with 2 threads or 2 vCPUs. RAM: 4 GB. | Average CPU load by the agent: up to 0.18 (equivalent to approximately 0.28% of the total system performance). RAM usage: up to 209 MB. |
| ETW | Up to 700 events per second | Only one system log is supported: Microsoft → Windows → DNS-Server. The Windows operating system does not allow writing more than 700 events per second to this log. | No limitations on the KUMA side have been identified; however, when EPS exceeds 700, the agent does not receive data because of Windows restrictions. | CPU: 1 core with 2 threads or 2 vCPUs. RAM: 4 GB. | Average CPU load by the agent: up to 0.267 (equivalent to approximately 0.42% of the total system performance). RAM usage: up to 186 MB. |
| WMI | Up to 5000 events per second | You can circumvent Windows restrictions by connecting multiple devices. Each additional device increases the total event volume that can be processed by 1000–1500 EPS. | Upon reaching 5000 EPS, the agent starts processing events with a lag. | CPU: 1 core with 2 threads or 2 vCPUs. RAM: 4 GB. | Average CPU load by the agent: up to 0.61 (equivalent to approximately 0.91% of the total system performance). RAM usage: up to 311 MB. |
Hardware requirements for KUMA agents on Linux devices depending on EPS
KUMA agents installed on Linux devices can use various types of transports. The hardware requirements for KUMA agents depend on the type of transport and the EPS (see the table below).
Requirements for KUMA agents on Linux devices depending on EPS
| Transport type | 10,000 EPS | 20,000 EPS | 60,000 EPS | 100,000 EPS |
|---|---|---|---|---|
| HTTP | CPU: 3 cores, RAM: 160 MB. | CPU: 5 cores, RAM: 260 MB. | CPU: 5 cores, RAM: 260 MB. | CPU: 5 cores, RAM: 300 MB. |
Hardware requirements for using the Score AI service and asset status
CPU: 2 cores, 2.7 GHz.
RAM: 8 GB.
Storage: SSD or HDD.
Free disk space: 10 GB for the service itself, plus space for the OS.
Requirements for client devices for managing the KUMA web interface
CPU:
RAM: 8 GB
Supported browsers:
Device requirements for installing KUMA on Kubernetes
Minimum configuration of a Kubernetes cluster for deployment of a high-availability KUMA configuration:
The minimum hardware requirements for devices for installing KUMA on Kubernetes are listed in the table below.
Minimum hardware requirements for installing KUMA on Kubernetes
| | Balancer | Controller | Worker node |
|---|---|---|---|
| CPU | 1 core with 2 threads or 2 vCPUs. | 1 core with 2 threads or 2 vCPUs. | 12 threads or 12 vCPUs. |
| RAM | At least 2 GB | At least 2 GB | At least 24 GB |
| Free disk space | At least 30 GB | At least 30 GB | At least 1 TB in the /opt directory. At least 32 GB in the /var/lib directory. |
| Network bandwidth | 10 Gbps | 10 Gbps | 10 Gbps |