Data Collector Server Memory and CPU Guidelines
  
Version 10.3.00P13
Use the following guidelines for Data Collector Servers.
Installation on a VM is recommended.
CPU: 2 - 4 CPUs
Memory: 32 GiB minimum. If collecting from more than 40 backup servers, contact Support for sizing recommendations.
Disk Space: 200 GiB minimum. If collecting File Analytics data, an additional 300 GiB of disk space is recommended.
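Before installing, you can confirm a candidate server meets these minimums. A quick check on a Linux host, using standard tools; the /opt path is only an example, so substitute your planned install volume:
nproc          # number of CPUs; expect 2 - 4
free -g        # total memory in GiB; expect 32 or more
df -h /opt     # free disk space on the install volume; expect 200 GiB or more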
Customize the Linux File Handle Setting for Large Collections
In Linux, a portion of memory is designated for file handles, which determine the number of files that can be open at one time. The default limit is 1024. For large data collection policy environments, this number may need to be increased to 8192. A large environment is any collector that collects from 20 or more subsystems, such as 20+ TSM instances or 20+ unique arrays.
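To see the limit currently in effect before making any changes, a quick check from a bash shell on the collector:
ulimit -n    # prints the current open-file limit for this shell; 1024 is the Linux default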
To change the number of file handles, take the following steps.
1. On the Linux Data Collector server, edit /etc/security/limits.conf and add the following lines at the end of the file.
root soft nofile 8192
root hard nofile 8192
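To append the same lines without opening an editor, a one-shot equivalent, run as root and assuming a bash shell (back up the file first):
cat >> /etc/security/limits.conf <<'EOF'
root soft nofile 8192
root hard nofile 8192
EOF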
2. Log out and log back in as root, then run the following commands to validate that all values are set to 8192.
ulimit -n
ulimit -Hn
ulimit -Sn
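Each command should print 8192. To run all three checks in one pass, a small bash loop works as well:
for flag in -n -Hn -Sn; do ulimit $flag; done   # expect three lines of 8192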
3. Restart the Data Collector.
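How you restart the collector depends on how it was installed; on many Linux installations the bundled control script handles this. The path below assumes the default /opt/aptare install location, so verify the script name and location in your environment before running it:
/opt/aptare/mbs/bin/aptare_agent restart   # assumed default install path; adjust for your installation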
Factors Impacting Data Collector Performance and Memory Requirements
Because every environment has a unique set of resources, configured and tuned specifically for that environment, there is no one-size-fits-all formula. Several factors can impact performance and memory requirements:
Number of active Data Collector Policies
Number of hosts and active probes per host
Number and types of storage arrays
Number of LUNs
Polling frequency and number of devices polled
Amount of data transmitted
Performance of array device managers