Pre-Installation Setup > Prerequisites for Adding Data Collectors
  
Version 9.2.00
Prerequisites for Adding Data Collectors
64-bit OS. See the APTARE StorageConsole Certified Configurations Guide for supported operating systems.
Must support Java Runtime Environment (JRE) 1.7.
Must reside in the same time zone as the subsystem from which data will be collected.
If the backup servers belong to a cluster, StorageConsole communicates with the cluster for data collection.
For performance reasons, APTARE recommends that you do not install Data Collectors on the same server as the StorageConsole Portal. However, if you must have both on the same server, verify that the Portal and Data Collector software do not reside in the same directory.
Install only one Data Collector on a server (or OS instance).
NetApp Systems
NetApp ONTAP version 7.3.3 or later.
NetApp ONTAP I2P (inode to pathname) must be configured.
A NetApp user with API privileges is required. See Creating a NetApp User with API Privileges.
When collecting from large NetApp systems (greater than 1 PB):
If the CIFS shares are located at a site that is not local to the Data Collector server, a separate Data Collector is recommended for the remote site. This mitigates access issues associated with WAN traversal.
 
Creating a NetApp User with API Privileges
This role and user are required for collection from NetApp 7-Mode systems.
To create a new user, with the required privileges, on a NetApp system, use the following Command Line Interface (CLI) steps. For the role command, do not include a space after the comma.
filer> useradmin role add apifarole -a login-http-admin,api-*
filer> useradmin group add apifagroup -r apifarole
filer> useradmin user add apifauser -g apifagroup
If api-* does not meet your security requirements, a more restrictive set of File Analytics privileges can be configured using the following steps:
filer> useradmin role add apifarole -a api-volume-list-info,api-nfs-exportfs-list-rules,api-cifs-share-list-iter-start,api-cifs-share-list-iter-next,api-cifs-share-list-iter-end,api-snapdiff-iter-start,api-snapdiff-iter-next,api-snapdiff-iter-end,login-http-admin,api-volume-options-list-info,api-snapshot-list-info,api-snapshot-delete,api-snapshot-create,api-nameservice-map-uid-to-user-name
filer> useradmin group add apifagroup -r apifarole
filer> useradmin user add apifauser -g apifagroup
Note: For the role command, do not include a space after the comma.
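As an optional check, you can confirm that the role, group, and user were created. These list commands are part of the Data ONTAP 7-Mode useradmin CLI; verify them against your ONTAP release:

```
filer> useradmin role list apifarole
filer> useradmin group list apifagroup
filer> useradmin user list apifauser
```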
 
CIFS Shares
The Data Collector must be running on a Windows operating system, Windows Server 2003 or later. This Data Collector can collect both Linux and Windows shares.
The Windows LAN Manager authentication level, in the Local Security Policy (Security Options), must be set to: Send LM & NTLM - use NTLMv2 session security if negotiated. This allows the Data Collector to invoke the net use command with the password supplied on the command line. Without this setting, the command fails on later versions of Windows with system error 86 (invalid password).
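As a sketch, this policy can also be applied through the registry on the Data Collector server. LmCompatibilityLevel value 1 corresponds to "Send LM & NTLM - use NTLMv2 session security if negotiated"; verify the policy behavior for your Windows version before applying it:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f
```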
Windows CIFS Shares collection requires the Windows Domain User ID. This User ID must have Administrative privileges.
Linux CIFS Shares collection requires super-user root privileges. Access control commands, such as sudo, sesudo, and pbrun are also supported. If using any of the access control commands, verify that the User ID has sudo, sesudo, or pbrun privileges.
Collection of owner data for Windows and CIFS shares is configurable via Advanced Parameters; data collection completes faster when owner data is not collected. To disable owner collection, create an Advanced Parameter, FA_RESOLVE_OWNERS, set to N. To access Advanced Parameters in the Portal, select Admin > Advanced > Parameters.
 
 
Host Inventory File Analytics Probe
Host Resources data collection discovers hosts and adds them to the Host Inventory. Once a host is listed in the inventory, you can select it and configure the File Analytics probe. To access the Host Inventory and enable File Analytics probes, select: Admin > Data Collection > Host Inventory
Note that by design, File Analytics host resources data collection occurs only when the probe is activated in the Host Inventory window in the Portal. Collection does not occur in the following circumstance:
The Validate option in the Portal's Host Inventory window only runs a connectivity check. It does not collect File Analytics data.
File Analytics Probe Configurations by Operating System
Windows servers: A Data Collector must be running on a Windows Server 2008 server. A Domain Administrator ID is required when collecting file-level data for File Analytics.
Linux servers: Only Linux is supported (not HP-UX or AIX), with the following requirements:
Root user access is supported.
Non-root user access with sudo access control is supported.
Non-root user access without sudo is not supported.
Running collection with a sudo user on a Linux server requires adding an access control command for the server in the Host Inventory’s Manage Access Control window:
Admin > Data Collection > Host Inventory
Also, an advanced parameter must be created: FA_USE_SUDO set to Y.
To access Advanced Parameters in the Portal, select Admin > Advanced > Parameters.
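For reference, a minimal sudoers entry for the collection account might look like the following. The account name aptarefa is an assumption for illustration; substitute the User ID configured in the Host Inventory, and scope the command list to match your site's security policy:

```
# /etc/sudoers.d/aptarefa -- hypothetical collection account name
# Allows the collector's User ID to run commands as root without a password prompt
aptarefa ALL=(root) NOPASSWD: ALL
```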
Both Windows and Linux Servers
If running collection via the checkinstall utility, verify the following:
An Advanced Parameter, FA_HOST_VALIDATE, must be created and set to Y.
To access Advanced Parameters in the Portal, select Admin > Advanced > Parameters.
Best Practices for Host Inventory File Analytics Probes
File Analytics should be configured to run daily for all hosts/servers.
Since most environments have hundreds, even thousands of hosts, it is recommended that File Analytics probes be configured in a staggered schedule so as not to overload the Data Collector server.
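As an illustrative sketch (not a product feature; the host count and window size are assumptions), a staggered schedule can be planned by spacing probe start times evenly across a collection window:

```shell
# Hypothetical example: spread 12 File Analytics probes evenly across a
# 6-hour (360-minute) overnight window so they do not start simultaneously.
hosts=12
window_minutes=360
interval=$((window_minutes / hosts))   # minutes between probe start times
i=0
while [ "$i" -lt "$hosts" ]; do
  printf 'probe %02d starts at +%d minutes\n' "$i" $((i * interval))
  i=$((i + 1))
done
```

With these numbers, each probe starts 30 minutes after the previous one; adjust the window and host count to match your environment.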