Summary: Recommendations for tuning the infrastructure on which GigaSpaces XAP runs, boosting its performance, and improving its scalability.

Check your Infrastructure First

No matter what kind of optimization you perform, you cannot ignore your infrastructure. Therefore, you must verify that you have the following:

  • Sufficient physical and virtual memory
  • Sufficient disk speed
  • A tuned database
  • Sufficient CPU power to handle the load
  • Network cards configured for speed
  • A JVM with a fast JIT

Max Processes and File Descriptors/Handlers Limit

Linux

Linux imposes a per-user limit on the maximum number of processes, as well as a limit on the number of open file descriptors (which covers processes, files, sockets, and threads). These limits let you control how many processes and open files each user on the server is authorized to have.

To improve performance and stability, set the process limit for the super-user root to at least 8192; higher values such as 32k, or even unlimited, are also adequate:

ulimit -u unlimited

Before deciding on the proper values for the file descriptor limits, further testing and monitoring is required on the actual environment. 8K, 16K, or 32K are used only as examples.

Verify that you set the ulimit using the -n option (e.g. ulimit -n 8192), rather than ulimit 8192. Without an option, ulimit defaults to ulimit -f; given a bare number, it sets the maximum file size in blocks, which might cause a fatal process crash.
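The soft/hard distinction can be inspected from the shell. The commands below only affect the current session; 8192 is an example value from this document, not a GigaSpaces requirement:

```shell
# Show the soft limit on open file descriptors for the current shell.
ulimit -n

# Show the hard limit; an unprivileged user may raise the soft limit
# up to this value, but only root can raise the hard limit itself.
ulimit -Hn

# Raise the soft limit for this session only (example value; test and
# monitor your environment before settling on a number):
# ulimit -n 8192
```

Changes made with ulimit do not survive logout; persistent limits belong in /etc/security/limits.conf.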

How do I configure the File Descriptors on Linux?

Note that the /etc/system file and the rlim_fd_* parameters below apply to Solaris rather than Linux (on Linux, use /etc/security/limits.conf). In /etc/system, the file descriptor hard limit should be set to 8192, and the file descriptor soft limit should be increased from 1024 to 8192, as shown below:

set rlim_fd_max=8192
set rlim_fd_cur=8192

Edit /etc/system with root access and reboot the server. After the reboot, run the following from the application account:

ulimit -n

It should report 8192.

On Linux, change the default limits by modifying the /etc/security/limits.conf file.

Increase the ulimit value when many concurrent users access the space.
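As a sketch, entries such as the following in /etc/security/limits.conf raise the open-file and process limits for all users; the domain and values here are illustrative examples, not GigaSpaces defaults:

```
# /etc/security/limits.conf -- illustrative entries
# <domain>  <type>  <item>   <value>
*           soft    nofile   8192
*           hard    nofile   8192
*           soft    nproc    8192
*           hard    nproc    8192
```

No reboot is needed on Linux; the new limits take effect at the next login session.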

Windows

Windows 2003 has no parameter that deals directly with the number of file handles; they are not explicitly limited. However, file handle allocations consume part of the heap shared section, which is relatively small (512KB by default). If this heap is exhausted, the application might fail.

How do I configure the File Handlers on Windows?

To increase it, run regedit and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems. In the "Windows" value, find the string "SharedSection=1024,3072,512", where 512 (KB) is the size of the heap shared section for processes running in the background. This value should be increased; the recommendation is to raise it initially to 1024KB, with a maximum of 3072KB. A reboot is necessary for the new setting to take effect.
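For example, only the third field of the SharedSection string changes; the first two fields are left untouched (shown here per the initial recommendation above):

```
Before: SharedSection=1024,3072,512
After:  SharedSection=1024,3072,1024
```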

See also - File Descriptors - changing the value for Unix and Windows

A report in the Sun bug database describes a fixed bug (fixed in JVM 1.5 RC1) that mentions a file handle limit of 2035 per JVM; the case has test Java code attached, which could be used to check the effect of the registry reconfiguration.
