Summary: Client Proxy Connectivity and Reconnection Options
Overview

A client proxy is an object used by the application to interface with the data grid. The application is unaware of it, but under the hood it maintains, on the client side, a mapping of all the data-grid members (logical partitions) to their physical locations. Using this information it routes requests (read/write) to the correct target logical partition, or performs parallel map-reduce style activity when needed (readMultiple/writeMultiple/execute). The master copy of the logical partition mapping is held by the Lookup Service, which is responsible for updating the client proxy with the latest location of each logical partition when the proxy bootstraps itself, when the system scales, or when a failure triggers the promotion of a backup to primary and the creation of a new backup instance.

Each space operation on a data-grid cluster is routed to a single cluster member or to multiple cluster members. The routing is based on the operation type and the data partitioning policy. To make this routing possible and efficient, the client proxy holds a set of remote stubs to the relevant cluster members. The proxy connectivity policy determines which remote member stubs are constructed at client startup and how they are monitored; this controls the failover response time and reconnection behavior. The client proxy monitors the cluster members in two ways: checking existing stubs and locating members for which stubs have not yet been created.

Client Proxy Connectivity Configuration

The proxy connectivity settings should be specified as part of the space configuration (server side). You may specify them as part of the Space Component or via the API. The settings are loaded into the client side once it connects to the space. Client proxy connectivity is controlled via settings under the space-config.proxy-settings prefix, as in the sketch below:
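The following is a minimal sketch of specifying a proxy connectivity setting via the API when starting an embedded space. The only property key taken from this page is space-config.proxy-settings.connection-retries; the space name and entry usage are illustrative, and the exact configurer classes may vary between product versions.

```java
import org.openspaces.core.GigaSpace;
import org.openspaces.core.GigaSpaceConfigurer;
import org.openspaces.core.space.UrlSpaceConfigurer;

public class SpaceWithProxySettings {
    public static void main(String[] args) {
        // Start an embedded space and attach a proxy connectivity setting.
        // Clients pick these settings up when they connect to the space.
        UrlSpaceConfigurer spaceConfigurer = new UrlSpaceConfigurer("/./mySpace")
                // connection-retries is the one key spelled out on this page;
                // other space-config.proxy-settings.* keys follow the same pattern.
                .addProperty("space-config.proxy-settings.connection-retries", "100");

        GigaSpace gigaSpace = new GigaSpaceConfigurer(spaceConfigurer.space()).gigaSpace();

        // ... use gigaSpace as usual; routing to the correct partition,
        // stub creation and monitoring are handled by the client proxy.
    }
}
```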
An optimization is applied to the monitoring algorithm: available spaces are not checked if they are constantly in use, i.e. constantly handling user operations.

Client Proxy Reconnection

To allow a space client performing the different space operations (read, write, take, execute) to reconnect to a space cluster that has been completely shut down and restarted, make sure you increase the space-config.proxy-settings.connection-retries parameter to a value higher than the default. A value of 100 gives you several minutes to restart the space cluster before the client fails with com.j_spaces.core.client.FinderException.

Reconnection with Notify Registration

To allow a client using notifications (via the Session Based Messaging API or the Notify Container) to reconnect and also re-register for notifications, use a LeaseListener. See the Re-Register after complete space shutdown and restart section for details.
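Below is a hedged sketch of the LeaseListener approach. It assumes the Session Based Messaging API's EventSessionConfig exposes setAutoRenew(boolean, LeaseListener) (signature may differ per version), and the re-registration helper is hypothetical; the full recipe is in the Re-Register after complete space shutdown and restart section.

```java
import net.jini.lease.LeaseListener;
import net.jini.lease.LeaseRenewalEvent;
import com.gigaspaces.events.EventSessionConfig;

public class NotifyReRegistration {

    private final EventSessionConfig config = new EventSessionConfig();

    public void configure() {
        // Auto-renew the notify lease and get a callback when renewal fails,
        // e.g. because the space was shut down and restarted.
        config.setAutoRenew(true, new LeaseListener() {
            public void notify(LeaseRenewalEvent event) {
                // The registration is gone on the server side - register again.
                reRegisterForNotifications();
            }
        });
        // The config is then used when creating the DataEventSession /
        // notify registration (call omitted here - see the referenced section).
    }

    private void reRegisterForNotifications() {
        // Hypothetical helper: re-create the notify registration
        // (template, listener, lease) against the restarted space.
    }
}
```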
Reconnection with Blocked Operations

When using blocking operations such as a read operation with timeout > 0, a take operation with timeout > 0, or a Polling Container, GigaSpaces cannot guarantee that the read/take timeout will match the specified timeout once a client reconnects to a space that was shut down and restarted (or failed, with the client routed to the backup instance). Once the client reconnects, the entire read operation is reinitiated, ignoring the amount of time the client already spent waiting for a matching object before it was disconnected.

Reconnection with Local Cache/View

For information regarding local cache/view reconnection, refer to Local Cache or Local View.

Unicast Lookup Service Discovery

If you are using unicast lookup service discovery, set the com.gigaspaces.unicast.interval system property to allow the client to keep searching for the lookup service in case it is terminated and later restarted while the client is running. See How to Configure Unicast Discovery for details.

Inactive Space Retries

When a primary space partition fails and a backup space partition is elected as the new primary, the client proxy recognizes the primary failure and routes requests to the backup. Electing a backup to primary is done using an active election process and is not instantaneous (it might take a few seconds, depending on the active election configuration parameters). Client requests directed to this partition during this time will still complete, because the proxy has retry logic for com.gigaspaces.cluster.activeelection.InactiveSpaceException conditions: it retries the same operation until the space becomes active (primary). The retry limit and the gap between retries can be configured using client-side system properties, as in the sketch below:
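The sketch below shows how such client-side system properties might be set programmatically (they can equally be passed as -D JVM arguments). Only com.gigaspaces.unicast.interval is named on this page; the inactive-retry property names and all values are assumptions and should be verified against your release.

```java
public class ClientSystemProperties {
    public static void main(String[] args) {
        // Keep retrying unicast lookup discovery if the lookup service is
        // restarted while the client is running. The value format is described
        // on the How to Configure Unicast Discovery page; 5000 is illustrative.
        System.setProperty("com.gigaspaces.unicast.interval", "5000");

        // ASSUMED property names (not confirmed by this page): how many times,
        // and how often (ms), the proxy retries an operation that hit
        // InactiveSpaceException while active election is in progress.
        System.setProperty("com.gs.client.inactiveRetryLimit", "20");
        System.setProperty("com.gs.failover.standby-wait-time", "1000");

        // ... bootstrap the space proxy only after the properties are set.
    }
}
```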