Summary: Client Proxy Connectivity and Reconnection Options

Overview

Each space operation on a cluster is routed to a single cluster member or to multiple cluster members. The routing is done based on the operation type and the data partitioning policy. To make this routing possible and efficient, the client proxy holds a set of remote stubs to the relevant cluster members. The proxy connectivity policy determines which remote stubs should be constructed at client startup, and how the cluster members are monitored, which controls the failover response time and reconnection behavior. The client proxy monitors the cluster members in two ways: checking existing stubs, and locating members that do not yet have stubs created for them.

Client Proxy Connectivity Configuration

The proxy connectivity settings should be specified as part of the space configuration (server side). You may specify these as part of the Space Component or via the API. The settings are loaded into the client side once it connects to the space. Client proxy connectivity is controlled via the following settings:

space-config.proxy-settings.connection-monitor
Determines the proxy monitoring policy for space cluster members. Options:
- all - Full monitoring. The client proxy establishes a connection with all cluster members immediately at startup, and all cluster members are monitored as long as the client proxy is alive. Use this policy when failover time is important and needs to be minimal.
- on_demand - Monitoring on demand. Connections to remote spaces are established on demand only; once a connection is established, the connected space and its backups are monitored by the clustered proxy. Use this policy when only part of the cluster is used, for example when all client operations go to the same partition.
- none - No monitoring. Connections to remote spaces are established on demand only, and no monitoring is done. This policy eliminates the monitoring overhead completely (no unnecessary lookups or pings), but it can also increase the failover time.
Default: all

space-config.proxy-settings.ping-frequency
Specifies the ping frequency for cluster members that were already found by the proxy. Replaces the liveness-monitor-frequency system property used in older versions.
Default: 10000 ms

space-config.proxy-settings.lookup-frequency
Specifies the lookup frequency for cluster members that were never looked up by the proxy, or never joined the cluster. Replaces the liveness-detector-frequency system property used in older versions.
Default: 5000 ms

space-config.proxy-settings.connection-retries
Specifies the number of retries for establishing a connection with an unavailable cluster member before failing the operation.
Default: 10

cluster-config.groups.group.fail-over-policy.fail-over-find-timeout
Specifies the wait time between each retry.
Default: 2000 ms

The monitoring algorithm includes an optimization: available spaces are not checked if they are constantly in use, i.e. constantly handling user operations.
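When configuring the space via the API, these settings can be passed as properties when the embedded space is created. The following is a minimal sketch; the space name and the property values shown (the defaults) are illustrative:

import org.openspaces.core.GigaSpace;
import org.openspaces.core.GigaSpaceConfigurer;
import org.openspaces.core.space.UrlSpaceConfigurer;

// Sketch: embedded space created with explicit proxy connectivity settings
GigaSpace gigaSpace = new GigaSpaceConfigurer(
        new UrlSpaceConfigurer("/./mySpace")
            .addProperty("space-config.proxy-settings.connection-monitor", "all")
            .addProperty("space-config.proxy-settings.ping-frequency", "10000")
            .addProperty("space-config.proxy-settings.lookup-frequency", "5000")
            .space())
        .gigaSpace();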

Client Proxy Reconnection

To allow a space client performing space operations (read, write, take, execute) to reconnect to a space cluster that has been completely shut down and restarted, increase the space-config.proxy-settings.connection-retries parameter beyond its default. For example, a value of 100, combined with the default 2000 ms wait between retries, provides several minutes (100 x 2000 ms = 200 seconds) to restart the space cluster before the client fails with a com.j_spaces.core.client.FinderException. See the pu.xml and deploy command examples below for how to set this parameter.

Reconnection with Notify Registration

To allow a client using notifications (via the Session Based Messaging API or the Notify Container) to reconnect and also re-register for notifications, use the LeaseListener. See the Re-Register after Complete Space Shutdown and Restart section for details.
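As a rough sketch (assuming the EventSessionConfig/DataEventSessionFactory messaging API, an existing gigaSpace proxy, and placeholder re-registration logic), lease auto-renewal with a LeaseListener looks roughly as follows:

import com.gigaspaces.events.DataEventSession;
import com.gigaspaces.events.DataEventSessionFactory;
import com.gigaspaces.events.EventSessionConfig;
import net.jini.lease.LeaseListener;
import net.jini.lease.LeaseRenewalEvent;

// Sketch: auto-renew notification leases and get a callback on renewal failure
EventSessionConfig config = new EventSessionConfig();
config.setAutoRenew(true, new LeaseListener() {
    public void notify(LeaseRenewalEvent event) {
        // Lease renewal failed (e.g. after a full space restart) -
        // re-register for notifications here
    }
});
DataEventSession session = DataEventSessionFactory.create(gigaSpace.getSpace(), config);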

See the Resending Notifications after a Space-Client Disconnection section for space-client disconnection behavior.

Reconnection with Blocked Operations

When using blocking operations, such as a read or take operation with timeout > 0, or a Polling Container, GigaSpaces cannot guarantee that the actual read/take wait time will match the specified timeout once a client reconnects to a space that was shut down and restarted (or that failed, with the client rerouted to the backup instance).

Once the client reconnects, the entire read operation is reinitiated, ignoring the amount of time the client had already spent waiting for a matching object before it was disconnected.
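For example (a sketch; MySpaceClass is a placeholder entry class and gigaSpace an existing proxy), the blocking read below is intended to wait up to 60 seconds, but after a reconnection the wait starts over:

// Blocks for up to 60 seconds waiting for a match. If the client reconnects
// to a restarted (or failed-over) space mid-wait, the 60-second timeout restarts.
MySpaceClass template = new MySpaceClass();
MySpaceClass result = gigaSpace.read(template, 60000);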

Reconnection with Local Cache/View

A client using a local view or a local cache will try to reconnect once the space is terminated/undeployed.
The client first has space-config.proxy-settings.connection-retries attempts to reconnect, and afterwards has another space-config.dist-cache.retry-connections attempts to reconnect, with a 5-second (configurable) delay between attempts.

Please note that a local view/cache working against a secured space will lose its credentials after the space-config.proxy-settings.connection-retries attempts are exhausted.

If the client fails to reconnect, a com.j_spaces.core.client.CacheException wrapped in an org.openspaces.core.UncategorizedSpaceException is thrown. During the retry period, the client can still read data from its local cache/view.
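For example (a sketch; gigaview refers to a local view proxy such as the one configured below, and the error handling is illustrative):

import org.openspaces.core.UncategorizedSpaceException;

try {
    // Served from the local view while the master space is unavailable
    MySpaceClass result = gigaview.read(new MySpaceClass());
} catch (UncategorizedSpaceException e) {
    // Thrown once the reconnection retries are exhausted; the cause is a
    // com.j_spaces.core.client.CacheException
    System.err.println("Reconnection failed: " + e.getCause());
}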

A local cache/view that exhausts its space-config.dist-cache.retry-connections retry count becomes unusable and cannot be re-initialized. Set a high value for space-config.dist-cache.retry-connections to allow the client to keep looking for its master space in case of a long disconnection:

// Local view that retries its master space connection up to 100 times
GigaSpace gigaview = new GigaSpaceConfigurer(
        new LocalViewSpaceConfigurer(gigaspace.getSpace())
            .addView(new View<MySpaceClass>(MySpaceClass.class, ""))
            .addProperty("space-config.dist-cache.retry-connections", "100")
            .localView())
        .gigaSpace();

See the Local View and Local Cache sections for additional details.

The following example sets the space-config.proxy-settings.connection-retries parameter in a pu.xml encapsulating an embedded space:

<os-core:space id="space" url="/./space">
    <os-core:properties>
        <props>
            <prop key="space-config.proxy-settings.connection-retries">100</prop>
        </props>
    </os-core:properties>
</os-core:space>

The following example shows how to set the space-config.proxy-settings.connection-retries property when deploying a Data Grid:

>gs deploy-space -cluster schema=partitioned-sync2backup total_members=20,1 -properties "embed://space-config.proxy-settings.connection-retries=100" myIMDG

Unicast Lookup Service Discovery

If you are using unicast lookup service discovery, set the com.gigaspaces.unicast.interval system property to allow the client to keep searching for the lookup service in case it was terminated and later restarted while the client was running. See the How to Configure Unicast Discovery section for details.
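As a sketch (the interval value is illustrative; see the unicast discovery documentation for the exact format), the property can be set either as a JVM argument or programmatically before the space proxy is created:

// Keep retrying unicast lookup discovery (illustrative 5-second interval);
// equivalent to passing -Dcom.gigaspaces.unicast.interval=5000 to the JVM
System.setProperty("com.gigaspaces.unicast.interval", "5000");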
