Summary: Explains how to deploy and manage an Elastic Processing Unit (EPU)
Overview
Basic steps when using the EPU: deploy it via the Admin API with an initial capacity, let it provision containers and machines on demand, and scale it (manually or via triggers) as load changes.

Here is a simple example of scaling a running EPU. In the following illustration, the system initially uses 2 machines with 20 partitions and 20 instances per machine (40 instances total), 4 instances per GSC, and a GSC capacity of 8GB, for a total memory capacity of 80GB (10 GSCs x 8GB). After scaling it out to 10 machines, we have 4 instances per machine and 1 instance per GSC (40 GSCs), for a total memory capacity of 320GB.
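To make the arithmetic above explicit, here is a minimal sketch of the capacity calculation (plain Java; the values come from the illustration - nothing here is an API call):

int gscCapacityGB = 8;

// Before: 2 machines x 20 instances each = 40 instances; 4 instances per GSC -> 10 GSCs
int gscsBefore = (2 * 20) / 4;
System.out.println("Before: " + (gscsBefore * gscCapacityGB) + " GB"); // 80 GB

// After: 10 machines x 4 instances each = 40 instances; 1 instance per GSC -> 40 GSCs
int gscsAfter = (10 * 4) / 1;
System.out.println("After: " + (gscsAfter * gscCapacityGB) + " GB"); // 320 GB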
This page has three main sections: EPU Deployment, Scale Triggers, and Machine Provisioning.
EPU Deployment

The deployment of a partitioned (space-based) EPU and a stateless/web EPU is done via the Admin API. In order for the deployment to work, the Admin API must first discover a running GSM, ESM (managers) and running GSAs (GigaSpaces agents).

// Wait for the discovery of the managers and at least one GigaSpaces agent
Admin admin = new AdminFactory().addGroup("myGroup").create();
admin.getGridServiceAgents().waitForAtLeastOne();
admin.getElasticServiceManagers().waitForAtLeastOne();
GridServiceManager gsm = admin.getGridServiceManagers().waitForAtLeastOne();

Maximum Memory Capacity

The EPU deployment requires two important properties: memoryCapacityPerContainer, the memory capacity of each container (GSC), and maxMemoryCapacity, the estimated maximum total memory the Processing Unit may need (primary and backup instances combined).
Here is a typical example of a memory-capacity Processing Unit deployment. The example also includes a scale trigger, which is explained in the following sections of this page.

// Deploy the Elastic Stateful Processing Unit
ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .maxMemoryCapacity(512, MemoryUnit.GIGABYTES)
        // initial scale
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(128, MemoryUnit.GIGABYTES)
            .create())
);

Here is the same example again; this time the deployed Processing Unit is a pure Space (no jar files):

// Deploy the Elastic Space
ProcessingUnit pu = gsm.deploy(
    new ElasticSpaceDeployment("mySpace")
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .maxMemoryCapacity(512, MemoryUnit.GIGABYTES)
        // initial scale
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(128, MemoryUnit.GIGABYTES)
            .create())
);

These two properties are used to calculate the number of partitions for the Processing Unit. For example, with a maxMemoryCapacity of 1024, a memoryCapacityPerContainer of 256 and one backup per partition:

minTotalNumberOfInstances = ceil(maxMemoryCapacity/memoryCapacityPerContainer) = ceil(1024/256) = 4
numberOfPartitions = ceil(minTotalNumberOfInstances/(1+numberOfBackupsPerPartition)) = ceil(4/(1+1)) = 2
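Applying the same formula to the deployment example above (512GB maxMemoryCapacity, 16GB per container, one backup per partition) - a minimal sketch; the variable names are ours and only the arithmetic is shown:

long maxMemoryCapacityGB = 512;
long memoryCapacityPerContainerGB = 16;
int numberOfBackupsPerPartition = 1;

// integer ceil(512 / 16) = 32 instances in total (primaries + backups)
long minTotalNumberOfInstances =
    (maxMemoryCapacityGB + memoryCapacityPerContainerGB - 1) / memoryCapacityPerContainerGB;

// integer ceil(32 / 2) = 16 partitions
long numberOfPartitions =
    (minTotalNumberOfInstances + numberOfBackupsPerPartition) / (1 + numberOfBackupsPerPartition);

System.out.println(numberOfPartitions); // 16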
Maximum Number of CPU Cores

In many cases you should take the number of space operations per second into consideration when scaling the system, with memory utilization as a secondary factor. For example, if the system performs mostly data updates (as opposed to reading data), CPU resources could be a more limiting factor than the total memory capacity. In these cases use the maxNumberOfCpuCores deployment property. Here is a typical deployment example that includes CPU capacity planning:

// Deploy the EPU
ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .maxMemoryCapacity(512, MemoryUnit.GIGABYTES)
        .maxNumberOfCpuCores(32)
        // continuously scale as new machines are started
        .scale(new EagerScaleConfig())
);

The maxNumberOfCpuCores property provides an estimate for the maximum total number of CPU cores on machines that have one or more primary Processing Unit instances deployed (instances that are not in backup state). Internally, the number of partitions is calculated as follows (for example, with a maxMemoryCapacity of 1024, a memoryCapacityPerContainer of 256, a maxNumberOfCpuCores of 8 and a minNumberOfCpuCoresPerMachine of 2):

minTotalNumberOfInstances = ceil(maxMemoryCapacity/memoryCapacityPerContainer) = ceil(1024/256) = 4
minNumberOfPrimaryInstances = ceil(maxNumberOfCpuCores/minNumberOfCpuCoresPerMachine) = ceil(8/2) = 4
numberOfPartitions = max(minNumberOfPrimaryInstances, ceil(minTotalNumberOfInstances/(1+numberOfBackupsPerPartition))) = max(4, ceil(4/(1+1))) = 4

In order to evaluate minNumberOfCpuCoresPerMachine, the deployment communicates with each discovered GigaSpaces agent and collects the number of CPU cores the operating system reports. In case a machine provisioning plugin (cloud) is used, the plugin provides that estimate instead. The minNumberOfCpuCoresPerMachine deployment property can also be defined explicitly.

Explicit Number of Partitions

The numberOfPartitions property allows explicit definition of the number of space partitions. When numberOfPartitions is defined, maxMemoryCapacity and maxNumberOfCpuCores should not be defined.

// Deploy the EPU
ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .numberOfPartitions(12)
        .scale(new EagerScaleConfig())
);

Here is another example: deployment with an explicit number of partitions and a memory capacity scale trigger:

// Deploy the EPU
ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .numberOfPartitions(12)
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(16, MemoryUnit.GIGABYTES)
            .create())
);

// Application continues
Thread.sleep(10000);

// Scale out to 32GB memory
pu.scale(new ManualCapacityScaleConfigurer()
    .memoryCapacity(32, MemoryUnit.GIGABYTES)
    .create()
);

Specifying the number of partitions explicitly is recommended only when fine-grained scale triggers are required. The example below illustrates a 12-partition system (12 primaries + 12 backups = 24 instances). See how the total memory capacity of the system grows as a function of the number of containers and memoryCapacityPerContainer:
(Illustrations omitted: the same 24-instance system shown with a memoryCapacityPerContainer of 6G, 12G and 24G, yielding progressively larger total memory capacity.)
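After any of the deployments above, the client can block until every instance (primaries and backups) has been provisioned. A short hedged sketch - waitFor is part of the Admin API's ProcessingUnit interface; the timeout value is illustrative:

// Block until all instances are provisioned, or give up after 5 minutes
boolean provisioned = pu.waitFor(pu.getTotalNumberOfInstances(), 5, TimeUnit.MINUTES);
if (!provisioned) {
    throw new IllegalStateException("EPU instances were not provisioned in time");
}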
Deployment on a Single Machine (for development purposes)

For development and demonstration purposes, it is very convenient to deploy the EPU on a single machine. By default, the minimum number of machines is two (for high-availability concerns). This can be changed using the singleMachineDeployment property.

// Deploy the EPU
ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(256, MemoryUnit.MEGABYTES)
        .maxMemoryCapacity(1024, MemoryUnit.MEGABYTES)
        .singleMachineDeployment() // deploy on a single machine
        // other processes running on the machine will have at least 2GB left
        .dedicatedMachineProvisioning(
            new DiscoveredMachineProvisioningConfigurer()
                .reservedMemoryCapacityPerMachine(2, MemoryUnit.GIGABYTES)
                .create())
        // initial scale
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(512, MemoryUnit.MEGABYTES)
            .create())
);

Stateless / Web Elastic Processing Units

Stateless Processing Units do not include an embedded space, and are therefore not partitioned. Deployment of a stateless Processing Unit is performed by specifying the required total number of CPU cores. This ensures one container per machine.

// Deploy the Elastic Stateless Processing Unit
ProcessingUnit pu = gsm.deploy(
    new ElasticStatelessProcessingUnitDeployment("servlet.war")
        .memoryCapacityPerContainer(4, MemoryUnit.GIGABYTES)
        // initial scale
        .scale(new ManualCapacityScaleConfigurer()
            .numberOfCpuCores(10)
            .create())
);

Scale Triggers

Manual Capacity Scale Trigger

The system administrator may specify the memory and/or CPU core resources required for the Processing Unit in production. These can be specified at deployment time, and also at any time after the deployment. The memory capacity trigger affects the number of provisioned containers; if there are not enough machines to host the provisioned containers, it also affects the number of provisioned machines. The number of CPU cores directly affects the number of provisioned machines (even if it means that some of the machines have unused memory).

When specifying both memory and cores capacity requirements as part of the deploy and scale routines, the EPU is deployed successfully only when both memory and cores resources can be allocated (a sufficient amount of memory and cores across the available machines). If you would like the memory capacity requirement to take precedence over the cores requirement, specify a cores value lower than the actual number of existing cores.

Here is an example of how you can scale a deployed EPU's memory and CPU capacity.

Step 1 - Deploy the PU:

We deploy the PU with 512GB as the maximum total amount of memory utilized by both primary and backup instances, where the entire system should consume a maximum of 32 cores. At start, only 128GB and 8 cores will be utilized.

ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .maxMemoryCapacity(512, MemoryUnit.GIGABYTES)
        .maxNumberOfCpuCores(32)
        // set the initial memory and CPU capacity
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(128, MemoryUnit.GIGABYTES)
            .numberOfCpuCores(8)
            .create())
);

// Wait until the deployment is complete.
pu.waitForSpace().waitFor(pu.getTotalNumberOfInstances());

Step 2 - Increase the memory capacity from 128GB to 256GB and the number of cores from 8 to 16:

ProcessingUnit pu = admin.getProcessingUnits().waitFor("myPU", 5, TimeUnit.SECONDS); // get the PU

// Increasing the memory capacity starts new containers on
// existing machines if enough free memory is available
pu.scale(new ManualCapacityScaleConfigurer()
    .memoryCapacity(256, MemoryUnit.GIGABYTES)
    .numberOfCpuCores(16)
    .create());

Step 3 - Increase the memory capacity from 256GB to 512GB and the number of cores from 16 to 32:

ProcessingUnit pu = admin.getProcessingUnits().waitFor("myPU", 5, TimeUnit.SECONDS); // get the PU

// Scales out to more CPU cores (existing containers are terminated on existing machines, and
// new ones are started on new machines if not enough CPU cores are available on existing machines)
pu.scale(new ManualCapacityScaleConfigurer()
    .memoryCapacity(512, MemoryUnit.GIGABYTES)
    .numberOfCpuCores(32)
    .create());

Step 4 - Decrease the memory and CPU capacity:

ProcessingUnit pu = admin.getProcessingUnits().waitFor("myPU", 5, TimeUnit.SECONDS); // get the PU

pu.scale(new ManualCapacityScaleConfigurer()
    .memoryCapacity(128, MemoryUnit.GIGABYTES)
    .numberOfCpuCores(8)
    .create());

Eager Scale Trigger

The Eager trigger scales the EPU across all available machines and any new machines joining the GigaSpaces grid. Each new machine running a GigaSpaces agent automatically starts a new container hosting EPU partition instance(s) relocated from some other container. To use the Eager scale trigger, scale the EPU using the EagerScaleConfigurer:

pu.scale(new EagerScaleConfigurer().create());
The Eager trigger has the following limitation: each eager EPU tries to consume all discovered machines, so two eager EPUs cannot share the same set of machines. To deploy two eager EPUs side by side, isolate each deployment to its own agent zone, as the example below shows:
ProcessingUnit pu1 = gsm.deploy(
    new ElasticSpaceDeployment("eagerspace1")
        .maxMemoryCapacity(10, MemoryUnit.GIGABYTES)
        .memoryCapacityPerContainer(1, MemoryUnit.GIGABYTES)
        // discover only agents with "zone1"
        .dedicatedMachineProvisioning(
            new DiscoveredMachineProvisioningConfigurer()
                .addGridServiceAgentZone("zone1")
                .removeGridServiceAgentsWithoutZone()
                .create())
        // eager scale
        .scale(new EagerScaleConfigurer()
            .create())
);

ProcessingUnit pu2 = gsm.deploy(
    new ElasticSpaceDeployment("eagerspace2")
        .maxMemoryCapacity(10, MemoryUnit.GIGABYTES)
        .memoryCapacityPerContainer(1, MemoryUnit.GIGABYTES)
        // discover only agents with "zone2"
        .dedicatedMachineProvisioning(
            new DiscoveredMachineProvisioningConfigurer()
                .addGridServiceAgentZone("zone2")
                .removeGridServiceAgentsWithoutZone()
                .create())
        // eager scale
        .scale(new EagerScaleConfigurer()
            .create())
);

The difference between the Eager scale trigger and the Manual capacity trigger in terms of maximum memory and CPU: with Eager scale the EPU expands onto every discovered machine (bounded by the deployment's maxMemoryCapacity), whereas with Manual capacity the EPU consumes only as much memory and as many CPU cores as explicitly requested.
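Whichever scale trigger is used, a scaled EPU is removed like any other Processing Unit. A brief hedged sketch - undeployAndWait and close are Admin API calls; the timeout is illustrative:

// Undeploy the EPU, waiting for its containers and machines to be released
pu.undeployAndWait(3, TimeUnit.MINUTES);
// Release the Admin's discovery resources when the client is done
admin.close();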
Machine Provisioning

System Bootstrapping

Each machine requires a single running GigaSpaces agent. The example below shows how to start a new GigaSpaces agent. The command-line parameters instruct the agents to communicate with each other and start the specified number of managers. No containers are started automatically; the EPU starts containers on demand. This means that potentially any machine could be a management machine:
Windows
rem Agent deployment that potentially can start management processes
set LOOKUPGROUPS=myGroup
set JSHOMEDIR=d:\gigaspaces
start cmd /c "%JSHOMEDIR%\bin\gs-agent.bat gsa.global.esm 1 gsa.gsc 0 gsa.global.gsm 2 gsa.global.lus 2"
Linux

# Agent deployment that potentially can start management processes
export LOOKUPGROUPS=myGroup
export JSHOMEDIR=~/gigaspaces
nohup ${JSHOMEDIR}/bin/gs-agent.sh gsa.global.esm 1 gsa.gsc 0 gsa.global.gsm 2 gsa.global.lus 2 > /dev/null 2>&1 &
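Once agents are running, a client can verify discovery before deploying. A minimal hedged sketch - the timeout-bounded waitForAtLeastOne overload is part of the Admin API; the printed message is ours:

// Discover the grid using the same lookup group the agents were started with
Admin admin = new AdminFactory().addGroup("myGroup").create();
GridServiceAgent agent = admin.getGridServiceAgents().waitForAtLeastOne(30, TimeUnit.SECONDS);
if (agent == null) {
    throw new IllegalStateException("No GigaSpaces agent discovered within 30 seconds");
}
System.out.println("Discovered agent on " + agent.getMachine().getHostName());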
Dedicated Management Machines

If you prefer having dedicated management machines, start GigaSpaces agents with the above settings on two machines, and start the rest of the GigaSpaces agents with the settings below. The command-line parameters instruct these GigaSpaces agents not to start managers. No containers are started automatically; the EPU starts containers on demand:
Windows
rem Agent that does not start management processes
set LOOKUPGROUPS=myGroup
set JSHOMEDIR=d:\gigaspaces
start cmd /c "%JSHOMEDIR%\bin\gs-agent.bat gsa.global.esm 0 gsa.gsc 0 gsa.global.gsm 0 gsa.global.lus 0"
Linux

# Agent that does not start management processes
export LOOKUPGROUPS=myGroup
export JSHOMEDIR=~/gigaspaces
nohup ${JSHOMEDIR}/bin/gs-agent.sh gsa.global.esm 0 gsa.gsc 0 gsa.global.gsm 0 gsa.global.lus 0 > /dev/null 2>&1 &
Configure the EPU scale config to use dedicatedManagementMachines, and reduce reservedMemoryCapacityPerMachine accordingly.

Zone Based Machine Provisioning

The EPU can be deployed into a specific zone. This allows you to determine the specific locations of the EPU instances. You may have multiple EPUs deployed, each into a different zone. To specify the agent zone, set the zone name before starting the agent. Here is an example of setting the agent zone to zoneX:

export GSA_JAVA_OPTIONS="-Dcom.gs.zones=zoneX ${GSA_JAVA_OPTIONS}"
gs-agent.sh gsa.global.lus 1 gsa.lus 0 gsa.global.gsm 1 gsa.gsm 0 gsa.gsc 0 gsa.global.esm 1
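Before deploying into the zone, you can confirm that the agent actually picked it up. A short sketch, assuming the Admin API's zone accessor (getZones) is available on discovered agents as it is on other grid components:

// Check which zones the discovered agent reports
GridServiceAgent agent = admin.getGridServiceAgents().waitForAtLeastOne();
System.out.println("Agent zones: " + agent.getZones().keySet()); // expected to contain "zoneX"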
When deploying the EPU, specify the zone into which the EPU should be deployed:

ProcessingUnit pu1 = gsm.deploy(
    new ElasticSpaceDeployment("mySpace")
        .maxMemoryCapacity(512, MemoryUnit.GIGABYTES)
        .memoryCapacityPerContainer(10, MemoryUnit.GIGABYTES)
        // discover only agents with "zoneX"
        .dedicatedMachineProvisioning(new DiscoveredMachineProvisioningConfigurer()
            .addGridServiceAgentZone("zoneX")
            .removeGridServiceAgentsWithoutZone()
            .create())
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(256, MemoryUnit.GIGABYTES)
            .create())
);

With the above, the mySpace EPU is deployed only onto agents associated with zoneX; agents without a zone specified are ignored.

Automatic Machine Provisioning

The EPU supports automatic virtual machine provisioning through custom plugins. The plugins are open source (http://svn.openspaces.org/cvi/trunk) and provide implementations of the machine-provisioning plugin interfaces (ElasticMachineProvisioning / NonBlockingElasticMachineProvisioning). When deploying an EPU, pass the plugin's configuration as the machine-provisioning deployment property:

// Deploy the EPU
ProcessingUnit pu = gsm.deploy(
    new ElasticStatefulProcessingUnitDeployment(new File("myPU.jar"))
        .memoryCapacityPerContainer(16, MemoryUnit.GIGABYTES)
        .maxMemoryCapacity(512, MemoryUnit.GIGABYTES)
        .maxNumberOfCpuCores(32)
        // automatically start new virtual machines on demand
        .dedicatedMachineProvisioning(new XenServerMachineProvisioning("xenserver.properties"))
);

When deploying GigaSpaces XAP on the management machine(s), place the plugin JAR file under the /gigaspaces-xap/lib/platform/esm folder. The ESM then loads the classes specified by the machineProvisioning configuration. These classes need to implement either ElasticMachineProvisioning or NonBlockingElasticMachineProvisioning, and must also implement the Bean interface, which resembles a Spring bean.

Automatic Rebalancing

Each stateful Processing Unit (embedding a space) has a fixed number of logical partitions. The number of logical partitions can be specified by the user, or calculated by GigaSpaces from the memory capacity requirements at deployment time. Each logical partition has, by default, two instances - a primary and a backup. Instances of an EPU are automatically relocated between containers until they are spread evenly across the containers and machines, as described in the next section.
How Rebalancing Works

The GigaSpaces runtime environment differentiates between a container (GSC) - a grid node running within a single JVM instance - and an IMDG node, also called a logical partition. A partition has one primary instance and zero or more backup instances. A grid node hosts zero or more logical partition instances; these may be primary or backup instances belonging to different logical partitions. Logical partition instances (primary or backup) may relocate between grid nodes at runtime. A deployed stateful PU may expand or shrink its capacity in real time by adding or removing grid nodes and relocating existing logical partitions to the newly started grid nodes.

If the system selects to relocate a primary instance, it first switches its activity mode into backup mode, and the existing backup instance of the same logical partition is switched into primary mode. Once the new backup has been relocated, it recovers its data from the existing primary. This is how the PU expands its capacity without disruption to the client application.

GigaSpaces rebalancing controls which logical partition is moved, where it is moved to, and whether to demote a primary to a backup instance. Without such control, the system might move partitions into containers that are fully consumed, or move too many instances into the same container - which would crash the system.

Adaptive SLA

GigaSpaces adjusts the high-availability SLA dynamically to cope with the current system memory resources. This means that if there is not sufficient memory capacity to instantiate all the backup instances, GigaSpaces relaxes the SLA at runtime to allow the system to continue running. Once the system identifies that there are enough resources to accommodate all the backups, it starts the missing backups automatically.

Automatic Rebalancing Process Considerations
Shared Machine Provisioning
To share a machine between Processing Units, you need to specify a sharing ID - a simple string identifying your sharing policy. When a Processing Unit is requested to scale out, a new machine is marked with the sharing ID of the Processing Unit. This ensures that no other Processing Unit will race to occupy the machine. The sharing ID is later matched against other Processing Units' sharing IDs to allow or deny sharing of the same machine resource. To simulate a 'public' sharing policy, use the same sharing ID for all deployments, for example .sharedMachineProvisioning("public"). The following example shows two elastic stateless Processing Units that may share each other's machine resources.

// Deploy the Elastic Stateless Processing Unit on "site1"
ProcessingUnit puA = gsm.deploy(
    new ElasticStatelessProcessingUnitDeployment("servlet.war")
        .name("servlet-A")
        .memoryCapacityPerContainer(4, MemoryUnit.GIGABYTES)
        // initial scale
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(1, MemoryUnit.GIGABYTES)
            .create())
        .sharedMachineProvisioning("site1", machineProvisioningConfig)
);

// Deploy the Elastic Stateless Processing Unit on "site1"
ProcessingUnit puB = gsm.deploy(
    new ElasticStatelessProcessingUnitDeployment("servlet.war")
        .name("servlet-B")
        .memoryCapacityPerContainer(4, MemoryUnit.GIGABYTES)
        // initial scale
        .scale(new ManualCapacityScaleConfigurer()
            .memoryCapacity(1, MemoryUnit.GIGABYTES)
            .create())
        .sharedMachineProvisioning("site1", machineProvisioningConfig)
);

Main Configuration Properties

Elastic Deployment Topology Configuration

The main configuration properties for ElasticStatefulProcessingUnitDeployment, ElasticStatelessProcessingUnitDeployment and ElasticSpaceDeployment are those shown in the examples above - for example memoryCapacityPerContainer, maxMemoryCapacity, maxNumberOfCpuCores, numberOfPartitions and singleMachineDeployment.
Scale Strategy Configuration

The main configuration properties for ManualCapacityScaleConfigurer and EagerScaleConfigurer are those shown in the examples above - for example memoryCapacity and numberOfCpuCores for the manual capacity strategy.
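The Configurer classes shown throughout this page are fluent builders over plain configuration beans. As a hedged sketch (assuming ManualCapacityScaleConfig's setMemoryCapacityInMB setter), an equivalent manual-capacity scale request can be assembled directly:

// Equivalent to: new ManualCapacityScaleConfigurer().memoryCapacity(128, MemoryUnit.GIGABYTES).create()
ManualCapacityScaleConfig config = new ManualCapacityScaleConfig();
config.setMemoryCapacityInMB(128 * 1024); // 128 GB expressed in MB
pu.scale(config);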
EPU Example

The following demo illustrates the EPU. You may run it on your development machine or on your deployment machine.
Running the Example

1. Download and install GigaSpaces XAP.

2. Start a GigaSpaces agent without any running GSCs:

gs-agent gsa.esm 1 gsa.gsc 0 gsa.lus 1 gsa.gsm 1

3. Run the demo client:

call C:\gigaspaces-xap-premium-8.0.0-ga\bin\setenv.bat
java -cp bin;%GS_JARS% -Djava.rmi.server.hostname=127.0.0.1 -DlocalMachineDemo=true com.test.scaledemo.ScaleDemoMain
Demo Expected Instances Distribution

When running the GS-UI you will see the deployed containers and instance distribution (screenshot omitted).

Demo Expected Output

The client application will display the following output:
Initial State
Welcome to GigaSpaces scalability Demo
Log file: C:\gigaspaces-xap-premium-8.0.0-ga\logs\2011-03-01~12.34-gigaspaces-service-127.0.0.1-6760.log
Created Admin - OK!
Data Grid PU not running - initial deploy
--- > Local Machine Demo - Starting initial deploy - Deploying a PU with:64MB
Tue Mar 01 12:34:53 EST 2011>> Total Memory used:0.0 MB - Progress:0.0 % done - Total Containers:0
Tue Mar 01 12:34:57 EST 2011>> Total Memory used:0.0 MB - Progress:0.0 % done - Total Containers:1
Tue Mar 01 12:35:01 EST 2011>> Total Memory used:0.0 MB - Progress:0.0 % done - Total Containers:1
Tue Mar 01 12:35:05 EST 2011>> Total Memory used:0.0 MB - Progress:0.0 % done - Total Containers:2
Tue Mar 01 12:35:09 EST 2011>> Total Memory used:0.0 MB - Progress:0.0 % done - Total Containers:2
Tue Mar 01 12:35:13 EST 2011>> Total Memory used:0.0 MB - Progress:0.0 % done - Total Containers:2
Tue Mar 01 12:35:17 EST 2011>> Total Memory used:64.0 MB - Progress:100.0 % done - Total Containers:2
Initial Deploy done! - Time to deploy system:32 seconds

Scaling to 128 MB
About to start changing data-grid memory capacity from 64.0 MB to 128 MB
Hit enter to scale the data grid...
Tue Mar 01 12:37:02 EST 2011>> Total Memory used:64.0 MB - Progress:50.0 % done - Total Containers:2
Tue Mar 01 12:37:04 EST 2011>> Total Memory used:64.0 MB - Progress:50.0 % done - Total Containers:2
Tue Mar 01 12:37:06 EST 2011>> Total Memory used:64.0 MB - Progress:50.0 % done - Total Containers:3
Tue Mar 01 12:37:08 EST 2011>> Total Memory used:64.0 MB - Progress:50.0 % done - Total Containers:3
Tue Mar 01 12:37:10 EST 2011>> Total Memory used:64.0 MB - Progress:50.0 % done - Total Containers:4
Tue Mar 01 12:37:14 EST 2011>> Total Memory used:64.0 MB - Progress:50.0 % done - Total Containers:4
Tue Mar 01 12:37:17 EST 2011>> Total Memory used:96.0 MB - Progress:75.0 % done - Total Containers:4
Tue Mar 01 12:37:21 EST 2011>> Total Memory used:96.0 MB - Progress:75.0 % done - Total Containers:4
Tue Mar 01 12:37:25 EST 2011>> Total Memory used:96.0 MB - Progress:75.0 % done - Total Containers:4
Tue Mar 01 12:37:27 EST 2011>> Total Memory used:128.0 MB - Progress:100.0 % done - Total Containers:4
Data-Grid Memory capacity change done! - Time to scale system:27 seconds

Scaling to 256 MB
About to start changing data-grid memory capacity from 128.0 MB to 256 MB
Hit enter to scale the data grid...
Tue Mar 01 12:38:21 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:4
Tue Mar 01 12:38:23 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:4
Tue Mar 01 12:38:25 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:5
Tue Mar 01 12:38:27 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:5
Tue Mar 01 12:38:29 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:6
Tue Mar 01 12:38:31 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:6
Tue Mar 01 12:38:33 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:7
Tue Mar 01 12:38:35 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:7
Tue Mar 01 12:38:37 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:8
Tue Mar 01 12:38:41 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:8
Tue Mar 01 12:38:43 EST 2011>> Total Memory used:160.0 MB - Progress:62.5 % done - Total Containers:8
Tue Mar 01 12:38:47 EST 2011>> Total Memory used:160.0 MB - Progress:62.5 % done - Total Containers:8
Tue Mar 01 12:38:51 EST 2011>> Total Memory used:160.0 MB - Progress:62.5 % done - Total Containers:8
Tue Mar 01 12:38:53 EST 2011>> Total Memory used:192.0 MB - Progress:75.0 % done - Total Containers:8
Tue Mar 01 12:38:57 EST 2011>> Total Memory used:192.0 MB - Progress:75.0 % done - Total Containers:8
Tue Mar 01 12:39:01 EST 2011>> Total Memory used:224.0 MB - Progress:87.5 % done - Total Containers:8
Tue Mar 01 12:39:05 EST 2011>> Total Memory used:224.0 MB - Progress:87.5 % done - Total Containers:8
Tue Mar 01 12:39:09 EST 2011>> Total Memory used:224.0 MB - Progress:87.5 % done - Total Containers:8
Tue Mar 01 12:39:11 EST 2011>> Total Memory used:256.0 MB - Progress:100.0 % done - Total Containers:8
Data-Grid Memory capacity change done! - Time to scale system:51 seconds

Scaling to 64 MB
About to start changing data-grid memory capacity from 256.0 MB to 64 MB
Hit enter to scale the data grid...
Tue Mar 01 12:40:11 EST 2011>> Total Memory used:256.0 MB - Progress:25.0 % done - Total Containers:8
Tue Mar 01 12:40:14 EST 2011>> Total Memory used:224.0 MB - Progress:28.6 % done - Total Containers:7
Tue Mar 01 12:40:18 EST 2011>> Total Memory used:192.0 MB - Progress:33.3 % done - Total Containers:7
Tue Mar 01 12:40:22 EST 2011>> Total Memory used:192.0 MB - Progress:33.3 % done - Total Containers:6
Tue Mar 01 12:40:26 EST 2011>> Total Memory used:160.0 MB - Progress:40.0 % done - Total Containers:6
Tue Mar 01 12:40:28 EST 2011>> Total Memory used:160.0 MB - Progress:40.0 % done - Total Containers:5
Tue Mar 01 12:40:32 EST 2011>> Total Memory used:128.0 MB - Progress:50.0 % done - Total Containers:5
Tue Mar 01 12:40:36 EST 2011>> Total Memory used:96.0 MB - Progress:66.7 % done - Total Containers:4
Tue Mar 01 12:40:38 EST 2011>> Total Memory used:96.0 MB - Progress:66.7 % done - Total Containers:3
Tue Mar 01 12:40:42 EST 2011>> Total Memory used:64.0 MB - Progress:100.0 % done - Total Containers:3
Data-Grid Memory capacity change done! - Time to scale system:33 seconds

Considerations