Summary: Explains how to deploy your processing unit onto the GigaSpaces Service Grid to get automated SLA management and self-healing capabilities

Overview

Deploying your processing unit to the service grid is the preferred way to run it in your production environment. The service grid provides the following main benefits to a processing unit deployed onto it:

  • Automatic distribution and provisioning of the processing unit instances: When deploying to the service grid, the GigaSpaces Manager identifies the relevant GigaSpaces Containers and takes care of distributing the processing unit binaries to them, so you do not need to manually install the processing unit anywhere on the cluster.
  • SLA enforcement: The GigaSpaces Manager is also responsible for enforcing your processing unit's SLA. At deployment time, it creates the specified number of processing unit instances and provisions them to the running containers while enforcing all the deployment requirements, such as memory and CPU utilization or specific deployment zones (see the sketch after this list). At runtime, it monitors the processing unit instances, and if one of them fails or becomes unavailable it automatically re-instantiates it on another container.
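For example, requirements such as the number of instances and backups or collocation constraints can be expressed directly at deployment time. The following is a minimal sketch using the Admin API; the archive path and lookup group are illustrative assumptions, and the exact SLA knobs available depend on your XAP version:

import java.io.File;

import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.pu.ProcessingUnit;
import org.openspaces.admin.pu.ProcessingUnitDeployment;

public class SlaDeployExample {
    public static void main(String[] args) {
        Admin admin = new AdminFactory().addGroup("myGroup").createAdmin();
        // Hypothetical archive location, matching the examples below
        File puArchive = new File("/opt/gigaspaces/myPU.jar");

        // Ask the GSM for 2 partitions with 1 backup each, and forbid
        // two instances of the same partition from sharing one GSC
        ProcessingUnit pu = admin.getGridServiceManagers().deploy(
                new ProcessingUnitDeployment(puArchive)
                        .numberOfInstances(2)
                        .numberOfBackups(1)
                        .maxInstancesPerVM(1));

        admin.close();
    }
}

The GSM then provisions the resulting instances across the available GSCs and re-provisions any instance whose container fails.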

The Deployment Process

Once built according to the processing unit directory structure, the processing unit can be deployed via the various deployment tools available in GigaSpaces XAP (UI, CLI, Ant, Maven or the Admin API).

After you package the processing unit and deploy it via one of the deployment tools, the deployment tool uploads it to all the running GSMs, where it is extracted and provisioned to the GSCs.

To Jar or Not to Jar
The recommended way to deploy the processing unit is by packaging it into a .jar or a .zip archive and specifying the location of the packaged file to the deployment tool in use.
However, GigaSpaces XAP also supports the deployment of exploded processing units; in this case the deployment tool packages the processing unit directories into a jar file automatically. Another option is to place the exploded processing unit under the deploy directory of each of the GSMs and issue a deploy command with the processing unit name (the name of the directory under the deploy directory), as sketched below.
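Once the exploded directory has been copied under the deploy directory of each GSM, the deploy-by-name can also be issued programmatically. Here is a minimal sketch using the Admin API; the PU name and lookup group are illustrative assumptions:

import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.pu.ProcessingUnitDeployment;

public class DeployByNameExample {
    public static void main(String[] args) {
        Admin admin = new AdminFactory().addGroup("myGroup").createAdmin();

        // "myPU" is assumed to already exist as an exploded directory
        // under <GigaSpaces Root>/deploy on every GSM
        admin.getGridServiceManagers().deploy(new ProcessingUnitDeployment("myPU"));

        admin.close();
    }
}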

Distribution of Processing Unit Binaries to the Running GSCs

By default, when a processing unit instance is provisioned to run on a certain GSC, the GSC downloads the processing unit archive from the GSM into the <GigaSpaces Root>/work/deployed-processing-units directory (the location of this directory can be overridden via the com.gs.work system property).

Downloading the processing unit archive to the GSC is the recommended option, but it can be disabled by setting the pu.download deployment property to false. In that case the GSC does not download the entire archive; instead, it loads the processing unit classes one at a time from the GSM via a URLClassLoader.
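As a sketch, with the Admin API the property can be attached to the deployment as a context property. This is an assumption for illustration; the exact mechanism for passing pu.download may differ per deployment tool and version, so consult the deploy reference documentation:

import java.io.File;

import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.pu.ProcessingUnitDeployment;

public class NoDownloadDeployExample {
    public static void main(String[] args) {
        Admin admin = new AdminFactory().createAdmin();

        // Assumption: pu.download is honored when passed as a deploy-time
        // property; verify against the deploy reference for your version
        admin.getGridServiceManagers().deploy(
                new ProcessingUnitDeployment(new File("/opt/gigaspaces/myPU.jar"))
                        .setContextProperty("pu.download", "false"));

        admin.close();
    }
}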

Deploying the Processing Unit Using the Various Deployment Tools

GigaSpaces provides several options to deploy a processing unit onto the Service Grid. Below is a simple deployment example for each of the deployment tools, deploying a processing unit archive called myPU.jar located in the /opt/gigaspaces directory:

Admin API

Deploying via code is done using the GigaSpaces Admin API. The following example shows how to deploy the myPU.jar processing unit using one of the available GSMs. For more details please consult the documentation and javadoc of the Admin API.

Admin admin = new AdminFactory().addGroup("myGroup").createAdmin();
// Point at the packaged processing unit archive
File puArchive = new File("/opt/gigaspaces/myPU.jar");
// Deploy via one of the discovered GSMs
ProcessingUnit myPU = admin.getGridServiceManagers().deploy(new ProcessingUnitDeployment(puArchive));

Ant

Deploying with Ant is based on the org.openspaces.pu.container.servicegrid.deploy.Deploy class (in fact, all of the deployment tools use this class although it is not exposed directly to the end user).

In the example below we create an Ant macro using this class and use it to deploy our processing unit. The Deploy class is executable via its main() method, and accepts various parameters that control the deployment process. These parameters are identical to those of the deploy CLI command; for a complete list of the available parameters, please consult the deploy CLI reference documentation.

<deploy file="/opt/gigaspaces/myPU.jar" />

<macrodef name="deploy">
    <attribute name="file"/>
    <sequential>
        <java classname="org.openspaces.pu.container.servicegrid.deploy.Deploy" fork="false">
            <!-- all-libs is assumed to reference the GigaSpaces runtime jars -->
            <classpath refid="all-libs"/>
            <arg value="-groups" />
            <arg value="mygroup" />
            <arg value="@{file}"/>
        </java>
    </sequential>
</macrodef>

GigaSpaces CLI

Deploying via the CLI is based on the deploy command. This command accepts various parameters to control the deployment process. These parameters are documented in full in the deploy CLI reference documentation.

> <gigaspaces root>/bin/gs.sh(bat) deploy myPU.jar

GigaSpaces UI
  • Open the GigaSpaces UI by launching <gigaspaces root>/bin/gs-ui.sh(bat)
  • Click the "Deploy Application" button at the top left of the window
  • In the deployment wizard, click ... to select your processing unit archive, and then click Deploy

Hot Deploy

To improve business continuity by upgrading the system without any downtime, follow this simple procedure when you want to perform a hot deploy, upgrading a PU that includes both business logic and a collocated embedded space:
1. Upload the new or modified PU classes (e.g. a polling container's SpaceDataEvent implementation or relevant listener class, plus any dependent classes) to the PU deploy folder on all the GSM machines.
2. Restart the PU instance running the backup space. This forces the backup PU instance to reload the new version of the business logic classes from the GSM.
3. Wait for the backup PU to fully recover its data from the primary.
4. Restart the primary PU instance. This causes the existing backup instance to become the primary; the previous primary becomes a backup, loads the new business logic classes, and recovers its data from the new primary.
5. Optional - you can restart the current primary to switch it back into a backup; the instance that then becomes primary also runs the new version of the business logic classes.

You can script the above procedure via the Administration and Monitoring API, allowing you to perform a system upgrade without downtime; see the Admin API restart example at the end of this page.

Restart a running PU via the GS-UI

To restart a running PU (all instances) via the GS-UI:
1. Start the GS-UI and move to the Deployed Processing Units tab
2. Right-click the PU instance you want to restart
3. Select the restart menu option
4. Confirm the operation
5. Within a few seconds the restart operation will be completed. If the amount of data to recover is large (a few million objects), this might take a few minutes.
6. Repeat steps 2-4 for all backup instances.
7. Repeat steps 2-4 for all primary instances. This switches the relevant backup into primary mode, while the existing primary switches into backup mode.

Restart a running PU via the Admin API

The ProcessingUnitInstance includes a few restart methods you can use to restart a PU instance:

restart() 
restartAndWait() 
restartAndWait(long timeout, TimeUnit timeUnit)

Here is example code that uses ProcessingUnitInstance.restartAndWait() to restart all the PU instances automatically, backups first:

import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;

import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.pu.ProcessingUnit;
import org.openspaces.admin.pu.ProcessingUnitInstance;
import com.gigaspaces.cluster.activeelection.SpaceMode;

public class PURestartMain {
    static Logger logger = Logger.getLogger("PURestart");

    public static void main(String[] args) {
        String puToRestart = "myPU";
        Admin admin = new AdminFactory().createAdmin();

        ProcessingUnit processingUnit = admin.getProcessingUnits().waitFor(
                puToRestart, 10, TimeUnit.SECONDS);

        if (processingUnit == null) {
            logger.info("can't get PU instances for " + puToRestart);
            admin.close();
            System.exit(0);
        }

        // Wait for all the members to be discovered
        processingUnit.waitFor(processingUnit.getTotalNumberOfInstances());

        ProcessingUnitInstance[] puInstances = processingUnit.getInstances();

        // Restart all backups first, so the primaries keep serving
        for (ProcessingUnitInstance puInstance : puInstances) {
            if (getSpaceMode(puInstance) == SpaceMode.BACKUP) {
                restartPUInstance(puInstance);
            }
        }

        // Restart all primaries - each restart fails over to its backup
        for (ProcessingUnitInstance puInstance : puInstances) {
            if (getSpaceMode(puInstance) == SpaceMode.PRIMARY) {
                restartPUInstance(puInstance);
            }
        }

        admin.close();
        System.exit(0);
    }

    private static void restartPUInstance(ProcessingUnitInstance pi) {
        final String instStr = getSpaceMode(pi) != SpaceMode.PRIMARY ? "backup" : "primary";
        logger.info("restarting instance " + pi.getInstanceId()
                + " on " + pi.getMachine().getHostName() + "["
                + pi.getMachine().getHostAddress() + "] GSC PID:"
                + pi.getVirtualMachine().getDetails().getPid() + " mode:"
                + instStr + "...");

        pi = pi.restartAndWait();
        logger.info("done");
    }

    private static SpaceMode getSpaceMode(ProcessingUnitInstance pi) {
        // Poll until the space instance has settled into primary or backup mode
        while (pi.getSpaceInstance().getMode() == SpaceMode.NONE) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return pi.getSpaceInstance().getMode();
    }
}