Summary: Reliable Asynchronous Persistency (Mirror) - advanced topics.
Custom Mirror Service Name

A Mirror Service can be configured per Space cluster. You can't have multiple Mirror Services configured for the same space cluster.
If you have multiple space clusters, each with its own Mirror Service running, you should use a different name for each Mirror Service. The Mirror Service name is used as part of the space configuration, specified via the "cluster-config.mirror-service.url" property. Its default is "jini://*/mirror-service_container/mirror-service", which matches the "mirror-service" name used in the url property when starting the mirror service. As an example, let's say we would like to call our mirror service mymirror-service (instead of the default mirror-service). Here is how the mirror service should be started:

<os-core:space id="space" url="/./mymirror-service" schema="mirror"
    space-sync-endpoint="mirrorSynchronizationEndpoint" />

Here is how the space should be started:

<os-core:space id="space" url="/./mySpace" schema="persistent" mirror="true"
    space-data-source="spaceDataSource">
    <os-core:properties>
        <props>
            <prop key="cluster-config.mirror-service.url">
                jini://*/mymirror-service_container/mymirror-service
            </prop>
        </props>
    </os-core:properties>
</os-core:space>

Implementing a Custom Mirror Data Source

GigaSpaces has a built-in Hibernate Space Persistency implementation, which is a SpaceSynchronizationEndpoint extension. You can implement your own Mirror very easily to accommodate your exact needs, as shown in the sketch below.
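Here is a minimal sketch of a custom synchronization endpoint, assuming a simple logging implementation; the class name MyMirrorSynchronizationEndpoint and what it does with each operation are illustrative, while SpaceSynchronizationEndpoint, OperationsBatchData and DataSyncOperation are the com.gigaspaces.sync types used throughout this page:

import com.gigaspaces.sync.DataSyncOperation;
import com.gigaspaces.sync.OperationsBatchData;
import com.gigaspaces.sync.SpaceSynchronizationEndpoint;

// A minimal custom endpoint that logs each replicated operation.
// Replace the body of the loop with your own persistency logic
// (write to a file, push to another system, etc.).
public class MyMirrorSynchronizationEndpoint extends SpaceSynchronizationEndpoint {

    @Override
    public void onOperationsBatchSynchronization(OperationsBatchData batchData) {
        for (DataSyncOperation operation : batchData.getBatchDataItems()) {
            // The operation type is WRITE, UPDATE, REMOVE, etc.
            System.out.println("Synchronizing: " + operation.getDataSyncOperationType());
            if (operation.supportsDataAsObject()) {
                System.out.println("Entry: " + operation.getDataAsObject());
            }
        }
    }
}

And here is how such an endpoint could be wired into the mirror configuration (the bean class eg.MyMirrorSynchronizationEndpoint refers to the hypothetical class above):

<bean id="mySynchronizationEndpoint" class="eg.MyMirrorSynchronizationEndpoint"/>

<os-core:space id="space" url="/./mirror-service" schema="mirror"
    space-sync-endpoint="mySynchronizationEndpoint"/>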
Multiple Mirrors

In some cases you may need to asynchronously persist data both into a relational database and a file, or persist the data into a relational database and transfer some of it to another system. In such a case you need multiple mirrors. To implement this, take one base mirror (for example, the Hibernate Space Persistency) and extend it to include the extra functionality you need. See the Mirror Monitor for a simple example of how such an approach can be implemented.

Handling Mirror Exceptions

Since the space synchronization endpoint configured for the mirror service communicates with the database, it may run into database-related errors, such as constraint violations, wrong class mappings (if the Hibernate-based space synchronization endpoint implementation is used), or other database-related errors. By default, these errors are propagated to the replicating space (the primary space instance) and appear in its logs. In such a case, the primary space retries replicating the batch that caused the error to the mirror service until it succeeds (meaning that no exception is exposed to the user's application code).

To override and extend this behavior, you can implement an exception handler that is called when an exception is thrown from the Mirror back to the primary space. This exception handler can log the exception at the mirror side, throw it back to the space, ignore it, or execute any user-specific code. Here is an example of how this is done using the org.openspaces.persistency.patterns.SpaceSynchronizationEndpointExceptionHandler provided with OpenSpaces:

<bean id="hibernateSpaceSynchronizationEndpoint"
    class="org.openspaces.persistency.hibernate.DefaultHibernateSpaceSynchronizationEndpointFactoryBean">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>

<bean id="exceptionHandler" class="eg.MyExceptionHandler"/>

<bean id="exceptionHandlingSpaceSynchronizationEndpoint"
    class="org.openspaces.persistency.patterns.SpaceSynchronizationEndpointExceptionHandler">
    <constructor-arg ref="hibernateSpaceSynchronizationEndpoint"/>
    <constructor-arg ref="exceptionHandler"/>
</bean>

<os-core:space id="space" url="/./mirror-service" schema="mirror"
    space-sync-endpoint="exceptionHandlingSpaceSynchronizationEndpoint"/>

With the above, we wrap the DefaultHibernateSpaceSynchronizationEndpoint with the SpaceSynchronizationEndpointExceptionHandler and pass the wrapper to the space. On the SpaceSynchronizationEndpointExceptionHandler, we set our own implementation of the PersistencyExceptionHandler, to be called when there is an exception. With the PersistencyExceptionHandler you can decide what to do with the exception: "swallow" it, execute some logic, or rethrow it.
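The eg.MyExceptionHandler bean referenced above could look roughly like the following sketch. The exact onException signature is an assumption; consult the PersistencyExceptionHandler interface of your GigaSpaces version for the precise contract:

import com.gigaspaces.sync.DataSyncOperation;
import org.openspaces.persistency.patterns.PersistencyExceptionHandler;

// A minimal sketch of a mirror-side exception handler.
// NOTE: the method signature below is assumed, not taken from the product API;
// check your version's PersistencyExceptionHandler interface.
public class MyExceptionHandler implements PersistencyExceptionHandler {

    public void onException(Exception exception, DataSyncOperation[] operations) {
        // Log the failure on the mirror side.
        System.err.println("Mirror persistency failure: " + exception.getMessage());
        // Rethrow to propagate the error back to the primary space,
        // or return normally to "swallow" it.
        throw new RuntimeException(exception);
    }
}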
Mirror behavior with Distributed Transactions

When using the Jini Distributed Transaction Manager and persisting data through the mirror service, each partition sends its transaction data to the Mirror on commit. The mirror service receives the replication messages in bulks from each partition that participated in the transaction. To keep the persisted data consistent, these bulks must be consolidated at the mirror service.
The mirror's operation grouping is controlled by the "space-config.mirror-service.operation-grouping" property. By default it is set to group-by-replication-bulk, which groups several transactions together and executes them as a single batch; the group size is defined by the mirror replication bulk-size. Setting the property to group-by-space-transaction causes a separate SpaceSynchronizationEndpoint.onTransactionSynchronization invocation for each transaction.

Example Of Getting The Transaction's Metadata

public class MySpaceSynchronizationEndpoint extends SpaceSynchronizationEndpoint {
    @Override
    public void onTransactionSynchronization(TransactionData transactionData) {
        TransactionParticipantMetaData metaData = transactionData.getTransactionParticipantMetaData();
        int participantId = metaData.getParticipantId();
        int participantsCount = metaData.getTransactionParticipantsCount();
        TransactionUniqueId transactionId = metaData.getTransactionUniqueId();
        // ...
    }
}

Note: In 9.0.1 a new transaction participant metadata interface was introduced:

/**
 * Represents the transaction metadata for a specific transaction participant.
 * @since 9.0.1
 */
public interface TransactionParticipantMetaData {
    /**
     * The id of the space that committed the transaction.
     * @return the participantId
     */
    public int getParticipantId();

    /**
     * Number of participants in the transaction.
     * @return the participantsCount
     */
    public int getTransactionParticipantsCount();

    /**
     * The id of the transaction.
     * @return the transactionId
     */
    public TransactionUniqueId getTransactionUniqueId();
}

TransactionUniqueId.java:

/**
 * Represents a transaction unique id, constructed from the Uuid of the
 * transaction manager which created the transaction and the transaction id.
 * @since 9.0.1
 */
public interface TransactionUniqueId {
    /**
     * @return The {@link Uuid} of the transaction manager which created the transaction.
     */
    Uuid getTransactionManagerId();

    /**
     * @return The transaction id.
     */
    Object getTransactionId();
}

Built-In Mirror Distributed Transaction Consolidation

Distributed transaction consolidation is configured in the space instances replicating data to the mirror. In the following example we configure a space in pu.xml with transaction consolidation mode enabled:
<os-core:space id="space" url="/./mySpace">
    <os-core:properties>
        <props>
            <prop key="cluster-config.groups.group.repl-policy.processing-type">
                multi-source
            </prop>
        </props>
    </os-core:properties>
</os-core:space>

As the example shows, the "cluster-config.groups.group.repl-policy.processing-type" property must be set to "multi-source". To take advantage of this feature, operation grouping in the mirror should be set to "group-by-space-transaction":

<os-core:space id="mirror" url="/./mirror-service" schema="mirror"
    space-sync-endpoint="spaceSynchronizationEndpoint">
    <os-core:properties>
        <props>
            <prop key="space-config.mirror-service.operation-grouping">
                group-by-space-transaction
            </prop>
        </props>
    </os-core:properties>
</os-core:space>

Using group-by-replication-bulk with multi-source gathers several transactions together, and may therefore cause longer delays while waiting for all transaction participants.

Distributed Transaction Consolidation Example

public class MySpaceSynchronizationEndpoint extends SpaceSynchronizationEndpoint {
    @Override
    public void onTransactionSynchronization(TransactionData transactionData) {
        if (transactionData.isConsolidated()) {
            // this is a consolidated distributed transaction...
            ConsolidatedDistributedTransactionMetaData metaData =
                transactionData.getConsolidatedDistributedTransactionMetaData();
        }
    }

    @Override
    public void onTransactionConsolidationFailure(ConsolidationParticipantData participantData) {
        // intercept a transaction consolidation failure and decide whether
        // to commit or abort this participant's data
        if (sunnyDay)
            participantData.commit();
        else
            participantData.abort();
    }
}

Distributed transaction consolidation works by waiting for all the transaction participants' data before the mirror processes it. In some cases, the data of certain distributed transaction participants may be delayed due to network delay or disconnection, and this can delay replication. To bound this delay, you can set a timeout parameter that indicates how long to wait for distributed transaction participants' data before processing the data individually for each participant. Note that while the mirror waits for a distributed transaction to arrive in full, replication does not stall; the data keeps flowing, and conflicts are prevented.

The following example demonstrates how to set the timeout for waiting for distributed transaction data to arrive. It is also possible to set the number of new operations to perform before processing data individually for each participant:

<os-core:space id="mirror" url="/./mirror-service" schema="mirror"
    space-sync-endpoint="spaceSynchronizationEndpoint">
    <os-core:properties>
        <props>
            <prop key="space-config.mirror-service.operation-grouping">
                group-by-space-transaction
            </prop>
        </props>
    </os-core:properties>
    <os-core:tx-support dist-tx-wait-timeout-millis="10000" dist-tx-wait-for-opers="20" />
</os-core:space>

With this configuration, a distributed transaction participant's data is processed individually if ten seconds have passed without all the participants' data arriving, or if 20 new operations were executed after the distributed transaction.
Usage Scenarios

Writing Asynchronously to the Mirror Data Source

In this scenario, a synchronous replicated cluster with three members communicates with a single Mirror Service, which persists the data asynchronously.

Reading from the Data Source

The Mirror Service space is used to asynchronously persist data into the data source. As noted elsewhere, the Mirror is not a regular space, and should not be interacted with directly. Thus, data cannot be read from the data source using the Mirror Service space. Nonetheless, the data can be read by other spaces that are configured with a space data source. The data-grid pu.xml needs to be configured to use a space data source which, when dealing with a Mirror, is central to the cluster. The Mirror Service asynchronously receives data and persists it into the data source, while the cluster reads data directly from the data source.

Partitioning Over a Central Mirror Data Source

When partitioning data, each partition asynchronously replicates its data into the Mirror Service, and each partition can read back the data that belongs to it (according to the load-balancing policy defined). In this scenario, two partitions (each a primary-backup pair) interact asynchronously with the shared data source through the Mirror.

Considerations and Known Issues
Troubleshooting

Log Messages

The space persistency logging level can be modified in the <GigaSpaces Root>\config\gs_logging.properties file. By default, it is set to java.util.logging.Level.INFO:

com.gigaspaces.persistent.level = INFO

Logging verbosity is divided according to the standard java.util.logging.Level values.
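For example, to get more verbose persistency output while troubleshooting, you could raise the level in gs_logging.properties. The choice of FINE here is illustrative; the standard java.util.logging ordering applies:

# <GigaSpaces Root>\config\gs_logging.properties
# Raise the space persistency logging level while troubleshooting.
# java.util.logging levels, from least to most verbose: INFO, FINE, FINER, FINEST.
com.gigaspaces.persistent.level = FINE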
Failover Handling

This section describes how the GigaSpaces Mirror Service handles different failure scenarios. The following table lists the services involved, and how each failure is handled in the cluster. Active services are green, while failed services are red.
Unlikely Failure Scenarios

The following failure scenarios are highly unlikely. However, it might be useful to understand how such scenarios are handled by GigaSpaces, as detailed in the table below. Active services are green, while failed services are red.