Summary: The LRU Cache policy
Overview
When running in LRU cache policy mode, the space evicts the "oldest" objects from its memory. The "oldest" objects are determined by the time they were written, updated or read in the space. In a persistent space mode, evicting a space object means that the object is simply removed from the space memory, but remains available through the underlying RDBMS. The space reloads this object back into memory only when it is requested by a specific read operation. The space memory manager uses a dedicated thread called the Evictor - this thread handles the eviction of objects and identifies memory shortage events. In general, eviction can be based on the maximum amount of objects the space holds, or on the amount of available memory in the JVM hosting the space.
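As a minimal sketch of the reload behavior (the Person class, its ID value and the space URL are assumptions for illustration), a read by ID against an LRU space transparently reloads an evicted object from the underlying data source:

// Minimal sketch - assumes a hypothetical Person POJO with a String @SpaceId,
// and a space named mySpace deployed with cache_policy=0 (LRU) and an external data source.
import org.openspaces.core.GigaSpace;
import org.openspaces.core.GigaSpaceConfigurer;
import org.openspaces.core.space.UrlSpaceConfigurer;

public class LruReloadExample {
    public static void main(String[] args) {
        GigaSpace gigaSpace = new GigaSpaceConfigurer(
                new UrlSpaceConfigurer("jini://*/*/mySpace").space()).gigaSpace();

        // Even if this object was evicted from memory by the LRU policy,
        // a read by ID triggers a reload from the underlying RDBMS.
        Person p = gigaSpace.readById(Person.class, "1234");
        System.out.println("Loaded: " + p);
    }
}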
Evicting an object from the space requires the space engine to lock the LRU chain during the object removal, and to update the relevant indexes. This means that eviction based on available memory, which is done in batches, might impact the space responsiveness to client requests. Still, you might need to use this strategy when you cannot estimate the amount of objects within the space.

How LRU Eviction Works
LRU eviction has two eviction strategies:

1. Based on the maximum amount of objects within the space - this strategy checks the amount of space objects, and evicts the relevant object. One object is evicted when the maximum amount of objects is reached, i.e. when a new object is written into a space that is already at its configured capacity.
2. Based on the amount of available memory the JVM hosting the space has - when using this strategy, you should perform some tuning to provide deterministic behavior. This strategy is turned on when the space-config.engine.memory_usage.enabled value is true. It is very complicated to use when you have multiple spaces running within the same JVM.

The Eviction Flow
LRU eviction based on the amount of available memory performs, in general terms, the following: it calculates the used memory rate, compares it to the configured watermark percentages, and if the rate is above the high watermark it evicts objects in batches (of eviction_batch_size), pausing retry_yield_time between eviction cycles and repeating up to retry_count times, until the used memory rate drops back below the low watermark. A conceptual sketch of both strategies is shown below.
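The following is a conceptual sketch of the two strategies rather than the actual engine code; evictOldest(), evictBatch(), the configuration fields and the sleep-based pacing are illustrative assumptions, used only to show how cache_size, the watermarks, eviction_batch_size, retry_count and retry_yield_time interact:

// Conceptual sketch only - illustrates how the LRU settings interact.
// evictOldest(), evictBatch() and the fields are hypothetical stand-ins,
// not the GigaSpaces engine implementation.
public class LruEvictionSketch {
    int cacheSize = 5_000_000;                      // space-config.engine.cache_size
    double highWatermark = 90, lowWatermark = 75;   // watermark percentages
    int evictionBatchSize = 500, retryCount = 5;
    long retryYieldTime = 2000;                     // milliseconds
    int objectCount;                                // current number of objects in the space

    // Strategy 1: count based - a single object is evicted when a write
    // would exceed the configured maximum amount of objects.
    void onWrite(Object newObject) {
        if (objectCount >= cacheSize) {
            evictOldest();                          // remove the LRU chain head, one object only
        }
        objectCount++;
    }

    // Strategy 2: memory based - the Evictor thread checks the used memory rate
    // and evicts in batches until it drops below the low watermark.
    void onMemoryCheck() throws InterruptedException {
        if (usedMemoryRate() <= highWatermark) {
            return;                                 // below the high watermark - nothing to do
        }
        for (int retry = 0; retry < retryCount; retry++) {
            evictBatch(evictionBatchSize);          // remove the oldest objects, in a batch
            Thread.sleep(retryYieldTime);           // let the JVM reclaim the freed memory
            if (usedMemoryRate() <= lowWatermark) {
                return;                             // enough memory was freed
            }
        }
        // Still above the watermark after retryCount attempts - this is the point
        // where a memory shortage would be reported.
    }

    double usedMemoryRate() {
        Runtime rt = Runtime.getRuntime();
        return ((rt.totalMemory() - rt.freeMemory()) * 100.0) / rt.maxMemory();
    }

    void evictOldest() { objectCount--; }
    void evictBatch(int batchSize) { objectCount = Math.max(0, objectCount - batchSize); }
}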
The used memory rate is calculated via:

Used_memory_rate = ((Runtime.totalMemory() - Runtime.freeMemory()) * 100.0) / Runtime.maxMemory()

SpaceMemoryShortageException
The org.openspaces.core.SpaceMemoryShortageException (which wraps the com.j_spaces.core.MemoryShortageException) is thrown when the memory manager cannot bring the used memory rate back below the configured watermarks - for example, when a write operation arrives while the used memory rate is still above the write_only_block_percentage threshold after the eviction cycles have been exhausted.

If a client is running a local cache, and the local cache cannot evict its data fast enough, or somehow there is no available memory for the local cache to function, the following is thrown:

org.openspaces.core.SpaceMemoryShortageException: Memory shortage at: host: MachineHostName, container: mySpace_container_container1, space mySpace_container_DCache, total memory: 1527 mb, used memory: 1497 mb
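As an illustration of how a client might cope with this exception, here is a minimal sketch; the Person class and the single back-off-and-retry policy are assumptions, not a prescribed pattern:

// Minimal sketch - a client write guarded against memory shortage.
// The Person class and the retry pause are illustrative assumptions.
import org.openspaces.core.GigaSpace;
import org.openspaces.core.SpaceMemoryShortageException;

public class WriteWithShortageHandling {
    public static void write(GigaSpace gigaSpace, Person person) throws InterruptedException {
        try {
            gigaSpace.write(person);
        } catch (SpaceMemoryShortageException e) {
            // The space (or a local cache) could not free memory fast enough.
            // Back off so the evictor and garbage collector can catch up, then retry once.
            Thread.sleep(1000);
            gigaSpace.write(person);
        }
    }
}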
Monitoring the Space Memory Manager Activity
You can monitor the memory manager activity for a space running in LRU mode by changing the com.gigaspaces.core.memorymanager logging entry to FINE:

22:42:44,915 FINE [com.gigaspaces.core.memorymanager] - SpaceName: mySpace Cache eviction started: Available memory[%]85.39833755194752
22:42:44,917 FINE [com.gigaspaces.core.memorymanager] - Call evict on operation: true
22:42:44,925 FINE [com.gigaspaces.core.memorymanager] - Batch evicted size=500
22:42:44,926 FINE [com.gigaspaces.core.memorymanager] - Call evict on operation: true
22:42:44,929 FINE [com.gigaspaces.core.memorymanager] - rate=85.46128254359517 free-memory=7305896 max-memory=50266112
22:42:44,932 FINE [com.gigaspaces.core.memorymanager] - Call evict on operation: true
22:42:44,938 FINE [com.gigaspaces.core.memorymanager] - SpaceName: mySpace Cache eviction finished: Available memory[%]85.46128254359517 evicted all entries.

You can change the logging level of com.gigaspaces.core.memorymanager while the space is running: start JConsole (for example via the GigaSpaces Management Center) for the JVM hosting the space, and change the com.gigaspaces.core.memorymanager logging level to FINE.
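If the space runs in the current JVM (for example in an integrated test), the level can also be changed programmatically; a minimal sketch, assuming the default java.util.logging configuration:

// Programmatic alternative to changing the level via JConsole - assumes the
// space runs in this JVM and the default java.util.logging setup is in use.
import java.util.logging.Level;
import java.util.logging.Logger;

public class MemoryManagerLogging {
    public static void enableFineLogging() {
        Logger.getLogger("com.gigaspaces.core.memorymanager").setLevel(Level.FINE);
    }
}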
Controlling the Eviction Behavior
The space-config.engine.memory_usage properties provide options for controlling the space memory utilization, and allow you to evict objects from the space. Objects are evicted when the number of cached objects reaches its maximum size, or when memory usage reaches its limit. The watermark percentages should maintain the following order:

high_watermark_percentage >= write_only_block_percentage >= write_only_check_percentage >= low_watermark_percentage

The example below shows how to configure the LRU eviction settings:

<os-core:space id="space" url="/./mySpace">
    <os-core:properties>
        <props>
            <prop key="space-config.engine.memory_usage.enabled">true</prop>
            <prop key="space-config.engine.cache_policy">0</prop>
            <prop key="space-config.engine.cache_size">5000000</prop>
            <prop key="space-config.engine.memory_usage.high_watermark_percentage">90</prop>
            <prop key="space-config.engine.memory_usage.write_only_block_percentage">85</prop>
            <prop key="space-config.engine.memory_usage.write_only_check_percentage">76</prop>
            <prop key="space-config.engine.memory_usage.low_watermark_percentage">75</prop>
            <prop key="space-config.engine.memory_usage.eviction_batch_size">500</prop>
            <prop key="space-config.engine.memory_usage.retry_yield_time">2000</prop>
            <prop key="space-config.engine.memory_usage.retry_count">5</prop>
            <prop key="space-config.engine.memory_usage.explicit-gc">false</prop>
        </props>
    </os-core:properties>
</os-core:space>

LRU Touch Activity
LRU touch activity kicks in when the percentage of objects within the space exceeds space-config.engine.lruTouchThreshold, where space-config.engine.cache_size is the maximum amount. This avoids the overhead involved with the LRU activity. A value of 0 means always touch; 100 means no touch at all. Setting space-config.engine.lruTouchThreshold to 100 effectively makes the eviction run in FIFO mode.

Reloading Data
When a persistent space (using an External Data Source) running in LRU cache policy mode is started/deployed, it loads data from the underlying data source before becoming available for clients to access. The default behavior is to load data up to 50% of the space-config.engine.cache_size value. When space-config.engine.memory_usage.enabled is true (evicting data from the space based on free heap size), it is recommended to set a large value for the space-config.engine.cache_size property. This instructs the space engine to ignore the amount of space objects when launching the eviction mechanism, and ensures that the eviction is based only on free heap memory. However, the combination of a large space-config.engine.initial_load and a large space-config.engine.cache_size may lead to out-of-memory problems. To avoid this, configure space-config.engine.initial_load to have a low value. With the example below, each partition will load 100,000 objects - 10% of the space-config.engine.cache_size:

<os-core:space id="space" url="/./mySpace" schema="persistent" external-data-source="hibernateDataSource">
    <os-core:properties>
        <props>
            <prop key="space-config.engine.memory_usage.enabled">true</prop>
            <prop key="space-config.engine.cache_policy">0</prop>
            <prop key="space-config.engine.initial_load">10</prop>
            <prop key="space-config.engine.cache_size">1000000</prop>
            <prop key="cluster-config.cache-loader.external-data-source">true</prop>
            <prop key="cluster-config.cache-loader.central-data-source">true</prop>
        </props>
    </os-core:properties>
</os-core:space>

The space-config.engine.initial_load_class property can be used to specify which classes' data to load.
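To make the initial-load and LRU-touch figures above concrete, here is a small piece of illustrative arithmetic; the lruTouchThreshold value and the current object count are assumed for the example:

// Illustration of the arithmetic described above - not a GigaSpaces API.
public class LruSettingsArithmetic {
    public static void main(String[] args) {
        int cacheSize = 1_000_000;       // space-config.engine.cache_size
        int initialLoadPercent = 10;     // space-config.engine.initial_load
        int lruTouchThreshold = 50;      // space-config.engine.lruTouchThreshold (assumed value)

        // Each partition loads initial_load% of cache_size on startup.
        int initialLoadCount = cacheSize * initialLoadPercent / 100;
        System.out.println("Objects loaded per partition: " + initialLoadCount); // 100000

        // LRU touch (re-ordering the LRU chain on read/update) is active only once
        // the space holds more than lruTouchThreshold% of cache_size objects.
        int currentObjectCount = 400_000;                                        // assumed fill level
        boolean lruTouchActive = currentObjectCount * 100L / cacheSize >= lruTouchThreshold;
        System.out.println("LRU touch active: " + lruTouchActive);               // false (40% < 50%)
    }
}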
How can I get Deterministic Behavior During Eviction of Objects?
To get deterministic behavior from the memory manager when evicting objects based on the amount of free memory - so that it frees enough memory when needed, without evicting most of the space content or blocking clients for long periods - you should tune the watermark percentages, the eviction batch size, the retry count and the retry yield time together with the JVM heap and garbage collection settings.
Here are good settings for a JVM with a 2G heap size and a 5K object size. With the following settings, eviction happens once the JVM consumes more than 1.4G (see the worked arithmetic below):

<os-core:space id="space" url="/./mySpace" schema="persistent" external-data-source="hibernateDataSource">
    <os-core:properties>
        <props>
            <prop key="space-config.engine.cache_policy">0</prop>
            <prop key="space-config.engine.cache_size">200000</prop>
            <prop key="space-config.engine.memory_usage.enabled">true</prop>
            <prop key="space-config.engine.memory_usage.high_watermark_percentage">70</prop>
            <prop key="space-config.engine.memory_usage.write_only_block_percentage">68</prop>
            <prop key="space-config.engine.memory_usage.write_only_check_percentage">65</prop>
            <prop key="space-config.engine.memory_usage.low_watermark_percentage">60</prop>
            <prop key="space-config.engine.memory_usage.eviction_batch_size">2000</prop>
            <prop key="space-config.engine.memory_usage.retry_count">100</prop>
            <prop key="space-config.engine.memory_usage.explicit-gc">false</prop>
            <prop key="space-config.engine.memory_usage.retry_yield_time">4000</prop>
        </props>
    </os-core:properties>
</os-core:space>

Here are the Java arguments (using incremental GC) to use for the JVM running the Space/GSC:

-Xmx2g -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:ParallelGCThreads=8 -XX:+UseParNewGC -XX:+CMSIncrementalPacing -XX:MaxGCPauseMillis=1000

When there are a small number of objects within the space (less than 50,000) with a relatively large size (100K and above), and you are running with an LRU cache policy, you should tune these settings to account for the much larger memory footprint of each object.
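As a sanity check on the 1.4G figure, here is the watermark arithmetic for a 2G heap, together with the approximate raw payload of a full cache at the configured cache_size; this is plain illustrative arithmetic, not a GigaSpaces API:

// Illustrative arithmetic only - shows where the 1.4G eviction point comes from.
public class WatermarkArithmetic {
    public static void main(String[] args) {
        long maxHeapMb = 2048;   // -Xmx2g

        System.out.println("high_watermark (70%): " + maxHeapMb * 70 / 100 + " MB");  // 1433 MB ~ 1.4 GB
        System.out.println("write_block    (68%): " + maxHeapMb * 68 / 100 + " MB");  // 1392 MB
        System.out.println("write_check    (65%): " + maxHeapMb * 65 / 100 + " MB");  // 1331 MB
        System.out.println("low_watermark  (60%): " + maxHeapMb * 60 / 100 + " MB");  // 1228 MB

        // Raw payload if the space is full at cache_size with ~5K objects
        // (excluding indexes and other per-object overhead).
        long cacheSize = 200_000;
        long objectSizeKb = 5;
        System.out.println("Payload at cache_size: " + cacheSize * objectSizeKb / 1024 + " MB"); // ~976 MB
    }
}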
Garbage Collection Behavior and Space Response Time Tango
In general, when JVM garbage collection runs, there is a chance that clients accessing the space are affected. With regular GC behavior, while eviction based on available memory is going on and new objects are written into the space, memory utilization swings between collections. Incremental GC shows more moderate activity, with on-going garbage collection and without the risk of missing a collection and running out of memory. When LRU eviction is based on the maximum amount of objects, the memory utilization graph shows a very small amplitude. This behavior is achieved because the memory manager evicts objects one by one from the space, rather than in batches, so the amount of work the JVM garbage collector needs to perform is relatively small. This also does not affect the clients communicating with the space, and provides a very deterministic response time - i.e. a very small chance of a client hiccup.
space-config.engine.memory_usage.explicit-gc
The memory manager has a very delicate feature called explicit-gc. When enabled, the space performs an explicit Garbage Collection (GC) call before checking how much memory is used. While the GC call is running, clients are blocked from accessing the space. This can cause a domino effect, resulting in unneeded failover or a total client hang. The problem is severe in a clustered environment, where both the primary and backup space JVMs make an explicit GC call at the same time, holding back the primary from both serving clients and sending operations to the backup. With a small value for space-config.engine.memory_usage.retry_yield_time, or when space-config.engine.memory_usage.explicit-gc is turned off (a value of false), the space might evict most of its data once the space-config.engine.memory_usage.write_only_block_percentage or the space-config.engine.memory_usage.high_watermark_percentage is breached. This happens because the JVM hosting the space might not perform garbage collection immediately between eviction cycles, so the measured memory usage remains unchanged and another eviction cycle is triggered. For these reasons, the space-config.engine.memory_usage.explicit-gc option should be used with great care; the examples above leave it disabled (false).