Summary: Setting Space cache policy, memory usage, and rules for exceeding physical memory capacity.
Overview
The Memory Management facility helps the client avoid a situation where a space server runs into an out-of-memory failure. Based on the configured cache policy, the memory manager protects the space (and the application, when it runs collocated with the space) from consuming memory beyond a defined threshold.
The client application is expected to include business logic that handles the MemoryShortageException that may be thrown. Without such logic, the space server or a client local cache may eventually exhaust all the available memory of its parent process.
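For example, a write path might handle the exception by backing off and retrying. This is a minimal sketch only; the retry policy, the User class, and the proxy setup are illustrative assumptions, not a prescribed pattern:

```csharp
// Illustrative sketch: back off and retry once when the space reports a memory shortage.
try
{
    spaceProxy.Write(new User());
}
catch (MemoryShortageException)
{
    // Business-logic placeholder: give the memory manager time to evict/GC, then retry.
    Thread.Sleep(1000);
    spaceProxy.Write(new User());
}
```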
Most of the considerations described in this topic are also relevant for the client application when running a local cache with an LRU cache policy.
The space memory can be managed using the following mechanisms:
Eviction policy: You can set the policy to ALL_IN_CACHE or LRU (Least Recently Used).
Memory Manager: Provides options for controlling the memory utilization of the VM that hosts the space. It allows you to define thresholds for situations where memory becomes over-utilized.
Cache Eviction Policies
The space supports two cache eviction policies: the LRU cache policy (code 0) and the ALL_IN_CACHE cache policy (code 1), defined via the space-config.engine.cache_policy property.
//Creates an LRU cache policy when defining the Space.
ISpaceProxy spaceProxy = CreatePersistantSpace(new Dictionary<string, string> {{"space-config.engine.cache_policy", "0"}});
ALL_IN_CACHE cache policy: Assumes the VM hosting the space instance has a large enough heap to hold all data in memory.
LRU cache policy: Assumes the VM hosting the space instance does not have a large enough heap to hold all data in memory.
By default, the ALL_IN_CACHE policy is used for an in-memory data grid, and the LRU cache policy is used for a persistent space with an external data source.
Calculating the Available Memory
Both the ALL_IN_CACHE and LRU cache policies calculate the VM's available memory to determine whether to throw a MemoryShortageException or to start evicting objects.
Before throwing a MemoryShortageException, the local cache/local view/space performs an explicit garbage collection call, allowing the VM to reclaim any unused heap memory. The explicit garbage collection may be invoked on the client side (when running a local cache or a local view) or on the space side (the VM hosting the GSC).
The explicit garbage collection call reduces the probability of throwing a MemoryShortageException in the case where the VM does have some available memory left. However, such a call might impact the client side (when running local cache/view) or space-side responsiveness since all VM threads are paused during the garbage collection activity. When the client or space uses a large heap size, this might introduce a long pause.
You can disable explicit calls to the garbage collector by adding one of the following flags to the client-side or space-side VM parameters:
-XX:+DisableExplicitGC
-XX:+ExplicitGCInvokesConcurrent
Because this feature is designed to protect your application, disable explicit calls to the garbage collector only when you have determined it is absolutely necessary.
Handling Large VM Heap Sizes
Use the following values to configure the VM to use large heap sizes (over 10GB):
These values represent a 400MB difference between the high_watermark_percentage and the low_watermark_percentage with a 10GB maximum heap size. They ensure the memory manager does not waste memory, but throws a MemoryShortageException (when running in ALL_IN_CACHE mode) or evicts objects (when running in LRU cache policy mode) when the absolute amount of available VM memory is low.
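A configuration along these lines might look as follows. The exact percentages are illustrative assumptions, not documented defaults; any high/low watermark pair 4 percentage points apart yields a 400MB spread on a 10GB heap:

```
space-config.engine.memory_usage.high_watermark_percentage=96
space-config.engine.memory_usage.write_only_block_percentage=95
space-config.engine.memory_usage.write_only_check_percentage=94
space-config.engine.memory_usage.low_watermark_percentage=92
```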
With a large JVM heap size, it is recommended to use the CMS (Concurrent Mark Sweep) garbage collector, which avoids long garbage collection pauses.
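For example, the concurrent collector can be enabled with the standard HotSpot flag:

```
-XX:+UseConcMarkSweepGC
```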
Memory Manager Activity when Initializing the Space
In this phase of the space life cycle, the space checks for the amount of available memory. This is relevant when the space performs a warm start, such as ExternalDataSource.initialLoad().
Memory Manager and Transient Objects
Transient objects are specified using the SpaceClass(persist=false) decoration. You can apply the transient decoration at the class level or at the object level (via field/method-level decoration). When using transient objects, note that they are:
Included in the free heap size calculation.
Included in the count of total objects (for max cache size).
Not evicted when running in LRU cache policy mode.
You may use the transient object option to prevent the space from evicting objects when running in LRU cache policy mode.
Memory Manager's Synchronous Eviction
Since LRU eviction can be costly, it is done asynchronously by the memory manager. However, when the amount of used memory reaches a threshold, LRU eviction by the memory manager is done synchronously and API calls to the space are blocked. The synchronous eviction watermark can be set by the space-config.engine.memory_usage.synchronous_eviction_watermark_percentage memory manager parameter.
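For example, the watermark could be raised as follows (the value shown is illustrative, not a documented default):

```
space-config.engine.memory_usage.synchronous_eviction_watermark_percentage=98
```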
Explicit Eviction of Objects from the Space
Objects can be evicted explicitly from the space by calling the TakeMultiple or Clear operations on the ISpaceProxy interface, combined with the TakeModifiers.EVICT_ONLY modifier. The Clear operation returns only the number of objects actually evicted from the space, while TakeMultiple returns the evicted objects themselves.
Using clear()
ISpaceProxy proxy = ...;
User template = new User();
// Using clear - evicts all the objects of type User from the space
int numEvicted = proxy.Clear(template, TakeModifiers.EVICT_ONLY);
Using TakeMultiple()
ISpaceProxy proxy = ...;
User template = new User();
// Using takeMultiple - evicts all the objects of type User from the space
User[] evictedUsers = proxy.TakeMultiple(template, int.MaxValue, TakeModifiers.EVICT_ONLY);
The TakeModifiers.EVICT_ONLY modifier:
can be used with any take operation: Take, TakeById, TakeMultiple, etc.
can only be used with LRU policy.
ignores the timeout argument, when specified. The operations always return immediately.
does not propagate the Take or Clear calls to the underlying database (EDS layer) when running in synchronous or asynchronous persistency mode. For example, a Take operation might return a null value, while a matching object exists in the underlying database.
is ignored when used in a transactional operation. A Take or Clear in the context of a transaction does not result in eviction.
Exceeding Physical Memory Capacity
The overall space capacity is not necessarily limited by the capacity of its physical memory. Currently, there are two options for exceeding this limit:
Using LRU and an External Data Source: In this mode, all space data is kept in the database, so the space capacity depends on the database capacity rather than on the memory capacity. The space maintains in memory a partial image of the persistent view, on an LRU basis.
Using a Partitioned Space: In this mode, the space utilizes the physical memory of multiple VMs. The application using the space can access all the space instances transparently, as if they were a single space with higher memory capacity.
Memory Manager Parameters
The following properties are used to control the memory manager:

space-config.engine.cache_size
Defines the maximum size of the space cache. This is the total number of space objects across all space class instances, within a single space. This parameter is ignored when running the ALL_IN_CACHE cache policy. Supported with cache policy: LRU.

space-config.engine.memory_usage.low_watermark_percentage
Specifies the recommended lower threshold for the portion of the VM heap occupied by the space container. When the system reaches the high_watermark_percentage, it evicts objects on an LRU basis and attempts to reach the low_watermark_percentage. This process continues until there are no more objects to evict, or until memory use drops to the low_watermark_percentage.

space-config.engine.memory_usage.write_only_check_percentage
Specifies an upper threshold for checking only write-type operations. Above this level, all operations are checked. Default value: 76. Supported with cache policy: ALL_IN_CACHE.

space-config.engine.memory_usage.retry_count
Number of retries to lower the memory level below the low_watermark_percentage. If, after all retries, the memory level is still above space-config.engine.memory_usage.write_only_block_percentage, a MemoryShortageException is thrown for that write request. Default value: 5. Supported with cache policy: LRU.

space-config.engine.memory_usage.explicit-gc
If true, the garbage collector is called explicitly before trying to evict. When using the LRU cache policy, setting space-config.engine.memory_usage.explicit-gc=false may cause fewer objects than the defined minimum (the low_watermark_percentage) to be evicted. This property is false by default, because invoking the garbage collector explicitly consumes a large amount of CPU, which affects performance. Set space-config.engine.memory_usage.explicit-gc=true only if you want to ensure that no fewer than the minimum number of objects are evicted from the space. Default value: false. Supported with cache policy: LRU.

space-config.engine.memory_usage.retry_yield_time
Time (in milliseconds) to wait after evicting a batch of objects, before measuring the current memory utilization. Default value: 50. Supported with cache policy: LRU.

space-config.engine.initial_load
Sets the default amount (%) of data loaded from the underlying data source before the space becomes available to the client application, when a persistent space running in LRU cache policy mode is started or deployed. See the Reloading Data section for details.

space-config.engine.memory_usage.lruTouchThreshold
LRU touch activity kicks in when the percentage of objects within the space (relative to space-config.engine.cache_size, which is the maximum) exceeds this threshold. This avoids the overhead involved with LRU activity. A value of 0 means always touch; 100 means never touch. The default value is 50, which means LRU touch activity kicks in when the number of objects in the space crosses half the amount specified by the space-config.engine.cache_size value. Default value: 50. Supported with cache policy: LRU.
A MemoryShortageException is thrown only when the VM garbage collection and the eviction mechanism do not free enough memory. This can happen if the space-config.engine.memory_usage.low_watermark_percentage value is set too high.