Improving Invocation Handler Performance

Tune your invocation handler for the best possible performance.

Optimize the In-Process Client Cache

The ScaleOut Client cache is an in-process near cache that reduces serialization and data transfer overhead.

  1. When selecting an eviction strategy for the client cache, use either the "Random" or "Random-MaxMemory" strategy.
  2. Make the client cache as large as possible, but keep its size within practical limits: an invocation handler runs on the same machine as the ScaleOut service and should not compete with it for RAM. (A configuration sketch follows this list.)
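
The sketch below illustrates both settings. The SetClientCacheEviction and SetClientCacheCapacity builder methods and the EvictionType enum are hypothetical placeholders for the client-cache options exposed by your version of CacheBuilder, and "connection" stands for an established connection to the ScaleOut service.

// Hypothetical sketch: SetClientCacheEviction, SetClientCacheCapacity, and
// EvictionType are placeholders for the actual client-cache options in your
// version of the Scaleout.Client library.
var carts = new CacheBuilder<string, ShoppingCart>("carts", connection)
    .SetClientCacheEviction(EvictionType.Random)   // "Random" eviction strategy
    .SetClientCacheCapacity(500_000)               // large, but leave RAM for the ScaleOut service
    .Build();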

Perform "Fast Reads" in your Handler's Evaluate Method

The context argument that is provided to your invocation handler's Evaluate method contains a FastReadVersion property that can be used to cut round trips to the ScaleOut service.

Use the supplied FastReadVersion value by setting it on a ReadOptions instance during a read call. This causes a Read operation to skip its version check with the ScaleOut service if the in-process client cache already contains the supplied version of the object.

class FindInactiveCarts : ForEach<string, ShoppingCart>
{
    public override void Evaluate(string key, OperationContext<string, ShoppingCart> context)
    {
        // Skip the read's version check against the ScaleOut service when the
        // in-process client cache already holds this version of the object:
        var readOpt = new ReadOptions() { FastReadVersion = context.FastReadVersion };
        var readResponse = context.Cache.Read(key, readOpt);
        // ...
    }
}
Warning

Using fast reads introduces a slim possibility of a race: another client may update the targeted object in the brief interval between the ScaleOut service supplying the version to your Evaluate() method and your subsequent read call, causing a stale object to be returned to the read caller. Fast reads should therefore be used only when other clients do not update the objects being processed or when stale reads are acceptable under your business requirements.

Use Custom Serialization

For simplicity, .NET's ubiquitous BinaryFormatter is used as the default serializer for objects stored in the cache. Although the BinaryFormatter is flexible, it does not perform well. Performance-sensitive applications should use a more efficient serializer.
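
As a sketch, more efficient serialization can be plugged in when the cache is built. The example below assumes a CacheBuilder.SetSerialization method that accepts serialize and deserialize delegates (verify the exact signature for your library version) and uses System.Text.Json stream overloads (.NET 6 or later) in place of BinaryFormatter; "connection" stands for an established connection to the ScaleOut service.

using System.Text.Json;

// Sketch: register custom serialization callbacks instead of BinaryFormatter.
// The SetSerialization signature shown here is an assumption; check your
// Scaleout.Client version for the exact delegate types.
var carts = new CacheBuilder<string, ShoppingCart>("carts", connection)
    .SetSerialization(
        (cart, stream) => JsonSerializer.Serialize(stream, cart),       // write the object to the stream
        stream => JsonSerializer.Deserialize<ShoppingCart>(stream)!)    // read the object back
    .Build();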

Use .NET's Server GC

The .NET runtime defaults to using workstation garbage collection, which is intended for end-user client applications. Improve your application's throughput by configuring .NET to use server garbage collection.

See Run-time configuration options for garbage collection for more information.
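
For example, server GC can be enabled through the handler application's project file; the equivalent runtimeconfig.json setting is "System.GC.Server": true, and the DOTNET_gcServer=1 environment variable can also be used.

<!-- In the handler application's .csproj project file: -->
<PropertyGroup>
  <ServerGarbageCollection>true</ServerGarbageCollection>
</PropertyGroup>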

Note

If you are running your handler in an Invocation Grid, the Invocation Grid project template is already configured to use Server GC.

Increase the Key-String Cache Size

When processing invocation requests for a cache that uses System.String as the key type, the Scaleout.Client library must retrieve the original key strings from the ScaleOut service in order to supply them to an invocation handler. The key-string cache eliminates these round trips for recently accessed keys.

By default, the key-string cache stores 10,000 recently-accessed keys. If your invocation handler regularly processes more than 10,000 objects per invocation, consider increasing the key-string cache size.

The size of the key-string cache can be changed either through the keystringCacheSize configuration setting or at runtime with the CacheBuilder.SetKeystringCacheSize method.
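
For example (a sketch that assumes SetKeystringCacheSize is a fluent builder method taking the maximum number of cached key strings; verify the exact signature for your library version):

// Sketch: raise the key-string cache above its 10,000-key default.
// "connection" stands for an established connection to the ScaleOut service.
var carts = new CacheBuilder<string, ShoppingCart>("carts", connection)
    .SetKeystringCacheSize(50_000)
    .Build();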

Tip

If your cache's string keys never exceed 26 bytes (when encoded as UTF-8), use the ShortStringKeyEncoder class to encode your keys. Short strings offer better performance because they fit inside the ScaleOut service's internal key structure, so they require neither extra round trips nor a key-string cache. Just be sure to use the ShortStringKeyEncoder in both your PMI Client and PMI Handler applications!
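
As a sketch, the encoder would typically be supplied when the cache is built. The SetKeyEncoder method name below is an assumption (check your library version for the actual registration call); configure the same encoder in both the client and handler applications.

// Sketch: encode short string keys (26 bytes or fewer as UTF-8) directly into
// the ScaleOut service's internal key structure. SetKeyEncoder is an assumed
// method name; "connection" stands for an established connection.
var carts = new CacheBuilder<string, ShoppingCart>("carts", connection)
    .SetKeyEncoder(new ShortStringKeyEncoder())
    .Build();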