In-Memory Data Grid Technology
In-memory data grids (IMDGs) can help your applications unlock their full performance potential on server farms, while offloading database servers. ScaleOut Software's suite of products delivers the industry's leading IMDG solution by combining:
- fast access that outperforms other storage alternatives,
- scalability capable of handling the largest server-farm loads,
- high availability to meet stringent mission-critical requirements,
- ease of use designed to keep development time and management to an absolute minimum,
- powerful analysis capabilities that let you quickly query and analyze data stored in the grid, and
- comprehensive management features for viewing, managing, and backing up grid data.
This section explores some of the exciting technology that helps ScaleOut StateServer deliver these benefits to you.
Why In-Memory Data Grids?
With the advent of Web and application server farms and HPC compute grids, database servers have increasingly been used to hold mission-critical but relatively short-lived and fast-changing data that must be accessible across the farm. Examples include session-state, e-commerce shopping carts, SOAP requests, and intermediate results from data grid computations. Because of their fast turnover and high update rates, these new types of data have increasingly bottlenecked database servers. In addition, repeatedly accessing database servers to access slowly changing data, such as user profiles and product lists, has further added to the performance bottleneck.
In-memory data grids offer an important storage alternative that dramatically lowers cost and boosts application performance. The foundation of an IMDG is fast, scalable distributed caching, which keeps rapidly changing data close to where it is needed and quickly accessible. By moving data closer to where it is used and taking advantage of a server farm's inherent scalability, this technology significantly enhances overall application performance, while making more efficient use of the data center's computational, networking, and storage resources. Beyond caching, in-memory data grids add analysis capabilities, employing a server farm's scalable computational resources to enable fast, in-depth insights into trends and patterns, as well as comprehensive tools for managing and archiving data.
Managing Workload-Data Effectively
As the popularity of server farms and compute grids - and the amount of data they must manage - has steadily grown, a new type of mission-critical but relatively short-lived data, called "workload-data," has emerged. This type of data includes session-state, shopping carts, cached database results, SOAP requests, financial data, grid computing data, and other rapidly accessed, fast-changing application data.
Applications have traditionally stored workload-data within the local memory of the running process. This technique, called "in process," works well on a single server but is ineffective on server farms and grids, where workload-data need to be accessible across the grid so that they can be retrieved and updated by all servers. Global accessibility maximizes overall performance by allowing the incoming client load to be evenly distributed across the servers and by ensuring that clients are not affected by server failures.
To provide global accessibility, some applications may turn to storing workload-data in a centralized, back-end database server (DBMS) so that it can be retrieved from any server and preserved in case of server outages. However, database servers are designed to handle long-term, line-of-business (LOB) data, such as inventory, purchase orders, billing records, and other long-lived business data. As the following table illustrates, workload-data have different characteristics that make them poorly suited for storage in database servers:
| Characteristic | Line-of-business data | Workload-data |
| --- | --- | --- |
| Data preservation | Critical | Critical, but reproducible |
| Access:update ratio | ~4:1 or higher | ~1:1 |
| Fast access and update | Less important | Very important |
In addition, database servers can be costly (especially if clustering is used), and traffic to and from the data storage tier creates a bottleneck that impacts both performance and scalability.
To ensure that server-farm and grid applications effectively handle large workloads and deliver high performance, in-memory data grids, such as ScaleOut StateServer, store workload-data on a server farm or grid for fast, scalable access and global accessibility by all servers. They take advantage of a server farm's inherent ability to survive failures and maintain high availability. In-memory data grids also can harness the server farm's computational resources to enable fast, efficient analysis of workload-data stored within the grid, making them the foundation for the next generation of parallel data analysis.
Partitioned Storage for Scaled Performance
The key to achieving high performance and scalability on server farms and HPC compute grids is to distribute a workload across the server farm so that all servers can simultaneously share a portion of the work. This reduces the delays in handling sub-tasks, and as the workload increases, servers can be added to scale the processing power of the farm.
To reap these performance benefits for fast, scalable data storage, ScaleOut StateServer automatically partitions all of the data grid's stored objects across the farm and simultaneously processes access requests on all servers. This reduces access times and scales the overall throughput of the data grid. It also avoids "hot spots" that can arise if objects are stored on the servers where they are created.
As servers are added to the grid, ScaleOut StateServer automatically repartitions and rebalances the storage workload to scale throughput. Likewise, if servers are removed, ScaleOut StateServer coalesces stored objects on the surviving servers and rebalances the storage workload as necessary.
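The partitioning approach described above can be illustrated with a short Python sketch. The hashing and placement functions here are invented for illustration and are not ScaleOut StateServer's actual algorithm; the point is that keys map to fixed partitions, and partitions map to whatever servers are currently in the grid:

```python
import hashlib

NUM_PARTITIONS = 16  # real grids use many more partitions than servers


def partition_of(key: str) -> int:
    """Map an object's key to a fixed partition via hashing."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS


def assign_partitions(servers: list) -> dict:
    """Spread partitions evenly across the current set of servers."""
    return {p: servers[p % len(servers)] for p in range(NUM_PARTITIONS)}


# With three servers, each owns roughly a third of the partitions.
placement = assign_partitions(["A", "B", "C"])
owner = placement[partition_of("cart:1234")]

# Adding a server triggers a rebalance: only the partition-to-server
# map changes, so just the affected partitions move between servers.
placement = assign_partitions(["A", "B", "C", "D"])
```

Because keys hash to partitions rather than directly to servers, a membership change only requires recomputing the placement map and migrating the partitions whose owners changed.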
Integrated, Internal Caching
To keep access times fast, ScaleOut StateServer automatically caches grid-based objects on the servers where they were most recently accessed. Two levels of coherent, internal caching are employed to ensure the fastest possible access times. These caches are fully integrated into ScaleOut StateServer to automatically and transparently accelerate performance without involving the developer in configuring and coordinating these mechanisms.
One level of internal caching holds objects within the StateServer service process on each grid server. This speeds up repeated accesses to these objects by avoiding the networking overheads required to copy them from remote hosts. A second level of internal caching holds deserialized objects within ScaleOut StateServer's client libraries. This cache sidesteps the CPU overhead required to retrieve deserialized objects when they are repeatedly accessed from the distributed cache. This dramatically lowers access times so that they are very close to "in process" speeds. The following diagram shows an example of a client application retrieving an object from a remote server within the distributed cache. ScaleOut StateServer automatically caches the object's deserialized data within the client library on the requesting server:
Integrated Data Replication for High Availability
High availability of grid data is essential for mission-critical applications. ScaleOut StateServer ensures that data is never lost - even if a server in the farm fails - by automatically replicating all stored objects on up to two additional grid servers. If a server goes offline or loses network connectivity, ScaleOut StateServer retrieves its objects from replicas stored on other servers in the grid, and it creates new replicas to maintain redundant storage as part of its "self-healing" process.
ScaleOut StateServer uses a fixed number of replicas to ensure that the storage capacity of the in-memory data grid scales as servers are added to the farm; replicating objects to all servers would quickly overflow the memory of each server. The following diagram depicts an object (in blue) and its two replicas (in red) stored within the distributed cache on different servers. ScaleOut StateServer automatically takes care of creating, load-balancing, and accessing replicas so that the developer can benefit from the cache's high availability without having to manage these details.
Traditional master/slave replication mechanisms are subject to "split brain" problems in which network outages make it impossible to keep all replicas synchronized. If this ever occurs, the in-memory data grid could lose updates and return "stale" data to its clients. To avoid split brain problems, ScaleOut StateServer employs patented technology that provides scalable, highly available, quorum-based updates to grid data. This technology, which originated in the design of highly available server clusters, delivers high performance while ensuring that a consistent view of grid data is always maintained. It transparently handles a variety of server and network failures without any intervention on the part of the developer.
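The idea behind quorum-based updates can be sketched in a few lines of Python. This is a deliberately simplified majority-quorum model, not ScaleOut's patented protocol: a real implementation also rolls back failed writes and recovers versions after partitions heal, but the core rule is the same, namely that no update commits and no read is trusted without a majority of replicas:

```python
def quorum_write(replicas: dict, key, value, version: int) -> bool:
    """Apply a versioned update; succeed only if a strict majority of
    replica servers acknowledge, so a partitioned minority can never
    commit a conflicting write (avoiding "split brain")."""
    acks = 0
    for server in replicas.values():
        if server["up"]:
            server["data"][key] = (version, value)
            acks += 1
    return acks > len(replicas) // 2      # strict majority required


def quorum_read(replicas: dict, key):
    """Read from reachable replicas and return the highest-versioned
    value, so the latest committed update is always seen."""
    versions = [s["data"][key] for s in replicas.values()
                if s["up"] and key in s["data"]]
    return max(versions)[1] if versions else None
```

Because any two majorities must overlap in at least one server, a read quorum always includes at least one replica that saw the latest committed write.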
Scalable Failure Detection and Recovery
A major challenge for server farms is the design of a global membership mechanism that scales with the number of servers in the farm. This membership mechanism determines which servers in the farm are active and able to participate in the in-memory data grid. An integral aspect of this mechanism is the use of a heartbeat protocol between servers that quickly detects server or networking outages and initiates recovery. ScaleOut StateServer uses a patent-pending, scalable, point-to-point heartbeat architecture to efficiently detect failures without flooding the server farm's network with heartbeat packets. Heartbeat failures automatically trigger ScaleOut StateServer's multicast discovery protocol to determine the set of surviving servers. This is followed by "self-healing" recovery, which quickly restores access to grid data and dynamically rebalances the storage load across the IMDG. The following diagram depicts ScaleOut StateServer's use of heartbeat channels to detect server and networking outages:
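The timeout-based detection at the heart of a heartbeat protocol can be sketched as follows. The class and timeout value are invented for illustration; the real protocol's point-to-point channel topology and recovery steps are beyond this sketch:

```python
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds of silence before suspecting a peer


class Membership:
    """Track the last heartbeat heard from each peer server."""

    def __init__(self, peers):
        now = time.monotonic()
        self.last_seen = {p: now for p in peers}

    def heartbeat(self, peer):
        """Record an incoming heartbeat from a peer."""
        self.last_seen[peer] = time.monotonic()

    def suspects(self):
        """Peers that missed the timeout; these trigger the discovery
        protocol and "self-healing" recovery."""
        now = time.monotonic()
        return [p for p, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]
```

A point-to-point arrangement means each server exchanges heartbeats with only a few designated peers, so heartbeat traffic grows linearly with the grid rather than quadratically as in all-to-all schemes.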
The Benefits of a Scalable, Highly Available Architecture
ScaleOut StateServer's scalable, highly available, and fully symmetric storage architecture creates a foundation that benefits every function of the in-memory data grid, including:
- APIs and asynchronous event handling,
- transparent access to a backing store,
- data replication to remote IMDGs,
- remote client access, and
- parallel query and parallel method invocation for data analysis.
ScaleOut StateServer provides comprehensive APIs for managing the lifetime of stored objects, including object timeouts, dependency relationships, backing store access, and memory reclamation of the least recently used objects. Because of its highly available storage, the in-memory data grid automatically ensures that these properties are preserved after a server fails. Also, ScaleOut StateServer's scalable design allows the handling of object expiration and notification events to scale with the IMDG's size by simultaneously processing asynchronous events on all servers, as depicted by the red arrows in the following diagram. Moreover, if a server fails, its pending events are automatically directed to surviving servers so that delivery is guaranteed.
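The least-recently-used reclamation mentioned above can be sketched with Python's `OrderedDict`; this toy store is illustrative only and ignores the timeout, dependency, and replication machinery that a real IMDG layers on top:

```python
from collections import OrderedDict


class LruStore:
    """Evict the least recently used object when capacity is reached,
    the way an IMDG reclaims memory from cold objects."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)          # mark as most recently used
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the oldest entry

    def get(self, key):
        self.items.move_to_end(key)          # an access refreshes recency
        return self.items[key]
```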
Likewise, ScaleOut GeoServer's remote data replication uses scalable and highly available connections that take full advantage of all servers in each in-memory data grid. To maximize performance and availability, all servers within both the local and remote StateServer grids participate in data replication, as shown in the following diagram:
Since all local servers participate in handling local object updates, they simultaneously replicate these updates to the remote in-memory data grid. This maximizes both the throughput and scalability of data replication. In addition, the servers download load-balancing information to most efficiently direct object updates to individual servers in the remote IMDG. This information is automatically updated whenever a membership or load-balancing change occurs at the remote grid. Likewise, this architecture ensures that data replication is not interrupted if servers in either the local or remote grid should fail.
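The fan-out pattern described above can be sketched as follows. The hashing-based owner lookup is invented for illustration and stands in for the downloaded load-balancing information; the point is that each update travels directly to the remote server that owns its key, so no single server funnels all replication traffic:

```python
import hashlib


def remote_owner(key: str, remote_servers: list) -> str:
    """Pick the remote grid server that owns this key, using placement
    information downloaded from the remote grid."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return remote_servers[h % len(remote_servers)]


def replicate(updates: dict, remote_servers: list) -> dict:
    """Fan a batch of local updates out across the remote grid,
    grouped by the remote server that owns each key."""
    out = {s: {} for s in remote_servers}
    for key, value in updates.items():
        out[remote_owner(key, remote_servers)][key] = value
    return out
```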
ScaleOut Remote Client also leverages ScaleOut StateServer's scalable, highly available architecture to give applications the fastest possible performance. Its client libraries use multiple, simultaneous connections to the in-memory data grid to automatically scale networking throughput with the size of the grid. These connections are depicted by the red arrows in the following diagram. The client libraries also maintain load-balancing information obtained from the IMDG so that access requests are directly sent to the authoritative grid servers for maximum performance.
To maintain high availability in case a grid server should fail or be taken offline, the client libraries automatically re-establish a connection as necessary to other servers in the in-memory data grid. Likewise, if a new grid server is added, the remote client automatically detects the new server and communicates with it.
Integrated Ease of Use
ScaleOut Software has designed its products to deliver industry-leading performance while keeping the application designer's valuable development time to an absolute minimum. This is accomplished by creating the simplest possible view of the in-memory data grid for the application developer and hiding the details of object placement, performance scaling, and data replication for high availability. To a large extent, the application developer can view the IMDG as if it were just a local data cache, accessed through the customary add, retrieve, update, and remove operations on cached objects. Object locking for synchronization across threads and servers is built into these operations and occurs automatically. To keep the developer's view as simple as possible, almost all of the extensive technology described above is hidden within the grid's infrastructure.
To see an example of these highly integrated mechanisms at work, consider an object retrieve operation. When the developer uses this one-line API method to access an object stored in the grid, ScaleOut StateServer transparently takes several steps that allow the IMDG to maintain its high performance, scalability, and high availability:
- The API client library looks in its local, deserialized client-side cache to see if the object's data is available.
- If necessary, the local server looks in its serialized data cache to see if the object or a replica is stored locally.
- The client or local server determines which remote server holds the object and forwards the request to that server.
- If a server fails, the failure is detected, and the server is removed from the IMDG. The access request is automatically re-forwarded to another server that has a replica for the object.
- If a server is added and the object is load-balanced to another server, the access request is automatically re-forwarded.
- If requested by the client, the object is locked for unique access by the client. The object's lock is maintained on the authoritative grid server and associated replica servers in case a failure should occur. (The lock is released when the client application updates or unlocks the object.)
- The object's data is returned to the client and cached on the local server if necessary. If an internal cache is full, older objects are flushed to make room for the new object.
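The lookup cascade in the steps above can be sketched in Python. The function and data structures here are invented for illustration (the real client library also handles failover, load-balancing changes, and locking, per the steps above), but the order of the checks is the essential point:

```python
def retrieve(key, client_cache, local_store, remote_servers):
    """Resolve a read in order: client near cache, then the local
    server's store, then the remote server that owns the object
    (or a surviving replica)."""
    if key in client_cache:                 # step 1: deserialized hit
        return client_cache[key]
    if key in local_store:                  # step 2: local serialized copy
        obj = local_store[key]
    else:                                   # step 3: forward to the owner
        obj = next(s[key] for s in remote_servers if key in s)
    client_cache[key] = obj                 # cache for the next access
    return obj
```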
This discussion shows how ScaleOut StateServer's integrated design hides the many details of data storage within the data grid so that the application developer does not have to devote time and energy to configuring and managing them. In particular, ScaleOut StateServer transparently handles the issues of server farm membership, object placement, scaling, recovery, creating and managing replicas, and handling synchronization on object access.
Parallel Data Analysis
ScaleOut Analytics Server has introduced powerful new technology for analyzing data stored in the in-memory data grid. Called parallel method invocation (PMI), this feature lets developers easily implement "map/reduce" semantics to operate on cached data. PMI inverts traditional access for a distributed cache by automatically pushing the user's computational logic to every node in the grid, thus eliminating the need to move cached data between compute servers at runtime. By automatically performing an application's map and reduce methods across all caching servers, PMI automatically scales performance and reduces network traffic, while dramatically reducing development time.
ScaleOut StateServer uses several innovations to ensure that PMI delivers scalable performance with integrated high availability. Method invocations are multicast to all servers, and results are automatically and efficiently merged using a built-in binary merge tree. A multithreaded execution engine runs on each grid server to transparently employ all available processors and cores when executing method invocations. Also, a patent-pending technique for implementing parallel method invocation across all grid servers assures continued execution in case of server failures.
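The map-everywhere-then-merge pattern can be sketched in Python, with threads standing in for grid servers and a pairwise merge loop standing in for the built-in binary merge tree. This is a single-process illustration of the semantics, not the distributed implementation:

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_invoke(partitions, map_fn, merge_fn):
    """Run map_fn on every partition in parallel (as each grid server
    would on its locally stored objects), then combine the partial
    results pairwise with merge_fn, like a binary merge tree."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(map_fn, partitions))
    while len(results) > 1:                 # pairwise merge rounds
        pairs = zip(results[::2], results[1::2])
        merged = [merge_fn(a, b) for a, b in pairs]
        if len(results) % 2:                # carry an odd leftover forward
            merged.append(results[-1])
        results = merged
    return results[0]
```

Because the data never leaves the server that stores it, only the small partial results cross the network, and the merge work itself is spread across the grid.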
ScaleOut StateServer's scalable, highly available architecture enables in-memory data grids to provide a highly effective foundation for building scalable, mission-critical applications. It also integrates and encapsulates this powerful technology wherever possible so that the application developer can maintain focus on application design. The use of an IMDG lets developers take advantage of the full potential for scaling and high availability offered by server farms and HPC compute grids. It unlocks huge performance benefits for a wide range of applications and also forms the basis for the next generation of data-parallel analysis. ScaleOut Software is committed to the ongoing development of data grid technology, which has become an important foundation for data storage and analysis in scalable computing.