ScaleOut ComputeServer®

  • Track and analyze live data for operational intelligence and real-time feedback.
  • Use in-memory storage and computing to enable fast data-parallel computations.

Take Action Immediately by Analyzing Live, Operational Data

ScaleOut ComputeServer combines a scalable, in-memory data grid with a powerful data-parallel compute engine to deliver immediate results for operational intelligence, real-time feedback, and time-sensitive analytics. Now you can run continuous, in-memory computations across a fast-changing dataset and obtain results with extremely low latency to capture business opportunities and identify operational issues.

See a recent benchmark in which ScaleOut ComputeServer completed continuous computations across a fast-changing 1 TB dataset every 3 seconds.


See how ComputeServer reduced the computation time for a hedge fund’s risk analysis from 15 minutes to 1 second.


Fast, In-Memory Computing

In-Memory Computing for Operational Intelligence

Operational systems generate streams of fast-changing data that need to be tracked, correlated, and analyzed to identify patterns and trends — and then generate immediate feedback to steer operations. This is called operational intelligence. Organizations that have it can deliver better results, boost cost-effectiveness, and identify perishable business opportunities that others miss. Traditional business intelligence, with its batch processing systems and disk-based data storage, simply cannot keep up with operational systems. In-memory computing can.

ScaleOut ComputeServer delivers in-memory computing in a form that’s ideal for operational intelligence. It combines ScaleOut’s industry-leading, in-memory data grid with an integrated, data-parallel compute engine to create a breakthrough platform for building applications that generate real-time results. Unlike pure streaming or event processing systems, such as Storm or Spark Streaming, ScaleOut ComputeServer makes it easy to track and correlate fast-changing data using an in-memory, object-oriented model. Blazingly fast and scalable data-parallel computing identifies important patterns and trends in this data with extremely low latency so that immediate feedback can be generated and action taken. Now operational intelligence is both possible and within easy reach.

Operational Intelligence vs. Business Intelligence

Although business intelligence (BI) has evolved over the last several years with the adoption of Hadoop, its focus remains on examining very large, static data sets to identify long-term trends and opportunities. As a result, most BI implementations use latency-insensitive techniques, such as batch processing and disk-based data storage. While recent innovations, such as Spark, have employed in-memory computing techniques to improve efficiency and lower latency, they are designed to accelerate BI rather than to integrate directly with operational systems.

Until now, operational systems have not been able to deploy computing technology that tracks and analyzes live data to generate immediate feedback, that is, to provide operational intelligence (OI). ScaleOut ComputeServer was designed to make OI possible. By combining an in-memory data grid with an integrated compute engine, it can track live data with both low latency and high availability using a straightforward, object-oriented model. Now live systems have the technology they need for OI.

Data-Parallel Computing — Made Easy

Data scientists have known for decades that data-parallel computing is both fast and remarkably easier to use than other techniques for parallel processing. Hadoop developers have brought this technology into the 21st century to focus on business intelligence. Now ScaleOut ComputeServer combines in-memory and data-parallel computing to unlock the benefits of operational intelligence.

ScaleOut ComputeServer makes data-parallel computing extremely easy for application developers to learn and use by integrating it with popular programming languages such as Java and C#. Called “parallel method invocation” (PMI), this approach lets developers write data-parallel computations as language-defined methods on collections of objects stored in the in-memory data grid. ScaleOut ComputeServer automatically deploys the code and runs these methods in parallel across the grid. PMI-based applications are simple to write, debug, and maintain — and they run extremely fast.
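The shape of the PMI pattern can be sketched in plain Java. This is an illustrative local approximation, not ScaleOut's actual API: the class, method, and risk-model names are hypothetical, and a parallel stream stands in for the grid's distributed compute engine, which would run the method across all servers.

```java
import java.util.List;

// Illustrative sketch of the PMI pattern in plain Java (not ScaleOut's API):
// a class-defined analysis method runs against every object in a collection,
// here simulated with a parallel stream over local objects.
public class PmiSketch {
    // A grid-stored object with a class-defined analysis method.
    static class Position {
        final double exposure;
        Position(double exposure) { this.exposure = exposure; }
        double evalRisk() { return exposure * 0.07; }  // hypothetical risk model
    }

    // In ScaleOut's model the engine runs evalRisk() in parallel across all
    // grid servers; locally we approximate that with a parallel stream.
    static double totalRisk(List<Position> positions) {
        return positions.parallelStream()
                        .mapToDouble(Position::evalRisk)
                        .sum();
    }

    public static void main(String[] args) {
        List<Position> book = List.of(new Position(100), new Position(250));
        System.out.println("total risk = " + totalRisk(book));
    }
}
```

The key property PMI adds over this local sketch is that the objects already live partitioned across the grid, so the method runs where the data is, with no data motion.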

Unlike complex parallel computing platforms, such as Hadoop and Storm, PMI requires no tuning to extract maximum performance. Its simple yet powerful data-parallel model, derived from parallel supercomputing, sidesteps the complexities inherent in Hadoop MapReduce, such as constructing key spaces, optimizing reducers, and combining results. It also avoids Storm’s complex, task-parallel execution pipeline while providing a straightforward means to track incoming events, correlate them, and maintain an in-memory model of a live system.

To make distributed, data-parallel programming accessible to anyone familiar with .NET’s Task Parallel Library (TPL), ScaleOut ComputeServer includes an operator called “Distributed ForEach” modeled after the TPL’s widely used Parallel.ForEach operator. This feature seamlessly extends data-parallel computing across ScaleOut’s in-memory data grid to handle much larger data sets than otherwise possible. It delivers fast, scalable performance while avoiding network-intensive data motion and unnecessary garbage collection. Integration with LINQ queries allows applications to specify exactly which data within a large collection needs to be processed.
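The query-then-process shape of Distributed ForEach with a LINQ filter can be sketched in plain Java (the actual operator is .NET-based; the names below are illustrative, and a local parallel stream substitutes for the distributed grid):

```java
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

// Local Java analogue of the Distributed ForEach + LINQ-filter pattern.
// Class and method names are illustrative, not ScaleOut's actual API.
public class ForEachSketch {
    static class Order {
        final String region; final double amount;
        Order(String region, double amount) { this.region = region; this.amount = amount; }
    }

    // A query first selects which objects to process, then each selected
    // object is handled in parallel -- the shape of Parallel.ForEach
    // extended over a grid-hosted collection.
    static long countLargeOrders(List<Order> orders, String region, double threshold) {
        LongAdder hits = new LongAdder();
        orders.parallelStream()
              .filter(o -> o.region.equals(region))   // LINQ-style selection
              .filter(o -> o.amount > threshold)
              .forEach(o -> hits.increment());        // per-object action
        return hits.sum();
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(new Order("EU", 500), new Order("US", 50), new Order("EU", 10));
        System.out.println(countLargeOrders(orders, "EU", 100));
    }
}
```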

Learn more about ScaleOut ComputeServer’s ease of use.

Integrated Parallel Query

ScaleOut ComputeServer integrates parallel query into PMI to select objects for analysis, providing a simple, object-oriented filter based on object properties. This can dramatically reduce the processing workload and shorten compute times.

Global Merging of Results

To expedite feeding results back to an operational system, ScaleOut ComputeServer adds a unique feature for combining results. Developers define a method for merging results, and PMI then runs this method in parallel across all servers to generate a single, globally combined result.
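The merge step can be sketched in plain Java. Assume (hypothetically) that each server produces a partial summary; a developer-supplied, associative merge method then combines pairs of partial results until one global result remains. A local parallel reduction stands in for the grid-wide merge:

```java
import java.util.List;

// Sketch of PMI's global merge in plain Java: a developer-defined merge
// method combines partial results pairwise. Names are illustrative only.
public class MergeSketch {
    static class Summary {
        final double min, max;
        Summary(double min, double max) { this.min = min; this.max = max; }
        // The user-defined merge method that PMI would run in parallel
        // across servers; it must be associative for pairwise combining.
        Summary merge(Summary other) {
            return new Summary(Math.min(min, other.min), Math.max(max, other.max));
        }
    }

    static Summary globalMerge(List<Summary> partials) {
        return partials.parallelStream()
                       .reduce(Summary::merge)
                       .orElseThrow();
    }

    public static void main(String[] args) {
        Summary s = globalMerge(List.of(new Summary(3, 9), new Summary(1, 4), new Summary(5, 12)));
        System.out.println(s.min + " .. " + s.max);
    }
}
```

Because the merge method is associative, the combining tree can run in parallel on every server, so the final result emerges without funneling all partial results through a single node.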


Parallel Method Invocation Runs in Parallel on ScaleOut’s IMDG

Low-Latency and Scalable Performance

When running PMI applications using ScaleOut ComputeServer, the in-memory data grid’s compute engine automatically squeezes performance out of all grid servers and cores to complete the computation as fast as possible. The engine eliminates batch scheduling overhead, typically starting up computations in less than a millisecond. Since the data is already hosted in memory within the grid, no time is wasted moving data from disk or across the network. PMI delivers the lowest possible latency for performing data-parallel computations and generating operational intelligence.

Operational intelligence must maintain low latency even as the workload of the system it tracks grows. ScaleOut ComputeServer makes this easy. Performance scaling simply requires adding servers to the in-memory data grid, which automatically redistributes and rebalances the workload to take advantage of the new servers. Storage capacity, access throughput, and compute power all grow linearly, and execution times stay fast.

Flexible Management of Live Data

The Power of Object-Oriented Storage for Live Data

A core requirement for operational intelligence is to track the state of a live system. ScaleOut ComputeServer addresses this need with its scalable, highly available, in-memory data grid (IMDG). The power of an IMDG derives from its object-oriented view of data, which allows millions of entities to be tracked as individual objects, each maintaining values for a set of class-defined properties. OI applications track and correlate changes to these entities by updating these objects as events flow into the grid.

For example, imagine an OI system tracking millions of cable TV viewers as they select shows and change channels. By using ScaleOut ComputeServer’s IMDG, viewers can be represented as a huge collection of in-memory objects which are individually updated as channel-switch events flow into the grid. Each stored object tracks the behavior of a single viewer, correlating a sequence of events and enriching this data with programming information and viewer preferences. ScaleOut ComputeServer’s data-parallel compute engine uses this data to analyze the population of viewers in parallel and immediately detect important patterns and trends.
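The one-object-per-viewer model can be sketched in plain Java. A local concurrent map stands in for the grid, and all class and method names are hypothetical, not ScaleOut's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the viewer-tracking model: one in-memory object per viewer,
// updated in place as channel-switch events arrive. A ConcurrentHashMap
// stands in for the grid; names are illustrative.
public class ViewerSketch {
    static class Viewer {
        final List<Integer> channelHistory = new ArrayList<>();
        synchronized void onChannelSwitch(int channel) { channelHistory.add(channel); }
        synchronized int switches() { return channelHistory.size(); }
    }

    static final Map<String, Viewer> viewers = new ConcurrentHashMap<>();

    // Each incoming event updates exactly one viewer object in place,
    // correlating that viewer's sequence of events over time.
    static void handleEvent(String viewerId, int channel) {
        viewers.computeIfAbsent(viewerId, id -> new Viewer()).onChannelSwitch(channel);
    }

    public static void main(String[] args) {
        handleEvent("v42", 7);
        handleEvent("v42", 11);
        handleEvent("v99", 3);
        System.out.println(viewers.get("v42").switches());
    }
}
```

A data-parallel analysis (such as the PMI approach described above) would then scan all Viewer objects in parallel to detect population-wide patterns.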

Storage Models for Objects, Big and Small

Operational systems embody a variety of entities, some complex and others simple. This requires that the object-oriented storage system be able to handle divergent requirements, including support for rich, heavyweight objects and small, lightweight objects. For example, an e-commerce company might need to store semantically rich objects for its shopping carts as well as small objects for clickstream analysis.

To meet this need, ScaleOut ComputeServer provides two in-memory storage models. Designed for large, complex objects, the Named Cache supports rich functionality such as property-oriented query, dependency relationships, timeouts, distributed locking, global access, and much more. With the Named Map, ScaleOut ComputeServer adds Java ConcurrentMap and .NET ConcurrentDictionary semantics to efficiently organize large populations of small data objects and minimize the amount of metadata associated with each. In both named caches and named maps, applications can create, read, update, and delete objects to manage live data, as well as perform data-parallel analysis using PMI.
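Because the Named Map follows Java's standard ConcurrentMap semantics, code written against ConcurrentMap carries over directly. The sketch below runs the usual create/read/update/delete calls against a local ConcurrentHashMap standing in for a grid-hosted named map (the local map is an assumption for illustration, not the grid API itself):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// ConcurrentMap-style CRUD calls of the kind a named map would serve;
// a local ConcurrentHashMap stands in for the grid-hosted map.
public class NamedMapSketch {
    static ConcurrentMap<String, Long> demo() {
        ConcurrentMap<String, Long> clicks = new ConcurrentHashMap<>();
        clicks.put("home", 1L);                // create
        clicks.merge("home", 1L, Long::sum);   // update (atomic read-modify-write)
        clicks.putIfAbsent("cart", 1L);        // create only if missing
        clicks.remove("cart");                 // delete
        return clicks;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The same object population can then be analyzed in parallel with PMI, so the storage model and the compute model share one view of the data.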


Storage Models for both large and small objects


Designed for Operational Systems

Operational intelligence imposes data storage requirements not usually found in systems designed for business intelligence. Tracking and correlating live events requires handling a continuous stream of updates to individual objects with very low latency. This allows the system to maintain a model of the live system, analyze it in parallel, and generate immediate results. ScaleOut ComputeServer’s integrated in-memory data grid, with its flexible storage models, was specifically designed to provide the very low access latency that operational intelligence requires.

Unlike business intelligence systems, which rely on offline batch processing, OI systems require continuous availability so that analysis results are always available to provide feedback to a live system. ScaleOut ComputeServer was designed from the ground up to provide high availability in both its data storage and compute engine. It makes use of patented and patent-pending techniques to ensure that in-memory data is always available and that data-parallel processing efficiently completes even if a server or network outage occurs.

Computing on Individual Objects

In addition to analyzing collections of objects in parallel, it is often valuable to be able to perform a user-defined computation on individual objects. To meet this need, ScaleOut ComputeServer includes a mechanism called Single Method Invocation (SMI) which lets applications invoke user-defined methods on specific objects. Similar to stored procedures in database systems, SMI enables applications to efficiently analyze and optionally update objects in place within the in-memory data grid.

SMI has many uses. For example, it enhances the ability of OI applications to intelligently update objects as events flow in from a live system. It also enables the construction of column-oriented analyses in which each object represents a column of data that is analyzed in a single operation.
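The SMI idea can be sketched in plain Java: ship a user-defined method to the object and apply it in place, rather than fetching the object, mutating it locally, and writing it back. The store, the invoke helper, and the column-style objects below are all illustrative assumptions, not ScaleOut's API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

// Sketch of the SMI (stored-procedure-like) pattern: run a user-defined
// method against one specific object, updating it in place in the store.
// All names are illustrative, not ScaleOut's actual API.
public class SmiSketch {
    static final Map<String, int[]> grid = new ConcurrentHashMap<>();  // "column" objects

    // invoke() plays the role of SMI: apply `method` to the object
    // identified by key, atomically replacing it within the store.
    static void invoke(String key, UnaryOperator<int[]> method) {
        grid.computeIfPresent(key, (k, v) -> method.apply(v));
    }

    public static void main(String[] args) {
        grid.put("prices", new int[]{10, 20, 30});
        // Column-oriented update: adjust every entry of one column object
        // in a single in-place operation.
        invoke("prices", col -> { for (int i = 0; i < col.length; i++) col[i] += 1; return col; });
        System.out.println(java.util.Arrays.toString(grid.get("prices")));
    }
}
```

In the grid setting, the payoff is that only the method travels over the network; the object itself never leaves the server that hosts it.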

Industry-Leading Ease of Use

Fast Time to Insight: No Special Knowledge Required

Let’s face it. Complicated parallel processing frameworks are intimidating. ScaleOut ComputeServer was specifically designed to lower the barriers to adoption and get you up and running quickly. Its object-oriented approach to data-parallel programming leverages everything you already know about Java, C#, or C++ and seamlessly adds the simplest possible model for data-parallel execution. The net result is that applications for operational intelligence are straightforward to write and run without the need for specialized knowledge of complex semantics and tuning parameters.

Let’s take a closer look. ScaleOut ComputeServer organizes in-memory object storage as straightforward object collections in which objects are individually accessible using simple create/read/update/delete APIs. This means that applications can track and correlate incoming events from a live system using a straightforward object model. Likewise, object collections can be queried using their class properties and analyzed in parallel just by defining and invoking class methods. All aspects of application design take full advantage of well understood object-oriented concepts while leveraging more than three decades of experience in simplifying parallel supercomputing. That’s the power that in-memory data grids and parallel method invocation (PMI) bring to operational intelligence.

Automatic Deployment

ScaleOut ComputeServer takes ease of use a step further by automating all deployment steps. Using a unique software concept called an “invocation grid,” it lets the application developer pre-stage the execution environment by shipping all required executable programs and libraries to grid servers. This eliminates the need to manually deploy application code and libraries on each server in the cluster, and it ensures that all servers are properly configured. As a further benefit, invocation grids accelerate startup times by avoiding the overhead of shipping code for each parallel method invocation.

ScaleOut Management Pack™ Included!

To further simplify application development, ScaleOut ComputeServer comes bundled with ScaleOut Management Pack which includes comprehensive tools for observing, managing, and archiving grid-based data. This means that developers can directly track the state of the grid and easily verify that their applications are running as intended. From development to deployment to testing, ScaleOut ComputeServer makes it easy.

Try ScaleOut for free

Use the power of in-memory computing in minutes on Windows or Linux.

Try for Free
