High Performance Computing

NAVIGATING THE DATA DELUGE WITH HIGH PERFORMANCE ADVANCED CLUSTER ARCHITECTURE

MACHINE LEARNING DILEMMA

A ‘Data Deluge’ results when the amount of new data generated surpasses an organization’s power to manage it, its analysts’ capacity to analyze it, and its researchers’ ability to draw useful conclusions from it.

Over two exabytes of data are generated every day in nearly every imaginable way. This torrent is fueled by factors that compound one another. The number of data sensors is exploding: cameras, phones, digital assistants (Alexa, Siri, etc.), cars, and more. At the same time, massive growth in image resolution means a 3D image can be 20 times the size of its 2D counterpart. Together, the increasing resolution of the data collected and the frequency of its collection have produced this data explosion.

TAKING ON THE CHALLENGES OF DATA DELUGE

How is an organization supposed to survive its own Data Deluge? Every institution in every industry will face this challenge. Those that are best equipped and prepared to handle it will have the best chance to survive, and those that embrace it and take advantage of it will have the best chance to thrive.

Whether generated by the healthcare industry, the government, or even professional sports leagues, these huge collections of data are not created simply because advances in technology make it possible, but in an effort to improve the health, welfare, and safety of the individuals served. How organizations will solve these data complexities is still evolving, but any solution will necessitate leveraging artificial intelligence (AI) and machine learning (ML). To keep pace with accelerating advancements in AI technologies, hardware configurations must also evolve rapidly to collect, compute, and process all this data.

As traditional data analytics grow more complex, they ultimately give way to more advanced AI systems. And when the data becomes so large and so complex that humans can no longer keep up with the AI systems, machine learning is needed so that systems can continue to evolve with growing data needs. As the data tsunami swells, a powerful solution that uses intelligent machines for data access, processing, storage, and analysis is required to keep pace with evolving capabilities and requirements.

A POWERFUL SOLUTION

Artificial intelligence and machine learning can be a powerful solution to the challenges facing government today. A study at the Harvard Kennedy School identified several areas that can benefit from AI/ML:

Resource Allocation

  • Administrative support is needed to speed up task completion
  • Inquiry response times are long due to insufficient support

Large Datasets

  • Dataset is too large for employees to work with efficiently
  • Internal and external datasets can be combined to enhance outputs and insights
  • Data is highly structured with years of history

Expert Shortage

  • Basic questions can be answered, freeing up time for experts
  • Niche issues can be learned to support experts in research

Predictable Scenario

  • Situation is predictable based on historical data
  • Prediction will help with time-sensitive responses (see the sketch after this list)

Procedural

  • Task is repetitive in nature
  • Inputs/outputs have a binary answer

Diverse Data

  • Data includes visual/spatial and auditory/linguistic information
  • Qualitative and quantitative data needs to be summarized regularly
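
As a toy illustration of the “Predictable Scenario” pattern above, a few lines of Python can fit a simple model to historical records so that time-sensitive responses are triaged automatically. This is a hedged sketch, not something drawn from the Harvard study: the use of scikit-learn, the features (hour of day, queue length), and the data are all invented for the example.

    # Toy "Predictable Scenario" sketch: learn from historical data to
    # support a time-sensitive response. All data here is invented.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical history: [hour_of_day, queue_length] -> needed escalation?
    X = [[9, 3], [10, 5], [14, 20], [15, 25], [16, 22], [11, 4]]
    y = [0, 0, 1, 1, 1, 0]

    model = LogisticRegression().fit(X, y)
    print(model.predict([[14, 18]]))  # predict for a new, similar situation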

THE KEEPER SOLUTION

The engineers at Keeper Technology have decades of experience helping government and commercial customers manage some of the largest data repositories. Most of these are part of high-velocity data environments where data rarely stands still. High-speed ingest, time-critical processing, complex data analytics, and digital dissemination typically characterize such environments. Keeper Technology also has years of experience working with and protecting data in some of the most secure environments of the Intelligence Community today.

Combining this experience with years of developing turn-key and cost-effective solutions, Keeper Technology has developed a computing cluster that is ideal for AI/ML and data analytics. The converged infrastructure design combines processing, storage, and data protection in minimal rack space.

The latest Intel processors and high-speed memory provide the optimum platform for the analytic engine, whether it's Hadoop, HPCC Systems, Spark, Splunk, or another data analytics package. The only way to fully leverage the incredibly high performance of the processors is to ensure they're not starved for data. The Keeper Cluster can feed massive amounts of data to the processors from the tightly coupled NVMe storage layer. Tying the cluster together is a very low-latency InfiniBand fabric. The salient features are:

Processing
The cluster is made up of 32 redundant processing nodes with no single point of failure. Up to 1,792 processing cores (over 3,500 threads) running at speeds as high as 3.3 GHz (base frequency) provide the most processing power for a system in its class.
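
A quick back-of-the-envelope check shows how those headline numbers break down per node (assuming cores are spread evenly across the 32 nodes and two hardware threads per core; both are assumptions for illustration, not published specifications):

    # Per-node breakdown of the quoted processing figures.
    nodes = 32
    total_cores = 1792
    cores_per_node = total_cores // nodes   # 56 cores per node
    total_threads = total_cores * 2         # 3,584 threads, i.e. "over 3,500"
    print(cores_per_node, total_threads)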

Memory
With a maximum of 64 TB of aggregate memory, even the largest data sets can remain resident in memory. Larger searches, faster training, more detailed models: Ultimately, quicker and better results.
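
A rough sizing check along the following lines shows whether a working set will stay resident in memory; the dataset size and the 20% headroom reserved for the OS and the analytics engine are assumptions for illustration:

    # Will a hypothetical working set fit in aggregate memory?
    aggregate_memory_tb = 64
    usable_fraction = 0.8        # assumed headroom for OS and engine
    working_set_tb = 40          # hypothetical dataset size
    print(working_set_tb <= aggregate_memory_tb * usable_fraction)  # True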

Storage
Up to 2.2 PB of usable storage is included in the cluster. The NVMe storage is fully protected against drive and node failures with configurable protection schemes (mirroring, single parity, dual parity, erasure coding) that spread data across multiple nodes. Data can be dedicated to a processing element or shared across multiple nodes or the entire cluster. With 500 GB/s access to storage and tens of millions of IOPS, the application can run at maximum performance without having to worry about being starved for data.
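
To see why a single parity block protects against the loss of any one drive or node, consider this toy sketch: the parity is the XOR of the data blocks, so any missing block can be rebuilt from the survivors. (Real erasure coding generalizes this idea; the block contents here are purely illustrative.)

    # Toy single-parity example: parity = XOR of all data blocks.
    from functools import reduce

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    data_blocks = [b"node0:chunk", b"node1:chunk", b"node2:chunk"]
    parity = reduce(xor_blocks, data_blocks)

    # Lose block 1, then rebuild it from the survivors plus parity.
    rebuilt = reduce(xor_blocks, [data_blocks[0], data_blocks[2], parity])
    assert rebuilt == data_blocks[1]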

Networking
To ensure data flows freely between nodes, the cluster is based on a fully redundant InfiniBand backplane with ~150 ns latency between nodes. The network adapters also provide protocol offload to keep the processors focused on the task at hand.

Access
The independent access network is also fully redundant and provides 80 GB/s of connectivity to your network core.

Density
The whole cluster occupies less than half of a standard 19” rack.
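
To make the feature list concrete, here is a minimal sketch of how an analytics engine such as Spark (one of the packages mentioned above) might exploit the cluster: read from the shared NVMe storage layer, pin the table in aggregate memory, and run a simple aggregation. The mount path, column names, and dataset are assumptions for illustration, not details of the Keeper configuration.

    # Minimal PySpark sketch against a hypothetical NVMe-backed mount.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("keeper-cluster-demo").getOrCreate()

    df = spark.read.parquet("/mnt/nvme/sensor_data")  # path is hypothetical
    df.cache()  # with up to 64 TB of memory, large tables can stay resident

    summary = (df.groupBy("sensor_id")
                 .agg(F.count("*").alias("events"),
                      F.avg("reading").alias("avg_reading")))
    summary.show()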

HIGH PERFORMANCE IS A NATURAL RESULT OF OUR TURN-KEY ARCHITECTURE

Architecture Differentiators

  • Scale-out performance across multiple NVMe servers
  • Non-invasive cluster-wide data protection
  • Local latency at data center scale

The Keeper solution is a scalable cluster with flexible configurations. It is designed specifically for running the kinds of AI and machine learning processes required to truly harness the torrent of data that new technologies are creating. The data deluge is here. The only way to thrive is to embrace it and take advantage of it.


Integration Facts

You've invested countless resources into your current data management solutions. There's a way to keep those systems and still make the upgrades necessary for inevitable change.

keeperSAFE® assimilates into your existing environment by directly supporting your existing protocols. Download a use case sheet to learn how.