StorReduce is the only deduplication platform built from the ground up for cloud object storage. It is designed to meet the unique requirements of companies using public, private or hybrid cloud storage for large volumes of data. StorReduce sits between client applications and cloud storage, transparently deduplicating data inline at speeds of up to 1400 MB/s per server. This reduces storage costs, speeds up transfers between clouds and frees your data for use in the cloud via standard cloud APIs.
StorReduce sits between client programs that store and retrieve data and the object store, such as Amazon S3 or HGST ActiveScale, transparently deduplicating data inline on write and rehydrating it on read as needed.
Unlike legacy solutions, StorReduce does not buffer or stage data to disk, making it significantly more cost-effective to deploy and ensuring your data is more secure - StorReduce only confirms data as stored once it has been written to the object store.
StorReduce uses a variable-length block splitting algorithm that enables it to deduplicate complex data types such as virtual machines, databases and other unstructured data.
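StorReduce's exact splitting algorithm is proprietary, but the general technique - content-defined, variable-length chunking - can be sketched in a few lines. The hash, mask and chunk-size bounds below are illustrative assumptions, not StorReduce's actual parameters:

```python
# Illustrative variable-length (content-defined) chunking - NOT StorReduce's
# proprietary algorithm. A cheap rolling-style hash declares a chunk boundary
# whenever its low bits are zero, so boundaries follow the content itself and
# survive insertions and deletions, unlike fixed-size blocks.
import hashlib

MASK = 0x1FFF        # boundary when low 13 bits are zero (~8 KiB average)
MIN_CHUNK = 2048     # suppress pathologically small chunks
MAX_CHUNK = 65536    # force a boundary eventually

def chunks(data: bytes):
    """Yield variable-length chunks whose boundaries are content-defined."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF   # old bytes age out of the hash
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

# A dedup store keeps only one copy per distinct chunk digest:
blob = bytes(range(256)) * 1024                       # 256 KiB sample
store = {hashlib.sha256(c).hexdigest(): c for c in chunks(blob)}
```

Because an edit only shifts boundaries near the change, most chunks of a modified file hash identically to chunks already in the store - this is why variable-length splitting deduplicates virtual machines and databases far better than fixed-size blocking would.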
StorReduce can be deployed in a scale-out cluster utilizing up to 31 servers. Scale-out clusters enable a single global deduplication namespace to manage hundreds of petabytes of data at tens of gigabytes per second of throughput.
StorReduce clusters support high availability via an active-active architecture. Clusters can be designed to span cloud availability zones and regions to ensure data access is maintained even in the unlikely event of a complete failure of a cloud data center.
StorReduce maintains a small index of metadata on fast local storage. Each StorReduce server keeps its own independent index. All index data can be rebuilt from the log of transactions stored in the object store - this enables new StorReduce servers to be started on-premises or on-cloud to quickly access stored data for disaster recovery.
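The rebuild step can be pictured as replaying a transaction log into a fresh key-value index. The record format below is invented for illustration; StorReduce's actual log and index formats are internal to the product:

```python
# Sketch of rebuilding a dedup index by replaying a transaction log kept in
# the object store. The JSON record format here is a hypothetical example.
import json

def rebuild_index(log_lines):
    """Replay PUT/DELETE records to reconstruct the hash -> location map."""
    index = {}
    for line in log_lines:
        rec = json.loads(line)
        if rec["op"] == "PUT":
            index[rec["hash"]] = rec["location"]
        elif rec["op"] == "DELETE":
            index.pop(rec["hash"], None)
    return index

log = [
    '{"op": "PUT", "hash": "a1", "location": "bucket/blob-0001"}',
    '{"op": "PUT", "hash": "b2", "location": "bucket/blob-0002"}',
    '{"op": "DELETE", "hash": "a1"}',
]
index = rebuild_index(log)   # only "b2" survives the replay
```

Because the log lives in the object store rather than on any one server, a brand-new server in another datacenter can replay it and begin serving reads, which is what makes the disaster-recovery pattern above possible.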
The StorReduce server is optimized for scalability, high throughput and low latency. The internal architecture is highly optimized for data deduplication, and to ensure that performance is maintained even when running in a public cloud environment.
A single StorReduce server is capable of sustained speeds of up to 1400 MB/s for both reads and writes, and up to 31 servers can be combined into a StorReduce scale-out cluster to achieve tens of gigabytes per second of throughput.
StorReduce supports up to 80 petabytes of data per server, and hundreds of petabytes per cluster.
StorReduce supports replication of deduplicated data between cloud regions, between public clouds or between public and private cloud.
By only replicating the unique data StorReduce can save significant costs on outgoing bandwidth and replica storage, and can speed up transfer times by up to 97%.
A second StorReduce server or cluster can be run in the replica region or cloud to enable immediate read-only access to the replicated data, making it ideal as an additional layer of redundancy and for disaster recovery in the cloud.
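The bandwidth and time savings follow directly from the deduplication ratio. A back-of-envelope calculation, assuming the 97% upper bound quoted above, an example 100 TB data set and an illustrative 1 Gb/s link:

```python
# Back-of-envelope replication savings. The 100 TB data set and 1 Gb/s link
# are assumed example figures; 97% is the upper-bound dedup ratio StorReduce
# quotes for backup data. Only the unique 3% must cross the wire.
logical_tb = 100
dedup_ratio = 0.97
unique_tb = logical_tb * (1 - dedup_ratio)   # data actually replicated

link_mb_s = 125                              # ~1 Gb/s expressed in MB/s
hours = lambda tb: tb * 1_000_000 / link_mb_s / 3600

print(f"replicate {unique_tb:.0f} TB instead of {logical_tb} TB")
print(f"transfer time: {hours(logical_tb):.0f} h -> {hours(unique_tb):.0f} h")
```

At a fixed link speed the transfer time shrinks by the same factor as the data volume, which is where the "up to 97%" speed-up comes from.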
Whether you wish to replace legacy on-premises storage appliances such as Data Domain or finally move off tape, StorReduce makes the cloud more affordable than ever before. This holds even after factoring in migration costs, as StorReduce greatly reduces the cost of transmission bandwidth and on-cloud storage - it typically achieves 90% - 97% deduplication on backup data.
StorReduce can be installed on-premises to deduplicate data before it is sent to the cloud, often speeding up transfer times 20 - 30x.
StorReduce works with Veritas NetBackup, CommVault, Veeam, Oracle RMAN and other leading enterprise backup solutions.
StorReduce's scalability to large data sets and its very fast throughput make it an ideal solution for on-cloud big data companies wanting to decrease their cloud storage costs while also using analytics services such as Elastic, Amazon EMR or Hadoop in real time.
Data in StorReduce is accessible via standard cloud object storage interfaces and can be used by any big data or analytics service.
The StorReduce server uses object storage for all persistent data. Acting as a client, it uses the Amazon S3 REST API or the Azure Blob Storage API to store all of its data in a single bucket. This means StorReduce can be used with any Amazon S3-compatible or Azure-compatible object storage solution.
In addition to public cloud-based object storage solutions, StorReduce works with all major private object stores including HGST ActiveScale, IBM Cloud Object Storage (formerly Cleversafe), Hitachi Content Platform and SwiftStack.
Officially supported object storage services include Amazon S3 (including Infrequent Access), Google Object Storage (including Nearline), Azure Blob Storage, IBM Cloud Object Storage, Hitachi Content Platform, HGST ActiveScale, Cloudian, Riak CS and SwiftStack.
© Copyright 2015-2018 StorReduce, Inc. All rights reserved.