Technology Partner


We are in the midst of a data storm, faced with skyrocketing volumes of unstructured data.

Traditional data analysis techniques are no longer adequate, yet businesses increasingly rely on insightful analysis of huge data volumes to stay competitive. IT departments are challenged to expand data centers with budgets that are not keeping pace with data growth.

To address these challenges, many IT departments are turning to Apache Hadoop, an open source software library that provides a framework for distributed processing of large data sets across clusters of servers. However, the basic Hadoop architecture has two significant limitations:

  • Data protection is handled in software. Each time a data set is written, the Hadoop Distributed File System (HDFS) creates two additional copies in case a disk drive or DataNode fails. This default three-way replication cuts usable disk capacity to one third of raw capacity (300 TB of raw disk yields only 100 TB of usable storage, for example) and consumes network throughput on every write, as the sketch after this list illustrates.
  • The Hadoop metadata repository (the NameNode) is a single point of failure; if it goes down, the entire cluster becomes unavailable.
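
To make the replication cost concrete, here is a minimal sketch using the standard Hadoop FileSystem API. The cluster URI and file path are hypothetical placeholders; the three-way replication factor shown is the documented HDFS default, not anything specific to the partner platform.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical cluster address; HDFS defaults to 3 replicas
            // (the original block plus two software-managed copies).
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/data/example.txt");

            // Each block of this file is written three times: once on a
            // primary DataNode and twice more along the replication
            // pipeline, tripling disk usage and network traffic per write.
            short replication = 3;
            try (FSDataOutputStream out = fs.create(
                    file,
                    true,                                        // overwrite
                    conf.getInt("io.file.buffer.size", 4096),    // buffer size
                    replication,
                    fs.getDefaultBlockSize(file))) {
                out.writeUTF("sample record");
            }

            // Confirm the effective replication factor for the file.
            System.out.println("Replication: "
                    + fs.getFileStatus(file).getReplication());
        }
    }

Because replication is set per file at create time, an administrator can only trade capacity against fault tolerance in software; the disk and network overhead itself cannot be eliminated at this layer.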

On Access and its technology partner have developed a platform architecture that addresses these two major limitations of the basic Hadoop architecture while improving network speed and efficiency:

Improves on the basic Apache Hadoop architecture

  • Improves storage availability, scalability, performance and manageability
  • Improves fault tolerance

Increases network speed and efficiency

  • Supports 10 Gb/s converged Ethernet and beyond
  • Supports provisioning the network to carry flows, such as storage, data, or voice, each with its own prioritization and quality-of-service (QoS) profile (see the sketch after this list)
  • Improves application performance across the network
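
Flow prioritization of this kind ultimately rests on per-packet QoS markings that switches use to classify traffic. The sketch below is a generic Java example, not the partner platform's implementation: it marks a socket's packets with the DSCP "Expedited Forwarding" code point so that QoS-enabled network gear can place the flow in a low-latency queue. The host and port are hypothetical.

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class QosMarkingDemo {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket()) {
                // DSCP Expedited Forwarding (EF, value 46) occupies the
                // upper six bits of the IP traffic-class byte: 46 << 2 = 0xB8.
                // This is a best-effort hint; some platforms ignore it for TCP.
                socket.setTrafficClass(0xB8);

                // Hypothetical endpoint; in practice this might be a
                // latency-sensitive storage or voice service.
                socket.connect(new InetSocketAddress("service.example.com", 9000));

                OutputStream out = socket.getOutputStream();
                out.write("priority payload".getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }

Setting the traffic class before connecting ensures the marking applies to the entire flow, including connection setup, so the network can prioritize it end to end.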



[Diagram: partner ecosystem]

