
Big Data Challenges in the Era of Data Deluge (2/2)


Bibliographic Details
Main author: Volvovski, Ilya
Language: eng
Published: 2015
Subjects:
Online access: http://cds.cern.ch/record/2000306
Description
Summary:<p>For better or for worse, the amount of data generated in the world grows exponentially. 2012 was dubbed the year of Big Data and the Data Deluge; by 2013, petabyte scale was referenced matter-of-factly, and exabyte sizes are now in the vocabulary of storage providers and large organizations. Traditional copy-based technology doesn't scale into this territory: relational databases give up at many billions of rows in a table, and typical file systems are not designed to store trillions of objects. Disks fail; networks are not always available. Yet individuals, businesses, and academic institutions demand 100% availability with no data loss. Is this a dead end? These lectures describe a storage system based on the IDA (Information Dispersal Algorithm) that is unlimited in scale, with a very high level of reliability and availability, and unbounded scalable indexing, all without any central facility anywhere in the system and thus no single point of failure or scalability barrier.</p> <p>Discussed in this lecture:</p> <ul> <li>What it takes to build a practical modern storage system</li> <li>Major practical system characteristics</li> <li>Examples of how these principles can be applied</li> </ul>
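The core idea behind an IDA is to transform data into n slices such that any k of them suffice to reconstruct the original, so the loss of up to n - k slices (failed disks, unreachable nodes) costs nothing. A minimal sketch of the 2-of-3 special case, using a single XOR parity slice; this is only an illustration, since a full IDA uses erasure coding over a finite field (e.g. Reed-Solomon) to support arbitrary (k, n):

```python
# Simplified 2-of-3 dispersal: split data into two halves plus an XOR
# parity slice; any two of the three slices recover the original.

def disperse(data: bytes) -> list:
    """Split `data` into two half-slices and one XOR parity slice."""
    if len(data) % 2:
        data += b"\x00"  # pad to even length (a real system records the pad)
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def reconstruct(slices: list) -> bytes:
    """Rebuild the data from any 2 of the 3 slices (missing one is None)."""
    a, b, p = slices
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, p))  # a = b XOR (a XOR b)
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, p))  # b = a XOR (a XOR b)
    return a + b

payload = b"exabyte!"              # 8 bytes -> two 4-byte slices + parity
slices = disperse(payload)
assert reconstruct([None, slices[1], slices[2]]) == payload
assert reconstruct([slices[0], None, slices[2]]) == payload
assert reconstruct([slices[0], slices[1], None]) == payload
```

Because no single slice is ever required, slices can be spread across independent disks, nodes, or sites with no central facility, which is exactly the property the abstract claims removes the single point of failure.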