Zettabytes of data hog space and resources
- The amount of data created by devices is nearly 100 times greater than the amount of data stored, according to the Cisco Global Cloud Index. Devices are expected to produce 847 zettabytes of data annually by 2021 — nearly four times the amount created in 2016.
- Data center storage capacity is expected to grow nearly fourfold in this period to 2.6 zettabytes, as the amount of data stored nearly quintuples to 1.3 zettabytes. Big Data is expected to see an eightfold increase and to account for 30% of data stored in data centers.
- With more data comes data center improvements. In cloud data centers, the number of workloads and compute instances per server is expected to rise from 8.8 in 2016 to 13.2 by 2021. Traditional data centers will see this "workload and compute instance density" grow from 2.4 to 3.8 over the same span.
To put the size of a zettabyte into perspective, if every gigabyte in a zettabyte were a brick, one zettabyte would be the equivalent of 258 Great Walls of China, each wall composed of almost 3.9 billion bricks, according to Cisco.
Feeling nostalgic for the days when petabytes and terabytes felt like monumental units? Better get used to it, because IDC is forecasting that the "global data sphere" will reach 163 zettabytes by 2025, which is more than 42,000 Great Walls of gigabytes.
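The Great Wall arithmetic checks out, and a few lines of Python make it concrete (a rough sketch, assuming the decimal definition of 1 zettabyte = 10^12 gigabytes and Cisco's figure of roughly 3.9 billion bricks per wall):

```python
# Sanity-check Cisco's Great Wall analogy: one brick per gigabyte,
# ~3.9 billion bricks per Great Wall, decimal zettabyte (10**12 GB).
GB_PER_ZB = 10**12
BRICKS_PER_WALL = 3.9e9  # approximate figure cited by Cisco

walls_per_zettabyte = GB_PER_ZB / BRICKS_PER_WALL
print(round(walls_per_zettabyte))  # ~256, in line with Cisco's 258

# IDC's 163 ZB forecast for 2025, expressed in Great Walls of gigabytes
print(163 * 258)  # 42,054 -- "more than 42,000" walls
```

The small gap between 256 and 258 comes from rounding the brick count; the point survives either way.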
That's just a long way of saying that the volume of data stored is mind boggling. The amount being created is even more so — and the Internet of Things hasn't even hit its peak, with millions more devices set to hit networks and create mountains more data every year.
In fact, IDC estimates that individuals will have 4,800 interactions with IoT devices daily by 2025 — or one interaction every 18 seconds.
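The "one interaction every 18 seconds" figure follows directly from IDC's daily estimate, as a quick back-of-the-envelope calculation shows:

```python
# Convert IDC's 4,800 daily IoT interactions per person
# into an average interval between interactions.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400
INTERACTIONS_PER_DAY = 4_800

interval = SECONDS_PER_DAY / INTERACTIONS_PER_DAY
print(interval)  # 18.0 seconds between interactions
```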
The companies collecting all this data have the daunting task of taking unstructured data, finding what is important and identifying patterns within it. Data scientists and engineers can thank artificial intelligence and machine learning for saving hours of their time wading through information troves for valuable nuggets.
But collecting all this data and setting researchers to work is pointless unless a company has a plan to harness it effectively and monetize it. Otherwise, paying for all that space in a data center to hold troves of unstructured, unused data is a drain on company resources.
Follow Alex Hickey on Twitter