Opinion

Why fog computing will power IoT in 2017

The following is a guest article from Dr. Rob MacInnis, CEO and founder of AetherWorks.

When it comes to processing and storing data, should we expect cloud to continue to reign as the go-to option in 2017? The data suggest that this might be the case.

For example, spending on public cloud storage is predicted to reach 17% of total enterprise storage by 2017, up from 8% today. IT spending on cloud infrastructure, according to IDC, will exceed $37 billion in 2016, an increase of more than 15% over the previous year, and by 2020 will nearly equal the amount spent on traditional, on-premises IT.


So yes, cloud computing will likely continue as a go-to alternative to on-prem in 2017. But that doesn’t mean it's always the best option. What if there was a way to leverage the best of cloud and on-prem, with the speed, resiliency, bandwidth and scalability to power new technologies such as the Internet of Things (IoT) and machine learning?

There already is — and it’s closer than you might think. From where I sit, the future is clear: in 2017, the cloud will clear — to make way for fog.

Between two poles

It’s easy to understand why cloud computing has grown so tremendously in recent years. With cloud, organizations can now access more storage and processing power than all but the mightiest of organizations could hope to assemble on-site.

Cloud resources are accessible from around the globe, are easy to scale up and down, and you only pay for what you use. Conversely, on-site resources require significant capital expenditure, don’t scale well, and require IT staff to monitor and maintain them.

With those downsides you’d think the choice would be obvious, but on-site has two big advantages over cloud: higher bandwidth (network capacity) and lower latency (data travel time). In a world where it’s still faster to mail a 2TB hard disk to Amazon than to upload its contents via the internet, there has to be a better way to store and process large quantities of data from the edge of a network.

Imagine a world where you can have everything you want: cloud-like scalability and metered billing, together with the speed and bandwidth of on-site resources. As shown in the following chart, there’s a great gap of opportunity between on-prem and the cloud:

The big four for cloud vs. on-site computing; fog lives between the poles. (Image: AetherWorks)

Fog: The space between

This brings us to fog computing. Fog looks like a cloud, but it’s closer to you: just like in the real world, fog sits between you and the cloud.

Fog enables us to pursue the goal of optimizing the geographical proximity of data and computation: if we can process data closer to where it is produced, then we save bandwidth and time, allowing for a faster response to ever-increasing volumes of data.
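The bandwidth-and-time saving can be made concrete with back-of-the-envelope arithmetic. The link speeds and latencies below are illustrative assumptions, not figures from this article:

```python
def transfer_seconds(data_bits, bandwidth_bps, latency_s):
    """Rough one-way transfer time: serialization delay plus propagation latency."""
    return data_bits / bandwidth_bps + latency_s

ONE_GB_BITS = 8 * 10**9  # 1 GB of sensor data, in bits

# Hypothetical links: a distant cloud region vs. a fog node on the local network
cloud_s = transfer_seconds(ONE_GB_BITS, 100e6, 0.050)  # 100 Mbps uplink, 50 ms away
fog_s   = transfer_seconds(ONE_GB_BITS, 1e9,   0.002)  # 1 Gbps LAN, 2 ms away

print(f"to cloud: {cloud_s:.2f} s, to fog node: {fog_s:.2f} s")
# → to cloud: 80.05 s, to fog node: 8.00 s
```

Under these assumed numbers, handing the data to a nearby node is an order of magnitude faster — and that, in a nutshell, is the case for processing close to the source.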

Devices called fog nodes can be deployed at any network connection, regardless of location, and can be communicated with just like the cloud. As long as a computer has a processor, storage, and network connectivity, it can be a fog node.

The nodes extend the cloud, bringing it closer to the connected devices sending or receiving data, thus reducing latency and increasing the speed with which we can process and react to incoming data. Healthcare is an obvious area where low latency is critical for awareness and response, but shorter transmission times would benefit every industry, from transportation and manufacturing to oil and gas, banking — even mining.

Further, fog enables a number of exciting opportunities, such as harnessing the latent computational resources — the unused storage space and processor time — on the computers that surround us in our daily lives (think office workstations).

By abstracting over ‘where’ computation takes place, fog computing promises to enable more efficient resource consumption by distributing application services to the most logical, efficient location anywhere between a data source and the cloud, including in your own office.
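What "the most logical, efficient location" means in practice is a scheduling decision. As a toy sketch — the node list, capacity fields, and selection rule are all invented for illustration, not part of any real fog platform — a placement function might pick the lowest-latency node that can actually run the work:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float      # round-trip time from the data source
    free_cpu: float        # fraction of idle processor time
    free_storage_gb: float

def pick_node(nodes, cpu_needed, storage_needed_gb):
    """Place work on the lowest-latency node with enough spare capacity."""
    eligible = [n for n in nodes
                if n.free_cpu >= cpu_needed and n.free_storage_gb >= storage_needed_gb]
    return min(eligible, key=lambda n: n.latency_ms) if eligible else None

candidates = [
    Node("office-workstation", latency_ms=1,  free_cpu=0.2, free_storage_gb=50),
    Node("campus-fog-node",    latency_ms=5,  free_cpu=0.8, free_storage_gb=500),
    Node("cloud-region",       latency_ms=60, free_cpu=1.0, free_storage_gb=10_000),
]

print(pick_node(candidates, cpu_needed=0.5, storage_needed_gb=100).name)
# → campus-fog-node (the workstation is closer but lacks capacity)
```

The same work falls back to the office workstation when it is small enough, and to the cloud region when nothing nearer can hold it — the spectrum between the poles, in a dozen lines.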

Why does this matter for IoT and machine learning?

Take, for example, IoT. According to IDC, 13 billion devices were connected to the internet in 2015; by 2020, that number will swell to over 30 billion. Traditional computing devices like smartphones, watches, and tablets will make up less than a third of the total, while spending on IoT solutions will grow from almost $700 billion in 2016 to nearly $1.9 trillion in 2021.

Further, according to Cisco, IoT devices will generate 600 zettabytes of data by 2020, which is 275 times the projected traffic traveling from data centers to end users. This represents a monumental shift in the quantity and direction of data flow on the internet – one that neither cloud computing nor traditional on-prem IT was designed to handle.

Similarly, machine learning requires immense quantities of data to train systems properly – data that, in a cloud model, must be sent halfway across the world to be processed. Imagine if, instead, you could distribute the computational load of training a neural network among the computers closest to where the data is produced.
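One standard pattern for doing this (not described in the article, but akin to federated averaging) is for each nearby node to compute a gradient on its local data shard and send only that small update to the cloud, which averages the updates and takes a step. A toy sketch, fitting a single-parameter mean estimator across two hypothetical fog nodes:

```python
def local_grad(w, data):
    """Gradient of mean squared error for a one-parameter mean estimator."""
    return sum(2 * (w - x) for x in data) / len(data)

def train_step(w, shards, lr=0.1):
    """Each node computes a gradient on its own shard; only the small
    gradient values travel to the cloud, which averages them and steps."""
    grads = [local_grad(w, shard) for shard in shards]
    return w - lr * sum(grads) / len(grads)

shards = [[1.0, 2.0], [3.0, 4.0]]  # raw data stays at two fog nodes
w = 0.0
for _ in range(100):
    w = train_step(w, shards)
print(round(w, 3))
# → 2.5 (the global mean of all the data, learned without moving it)
```

The point of the sketch is the traffic pattern: the bulky sensor data never crosses the wide-area link; only a handful of gradient numbers do.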

Using the most geographically appropriate computational resources makes processing huge quantities of sensor data possible. The cloud will still be useful for centralized coordination and for combining and distributing the collective learnings back out towards the edge — but the fog is what will enable us to not simply handle, but embrace, IoT, machine learning, and all of the real-world, data-intensive applications of the future.

It is inevitable: you can’t get farther away than the cloud, and you can’t get closer than on-site. The middle — the fog — is where the action is going to happen.

Filed Under: Cloud Computing Infrastructure
Top image credit: Pixabay / Skitterphoto