Everything around us is part of a world of innovation. Over the past few years, artificial intelligence, the Internet of Things, big data, and high-speed wireless networking have achieved things we could barely have imagined. As a result, we now enjoy improved living standards, use super-fast gadgets, and make better business decisions with data analytics. However, with the increasing demand and soaring usage of cloud storage, its capacity now requires an improved approach.
For that reason, in 2012, Cisco introduced a new approach and termed it “fog computing”.
Fog computing is a decentralized computing infrastructure in which data, compute, storage and applications are located somewhere between the data source and the cloud. It’s a new distributed architecture that allows further distribution of core functions such as computing, communication, control, and decision making while keeping things closer to the data origin.
Fog computing is now positioned as a layer that reduces latency in hybrid cloud scenarios. The term refers to a new breed of applications and services, particularly when it comes to data management and analytics. It is a layer that sits between the cloud and the hardware. In cloud computing, data must travel to a central data center to be processed; fog computing offers edge devices faster, local access.
Fog computing is the concept of a network fabric that stretches from the outer edges of where data is created to where it will eventually be stored, whether that's in the cloud or in a customer’s data center.
Fog computing is increasingly expected to handle latency-critical work. The way the IoT is growing, it needs an infrastructure that can keep up with its requirements, and at present fog computing appears to be the most feasible option available. Fog computing can create low-latency network connections between devices and analytics endpoints. This architecture, in turn, reduces the amount of bandwidth needed compared with sending that data all the way back to a data center or cloud for processing. It can also be used in scenarios where there is little or no bandwidth available to send data, so the data must be processed close to where it is created.
Benefits of Fog computing
The development of fog computing frameworks gives organizations more choices for processing data wherever it is most appropriate to do so. For some applications, data may need to be processed as quickly as possible. Users also can place security features in a fog network, from segmented network traffic to virtual firewalls to protect it.
Better for security
Fog nodes can be protected using the same controls, procedures, and policies you use in other areas of the IT environment.
One of the principal reasons why fog computing is beneficial is the amount of flexibility it affords organizations. An enterprise can have a developer create a fog application before deploying it.
Sensitive data can be analyzed locally instead of being sent to the cloud for analysis. The IT team can track and control the devices that collect, analyze, and store that data.
Fog computing can save network bandwidth by processing selected data locally instead of sending it to the cloud for analysis.
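As a minimal sketch of how local processing saves bandwidth, the snippet below simulates a fog node that filters out in-range sensor readings and forwards only the anomalies. The sensor names and the "normal" temperature range are illustrative assumptions, not part of any real deployment.

```python
import json
import random

# Hypothetical "normal" range -- a real deployment would tune this per sensor.
TEMP_NORMAL_RANGE = (15.0, 30.0)

def fog_filter(readings):
    """Keep only readings outside the normal range; drop the rest locally."""
    lo, hi = TEMP_NORMAL_RANGE
    return [r for r in readings if not (lo <= r["temp_c"] <= hi)]

# Simulate a batch of sensor readings arriving at a fog node.
readings = [{"sensor": "t1", "temp_c": 15 + 20 * random.random()}
            for _ in range(1000)]

to_cloud = fog_filter(readings)

raw_bytes = len(json.dumps(readings).encode())
filtered_bytes = len(json.dumps(to_cloud).encode())
print(f"raw: {raw_bytes} B, after fog filtering: {filtered_bytes} B")
```

Only the filtered batch crosses the uplink; the routine measurements never leave the site.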
One of the biggest advantages of fog computing is that it reduces latency. Data doesn’t need to travel to the cloud in order to be processed, and sidestepping that round trip makes analyzing and processing data much more efficient.
Because fog computing is a decentralized form of networking, it offers a wider geographical distribution than traditional networking or cloud computing. This results in a better quality of service for the end user.
In many environments, the ability to analyze data in real time matters most. Eliminating the inefficiency and latency that come with cloud round trips means the user can receive genuine real-time analytics.
How does Fog computing work?
Fog computing works by deploying fog nodes throughout your network. Devices such as controllers, switches, routers, and video cameras can act as fog nodes. These nodes can then be deployed in target areas such as an office floor or inside a vehicle. When an IoT device generates data, it can be analyzed at one of these nodes without having to be sent all the way back to the cloud.
The processes are:
Firstly, signals from IoT devices are wired to an automation controller, which executes a control system program to automate the devices.
The control system program then sends the data to an OPC server or protocol gateway.
There, the data is converted into a protocol, such as HTTP or MQTT, that internet-based services can more easily understand.
After that, the data is sent to a fog node or IoT gateway, which collects it for further analysis.
The fog node filters the data and, in some cases, saves it to hand over to the cloud later.
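The steps above can be sketched in a few lines of Python. Everything here is a hedged illustration: the device names, the alarm threshold, and the MQTT-style topic layout are assumptions, and a real gateway would publish through an actual MQTT client rather than returning a topic/payload pair.

```python
import json

def controller_read(device_id):
    """Steps 1-2: the automation controller samples a device signal (simulated)."""
    return {"device": device_id, "vibration_mm_s": 4.2}

def to_mqtt_message(reading, topic_prefix="plant/sensors"):
    """Step 3: convert a raw reading into an MQTT-style topic and JSON payload."""
    topic = f"{topic_prefix}/{reading['device']}"
    return topic, json.dumps(reading)

class FogNode:
    """Steps 4-5: collect messages, analyze locally, batch the rest for the cloud."""
    def __init__(self, alarm_threshold=5.0):
        self.alarm_threshold = alarm_threshold
        self.cloud_batch = []      # deferred hand-over to the cloud

    def ingest(self, topic, payload):
        reading = json.loads(payload)
        if reading["vibration_mm_s"] > self.alarm_threshold:
            return f"ALERT {topic}"          # act locally, low latency
        self.cloud_batch.append(reading)     # save for later upload
        return "stored"

node = FogNode()
topic, payload = to_mqtt_message(controller_read("pump-7"))
print(node.ingest(topic, payload))  # → stored
```

The key design point is the branch inside `ingest`: latency-sensitive decisions happen at the node, while routine data is merely batched for the cloud.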
As per the theoretical model of fog computing, fog nodes sit physically and functionally between the edge nodes and the centralized cloud. Fog computing is also considered more energy efficient than cloud computing.
Real life applications of Fog computing
There are a variety of use cases that have been identified as potential ideal scenarios for fog computing.
A host of use cases call for real-time analytics, from manufacturing systems that need to react to events as they happen to financial institutions that use real-time data to inform trading decisions or monitor for fraud. Fog computing deployments can help facilitate the transfer of data between where it is created and the variety of places it needs to go.
Video cameras are used in public places, parking lots, and residential areas to enhance safety and security. The volume of visual data collected across a large-scale network makes it impractical to carry all of it to the cloud and still derive real-time insights. Real-time monitoring and discovery of irregularities place strict low-latency requirements on surveillance systems. Fog computing enables real-time, latency-sensitive distributed surveillance systems that uphold privacy. With a fog architecture, video processing is logically divided between fog nodes located near the cameras and the cloud. This enables real-time tracking, anomaly detection, and collection of insights from data captured over time.
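A toy sketch of that division of labor: the camera-side fog node runs cheap frame differencing for anomaly detection, and only compact event summaries, not raw frames, leave the site. The 8-pixel "frames" and the threshold are deliberately simplified assumptions; real systems use proper computer-vision pipelines.

```python
def frame_delta(prev, curr):
    """Mean absolute pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def fog_detect(frames, threshold=10.0):
    """Run detection at the fog node; emit event records, not raw video."""
    events = []
    for i in range(1, len(frames)):
        delta = frame_delta(frames[i - 1], frames[i])
        if delta > threshold:
            events.append({"frame": i, "delta": round(delta, 1)})
    return events

# Tiny 8-pixel "frames": a static scene, then sudden motion at frame 2.
frames = [[10] * 8, [10] * 8, [80] * 8]
print(fog_detect(frames))  # → [{'frame': 2, 'delta': 70.0}]
```

Shipping a handful of event dictionaries upstream instead of full video is what keeps both bandwidth use and latency low.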
Semi-autonomous and self-driving cars
Having cars operate independently requires the capability to analyze certain data locally and in real time, such as surroundings, driving conditions, and directions. Other data may need to be sent back to a manufacturer to help improve vehicle maintenance or track vehicle usage. A fog computing environment would enable communications for all of these data sources, both at the edge in the car and at the endpoint at the manufacturer.
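One way to picture this split is as a simple routing rule over telemetry fields. The field names and their local-versus-cloud categories below are illustrative assumptions, not a real vehicle schema.

```python
# Latency-critical fields are decided in-car; maintenance/usage fields
# are forwarded to the manufacturer's cloud.
LOCAL_FIELDS = {"obstacle_distance_m", "lane_offset_m", "speed_kph"}
CLOUD_FIELDS = {"engine_hours", "tire_wear_pct", "odometer_km"}

def route_telemetry(sample):
    """Split one telemetry sample into edge-processed vs manufacturer-bound data."""
    edge = {k: v for k, v in sample.items() if k in LOCAL_FIELDS}
    cloud = {k: v for k, v in sample.items() if k in CLOUD_FIELDS}
    return edge, cloud

sample = {"obstacle_distance_m": 12.4, "speed_kph": 88, "engine_hours": 1450}
edge, cloud = route_telemetry(sample)
print(edge)   # → {'obstacle_distance_m': 12.4, 'speed_kph': 88}
print(cloud)  # → {'engine_hours': 1450}
```

The in-car fog node acts on `edge` immediately, while `cloud` can tolerate batching and delay.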
Some experts believe the expected rollout of 5G mobile connections in 2018 and beyond could create more opportunity for fog computing. This next-generation technology, in some cases, requires very dense antenna deployments; in some circumstances, antennas need to be less than 20 kilometers from one another. In a use case like this, a fog computing architecture could be created among these stations, including a centralized controller that manages applications running on the 5G network and handles connections to back-end data centers or clouds.
In the near future, fog computing may well take over much of the critical work and push the cloud to the sidelines. If the IoT keeps growing at its current pace, it will need an infrastructure base that can handle all of its requirements, and fog computing currently looks like the most feasible option available.