Cloud computing is the foundation of the Internet of Things, and the Internet of Things can change cloud architecture

As time goes on, the concept of the Internet of Things encompasses more and more things. Besides having sensors and processors built into them, these things are also connected directly to the network and transmit their data online. Home automation may be the best-known use case for this concept: if the refrigerator runs out of milk, for example, it automatically orders more from the grocery store. But the scope of IoT applications keeps growing. We will have many things that interact with one another yet remain independent of one another; offices will automatically order supplies when needed without our intervention, and even sensors on our clothes and bodies will transmit our health data to our doctors in real time. This kind of M2M (machine-to-machine) communication is the key.

To realize the full potential of the IoT, cloud computing must be its foundation. The idea behind the Internet of Things is that most of the data collected should be transmitted online, so that applications can effectively aggregate, analyze and make use of it. Let's go back to the refrigerator example. Instead of the refrigerator ordering milk from the grocery store itself, it transmits all of its data, including the current food inventory and the user's consumption, to an application, which reads and analyzes that data. The purchasing decision is then made based on factors such as the user's current food budget and how long the milk will take to be delivered, and the cloud is the ideal home for such applications.
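To make this data flow concrete, here is a minimal sketch in Python of what the cloud-side decision logic might look like. The payload fields, the should_order_milk helper and all of the thresholds are hypothetical, chosen only to illustrate the split between an appliance that reports state and a cloud application that makes the purchasing decision.

```python
import json

# Hypothetical telemetry packet a refrigerator might send to the cloud;
# the field names are illustrative, not a real device protocol.
telemetry = json.loads("""
{
    "device_id": "fridge-0042",
    "inventory": {"milk_liters": 0.2, "eggs": 6},
    "daily_consumption": {"milk_liters": 0.5}
}
""")

def should_order_milk(packet, budget_left=25.0, delivery_days=1, price_per_liter=1.2):
    """Cloud-side decision: reorder if the stock runs out before a delivery
    could arrive and the purchase fits the user's remaining food budget
    (all parameter values are assumptions)."""
    stock = packet["inventory"]["milk_liters"]
    daily = packet["daily_consumption"]["milk_liters"]
    runs_out_in_days = stock / daily if daily else float("inf")
    needed_liters = max(0.0, daily * 7 - stock)      # top up to a week's supply
    affordable = needed_liters * price_per_liter <= budget_left
    return runs_out_in_days <= delivery_days and affordable, needed_liters

order, liters = should_order_milk(telemetry)
print(f"place order: {order}, liters to buy: {liters:.1f}")
```

The appliance stays simple; everything that requires context (budget, delivery time, consumption history) lives in the cloud application.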

If all of our everyday products were instrumented this way, the amount of data generated would be enormous. The IoT must therefore consider how the data it generates is stored and analyzed. This is not just a question of the volume of data but also of the speed at which it is generated: sensors are producing data faster than most commercial applications can process it.

Cloud-based solutions are fundamental to dealing with the volume and velocity of this data. The cloud can automatically and dynamically provision storage resources based on our needs, without manual intervention. It also gives us access to virtual storage, through database clusters or virtualized physical storage whose capacity can be adjusted without downtime, and to large pools of storage resources that simply are not available on premises.
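As one illustration of that elasticity, the sketch below writes sensor readings to object storage with boto3. It assumes an AWS account with default credentials and a hypothetical, already-created bucket; the point is only that the store grows with the data and no capacity has to be reserved in advance.

```python
import json
import boto3

BUCKET = "example-iot-telemetry"   # hypothetical bucket name, assumed to exist
s3 = boto3.client("s3")            # assumes default AWS credentials are configured

def store_reading(device_id: str, reading: dict) -> None:
    """Persist one telemetry reading as an object; object storage scales with
    the data volume, so there is no upfront capacity planning."""
    key = f"{device_id}/{reading['timestamp']}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(reading).encode("utf-8"))

store_reading("fridge-0042", {"timestamp": "2024-01-01T08:00:00Z",
                              "door_opened": True, "temp_c": 4.1})
```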

The second question about this data is what to do with it, and it raises two difficulties. The first is how to process, in real time, all of the data points arriving from each different object. The second is how to extract useful information from all of the collected data points and correlate the information obtained from different objects, so that the stored data gains real value.

Although real-time processing may seem simple (take in data, analyze it, then make use of it), that is not the case in practice. Let's go back to the refrigerator example. Imagine that every time someone opens the refrigerator door, the refrigerator sends a data packet describing what was taken out and what was put in. If we estimate that there are about 2 billion refrigerators in the world and that each door is opened and closed 4 times a day, that is 8 billion data packets per day, or on the order of 100,000 data packets per second on average, a staggering figure. Worse, these data points are concentrated at characteristic times of the day (mainly mornings and evenings). If we provision processing capacity for the peak load, a great deal of infrastructure sits idle the rest of the time.
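The back-of-envelope numbers above are easy to check; the peak-to-average multiplier in the sketch below is an added assumption, there only to show why sizing for peak load wastes infrastructure.

```python
FRIDGES = 2_000_000_000        # refrigerators worldwide (the article's estimate)
OPENINGS_PER_DAY = 4           # door open/close events per fridge per day
SECONDS_PER_DAY = 24 * 60 * 60

packets_per_day = FRIDGES * OPENINGS_PER_DAY          # 8 billion packets/day
avg_rate = packets_per_day / SECONDS_PER_DAY          # roughly 93,000 packets/s

PEAK_MULTIPLIER = 3            # assumed morning/evening peak vs. the daily average
peak_rate = avg_rate * PEAK_MULTIPLIER

print(f"packets per day : {packets_per_day:,}")
print(f"average rate    : {avg_rate:,.0f} packets/s")
print(f"assumed peak    : {peak_rate:,.0f} packets/s")
print(f"capacity idle on average if sized for peak: {1 - 1 / PEAK_MULTIPLIER:.0%}")
```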

Once real-time processing is in place, we run into the second difficulty: how to extract useful information from the stored data and lift it above the level of any individual user. It would be great for you personally if your refrigerator could place your grocery order automatically, but if the manufacturer could learn that refrigerators in certain regions tend to overheat, or that refrigerators storing certain items wear out too quickly, that information would be of far greater significance to the manufacturer. To extract this kind of information from the stored data, we need to leverage existing big data solutions (and some that are still on the horizon).
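The sketch below shows the kind of fleet-wide correlation a manufacturer would be after, using nothing but in-memory Python over a handful of made-up telemetry records; a real deployment would run the same group-by-region aggregation in a big data engine.

```python
from collections import defaultdict

# Made-up telemetry records from many refrigerators.
records = [
    {"device": "f-001", "region": "north", "overheated": True},
    {"device": "f-002", "region": "north", "overheated": True},
    {"device": "f-003", "region": "north", "overheated": False},
    {"device": "f-004", "region": "south", "overheated": False},
    {"device": "f-005", "region": "south", "overheated": False},
]

def overheat_rate_by_region(rows):
    """Correlate data across devices: the fraction of refrigerators reporting
    overheating, grouped by region."""
    totals, hot = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["region"]] += 1
        hot[row["region"]] += row["overheated"]
    return {region: hot[region] / totals[region] for region in totals}

print(overheat_rate_by_region(records))   # {'north': 0.666..., 'south': 0.0}
```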

Cloud computing is ideally suited to both of these problems. For the first pain point, it allows processing resources to be allocated (and reclaimed) dynamically, so that the applications analyzing refrigerator data in real time can absorb these massive data volumes while keeping infrastructure costs optimized. For the second, cloud computing works hand in hand with big data solutions.
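As a minimal sketch of the first point, the scaling rule below sizes a pool of processing workers to the incoming packet rate; the per-worker throughput and the bounds are assumptions, and a real pipeline would call the cloud provider's autoscaling API instead of printing.

```python
import math

PACKETS_PER_WORKER = 10_000    # assumed throughput of one processing worker (packets/s)
MIN_WORKERS, MAX_WORKERS = 2, 50

def desired_workers(current_rate: float) -> int:
    """Follow the load: allocate just enough workers for the current packet rate."""
    needed = math.ceil(current_rate / PACKETS_PER_WORKER)
    return max(MIN_WORKERS, min(MAX_WORKERS, needed))

# Morning peak versus a quiet afternoon: capacity tracks the load instead of
# being provisioned for the worst case all day long.
for rate in (280_000, 15_000):
    print(f"{rate:>7,} packets/s -> {desired_workers(rate)} workers")
```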

In summary, the Internet of Things may change the overall architecture of cloud computing, but at the same time cloud computing is critical to making that change possible. Virtualized computing resources that applications can allocate dynamically without manual intervention do not advance by themselves; the Internet of Things is the driving force behind their development.
