Comparing Cloud Versus On-device AI: When to Use Which?

In recent years, the cloud has been instrumental in propelling artificial intelligence applications into the mainstream. Popular cloud platforms such as Amazon Web Services, Microsoft Azure and Google Cloud have allowed businesses of all sizes to tap into the network resources needed to support AI applications, helping push the global enterprise AI market to a value of $845.4 million in 2017, with a projected compound annual growth rate (CAGR) of 48.7 percent through 2022. But even as cloud AI consolidates its position, its supremacy is already being challenged by the emergence of edge computing AI, which shifts AI workloads from the cloud to local Internet of Things devices such as smartphones, wearables, smart home devices and smart factory sensors. TrendForce projects that the edge computing market will grow at a CAGR of 30 percent through 2022, accelerating AI development.

As these two models of AI deployment vie for market share, end users need to understand how they differ. Here’s a comparison of how cloud-based and on-device AI stack up in terms of efficiency, convenience and privacy, so you know the pros and cons of each and when it makes sense to use which.

Efficiency

One point of comparison between cloud AI and on-device AI is how efficiently each processes data. Cloud servers are designed to process centralized data, whereas edge devices are well suited to processing local data. For instance, it’s efficient to use the cloud to manage a retailer’s inventory-control data and apps, since inventory data may need to be collected from multiple store locations and cloud servers are positioned to coordinate the collection and processing of that data. By contrast, it wouldn’t be efficient to distribute and process data for all locations on local devices, since doing so would unnecessarily strain bandwidth and slow down the network.

On the other hand, security data from sensors in the same store’s alarm system makes more sense to process locally, speeding up the response to a security breach rather than waiting while the data travels to remote cloud servers. Similarly, biometric authentication is better served by an on-device artificial intelligence platform such as Qualcomm’s Snapdragon 845 mobile platform, which can run facial recognition directly on the smartphone rather than making the user wait for data to reach the cloud before the device unlocks. When this kind of quick local response is required, on-device AI is more efficient than cloud AI. Cloud AI, however, remains the more efficient choice for central processing of large amounts of data from multiple locations.
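Returning to the face-unlock example: the sketch below shows, in rough terms, what an on-device match looks like. It assumes a TensorFlow Lite face-embedding model already ships with the phone; the file name face_model.tflite, the enrolled embedding and the similarity threshold are all illustrative assumptions, and this is a generic example rather than Qualcomm’s actual Snapdragon stack. The key point is that the unlock decision is made entirely on the handset, with no network round trip.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) face-embedding model bundled with the device.
interpreter = tf.lite.Interpreter(model_path="face_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def unlock(camera_frame: np.ndarray, enrolled_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Return True if the camera frame matches the enrolled face closely enough."""
    # Inference runs locally; the image never leaves the phone.
    frame = camera_frame[np.newaxis, ...].astype(np.float32)
    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()
    embedding = interpreter.get_tensor(output_details[0]["index"])[0]

    # Cosine similarity against the embedding captured at enrollment time.
    similarity = float(np.dot(embedding, enrolled_embedding) /
                       (np.linalg.norm(embedding) * np.linalg.norm(enrolled_embedding)))
    return similarity >= threshold
```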

Convenience

Convenience is another point of comparison between cloud and on-device AI. Many users find cloud AI convenient because it lets them tap into large-scale artificial intelligence applications without building out a large local IT infrastructure. For instance, even a small business owner who runs their company from a smartphone can leverage the vast resources of Amazon Web Services to run business intelligence applications that would be cost-prohibitive to run locally.
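As a rough illustration of how little local infrastructure this takes, the sketch below calls Amazon Comprehend, one of AWS’s managed AI services, to score customer reviews for sentiment. The only local dependency is the boto3 client; the region, the credentials (assumed to be configured in the environment) and the sample text are all illustrative.

```python
import boto3

# Model hosting, scaling and training all happen on AWS's side;
# the client only needs network access and AWS credentials.
comprehend = boto3.client("comprehend", region_name="us-east-1")

def review_sentiment(review_text: str) -> str:
    """Return the dominant sentiment (POSITIVE, NEGATIVE, NEUTRAL or MIXED) for one review."""
    response = comprehend.detect_sentiment(Text=review_text, LanguageCode="en")
    return response["Sentiment"]

if __name__ == "__main__":
    print(review_sentiment("Delivery was fast and the staff were friendly."))
```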

On the other hand, a connected or autonomous car that must rapidly process data from navigational landmarks, internal car sensors and other vehicles needs that data in real time, and can’t afford to wait for it to make a round trip through the cloud. In cases like this, it’s more convenient to process the data locally through an on-device artificial intelligence platform.
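One way to see the trade-off is as a simple latency budget. The sketch below uses assumed, purely illustrative numbers (they are not measurements) to show why a cellular round trip is hard to fit inside a vehicle’s perception-to-action deadline, while on-device inference fits comfortably.

```python
# All figures below are illustrative assumptions, not measurements.
CONTROL_LOOP_DEADLINE_MS = 100   # assumed budget for one perception-to-action cycle
ON_DEVICE_INFERENCE_MS = 20      # assumed inference time on in-vehicle hardware
CLOUD_INFERENCE_MS = 10          # assumed inference time on a cloud GPU
NETWORK_ROUND_TRIP_MS = 120      # assumed cellular round trip; varies with coverage

def fits_deadline(total_latency_ms: float) -> bool:
    """Does the end-to-end latency fit within the control-loop deadline?"""
    return total_latency_ms <= CONTROL_LOOP_DEADLINE_MS

print("on-device:", fits_deadline(ON_DEVICE_INFERENCE_MS))                      # True
print("cloud:    ", fits_deadline(CLOUD_INFERENCE_MS + NETWORK_ROUND_TRIP_MS))  # False
```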

Privacy

Privacy concerns also differ between the cloud and edge devices. One concern about the cloud is that it places sensitive data on internet-connected networks, where it can potentially be compromised by cybercriminals. More than 1 in 5 files uploaded to cloud-based file-sharing services contain sensitive data, such as corporate intellectual property. Keeping this type of data on local devices and networks can leave it less exposed in many instances.

On the other hand, local devices aren’t immune to hacking either, and when they are connected to the Internet of Things they can add points of vulnerability to a network. For instance, if an employee’s smartphone is lost, stolen or infected with malware, an unauthorized user can potentially use the device to compromise a company’s entire network.

At the same time, local devices enjoy security advantages of their own, such as the ability to use on-device AI for automated malware detection. Cloud AI, meanwhile, can tap into the professional security resources of large cloud providers, while local devices may be limited to the resources immediately available to the device owner. When it comes to privacy, both the cloud and edge devices have vulnerabilities as well as strengths, and there is no magic bullet against every form of attack beyond sound security practices.

Cloud-based AI and on-device AI are best seen as complementary rather than competing approaches. Cloud AI can be more efficient for handling large central databases that pool data from many endpoints, while on-device AI can be more efficient at rapidly processing local data from a single device or local network. Similarly, the cloud is convenient when smaller companies need to tap into large-scale AI resources, but edge devices may be more convenient when local resources are sufficient to process the relevant data. Both the cloud and edge devices have security vulnerabilities, but both also have strengths, such as on-device AI’s ability to detect malware intrusions automatically or cloud providers’ ability to deliver professional security to their clients. As these two technologies mature, cloud and on-device AI will increasingly be seen as complementary tools, each suited to specific applications.