What Is Distributed Cloud Computing?
To meet performance, redundancy, and regulatory requirements, Distributed Cloud Computing generalizes the cloud computing concept to locate, process, and serve data and applications from geographically distributed facilities.
Users who do not wish to develop, buy, or maintain their own IT infrastructure can use the classic cloud computing model, which provides on-demand, metered access to computing resources such as storage, servers, databases, and applications. Virtualization techniques allow isolation and protection of individual users' data, and public cloud service providers maintain and run massive server farms whose resources are shared among customers. All of the monitoring and administrative work of keeping the cloud up and running is transparent to cloud users, and site redundancy across regions allows for recovery from outages and disasters.
Distributed cloud computing extends distributed computing to the cloud infrastructure itself, spreading processing effort across several networked servers. A distributed cloud is an application execution environment in which application components are hosted in geographically separated locations that match the program's needs.
Organizations adopt a distributed cloud for several reasons:

- To deliver more responsive, performant service for applications where latency is crucial and bulk data transmission to and from a central cloud is costly.
- To comply with data sovereignty regulations, as in the EU, which may demand that data never leave the user's country.
- To keep certain data and processes within an organization's private cloud or data center, with which a public cloud is linked.
- To withstand large-scale outages that exceed the protection given by local, regional, or national site redundancy.
Based on these requirements, the distributed cloud service provider ensures end-to-end management for the best placement of data, computing operations, and network connectivity. From the perspective of the cloud user, it appears as a single, unified solution.
A Content Delivery Network (CDN) is an example of a distributed cloud, in which storage (for example, video content) is distributed over multiple geographical locations to reduce delivery delay. Enterprises that use CDNs to distribute content benefit from the ability to scale both storage and performance, with the underlying complexity handled invisibly by the CDN provider.
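The latency benefit of a CDN comes from routing each request to a nearby replica. A minimal sketch of that decision, with invented site names and latency figures, might look like this:

```python
# Hypothetical sketch: a CDN directs each request to the replica site
# with the lowest measured round-trip latency. Site names and latency
# values below are illustrative, not from any real provider.

def pick_nearest_site(latency_ms: dict) -> str:
    """Return the replica site with the lowest measured latency."""
    return min(latency_ms, key=latency_ms.get)

# Latencies (in ms) a user's resolver might observe to each edge site.
observed = {"eu-west": 18.0, "us-east": 95.0, "ap-south": 210.0}
print(pick_nearest_site(observed))  # -> eu-west
```

Real CDNs combine such latency measurements with load, cost, and cache-hit considerations, but the routing principle is the same.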
Edge computing refers to a system in which data is processed as close as feasible to its source. Edge computing is useful for applications that require low latency and high throughput, or for which it is too expensive to transport data to a distant cloud for processing. In circumstances where the transport network is bandwidth-constrained or unreliable, or if the data is too sensitive to be delivered through public networks, even if encrypted, edge computing can help.
As a result, edge computing is an extension of distributed cloud computing rather than a new computing paradigm. Edge computing resources can be thought of as a “mini” cloud data center, with edge storage and computing resources coupled to bigger cloud data centers for big data processing and bulk storage.
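The edge pattern described above can be sketched as follows: raw sensor readings are aggregated at the edge, and only a compact summary is forwarded to the central cloud for bulk storage and analysis. All names and values here are illustrative assumptions:

```python
# Hypothetical sketch of edge-side aggregation: reduce raw readings to
# a small summary before sending anything over the (possibly constrained)
# uplink to the central cloud.

def summarize_at_edge(readings: list) -> dict:
    """Reduce raw readings to a compact summary suitable for uplink."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.3, 21.6, 22.0, 21.9]      # e.g., local temperature samples
summary = summarize_at_edge(raw)     # only this summary is shipped upstream
print(summary["count"], summary["min"], summary["max"])
```

Four floating-point readings become one small record, which is the bandwidth and latency saving that motivates processing near the data source.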
Please see the Examples section below for some usage scenarios.
A fundamental benefit of using cloud services is that cloud service users avoid maintaining and operating their own IT infrastructure and convert capital expenditure (CAPEX) to operating expenditure (OPEX) by employing a utility-like model of purchasing computation and storage on demand.
Some additional features are available for purchase with distributed cloud computing, such as the ability to request that certain data be kept in specified countries or that a specific performance target for latency or throughput is reached. Between the user and the cloud provider, these are formalized as Service Level Agreements (SLAs).
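Such an SLA can be thought of as a small, declarative specification. The structure below is a hypothetical sketch, not any provider's actual API; the field names are invented:

```python
# Hypothetical sketch: an SLA for distributed-cloud placement expressed
# as a data structure. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlacementSLA:
    allowed_countries: frozenset  # where the data is permitted to reside
    max_latency_ms: float         # end-user latency target

sla = PlacementSLA(allowed_countries=frozenset({"DE", "FR"}),
                   max_latency_ms=30.0)
print(sorted(sla.allowed_countries), sla.max_latency_ms)
```

The provider's placement machinery then treats these fields as hard constraints when deciding where data and computation land.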
The cloud provider's role is to hide the complexity of how such SLAs are met. This could entail building out cloud infrastructure in specific locations or partnering with existing cloud providers in those areas. Furthermore, high-speed data links between these geographically dispersed data centers must be established.
Major cloud providers have their own technologies that they can integrate into these scattered cloud data centers to ensure that data, computation, and storage are intelligently placed to meet SLAs, all while remaining transparent to cloud service consumers.
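One way to picture this SLA-aware placement is as a constrained selection over a catalog of regions: discard any region that violates residency or latency requirements, then pick the cheapest survivor. The regions, costs, and latencies below are invented for illustration:

```python
# Hypothetical sketch of SLA-aware placement: choose the lowest-cost
# region that satisfies both a data-residency constraint and a latency
# target. Catalog entries are illustrative assumptions.
from typing import Optional

def place(regions: list, country: str, max_latency_ms: float) -> Optional[str]:
    """Return the lowest-cost region meeting residency and latency SLAs."""
    eligible = [
        r for r in regions
        if r["country"] == country and r["latency_ms"] <= max_latency_ms
    ]
    if not eligible:
        return None  # provider must expand or partner in that geography
    return min(eligible, key=lambda r: r["cost"])["name"]

catalog = [
    {"name": "de-fra-1", "country": "DE", "latency_ms": 12, "cost": 1.4},
    {"name": "de-ber-1", "country": "DE", "latency_ms": 25, "cost": 1.1},
    {"name": "us-nyc-1", "country": "US", "latency_ms": 90, "cost": 0.8},
]
print(place(catalog, country="DE", max_latency_ms=30))  # -> de-ber-1
```

Note that the cheapest region overall (us-nyc-1) is never considered, because the residency constraint filters it out before cost is compared.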
Some usage scenarios to motivate distributed cloud computing include the following:
While sending traffic and engine data back to a central cloud, autonomously driven trucks driving in echelon can locally process data from onboard and road sensors to maintain a stable speed and separation between each other and other vehicles. A fleet management application on a regional cloud monitors their journey to the destination, analyzing data from many vehicles to determine ideal routes and identify vehicles needing maintenance.
A large over-the-top video service provider relies on a central cloud to transcode and format videos for various device types and networks. It uses globally distributed CDNs to cache content in multiple formats. It pre-positions content in caches nearest to end-users in anticipation of high demand for a newly launched series in a certain region—for example, storage collocated with cable head ends to serve home viewers, or at 5G base stations in congested metropolitan areas for mobile viewing.
Distributed cloud computing extends the classic, large data center-based cloud concept to a set of geographically dispersed distributed cloud infrastructure components.
Distributed cloud computing preserves the on-demand computing and storage model while bringing resources closer to where they are needed for better performance.
Edge computing is a component of distributed cloud computing that represents the outermost reaches of a cloud architecture.