The concept of cloud computing dates back to the 1950s, when large-scale mainframes with high-volume processing power became available.
Cloud computing is an evolution of technology over time.
To make efficient use of a mainframe's computing power, the practice of time-sharing, or resource pooling, evolved. Using dumb terminals, whose sole purpose was to provide access to the mainframe, multiple users could tap the same data storage layer and CPU power from any terminal.
In the 1970s, with IBM's release of an operating system called VM (Virtual Machine), it became possible for mainframes to host multiple virtual systems, or virtual machines, on a single physical node.
Virtualization
The virtual machine operating system took the 1950s practice of shared mainframe access a step further by allowing multiple distinct computing environments to coexist on the same physical hardware. Each virtual machine hosted a guest operating system that behaved as though it had its own memory, CPU, and disks, even though those resources were shared.
Virtualization thus became a technology driver and a huge catalyst for some of the biggest advances in communications and computing. Even 20 years ago, physical hardware was expensive. As the internet became more accessible and the pressure to make hardware costs viable grew, servers were virtualized into shared hosting environments, virtual private servers, and virtual dedicated servers, using the same kind of functionality the virtual machine operating system provided.
So, for example, if a company needed a given number of physical systems to run its applications, it could instead take one physical node and split it into multiple virtual systems.
Hypervisors
This was enabled by hypervisors. A hypervisor is a small software layer that enables multiple operating systems to run alongside each other, sharing the same physical computing resources.
A hypervisor also separates the virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, which prevents the virtual machines from interfering with one another. If one operating system suffers a crash or a security compromise, for example, the others keep working.
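To make this concrete, here is a minimal sketch of inspecting the isolated virtual machines a hypervisor manages, assuming a Linux host running KVM with the libvirt daemon and the libvirt-python bindings installed; the connection URI and output format are illustrative.

```python
import libvirt  # pip install libvirt-python; assumes libvirt/KVM on the host

# Open a read-only connection to the local hypervisor.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("failed to connect to the hypervisor")

# Each domain is an isolated virtual machine carved out of the same
# physical CPU, memory, and storage.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    info = dom.info()  # [state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime]
    running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
    print(f"{dom.name()}: {info[3]} vCPUs, {info[1] // 1024} MiB, {running}")

conn.close()
```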
As technologies and hypervisors improved and were able to share and deliver resources reliably, some companies decided to make the cloud’s benefits accessible to users who didn’t have an abundance of physical servers to create their own cloud computing infrastructure.
Since the servers were already online, spinning up a new instance was nearly instantaneous. Users could now order the cloud resources they needed from a larger pool of available resources and pay for them on a per-use basis, also known as pay-as-you-go.
This pay-as-you-go, or utility computing, model became one of the key drivers behind the rise of cloud computing. It allowed companies and even individual developers to pay for computing resources as and when they used them, just like units of electricity.
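As a toy illustration of metered billing, the sketch below computes a per-use charge; the rates, resource dimensions, and function name are hypothetical, not any provider's actual pricing.

```python
# Hypothetical hourly rates -- real providers publish their own pricing.
RATE_PER_VCPU_HOUR = 0.02    # USD per vCPU-hour
RATE_PER_GIB_HOUR = 0.005    # USD per GiB of memory per hour

def usage_cost(vcpus: int, memory_gib: int, hours: float) -> float:
    """Charge only for resources actually consumed, like metered electricity."""
    return hours * (vcpus * RATE_PER_VCPU_HOUR + memory_gib * RATE_PER_GIB_HOUR)

# A 4-vCPU, 16 GiB instance run for 30 hours in a billing period:
print(f"${usage_cost(4, 16, 30):.2f}")  # $4.80
```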
The utility model let companies switch from a capital-expenditure (CapEx) model to a more cash-flow-friendly operating-expenditure (OpEx) one. It appealed to companies of all sizes, those with little or no hardware and even those with plenty, because instead of making huge capital outlays on hardware, they could pay for computing resources as and when needed.
It also allowed them to scale workloads up during usage peaks and back down when demand subsided, as the sketch below illustrates.
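Here is a minimal sketch of the kind of proportional rule autoscalers apply, assuming CPU utilization is the scaling signal; the target, bounds, and function name are illustrative, not any cloud provider's API.

```python
import math

def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.60, min_n: int = 1, max_n: int = 20) -> int:
    """Grow the fleet when load exceeds the target utilization; shrink when it subsides."""
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, wanted))

print(desired_instances(4, 0.90))  # peak traffic: 4 -> 6 instances
print(desired_instances(6, 0.30))  # demand subsides: 6 -> 3 instances
```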
Together, these shifts gave rise to modern-day cloud computing, and the impact of that evolution has been immense.