Cloud and Obnoxious - An EXTREMELY Brief History of Cloud Computing

When I started this most recent position, the decision to go to the cloud had already been made. After 3+ years, I have learned a lot and want to share some of those things with you. I want to do a series of cloud posts, and this first one is a bit of history, which was helpful for me for several reasons:

  1. It helps me understand where some of the products we see today came from
  2. I lived through most of it - because I’m old
  3. I like to pretend I’m @Bart_Elia writing novellas on long plane flights - only from my kitchen.

Over the next few posts, I want to talk about the things a company should consider before they move to the cloud. If you have any specific questions or topics, please feel free to send them my way: mwonsil at perceptron dot com. But for now, a little background.

In the beginning, computing power was expensive. Only very large organizations could afford mainframes. They were large and required a lot of care. To offset the expense of these systems, owners would often sell the unused capacity, a practice called time-sharing. Other companies would buy that excess time during off-peak hours.

The companies that owned these expensive mainframes couldn't even afford a second one to use as a development or test system. In 1972, IBM released virtual machine software (VM/370) that could run more than one operating system on its large computers.

Over time, companies like DEC, HP, IBM, Prime, Wang, Data General, and Apollo started to deliver minicomputers to bring computing power to more people. The minicomputer was smaller and more affordable. For both mainframes and minicomputers, people used a device called a terminal to communicate with the computer system.

In 1983, DEC added the ability to cluster several of these computers together (the VAXcluster) to get even more power and add redundancy. The idea of clustering would become very important to the future of Cloud Computing.

Computers continued to get smaller, and eventually there was a computer on everybody's desk. These were called microcomputers. They replaced the terminals, using software to emulate the physical hardware. Microcomputers kept getting more powerful, so powerful that they started to replace minicomputers as servers.

These servers were much less expensive than a minicomputer, so much so that we began to see "server sprawl" in the computer room. It seemed every application required its own dedicated server, with a particular operating system and specific versions of supporting services, but the price of several servers was still cheaper than the minicomputer they replaced. However, the cost of managing more and more servers (licenses, electricity, cooling, etc.) was starting to climb.

More than 25 years after IBM's original virtual machine, the virtualization movement began again. In 1999, VMware released VMware Workstation, which allowed users to run multiple operating systems on a microcomputer, and several companies soon followed with server-side virtualization software for managing fleets of machines.

A few years later, the Linux community released several virtualization strategies of a different kind. Instead of virtualizing an entire machine, these let the operating system create "containers" that ran in isolation from one another. Each container carried the libraries required to run its application, had limited access to CPU and memory, and could even have its own IP address. With so many containers to spin up, companies needed container management software. Several companies built it, but in 2013 Docker won much of the mind-share by creating a whole ecosystem around containers, from creation to deployment. In 2017, an open-source project called Kubernetes became the de facto standard for container management; it is now supported by Google, Microsoft, and Amazon.
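
As a rough sketch of that isolation, here is what spinning up a container looks like through Docker's Python SDK. The image, command, and resource limits below are arbitrary examples, not anything from the original post:

```python
# pip install docker
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Run a throwaway container with its own filesystem, a memory cap,
# and a CPU limit -- the per-container isolation described above.
output = client.containers.run(
    "alpine:3.19",                        # image carrying its own libraries
    "echo hello from inside a container", # hypothetical example command
    mem_limit="64m",                      # cap memory at 64 MB
    nano_cpus=500_000_000,                # limit to half a CPU core
    remove=True,                          # clean up when it exits
)
print(output.decode())
```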

The power of the Cloud comes from the combination of virtualization and clustering. Data centers combine thousands of computers to run millions of containers and operating systems at the same time. The clustering software provides redundancy and gives data centers the ability to adjust scale on the fly. Add in the drastic reduction in the price of storage and some powerful automation, and you have the infrastructure that makes Cloud Computing what it is today.
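
To make "adjust scale on the fly" concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes you have credentials for a cluster in your local kubeconfig and a deployment named "web" (a hypothetical name) already running in it:

```python
# pip install kubernetes
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig file.
config.load_kube_config()
apps = client.AppsV1Api()

# Scale the hypothetical "web" deployment to 5 replicas. The cluster
# spreads those containers across the data center's machines and
# replaces any that fail -- the redundancy described above.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```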

In upcoming posts, we'll discuss what products cloud companies offer, what a company should think about if it is considering moving to the cloud, and what I have experienced with Epicor's cloud solutions and what that future may hold.

9 Likes

I think I am horrified I actually remember those boxes so well. That Compaq Luggable was a beast: 28 lbs.

And remember DEC's VMS -> Microsoft Windows NT:
VMS++ -> WNT
The NT foundations that started in DEC VMS in the late '70s were amazing to follow through into the '90s.

Looking forward to the overview!

2 Likes

Very interesting, keep it coming!

2 Likes

Thanks @Mark_Wonsil, good stuff. Moved to Experts' Corner.

1 Like

They did a great interview with Dave Cutler a few years ago where he talks about all of that.

Also, Richard Campbell from .NET Rocks is writing a book about the history of .NET. He put out a video going over some of it.

Lastly, you can't really have a history of the cloud without a history of Unix; this one is by Rob Pike of Bell Labs and Google.

1 Like

Very true, John. Last year, Microsoft said that over half of Azure is running Linux.

Mark W.

I had an instructor who interned at Bell Labs when they created this little transistor thing
https://beatriceco.com/bti/porticus/bell/belllabs_transistor.html

1 Like

Dave Cutler is excellent.
Rich Campbell is a huge source of knowledge and great in person. He spoke at the local .NET users group, and I have had drinks with him at MS Build a couple of times.

1 Like

My current favorite Bell Labs employee and Columbia Professor

2 Likes

Bahaha, I miss him as Monk…
The wife loves Mrs. Maisel, so I catch glimpses once in a while.

2 Likes

I remember flying out to California with the Compaq (sewing machine) computer, carefully placed in the overhead. I think it even had a 10 MB (not gig) hard drive and a built-in 9" green-screen monitor.

1 Like

@LarsonSolutions we are dating ourselves with discussions of that beast. I think I carried one on my first field call out of school, to a customer where I swapped a 5 MB hard disk and handed him a bill for $5,800 USD ($12k in today's dollars), and he was happy as heck because it was so cheap.

@Bart_Elia Somebody really should do a history of Epicor at Insights! Do you have any people from the original systems still there?

For example, what’s the story with MfgSys and where the heck did Addison come from?

Usually old farts like myself are cornered with a coffee or adult beverage for those chats. We'd not want to bore the majority :rofl:

I could tell the story about my first day at DCD…and telling my co-workers that I was supposed to have 50% travel…and they all started laughing. Or about when I asked about general ledger integration…and was told there was no GL…whut?

That being said, I still miss the old Ship Vision screens. Yeah, cubes, dimensions, and all that jazz are nifty…but one screen gave a helluva lot of info at a glance.

1 Like

There are divergent histories around due to the mergers, so you need to be careful to get both major branches. There was the Platinum acquisition, so most published history (such as Wikipedia) flows back from there, but the original DCD-Classic/Vantage/Epicor evolution was from the DCD/DataWorks branch. You can still see vestiges of it in field names like DCD-User. I was never so glad as to get off DCD-Classic on the AS/400 and upgrade to Windows-based Vantage 4.0. Around that time I still had a copy of the JSCS-II manual, the progenitor of Classic.

-Todd C.

1 Like