Next step in the evolution of the mainframe
Almost a decade ago I wrote a series of articles under the main title “My future runs on System Z”, dedicated to proving that the IBM mainframe was still a platform companies had to take into account when making their strategic platform choices.
A decade has passed in which I stopped writing articles. A decade also in
which IBM modernized the mainframe at an incredible speed. Today I still work
on the mainframe, and I believe its value has increased so much that I have to pick
up my pen again and write down my own experiences with this modern platform.
But before I start writing new articles, I want to look back briefly and start with this golden oldie: my last article, published some ten years ago…
Once upon a time the IT business started out with
mainframes, but there came a time when the whole world shouted that mainframes were
too large and expensive.
So the industry developed cheaper processors and put
a pizza-box server in just about everyone’s hands.
That was great! Servers were implemented everywhere,
truly customized to the customer’s needs, and above all they seemed to be very
cheap.
Unfortunately this created an enormous sprawl of
servers and their implementations, and soon we needed a dedicated maintenance
group for just about every server in the company.
Overall, maintainability decreased and IT operational costs threatened to grow sky high again.
Fortunately there were some very smart guys among us, and they “invented” virtualization to solve this; instead of creating physical sprawl by adding ever more physical servers, we now created virtual sprawl.
At this point I began to think about the “good old
days” and started to daydream about the 1960s mainframe that already had solved
these issues.
But looking back at the past is for old guys sitting
in the corner and grumbling about modernization.
Since I didn’t want to end up like Statler or Waldorf
(the two grumpy, disagreeable old men criticizing the whole cast of The Muppet
Show from their balcony seats), I quit daydreaming and got back to reality.
I jumped back on the road to modernization and tried
to keep up with the rest in the right direction …
and I witnessed the maturing of GUIs with a nice look and feel, browser technology, application integration and service-oriented architecture, in a world where babies can use the internet before they can say “daddy”.
Wow! I really enjoyed all this new stuff and, what’s more, it seemed to work! I still glanced back at the past now and then, because today we use frameworks and patterns where in the old days we called them standard modules and best practices, but what’s in a name…
The growing exploitation of these new technologies
made it evident that as usage and required functionality increase, new
issues arise around scalability, reliability, security, performance and
availability. It also became clear that trying to solve these issues increased
IT costs: often more robust, complex solutions must be built, more
iron is needed, and that is simply not given away for free. Today’s newly built
solutions will become the legacy of tomorrow.
Suddenly I got a déjà vu, because weren’t these the reasons
to put the mainframe on a side track a long time ago…
Oops, I did it again… don’t look back… what has been,
has been… stay focused on the present, because the industry evolves and offers
a solution for all these issues by delivering IT resources as services, where
IT resources can be almost anything like applications, computer power, storage
capacity, networking, programming tools, etc.
Massively scalable IT resources become available as
needed, with customers paying only for what they actually
consume.
Amazon.com, Google, and lots of others are rushing to
build highly virtualized, massively scalable, and (hopefully) bulletproof IT
infrastructures accessible over the Internet or, to use the latest buzzword,
“in the cloud”.
Our mainframe data centre was our IT infrastructure cloud long before there was a cloud. Virtualized and highly scalable, bulletproof and accessible over the network, the mainframe has delivered cloud computing for years.
Mainframe data centres that need this kind of scalability already have Parallel Sysplex capabilities that enable multiple mainframes to act together as a single system image. A Parallel Sysplex combines data sharing and parallel computing, effectively enabling a cluster of mainframes to share a workload for high performance and high availability.
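The idea of a cluster presenting itself as a single system image can be illustrated with a toy sketch. This is only a simplified analogy in Python, not an actual sysplex interface, and the node names (CEC1, CEC2) and class names are invented for illustration: callers submit work to the cluster as a whole, and the cluster routes each job to a healthy member, surviving the loss of a node.

```python
class Node:
    """A single cluster member (a toy stand-in for one mainframe)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def process(self, job):
        return f"{job} done on {self.name}"


class Sysplex:
    """Toy cluster presenting a single system image over several nodes.

    Callers never choose a node; the cluster routes work to any
    healthy member, so losing one node is transparent to the caller.
    """
    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, job):
        for node in self.nodes:
            if node.healthy:
                return node.process(job)
        raise RuntimeError("no healthy nodes in the cluster")


cluster = Sysplex([Node("CEC1"), Node("CEC2")])
print(cluster.submit("BATCH001"))   # BATCH001 done on CEC1
cluster.nodes[0].healthy = False    # simulate a node outage
print(cluster.submit("BATCH001"))   # BATCH001 done on CEC2
```

The point of the sketch is the single entry point: the caller’s code is identical before and after the outage, which is what “acting together as a single system image” buys you.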
For the past 50 years, the mainframe has served as the backbone of large-scale computing. It has both adapted to new requirements and adopted and exploited new technology. It is being used today in ways that were literally unimaginable back in the 1960s. So to the extent that the past is prologue, it’s probably fair to say that when it comes to the “mainframe” …. you ain’t seen nothing yet.
Cloud computing is just a next step in the evolution of the mainframe.
So far the article that I wrote a decade ago. In my next article I will talk about a possible approach to adopting a cloud way of working on the mainframe, and about some things to consider when business owners have to decide whether or not to move to the cloud, from a mainframe perspective.

Harry van Irsel has more than 30 years of experience in IT, specialized in mainframe innovation, architecting and engineering. He has a strong vision on innovation and is always motivated to find innovative solutions that address emerging needs of the organization or enhance existing services.
Propagating vision and belief in the mainframe in a constructive and objective way, combined with a passion for combining traditional and modern technologies into “best-of-both-worlds” solutions, has resulted in several Innovation Awards.
Harry is also a 2018, 2019, 2020 and 2021 IBM Champion for innovative thought leadership in the technical community.
“Views expressed in this article are my own”.