The Case for Mainframe DevOps: All Quiet on Planet z?
In many, if not most, of the organizations we speak to, the mainframe environment shares a few defining characteristics. The first is its enduring value: market surveys confirm the continued loyalty of the mainframe community, and the investments made in the IT estate over the years have delivered incredible returns in business value and competitive advantage. But very often – and this is our second characteristic – those tremendous returns have come at the cost of what is now a highly complex IT estate.
A third defining characteristic, a consequence of all that complexity, is where mainframe application teams spend their time today. The reality for most organizations is that the majority of budget and time goes simply to ‘keeping the lights on’ – the day-to-day running of the organization. Some organizations spend 70, 80 or even 90% of their available IT resources on ‘lights on’ activities such as maintenance updates, managing the maintenance backlog, regulatory changes, and bug fixes to core systems.
This leaves very little time and budget for innovation – the innovation that drives success. And business success in 2015 demands greater flexibility and velocity in deliveries from IT.
Enter the Hero to Save the Day
IT faces an interesting reality in which core applications have outlived the very processes and technologies originally used to create them. Those processes and technologies are now showing their age and cannot support the more flexible and inclusive model that the systems of 2015 need.
And that’s where DevOps comes in…
Building on the concepts of the agile manifesto – where work is broken into smaller chunks, each taken up and delivered in a short timeframe by a dedicated team – DevOps adds further discipline and flexibility by driving a more focused and inclusive process across IT. It espouses collaboration, flexibility and, ultimately, internal operational efficiency.
DevOps makes a lot of sense for small teams building new code with few distractions. It lends itself to new developments among co-located teams using modern technology across an open structure.
All that’s needed is for development teams to adopt it in the mainframe environment…
Sorry, we need to do what?
For the mainframe world, however, this is far from straightforward. There are a number of reasons; let’s look at three of the more obvious concerns.
Firstly, much of the work is NOT new code. The core systems and the updates they require make up most of the IT backlog, and those updates must be planned and undertaken in a regimented fashion, where mainframe resources, available skills, business priorities and serious risk aversion drive the operational culture. Multiple simultaneous updates to the same system may not be a smart move when the cost of production failure is significant.
Secondly, DevOps is built on an agile model. Most mainframe shops are not, with large teams, a formalised structure and a fairly fixed delivery model. Culturally, DevOps is misaligned with the mainframe world today. One commentator phrased it this way: “one of the biggest challenges is the cultural shift, which will be necessary to break away from the old siloed ways of working.”
And thirdly, the issue is technical: in all but a few cases the underlying mainframe delivery technology is not contemporary and typically doesn’t lend itself to a collaborative or efficient workflow. It works for the processes as they were established years ago; it doesn’t necessarily bend to support another model.
But all is not lost. If we remind ourselves of our core objectives – to make delivery cycles more efficient, to foster collaboration, and to improve the velocity of deliveries – we can look at practical ways of removing the barriers to success. Roger Brown, writing in his online Agile Coach journal, recently did just that in the blog post ‘Agile in a COBOL world’ – I’d recommend reading it after you finish up here…
Let’s look at three common situations in the mainframe world – in fact, three problems we have encountered recently with customers. Consider these our barriers to better efficiency.
One goal of DevOps is to accelerate throughput by enabling many people to work simultaneously on code deliveries – ‘concurrent deliveries’. That isn’t always how things are set up in the mainframe world, where concurrency is often restricted by promotional models, Source Code Management configuration and, usually, mainframe development procedures.
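The contrast with a serial promotion path can be sketched in a plain git repository. Everything here – file names, branch names – is hypothetical, and real mainframe SCM tooling works quite differently, but it illustrates the isolate-then-merge pattern that concurrent delivery relies on: two changes developed in parallel from the same baseline and integrated independently, rather than queued one behind the other.

```shell
set -e
# Hypothetical illustration only: a throwaway repo standing in for the SCM.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo "IDENTIFICATION DIVISION." > payroll.cbl
git add payroll.cbl
git commit -qm "baseline"
trunk=$(git symbolic-ref --short HEAD)  # 'master' or 'main', depending on git version

# Developer A: regulatory rate change, isolated on its own branch
git checkout -qb feature/rate-change
echo "01 TAX-RATE PIC 9V99 VALUE 0.21." > rates.cpy
git add rates.cpy
git commit -qm "apply new tax rate"

# Developer B: bug fix, started from the same baseline, in parallel
git checkout -q "$trunk"
git checkout -qb fix/rounding
echo "COMPUTE NET ROUNDED = GROSS - TAX." > fix.cbl
git add fix.cbl
git commit -qm "fix rounding error"

# Each delivery integrates on its own schedule, not in a fixed queue
git checkout -q "$trunk"
git merge -q --no-edit feature/rate-change
git merge -q --no-edit fix/rounding
ls
```

The point of the sketch is the shape of the workflow, not the tool: neither developer waits on the other, and the trunk absorbs each delivery when it is ready.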
Another objective is to accelerate application development, and the associated analysis and testing, so that each individual gets more done. Inefficient or out-of-date tooling may not allow for this.
Finally – and this is a real concern for many of our clients – irrespective of development speed, there is a rigid, fixed and lengthy test phase dictated by QA procedures and available mainframe resources. Most clients we speak to are maxed out in terms of mainframe test resources due to the sheer volume of testing effort needed.
Conclusion – Get Smarter
The challenges above are by no means uncommon; they are, however, major rocks in the road to a more efficient delivery model, the cornerstone of the DevOps ethos.
But there is a way forward. We will explore each of these challenges – and a practical solution for each, based on contemporary technology – in forthcoming blogs. For those looking to streamline mainframe delivery using DevOps principles, the news is good, and it is timely. To quote a recent publication: “Not only is DevOps on the mainframe Mission Possible, but also it’s becoming Mission Critical”. In the meantime, find out more by contacting me on Twitter or an expert at Micro Focus.
Sources: Gartner, Forrester and anecdotal customer information
Mobile to Mainframe DevOps for Dummies, Rosalind Radcliffe (IBM Limited Edition), 2015