Run the Tests that fail first…
In a previous blog we explored the operational challenges that IT organizations face – in this blog we’re going to dive into one of those challenges: the IT backlog.
Micro Focus has looked at the IT backlog before (examining the ‘lights on’ burden, and its true cost). However, what the beleaguered CIO or IT manager really wants to hear are practical solutions to address their concerns and their growing IT backlog.
Let’s explore some practicalities to help beat the backlog.
Once upon a time
This is a true story. A conversation took place at a software engineering lab, nearly 20 years ago. The engineering team, part of a UK software vendor, had a backlog of important client deliveries to make, each of them comprising bespoke, tailored software products.
The commercial manager was responsible for collecting payment for orders shipped and would therefore, around month-end, spend a lot of time in engineering. He was greeted by busy software engineers and quality assurance staff running, reviewing and reporting test activities as they sought to complete QA and meet release criteria.
In the software engineering world, testing is a vital, unavoidable and significant effort. Accepted wisdom based on notable research is that anything up to 50% of the duration of software projects will be spent in some form of testing.
This engineering shop was no different. Complex technology required exacting testing and thorough review. There would be no compromise on quality. Our commercial manager was frustrated. Why does it take so long to test?
The engineering team would point to the variety of test cases. While these might be fully automated, investigating test failures takes time and potentially involves software corrections, which in turn require re-testing. These “fails” might not be found until the end of the testing phase. The commercial manager reflected and then offered: “Why don’t we run the tests that fail first?”
Alas, it was no easier to predict what might fail than to have determined without testing that the software was error-free.
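With hindsight, the commercial manager’s instinct wasn’t entirely naive: modern test runners do prioritise tests with a recent history of failure, so likely regressions surface early in the cycle rather than at the end. A minimal sketch of that idea follows – the test names and the failure-history set are hypothetical, purely for illustration:

```python
def order_tests(tests, failure_history):
    """Order a test run so tests that failed last cycle execute first.

    tests: list of test names for this cycle, in their default order
    failure_history: set of test names that failed in the previous run
    """
    # Tests with a recent failure go to the front, original order preserved
    failed_first = [t for t in tests if t in failure_history]
    # Everything else runs afterwards, also in original order
    rest = [t for t in tests if t not in failure_history]
    return failed_first + rest

# Example: two of four tests failed in the previous cycle
tests = ["t_login", "t_report", "t_batch", "t_export"]
previous_failures = {"t_login", "t_batch"}
print(order_tests(tests, previous_failures))
# → ['t_login', 't_batch', 't_report', 't_export']
```

Test frameworks such as pytest offer this out of the box (its `--failed-first` option), so a cycle that is going to need rework reveals that fact as early as possible.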
Here in the data centre of 2014, things have evolved. The applications have grown beyond recognition, serving a host of new business needs. Their composition and complexity are unprecedented. For mainframes, multiply this by ten. The issues, and the backlog, are all on an enterprise scale.
Whether one regards this as ‘progress’ is entirely a matter of perspective. What is unarguable is that even after two decades of technological advancement, a number of factors remain constant.
They’re the rules
First, IT cannot compromise on quality. Compromising makes bad things happen, as big name brands, including Target, Co-op, and RBS know to their considerable cost. Even a small flaw in quality tends to become big news pretty quickly.
Second, IT is as busy as ever. With up to a third more workload outstanding than at this time two years ago, pressure to deliver, fast, remains as high as ever. Yet the environment hasn’t necessarily expanded to meet that demand. For example…
- Staff availability. Some organizations effectively contract out QA as a shared service that must be booked by the day, well ahead of time. Worse still, these testers need deep knowledge of the environment they are meant to be testing, and those skills aren’t just lying around waiting to be used.
- Server availability. Testing capacity is often rented as a shared service too, and that time comes at a premium. Whether it’s mainframe test regions, QA machines or software chargebacks, there is often a usage cost for all but the most commoditised technology.
- Finally, IT expenditure is only just emerging from the long dark shadow of economic gloom – discretionary IT operating budgets may not cover the extra resource now required.
So the problem persists – how do we get more done with the current limitations on time, budget and resources? And how can the ‘tech deficit’ gap be reduced to allow an organization to achieve what it set out to do while tackling the IT backlog head-on?
While the workload grows, thanks to all these modern new applications, the fixed relationship between time, resource and quality remains, regardless of underlying technology or methodology. When demand outstrips supply a backlog is created. It’s pretty simple, when you think about it.
Hang on. I’ve got it!
What would our commercial manager have said? Something along the lines of…
- What if we could enable more staff – when needed – to get up and running in our testing environment?
- What if we could add more testing horsepower when we need it and remove any restrictions our current environment has?
- What if we could add these extra resources, flexibly, without incurring additional cost?
Essentially he’s looking to resolve his backlog issues with the power of positive thinking. He’s positive that by ticking these boxes, he can make inroads into his IT backlog.
A Micro Focus client – a major FS organization – tried to reduce their backlog by solving the issues the hard-thinking commercial manager had identified. They were looking to establish greater flexibility, without incurring additional cost, and without compromising quality, to accelerate IT deliveries in their mainframe-based IT environment.
Like the commercial manager, they needed…
- Flexibility – improved time-to-delivery by eradicating delivery bottlenecks
- Quality – improved application quality without incurring additional costs
- Cost efficiency – better cost management by testing more in less time with cheaper resources
Sound familiar? Fine sentiments, and a good idea, but how does it actually happen?
It’s just crazy enough to work
Micro Focus can provide all the key components of the mainframe test environment without consuming additional, busy system resources. With this totally flexible, scalable and co-operative testing environment, there’s no waiting for a testing time slot, no waiting to set up a test region. The test environment is always on. And there can be as many instances as are required by the project.
Micro Focus provides this enterprise scale environment for testing, based on commodity servers, to streamline the delivery of core mainframe applications.
Enterprise Test Server takes the pressure off. There will always be testing that needs to be completed on the target platform, for example security and performance tuning, but by enabling test cycles to be conducted on a low cost commodity platform the major bottlenecks are eradicated. Any risks previously associated with not running enough testing can be fully mitigated as test capacity is no longer an issue.
And this approach can scale out for as much testing as IT needs to do. It will cope with requirements for extra releases and extra test cycles without an additional testing charge. This makes it the most cost efficient means of evolving testing processes to support today’s demands.
Customers using this approach have saved on testing costs overall, and shortened their time to delivery by 50%. Our commercial manager would have loved it. We can run all the tests first, we could have said. But whether you are a commercial manager, a delivery manager, a QA manager, or responsible for mainframe systems, this flexible approach to delivery could be transformational.
We’ll be looking further at IT backlog and how additional Micro Focus technology can turn great ideas into further backlog-shrinking solutions in our next blog. It’s out soon!