When is a glitch not a glitch?

How a harmless word can disguise dangerously inadequate core system funding

A recent Financial Conduct Authority press release announced £42m in fines for the RBS group of banks. The penalties related to the 2012 service outages that denied more than 6.5 million customers access to their funds for several weeks.

There are many more recent stories. These include high-profile problems with the Obamacare website and at the Co-operative Bank. Most readers will recall the recent issues with the NATS computer that delayed dozens of flights and stranded hundreds of passengers.

The common denominator is that in each case, a fundamental IT system failure was dismissed as a ‘glitch’. It is my view that they are nothing of the sort. Without launching an etymological crusade, in my world a glitch is a minor technical aberration – a skip in a downloaded music track, for example. It is not something that paralyses international transport or banking systems, and in some cases those responsible for maintaining supposedly robust IT infrastructures have admitted as much. But while we continue to use such benign terminology we risk letting them off the hook. Let’s review two of these cases in more depth.

In 2012, NatWest staff tried to install an update on the RBS payment processing system, known as CA-7. This is a job scheduling and workflow automation software package commonly used by banks and other large enterprises running IBM mainframes. A large number of NatWest’s 7.5 million personal banking customers and more than 100,000 Ulster Bank customers were affected.
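CA-7 itself is a commercial product, but why a scheduler failure cascades so widely can be sketched in a few lines. The toy model below is hypothetical (the job names and logic are illustrative, not CA-7’s actual behaviour): downstream jobs only run once their dependencies complete, so one failed upstream job – or a botched upgrade to the scheduler itself – blocks everything queued behind it.

```python
# Toy illustration (not CA-7 itself) of why a job scheduler sits on the
# critical path: each job runs only after its dependencies complete, so a
# single failed upstream job blocks every job downstream of it.

def run_schedule(jobs: dict[str, list[str]], failed: set[str]) -> list[str]:
    """Run jobs in dependency order; block any job whose dependency failed."""
    completed: list[str] = []
    blocked = set(failed)
    pending = dict(jobs)
    progress = True
    # Simple repeated-pass topological execution, enough for a toy example.
    while pending and progress:
        progress = False
        for name, deps in list(pending.items()):
            if name in blocked or any(d in blocked for d in deps):
                blocked.add(name)       # failure propagates downstream
                del pending[name]
                progress = True
            elif all(d in completed for d in deps):
                completed.append(name)  # all dependencies met; job runs
                del pending[name]
                progress = True
    return completed

# Hypothetical overnight pipeline for a retail bank.
jobs = {
    "ingest_payments": [],
    "update_ledgers": ["ingest_payments"],
    "customer_balances": ["update_ledgers"],
}

# If the first step fails (say, after a bad scheduler upgrade), nothing
# downstream runs and customer balances are never updated.
print(run_schedule(jobs, failed={"ingest_payments"}))  # []
```

One stalled job at the head of the chain, and millions of account updates simply never happen – which is broadly what customers experienced.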

Outsource the problem

So why did it happen? BBC business correspondent Robert Peston wondered if outsourcing might be behind the problem, while an RBS spokesperson blamed RBS’ own systems. The fact that no-one can be sure who is looking after what hardly fills RBS customers with confidence. Fast forward to 2013 and another so-called glitch meant that RBS customers went to bed with healthy current accounts and woke up apparently in debt. Once again, people were left high and dry by decades of underinvestment in their bank’s IT. Coming as it did on Cyber Monday, the incident showed that RBS’ sense of timing was no better than its customer service.

Quoted in the MailOnline, RBS chief executive Ross McEwan claimed the bank was now investing heavily in building “reliable” IT systems, just as Iain Chidgey of data management company Delphix noted that the increasing frequency of software glitches in the banking industry is often caused by insufficient testing. So, not really a glitch at all, then?

Meanwhile, a fourth NatWest ‘glitch’ has been attributed to a Distributed Denial of Service (DDoS) attack. Cyber attacks are difficult to predict but, with rigorous testing, straightforward enough to prepare for. They are nothing new, and customers must wonder how many glitches add up to poor IT planning and management.

Flight delayed

You will recall the technical fault that caused widespread flight disruption when a single line of computer code at the National Air Traffic Services (NATS) control centre at Swanwick failed.

NATS Chief Executive Richard Deakin explained that this glitch was “buried” among four million lines of code spread across 50 different systems. He said NATS was spending an extra £575 million to avoid a repeat, but Business Secretary Vince Cable accused the company of “skimping on large-scale investment”. Deakin went on to warn that updating some “elderly” systems posed a “challenge”.

And there is the problem. These core systems have evolved over time, and discovering the root of any issue is a significant challenge in itself. New applications have been layered on top of the original functionality until they have become huge, sprawling webs of interconnectivity. However, the underlying systems are sound. Many of the core applications finance houses run on their mainframes were written in COBOL, which has proven over many decades to be robust enough to provide ‘glitch-free’ service, and flexible enough to be modernized to meet the demands of 21st century consumers. What might be lacking is the resourcing, tooling and underlying investment required to nurture and evolve these systems to support business in 2015.

Take the RBS case. Much of its technology was never designed for a 24-hour, always-on world. When the original applications were built, banking was more straightforward. Transactions happened during opening hours and records were updated during an overnight batch run. Maintenance and upgrade work could take place out of hours. Now there is no ‘out of hours’, and failed upgrades or similar maintenance tasks have an immediate and significant impact. As NATS’ Richard Deakin noted, improvements must be made “while the engine was still running”.
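The classic overnight-batch model is easy to sketch. This is a hypothetical toy, not RBS code: transactions are queued during the day and account balances are only brought up to date in a single end-of-day run – acceptable when branches close at 5pm and the system has the night to itself, but a liability when there is no longer any window in which the ledger can safely go quiet.

```python
from dataclasses import dataclass

# Minimal sketch (hypothetical, not any bank's real system) of overnight
# batch posting: the stored balance is stale all day and is only corrected
# when the nightly batch drains the day's transaction queue.

@dataclass
class Account:
    balance: int = 0  # balance in pence, updated only by the batch run

def overnight_batch(accounts: dict[str, Account],
                    queued: list[tuple[str, int]]) -> None:
    """Apply the day's queued transactions in one pass, as a nightly job would."""
    for account_id, amount in queued:
        accounts[account_id].balance += amount
    queued.clear()  # the day's queue is drained once the batch completes

accounts = {"A1": Account(balance=10_000)}
day_queue = [("A1", -2_500), ("A1", 1_000)]  # a debit and a credit, queued

# Until the batch runs, the stored balance is stale; if the batch fails
# mid-run, customers see balances that are simply wrong.
overnight_batch(accounts, day_queue)
print(accounts["A1"].balance)  # 8500
```

The design assumes an uninterrupted quiet period every night; remove that assumption, as 24-hour banking does, and any failed batch or upgrade is visible to customers immediately.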

It doesn’t have to be like this

But blaming “elderly” systems is disingenuous. Investment simply needs to catch up with the business demands being placed on IT today. With support, these core applications can be ‘retuned’ to handle the increasing demands.

The reasons for the ‘glitches’ outlined above are as many and varied as the industries they represent and there is no simple ‘just do this’ panacea.

My point is that banks and other financial houses owe it to their customers to achieve a better understanding of their application landscape and improve how they undertake remedial and innovative work.

Micro Focus and Borland help organizations bring more stability to their mission-critical mainframe systems and so avoid the need for colourful phraseology. Our customers include many high-profile finance and insurance houses with IT estates very similar to, for example, NatWest’s.

The Micro Focus Mainframe Solution, for example, offers mainframe owners end-to-end visibility of the application portfolio. Tools such as Enterprise Developer and Test Server can bring modern efficiencies, more transparent quality controls and improved delivery cycles to trusted, business-critical mainframe environments.


With our support, our customers need not hope that people will continue to see poor IT housekeeping and a lack of foresight as unavoidable ‘computer quirks’. As the body of evidence grows, that looks increasingly like wishful thinking. The bottom line is surely to invest in either better tooling, or a bigger dictionary.


Andy King

UK, Ireland and South Africa Country Manager