Lucky for some: z13 launches today

Micro Focus is an IBM Partner and virtually all of our technology complements the legendary IBM Mainframe. We’re understandably very excited about the z13 news from IBM. Derek Britton explains why.

IBM today announced a new generation of Mainframe, the z13, which should help businesses globally do more – with more power, more mobile accessibility, more security and more efficiency than ever before.

The Mainframe of today is very different from the Mainframe that launched 51 years ago – less power consumed, less environmental impact and lower total cost of ownership, to name a few massive benefits – as IBM continues to invest and innovate to support the growing needs of today’s economy.

And it is very much a digital economy: “The z13 is designed to handle billions of transactions for the mobile economy.  Only the IBM mainframe can put the power of the world’s most secure datacenters in the palm of your hand,” said Tom Rosamilia, senior vice president, IBM Systems.

Chris Livesey, Micro Focus CMO, comments “As a long term IBM partner, Micro Focus have provided innovative enterprise development and test software for the IBM mainframe environment over many years. IBM z13 is the industry leading business execution environment for enterprise application workload where scalability, performance and extensibility are core requirements for successful IT service delivery. Micro Focus is delighted to support the latest generation of IBM mainframe technology innovation, the z13”.

For more insights, you can find the IBM press release here.

A Solid Foundation

The IBM release builds on a long heritage of innovation, of course – more than 50 years of it, in fact. Last year we helped celebrate the 50th birthday of the IBM Mainframe, as well as providing a whole range of support to our mainframe customers.

We’ve explored ways of helping mainframers get ahead of the compliance game, tackle their growing IT backlog, deliver more value from their mainframe outsourcing agreements, and offered new solutions to improve end-user efficiency and green screen modernization.

And we’ve taken our message on the road. Our Developer Days running across North America – often in conjunction with IBM – are spreading in popularity, geography and frequency (see the latest schedule here and drop in to see how efficiently we can help your application modernization efforts when we’re passing).

We take pride in attending the frequent #mainframedebate on Twitter, we have our own dedicated Mainframe50 account and – hands on hearts – we have delivered over 20 mainframe-themed blogs in the past 12 months.


Best of Friends

If the mainframe is the mainstay production engine of choice for business, then COBOL is its application language of choice. Micro Focus has a rich 39-year heritage with COBOL as our mainstay. And speaking of history, we cheered the amazing Grace Hopper’s 107th birthday, and we’re supporting a movie being made about her stellar contribution to IT, which will doubtless feature some of the very first mainframes.


Over the years we’ve built truly outstanding Enterprise products to help IT shops modernise their ‘legacy’ estates in a pain-free, risk-free way – products that complement the mainframe hardware and systems provided by IBM. Recently, IBM endorsed Micro Focus’ application development technology for IBM mainframes, Enterprise Developer.

In April 2013, Greg Lotko, former VP and business line executive, IBM System z, said, “We are continually working with our technology partners to help our clients maximise the value in their IBM mainframes and this latest innovation from Micro Focus is a great example of that commitment.”

More is More 

We see the opportunity to leverage the mainframe to deliver more business value as better than ever, whether it’s helping someone understand the enterprise application estate they have ignored for years because it just keeps working, right through to mainframe development and testing solutions. From taking the first modernisation steps and making mainframe interface users more effective, to ensuring that our networks still communicate at the volumes and speeds we now demand, Micro Focus is truly in this game. The COBOL of old still runs the world today – and the COBOL of old can be the COBOL of new, as modern as the new z13.

Yesterday, ahead of the z13 launch, IBM’s Deon Newman announced ‘we’re ready’. Governments, federal agencies, financial services concerns, insurance companies, travel and transportation companies, educational establishments – whoever you are – if you have invested in this brilliant technology in the past and would like to bring it into the future, we’re more than ready too!

Have a point of view about the new mainframe? Let us know!


Outsourcing – Extracting maximum value from the Mainframe

Many organizations are choosing to explore outsourcing – contracting out all or part of their business processes to an outsourcer, also known as a systems integrator (SI). This enables the organization to focus on its core competencies while mitigating any weaknesses by drawing on the expertise of an outsourcer.
This blog explores the trend towards outsourcing and its pitfalls, offering guidance on strengthening the partnership between organizations and their outsourcers by addressing some ongoing concerns.

The here and now

According to recent research, which polled 590 CIOs and IT directors from nine countries around the world, nearly half of all organizations with mainframes are currently outsourcing the development and maintenance of their mainframe applications to SIs, and over 60% of respondents say they have some form of outsourcing agreement. The outsourcing market has grown enormously over the past decade and the trend looks set to continue, with the market expected to reach USD 4.49 billion globally by 2020.

By outsourcing, organizations are aiming to derive business value, yet the difficulty of establishing and managing an effective and cost-efficient outsource model is well-known to organizations across the globe. The result: an operational imbalance between organizations and their external suppliers – and the industry seems to agree…

Outsourcing reality

The mainframe has been the bedrock for masses of IT environments over the past fifty years and will continue to be so according to research. Yet, many organizations are looking to leverage the reliability and capabilities of the mainframe to accomplish even more – and, as such, an increasing number of CIOs are looking towards outsourcing.

Over many years, key applications have advanced to meet business demand, and the skills required to maintain and develop these applications have evolved in step. The direct result is a well-publicized skills deficit within mainframe development, where demand outweighs supply. College leavers have limited COBOL programming knowledge, and object-oriented languages such as Java are currently the ‘in thing’. Consequently, recent years have seen an increasing number of organizations exploring outsourcing options so they can benefit from skills their in-house teams lack.

A 2012 study led by Compuware Corporation surveyed 520 CIOs and examined attitudes towards – and experiences with – mainframe outsourcing. The study outlines that:

  • 71% of organizations are frustrated by the hidden costs of mainframe outsourcing
  • 67% expressed dissatisfaction with the quality of new applications or services provided by the outsourcer
  • 88% of organizations on CPU consumption-based pay structures believe their outsourcer could manage costs better
  • 68% of organizations outsource maintenance of mainframe applications because their in-house team no longer has the knowledge to maintain them
  • 80% of organizations believe difficulties in knowledge transfer are impacting the quality of outsourced projects.

Though the results outlined above depict a mixed experience overall, it’s important to recognize the vital role an outsourcer can play when the balance is right. A good understanding of the challenges that may arise will enable an organization considering outsourcing to stay one step ahead, providing preparation time to implement the processes and technology that ensure a successful relationship.

The challenges of outsourcing

Let’s consider a number of typical concerns facing organizations looking to outsource application maintenance for parts of their IT portfolio:

Inherited application complexity

Many years of innovation and change have inevitably created a highly complex application environment. As a result, getting up to speed is often difficult and time-consuming for both parties, as access to vital application knowledge is slow.

Difficulty of task

More often than not, the decision to outsource part of the portfolio is motivated by the difficulty, cost, or sheer effort of the client doing the work itself. Large ‘legacy’ systems are often poorly documented, having been written and maintained by many developers over the years. This lack of insight and inconsistency of approach makes them difficult to enhance and innovate. Sometimes the outsourcer inherits unexpected challenges, immediately jeopardizing the initial objectives.

Reliance on older technology

It can sometimes come as a surprise that the existing processes to support rapid application change are dated at best. While outsourcing the task of changing applications may give the client more people to do the work, it does not fundamentally improve the efficiency of the process. Often there is a reliance on older technology and processes that are not fit for 21st century IT delivery or user expectations.

Limited delivery and testing cycles

Another significant bottleneck is the normally highly-regimented schedule for delivery and testing. Driven by hardware or system constraints, there are typically fixed windows of opportunity for development and QA phases. With such delay comes rework, and with rework comes additional resource burden and cost – each coding, debugging, unit test and QA phase consumes vital resources. In many cases, though, increasing Million Instructions Per Second (MIPS) capacity to accommodate the outsourcing Service Level Agreement (SLA) is not an option.

Client IT resources are precious

These include key staff, who are constantly in firefighting mode, as well as the hardware and infrastructure, which keeps the whole operation running. While adding extra SI staff to the mix might provide more developer resource, meeting this new increase in demand for infrastructure is not easy. Additional hardware resources may well be needed and day-to-day response times may be longer if the outsourcer’s staff are in a different time zone.

Getting the balance right for outsourcing success

While IT organizations require better value, faster turnaround, enhanced quality deliverables, and innovation from their SIs, the systems integrators struggle to contain costs, cope with inherited application complexity, and manage large project teams – all of whom may be accessing the mainframe and further increasing MIPS usage. Understandably, there will be obstacles along the journey.

Getting the right balance ensures outsourcing success, and that comes down to having the right technology. It should focus on knowledge transfer and the quality of code changes, enable a higher degree of quality assurance, and deliver faster turnaround of delivery cycles. Most importantly, it must provide significantly more computing capacity to get the job done efficiently.

To ensure both client and SI gain, they need to:

Gain a comprehensive understanding of application portfolios

A solid knowledge foundation enables architects to quickly identify ways to boost application efficiency and flexibility as well as accelerate optimization activities and ongoing maintenance.

Provide greater capacity for application change and testing

The latest integrated development environment (IDE) technology can improve productivity by up to 40% and remove any capacity bottlenecks, enabling Service Delivery and QA teams to cut through workload with unprecedented speed, subsequently accelerating delivery times.

Introduce quality assurance earlier in the process

Perform a variety of pre-production testing on low cost commodity hardware, avoiding unnecessary cost and delay. Meet delivery demands even at peak testing times without compromise.

Minimize mainframe usage and contention to reduce cost

Analyze, develop and test without incurring the costs of additional MIPS usage. Reduce the ongoing cost of mainframe testing resources and contain costs of expanding test resources by exploiting lower-cost alternatives.

The Micro Focus way…

From the start, Micro Focus helps efficiently manage outsourcing planning. By exposing the application landscape, it simplifies application complexity for better knowledge transfer and more accurate specifications. An enterprise development environment that supports cross-skilling removes mainframe constraints and lowers infrastructure cost through reduced MIPS usage – as a result, the hundreds of SI programmers may not have to touch the mainframe at all.

Micro Focus Mainframe Solutions address the imbalance, enabling organizations and their suppliers to meet the challenges of outsourcing and gain more value while reducing the hidden costs of the outsourcing contract.

Interested in finding out more? Make sure you read our white paper to take the first steps towards smarter outsourcing.


New horizons or same old scene?

We’ve now had a few days to settle into the new year, but as we take down the festive decorations and look at our business imperatives for 2015, is anything different in enterprise IT? This blog glances at the current picture of global technology, where so much seems like it is changing, but a lot appears to remain the same.

So is that the case? Well, yes and no. While the world appears to rotate ever faster each year and the pace of change in technology is at unprecedented levels, organizations the world over continue to rely on tried and trusted technology to run their most cherished business systems. Yet the notion that ‘tried and trusted’ equates to ‘staid and unchanging’ is far from reality.


The Mainframe is New?

IBM will shortly be unveiling a new product range, illustrating once again its continued investment in, and commitment to, mainframe technology and its enterprise customers. Forrester’s Richard Fichera suggests to readers that their “current large mainframe workloads will be with [them] for the long-term”, reflecting the findings of the ninth annual mainframe survey from BMC, “2014 Annual Mainframe Research Results: Bringing IT to Life Through Digital Transformation”. The findings were clear: the mainframe remains part of the long-term business strategy and continues to shape the future of IT, according to 91% of respondents.

COBOL: Practically Popular

Underpinning all this is the enterprise application technology of choice, COBOL. Yes, COBOL. One of the key barometers of usage and interest, the TIOBE index, shows COBOL at a lofty 13th position in its global language rankings for January 2015 – an all-time high, by our estimates. It certainly represents a rise in this key “indicator of the popularity” of the COBOL language, now in its 56th year. We at Micro Focus are delighted, but not surprised: our own experience tells us that COBOL remains the language of choice for critical enterprise applications the world over. It is as suited for core business systems today as when it was first conceived.
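Part of the reason is visible in the language itself: readable, English-like syntax and native fixed-point decimal arithmetic, which avoids the rounding surprises of binary floating point. Here is a minimal, purely illustrative sketch – the program name, fields and values are hypothetical, not drawn from any real system:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. INTEREST-CALC.
      * Illustrative sketch only: post one month's interest to a
      * balance. Names and values are hypothetical.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-BALANCE      PIC 9(9)V99  VALUE 1500.00.
       01  WS-ANNUAL-RATE  PIC 9V9999   VALUE 0.0150.
       01  WS-INTEREST     PIC 9(7)V99.
       PROCEDURE DIVISION.
      * Decimal arithmetic with explicit rounding - no binary
      * floating-point surprises.
           COMPUTE WS-INTEREST ROUNDED
               = WS-BALANCE * WS-ANNUAL-RATE / 12
           ADD WS-INTEREST TO WS-BALANCE
           DISPLAY "INTEREST POSTED " WS-INTEREST
           DISPLAY "NEW BALANCE     " WS-BALANCE
           STOP RUN.

Even a non-programmer can follow the business rule, which is exactly why such code has survived decades of maintenance by successive generations of developers.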


Looking Ahead

Back to thoughts of the year ahead. Industry analysts typically can’t resist the temptation to predict how the IT world will look in the year ahead, and 2015 is no different. Many commentators agree on the continued rise of the Internet of Things (IoT) and its potential to impact our daily lives, the continued expansion and proliferation of mobile technologies, and the rising belief that, in a maturing digital market, this is now the ‘age of the application’.

One industry observer, IDC, predicts the following trends in disruptive technology for 2015:

  • Nearly one third of total spending will be focused on new technologies such as mobile, cloud and big data
  • Wireless data will be the largest ($536bn) and fastest growing (13%) segment of telecom spending
  • Spending on the greater cloud ecosystem (public, private, enabling IT and services) will reach $118bn (almost $200bn in 2018), $70bn ($126bn in 2018) of which will be spent on public clouds
  • Worldwide spending on big data-related software, hardware, and services will reach $125bn.

While none of these are new phenomena, the scale of their predicted rise certainly is. Indeed, these volumes of expenditure only serve to cement the belief in the enormous market potential of what is still relatively new technology. The revolution is upon us.

We’re all socialites now

As of 2014, around 1.8bn internet users have accessed social networks; 170 million from the United States. Social media seems to reach beyond the trivialities of consumer use. Perhaps more interestingly, big business seems to be embracing it as a genuine business tool, too. Organizations including the Bank of England are using Twitter and Facebook to help test the market. Similarly, a Facebook application, ‘Link, Like, Love’, enables American Express cardholders to link their cards to a dashboard of deals from the likes of Whole Foods, Dunkin’ Donuts and Virgin America. Five million ‘Likes’ suggest that other companies may follow this route.

The CIO is dead, long live the CIO

The jury is still out on how IT leadership will collaborate, and behave, in supporting the business. As Ian Cox explains on CIO.com this month, “CEOs are under pressure to move their business into the digital world” and “more of them will look to their CIO for help in leading the transformation. And that will need to be a different type of CIO if they are to make a success of digital”.

In his particularly lucid piece, What can the CIO expect?, he goes on to mention the continued challenge of CIO and CMO teamwork, but offers a positive conclusion: “There is no war between the CIO and CMO”.

Hear, hear. Instead, the right technology can deliver for all parties. It doesn’t even have to be brand new…

Find out how Micro Focus supports innovation through smart technology by dropping us a line, and discover for yourself why COBOL remains the enterprise application language of choice at a forthcoming Developer Day event.


www.microfocus.com

P.S. The definition of the TIOBE index can be found here.

When is a glitch not a glitch?

Micro Focus Country General Manager Andy King looks at some high-profile IT failures, from the Obamacare website and the Co-operative Bank to the NATS computer problem that grounded dozens of flights and stranded hundreds of passengers.

How a harmless word can disguise dangerously inadequate core system funding

A recent Financial Conduct Authority press release announced hefty £42m fines for the RBS group of banks. These penalties related to the service outages that denied more than 6.5 million customers access to their funds for a number of weeks in 2012.

There are many more recent stories. These include high-profile problems with the Obamacare website and at the Co-operative Bank. Most readers will recall recent issues with the NATS computer that grounded dozens of flights and hundreds of passengers.

The common denominator is that in each case, fundamental IT system failures were dismissed as a ‘glitch’. It is my view that they are nothing of the sort. Without launching an etymological crusade, in my world a glitch is a minor technical aberration – a skip in a downloaded music track, for example. It is not something that paralyses international transport or banking systems. These are no ‘glitches’, and in some cases those responsible for maintaining supposedly robust IT infrastructures have admitted as much. But while we continue to use such benign terminology, we risk letting them off the hook. Let’s review two of these cases in more depth.

In 2012, NatWest staff tried to install an update to CA-7, the job scheduling and workflow automation software behind the RBS payment processing systems, a package commonly used by banks and other large enterprises running IBM mainframes. A large number of NatWest’s 7.5 million personal banking customers and more than 100,000 Ulster Bank customers were affected.

Outsource the problem

So why did it happen? BBC business correspondent Robert Peston wondered if outsourcing might be behind the problem, while an RBS spokesperson insisted the fault lay with RBS’ own systems. The fact that no one can be sure who is looking after what cannot fill RBS customers with a great deal of confidence. Fast forward to 2013 and another so-called glitch saw RBS customers go to bed with healthy current accounts and wake up apparently in debt. Once again, people were left high and dry by decades of underinvestment in their bank’s IT. Coming as it did on Cyber Monday, the failure showed that RBS’ sense of timing was no better than its customer service.

Quoted in the MailOnline, RBS Chief Executive Ross McEwan claimed the bank was now investing heavily in building “reliable” IT systems, just as Iain Chidgey of data management company Delphix noted that the increasingly frequent software glitches in the banking industry are often caused by insufficient testing. So, not really a glitch at all, then?

Meanwhile, a fourth NatWest ‘glitch’ has been attributed to a Distributed Denial of Service (DDoS) attack. Cyber attacks are difficult to predict but, with rigorous testing, straightforward enough to prepare for. They are nothing new, and customers must wonder how many ‘glitches’ add up to poor IT planning and management.

Flight delayed

You will recall the technical fault that caused widespread flight disruption when a single line of computer code at the National Air Traffic Services (NATS) control centre at Swanwick failed.

NATS Chief Executive Richard Deakin explained that this glitch was “buried” among four million lines of code spread across 50 different systems. He said NATS was spending an extra £575 million to avoid a repeat, but Business Secretary Vince Cable accused the company of “skimping on large-scale investment”. Deakin went on to warn that updating some “elderly” systems posed a “challenge”.

And there is the problem. These core systems have evolved over time, and discovering the root of an issue is a significant challenge in itself. New applications have been layered on the original functionality to create huge, sprawling webs of interconnectivity. However, the underlying systems are sound. Many of the core applications used by finance houses were built in COBOL, which has proven over many decades to be sufficiently robust to provide ‘glitch-free’ service, and flexible enough to be modernized to meet the demands of 21st century consumers. What might be lacking is the resourcing, tooling and underlying investment required to nurture and evolve these systems to support business in 2015.

Take the RBS case. Much of their technology was never designed for a 24-hour, always-on world. When the original application was built, banking was more straightforward: transactions happened during opening hours and records were updated during an overnight batch run, so maintenance and upgrade work could take place out of hours. Now there is no ‘out of hours’, and failed upgrades or similar maintenance tasks have an immediate and significant impact. As NATS’ Richard Deakin noted, improvements must be made “while the engine was still running”.
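To make that architectural point concrete, here is a minimal, purely hypothetical sketch of the kind of overnight batch posting job that era assumed – read the day’s transaction file after close of business, update the balance, done. The file name and record layout are invented for illustration:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. NIGHTLY-POST.
      * Hypothetical sketch of an overnight batch run: sum the
      * day's transactions into a balance. Not a real bank system.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT TXN-FILE ASSIGN TO "txns.dat"
               ORGANIZATION IS LINE SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  TXN-FILE.
       01  TXN-RECORD.
           05  TXN-AMOUNT  PIC 9(7)V99.
       WORKING-STORAGE SECTION.
       01  WS-EOF          PIC X        VALUE "N".
       01  WS-BALANCE      PIC 9(9)V99  VALUE 0.
       PROCEDURE DIVISION.
      * Sequential pass over the day's transactions - the job
      * assumes nothing else is updating the accounts meanwhile.
           OPEN INPUT TXN-FILE
           PERFORM UNTIL WS-EOF = "Y"
               READ TXN-FILE
                   AT END MOVE "Y" TO WS-EOF
                   NOT AT END ADD TXN-AMOUNT TO WS-BALANCE
               END-READ
           END-PERFORM
           CLOSE TXN-FILE
           DISPLAY "END-OF-DAY BALANCE " WS-BALANCE
           STOP RUN.

The design assumes the system is quiet while the job runs – a safe assumption decades ago, and precisely the assumption that no longer holds in an always-on world.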

It doesn’t have to be like this

But blaming “elderly” systems is disingenuous. Investment simply needs to catch up with the business demands being placed on IT today. With support, these core applications can be ‘retuned’ to handle the increasing demands placed on them.

The reasons for the ‘glitches’ outlined above are as many and varied as the industries they represent and there is no simple ‘just do this’ panacea.

My point is that banks and other financial houses owe it to their customers to achieve a better understanding of their application landscape and improve how they undertake remedial and innovative work.

Micro Focus and Borland help organizations bring more stability to their mission-critical mainframe systems and so avoid the need for colourful phraseology. Our customers include many high-profile finance and insurance houses with profiles very similar to, for example, NatWest.

The Micro Focus Mainframe Solution, for example, offers mainframe owners end-to-end visibility of the application portfolio. Tools such as Enterprise Developer and Test Server can bring modern efficiencies, more transparent quality controls and improved delivery cycles to trusted, business-critical mainframe environments.

blog_images.10timesvalue

With our support, our customers don’t have to hope that people will continue to excuse poor IT housekeeping and a lack of foresight as unavoidable ‘computer quirks’ – and as the body of evidence grows, that hope looks ever more like wishful thinking. The bottom line: invest in better tooling, or in a bigger dictionary.


Andy King

UK, Ireland and South Africa Country Manager