Lucky for some: z13 Launches today

Micro Focus is an IBM Partner and virtually all of our technology complements the legendary IBM Mainframe. We’re understandably very excited about the z13 news from IBM. Here’s why, writes Derek Britton.

IBM today announced a new generation of Mainframe, the z13, which should help businesses globally do more – more powerfully, more accessibly via mobile, more securely and more efficiently than ever before.

The Mainframe of today is very different to the Mainframe that launched 51 years ago – less power consumed, less environmental impact and lower total cost of ownership, to name a few massive benefits – as IBM continues to invest and innovate to support the growing needs of today’s economy.

And it is very much a digital economy: “The z13 is designed to handle billions of transactions for the mobile economy. Only the IBM mainframe can put the power of the world’s most secure datacenters in the palm of your hand,” said Tom Rosamilia, senior vice president, IBM Systems.

Chris Livesey, Micro Focus CMO, comments “As a long term IBM partner, Micro Focus have provided innovative enterprise development and test software for the IBM mainframe environment over many years. IBM z13 is the industry leading business execution environment for enterprise application workload where scalability, performance and extensibility are core requirements for successful IT service delivery. Micro Focus is delighted to support the latest generation of IBM mainframe technology innovation, the z13”.

For more insights, you can find the IBM press release here.

A Solid Foundation

The IBM release builds on a long heritage of innovation, of course. 50 years of it, in fact. Last year we helped celebrate the 50th birthday of the IBM Mainframe, as well as providing a whole range of support to our mainframe customers.

We’ve explored ways of helping mainframers get ahead of the compliance game, tackle their growing IT backlog, deliver more value from their mainframe outsourcing agreements, and offered new solutions to improve end-user efficiency and green screen modernization.

And we’ve taken our message on the road. Our Developer Days running across North America – often in conjunction with IBM – are spreading in popularity, geography and frequency (see the latest schedule here and drop in to see how efficiently we can help your application modernization efforts when we’re passing). #DevDay

We take pride in attending the frequent #mainframedebate on Twitter, we have our own dedicated Mainframe50 account and – hand on heart – we delivered over 20 ‘Mainframe’-themed blogs in the past 12 months.


Best of Friends

If the mainframe is the mainstay production engine of choice for business, then COBOL is its application language of choice. Micro Focus has a rich 39-year heritage with COBOL as its mainstay. And speaking of history, we cheered the amazing Grace Hopper’s 107th birthday, and we’re supporting a movie being created about her stellar contribution to IT, which will doubtless feature some of the very first mainframes.


Over the years we’ve built truly outstanding Enterprise products to help IT shops modernise their ‘legacy’ estates in a pain-free and risk-free way, products which are themselves complementary to the mainframe hardware and systems provided by IBM. Recently, IBM endorsed Micro Focus’ application development technology for IBM mainframes, Enterprise Developer.

In April 2013, Greg Lotko, former VP and business line executive, IBM System z, said, “We are continually working with our technology partners to help our clients maximise the value in their IBM mainframes and this latest innovation from Micro Focus is a great example of that commitment.”

More is More 

We see the opportunity to leverage the mainframe to deliver more business value as better than ever, whether it’s helping someone understand the enterprise application estate they have ignored for years as it just keeps working, right through to Mainframe development and testing solutions. From taking the first modernisation steps and making mainframe interface users more effective, to ensuring that our networks still communicate at the volumes and speeds we now demand, Micro Focus is truly in this game. The COBOL of old still runs the world today – and the COBOL of old can become the COBOL of new, as modern as the new z13.

Yesterday Deon Newman from IBM announced ‘we’re ready’ ahead of the z13 launch. Governments, Federal agencies, financial services concerns, insurance companies, travel and transportation companies, educational establishments – whoever you are – if you have invested in this brilliant technology in the past and would like to bring it into the future, we’re more than ready too!

Have a point of view about the new mainframe? Let us know!


Outsourcing – Extracting maximum value from the Mainframe

Many organizations are choosing to explore outsourcing – contracting out all or parts of their business processes to an outsourcer, also known as a systems integrator (SI). This enables the organization to focus on its core competencies, while mitigating any weaknesses by drawing on the expertise of an outsourcer.
This blog explores details of the trend towards outsourcing and its pitfalls, offering guidance on strengthening the partnership between organizations and the outsourcer by addressing some ongoing concerns.

The here and now

According to recent research, which polled 590 CIOs and IT directors from nine countries around the world, nearly half of all organizations with mainframes are currently outsourcing the development and maintenance of their mainframe applications to SIs. Over 60% of respondents say they have some form of outsourcing agreement. The outsourcing market has grown vastly over the past decade and the trend is set to continue, with the market expected to reach USD 4.49 billion globally by 2020.

By outsourcing, organizations are aiming to derive business value, yet the difficulty of establishing and managing an effective and cost-efficient outsource model is well-known to organizations across the globe. The result: an operational imbalance between organizations and their external suppliers – and the industry seems to agree…

Outsourcing reality

The mainframe has been the bedrock for masses of IT environments over the past fifty years and will continue to be so according to research. Yet, many organizations are looking to leverage the reliability and capabilities of the mainframe to accomplish even more – and, as such, an increasing number of CIOs are looking towards outsourcing.

Over many years, key applications have advanced to meet business demand, and the skills required to maintain and develop these applications have evolved in support. As a direct result, a well-publicized skills deficit has emerged within mainframe development, whereby demand outweighs supply. College leavers have limited COBOL programming knowledge, and other object-oriented languages, such as Java, are currently the ‘in-thing’. Consequently, recent years have seen an increasing number of organizations exploring outsourcing options so they can benefit from skills their in-house teams lack.

A 2012 study led by Compuware Corporation surveyed 520 CIOs and examined attitudes towards – and experiences with – mainframe outsourcing. The study outlines that:

  • 71% of organizations are frustrated by the hidden costs of mainframe outsourcing
  • 67% expressed dissatisfaction with the quality of new applications or services provided by the outsourcer
  • 88% of organizations on CPU consumption-based pay structures believe their outsourcer could manage costs better
  • 68% of organizations outsource maintenance of mainframe applications because their in-house team no longer has the knowledge to maintain them
  • 80% of organizations believe difficulties in knowledge transfer are impacting the quality of outsourced projects.

Though the results outlined above depict an overall mixed experience, it’s important to recognize the vital role an outsourcer can play when the balance is right. A good understanding of the challenges which may arise will enable an organization considering outsourcing to be one step ahead, providing preparation time to implement processes and technology to ensure a successful relationship.

The challenges of outsourcing

Let’s consider a number of typical concerns facing organizations looking to outsource application maintenance for parts of their IT portfolio:

Inherited application complexity

Many years of innovation and change have inevitably created a highly complex application environment. As a result, getting up to speed is often difficult and time-consuming for both parties, as access to vital application knowledge is slow.

Difficulty of task

More likely than not, the decision to outsource part of the portfolio is motivated by the current difficulty, cost, or sheer effort of the client doing the tasks themselves. Large ‘legacy’ systems are often poorly documented, written and maintained by many developers over the years. This lack of insight and inconsistent approach makes them difficult to enhance and innovate. Sometimes the outsourcer can inherit unexpected challenges, immediately jeopardizing the initial objectives.

Reliance on older technology

It can sometimes come as a surprise that the existing processes to support rapid application change are dated at best. While the client might outsource the task of changing applications to gain more people to do the work, this doesn’t fundamentally improve the efficiency of the process. Often there is a reliance on older technology and processes which are not fit for 21st-century IT delivery or user expectations.

Limited delivery and testing cycles

Another significant bottleneck is the normally highly-regimented schedule for delivery and testing. Driven by hardware or system constraints, there are typically fixed windows of opportunity for development and QA phases. Such constrained windows introduce delay; with delay comes rework, and with rework comes additional resource burden and cost – each coding, debugging, unit test and QA phase consumes vital resources. In many cases, though, increasing Million Instructions Per Second (MIPS) capacity to accommodate the outsource Service Level Agreement (SLA) is not an option.

Client IT resources are precious

These include key staff, who are constantly in firefighting mode, as well as the hardware and infrastructure, which keeps the whole operation running. While adding extra SI staff to the mix might provide more developer resource, meeting this new increase in demand for infrastructure is not easy. Additional hardware resources may well be needed and day-to-day response times may be longer if the outsourcer’s staff are in a different time zone.

Getting the balance right for outsourcing success

While IT organizations require better value, faster turnaround, enhanced quality deliverables, and innovation from their SIs, the system integrators struggle to contain costs, cope with inherited application complexity, and manage large project teams all of whom may be accessing the mainframe and further increasing MIPS usage. Understandably there will be obstacles along the journey.

Getting the right balance ensures outsourcing success, and that comes down to having the right technology. It should focus on knowledge transfer and the quality of code changes, enable a higher degree of quality assurance, and deliver faster cycle turnarounds. Most importantly, it must provide significantly more computing capacity to get the job done efficiently.

To ensure both client and SI gain, they need to:

Gain a comprehensive understanding of application portfolios

A solid knowledge foundation enables architects to quickly identify ways to boost application efficiency and flexibility as well as accelerate optimization activities and ongoing maintenance.

Provide greater capacity for application change and testing

The latest integrated development environment (IDE) technology can improve productivity by up to 40% and remove any capacity bottlenecks, enabling Service Delivery and QA teams to cut through workload with unprecedented speed, subsequently accelerating delivery times.

Introduce quality assurance earlier in the process

Perform a variety of pre-production testing on low cost commodity hardware, avoiding unnecessary cost and delay. Meet delivery demands even at peak testing times without compromise.

Minimize mainframe usage and contention to reduce cost

Analyze, develop and test without incurring the costs of additional MIPS usage. Reduce the ongoing cost of mainframe testing resources and contain costs of expanding test resources by exploiting lower-cost alternatives.

The Micro Focus way…

From the start, Micro Focus helps efficiently manage outsourcing planning. By exposing the application landscape, application complexity is simplified for better knowledge transfer and more accurate specifications. An enterprise development environment that supports cross-skilling removes mainframe constraints and lowers infrastructure cost through reduced MIPS usage – as a result, the hundreds of SI programmers may not have to touch the mainframe at all.

Micro Focus Mainframe Solutions address the imbalance, enabling organizations and their suppliers to meet the challenges of outsourcing and gain more value while reducing the hidden costs of the outsourcing contract.

Interested in finding out more? Make sure you read our white paper to take the first steps towards smarter outsourcing.


IT Debt – Can IT Pay Its Own Way?

Introduction

Coined a few years ago to help measure a specific industry trend, the phrase ‘IT Debt’ is now a de-facto term in the IT world. A quick Google search shows, however, that it is not well defined, and the phrase is often misused, misunderstood and applied generically instead of on a case-specific basis. This blog attempts to unpick the truth from the fable by defining IT Debt, and exploring its causes and the wider industry impact of such a phenomenon.

IT what?

‘IT Debt’ is a well-established term promoted by Gartner in 2010 to apply a quantifiable measurement to the backlog of incomplete or yet-to-be-started IT change projects. The accompanying research reported a rise in the amount of unfinished IT activity from a global, macro perspective. When they wrote their press release, Gartner had the “enterprise or public sector organization” front of mind as most likely to suffer from IT Debt, and were particularly focused on the backlog of application maintenance activities.

Let’s define those terms again here.

  • IT Backlog – outstanding work IT has to undertake in order to fulfil existing requirements
  • IT Debt – a quantifiable measurement of IT Backlog

IT Debt later started being used interchangeably with similarly debt-focused phrases, such as Technical Debt, IT backlog or – to borrow a phrase from Agile methodology – the stuff in the icebox. Looking objectively at the issue, it will be helpful to think of the IT Backlog as the focus of discussion – “IT Debt” is merely a way of measuring it.
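To make the distinction between the backlog and its measurement concrete, here is a deliberately simple sketch of pricing a backlog in money terms. The items, effort estimates and day rates are invented for illustration only – this is not Gartner’s actual methodology, just the general idea of turning a work list into a single figure:

```python
# Illustrative only: a toy model of "IT Debt" as the estimated cost
# of clearing an IT Backlog. All figures below are invented.

from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    est_days: float   # estimated effort to complete, in person-days
    day_rate: float   # blended daily cost of the team doing the work

    @property
    def cost(self) -> float:
        # The monetary "debt" carried by this single item
        return self.est_days * self.day_rate

# The IT Backlog: outstanding work items, however they are prioritized
backlog = [
    BacklogItem("Patch payments batch job", 15, 800.0),
    BacklogItem("Refactor customer enquiry screens", 40, 800.0),
    BacklogItem("Compliance report changes", 25, 950.0),
]

# The IT Debt: one number summarizing the whole backlog
it_debt = sum(item.cost for item in backlog)
print(f"IT Backlog: {len(backlog)} items; IT Debt ~ ${it_debt:,.0f}")
```

The point of the exercise is the last line: the backlog is the list, the debt is the single figure that the business can compare against budgets, which is precisely why the monetary framing attracted so much attention.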

How Did It Get Like This?

The concept of a lengthy ‘to do list’ is by no means a difficult one and is certainly not new in and of itself. IT or Data Processing departments have long maintained a list of work items, prioritized accordingly, and worked through that list in the same way any functional area of any organization might. The monetary value adds arguably greater clarity (and potentially, therefore, concern).

Of course this being an IT term, causative factors can be many and varied. There are a number of elements that can and will contribute to an organization’s IT Backlog in differing measures. It isn’t fuelled by a single element. It is not platform or technology specific. It builds up, application by application. The defining characteristic of a backlog is that it’s the cumulative effect of a number of contributory factors that have accrued over time.

The root causes of the IT Backlog – unearthed by research, and suggested by customers, partners and commentators alike – are wide and varied. They include:

  • Historical IT investments – The IT world is highly complex; supporting this complexity is an onerous task, and previous IT investment decisions may have been a good idea at the time but are now a heavy ongoing burden to keep running – there’s more on this here. Gartner echoed this major ‘budget’ concern in their original research too.
  • Current IT prioritization – With 70% of all IT spend typically going on keeping the lights on, clearing the backlog isn’t perceived as a revenue-generating activity, so it may go under the radar in favour of more customer- or revenue-centric initiatives. A strategy that sensibly and appropriately invests in what is essentially a housekeeping exercise is not easy to justify.
  • Human Resources – The lack of appropriate skills is another potential issue, because identifying the solution is one thing but getting ‘your people’ to resolve it can be quite another. Building a solution that requires very specific know-how might simply be seen as too difficult or costly to resource.
  • Unplanned Backlog – Pre-ordained, planned work on the backlog is one thing, but IT priority is seldom isolated from the business in this way. Organizations are at the mercy of shareholders, corporate events, regulatory bodies and even the judiciary. Compliance projects and M&A activities typically find their way to the top of the list unannounced, pushing other backlog activities further down.
  • New Technology / Innovation – Many CIOs and IT Managers will point to external pressures – such as the disruptive technologies companies must work with to maintain market share – that are causing them to delay other tasks.
  • Processes and Tooling – Incumbent technology and tools are not necessarily set up to deal with a lengthy application maintenance shortfall. The efficiency, or otherwise, of the execution of IT changes will have a bearing on how much backlog can be reduced, and when.
  • Improvement Process – With no rigor for monitoring and controlling the application portfolio, it is often harder to plan and prioritize application backlog activities systemically. Gartner suggested this in 2010, and more recent research suggests that only half of organizations have an appropriate process for managing the portfolio this way.
  • Vendor Relationships – Filtering the must-do from the nice-to-have and ensuring the right technical and third-party strategy is in place is an important IT task. Not adding to the longer-term backlog as a result of procurement decisions remains an important and ongoing challenge for the organization’s senior architects and decision makers.

This list is by no means exhaustive. In the whitepaper Modern Approaches to Rapid Application Modernization, IDC argue other potential culprits could include high maintenance costs, “rigid systems that remain resistant to change”, “lack of interoperability” and “outdated user interface technology”. Of course, the chances are that no two organizations will have the same blend of factors. In truth, it’s going to be an unhelpful cocktail of any number of these issues.

Wherever it resides, it’s not a Mainframe Problem

If IT Backlogs exist, they obviously live somewhere. They pertain to or manifest themselves in certain corporate servers. Yet in the IDC report (see above), there is no mention of platform as a salient factor in shaping IT Backlog. No link at all.

The IT Backlog is simply the confluence of any number of factors – tools, process, people, politics, available cash, desire to change, strategy – that will contribute, in greater or lesser concentrations, to create an application maintenance shortfall of work. It doesn’t follow that mainframe owners, or those running ‘legacy’ applications, are grappling with IT Backlog more than anyone else. Indeed, frequently the opposite is true.

An IBM report noted a positive ‘cost of ownership’ for its System z against distributed servers. It noted that consolidating servers increased IT staff productivity and reduced operational costs – keeping the lights on – by around 57%, further proof that neither the mainframe nor alternative, mass-distribution systems are the culprit. Other research has also highlighted other causes of IT Backlog, choosing to look beyond platforms and ‘legacy’ applications.

From our own research through Vanson Bourne, we surveyed nearly 600 CIOs; the findings were captured in the whitepaper The State of Enterprise IT: Re-examining Attitudes to Core IT Systems. The IT Debt results by company size revealed an interesting perspective. While average estimates for IT Debt grow with company size, this trend only applies to the entire portfolio. Taking the mainframe portion alone, the largest companies actually witnessed a drop, making its percentage contribution towards IT Debt much lower than at smaller companies.

Clear evidence shows us that IT Backlog is not mainframe-specific. Indeed, it should not be pinned to any given platform at all. Correlating any link between the choice of platform and the consequent presence of IT Debt is misleading.

Paying Your Way

The term IT Debt was introduced to provide some clarity and impetus to what was observed as a growing industry concern. Reactions to this were varied as the debate ensued, though most agreed it was an issue that would require attention.

In our view, the headlong rush to rip and replace perfectly good business applications (many of them COBOL based) and replace them with new code that may – or may not – do exactly the same job doesn’t seem wise. Swapping one problem for another is like clearing an overdraft with a loan you can’t pay back – with terrible consequences for your finances.

Taking a more balanced view of tackling the factors contributing to the backlog avoids unnecessary risk in a long-term strategy for operational improvement. And help is at hand to tackle many of the root causes.

Arguably the best place to start is with greater focus on the backlog at a systemic level. Isolating and planning backlog busting projects is facilitated by new incarnations of application knowledge technology, and smarter tools for making application changes.

Getting the work done needs the right resources. Lots of people are learning COBOL and many of the companies supposedly struggling with ‘legacy’ systems are at the forefront of the digital economy.

Longer term, training new generations of Enterprise techies is important. Efforts from Micro Focus and IBM’s Master the Mainframe initiative suggest that the problem is being met by some smart thinking all round, while the recent celebrations around the mainframe’s 50th birthday have added further impetus to a broader appreciation of the value of that platform.

You’re All Set

The phrase ‘IT Debt’ is surrounded by ambiguity, which hasn’t helped the industry understand the problem well. Conjecture over the relevance of underlying platform hasn’t helped either. Backlogs are caused by multifarious issues, and it is important to examine those causes within your organization, rather than reacting to the headline of IT Debt.

Today, establishing a successful mitigation strategy that tackles root causes is a genuine possibility.  The backlog burden need not be out of control. Embracing change by seeking to enhance existing, valuable IT assets using smarter processes and technology, enables backlog to be managed effectively, and without introducing risky, unnecessarily draconian change.

 

 

Beyond Terminal Emulation: Rumba+ looking good with Award-winning usability

We always said it – and now we can prove it. Rumba+ really does represent a class-leading Terminal Emulation and User Interface modernization solution – good enough to have won a 2014 Product of the Year Award from Mobility Tech Zone.

From who…?

Mobility Tech Zone is the go-to web resource for the mobile broadband industry. It’s a rolling feed of news, analysis, product info and downloadable resources for mobility and communications professionals. It has a clear focus on the innovations they need to overcome the challenges of disruptive technologies like BYOD and the rush to mobile.

Mobility Tech Zone is sponsored by TMC and Crossfire Media, and TMC’s remit includes call center technologies – just one area where Rumba+ is proving to be a game-changer. It’s likely that the judges recognised the potential of Rumba+ to make a real difference to their specialist area. Maybe that’s why they were “very impressed” with Rumba+…

Award-winning mobility

Carl Ford, CEO and Community Developer, Crossfire Media, explains. “As leaders in the evolution to mobility we feel our award winners are delivering on enabling users by making carriers and enterprise networks capable of supporting our exponential data demands.”

In other words, Carl was looking for products that genuinely enable mobility in data transfer. That could almost be a product description for Rumba, which is bringing mobility to business applications and enabling a new view on established applications, bringing a fresh perspective on the world of green-screens – and all without changing a single line of code. Now that’s innovation.

As the Mobility Tech Zone award has recognised, Rumba delivers on the ‘enabling users’ aspect – major players like Allianz are accessing their core applications with user-friendly home tech, such as iPads, Windows and web browsers, to slash ‘on-boarding’ times and boost business efficiency.

Meanwhile Aviva Italia have used Rumba+ to improve the user experience for the people who use their applications – improving their chances of recruiting and retaining industry talent, as digital expert Matt Ballantine notes on his blog. Many other Rumba+ customers enjoy the productivity gains and bypass the accessibility issues plaguing nearly half the companies appearing in the ‘IT Growth and Transformation’ survey recently commissioned by managed services firm Control Circle.

Achievable modernization from Rumba+

Similarly, this Vanson Bourne survey discovered that 54% of CIOs believe their green screens are “negatively impacting end user retention and recruitment”, while 98% of respondents recognise that new features would boost productivity but think modernization is too difficult. Well, not anymore!

Rumba+ is risk-free, cost-effective technology that improves end-user efficiency without recoding. Need five more reasons to make the move? Try these.

Delivering modern user experiences is high on the CIO agenda. Rumba+ consistently outperforms other products in the terminal emulation space, as Adam Rates, Senior IT Manager at Allianz UK, notes in their case study – and Mobility Tech Zone agrees. Try it for yourself right here…


Every blog has its day

… or why we’re moving away from whitepapers

Imagine this. You have spent many thousands of dollars developing a software product or solution that resolves pain points for a rich portfolio of potential customers. They need this product. Your messaging is finely honed to address different audiences. Now comes the tricky part – what’s the best medium for delivering those messages?

Our industry is about 50 years old and is growing at a faster pace than ever. And in that time our mechanism for demonstrating market relevance has invariably been a lengthy, brow-furrowing analysis. Improving end user efficiency? Whitepaper. Bringing flexibility to agile development? Whitepaper. But how can time-poor CIOs and IT Leaders spend hours reading a 12-page document telling them that they are, er, time-poor?

And so on and so forth…

Sometimes, whitepapers are the best option for fully exploring a theme or idea. Micro Focus offers plenty of whitepapers, as do Borland. They are important for building credibility, and demonstrating the full understanding of a problem needed to convince the market – and our customers – that we have the solutions. But no-one ever wrote a cheque on the back of a whitepaper. At best, they may prompt a phone call, or a download. So what else is there? Tweets are too ephemeral, magazine articles are more approachable than whitepapers but opportunities are rare. Infographics have little substance. But there’s a better way.

Show me the money

Our blog is the go-to medium for complex ideas, simply expressed. It’s a quick-turnover platform so there is always some fresh content. Whether it’s part of a series or a here today, gone tomorrow one-off, the Micro Focus and Borland blogs are increasingly becoming the mouthpiece of both companies. And because we address many different audiences we need different voices – and the blog enables that flexibility. Whether it’s Derek Britton talking to mainframe owners, Chris Livesey talking to CIOs, or Frank talking to devs, we have the platform and the messages to engage, inform and surprise you. And not a 12-pager in sight.

See you there.

 

Bridging the Gap between success and failure

Few would argue that for most organizations, staying ahead of the competition requires continuous innovation to differentiate themselves from others in the marketplace.

To achieve this – to make the right decisions at the right time – they need to understand their business functions and IT infrastructure. In other words, they need the right knowledge.

This blog looks at how the IT knowledge gap can have a significant impact on an organization’s ability to innovate and how such risks can be prevented.

Some have failed to bridge that gap. The electronics giant Sanyo, retailer Comet and the well-documented case of Woolworths are all recent, and well-known, examples of organizations that have made a number of critical decisions about the health of their business. The fact they made the wrong call is undisputed. The question, then, is what lack of understanding caused them to choose the wrong path?

To better understand this, we need to explore a few questions: What is the Knowledge Gap, what creates it, and how do we prevent it?

What is a Knowledge Gap?

Organizations can become comfortable with the level of knowledge that is collectively held internally and therefore rely on a limited number of specialists within their core functions, particularly IT.

The organizations mentioned above faced unexpected challenges – an increase in demand, say, or a change in business strategy – and ‘the Knowledge Gap’ could have caused them to fail to react positively, or quickly, enough. The same could happen to yours.

Figure 1 illustrates an apparent gap in knowledge which can be exacerbated by a number of factors. Organizations could, for example, be working on multiple projects, planning a number of product launches and/or having to comply with new regulations. These are common themes and likely to occur on a regular basis, stretching organizations’ ability to successfully meet all demands.

In short – the Knowledge Gap is the distance between the capabilities an organization has and those they want.

What creates a Knowledge Gap?

There is no fixed list of factors which influence an organization’s Knowledge Gap, although in our experience the Gap is influenced by three main factors: People, Process and Technology (Figure 2).

Let’s look at each factor in more detail.


People

People underpin any organization’s success but, frequently, many rely on a few highly-skilled staff, and an unexpected change – such as those already outlined – can cause disruption.

Organizations are constantly evolving. The introduction of new IT systems, technology and processes – to name just a few examples – will impact the people within an organization, often requiring them to develop a new skill set. Any organization without the resources or capabilities to invest in new staff, or to further train current staff, will ‘fall short’ and a gap will appear.

Take, for example, the transition from the typewriter to the desktop PC. The shift required entire workforces to learn new skills and adapt their ways of working. That quantum leap could occur again…

Process

Business applications, systems and databases were originally built to suit the then-current work environment, staffing, processes and technology. But what was fit for purpose then – anything from 10 to 50 years ago – may be less so now. The challenges that modern businesses and organizations face today are very different, and those systems and processes may not be able to handle them efficiently.

Example? Take the move away from cheque books. What was once the default mechanism for transferring money is now being phased out. Most of us now bank online or on our mobiles, so banks have had to adjust their processes to suit.

Technology

Technology plays a big part in the creation of a Knowledge Gap. Generally speaking, technology has increased in complexity over the years to the extent that some organizations don’t always understand their own technological infrastructure – and usually have very few IT professionals with the knowledge to interpret these systems.

Consequently, the changes and alterations needed to bring the system up to modern standards are often postponed or delayed – impacting the organization’s ability to innovate in years to come. A common scenario for organizations is that decades-old technology is being asked to meet today’s more demanding specification requirements.

There’s no better example of how rapidly technology can change than the smartphone: GPS, access to thousands of apps and high-quality cameras are all features that were unimaginable just ten years ago.

And organizations themselves can expect to have thousands of applications, hiding thousands more interdependencies, each with hundreds, if not thousands, of individual programs, screens and points of data access. Manually altering such a network simply isn’t feasible.

What underpins all of these factors?

Time. The Knowledge Gap is not shaped overnight. Instead, it usually evolves and expands over many years. As time passes, complexity, change and deviation creep into a once well-designed system that worked efficiently but, a few decades later, is understood by very few people.

How do we prevent a Knowledge Gap?

In the 1980s, JVC’s VHS, Sony’s Betamax and Philips’ Video 2000 fought a format war for control of the consumer videotape market. The war raged for years; Betamax had the better technology, while VHS had the better marketing. By the mid-1980s VHS held over 80% of the US market, despite Betamax having had a two-year head start.

The success of VHS was due to a number of dynamics, including better distribution channels, stronger partnerships and more relevant business and marketing plans. The Betamax camp simply did not command the same level of market knowledge as the VHS camp.

This prime example of a Knowledge Gap can be linked to all three factors mentioned earlier: People, Process and Technology. When one or more of these factors fails to perform as expected or required, a gap in knowledge is likely to open up.

Close the Gap

So how can we prevent the gap? The right enabling technology can deliver efficiencies that close the Knowledge Gap and bridge the Betamax-style chasm before it opens up. With Micro Focus Enterprise Analyzer, developers have the detailed insight they need to support gap-bridging modernization activities and move forward with confidence.

If you want to know more about how Enterprise Analyzer can bridge the Knowledge Gap within your organization, then our white paper, Bridging the Knowledge Maintenance Gap, has all you need.

 

The shock of the new – Addressing the unexpected impacts of modernization

Nothing moves on faster than technology. We can probably all agree that the pressure to modernize in the world of IT is both inescapable and ongoing. The difference lies in how we address the issue – and the solution may not be where you’re looking.

An objective perspective

Application development tools really can address the challenges of modernization. And that’s not us talking. It’s the view of this recent – and objective – CIC Report which describes this phenomenon in detail. It’s also the subject of this blog, and the supporting webinar.

So how do industry analysts Creative Intellect Consulting, who research trends in the software market, suggest countering the unexpected impacts of modernization? Is ‘doing nothing’ a sensible option in resolving the problems of out-of-date infrastructure, inefficient ‘legacy’ development tools and the ongoing skills crisis?

Impact assessment statement

As motorists and lottery winners will attest, the phrase “unexpected impact” can mean different things. For the purposes of the CIC report, we’re talking about the good kind – where help arrives from an unexpected source. But help with what, exactly?

It’s the contemporary IT dilemma: striking a balance between keeping the lights on today and innovating for the future. Organizations reliant on IT scarcely need reminding that, as CIC point out, competition and technology “place relentless pressure on organizations”. ‘Must-have’ mobile technologies are particularly demanding of up-to-the-minute functionalities. It’s hard to keep up – and if you don’t, your competitors will simply pull ahead of you. That, at least, is not an option.

Keep calm and carry on …

One thing is clear. Keeping pace – or doing nothing – will cost you. “Sweeping change is expensive and disruptive”, but doing nothing might cost you your business – as it falls further and further behind. Let’s assess the impacts and possible resolutions of ‘doing nothing’.

Using obsolete hardware and outdated software is risky in itself. Combining the functionality of current core systems with the adaptability of new architectures enables businesses to keep up with hardware, software, database, middleware and security advancements, reducing that risk. A complex application portfolio also means high MIPS consumption, so increasing productivity will keep the maintenance backlog at bay and free up MIPS capacity.

Or do something new?

CIC acknowledge that ‘legacy’ software development tools are “less capable, less integrated and less productive than those for mainstream languages”, but new technologies can bridge the gap between older and newer languages. ‘Legacy’ systems working with the latest advancements? Now that could work.

The IT industry’s silo mentality reduces efficiency and productivity: COBOL and Java skills, for example, are not being cross-pollinated between teams. With cross-language application development, quality improves and competitiveness increases with it.

There will be further blogs, but those looking for a speedy resolution could do worse than keep reading. Softening the blow of modernization within the application development environment is close at hand….

Impact protection

The latest application development environments – Visual Studio and Eclipse – bring contemporary functionalities to the mainframe space.  According to CIC, the Micro Focus Enterprise Product Set straddles old and new by bringing “legacy language support to the mainstream (with) state-of-the-art IDEs”. Businesses use the latest application development tools to cut the maintenance backlog, improve productivity, and keep up with the pace of change.

By increasing productivity, the tools introduce new options for modernization strategies. As the seemingly permanent backlog of maintenance tasks diminishes, more room for development and innovation is created. With a modernized infrastructure and productive teams, inefficient ‘legacy’ development tools are a thing of the past. And as CIC acknowledge, “then you have options.”

The next blog unravels some typical modernization strategies and the options involved. It will reveal how analyzing and understanding your application portfolio will map out the innovative opportunities you need to absorb the modernization impact.

Until then – download your report, book your place on the webinar – and keep your seat belts secured.

The future of the mainframe: A CIO survey by The Standish Group

Standish Group recently undertook a survey of CIOs at Fortune 1000 companies about their use of the mainframe to discover what they believe the future holds for the mainframe.

What does the future hold for the mainframe? It’s a question that’s frequently asked, and the survey set out to answer it once and for all.

The survey findings give valuable insight into the perceptions and intentions of the CIOs:

  • 70% said that the mainframe plays a strategic role in their organization today, yet not one CIO considered that it would still play a central role in five years’ time
  • 59% propose to migrate core mainframe applications to a Windows, UNIX or Linux platform
  • 78% are either currently engaged in a modernization exercise or plan to be within 18 months – leaving 22% without a modernization plan.

This is just a snapshot of the main findings; you can read more detail here.

With over 600 successful migrations, Micro Focus has an unparalleled pedigree in proven mainframe modernization strategies and experience. To learn more about mainframe migration and the benefits it delivers, click here to download the white paper Survival of the Fittest, which explores three proven modernization strategies in detail.

Micro Focus achieves a position in the Leaders quadrant of Gartner, Inc.’s Magic Quadrant for Integrated Software Quality Suites.

Micro Focus has been recognized as a Leader based on our completeness of vision and ability to execute.

Achieving a position in the Leaders quadrant comes eighteen months after our acquisition of Borland and Compuware’s ASQ business. In that time Micro Focus has built on the strengths of its key requirements definition and management, test automation and change management products: Caliber, Silk and StarTeam, re-invigorating them and providing a clear direction for customers.

Micro Focus CEO, Nigel Clifford, says, “In our opinion, being positioned as a leader in the software quality space by Gartner is a testament to the successful integration of the Borland and Compuware products over the past eighteen months. We are delighted to be so highly‐regarded in this dynamic and fast growing market.”

You can view Gartner, Inc’s Magic Quadrant for Integrated Software Quality Suites here

You can also get your hands on free trials of the latest Micro Focus Automated Software Quality product releases: SilkTest, SilkPerformer, SilkCentral Test Manager, CaliberRDM and StarTeam Enterprise Edition.

Research paper: The current state of terminal emulation

The Micro Focus terminal emulation report evaluates the current state of this mature market.

Over the past six months, Micro Focus conducted hundreds of interviews with customers, partners and industry leaders to gauge the trends and realities of the terminal emulation market. Major analyst firms no longer cover terminal emulation as a market segment, so customers have been left on their own to make terminal emulation decisions regarding operating system, security and usability upgrades. Micro Focus therefore wanted to provide a snapshot of the industry to help customers make better decisions in their IT environments.

Download the report

Introducing the new cobol.com – COBOL Makes Life Better

The future’s never been brighter for COBOL. And that’s got to make life better for you, too.

With 50 years under its belt, COBOL is set to remain the dominant language for business applications for the next 50 years. Having consistently seen off the young pretenders, COBOL has continued to evolve to meet every new demand thrown at it, from both business and technology.

Business applications written in COBOL are faster, more precise and more powerful than ever. And it’s easier than ever to run them on the platforms that make the most business sense, both now and in the future.

Visit http://www.cobol.com for more information.