Budget Misinformation Abounds

The October issue of Government Executive features “Budget Musings,” which surveys the federal budget speculation swirling as we move toward the upcoming Presidential and Congressional elections. The article cites some shocking poll results about the average voter’s understanding of the federal budget, such as:

  • The average CNN poll respondent estimated that food stamps account for 10% of federal spending. In reality, food stamp programs account for 2% of federal spending.
  • In a Cornell University poll, respondents said they had never used a government social program – even though 44% of the group received Social Security checks and 40% had Medicare coverage.

The article also references scholar Norman J. Ornstein, co-author of It’s Even Worse Than It Looks, who predicts that if sequestration comes to pass, “1 million pounds of tainted meat would reach grocery store shelves” due to cuts in food processing and agriculture inspection. I mean no disrespect to the poll participants or Mr. Ornstein, but misinformation abounds.

I don’t know about you – but I find all the speculation about the budget confusing and unsettling. Rather than encouraging a truly pragmatic approach, the election season has flooded the budget discussion with scare-tactic campaign ads and misleading rhetoric – from both sides of the aisle.

At Micro Focus Federal, we’re doing our part to help agencies with mainframe-based systems and applications reduce their costs, often in year one. Unfortunately, a much larger, strategic approach is needed to create the overall savings that will keep sequestration cuts from coming to fruition. What are your ideas? How can we help? Let me know your thoughts. Connect with us in the comments section below, on Facebook or Twitter.

Mainframes in the Era of Mobility

The recent explosion of mobile applications has dramatically altered the consumer landscape, making it the norm for users and customers alike to expect access and support anytime, anywhere. With Cisco recently reporting that mobile-connected devices are set to exceed the world’s population this year, it’s no surprise that the surge is overflowing into the enterprise. While there are a few leading innovators in what is now being termed “enterprise mobility”, many are still struggling to take the first steps towards a streamlined strategy. The question is no longer “Do we?” but “How do we?”

Enterprise mobility is an opportunity to extend market reach, but not without risk. The resources required and the additions to IT infrastructure often weigh the enterprise down with high costs and performance strains on existing operations. While these newly developed applications may appear far from traditional, they in fact rely on the more traditional elements of IT infrastructure, including the mainframe.

To provide any valuable function, mobile applications need to access core mainframe-based business functions. With mainframe resources already extremely precious commodities, organizations must seek a strategy that adds new mobile capabilities cost-effectively without endangering functioning IT systems or degrading current performance levels.

Assessment of the IT Portfolio

To develop any sort of viable solution to the mobility challenge, it is first important to truly understand the current IT portfolio. A comprehensive view of the IT environment, based on application portfolio management (APM) technologies, will reveal more useful insights than analysis of any single application on its own. Importantly, it will help determine which applications to address in order to accommodate new mobile applications or other innovations. Indeed, you may even find ways to streamline activity using this insight – a major U.S. bank examined one area of its portfolio in preparation for a legislative compliance project and found that 40% of the code, and therefore 40% of the maintenance effort, was redundant. The bank was able to slash costs straight away, meaning the modernization effort paid for itself immediately.

This view must be accompanied by clarity around the purpose and value of each application and its interactions. While a surface view may flag certain applications for their age, a deeper dive may reveal the value they hold as competitive differentiators. Additionally, a holistic view will uncover redundancies and dependencies among applications. These actionable insights will direct the organization as to which applications can be removed, enhanced or modernized.

For CIOs and IT departments, accurate knowledge of the mainframe environment provides clarity around customer demand, operating costs, application value, staffing levels and system dependencies. When the goal is specifically to integrate mobile applications, a clear APM strategy is imperative for making informed decisions with all of the pieces of the puzzle in view.

Re-Use IT Infrastructure

To help manage the cost of a mobile strategy, organizations can look to re-use as much of their current, functioning IT resources as possible in mobile application delivery. Many organizations are tempted to build applications from the ground up, but the costs and complications can be astronomical, especially where back-end systems are rewritten for the sake of new mobile interfaces. Typically, the existing back-end core systems already provide the business functionality needed, so it is merely a case of establishing an integration layer between the new mobile applications and the back-end processes that support them. Although some scenarios may require rewriting or building anew, re-use should be pursued whenever possible to avoid complicating already complex IT tasks and incurring high costs.
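To make this concrete, below is a minimal, hypothetical sketch of the kind of back-end routine that lends itself to re-use. Because the business logic sits behind a plain parameter interface, an integration layer – a generated web-service wrapper, for instance – can call it from a new mobile front end without touching the logic itself. The program name, fields and availability rule are illustrative only, not drawn from any real system.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CHECKAVL.
      * Hypothetical back-end business routine. Its only contract
      * with the outside world is the parameter list below, so an
      * integration layer can expose it to mobile callers without
      * changing a line of the business logic.
       DATA DIVISION.
       LINKAGE SECTION.
       01  LS-POSTCODE      PIC X(10).
       01  LS-SERVICE-FLAG  PIC X.
       PROCEDURE DIVISION USING LS-POSTCODE LS-SERVICE-FLAG.
      *    Stand-in rule: service is available in the "98"
      *    postcode area. The real logic would live here.
           IF LS-POSTCODE(1:2) = "98"
               MOVE "Y" TO LS-SERVICE-FLAG
           ELSE
               MOVE "N" TO LS-SERVICE-FLAG
           END-IF
           GOBACK.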

Comcast Washington, a division of the leading communications company and cable service provider, faced a situation in which agents were unable to manage customer service calls efficiently because the mainframe-hosted customer service applications had no access to service availability data housed on a GIS system. This disconnect resulted in decreased customer satisfaction and the loss of potential subscribers. In the face of this business dilemma, Comcast Washington needed a solution that would improve workflow efficiency and connect customer service applications to the necessary data – without breaking the bank or wasting time.

In this case, the company chose a robust integration platform that fuses mainframe data with external applications. By adding this layer, Comcast Washington was able to utilize the otherwise functioning infrastructure already in place. The additional platform successfully integrated the external customer service application into day-to-day business operations, and, most importantly, it did so without any risk to the integrity of the existing systems. Customer service agents saw quick deployment and great ease of use: a drag-and-drop tool allows them to view the application on the mainframe and easily navigate to the customer information they need. For Comcast, the project translated directly into easy integration and reduced mainframe maintenance costs.

Across the board, a key benefit of modernizing applications through re-use is the savings – in the time, people and financial resources needed to support new IT initiatives such as cloud and mobile device management. For mainframe applications built on original COBOL code, the path to reuse is especially straightforward: the highly portable nature of the language means the same code can be redeployed elsewhere in the IT portfolio. As new business needs arise, COBOL applications can adapt to new deployment architectures and emerging technologies. Businesses have deployed COBOL systems onto dozens of environments, including .NET, Linux, JVM, Windows, UNIX and Cloud – making rehosting almost always a viable alternative to costly rewrites.
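As a simple, purely illustrative sketch of that portability, the fragment below contains nothing tied to any one operating system; the same source can be compiled for native Windows, UNIX or Linux execution – or, with tools such as Visual COBOL, to JVM or .NET bytecode – without modification.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RATECALC.
      * Illustrative only: a small business calculation with no
      * platform-specific dependencies, so the same source can be
      * recompiled unchanged for native, JVM or .NET targets.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-PRINCIPAL  PIC 9(7)V99  VALUE 25000.00.
       01  WS-RATE       PIC 9V9(4)   VALUE 0.0425.
       01  WS-INTEREST   PIC 9(7)V99.
       01  WS-EDITED     PIC Z,ZZZ,ZZ9.99.
       PROCEDURE DIVISION.
           COMPUTE WS-INTEREST = WS-PRINCIPAL * WS-RATE
           MOVE WS-INTEREST TO WS-EDITED
           DISPLAY "Annual interest: " WS-EDITED
           STOP RUN.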

Whatever the language or environment, by modernizing through reuse and redeploying the freed resources, businesses can future-proof their IT systems and ensure efficiency, cost savings and innovation moving forward.

Sanity-Check the Mainframe Workload

A comprehensive mobile strategy will, inevitably, demand more processing and throughput across the business, including on back-end systems, and that will incur cost. To enable the core systems to cope with the projected new workload from mobile apps, the organization can look for discrete, lower-value tasks or workloads that could be undertaken elsewhere. By selecting appropriate software life-cycle activities – or even application workloads – to run in their most appropriate environments, this “fit for purpose” exercise can help IT reduce system strain without losing the business benefits of the existing core infrastructure, thereby freeing up the additional capacity the new mobile applications will require.

Recently, Owens & Minor, Inc., a healthcare supply chain management company, found itself in a tricky situation. The company relies on its highly competitive business logic to provide exceptional customer service and support, so it could not risk any action that would jeopardize that logic – its biggest competitive advantage. At the same time, the organization was eager to continue driving innovation and business growth – it just needed to reduce costs that could then be reinvested.

To address the situation, Owens & Minor looked to modernize its ERP system. The organization estimated that a rewrite or replacement project would cost between $100 million and $200 million, while also putting the system’s unique business logic at risk. Instead, it moved forward with a modernization approach, moving the ERP system workload from the mainframe to a Microsoft Windows and SQL Server platform. The project garnered significant savings, both by forestalling future costs and by reducing annual maintenance costs.

The business case for mainframe rehosting can often be compelling, as it can reduce MIPS usage and cut operating costs. Depending on the size of the applications and activities being modernized, organizations can in some cases save up to 90% of the initial IT budget. While organizations such as KCS Railroad were able to take 90% of their IT budget off the books by migrating their entire mainframe environment, other companies have opted to rehost specific parts of their mainframe workload, streamlining key parts of their IT operation. For example, Banco Espirito Santo (BES) in Portugal projected savings of over $3 million per year simply by rehosting select parts of its mainframe testing activities, while also accelerating the delivery of key business system updates.

Ultimately, organizations can free up capacity – which can then be made available to support mobile initiatives – simply by identifying key bottlenecks and removing them with contemporary technology. For customers looking to embrace the new zEnterprise architecture, for example, the multi-platform flexibility of this consolidated mainframe ecosystem is a potentially huge benefit, offering new levels of choice.

Mobile applications cannot be ignored in today’s business environment – but neither can the associated risks. The potential IT burden should play a significant role in the enterprise’s decision-making process. By reusing what works, and by running systems where they add the most value, businesses can remove the resulting pressure on mainframe capacity and enter the mobile world with full confidence. The innovation that follows will prove invaluable as companies strive not only to adapt, but to excel in the evolving world of enterprise mobility.

This article first appeared in the IT industry publication Database Trends and Applications, October 2012.

The legacy myth

A problem by definition

Shut your eyes. Think of what the word legacy means to you. Anyone seeing a grandfather clock or similar? Chances are you’re picturing something outdated and archaic, handed down by a predecessor. That’s where you and the dictionary part company.

First things first
While the entry in Collins includes the standard ‘gift of … property by will… received from an ancestor’ line, there’s a more interesting definition: ‘a typically beneficial product from the past.’ Hold that thought. We’ll come back to it shortly.

Until the Games themselves started, ‘legacy’ was probably the buzzword of the summer of 2012 – with lots of worries about what we’d do with the Olympic infrastructure when the competitors moved on. There were concerns about whether we would be left enriched or bankrupted by what was handed on to us.

Which brings us neatly back to the point of this blog – and a lot of what you’re going to read in the next two – that is, what the term ‘legacy’ means within IT. And whether it’s correct.

Put simply, the word ‘legacy’ denotes software or hardware that has been superseded but is difficult to replace. It refers to older IT systems and outmoded software or platforms. The connotation is undeniably negative, condemning entire IT estates on the basis of suppositional, biased views and untested hypotheses.

It gets worse
Another source uses even more evocative language: “Legacy systems utilize outmoded programming languages, software and/or hardware … no longer supported by the vendors.” Other choice words include “expense, effort and potential risk” when moving “data and key business processes to more advanced and contemporary technologies[1].”

Techopedia isn’t in the legacy fan club, either: “Legacy … refers to outdated computer systems, programming languages or application software … used instead of available upgraded versions[2].” A legacy system, it adds, can be problematic due to compatibility issues, obsolescence or lack of security support. What a statement!

This attitude pervades much of the thinking among the enterprise community. ‘Problematic’ has somehow evolved into ‘catastrophic’ at Businessdictionary.com, which sees legacy as an “obsolete computer system that may still be in use because its data cannot be changed to newer or standard formats, or its application programs cannot be upgraded.”[3]

In summary, if you have a legacy system then you’ve got computer landfill.

Now, some of this is fair. Processes and terminology that are no longer relevant can confuse programmers, and no-one is pretending that ‘legacy’ and ‘cutting edge’ systems are interchangeable. To go further, most of us would like to have the most advanced technology at our disposal at all times, ready to move on to the Next Big Thing as soon as possible.

Wait a minute…
Don’t give up on it just yet. The reality is that most organizations use legacy systems to some extent, and the word has a markedly different meaning in the modern IT context. Wikipedia says, “A legacy system is an old method, technology, computer system, or application program[4].” So it’s old. OK. That’s fine. So is the Boeing 747, and no-one’s complaining about that.

Let’s remember the Collins definition of legacy – the beneficial product from the past. Maybe the problem is the word ‘old’ and, more specifically, the fact that ‘legacy’ and ‘old’ are used interchangeably when they shouldn’t be. We live in an ageist society that is especially tough on technology. Unless a car is lucky enough to be a classic, it’s just old. Standing still equates to going backwards. So – anything old must be regressive, right?

Maybe not. Non-technical executives are pretty comfortable with their ‘old’ mainframes, viewing legacy systems as ‘tried and true technology’ despite their antiquarian composition. Even as NASA powered down its last mainframe, it was remarking on how good the machine was. This is essentially because these things really work (and executives tend to like that). So it’s not really about age. It’s about incompatibility with modern programs, systems – and current attitudes.

So the answer is: modify.
There’s no need to introduce risk and increase cost by replacing your core (legacy) system. As technology guru Chad Fowler says, “Even in the case of software, ‘legacy’ is an indication of success.” Some bits work, some don’t. So keep what’s good, replace what isn’t, and lo and behold, your legacy is sitting at the heart of your business, doing everything it should once again. Not old, just… re-tuned.

So what’s our point?
We believe that perception is everything. Free the word legacy from its negative connotations and you can look at your, er, more established assets as just that – assets. Because if they are managed correctly, run with the right applications and seen as part of a solution, not a problem, then your so-called “legacy system” can underpin your future, not be confined to your past. What you do with that grandfather clock is up to you.

Next time: The story continues: the great and good comment on where the word legacy came from – and the trouble it has caused us.

Co-authored by Helen Withington, Derek Britton and Steve Moore


[1] http://financecareers.about.com/od/informationtechnology/a/legacysystems.htm

[2] http://www.techopedia.com/definition/635/legacy-system

[3] http://www.businessdictionary.com/definition/legacy-system.html

[4] http://en.wikipedia.org/wiki/Legacy_system

Giant leap or small step?

Following on
Our last blog post, ‘Turning your Platform into a launch pad’, underlined the point that reusing and salvaging core applications makes perfect sense when moving away from the platform. In this sequel, we outline the steps that make platform migration a totally viable option.

Ready for progression
So you are ready to move from your current mainframe platform – maybe Unisys or HP – to a more contemporary, cost-efficient open system. You realise that it’s the way forward for your business.

You’ve read the whitepapers and case studies. You are ready to resolve many of your business issues with a leap to a new platform. You’re going to progress from the big unit whirring away in the basement to deployment under Windows, UNIX or Linux, perhaps venturing into modern architectures such as Cloud, .NET and JVM. The great potential for cost savings from moving to an open platform appeals to your CFO, and the modernization options have captured the imagination of the CTO.

What’s the catch?
Does this sound too good to be true? Any move contains an element of risk. After all, your IT system is the engine room of your business. If it doesn’t work, for whatever reason, then you stop driving forward. And in today’s fast-moving business environment, if you stand still then you’re going backwards.

So – how can we keep that cherished engine ticking over while retuning it to meet modern business challenges? How can we avoid breaking the system we need while looking to create the capabilities we want? The strategy must be to retain the business continuity of your mainframe while the move takes place. The key is to reuse as much of the engine as possible.

Re-use what works
To stretch the analogy a little further, COBOL is the fuel that runs this engine and your application logic is the on-board computer. Together they keep your business ticking over. The integrity of your application logic must remain uncompromised wherever you run it. Moving platforms means change, but this change leads to business benefits such as lower TCO and greater business agility. And because the Micro Focus Visual COBOL compiler and deployment systems support all major COBOL standards, a Unisys COBOL application will happily work under, say, Windows.

Only change what is necessary
Of course, the whole system won’t move untouched. You are likely to want the application user interface to change – after all, transforming a green-screen application into web-based delivery is typically one of the major improvements of the exercise. In most cases, the resulting UI will closely match the original workflow, so end users enjoy an improved experience and a reassuringly familiar process.

Also, the way data is stored will alter, but not the data itself. Your platform migration is a good opportunity to upgrade your data storage from flat files to a relational database system, boosting your analytical capabilities while maintaining vital data integrity. Automated tooling can modify the application source code to obtain data from the new source without impacting business rules. And while some third-party tooling might also need replacing in the new environment, the amount of change remains manageable.
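As a hedged, before-and-after sketch of what such tooling produces, the fragment below shows a keyed flat-file READ replaced by an equivalent embedded-SQL fetch. The file, table and field names are hypothetical; the point is that the surrounding business rules – everything that consumes WS-RECORD-FOUND and the customer fields – never change. Only the data-access statement does.

      * Before: customer record fetched from an indexed file.
           MOVE WS-CUST-ID TO CUST-ID
           READ CUSTOMER-FILE
               INVALID KEY MOVE "N" TO WS-RECORD-FOUND
               NOT INVALID KEY MOVE "Y" TO WS-RECORD-FOUND
           END-READ

      * After: the same lookup served by a relational table;
      * the downstream business logic is untouched.
           EXEC SQL
               SELECT NAME, BALANCE
                 INTO :CUST-NAME, :CUST-BALANCE
                 FROM CUSTOMER
                WHERE ID = :WS-CUST-ID
           END-EXEC
           IF SQLCODE = 0
               MOVE "Y" TO WS-RECORD-FOUND
           ELSE
               MOVE "N" TO WS-RECORD-FOUND
           END-IF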

Having completed hundreds of these projects, Micro Focus’ Delivery Partners not only have decades of background knowledge and expertise in these proprietary mainframe systems, but have also delivered complex migration projects using Micro Focus technology.

In conclusion…
Platform migration from proprietary platforms presents more benefits than risks, thanks particularly to the portability of COBOL, which runs and behaves the same on a multitude of platforms. And of course, once you have taken that small step to a contemporary platform, the world of modern, innovative opportunities, including mobile access and cloud computing, will present itself to you.

Assisted by Micro Focus and our delivery partners, our customers have saved millions of dollars every year, reduced TCO and reaped all the benefits of operating from a contemporary platform. Fire up the engine and get in gear. You’re ready to make your move…

Financial Regulations – Surviving the Technical Nightmare

Recent high-profile compliance failures in the financial industry are leading to stricter regulations with more rigid authorizations. Only recently, the Financial Services Authority demanded details of how major banks plan to prevent a repeat of the Royal Bank of Scotland’s disastrous mainframe blackout. The increased emphasis on compliance and control, combined with rapidly changing business requirements, means organizations must demonstrate robust internal governance and reporting systems in an increasingly fluid and fast-paced environment – making IT more expensive and complicated.

Last year banks were subjected to over 60 regulatory changes per day[1], putting huge pressure on IT teams to react while keeping quality at the required level. Regulators are cracking down on non-compliance with substantial fines. Research from the Chartered Institute of Internal Auditors (CIIA) found that 60% of the fines charged by the Financial Services Authority (FSA) last year were down to weaknesses in the risk management systems of financial services firms, amounting to £38.5m during 2011. Managing these changes manually is a very difficult task.

While the regulations themselves may be simple enough to comprehend, the technical requirements of implementing them in IT are often far from it. Core IT system engines, typically residing on big mainframes, are optimized for scalable, resilient production performance and provide the horsepower that keeps financial enterprises running. However, with new regulations coming in so fast, many organizations are unsure of the best strategy to adopt.

How IT responds to regulatory change is a critical challenge. Over-burdened IT environments, already working at near-full capacity, could negatively impact the resources and bandwidth available to prepare for these changes – drastically slowing the process down and costing large amounts of money.

Additionally, the larger the organization, the higher the cost of modernizing or responding to regulatory change. Large organizations need to find ways of managing the spiralling costs brought about by increases in mainframe capacity.

A genuine option is to review existing core system workload to identify bottlenecks, which could – especially on traditional mainframe systems – reside in development, testing or even production deployment activities. Using smart tooling and exploiting new advances in mainframe technology (IBM’s zEnterprise system, for example, condenses z/OS, Linux, UNIX and Windows servers into a single environment), IT teams can juggle major IT activities across a more flexible environment – one that behaves the same way as traditional mainframe environments but offers even greater capacity and flexibility.

By looking at how core IT system delivery activities and workload can be optimized across all available platforms, organizations can refine their development, testing and redeployment activities at lower cost while still meeting delivery timeframes. By exploiting new technology to modernize mainframe delivery, they can eliminate bottlenecks in the software development cycle and meet testing and delivery schedules without the previous limitations of mainframe capacity.

Regulatory changes will have an impact on technology, and IT managers need to look at alternative environments so that mainframe activities are not hindered. Otherwise, they could face major setbacks in terms of capacity or, worse still, regulatory fines.

Micro Focus has been helping clients modernize core systems to solve a variety of business, legislative and operational challenges for decades. Eliminating bottlenecks in delivery cycles helps organizations avoid unwelcome costs, including regulatory fines, while accelerating service delivery. And the new advances in mainframe technology bring the greater capacity and more flexible environment businesses need to cope efficiently with core system workloads.

For information on the Micro Focus Enterprise product set, click here.

[1] Thomson Reuters Governance, Risk & Compliance

Will the Upcoming Election Ever Discuss Legacy Systems?

Written by Tod Tompkins, VP of Federal Sales, North America

The first 2012 Presidential debate was held this week. Economics and deficit reduction dominated much of the air time. (Even Big Bird weighed in on cost savings!) As expected, news outlets dissected the debate from numerous angles, fact-checking and commenting on the areas of contention. InformationWeek published “Obama vs. Romney: 6 Tech Policy Differences” on Friday. It is an interesting article that details the candidates’ positions and the technology policy and funding implications for education, cybersecurity, telecommunications, immigration and trade. But what about our mission-critical legacy systems? I know I sound like I’m on the stump speech circuit – but it is important to discuss the mainframe-based systems and applications that run some of the most critical government programs, and how to maintain and modernize them to meet government’s new and evolving requirements.

On a related note, Government Executive just published an article clarifying the sequestration threat and stopgap spending measures. I encourage you to read it. Both parties, executives and technical leads, and the public and private sectors all need to come together to help government reduce its costs and meet the projected budget cuts that will reduce the deficit. Do you have ideas to help? Let me know your thoughts. Connect with us in the comments section below, on Facebook or Twitter.