Introducing Micro Focus Enterprise Sync: Delivering Faster Change

Delivering Mainframe DevOps means managing a lot more change, a lot more often. That may require improved processes, but it also demands more of technology. Amie Johnson explains how Micro Focus is supporting collaborative change.

Introduction

At Micro Focus, we believe mainframe organizations can achieve DevOps levels of efficiency simply by taking advantage of modern, efficient tools, adopting agile development practices and fostering better team collaboration. It’s a matter of incrementally removing application delivery bottlenecks.

To that end, Micro Focus has just introduced a new product within our Enterprise Solution set aimed at helping mainframe developers deliver new releases faster.

Enterprise Sync tackles head-on one of the major delivery bottlenecks our customers encounter: coordinating and orchestrating the rapid code change needed in a DevOps model using conventional mainframe configuration management tools.

The product supports rapid, contemporary parallel development, providing mainframe development teams with a means to adopt a more agile delivery model.

Why can’t we deliver multiple streams?

DevOps promises to eradicate delays in IT delivery. So, in the mainframe world, what’s the bottleneck?

One of the issues is how deliveries are managed. As robust as they are, trusted old mainframe configuration management tools weren’t designed to support parallel development, so multi-stream code merges are difficult, manual and prone to error. But these tools hold unique configuration detail and metadata that are essential to supporting critical mainframe applications. So, while replacing them completely is out of the question, customers are looking for ways to support a more agile delivery model.

Removing Barriers

The Micro Focus solution, Enterprise Sync, helps remove the bottleneck associated with introducing parallel development activities. It does this by replicating mainframe source code to a distributed software configuration management (SCM) platform. Code changes made via parallel development on the distributed platform are automatically synchronized back to the mainframe SCM environment, such as CA Endevor. This integration and synchronization introduce a new level of speed and accuracy in delivering parallel development streams for the mainframe. Seamless integration with established software change management tools addresses the need to deliver faster change while preserving the organization’s valuable investment in mainframe processes and its software change and configuration management environment.


As part of the wider Micro Focus Enterprise product set, Enterprise Sync works collaboratively with our flagship mainframe application development tool, Enterprise Developer, to deliver:

  • Easier parallel development at scale across releases or teams
  • Greater efficiency through management and visualization of code change using modern tools
  • Alignment with current mainframe development process and source code
  • Improved developer productivity through continuous integration of key updates


Find out more

Establishing a modern mainframe delivery environment may be central to your DevOps strategy. Learn more about how Micro Focus can help with a complimentary Value Profile Service. See what’s possible and hear more about how Micro Focus has helped transform mainframe application delivery.

Achieve DevOps levels of efficiency, flexibility and collaboration. Learn more about the new Enterprise Sync release on the website, or download the product datasheet.


#DevDay Report – so what does COBOL look like now?

David Lawrence reports back from the latest Micro Focus #DevDays on what COBOL looks like these days. With partners like Astadia, it seems anything’s possible – including mobile Augmented Reality! Read on.

To most people, COBOL applications probably look dated, and are thought to do nothing more than process routine financial transactions.

Such applications are indeed likely to be COBOL-based. After all, COBOL is the application language for business. But with over 240 billion (with a b) lines of code still in production, the fact is that COBOL is used in thousands, if not millions, of applications that have nothing to do with finance.

It’s called the COmmon Business Oriented Language for a reason: it was designed to automate the processing of any business transaction, regardless of the nature of the business.

Did you realize that COBOL is also widely used by municipalities, utilities and transportation companies?

At our Nashville Micro Focus DevDay event on June 21, the audience was treated to a very interesting presentation by a major American railroad organization, which showed us how its COBOL application inventory runs its daily operations (scheduling, rolling stock management, crews, train make-up and dispatch).

Earlier in the month we heard from a client who was using COBOL applications to capture, monitor and analyze game and player statistics in the world of major league baseball.

Many attendees of DevDay, our COBOL and mainframe app dev community event, manage crucial COBOL applications as the lifeblood of their business: retailers’ stock control systems, haulage and logistics organizations’ shipments and deliveries, healthcare, pharma and food production systems, and major financial services, insurance and wealth management platforms.

Those applications contain decades of valuable business rules and logic. Imagine if there were a way to make use of all that knowledge – by, say, using it to more accurately render a street diagram.

You say, “Yes, that’s nice, but I already have Google Maps.” All very well and good. But what if you are a utility company trying to locate a troublesome underground asset, such as a leaking valve or a short-circuited, overheating power cable?

Astadia has come up with a very interesting solution that combines the wealth of intelligence built into the COBOL applications – invariably the heart and brains of most large utilities and municipalities – with modern GPS-enabled devices.

DevDay Boston

I had a chance to see this first hand at DevDay Boston. DevDay is a traveling exposition that features the newest offerings from Micro Focus combined with real-life experiences from customers.

Astadia, a Micro Focus partner and application modernization consultancy, visited our Boston DevDay to show us their mobile augmented reality application, which enhances street-view data with additional information needed by field crews.

Steve Steuart, one of Astadia’s Senior Directors, introduced attendees to ARGIS, Astadia’s augmented reality solution that helps field engineers locate underground or otherwise hidden physical infrastructure assets such as power and water distribution equipment.

I watched as Steve explained and demonstrated ARGIS overlaying, in real time, the locations of manhole covers and drains onto a Google Maps image of the area surrounding the Marriott Hotel. Steve explained that ARGIS was using the tablet’s GPS and mining the intelligence from the COBOL application used by the Boston Department of Public Works to superimpose over the street view, in real time, the precise location of the network of pipes and valves supplying water to the area.

Here’s a picture – certainly worth a thousand words, wouldn’t you say?

Below you can see how Astadia’s ARGIS Augmented Reality system sources data from the local utility company’s COBOL application inventory to give clear visual indications of the locations of key field infrastructure components (e.g. pipes, valves, transformers) over a view of what the field engineer is actually seeing. Nice to have when you’re trying to work out where to dig, isn’t it?


Very imaginative indeed, but at the heart of this innovation, the important data and logic come from – guess where? Yes, a COBOL application. Micro Focus solutions help mine and reuse the crucial business rules locked up in our customers’ portfolios of proven, reliable COBOL applications, prolonging their longevity and flow of value to the business. Why take all that risk and spend millions to replicate intelligence that already exists, but which has been hard to utilize effectively?

Afterwards, I spoke with Steve, who remarked: “As long as Micro Focus continues to invest in COBOL, COBOL will continue to be relevant.”

Micro Focus’ Director of COBOL Solutions, Ed Airey, commented:

“We are always thrilled to see how our partners and customers are taking advantage of the innovation possible in our COBOL technology to build applications that meet their needs in the digital age. Astadia’s ARGIS product is great. I’m not surprised to see how far they’ve been able to extend their application set in this way – Visual COBOL was designed with exactly that sort of innovation in mind. The only constant in IT is change, and with Micro Focus COBOL in their corner our customers are able to modernize much faster and more effectively than they realize”.

See real world applications and how they can be modernized at a Micro Focus DevDay near you. For more information on our COBOL Delivery and Mainframe Solutions, go here.

David Lawrence

Global Sales Enablement Specialist


Start over, or with what you know?

Derek Britton’s last blog looked at the appetite for change in IT. This time, he looks at real-world tactics for implementing large-scale change, and assesses the risks involved.

Introduction

In my recent blog I drew upon overwhelming market evidence to conclude that today’s IT leadership faces unprecedented demand for change in an age of bewildering complexity. That “change”, however, can arrive in many shapes and forms, and the choice of strategy may differ according to a whole range of criteria – technical investments to date, available skills, organizational strategy, customer preference, marketing strategy, cost of implementation, and many more besides. This blog explores and contrasts a couple of the options IT leaders have.

Starting Over?

Ever felt like just starting over? Given the difficulty of changing complex back-end IT systems – when staffing is so tight, the pressure to change so high, and the backlog ever-growing – there comes a point at which the temptation arrives to swap out the hulking, seething old system for something new, functional and modern.

Sizing Up the Task

We’re sometimes asked by senior managers in enterprise development shops how they should assess whether to rewrite or replace a system versus keeping it going and modernizing it. They sense there is danger in replacing the current system, but can’t quantify that danger for other stakeholders.

Of course, it is impossible to give a simple answer for every case, but there are some very common pitfalls in embarking on a major system overhaul. These can include:

  • High Risk and High Cost involved
  • Lost business opportunity while embarking on this project
  • Little ‘new’ value in what is fundamentally a replacement activity

This is a rather unpleasant list. Not only is it unpleasant, but the ramifications in the industry are all too stark. These are just a few randomly selected examples of high-profile “project failures” where major organizations attempted a major IT overhaul:

  • The State of Washington pulled the plug on its $40M LAMP project, which was six times more expensive than the original system
  • HCA ended its MARS project, taking a $110M-$130M charge as a result
  • The State of California abandoned a $2 billion court management system, as well as a five-year, $27 million plan to develop a system for keeping track of the state’s 31 million drivers’ licenses and 38 million vehicle registrations
  • The U.S. Navy spent $1 billion on a failed ERP project

Exceptional Stuff?

OK, so there have been some high-profile mistakes. But might they be merely the exception rather than the rule? Another source of truth is those who spend their time following and reporting on the IT industry. Two such organizations, Gartner and Standish, have reported more than once on the frequency of failed overhaul projects. A variety of studies over the years keeps coming back to the risks involved: failure rates of up to 70% are cited in analyst studies of core-system rewrites.

Building a case for a rewrite

Either way, many IT leaders will want specific projections for their own business, not abstract or vague examples from elsewhere.

Consider as an example a rewrite project[1] – where a new system is built from scratch, by hand (as opposed to automatically generated), in another language such as Java. Let’s allow for some improvement in productivity because we’re using new, modern tools to build the new system (by the way, COBOL works in this modern environment too, but let’s ignore that for now).

Let’s calculate the cost – conceptually

Rewrite Cost = (application size) × (80% effective workload, allowing for efficiency gains from modern frameworks) ÷ (lines written per day) × (developer cost per day)

The constants used in this case were as follows:

  • The size of the application – a very modest system – was roughly 2 million lines of code, written in COBOL
  • The per-day developer cost was $410/day
  • The assumed throughput of building new applications was estimated at 100 lines of code per day, which is a generous daily rate

Calculated, this comes to about 16,000 days of effort – a cost of roughly $6.5M.
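For clarity, here is that arithmetic as a minimal Python sketch. The figures are simply the working assumptions listed above, not measured data:

    # Back-of-the-envelope rewrite cost, using the working assumptions above.
    lines_of_code = 2_000_000   # size of the existing COBOL application
    efficiency = 0.8            # modern frameworks assumed to cut the work by 20%
    lines_per_day = 100         # generous throughput for hand-written new code
    cost_per_day = 410          # fully loaded developer cost, USD

    effort_days = lines_of_code * efficiency / lines_per_day   # 16,000 days
    rewrite_cost = effort_days * cost_per_day                  # $6,560,000

    print(f"Effort: {effort_days:,.0f} days, cost: ${rewrite_cost / 1e6:.1f}M")

Change any of the constants to match your own estate and rates; the point is less the precise total than how quickly the effort mounts at any realistic throughput.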

Considerations worth stating:

  • This is purely to build the new application, not to test it in any way. You would, of course, need rigorous QA and end-user acceptance testing.
  • This is purely to pay for this rewrite. In 10 years, when this system becomes outmoded, or the appetite for another technology is high, or there are concerns over IT skills, do you earmark similar budget again?
  • This assumes a lot about whether the new application could replicate the unique business rules captured in the COBOL code – rules which are unlikely to be well understood or documented today.

A well-trodden path to modernization

One client, among the world’s largest retailers, looked at a variety of options for change, among them modernizing and rewriting. They concluded the rewrite would be at least four times more expensive to build, and would take seven to eight times longer to deliver, than modernizing what they had. They opted to modernize.


Elsewhere, other clients have drawn the same conclusions.

“Because of the flexibility and choice within [Micro Focus] COBOL, we were able to realize an eight month ROI on this project – which allowed us to go to market much faster than planned.”

— Mauro Cancellieri, Manager, Ramao Calcados

“Some of our competitors have written their applications in Java, and they’ve proven not to be as stable, fast or scalable as our systems. Our COBOL-based [banking solution], however, has proved very robust under high workloads and delivers a speed that can’t be matched by Java applications.”

— Dean Mathieson, Product Development Manager, FNS / TCS

Our Recommendation

Core business systems define the organization; in many cases, they are the organization. The applications that provide mortgage decisions, make insurance calculations, confirm holiday bookings, manage the production lines at car manufacturers, and process and track parcel deliveries offer priceless value. Protecting that value and embracing the future needs a pragmatic, low-risk approach that leverages the valued IT assets that already work, delivers innovation and an ROI faster than other approaches, and is considerably less expensive.

If you are looking at strategic IT change, talk to us – we’d love to discuss our approach.



[1] We can’t speculate on the costs involved with package replacement projects – it wouldn’t be fair for us to estimate the price of an ERP or CRM package, for example.

Market Attitudes to Modernization

The tried-and-trusted enterprise-scale server of choice is casually regarded as an unchanging world. Yet today’s digital world means the mainframe is being asked to do greater and greater things. Derek Britton investigates big-iron market attitudes to change.

Keeping the Mainframe Modern

A Firm Foundation

The IBM mainframe environment has been on active duty since the mid-1960s and remains the platform of choice for the vast majority of the world’s most successful organizations. However, technology has evolved at an unprecedented pace in the last generation, and today’s enterprise server market is more competitive than ever. So it is wholly fair to ask whether the mainframe remains as popular as ever.

You don’t have to look too hard for the answer. Whether you are reading reports from surveys conducted by CA, Compuware, Syncsort, BMC, IBM or Micro Focus, the response is loud and clear – the mainframe is the heart of the business.

Summarizing the surveys we’ve seen, for many organizations the Mainframe remains an unequivocally strategic asset. Typical survey responses depict up to 90% of the industry seeing the mainframe platform as being strategic for at least another decade (Sources: BMC, Compuware and others).

It could also be argued that the value of the platform is a reflection of the applications which it supports. So perhaps unsurprisingly, a survey conducted by Micro Focus showed that over 85% of Mainframe applications are considered strategic.

plus ça change

However, the appetite for change is also evident, and this too holds true in the digital age. An unprecedentedly large global market, with more vocal users than ever, is demanding greater change across an unprecedented variety of access methods and devices. No system devised in the 1960s or 1970s could have conceived the notion of the internet, the mobile age or the Internet of Things; yet that is what such systems have to cope with today. Understandably, surveys reflect that: Micro Focus found two-thirds of those surveyed recognize a need to ‘do things differently’ in terms of application/service delivery and are seeking a more efficient approach.

The scale of change is a growing problem that is impossible to avoid. In another survey, the results show that IT is failing to keep up with the pace of change: a study by Vanson Bourne revealed that IT backlogs (also referred to as IT debt) had increased by 29% in just 18 months. Extrapolated, that is the same as the workload doubling in less than five years. Supply is utterly failing demand.
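The extrapolation is simple compound growth. As a quick sketch in Python (assuming the 29%-per-18-months rate holds steady):

    import math

    growth = 1.29        # backlog grew 29%...
    period_years = 1.5   # ...in 18 months (Vanson Bourne)

    # Time to double, assuming steady compound growth.
    doubling_time = period_years * math.log(2) / math.log(growth)
    print(f"Backlog doubles roughly every {doubling_time:.1f} years")  # ~4.1 years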

Supporting this, and driven by new customer demands in today’s digital economy, over 40% of respondents confirmed that they are actively seeking to modernize their applications to support next-generation technologies including Java, RDBMS, REST-based web services and .NET. Many are also seeking to leverage modern development tools (Source: Micro Focus).

And it isn’t just technical change. The process of delivery is also being reviewed. We know from Gartner that 25% of the Global 2000 have adopted DevOps in response to the need for accelerated change, and that this figure is growing at 21% each year, suggesting the market is evolving towards a model of more frequent delivery.


Crossroads

Taking what works and improving it is not, however, the only option. Intrepid technologists might be tempted by a more draconian approach, hoping to manage and mitigate the associated cost and risk.

Package replacements take considerable budget and time, only to deliver – typically – a rough equivalent of the old system. Unique competitive advantage is compromised, and of course the same packages are available to competitors on the open market. Such an approach is known to have a 40% failure rate, according to the Standish Group. Custom rewrite projects appear riskier still, with the same report citing a 70% failure rate and extremely lengthy, costly projects.

Worse still, reports from CAST Software suggest that Java (a typical replacement language choice) is around 4 times more costly to maintain than equivalent COBOL-based core systems. The risks of such a drastic change are clear.

Moving Ahead

Meeting future need never ends. Today’s innovation is tomorrow’s standard. Change is the only true constant. As such, the established methods of providing business value need to be constantly scrutinized and challenged. The mainframe market sees its inherent value and regards the platform as well-placed to support the future.

And to meet the demands of the digital age, the mainframe world is evolving: new complementary technology and methods will provide the greater efficiencies it needs to keep up with the pace of change. Find out more at MicroFocus.com and/or download the white paper ‘Discover the State of Enterprise IT’.

Change – the only constant in IT?

Change is a constant in our lives. Organizations have altered beyond recognition in just a decade, and IT is struggling to keep pace. Managing change efficiently is therefore critical. To help us, Derek Britton set off to find that rarest of IT treasures: technology that just keeps on going.

Introduction

A recent survey of IT leaders reported that their backlog had increased by a third in 18 months. IT’s mountain to climb had just received fresh snowfall. While much is reported about digital and disruptive technologies causing the change, even the mundane needs attention: the basics, such as desktop platforms and server rooms, are staples of IT that vendors refresh on a frequent release cadence.

Platform Change: It’s the Law

Moore’s Law suggests an ongoing, dramatic improvement in processor performance, and the manufacturers continue to innovate to provide more and more power to the platform and operating system vendors, as well as to the technology vendor and end-user communities at large. The law of competition suggests that as one vendor releases a new variant of operating system, chock full of new capability and uniqueness, its rivals will aim to leapfrog it with their subsequent launch. Such is the tidal flow of the distributed computing world. Indeed, major vendors are even competing with themselves (for example, Oracle promotes both Solaris and Linux, IBM both AIX and Linux, and even Windows will now ship with Ubuntu pre-loaded).


Keep the Frequency Clear

Looking at the recent history of operating system releases, support lifespans and retirements across Windows, UNIX and Linux, a drumbeat of updates exists. While some specifics vary, it becomes quite clear quite quickly that major releases run at a pulse rate of once every three to five years. Interspersed with point releases, service packs and other patch or fix mechanisms, the major launches – often accompanied by fanfares and marketing effort – hit the streets twice or more each decade[1]. (Support for any given release will commonly run for longer.)

Why does that matter?

This matters for one simple reason: applications mean business. The platforms that need to be swapped out regularly house the most important IT assets the organization has, namely the core systems and data that run the business. These are the applications that must not fail, which must continue into the future – and survive any underlying hardware change.

Failing to keep up with the pace of change can put an organization at a competitive disadvantage, or cause it to fail internal or regulatory audits. For example, Windows XP was retired as a mainstream product in 2009, and extended support was dropped in 2014. Yet according to netmarketshare.com, it still held an 11% market share in 2016. Business applications running on XP are therefore, by definition, out of support, and may be in breach of internal or regulatory stipulations.

Time for a Change?

There is at least some merit in asking whether the decommissioning of old machinery would be a smart time to look at replacing the old systems that ran on those moribund servers. After all, those applications have been around a while, and no-one typically has much kind to say about them, except that they never seem to break.

This is one view, but taking a broader perspective illustrates the frailties of that approach:

  • First, swapping out applications is time-consuming and expensive. Rewriting or buying packages costs serious money and takes a long time to implement – years rather than months – and will be an all-consuming, major IT project.
  • Questionable return is the next issue, by which we mean swapping out a perfectly good application set for one which might do what is needed (the success rate of such replacement projects is notoriously low; failure rates of between 40% and 70% have been reported in the industry). And the new system? It is potentially the same system being used by a competitor.
  • Perhaps the most worrying issue of all is that this major undertaking fixes only a single point in time when, as we have already stated, platform change is a cyclical activity. Platforms change frequently, so this isn’t a one-time situation but a repeated task – which means it needs to be done efficiently, without undue cost or risk.


Keep on Running

And here’s the funny thing: while there are very few constants in the IT world (operating systems, platforms, even people change over time), there are one or two technologies that have stood the test of time. COBOL as a language environment is the bedrock of business systems, and it is one of the very few technologies offering forward compatibility to ensure the same system that worked in the past will work on today’s – and tomorrow’s – platforms.

Using the latest Micro Focus solutions, customers can use their old COBOL-based systems, unchanged, in today’s platform mix. And tomorrow too, whatever the platform strategy, those applications will run. In terms of cost and risk, taking what already works and moving it – unchanged – to a new environment, is about as low risk as it can get.

Very few technologies with a decades-old heritage can get anywhere close to claiming that level of forward compatibility. Added to which, no other technology is supported yesterday, today and tomorrow on such a comprehensive array of platforms.

The only constant is change. Except the other one: Micro Focus’ COBOL.

[1] Source: Micro Focus research

DevOps and Organizational Culture

In the Micro Focus blog series on DevOps, Derek Britton looked at the bottlenecks of low collaboration, inefficient development and lengthy testing cycles, and how they can be overcome with a pragmatic, technological solution. Here, he turns his attention to that most indiscernible of obstacles: corporate culture.

Letting it breathe

In the Micro Focus blog series on DevOps, I looked at the bottlenecks of low collaboration, inefficient development and lengthy testing cycles, and how a pragmatic technological solution can overcome them. Here my attention turns to that most indiscernible of obstacles: corporate culture.

Introduction

It has been said that 2016 could be the year DevOps comes of age. It continues to gain mindshare, including in large enterprise accounts. Gartner projects that a quarter of the Global 2000 will have adopted DevOps this year, a figure growing by 21% annually. Reflecting this growing popularity, SHARE in March 2016 has its own DevOps track, “DevOps in the Enterprise”, and has added a DevOps discussion to its “EXECUforum” agenda, entitled “DevOps: Cultural Mindset”. (We are delighted to join luminaries from IBM, Compuware and CA on the panel.)


DevOps, as the name suggests, is a technical approach to a necessary business change, namely building new services faster for the business. Put another way, DevOps is a process change, a means to an end. Changing to embrace DevOps affects a number of disparate organizational elements, from IT groups to business, user and customer communities. Clearly, there are important cultural questions in terms of how the organization is ready to embrace DevOps. Culture is widely recognized as being at least as important as strategy. As one report observed, “Changing the culture and mind-set of people is not easy.”

Cultural Barriers to Adoption

Adoption of DevOps may founder for a variety of cultural reasons.

Why are we doing this? While establishing an agile-based methodology in the IT organization makes a lot of sense and, according to history, yields impressive early results, the parent organization may often be ignorant of the new process. In fact, the business may still expect product roadmap milestones to be planned and met on an annualised basis as part of a traditional, regimented plan-build-deliver cycle, unaware of the new dynamic. For IT, breaking down portfolio and product plans into epics and then iterations is – with a group of trained professionals working together – both viable and valuable. However, ensuring such plans are agreed and acted upon by the consumers of the technology (whether internal users or sales/marketing/customer representatives) is much, much harder, especially when the reason for the change is not clear outside IT. So while like-minded technicians might flock towards DevOps, end-users won’t subscribe to what is probably perceived as extra work for them. They just want results – working apps – not more work. They can’t see the benefit of change.


Moving from big to small. DevOps espouses more rapid, incremental deliveries and a tighter feedback cycle to resolve difficulties and achieve customer satisfaction more quickly. That switch requires a shift from large-scale orchestrated deliveries to more frequent, smaller-scale incremental efforts. The change in dynamics means there will be greater coordination required of more people on a more regular basis, and a certain level of disentanglement of both application sets and job functions. As one observer put it, successful adoption will require teams to “embrace the chaos”. But chaotic it will be; and for larger, more established, more hierarchical organizations, or those who preside over larger (sometimes referred to as monolithic) systems, that chaos will be most keenly felt.

We don’t have the bandwidth. Restricted infrastructure resources – a problem sometimes faced in large organizations with many parallel work streams – are another genuine concern. In some organisations, the change from one model to another might just feel too big. One of the most common bottlenecks is the inability to undertake rapid test cycles as part of a process of continuous integration. With autonomous teams and no restrictions on test environments, as DevOps requires, rapid testing sounds viable. However, in a more traditional, regimented IT world, where resources are allocated as part of a planned-for, charged-for system, “just running some tests” is not so simple. It might take days, if not longer, to commission a test environment, and a very real budget to manage it.

Cultural Change – Practicality and Transparency

DevOps is a major upheaval, a major change program. Such change programs need to be clearly outlined, understood, and measurable. So how might DevOps promote wider cultural acceptance?

Get Out There. Failure to involve all stakeholders in a major change program will result in inevitable resistance; ignorance of why the change is being made will hamper progress. Establishing a clear vision across the organization of why there is a new approach to software delivery is the fundamental cornerstone of its adoption. To that end, stakeholders need to hear that the reason for the new approach is that the organization is trying to improve the quality of its technology service, and is therefore aiming to deliver more frequently to obtain faster feedback and course-correct. Such a vision is predicated on top-down, C-level sponsorship. After all, the “end game” is new services and customer satisfaction, or some other tangible strategic business benefit: DevOps is merely a vehicle to achieve that. Explained this way, especially in the always-on digital age, it is wholly appropriate and acceptable for the supplier to seek to engage more frequently with its users. Framed in this business context, the purpose of DevOps becomes far more tangible and sensible to non-IT stakeholders.

Get Amongst IT. Similarly, for the teams responsible for delivering software – across the development, QA and operations functions – previous functional silos and hierarchies no longer apply so readily. But transitioning to a team-oriented structure may take time. Some organizations are borrowing ideas from Agile by establishing functional teams that exist temporarily for the duration of a major release or epic. Additionally, many IT organizations are driving internal change with the help of a senior DevOps champion. One organization I know chose its new CIO specifically because of their DevOps experience and vision.

Build Bridges. There may appear to be no straightforward resolution to incumbent resource availability (be it related to people, hardware or software), and this is where pragmatism and practicality come to the fore. Previously accepted practices and platforms may not be as fixed as they first appear. A variety of technical solutions is available for improving development, testing and the efficiency of collaboration for mainframe teams, which can realistically achieve far greater delivery frequency and reach a wider variety of users. (One example is Micro Focus’ solution, here.)


Conclusion

Larger organizations have every opportunity to embrace DevOps by taking the cultural aspect of change as seriously as the underlying technical and operational approach they aim to use. Practical and pragmatic solutions exist to overcome fundamental operational roadblocks; a comprehensive and transparent cultural change program will also be needed to promote widespread adoption. As a recent ComputerWorld headline put it, “Culture is Key to DevOps Success”.

Find Micro Focus at booth #525 at SHARE or visit our dedicated DevOps resources if you can’t attend the event in person.

Achieve peak performance at #MFSummit2016

The inaugural Micro Focus cross-portfolio summit opens this week. Andy King, General Manager for UK and Ireland, offers his insights into what to expect from the program.

This is a big week for me and for Micro Focus. On Wednesday, I raise the curtain on the future of our new company and our products for the customers who want us to take them into tomorrow.

Since the 2014 merger with the Attachmate Group, we have become one company operating two product portfolios across six solution areas. The single aim is to meet our customers’ mission-critical IT infrastructure needs with enterprise-grade, proprietary or open source solutions.

But what does that mean in reality? We are all about to find out.

#MFSummit2016: Current challenge, future success is our first cross-portfolio conference. The format mixes formal sessions and face-to-face opportunities, informative overviews with deep-dive, issue-specific questioning. It is a first chance to check out the roadmaps, and share experiences with our experts.

The focus is firmly on interaction; product specialists and fellow customers will be there to discuss your business and IT change issues. Set your itinerary to get maximum value from the day. The 12 sessions are split into three broad themes.


Build. Operate. Secure.

Whether your IT issues span every area of build, operate and secure, or are confined to one or two, Micro Focus has it covered with a diverse range of products and solutions that will help to meet the challenges of change. I’ve selected three sessions to illustrate the point.

Secure

Dave Mount, UK Solutions Consulting Director, presents an Introduction to Identity, Access and Security. Dave’s view is that understanding and managing identity enables better control of internal and external threats, and he illustrates how our solutions can help customers better understand and manage them. Find out how from 11 to 11.30am.

Operate

From 1.30 to 2.20 pm, David Shepherd, Solutions Consultant, Micro Focus, and Stephen Mogg, Solutions Consultant, SUSE, discuss how Micro Focus and SUSE could help customers meet escalating storage requirements and costs with secure, scalable, highly available and cost-effective file storage that works with your current infrastructure. If that would help you, check out The Race for Space: File Storage Challenges and Solutions.

Build

Immediately after that, our COBOL guys, Scot Nielsen, Snr Product Manager, and Alwyn Royall, Solutions Consultant, present Innovation and the Next Generation of COBOL Apps. It’s a demo-led look at the future that shows the way forward for modernising COBOL application development and deployment in new architectures. So if you are ready for new innovation from older applications, get along to see that between 2.20 and 3.10 pm.

Networking opportunities?

Of course. Whether you are enjoying refreshments, post-event drinks – or your complimentary lunch – alongside industry representatives, product experts and customers, or visiting the pods for demos or roadmap walkthroughs, the whole day is a refreshingly informal way to resolve your technical questions or business challenges. Alternatively, ask your question of the expert panel at the Q&A session from 3.45 to 4.15 pm.


In summary

Our promise to delegates is that after a visit to #MFSummit2016 they will be in a better position to navigate the challenges of business and IT change.

Wherever you are in your IT strategy, Micro Focus solutions enable our customers to innovate faster with less risk and embrace new business models. #MFSummit2016 is our opportunity to show you which solutions will work for you, where – and how.

Sounds attractive? You’ll really like our stylish venue, Prince Philip House. It is handy for Piccadilly, Charing Cross and St James’s Park Tube stations. Attendance is free, but book here first.

I’ll be speaking from 9.30. See you there?

Discovering DevOps … for the Mainframe

That font of knowledge Wikipedia tells us that DevOps “aims to help an organization rapidly produce software products” – but can this much-lauded IT delivery methodology make light work of some of the most complex, and business-critical, mainframe development tasks? In the first of a series of blogs on DevOps, Micro Focus Product Marketing Director Derek Britton lifts the lid…

The Case for Mainframe DevOps: All Quiet on Planet z?

In many if not most organizations we speak to, there are some defining characteristics of the mainframe environment. First is its enduring value: market surveys concur that the mainframe community’s loyalty continues. Indeed, if we look at the IT estate that has built up over the years, the investments made over time have led to incredible returns in terms of business value and competitive advantage. But very often – and this is our second characteristic – these tremendous returns have come at the cost of what is now very high complexity in the IT estate.

A third defining characteristic from all that complexity is where the time is spent today by mainframe application teams. The reality for most organizations is that the majority of budget and time is spent simply ‘keeping the lights on’ (by which I mean the day-to-day running of the organization). Some organizations will spend 70, 80 or even 90%[1] of their available IT resources just doing ‘lights on’ activities, such as maintenance updates, managing the maintenance backlog, regulatory changes, and bug fixes to core systems.

This leaves very little time and budget to spend on innovation – the innovation that drives success. And business success in 2015 needs greater flexibility and velocity in deliveries from IT.


Enter the Hero to Save the Day

IT faces an interesting reality in which core applications have outlived the very processes and technologies originally used to create them. Those processes and technologies are now showing signs of age and are unable to support the more flexible and inclusive model that the systems of 2015 need.

And that’s where DevOps comes in…

Building on the concepts of the agile manifesto, where smaller chunks of work are taken, worked on and delivered in smaller timeframes by a dedicated team, DevOps seeks to add further discipline and flexibility by driving a more focused and inclusive process across IT. This espouses collaboration, flexibility and – ultimately – internal operational efficiency.

DevOps makes a lot of sense for small teams building new code with few distractions. It lends itself to new developments among co-located teams using modern technology across an open structure.

All that’s needed is for development teams to adopt it in the mainframe environment…


Sorry, we need to do what?

However, this is far from straightforward for the mainframe world, for a number of reasons; let’s look at three of the more obvious concerns.

Firstly, much of the work is NOT new code. The core systems and the updates they require make up most of the IT backlog, and these need to be planned and undertaken in a regimented fashion, where mainframe resources, available skills, business priority and serious risk aversion drive the operational culture. Multiple updates to the same system may not be a smart move when the cost of production failure is significant.

Secondly, DevOps is built on an agile model. Most mainframe shops are not, having large teams, a formalised structure and a fairly fixed delivery model. Culturally, DevOps is misaligned with the mainframe world today. One commentator phrased it this way: “one of the biggest challenges is the cultural shift, which will be necessary to break away from the old siloed ways of working”.

And thirdly, the issue is technical: in all but a few cases the underlying mainframe delivery technology is not contemporary, and typically doesn’t lend itself to a collaborative or efficient workflow. It works for the way the processes were established years ago. It doesn’t necessarily bend to support another model.

But all is not lost. If we remind ourselves of our core objectives – to make delivery cycles more efficient, to foster collaboration, to improve the velocity of deliveries – we can look at practical ways of removing barriers to success. Agile coach Roger Brown recently did just that in the blog post ‘Agile in a COBOL world‘, which I’d recommend you read after you finish up here…

Reality Bites

Let’s look at three common situations in the mainframe world – in fact, three problems we have encountered recently with customers. Let’s consider these our barriers to better efficiency.

One goal of DevOps is to accelerate throughput by enabling many resources to work simultaneously on code deliveries, or ‘concurrent deliveries’. This isn’t always how things are set up in the mainframe world, and is often restricted by promotional models, Source Code Management configuration, and usually, mainframe development procedures.

Another objective is to accelerate the task of application development, and associated analysis and testing, to enable each individual to get more done. Inefficient or out of date tooling might not allow for this.

Finally, and a real concern for many of our clients, is that irrespective of development speed, there is a rigid, fixed and lengthy test phase which is dictated by QA procedures and available mainframe resources. Most clients we speak to are maxed out in terms of mainframe test resources due to the volume of testing effort needed.

Conclusion – Get Smarter

The challenges above are by no means uncommon; they are, however, major rocks in the road in moving towards a more efficient delivery model, the cornerstone of the DevOps ethos.

But there is a way forward. We will explore each of these challenges – and a practical solution for each, based on contemporary technology – in forthcoming blogs. For those looking to streamline mainframe delivery using DevOps principles, the news is good and timely. To quote a recent publication, “Not only is DevOps on the mainframe Mission Possible, but also it’s becoming Mission Critical”[2]. In the meantime, find out more by contacting me on Twitter or talking to an expert at Micro Focus.

[1] Sources – Gartner, Forrester and anecdotal customer information

[2] Mobile to Mainframe DevOps for Dummies – Rosalind Radcliffe (IBM Limited Edition), 2015


Taking COBOL mobile

Organizations without mobile capabilities – or a strategy to achieve them – are standing still. But with the right technology, even older COBOL applications have the potential to go mobile. COBOL has a long, rich history of innovation and is adding to it every day…

Hands up if you have a drawer full of old mobile phones that you will probably never use again? That’s a lot of hands. Sure, we all need a spare, but if you are likely to swap your touch-screen smartphone for your Nokia 6310 then keep your hands up … thought so.

My point is that the increasing consumer adoption of all things mobile, namely phones, devices, apps and services – even our exploration of the Internet of Things – represents an irreversible trend.

The mobile arena is the battleground for today’s digital business. Gartner predicts that by 2017, mobile consumers will download or access mobile apps and services more than 268 billion times. That’s – potentially – a cool $77bn in mobile revenues. The key word is potentially. Any modern business wishing to ride that wave must offer their customers the opportunity to experience their business services digitally or surrender that business to the competitors that can.


Balancing act

That is fine in principle. But any business must exercise cost control and maintain a ‘balance’ between new innovation and the BAU, ‘lights-on’ work. Essentially, anything leveraging modern tech must deliver a fast return to the business to pay its way – and that means giving customers what they want sooner than the other players in the marketplace.

Typically, organizations with large customer bases that need to deliver applications and data via consumer friendly services – think banks, insurance companies, airlines – are likely to have substantial investments in COBOL. Clearly, these systems were not built with mobile or the cloud in mind and the original developers will not have built in the requisite flex to create digital experiences through mobile applications. Yet the imperative to deliver them remains, so success depends on access to customer data and the ability to leverage core IP and business logic within these COBOL systems.

As has been noted – replacing time-proven COBOL code for an unknown commodity makes little business sense, particularly as COBOL has the inherent capabilities to deliver what the business – and crucially, the customer – needs.

Portability: the foundation of COBOL’s legacy

For more than 50 years COBOL has embraced continuous innovation. Remember when ATMs were a novelty? Think about how technology has driven the advancement of logistics, banking, equity trading systems – all thanks to COBOL. Ask the Treasury of the Republic of Cyprus about how they have streamlined efficiency and achieved real savings with the language of the future.

Right now, COBOL is connecting more than 500,000,000 mobile customers. So the potential is there. The challenge for the developer is in bridging the gap between the existing technology and the modern capabilities required to take COBOL companies into the future.

The solution to that challenge could be easier than you think. As our ‘fast path to COBOL’ journey explains, re-use is the new ‘start from scratch’. Take your data and applications – your core business logic and competitive advantage – and create something new and exciting from them. Modern tools, such as Visual Studio and Eclipse, are the launchpad for delivering new mobile services faster, and the workspace for folding modern languages such as Java, Objective-C and C# into current COBOL systems.

Micro Focus – taking COBOL mobile

COBOL has been our core business for nearly 40 years and bridging the gap between older and new technologies remains our primary mission. If you’re ready to derive more business value from your business applications, take a look at our COBOL to Mobile solutions.

Developers: take advantage of these free resources. Get started today with our handy demo code, video and ‘how to’ guide.


Back to our original point; those old Nokia mobile phones in your drawer might be old, but they still work. The technology has simply evolved and with our help, so can yours.

But don’t bother swapping your laptop for a ZX Spectrum. With only 16KB of RAM to play with, your chances of reading this blog are pretty slim.


Low-risk Modernization – an Insurer’s best policy?

Big business faces big challenges and embracing future needs and customer demand is no easy thing, especially with so much complexity within IT. A recent article in the insurance industry press introduced the possibility that organizations should consider abandoning current core IT systems and starting again. Derek Britton on the risks and alternatives.

Information Week’s Insurance & Technology ezine carries “The Rocky Road of Modernization”, a thought-provoking piece from Kelly Sheridan that discusses some important considerations around systems renewal and modernization in the insurance sector. Sheridan’s view – that customer focus and operational efficiency represent the cornerstone elements of an effective, modern IT strategy – doubtless holds for many insurance firms.

The article then attempts to outline possible IT strategies for effective change, including the assertion that “systems produced today” are “better able to handle the modern … environment”. This is unsurprising, as old IT systems are much derided. After all, how can decades-old systems serve today’s organizational needs? Even the taxonomy “legacy systems” is viewed within IT as a largely pejorative term. Yet when organizations set out to replace their so-called legacy applications, removing a decades-old working system proves difficult. Even if the effort succeeds, a lot of money is spent for very little in return: what replaces the old system is, fundamentally, merely a like-for-like equivalent. Extensive budget, resource and upheaval are consumed on the venture.

Replacement: Too risky for insurance companies?

And in reality, the viability of system-wide replacement carries considerable risks. Swapping out one system for another compels the organization to cope with significant changes, including functional equivalence, data integrity, user acceptance, training, hardware and software commissioning, among others.

The new system is untested, the system being replaced is undocumented – and the possibilities for error are huge. Studies undertaken by industry commentators and analysts cite “failure” rates of between 40% and 70%, depending on the nature of the project, where implementations are excessively late, over budget or simply never delivered. The IT press is littered with examples – just Google the words “IT failure”. The vast majority of these are new implementations.

Furthermore, the article’s assertion that older systems are “expensive to operate” and rely on “outdated” skills, such as COBOL programming, attempts to capture a nuanced argument in a general statement and should be challenged. Core systems written in COBOL are, from a maintenance perspective, easier to understand and manage than equivalent modern languages. And in terms of available skills, most developers in 2014, with their knowledge of IDEs such as Eclipse or Visual Studio, can easily pick up COBOL – the latest versions of which work within those very environments.

Keep it COBOL

Any concerns around the availability of current, incumbent skills are also worth closer scrutiny. COBOL, the building block of most insurance applications, has gained an undeserved reputation as uncool and outdated relative to where high-paying developer jobs are trending – a perception that might explain why only 27 percent of universities teach COBOL.

However, demand creates supply, and greater dialogue between employers and academia can help correct the issue. Furthermore, Micro Focus is working with over 300 universities and colleges today to provide up-to-date COBOL training and technology to support industry and universities alike, and to give tomorrow’s developers a future-proof career. After all, who else will maintain the applications that underpin the insurance houses’ core processes?

Another assertion made here – that systems produced today are “less expensive to maintain” – is, again, not necessarily the case. Take Java, for example. While it performs well for mobility requirements, it can lead to higher “technical debt”, the Gartner-coined term for the eventual consequences of poor system design, software architecture or software development within a codebase. According to CAST Software’s CRASH report, the estimated technical debt of Java is $5.42 per line of code, compared to $1.26 per line of COBOL – more than four times as much.
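To put those per-line figures in context, here is a quick sketch applying them to a hypothetical two-million-line application. The per-line costs are as cited from the CRASH report; the application size is purely illustrative:

    # Technical debt per line of code, per CAST Software's CRASH report (USD).
    debt_per_line = {"Java": 5.42, "COBOL": 1.26}
    app_size = 2_000_000  # hypothetical application size, in lines of code

    for language, per_line in debt_per_line.items():
        total_m = per_line * app_size / 1e6
        print(f"{language}: ~${total_m:.1f}M estimated technical debt")
    # Java:  ~$10.8M
    # COBOL: ~$2.5M (roughly 4.3 times less)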

Anyone planning to implement a modernization project should assess risk, cost, competitive advantage and time to implement as key considerations. Reusing current, working, trusted systems, then defining appropriate strategies to modify them, requires lower-scale change which delivers value improvements quickly but without undue risk. Interestingly, the same reuse strategy helps tackle issues around compliance and IT Backlog, concerns which also beset the insurance sector.


Many well-known insurance brands, including Nationwide, Aviva, Allianz, HCL, SCFBMIC and United Life have successfully repurposed older systems and interfaces by reusing core COBOL applications, saving time and effort which can then be devoted to other customer-facing initiatives. Meanwhile, similarly risk-averse organizations in the banking sector have also recently chosen Micro Focus technology to support their core IT modernization strategies.

In conclusion

The market expects insurers to be cost-efficient and – naturally – risk-averse. Embarking on a modernization strategy based on reuse is a low-risk route to better customer service and operational efficiency. Sticking with the programming language that has successfully underpinned the core application since it was created is not “outdated” – it is common sense.

As we have discussed, with a little fine-tuning, these applications can deliver the future innovation required to meet tomorrow’s requirements without compromising what works today. Micro Focus technology protects the intellectual property of core applications, and enables the system transformation needed to embrace new technology and meet new market requirements. We believe that for our insurance clients it really is the best policy.

Regulation Acceleration

Regulation and Compliance shouldn’t be big news – after all, the IT world has had to conform to rules and regulations for years. Yet it seems every day there’s bad publicity and a hefty fine for yet another major corporation facing a non-compliance ruling. This blog looks again at the challenge that just won’t go away – and asks what we can learn.

I have spoken before about regulatory compliance and the need for IT to make systematic improvements to how it supports a variety of compliance and regulatory changes. It seems that in 2014 there are no signs of things getting any easier. Let’s take a look at the state of regulation today.

Here is a sample of the publicity from recent months:

The usual suspects

The biggest single cluster of regulatory news affects the financial services industry. Adversely affected by the global economic downturn, financial services organizations have since been the target of stringent new regulatory controls. Unsurprisingly perhaps, news abounds across a variety of “non-compliance” issues in the industry.

In a case of internal compliance, Credit Suisse was recently reported to be investigating two of its own dealers for trading rule transgressions. A broader industry issue, especially in the UK, has been PPI mis-selling: the UK Financial Conduct Authority (FCA) probe has recently prompted 2.5 million PPI cases to be reopened, and the impact of PPI regulatory measures was reported as a cause of Lloyds Bank’s fall in profit.

Meanwhile, other regulations were contravened in high-profile cases. Deutsche Bank was fined over fiscal reporting, while the Royal Bank of Scotland’s mortgage advice irregularities resulted in a fine of £14.5M ($23.7M). Elsewhere, the LIBOR rigging scandal has hit Lloyds to the tune of £218M ($356M), while Bank of Scotland’s “double-billing” scandal was described in the law courts as “unconscionable” and the FCA continues to investigate.

In terms of notoriety, however, spare a thought for Citi Group – its part in the financial meltdown has resulted in an astonishing $7Bn penalty, as reported in a press release.


The verdict from industry observers is understandably blunt. Trust in banks is still “years away”, according to the chairman of the UK Treasury Committee, Andrew Tyrie. Meanwhile, jittery fund managers are in some cases deserting banking stocks. And there’s no sign of things easing up: regulators are getting more stringent in their measures, while the recent SEPA regulation is being closely followed by an equally exacting new control, FATCA – the Foreign Account Tax Compliance Act – set to go live in 2015.

Not just financial services

Regulatory compliance, and failure thereof, is by no means the exclusive remit of the financial services industry. Electronics giants Philips, Samsung and Infineon were subject to a total of €138M in fines over pricing irregularities. Telecoms giant Verizon was fined $7.4M over consumer clarity complaints, while energy supplier EDF was ordered to pay £3M ($4.9M) to support vulnerable customers after failing to manage complaints.

It’s no secret

Data privacy regulations are a hot topic, and most news reported on the topic is bad news for the brands in question. High-profile stories surrounding data privacy breaches have recently hit the headlines at Home Depot, Supervalu and UPS. However, the press saved the most column inches for the unfolding Community Health Systems saga, where the data hack is reported to have affected 4.5 million customers.

Emerging from the shadows

What do all these stories have in common? Each has been reported in just the last few months – a commonplace, recurring theme suggesting a persistent challenge across a variety of industries.

The cost of non-compliance in individual cases might mean specific and often eye-watering fines, while the longer-term operational impact on a variety of industries – not least the financial services sector – is untold risk and potentially irreparable brand damage. Coping with this is being taken very seriously: industry publication Banking Technology reports a Bank of England estimate that 70,000 new finance roles will be created in Europe alone to help tackle the increasing compliance workload.


But headcount is not the only requirement. Throwing more staff at a problem where the processes and supporting technology are outmoded and inefficient is simply putting more chefs in a tiny kitchen.

Technology needs to be part of the solution.

And it can be. Micro Focus’ approach to IT regulation sees the challenge as a three-pronged issue: find the root of non-compliance, fix the issue, and then validate the change. We refer to this as Find It, Fix It, Test It. The approach leverages the best in technology to help automate and streamline these critical IT change projects, which all too often come with immovable, aggressive timescales. If you need to accelerate your regulatory efforts, we can help.