Academics, Analysts and Anchormen: Saluting the Admiral

Introduction

In 1987 I began my first semester (we call them terms in the UK) at university, studying for a Bachelor’s degree in Computer Science. One of my first assignments was to pick up and learn one of a broad range of computer languages. COBOL was picked first because it was a “good place to start as it’s easy to learn[1]”: the language was originally designed for business users, with instructions that were straightforward to learn. They were right. It was a great place to start, and my relationship with COBOL is a long way from over, more than 30 years later.

A Great New Idea?

Little did I know I was using a technology that had been conceived 30 years beforehand. In 2019, one of the greatest technology inventions of the last century, the COBOL computer programming language, will celebrate its diamond anniversary: 60 years. While not as widely known or anywhere near as popular as in its 1960s and 70s heyday, it remains a stalwart of a vast number of vital commercial IT systems globally. Anecdotal evidence suggests the majority of the world’s key business transactions still use a COBOL back-end process.

However, the celebrated, windswept technology pioneers of Jobs, Turing, Berners-Lee and Torvalds were not even in the room when this idea first germinated. Instead, a committee of US Government and industry experts had assembled to discuss the matter of computer programming for the masses, a concept without which, they felt, technological progress would stall. Step forward the precocious talent of Grace Murray Hopper. With her present on the CODASYL committee, the notion of a programming language that was “English-like” and which “anyone could read” was devised and added to the requirements. The original aim of making the language cross-platform was only achieved later, but the idea stood as the blueprint.

Soon enough, scientists being scientists, the inevitable acronym-based name arrived –

  • Everyone can do it? Common.
  • Designed with commerce in mind? Business Oriented.
  • A way of operating the computer? Language.

This was about 1959. To provide some context, that was only a few years after rationing had ended in the UK, and five years before IBM’s System/360 mainframe was first released. Bill Haley was still rockin’ ‘til broad daylight, or so the contemporary tune said.

Grace Hopper (née Murray) was already the embodiment of dedication. She didn’t meet the physical entrance criteria for the US Navy, yet managed to get in on merit in 1944. And while her stature was diminutive, her intellect knew no bounds. She earned a range of accolades during an illustrious career, as wide and varied as –

  1. Coining the term ‘debug’ to refer to taking errors out of programming language code. The term was a literal reference to a bug (a moth) found trapped in a relay, disrupting a computer her team was using
  2. Hopper’s later work on language standards, where she was instrumental in defining the relevant test cases to prove language compliance, ensured longer-term portability could be planned for and verified. Anyone from a testing background can thank Hopper for furthering the concept of test cases in computing
  3. Coining the phrase, which I will paraphrase rather than misquote, that it is sometimes easier to seek forgiveness than permission. I can only speculate that the inventors of “seize the day” and “just do it” would have been impressed with the notion. Her pioneering spirit and organizational skills ensured she delivered on many of her ideas.
  4. Characterising time using a visual aid: she invited people to conceptualize the speed of light by how far a signal travels in a nanosecond. She offered people a short length of wire, which she labelled a “nanosecond” – across the internet people still boast about receiving a Nanosecond from Hopper (see the arithmetic just after this list)
  5. Cutting the TV chat-show host David Letterman down to size. Formidable and sometimes brusque, she made an appearance on the Letterman show in 1986 that is still hilarious.
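
For those who like to check the arithmetic behind Hopper’s prop (using the speed of light in a vacuum, roughly 300,000 km per second):

distance per nanosecond = 3.0 × 10^8 m/s × 1.0 × 10^-9 s ≈ 0.3 m

That is about 11.8 inches – the length of wire she handed out.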

A lasting legacy

Later rising to the rank of Rear Admiral, and employed by the Navy until she was 79, Hopper is however best known for being the guiding hand behind COBOL, a project that delivered its first specification in 1959 and found commercial breakthroughs a few years later. Within a decade, the world’s largest (and richest) organisations had invested in mainframe-hosted COBOL data processing systems. Many of them retain that model today, though most of the systems themselves (machinery, language usage, storage, interfaces etc.) have changed almost beyond recognition. Mainframes and COBOL are still running most of the world’s biggest banks, insurers and government departments, plus significant numbers of healthcare, manufacturing, transportation and even retail systems.

Hopper died in 1992 at the age of 85. In 2016 Hopper posthumously received the Presidential Medal of Freedom from Barack Obama. In February 2017, Yale University announced it would rename one of its colleges in Hopper’s honour.

Grace Hopper remains an inspiration for scientists, academics, women in technology, biographers, film-makers, COBOL and computing enthusiasts and pioneers, and anyone who has worked in business computing in the last five decades. We also happen to think she’d like our new COBOL product too. The legacy of technological innovation she embodied lives on.

[1] The environment provided was something called COBOL/2, a PC-based COBOL development system. The vendor was Micro Focus.

Trying to Transform (Part 2): the 420 million mph rate of change

Introduction

Organizations continually have to innovate to match the marketplace-driven rate of change. Readers of the Micro Focus blog site know that I’m continually banging this drum. The pressure seems relentless; some even refer to tsunamis of change. But how fast is it, exactly?

An article in a recent edition of the UK’s Guardian newspaper attempted to gauge what the pace of change actually is, using the tried and tested motoring analogy. Here’s a quote:

“If a 1971 car had improved at the same rate as computer chips, then 2015 models would have had top speeds of about 420 million mph. Before the end of 2017 models that go twice as fast again will arrive in showrooms.” Still trying to keep up? Good luck with that.

Of course this is taking Moore’s law to a slightly dubious conclusion. However, the point holds: the clamour for change – the need for constant reinvention, innovation and improvement – is not letting up any time soon.
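
The arithmetic behind the headline roughly checks out: assuming a 1971 top speed of about 100 mph (my round number, not the Guardian’s) and a doubling every two years – a loose reading of Moore’s law – 22 doublings between 1971 and 2015 give 100 × 2^22, or roughly 420 million mph. Since this is a COBOL blog, here is that back-of-the-envelope calculation as a minimal COBOL sketch:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. MOORE-CAR.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-SPEED-MPH  PIC 9(12) VALUE 100.
       01  WS-YEAR       PIC 9(4)  VALUE 1971.
       PROCEDURE DIVISION.
      *    Double the notional top speed every two years, 1971-2015.
           PERFORM UNTIL WS-YEAR >= 2015
               MULTIPLY 2 BY WS-SPEED-MPH
               ADD 2 TO WS-YEAR
           END-PERFORM
      *    22 doublings: 100 * 2**22 = 419,430,400 mph.
           DISPLAY "Top speed in " WS-YEAR ": " WS-SPEED-MPH " mph"
           STOP RUN.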

The slow need not apply

But how quickly an organisation can achieve the innovation needed to compete in the digitally-enabled marketplace may depend on its IT infrastructure. Clearly, innovation is easier for funky, smaller start-ups with no core systems or customer data to drag along with them. But the established enterprise needn’t be left in the slow lane. Indeed, look at some of the astonishing advances in mainframe performance and any nagging concern that it can’t support today’s business quickly dissipates.

Meanwhile, innovation through smart software can improve speed, efficiency, collaboration, and customer engagement. With the help of the right enabling technology, mainframe shops and other large organizations can match external digital disruption with their own brand of innovation. Because innovation isn’t any one thing, the solution must be as comprehensive as the challenge. So what’s the secret to getting the enterprise up to speed? The answer for many is digital transformation.

Digital what?

OK, Digital Transformation may be a neologism rather than accepted parlance, but the term is common enough that Gartner gets it and it has its own Wikipedia definition:

“Digital transformation is the change associated with the application of digital technology in all aspects of human society”

Our customers have told us they are trying to transform, and while they have different ideas about what digital transformation means to them, Micro Focus is very clear about what it means to us.

Digital transformation is how we help keep our mainframe and enterprise customers competitive in a digital world. It can be tangible – a better mobile app, a better web interface onto a core system, getting into new markets quicker, ensuring a better overall customer experience – or simply doing things better to answer the challenges posed by the digital economy.

For us, the future is a place where, to keep up with change, organizations will need to change the way everything happens. For IT, that means Building smarter systems even faster, continuing to Operate them carefully and efficiently, and keeping the organization’s systems and data – especially the critical mainframe-based information – Secure. These are the things that matter to the CIO, not to mention the rest of the executive team.

This is the practical, business incarnation of innovation, but to us the solution is as smart as it is efficient: realizing new value from old, squeezing extra organizational benefit – increased efficiency, agility and cost savings – from the data and business logic you already own. The pace of change is accelerating, so why opt for a standing start? We suggest you use what is, quite literally, already running.

Talking Transformation

Your digital story is your own journey, but the conversation is hotting up. Hear more by joining us at an upcoming event: taste the Micro Focus flavor of innovation at the forthcoming SHARE event, or join us at the Micro Focus #Summit2017.

New Year – new snapshot: the Arcati Mainframe Yearbook 2017

Introduction

Trends come and go in the IT industry, and predictions often dominate the headlines at the turn of the year, as speculation and no small amount of idle guesswork starts to fill the pages of the IT press. What welcome news, therefore, when Arcati publishes its annual Mainframe Yearbook. Aside from the usual vendor-sponsored material, the hidden gem is the Mainframe User Survey. Testing the water of the global mainframe market, the survey aims to capture a snapshot of what Arcati describes as “the System z user community’s existing hardware and software configuration, and … their plans and concerns for 2017”.

While the sample of 100 respondents is relatively modest, the findings of its survey conducted in November 2016 were well worth a read. Here are a few observations from my reading of the Report.

Big Business

The first data point that jumps off the page is the sort of organization that uses the mainframe. A couple of questions help us deduce an obvious conclusion: the mainframe still means big business. This hasn’t changed, with the study revealing that over 50% of respondents have mainframe estates of over 10,000 MIPS, and nearly half work in organizations of more than 5,000 employees (major sectors include banking, insurance, manufacturing, retail and government). Such organizations have committed to the mainframe: over a quarter have already invested in the new IBM z13 mainframe.

…And Growing

A few other pointers suggest the trend is upward, at least in terms of overall usage. Nearly half are seeing single-digit MIPS growth this year, while nearly a third are witnessing over 10% growth in MIPS usage. For a hardware platform so often cited as being in decline, that’s a significant amount of new workload. While the survey doesn’t make clear what form that increase takes, I’ve published my view on that before. Whatever the reason, it seems unsurprising that the number of respondents who regard the mainframe as a “legacy platform” has actually fallen by 12 percentage points since the previous survey.

Linux is in the (Main) Frame

The survey asked a few questions about Linux in the mainframe arena, and the responses were positive. Linux on z is in play at a third of all those surveyed, with another 13% aiming to adopt it soon. Meantime, IBM’s new dedicated Linux box, LinuxONE, is installed, or planned to be, at a quarter of those surveyed.

Destination DevOps

With a mere 5% of respondents confirming their use of DevOps, the survey at first glance suggests a lack of uptake in the approach. However, with a further 48% planning to use it soon, a majority of respondents are on a DevOps trajectory. This is consistent with Gartner’s 2015 prediction that 45% of enterprises would be planning to adopt DevOps (see my blog here). Whatever the numbers turn out to be, the trend looks set to become an inextricable part of the enterprise IT landscape.

Cost of Support

On the line of questioning around the cost of support across the various platforms, it seems only worth mentioning that the author noted “Support costs of Linux and Windows were growing faster than the mainframe’s”. The questions around “support”, however, did not extend to available skills, or to the training programs and other investments needed to ensure support can continue.

Future considerations?

It is hard to make any material observations about the mainframe in the broader enterprise IT context, because there were no questions around multi-platform applications or workload balancing – where a hybrid platform model, with a mainframe at its core, serves a variety of business needs, applications and workload types. So often, the mainframe is the mother ship, but by no means the only enterprise platform. For the next iteration of the survey, further lines of questioning around workload, skills, security and cloud would be sensible additions.

Conclusion

There are a small number of important independent perspectives on the mainframe community, about which we report from time to time, and Arcati is one such voice. The survey reflects an important set of data about the continued reliance upon and usage of the mainframe environment. Get your copy here.

Another such community voice is, of course, the annual SHARE event. This year it takes place in San Jose, California. Micro Focus will be there, as part of the mainframe community. See you there.

Trying to Transform

Here’s an interesting statistic. According to a report, only 61 of the Fortune 500 top global companies have remained on that illustrious list since 1955. That’s only 12%. It’s not unreasonable to extrapolate that 88% of the Fortune 500 of 2075 will be different again. That’s over 400 organizations that won’t stand the test of time.

What do such sobering prospects mean for the CEO of a major corporation? Simple: innovation. Innovation and transformation – the relentless treadmill of change and the continuous quest for differentiation – are what an organization will need for a competitive edge in the future.

But in this digital economy, what does transformation look like?

Time for Change

Key findings from a recent report (the 2016 State of Digital Transformation, by research and consulting firm Altimeter) shared the following trends affecting organizational digital transformation:

  • Customer experience is the top driver for change
  • A majority of respondents see the catalyst for change as evolving customer behaviour and preference. A great number still see that as a significant challenge
  • Nearly half saw a positive result on business as a result of digital transformation
  • Four out of five saw innovation as top of the digital transformation initiatives

Much of this is echoed by The Future of Work, a study commissioned by Google.

The three most prevalent outcomes of adopting “digital technologies” were cited as:

  • Improving customer experience
  • Improving internal communication
  • Enhancing internal productivity

More specifically, the benefits experienced from adopting digital technology were cited as:

  • Responding faster to changing needs
  • Optimizing business processes
  • Increasing revenue and profits

Meanwhile, the report states that the digital technologies that are perceived as having the most future impact were a top five of Cloud, Tablets, Smartphones, Social Media and Mobile Apps.

So, leveraging new technology, putting the customer first, and driving innovation all seem to connect, yielding tangible benefits for organizations seeking to transform themselves. Great.

But it’s not without its downsides. None of this, alas, is easy. Let’s look at some of the challenges cited in the same studies, and reflect on how they could be mitigated.

More Than Meets The Eye?

Seamlessly changing to support a new business model or customer experience is easy to conceive. We’ve all seen the film Transformers, right? But in practical, here-and-now IT terms, this is not quite so simple. What are the challenges?

The studies cited a few challenges: let’s look at some of them.

Challenge: What exactly is the customer journey?

In the studies, while a refined customer experience was seen as key, 71% saw understanding customer behaviour as a major challenge. Unsurprisingly, only half had mapped out the customer journey. More worrying is that after a poor digital customer experience, over 90% of unhappy customers won’t complain – but they won’t return either (source: www.returnonbehaviour.com).

Our View: The new expectation of the digitally-savvy customer is all important in both B2C and B2B. Failure to assess, determine, plan, build and execute a renewed experience that maps to the new customer requirement is highly risky. That’s why Micro Focus’ Build story incorporates facilities to map, define, implement and test against all aspects of the customer experience, to maximize the success rates of newly-available apps or business services.

Challenge: Who’s doing this?

The studies also showed an ownership disparity. Some digital innovation is driven from the CIO’s organization (19%), some from the CMO’s (34%), and the newly-emerging Chief Digital Officer (15%) is also getting some of the funding and remit. So who’s in charge, where’s the budget, and is the solution comprehensive? These are all outstanding questions in an increasingly siloed digital workplace.

Our View: While there may be organizational barriers, a culture of collaboration and inclusiveness can be reinforced by appropriate technology, which provides visibility and insight into objectives, tasks, issues, releases and test cases, not to mention the applications themselves. This builds stronger ties between all stakeholder groups, across a range of technology platforms, as organizations seek to deliver faster.

Challenge: Are we nimble enough?

Rapid response to new requirements hinges on how fast, and frequently, an organization can deliver new services. Fundamentally, it requires an agile approach – yet 63% saw a challenge in their organization being agile enough. Furthermore, the new DevOps paradigm is not yet the de-facto norm, much as many would want it to be.

Our View: Some of the barriers to success with Agile and DevOps boil down to inadequate technology provision, which is easily resolved – Micro Focus’ breadth of capability up and down the DevOps tool-chain directly tackles many of the most recognized bottlenecks to adoption, from core systems appdev to agile requirements management. Meanwhile, the culture changes of improved teamwork, visibility and collaboration are further supported by open, flexible technology that ensures everyone is fully immersed in and aware of the new model.

Challenge: Who’s paying?

With over 40% reporting strong ROI results, the cost-effectiveness of any transformation project remains imperative. A lot of CapEx is earmarked, and there needs to be a return on it. With significant bottom-line savings seen by a variety of clients using its technology, Micro Focus’ approach is always to plan how such innovation will pay for itself in the shortest possible timeframe.

Bridge Old and New

IT infrastructure, and how it supports an organization’s business model, is no longer the glacial, lumbering machine it once could be. Business demands rapid response to change. Whether it’s building new customer experiences, establishing and operating new systems and devices, or ensuring clients and the corporation protect key data and access points, Micro Focus continues to invest to support today’s digital agenda.

Of course, innovation or any other form of business transformation will take on different forms depending on the organization, geography, industry and customer base, and looks different to everyone we listen to. What remains true for all is that the business innovation we offer our customers enables them to be more efficient, to deliver new products and services, to operate in new markets, and to deepen their engagement with their customers.

Transforming? You better be. If so, talk to us, or join us at one of our events soon.

We Built This City on…DevOps

Hull’s history is more industrial than inspirational, so a few eyebrows were raised when it won the bid to become the UK’s City of Culture for 2017. Unlikely or not, it is now true, and the jewel of the East Riding is boasting further transformation as it settles into its new role as a cultural pioneer. Why not? After all, cultures change, attitudes change. People’s behaviour, no matter what you tell them to do, will ultimately decide outcomes. Or, as Peter Drucker put it, culture eats strategy for breakfast.

As we look ahead to other cultural changes in 2017, the seemingly ubiquitous DevOps approach looks like a change that has already made it to the mainstream.

But there remains an open question about whether implementing DevOps is really a culture shift in IT, or whether it’s more of a strategic direction. Or, indeed, whether it’s a bit of both. I took a look at some recent industry commentary to try to unravel whether a pot of DevOps culture would indeed munch away on a strategic breakfast.

A mainstream culture?

Recently, I reported that Gartner predicted about 45% of the enterprise IT world was on a DevOps trajectory. 2017 could be, statistically at least, the year when DevOps goes mainstream. That’s upheaval for a lot of organizations.

We’ve spoken before about the cultural aspects of DevOps transformation: in a recent blog I outlined three fundamental tenets of the cultural tectonic shift required for larger IT organizations to embrace DevOps:

  • Stakeholder Management

Agree the “end game” of superior new services and customer satisfaction with key sponsors, and position DevOps as a vehicle to achieve it. Make clear that in today’s digital age it is imperative for the IT team (the supplier) to engage more frequently with its users.

  • Working around Internal Barriers

Hierarchies are hard to break down, and a more nimble approach is often to establish cross-functional teams to take on specific projects that are valuable to the business, but relatively finite in scope, such that the benefits of working in a team-oriented approach become self-evident quickly. Add to this the use of internal DevOps champions to espouse and explain the overall approach.

  • Being Smart with Technology

There are a variety of technical solutions available to improve development, testing and the efficiency of collaboration for mainframe teams. Hitherto deal-breaking delays and bottlenecks caused by older procedures and even older tooling can be removed simply by being smart about what goes into the DevOps tool-chain. Take a look at David Lawrence’s excellent review of the new Micro Focus technology to support better configuration and delivery management of mainframe applications.

In a recent blog, John Gentry talked about the “Culture Shift” foundational to a successful DevOps adoption. The SHARE EXECUForum 2016 show held a round-table discussion specifically about the cultural changes required for DevOps. Culture clearly matters. However, these and Drucker’s pronouncements notwithstanding, culture is only half the story.

Strategic Value?

The strategic benefit of DevOps is critical. CIO.com recently talked about how DevOps can help “redefine IT strategy”. After all, why spend all that time on cultural upheaval without a clear view of the resultant value?

In another recent article, the key benefits of DevOps adoption were outlined as:

  • Fostering Genuine Collaboration inside and outside IT
  • Establishing End-to-End automation
  • Delivering Faster
  • Establishing closer ties with the user

Elsewhere, an overtly positive piece by Automic gave no fewer than 10 good reasons to embrace DevOps, including fostering agility, saving costs, turning failure into continuous improvement, removing silos, finding issues more quickly and building a more collaborative environment.

How such goals become measurable metrics isn’t made clear by the authors, but the fact remains that most commentators see significant strategic value in DevOps. Little wonder that this year’s session agenda at SHARE includes a track called DevOps in the Enterprise, while the events calendar for 2017 looks just as busy again with DevOps shows.

Make It Real

So far, that’s a lot of talk and not a lot of specific detail. Changing organizational culture is so nebulous as to be almost indefinable – shifting IT culture toward a DevOps-oriented approach covers such a multitude of factors, in terms of behaviour, structure, teamwork, communication and technology, that it’s worthy of study in its own right. Strategically, transforming IT into a DevOps shop requires significant changes in flexibility, efficiency and collaboration between teams, as well as an inevitable refresh of the underlying “tool chain”, as it is often called.

To truly succeed at DevOps, one has to look at the specific requirements and desired outcomes: being able to work out specifically, tangibly and measurably what is needed, and how it can be achieved, is critical. Without this, you have a lot of change and little clarity on whether it does any good.

Micro Focus’ recent white paper “From Theory to Reality” (download here) discusses the joint issues of cultural and operational change as enterprise-scale IT shops look to gain benefits from adopting a DevOps model. It cites three real customer situations where each has tackled a specific situation in its own way, and the results of doing so.

Learn More

Each organization’s DevOps journey will be different, and must meet specific internal needs. Why not join Micro Focus at the upcoming SHARE, DevDay or #MFSummit2017 shows to hear how major IT organizations are transforming the way they deliver value through DevOps, with the help of Micro Focus technology.

If you want to build an IT service citadel of the future, it had better be on something concrete. Talk to Micro Focus to find out how.

Linux – the new workload workhorse

Linux continues to gain in popularity, and there are more deployments each year, even in the mainframe world. What’s driving all the interest and, frankly, all the workload? We deployed Derek Britton to find out.

Reports suggest that there continues to be a significant uptick in the number of deployments onto Linux servers worldwide. In November 2016’s IBM Systems Magazine article “Why more z Systems customers are running Linux”, for example, we are told that “nearly 50 percent of z Systems clients are using Linux”. We also know that Linux overtook other UNIX systems in terms of market share as far back as 2013 (source: Linux Foundation).

Meanwhile, the 11th annual BMC Mainframe Market survey (source: BMC) reports that 67% of mainframe organizations have witnessed increasing capacity this year, with the percentage of respondents using Linux in production rising to 52%.

Now, across the broader market, which incarnation of Linux might be chosen is a topic all of its own. Running enterprise versions of Red Hat, SUSE or Oracle variants in the data center is one option; so is a Linux-based cloud deployment, the ground-breaking LinuxONE technology or the new Linux on Power platform from IBM, or indeed running a Linux partition on the mainframe. The flexibility, choice and power are certainly there to leverage.


Why Now?

One of the obvious questions this throws up is: what sort of workload is being deployed onto Linux? Or, put another way, what is driving organizations and their IT teams to choose Linux (or any other modern environment, for that matter) as a production environment? The aforementioned IBM Systems Magazine article confirmed that IBM has (Linux) clients “doing payments, payroll, mobile banking … critical applications”. It goes without saying that some production workload is much more at home on z/OS, but IBM sensibly provides the options the market is clearly looking for in the digital age.

And tempting as it might be to talk about all the benefits of Linux, open source and other recent innovations from the vendors, this isn’t what drives change. Businesses drive IT innovation – changes in circumstances are behind many of the smartest IT decisions. Necessity is the mother of invention, or in this case innovation. So what are those needs?

Accelerating Market Footprint

One of our clients was looking at branching out into new territories. Their core systems needed to be replicated across new data centers in each country, a fairly typical situation. However, the uniqueness and scale of the operation made it difficult to provision IT operations as quickly as the business plan demanded. They were looking for a faster way to get tried-and-trusted IT systems up and running, supporting their new regional centers.

Smart Data Compliance

A financial services client was also looking at international expansion. However, due to data privacy laws in the new region, they were unable to manage the new operation from their head office. Instead, they needed to establish the right – low-scale, yet compatible – IT footprint in the new region. The question, therefore, was: what viable options could replicate existing mainframe business functionality at a lower scale?


Reaching New Clients

A very successful mainframe applications provider with an aggressive growth strategy was looking for further market opportunity. They identified that their existing deployment model precluded the market penetration their growth plans required. One important option was to investigate reaching clients in their market who were not currently using their prescribed deployment platform. Simply put, they needed to explore more platform options to support market growth.

Getting Fit for Purpose

New demands from fresh, critical workload create questions about priority and bandwidth. Some clients we know have adopted a headlong approach to big data and in-line analytics; their view is that there is no place better than z/OS to run these core operations. The question this creates is how to provision the necessary headroom without incurring unplanned costs. Of course, there’s always a commercial answer, but oftentimes the capacity available on Linux is simply waiting to be leveraged. And not all traditional z/OS workload is equally important – some of it may be there through historical circumstance. It then becomes a question of choices: moving standalone, lower-priority workload elsewhere might be viable, and would support higher-priority z/OS projects.

Flexibility is Key…

The above scenarios represent real situations faced by large enterprises. What do all these drivers have in common? Probably the simplest label is flexibility. Responding to change, rapidly, is driving IT innovation. The demand is to find smart ways to deliver bomb-proof systems – core applications that already add value and already support the business – into new channels, quickly: going into a new territory, splitting data centers, reaching new clients, sometimes where the traditional platform isn’t appropriate for the model. Linux makes sense as a viable, enterprise-scale solution in a growing range of cases.

…and so is the application!

For so many of the world’s largest IT organizations, applications literally mean business. They keep operations ticking over, and without them the organization would be unable to function. Many of those systems have been relied upon for years, built on the solid foundation of the COBOL language. COBOL’s continued evolution in its sixth decade, and Micro Focus’ unrivalled support for COBOL across dozens of leading platforms, mean that when bullet-proof core systems need contemporary levels of flexibility, COBOL and Linux are the natural, low-risk option. It’s no wonder that Micro Focus sees more Linux deployments of COBOL applications than ever.

Conclusion

Is Linux alone here? Not at all. One could easily argue that other UNIX variants and Windows are viable production systems for many application workloads. That’s not the argument here – platform viability is the choice of the customer. What’s important is that organizations need to be able to make smart decisions to support rapid business change. Advancements in technology such as Linux, alongside the power and portability of COBOL, help them do just that.

Micro Focus #DevDay doubles-down in Dallas

The #COBOL community roadshow continued recently as Micro Focus #DevDay landed in Dallas, TX. But this time was special – there were two events instead of one. Derek Britton went along to find out more.

A numbers game

Just as COBOL processes some of the most important numeric transactions globally, we learned of some telling statistics at the most recent #DevDay – held this month in Dallas.

Very interestingly, the show started with an award for Dallas – the most frequent host of #DevDay events. This was Micro Focus’ 4th time in as many years hosting a COBOL community meeting in Dallas, and over 200 delegates have attended our Dallas-hosted events in the last few years. Of course, Dallas is only part of a major global program – Micro Focus has hosted nearly forty #DevDay customer meetings since the program started a few years ago.


But these numbers are dwarfed by the next: thousands of customers use Micro Focus’ COBOL technology today. What do they have in common? They are all committed to using the right tools to build the next generation of core business applications, to run wherever they need to run. This community also includes over one thousand Independent Software Vendors who have chosen COBOL as their language platform for the scalability, performance and portability their commercial packages need.

Last year we asked that global community for their thoughts on the language. An overwhelming 85% said COBOL remains strategic in their organization. However, two-thirds of the same group said they were looking to improve the efficiency of how they deliver those applications.

We also heard that this global COBOL community is supported by Micro Focus’ $60M investment each year, made across a range of COBOL and related technology products. This week, we also saw where some of that investment goes. One way of explaining it is by product area: our technology serves two communities, and it was those two communities who held separate #DevDay meetings in the same location.

Micro Focus #DevDay

The Micro Focus #DevDay event is no stranger to our blog site. It is designed with the Micro Focus customer community in mind – showcasing latest products such as Visual COBOL and Enterprise Developer to the traditional Micro Focus user base.

Highlights of the Dallas session included a major focus on key new technical innovations. The first of these explored building REST-based services in a managed-code world using COBOL. Our experts demonstrated the simple steps to build, for example, a mobile payment system, using trusted COBOL routines and a simple RESTful integration layer. They later demonstrated newly available support for advanced CICS Web Services, connecting trusted mainframe systems to new digital devices through a seamless, modern interface.
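
To give a flavour of the pattern – a hypothetical sketch of my own, not the code shown on the day – the COBOL side typically stays a plain callable routine, while the RESTful layer, generated by the tooling rather than hand-written, maps JSON request fields onto its linkage parameters:

       IDENTIFICATION DIVISION.
      * Hypothetical payment-validation routine. A generated REST
      * wrapper would map incoming JSON fields onto the LINKAGE
      * parameters and return LNK-STATUS in the response.
       PROGRAM-ID. PAYCHECK.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-LIMIT       PIC 9(7)V99 VALUE 5000.00.
       LINKAGE SECTION.
       01  LNK-ACCOUNT    PIC X(12).
       01  LNK-AMOUNT     PIC 9(7)V99.
       01  LNK-STATUS     PIC X(2).
       PROCEDURE DIVISION USING LNK-ACCOUNT LNK-AMOUNT LNK-STATUS.
      *    "00" = approved, "40" = bad request, "42" = over limit.
           EVALUATE TRUE
               WHEN LNK-ACCOUNT = SPACES OR LNK-AMOUNT = ZERO
                   MOVE "40" TO LNK-STATUS
               WHEN LNK-AMOUNT > WS-LIMIT
                   MOVE "42" TO LNK-STATUS
               WHEN OTHER
                   MOVE "00" TO LNK-STATUS
           END-EVALUATE
           GOBACK.

The point the presenters made stands either way: the trusted COBOL logic needs no rewriting – the service interface is layered on top of it.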


We also heard news of the latest product releases: versions 2.3.2 of Enterprise Developer and Visual COBOL, newly available, include a range of major enhancements plus support for new environments such as Linux on IBM Power Systems and Windows 10. Some delegates got a chance to test drive the new version themselves in the hands-on lab.

The #DevDay event continues to be hugely successful and touches down next in December, in Chicago.

Acu #DevDay

The Acu COBOL technology is an established product line, originally acquired from AcuCorp, which joined the Micro Focus family just over a decade ago. The Acu range, now known as extend, boasts thousands of users.

Arguably the highlight of the day was the announcement of the brand-new Acu2Web capability. Available to participating clients as part of the extend 10.1 product Beta program, Acu2Web demonstrates Micro Focus’ commitment to a digital future for its Acu COBOL technology, and solves a genuine market need. The challenge was a real one – a community one: access to the same core COBOL application system, from any device, with any interface, on any system, behaving the same way, using the same setup. In yesteryear this was a limited, albeit complex, engineering task; the problem has since been exacerbated beyond all recognition by the proliferation of new devices and platforms, all of which need to access trusted back-end systems.

This was the challenge we set ourselves – and that’s what we’ve built into our latest Acu extend technology: a seamless, transparent access mechanism to core Acu-built COBOL apps from any device. The Acu2Web facility builds the wiring and plumbing for any access point, no matter where it is.


Acu2Web is one of the new exciting capabilities being made available in extend 10.1. The Beta program is underway to qualifying clients. The roadmap milestones outlined during the event give a 10.1 release date in early 2017.

A global community… supported globally

The focused customer technology event is an important community touch-point for Micro Focus – but it certainly isn’t the only one. The same community thrives online, not least at Community.microfocus.com. Available to all, this forum provides tips and tricks for technology usage, including suggestions from technical staff, consultants and customers alike. Importantly, product areas such as Acu have their own dedicated pages.


Through the community, our social media sites, and our academic program, Micro Focus continues to fly the flag for COBOL skills. Just shy of 400 higher education establishments are teaching their students COBOL with Micro Focus COBOL products, building the next generation of COBOL talent.

In Summary

#DevDays are the perfect opportunity to witness the significant new product capabilities now available to our clients. Both product sets have undergone transformational updates to directly address real market demand.

I caught up with the host of the Micro Focus Developer Days, Ed Airey, who summarised Micro Focus’ approach: “We are proud to host events that bring our entire COBOL development community together, to exchange ideas, learn new capabilities, and explore how to embrace future needs using modern technology. We remain committed to our community and look forward to more events of this nature in the future”.

Two product lines; one global COBOL community.

Find more about how our products can support you at www.microfocus.com

Great technology never gets old – Linux celebrates 25 years!

As Linux celebrates its 25th birthday, there’s plenty of good cheer going round. Derek Britton grabs a slice of cake and looks into a few of the reasons to celebrate.

Happy 25th Birthday Linux

It’s quite hard to imagine a world without Linux in it, but one of the industry’s de-facto standard operating environments has just reached its quarter-century. This blog looks at the story of how we got here.

In the IT world of 1991, the desktop market was just blossoming, the personal computer was becoming more powerful, Intel was driving Moore’s law forward with reckless abandon, and Microsoft was readying an exciting new development that would hit the streets a year later, called Windows. The server market was also expanding: an interminable list of organizations including IBM, HP, Sun, TI, Siemens, ICL, Sequent, DEC, SCO, SGI and Olivetti were building proprietary chips, machines and UNIX variants. UNIX had already enjoyed significant success since making the leap from academia to commerce, and everyone was trying to get a share of the spoils.

Faced with such a crowded market, how did Linux take off?

The phenomenon that was the Linux revolution has been ascribed to a number of factors, including the market desire for choice, technical freedom, and value for money.

The products on the market at the time were entirely proprietary and cost a lot of money. Vendor lock-in and expensive contracts were not all that appealing to CIOs looking to derive value from their investments in what were ironically referred to as “open systems” (given the proprietary nature of the systems in question).

Linux plugged the gap in the market for true openness. Because ownership was in the hands of the community, there were no proprietary elements. And the open source nature of the kernel meant that, provided you had a piece of suitable hardware, Linux was basically free to use.


Technical Altruism

The creator of Linux, Linus Torvalds, set about improving on other UNIX kernels available at the time, but took the stance that the project should be entirely open. While the idea was his, he merely wanted to invite others to help the idea take root. Indeed, Torvalds’ own view of the name was that it sounded too egotistical, and for the first six months of the project the working title was FREAX (an amalgam of “free”, “freak” and “x”). Only later did he accept that Linux might work better.

Whether such altruism would yield any fruit is easy enough to quantify. Recently, the Linux Foundation released its Linux Kernel Development report, showing that more than 13,500 developers from 1,300 companies have contributed to the Linux kernel since 2005. Moreover, it isn’t just hobbyist techies in academic labs: the same report indicates that the top organizations sponsoring Linux kernel development since the last report (published in March 2015) included industry giants such as Intel, Red Hat, Samsung, SUSE, IBM, Google, AMD and ARM.

Linux – A Global Player

So much for contributions to the kernel itself, but what about the whole environment, and what about deployments in industry? Did Linux make any headway in the commercial world? Of course the answer is resoundingly affirmative.

Consider just a few of the Linux implementations:

  • Thousands of major commercial, academic and governmental organizations are now Linux devotees
  • The number of users of Linux is estimated at 86 million, according to SUSE.com
  • Android, the de-facto mobile device environment, is Linux-based
  • The world’s most powerful supercomputers are Linux-based
  • Some of the world’s largest companies, including Amazon and Google, rely heavily on Linux-based servers

Little wonder, then, that in 2013 Linux overtook the combined market share of proprietary UNIX systems.

But if it’s open source, who will pay for its future?

The question of whether an open source (read: free) environment can be commercially sustainable must also be answered. Arguably the best way to do this is to look at the health of the organizations who seek to make Linux a commercially viable product: the vendors of the various Linux distributions, such as SUSE, Red Hat and Oracle.

Looking at the health of the Linux line of business in each case, we see highly profitable organizations with trend-beating revenue growth in a tough market sector.

Consider all the other players in the sector and their commitment to Linux. IBM has invested millions of dollars in Linux, introducing a new range of Linux-only mainframes branded as LinuxONE. Meanwhile, in what might have seemed unthinkable a few years ago, Windows vendor Microsoft has launched partnerships with Linux vendors including SUSE and Red Hat to provide collaborative cloud hosting solutions.


Now it’s old, we need to get rid of it, right?

Well, we’ve heard it all before, haven’t we? It’s getting on a bit, so we need to replace it. Like mainframes, like COBOL, like CICS, like Java. These technologies have all enjoyed significant anniversaries recently, and in not one single case can you justifiably argue that the age of the technology warrants discontinuing it. Most of the ideas may have been formed some time ago, but, not unlike Linux, in each case the community and vendors responsible have continued to enhance, improve and augment the technology to keep it relevant, up to date and viable for the modern era.

In technology, the myth that age implies a lack of value has it exactly backwards. In IT, age demonstrates value.

No surprises.

At Micro Focus, we love Linux, and we’re not surprised by its success. We have long advocated the use of innovative technology to help support valuable existing IT investments. Systems and applications that run businesses should be supported, enhanced, innovated and modernized – at low cost and low risk. That’s what Micro Focus does. Whether it’s with the applications themselves or with the underlying operating environment, building and operating today’s and tomorrow’s digital infrastructure is what we do best.

Indeed, speaking of birthdays, Micro Focus is 40 this year. Enduring value is no stranger to us. Now, who brought the candles?

The true cost of free

There is always a low-cost vendor offering something for free to win market share. In enterprise IT, it is worth examining what free really means. Derek Britton goes in search of a genuine bargain.

Introduction

IT leaders want to help accelerate business growth by implementing technology to deliver value quickly. They usually stipulate in the same breath the need for value for money. The pursuit of the good value purchase is endless. No wonder then that vendors who offer “use our product for free” often get some attention. This blog looks at the true cost of ‘free’.

Measuring Value

We all use desktop or mobile apps which, if they stopped working – and let’s face it, they do from time to time – wouldn’t really matter to us. We would mutter something, roll our eyes, and re-start the app. That’s not to say that people aren’t annoyed if they’ve not saved some important work when their application stops, but typically the impact is nothing more than a briefly disgruntled user.

But if an application is doing something critical or strategically important for an organization, then it sits higher up the value scale: an ATM application, savings account management, package logistics, money transfers, credit checks, insurance quotes, travel bookings, retail transactions. What if it went wrong? What if you also needed it to run elsewhere? What value would you put on that? Vitally, what would happen to the organization if you couldn’t do those things?


Get it for free

Application development tooling and processes tend to incur a charge, as the link between the technology and the valuable application is easily made. However, additional technology is required to deploy and run the applications once built, and here the enticement of a “free” product is very tempting. After all, why should anyone pay to run an application that’s already been built? Many technology markets have commoditised to the point where relative prices have fallen significantly, and inevitably some vendors are trying the “free” route to win market share.

But for enterprise-class systems, one has to consider the level of service being provided with a “free” product. Here’s what you can expect.

Free deployment typically comes with no responsibility if something goes wrong with the production system. Internal IT teams must therefore be prepared to respond themselves when applications stop working, or find an alternative means of insuring against that risk.

A free product means, inevitably, that no revenue is generated for the vendor, which means reinvestment in future innovation or customer requirements is squeezed. The choice of platforms may be limited, for example, or some 3rd-party software support or certification missing. Soon enough, an enticing free product starts to look unfit for purpose due to missing capability or missing platform support.

Another typical area of exposure is customer support, which is likely to be thin on the ground because there is insufficient funding for the emergency assistance provided by a customer support team.

In a nutshell, if the business relies on robust, core applications, what would happen if something goes wrong with a free product?

An Open and Shut Case?

Consider open source and UNIX. When UNIX was a collection of vendor-specific variants, each tied to its own machinery (AIX, Solaris, HP-UX, UnixWare/SCO), there was no true “open” version of UNIX and no standard. The stage was set for someone to break the mould. Linus Torvalds created a new, open source operating system kernel, free to the world, and many different people have contributed to it: technology hobbyists, college students, even major corporations. Linux today represents a triumph of transparency, and Linux and open source are here to stay.

However, that’s not the whole story. It still needed someone to recognize the market for a commercial service around this new environment. Without the support service offered by SUSE, Red Hat and others, Linux would not be the success it is today.

Today, major global organizations use Linux for core business systems, and Linux now outsells other UNIX variants by some distance. Why? Not just because it was free or open source, but because the service around it offered organizations good value. People opt to pay for support because their organizations must be able to rectify any problems – which is where organizations such as SUSE and Red Hat come in. Linus Torvalds was the father of the idea, but SUSE, Red Hat and their competitors made it a viable commercial technology.

Genuine return

Robust, valuable core applications require certain characteristics to mitigate any risk of failure – risk that would be unacceptable for higher-value core systems. Of course, many such systems are COBOL-based. The criteria might include:

  • Access to a dedicated team of experts to resolve and prioritize any issues those systems encounter
  • Choice of platform – to be able to run applications wherever they are needed
  • Support for the IT environment today and in the future – certification against key 3rd party technology
  • A high-performance, robust and scalable deployment product, capable of supporting large-scale enterprise COBOL systems

The Price is Right

Robust and resilient applications are the lifeblood of the organization. With four decades of experience and thousands of customers, Micro Focus provides an award-winning 24/7 support service. We invest over $50M each year in our COBOL and related product research and development. You won’t find a more robust deployment environment for COBOL anywhere.

But cheap alternatives exist. The question one must pose, therefore, is: what does free really cost? When core applications are meant to work around your business needs – not the other way around – any compromise on capability, functionality or support introduces risk to the business.

Micro Focus’ deployment technology ensures that business-critical COBOL applications that must not fail will work whenever and wherever needed, and will continue to work in the future; and that if something ever goes wrong, the industry leader is just a mouse click away.

Anything that is free is certainly enticing, but does zero cost mean good value? As someone once said, “The bitterness of poor quality remains long after the sweetness of low price is forgotten”.

DevOps – pressing ahead

In an IT world that seems to be accelerating all the time, the clamour for faster delivery practices continues. Derek Britton takes a quick look at recent press and industry reports.

Introduction

In many customer meetings I tend to notice the wry smiles when the discussion turns to the topic of IT delivery frequency. The truth is, I can’t recall any conversation where the client has been asked to deliver less to the business than last year. No one has told me, “we’re going fast, and it’s fast enough, thanks”.

The ever-changing needs of an increasingly-vocal user community guarantees that IT’s workload continues to be a challenge. And this prevails across new systems of engagement (mobile and web interfaces, new user devices etc.) as well as systems of record (the back-office, data management, number crunching business logic upon which those systems of engagement depend for their core information).

Moving at pace, however, needs to be carefully managed: less haste, more speed, in fact. Gartner says a quarter of the Global 2000 top companies will be using DevOps this year. Let’s look to another deadline-driven entity, the press, for a current view.


Banking on DevOps

Speaking to over 400 delegates at a DevOps conference in London, ING Bank global CIO Ron van Kemenade says investment in new skills and a transition to DevOps is critical as the bank adjusts to a mobile and online future through its “Think Forward” digital strategy.

“We wanted to establish a culture and environment where building, testing and releasing software can happen rapidly, frequently and more reliably. When beginning this journey we started with what matters most: people,” van Kemenade says.

Putting the focus on engineering talent and creating multi-disciplinary teams where software developers partner with operations and business staff has led to more automated processes, a sharp reduction of handovers and a “collaborative performance culture”, he adds.

Speaking at the same event, Jonathan Smart, head of development services at Barclays, talked up an eighteen-month push by the bank to incorporate agile processes across the enterprise.

Over the past year and a half, the amount of “strategic spend” going into agile practices and processes has risen from 4% to more than 50%, says Smart, and the company now has over 800 teams involved.

To accelerate its own transformation, BBVA has adopted a new corporate culture based on agile methodologies. “The Group needs a cultural change in order to accelerate the implementation of transformation projects. It means moving away from rigid organizational structures toward a more collaborative way of working”, explains Antonio Bravo, BBVA’s Head of Strategy & Planning. “The main goal is to increase the speed and quality of execution.”

Worth SHARing

Little wonder that the IBM mainframe community organization, SHARE, is continuing a significant focus on DevOps at the forthcoming August 2016 show in Atlanta. Tuesday’s keynote speech is called “z/OS and DevOps: Communication, Culture and Cloud”, given by members of the Walmart mainframe DevOps team.

Meanwhile, an article featured in Datamation, and tweeted by SHARE, provides further evidence and arguments in favour of adopting the practice. It cites the “2016 State of DevOps Report”, which says, “[Developers using DevOps] spend 22 percent less time on unplanned work and rework, and are able to spend 29 percent more time on new work”.


Time to Focus

Of course, Micro Focus are neither strangers to SHARE nor to DevOps. At a recent SHARE event, we attended the DevOps discussion panel, discussing technical, operational and cultural aspects.

More recently, Micro Focus’ Solution Director Ed Airey penned an informative article published in SD Times, outlining a smart approach to mainframe DevOps. The rationale, he says, is simple: competitive pressure to do more.

“Competitive differentiation depends on [organizations’] ability to get software capabilities to market quickly, get feedback, and do it again”

Addressing major challenges to make DevOps a reality, in both mainframe and distributed environments, Airey talks about how major question marks facing DevOps teams can be tackled with smart technology, and refined process; questions such as collaboration, development process, culture, skills, internal justification. He concludes with encouraging projected results, “Standardizing on common tooling also enables productivity improvements, sometimes as high as 40%.”

Of course, not everyone is convinced

Modern delivery practices aren’t for everyone, and some of the issues can sound quite daunting. Take Cloud deployment, for example. A recent TechCrunch article certainly thought so.

We are treated to a variety of clichés, such as “ancient realm” and “the archaic programs”. However, the publication failed to notice some important things about the topic.

Central to the piece is whether existing COBOL-based systems could be “moved” to another platform, the inference being that this would be an unprecedented, risky exercise. What is perhaps surprising, to the author at least, is that platform change is no stranger to COBOL. Micro Focus has supported over 500 platforms since its inception 40 years ago, and thanks to our investment the COBOL language is highly portable. Perhaps most importantly in this case, platforms such as the Cloud, and more specifically Red Hat (alongside SUSE, Oracle and many other brands of Linux and UNIX), are fully supported by the Micro Focus range. That is to say, there was never any issue moving COBOL to these new platforms: you just need to know who to ask.


Moving Ahead

Anyway, I can’t stop for long: we’re moving fast ourselves, continuing the DevOps discussion. Upcoming deadlines? Find us at SHARE in Atlanta in August, visit us at a DevDay in the near future, or catch up with us on our website, where we’ll be talking more about DevOps and smarter mainframe delivery soon.

Start over, or with what you know?

Derek Britton’s last blog looked at the appetite for change in IT. This time, he looks at real-world tactics for implementing large-scale change, and assesses the risks involved.

Introduction

In my recent blog I drew upon overwhelming market evidence to conclude that today’s IT leadership faces unprecedented demand for change in an age of bewildering complexity. That “change”, however, can arrive in many shapes and forms, and the choice of strategy may differ according to a whole range of criteria – technical investments to date, available skills, organizational strategy, customer preference, marketing strategy, cost of implementation, and many more besides. This blog explores and contrasts a couple of the options IT leaders have.

Starting Over?

Ever felt like just starting over? Given the difficulty of changing complex back-end IT systems, when staffing is so tight, the pressure to change is so high, and the backlog is ever-growing, there is a point at which the temptation to swap out the hulking, seething old system for something new, functional and modern will arrive.

Sizing Up the Task

We’re sometimes asked by senior managers in enterprise development shops how they should assess whether to rewrite or replace a system, versus keeping it going and modernizing it. They sense there is danger in replacing the current system, but can’t quantify to other stakeholders why that is.

Of course, it is impossible to give a simple answer for every case, but there are some very common pitfalls in embarking on a major system overhaul. These can include:

  • High Risk and High Cost involved
  • Lost business opportunity while embarking on this project
  • Little ‘new’ value in what is fundamentally a replacement activity

This is a rather unpleasant list, and the ramifications across the industry are all too stark. Here are just a few randomly selected examples of high-profile project failures, where major organizations attempted a major IT overhaul:

  • State of Washington pulled the plug on their $40M LAMP project, which was six times more expensive than the original system
  • HCA ended their MARS project, taking a $110M-$130M charge as a result
  • State of California abandoned a $2 billion court management system, as well as a five-year, $27 million plan to develop a system for keeping track of the state’s 31 million drivers’ licenses and 38 million vehicle registrations
  • The U.S. Navy spent $1 billion on a failed ERP project

Exceptional Stuff?

OK, so there have been some high-profile mistakes. But might they be merely the exception rather than the rule? Another source of truth is those who spend their time following and reporting on the IT industry, and two such organizations, Gartner and the Standish Group, have reported more than once on the frequency of failed overhaul projects. A variety of studies over the years keeps coming back to the risks involved: failure rates of up to 70% are cited in analyst studies of core system rewrites.

Building a case for a rewrite

Either way, many IT leaders will want specific projections for their own business, not abstract or vague examples from elsewhere.

Let’s take as an example a rewrite project[1], where a new system is built from scratch, by hand (as opposed to automatically generated), in another language such as Java. Let’s allow some improvement in productivity because we’re using a new, modern tool to build the new system (by the way, COBOL works in this modern environment too, but let’s just ignore that for now).

Let’s calculate the cost, conceptually:

Rewrite Cost = (application size) x (80% efficiency factor from modern frameworks) x (developer cost per day) / (lines of code written per day)

The constants used in this case were as follows:

  • The size of the application, a very modest system, was roughly 2 Million lines of code, written in COBOL
  • The per-day developer cost was $410/day
  • The throughput of building the new application was estimated at 100 lines of code per day, which is a very generous daily rate.

Calculated, this comes to a cost of roughly $6.5M, or about 16,000 days of effort.
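To make the arithmetic concrete, here is a minimal sketch of that conceptual model, written (fittingly enough) in COBOL. The constants are the illustrative assumptions listed above, not measured data, so treat it as a back-of-the-envelope calculator rather than a definitive costing tool:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. REWRITE-COST.
      * Back-of-the-envelope rewrite cost model from the text above.
      * All values are illustrative assumptions, not measured data.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 APP-SIZE-LOC   PIC 9(8)   VALUE 2000000.
      * Modern frameworks save ~20%, so 80% of the code is rewritten.
       01 EFFICIENCY     PIC 9V99   VALUE 0.80.
       01 COST-PER-DAY   PIC 9(4)   VALUE 410.
       01 LOC-PER-DAY    PIC 9(3)   VALUE 100.
       01 EFFORT-DAYS    PIC 9(8).
       01 TOTAL-COST     PIC 9(10).
       01 EFFORT-EDITED  PIC Z(7)9.
       01 COST-EDITED    PIC $$$,$$$,$$$,$$9.
       PROCEDURE DIVISION.
           COMPUTE EFFORT-DAYS = (APP-SIZE-LOC * EFFICIENCY)
                                 / LOC-PER-DAY
           COMPUTE TOTAL-COST = EFFORT-DAYS * COST-PER-DAY
           MOVE EFFORT-DAYS TO EFFORT-EDITED
           MOVE TOTAL-COST TO COST-EDITED
           DISPLAY "EFFORT IN DAYS: " EFFORT-EDITED
           DISPLAY "REWRITE COST:   " COST-EDITED
           STOP RUN.

Running it reproduces the figures above (16,000 days, $6,560,000), and the point is the sensitivity: double the code size, or halve the daily throughput, and the bill doubles, which is why modest changes to these assumptions can swing a rewrite business case by millions.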

Considerations worth stating:

  • This is purely to build the new application. Not to test it in any way. You would need, of course, rigorous QA and end-user acceptance testing.
  • This is purely to pay for this rewrite. In 10 years when this system gets outmoded, or the appetite for another technology is high, or if there are concerns over IT skills, do you earmark similar budget?
  • This assumes a lot about whether the new application could replicate the unique business rules captured in the COBOL code, rules which are unlikely to be well understood or documented today.

A well-trodden path to modernization

Another client, one of the world’s largest retailers, looked at a variety of options for change, among them modernizing and rewriting. They concluded the rewrite would be at least 4 times more expensive to build, and would take 7 or 8 times longer to deliver, than modernizing what they had. They opted to modernize.


Elsewhere, other clients have drawn the same conclusions.

“Because of the flexibility and choice within [Micro Focus] COBOL, we were able to realize an eight month ROI on this project – which allowed us to go to market much faster than planned.”

— Mauro Cancellieri, Manager, Ramao Calcados

“Some of our competitors have written their applications in Java, and they’ve proven not to be as stable, fast or scalable as our systems. Our COBOL-based [banking solution], however, has proved very robust under high workloads and delivers a speed that can’t be matched by Java applications.”

— Dean Mathieson, Product Development Manager, FNS / TCS

Our Recommendation

Core business systems define the organization; in many cases, they are the organization. The applications that provide mortgage decisions, make insurance calculations, confirm holiday bookings, manage the production lines at car manufacturers, and process and track parcel deliveries offer priceless value. Protecting that value while embracing the future needs a pragmatic, low-risk approach: one that leverages the valued IT assets that already work, delivers innovation and an ROI faster than other approaches, and is considerably less expensive.

If you are looking at strategic IT change, talk to us; we’d love to discuss our approach.



[1] We can’t speculate on the costs involved with package replacement projects – it wouldn’t be fair for us to estimate the price of an ERP or CRM package, for example.