New Year – new snapshot: the Arcati Mainframe Yearbook 2017

Introduction

Trends come and go in the IT industry, and predictions often dominate the headlines at the turn of the year. Speculation and no small amount of idle guesswork start to fill the pages of the IT press. It is welcome news, therefore, when Arcati publishes its annual Mainframe Yearbook. Aside from the usual vendor-sponsored material, the hidden gem is the Mainframe User Survey. Testing the water of the global mainframe market, the survey aims to capture a snapshot of what Arcati describes as “the System z user community’s existing hardware and software configuration, and … their plans and concerns for 2017”.

While the sample of 100 respondents is relatively modest, the findings of the survey, conducted in November 2016, are well worth a read. Here are a few observations from my reading of the report.

Big Business

The first data point that jumps off the page is the sort of organization that uses the mainframe. A couple of questions help us reach an obvious conclusion – the mainframe still means big business. This hasn’t changed: the study reveals that over 50% of respondents have mainframe estates of over 10,000 MIPS, and nearly half work in organizations of more than 5,000 employees (major sectors include banking, insurance, manufacturing, retail and government). Such organizations have committed to the mainframe: over a quarter have already invested in the new IBM z13 mainframe.

…And Growing

A few other pointers suggest the trend is upward, at least in terms of overall usage. Nearly half are seeing single-digit MIPS growth this year, while nearly a third are witnessing over 10% growth in MIPS usage. For a hardware platform often cited as being in decline, that’s a significant amount of new workload. While the survey doesn’t make it clear what form that increase takes, I’ve published my view about that before. Whatever the reason, it seems unsurprising that the number of respondents who regard the mainframe as a “legacy platform” has actually fallen by 12 percentage points since the previous survey.

Linux is in the (Main) Frame

The survey asked a few questions about Linux in the mainframe arena, and the responses were positive. Linux on z is in play at a third of all sites surveyed, with another 13% aiming to adopt it soon. Meanwhile, IBM’s new dedicated Linux box, LinuxONE, is installed, or planned, at a quarter of those surveyed.

Destination DevOps

With a mere 5% of respondents confirming their use of DevOps, the survey suggests at first glance a lack of uptake in the approach. However, with 48% planning to use it soon, a majority of respondents are on a DevOps trajectory. This is consistent with a growth trend based on Gartner’s 2015 prediction that 45% of enterprises would be planning to adopt DevOps (see my blog here). Whatever the numbers turn out to be, DevOps looks set to become an inextricable part of the enterprise IT landscape.

Cost of Support

Considering the line of questioning around the cost of support across various platforms, it seems worth mentioning only that the author noted “Support costs of Linux and Windows were growing faster than the mainframe’s”. The questions around “support”, however, did not extend to available skills, or indeed to training programs or other investments to ensure support could continue.

Future considerations?

It is hard to make any material observations about the mainframe in the broader enterprise IT context because there was no questioning around multi-platform applications or workload balancing, where a hybrid platform model, with a mainframe at its core, serves a variety of business needs, applications and workload types. So often, the mainframe is the mother-ship, but by no means the only enterprise platform. For the next iteration of the survey, we would welcome further lines of questioning around workload, skills, security and cloud as sensible additions.

Conclusion

There are a small number of important independent perspectives on the mainframe community, about which we report from time to time, and Arcati is one such voice. The survey reflects an important set of data about the continued reliance upon and usage of the mainframe environment. Get your copy here.

Another such community voice is, of course, the annual SHARE event. This year it takes place in San Jose, California. Micro Focus will be there, as part of the mainframe community. See you there.

Trying to Transform

Here’s an interesting statistic. According to a report, only 61 of the Fortune 500 top global companies have remained on that illustrious list since 1955. That’s only 12%. It’s not unreasonable to extrapolate that 88% of the Fortune 500 of 2075 will be different again. That’s over 400 organizations that won’t stand the test of time.

What do such sobering prospects mean for the CEO of a major corporation? Simple – innovation. Innovation and transformation – the relentless treadmill of change and the continuous quest for differentiation – are what an organization will need for a competitive edge in the future.

But in this digital economy, what does transformation look like?

Time for Change

Key findings from a recent report (the 2016 State of Digital Transformation, by research and consulting firm Altimeter) shared the following trends affecting organizational digital transformation:

  • Customer experience is the top driver for change
  • A majority of respondents see the catalyst for change as evolving customer behaviour and preference. A great number still see that as a significant challenge
  • Nearly half saw a positive result on business as a result of digital transformation
  • Four out of five saw innovation as top of the digital transformation initiatives

Much of this is echoed by The Future of Work, a study commissioned by Google.

The three most prevalent outcomes of adopting “digital technologies” were cited as:

  • Improving customer experience
  • Improving internal communication
  • Enhancing internal productivity

More specifically, the benefits experienced from adopting digital technology were cited as:

  • Responding faster to changing needs
  • Optimizing business processes
  • Increasing revenue and profits

Meanwhile, the report states that the digital technologies that are perceived as having the most future impact were a top five of Cloud, Tablets, Smartphones, Social Media and Mobile Apps.

So, leveraging new technology, putting the customer first, and driving innovation all seem to connect together to yield tangible benefits for organizations that are seeking to transform themselves. Great.

But it’s not without its downside. None of this, alas, is easy. Let’s look at some of the challenges cited in the same study, and reflect on how they could be mitigated.

More Than Meets The Eye?

Seamlessly changing to support a new business model or customer experience is easy to conceive. We’ve all seen the film Transformers, right? But in practical, here-and-now IT terms, this is not quite so simple. What are the challenges?

The studies cited a few challenges: let’s look at some of them.

Challenge: What exactly is the customer journey?

In the studies, while a refined customer experience was seen as key, 71% saw understanding that behaviour as a major challenge. Unsurprisingly, only half had mapped out the customer journey. More worrying is that a poor digital customer experience means that, over 90% of the time, unhappy customers won’t complain – but they won’t return either. (Source: www.returnonbehaviour.com)

Our View: The new expectation of the digitally-savvy customer is all important in both B2C and B2B. Failure to assess, determine, plan, build and execute a renewed experience that maps to the new customer requirement is highly risky. That’s why Micro Focus’ Build story incorporates facilities to map, define, implement and test against all aspects of the customer experience, to maximize the success rates of newly-available apps or business services.

Challenge: Who’s doing this?

The studies also showed an ownership disparity. Some digital innovation is driven from the CIO’s organization (19%), some from the CMO (34%), and the newly-emerging Chief Digital Officer (15%) is also getting some of the funding and remit. So who’s in charge, where’s the budget, and is the solution comprehensive? These are all outstanding questions in an increasingly siloed digital workplace.

Our View: While organizationally there may be barriers, the culture of collaboration and inclusiveness can be reinforced by appropriate technology. Technology provides both visibility and insight into objectives, tasks, issues, releases and test cases, not to mention the applications themselves. This garners a stronger tie between all stakeholder groups, across a range of technology platforms, as organizations seek to deliver faster.

Challenge: Are we nimble enough?

Rapid response to new requirements hinges on how fast, and frequently, an organization can deliver new services. Fundamentally, it requires an agile approach – yet 63% saw a challenge in their organization being agile enough. Furthermore, the new DevOps paradigm is not yet the de-facto norm, much as many would want it to be.

Our View: Some of the barriers to success with Agile and DevOps boil down to inadequate technology provision, which is easily resolved – Micro Focus’ breadth of capability up and down the DevOps tool-chain directly tackles many of the most recognized bottlenecks to adoption, from core systems appdev to agile requirements management. Meanwhile, the culture changes of improved teamwork, visibility and collaboration are further supported by open, flexible technology that ensures everyone is fully immersed in and aware of the new model.

Challenge: Who’s paying?

With over 40% reporting strong ROI results, cost effectiveness of any transformation project remains imperative. A lot of CapEx is earmarked and there needs to be an ROI. With significant bottom line savings seen by a variety of clients using its technology, Micro Focus’ approach is always to plan how such innovation will pay for itself in the shortest possible timeframe.

Bridge Old and New

IT infrastructure, and how it supports an organization’s business model, is no longer the glacial, lumbering machine it once could be. Business demands rapid response to change. Whether it’s building new customer experiences, establishing and operating new systems and devices, or ensuring clients and the corporation protect key data and access points, Micro Focus continues to invest to support today’s digital agenda.

Of course, innovation or any other form of business transformation will take on different forms depending on the organization, geography, industry and customer base, and looks different to everyone we listen to. What remains true for all is that the business innovation we offer our customers enables them to be more efficient, to deliver new products and services, to operate in new markets, and to deepen their engagement with their customers.

Transforming? You better be. If so, talk to us, or join us at one of our events soon.

Geo-fencing: securing authentication?

Micro Focus is leading the industry in geo-fencing and Advanced Authentication with its NetIQ portfolio. Simon Puleo looks at this fascinating new area and suggests some potential and very practical uses for this technology in his latest blog.

Are you one of the 500 million users who recently had their account details stolen from Yahoo?

Chances are that criminals will use them for credential stuffing – using automation to try different combinations of passwords and usernames at multiple sites to login to your accounts.

So you’re probably thinking the same as me – that a single username and password is no longer sufficient protection from malicious log-in, especially when recycled on multiple sites.


Is your identity on the line?

Indeed, 75% of respondents to a September 2016 Ponemon study agreed that “single-factor authentication no longer effectively protects unauthorized access to information.”

Biometric authentication is one solution and is already a feature of newer iPhones. However, skimmers and shimmers are already seeking to undermine even this.

Perhaps geo-fencing, the emerging alternative, can address the balancing act between user experience and security? It provides effective authentication and can be easily deployed for users with a GPS device. Let’s take a closer look at what this technology is, and how it can be used.

What is geo-fencing?

Geo-fencing enables software administrators to define geographical boundaries. They draw a shape around the perimeter of a building or area where they want to enforce a virtual barrier. It is really that easy. The administrator decides who can access what within that barrier, based on GPS coordinates. For example, an admin might set a policy that only state employees with a GPS device can access systems within the Capitol Building.


Let’s dive deeper, and differentiate between geo-location and geo-fencing. Because geo-location uses your IP address, it can be easily spoofed or fooled, and it is not geographically accurate. Geo-fencing, however, is based on GPS coordinates from satellites tracking latitude and longitude.

While GPS can be spoofed, doing so requires expensive specialist equipment, and there are features to validate the signal. Using geo-coordinates enables new sets of policies and controls that ensure security and enforce seamless verification, keeping it easy for the user to log in and hard for the criminal to break in. Consider the example below:

Security Policy: Users must log out when leaving their work area.

Real-world scenario: Let’s go and get a coffee right now. Ever drop what you are doing, leaving your PC unlocked and vulnerable to insider attacks? Sure you have.

Control: Based on a geo-fence as small as five feet, users could be logged out when they leave their cube with a geo device, then logged back in when they return. It’s a perfect combination of convenience, caffeine and security.
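To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of check such a control implies. The coordinates, the 1.5-metre (roughly five-foot) radius and the function names are illustrative assumptions, not NetIQ’s implementation – in a real deployment the authentication service would make this decision using the policy engine.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in metres between two GPS points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical policy: a circular geo-fence of ~1.5 m (about five feet) around the desk.
DESK = (38.5767, -121.4934)   # illustrative coordinates only
FENCE_RADIUS_M = 1.5

def session_action(device_lat, device_lon):
    """Return 'unlock' while the GPS device is inside the fence, 'lock' otherwise."""
    inside = distance_m(device_lat, device_lon, *DESK) <= FENCE_RADIUS_M
    return "unlock" if inside else "lock"

print(session_action(38.5767, -121.4934))  # 'unlock' - user is at the desk
print(session_action(38.5770, -121.4934))  # 'lock'   - user has walked roughly 30 m away
```

However the policy is expressed, the core test is simply a distance comparison between the device’s reported position and the fence boundary.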

Patient safety, IT security 

This scenario may sound incredible, but Troy Drewry, a Micro Focus Product Manager, explains that it is not that far-fetched. Troy shared his excitement for the topic – and a number of geo based authentication projects he is involved in – with me. One effort is enabling doctors and medical staff to login and logout of workstations simply by their physical location. This could help save valuable time in time-critical ER situations while still enforcing HIPAA policies.

Another project is working with an innovative bank that is researching using geo-fencing around ATMs to provide another factor of validation.  In this scenario, geo-fencing could have the advantage of PIN-less transactions, circumventing skimmers.

As he explained to me, “What is interesting to me is that with geo-fencing and user location as a factor of authentication, it means that security and convenience are less at odds.” I couldn’t agree more. Pressing the button on my hard token to login to my bank accounts seems almost anachronistic; geo-fencing is charting a new route for authentication.

Micro Focus is leading the industry in geo-fencing and Advanced Authentication. To learn more, speak with one of our specialists or click here.

 

Change – the only constant in IT?

Change is a constant in our lives. Organizations have altered beyond recognition in just a decade, and IT is struggling to keep pace. Managing change efficiently is therefore critical. To help us, Derek Britton set off to find that rarest of IT treasures: technology that just keeps on going.

Introduction

A recent survey of IT leaders reported that their backlog had increased by a third in 18 months. IT’s mountain to climb had just received fresh snowfall. While a lot is reported about digital and disruptive technologies causing the change, even the mundane needs attention. The basics, such as desktop platforms and server rooms, are staples of IT, subject to a frequent release cadence from the vendors.

Platform Change: It’s the Law

Moore’s Law suggests an ongoing, dramatic improvement in processor performance, and the manufacturers continue to innovate to provide more and more power to the platform and operating system vendors, as well as the technology vendor and end-user communities at large. And the law of competition suggests that as one vendor releases a new variant of operating system, chock full of new capability and uniqueness, their rivals will aim to leapfrog them with their subsequent launch. Such is the tidal flow of the distributed computing world. Indeed, major vendors are even competing with themselves (for example, Oracle promotes both Solaris and Linux, IBM both AIX and Linux, and even Windows now ships with Ubuntu pre-loaded).


Keep the Frequency Clear

Looking at some of the recent history of operating system releases, support lifespans and retirements, across Windows, UNIX and Linux operating systems, a drumbeat of updates exists. While some specifics may vary, it becomes quite clear quite quickly that major releases are running at a pulse rate of once every 3 to 5 years. Perhaps interspersed by point releases, service packs or other patch or fix mechanisms, the major launches – often accompanied by fanfares and marketing effort – hit the streets twice or more each decade[1]. (Support for any given release will commonly run for longer).

Why does that matter?

This matters for one simple reason: Applications Mean Business. It means those platforms that need to be swapped out regularly house the most important IT assets the organization has, namely the core systems and data that run the business. These are the applications that must not fail, and which must continue into the future – and survive any underlying hardware change.

Failing to keep up with the pace of change has the potential of putting an organization at a competitive disadvantage, or potentially failing internal or regulatory audits. For example, Windows XP was retired as a mainstream product in 2009, and extended support was dropped in 2014. Yet it still held an 11% market share in 2016, according to netmarketshare.com. Therefore, business applications running on XP are, by definition, out of support, and may be in breach of internal or regulatory stipulations.

Time for a Change?

There is at least some merit in asking whether the decommissioning of old machinery would be a smart time to look at replacing the old systems which ran on those moribund servers. After all, those applications have been around a while, and no-one typically has much that is kind to say about them, except that they never seem to break.

This is one view, but taking a broader perspective might illustrate the frailties of that approach –

  • First, swapping out applications is time-consuming and expensive. Rewriting or buying packages costs serious money and will take a long time to implement. Taking years rather than months, it will be an all-consuming and major IT project.
  • Questionable return is the next issue – by which we mean we are swapping out a perfectly good application set for one which might do what is needed (the success rate of such replacement projects is notoriously low; failure rates of between 40 and 70% have been reported in the industry). And the new system? It is potentially the same system being used by a competitor.
  • Perhaps the most worrying issue of all is that this major undertaking addresses a single point in time, but, as we have already stated, platform change is a cyclical activity. Platforms change frequently, so this isn’t a one-time situation; it is a repeated task. Which means it needs to be done efficiently, without undue cost or risk.


Keep on Running

And here’s the funny thing: while there are very few constants in the IT world (operating systems, platforms, even people change over time), there are one or two technologies that have stood the test of time. COBOL as a language environment is the bedrock of business systems and is one of the very few technologies offering forward compatibility, ensuring the same system from the past can work on today’s – and tomorrow’s – platforms.

Using the latest Micro Focus solutions, customers can use their old COBOL-based systems, unchanged, in today’s platform mix. And tomorrow too, whatever the platform strategy, those applications will run. In terms of cost and risk, taking what already works and moving it – unchanged – to a new environment, is about as low risk as it can get.

Very few technologies that have a decades-old heritage can get anywhere close to claiming that level of forwards-compatibility. Added to which no other technology is supported yesterday, today and tomorrow on such a comprehensive array of platforms.

The only constant is change. Except the other one: Micro Focus’ COBOL.

[1] Source: Micro Focus research

Insider Threat: A New Perspective

How can IT security managers reduce the risk of the insider threat? Simon Puleo makes an interesting case in this great new blog.

Did you ever think that the person sitting next to you could be considered an insider threat to your organization? It is hard to believe that malicious activity could be so close to home; however, when you consider that hackers use social profiles to target users with elevated privileges to systems or data, it raises an eyebrow. According to a 2015 Black Hat survey, 45% of hackers say that privileged account credentials are their most coveted target. Hackers are looking to take advantage of the insider and exploit those privileges, because these insiders have access to the most sensitive and lucrative data. Secondarily, the insider – the person you have coffee with – may be involved in espionage (spying), data destruction and data theft.

How does this give us a new perspective? Most of the time, when I discuss cyber criminals, they fall into these broad categories:

  • Organized Crime – Much as Al Capone ran an organized crime syndicate with the purpose of profiting from smuggled alcohol, twenty-first-century organized crime profits from stolen and smuggled credit cards, systems held hostage, and stolen IP. Extortion, fraud and theft are their calling cards.
  • Hacktivists – Those exploiting the internet and stealing secrets for social causes; think of Anonymous.
  • Nation State – Secret groups in governments all over the world, designed to spy on others and steal government and private intellectual property.
  • The Black Hat – The renegade hacker: individuals who hack for fun or simply to spread chaos.

Insider Threat

But the insider could be you or me – it is anyone with access to systems or data. The insider is the careless user who shares a password or leaves their computer unlocked. The insider is the unknowing pawn of the criminal hacker, installing malware and viruses as the result of a social engineering or spear-phishing attack. The insider is the person who uses their access for malicious activities; perhaps they are part of an organized crime ring, a disgruntled employee or a mentally unstable person. Regardless, the goal of the cyber-criminal, whether on or off premises, is to obtain the ‘keys to the kingdom’ – access to files, systems and data.

Until recently, the most proactive measures to stop the insider were education campaigns targeted at good security practices, security policy and anti-virus tools. These measures are not enough, and traditional solutions like IDS, IPS and firewalls are focused on the perimeter, not on the insider who is a user and consumer of data. So what approach can we take from an IT perspective to be proactive against the insider threat?

Enforce the concept of ‘least privilege’: in simpler terms, ensure that users, and especially privileged users, have access to only the files and systems that they need to do their jobs effectively. The receptionist needs a different set of access to systems than the accountant. This could go as far as only granting access at the right times as well. Consider: does a receptionist need access to the directory after business hours?
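As a thought experiment, the policy above might be expressed roughly as follows – a minimal sketch assuming a simple role/resource/time policy table. The roles, resources and working hours are invented for illustration; they are not taken from any specific product.

```python
from datetime import datetime

# Hypothetical policy table: which role may touch which resource, and during which hours.
POLICY = {
    ("receptionist", "visitor_directory"): range(8, 18),   # business hours only
    ("accountant",   "general_ledger"):    range(7, 20),
}

def is_allowed(role, resource, when=None):
    """Grant access only if the (role, resource) pair is listed and the hour is permitted."""
    when = when or datetime.now()
    allowed_hours = POLICY.get((role, resource))
    return allowed_hours is not None and when.hour in allowed_hours

print(is_allowed("receptionist", "visitor_directory", datetime(2017, 1, 16, 9)))   # True
print(is_allowed("receptionist", "visitor_directory", datetime(2017, 1, 16, 22)))  # False - after hours
print(is_allowed("receptionist", "general_ledger",    datetime(2017, 1, 16, 9)))   # False - not their job
```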


Manage access to systems and files in a way that ensures the identity of every user. How can we ensure the right users are accessing systems, and how can we prove they are who they say they are? Is a username and password enough assurance for access to transactional data? Multi-factor authentication is the best answer, using something a user knows (like a password), something they have (like a token), and something they are (like a fingerprint or facial scan). This may sound futuristic, but when information is valuable, more organizations are turning to multiple types of verification.
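Here is a hedged sketch of that idea: an authentication decision that succeeds only when enough independent factors pass. The individual factor checks (password verification, token validation, biometric match) are stubbed out as booleans purely for illustration.

```python
def authenticate(password_ok, token_ok, biometric_ok, factors_required=2):
    """Grant access only when enough independent factors succeed (illustrative stub)."""
    factors = {
        "something you know": password_ok,    # e.g. password check
        "something you have": token_ok,       # e.g. hardware or soft token
        "something you are":  biometric_ok,   # e.g. fingerprint or facial scan
    }
    return sum(1 for ok in factors.values() if ok) >= factors_required

# A password alone no longer satisfies the policy...
print(authenticate(password_ok=True, token_ok=False, biometric_ok=False))  # False
# ...but a password plus a token (or a fingerprint) does.
print(authenticate(password_ok=True, token_ok=True, biometric_ok=False))   # True
```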

And finally, we need to monitor what users are doing in real time to ensure they aren’t accidentally or maliciously changing and/or deleting data. When suspicious behavior is found, tools are needed to quickly find out who is performing an activity, and what is happening, so that security teams can quickly take action to minimize damage from potential threats. When especially high-risk user activity is detected, access to sensitive systems can be automatically revoked.

How can you reduce the risk of the insider threat? Start with ‘least privilege’: restrict access to sensitive data to only those with a need. Develop controls and policies to ensure users have the right privileges, challenge users when they access sensitive data, monitor for malicious behavior, and lead with an Identity-Powered Security approach. To learn more about the insider and privileged users, download the flashpoint paper Privileged Users: Managing Hidden Risk in Your Organization.

In the mainframe world, 13 into 16 does go

2015 was a busy year for Big Blue. Derek Britton ruminates over the last 12 months’ major events and industry chatter in the mainframe world, as we look forward to 2016 being another exciting year.

I’ve enjoyed the recent spate of articles and blogs from the team at Compuware, talking about the mainframe in an entertainingly assertive way (see here for examples). Many of us in the mainframe community share the passion and belief in the mainframe as the enterprise class IT server of choice. Indeed the positioning of the IBM z13 as the enterprise class server at the heart of the digital economy has considerable merit.

Elsewhere in the press, studies from a variety of sources (including BMC, Compuware, Syncsort and Delphix) reveal ongoing support for and usage of the mainframe environment.


While the mainframe environment is derided by some, not least those that would benefit from clients moving to an alternative environment, not all industry commentators follow that line.

The fanfare greeting the z13 last year was remarkably positive (an example from Motley Fool here) – and IBM’s mainframe revenue results have followed that positive direction. Later in the year we were treated to more excitement with the release of the LinuxONE range of IBM mainframe-based Linux servers, and the recommitment to the Open Mainframe Project (with the help of SUSE).


In recent months the mainframe has received some upbeat and well-considered press. SD Times ran an excellent review of the mainframe world – “Are mainframes still road worthy?” – which talked about a buoyant, positive market (though not without its challenges). More recently still was the Forbes article penned by Adrian Bridgwater. In this article, the premise that the mainframe is in some way outmoded (the title calls this out immediately – “How to rescue a dead mainframe programmer”) is explored and debunked. Citing some recent software announcements, the article explores two of the key focus areas for the mainframe community right now: skills and DevOps.

On skills, the challenges were clearly identified – “all the older guys who knew … mainframe systems … are retiring”. This creates an issue when, as it goes on “existing mainframe server systems are … well suited to large-scale datacentre … environments”, and therefore need to be sustained and evolved.

The article then suggests that resolving the reliance on older tooling may play its part. Micro Focus agrees wholeheartedly – and this is less of an issue than IT leaders might fear. It can be hard to find staff willing to use antiquated technology, yet this needn’t be a problem. Modern mainframe development technology is readily available which provides the same environment for mainframe teams as is being used by other developers. This unified approach provides potential cross-pollination between various development teams and has been successfully adopted by Micro Focus customers looking to extend their supply of skilled mainframe talent. One organization now has an average mainframe developer age of 26 as a result of their Micro Focus investment.

Micro Focus’ overall approach to the Mainframe and COBOL skills question is outlined here.


The other subject of the Forbes article, and topic du jour: DevOps. You can’t move for DevOps discussions right now. The next SHARE event, in March 2016 – a barometer for the psyche of the mainframe world – now has its own DevOps track: “DevOps in the Enterprise”. As you would expect, Micro Focus will be there, presenting our own DevOps session.

Vendors have arrived at the DevOps party at various times. Compuware are mentioned in the Forbes article as “empowering”, according to CEO Chris O’Malley, “Agile DevOps teams to master the mainframe”. Such facilities – in a genuine agile environment – present integration needs across a variety of 3rd-party tools (not just those mentioned in the article). However, any step forward in integration and support between mainframe-centric tooling and DevOps technology is a step in the right direction. Micro Focus supports, within its Enterprise product set, a range of agile tools, including its own Borland range (Atlas, Silk, AccuRev, StarTeam), plus Jenkins, SonarSource and others. Integration with Endevor, SCLM or ChangeMan on the mainframe takes that mantra further. Our customers are using such facilities today as the central hub for their DevOps-based mainframe delivery processes.

What this means, in simple terms, is that the challenges facing mainframers in the move to an agile process – poor interoperability, lack of productivity, inflexible testing capability, insufficient collaboration, low overall delivery velocity, inefficiency – stand a real chance of being fixed to create a meaningful improvement in throughput and flexibility. Learn more about how that happens here.


Bridgwater concludes with the forward-thinking label, “Agile mainframes”. And with the right skills and technology available – which they most certainly can be – he’s right. You can see it for yourself at SHARE, or join us at a DevDay this year to witness the power of agile COBOL app development from Micro Focus.

Delving into DevOps – the Data

DevOps is on everyone’s lips these days, but we’re not all talking about it in the same way. Derek Britton takes a look at the latest industry study to find out if there’s anything we can all agree on.

The Customer, DevOps and Micro Focus

Recently, we’ve been speaking frequently with our clients about the popular DevOps topic, and we are hearing more examples of its implementation, usage and success. Lately, we heard of one client who has built a DevOps framework at the centre of their entire IT operation; we have seen another client recruit a new CIO specifically because of his DevOps experience.

… oh, and everyone else

This is indicative of a broader industry appeal around the topic. According to one observer, “we are seeing the gold rush phase for DevOps in 2015[1]”. Consider just a few of the public events around DevOps in recent months:

Each of these will no doubt repeat in 2016, joining the Cloud and DevOps World Forum – and that’s not to mention the plethora of vendor-specific events that will showcase their own DevOps angle.


Read all about it

Of course, there’s no need necessarily to travel to learn. Publications on the topic are wide and varied, ranging from the highly accessible Mainframe DevOps for Dummies (by IBM luminary Rosalind Radcliffe, as launched at SHARE, August 2015) to the acclaimed and comprehensive Phoenix Project by Gene Kim. Meanwhile, taking that knowledge further presents seemingly limitless possibilities. The range of DevOps-related certification, training, press articles and blogs grows ever wider, as do the mentions of DevOps as a key element of many a vendor’s messaging today. (I can’t tell you how many times I have read a vendor promote that they are “The DevOps Company”.)

Delving into DevOps

Now, while it could be argued that some of the DevOps documentation is slanted according to the perspective of the authors, this is often the case while new trends emerge and attempt to define themselves clearly. Establishing a de-facto “truth” from the various viewpoints is often the task of broader surveys and industry studies. A CA study from 2013 suggested tangible results and indicated at least some direction on how the industry was embracing the idea. More recently, another global survey was conducted by DevOps.com which may – we anticipate – provide further insight.

A third example of an industry study is the far-reaching and illuminating 2015 Annual State of DevOps: “A global perspective on the evolution of DevOps”, conducted by Gleanster/Delphix.


A Real State

The study surveyed 2,381 IT practitioners or leaders from across the globe, including 49% at CIO or IT Director level.  The appetite and effort towards DevOps adoption was evident in the response – very interestingly 73% of those surveyed had already set up a dedicated DevOps group. Other results include a number of interesting perspectives that Micro Focus shares.

  • When looking at specific DevOps practices, the results reported that “continuous integration” was the second most popular activity among DevOps leaders, with 64% of respondents agreeing with this stated aim. This is consistent with Micro Focus’ view that efficient, repeatable and rapid build and test cycles are a key requirement in DevOps adoption[2].
  • In terms of who drives DevOps – the question was put as bluntly as “Dev or Ops?” The results showed Dev as the senior partner (50%), with Ops at 17%. A “shared” leadership accounted for 34% (one can only assume the numbers were rounded, as they didn’t total exactly 100%). This is consistent with Micro Focus’ assertion that Development often has to act as the “leading light” in DevOps activities.
  • The rationale and motivation for DevOps saw a top three in terms of responses: faster delivery (88%), faster bug detection (69%), and greater delivery frequency (64%). This is consistent with Micro Focus’ own market view, where the drive towards faster delivery of more predictable, high-quality releases is a fundamental principle of DevOps adoption. This is especially true for our mainframe clients, where reliability and availability are critical.
  • Finally, a soul-searching question about how effective each organization was at DevOps provided some interesting insight. While “leaders” were very upbeat (96% saying they were “very” or “somewhat” effective), those who classed themselves as “practitioners” were less positive, with nearly two-thirds saying they were only “somewhat effective” or “ineffective”. The disparity between leadership and practice is perhaps not atypical, but it suggests, or at the very least raises the question, that desire may outstrip the reality in many cases.


Hope and Hype

The study makes interesting reading. DevOps enjoys growing clarity, purpose and investment, yet faces significant ongoing challenges. Aiming towards faster delivery, higher frequency and better bug detection will improve results and reputation, such that the hope will catch up with the hype.

Fixing such specific challenges in the delivery cycle is the cornerstone of the Micro Focus solution for mainframe DevOps: providing practical solutions to real industry challenges. We look forward to the debate continuing.



[1] Gleanster, Delphix, 2015

[2] Most popular was agile data management

Building a Robust Test Automation Framework – Best Practice

According to Trigent Software’s own research, a robust test automation framework ranks highly on their list of Software Testing ‘must-haves’. When executed in a structured manner, it helps improve the overall quality and reliability of software. Read more from Test Architect Raghukiran in this fantastic guest blog…

Through our experience and research, ranking high on our list is a robust test automation framework. When executed in a structured manner, it helps improve the overall quality and reliability of software.

The software development industry always faces a time crunch when it comes to the last mile, i.e. testing. Ask any software developer and they will tell you that development teams in any corner of the world want 1) testing activities to be faster than humanly possible, 2) results which are laser accurate, and 3) all of this without compromising quality. Manual testing fails to live up to these expectations and is therefore least preferred. Test automation is therefore the better choice, as it helps accelerate testing and delivers fast results. For test automation to work well, a robust test framework is needed to act as the core foundation for the automation life cycle. If we don’t build the right framework, the results could be:

  • Non-modularized tests
  • Maintenance difficulties
  • Inconsistent test results

All of which will escalate costs and bring down the ROI considerably.

Best Practices

Framework Organization
The automation framework needs to be well organized so that it is easier to understand and work with. An organized framework is easier to expand and maintain. Items to be considered are:

  • An easier way to manage resources, configurations, input test data, test cases and utility functions
  • Support for adding new features
  • An easy option to integrate with the automation tool, third-party tools, databases etc.
  • Standard scripting guidelines that need to be followed across the framework

Good Design

Automation tests are used for long-term regression runs to reduce the testing turnaround time; hence the design should be good, so that the tests can be maintained easily and yield reliable test results. Following are some good design steps (a short illustrative sketch follows the list):

  • Separate the application locators from the test code so that locators can be updated in the locator file independently when they change. Example: use locators from an object map, or an external Excel or XML file
  • Separate test data from the code and pull data from external sources such as Excel, text, CSV or XML files. Whenever required, we can simply update the data in the file
  • Organize tests as modules/functions so that they are re-usable and easy to manage. Keep application/business logic in a separate class and call it from the test class
  • Tests should start from a known base state, and should recover and continue when there are intermittent test failures
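As a short, illustrative pytest/Selenium-style sketch of the first two points – locators and test data kept outside the test code – consider the following. The file names, locators and login flow are assumptions made for the example, and a ‘driver’ WebDriver fixture is assumed to exist.

```python
# test_login.py - illustrative only: file names, locators and the login flow are assumptions.
import csv
import json

def load_locators(path="locators.json"):
    """Element locators live in an external file, so UI changes never touch test code."""
    with open(path) as f:
        return json.load(f)  # e.g. {"username": "#user", "password": "#pass", "submit": "#login"}

def load_test_data(path="login_data.csv"):
    """Test data comes from an external CSV rather than being hard-coded in the test."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))  # rows of username,password,expected

def test_login(driver):  # 'driver' is assumed to be a Selenium WebDriver fixture
    locators = load_locators()
    for row in load_test_data():
        driver.get("https://example.test/login")
        driver.find_element("css selector", locators["username"]).send_keys(row["username"])
        driver.find_element("css selector", locators["password"]).send_keys(row["password"])
        driver.find_element("css selector", locators["submit"]).click()
        assert (driver.title == "Home") == (row["expected"] == "pass")
```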

Configuration options

The framework should provide the option to choose configurations at run time, so that it can be used as per the test execution requirement (see the sketch after this list). Some of these configurations include:

  • Ability to choose test execution environment such as QA, Staging or Production
  • Ability to choose the browser
  • Ability to choose the operating system and platform
  • Ability to mark priority, dependency and groups for the tests
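A minimal sketch of run-time configuration selection, assuming environment variables are used to pick the environment and browser; the URLs and variable names are illustrative only.

```python
# conftest-style sketch: environment and browser chosen at run time via environment variables.
import os
from selenium import webdriver

BASE_URLS = {
    "qa":      "https://qa.example.test",
    "staging": "https://staging.example.test",
    "prod":    "https://www.example.test",
}

def make_driver():
    """Build a WebDriver for whichever browser this run was configured with."""
    browser = os.getenv("TEST_BROWSER", "chrome").lower()
    return webdriver.Firefox() if browser == "firefox" else webdriver.Chrome()

def base_url():
    """Pick the execution environment (QA, staging or production) at run time."""
    return BASE_URLS[os.getenv("TEST_ENV", "qa").lower()]

# Usage:  TEST_ENV=staging TEST_BROWSER=firefox pytest tests/
```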

Re-usable libraries

Libraries help group application utilities and hide complex implementation logic from the outside world. They aid code reusability and make the code easier to maintain.

  • Build a library of utilities, business logic and external connections
  • Build a library of generic framework functions

Reports and logs

To evaluate the effectiveness of automation we need the right set of results; the automation framework should provide all the detail required about test execution (one possible implementation is sketched after the list).

  • Provide logs with the necessary details of the problem, along with custom messages
  • Provide reports which give detailed execution status in Pass/Fail/Skipped categories, along with screenshots
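One possible way to wire this up in a pytest-based framework is a report hook that logs each outcome and captures a screenshot on failure. This is a sketch only; it assumes a Selenium ‘driver’ fixture and an existing reports/ directory.

```python
# conftest.py-style sketch: log every test outcome and save a screenshot when a test fails.
import logging
import pytest

log = logging.getLogger("automation")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":                                    # the test body, not setup/teardown
        log.info("%s finished: %s", item.name, report.outcome)   # passed / failed / skipped
        driver = item.funcargs.get("driver")                     # assumes a 'driver' fixture
        if report.failed and driver is not None:
            driver.save_screenshot(f"reports/{item.name}.png")
```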

Version Control and Continuous Integration

To effectively control the automation framework we need to keep track of it; hence a version control system is required. Have the framework integrated with version control.

We also need to run the regression suite continuously, to ensure that the tests are running fine and the application functionality is as expected; hence a continuous integration system is required to support execution of the suite and monitoring of results.

If we build a robust automation framework with the above capabilities, we gain the benefits below:

  • Increase product reliability – Accurate, efficient, automated regression tests – reduce risks
  • Reduce the product release cycle time – Improve the time to market, Reduce QA cycle time
  • Improve efficiency and effectiveness of QA – free QA team to focus manual efforts where needed
Raghukiran
Test Architect

Raghukiran, Test Architect with Trigent Software, has over a decade’s experience building test frameworks for automation tools which are scalable and efficient. He is keenly interested in exploring new automation tools, continuous integration setups for nightly execution, and automation coverage. In the blog ‘Building a Robust Test Automation Framework – Best Practice’ he discusses the best practices to be followed for building a robust test automation framework.

Micro Focus and microservices

A growing trend in the software world is microservices, but is it an important innovation or just a passing fad? Derek Britton looks to the experts to find out.

The interview with middleware expert Mark Little, featured in ZDNet, reveals some fascinating perspectives behind the industry’s interest in microservices, including – importantly – how microservices might coexist with existing IT investments.

Micro what?

Microservices. A new term to some, but with a growing market awareness, microservices is  a “software architecture style… [in] which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs”.

Microservices bears what SDTimes refers to as a “striking resemblance to service-oriented architecture”. Such processes are “small, highly decoupled and focus on doing a small task”, while the idea behind small services is akin to the “Unix philosophy of ‘Do one thing and do it well’”.
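To make the style concrete, here is a minimal sketch of a microservice written in Python with Flask: one small, independent process that does a single job and exposes it through a language-agnostic HTTP API. The service, route and data are invented for illustration and are not tied to any particular vendor’s implementation.

```python
# A minimal "do one thing" service: expose an account-balance lookup over an HTTP API.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Illustrative in-memory store; a real microservice would own its own datastore.
BALANCES = {"12345": 150.75, "67890": 9203.10}

@app.route("/accounts/<account_id>/balance", methods=["GET"])
def get_balance(account_id):
    if account_id not in BALANCES:
        abort(404)
    return jsonify({"account": account_id, "balance": BALANCES[account_id]})

if __name__ == "__main__":
    app.run(port=5000)
```

Because the interface is plain HTTP and JSON, any caller – written in any language – can consume the service without knowing how it is implemented.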

Small wonder, perhaps, that exponents of this approach include Amazon, Netflix and Bluemix. Outside these pioneers, what challenges lie ahead as microservices looks to become mainstream?

Micro versus monolith

In the same way that technologies including CORBA, SOA and Web Services appeared over time, the pursuit of more efficient methods of how applications and systems communicate has never been off the table. While in each case these technologies offered an exciting new paradigm with profound potential, none of them became the ubiquitous global standard to replace all other protocols. Why is that? Simple really – there was too much complexity and cost involved in changing what was already there.

Interestingly, in the same article, Mark Little entertains the idea of a pragmatic approach. “If you’ve got something that doesn’t work, you should still look to see if there’s some of it that you could carve off and keep – particularly if you’ve had it deployed for 20 or 30 years”. He adds, “even where software is not working, avoid re-implementing everything from scratch because there may be elements that could be retained”.

This is profoundly shrewd. Accepting and exploiting the available, working infrastructure, and retaining its value where possible, enables microservice creation to focus only on the new technology that is needed. Using this approach, the journey towards a microservices architecture becomes simpler, easier and less risky. But is that really possible? Let’s explore two popular incumbent core technologies that might need examination – COBOL and CORBA – to which Little had referred.


COBOL

Now in its 6th decade of industry use, it is hardly surprising that mature COBOL systems, so often the lifeblood of enterprise IT, are identified as important systems of record with which new microservices may need to integrate.

Little makes it clear that COBOL could well form the underlying support for robust systems that don’t need to change, “if it’s implemented in COBOL, then it’s battle-tested”, he phrases it. The value of this trusted application is that it represents “COBOL code that you could take and use again…”

One of the seemingly incompatible aspects of talking about COBOL and microservices together is that these technologies hail from entirely different eras of technical evolution. However, the continued evolution of the COBOL language and supporting technology ensures that it remains as current as contemporary approaches may require. Indeed, Product Director for Micro Focus’ Visual COBOL product, Scot Nielsen, confirmed “the upcoming (Visual COBOL) release includes support for REST-ful style services that is well suited as an integration mechanism for creating COBOL Microservices”.


CORBA

One microservices definition refers to it as a model that “might also integrate with other applications via either web services or a message broker”. Little refers to CORBA as a fairly typical communications model that may need to co-exist with microservices.

CORBA-based communications technology, conceived over 20 years ago, is a valued communications protocol that supports mission-critical systems across a variety of industries including financial services, telco, aerospace, government and manufacturing. Micro Focus CORBA Solutions Product Director, John McHugh, said it was typical for many customers to look at “protecting existing investments to lower costs and risk … to use [CORBA] products in conjunction with emerging trends and expanding their value and footprint: it is one of the reasons why CORBA continues to thrive”.

Talking of the broad technology choices enterprises have faced over time, McHugh continued “it demonstrates again the fragmentation that can occur when a new architectural style is introduced, and why any technology innovation should be seen as part of an evolutionary journey”.

Conclusion – Building Bridges

Tried and trusted systems of record, featuring transactional, messaging and core business processing components are the lifeblood of most successful organizations. This is the technology that makes the organization work. Labels such as monolithic, old, cumbersome might be lazily applied, but just as appropriate would be mission-critical, valued, trusted, and reliable.

These systems are in the comet tail of technology innovation. These ideas came to market some time ago, stood up to scrutiny, enjoyed widespread adoption, and just carry on working, without fail, quietly in the background; meanwhile, and continuing the comet theme, white-hot innovation drives new technical direction.

As Mark Little explains, and as Micro Focus would agree, bridging from valued technology investments into new innovative ways of working – microservices being a great example – is both technically possible and eminently sensible.

Learn more about Micro Focus’ COBOL and CORBA offerings at our website.


Compuware survey – CIOs make big plans for Big Iron

“You hear about big data, you hear about cloud, you hear about analytics and systems of insight. These are all coming together at a critical point in time.” – Dr. John Kelly, SVP, IBM.

A recent Compuware survey supports a longstanding Micro Focus view. Derek Britton checks out the whitepaper…

So it’s not just us, then? As this press release explains, Compuware recently surveyed 350 CIOs to assess CIOs’ perception of their most valuable IT asset and discovered that the mainframe retains the confidence of those whose success depends on it.

We were pleased, but not the least surprised, at the findings. The results of our own 2014 survey of 590 CIOs, through Vanson Bourne, were in line with Compuware’s findings. Namely that CIOs recognise the value of the IP invested in their mainframe infrastructures – and the risks associated with rewrites and the ‘lift and shift’ approach to application modernization.

The contents of the subsequent Micro Focus whitepaper, The State of Enterprise IT – Re-examining Attitudes to Core IT systems, reads like a CIO to-do list; issues covered included managing enterprise ‘IT Debt’, the burden of compliance and outsourcing. If that sounds like you, then download it here.

Back to Compuware; their whitepaper notes that “It is clear that CIOs fully recognize the power and value of the mainframe … 88% of respondents indicated that they believe it will remain a key business asset for at least the next 10 years”.

Unfortunately, there is an image issue to overcome. Mainframe longevity means that many CIOs are probably subconsciously referencing archaic tech. But to remain relevant, anything or anyone must evolve over time; both mainframes and Minis have been around 50 years – and have you seen these? Mainframes have evolved. The new z13 is the most powerful unit that IBM has ever produced. And they wouldn’t commit all that R&D money to anything not destined to be a massive commercial success. So, it makes sense to work with mainframes rather than booking a skip, clearing out the server room and hoping for the best.


Future proof

This clunking, wheezing machinery – yeah, right – is often omitted from the dialogue around the contemporary issues dominating the CIOs’ inbox. But with the support of the right tooling, pressing issues such as Big Data, the move to Mobile and the Cloud can all be handled by the big beasts of Big Blue.

Some CIOs already see this potential – certainly, 81% of Compuware’s respondents recognise that the mainframe can deliver greater Big Data throughput than commodity hardware alone, with 61% already doing just that. There’s more; 78% see the mainframe as a “key enabler of innovation”. And why shouldn’t they? No CIO wants to be without the customer insight that effective data analysis can deliver, or be able to follow their rivals by taking their applications to the Cloud, Mobile, or their customers’ preferred platform.

“Behooves”?

Another challenge is losing the development skills required to maintain older mainframe applications in an apparent explosion of retirement parties and ‘We’ll Miss You!’ cards. Compuware summarise their concerns thus: “Unfortunately, a ticking time-bomb seriously threatens the ability of companies to preserve and advance their mainframe IP. The Baby Boomers who created the code … will soon pass the reins to a new generation that lacks mainframe skills and experience. This is not going to be an easy transition.”


Indeed. As this press release explains, 55% of the IT leaders surveyed by Vanson Bourne believe it is “highly likely” or “certain” that the original knowledge of their mainframe applications and supporting data structure has left the organization. Similarly, 73% confirm that their organization’s documentation is incomplete. Innovation isn’t easy when no-one is sure how the thing works.

Back to Compuware; “The mainframe environment is complex and decades-old code often lacks adequate documentation. It behooves [IT leaders] to be more aggressive about successfully transitioning stewardship of [their] mainframe intellectual property to the next generation of IT professionals—who do not currently have the mainframe-related capabilities that companies will require over the next decade.”

There’s a plan for that

A lack of documentation is unhelpful, but may not be the apocalyptic scenario Compuware suggest. Our skills campaign is a battle fought on three fronts – namely increased productivity from COBOL developers, cross-training developers working in other languages and enlisting the help of academic partners – that will enable organisations to maintain their mainframes, take their COBOL applications into the future and enable the future innovation that creates or maintains a market advantage. All it needs is the right strategy and market-leading tooling.

Clearly, there are challenges. But equally there are options to resolve them. Practical suggestions in another Micro Focus whitepaper, Reducing the IT Backlog, One Bottleneck at a Time, include a 40% cost reduction and 25% development efficiency improvement that will make serious inroads into any enterprise IT backlogs.

So, what have we learned? From the CIO perspective, that Big Iron can – and will – play a significant role in their future IT strategy. The Micro Focus view is that our mainframe solution can enable these powerful business machines to handle many current CIO challenges. If ‘doing more with what you already have’ is a maxim that you must now live by, start living – book a value profile service. It is an important first step on the journey to enterprise application modernization.

 

 

Dear RBS: invest in better systems. Not adjectives.

Ho hum. Another day, another example of RBS presenting IT system neglect as a ‘glitch’…

Analysts, industry commentators and – most importantly – frustrated RBS, Ulster Bank and NatWest customers took to Twitter, keen to remind RBS that their latest IT-based debacle is by no means their first offence.

Their most recent attempt to stretch the definition of the word ‘glitch’ was predictable to those of us familiar with their IT infrastructure and deeply irritating for those trying to access their own cash.

And as wearily familiar as the story itself – a host IT failure inconveniencing customers – were the excuses. Perhaps the RBS public relations machine has seen more investment than the IT running their banking operations, because it seems more than half a million HMRC payments had, apparently, not ‘disappeared’ at all, but were merely “delayed.” And although the problem had been “fixed”, customers would be denied access to their child or working tax credits for at least another 48 hours. Remember – these are benefits payments. Two days is a long time to wait for food.

After the comments, the reality

The footnote of an online story is rarely a repository of reasoned argument. Indeed, if you want sensational journalism and conspiracy theories then the comments section of the Mail Online usually has what you need. But the ‘Related Stories’ section after this Finextra blog is more interesting. Note how every other story relates to a meltdown, or the fine that follows it.

That’s because anyone who understands mainframes and COBOL won’t buy the legacy technology/glitch excuse. With proper investment, older mainframes running COBOL applications run just fine. As this blog points out, the DWP makes 2.5m benefit payments every single day without a problem. Indeed, some high-profile organizations are using similar tech to defend whole countries and launch rockets: the US Navy is at the forefront of technological breakthroughs and NASA is helping to push back the boundaries of human understanding. No glitches there.

Our own Andy King didn’t buy the glitch angle when RBS tried it last time. Iain Chidgey of data management company Delphix points the finger at insufficient testing. A distinctly unimpressed Vince Cable suspected “skimping on large-scale investment” in NATS systems when thousands of airline customers were left grounded by a similar computer schism. There are so many more examples.


After RBS let their customers down on – of all times – Cyber Monday, Group Chief Executive Ross McEwan described the failure as “unacceptable” and issued a heartfelt mea culpa: “For decades, RBS failed to invest properly in its systems. It will take time, but we are investing heavily in building IT systems our customers can rely on.” All good, but that was in 2013 and two glitches ago. And despite a £750m improvement programme, we seem to be no further forward. Where exactly is this “heavy investment” going?

To be fair, there has been work. But nothing beyond a low-level and inevitable ‘consolidation’ exercise that any large organisation would do as a by-the-numbers efficiency drive or cost-cutting exercise. No-one is suggesting that these systems are not old. Clearly they are. But properly supported, older technology helps NASA send probes to Mars. And customers of other banks access their cash.

Mainframes – tomorrow’s tech?

Perhaps the issue is that RBS think they are, like many other mainframe owners, fighting fires on too many fronts to enable the innovation that could help their systems deliver modern performance from an older footprint. Banking is heavily regulated, so meeting compliance targets is a challenge. Every organization with an IT function has an IT backlog. So there’s another. Perhaps their investment is being swallowed up by these activities that do little more than keep the lights on.

RBS recently announced better than expected financial results with pre-tax profits expected to double to £2.65bn. So the money is there. Well, let’s hope so. Imagine if the funding they had committed to application modernization and innovation was to be “delayed”? In a world where a business reputation can be destroyed in the time it takes to tweet, it makes sense to invest in core systems rather than PR. Micro Focus Mainframe solutions can enable long-established enterprise applications with modern functionality. Find me on Twitter if you want to talk more….