Taking an interest – the rise of the challenger bank

Recent reputational damage among market leaders has coincided with the arrival of so-called Challenger Banks. Derek Britton takes a look at the impact of IT and other factors on the competitive landscape.


Most of us stay with the same bank forever. Business banking also follows a model of brand loyalty. It is almost diametrically opposed to the insurance market where, by comparison, the expectation is that the customer will proactively look for the “best” supplier at each annual renewal date. Yet somehow a market dominated for so long by major incumbents is being gradually eroded by new entrants. Indeed, UK high streets are greeting a brand new banking brand, Metro Bank, as it opens new branches.

A lot has changed in a short space of time. Building on from Challenger banks: on the lawns of retail banking?, this blog takes the Challenger Bank discussion further by examining this seismic shift in the banking sector and asks if this change is here to stay.


Market Interest

Big players in the banking market enjoy significant market share. In the UK, the “Big Five” (HSBC, Barclays, Lloyds, RBS and Santander) account for 85% of the market, while in the US the much more fragmented marketplace still has big names, where “the largest five banks in the U.S. now control nearly 45 percent of the industry’s total assets.”

Yet retail banking is far from a static market. The banking crisis saw unprecedented levels of consolidation and attrition in the market during which household names such as Wachovia (formerly First Union), Merrill Lynch, Northern Rock, Alliance and Leicester and over two dozen Spanish banks ceased to exist. In 2009 alone, 140 US banking brands ceased trading.

But when there is ebb, there is flow. When Metro Bank opened in 2010, it was the first new company to be granted a UK banking licence in over 100 years. Atom Bank followed suit, joined by other new entrants. “Challenger Banks” – as they are now often termed – comprise both established retail organizations that have moved into banking (e.g. Tesco, M&S) and new banking start-ups.


A Cutting Edge

What are these new banks offering that makes them attractive?

New UK banking start-ups including Atom and Starling offer an online-only service, and are passing on some of that cost efficiency by offering lower interest rates on loans and higher rates on savings.

With their focus on mobile apps and strong user experience, they aim to attract the growing demographic of technically savvy online banking customers. The number of people using mobile banking, for example, is set to double in the next four years. And it seems to be working: growth figures are outperforming the bigger banks, and lending volumes have increased significantly.

Additionally, offering a “voice” to the customer community is part of the challenger proposition. German digital bank Fidor, which launched in the UK in September 2015, bases its banking model around its online community, offering customers a voice – through its social media platform – in how the bank is run.

Furthermore, the new brands have sustained none of the reputational damage[1] suffered by the major players in the wake of the banking crisis, including brand-damaging scandals around IT crashes, LIBOR, FOREX and PPI mis-selling, insider trading and data breaches. As recently as April 2015, an industry steering group implored banks to “raise their game” to improve their reputation. Yet regulatory violations are reported, and fines continue up to the present day.


Too Big To Fail?

Surely the incumbent giants of retail banking have a response ready?

The first witness for the defence is, paradoxically perhaps, the issue of trust. Compared with a known banking brand, will a customer trust a bank with little or no track record? Despite a couple of banking giants losing the trust of some customers, the incumbents are still trusted more than newer entrants in terms of expertise and capability. People know what to expect from the long-established retail bank branch.

Related to this is the question of whether the bank can provide appropriate – by that, read reliable – technology. Brand loyalty, according to research, hinges on banks providing good technology: an overwhelming 80% of respondents to one survey said so. Whether it is too early to tell how good the challengers’ technology is remains an interesting question.

The next issue concerns regulatory readiness in terms of funding. Anthony Browne, Chief Executive of the British Bankers’ Association, commented:

“Challenger banks are obliged to hold more capital than more established banks, which have data stretching back decades, allowing them to show … they are less risky. Challengers do not have that track record. This can mean that … a challenger is obliged to hold eight times as much capital as a larger bank.”

There is also a question of technology. And it is a big question. Customers might see the mobile app as the face of the bank, but the core processing takes place away from the user’s screen and requires a reliable, resilient infrastructure: a bank’s IT system is its business. By running long-standing, reliable systems, retail banks possess trusted technologies that are also evolving quickly. For instance, the mobile banking apps on our phones are invariably supported by COBOL applications, typically written many decades ago and still running today on mainframe machinery. This is technology with a heritage of investment, value and uniqueness at the larger banks, but it clearly does not exist in the same way in the back office of the Challengers.

Finally, there is the thorny issue of the branch network. Despite significant closures, the established banks’ branch networks far outnumber[2] those of the new entrants. And while the value of the branch for day-to-day consumer banking is open to question, attitudes towards branches remain benign. According to the 2014 UK YouGov poll for the British Bankers’ Association, “57% of banking customers themselves believe access to a branch is important, even if they choose not to use branches”.

Adding it all up

In the UK, the most disrupted banking market, the Challenger banks still account for only a small proportion of the market. Some way to go, perhaps, before a major dent appears.

Yet change is afoot. The comprehensive banking results review published by KPMG in 2015 stated: “The Challengers are outperforming the Big Five in terms of growth (compound annual growth rate 8.2% between 2012 and 2014, compared to a reduction of 2.9% for the Big Five).” While barely noticeable from a distance, the market continues to shift. A former Chief Executive of one major bank predicted that branch numbers and employees in the sector “may decline by as much as 50% over the next ten years” in response to Challenger threats and the new digital economy. It is no great surprise that a significant investment in one Challenger Bank, Atom, was recently made by a major EMEA banking giant, BBVA.

While the Challengers have some advantages – they can adapt to change quickly, enjoy a fairly benign reputation so far, and have fewer overheads – the major incumbents possess the systems, the skills and the deeper pockets, not to mention the major market share, to navigate the choppy waters of market change. The situation seems finely poised.

Many clients in the financial services sector choose Micro Focus to support their enterprise application modernization strategies. Take Standard Chartered Bank, China, for instance. As well as helping it meet regulatory requirements, Micro Focus technology improved performance and response times, reducing a three-hour job to just three minutes and cutting time to market for new services by 25%.

Customers trust banks that possess good technology. And IT could well be the next major battlefield between established and challenger banks in a rapidly changing market.


[1] A poll carried out by YouGov in April 2013 for the Public Trust in Banking symposium found that 73% of respondents believed the reputation of banking was bad.

[2] According to the KPMG report “The Game Changers” (2015), the average branch network size for the “Big Five” UK banks is over 1,400. The average branch network for smaller Challenger Banks (e.g. Metro, OneSavings) is 37.

How can I make my tests execute faster?

This is a common question that often arises from Silk Test users. In most instances we find that a number of efficiency gains can be obtained from changes to how the tests are scripted. The Silk Test suite provides many ways of interacting with an application under test (AUT), some of which can drastically reduce the overall execution time of your tests. This blog provides information on how the findAll method can help reduce the overall execution time of tests with Silk4J. The procedure is similar for the other Silk Test clients.

The following code snippet was created by a real Silk4J user to determine the number of different controls within a Web Application. When executing tests in Chrome and Firefox, the user found that the execution time was significantly higher than in Internet Explorer. When tracing through the Silk4J framework of the user, we discovered that the vast majority of the additional execution time was being lost in a utility method called countObjects. The utility method was scripted as follows:

public void countObjects( ) {

    desktop.setOption("OPT_TRUELOG_MODE", 0);

    int iCount = 0;

    int i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//INPUT[" + (i+1) + "]")) { i++; }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//SELECT[" + (i+1) + "]")) { i++; }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//BUTTON[" + (i+1) + "]")) { i++; }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//A[" + (i+1) + "]")) { i++; }
    iCount += i;

    System.out.println("Counted " + iCount + " items.");
}


The initial analysis of the method indicated that the code was not efficient for the task at hand.

Why was the above code not efficient?

The following areas were identified as inefficient:

  1. The method counts each control individually; therefore, if there are 100 controls to be counted, the above snippet makes over 100 calls to the browser, because every exists() check is a separate round-trip.
  2. Using while loops in this fashion leads to unnecessary calls to the browser, and therefore to wasted execution time.
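To see why counting per control is so costly, the browser round-trips can be modelled with quick arithmetic. This is an illustrative sketch in plain Java, not the Silk4J API, and the control counts are hypothetical: each exists() loop makes one call per matching control plus one final failing call that ends the loop, while a single findAll() query returns the whole list.

```java
// Rough model of browser round-trips per approach (illustrative only).
public class CallCountModel {

    // Calls made by the original while-loop approach for one control type:
    // N successful exists() checks plus 1 failing check to exit the loop.
    static int loopCalls(int matchingControls) {
        return matchingControls + 1;
    }

    // Calls made by the findAll approach for one control type:
    // a single query returns the whole list of matches.
    static int findAllCalls(int matchingControls) {
        return 1;
    }

    public static void main(String[] args) {
        // Hypothetical page: 40 INPUT, 25 SELECT, 20 BUTTON, 15 A controls.
        int[] controlsPerType = {40, 25, 20, 15};
        int loopTotal = 0, findAllTotal = 0;
        for (int n : controlsPerType) {
            loopTotal += loopCalls(n);
            findAllTotal += findAllCalls(n);
        }
        System.out.println(loopTotal + " vs " + findAllTotal); // prints "104 vs 4"
    }
}
```

Even on this modest hypothetical page, the loop approach costs 104 round-trips where findAll needs just 4, and the gap grows linearly with the number of controls.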

How could the code be made efficient?

Analysis of the countObjects method revealed that the same functionality could be achieved in four Silk4J statements. To make this possible the method was modified to use the Silk4J method findAll. This method returns a list of controls matching a specific locator string with a single call, therefore it has the following benefits over the original approach:

  1. Each control does not need to be found individually before incrementing the count.
  2. No unnecessary calls are made to the browser.

The modifications resulted in the following method:

public void countObjects( ) {

    int iCount = 0;

    iCount += desktop.findAll("//INPUT").size( );
    iCount += desktop.findAll("//SELECT").size( );
    iCount += desktop.findAll("//BUTTON").size( );
    iCount += desktop.findAll("//A").size( );

    System.out.println("Counted " + iCount + " items.");
}


Visually the modifications have already reduced the number of lines of code required to perform the same functionality as the original utility method. There is now also no requirement for loops of any sort and from an execution standpoint, we were able to demonstrate the following performance gains:


The above chart and table demonstrate how much time a user can gain by simplifying their code through the use of other methods within the Silk4J API that are more suitable for a particular task. The performance gains in Google Chrome and Mozilla Firefox are substantial, while the execution time in Internet Explorer is now less than a second. Overall this process has resulted in better code, better efficiency and, ultimately, time saved.

Can I apply this to other parts of an automation framework?

In the following example the user needed to verify that the labels and values within a number of tables have the correct text values. Again, the execution time in both Mozilla Firefox and Google Chrome was considerably higher than the execution time in Internet Explorer. For example, the user experienced a great difference in execution times when executing the following method:

public void verifyLabels(String[ ][ ] expected) {

    for (int i = 1; i <= 17; i++) {
        DomElement lbl = desktop.find("//TH[" + i + "]");
        DomElement value = desktop.find("//TD[" + i + "]");
        Assert.assertEquals(expected[i-1][0], lbl.getText( ));
        Assert.assertEquals(expected[i-1][1], value.getText( ));
    }
}


Why was the above code not efficient?

In the above method, two find operations are executed inside the for-loop. Each iteration of the loop increases the index of the controls to be found. To find all the elements of interest and to retrieve the text of those elements, 68 calls to the browser are required.

Where are the efficiency gains?

As the loop is simply increasing the locator index, which indicates that Silk4J is looking for a lot of similar controls, the find operations within the for-loop can be replaced with two findAll operations outside of the loop. This modification immediately reduces the number of calls that must be made to the browser through Silk4J to 36 and the method now reads as follows:

public void verifyLabelsEff(String[ ][ ] expected) {

    List<DomElement> labels = desktop.<DomElement>findAll("//TH");
    List<DomElement> values = desktop.<DomElement>findAll("//TD");

    for (int i = 0; i < 17; i++) {
        Assert.assertEquals(expected[i][0], labels.get(i).getText( ));
        Assert.assertEquals(expected[i][1], values.get(i).getText( ));
    }
}

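The call counts quoted above (68 before the change, 36 after) can be verified with quick arithmetic. This is a minimal sketch in plain Java, assuming one browser round-trip per find, findAll or getText call:

```java
// Round-trip arithmetic for the verifyLabels rewrite (illustrative only).
public class VerifyLabelsCallCount {

    // Original method: each of the 17 iterations makes
    // 2 find() calls + 2 getText() calls = 4 round-trips.
    static int originalCalls(int rows) {
        return rows * 4;
    }

    // Rewritten method: 2 findAll() calls up front,
    // then 2 getText() calls per row.
    static int improvedCalls(int rows) {
        return 2 + rows * 2;
    }

    public static void main(String[] args) {
        System.out.println(originalCalls(17) + " -> " + improvedCalls(17)); // prints "68 -> 36"
    }
}
```

Note that getText() still costs one call per element in both versions; the saving comes entirely from collapsing 34 indexed find() calls into 2 findAll() calls.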

Performance Impact

The chart below highlights how small changes in your Silk4J framework can have a large impact on the replay performance of tests.



Give thanks… for Cyber Sunday

Black Friday and Cyber Monday now have a new playmate: Cyber Sunday opens its virtual doors for the first time this year. Derek Britton fires up the laptop, gets his credit cards out and asks how well it will perform on the day.

Black Friday for Dummies: enjoy your Thanksgiving break by going shopping. Public holidays have long been marked by a bout of retail therapy, and retailers are geared up to take advantage of our behaviour. Black Friday is now a key pre-Christmas spike in terms of retail revenue; in 2014, a staggering $47bn changed hands over this period.

But what are you going to do if you’re unable to go shopping on the Friday of Thanksgiving, or can’t imagine anything worse than grappling with strangers on the shop floor for the sake of a few bargains? After all, aren’t we all online now?

Say hello to Cyber Monday – a solution for those unwilling or unable to actually go shopping. Retailers have forensically targeted this demographic, and Cyber Monday offers similar sales incentives to those who prefer to hunt their bargains virtually.

But… on a Monday? Why can we only shop online on a Monday?

Perhaps the rationale is that Thanksgiving includes the whole weekend and the beginning of the following week is the best place for the virtual version. Alternatively, perhaps the retailers anticipate most buyers will be back online once they get back to work and likely to prefer a virtual shopping trip to actual work. Either way, the choice of the Monday seems arbitrary.

Why Wait?

That seems to have driven Wal-Mart to offer the same promotional campaigns via their websites this Sunday instead of waiting until Monday, as Reuters reported this week.

Their view, I assume, is that being online is a 24/7 existence. The logic is irrefutable, even if an invisible truce has been forever broken: Wal-Mart’s behaviour will reset the template for the US internet retail industry, this week and into the future. And we already have a name for this new phenomenon. Yes, Cyber Sunday is here. Probably to stay.

Variable Consumer Demand

This isn’t setting any precedents in the UK. For a long time, the post-Christmas promotions in the UK have crept incrementally earlier and earlier. Many bargains are available way before Christmas.

It’s no secret that this promotional activity is ultimately designed to boost sales. These peaks and troughs in retail internet traffic underpin what analysts call Variable Consumer Demand.

There are other examples. They include major dates in the retail world – Christmas, Superbowl Sunday – and, to a lesser extent, the seasonal stock clearance exercises we call ‘the sales’. The elasticity of these sales and promotional periods is such that it becomes something of a challenge to remember a point when paying full price was the norm.

IT Peaks and Troughs

Business has long coped with peaks and troughs, but extending the rollercoaster of demand in terms of web traffic creates significant challenges for IT – namely, supporting regular business demand while meeting high demand with the same infrastructure. To put that in context, ‘peak’ demand may exceed the ‘normal’ traffic demand by some distance.

Many large retailers have suffered catastrophic problems by simply failing to predict the demand; their IT systems simply collapsed under the weight of web traffic. Loss of service, as described in Renato Quedas’ recent blog, can impact organizations to the tune of $300,000 per hour.

You need to perform on the day

Establishing an online promotion without preparing the underlying IT infrastructure to cope with demand is at best a risky approach and a gamble that many cannot afford to lose. Keeping out of the headlines for the wrong reasons is, however, well within the retailers’ grasp. It is a question of assessing the potential risk and using contemporary technology to meet it.

The Silk Performer™ and Silk Performer CloudBurst™ products from Borland enable software quality teams to rapidly launch any size peak-load performance tests without the burden of managing complex infrastructures. The Borland white paper Testing Times for e-Commerce is available now.

Want to find out more?

Wal-Mart has the confidence to expand their online promotion, and their actions will not go unnoticed by other retailers. Another period of peak consumer demand is upon us. See our Black Friday infographic and take a look at how smart organizations have tackled variable demand to ensure they grab their slice of the Thanksgiving pie.


DevOps & Quality Automation

DevOps means different things across the IT & development landscape. When it comes to quality, it’s about constantly monitoring your application across all end user endpoints, for both functional and performance needs. The goal is to maintain a positive end user experience while continuing to make the entire process more efficient from both speed and cost perspectives.

So, how does DevOps work in the testing arena? Extremely well for Borland Silk customers, as Archie Roboostoff explains in this blog.

DevOps means different things across the IT and development landscape. Take quality. The bottom line is to maintain a positive end-user experience while making the process more efficient from both speed and cost perspectives. The top line is to constantly monitor your application across all end-user endpoints, for both functional and performance needs.

The old-school method required a combination of record/replay or manual tests run across large virtual instances of platform and browser combinations. This was great for eliminating functional and performance defects, but as more browsers and devices came to market, testing across these variants became more demanding. This led to organizations trading delivery speed against adequate testing. As many of their customers will testify, it is one compromise too many – and one that is no longer necessary.

For example?

One of our large financial customers was spending more time building infrastructures than testing their applications. They had almost doubled the size of their quality teams while reducing their overall test effectiveness – as the missed deadlines and budget over-runs confirmed. Their products were falling behind competitors, the end-user experience was poor and their ability to deliver new products was very unpredictable.

It doesn’t have to be this way. This blog post is a walk-through of using Silk to achieve an automated quality infrastructure for DevOps in just a few short steps. It is quick and simple. This process works for any operating environment, test desktop and web application across any combination of platform and device.

Step 1 – Create the tests

The continuous monitoring of quality in a DevOps context is about taking any development changes – nightly builds, hotfixes, updates and full releases – and rapidly scaling out the testing environment to improve the end-user experience. This swiftly identifies any device- or browser-specific issues caused by the changes, exactly where they occurred. Many organizations test applications manually because they feel automation is too difficult or requires a different skillset. With Silk, less technically adept users can create scripts visually, while coders work from within the IDE.

Image 1 – Visual test creation with Silk – no technical skill needed.
Image 2 – IDE test creation – either in Eclipse or Visual Studio


Step 2 – Verify and Catalog the newly created tests

To confirm that our test works, we need to set a context that makes collaboration easier and test reuse simpler.

Verifying the test runs is straightforward – create your test and select one or more target browsers or devices to run the test on. It doesn’t matter where or how the test was created; if the test was recorded using Internet Explorer, it can be verified to work across any browser or device. To select the test target just point and click.

Running a test across any browser or device.

Making it work for DevOps

Our test works, so let’s add some context to it for better collaboration and reuse in the larger DevOps infrastructure. A unique Silk feature, ‘Keyword Driven Tests’, enables cataloging a large number of tests and giving them business/end-user context. In this case, we will provide a keyword for our test. Keywords can be assembled to reflect a series of transactions or use cases. For example, adding keywords like ‘verifyLogin’ and ‘checkoutShoppingCart’ will create tests to check the robustness of the login and shopping cart checkout.

We are now able to collaborate better with business stakeholders in the DevOps context, enabling the creation of a variety of use cases that get to the root of any post-deployment issues. Keywords can also take parameters and pass them to the tests in question, making automation even easier.
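Conceptually, keyword-driven testing is a dispatch from business-readable names (plus parameters) to underlying test actions. The sketch below is illustrative plain Java only, not Silk’s actual keyword API; the keyword names echo the examples above, and the actions are hypothetical stand-ins for real test steps.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch of keyword-driven dispatch (not Silk's API).
public class KeywordRunner {

    // Each keyword maps a business-readable name to a parameterized action.
    private final Map<String, Function<String[], String>> keywords = new LinkedHashMap<>();

    void register(String name, Function<String[], String> action) {
        keywords.put(name, action);
    }

    // A business-readable test is just an ordered sequence of run() calls.
    String run(String name, String... params) {
        return keywords.get(name).apply(params);
    }

    public static void main(String[] args) {
        KeywordRunner runner = new KeywordRunner();
        // Hypothetical actions standing in for real test steps.
        runner.register("verifyLogin", p -> "logged in as " + p[0]);
        runner.register("checkoutShoppingCart", p -> "cart checked out");

        // Keywords assembled into a use case, with parameters passed through.
        System.out.println(runner.run("verifyLogin", "testuser"));
        System.out.println(runner.run("checkoutShoppingCart"));
    }
}
```

The point of the pattern is that business stakeholders compose tests from named building blocks without touching the automation code behind each keyword.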

Image 4 – Keywords being added to tests.

Step 3 – Create an execution environment

How do we get to a point where continuous deployment and integration overcome the challenges of today’s software market? The key is to set up an environment that can replicate real-world conditions. As we have said, this would previously require a number of development/test boxes or virtual machines, which were expensive to maintain and difficult to set up. For DevOps, Silk can manage and deploy an ‘on demand’ test environment in a public cloud, a private cloud, or on-premise within the confines of the enterprise data center.

With Silk, setting this up is easy and straightforward. So let’s do it. In this example, we will set up an execution environment on the Amazon AWS infrastructure. Even though this is a public environment, all tests and information access are secure. The environment can equally be set up in a more private cloud setting, on-premise within the firewall – even on the tester’s individual machine. Whatever the environment parameters, Silk has you covered.

Image 5 – Connecting Silk to an execution environment.


Step 4 – The Amazon test case

We are about to connect Silk to an Amazon instance using AWS credentials given to us by Amazon. Silk will set up the browser and device combinations so that we can rapidly deploy our applications for testing. Rather than manually setting up a range of VMs to test different versions of Chrome, Firefox, IE, Edge and so on, Silk will spin up the instances, configure them, deploy the application, run the tests, gather the results and then close them out. It is in this context that we really start to take advantage of some key DevOps practices and principles.

Silk will take the tests we have created and pass them to the environment we have set up. Do you have applications inside the firewall that need to be tested from an external environment? No problem. Silk’s secure technology can tunnel through.

Image 6 – Tunneling through the firewall with Silk to test private internal applications from public execution environments.

Step 5 – Run the tests

Now that we’ve created the tests, added the context and set up the execution environment, we must determine when – and how – we want these tests to run. In most DevOps environments, tests are triggered from a continuous integration environment managed with Jenkins, Hudson, CloudBees, etc. Whatever the preferred solution, Silk will execute tests from any of these providers.

When the tests are executed, depending on the selected configurations, Silk will run through them and provide a detailed analysis across all browsers, devices, keywords and tests. Remember – the more the better, as you want to test on what your end-users will be using. Better safe than sorry: along with the analysis, screenshots of each test show the evolving trends for that application. This gives visual confirmation that the test either succeeded or highlighted an issue – especially important in responsive and dynamic web designs.

Image 7 – Results analysis from Silk running a series of tests and keywords through the execution environment.

Each test is outlined across the platform/browser/device combination, and the end user benefits from a visual representation along with detailed results analysis. Management appreciates the high-level dashboard summary showing trends across targets.

Image 8 – Dashboard of runs over time.

Step 6 – Test Early, Test Often

Now that the test environment has been established and connected to the build environment, ongoing testing across any number of environments is completely automated. Instead of setting up testing environments, quality teams can focus on building better tests and improving collaboration with business stakeholders. Here is where Silk delivers true DevOps value to an organization. New browsers, such as Microsoft Edge, can easily be added to the configuration environment. There’s no need to recreate tests; just point the tests that are already there at the new environment.

Image 9 – Adding new browsers to the execution environment.

Step 7 – Performance Tune

Along with each functional automation piece, Silk can test from both a ‘load’ and a ‘functional’ perspective. When testing applications under load, Silk determines average response times, performance bottlenecks, mobile latency, and anything related to a massive amount of load generated on a system. From a functional perspective, Silk runs a smaller number of virtual users across different browser configurations to outline where things can be tuned. And this is key information. For example, Silk can determine that your application runs 15% slower on Firefox than on Chrome on Android. Adjusting the stylesheets or JavaScript may be all that is required to performance-tune your application. Testing for responsive web design is crucial to keeping user-experience sentiment high in a DevOps context.
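A figure like the 15% above is derived by comparing average response times between configurations. A minimal sketch of that comparison, with hypothetical timings rather than real Silk output:

```java
// Comparing average response times between two browser configurations
// (illustrative only; the millisecond figures are hypothetical).
public class ResponseComparison {

    // Percentage slowdown of a candidate configuration versus a baseline,
    // rounded to the nearest whole percent.
    static long slowdownPercent(double candidateMs, double baselineMs) {
        return Math.round((candidateMs - baselineMs) / baselineMs * 100);
    }

    public static void main(String[] args) {
        double chromeMs = 200;   // hypothetical average response time on Chrome
        double firefoxMs = 230;  // hypothetical average response time on Firefox
        System.out.println(slowdownPercent(firefoxMs, chromeMs) + "% slower"); // prints "15% slower"
    }
}
```

Tracking this ratio per browser and device over successive runs is what turns raw response times into a tuning target.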

Image 10 – Performance tuning across different devices and platforms for optimization in responsive web design.

Using Silk’s technology and running these tests over time will track the trends. This, along with detailed analytics from data within Silk and sources like Google PageSpeed, will illustrate where your applications will benefit from being fine-tuned across browsers and devices.

Image 11 – Outline where applications can be adjusted for better end user performance and optimization.

In conclusion

DevOps is a slightly nebulous phrase that means different things to different people. But when it comes to testing, the value is pretty clear. Aligned with the right software, it will ensure your applications perform as expected across any device/browser/platform. In addition, using Silk will ensure that your apps are:

  • delivered on time and within budget
  • constantly improving
  • responsive
  • built and tested collaboratively
  • feeding trend data on responsiveness and quality to a central location
  • successful with end users/consumers.

So if this sounds like something you can use, then we should talk about Silk. It’s the only tool that can help you reach your quality goals today, and it will continue to innovate to help you eliminate complexity and continuously improve the overall development and operations process. Now that’s DevOps.












Delving into DevOps – the Data

DevOps is on everyone’s lips these days, but we’re not all talking about it in the same way. Derek Britton takes a look at the latest industry study to find if there’s anything we can all agree on.

The Customer, DevOps and Micro Focus

Recently, we’ve been speaking frequently with our clients about the popular DevOps topic, and we are hearing more examples of its implementation, usage and success. Lately, we heard of one client who has built a DevOps framework at the centre of their entire IT operation; we have seen another client recruit a new CIO specifically because of his DevOps experience.

… oh, and everyone else

This is indicative of a broader industry appeal around the topic. According to one observer, “we are seeing the gold rush phase for DevOps in 2015[1]”. Consider just a few of the public events around DevOps in recent months:

Each of these will no doubt repeat in 2016, joining the Cloud and DevOps World Forum – and that’s not to mention the plethora of vendor-specific events that will showcase their own DevOps angle.


Read all about it

Of course, there’s no need necessarily to travel to learn. Publications on the topic are wide and varied, ranging from the highly accessible Mainframe DevOps for Dummies (by IBM luminary Rosalind Radcliffe, as launched at SHARE, August 2015) to the acclaimed and comprehensive Phoenix Project by Gene Kim. Meanwhile, taking that knowledge further presents seemingly limitless possibilities: DevOps-related certification, training, press articles and blogs abound, as do mentions of DevOps as a key element of many a vendor’s messaging today. (I can’t tell you how many times I have read a vendor promote that they are “The DevOps Company”.)

Delving into DevOps

Now, while it could be argued that some of the DevOps documentation is slanted according to the perspective of its authors, this is often the case while new trends emerge and attempt to define themselves clearly. Establishing a de-facto “truth” from the various viewpoints is often the task of broader surveys and industry studies. A CA study from 2013 suggested tangible results and inferred at least some direction on how the industry was embracing the idea. More recently, another global survey was conducted by DevOps.com which may – we anticipate – provide further insight.

A third example of an industry study is the far-reaching and illuminating 2015 Annual State of DevOps: “A global perspective on the evolution of DevOps”, conducted by Gleanster/Delphix.


A Real State

The study surveyed 2,381 IT practitioners and leaders from across the globe, including 49% at CIO or IT Director level. The appetite for, and effort towards, DevOps adoption was evident in the responses – very interestingly, 73% of those surveyed had already set up a dedicated DevOps group. Other results include a number of interesting perspectives that Micro Focus shares.

  • When looking at specific DevOps practices, the results reported that “continuous integration” was the second most popular activity among DevOps leaders, with 64% of respondents agreeing to this stated aim. This is consistent with Micro Focus’ view that efficient, repeatable and rapid build and test cycles are a key requirement in DevOps adoption[2].
  • In terms of who drives DevOps – the question was put as bluntly as “Dev or Ops?” The results showed Dev as the senior partner (50%), with Ops at 17%. “Shared” leadership scored 34% (one can only assume the numbers were rounded, as they don’t total exactly 100%). This is consistent with Micro Focus’ assertion that Development often has to act as the “leading light” in DevOps activities.
  • The rationale and motivation for DevOps produced a top three of Faster delivery (88%), Faster bug detection (69%) and Greater delivery frequency (64%). This is consistent with Micro Focus’ own market view, where the drive towards faster delivery of more predictable, high-quality releases is a fundamental principle of DevOps adoption. This is especially true for our mainframe clients, where reliability and availability are critical.
  • Finally, a soul-searching question about how effective each organization was at DevOps provided some interesting insight. While “leaders” were very upbeat (96% saying they were “very” or “somewhat” effective), those who classed themselves as “practitioners” were less positive, with nearly two-thirds saying they were only “somewhat effective” or “ineffective”. The disparity between leadership and practice is perhaps not atypical, but it at least raises the question of whether desire outstrips reality in many cases.



Hope and Hype

The study makes interesting reading. DevOps enjoys growing clarity, purpose and investment, yet faces significant ongoing challenges. Aiming towards faster delivery, higher frequency and better bug detection will improve results and reputation, such that the hope will catch up with the hype.

Fixing such specific challenges in the delivery cycle is the cornerstone of the Micro Focus solution for mainframe DevOps: providing practical solutions to real industry challenges. We look forward to the debate continuing.

[1] Gleanster, Delphix, 2015

[2] Most popular was agile data management

Legacy Systems timebomb. What ‘timebomb’? Re-use and defuse…

A piece on the FCW site, calling out the supposed dangers of legacy IT caught the eye of Ed Airey, our Solutions Director. He responds below.

This article raises some interesting – and some very familiar – points. Many of them I agree with, some of them less so.

I certainly concur that putting the right people in the right places is just good business sense. For any forward-thinking organization, underpinning future business strategy depends on recruiting, retaining and developing the next generation of talent.

This is particularly true for enterprises with significant investment in legacy applications and it’s an area we have addressed ourselves. But this is where our paths diverge slightly.

To recap Mark Rockwell’s concerns, any business that allows IT staff with core business app knowledge to leave the business without being replaced by developers with the right skills is looking at the potential for organization-wide impact. For “legacy IT systems”, I read ‘COBOL applications’. And I disagree with the apocalyptic scenarios he is using.

For sure, a so-called ‘skills gap’ could affect business continuity and compromise future innovation prospects. It is – or should be – a concern for many organizations, including the federal agencies that Mark calls out. But he quotes a CIO, speaking at the President’s Management Advisory Board, who likens the potential, albeit more slow-burning, impact to the Y2K bug. The IT industry knows about the so-called skills crisis just as it knew about the Y2K bug. By preparing in the same diligent and focused fashion, it’s highly likely that the crisis will fizzle out, leaving the apocalyptic headlines high and dry.

Fewer people, more challenges

Now, safely into 2015, the modern CIO has plenty of other challenges. Addressing the IT backlog, meeting tough compliance targets and developing a smarter outsourcing strategy all add to the in-tray. Meanwhile, organizations must support the evolving needs of the customer – that means delivering new web, mobile and Cloud-based services quickly and in response to new user requirements.

There is always a right way to do things; the key is distinguishing it from the many alternatives. For owners of so-called legacy IT, modern development tooling offers many benefits. Modernization enables easier maintenance of well-established applications, and will support the business as it looks to innovate.

In addition, contemporary development environments (IDEs) make supporting core business systems easier.  With a wider array of development aids at their fingertips to accelerate the build, test and deploy process, more programmers than ever can support organizations in filling these skills shortfalls.


Why rewrite – just re-use

These game-changing modern tools help organizations proactively develop their own future talent today and extract new value from older business applications, while providing a more contemporary toolset for next gen developers.

How ‘modern’ are these modern tools?  Next generation COBOL and PL/I development can be easily integrated within Visual Studio or Eclipse environments, reducing development complexity and delivery time.  The Visual Studio and Eclipse skillsets acquired through local universities are quickly applied to supporting those ‘archaic’ core business systems that have quietly supported processes for many decades yet are – suddenly – no longer fit for purpose.

But of course, they are perfectly able to support organizations meet future innovation challenges. The key is embracing new technology through modern development tooling. It is this ‘re-use’ policy that helps IT to confidently address skills concerns, build an innovation strategy – and support trusted business applications.

Late in the piece, the writer references the Federal IT Acquisition Reform Act. For government agencies facing these multiple compliance challenges, the modern tooling approach offers a low risk, low cost and pragmatic process to delivering value through IT.

This stuff works

Micro Focus can point to a significant body of work and an order book full of happy customers. The Fire and Rescue Department of the City of Miami, for example, halved its IT costs through its modernization program. The Cypriot Ministry of Finance is another example: a 25-year-old COBOL-based Inland Revenue payment and collection system was given a new lease of life through Micro Focus technology.

So – can you hear a ticking sound? Me neither.

To learn more about modern development tooling in support of core business applications, visit: www.microfocus.com

Micro Focus: In good company

Did you hear about Micro Focus winning big at the UK Tech Awards last Wednesday? Well, here’s your chance

It’s never dull on the IT show circuit. A few days after Micro Focus touched down from SUSECon in Amsterdam, on Wednesday it was time to revive the dinner suit in preparation for the UK Tech Awards 2015 in London.

And it’s just as well that we reclaimed the company tuxedo from the dry cleaners, as we won – and won big. Micro Focus is the 2015 ‘Tech company of the Year’.

Since 2000, the TechMark Awards – as they were formerly known – have recognised the achievements of UK public and private companies in the technology sector. This year the field was as competitive as ever, with a strong set of fellow nominees.

What a difference a year makes

This time last year Micro Focus and the Attachmate Group (TAG) were two separate companies and COBOL was regarded in some quarters as yesterday’s language.

Since the turn of the year, so much has changed. COBOL has moved up 11 places in the TIOBE Programming Community Index and now sits at No. 11, while Micro Focus has completed the merger with TAG and is now one company operating two product portfolios: Micro Focus and SUSE.

The Micro Focus portfolio includes identity access, security, COBOL development and mainframe solutions, development and IT operations management tools, host connectivity and collaboration/networking solutions.

The SUSE portfolio includes leading enterprise-grade open source solutions including Enterprise Linux, OpenStack private cloud, software-defined storage and other IT infrastructure management and optimization solutions.

But the awards are not about who has the best products and solutions. That is for the marketplace to decide. Nor are they about which companies represent the best investment opportunity – although analysts have already nominated Micro Focus as a ‘top pick’ for Q4.


Movers and shakers

Instead, the judges – who include some of the industry’s major players – have recognised Micro Focus’ bold move to create one of the industry’s biggest infrastructure software companies, and the dedication of the management team and staff that have made it happen.

This is a volatile industry and no-one can predict the future. However, it is likely that certain trends will endure – chief among them are good customer service and access to solid portfolios that enable customers to achieve the innovation needed to meet tomorrow’s challenges.

By offering the full range of services for those looking to create and maintain business-critical applications and software – from the initial build, through operation, to securing those business systems against disruption – Micro Focus and SUSE have the product spread and solution portfolio to help an installed base of more than two million license holders meet a broad range of challenges.

In this industry to spend too long in self-congratulation is to invite hubris. So we won’t be doing that. Once the dinner jacket has gone back in the wardrobe, the sleeves will be rolled up and we’ll be back to work. Albeit with a nice trophy for the reception area.

‘Sur-thrive-al’ guide to attending #Devday

We get asked a lot of questions via social media, our website and email about upcoming Micro Focus #DevDays. Jackie Anglin is now a seasoned #DevDay veteran, having presided over 25 of them to date across North America. We’ve summed up the most common questions and here are Jackie’s answers to help you plan your day!

Where do I register and how far in advance should I let you know I will be going?

We do our best to make sure everyone has a great seat and a great view. Registering early is key, and the best place to find the schedule is the #DevDay hubpage. Plan to arrive before the morning session begins because these events are growing.

You’ll see from the photos there are very few spare seats, and some delegates show up at the door on the day too. Register early to help us find the right space, cater correctly and get the logistics right. If there isn’t a #DevDay scheduled near you, simply fill out the form at the bottom of the #DevDay hubpage to request one and we’ll see what we can do.


Are the #DevDay events only aimed at COBOL or Mainframe COBOL Developers?

Think about bringing a COBOL sceptic or two! Your whole development team is welcome, COBOL developers and non-COBOL developers alike! It’s been really interesting to watch developers who haven’t been exposed to COBOL see how easily they can work with the language using the modern Visual Studio or Eclipse IDEs. It’s all about taking what works today and ensuring it will still work in years to come – we’ve covered some of this in a previous #DevDay blog. Don’t forget that a #DevDay only costs your time, so the business case for expanding the invitation to a wider team may not be as difficult as you think. Being there together might just lead to a light-bulb moment of cross-team developer efficiency!


Do you provide lunch and drinks?

We like to feed our guests well – blame it on my Southern American upbringing – but no one leaves hungry. I highly recommend everyone stay for the networking reception afterwards, and invite your local colleagues (and maybe even your boss or manager) to join us. Our techies let their hair down (or not, in the case of Mike Bleistein) and like to talk even more about the COBOL community, the old days and the future, and answer your tough questions. A designated driver or public transport might be a good idea! If you are travelling in, why not get in the mood by downloading our #COBOLRocks playlist to your Spotify account.

What do you mean by ‘Stump the expert’?

It’s the last and my favourite part of the day!!  It’s your chance to ask our panel of experts your hardest questions specific to your environment or related to your current and future projects and development challenges. I want our know-it-all experts to get stumped.  If you stump them, make sure you boast remorselessly about it on Twitter to let the world know!  As far as I know they’ve never been defeated with any COBOL or Mainframe AppDev challenge……


Can I find the events online too?

These events buzz as much online as they do in the room – we’re very passionate about social media, if you can’t tell by the hashtag in our event name. We seriously love it, and our large developer community talks regularly via Twitter, LinkedIn and Facebook. In fact, I get pretty giddy when I meet a fellow tweeter… so much so that I may have to take a selfie to tweet. If you do tweet about #DevDay you’ll be in line for a reward, and you’ll quickly find out how many like-minded developers there are out there! Details of prizes and awards will be announced on the day.


What do I need to bring with me?

While we love the stories about Derek Britton back in the day at past Developer Conferences, I love hearing stories about the cool applications powered by COBOL out there. Please bring your real-world stories to share. You could also bring a laptop if you’re involved in a Product Trial. There aren’t many opportunities these days to get up close and personal with the Principal Product Architects and Managers, so use the opportunity to your advantage!


Don’t hesitate to find me on Twitter and point out what I’ve missed from the list if you’re one of our #DevDay alumni. Hopefully see you there in person at a North American #DevDay soon! If you can’t attend one of our North American events, you can find a location near you or request one at the hubpage.


‘May the Open Source be with you’

SUSECon, the Woodstock for Open Source devotees, wound up in Amsterdam last week. Steve Moore took a look around, and found some familiar faces.

The halls of the Beurs van Berlage, Amsterdam, are not a great place to be selling proprietary software. More than 700 delegates attended 127 sessions and visited 20 demo stations, all dedicated to maximising the opportunities that open source offers.

SUSE, now a Micro Focus company, has been creating open source software, Linux and cloud infrastructure solutions for more than 20 years. It was the first company to market Linux for the enterprise and more than 13,000 businesses worldwide now use SUSE Linux Enterprise Server. Open source is embraced by more companies than ever before to simplify application deployment on OpenStack-based cloud infrastructures.


Tale of the tape

The venue was once home to the Dutch stock exchange, the floors strewn with tickertape tracking the ups and downs of the national economy. Fluctuating fortunes are now measured in other ways – not least in how organizations manage disruptive technologies and maintain optimum levels of customer service. But innovation can be as elusive as inspiration. That’s why they come to events like this – to see what open source can achieve.

The Westinghouse Electric Company was awarded the SUSE Always Open Customer of the Year award for excellence in using SUSE solutions to ‘control, optimize and innovate’ their IT environment. They will not be the last organisation to maximise the close development relationship between SUSE and SAP.

The opening keynotes by Michael Miller, SUSE Sales VP, and Nils Brauckmann, SUSE President and General Manager, highlighted partnerships as key to success in the new world of open source. Both speakers highlighted data access, management and storage as new battlegrounds – and SUSE has just won twice at the 2015 Storage Awards. SUSE also used the conference to announce that it is set to collaborate with SAP on the OpenStack cloud provider interface. Both men believe that only open source can support the innovation needed to win these struggles.

Well they would say that, wouldn’t they?

It’s not surprising that open source zealots would bang their own drum. But other voices are speaking up in favour of the freedom to innovate that open source offers. Microsoft, HP and other companies on that scale offer open source options. And when IBM books a stand, you know that the industry’s major vendors view open source with respect.

Opinions formed many years ago are being revisited – this article deconstructs 10 of the more obvious concerns. And with more companies under growing pressure to keep the lights on and fund innovation, all while stymied by stagnant IT budgets, doing their own thing with open source becomes more tempting.

Micro Focus is happy to stay out of this particular argument. Borland GitCentric and Subversion Connector support open source standards without being open source tools per se. However, if open source works for some of our customers we are happy to support that, and we recognise the need to resolve business challenges through new thinking rather than throwing money at them.


On the right track

This is a central tenet of the wider Micro Focus approach and it was a key theme in our session at SUSECon. Derek Britton and Ed Airey hosted a well-attended discussion, Ready for the mobile-first and Cloud-first economy. In a room with an attractive – and distracting – view of Amsterdam’s Centraal train station, Micro Focus offered a new perspective on enabling mainframe and distributed systems to meet the demands of the digital age.

This could have been subtitled ‘where Micro Focus COBOL meets SUSE Linux Enterprise on the IBM mainframe’, because for organisations running core systems on outdated platforms, this slide deck offered a pathway to future innovation.

The key is to leverage what’s there and just add what’s needed. Because in most cases the current IT has enduring value and the organization that adds to it rather than risking it all in a drastic package replacement or rewrite strategy will achieve an advantage over those that fail to recognise this potential.


Taking core applications into the future

The IT challenges are many and resolution depends on the organization’s ability to form a future-proof platform strategy. In short, where can the organization run the core business applications that have underpinned past success? Many are moving away from older variants of UNIX or other proprietary machines, such as Unisys or Tandem, and towards emerging standards, such as Linux. Rehosting to open architectures can reduce TCO by up to 90%.

But the platform is just half the story. Indeed, the true value of core systems to the organization lies in the business logic and data, rather than the platform or language. So protecting, evolving and enhancing that value in the digital age is vital. As such, application development – adapting older code to meet business changes – requires some thought, too. But there are plenty of options. A modern IDE can breathe new life into long-standing applications – an issue for the many enterprises whose portfolio is 65% COBOL.

Imagine a banking network using COBOL apps to run its ATM estate. Those same apps now need to support mobile device access – a technological innovation that the original developers could not have anticipated, and a good example of the disruptive and mobile technologies now challenging many organisations. And who can predict the next one?

Whatever the next innovation, maximising the enduring value of the open mainframe, COBOL and SUSE combination – creating a back-end IT infrastructure robust enough to withstand future demands – is a business-critical decision. Platform selection is another strategic decision requiring a well-considered answer; it is safe to assume that the move from UNIX or proprietary systems to Linux (whether on a new IBM LinuxONE or other hardware) will appear on many boardroom agendas in the future. Importantly, rehosting to an open architecture liberates developers from the prescriptive structures of proprietary code. It creates the potential for organisations to shape their own futures, built on platforms and applications designed for their own requirements. So – will the source be with you?

Building a Robust Test Automation Framework – Best Practice

According to Trigent Software’s own research, a robust test automation framework ranks highly on their list of Software Testing ‘must-haves’. When executed in a structured manner, it helps improve the overall quality and reliability of software. Read more from Test Architect Raghurikan in this fantastic guest blog…

Through our experience and research, ranking high on our list is a robust test automation framework. When executed in a structured manner, it helps improve the overall quality and reliability of software.

The software development industry always faces a time crunch when it comes to the last mile: testing. Ask any software developer and they will tell you that development teams in any corner of the world want testing activities to be faster than humanly possible, results that are laser accurate, and all of this without compromising quality. Manual testing fails to live up to these expectations and is therefore least preferred. Test automation is the best choice, as it helps accelerate testing and delivers fast results. For test automation to work well, a robust test framework is needed to act as the core foundation for the automation life cycle. If we don’t build the right framework, the results could be:

  • Non-modularized tests
  • Maintenance difficulties
  • Inconsistent test results

All of which will result in escalating costs, bringing down the ROI considerably.

Best Practices

Framework Organization
The automation framework needs to be well organized so that it is easy to understand and work with. An organized framework is also easier to expand and maintain. Items to consider:

  • An easy way to manage resources, configurations, input test data, test cases and utility functions
  • Support for adding new features
  • Easy integration with the automation tool, third-party tools, databases etc.
  • Standard scripting guidelines to be followed across the framework

Good Design

Automation tests are used for long-term regression runs to reduce testing turnaround time, so the design should ensure that the tests can be maintained easily and yield reliable results. The following are some good design steps:

  • Separate application locators from the test code so that locators can be updated independently in the locator file when they change. Example: use locators from an object map, or an external Excel or XML file
  • Separate test data from the code and pull data from external sources such as an Excel, text, CSV or XML file. Whenever required, we can just update the data in the file
  • Organize tests as modules/functions so that they are reusable and easy to manage. Keep application/business logic in a separate class and call it from the test class
  • Tests should start from a known base state, and should recover and continue when there are intermittent test failures
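The first two design steps – externalised locators and test data – can be sketched in a few lines. This is a minimal, hypothetical illustration (the logical names, file contents and the `driver_fill` stand-in are invented for the sketch, not taken from any particular tool); in a real framework the JSON and CSV would live in external files:

```python
# Minimal sketch of "separate locators and data from test code".
# All names here (LOCATORS_JSON, TEST_DATA_CSV, driver_fill) are hypothetical.
import csv
import io
import json

# In practice these would be external files (locators.json, testdata.csv);
# they are inlined here so the sketch is self-contained.
LOCATORS_JSON = '{"login.user": "#username", "login.password": "#password"}'
TEST_DATA_CSV = "user,password\nalice,secret1\nbob,secret2\n"

def load_locators(fp):
    """Central object map: tests use logical names, never raw selectors."""
    return json.load(fp)

def load_test_data(fp):
    """Data-driven input rows, kept out of the test code."""
    return list(csv.DictReader(fp))

locators = load_locators(io.StringIO(LOCATORS_JSON))
rows = load_test_data(io.StringIO(TEST_DATA_CSV))

def login_test(driver_fill, row):
    # driver_fill stands in for a real UI-driver call (e.g. a send_keys).
    driver_fill(locators["login.user"], row["user"])
    driver_fill(locators["login.password"], row["password"])

filled = []
for row in rows:
    login_test(lambda sel, val: filled.append((sel, val)), row)

print(filled)
```

If the `#username` selector changes, only the locator file is edited; every test that refers to the logical name `login.user` picks up the change automatically.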

Configuration options

The framework should provide the option to choose configurations at run time, so that it can be used as each test execution requires. Some of these configurations include:

  • Ability to choose the test execution environment, such as QA, Staging or Production
  • Ability to choose the browser
  • Ability to choose the operating system and platform
  • Ability to mark priority, dependency and groups for the tests
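One common way to honour these options is to read them from environment variables with safe defaults, so the same suite can be pointed at QA, Staging or Production without code changes. A minimal sketch (the variable names `TEST_ENV`/`TEST_BROWSER` and the URL map are assumptions for illustration):

```python
# Hypothetical run-time configuration selection via environment variables.
import os

BASE_URLS = {
    "qa": "https://qa.example.com",
    "staging": "https://staging.example.com",
    "production": "https://www.example.com",
}

def runtime_config(environ=os.environ):
    """Resolve environment and browser at run time, with safe defaults."""
    env = environ.get("TEST_ENV", "qa").lower()
    if env not in BASE_URLS:
        raise ValueError(f"Unknown TEST_ENV: {env!r}")
    return {
        "env": env,
        "base_url": BASE_URLS[env],
        "browser": environ.get("TEST_BROWSER", "chrome").lower(),
    }

# Simulate: TEST_ENV=staging TEST_BROWSER=Firefox python run_suite.py
cfg = runtime_config({"TEST_ENV": "staging", "TEST_BROWSER": "Firefox"})
print(cfg["base_url"], cfg["browser"])
```

The same idea extends to operating system, platform and test grouping – each becomes one more keyed option with a default, validated before the suite starts.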

Re-usable libraries

Libraries help group application utilities and hide complex implementation logic from the outside world. They aid code reusability and make the code easier to maintain.

  • Build the library of utilities, business logic, external connections
  • Build the library of generic functions of the framework
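A typical generic function to put in such a library is a retry helper, which hides flaky-step handling behind one reusable call. A hypothetical sketch (the `retry` and `flaky` names are illustrative, not from any specific framework):

```python
# Hypothetical reusable library function: retry a flaky step a few times
# before declaring failure, keeping the complexity out of the tests.
import time

def retry(fn, attempts=3, delay=0.0):
    """Re-run fn up to `attempts` times; re-raise the last error if all fail."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Simulated flaky step: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)
print(result, calls["n"])
```

Tests then call `retry(step)` instead of re-implementing their own loops, which keeps the test code focused on intent rather than mechanics.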

Reports and logs

To evaluate the effectiveness of automation we need the right set of results, so the automation framework should provide all the detail required about test execution.

  • Provide logs with the necessary details of a problem, with a custom message
  • Provide reports giving detailed execution status in Pass/Fail/Skipped categories, along with screenshots
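A minimal sketch of that kind of reporting, using Python’s standard `logging` module plus a Pass/Fail/Skipped summary (the test names and statuses here are invented examples):

```python
# Hypothetical result recording: one log line per test, plus a summary
# report bucketed into the Pass/Fail/Skipped categories the text describes.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("framework")

results = []

def record(name, status, detail=""):
    """Store a structured result and emit a log line with a custom message."""
    results.append({"test": name, "status": status, "detail": detail})
    log.info("%s -> %s %s", name, status, detail)

# Invented example outcomes:
record("login_valid_user", "PASS")
record("login_bad_password", "FAIL", "expected error banner missing")
record("login_sso", "SKIPPED", "SSO not configured in QA")

summary = {s: sum(1 for r in results if r["status"] == s)
           for s in ("PASS", "FAIL", "SKIPPED")}
print(summary)  # {'PASS': 1, 'FAIL': 1, 'SKIPPED': 1}
```

In a real framework the `detail` field would also carry a screenshot path captured on failure, and the summary would feed an HTML report.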

Version Control and Continuous Integration

To control the automation framework effectively we need to keep track of it, so a version control system is required; keep the framework integrated with version control.

We also need to run the regression suite continuously to ensure that the tests are running fine and the application behaves as expected, so a continuous integration system is required to support execution and results monitoring.

If we build a robust automation framework with the above capabilities, we gain the benefits below:

  • Increase product reliability – Accurate, efficient, automated regression tests – reduce risks
  • Reduce the product release cycle time – Improve the time to market, Reduce QA cycle time
  • Improve efficiency and effectiveness of QA – free QA team to focus manual efforts where needed






Raghukiran, a Test Architect with Trigent Software, has over a decade’s experience building scalable and efficient test frameworks for automation tools. He is keenly interested in exploring new automation tools, continuous integration setups for nightly execution, and automation coverage. In the blog ‘Best Practices for building a Robust Test Automation Framework’ he discusses the best practices to follow when building a robust test automation framework.

Merging Attachmate and Micro Focus will change how you think about Terminal Emulation

There is no company out there that has the depth and breadth of technology and product expertise that we bring to the table. David Fletcher blogs about how the Micro Focus and Attachmate merger is leading to stronger and more secure host access solutions.

When it comes to mergers and acquisitions we have all heard and read about the chaos that comes from the ones that don’t work.  Remember when AOL announced that it was buying Time Warner to create the “world’s largest media company” or how about when Sprint and Nextel agreed to merge only to have Sprint shut down the Nextel network a few years later?

The interesting thing is that, these days, mergers fail more often than marriages. There are many reasons for failure: technology differences, market changes and, especially, company cultural differences, to name a few.

Not only did the mergers above fail – they brought a lot of pain and suffering to their customers. They failed to make something new and better out of two different companies and cultures. They had a chance to re-think their strategies, but failed to move their products and services forward into a new age of technology and customer satisfaction.



The Micro Focus and Attachmate merger is leading to stronger and more secure host access solutions.

With this merger, there is now no other company that can better address the needs of organizations that want to fit their host systems into a modern and secure IT environment.

With our combined portfolios our customers can:

  • Deliver best-in-class terminal emulation solutions across the range of devices required by their business users
  • Harden their endpoints to help secure the sensitive data and protect the host systems accessed by end users…without impacting user productivity
  • Simplify the interaction with non-intuitive mainframe apps for today’s “Facebook generation” users not familiar with green screens
  • Non-invasively extend the business logic embedded in mainframe apps to developers as web services
  • Work with a single partner that is focused on helping companies get the most out of their long-term IT investments

Whatever mainframe or host system you have – we have experts with years of experience with these technologies.  When it comes to understanding how to secure and manage access to mainframe and host systems – there simply is no vendor that has a more complete set of solutions to protect your critical data-in-motion or at rest.  When it comes to enabling mainframe-based applications to new users in new ways – no other vendor is as passionate about bringing new solutions to our customers. There is no company out there that has the depth and breadth of technology and product expertise that Micro Focus brings to the table.

So how will this merger be different than other technology mergers?

It’s easy to say, “Oh, but this merger is different”.  But what really matters is how have our companies fared with mergers in the past and how is this merger benefiting customers for the future. Both Attachmate and Micro Focus have a history of mergers and acquisitions where we have taken the opportunity to bring products and services into our portfolio to provide more value for our customers.

Since this merger was completed in November 2014 we’ve been working hard on the nuts and bolts that make a company work: bringing together people, systems and processes to make it easy to do business with the combined Micro Focus company. We’ve also been busy working through our product portfolios and determining how this combined product set can best help customers secure and manage their host systems. We’ve been cross-pollinating our products with best-of-breed technologies so that our customers can take advantage of these solutions without having to swap applications.

Here are just a few examples of how we are bringing these technologies together:

  • Reflection Desktop and InfoConnect Desktop now offer the User Interface Modernization capabilities that originated with Rumba+
  • Delivered Host Access Management and Security Server to market which will allow our customers to centrally manage and authenticate access to mainframe systems from our terminal emulation clients.

It’s been a challenge – but we’ve stayed focused on driving new releases and updates for the products that our customers rely on. Take a closer look at what we’ve been up to:

As these products move forward we will continue to invest in and enable the best technologies and solutions across the portfolio.  No longer will customers have to choose between different products for the solutions they need.

When you look at your mainframe and host systems, ask yourself – are you getting the most possible out of these investments? If it were simpler and less risky to your business to re-think how you are using these systems, what would you do differently?

Just like with anything – change is hard and can be daunting but now you have a combined company in your corner to help you re-think your business and make each step of the way a safe one.

Keep your eyes on Micro Focus over the coming months as we continue to drive innovation that solves modern customer challenges with host systems. Take a look at how customers that have taken steps to change their systems and processes have benefited. This could be your business.

Health Plan of San Mateo

Bauverein der Elbgemeinden (BVE)

Renew Insurance