The 5 Longest Lead Times in Software Delivery

The Pressure to Go Fast

Rapid business change, fueled by software innovation, is transforming how software delivery organizations define, develop, test, and release business applications. To keep their competitive advantage in today’s complex and volatile digital marketplace, these organizations must become more agile, adaptive, and integrated with the business, and embrace digital transformation practices. Unfortunately, most current software delivery practices can’t keep pace with the demands of the business.

Long software delivery cycles are a significant impediment to business technology innovation. Agile development teams have shortened development cycles, but Agile by itself is insufficient: it does not remove the cultural and technical barriers between development and operations. DevOps principles and practices, developed in response to this problem, facilitate cooperation and coordination among teams to deliver software faster and with better quality.

The goal of scaling DevOps for the enterprise is to prioritize and optimize deployment pipelines, reducing lead times to deliver better business outcomes. Creating new deployment pipelines and optimizing existing ones is key to improving the efficiency and effectiveness of large IT organizations in delivering software at the speed the business requires.

Long Lead Times

Every enterprise IT organization is unique in that it will have different bottlenecks and constraints in its deployment pipelines. I recommend conducting a value stream mapping exercise to identify specific problem areas. “Starting and Scaling DevOps in the Enterprise”, by Gary Gruver, is a great book that provides a good framework for getting started. The following are some of the most common areas that generate the longest lead times:


Handoffs

DevOps culture strives to break down organizational silos and transition to product teams. The current siloed organizational structure creates headwinds against the objective of short lead times and continuous flow. Organizational silos are artifacts of the industrial era, designed specifically for “batch and queue” processing, which drives up lead times through handoffs from one team or organization to another. Each handoff is potentially a queue in itself. Resolving ambiguities requires additional communication between teams and can result in significant delays, high costs, and failed releases.

Strive to reduce the number of handoffs by automating a significant portion of the work and enabling teams to work continuously on creating customer value. The faster the flow, the better the quality, resulting in lower lead times.

Approval Processes

Approval processes were originally developed to mitigate risk and provide oversight, ensuring adherence to auditable standards for moving changes into production. However, the approval process within most large enterprises is slow and complex, often comprising a set of manual stovepipe processes that use email and Microsoft Office tools to track, manage, and, more often than not, wait on people for approval of a software change. Insufficient or improper data leads to hasty or faulty approvals, or to bounce-backs that further frustrate software delivery teams, reduce quality, and impede deployments.

Continuous delivery practices and deployment pipeline automation enable a more rigorous approval process and a dramatic improvement in speed. Releasing into production might need approval from the business, but everything up to that point can be automated, dramatically reducing lead times.
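As a rough sketch of the idea, the hypothetical gate logic below (stage names and structure are illustrative, not any real tool’s API) auto-approves every stage whose automated checks pass and reserves manual sign-off for the production release alone:

```java
import java.util.List;

public class PipelineGate {
    // A stage is promoted when all of its automated checks pass; only the
    // production release stage additionally needs a human approval.
    static boolean approved(String stage, List<Boolean> checks, boolean humanSignOff) {
        boolean checksPass = checks.stream().allMatch(c -> c);
        if (!checksPass) {
            return false; // failed checks always block promotion
        }
        return !stage.equals("production") || humanSignOff;
    }

    public static void main(String[] args) {
        System.out.println(approved("qa", List.of(true, true), false));          // auto-approved
        System.out.println(approved("production", List.of(true, true), false));  // waits on sign-off
        System.out.println(approved("production", List.of(true, true), true));   // released
    }
}
```

The point is not the code itself but where the human sits: everything before the final gate runs without waiting on a person.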

Environment Management and Provisioning

There is nothing more demoralizing to a dev team than having to wait to get an environment to test a new feature. Lack of environment availability and/or environment contention due to manual processes and poor scheduling can create extremely long lead times, delay releases, and increase the cost of release deployments.

Creating environments is a highly repetitive task that should be documented, automated, and put under version control. An automated, self-service process to schedule, manage, track, and provision all the environments in the deployment pipeline will greatly reduce lead times and drive down costs, while increasing the productivity of your Dev and QA teams.
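To illustrate the self-service idea, here is a minimal in-memory sketch (the class and environment names are hypothetical) of an environment pool that teams can draw from and return to without waiting on a manual scheduling process:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Optional;

public class EnvironmentPool {
    private final Deque<String> available = new ArrayDeque<>();

    public EnvironmentPool(String... envs) {
        for (String e : envs) available.add(e);
    }

    // Self-service: a team takes the next free environment, or learns
    // immediately that none are free, with no ticket queue in between.
    public Optional<String> acquire() {
        return Optional.ofNullable(available.poll());
    }

    // Returning an environment makes it instantly available to the next team.
    public void release(String env) {
        available.add(env);
    }

    public static void main(String[] args) {
        EnvironmentPool pool = new EnvironmentPool("qa-1", "qa-2");
        System.out.println(pool.acquire().orElse("none")); // qa-1
        System.out.println(pool.acquire().orElse("none")); // qa-2
        System.out.println(pool.acquire().orElse("none")); // none
        pool.release("qa-1");
        System.out.println(pool.acquire().orElse("none")); // qa-1
    }
}
```

In a real pipeline the acquire step would trigger automated provisioning from a version-controlled environment definition rather than hand out names from a list.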

Manual Software Deployments

Machines are far better and much more consistent at deploying applications than humans, yet a significant number of organizations still deploy their code manually. Automating manual deployments can be a quick win for these organizations, and one that can be delivered rapidly without major organizational changes. It is not uncommon for organizations to see deployment lead times reduced by over 90%.

The more automated this process is, the more repeatable and reliable it will be. When it’s time to deploy to production, it will be a non-event. This translates into dramatically lower lead times and less downtime, and keeps the business open so that it can make more money.

Manual Software Testing

Once the environment is ready and the code is deployed, it’s time to test to ensure the code is working as expected and that it does not break anything else. The problem is that most organizations today test their code base manually. Manual software testing drives lead times up because the process is slow, error prone, and expensive to scale across large organizations.

Automated testing is a prime area of focus for reducing lead times. It is less expensive, more reliable and repeatable, can provide broader coverage, and is much faster. There is an initial cost to developing the automated test scripts, but much of that can be absorbed by shifting manual testers into “Test Development Engineer” roles focused on automated API-based testing. Over time, manual testing costs and lead times will go down as quality goes up.
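As a minimal sketch of what such an API-based check might look like (the endpoint and response shape here are invented for illustration; a real test would issue an HTTP request to the service under test):

```java
public class ApiSmokeTest {
    // Stand-in for a call to the service under test; in a real suite this
    // would be an HTTP GET against the API endpoint.
    static String getOrderStatus(String orderId) {
        return "{\"orderId\":\"" + orderId + "\",\"status\":\"SHIPPED\"}";
    }

    // The automated check codifies what a manual tester would verify by hand:
    // the response carries the fields the business flow depends on.
    static boolean orderStatusLooksValid(String json) {
        return json.contains("\"orderId\"") && json.contains("\"status\"");
    }

    public static void main(String[] args) {
        String response = getOrderStatus("42");
        System.out.println(orderStatusLooksValid(response)); // true
    }
}
```

Checks like this run in seconds on every build, which is what lets the lead-time savings compound.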

The velocity and complexity of software delivery continue to increase as businesses adapt to new economic conditions. Optimizing and automating deployment pipelines using DevOps practices will dramatically reduce lead times and enable the delivery of software faster and with better quality.

To learn more about how to optimize your deployment pipelines, listen to our popular on-demand webcast with Gary Gruver, where he talks about how to start your DevOps journey and how to scale it in large enterprises where change is usually difficult. He shares his recommendations from his new book on scaling DevOps and answers audience questions on how to adopt those best practices in their organizations.

Fill in the form to listen to the recording and get your free copy of Gary’s new book, Starting and Scaling DevOps in the Enterprise.

Cyber Monday

Big retailers have been planning for up to 18 months for their share of approximately 2.6 billion dollars of revenue. Cyber Monday started in 2005 but has become one of the biggest online traffic days of the year. Simon Puleo takes a look at how some of our biggest customers have prepared.

‘Twas the night before Cyber Monday and all through the house

everyone was using touchscreens gone was the mouse.

While consumers checked their wish lists with care

in hopes that great savings soon would be there.

The children were watching screens in their beds

while visions of Pikachu danced in their heads.

And Mamma in her robe and I in my Cub’s hat

reviewed our bank accounts and decided that was that!’

Cyber Monday started in 2005 but has become one of the biggest online traffic days of the year. Black Friday may have started as early as 1951, and between them the two shopping holidays generate over $70 BN! Let’s take a look at how some of our biggest customers have prepared:

1.)    Performance testing.  Did you know that our customers typically start performance testing for Cyber Monday in February? Why would they start so early? Customers are testing more than just peak load; they are testing that sites will render correctly across multiple configurations, bandwidths, devices, and sometimes multiple regions of the world. The goal of ecommerce is to enable as many shoppers as possible, and that includes my Dad on his iPad 2 on a rural carrier and my daughter on her Chromebook in an urban area. Multiply that by thousands of users and you can see that, unfortunately, retailers can’t hire enough of my relatives to help them out. What they do is use a combination of synthetic monitors and virtual users to simulate and assess how a website will perform when 10,000 users are shopping at the same time.
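A toy sketch of the virtual-user idea, with the real HTTP request replaced by a stub so the mechanics are visible (the thread counts and names are illustrative, not a real load tool’s configuration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualUserSketch {
    static final AtomicInteger completed = new AtomicInteger();

    // Stand-in for one simulated shopper's session; a real load test would
    // issue HTTP requests against the site under test here.
    static void virtualUserSession() {
        completed.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        completed.set(0);
        int users = 10_000;
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 concurrent workers
        for (int i = 0; i < users; i++) {
            pool.submit(VirtualUserSketch::virtualUserSession);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("Completed sessions: " + completed.get());
    }
}
```

Commercial tools layer ramp-up schedules, think times, and geographic distribution on top of this basic pattern.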


2.)    New Feature Testing.  Whether you consciously think about it or not, you expect and gravitate towards websites that have the latest feature set and best user experience. What does that mean? Listing a photo and description is bare bones; the best commerce websites not only have reviews, videos, links to social media, and wish lists, they may actually be responsive to your shopping habits, regional weather, and personal interests. They use big data sets to predict what you are browsing for and offer you targeted deals too good to pass up! While that is exciting, it also means that the complexity of the code, both rendering in the browser and behind the scenes, has grown exponentially over the years. Ensuring that new features perform, that old code works with legacy systems, and that everything renders correctly across multiple devices is what functional and regression testing is all about. While a team of testers may track multiple code changes, they lean towards automation to ensure that code works on target configurations.

3.)    Offering Federated Access Management.  What, you’re thinking, user login was solved ages ago? For sophisticated online retailers, using Facebook, Google, Yahoo!, Twitter, LinkedIn, or other credentials to gain access is first a method to gain trust, second opens up the potential for more customers, and finally a road to valuable personal data. Regardless of which advantage a retailer may prioritize, enabling millions of Facebook users to easily log in and check out with a credit card equates to new customers and a leg up over legacy competitors. And, for an added measure of trust and security, retailers can couple multi-factor authentication to key points of the conversion process. A simple user login and password for each shopping site is quickly becoming a relic of the past as users opt for convenience over managing many user names and passwords.


These are some of the top methods and solutions that big retailers have implemented for 2016. The best online commerce professionals know what they are up against and what is at stake. For example:

  • In 2014 there were over 18,000 different Android devices on the market, according to OpenSignal; that is an overwhelming number of devices to support.
  • At a minimum, retailers lose $5,600 per minute their websites are down.
  • The market is huge: a recent estimate put the global number of digital buyers at 1.6 billion, nearly 1/5 of the world’s population. Converting even 0.01% of that number is 160,000 users!
  • Users are fickle and will leave a website if delayed just a few seconds.
  • Last year Cyber Monday accounted for $3 billion in revenue; this year we expect even more!

Retailers like Mueller in Germany realize that zero downtime is critical to keeping both the physical and virtual shelves stocked. Their holistic approach to managing software testing and performance helps them implement new features while keeping existing systems up and running. It is never too late to get started for this year, or to prepare for next; consider how Micro Focus has helped major US and European online retailers with performance testing, automated functional and regression testing, access management, and advanced authentication.

The rise of Dynamic Mobile Ecosystems

When you think of Mobile Applications from a testing perspective, one of the first big headaches that comes to mind is just how dynamic Mobile ecosystems are. Owners of iOS devices are well accustomed to being prompted by frequent requests from Apple to upgrade the iOS Operating System throughout their ownership of an Apple device.


The story for the Android ecosystem is even more complex. The market has a multitude of big technology players, such as Samsung, HTC, LG, and Sony, each providing their own customized OEM version of the Android Operating System, and most also running a different version of the Android base operating system at any given time.

To put this into perspective, the graph below (taken from Wikipedia) highlights both the pace of Android Operating System releases and how this correlates with the percentage of Android versions accessing Google Play within a given timeframe. For example, as of February 2016, Android 4.4 “KitKat” was the single most widely used Android version, running on 35.5% of all Android devices accessing Google Play.


What are the challenges for application vendors?

From a high-level perspective, the major challenge for application vendors is the need to ensure that their applications function correctly within an evolving and fragmented marketplace. Application vendors now have an immediate need to ensure that their deployed applications are not only compatible with specific hardware devices but also function correctly on the most commonly used Operating System versions for each device. Some application vendors focus mainly on ensuring that their application is compatible with the latest Operating System running on the latest shiny new device; however, as the graph above highlights, the majority of Google Play customers are not running the latest Android versions at any given time.
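One way to reason about this is to pick target OS versions by usage share rather than recency. The sketch below (the shares are illustrative, loosely modeled on the February 2016 snapshot above) greedily selects the smallest set of versions covering a target share of users:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class VersionCoverage {
    // Given usage share per OS version, pick versions in descending order of
    // share until the chosen set covers at least the target percentage.
    static List<String> versionsToCover(Map<String, Double> share, double target) {
        List<Map.Entry<String, Double>> sorted = new ArrayList<>(share.entrySet());
        sorted.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
        List<String> chosen = new ArrayList<>();
        double covered = 0;
        for (Map.Entry<String, Double> e : sorted) {
            if (covered >= target) break;
            chosen.add(e.getKey());
            covered += e.getValue();
        }
        return chosen;
    }

    public static void main(String[] args) {
        // Hypothetical shares (percent of devices accessing the store).
        Map<String, Double> share = Map.of(
            "KitKat 4.4", 35.5, "Lollipop 5.x", 34.0,
            "Jelly Bean 4.1-4.3", 22.0, "Marshmallow 6.0", 2.3);
        // Covering 65% of users needs only the top two versions here.
        System.out.println(versionsToCover(share, 65.0));
    }
}
```

The same greedy logic applies to choosing physical devices for a test lab: coverage per test-hour, not novelty, drives the list.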

Failing to ensure that your application is compatible and provides the same user experience across as wide a spectrum of devices and operating system versions as possible will not only hurt your business’s reputation but will affect the company bottom line. It does not make business sense to lock out, or deploy an incompatible app to, a significant proportion of your potential customers or market space. It is often reported that 96% of unhappy customers do not complain, while, even more telling, 91% of those customers will never come back.

Therefore, if we take a more granular look at the key challenges stakeholders within an organization face, we can see that while the main challenge of a fragmented marketplace remains, it becomes intertwined with additional challenges that are unique to each department within a vendor’s organization. We can categorize some of these challenges as follows:

QA Department:

  • More devices & more market demands typically means slower and more complicated testing cycles
  • Frequent changes and reduced project cycle times make it harder to test thoroughly
  • Device combinations and a changing environment make it difficult to integrate into a formal continuous delivery environment

Development Department:

  • More devices & more market demands typically means slower and more complicated testing cycles
  • Frequent changes and reduced project cycle times make it harder to test thoroughly
  • Device evolutions along with changing business needs make it difficult to ensure user experience

Business Analyst/Product Manager

  • Device priorities are constantly changing so decision making abilities are hindered
  • Lack of visibility across delivery and testing assets slows business agility
  • Limited capability for business-focused stakeholders to participate in quality activities

How using Silk Mobile can overcome these challenges

Silk Mobile is the new software bundle from Micro Focus, specifically tailored to address the key challenges faced by application vendors in today’s fast-paced Mobile environment. It does this by combining the sophisticated functional testing capabilities of Silk Test Mobile with the powerful performance testing capabilities of Silk Performer, all managed and maintained from the test management tool Silk Central.


This unique three-pronged approach to testing and test management helps application vendors deliver end-to-end quality Mobile Applications on time and on budget by reducing the risk of customers experiencing an unsatisfactory user experience. Silk Mobile achieves this goal by delivering return on investment in three key areas:

Speeding up your testing

  • Quickly build cross platform/device automation tests
  • Increase test coverage faster with reusable test building blocks
  • Enable developers to contribute to test automation through IDE integration

Safeguard that your apps work anywhere

  • Quickly build cross platform/device automation tests
  • Easily document manual/exploratory testing
  • Understand and document application issues

Confirm that your apps meet customers’ expectations

  • Leverage the cloud for coverage and accuracy
  • Collect and compare performance across the globe
  • Easily identify root cause of performance problems

Each component of the Silk Mobile bundle plays a unique part in helping deliver these benefits:

Silk Test Mobile provides:

  • The ability to build automated tests that can run on different browsers & different mobile applications across different operating systems, platforms and devices
  • The ability to increase test coverage faster with reusable test building blocks
  • IDE integration that enables developers to contribute to test automation

Silk Performer provides:

  • The ability to simulate users’ performance experience across multiple device/network bandwidth combinations
  • The ability to easily collect and compare transaction performance across different geographical locations
  • The ability to identify the root cause of application performance problems through powerful, end-to-end diagnostics capabilities
  • The ability to leverage the cloud to reduce the cost and increase the accuracy of your performance testing

Silk Central provides:

  • Support for the full test lifecycle, from requirements to test execution through to results and issue tracking
  • The capability for business-focused stakeholders to easily create and reuse automation assets via Keyword Driven Tests
  • The ability to quickly understand and document application issues across devices and platforms
  • The ability to easily document manual testing execution through screen shots, videos, and status reports on every step, on any device

Silk Mobile uses the technology of each software component in conjunction to offer a bundled testing solution that is greater than the sum of its parts. This unified testing approach for Mobile Applications will significantly improve time to market and ensure that your application can withstand the rigours of an increasingly fragmented and rapidly evolving marketplace.


The Force Awakens – the web performance testing farce continues

In a Galaxy far too close to home, Frank Borland has donned his Alec Guinness Jedi cape and fired up the Millennium Falcon of Best Web Performance Practice. People are just not learning their lessons and Frank ain’t happy.


I’ve got a bad feeling about this…

Now, it takes a lot to get Frank excited; maybe a new Creedence album, a BOGOF on red neckerchiefs at JC Penny – something pretty damn huge, anyway. But, any developer who has ever worn the uniform – insert ponytail/sandals/generic Sci-Fi movie T-shirt gag here – knows that a pretty darn special film is out this week. It’s been years in the making and has been promoted to a Galaxy far far away and back.

That’s right, folks. Because just as the Ridiculous 6 simultaneously hits the streets and the bargain bins, the season’s blockbuster, Alvin and the Chipmunks – the Road Chip gets ready to do its thing. Just kidding you – Frank knows full well that Star Wars: Episode VII – The Force Awakens is due, to, er, awaken. With force.

While the news has even reached Frank’s Testing Nerve Centre in my secret West Coast location, it still caught UK cinemas with their metaphorical pants down when they tried, and failed, to sell advance tickets. That’s right, guys and gals: the very organisations who planned to make oodles of boodle from these screenings didn’t properly prepare for them. It’s like General Motors being caught out by the invention of roads. Not too likely, huh?

Looking through my well-thumbed copy of Talking Star Wars for Dummies – no dev should be without one – Frank’s eye spots a good quote. “Every so often there’s a great disturbance in The Force as thousands of voices cry out”. This time round it’s legions of dudes – let’s be honest, most of them are dudes – yelling at their laptops as the website they were using to not buy tickets for the Force Awakens crashed like Biggs Darklighter’s X-Wing in the final Death Star trench run.

But these organisations weren’t tipped over the edge into oblivion by a sneaky Sith Lord’s TIE fighter attack. No sirree. These online commerce teams knew the onslaught was coming. And they did squat about it.

“Test. Or Test not. There is no try.”

The link between well-publicised events and website crashes is nothing new, right? Major eCommerce sites know you’re coming to visit – dammit, they spend thousands of bucks on ads that drive you to the site. But again and again and again inadequate performance testing casts them into a Sarlacc pit of doom.

Hit Google. Check out all the negative headlines. Read your Twitter feed and see all the bad vibes heading towards these companies. That reputation is going to take more rebuilding than the Death Star, and it’s just as difficult. Think of all the potential bounty that took off faster than Boba Fett did when his jetpack took a lucky shot from Han’s trusty blaster. And it ain’t coming back.

Apparently some folks still think it is better to risk losing millions than drop a few bucks on world class performance testing software and a QA regime. Their customers disagree.

What a wonderful smell you’ve discovered

The Battle of Hoth begins on Social Media when the ordure of corporate misjudgement hits the fan of public opinion. The brave Rebel Alliance forces manning the Twitter and Facebook outposts are left to fight an unnecessary battle against impossible odds. A brave few will win a skirmish or two in full knowledge that it’s all slipping out of their control. People – it doesn’t have to be this way.

Somebody has to save our skins

That’s where Silk WebMeter, Silk Performer and our world class performance testing products come in, folks. If you’re a developer, you won’t need me to remind you of Obi-Wan’s wise words: “in my experience there’s no such thing as luck”. And who am I to argue with the Big Guy? So prepare your site for the next epic struggle against the hordes (or ‘customers’, as I call ‘em). Hit our Trials page, fire up your trusty, non-clumsy or random performance testing weapon of choice, and May the Force Be With You!

Frank out.


Cyber Fun Days!

How can online retailers ensure virtual shopping carts will continue to be filled now that Black Friday has kicked off the seasonal shopping season? New writer Lenore Adam talks about ways to prevent website bottlenecks and guarantee a positive and consistent user experience.

As my colleague Derek Britton recently noted in his blog, Cyber Sunday is the latest extension of the traditional Thanksgiving retail feeding frenzy (pardon the pun, I struggle with any reminder of having eaten too much this past week…). U.S. retail giant Wal-Mart, along with several other major retailers, pulled their Cyber Monday promotions into Sunday in a bid to capture increased online demand.

For consumers, it is less of a trend and more of a way of life. Ubiquitous use of smart phones with fast internet has helped blur the lines between what were traditionally distinct retail and online shopping days. Economists estimate that digital shopping will rise by ‘11.7 percent this year, lifting the overall proportion of online sales to 14.7 percent of total retail activity, or $1 out of every $7’ that consumers spend this season. Despite these indicators, major retailers were caught unprepared for the volume of online shopping this year, promoting products that consumers were unable to order due to website overload.

Ensure a Positive User Experience

Even after the holiday rush, online retailers remain vulnerable to unpredictable demand. Will another polar vortex increase climate-driven commerce and send an unexpected wave of consumers to your site? Will those newly implemented e-commerce delivery options stress back-end systems and reduce peak performance? Are you ready for this season’s variety and volume of access devices, browsers, and geographically dispersed access points? Online retail success demands a positive user experience for a customer base accustomed to web page response times ticked off in milliseconds.

The mantra for brick and mortar retailers is often location, location, location. With online retailers it’s more like test, test, test. This is where Silk Performer and Cloudburst come in. Borland products help prevent our customers – who include some of the biggest names in online retailing – from becoming another online casualty. Archie Roboostoff, Director of Product Management, explains how Silk is used not only for website performance testing, but also for testing responsive web design. For example, use Silk to test…

‘…across different configurations of browsers to outline where things can be tuned…For example, Silk can determine that your application runs 15% slower on Firefox than Chrome on Android. Adjusting the stylesheets or javascript may be all that is required to performance tune your application. Testing for responsive web design is crucial to keeping user experience sentiments high…’

When ‘a 100 millisecond delay… equates to a 1% drop in revenue’, online performance clearly is business critical. With the competition just a click away, don’t lose customers due to poor site performance. Keep them on your site, happily filling up their shopping carts. Try Silk Performer here.


After the Goldrush

How can online retailers keep the tills ringing now that Thanksgiving is over? Chris Livesey talks about easy ways to prevent website wobbles.


As my colleague Derek Britton recently noted in his blog, Cyber Sunday is the latest extension of the traditional (at least in contemporary terms) Thanksgiving retail feeding frenzy. Wal-Mart has decided to test their website’s resilience to heavy digital footfall for a further 24 hours.

Similarly, the UK-based technology store Carphone Warehouse brought forward their Black Friday event by 24 hours and joined Amazon and Argos in offering deals that run from November 23 until December 2 inclusive.

Whether it is out of consideration for the consumer, or just another dead-eyed strategy to squeeze more pre-Christmas cash out of shoppers, the line between the end of one sales event and the commencement of another is increasingly blurred. And it is less of a trend and more of a way of life. UK shoppers spent more than £718.7m online every week throughout 2014, an 11.8% increase on the previous year.

The Reiss Effect

So what happens after the seasonal rush? Everything goes back to normal, right? Well, maybe not. Online retailers are still vulnerable to The Reiss Effect. This happens when a company isn’t prepared for, well, the unexpected and loses out as a result.

In this case, Kate Middleton being pictured wearing a Reiss dress had unforeseen – and unfortunate – consequences for the manufacturer. The website crashed. Reiss were unable to take advantage of their good fortune. This once-in-a-lifetime opportunity passed them by. Unable to process orders from new or established customers, they lost revenue and became a ‘thing’.

Websites are the virtual shopfronts for retailers and manufacturers and, just like shops, can quickly become overwhelmed if not battle-ready. Unexpected opportunities can quickly become unwanted headaches. The same Social Media platforms that plug your product can quickly damage your brand.

We are not bemused

Underestimating the potential popularity of your offering is just another form of unpreparedness, and can be just as damaging. The website for Dismaland, the pop-up art project set up by British graffiti artist Banksy, recently crashed, leaving thousands of would-be visitors unable to purchase tickets. But as the creative theme of this ‘bemusement park’ attraction was disappointment, this may well have been the intention.

So the key to online retail success for Black Friday, Cyber Sunday, ‘Gratuitous Spending Wednesday’, and beyond is to road-test your website for any eventuality. It’s easier than you think. As the CMO for Micro Focus Borland, I am proud that we help prevent our customers, who include some of the biggest names in online retailing, from becoming another ‘thing’.

It’s easy with Silk Performer and Cloudburst. This is stress-free stress testing for websites and applications. With it, users have Cloud-based scalability and access to as many virtual users as they like. Without it, they may not detect the errors that can turn go-live day into dead-in-the-water day. Try it here.

But even the best tool can’t prepare an organization for everything. Sorry, US Airlines, but if an opossum is going to chew through the power cable, you’re on your own.



How can I make my tests execute faster?

This is a common question that often arises from Silk Test users. In most instances we find that a number of efficiency gains can be obtained from changes to how the tests are scripted.

The Silk Test suite provides many ways of interacting with an application under test (AUT), some of which can drastically reduce the overall execution time of your tests. This blog provides information on how the findAll method can help reduce the overall execution time of tests with Silk4J. The procedure is similar for the other Silk Test clients.

The following code snippet was created by a real Silk4J user to determine the number of different controls within a Web Application. When executing tests in Chrome and Firefox, the user found that the execution time was significantly higher than in Internet Explorer. When tracing through the Silk4J framework of the user, we discovered that the vast majority of the additional execution time was being lost in a utility method called countObjects. The utility method was scripted as follows:

public void countObjects( ) {

    desktop.setOption("OPT_TRUELOG_MODE", 0);

    int iCount = 0;

    int i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//INPUT[" + (i + 1) + "]")) { i++; }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//SELECT[" + (i + 1) + "]")) { i++; }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//BUTTON[" + (i + 1) + "]")) { i++; }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//A[" + (i + 1) + "]")) { i++; }
    iCount += i;

    System.out.println("Counted " + iCount + " items.");
}


The initial analysis of the method indicated that the code was not efficient for the task at hand.

Why was the above code not efficient?

The following areas were identified as inefficient:

  1. The method counts each control individually; therefore, if there are 100 controls to be counted, the snippet above makes over 100 calls to the browser.
  2. Using while loops in this fashion leads to unnecessary calls to the browser, wasting execution time.

How could the code be made efficient?

Analysis of the countObjects method revealed that the same functionality could be achieved in four Silk4J statements. To make this possible the method was modified to use the Silk4J method findAll. This method returns a list of controls matching a specific locator string with a single call, therefore it has the following benefits over the original approach:

  1. Each control does not need to be found individually before incrementing the count.
  2. No unnecessary calls are made to the browser
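Conceptually, the gain can be illustrated with a plain-Java sketch that needs no Silk4J at all. FakeBrowser below is a hypothetical stand-in for the real browser connection, in which every query costs one round-trip:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the browser connection: every query is one round-trip.
class FakeBrowser {
    int roundTrips = 0;
    private final int controlCount;

    FakeBrowser(int controlCount) { this.controlCount = controlCount; }

    // Mimics desktop.exists("//...[" + (i+1) + "]"): one round-trip per probed index.
    boolean exists(int index) { roundTrips++; return index <= controlCount; }

    // Mimics desktop.findAll("//..."): one round-trip returns every match.
    List<Integer> findAll() {
        roundTrips++;
        List<Integer> matches = new ArrayList<>();
        for (int i = 1; i <= controlCount; i++) matches.add(i);
        return matches;
    }
}

public class CallCountDemo {
    // Count controls the way the original utility method did: probe one index at a time.
    static int countWithLoop(FakeBrowser browser) {
        int i = 0;
        while (browser.exists(i + 1)) { i++; }
        return i;
    }

    public static void main(String[] args) {
        FakeBrowser looped = new FakeBrowser(100);
        // The loop needs one probe per control plus a final failing probe: 101 round-trips.
        System.out.println(countWithLoop(looped) + " controls, " + looped.roundTrips + " round-trips");

        FakeBrowser batched = new FakeBrowser(100);
        // The batched query finds all 100 controls in a single round-trip.
        System.out.println(batched.findAll().size() + " controls, " + batched.roundTrips + " round-trip(s)");
    }
}
```

The sketch models only the call-count behavior, but it shows why the gap grows linearly with the number of controls on the page.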

The modifications resulted in the following method:

public void countObjects() {
    int iCount = 0;

    iCount += desktop.findAll("//INPUT").size();
    iCount += desktop.findAll("//SELECT").size();
    iCount += desktop.findAll("//BUTTON").size();
    iCount += desktop.findAll("//A").size();

    System.out.println("Counted " + iCount + " items.");
}


Visually the modifications have already reduced the number of lines of code required to perform the same functionality as the original utility method. There is now also no requirement for loops of any sort and from an execution standpoint, we were able to demonstrate the following performance gains:


The above chart and table demonstrate how much time a user can gain by simplifying their code through the use of other methods within the Silk4J API that are more suitable for a particular task. The performance gains in Google Chrome and Mozilla Firefox are substantial, while the execution time in Internet Explorer is now less than a second. Overall this process has resulted in better code, better efficiency, and ultimately time saved.

Can I apply this to other parts of an automation framework?

In the following example the user needed to verify that the labels and values within a number of tables have the correct text values. Again, the execution time in both Mozilla Firefox and Google Chrome was considerably higher than the execution time in Internet Explorer. For example, the user experienced a great difference in execution times when executing the following method:

public void verifyLabels(String[][] expected) {
    for (int i = 1; i <= 17; i++) {
        DomElement lbl = desktop.find("//TH[" + i + "]");
        DomElement value = desktop.find("//TD[" + i + "]");
        Assert.assertEquals(expected[i - 1][0], lbl.getText());
        Assert.assertEquals(expected[i - 1][1], value.getText());
    }
}

Why was the above code not efficient?

In the above method two find operations are being executed inside the for-loop. Each iteration of the loop increases the index of the controls to be found. To find all the elements of interest and to retrieve the text of those elements, 68 calls to the browser are required.

Where are the efficiency gains?

Because the loop simply increments the locator index, a sign that Silk4J is looking for many similar controls, the find operations inside the for-loop can be replaced with two findAll operations outside the loop. This modification immediately reduces the number of calls that must be made to the browser through Silk4J to 36, and the method now reads as follows:

public void verifyLabelsEff(String[][] expected) {
    List<DomElement> labels = desktop.<DomElement>findAll("//TH");
    List<DomElement> values = desktop.<DomElement>findAll("//TD");

    for (int i = 0; i < 17; i++) {
        Assert.assertEquals(expected[i][0], labels.get(i).getText());
        Assert.assertEquals(expected[i][1], values.get(i).getText());
    }
}
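The 68 and 36 browser-call figures quoted above can be checked with quick arithmetic:

```java
// Arithmetic behind the browser-call counts for the two verifyLabels variants.
public class BrowserCallArithmetic {
    // Original loop: per row, two find() calls plus two getText() calls.
    static int loopCalls(int rows)    { return rows * (2 + 2); }

    // findAll version: two findAll() calls up front, then two getText() calls per row.
    static int findAllCalls(int rows) { return 2 + rows * 2; }

    public static void main(String[] args) {
        System.out.println("verifyLabels:    " + loopCalls(17));    // 68
        System.out.println("verifyLabelsEff: " + findAllCalls(17)); // 36
    }
}
```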


Performance Impact

The chart below highlights how small changes in your Silk4J framework can have a large impact on the replay performance of tests.



DevOps & Quality Automation

DevOps means different things across the IT & development landscape. When it comes to quality, it’s about constantly monitoring your application across all end user endpoints, for both functional and performance needs. The goal is to maintain a positive end user experience while continuing to make the entire process more efficient from both speed and cost perspectives.

So, how does DevOps work in the testing arena? Extremely well for Borland Silk customers, as Archie Roboostoff explains in this blog.


The old-school method required a combination of record/replay or manual tests being run across large virtual instances of platform and browser combinations. This was great for eliminating functional and performance defects, but as more browsers and devices came to market, testing across these variants became more demanding. This led to organizations trading off delivery speed against adequate testing. As many of their customers will testify, it’s one compromise too many – and one that is no longer required.

For example?

One of our large financial customers was spending more time building infrastructures than testing their applications. They had almost doubled the size of their quality teams while reducing their overall test effectiveness – as the missed deadlines and budget over-runs confirmed. Their products were falling behind competitors, the end-user experience was poor and their ability to deliver new products was very unpredictable.

It doesn’t have to be this way. This blog post is a walk-through of using Silk to achieve an automated quality infrastructure for DevOps in just a few short steps. It is quick and simple, and the process works for any operating environment, testing desktop and web applications across any combination of platform and device.

Step 1 – Create the tests

The continuous monitoring of quality in a DevOps context is about taking any development changes such as nightly builds, hotfixes, updates and full releases, and rapidly scaling out the testing environment to improve the end-user experience. This swiftly identifies any device- or browser-specific issues caused by the changes, and exactly where each issue occurred. Many organizations stick with manual application testing because they feel automation is too difficult or requires a different skillset. With Silk, less technically-adept users can create scripts visually, while coders work from within the IDE.

Image 1 – Visual test creation with Silk – no technical skill needed.
Image 2 – IDE test creation – either in Eclipse or Visual Studio


Step 2 – Verify and Catalog the newly created tests

First we confirm that our test works; then we set a context that makes collaboration easier and test reuse simpler.

Verifying the test runs is straightforward – create your test and select one or more target browsers or devices to run the test on. It doesn’t matter where or how the test was created; if the test was recorded using Internet Explorer, it can be verified to work across any browser or device. To select the test target just point and click.

Running a test across any browser or device.

Making it work for DevOps

Our test works, so let’s add some context to the test for better collaboration and reuse in the larger DevOps infrastructure. A unique Silk feature, ‘Keyword Driven Tests’, enables cataloging and providing business/end-user context to a large number of tests. In this case, we will provide a keyword for our test. Keywords can be assembled to reflect a series of transactions or use cases. For example, adding keywords like ‘verifyLogin’ and ‘checkoutShoppingCart’ will create tests to check the robustness of the login and shopping cart checkout.

We are now able to better collaborate with business stakeholders and in the DevOps context, enabling the creation of a variety of use cases that get to the root of any post-deployment issues. Keywords are also able to take parameters and pass them to the tests in question making automation even easier.

Image 4 – Keywords being added to tests.

Step 3 – Create an execution environment

How do we get to a point where continuous deployment and integration is overcoming the challenges of today’s software market? The key is to set up an environment that can replicate these real world conditions. As we have said, previously this would require a number of development/test boxes or virtual machines. These were expensive to maintain and difficult to set up. For DevOps, Silk can manage and deploy an ‘on demand’ test environment either in a public cloud, private cloud, or on-premise within the confines of the enterprise data center.

With Silk, setting this up is easy and straightforward. So let’s do it. In this example, we will setup an execution environment with the Amazon AWS infrastructure. Even though this is in a public environment, all tests and information access are secure. This environment can be set up in a more private cloud setting, on premise within the firewall –even on the tester’s individual machine. Whatever the environment parameters, Silk has you covered.

Image 5 – Connecting Silk to an execution environment.


Step 4 – The Amazon test case

We are about to connect Silk to an Amazon instance using AWS credentials given to us by Amazon. Silk will set up the browser and device combinations so that we can rapidly deploy our applications for testing. Rather than manually setting up a range of VMs to test different versions of Chrome, Firefox, IE, Edge, etc, Silk will spin up the instances, configure them, deploy the application, run the tests, gather the results, and then close them out. It is in this context that we start to really take advantage of some key DevOps practices and principles.

Silk will take the tests we have created and pass them to the environment you have set up. Do you have applications inside the firewall that need to be tested from an external environment? No problem. Silk’s secure technology can tunnel through.

Image 6 – Tunneling through the firewall with Silk to test private internal applications from public execution environments.

Step 5 – Run the tests

Now that we’ve created the tests, created the context and set up the execution environment, we must determine when – and how – we want these tests to run. In most DevOps environments, tests are triggered to run from a continuous integration environment managed with Jenkins, Hudson, Cloudbees, etc. Whatever the preferred solution, Silk will execute tests from any of these providers.

When the tests are executed, depending on the selected configurations, Silk runs through the tests and provides a detailed analysis across all browsers, devices, keywords, and tests. Remember – the more the better, as you want to cover what your end users will be using. Better safe than sorry: along with the analysis, screenshots of each test display the evolving trends for that application. This provides visual confirmation that the test was either successful or highlighted an issue – especially important in responsive and dynamic web designs.

Image 7 – Results analysis from Silk running a series of tests and keywords through the execution environment.

Each test is outlined across the platform/browser/device combination and the end user benefits from a visual representation along with detailed results analysis. Management appreciate the high level dashboard summary showing trends across targets.

Image 8 – Dashboard of runs over time.


Step 6 – Test Early, Test Often

Now that the test environment has been established and connected to the build environment, ongoing testing across any number of environments is completely automated. Instead of setting up testing environments, quality teams can focus on building better tests and improving collaboration with business stakeholders. This is where Silk delivers true DevOps value to an organization. New browsers, such as Microsoft Edge, can easily be added to the configuration environment. There’s no need to recreate tests; just point the existing tests at the new environment.

Image 9 – Adding new browsers to the execution environment.

Step 7 – Performance Tune

Along with each functional automation piece, Silk can test from both a ‘load’ and a ‘functional’ perspective. When testing applications under load, Silk determines average response times, performance bottlenecks, mobile latency, and anything related to a massive amount of load generated on a system. From a functional perspective, Silk runs a smaller number of virtual users across different browser configurations to outline where things can be tuned. And this is key information. For example, Silk can determine that your application runs 15% slower on Firefox than on Chrome on Android. Adjusting the stylesheets or JavaScript may be all that is required to performance-tune your application. Testing for responsive web design is crucial to keeping user experience sentiments high in a DevOps context.

Image 10 – Performance tuning across different devices and platforms for optimization in responsive web design.

Using Silk’s technology and running these tests over time will track the trends. This, along with detailed analytics from data within Silk and sources like Google PageSpeed, will illustrate where your applications will benefit from being fine-tuned across browsers and devices.

Image 11 – Outline where applications can be adjusted for better end user performance and optimization.

In conclusion

DevOps is a slightly nebulous phrase that means different things to different people. But when it comes to testing, the value is pretty clear. Aligned with the right software, it will ensure your applications perform as expected across any device/browser/platform. In addition, using Silk will ensure that your apps are:

  • delivered on time and within budget
  • constantly improving
  • responsive
  • built and tested collaboratively
  • feeding trend data on responsiveness and quality to a central location
  • successful with end users/consumers.

So if this sounds like something you can use, then we should talk about Silk. It’s the only tool that can reach your quality goals today and will continue to innovate to help you eliminate complexity and continuously improve the overall development and operations process. Now that’s DevOps.












Building a Robust Test Automation Framework – Best Practice

According to Trigent Software’s own research, a robust test automation framework ranks highly on their list of Software Testing ‘must-haves’. When executed in a structured manner, it helps improve the overall quality and reliability of software. Read more from Test Architect Raghukiran in this fantastic guest blog…

Through our experience and research, a robust test automation framework ranks high on our list. When executed in a structured manner, it helps improve the overall quality and reliability of software.

The software development industry always faces a time crunch when it comes to the last mile: testing. Ask any software developer and they will tell you that development teams in any corner of the world want testing activities to be faster than humanly possible, results that are laser accurate, and all of this without compromising quality. Manual testing fails to live up to these expectations and is therefore the least preferred option. Test automation is the best choice, as it helps accelerate testing and delivers fast results. For test automation to work well, it needs a robust test framework acting as the core foundation of the automation life cycle. If we don’t build the right framework, the results could be:

  • Non-modularized tests
  • Maintenance difficulties
  • Inconsistent test results

All of which escalates costs and brings down the ROI considerably.

Best Practices

Framework Organization
The automation framework needs to be well organized to make it easier to understand and work with. An organized framework is easier to expand and maintain. Items to be considered are:

  • An easy way to manage resources, configurations, input test data, test cases and utility functions
  • Support for adding new features
  • Easy integration with the automation tool, third-party tools, databases, etc.
  • Standard scripting guidelines to be followed across the framework

Good Design

Automation tests are used for long-term regression runs to reduce the testing turnaround time, so the design should ensure that tests can be maintained easily and yield reliable test results. The following are some good design steps:

  • Separate application locators from the test code so that locators can be updated in the locator file independently when they change. Example: use locators from an object map or an external Excel or XML file
  • Separate test data from the code and pull data from external sources such as Excel, text, CSV or XML files. Whenever required, we can simply update the data in the file
  • Organize tests as modules/functions so that they are reusable and easy to manage. Keep application/business logic in a separate class and call it from the test class
  • Tests should start from a known base state and should recover and continue when there are intermittent test failures
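As a minimal sketch of the first bullet, locators (and likewise test data) can live in an external properties-style file so that locator changes never touch the test code. The keys and locator values below are hypothetical examples, not from a real project:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Minimal sketch of keeping locators out of test code, assuming a
// properties-style locator file. Keys and locator values are hypothetical.
public class LocatorMap {
    private final Properties locators = new Properties();

    LocatorMap(String propertiesText) {
        try {
            locators.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Look up a locator by logical name; fail loudly if the name is unknown.
    String get(String key) {
        String locator = locators.getProperty(key);
        if (locator == null) throw new IllegalArgumentException("Unknown locator: " + key);
        return locator;
    }

    public static void main(String[] args) {
        // In a real framework this text would be loaded from an external file.
        String file = "login.username=//INPUT[@id='user']\n"
                    + "login.submit=//BUTTON[@id='signin']\n";
        LocatorMap map = new LocatorMap(file);
        System.out.println(map.get("login.submit"));
    }
}
```

When the application changes, only the locator file is edited; every test referencing `login.submit` picks up the new locator automatically.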

Configuration options

The framework should provide options to choose configurations at run time so that it can be adapted to the test execution requirements. Some of these configurations include:

  • Ability to choose test execution environment such as QA, Staging or Production
  • Ability to choose the browser
  • Ability to choose the operating system, platform
  • Ability to mark priority, dependency and groups for the tests

Re-usable libraries

Libraries help group the application utilities and hide complex implementation logic from the outside world. They promote code reusability and make the code easier to maintain.

  • Build the library of utilities, business logic, external connections
  • Build the library of generic functions of the framework

Reports and logs

To evaluate the effectiveness of automation we need the right set of results, so the automation framework should provide all the details required about test execution.

  • Provide logs with the necessary details of a problem, with custom messages
  • Provide reports giving detailed execution status in Pass/Fail/Skipped categories, along with screenshots

Version Control and Continuous Integration

To effectively control the automation framework we need to keep track of it, so a version control system is required. Keep the framework integrated with version control.

We also need to run the regression suite continuously to ensure that the tests keep passing and the application functionality is as expected, so a continuous integration system is required to support test execution and results monitoring.

If we build a robust automation framework with the above capabilities, we gain the following benefits:

  • Increase product reliability – Accurate, efficient, automated regression tests – reduce risks
  • Reduce the product release cycle time – Improve the time to market, Reduce QA cycle time
  • Improve efficiency and effectiveness of QA – free QA team to focus manual efforts where needed






Raghukiran, Test Architect with Trigent Software, has over a decade’s experience building scalable and efficient test frameworks for automation tools. He is keenly interested in exploring new automation tools, continuous integration setups for nightly execution, and automation coverage. In the blog ‘Best Practices for building Robust Test Automation Framework’ he discusses the best practices to follow when building a robust test automation framework.

Software Testing – Automation or manual testing, that is the question.

Automation isn’t an automatic choice – Renato Quedas wonders why…

In this recent posting, the question of when to automate and when to stick to manual got another airing. It prompted the usual flurry of comments and it’s great to see the passion out there. So here’s my view. Feel free to throw rocks at it – but I’d prefer it if you just use the comments box…!

In my view, test automation should be non-disruptive, and it works best when it supplements and extends manual testing to eliminate the mundane, repetitive parts of the manual test process. But it’s always important to keep in mind that software testing is about ensuring functionality in the way that human beings use it. What that means is that until automation can anticipate every aspect of human behavior, the initial test implementation will still be, to some extent, manual.

Capture and … automate

Once the initial test procedures are captured, though, automation can eliminate the redundant test tasks that don’t change. That’s why Borland introduced keyword-driven testing (KDT). This enables test procedures to be implemented once and then assigned to a keyword, or for a keyword to be defined and the test procedures scripted for that keyword, so that it can be reused to automate repetitive features.

Test implementation can be a combination of captured keystrokes, mouse clicks, gestures etc. that are converted into script along with some manual scripting when required to complete the test procedures.  Once implemented as keywords, the test procedures can then be connected together to create complicated, multi-faceted test scripts much more easily than writing those test scripts from scratch.


Automate to collaborate

Keyword-driven testing can also facilitate role-based testing and greater test collaboration by enabling non-technical business stakeholders to participate in software testing without having to understand the test script details. Business users can interact with keywords such as “Click Select Button” or “Select Shopping Cart” without having to understand the underlying test script that implements those operations.

Just as object-oriented programming did for software development, keyword-driven testing enables reusable test procedures to be captured and implemented in a way that boosts test automation considerably.

It enables manual test implementation to be reduced as much as possible through automation, while still recognizing the manual variations needed for realistic software tests. It also enables greater participation in software testing by all key stakeholders, including non- or less-technical operations and business personnel.

So in summary, I’m joining the narrative – automate when you can but don’t treat it as a silver bullet. But that’s my view. What’s yours…?


Test Automation: Becoming Inclusive

Test automation practices frequently appear to reside in the hands of a small, specialized group of individuals within most testing organizations. Although automation tooling (e.g. macro engines, native code scripting, automation products) has been around for decades, many quality assurance teams continue to rely (sometimes exclusively) on manual efforts. While some organizations express that they wouldn’t be able to function without automation, the vast majority still appear to be bound by the efforts available through manual testers.


In the past, I would have understood such constraints. Some automation options in the market are aimed at specialized individuals who need testing solutions and have the skills of a developer. Newer tools (or newer releases of older products) have evolved to support automation coding in more visual and user-friendly IDEs to combat this problem. Regardless of the tooling, there will of course be some individuals that are more suited to the demands of logic coding than others.

The challenge I pose to the test automation community is expressed simply: how do we adapt automation practices so that more people per organization may contribute to exercising automated tests? As an inverse of this question, we should also ask how the efforts of a single automation engineer can be better leveraged to serve a larger number of testing needs.

The answer resides in our implementation of test automation. Implementation of automation is expressed by how an automation framework is developed and used in an organization. No matter the type of test automation tool or scripting employed, a test automation framework defines how that technology is put to use in the organization.

Traditional approaches (or initial approaches) to test automation started as linear coding whereby all of the actions for a test case or transaction were captured in a single script. Over time, this adapted to practices that were modular in design so that a driver (or parent) script called a subordinate (or child) script. Libraries could be formed from these modules of automation such that parent scripts could call child scripts that called grandchild scripts, and so on. These approaches grew to encompass logic that responded to input data (or data from applications) to determine which subordinate functions to exercise (e.g. if password-length=0, test for an error message to be displayed, else log into an application). These approaches are currently employed across many organizations, and they have served the software testing industry well over decades. These practices have also limited organizations, however, such that often only key individuals with a development skill set can comprehend the automation that has been built. This is an exclusive approach to automation that fosters specialists, rather than encouraging inclusion of automation in the service of all testers. It is now time for a new paradigm.
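The password-length example above can be sketched in a few lines; the method name and returned branch labels are illustrative, not a real API:

```java
// Sketch of the data-driven branching described above: the input data decides
// which subordinate check a driver script would exercise next.
public class DataDrivenLogin {
    // Returns the label of the subordinate function the framework would branch to.
    static String checkLogin(String password) {
        if (password.length() == 0) {
            return "expect error message";  // branch to the error-message check
        }
        return "attempt login";             // branch to the normal login flow
    }

    public static void main(String[] args) {
        System.out.println(checkLogin(""));        // expect error message
        System.out.println(checkLogin("s3cret"));  // attempt login
    }
}
```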

Newer approaches to test automation are developing across the quality assurance market. These approaches include keyword-driven, behavior-driven, and state-driven frameworks. These framework styles each present various merits. However, they share some commonality that shifts the automation perspective from an exclusive practice to an inclusive approach.
These frameworks implement automation by recognizing that test automation may be comprised of multiple process areas:

  1. Automation Design
  2. Automation Implementation
  3. Test Case Design

In automation design, the elements or behaviors in an application are identified where automation is to be applied. Each transactional element (or atomic function) is identified. In more tangible terms, automation design may determine that elements are comprised of discrete elements on a screen (e.g. a text field, a radio button, a frame with a collection of a few controls). Alternatively, these elements may be seen as common (yet small and discrete) actions in the application, such as “enter a form value”, “navigate a header menu list”, or even “login”. These elements may be considered analogous to individual steps in a manual script. The analysis required for automation design may be performed by automation specialists or by non-technical resources. This allows more individuals with less specialized skill sets to participate in the automation effort. These resources focus on “what actions do we need to be able to perform to execute transactions?”. The focus shifts from worrying about how to technically achieve the automation to an abstraction based on real-world needs.

In automation implementation, an automation engineer (or someone capable of building an automation script) builds the automation code to strictly meet each element from automation design. While this component may be more technically challenging and require deeper knowledge of automation tools or scripting languages, this process is still simplified when compared to traditional approaches. The automation engineer has less logic to develop, as functions are bounded by the need to create small, discrete modules for specific needs. This leads to rapid development of many small functions rather than laborious creation of comprehensive transactional scripts. Furthermore, this promotes increased maintainability over time. If a small function or element of a transaction is changed in the application, maintenance is quickly applied to the smaller function. For example, if an application adds a “secret question” during login, the automation engineer knows that the login function is the only location in the automation libraries which needs to be modified.

Test case design is the process area where testing is actually assembled. This may also be a more inclusive process. Test case design is a process whereby the smaller building blocks of elements or functions are assembled to represent a test case. From the elements defined by automation design, elements are triggered sequentially to determine pass/fail status. This test design may be written in the automation tooling language, or in many cases, driven by rows in a spreadsheet. When implemented through spreadsheets, this enables less-technical resources to define the tests to be executed in an automated fashion. The automation framework may be designed to open a file with the defined steps, and execute the implementation code.
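A minimal sketch of this separation, assuming keywords registered as small named actions and a test case expressed as an ordered row of keyword names (as it would be when read from a spreadsheet); the keyword names are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the three process areas: automation implementation registers small
// named actions once; test case design is just an ordered list of keyword names.
public class KeywordRunner {
    private final Map<String, Runnable> actions = new LinkedHashMap<>();
    final List<String> executed = new ArrayList<>();

    // Automation implementation: an engineer supplies each discrete module once.
    void register(String keyword, Runnable action) { actions.put(keyword, action); }

    // Test case design: a (possibly non-technical) tester orders the keywords,
    // e.g. rows read from a spreadsheet; unknown keywords fail loudly.
    void run(String... testCaseRows) {
        for (String keyword : testCaseRows) {
            Runnable action = actions.get(keyword);
            if (action == null) throw new IllegalStateException("No implementation for: " + keyword);
            action.run();
            executed.add(keyword);
        }
    }

    public static void main(String[] args) {
        KeywordRunner runner = new KeywordRunner();
        runner.register("login", () -> System.out.println("perform login"));
        runner.register("checkoutShoppingCart", () -> System.out.println("perform checkout"));

        runner.run("login", "checkoutShoppingCart");
        System.out.println("executed: " + runner.executed);
    }
}
```

Swapping the hard-coded array for rows parsed from a spreadsheet file is all that separates this sketch from the spreadsheet-driven design described above.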

By developing frameworks that separate test case development from implementation details, automation coding becomes faster, maintenance overhead drops, and overall efficiency improves. More importantly, this separation of automation functions offers the possibility of including more people in the automation practices. More individuals in the organization may realize the benefits of any implementation efforts applied. More testing may be achieved in less time as constraints imposed by available automation engineering personnel are reduced.

Please contribute to this discussion. How does your organization apply automation to testing processes? What type of framework approaches do you use? What problems and constraints do you face?

Chris Meranda
Senior Solutions Engineer