Sweeter the Online Tills Never Ring: How Digital Is the Festival of Love?

Sweeter the online tills never ring: that is this year's motto for the e-commerce Christmas season. Nearly one in two Germans intends to buy all of their presents for the holidays online. But how well prepared are online retailers for a digital Christmas? System crashes and unavailable websites at leading Swiss online retail platforms on Black Friday show how quickly the online shopping hype can turn into a boomerang, and why load and performance tests are so important.

Advent has begun and Christmas is just around the corner. While some look forward to the scent of mulled wine, cinnamon and aniseed drifting from the Christmas markets through the streets, others are counting the days to Black Friday, Cyber Monday week or simply the 24/7 shopping on offer online. Clearly, digitalisation does not stop at the traditional Christmas celebration: many customs are migrating to the web. Where children once enthusiastically cut out pictures of games, dolls, books and clothes from catalogues, glued them onto an A4 sheet, labelled and framed them, and placed the finished artistic masterpiece on the windowsill, today's digital natives send their wish lists, complete with emojis, via Facebook, WhatsApp and the like. Much to the delight of online retailers: advancing digitalisation and the growing use of social media open up a lucrative new field for advertisers, offering detailed insight into people's everyday buying behaviour and thus the opportunity to place advertising precisely tailored to each individual's needs and preferences.


Sweeter the online tills never ring … or maybe not?

According to a recent representative study by Adobe, online business booms especially at Christmas time. Compared with the previous year, the study says, revenue will grow by another 10% and is expected to reach 23 billion euros. Given these rosy revenue forecasts, it is hardly surprising that around 43% of all Germans plan to do this year's Christmas shopping exclusively online. As reasons for the growing enthusiasm for online shopping, the 4,000 respondents mainly cited that buying via smartphone has become much easier and that the mobile optimisation of the shops has improved considerably. Many retailers in Germany therefore, also in response to customers' growing desire to shop online, pick up trends such as the discount campaigns known from America: Black Friday and Cyber Monday week. For the weekend around "Black Friday" alone, Adobe estimates the revenue potential at 549 million euros.

But the hype around these online discount battles also carries risks for retailers. Leading Swiss online portals collapsed under the onslaught of shoppers and failed spectacularly in the stress test that is Black Friday. Instead of the hoped-for ringing of online tills, the alarm bells rang: besides lost revenue, such outages always mean damage to the brand, since the retailer has, after all, been beating the advertising drum for the campaign. Companies must provide high-performance websites that can handle even peak loads without problems. Load and performance tests are therefore not optional; they are business-critical and well worth the investment. For a program or an online shopping platform to run smoothly, its individual components must be optimally tuned to one another. Today, apps and websites, no matter which industry a company operates in, must work reliably on every platform and every device, independent of location, and deliver an excellent user experience. Customers barely tolerate inconsistent user experiences and slow response times any more, and in online shopping these lead to abandoned purchases.


5 tips for eliminating potential performance bottlenecks:

1. Test early and often

Performance tests should by no means be run only just before an application goes into production; they belong in every development stage and at every architectural layer. For applications with a three-tier architecture, this means, for example, that load tests should be planned for the presentation, logic and data layers in every phase of their life cycles. Early testing allows defects and architectural problems to be detected sooner, which also reduces cost: fixing a defect at the end of the software development process can be up to 100 times more expensive than fixing it at the start.
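To make the per-layer idea concrete, here is a minimal sketch (not Micro Focus tooling; the layer names and time budgets are illustrative assumptions) of checking each tier of a three-tier application against its own response-time budget:

```python
# Per-layer response-time budgets in seconds (assumed values for illustration).
BUDGETS = {"presentation": 0.8, "logic": 0.5, "data": 0.3}

def check_budgets(measured):
    """Return the layers whose measured time exceeds their budget."""
    return [layer for layer, t in measured.items() if t > BUDGETS[layer]]

# Example timings, e.g. collected by a load-test run in CI.
measured = {"presentation": 0.6, "logic": 0.7, "data": 0.2}
violations = check_budgets(measured)  # here only the logic tier is over budget
```

A check like this can run in every build, so a regression in one tier is caught long before the whole stack is assembled.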

2. Determine the maximum peak load

A company should definitely know the maximum traffic its own website is designed for. Ignorance here can have disastrous consequences for the business. Many companies believe, however, that this can only be determined by building an expensive test environment. There are alternatives, such as cloud-based load-testing infrastructures that can simulate realistic scenarios, for example with thousands of users. Pay-as-you-go offerings are available today that make simulating peak loads fast and cost-efficient.
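The virtual-user model behind such cloud services can be sketched in a few lines. This is a toy illustration with a stubbed request function, not a real load test; a real run would replace `fake_request` with an HTTP call against the shop:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    """Stand-in for one virtual user's HTTP request (hypothetical)."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated server work
    return time.perf_counter() - start

def run_spike(virtual_users=200):
    """Fire all virtual users concurrently and report the 95th-percentile latency."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = sorted(pool.map(fake_request, range(virtual_users)))
    p95 = latencies[int(len(latencies) * 0.95)]
    return len(latencies), p95

count, p95 = run_spike()
```

Scaling `virtual_users` up and watching how the percentile latency degrades is, in miniature, what a cloud-based spike test does.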

3. Run cross-platform and cross-device tests

Users today access websites from a wide variety of devices and platforms, so it is essential that sites render correctly on a smartphone, tablet or desktop PC. Companies must therefore also run performance tests for mobile web applications and native mobile applications on Android, iOS and Windows Phone. Bandwidth tests for the different mobile connections, such as GPRS, 3G and 4G, should be considered as well.
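Rough bandwidth arithmetic already shows why these tests matter. The per-network figures below are assumed typical downstream rates, not measurements:

```python
# Rough downstream bandwidths in kilobits per second (assumed typical values).
BANDWIDTH_KBPS = {"GPRS": 56, "3G": 2_000, "4G": 20_000}

def transfer_seconds(page_kb, network):
    """Estimated time to transfer a page of `page_kb` kilobytes."""
    kilobits = page_kb * 8
    return kilobits / BANDWIDTH_KBPS[network]

# A 2 MB product page on each connection type:
estimates = {net: round(transfer_seconds(2048, net), 1) for net in BANDWIDTH_KBPS}
```

A page that loads in under a second on 4G can take nearly five minutes on GPRS, which is exactly the gap a bandwidth test is meant to expose.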

4. Analyse root causes

Detecting performance problems is by no means enough; root-cause analysis matters even more. As part of an overall testing solution, diagnostic tools make it possible to pinpoint the causes of performance problems efficiently, even under peak load, and thus to fix defects much faster.

5. Immediate alerting

Regular performance tests help eliminate potential peak-load bottlenecks, but it is just as important to be informed about problems immediately. An online retailer should therefore also use a website monitoring tool with an alerting function, ideally one that provides daily updates and reports on whose basis potential weaknesses can be eliminated.
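The core of such an alerting rule is simple to sketch. The paths, status codes and threshold below are hypothetical examples, not a real monitoring product's API:

```python
def evaluate_check(status_code, response_ms, threshold_ms=2000):
    """Classify one monitoring probe: 'ok', 'slow', or 'down'."""
    if status_code != 200:
        return "down"
    if response_ms > threshold_ms:
        return "slow"
    return "ok"

def alerts(probes):
    """Return the probes that should trigger an immediate alert."""
    return [(url, evaluate_check(code, ms))
            for url, code, ms in probes
            if evaluate_check(code, ms) != "ok"]

probes = [("/checkout", 200, 350), ("/cart", 500, 120), ("/search", 200, 4100)]
pending = alerts(probes)  # the broken and the slow endpoint
```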

Conclusion: Even though the Christmas season is already in full swing, it is never too late for change and optimisation. Next year will bring not only another "Black Friday" but other online campaigns as well, and they should be well prepared. With performance tests, automated functional tests, and solutions for access management and advanced authentication, Micro Focus offers important support, and not only to online retailers.

Gregor Rechberger


Gregor Rechberger is the Product Manager for performance testing and application monitoring, which includes the load-testing and application performance monitoring products. Since joining Segue/Borland in 2002 he has held documentation, program, and product management positions, and he has over 10 years of experience in the testing discipline.

Cyber Monday

Big retailers have been planning ahead for up to 18 months for their share of approximately $2.6 billion in revenue. Cyber Monday, started in 2005 by Shop.org, has become one of the biggest online traffic days of the year. Simon Puleo takes a look at how some of our biggest customers have prepared.

‘Twas the night before Cyber Monday and all through the house

everyone was using touchscreens gone was the mouse.

While consumers checked their wish lists with care

in hopes that great savings soon would be there.

The children were watching screens in their beds

while visions of Pikachu danced in their heads.

And Mamma in her robe and I in my Cub’s hat

reviewed our bank accounts and decided that was that!’

Cyber Monday, started in 2005 by Shop.org, has become one of the biggest online traffic days of the year. Black Friday may have started as early as 1951, and between them the two shopping holidays generate over $70 billion! Let's take a look at how some of our biggest customers have prepared:

1.)    Performance testing. Did you know that our customers typically start performance testing for Cyber Monday in February? Why would they start so early? Customers are testing more than just peak load; they are testing that sites will render correctly across multiple configurations, bandwidths, devices, and sometimes in multiple regions of the world. The goal of e-commerce is to enable as many shoppers as possible, and that includes my Dad on his iPad 2 on a rural carrier and my daughter on her Chromebook in an urban area. Multiply that by thousands of users and you can see that, unfortunately, retailers can't hire enough of my relatives to help them out. What they do instead is use a combination of synthetic monitors and virtual users to simulate and assess how a website will perform when 10,000 users are shopping at the same time.
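The configuration matrix those customers cover can be sketched as a simple cross-product. The device, bandwidth and region values below are illustrative assumptions, not anyone's real test plan:

```python
from itertools import product

# Hypothetical test dimensions; a real matrix is driven by analytics data.
DEVICES = ["iPad 2", "Chromebook", "Android phone"]
BANDWIDTHS = ["rural 3G", "urban cable"]
REGIONS = ["US-East", "EU-West"]

def build_matrix():
    """Every device/bandwidth/region combination a synthetic monitor should cover."""
    return list(product(DEVICES, BANDWIDTHS, REGIONS))

matrix = build_matrix()  # 3 x 2 x 2 = 12 configurations to exercise
```

Even this tiny example yields a dozen configurations; real device and region lists push the count into the thousands, which is why automation replaces relatives.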


2.)    New feature testing. Whether you consciously think about it or not, you expect and gravitate towards websites that have the latest feature set and best user experience. What does that mean? Listing a photo and a description is bare bones; the best commerce websites not only have reviews, videos, links to social media and wish lists, they may actually respond to your shopping habits, regional weather and personal interests. They use big data sets to predict what you are browsing for and offer you targeted deals too good to pass up! While that is exciting, it also means that the complexity of the code, both rendering in the browser and behind the scenes, has grown exponentially over the years. Ensuring that new features perform, that old code works with legacy systems, and that everything renders correctly on multiple devices is what functional and regression testing is all about. While a team of testers may track multiple code changes, they lean towards automation to ensure that code works on target configurations.

3.)    Offering federated access management. What? you're thinking, user login was solved ages ago. For sophisticated online retailers, letting customers sign in with Facebook, Google, Yahoo!, Twitter, LinkedIn or other credentials is first a method to gain trust, second an opening to potentially more customers, and finally a road to valuable personal data. Regardless of which advantage a retailer prioritizes, the ability to let millions of Facebook users easily log in and check out with a credit card equates to new customers and a leg up over legacy competitors. And for added trust and security, retailers can add multi-factor authentication at key points of the conversion process. A separate user login and password for each shopping site is quickly becoming a relic of the past as users opt for convenience over managing many user names and passwords.


These are some of the top methods and solutions that big retailers have implemented for 2016. The best online commerce professionals know what they are up against and what is at stake. For example:

  • In 2014 there were over 18,000 different Android devices on the market according to OpenSignal; that is an overwhelming number of devices to verify.
  • At a minimum, retailers lose $5,600 per minute their websites are down.
  • The market is huge: a recent estimate put the global number of digital buyers at 1.6 billion, nearly one fifth of the world's population. Converting even 0.1% of that number is 1.6 million users!
  • Users are fickle and will leave a website if delayed by just a few seconds.
  • Last year Cyber Monday accounted for $3 billion in revenue; this year we expect even more!
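The arithmetic behind those bullets is worth making explicit; a small back-of-the-envelope sketch using the figures quoted above:

```python
# Figures from the bullet list above.
LOSS_PER_MINUTE = 5600          # dollars lost per minute of downtime
DIGITAL_BUYERS = 1_600_000_000  # estimated global digital buyers

def downtime_cost(minutes):
    """Minimum revenue lost for a given outage length."""
    return minutes * LOSS_PER_MINUTE

def converted(rate):
    """Buyers won at a given conversion rate (e.g. 0.001 for 0.1%)."""
    return int(DIGITAL_BUYERS * rate)

half_hour_outage = downtime_cost(30)   # a 30-minute outage
buyers_at_01pct = converted(0.001)     # converting just 0.1% of the market
```

A single half-hour outage costs at least $168,000, while one tenth of one percent of the market is 1.6 million customers; the asymmetry is the whole business case for testing.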

Retailers like Mueller in Germany realize that zero downtime is critical to keeping both the physical and the virtual shelves stocked. Their holistic approach to managing software testing and performance helps them implement new features while keeping existing systems up and running. It is never too late to get started for this year, or to prepare for next: consider how Micro Focus has helped major US and European online retailers with performance testing, automated functional and regression testing, access management and advanced authentication.

Sleepless in front of the screen: the super sports summer of 2016 enters the home straight!

On 5 August the starting gun fires for the next big event of this sporting summer: the 2016 Olympic Summer Games. Anticipation among sports fans is enormous. With more than 340 hours of live television coverage and over 1,000 hours of live streams on the web, the question is no longer whether and what to watch, but how, where and on which device. One person's joy is another's sorrow, because it is precisely this variety of devices, the rapidly growing demand for live streaming, and the multitude of browsers that confront developers with ever greater challenges. Gregor Rechberger offers insights into how to uncover performance and functional problems early and achieve reliable app performance.

2016 is an absolute highlight for every sports fan: the big events follow each other in quick succession. No sooner are UEFA EURO 2016 and the Copa América, the two big football tournaments north and south of the equator, history, the new champions of grass-court tennis crowned at Wimbledon, and the winner of the Tour de France celebrated on the Avenue des Champs-Élysées, than the absolute climax of the sporting summer begins with the Olympic Games in Rio. On Friday the Olympic flame will be lit in the Maracanã stadium, and for the next 16 days the motto, not only for the athletes, is "taking part is everything". With more than 340 hours of live television coverage and over 1,000 hours of live video streams on the internet, viewers no longer ask whether and what to watch, but how, where and on which device. The two heavyweights ARD and ZDF alone offer six different live streams daily from 2 p.m. to 5 a.m. on their web portals sportschau.de and sport.zdf.de, plus 60 clips a day that round off the internet programme as an on-demand package.

This immense investment by the public broadcasters in live streaming and video-on-demand offerings reflects the general trend that consumers have fundamentally changed their usage habits in recent years, above all thanks to new technologies such as smartphones and tablets. At the 2012 Olympics in London, 25% of consumers already followed the Games online rather than via linear broadcast; this time there will certainly be far more. And the trend towards mobile streaming is visible beyond sports events: the Cisco® Visual Networking Index (VNI) forecasts that the data volume generated by video will grow by 825% within the next five years. For the developers of such streaming and video-on-demand applications, the question is not only how these immensely growing volumes of data can be delivered to customers, but also how consistently high video playback quality can be ensured, regardless of whether the consumer follows the live stream on a smartphone in the park, on a tablet in the tram on the way home, or sitting in front of a 50-inch 4K television at home. One thing is clear: consumers will not tolerate quality degradation, especially delays in transmission.


Just think of the men's 100-metre final in Rio, probably the most prestigious event in Brazil. Usain Bolt and his rivals are in the starting blocks waiting for the gun, and at exactly that moment the buffering symbol appears on our screen, or a "video not available" notice, or the video keeps jumping to pause. It is hard to imagine the reactions such an incident would trigger on Twitter, Facebook and the other social networks. Besides mockery and derision on social media, in the worst case the video streaming provider would also have to reckon with financial losses from its advertising partners, because nobody wants their advertising to stutter, stall or not be transmitted at all. Users expect the same performance they are used to from conventional television broadcasts; a delay of more than a minute will not be tolerated. Otherwise they will choose another source for their coverage, and the provider risks losing subscriptions and advertising revenue.
The key to uninterrupted delivery lies above all in load tests and effective performance measurements with workloads that replicate real user behaviour. Load tests are an essential part of the software development process. Even at peak demand, applications must remain available for thousands, if not hundreds of thousands, of users and deliver the promised performance.

With Silk Performer, Micro Focus offers a product that can run meaningful load and performance tests against HLS (HTTP Live Streaming), currently the leading video streaming technology. Recording scripts is very simple, and during test execution various quality metrics tell you, for example, how many segments the streaming client downloaded at which resolutions. This shows when more low-resolution segments were loaded because bandwidth was too low or the infrastructure was unable to serve a high number of high-resolution streams simultaneously. You can see how long it took to download the first segment, and you can precisely analyse the ratio between download time and playback time. In short, these quality measurements help to significantly improve the user experience with regard to download times, the download-to-play ratio and live streaming. With the performance testing solutions from Micro Focus you can precisely simulate real user behaviour across devices, networks and locations. Delivered as a cloud-based performance testing service, these solutions let you cost-effectively ensure that business-critical applications can handle peak loads and run as expected for all users worldwide, on all devices.
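The download-to-play metric described above can be illustrated with a short sketch. The segment timings are invented sample data, not Silk Performer output:

```python
# Each tuple: (download_seconds, playback_seconds, resolution) of one HLS segment.
segments = [
    (1.2, 6.0, "1080p"),
    (0.9, 6.0, "1080p"),
    (2.5, 6.0, "720p"),   # bandwidth dipped: the player fell back to 720p
    (1.1, 6.0, "1080p"),
]

def download_to_play_ratio(segs):
    """Total download time over total playback time; below 1.0 means smooth playback."""
    dl = sum(s[0] for s in segs)
    play = sum(s[1] for s in segs)
    return dl / play

def fallback_count(segs, top_resolution="1080p"):
    """How many segments were served below the top resolution."""
    return sum(1 for s in segs if s[2] != top_resolution)

ratio = download_to_play_ratio(segments)
fallbacks = fallback_count(segments)
```

A ratio creeping towards 1.0, or a rising fallback count, is exactly the early warning a streaming load test is designed to surface before the 100-metre final starts.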

Conclusion:

When customers have a bad experience, whether with a website, an app or a video streaming service, they are unlikely to return or to use your offerings. Testing for performance, scalability and reliability is therefore crucial. Customer expectations are changing the conventional understanding of "quality". Companies simply can no longer afford inconsistent user experiences and slow response times. The tools from Micro Focus uncover functional and performance problems, thereby avoiding damage to reputation and revenue, and help achieve reliable app performance and fully functional global websites.

 

Visualizing a Use Case

Have you ever put the finishing touches on your use case in a Word document, only to find that the Visio diagram you had depicting the process flow is now out of date? If you are lucky, you have both a visual model of your functional flows and the corresponding text to back it up – and let's not forget about the corresponding test cases!


In the fast-paced world of software development, if you don't have solid processes in place and a team that follows them, you might find yourself "out of sync" on a regular basis. Industry numbers such as "30% of all project work is considered to be rework… and 70% of that rework can be attributed to requirements (incorrect, incomplete, changing, etc.)" start to become a reality as you struggle to keep your teams in sync.

The practice of writing use cases in document form through a standard template was a significant improvement in promoting reuse, consistency and best practices. However, a written use case in document form is subject to many potential downfalls.

Let’s look at the following template, courtesy of the International Institute of Business Analysis (IIBA) St. Louis Chapter:

Skip past the cover page, table of contents, revision history, approvals and the list of use cases (already sounds tedious, right?). Let's look at the components of the use case template:

The core structure is based on a feature, the corresponding model (visualization) and the use case (text description).  This should be done for every core feature of your application and depending on the size of your project, this document could become quite large.

The use case itself is comprised of a header which has the use case ID, the use case name, who created it and when, as well as who last modified it and when. As you can see, we haven't even gotten to the meat of the use case and we already have a lot of implied work to maintain this document, so you need to make sure you have a good document repository and a good change management process!

Here is a list of the recommended data that should be captured for each use case:

  • Actors
  • Description
  • Trigger
  • Preconditions
  • Postconditions
  • Normal flow
  • Alternative flows
  • Exceptions
  • Includes
  • Frequency of use
  • Special requirements
  • Assumptions
  • Notes and issues

The problem with doing this in textual format is that you lose the context of where you are in the process flow. Surely there must be a better way? By combining a visual approach with the text, using the visual model as the focus, you can save time by modeling only to the level of detail necessary, validate that you have covered all the possible regular and alternative flows, and, most importantly, capture key items within the context of the use case steps, making it much easier to look at the entire process or individual levels of detail as needed.
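One way to see the benefit: once a flow is a graph rather than prose, coverage can be checked mechanically instead of by visual inspection. A minimal sketch, with a hypothetical ATM flow invented for illustration:

```python
# A hypothetical "ATM Withdraw Cash" flow as an adjacency map of steps.
FLOW = {
    "insert card": ["enter PIN"],
    "enter PIN": ["main menu", "PIN rejected"],   # alternative flow branches here
    "PIN rejected": ["enter PIN"],
    "main menu": ["withdraw cash"],
    "withdraw cash": ["dispense", "insufficient funds"],
    "dispense": [],
    "insufficient funds": ["main menu"],
}

def reachable(flow, start):
    """All steps reachable from `start`; unreached steps signal missing transitions."""
    seen, stack = set(), [start]
    while stack:
        step = stack.pop()
        if step not in seen:
            seen.add(step)
            stack.extend(flow.get(step, []))
    return seen

covered = reachable(FLOW, "insert card")
uncovered = set(FLOW) - covered  # empty means every step is on some path
```

A text-only template cannot run a check like this; the model can, which is exactly the rework it eliminates.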

If you look through the template example, you can quickly see that it is a manual process that cannot be validated without visual inspection, so it is subject to human error. It is also riddled with "rework", since you have to reference previous steps in the different data field boxes to make sense of everything.

Here is a visual depiction of the example provided in the template.  I have actually broken the example into two use cases in order to minimize required testing by simply reusing the common features:

Access and Main Menu

ATM Withdraw Cash

I have added some colorful swim lanes to break the activity steps down into logical groupings. If you think the visualizations look complicated, you might be right… they say a picture is worth a thousand words, so what you have done is taken the thousand words from the use case, with all of their variations, and put them into one visual diagram! The good news is that it is surprisingly easy to create these diagrams and to translate all of the required data from the use case template directly into this model. A majority of the complexities of the use case are handled automatically for you. When it comes time for changes, you no longer have to worry about keeping your model in sync with your text details, and you certainly no longer have to worry about keeping references to steps and other parts of the use case document in agreement!

In the next blog, we’ll look at how to model the “Normal flow” described in the use case template.

Software Testing: Myths vs Reality

If you're thinking about pursuing a career as a software tester, this blog will make good reading! One of our junior testers, Karthik Venkatesh, puts pen to paper to help anyone starting out on a testing career with some expectation setting. Here's what he's learnt so far.

“Testing started when the human race began”!

The analytical part of the human mind is all about verification and validation before concluding anything, and software testing is no exception.

Market Outlook and Future for Software Testing

  • The Global Software Testing Services Market (2016-2020) research report predicts that the global software testing services market will grow at a CAGR of close to 11% during the forecast period.
  • According to a recent report by Fortune magazine, software testing is listed among the top 10 in-demand careers of 2015.


So aiming to pursue a career as a tester or quality assurance professional looks like a good plan. Let's take a look at some myths and realities of being a software test professional:

Myths vs Reality about Software Testing

Myth-1: Testing is Boring

Reality: Testing is not boring or a repetitive task. It is like a detective’s job! Testing is a process of investigation, exploration, discovery, and learning. The key is to try new things. In reality, testing presents new and exciting challenges every day.

Myth-2: Testers do not write code

Reality: Some people may say that software test engineers do not write code. In fact, testers usually require an entirely different skill set, which could be a mix of Java, C, Ruby, and Python. And that is not all you need to be a successful tester: a tester needs a good knowledge of manual testing practices and automation tools. Depending on the complexity of a project, a software test engineer may write more complex code than the developer.

Myth-3: A tester's job is only to find bugs

Reality: The job of a software test engineer is not restricted to finding bugs. A tester should be customer focused, understand how the system works as a whole to accomplish customer goals, and have a good understanding of how the product will be used by the end user. A tester has to understand the complete product architecture: how it interacts with the environment, how the application behaves in a given situation, and how it integrates with all the components and works seamlessly.

Myth-4: Software testers are paid less than the developers

Reality: These days the quality of a product directly affects the product's or the brand's reputation, so no organization is willing to compromise on quality. Organizations are always looking to work with energetic testers. An efficient software tester can draw a higher salary than a developer of similar experience.


Top 7 tips for software test engineers starting out on their career

  1. Development and testing are moving closer to the business units and you will need to communicate and work closely as a team.
  2. To find bugs, you will need to be creative. A software test engineer needs to come up with new ideas which would help in finding bugs. Work smart as well as hard! Always find better and simpler ways to do the assigned tasks, own tasks proactively and innovate.
  3. A good tester is one who knows the application inside and out. The tester should be aware of all the components in a product and the business logic behind them. Good knowledge of the product helps you understand the importance of a feature from a business perspective, so become the expert!
  4. Always want to learn more!
  5. Try to hone some skill sets such as good negotiation skills, thinking out of the box, and multi-platform skills
  6. You will need to be persuasive and explain to stakeholders which bugs have been found and how they are likely to impact end users and the business.
  7. You must be a perfectionist and resilient under pressure, as testing is typically the last gate before the product reaches the hands of customers.

Corporations cannot hire customers, so they hire software test engineers who put products through their paces on potential customers' behalf. So, to represent customers within a corporation, what kind of hat would you wear: a purple hat, a yellow, a blue, or a white?

Customers have different approaches to using a product. If you consider each approach as a colored hat, a test engineer needs to wear a wide variety of hats of different colors and shapes.

Testing is a career built on innovative thinking: be passionate about it and be strong enough to make your own choices work! Don't forget to read the Micro Focus approach to software testing and view our impressive range of testing products.


Karthik Venkatesh

Cyber Fun Days!

How can online retailers ensure virtual shopping carts will continue to be filled now that Black Friday has kicked off the seasonal shopping season? New writer Lenore Adam talks about ways to prevent website bottlenecks and guarantee a positive and consistent user experience.

As my colleague Derek Britton recently noted in his blog, Cyber Sunday is the latest extension of the traditional Thanksgiving retail feeding frenzy (pardon the pun, I struggle with any reminder of having eaten too much this past week…). U.S. retail giant Wal-Mart, along with several other major retailers, pulled their Cyber Monday promotions into Sunday in a bid to capture increased online demand.

For consumers, it is less of a trend and more of a way of life. Ubiquitous use of smartphones with fast internet has helped blur the lines between what were traditionally distinct retail and online shopping days. Economists estimate that digital shopping will rise by ‘11.7 percent this year, lifting the overall proportion of online sales to 14.7 percent of total retail activity, or $1 out of every $7’ that consumers spend this season. Despite these indicators, major retailers were caught unprepared for the volume of online shopping this year, promoting products that consumers were unable to order due to website overload.

Ensure a Positive User Experience

Even after the holiday rush, online retailers are still vulnerable to unpredictable demand. Will another polar vortex increase weather-driven commerce and send an unexpected wave of consumers to your site? Will those newly implemented e-commerce delivery options stress back-end systems and reduce peak performance? Are you ready for this season's variety and volume of access devices, browsers, and geographically dispersed access points? Online retail success demands a positive user experience for a customer base accustomed to web page response times ticked off in milliseconds.

The mantra for brick-and-mortar retailers is often location, location, location. With online retailers it's more like test, test, test. This is where Silk Performer and Cloudburst come in. Borland products help prevent our customers – who include some of the biggest names in online retailing – from becoming another online casualty. Archie Roboostoff, Director of Product Management, explains how Silk is used not only for website performance testing but also for testing responsive web design. For example, use Silk to test…

‘…across different configurations of browsers to outline where things can be tuned…For example, Silk can determine that your application runs 15% slower on Firefox than Chrome on Android. Adjusting the stylesheets or javascript may be all that is required to performance tune your application. Testing for responsive web design is crucial to keeping user experience sentiments high…’

When ‘a 100 millisecond delay… equates to a 1% drop in revenue’, online performance clearly is business critical. With the competition just a click away, don’t lose customers due to poor site performance. Keep them on your site, happily filling up their shopping carts. Try Silk Performer here.

Lenore

After the Goldrush

How can online retailers keep the tills ringing now that Thanksgiving is over? Chris Livesey talks about easy ways to prevent website wobbles.


As my colleague Derek Britton recently noted in his blog, Cyber Sunday is the latest extension of the traditional – at least in contemporary terms – Thanksgiving retail feeding frenzy. Wal-Mart has decided to test their website’s resilience to heavy digital footfall for a further 24 hours.

Similarly, the UK-based technology store Carphone Warehouse brought forward their Black Friday event by 24 hours and joined Amazon and Argos in offering deals that run from November 23 until December 2 inclusive.

Whether it is out of consideration for the consumer or just another dead-eyed strategy to squeeze more pre-Christmas cash out of consumers, the line between the end of one sales event and the commencement of another is increasingly blurred. And it is less of a trend and more of a way of life. UK shoppers spent more than £718.7m online every week throughout 2014, an 11.8% increase on the previous year.

The Reiss Effect

So what happens after the seasonal rush? Everything goes back to normal, right? Well, maybe not. Online retailers are still vulnerable to The Reiss Effect. This happens when a company isn’t prepared for, well, the unexpected and loses out as a result.

In this case, Kate Middleton being pictured wearing a Reiss dress had unforeseen – and unfortunate – consequences for the manufacturer. The website crashed. Reiss were unable to take advantage of their good fortune. This once-in-a-lifetime opportunity passed them by. Unable to process orders from new or established customers, they lost revenue and became a ‘thing’.

Websites are the virtual shopfronts for retailers and manufacturers and, just like shops, can quickly become overwhelmed if not battle-ready. Unexpected opportunities can quickly become unwanted headaches. The same Social Media platforms that plug your product can quickly damage your brand.

We are not bemused

Underestimating the potential popularity of your offering is just another form of unpreparedness, and it can be just as damaging. The website for Dismaland, the pop-up art project set up by British graffiti artist Banksy, recently crashed, leaving thousands of would-be visitors unable to purchase tickets. But as the creative theme of this ‘bemusement park’ attraction was disappointment, this may well have been the intention.

So the key to online retail success for Black Friday, Cyber Sunday, ‘Gratuitous Spending Wednesday’ and beyond is to road-test your website for any eventuality. It’s easier than you think. As the CMO for Micro Focus Borland I am proud that we help prevent our customers – who include some of the biggest names in online retailing – from becoming another ‘thing’.

It’s easy with Silk Performer and Cloudburst. This is stress-free stress testing for websites and applications. With it, users have Cloud-based scalability and access to as many virtual users as they like. Without it, they may not detect the errors that can turn go-live into dead-in-the-water day. Try it here.

But even the best tool can’t prepare an organization for everything. Sorry, US Airlines, but if an opossum is going to chew through the power cable, you’re on your own.

Chris

How can I make my tests execute faster?

This is a common question that often arises from Silk Test users. In most instances we find that a number of efficiency gains can be obtained from changes to how the tests are scripted. The Silk Test suite provides many ways of interacting with an application under test (AUT), some of which can drastically reduce the overall execution time of your tests. This blog provides information on how the findAll method can help reduce the overall execution time of tests with Silk4J. The procedure is similar for the other Silk Test clients.

The following code snippet was created by a real Silk4J user to determine the number of different controls within a Web Application. When executing tests in Chrome and Firefox, the user found that the execution time was significantly higher than in Internet Explorer. When tracing through the Silk4J framework of the user, we discovered that the vast majority of the additional execution time was being lost in a utility method called countObjects. The utility method was scripted as follows:

public void countObjects() {
    desktop.setOption("OPT_TRUELOG_MODE", 0);

    int iCount = 0;

    int i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//INPUT[" + (i + 1) + "]")) {
        i++;
    }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//SELECT[" + (i + 1) + "]")) {
        i++;
    }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//BUTTON[" + (i + 1) + "]")) {
        i++;
    }
    iCount += i;

    i = 0;
    while (desktop.exists("//BrowserApplication//BrowserWindow//A[" + (i + 1) + "]")) {
        i++;
    }
    iCount += i;

    System.out.println("Counted " + iCount + " items.");
}

The initial analysis of the method indicated that the code was not efficient for the task at hand.

Why was the above code not efficient?

The following areas were identified as inefficient:

  1. The method counts each control individually; therefore if there are 100 controls to be counted, the above snippet would make over 100 calls to the browser to count the required controls.
  2. The use of while loops in this fashion leads to unnecessary calls to the browser, and therefore to wasted execution time.

How could the code be made efficient?

Analysis of the countObjects method revealed that the same functionality could be achieved in four Silk4J statements. To make this possible the method was modified to use the Silk4J method findAll. This method returns a list of controls matching a specific locator string with a single call, therefore it has the following benefits over the original approach:

  1. Each control does not need to be found individually before incrementing the count.
  2. No unnecessary calls are made to the browser.

The modifications resulted in the following method:

public void countObjects() {
    int iCount = 0;

    iCount += desktop.findAll("//INPUT").size();
    iCount += desktop.findAll("//SELECT").size();
    iCount += desktop.findAll("//BUTTON").size();
    iCount += desktop.findAll("//A").size();

    System.out.println("Counted " + iCount + " items.");
}

Visually the modifications have already reduced the number of lines of code required to perform the same functionality as the original utility method. There is now also no requirement for loops of any sort and from an execution standpoint, we were able to demonstrate the following performance gains:

Chart01

The above chart and table demonstrate how much time a user can gain by simplifying their code through the use of other methods within the Silk4J API that are more suitable for a particular task. The performance gains in Google Chrome and Mozilla Firefox are substantial, while the execution time in Internet Explorer is now less than a second. Overall this process has resulted in better code, better efficiency, and ultimately time saved.

Can I apply this to other parts of an automation framework?

In the following example the user needed to verify that the labels and values within a number of tables have the correct text values. Again, the execution time in both Mozilla Firefox and Google Chrome was considerably higher than the execution time in Internet Explorer. For example, the user experienced a great difference in execution times when executing the following method:

public void verifyLabels(String[][] expected) {
    for (int i = 1; i <= 17; i++) {
        DomElement lbl = desktop.find("//TH[" + i + "]");
        DomElement value = desktop.find("//TD[" + i + "]");
        Assert.assertEquals(expected[i - 1][0], lbl.getText());
        Assert.assertEquals(expected[i - 1][1], value.getText());
    }
}

Why was the above code not efficient?

In the above method two find operations are being executed inside the for-loop. Each iteration of the loop increases the index of the controls to be found. To find all the elements of interest and to retrieve the text of those elements, 68 calls to the browser are required.

Where are the efficiency gains?

As the loop is simply increasing the locator index, which indicates that Silk4J is looking for a lot of similar controls, the find operations within the for-loop can be replaced with two findAll operations outside of the loop. This modification immediately reduces the number of calls that must be made to the browser through Silk4J to 36 and the method now reads as follows:

public void verifyLabelsEff(String[][] expected) {
    List<DomElement> labels = desktop.<DomElement>findAll("//TH");
    List<DomElement> values = desktop.<DomElement>findAll("//TD");

    for (int i = 0; i < 17; i++) {
        Assert.assertEquals(expected[i][0], labels.get(i).getText());
        Assert.assertEquals(expected[i][1], values.get(i).getText());
    }
}

Performance Impact

The chart below highlights how small changes in your Silk4J framework can have a large impact on the replay performance of tests.

Chart02

Robert

DevOps & Quality Automation

So, how does DevOps work in the testing arena? Extremely well for Borland Silk customers, as Archie Roboostoff explains in this blog.

DevOps means different things across the IT and development landscape. Take quality. The bottom line is to maintain a positive end-user experience while making the process more efficient from both speed and cost perspectives. The top line is to constantly monitor your application across all end-user endpoints, for both functional and performance needs.

The old-school method required a combination of record/replay or manual tests being run across large virtual instances of platform and browser combinations. This was great for eliminating functional and performance defects, but as more browsers and devices came to market, testing across these variants became more demanding. This led to organizations trading off delivery speed against adequate testing. As many of their customers will testify, it’s a compromise too many – and one that is no longer required.

For example?

One of our large financial customers was spending more time building infrastructures than testing their applications. They had almost doubled the size of their quality teams while reducing their overall test effectiveness – as the missed deadlines and budget over-runs confirmed. Their products were falling behind competitors, the end-user experience was poor and their ability to deliver new products was very unpredictable.

It doesn’t have to be this way. This blog post is a walk-through of using Silk to achieve an automated quality infrastructure for DevOps in just a few short steps. It is quick and simple. This process works in any operating environment, testing desktop and web applications across any combination of platform and device.

Step 1 – Create the tests

The continuous monitoring of quality in a DevOps context is about taking any development changes such as nightly builds, hotfixes, updates and full releases, and rapidly scaling out the testing environment to improve the end-user experience. This swiftly identifies any device- or browser-specific issues caused by the changes, and exactly where each issue occurred. Many organizations test applications manually because they feel automation is too difficult or requires a different skillset. With Silk, less technically-adept users can create scripts visually, while coders work from within the IDE.

Image 1 – Visual test creation with Silk – no technical skill needed.
Image 2 – IDE test creation – either in Eclipse or Visual Studio

Step 2 – Verify and Catalog the newly created tests

To confirm that our test works, we need to set a context that makes collaboration easier and test reuse simpler.

Verifying the test runs is straightforward – create your test and select one or more target browsers or devices to run the test on. It doesn’t matter where or how the test was created; if the test was recorded using Internet Explorer, it can be verified to work across any browser or device. To select the test target just point and click.

Running a test across any browser or device.

Making it work for DevOps

Our test works, so let’s add some context to the test for better collaboration and reuse in the larger DevOps infrastructure. A unique Silk feature, ‘Keyword Driven Tests’, enables cataloging and providing business/end user context to a large number of tests. In this case, we will provide a keyword for our test. Keywords can be assembled to reflect a series of transactions or use cases. For example, adding keywords like ‘verifyLogin’ and ‘checkoutShoppingCart’ will create tests to check the robustness of the login and shopping cart checkout.

We are now able to collaborate better with business stakeholders in the DevOps context, enabling the creation of a variety of use cases that get to the root of any post-deployment issues. Keywords can also take parameters and pass them to the tests in question, making automation even easier.
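As a generic illustration of the keyword-driven idea described above (this is a hypothetical sketch of the concept, not Silk's actual Keyword Driven Tests API; all names are invented), a keyword layer can be modeled as a map from keyword names to parameterized actions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of a keyword-driven dispatch layer. Silk's real
// Keyword Driven Tests feature is configured in the product itself.
public class KeywordRunner {
    private final Map<String, Function<String[], Boolean>> keywords = new HashMap<>();

    // Registers a keyword and the action it maps to.
    public void register(String keyword, Function<String[], Boolean> action) {
        keywords.put(keyword, action);
    }

    // Looks up the keyword and runs it with the supplied parameters.
    public boolean run(String keyword, String... params) {
        Function<String[], Boolean> action = keywords.get(keyword);
        if (action == null) {
            throw new IllegalArgumentException("Unknown keyword: " + keyword);
        }
        return action.apply(params);
    }

    public static void main(String[] args) {
        KeywordRunner runner = new KeywordRunner();
        // 'verifyLogin' takes a username parameter; a real body would drive the UI.
        runner.register("verifyLogin", p -> p.length == 1 && !p[0].isEmpty());
        System.out.println(runner.run("verifyLogin", "testuser"));
    }
}
```

Business stakeholders then compose test flows from keyword names alone, without touching the underlying automation code.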

Image 4 – Keywords being added to tests.

Step 3 – Create an execution environment

How do we get to a point where continuous deployment and integration is overcoming the challenges of today’s software market? The key is to set up an environment that can replicate these real world conditions. As we have said, previously this would require a number of development/test boxes or virtual machines. These were expensive to maintain and difficult to set up. For DevOps, Silk can manage and deploy an ‘on demand’ test environment either in a public cloud, private cloud, or on-premise within the confines of the enterprise data center.

With Silk, setting this up is easy and straightforward. So let’s do it. In this example, we will set up an execution environment with the Amazon AWS infrastructure. Even though this is in a public environment, all tests and information access are secure. This environment can be set up in a more private cloud setting, on premise within the firewall – even on the tester’s individual machine. Whatever the environment parameters, Silk has you covered.

Image 5 – Connecting Silk to an execution environment.

The Amazon test case

We are about to connect Silk to an Amazon instance using AWS credentials given to us by Amazon. Silk will set up the browser and device combinations so that we can rapidly deploy our applications for testing. Rather than manually setting up a range of VMs to test different versions of Chrome, Firefox, IE, Edge, etc, Silk will spin up the instances, configure them, deploy the application, run the tests, gather the results, and then close them out. It is in this context that we start to really take advantage of some key DevOps practices and principles.

Silk will take the tests we have created and pass them to the environment you have set up. Do you have applications inside the firewall that need to be tested from an external environment? No problem. Silk’s secure technology can tunnel through.

Image 6 – Tunneling through the firewall with Silk to test private internal applications from public execution environments.

Step 5 – Run the tests

Now we’ve created the tests, created the context and set up the execution environment, we must determine when – and how – we want these tests to run. In most DevOps environments, tests are triggered to run from a continuous integration environment managed with Jenkins, Hudson, Cloudbees, etc. Whatever the preferred solution, Silk will execute tests from any of these providers.

When the tests are executed, depending on the selected configurations, Silk will run through the tests and provide a detailed analysis across all browsers, devices, keywords, and tests. Remember – the more the better, as you want to test what your end-users will be using. Along with the analysis, screenshots of each test display the evolving trends for that application, giving visual confirmation that the test either succeeded or highlighted an issue – especially important in responsive and dynamic web designs.

Image 7 – Results analysis from Silk running a series of tests and keywords through the execution environment.

Each test is outlined across the platform/browser/device combination and the end user benefits from a visual representation along with detailed results analysis. Management appreciate the high level dashboard summary showing trends across targets.

Image 8 – Dashboard of runs over time.

Step 6 – Test Early, Test Often

Now that the test environment has been established and connected to the build environment, ongoing testing across any number of environments is completely automated. Instead of setting up testing environments, quality teams can focus on building better tests and improving collaboration with business stakeholders. This is where Silk delivers true DevOps value to an organization. New browsers, such as Microsoft Edge, can easily be added to the configuration environment. There’s no need to recreate tests; just point the existing tests at the new environment.

Image 9 – Adding new browsers to the execution environment.

Step 7 – Performance Tune

Along with each functional automation piece, Silk can test from both a ‘load’ and a ‘functional’ perspective. When testing applications under load, Silk determines average response times, performance bottlenecks, mobile latency, and anything related to a massive amount of load generated on a system. Taking a functional perspective, Silk runs a smaller number of virtual users across different configurations of browsers to outline where things can be tuned. And this is key information. For example, Silk can determine that your application runs 15% slower on Firefox than on Chrome on Android. Adjusting the stylesheets or JavaScript may be all that is required to performance-tune your application. Testing for responsive web design is crucial to keeping user experience sentiments high in a DevOps context.

Image 10 – Performance tuning across different devices and platforms for optimization in responsive web design.

Using Silk’s technology and running these tests over time will track the trends. This, along with detailed analytics from data within Silk and sources like Google PageSpeed, will illustrate where your applications will benefit from being fine-tuned across browsers and devices.

Image 11 – Outline where applications can be adjusted for better end user performance and optimization.

In conclusion

DevOps is a slightly nebulous phrase that means different things to different people. But when it comes to testing, the value is pretty clear. Aligned with the right software, it will ensure your applications perform as expected across any device/browser/platform. In addition, using Silk will ensure that your apps are:

  • delivered on time and within budget
  • constantly improving
  • responsive
  • built and tested collaboratively
  • feeding trend data on responsiveness and quality to a central location
  • successful with end users/consumers.

So if this sounds like something you can use, then we should talk about Silk. It’s the only tool that can reach your quality goals today and will continue to innovate to help you eliminate complexity and continuously improve the overall development and operations process. Now that’s DevOps.

Archie

Building a Robust Test Automation Framework – Best Practice

According to Trigent Software’s own research, a robust test automation framework ranks highly on their list of software testing ‘must-haves’. When executed in a structured manner, it helps improve the overall quality and reliability of software. Read more from Test Architect Raghukiran in this fantastic guest blog…

Through our experience and research, ranking high on our list is a robust test automation framework. When executed in a structured manner, it helps improve the overall quality and reliability of software.

The software development industry always faces a time crunch when it comes to the last mile, i.e. testing. Ask any software developer and he will tell you that development teams in any corner of the world want testing activities to be faster than humanly possible, to deliver results which are laser accurate, and to achieve all of this without compromising quality. Manual testing fails to live up to these expectations and is therefore least preferred. Test automation is the best choice, as it helps accelerate testing and delivers fast results. To ensure that test automation works well, a robust test framework is needed to act as the core foundation for the automation life cycle. If we don’t build the right framework, the results could be:

  • Non-modularized tests
  • Maintenance difficulties
  • Inconsistent test results

All of these result in escalating costs, bringing down the ROI considerably.

Best Practices

Framework Organization
The automation framework needs to be well organized so that it is easier to understand and work with. An organized framework is easier to expand and maintain. Items to be considered are:

  • An easy way to manage resources, configurations, input test data, test cases and utility functions
  • Support for adding new features
  • Easy integration with the automation tool, third-party tools, databases, etc.
  • Standard scripting guidelines to be followed across the framework

Good Design

Automation tests are used for long-term regression runs to reduce the testing turnaround time; hence, the design should be good so that the tests can be maintained easily and yield reliable test results. The following are some good design steps:

  • Separate application locators from the test code so that locators can be updated independently in the locator file when they change. Example: use locators from an object map, or an external Excel or XML file
  • Separate test data from the code and pull data from external sources such as Excel, text, CSV or XML files. Whenever required, we can simply update the data in the file
  • Organize tests as modules/functions so that they are reusable and easy to manage. Keep application/business logic in a separate class and call it from the test class
  • Tests should start from a known base state, and should recover and continue when there are intermittent test failures
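As a minimal, hypothetical sketch of the first point (the file format, key names and locators below are illustrative only, not tied to any particular tool), locators can live in an external properties file and be looked up by key from the tests:

```java
import java.io.StringReader;
import java.util.Properties;

// Illustrative example: locators live outside the test code, so a UI change
// means editing one file rather than every test that uses the control.
public class LocatorRepository {
    private final Properties locators = new Properties();

    public LocatorRepository(String propertiesText) throws Exception {
        // In a real framework this would load from a file on disk.
        locators.load(new StringReader(propertiesText));
    }

    // Returns the locator registered under the given logical key.
    public String get(String key) {
        String locator = locators.getProperty(key);
        if (locator == null) {
            throw new IllegalArgumentException("No locator for key: " + key);
        }
        return locator;
    }

    public static void main(String[] args) throws Exception {
        String file = "login.username=//INPUT[@id='user']\n"
                    + "login.submit=//BUTTON[@id='go']\n";
        LocatorRepository repo = new LocatorRepository(file);
        System.out.println(repo.get("login.submit"));
    }
}
```

Tests then refer to `login.submit` rather than to a raw XPath, so only the repository changes when the page does.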

Configuration options

The framework should provide options to choose configurations at run time so that it can be used as per the test execution requirements. Some of the configurations include:

  • Ability to choose the test execution environment, such as QA, Staging or Production
  • Ability to choose the browser
  • Ability to choose the operating system and platform
  • Ability to mark priority, dependency and groups for the tests
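A rough, tool-agnostic sketch of such run-time configuration reads the choices from JVM system properties with sensible defaults (the property names `env` and `browser` are invented for the example):

```java
// Illustrative sketch: run-time test configuration from system properties,
// e.g.  java -Denv=Staging -Dbrowser=Firefox ...
public class TestConfig {
    private final String environment;
    private final String browser;

    public TestConfig() {
        // Fall back to defaults when no property is supplied on the command line.
        this.environment = System.getProperty("env", "QA");
        this.browser = System.getProperty("browser", "Chrome");
    }

    public String environment() { return environment; }
    public String browser() { return browser; }

    public static void main(String[] args) {
        TestConfig config = new TestConfig();
        System.out.println(config.environment() + " / " + config.browser());
    }
}
```

The same suite can then be pointed at Staging with Firefox, or Production with Chrome, without code changes.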

Re-usable libraries

Libraries help in grouping application utilities and hiding complex implementation logic from the external world. They promote code reusability and make the code easier to maintain.

  • Build a library of utilities, business logic and external connections
  • Build a library of generic framework functions

Reports and logs

To evaluate the effectiveness of automation we need the right set of results; the automation framework should provide all the details required about test execution.

  • Provide logs with the necessary details of a problem, with a custom message
  • Provide reports giving detailed execution status in Passed/Failed/Skipped categories, along with screenshots
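A bare-bones illustration of such a run report (the status categories and method names are invented for the sketch; a real framework would also persist messages and screenshots):

```java
import java.util.EnumMap;
import java.util.Map;

// Minimal illustration of a Passed/Failed/Skipped summary for a test run.
public class RunReport {
    public enum Status { PASSED, FAILED, SKIPPED }

    private final Map<Status, Integer> counts = new EnumMap<>(Status.class);

    // Records one test result and logs a line with the custom message.
    public void record(String testName, Status status, String message) {
        counts.merge(status, 1, Integer::sum);
        System.out.println(testName + ": " + status + " - " + message);
    }

    public int count(Status status) {
        return counts.getOrDefault(status, 0);
    }

    public static void main(String[] args) {
        RunReport report = new RunReport();
        report.record("loginTest", Status.PASSED, "ok");
        report.record("checkoutTest", Status.FAILED, "timeout on submit");
        System.out.println("Passed: " + report.count(Status.PASSED)
                + ", Failed: " + report.count(Status.FAILED));
    }
}
```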

Version Control and Continuous Integration

To effectively control the automation framework we need to keep track of it; hence a version control system is required. Keep the framework integrated with version control.

We also need to run the regression suite continuously to ensure that the tests are running fine and the application functions as expected; hence a continuous integration system is required to support test execution and results monitoring.

If we build a robust automation framework with the above capabilities, we gain the following benefits:

  • Increased product reliability – accurate, efficient, automated regression tests reduce risk
  • Reduced product release cycle time – improved time to market and reduced QA cycle time
  • Improved QA efficiency and effectiveness – the QA team is free to focus manual effort where needed
Raghukiran
Test Architect

Raghukiran, Test Architect with Trigent Software, has over a decade’s experience building scalable and efficient test frameworks for automation tools. He is keenly interested in exploring new automation tools, continuous integration setups for nightly execution, and automation coverage. In the blog ‘Building a Robust Test Automation Framework – Best Practice’ he discusses the best practices to follow for building a robust test automation framework.

Black Friday

Ahh the Holidays. That wonderful time of the year that fills us with joy, happiness, and sets our innate primal consumer instincts ablaze. Serious shoppers will do just about anything for that one out-of-this-world bargain. They happily abandon family gatherings, stand in lines that wrap around a building two or three times, fight-off horrific weather or, even worse, other competitive shoppers.

The holiday battleground has extended its reach from brick-and-mortar locations to the devices in your possession, hence the shopping boom extending from Black Friday to Cyber Monday. Forrester Research predicts an annual online shopping growth rate of nearly 10% through 2018. Assuming that prediction is correct, the average consumer will spend roughly $2,000 on online shopping by 2018! That means a potential of $461 billion in online spending alone!

Clearly, retailers will be fighting furiously for every slice of that $461 billion pie. To accommodate those demanding customer expectations, it’s more important than ever during the holiday season for the entire organization to be on the same page, from executives to marketing to development. Customers have ever-growing expectations of online experiences and less patience for distractions or loading lags while shopping.

To show just how detrimental downtime can be for e-retail sites, we created an infographic to demonstrate the true business impact of website performance issues during the holiday shopping season. For example, based on industry surveys, Gartner found that each minute of downtime can cost companies up to $5,600, which extrapolates to well over $300K an hour! Additionally, Radware published data showing that consumers will typically only wait about four seconds before abandoning a slow web page. That can equate to quite a few lost sales. Check out our infographic for the full story:

These high customer expectations put a tremendous level of stress on the DevOps teams to ensure visitors to their sites have a consistent, functional experience across every platform (whether the e-retail site is being accessed via phone, tablet or desktop). When asking your dev and test teams how they plan to accommodate customer expectations, they will all respond with a resounding, “Easier said than done!”

The rationale behind their answer is understandable, and yet as cloudy as a mid-western blizzard about to dump three feet of snow on all those holiday shoppers. The holiday shopping storm is made up of an ever-increasing variety of smartphones, tablets, laptops and wearables, all running different operating systems and browsers (aka user profiles). Every customer has different shopping habits, browsing preferences and, most importantly, purchasing techniques, which, according to the latest IBM Online Holiday Mobile Shopping Report1, are heavily influenced by the screen size and performance of the device and vary across the globe! For instance, in the UK smartphones and tablets have bounce rates of 36% and 29% respectively, while in the US the figures are 41% and 33%!

So, what does it take for retailers to have a better answer? Here are a few tips to get you started:

  1. Leverage web traffic data to help prioritize user profiles, with the emphasis on the right customers. That will help minimize the testing efforts to get things properly verified, for your defined audience.
  2. Provide testing automation tools that enable both functional and performance/load testing across all the different user profiles and network conditions to minimize the time taken from finding to fixing any possible issues.
  3. Use the cloud as a way to reduce the costs of running tests that can accurately represent the volume and the diversity of user profiles required by your business priorities.
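The first tip can be sketched as a simple coverage calculation: sort user profiles by traffic share and test the smallest set that reaches a chosen coverage target. This is an illustrative sketch only; the profile names and traffic shares below are invented:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: pick the fewest user profiles (device/OS/browser combos)
// whose combined traffic share reaches a coverage target.
public class ProfilePrioritizer {
    public static List<String> select(Map<String, Double> trafficShare, double target) {
        List<String> chosen = new ArrayList<>();
        double covered = 0.0;
        // Assumes the map is already sorted by descending traffic share.
        for (Map.Entry<String, Double> e : trafficShare.entrySet()) {
            if (covered >= target) break;
            chosen.add(e.getKey());
            covered += e.getValue();
        }
        return chosen;
    }

    public static void main(String[] args) {
        Map<String, Double> share = new LinkedHashMap<>();
        share.put("iPhone/Safari", 0.40);
        share.put("Android/Chrome", 0.35);
        share.put("Windows/IE11", 0.20);
        share.put("Linux/Firefox", 0.05);
        // Smallest set of profiles covering 90% of visits.
        System.out.println(select(share, 0.90));
    }
}
```

Testing only the profiles that dominate real traffic keeps the verification effort focused on the audience that matters.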

While the tips above will prove a valuable start for your organization this holiday season, the best gift you can give your customers is knowing what they want. DevOps and testing teams need to minimize their risk and build brand loyalty by spending their time developing and testing the features customers want. Providing a consistent, functional and appealing experience will attract shoppers to your website and win customers well after the holiday season is over.

Renato Quedas

RenatoQ

Sources:

  1. Growth in online shopping
    Source: Testing times in eCommerce white paper – based on Forester research: Forrester Research Online Retail Forecast, 2013 To 2018 (US) www.forrester.com
  2. Shift in customer expectations
    Source: www.webperformancetoday.com
  3. Websites getting slower:
    Source: HTTP Archive – as mentioned in www.borland.com/Blog/July-2015/is-it-me-or-is-the-web-getting-slower
  4. Most popular web browsers
    Source: GS.statcounter.com as referenced in Borland’s ‘The Cross-browser configuration conundrum’ white paper
  5. Total number of digital buyers:
    Source: www.statista.com
  6. Added weight:
    Source: www.worldwidewebsize.com and http://www.borland.com/Blog/July-2015/is-it-me-or-is-the-web-getting-slower
  7. Impact of a one second improvement/delay
    Source: http://www.slideshare.net/Radware/radware-sotu-winter2014infographicwebperformance – based on the research from Strange loop networks: the impact of HTML delay on mobile and business metrics
  8. Black Friday total spend
    Source: Time Magazine
  9. Average cost per minute of downtime
    Source: Gartner