Visualizing a Use Case

Have you ever put the finishing touches on a use case in a Word document, only to find that the Visio diagram depicting the process flow is now out of date? If you are lucky, you have both a visual model of your functional flows and the corresponding text to back it up – and let's not forget about the corresponding test cases!

In the fast-paced world of software development, if you don't have solid processes in place – and a team that follows them – you might find yourself "out of sync" on a regular basis.  Industry figures such as "30% of all project work is considered to be rework… and 70% of that rework can be attributed to requirements (incorrect, incomplete, changing, etc.)" start to become a reality as you struggle to keep your teams aligned.

The practice of writing use cases from a standard template was a significant improvement in promoting reuse, consistency and best practices.  However, a use case in document form is subject to many potential pitfalls.

Let’s look at the following template, courtesy of the International Institute of Business Analysis (IIBA) St. Louis Chapter:

Skip past the cover page, table of contents, revision history, approvals and the list of use cases (already sounds tedious, right?).  Now let's look at the components of the use case template:

The core structure is based on a feature, the corresponding model (visualization) and the use case (text description).  This should be done for every core feature of your application and, depending on the size of your project, this document could become quite large.

The use case itself comprises a header containing the use case ID, the use case name, who created it and when, and who last modified it and when.  As you can see, we haven't even gotten to the meat of the use case and we already have a lot of implied maintenance work – so you need to make sure you have a good document repository and a good change management process!

Here is a list of the recommended data that should be captured for each use case:

  • Actors
  • Description
  • Trigger
  • Preconditions
  • Postconditions
  • Normal flow
  • Alternative flows
  • Exceptions
  • Includes
  • Frequency of use
  • Special requirements
  • Assumptions
  • Notes and issues
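To make the template's structure concrete, here is a minimal sketch of those same fields as a Python data structure. The class, field names and sample values simply mirror the list above – this is an illustration, not part of any published IIBA tooling.

```python
from dataclasses import dataclass, field

# Each field corresponds to one box in the IIBA-style use case template.
@dataclass
class UseCase:
    use_case_id: str
    name: str
    actors: list[str] = field(default_factory=list)
    description: str = ""
    trigger: str = ""
    preconditions: list[str] = field(default_factory=list)
    postconditions: list[str] = field(default_factory=list)
    normal_flow: list[str] = field(default_factory=list)
    alternative_flows: dict[str, list[str]] = field(default_factory=dict)
    exceptions: dict[str, list[str]] = field(default_factory=dict)
    includes: list[str] = field(default_factory=list)
    frequency_of_use: str = ""
    special_requirements: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    notes_and_issues: list[str] = field(default_factory=list)

# A toy instance based on the ATM example discussed below; the step
# wording is made up for illustration.
withdraw = UseCase(
    use_case_id="UC-2",
    name="ATM Withdraw Cash",
    actors=["Bank Customer"],
    trigger="Customer selects 'Withdraw Cash' from the main menu",
    normal_flow=[
        "Customer selects account",
        "Customer enters amount",
        "ATM dispenses cash and prints receipt",
    ],
    includes=["UC-1 Access and Main Menu"],
)
print(withdraw.name, len(withdraw.normal_flow))
```

One nice side effect of holding the template as data rather than as a Word document: you can check it mechanically, for example by flagging any use case whose normal flow is empty – exactly the kind of validation that the manual inspection described below cannot give you.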

The problem with doing this in textual format is that you lose the context of where you are in the process flow.  Surely there must be a better way?  By combining a visual approach with the text – using the visual model as the focus – you can save time by modeling only to the level of detail necessary, validate that you have covered all the possible normal and alternative flows and, most importantly, capture key items within the context of the use case steps, making it much easier to look at the entire process or at individual levels of detail as needed.

If you look through the template example, you can quickly see that it is a manual process that cannot be validated without visual inspection, so it is subject to human error.  It is also riddled with "rework", since you have to cross-reference previous steps across the different data fields to make sense of everything.

Here is a visual depiction of the example provided in the template.  I have actually broken the example into two use cases in order to minimize required testing by simply reusing the common features:

Access and Main Menu

ATM Withdraw Cash

I have added some colorful swim lanes to break the activity steps into logical groupings. If you think the visualizations look complicated, you might be right… they say a picture is worth a thousand words, so what you have done is taken the thousand words from the use case – with all of their variations – and put them into one visual diagram!  The good news is that it is surprisingly easy to create these diagrams and to translate all of the required data from the use case template directly into this model.  A majority of the complexities of the use case are handled automatically for you.  When it comes time for changes, you no longer have to worry about keeping your model in sync with your text details, and you certainly no longer have to worry about keeping references to steps and other parts of the use case document in agreement!

In the next blog, we’ll look at how to model the “Normal flow” described in the use case template.

Black Friday

Ahh, the Holidays. That wonderful time of the year that fills us with joy and happiness, and sets our innate primal consumer instincts ablaze. Serious shoppers will do just about anything for that one out-of-this-world bargain. They happily abandon family gatherings, stand in lines that wrap around a building two or three times, fight off horrific weather or, even worse, other competitive shoppers.

The holiday battleground has extended its reach from brick-and-mortar locations to the devices in your possession, hence the shopping boom stretching from Black Friday through Cyber Monday. Forrester Research predicts an annual online shopping growth rate of nearly 10% through 2018. Assuming that prediction is correct, the average consumer will spend roughly $2,000 in online shopping by 2018! That means a potential of $461 billion in online spending alone!

Clearly, retailers will be fighting furiously for every slice of that $461 billion pie. To accommodate those demanding customer expectations, it's especially important during the holiday season for the entire organization to be on the same page, from executives to marketing to development. Customers have ever-growing expectations for online experiences and less patience for distractions or loading lags while shopping.

To show just how detrimental downtime can be for e-retail sites, we created an infographic to demonstrate the true business impact of website performance issues during the holiday shopping season. For example, based on industry surveys, Gartner found that each minute of downtime can cost companies up to $5,600, which extrapolates to well over $300K an hour! Additionally, Radware published data showing that consumers will typically only wait about four seconds before abandoning a slow web page. That can equate to quite a few lost sales. Check out our infographic for the full story:
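As a quick sanity check of the figures above – the per-minute cost is Gartner's, the extrapolation is simple arithmetic:

```python
# Gartner's per-minute downtime cost, extrapolated to an hourly figure.
cost_per_minute = 5_600                 # USD per minute of downtime (Gartner)
cost_per_hour = cost_per_minute * 60    # "well over $300K an hour"
print(f"${cost_per_hour:,} per hour")
```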

These high customer expectations put a tremendous level of stress on DevOps teams to ensure visitors to their sites have a consistent, functional experience across every platform (whether the e-retail site is being accessed via phone, tablet or desktop). When you ask your dev and test teams how they plan to accommodate those customer expectations, they will all respond with a resounding "Easier said than done!"

The rationale behind their answer is understandable, and yet as cloudy as a Midwestern blizzard about to dump three feet of snow on all those holiday shoppers. The holiday shopping storm is made up of an ever-increasing variety of smartphones, tablets, laptops and wearables, all running different operating systems and browsers (aka user profiles). Every customer has different shopping habits, browsing preferences and, most importantly, purchasing techniques, which, according to the latest IBM Online Holiday Mobile Shopping Report, are heavily influenced by the screen size and performance of the device and vary across the globe! For instance, in the UK smartphones and tablets have bounce rates of 36% and 29% respectively, while in the US the figures are 41% and 33%!

So, what does it take for retailers to have a better answer? Here are a few tips to get you started:

  1. Leverage web traffic data to prioritize user profiles, with the emphasis on the right customers. That will help minimize the testing effort needed to properly verify things for your defined audience.
  2. Adopt test automation tools that enable both functional and performance/load testing across all the different user profiles and network conditions, minimizing the time from finding an issue to fixing it.
  3. Use the cloud as a way to reduce the costs of running tests that accurately represent the volume and diversity of user profiles required by your business priorities.
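As a sketch of tip 1, a simple greedy selection can pick the smallest set of user profiles that covers a target share of visits. The profile names and traffic percentages below are made-up illustration data, not real analytics:

```python
# Hypothetical traffic shares (percent of visits) per user profile.
traffic = {
    "Chrome / Windows desktop": 34.0,
    "Safari / iPhone": 22.0,
    "Chrome / Android phone": 18.0,
    "Safari / iPad": 9.0,
    "Edge / Windows desktop": 7.0,
    "Firefox / Windows desktop": 5.0,
    "Other": 5.0,
}

def prioritized_profiles(traffic, coverage_target=80.0):
    """Greedily pick profiles, highest traffic first, until the chosen
    set covers the target percentage of visits."""
    chosen, covered = [], 0.0
    for profile, share in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if covered >= coverage_target:
            break
        chosen.append(profile)
        covered += share
    return chosen, covered

profiles, covered = prioritized_profiles(traffic)
print(profiles, covered)
```

With these numbers, four profiles already cover 83% of visits, so the remaining long tail can be tested less exhaustively – which is exactly the effort reduction tip 1 is after.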

While the details above will prove to be a valuable start for your organization this holiday season, the best gift you can give your customers is knowing what they want. DevOps and testing teams need to minimize their risk and build brand loyalty by spending their time developing and testing the features customers want. Providing a consistent, functional and appealing experience will attract shoppers to your website and win customers well after the holiday season is over.

Renato Quedas


Sources:

  1. Growth in online shopping
    Source: Testing Times in eCommerce white paper – based on Forrester Research Online Retail Forecast, 2013 To 2018 (US), www.forrester.com
  2. Shift in customer expectations
    Source: www.webperformancetoday.com
  3. Websites getting slower
    Source: HTTP Archive – as mentioned in www.borland.com/Blog/July-2015/is-it-me-or-is-the-web-getting-slower
  4. Most popular web browsers
    Source: GS.statcounter.com – as referenced in Borland's 'The Cross-Browser Configuration Conundrum' white paper
  5. Total number of digital buyers
    Source: www.statista.com
  6. Added weight
    Source: www.worldwidewebsize.com and http://www.borland.com/Blog/July-2015/is-it-me-or-is-the-web-getting-slower
  7. Impact of a one-second improvement/delay
    Source: http://www.slideshare.net/Radware/radware-sotu-winter2014infographicwebperformance – based on research from Strangeloop Networks on the impact of HTML delay on mobile and business metrics
  8. Black Friday total spend
    Source: Time Magazine
  9. Average cost per minute of downtime
    Source: Gartner

Software Testing – Automation or manual testing, that is the question.

Software Testing – Automation or manual testing, that is the question. Automation isn’t an automatic choice – Renato Quedas wonders why………….

In this recent posting, the question of when to automate and when to stick to manual got another airing. It prompted the usual flurry of comments and it’s great to see the passion out there. So here’s my view. Feel free to throw rocks at it – but I’d prefer it if you just use the comments box…!

In my view, test automation should be non-disruptive, and it works best when it supplements and extends manual testing to eliminate the mundane, repetitive parts of the manual test process.  But it's always important to keep in mind that software testing is about ensuring functionality in the way that human beings use it. What that means is that until automation can anticipate every aspect of human behavior, the initial test implementation will still be, to some extent, manual.

Capture and … automate

Once the initial test procedures are captured, though, automation can eliminate the redundant test tasks that don't change. That's why Borland introduced keyword-driven testing (KDT). This enables test procedures to be implemented once and then assigned to a keyword – or a keyword to be defined and the test procedures scripted for it – so that it can be reused to automate repetitive features.

Test implementation can be a combination of captured keystrokes, mouse clicks, gestures, etc. that are converted into scripts, along with some manual scripting when required to complete the test procedures.  Once implemented as keywords, the test procedures can then be connected together to create complicated, multi-faceted test scripts much more easily than writing those scripts from scratch.
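To illustrate the mechanics, here is a minimal sketch of keyword-driven testing in Python. The registry, the keyword names and the tiny actions are all hypothetical – they stand in for the captured procedures described above, not for Borland's actual KDT implementation:

```python
# Registry mapping human-readable keyword names to test procedures.
KEYWORDS = {}

def keyword(name):
    """Register a function as the implementation of a keyword."""
    def register(func):
        KEYWORDS[name] = func
        return func
    return register

# Each keyword wraps one reusable, already-captured procedure.
@keyword("Login")
def login(ctx):
    ctx["logged_in"] = True

@keyword("Select Shopping Cart")
def select_cart(ctx):
    ctx["screen"] = "cart"

@keyword("Click Select Button")
def click_select(ctx):
    ctx["clicks"] = ctx.get("clicks", 0) + 1

def run_script(script):
    """Execute a test script, which is just an ordered list of keywords."""
    ctx = {}
    for name in script:
        KEYWORDS[name](ctx)   # an unknown keyword raises KeyError
    return ctx

# A multi-step test assembled from reusable keywords, not raw script.
result = run_script(["Login", "Select Shopping Cart", "Click Select Button"])
print(result)
```

The point of the pattern is that the script itself is plain data: a non-technical stakeholder can read, reorder or extend the list of keyword names without ever touching the underlying implementations.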


Automate to collaborate

Keyword-driven testing can also facilitate role-based testing and greater test collaboration by enabling non-technical business stakeholders to participate in software testing without having to understand the test script details. Business users can interact with keywords such as "Click Select Button" or "Select Shopping Cart" while the underlying test script implements those operations.

Just as object-oriented programming did for software development, keyword-driven testing enables reusable test procedures to be captured and implemented in a way that boosts test automation considerably.

It enables manual test implementation to be reduced as much as possible through automation, while still allowing for the manual variations needed for realistic software tests.  It also enables greater software test participation by all key stakeholders, including less technical operations and business personnel.

So in summary, I'm joining the narrative – automate when you can, but don't treat it as a silver bullet. But that's my view. What's yours…?

RenatoQ