Test automation practices frequently appear to reside in the hands of a small, specialized group of individuals within most testing organizations. Although automation tooling (e.g., macro engines, native code scripting, commercial automation products) has been around for decades, many quality assurance teams continue to rely (sometimes exclusively) on manual efforts. While some organizations insist they could not function without automation, the vast majority still appear to be bound by the capacity of their manual testers.
In the past, I would have understood such constraints. Some automation options in the market are aimed at specialized individuals who need both testing solutions and the skills of a developer. Newer tools (or newer releases of older products) have evolved to support automation coding in more visual and user-friendly IDEs to combat this problem. Regardless of the tooling, some individuals will of course be better suited to the demands of logic coding than others.
The challenge I pose to the test automation community is expressed simply: how do we adapt automation practices so that more people per organization may contribute to exercising automated tests? The inverse question is just as important: how can the efforts of a single automation engineer be better leveraged to serve a larger number of testing needs?
The answer resides in our implementation of test automation. Implementation of automation is expressed by how an automation framework is developed and used in an organization. No matter the type of test automation tool or scripting employed, a test automation framework defines how that technology is put to use in the organization.
Traditional (or initial) approaches to test automation started as linear coding, whereby all of the actions for a test case or transaction were captured in a single script. Over time, this evolved into modular designs in which a driver (or parent) script called a subordinate (or child) script. Libraries could be formed from these modules such that parent scripts called child scripts, which called grandchild scripts, and so on. These approaches grew to encompass logic that responded to input data (or data from applications) to determine which subordinate functions to exercise (e.g., if the password length is zero, test for an error message to be displayed; else log into the application). Such approaches are currently employed across many organizations, and they have served the software testing industry well over decades. Yet these practices have also limited organizations: often only key individuals with a development skill set can comprehend the automation that has been built. This is an exclusive approach to automation that fosters specialists rather than encouraging the inclusion of automation in the service of all testers. It is now time for a new paradigm.
Newer approaches to test automation are developing across the quality assurance market. These approaches include keyword-driven, behavior-driven, and state-driven frameworks. Each of these framework styles has its merits. However, they share a commonality that shifts the automation perspective from an exclusive practice to an inclusive approach.
These frameworks implement automation by recognizing that test automation can be divided into multiple process areas:
- Automation Design
- Automation Implementation
- Test Case Design
In automation design, the elements or behaviors of an application to which automation will be applied are identified. Each transactional element (or atomic function) is catalogued. In more tangible terms, automation design may determine that elements are discrete controls on a screen (e.g., a text field, a radio button, a frame containing a few controls). Alternatively, these elements may be common (yet small and discrete) actions in the application, such as “enter a form value”, “navigate a header menu list”, or even “login”. These elements may be considered analogous to individual steps in a manual script. The analysis required for automation design may be performed by automation specialists or by non-technical resources, which allows individuals with less specialized skill sets to participate in the automation effort. These resources focus on the question “what actions do we need to be able to perform to execute transactions?” The focus shifts from worrying about how to technically achieve the automation to an abstraction based on real-world needs.
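As a rough illustration of what an automation-design artifact might look like, the sketch below catalogues a handful of atomic actions (“keywords”) for a hypothetical login screen. The keyword names, parameters, and descriptions are illustrative assumptions, not tied to any particular tool; the point is that this catalogue can be drafted without writing a line of automation code.

```python
# Hypothetical automation-design catalogue: each entry names an atomic
# action, the parameters it needs, and a plain-language description.
# A non-technical analyst could author this; an engineer implements it later.
KEYWORD_CATALOG = {
    "enter_form_value": {"params": ["field_name", "value"],
                         "description": "Type a value into a named text field"},
    "click_button":     {"params": ["button_name"],
                         "description": "Click a named button"},
    "verify_message":   {"params": ["expected_text"],
                         "description": "Assert that a message is displayed"},
    "login":            {"params": ["username", "password"],
                         "description": "Composite action: enter credentials and submit"},
}

def describe(keyword: str) -> str:
    """Render a one-line, human-readable summary of a catalogued action."""
    entry = KEYWORD_CATALOG[keyword]
    return f"{keyword}({', '.join(entry['params'])}): {entry['description']}"
```

A catalogue like this becomes the shared vocabulary between the people designing tests and the engineer who will implement each action.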
In automation implementation, an automation engineer (or someone capable of building an automation script) builds the automation code to strictly meet each element from automation design. While this component may be more technically challenging and require deeper knowledge of automation tools or scripting languages, this process is still simplified when compared to traditional approaches. The automation engineer has less logic to develop, as functions are bounded by the need to create small, discrete modules for specific needs. This leads to rapid development of many small functions rather than laborious creation of comprehensive transactional scripts. Furthermore, this promotes increased maintainability over time. If a small function or element of a transaction is changed in the application, maintenance is quickly applied to the smaller function. For example, if an application adds a “secret question” during login, the automation engineer knows that the login function is the only location in the automation libraries which needs to be modified.
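A minimal sketch of that implementation step is shown below. Each catalogued action becomes one small, single-purpose function. The `FakeApp` stub stands in for a real UI driver (such as a Selenium WebDriver instance); its interface is entirely hypothetical and exists only to make the example self-contained.

```python
class FakeApp:
    """Toy application stub used in place of a real UI driver."""
    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def type_into(self, field, value):
        self.fields[field] = value

    def submit_login(self):
        # Logged in only if both credentials were supplied (toy rule).
        self.logged_in = bool(self.fields.get("username")) and \
                         bool(self.fields.get("password"))

def enter_form_value(app, field, value):
    """Atomic action: type a value into a named field."""
    app.type_into(field, value)

def login(app, username, password):
    """Composite action built from atomic ones. If the application later
    adds a 'secret question' step, this function is the only place in the
    library that needs maintenance."""
    enter_form_value(app, "username", username)
    enter_form_value(app, "password", password)
    app.submit_login()
```

Because `login` is the single owner of the login transaction, a change such as the “secret question” example touches exactly one function, not every script that logs in.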
Test case design is the process area where testing is actually assembled, and it too can be inclusive. In test case design, the smaller building blocks of elements or functions are assembled to represent a test case. The elements defined in automation design are triggered sequentially to determine pass/fail status. This test design may be written in the automation tooling language or, in many cases, driven by rows in a spreadsheet. When implemented through spreadsheets, less-technical resources can define the tests to be executed in an automated fashion. The automation framework may be designed to open a file containing the defined steps and execute the implementation code.
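The spreadsheet-driven pattern described above can be sketched as a small dispatcher: each row names a keyword and its arguments, and a runner maps rows to implementation functions. The keyword names, CSV layout, and pass/fail rule here are illustrative assumptions for the sketch, not a specific product's format.

```python
import csv
import io

# Two tiny illustrative keyword implementations operating on shared state.
def set_value(state, key, value):
    state[key] = value

def verify_value(state, key, expected):
    return state.get(key) == expected

# Dispatch table: keyword name (as written in the spreadsheet) -> function.
KEYWORDS = {"set_value": set_value, "verify_value": verify_value}

def run_test(csv_text):
    """Execute each spreadsheet row in order; fail on the first
    verification mismatch, otherwise pass."""
    state = {}
    for row in csv.reader(io.StringIO(csv_text)):
        keyword, args = row[0], [a for a in row[1:] if a]
        result = KEYWORDS[keyword](state, *args)
        if result is False:
            return "FAIL"
    return "PASS"

# A test case authored as plain rows, as a tester might write in a sheet.
TEST_CASE = "set_value,username,alice\nverify_value,username,alice\n"
```

In a real framework, `set_value` and `verify_value` would be replaced by the UI-driving functions from automation implementation, and the CSV text would be loaded from the tester's spreadsheet file.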
By developing frameworks that separate test case development from implementation details, automation coding becomes faster, maintenance overhead is reduced, and overall efficiency improves. More importantly, this separation of automation functions offers the possibility of including more people in automation practices. More individuals in the organization may realize the benefits of any implementation efforts applied, and more testing may be achieved in less time as constraints imposed by available automation engineering personnel are reduced.
Please contribute to this discussion. How does your organization apply automation to testing processes? What type of framework approaches do you use? What problems and constraints do you face?