Summary
Today's software systems usually feature Graphical User Interfaces (GUIs), which have become an important and widely accepted way of interacting with software. The quality of a GUI can be a crucial factor in users' decisions to use or not use a system.
GUI testing (with the purpose of finding defects in the GUI or in the overall application) is difficult, extremely time-consuming, and costly, with very few tools and techniques available to aid the testing process. Most currently used GUI testing methods are largely ad hoc and require the test designer to manually develop test cases and judge whether the GUI software has been adequately tested.
There have been efforts to automate the GUI testing process. Some tools, called capture/replay tools, are commercially available. They can be used to record user interactions in test scripts and replay them later. Among other problems, these tools still require too much manual effort and postpone the testing activity to the end of the development process, when the GUI is already constructed. They are useful mainly for regression testing, not for finding bugs in the first place.
Specification-based testing methods and techniques can help to systematize and further automate the GUI testing process, because test cases can be generated automatically from formal specifications/models. However, they are not commonly applied to GUIs. Fostering the adoption of model-based GUI testing methods requires acceptable GUI modelling environments, mechanisms to control the test-case explosion problem for GUIs, and tools to bridge the gap between the model and the implementation.
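To make the idea of generating test cases from a model concrete, the following Python sketch is purely illustrative (it is not the project's Spec#/Spec Explorer tooling, and all names in it are hypothetical): a trivial login dialog is modelled as a finite state machine, and bounded breadth-first exploration enumerates action sequences as test cases. The depth bound is a deliberately crude stand-in for the test-case explosion controls discussed above.

```python
# Illustrative model-based test generation sketch (hypothetical names,
# not the project's actual Spec# models or tools).
from collections import deque

class LoginModel:
    """Model of a trivial login dialog; a state is (text_field, logged_in)."""
    initial_state = ("empty", False)

    @staticmethod
    def actions(state):
        """Return the actions enabled in `state` and the state each leads to."""
        text, logged_in = state
        moves = {}
        if not logged_in:
            moves["type_password"] = ("filled", False)
            if text == "filled":
                moves["press_ok"] = ("empty", True)   # successful login
            moves["press_cancel"] = ("empty", False)  # clears the field
        else:
            moves["logout"] = ("empty", False)
        return moves

def generate_tests(model, max_len=3):
    """Breadth-first exploration of the model: every non-empty action
    sequence up to max_len becomes a test case. Bounding the depth is one
    (crude) way to keep the number of generated tests from exploding."""
    tests, frontier = [], deque([(model.initial_state, [])])
    while frontier:
        state, trace = frontier.popleft()
        if trace:
            tests.append(trace)
        if len(trace) < max_len:
            for action, next_state in model.actions(state).items():
                frontier.append((next_state, trace + [action]))
    return tests

tests = generate_tests(LoginModel)
print(len(tests), "test sequences generated")
```

Each generated sequence would then be replayed against the real GUI through some mapping between model actions and concrete widget events; even this tiny model shows how quickly the number of sequences grows with the depth bound.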
In previous work, the research team from FEUP (Ana Paiva, João Faria and Raul Vidal), in cooperation with researchers from the Foundations of Software Engineering group of Microsoft Research (FSE/MR), set up an initial environment and experiments to demonstrate the feasibility of automating the testing of GUIs based on formal specifications. The FSE/MR group provided the basic specification languages and testing tools needed for automating the testing of APIs based on formal specifications: the Spec# specification language (a model-based formal specification language designed as an extension of C#) and the Spec Explorer tool. The FEUP team extended these testing tools and techniques to GUI testing: techniques and helper libraries for modelling GUIs in Spec#, a GUI mapping tool to automate the mapping between the GUI model and the implementation, and a tool that exploits the hierarchical structure of GUIs to avoid test-case explosion. Successful experiments were conducted with simple GUI applications. The results were demonstrated to practitioners from product groups and received significant interest. However, some shortcomings of the approach still prevent its adoption in industrial contexts: the time and effort required to build the GUI models, the reluctance of GUI testers and modellers to write textual formal specifications (they strongly prefer the graphical notations with which they are familiar), and the effort required to configure the test generation process to guarantee the quality of the generated test cases.
The goal of this project is to develop a set of tools and techniques to automate specification-based GUI testing, overcoming the shortcomings identified in our previous work, so that the approach can be adopted in industrial environments.