Functional Testing for Every Mainframe Application
Despite long-rumored replacement by server clusters and other architectures, mainframes remain vital to the operation of the most critical applications. In fact, mainframe capacity continues to grow as it becomes clear that it is the only platform that can reliably handle the enormous transaction volumes and performance demands of an increasingly online economy. Far from being a legacy platform, significant new development occurs continuously on the mainframe, creating demand for thorough test coverage of new features as well as regression testing of massive feature inventories.
This constant need for extensive testing simply cannot be met with manual test practices. Unfortunately, test automation solutions for the mainframe have not kept pace. Early solutions offered simple capture/replay tools that recorded keystroke sequences and captured screen responses. When these proved to be impossible to maintain and too simplistic to handle the numerous variations necessary for full coverage, they were augmented with cryptic scripting languages such as REXX.
While more powerful, script-based testing tools require programming skills, creating a barrier for analysts whose system and business knowledge make them essential to the test process. This divide kept application experts from successful automation and resulted in continued manual testing.
With the advent of the PC and more user-friendly graphical interfaces, mainframe access moved from so-called dumb terminals to workstation-based terminal emulation. This opened up mainframe testing to the newer GUI-based testing tools, which treat the emulator itself as just another Windows-based application.
But the original problems remain: simple record-and-replay techniques aren't powerful enough and require too much maintenance overhead, and the underlying scripting languages are too arcane for analysts to use effectively. This white paper explores the challenges of record/replay approaches as well as scripting, and introduces the next generation of mainframe test automation solutions.
The typical terminal emulator-based test automation tool records the keyboard activity of the tester and saves it into a script. A recording that enters a customer name and address might look something like this:
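The recording itself is not reproduced here; in a typical capture/replay tool (the exact syntax varies by product, and this sketch is purely illustrative) it might read:

```
Type    "John Smith"
Press   TAB
Type    "123 Main Street"
Press   TAB
Type    "Springfield"
Press   ENTER
```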
Although on the surface this might appear straightforward, it is actually far from it. The first issue is that there is no indication of where on the screen the text is being typed, or, for that matter, on which screen. That is, the text is keyed into the current cursor location, wherever that happens to be. For a busy screen with many fields, it may be impossible to ascertain which data is being entered into which field. This is especially problematic for future maintenance.
A related problem is that the lack of context information makes errors more likely. For example, a field may have an auto-tab capability, in which the cursor automatically advances to the next field once the current field is filled, but require the Tab key to be pressed when the field is not filled. This means that if the data content is modified, it may completely change the behavior of the recorded script, so that data is entered into the wrong field.
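The auto-tab pitfall can be sketched in Python. This is a toy simulation of two adjacent screen fields, not a real emulator API; the five-character field width and the keystroke representation are assumptions made purely for illustration:

```python
# Toy simulation of capture/replay against two adjacent fields.
# The first field is 5 characters wide and has auto-tab enabled.
FIELD_WIDTH = 5

def replay(keystrokes):
    """Replay recorded keystrokes; return the resulting field contents."""
    fields = ["", ""]
    current = 0
    for key in keystrokes:
        if key == "TAB":
            current = min(current + 1, 1)
        else:  # a single typed character
            fields[current] += key
            # auto-tab: the cursor advances automatically when field 1 fills
            if current == 0 and len(fields[0]) == FIELD_WIDTH:
                current = 1
    return fields

# Recorded with a full 5-character value: auto-tab fired during recording,
# so no TAB keystroke was captured.
recorded = list("12345") + list("AB")
print(replay(recorded))   # ['12345', 'AB'] -- as intended

# Replayed with a 3-character value: auto-tab never fires, and with no TAB
# in the script the second value spills into the first field.
modified = list("123") + list("AB")
print(replay(modified))   # ['123AB', ''] -- data in the wrong field
```

The identical keystroke sequence produces different field contents once the data length changes, which is exactly why recordings break when data is varied.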
Similar challenges arise when managing synchronization, or timing. Because the response time on the host will vary with network traffic and processing loads, a script that is recorded under quick response times may get ahead when replayed during slower responses.
When this happens, the script loses context altogether and will either lose data or enter it into the wrong screen. Some tools try to address this by looking for the status line (X) or clock symbol, but even these indicators are unreliable. Terminals configured for response-mode will remove the busy indicator even though the system is in a wait state.
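A more robust remedy, shown here as a minimal Python sketch, is to synchronize on expected screen content rather than on the busy indicator. The `read_screen` callback is a stand-in assumed for illustration; a real harness would read the emulator's screen buffer:

```python
import time

def wait_for_text(read_screen, expected, timeout=10.0, interval=0.05):
    """Poll the screen until `expected` appears, instead of trusting the
    busy indicator or a fixed delay. Raises TimeoutError on expiry."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if expected in read_screen():
            return True
        time.sleep(interval)
    raise TimeoutError(f"screen never showed {expected!r}")

# Simulated host: the customer screen "arrives" only on the third poll.
responses = iter(["", "", "CUSTOMER INQUIRY"])
last = [""]
def read_screen():
    last[0] = next(responses, last[0])
    return last[0]

print(wait_for_text(read_screen, "CUSTOMER INQUIRY"))  # True
```

Because the wait is tied to the content the next step actually depends on, the script cannot get ahead of the host regardless of response time.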
Yet another issue with recorded scripts is that they hard-code the data into the script itself. Since effective test coverage usually requires many different data conditions, scripts either become too long to manage, or the tester must use advanced scripting techniques to externalize the data into files, substitute variables for literal values, and then write additional code and logic to open and read those files.

Finally, the typical method of verifying the results of record/replay scripts is the capture and comparison of screen images. But even this is not as straightforward as it might seem. First, there are usually fields (dates and times, for example) that vary from one test run to the next; these fields must be masked so that they are not compared. Many tools allow selected screen areas to be excluded, which addresses fields that appear in fixed locations.
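The data-externalization technique described above can be sketched briefly. The `enter_customer` step is a hypothetical stand-in for the emulator actions that would actually key the data in:

```python
import csv
import io

# Externalized test data; in practice this would live in a separate file.
TEST_DATA = """name,city
John Smith,Springfield
Jane Doe,Shelbyville
"""

def enter_customer(name, city):
    # Stand-in for the emulator keystrokes that would enter the data.
    return f"entered {name} / {city}"

# One recorded flow, replayed once per data row instead of once per script.
results = [enter_customer(row["name"], row["city"])
           for row in csv.DictReader(io.StringIO(TEST_DATA))]
print(results)
```

Even this small amount of plumbing (file handling, variable substitution, iteration) is exactly the programming effort the white paper argues most analysts should not have to take on.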
But there is a more complex level of comparison that is harder to manage. Many systems do not allow the exact same data values to be re-entered: an account number may not be reusable, or a transaction identifier may be required to be unique. In that case, not only must the script and/or data be constantly modified, but the screen comparisons must also be adjusted to prevent failures when the values change. Because of these issues, capture/replay is rarely a viable long-term strategy, and most companies find themselves forced to use scripting techniques.
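Masking volatile fields before comparison might look like the following Python sketch; the screen layout and mask coordinates are assumptions made for illustration:

```python
def masked(screen, mask_regions):
    """Blank out volatile regions (row, start_col, end_col) before comparing."""
    rows = [list(line) for line in screen]
    for r, c1, c2 in mask_regions:
        for c in range(c1, c2):
            rows[r][c] = "#"
    return ["".join(line) for line in rows]

baseline = ["DATE: 2024-01-01   ACCT: 1001", "NAME: JOHN SMITH"]
actual   = ["DATE: 2025-06-30   ACCT: 2417", "NAME: JOHN SMITH"]

# Mask the run date (row 0, cols 6-16) and the unique account number
# (row 0, cols 25-29) so they are excluded from the comparison.
masks = [(0, 6, 16), (0, 25, 29)]
print(masked(baseline, masks) == masked(actual, masks))  # True
```

The maintenance burden comes from keeping the mask coordinates in step with every screen change, which is the "massaging" the text refers to.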
Scripting languages are essentially specialized programming languages. While they are powerful and therefore flexible, they introduce all of the attendant issues that come with a development effort. The result is cryptic, complex code:
public function edit_Input ( sFieldId, sFieldName, sScreenId, sScreenName )
{
    if ( field_exists ( sFieldId ) != E_OK )
    {
        gsStepStatus = "Field does not exist";
        return FAILED;
    }
    if ( GetStringArgument ( "Value", gsExpectedValue, sDefaultValue ) == FAILED )
    {
        gsStepStatus = "Parameter (Value) could not be retrieved";
        return FAILED;
    }
    status = edit_set ( sFieldId, gsExpectedValue );
    if ( status != E_OK )
    {
        gsStepStatus = "Value could not be input";
        return FAILED;
    }
    gbActionStatus = PASSED;
    return PASSED;
}
The most obvious issue, of course, is the skills barrier. The ideal tester is often a business or systems analyst or expert user with domain expertise, but not with programming skills. Requiring these personnel to learn programming on the job is unrealistic at best.
Because technical skills are required, many companies divide the automation effort into test case design and documentation, performed by analysts, and automation, performed by engineers. This approach adds significant time, cost and complexity to the entire process.
But more importantly, the real implication is that the company is now essentially developing software to test software. If each test case is translated into a program, there will be more code in the test system than in the application being tested, because each feature must be tested under multiple conditions, both valid and invalid.
Furthermore, the test automation system must contain code to organize and schedule execution, manage external data files, handle error conditions and recover from them, and log results.
In the end, this results in a one-off system for which the company has 100% cost of ownership, and one that requires a major maintenance effort when the application under test is modified. Furthermore, any turnover in personnel typically results in rewrites as new developers impose their own personal design preferences and programming approaches.
Because of these challenges, many test automation tools become shelfware as companies realize that their return on investment is lower than expected, or even negative once weighed against the benefits actually realized.
Worksoft Certify®: The Next Generation
Worksoft, Inc. has drawn on its experience with hundreds of companies to design a next-generation solution that addresses the challenges of test automation. Worksoft Certify:
- Requires no scripting
- Shortens the learning curve by 50%
- Accelerates implementation by 80%
- Reduces maintenance by 90%
- Automatically generates documentation
Instead of a programming language that requires yet another development project, Certify is an analyst-friendly solution that allows your application experts to design, document and automate in one easy step. No programming required. Just point and click through a series of simple drop-down menus, and Certify will not only automate your test, it will generate user documentation.
Certify provides a unique and powerful repository that supports end-to-end testing across multiple platforms, enabling enterprise level standardization. Whether you are working with Web, client/server, mainframe or middleware, Certify delivers a common interface.
Certify can also quickly identify the differences between application versions, map them to the tests that are affected, and automatically implement the changes needed to bring your tests up to date. Testers can spend their time creating new tests for new features, not debugging regression tests every time a new version is released.
Certify accomplishes all this through an open architecture that can take advantage of multiple tools and technologies. Certify can work with existing tools or stand alone, as appropriate for the application. This unique strength means that test cases are not disturbed by changes to underlying technologies; changing tools does not mean a loss of investment in the test repository.
Certify is especially powerful for mainframe applications. It not only supports most 3270 and 5250 terminal emulation products, it has support for directly importing screen maps from formats such as CICS/BMS and AutoDoc. This capability means Certify can quickly inventory the entire application including all screens and fields, rapidly identifying any changes or additions between versions. If screen maps are not available, then Certify offers a utility that can easily inventory any displayed screen.
Certify also has built-in synchronization for mainframes, which means the tester does not have to worry about execution speed or response time: Certify never gets ahead of the host. There is no need for complicated schemes to watch the status line or timer.
Certify also maintains constant context information, confirming that the expected screen is present and always entering data into the correct field. There is no need to worry about runaway tests or auto-tab fields; Certify always knows exactly where it is at all times, and documents the application context in all test results.
Worksoft customers have demonstrated that analysts can become productive with Certify quickly and easily, improving their productivity and expanding test coverage in order to increase product quality and reduce cycle time.