The Assertion Based Testing Tool for OOP: ADL2

Masaharu Obayashi - Hiroshi Kubota - Shane P. McCarron - Lionel Mallet

ABSTRACT

This paper describes an assertion-based testing tool for object-oriented programming (OOP) languages. It presents the architecture of the formal specification languages and of the translator that turns specifications into test programs.

Keywords

Test, testing tool, formal specification, test generation, assertion, interface definition, IDL, OOP

1. INTRODUCTION

The Assertion Definition Language (ADL) project has defined a formal specification language and a language translator that permit the specification of interface behavior. From such a specification, the translator can produce a less formal "natural language" specification, a test specification, and tests for an implementation of the specification.

Inputs to ADL are in the form of formal specification files. The ADL Translation System (ADLT) defines three specification languages: the Assertion Definition Language (ADL), the Test Data Description (TDD) language, and the Natural Language Dictionary (NLD).

2. THE ADL PROJECT

The ADL project grew out of the merging of several independent activities, including early prototypes developed at Sun Laboratories and related work done at universities such as Stanford and New York Polytechnic. This, together with the decision of the Information-technology Promotion Agency, Japan (IPA) and X/Open to fund the project, is what made ADLT the freely available [1,2], yet well-supported tool that it is today.

Phase 1 of this project is complete, and Phase 2 has commenced as an international cooperative project between the above organizations and Sun Laboratories. The final Phase 1 release was version 1.1, which rounded out the tool with a GUI-based user interaction facility [4]. Phase 2 greatly extends the set of target languages to cover Java, C++, and OMG IDL. The Phase 2 tools will remain upward compatible with the Phase 1 tools through an automatic translation capability.

3. OVERVIEW OF ADL2

Assertions describe how a function's behavior depends on its input state and what changes the call makes to the output state. These are referred to as the pre- and postconditions of the function.

The ADL language has been designed to be easy to learn and use. Constructs have been kept to a minimum and look very similar to constructs in the target languages. ADL has also been designed specifically to document interfaces, in contrast to similar specification languages that are intended for much broader use.

The ADL language was initially designed to let users formalize specifications as they appear in UNIX man pages, at least as a starting point. This emphasis led to a rich set of error specification capabilities in ADL, such as the exception operator described in Section 4.6.

Because the language has been kept simple, tool support is much easier to provide than for other specification languages. Moreover, since the natural language tools were developed as an integral part of ADL and ADLT, ADL specifications bear a strong resemblance (albeit in target language syntax) to specifications written in natural language.

Both ADL and TDD have been designed with large software projects in mind. ADL provides good modularization capabilities, encouraging reuse and the decomposition of a large specification into small, easily manageable units.

The diagram in Figure 1 depicts the general architecture of the system. Although the user perceives ADLT as a single tool, it is composed of several compilers, each acting on specifications written in a specific language binding; these compilers implement the translation mechanisms. The single tool the user perceives is in fact the driver, a module that selects and runs the appropriate compiler based on the user's directions and the type of the inputs.

ADLT Architecture
Figure 1. The general architecture of the system

The remainder of this paper is structured as follows. In Section 4 and Section 5, we describe ADL annotations and TDD annotations respectively, using the ADL/Java binding. Section 6 presents the framework for test results reporting. In Section 7, we describe the runtime architecture. Finally, we summarize the results presented in this paper in Section 8.

4. ADL ANNOTATIONS

ADL/Java provides a syntax that is a minor extension to that of Java 1.0.2. Any Java expression except an assignment may appear in a semantic annotation, as may expressions using the extensions described in this section.

The new syntactic constructs of Java 1.1, namely inner classes and the ".class" construct of the reflection API, are not supported within the ADL/Java syntax. However, since the ADLT runtime is written in Java 1.1, ADL2 supports the testing of Java 1.1 interfaces and the use of Java 1.1 auxiliary classes.

4.1 The Call State Operator

The call state operator "@" is a unary operator. It has the effect of evaluating its argument before the call to the specified method. Call state operators may be nested; in that case the inner operator is overridden by the outer one.
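
For instance, the following assertion, taken from the bank specification in Figure 2, relates the number of accounts after a call to open_acct to its value in the call state:

   // @get_accts() is evaluated before the call; the unprefixed call
   // is evaluated after the call returns.
   get_accts() == @get_accts() + 1;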

4.2 Bindings

Bindings are used to declare local variables and initialize them with useful values. They are mainly intended to be used in conjunction with NLD annotations and to improve the readability of specifications.

4.3 The Try/catch Statement

During the evaluation of the assertions of an assertion group, it is possible for exceptions to be thrown. Try/catch specifications may be used to catch these exceptions and provide an alternate assertion group whose value is used for that of the parent ("try") assertion group. The assertion group(s) in the catch specification(s) must therefore be of the same type as the parent assertion group.

4.4 Thrown Expressions

Thrown expressions are boolean expressions used to specify whether or not exceptions have been thrown.

4.5 Behavior Classification

It is often very useful to broadly categorize the behavior of a method into its "normal behavior" and "abnormal behavior". One may then specify more details of the behavior in each of these cases. ADL provides the behavior classification construct for this purpose. A behavior classification associates a boolean expression with the reserved words normal and abnormal.
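
In the bank specification of Figure 2, for example, the abnormal behavior of open_acct is defined as the raising of either of its declared exceptions:

   semantics
   [abnormal = thrown(negAmtExc, bnkFullExc)] {
      // assertions over the normal and abnormal cases follow here
      ...
   }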

4.6 The Exception Operator

ADL provides the exception operator "<:>" whose meaning is based on behavior classifications. In usual usage of this operator, the left operand is the enabler of an exception, while the right operand is a "thrown" expression.

A <:> B is equivalent to (A --> abnormal) and (abnormal && B --> A). Informally, this means that if A is true, then an abnormal condition should be detected but B is not necessarily true. However, if an abnormal condition is detected and B is also true, then A must be true as well. The first part of this rule allows the specification of abnormal conditions for functions that can raise several different abnormal statuses in a possibly non-deterministic way, e.g., several error conditions are met initially but we don't care which one is raised as long as at least one of them is actually raised.
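
As a concrete illustration, the first assertion of the bank specification in Figure 2, amt < 0 <:> thrown(negAmtExc), can be read, using the equivalence above, as the following pair of implications (this expansion is a paraphrase, not tool output):

   // If the amount is negative, some abnormal condition must be raised.
   amt < 0 --> abnormal;
   // If an abnormal condition is raised and it is negAmtExc,
   // then the amount must indeed have been negative.
   abnormal && thrown(negAmtExc) --> amt < 0;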

4.7 Inline Procedure Declarations

Inline procedure declarations, which are macro-like, are another way to define concepts used in behavior descriptions (along with auxiliary interface declarations) and to simplify assertions. They are very similar in purpose to bindings, though semantically quite different.

4.8 Prologues and Epilogues

Before a specified method can be tested, it is sometimes necessary to perform preliminary initializations that require imperative features. This cannot be done inside semantic assertions, which should remain declarative constructs free of side effects. Prologues and epilogues provide a place for these imperative statements.

4.9 ADL Bank Specification

Figure 2 illustrates the ADL specification concepts with an oversimplified description of a mythical banking software interface. Since banking operations are familiar to most people, it is easy to see how ADL specifications describe this interface.

adlclass bank {
   BankAcct open_acct(long amt) throws negAmtExc, bnkFullExc {
      semantics
      [abnormal = thrown(negAmtExc, bnkFullExc)] {
         amt < 0 <:> thrown(negAmtExc);
         @bankAux.bank_is_full(this) <:> thrown(bnkFullExc);
         if (normal) {
            return.get_balance() == amt;
            get_accts() == @get_accts() + 1;
            bankAux.is_acct_active(this, return.get_acct_num()) == true;
         }
         if (abnormal) {
            unchanged(get_accts());
         }
      }
   }
}

Figure 2. ADL Bank Specification

5. TDD ANNOTATIONS

Test data annotations allow the test engineer to define how an interface should be tested: what data and what procedures should be used to exercise the functions in the interface. TDD provides a notation in which the user can write descriptions of test sets, which are processed into test driver programs.

The principle behind TDD is that a specification is processed by re-writing the input to create a test program. The re-write does not remove any information, and a valid program in the target language is not altered by it. Hence any code fragment in the input that does not use TDD features appears unaltered in the re-written output.

TDD applies to a variety of programming languages, called target languages. The concepts of TDD are common to all target languages, and the syntax is largely common as well; in particular, the parts of the language that are re-written are common to our four target languages (Java, IDL, C++, and C).

5.1 Dataset

A dataset is a set of data values. It may be used in place of an expression in the target language syntax. The result of such an expression over a dataset is another dataset. An expression involving more than one dataset is treated as an expression over the Cartesian product of the datasets.
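
A minimal sketch, reusing the dataset declarations of Figure 3; the derived dataset NET is hypothetical and only illustrates how an expression involving two datasets ranges over their Cartesian product:

   dataset int DEPOSITS  = {-1,0,1,10,100};   // 5 values
   dataset int WITHDRAWS = {0,5,10,100};      // 4 values (INT_VALUES in Figure 3)
   // Hypothetical derived dataset: the subtraction is evaluated over the
   // Cartesian product DEPOSITS x WITHDRAWS, giving up to 5 x 4 = 20 values.
   dataset int NET = DEPOSITS - WITHDRAWS;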

5.2 Dataset Size

A dataset has a definite size, by construction. However, that size may be too large to be feasible for testing. Examples of feasible datasets are enum types, array indices, array contents, and datasets created by literal expressions. The notion of feasibility is not precise; there is no axiomatic way to decide whether a dataset is small enough. In practice, a dataset with more than 2^32 elements is certainly infeasible.

A dataset may be created by a literal expression or by a factory. A dataset may also be created by the combination of a representation type and a constraint. A single value, that is an expression in the target language, is a trivial dataset. A TDD specification is shown in Figure 3.

public tddclass bankTest {
   dataset int INT_VALUES = {0,5,10,100};
   dataset int BANKSIZE   = INT_VALUES;
   dataset int WITHDRAWS  = INT_VALUES;
   dataset int DEPOSITS   = {-1,0,1,10,100};

   factory bank make_bank(int max, int num, int initDep) {
      bank ret = new bank(max);
      for (int x = 0; x < max && x < num; x++) {
         try {
            bankAcct ba = ret.open_acct(initDep);
         } catch (bankExc be) {
            tdd_result(ADL_FAIL, "exception caught on open");
            tdd_end_case();
         }
      }
      return ret;
   } relinquish (bank b) {
      if (num > 0) { b.close_all_accts(); }
   }

   factory bankAcct make_acct(bank theBank, int initDep) {
      ..........
   }

   dataset bank B_EMPTY = make_bank(BANKSIZE, 0, DEPOSITS);
   dataset bankAcct BA1 = make_acct(B_EMPTY, DEPOSITS);

   test(bankAcct ba = BA1, int d = DEPOSITS, int w = WITHDRAWS) {
      long save = ba.get_balance();
      ADL(ba).deposit(d);
      ADL(ba).withdraw(w);
      if (d == w) {
         tdd_assert(ba.get_balance() == save);
      }
   }
}

Figure 3. TDD Bank Specification

5.3 Factory

A factory is a data creator. It encapsulates the notions of a constructor, a destructor, and reporting. A factory is, formally, a function from a dataset to a dataset. A function f(a1, ..., an) of more than one argument is formally treated as a function f(a) of a single argument a, the cross-product a1 × ... × an of the input datasets.

Operationally, a factory is implemented by a pointwise function on the elements of the domain. In addition, the implementation of a factory includes a destructor function (relinquish) for elements of the range, and an association from an element of the range to the element of the domain (See the bank factory in Figure 3).

5.4 Test Directive

A test directive is formally a statement evaluated for side effect. In particular, a test directive normally includes an expression involving one or more calls to checked functions.

Note that a function or method body in a test declaration is subject to the same re-writing as any other code in the test declaration. Hence a call to a checked function, ADL(obj).method(...), in such a body is interpreted as a call to the checked version of the function, and calling it has the side effect of making an observation about the behavior of that checked function.

A test directive is parameterized by the datasets used within it, as in the bankTest example in Figure 3.
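
As a rough worked example, and assuming that each factory and the test directive range over the full Cartesian product of their argument datasets with no cases pruned at runtime: in Figure 3, INT_VALUES has 4 elements and DEPOSITS has 5, so B_EMPTY = make_bank(BANKSIZE, 0, DEPOSITS) yields up to 4 × 1 × 5 = 20 banks, BA1 = make_acct(B_EMPTY, DEPOSITS) up to 20 × 5 = 100 accounts, and the test directive over BA1, DEPOSITS, and WITHDRAWS up to 100 × 5 × 4 = 2000 test instances. This multiplicative growth is precisely what the feasibility considerations of Section 5.2 are meant to keep in check.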

6. TEST RESULT REPORTING

The final result of an ADL test is computed from the tabulated results of all test instances executed in a given run of the test. Results form a hierarchy of granularity, ranging from the program result at the coarsest level to individual assertion results at the finest level.

Test results reporting occurs at the end of the test execution. The level of reporting detail is determined by a reporting level switch, set either by default or from a configuration file. Each result of a test is identified by a code number, a name, and a precedence value, defined by the system or optionally by the user.

7. RUNTIME ARCHITECTURE

The ADL-generated test program is a self-contained executable that embodies the test and verification intentions of the input ADL and TDD specifications. It consists of user-written test code, the implementation under test (except for IDL interfaces), the ADL runtime library, and the test code generated by the ADL Translator (see Figure 4).

ADL Generated Tests
Figure 4. ADL runtime architecture

In the ADL runtime library, TET [5] is used to manage the test instances and report their results. In addition, TET-style result reporting is aligned with the POSIX standard for assertion-based test methods [3].

The Assertion Checking Objects (ACO) generated by ADLT check an implementation against its corresponding ADL specification. An ACO is a mirror image class definition of an ADL annotated object that contains the same method names and signatures as the specified original.

An ACO method invokes the implementation method and evaluates the assertion checking code that wraps around the call to the implementation method.
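
As a rough sketch of this structure, the ACO generated for the open_acct method of Figure 2 might look like the plain Java below. The class name, the check() and implies() helpers, and the way results are reported are hypothetical stand-ins for what ADLT actually generates.

   // Hypothetical sketch of a generated Assertion Checking Object (ACO);
   // the real generated code and its runtime-library calls differ in detail.
   public class bankACO {
      private bank impl;                          // implementation under test

      public bankACO(bank impl) { this.impl = impl; }

      public BankAcct open_acct(long amt) throws negAmtExc, bnkFullExc {
         // Call-state (@) expressions are evaluated before the call.
         long acctsBefore = impl.get_accts();
         boolean wasFull = bankAux.bank_is_full(impl);

         BankAcct result = null;
         negAmtExc negExc = null;
         bnkFullExc fullExc = null;
         try {
            result = impl.open_acct(amt);         // invoke the implementation
         } catch (negAmtExc e) {
            negExc = e;
         } catch (bnkFullExc e) {
            fullExc = e;
         }

         // Evaluate the assertions of the semantics annotation; check() stands
         // in for the runtime library's result-reporting mechanism.
         boolean abnormal = (negExc != null) || (fullExc != null);
         check("amt < 0 <:> thrown(negAmtExc)",
               implies(amt < 0, abnormal)
               && implies(abnormal && negExc != null, amt < 0));
         check("@bank_is_full(this) <:> thrown(bnkFullExc)",
               implies(wasFull, abnormal)
               && implies(abnormal && fullExc != null, wasFull));
         if (!abnormal) {
            check("return.get_balance() == amt", result.get_balance() == amt);
            check("get_accts() == @get_accts() + 1",
                  impl.get_accts() == acctsBefore + 1);
         } else {
            check("unchanged(get_accts())", impl.get_accts() == acctsBefore);
         }

         // Preserve the observable behavior of the implementation.
         if (negExc != null) throw negExc;
         if (fullExc != null) throw fullExc;
         return result;
      }

      private static boolean implies(boolean a, boolean b) { return !a || b; }

      private void check(String assertion, boolean holds) {
         // report the assertion result through the ADL runtime library
      }
   }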

8. CONCLUSION

Formal specifications such as those written for ADL2 can, in principle, support mathematical proofs of program correctness with respect to a specification. This requires building up axioms that describe each programming language construct; together with the specifications, these axioms form a basis for determining whether the program is indeed consistent with its specification. Testing, by contrast, is the process of running a system on a variety of test data and determining whether or not the system has worked correctly in these cases. ADL2 takes the latter approach, generating executable tests directly from the specifications.

As a result, ADL2 could be a useful tool to help ease the burden of testing software components.

ACKNOWLEDGMENTS


This work is supported in part by the Advanced Software Enrichment Project of Information-technology Promotion Agency, Japan (IPA).

REFERENCES

  1. Sriram Sankar and Roger Hayes, Sun Labs; Specifying and Testing Software Components using ADL; Sun Microsystems, 1993.
  2. Matt Evans, Sun Labs and Shane P. McCarron, X/Open; Assertion Definition Language Translation System, SRI Quality Week; May 1994.
  3. IEEE P1003.3; Test Methods for Measuring Conformance to POSIX; Institute of Electrical and Electronics Engineers, Inc., April 17, 1991
  4. ADL version 1.1, Available from The Open Group at <http://adl.opengroup.org>
  5. Test Environment Toolkit version 3.2, Available from The Open Group at <http://tetworks.opengroup.org>

AUTHORS

Masaharu Obayashi
Kanrikogaku, Ltd.
Meguro Suda Bldg.
3-9-1 Meguro, Meguro-ku
Tokyo 153, Japan
Tel. +81 3 3716 6300
obayashi@kthree.co.jp

Hiroshi Kubota
Mitsubishi Research Institute
3-6, Otemachi 2-chome
Chiyoda-ku
Tokyo 100, Japan
Tel. +81 3 3277 0748
kubota@mri.co.jp

Shane P. McCarron
The Open Group
Testing Research Dept.
Apex House, Forbury Road
Reading England RG1 1A
Tel. +44 118 9508311
s.mccarron@opengroup.org

Lionel Mallet
The Open Group
Research Institute
2, Avenue de Vignate
F38610 Gières, France
Tel. +33 4 76 63 48 66
l.mallet@opengroup.org