Panel 3:

 

Is There a "Complete" Testing to Replace Program Proof?

 


  • Fevzi Belli, University of Paderborn, Germany
  • Jiannong Cao, The Hong Kong Polytechnic University, Hong Kong
  • Yvan Labiche, Carleton University, Canada
  • Jean-Claude Laprie, LAAS, Toulouse, France
  • Joachim Wegener, DaimlerChrysler, Germany

About the Speakers:

Dr. Belli is a Professor of Computer Science with more than 25 years of experience in software development, quality assurance, and testing. He was a senior scientist with the German National Research Center for Information Technology and, prior to this, a systems engineer with a German computer manufacturer in Munich, where he headed several projects developing real-time information systems.

Dr. Cao received the B.Sc. degree from Nanjing University, China, in 1982, and the M.Sc. and Ph.D. degrees from Washington State University, USA, in 1986 and 1990, respectively, all in computer science. Before joining The Hong Kong Polytechnic University in 1997, where he is currently an associate professor, he was on the computer science faculty of James Cook University and the University of Adelaide in Australia, and of the City University of Hong Kong.

Dr. Cao's research interests include parallel and distributed computing, fault tolerance, programming methodology and environments, and mobile computing. His main areas of expertise are the design and analysis of efficient and reliable algorithms and protocols for parallel/distributed system functions and services, and the development of software tools for programming distributed systems.

Yvan Labiche is currently an assistant professor with the Department of Systems and Computer Engineering, Software Quality Engineering Laboratory (http://www.sce.carleton.ca/Squall/Squall.htm), Carleton University, Ottawa, Canada, where he teaches in the Software Engineering Program and performs research in software testing. Before that, Yvan completed a Ph.D. in Software Engineering at LAAS/CNRS in Toulouse, France. During this period, he worked with Aerospatiale Matra Airbus on the definition of testing strategies for safety-critical, on-board software using object-oriented technologies.

Jean-Claude Laprie is "Directeur de Recherche" at CNRS, the French National Organization for Scientific Research. He has been Director of LAAS-CNRS since 1997. He joined LAAS-CNRS in 1968, founded its research group on Fault Tolerance and Dependable Computing in 1975, and directed the group until he became Director of LAAS. His research has focused on dependable computing since 1973, especially on fault tolerance and dependability evaluation, subjects on which he has authored or coauthored more than 100 papers, as well as coauthored or edited several books. He also founded and directed, from 1992 to 1996, LIS, the Laboratory for Dependability Engineering, a joint academia-industry laboratory.

In 1984-1985, he was Chairman of the IEEE Computer Society Technical Committee on Fault-Tolerant Computing, and from 1986 to 1995 he was Chairman of IFIP WG 10.4 on Dependable Computing and Fault Tolerance. He is currently Chair of IFIP TC 10 on Computer System Technology.

He received the IFIP Silver Core in 1992 and the Silver Medal of French Scientific Research in 1993.

Joachim Wegener is manager of Adaptive Technologies in Software Engineering at DaimlerChrysler Research and Technology. He was involved in the development of the classification-tree editor CTE and the test system TESSY. He is currently working on the design of software development processes for Mercedes-Benz as well as on systematic and evolutionary software testing methods for the verification and validation of embedded systems. He is a member of the Seminal Network and the German Computer Society Special Interest Group on Testing, Analysis and Verification.

 

Abstract:

Since E.W. Dijkstra’s criticism “Program testing can be used to show the presence of bugs, but never to show their absence.” [DIJ1], there has been an almost religious debate within the Computer Science Community about whether testing can be used as a verification method [DIJ2, BOYE] or not. Nevertheless, testing is the most accepted and widely used method in industrial software development, not only for verifying software correctness but also for its validation, especially concerning the user requirements [GrWe].

Taking into account the shortcomings of testing with respect to the completeness of the program properties it can verify, many researchers began very early to build a mathematically sound foundation for systematic testing in Software Engineering. One of the pioneering works that made Software Testing a solid discipline is the Fundamental Test Theorem of S. Gerhart and J.B. Goodenough, published in 1975: “We prove ... that properly structured tests are capable of demonstrating the absence of errors in a program” [GeGo]. By means of their test selection criteria, based on predicate logic, Gerhart and Goodenough found numerous major errors in a program by P. Naur that had been published in a textbook and repeatedly quoted by other renowned authors, also as an example of correctness proof. Since the Fundamental Test Theorem, a vast amount of further research has enabled Systematic Testing to become more and more recognized and accepted in academia, e.g. through the work of W.E. Howden: “It is possible to use testing to formally prove the correctness of programs” [HOWD].
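To make the theorem’s claim concrete, the following is an informal sketch, paraphrased from [GeGo] rather than quoted, of its logical shape in LaTeX notation: D denotes the program’s input domain, OK(d) holds when the program produces an acceptable output for input d, C(T) holds when a test set T satisfies the selection criterion C, and SUCCESSFUL(T) abbreviates “OK(t) for every t in T”. A criterion C is VALID if every faulty program fails some test set satisfying C, and RELIABLE if all test sets satisfying C yield consistent verdicts.

  % Informal sketch of the Fundamental Test Theorem [GeGo];
  % notation follows the paper's definitions, not its exact wording.
  \[
    (\exists T \subseteq D)\,
    \bigl[\, C(T) \wedge \mathrm{VALID}(C) \wedge \mathrm{RELIABLE}(C) \wedge \mathrm{SUCCESSFUL}(T) \,\bigr]
    \;\Longrightarrow\;
    (\forall d \in D)\, \mathrm{OK}(d)
  \]

In other words, a successful test that satisfies a valid and reliable selection criterion demonstrates the absence of errors; the practical difficulty, which motivates this panel, lies in exhibiting a criterion that is both valid and reliable for a real program.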

This is a very brief summary of an academic discussion; its impact, however, is of immense relevance for the software industry. There is a vast body of scientific work that cannot be cited here, some of it on testing and reliability, e.g. [HAML, VOAS], and some on proof, e.g. [BLUM]. The panel’s objective is to establish a platform for researchers and practitioners who can represent the state of the art of this discussion, not only illuminating problems and approaches to resolving them, but also identifying limitations in theory and practice.

References

[BLUM]

M. Blum, H. Wassermann, “Program Result Checking: A Theory of Testing Meets a Test of Theory”, Proc. 35th Ann. Symp. on Foundations of Computer Science, pp. 382-391 (1994)

[BOYE]

R.S. Boyer, J. Strother Moore (eds.), “The Correctness Problem in Computer Science”, Academic Press, London (1981)

[DIJ1]

E.W. Dijkstra, “Notes on Structured Programming”, in O.J. Dahl et al. (eds.), “Structured Programming”, Academic Press, London, pp. 1-82 (1972)

[DIJ2]

E.W. Dijkstra, “Why Correctness Must Be a Mathematical Concern”, in [BOYE]

[GeGo]

S. Gerhart, J.B. Goodenough, “Toward a Theory of Test Data Selection”, IEEE Trans. on Softw. Eng., pp. 156-173 (1975)

[GrWe]

M. Grochtmann, J. Wegener, “Evolutionary Testing of Temporal Correctness”, Proc. 2nd Software Quality Week Europe (1998)

[HAML]

D. Hamlet, “Foundations of Software Testing: Dependability Theory”, Proc. SIGSOFT ’94, pp. 128-139 (1994)

[HOWD]

W.E. Howden, “Theoretical and Empirical Studies of Program Testing”, IEEE Trans. Softw. Eng., pp. 293-298 (1978)

[VOAS]

J.M. Voas, “PIE: A Dynamic Failure-Based Technique”, IEEE Trans. Softw. Eng. 18(8), pp. 717-727 (1992)