Panel 3:
Is There a "Compete" Testing to Replace Program Proof?
About the Speakers: Dr. Belli is a Professor of Computer Science with more than 25 years of experience in software development, quality assurance, and testing. He was a senior scientist with the German National Research Center for Information Technology and, prior to this, a systems engineer with a German computer manufacturer in Munich, where he headed several projects developing real-time information systems.
Dr. Cao received the B.Sc. degree from Nanjing University, China, in 1982, and the M.Sc. and Ph.D. degrees from Washington State University, USA, in 1986 and 1990, respectively, all in computer science. Before joining the Hong Kong Polytechnic University in 1997, where he is currently an associate professor, he was on the faculty of computer science at James Cook University and The University of Adelaide in Australia, and at the City University of Hong Kong. Dr. Cao's research interests include parallel and distributed computing, fault tolerance, programming methodology and environments, and mobile computing. His main areas of expertise are the design and analysis of efficient and reliable algorithms and protocols for parallel/distributed system functions and services, and the development of software tools for programming distributed systems.
Yvan Labiche currently holds a position as assistant professor with the Department of Systems and Computer Engineering, Software Quality Engineering Laboratory (http://www.sce.carleton.ca/Squall/Squall.htm), Carleton University, Ottawa, Canada, where he teaches in the Software Engineering Program and performs research in Software Testing. Before that, Yvan completed a Ph.D. in Software Engineering at LAAS/CNRS in Toulouse, France. During this period, he worked with Aerospatiale Matra Airbus on the definition of testing strategies for safety-critical, on-board software, using object-oriented technologies.
Jean-Claude Laprie is "Directeur de Recherche" at CNRS, the French National Organization for Scientific Research. He has been Director of LAAS-CNRS since 1997. He joined LAAS-CNRS in 1968, where he founded the research group on Fault Tolerance and Dependable Computing in 1975 and directed it until he became Director of LAAS. His research has focused on dependable computing since 1973, especially on fault tolerance and on dependability evaluation, subjects on which he has authored or coauthored more than 100 papers, as well as coauthored or edited several books. He also founded and directed, from 1992 to 1996, LIS, the Laboratory for Dependability Engineering, a joint academia-industry laboratory. In 1984-1985, he was Chairman of the IEEE Computer Society Technical Committee on Fault-Tolerant Computing, and he was Chairman of IFIP WG 10.4 on Dependable Computing and Fault Tolerance from 1986 to 1995. He is currently Chair of IFIP TC 10 on Computer System Technology. He received the IFIP Silver Core in 1992 and the Silver Medal of French Scientific Research in 1993.
Joachim Wegener is manager of Adaptive Technologies in Software Engineering at
Abstract: Since E.W. Dijkstra’s criticism “Program testing can be used to show the presence of bugs, but never to show their absence.” [DIJ1], there has been an almost religious discussion within the Computer Science community about whether or not testing can be used as a verification method [DIJ2, BOYE]. Nevertheless, testing is the most accepted and widely used method in industrial software development, not only for the verification of software correctness, but also for its validation, especially with respect to user requirements [GrWe].
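As a small constructed illustration of Dijkstra’s point (the leap-year function and its test suite below are our own example, not taken from the cited work), a finite test suite can succeed even though the program is wrong for inputs that the suite never exercises:

    def is_leap_year(year: int) -> bool:
        # Intended rule: divisible by 4, except century years,
        # which are leap years only when divisible by 400.
        return year % 4 == 0   # bug: the century-year exception is missing

    # A finite test suite that happens to avoid the faulty inputs.
    tests = {2004: True, 2012: True, 2019: False, 1999: False}
    for year, expected in tests.items():
        assert is_leap_year(year) == expected   # every assertion passes

    # The suite succeeds, yet is_leap_year(1900) returns True although 1900
    # was not a leap year: the tests reveal no bug but cannot, by themselves,
    # show the absence of bugs.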
Taking into account the shortcomings of testing with respect to the completeness of the program properties verified, many researchers began very early to build a mathematically sound foundation for systematic testing in Software Engineering. One of the pioneering works that made Software Testing a solid discipline is the Fundamental Test Theorem of S. Gerhart and J.B. Goodenough, published in 1975: “We prove ... that properly structured tests are capable of demonstrating the absence of errors in a program” [GeGo].
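The structure of the theorem can be sketched roughly as follows (our paraphrase of the usual formalization in [GeGo]; here D denotes the input domain, OK(d) that the program behaves correctly on input d, and C a test selection criterion — this notation is introduced only for illustration):

\[
\begin{aligned}
\mathrm{SUCCESSFUL}(T) &\iff \forall t \in T.\ \mathrm{OK}(t) \\
\mathrm{RELIABLE}(C) &\iff \forall T_1, T_2 \subseteq D.\ C(T_1) \land C(T_2) \Rightarrow \big(\mathrm{SUCCESSFUL}(T_1) \Leftrightarrow \mathrm{SUCCESSFUL}(T_2)\big) \\
\mathrm{VALID}(C) &\iff \big(\exists d \in D.\ \lnot\mathrm{OK}(d)\big) \Rightarrow \big(\exists T \subseteq D.\ C(T) \land \lnot\mathrm{SUCCESSFUL}(T)\big)
\end{aligned}
\]
\[
\big(\exists T \subseteq D\big)\big[\,C(T) \land \mathrm{RELIABLE}(C) \land \mathrm{VALID}(C) \land \mathrm{SUCCESSFUL}(T)\,\big] \;\Rightarrow\; \forall d \in D.\ \mathrm{OK}(d)
\]

That is, a successful test set selected by a reliable and valid criterion demonstrates correctness over the whole input domain; the practical difficulty lies in establishing the reliability and validity of a criterion for a given program.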
By means of their test selection criteria, based on predicate logic, Gerhart and Goodenough found numerous major errors in a program by P. Naur that had been published in a textbook and repeatedly quoted by other renowned authors, including as an example of correctness proof.
Since the Fundamental Test Theorem, a vast amount of further research has enabled Systematic Testing to become more and more recognized and accepted in academia, e.g. through the work of W.E. Howden: “It is possible to use testing to formally prove the correctness of programs” [HOWD].
This is a very brief summary of an academic discussion; its impacts are, however, of immense relevance for the software industry. There is a vast amount of scientific work that cannot be quoted here, some of it on testing and reliability, e.g. [HAML, VOAS], and some on proof, e.g. [BLUM]. The panel’s objective is to establish a platform for researchers and practitioners who can represent the state of the art of this discussion, not only highlighting problems and approaches to resolving them, but also identifying limitations in theory and practice.

References