Test oracle
From Wikipedia, the free encyclopedia
 


In software testing, a test oracle (or just oracle) is a provider of information that describes correct output based on the input of a test case. Testing with an oracle involves comparing actual results of the system under test (SUT) with the expected results as provided by the oracle.[1]
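As a minimal illustration (not taken from the cited sources), the following Python sketch shows a test that compares the SUT's actual output against expected results supplied by an oracle; the word_count function and its oracle table are hypothetical.

    def word_count(text):
        """System under test (SUT): the implementation being checked."""
        return len(text.split())

    # Oracle: a source of expected (correct) outputs for chosen inputs,
    # here encoded directly in the test logic.
    ORACLE = {
        "": 0,
        "one": 1,
        "two words": 2,
    }

    def test_word_count():
        for test_input, expected in ORACLE.items():
            actual = word_count(test_input)   # actual result from the SUT
            assert actual == expected, f"{test_input!r}: got {actual}, expected {expected}"

    test_word_count()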

The term "test oracle" was first introduced in a paper by William E. Howden.[2] Additional work on different kinds of oracles was explored by Elaine Weyuker.[3]

An oracle can operate separately from the SUT and be accessed at test runtime, or it can be used before a test is run, with the expected results encoded into the test logic.[4]

However, method postconditions are part of the SUT itself and serve as automated oracles in design-by-contract models.[5]
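For illustration only (the decorator and function names below are assumptions, not from the cited work), the following sketch shows a method postcondition acting as an automated oracle at runtime, in the spirit of design by contract.

    import functools

    def postcondition(check):
        """Attach a contract check that runs after the wrapped function returns."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                # The postcondition is evaluated on every call, acting as an oracle.
                assert check(result, *args, **kwargs), "postcondition violated"
                return result
            return wrapper
        return decorator

    @postcondition(lambda result, items: sorted(items) == sorted(result)
                   and all(a <= b for a, b in zip(result, result[1:])))
    def sort_items(items):
        return sorted(items)

    sort_items([3, 1, 2])   # passes; a faulty implementation would trip the assertion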

Determining the correct output for a given input (and a set of program or system states) is known as the oracle problem or test oracle problem,[6]: 507  which some consider a relatively hard problem that involves working with problems related to controllability and observability.[7]

Categories

A research literature survey covering 1978 to 2012[6] found several potential categories of test oracles.

Specified

A specified oracle is typically associated with formalized approaches to software modeling and software code construction. It is connected to formal specification,[8] model-based design, which may be used to generate test oracles,[9] state transition specification, for which oracles can be derived to aid model-based testing[10] and protocol conformance testing,[11] and design by contract, for which the equivalent test oracle is an assertion.
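As an illustrative sketch only (the turnstile model is a made-up example, not from the cited sources), a state transition specification can serve as the oracle in model-based testing: the model predicts the expected state after each event, and the SUT's observed state is compared against it.

    # Specification: (current state, event) -> expected next state
    SPEC = {
        ("locked", "coin"): "unlocked",
        ("locked", "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    class Turnstile:                      # system under test
        def __init__(self):
            self.state = "locked"
        def handle(self, event):
            self.state = "unlocked" if event == "coin" else "locked"
            return self.state

    def test_against_model(events):
        sut, model_state = Turnstile(), "locked"
        for event in events:
            model_state = SPEC[(model_state, event)]   # oracle: expected state
            assert sut.handle(event) == model_state, f"divergence on {event!r}"

    test_against_model(["coin", "push", "push", "coin", "coin"])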

Specified test oracles have a number of challenges. Formal specification relies on abstraction, which in turn may naturally have an element of imprecision, since no model can capture all behavior.[6]: 514 

Derived

A derived test oracle differentiates correct and incorrect behavior by using information derived from artifacts of the system. These may include documentation, system execution results and characteristics of versions of the SUT.[6]: 514  Regression test suites (or reports) are an example of a derived test oracle: they are built on the assumption that the result from a previous system version can be used as an aid (oracle) for a future system version. Previously measured performance characteristics may be used as an oracle for future system versions, for example, to trigger a question about observed potential performance degradation. Textual documentation from previous system versions may be used as a basis to guide expectations in future system versions.
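A hedged sketch of the regression idea (the report functions are hypothetical): output from a previous, trusted version of the system is used as the expected result when testing a newer version. In practice the baseline would usually be recorded output from the earlier release; here it is recomputed only to keep the sketch self-contained.

    def render_report_v1(data):            # previous system version (trusted baseline)
        return {"total": sum(data), "count": len(data)}

    def render_report_v2(data):            # new system version under test
        return {"count": len(data), "total": sum(data)}

    def regression_test(data):
        baseline = render_report_v1(data)  # derived oracle: prior version's result
        actual = render_report_v2(data)
        assert actual == baseline, f"regression: {actual} != {baseline}"

    regression_test([1, 2, 3])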

A pseudo-oracle[6]: 515  falls into the category of derived test oracles. A pseudo-oracle, as defined by Weyuker,[12] is a separately written program that takes the same input as the program or SUT, so that the outputs of the two can be compared to determine whether there might be a problem to investigate.

A partial oracle[6]: 515  is a hybrid between specified test oracle and derived test oracle. It specifies important (but not complete) properties of the SUT. For example, metamorphic testing exploits such properties, called metamorphic relations, across multiple executions of the system.
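For example (an illustrative sketch, not drawn from the cited survey), a metamorphic relation for a sine implementation is sin(x) = sin(π − x); the relation is checked across two executions without knowing the exact expected value of either one.

    import math

    def sut_sine(x):                       # system under test
        return math.sin(x)

    def metamorphic_test(x):
        # Metamorphic relation: sin(x) == sin(pi - x), checked across two runs.
        first = sut_sine(x)
        follow_up = sut_sine(math.pi - x)
        assert abs(first - follow_up) < 1e-9, f"relation violated at x={x}"

    for x in (0.0, 0.5, 1.3, 2.0):
        metamorphic_test(x)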

Implicit

An implicit test oracle relies on implied information and assumptions.[6]: 518  For example, a program crash is implicitly unwanted behavior, so the crash itself can act as an oracle indicating that there may be a problem. There are a number of ways to search and test for such unwanted behavior, sometimes called negative testing, which includes specialized subsets such as fuzzing.
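A sketch of this idea under stated assumptions (the parse_record function is hypothetical): a simple fuzzing loop feeds random inputs to the SUT and uses "no unexpected crash" as the implicit oracle.

    import random
    import string

    def parse_record(line):                    # system under test
        name, age = line.split(",")
        return name.strip(), int(age)

    def fuzz(iterations=1000, seed=0):
        rng = random.Random(seed)
        for _ in range(iterations):
            line = "".join(rng.choice(string.printable)
                           for _ in range(rng.randint(0, 20)))
            try:
                parse_record(line)
            except ValueError:
                pass                           # input rejected cleanly: acceptable
            except Exception as exc:           # any other crash: possible defect
                print(f"possible defect on input {line!r}: {exc!r}")

    fuzz()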

Implicit test oracles have limitations because they rely on implied conclusions and assumptions. For example, a program or process crash may not be a priority issue if the system is fault-tolerant and therefore operating under a form of self-healing or self-management. Implicit test oracles may also be susceptible to false positives due to environment dependencies. Property-based testing relies on implicit oracles.

Human

A human can act as a test oracle.[7] This approach can be categorized as quantitative or qualitative.[6]: 519–520  A quantitative approach aims to find the right amount of information to gather on a SUT (e.g., test results) for a stakeholder to be able to make decisions on fit-for-purpose or the release of the software. A qualitative approach aims to find the representativeness and suitability of the input test data and context of the output from the SUT. An example is using realistic and representative test data and making sense of the results (if they are realistic). These can be guided by heuristic approaches, such as gut instincts, rules of thumb, checklist aids, and experience to help tailor the specific combination selected for the SUT.

Examples

Test oracles are most commonly based on specifications and documentation.[13][14] A formal specification used as input to model-based design and model-based testing would be an example of a specified test oracle. The model-based oracle uses the same model to generate and verify system behavior.[15] Documentation that is not a full specification of the product, such as a usage or installation guide, or a record of performance characteristics or minimum machine requirements for the software, would typically be a derived test oracle.

A consistency oracle compares the results of one test execution to another for similarity.[16] This is another example of a derived test oracle.
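One reading of this, as a hypothetical sketch: the result of a first execution is kept as a reference and compared for similarity with the results of later executions of the same test.

    def sut_tokenize(text):                    # system under test
        return text.lower().split()

    def consistency_test(text, runs=5):
        reference = sut_tokenize(text)         # result of the first execution
        for _ in range(runs - 1):
            assert sut_tokenize(text) == reference, "executions disagree"

    consistency_test("The quick brown Fox")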

An oracle for a software program might be a second program that uses a different algorithm to evaluate the same mathematical expression as the product under test. This is an example of a pseudo-oracle, which is a derived test oracle.[12]: 466 
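A minimal sketch of a pseudo-oracle, assuming two hypothetical implementations of the arithmetic mean: the SUT sums and divides, the independently written oracle computes a running mean, and their outputs are compared.

    def mean_sut(values):                      # system under test
        return sum(values) / len(values)

    def mean_pseudo_oracle(values):            # independent implementation, different algorithm
        running = 0.0
        for i, v in enumerate(values, start=1):
            running += (v - running) / i       # incremental (running) mean
        return running

    def compare(values):
        a, b = mean_sut(values), mean_pseudo_oracle(values)
        assert abs(a - b) < 1e-9, f"possible defect: {a} vs {b}"

    compare([1.0, 2.0, 3.0, 4.0])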

For a Google search, there is no complete oracle to verify whether the number of returned results is correct. A metamorphic relation[17] may be defined such that a follow-up narrowed-down search will produce fewer results. This is an example of a partial oracle, which is a hybrid between a specified test oracle and a derived test oracle.
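A sketch of this relation (the search function below is a toy stand-in, not Google's API): adding a term to the query must not increase the number of results.

    def search(query, corpus):
        """Toy stand-in for a search service: returns matching documents."""
        terms = query.lower().split()
        return [doc for doc in corpus if all(t in doc.lower() for t in terms)]

    def narrowed_search_test(query, extra_term, corpus):
        results = search(query, corpus)
        narrowed = search(f"{query} {extra_term}", corpus)
        assert len(narrowed) <= len(results)   # partial oracle: results may not grow

    CORPUS = ["test oracle survey", "oracle database guide", "software testing basics"]
    narrowed_search_test("oracle", "test", CORPUS)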

A statistical oracle uses probabilistic characteristics,[18] for example with image analysis where a range of certainty and uncertainty is defined for the test oracle to pronounce a match or otherwise. This would be an example of a quantitative approach in human test oracle.
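A simplified sketch (the threshold and image representation are illustrative assumptions): rather than demanding an exact match, the oracle accepts the output when an aggregate difference falls within a defined uncertainty range.

    def mean_absolute_difference(image_a, image_b):
        """Images as equal-length flat lists of grayscale values in [0, 255]."""
        return sum(abs(a - b) for a, b in zip(image_a, image_b)) / len(image_a)

    def statistical_oracle(actual, reference, threshold=2.0):
        # Pronounce a match only if the average pixel difference is within tolerance.
        return mean_absolute_difference(actual, reference) <= threshold

    print(statistical_oracle([10, 20, 30], [11, 19, 31]))   # True: within tolerance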

A heuristic oracle provides representative or approximate results over a class of test inputs.[19] This would be an example of a qualitative approach in human test oracle.
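A sketch under assumed names (shipping_cost is hypothetical): a few representative inputs are checked exactly, while an approximate property, here that cost never decreases with weight, is checked across the wider class of inputs.

    def shipping_cost(weight_kg):              # system under test
        return 5.0 + 1.5 * weight_kg

    def heuristic_test():
        # Exact checks for a few representative inputs.
        assert shipping_cost(0) == 5.0
        assert shipping_cost(10) == 20.0
        # Heuristic check over a class of inputs: heavier never costs less.
        weights = [w / 2 for w in range(0, 100)]
        costs = [shipping_cost(w) for w in weights]
        assert all(a <= b for a, b in zip(costs, costs[1:]))

    heuristic_test()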

References

2. Howden, W.E. (July 1978). "Theoretical and Empirical Studies of Program Testing". IEEE Transactions on Software Engineering. 4 (4): 293–298. doi:10.1109/TSE.1978.231514.
3. Weyuker, Elaine J. (January 1980). "The Oracle Assumption of Program Testing". Proceedings of the 13th International Conference on System Sciences (ICSS). Honolulu, HI. pp. 44–49.
4. Jalote, Pankaj (2005). An Integrated Approach to Software Engineering. Springer/Birkhäuser. ISBN 0-387-20881-X.
5. Meyer, Bertrand; Fiva, Arno; Ciupa, Ilinca; Leitner, Andreas; Wei, Yi; Stapf, Emmanuel (September 2009). "Programs That Test Themselves". Computer. 42 (9): 46–55. doi:10.1109/MC.2009.296.
6. Barr, Earl T.; Harman, Mark; McMinn, Phil; Shahbaz, Muzammil; Yoo, Shin (November 2014). "The Oracle Problem in Software Testing: A Survey" (PDF). IEEE Transactions on Software Engineering. 41 (5): 507–525. doi:10.1109/TSE.2014.2372785.
7. Ammann, Paul; Offutt, Jeff (2016). Introduction to Software Testing (2nd ed.). Cambridge University Press. ISBN 978-1107172012.
8. Börger, E (1999). "High Level System Design and Analysis Using Abstract State Machines". In Hutter, D; Stephan, W; Traverso, P; Ullman, M (eds.). Applied Formal Methods — FM-Trends 98. Lecture Notes in Computer Science. Vol. 1641. pp. 1–43. CiteSeerX 10.1.1.470.3653. doi:10.1007/3-540-48257-1_1. ISBN 978-3-540-66462-8.
9. Peters, D.K. (March 1998). "Using test oracles generated from program documentation". IEEE Transactions on Software Engineering. 24 (3): 161–173. CiteSeerX 10.1.1.39.2890. doi:10.1109/32.667877.
10. Utting, Mark; Pretschner, Alexander; Legeard, Bruno (2012). "A taxonomy of model-based testing approaches" (PDF). Software Testing, Verification and Reliability. 22 (5): 297–312. doi:10.1002/stvr.456. ISSN 1099-1689.
11. Gaudel, Marie-Claude (2001). "Testing from Formal Specifications, a Generic Approach". In Craeynest, D.; Strohmeier, A (eds.). Reliable Software Technologies — Ada-Europe 2001. Lecture Notes in Computer Science. Vol. 2043. pp. 35–48. doi:10.1007/3-540-45136-6_3. ISBN 978-3-540-42123-8.
12. Weyuker, E.J. (November 1982). "On Testing Non-Testable Programs". The Computer Journal. 25 (4): 465–470. doi:10.1093/comjnl/25.4.465.
13. Peters, Dennis K. (1995). Generating a Test Oracle from Program Documentation (M.Eng. thesis). McMaster University. CiteSeerX 10.1.1.69.4331.
14. Peters, Dennis K.; Parnas, David L. (1994). "Generating a Test Oracle from Program Documentation" (PDF). Proceedings of the 1994 International Symposium on Software Testing and Analysis (ISSTA). ACM Press. pp. 58–65.
15. Robinson, Harry (1999). Finite State Model-Based Testing on a Shoestring. STAR West 1999.
16. Hoffman, Douglas (1998). Analysis of a Taxonomy for Test Oracles. Quality Week 1998.
17. Zhou, Z.Q.; Zhang, S.; Hagenbuchner, M.; Tse, T.H.; Kuo, F.-C.; Chen, T.Y. (2012). "Automated functional testing of online search services". Software Testing, Verification and Reliability. 22 (4): 221–243. doi:10.1002/stvr.437. hdl:10722/123864.
18. Mayer, Johannes; Guderlei, Ralph (2004). "Test Oracles Using Statistical Methods" (PDF). Proceedings of the First International Workshop on Software Quality. Lecture Notes in Informatics. Springer. pp. 179–189.
19. Hoffman, Douglas (1999). Heuristic Test Oracles. Software Testing & Quality Engineering Magazine.

