Safety and liveness properties
Properties of an execution of a computer program—particularly for concurrent and distributed systems—have long been formulated by giving safety properties ("bad things don't happen") and liveness properties ("good things do happen").[1]

A program is totally correct with respect to a precondition P and postcondition Q if any execution started in a state satisfying P terminates in a state satisfying Q. Total correctness is a conjunction of a safety property and a liveness property:[2] the safety property is partial correctness (an execution started in a state satisfying P never terminates in a state that fails to satisfy Q), and the liveness property is termination (every execution started in a state satisfying P does terminate).
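
In standard Hoare-logic notation (an illustrative sketch; the brace and bracket notation below is conventional rather than taken from the sources cited here), with {P} C {Q} denoting partial correctness and [P] C [Q] denoting total correctness of a program C, the decomposition reads:

    [P]\,C\,[Q] \;\Longleftrightarrow\; \underbrace{\{P\}\,C\,\{Q\}}_{\text{safety: partial correctness}} \;\wedge\; \underbrace{(\text{$C$ terminates from every state satisfying } P)}_{\text{liveness: termination}}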

Note that a bad thing is discrete,[3] since it happens at a particular place during execution. A "good thing" need not be discrete, but the liveness property of termination is discrete.

Formal definitions that were ultimately proposed for safety properties[4] and liveness properties[5] demonstrated that this decomposition is not only intuitively appealing but is also complete: every property of an execution is the conjunction of a safety property and a liveness property.[5] Moreover, undertaking the decomposition can be helpful, because the formal definitions enable a proof that different methods must be used for verifying safety properties and for verifying liveness properties.[6][7]

Safety

A safety property proscribes discrete bad things from occurring during an execution.[1] A safety property thus characterizes what is permitted by stating what is prohibited. The requirement that the bad thing be discrete means that a bad thing occurring during execution necessarily occurs at some identifiable point.[5]

Examples of a discrete bad thing that could be used to define a safety property include a deadlock, termination in a state that does not satisfy the required postcondition, and reaching a state in which a specified real-time deadline has passed without an answer having been produced.[5]

An execution of a program can be described formally by giving the infinite sequence of program states that results as execution proceeds, where the last state for a terminating program is repeated infinitely. For a program of interest, let S denote the set of possible program states, S* denote the set of finite sequences of program states, and S^ω denote the set of infinite sequences of program states. The relation σ ≤ τ holds for sequences σ and τ iff σ is a prefix of τ or σ equals τ.[5]
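
Writing σγ for the concatenation of a finite sequence σ with a sequence γ, the prefix relation can be rendered as follows (an illustrative formalization, not a verbatim quotation from [5]):

    \sigma \le \tau \;\Longleftrightarrow\; \sigma = \tau \;\vee\; \exists \gamma \in S^{*} \cup S^{\omega} :\ \sigma\gamma = \tau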

A property of a program is the set of allowed executions.

The essential characteristic of a safety property P is: if some execution σ does not satisfy P, then the defining bad thing for that safety property occurs at some point in σ. Notice that after such a bad thing, if further execution results in an execution σ′, then σ′ also does not satisfy P, since the bad thing in σ also occurs in σ′. We take this inference about the irremediability of bad things to be the defining characteristic for P to be a safety property. Formalizing this in predicate logic gives a formal definition for P being a safety property.[5]
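
Using the notation above, one standard way to state this formal definition (a sketch consistent with Alpern and Schneider's formulation,[5] not a verbatim quotation) is:

    P \subseteq S^{\omega} \text{ is a safety property} \;\Longleftrightarrow\; \forall \sigma \in S^{\omega} :\ \sigma \notin P \Rightarrow \bigl( \exists \beta \in S^{*} :\ \beta \le \sigma \,\wedge\, \forall \tau \in S^{\omega} :\ (\beta \le \tau \Rightarrow \tau \notin P) \bigr)

That is, every execution excluded by P has a finite prefix, the point at which the bad thing has occurred, beyond which no continuation can restore membership in P.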

This formal definition for safety properties implies that if an execution σ satisfies a safety property P, then every prefix of σ (with the last state repeated infinitely) also satisfies P.

Liveness

A liveness property prescribes good things for every execution or, equivalently, describes something that must happen during an execution.[1] The good thing need not be discrete; it might involve an infinite number of steps. Examples of a good thing used to define a liveness property include termination, guaranteed service (every request made during execution eventually receives a response), and starvation freedom (every process repeatedly gets the chance to make progress).[5]

The good thing in the first of these examples (termination) is discrete; in the others it is not.

Producing an answer within a specified real-time bound is a safety property rather than a liveness property. This is because a discrete bad thing is being proscribed: a partial execution that reaches a state where the answer still has not been produced and the value of the clock (a state variable) violates the bound. Deadlock freedom is a safety property: the "bad thing" is a deadlock (which is discrete).
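
To illustrate why a real-time bound yields a safety property, the following sketch (the state representation and all names here are hypothetical, not taken from the cited sources) checks a finite prefix of an execution for the proscribed bad thing; once the bad thing appears in a prefix, no extension of that prefix can satisfy the property.

    # Minimal sketch: detecting the "bad thing" for the safety property
    # "an answer is produced within DEADLINE time units". The dict-based
    # state representation and all names are illustrative assumptions.
    DEADLINE = 10  # hypothetical real-time bound, in abstract time units

    def bad_thing(state):
        # The discrete bad thing: the clock has passed the bound,
        # but no answer has been produced yet.
        return state["clock"] > DEADLINE and state["answer"] is None

    def violates_safety(prefix):
        # A finite prefix violates the safety property as soon as some state
        # in it exhibits the bad thing; no later state can repair this.
        return any(bad_thing(s) for s in prefix)

    prefix = [
        {"clock": 0, "answer": None},
        {"clock": 6, "answer": None},
        {"clock": 12, "answer": None},  # deadline passed, still no answer
    ]
    print(violates_safety(prefix))  # True: every extension also violates it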

Most of the time, knowing that a program eventually does some "good thing" is not satisfactory; we want to know that the program performs the "good thing" within some number of steps or before some deadline. A property that gives a specific bound on when the "good thing" must occur is a safety property (as noted above), whereas the weaker property that merely asserts such a bound exists is a liveness property. Proving such a liveness property is likely to be easier than proving the tighter safety property, because it does not require the kind of detailed accounting that proving the safety property does.

To differ from a safety property, a liveness property cannot rule out any finite prefix α ∈ S*[8] of an execution (since such an α would be a "bad thing" and, thus, would be defining a safety property). That leads to defining a liveness property to be a property that does not rule out any finite prefix.[5]
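
In the same notation (again a sketch consistent with the definition in Alpern and Schneider,[5] not a verbatim quotation), a liveness property is one for which every finite prefix can be extended to an execution that satisfies the property:

    P \subseteq S^{\omega} \text{ is a liveness property} \;\Longleftrightarrow\; \forall \alpha \in S^{*} :\ \exists \tau \in S^{\omega} :\ \alpha \le \tau \,\wedge\, \tau \in P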

This definition does not restrict a good thing to being discrete: the good thing can involve the entire execution, which is of infinite length.

History

Lamport used the terms safety property and liveness property in his 1977 paper[1] on proving the correctness of multiprocess (concurrent) programs. He borrowed the terms from Petri net theory, which was using the terms liveness and boundedness for describing how the assignment of a Petri net's "tokens" to its "places" could evolve; Petri net safety was a specific form of boundedness. Lamport subsequently developed a formal definition of safety for a NATO short course on distributed systems in Munich.[9] It assumed that properties are invariant under stuttering. The formal definition of safety given above appears in a paper by Alpern and Schneider;[5] the connection between the two formalizations of safety properties appears in a paper by Alpern, Demers, and Schneider.[10]

Alpern and Schneider[5] give the formal definition for liveness, accompanied by a proof that every property can be constructed from safety properties and liveness properties. That proof was inspired by Gordon Plotkin's insight that safety properties correspond to closed sets and liveness properties correspond to dense sets in a natural topology on the set of infinite sequences of program states.[11] Subsequently, Alpern and Schneider[12] not only gave a Büchi automaton characterization for the formal definitions of safety properties and liveness properties but also used these automata formulations to show that verification of safety properties requires an invariant and verification of liveness properties requires a well-foundedness argument. The correspondence between the kind of property (safety vs. liveness) and the kind of proof (invariance vs. well-foundedness) was a strong argument that the decomposition of properties into safety and liveness, as opposed to some other partitioning, is a useful one: knowing the type of property to be proved dictates the type of proof that is required.
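
As an informal illustration of the two proof styles (a hypothetical example, not drawn from the cited papers), a loop can be annotated with an invariant, which supports a safety argument, and a variant, which supports a well-foundedness argument for termination:

    # Hypothetical example of the two proof styles, not from the cited papers.
    # Safety is argued with an invariant that holds in every reachable state;
    # liveness (here, termination) is argued with a variant: a natural-number
    # quantity that strictly decreases on every iteration and so cannot
    # decrease forever.
    def integer_sqrt(n):
        assert n >= 0
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
            # Invariant (safety): r*r <= n holds in every reachable state,
            # so no state with an over-large partial result is ever reached.
            assert r * r <= n
            # Variant (well-foundedness): n - r*r is a non-negative integer
            # that strictly decreases each iteration, so the loop terminates.
        return r

    print(integer_sqrt(10))  # 3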

References

  1. Lamport, Leslie (March 1977). "Proving the correctness of multiprocess programs". IEEE Transactions on Software Engineering. SE-3 (2): 125–143. CiteSeerX 10.1.1.137.9454. doi:10.1109/TSE.1977.229904. S2CID 9985552.
  2. Manna, Zohar; Pnueli, Amir (September 1974). "Axiomatic approach to total correctness of programs". Acta Informatica. 3 (3): 243–263. doi:10.1007/BF00288637. S2CID 2988073.
  3. I.e., it has finite duration.
  4. Alford, Mack W.; Lamport, Leslie; Mullery, Geoff P. (3 April 1984). "Basic concepts". Distributed Systems: Methods and Tools for Specification, An Advanced Course. Lecture Notes in Computer Science. Vol. 190. Munich, Germany: Springer Verlag. pp. 7–43. ISBN 3-540-15216-4.
  5. Alpern, Bowen; Schneider, Fred B. (1985). "Defining liveness". Information Processing Letters. 21 (4): 181–185. doi:10.1016/0020-0190(85)90056-0.
  6. Alpern, Bowen; Schneider, Fred B. (1987). "Recognizing safety and liveness". Distributed Computing. 2 (3): 117–126. doi:10.1007/BF01782772. hdl:1813/6567. S2CID 9717112.
  7. The paper[5] received the 2018 Dijkstra Prize ("for outstanding papers on the principles of distributed computing whose significance and impact on the theory and/or practice of distributed computing have been evident for at least a decade"), as the formal decomposition into safety and liveness properties was crucial to subsequent research into proving properties of programs.
  8. S* denotes the set of finite sequences of program states and S^ω the set of infinite sequences of program states.
  9. Alford, Mack W.; Lamport, Leslie; Mullery, Geoff P. (3 April 1984). "Basic concepts". Distributed Systems: Methods and Tools for Specification, An Advanced Course. Lecture Notes in Computer Science. Vol. 190. Munich, Germany: Springer Verlag. pp. 7–43. ISBN 3-540-15216-4.
  10. Alpern, Bowen; Demers, Alan J.; Schneider, Fred B. (November 1986). "Safety without stuttering". Information Processing Letters. 23 (4): 177–180. doi:10.1016/0020-0190(86)90132-8. hdl:1813/6548.
  11. Private communication from Plotkin to Schneider.
  12. Alpern, Bowen; Schneider, Fred B. (1987). "Recognizing safety and liveness". Distributed Computing. 2 (3): 117–126. doi:10.1007/BF01782772. hdl:1813/6567. S2CID 9717112.
