Chaos engineering
Chaos engineering is the discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production.[1]

Concept

In software development, a given software system's ability to tolerate failures while still ensuring adequate quality of service—often generalized as resilience—is typically specified as a requirement. However, development teams often fail to meet this requirement due to factors such as short deadlines or lack of knowledge of the field. Chaos engineering is a technique to meet the resilience requirement.

Chaos engineering can be used to achieve resilience against infrastructure failures, network failures, and application failures.

Operational readiness using chaos engineering

Calculating how much confidence we have in the interconnected, complex systems that are put into production requires operational readiness metrics. Operational readiness can be evaluated using chaos engineering simulations, for example on Kubernetes infrastructure. Approaches for increasing the resilience and operational readiness of a platform include strengthening its backup, restore, network file transfer, and failover capabilities, as well as the overall security of the environment. Gautam Siwach et al. evaluated inducing chaos in a Kubernetes environment by terminating random pods that receive data from edge devices in data centers while processing analytics on a big data network, and inferred the recovery time of the pods as a resilience metric from which an estimated response time can be calculated.[2][3]
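An experiment of this kind can be approximated with the official Kubernetes Python client. The following is a minimal sketch rather than the setup used in the cited study: the namespace, label selector, and timeout are illustrative assumptions, and the script simply deletes one randomly chosen pod, then measures how long it takes for all matching pods to report ready again.

```python
import random
import time

from kubernetes import client, config  # official Kubernetes Python client


def kill_random_pod_and_time_recovery(namespace="demo", selector="app=analytics",
                                      timeout=300.0, poll=2.0):
    """Delete one random pod matching `selector`, then measure how long it
    takes until every matching pod is Running and ready again."""
    config.load_kube_config()                      # assumes ~/.kube/config
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(namespace, label_selector=selector).items
    if not pods:
        raise RuntimeError("no pods match the selector")
    victim = random.choice(pods).metadata.name

    start = time.monotonic()
    v1.delete_namespaced_pod(victim, namespace)    # inject the failure

    while time.monotonic() - start < timeout:
        pods = v1.list_namespaced_pod(namespace, label_selector=selector).items
        names = {p.metadata.name for p in pods}
        all_ready = pods and all(
            p.status.phase == "Running"
            and all(cs.ready for cs in (p.status.container_statuses or []))
            for p in pods
        )
        # Recovered once the victim is gone and every replacement is ready.
        if victim not in names and all_ready:
            return time.monotonic() - start        # recovery time in seconds
        time.sleep(poll)

    raise TimeoutError("pods did not recover within the timeout")


if __name__ == "__main__":
    print(f"recovered in {kill_random_pod_and_time_recovery():.1f} s")
```

Because the deleted pod is recreated by its controller (for example a Deployment), the measured interval is a rough proxy for the pod recovery time used as a resilience metric above.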

History

1983 – Apple

While MacWrite and MacPaint were being developed for the first Apple Macintosh computer, Steve Capps created "Monkey", a desk accessory which randomly generated user interface events at high speed, simulating a monkey frantically banging the keyboard and moving and clicking the mouse. It was promptly put to use for debugging by generating errors for programmers to fix, because automated testing was not possible; the first Macintosh had too little free memory space for anything more sophisticated.[4]

1992 – Prologue

While ABAL2 and SING were being developed for the first graphical versions of the PROLOGUE operating system, Iain James Marshall created "La Matraque", a desk accessory which generated random sequences of both valid and invalid graphical interface events at high speed, thus testing the critical edge behaviour of the underlying graphics libraries. The program was run for days on end prior to production delivery, ensuring the required degree of resilience. It was subsequently extended to cover the database and other file-access instructions of the ABAL language and verify their resilience as well. A variation of this tool is currently employed for the qualification of the modern-day version, known as OPENABAL.

2003 – Amazon

While working to improve website reliability at Amazon, Jesse Robbins created "Game day",[5] an initiative that increases reliability by purposefully creating major failures on a regular basis. Robbins has said it was inspired by firefighter training and by research in other fields such as complex systems and reliability engineering.[6]

2006 – Google

While at Google, Kripa Krishnan created a similar program to Amazon's Game day (see above) called "DiRT".[6][7][8] Jason Cahoon, a Site Reliability Engineer at Google,[9] contributed a chapter on Google DiRT[10] to the "Chaos Engineering" book[11] and described the system at the GOTOpia 2021 conference.[12]

2011 – Netflix

While overseeing Netflix's migration to the cloud in 2011, Nora Jones, Casey Rosenthal, and Greg Orzell[11][13][14] expanded the discipline by setting up a tool that would cause breakdowns in the production environment, the environment used by Netflix customers. The intent was to move from a development model that assumed no breakdowns to a model in which breakdowns were considered inevitable, driving developers to consider built-in resilience to be an obligation rather than an option:

"At Netflix, our culture of freedom and responsibility led us not to force engineers to design their code in a specific way. Instead, we discovered that we could align our teams around the notion of infrastructure resilience by isolating the problems created by server neutralization and pushing them to the extreme. We have created Chaos Monkey, a program that randomly chooses a server and disables it during its usual hours of activity. Some will find that crazy, but we could not depend on the random occurrence of an event to test our behavior in the face of the very consequences of this event. Knowing that this would happen frequently has created a strong alignment among engineers to build redundancy and process automation to survive such incidents, without impacting the millions of Netflix users. Chaos Monkey is one of our most effective tools to improve the quality of our services."[15]

By regularly "killing" random instances of a software service, it was possible to test a redundant architecture to verify that a server failure did not noticeably impact customers.

The concept of chaos engineering is close to that of Phoenix Servers, introduced by Martin Fowler in 2012.[16]

Chaos engineering tools

Chaos Monkey


Chaos Monkey is a tool invented in 2011 by Netflix to test the resilience of its IT infrastructure.[13] It works by intentionally disabling computers in Netflix's production network to test how the remaining systems respond to the outage. Chaos Monkey is now part of a larger suite of tools called the Simian Army designed to simulate and test responses to various system failures and edge cases.

The code behind Chaos Monkey was released by Netflix in 2012 under an Apache 2.0 license.[17][18]
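Conceptually, Chaos Monkey's core behaviour is to pick one production instance at random and terminate it during working hours. The sketch below is not Netflix's released implementation; it is a minimal illustration using boto3, assuming AWS credentials are configured and that the target service's instances carry a hypothetical Service tag.

```python
import random

import boto3  # AWS SDK for Python


def terminate_random_instance(service_tag="checkout", region="us-east-1"):
    """Chaos Monkey-style experiment: terminate one randomly chosen running
    EC2 instance tagged Service=<service_tag>."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Service", "Values": [service_tag]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i for r in reservations for i in r["Instances"]]
    if not instances:
        print("nothing to terminate")
        return None

    victim = random.choice(instances)["InstanceId"]
    ec2.terminate_instances(InstanceIds=[victim])
    print(f"terminated {victim}; watch dashboards for customer impact")
    return victim


if __name__ == "__main__":
    terminate_random_instance()
```

A production tool would add an opt-in schedule restricted to business hours, so that engineers are on hand to respond, and would record every termination for later analysis.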

The name "Chaos Monkey" is explained in the book Chaos Monkeys by Antonio Garcia Martinez:[19]

Imagine a monkey entering a 'data center', these 'farms' of servers that host all the critical functions of our online activities. The monkey randomly rips out cables, destroys devices, and flings back everything that comes to hand [i.e. flings excrement]. The challenge for IT managers is to design the information system they are responsible for so that it can work despite these monkeys, of which no one ever knows when they will arrive or what they will destroy.

Simian Army

The Simian Army[18] is a suite of tools developed by Netflix to test the reliability, security, or resilience of its Amazon Web Services infrastructure and includes the following tools:[20]

At the very top of the Simian Army hierarchy, Chaos Kong drops a full AWS "Region".[21] Though rare, loss of an entire region does happen, and Chaos Kong simulates a system's response to and recovery from this type of event.

Chaos Gorilla drops a full Amazon "Availability Zone" (one or more entire data centers serving a geographical region).[22]
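Zone- and region-level experiments are typically simulated rather than triggered by destroying real facilities. As a rough analogue of Chaos Gorilla (not its actual implementation), the sketch below assumes a multi-zone Kubernetes cluster and uses the Kubernetes Python client to cordon every node in one availability zone and delete the pods running there, forcing workloads to reschedule into the surviving zones; the zone label value is an assumption that varies per cluster.

```python
from kubernetes import client, config

ZONE_LABEL = "topology.kubernetes.io/zone"   # standard well-known node label


def simulate_zone_outage(zone="us-east-1a"):
    """Cordon all nodes in one zone and delete their pods, forcing
    workloads to reschedule into the remaining zones."""
    config.load_kube_config()
    v1 = client.CoreV1Api()

    nodes = v1.list_node(label_selector=f"{ZONE_LABEL}={zone}").items
    for node in nodes:
        # Mark the node unschedulable (equivalent to `kubectl cordon`).
        v1.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})

        # Delete the pods on that node so their controllers recreate them
        # elsewhere. (A production tool would use the eviction API to
        # honour PodDisruptionBudgets.)
        pods = v1.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={node.metadata.name}"
        ).items
        for pod in pods:
            v1.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace)

    print(f"cordoned {len(nodes)} nodes in zone {zone}")


if __name__ == "__main__":
    simulate_zone_outage()
```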

Proofdock chaos engineering platform

Proofdock is a chaos engineering platform that focuses on the Microsoft Azure platform and Azure DevOps services. Users can inject failures at the infrastructure, platform, and application level.[23]

Steadybit

Founded in 2019, Steadybit is a chaos and resilience engineering platform that aims to reduce downtime and improve system visibility so that issues are detected early. Steadybit also maintains the Reliability Hub on GitHub, an open-source collection of chaos engineering extensions.[23]

Gremlin

Gremlin is a "failure-as-a-service" platform.[24]

Facebook Storm

To prepare for the loss of a data center, Facebook regularly tests the resilience of its infrastructure to extreme events. Known as the Storm Project, the program simulates massive data center failures.[25]

Days of Chaos

Voyages-sncf.com created a "Day of Chaos"[26] in 2017, gamifying the simulation of pre-production failures.[27] They presented their results at the 2017 DevOps REX conference.[28]


Notes and references

1. "Principles of Chaos Engineering". principlesofchaos.org. Retrieved 21 October 2017.
2. Siwach, Gautam (29 November 2022). Evaluating operational readiness using chaos engineering simulations on Kubernetes architecture in Big Data (PDF). 2022 International Conference on Smart Applications, Communications and Networking (SmartNets). Botswana. pp. 1–7. Retrieved 3 January 2023.
3. "Machine Learning Podcast Host and Technology Influencer: Gautam Siwach". LA Weekly. 7 October 2022.
4. Hertzfeld, Andy. "Monkey Lives". Folklore. Retrieved 11 September 2023.
5. "Game day". AWS Well-Architected Framework Glossary. Amazon. 31 December 2020. Retrieved 25 February 2024.
6. Limoncelli, Tom (13 September 2012). "Resilience Engineering: Learning to Embrace Failure". ACM Queue. 10 (9) – via ACM.
7. Krishnan, Kripa (16 September 2012). "Weathering the Unexpected". ACM Queue. 10 (9): 30–37. doi:10.1145/2367376.2371516 – via ACM.
8. Krishnan, Kripa (8–13 November 2015). 10 Years of Crashing Google. 2015 USENIX LISA. Washington, DC. Retrieved 25 February 2024.
9. Beyer, Betsy; Jones, Chris (2016). Site Reliability Engineering (1st ed.). O'Reilly Media. ISBN 9781491929124. OCLC 1291707340.
10. "Chapter 5. Google DiRT: Disaster Recovery Testing". Chaos Engineering book website. O'Reilly Media. 30 April 2020. Retrieved 25 February 2024.
11. Jones, Nora; Rosenthal, Casey (2020). Chaos Engineering (1st ed.). O'Reilly Media. ISBN 9781492043867. OCLC 1143015464.
12. Cahoon, Jason (2 June 2021). "WATCH: The DiRT on Chaos Engineering at Google" (video). youtube.com. GOTO Conferences.
13. "The Netflix Simian Army". Netflix Tech Blog. Medium. 19 July 2011. Retrieved 21 October 2017.
14. US 20120072571, Orzell, Gregory S. & Izrailevsky, Yury, "Validating the resiliency of networked applications", published 22 March 2012.
15. "Netflix Chaos Monkey Upgraded". Netflix Tech Blog. Medium. 19 October 2016. Retrieved 21 October 2017.
16. "PhoenixServer". martinfowler.com. Martin Fowler. 10 July 2012. Retrieved 14 January 2021.
17. "Netflix libère Chaos Monkey dans la jungle Open Source" [Netflix releases Chaos Monkey into the open source jungle]. Le Monde Informatique (in French). Retrieved 7 November 2017.
18. "SimianArmy: Tools for your cloud operating in top form. Chaos Monkey is a resiliency tool that helps applications tolerate random instance failures". Netflix, Inc. 20 October 2017. Retrieved 21 October 2017.
19. "Mais qui sont ces singes du chaos ?" [But who are these monkeys of chaos?]. 15marches (in French). 25 July 2017. Retrieved 21 October 2017.
20. SemiColonWeb (8 December 2015). "Infrastructure : quelles méthodes pour s'adapter aux nouvelles architectures Cloud ?" [Infrastructure: which methods to adapt to the new cloud architectures?]. D2SI Blog (in French). Archived from the original on 21 October 2017. Retrieved 7 November 2017.
21. "Chaos Engineering Upgraded". medium.com. 19 April 2017. Retrieved 10 April 2020.
22. "The Netflix Simian Army". medium.com. Retrieved 12 December 2017.
23. Miller, Ron (22 September 2022). "Steadybit wants developers involved in chaos engineering before production". TechCrunch.
24. "Gremlin raises $18 million to expand 'failure-as-a-service' testing platform". VentureBeat. 28 September 2018. Retrieved 24 October 2018.
25. Hof, Robert (11 September 2016). "Interview: How Facebook's Storm Heads Off Project Data Center Disasters". Forbes. Retrieved 21 October 2017.
26. "Days of Chaos". Days of Chaos (in French). Retrieved 18 February 2022.
27. "DevOps: feedback from Voyages-sncf.com". Moderator's Blog (in French). 17 March 2017. Retrieved 21 October 2017.
28. devops REX (3 October 2017). "[devops REX 2017] Days of Chaos : le développement de la culture devops chez Voyages-Sncf.com à l'aide de la gamification" [Days of Chaos: developing the devops culture at Voyages-Sncf.com through gamification]. Retrieved 18 February 2022.