

Bernoulli trial







From Wikipedia, the free encyclopedia
 


[Figure: graphs of the probability P of not observing independent events, each of probability p, after n Bernoulli trials, plotted against np for various p. Three examples are shown:
Blue curve: throwing a six-sided die 6 times gives a 33.5% chance that a 6 (or any other given number) never turns up; as n increases, the probability of a 1/n-chance event never appearing after n tries rapidly converges to 1/e ≈ 0.368.
Grey curve: to get a 50-50 chance of throwing a Yahtzee (5 cubic dice all showing the same number) requires about 0.693 × 1296 ≈ 898 throws.
Green curve: drawing a card from a deck of playing cards without jokers 100 (≈ 1.92 × 52) times with replacement gives an 85.7% chance of drawing the ace of spades at least once.]

In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted.[1] It is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed such trials in his Ars Conjectandi (1713).[2]

The mathematical formalization and advanced formulation of the Bernoulli trial is known as the Bernoulli process.

Since a Bernoulli trial has only two possible outcomes, it can be framed as a "yes or no" question, for example: Did the coin land heads? Did the die show a six?

Therefore, success and failure are merely labels for the two outcomes, and should not be construed literally. The term "success" in this sense consists in the result meeting specified conditions; it is not a value judgement. More generally, given any probability space, for any event (set of outcomes), one can define a Bernoulli trial, corresponding to whether the event occurred or not (event or complementary event). Examples of Bernoulli trials include flipping a coin (heads as "success"), rolling a die where a six counts as "success" and any other number as "failure", and asking a polled voter whether they will vote for a particular candidate.

Definition[edit]

Independent repeated trials of an experiment with exactly two possible outcomes are called Bernoulli trials. Call one of the outcomes "success" and the other outcome "failure". Let p be the probability of success in a Bernoulli trial, and q be the probability of failure. Then the probability of success and the probability of failure sum to one, since these are complementary events: "success" and "failure" are mutually exclusive and exhaustive. Thus, one has the following relations:

    p = 1 − q,    q = 1 − p,    p + q = 1.

Alternatively, these can be stated in terms of odds: given probability p of success and q of failure, the odds for are p : q and the odds against are q : p. These can also be expressed as numbers, by dividing, yielding the odds for, o_f, and the odds against, o_a:

    o_f = p/q = p/(1 − p) = (1 − q)/q,
    o_a = q/p = (1 − p)/p = q/(1 − q).

These are multiplicative inverses, so they multiply to 1, with the following relations:

    o_f = 1/o_a,    o_a = 1/o_f,    o_f · o_a = 1.

In the case that a Bernoulli trial represents an event from finitely many equally likely outcomes, where S of the outcomes are success and F of the outcomes are failure, the odds for are S : F and the odds against are F : S. This yields the following formulas for probability and odds:

    p = S/(S + F),    q = F/(S + F),
    o_f = S/F,    o_a = F/S.

Here the odds are computed by dividing the number of outcomes, not the probabilities, but the proportion is the same, since these ratios only differ by multiplying both terms by the same constant factor.
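These relations can be checked numerically. The sketch below (the counts S = 4 and F = 2 are illustrative, not from the article) uses exact rational arithmetic to confirm that odds computed from counts and from probabilities agree:

```python
from fractions import Fraction

S, F = 4, 2                      # equally likely success / failure outcomes
p = Fraction(S, S + F)           # probability of success
q = Fraction(F, S + F)           # probability of failure

assert p + q == 1                # complementary events sum to one
odds_for = p / q                 # dividing probabilities...
odds_against = q / p
assert odds_for == Fraction(S, F)        # ...gives the same ratio as counts
assert odds_for * odds_against == 1      # multiplicative inverses
```

Using Fraction rather than floats keeps every identity exact, so the assertions verify the relations rather than approximate them.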

Random variables describing Bernoulli trials are often encoded using the convention that 1 = "success", 0 = "failure".

Closely related to a Bernoulli trial is a binomial experiment, which consists of a fixed number n of statistically independent Bernoulli trials, each with a probability of success p, and counts the number of successes. A random variable corresponding to a binomial experiment is denoted by B(n, p), and is said to have a binomial distribution. The probability of exactly k successes in the experiment B(n, p) is given by:

    P(k) = C(n, k) p^k q^(n − k),

where C(n, k) is a binomial coefficient.
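This formula is straightforward to compute directly; the sketch below uses Python's standard-library math.comb for the binomial coefficient (the function name binomial_pmf is illustrative, not a standard API):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent Bernoulli trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Sanity check: the probabilities over all k = 0..n sum to 1
# (here n = 4 trials with success probability p = 0.3).
total = sum(binomial_pmf(k, 4, 0.3) for k in range(5))
```

The sum-to-one check exercises the complementary-event identity p + q = 1 raised to the n-th power via the binomial theorem.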

Bernoulli trials may also lead to negative binomial distributions (which count the number of successes in a series of repeated Bernoulli trials until a specified number of failures is seen), as well as various other distributions.

When multiple Bernoulli trials are performed, each with its own probability of success, these are sometimes referred to as Poisson trials.[3]
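The Poisson-trials generalization can be made concrete with a short dynamic program (a sketch; poisson_binomial_pmf is an illustrative name, not a standard API). Each pass folds one trial's own success probability into the distribution of the running success count, yielding what is known as the Poisson binomial distribution:

```python
def poisson_binomial_pmf(probs):
    """Return [P(0 successes), ..., P(len(probs) successes)] when
    trial i succeeds independently with probability probs[i]."""
    dist = [1.0]                      # zero trials: certainly zero successes
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            new[k] += mass * (1 - p)  # this trial fails: count unchanged
            new[k + 1] += mass * p    # this trial succeeds: count + 1
        dist = new
    return dist

# With equal probabilities it reduces to the ordinary binomial distribution.
dist = poisson_binomial_pmf([0.5] * 4)
```

When all the probabilities are equal, dist matches the binomial pmf for B(n, p), which makes the reduction to the standard Bernoulli-trial case easy to verify.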

Examples[edit]

Tossing coins[edit]

Consider the simple experiment where a fair coin is tossed four times. Find the probability that exactly two of the tosses result in heads.

Solution[edit]

A representation of the possible outcomes of flipping a fair coin four times in terms of the number of heads. As can be seen, the probability of getting exactly two heads in four flips is 6/16 = 3/8, which matches the calculations.

For this experiment, let a heads be defined as a success and a tails as a failure. Because the coin is assumed to be fair, the probability of success is p = 1/2. Thus, the probability of failure, q, is given by

    q = 1 − p = 1 − 1/2 = 1/2.

Using the equation above, the probability of exactly two tosses out of four total tosses resulting in a heads is given by:

    P(2) = C(4, 2) p^2 q^2 = 6 × (1/2)^2 × (1/2)^2 = 6/16 = 3/8.
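As a quick check of this arithmetic (a sketch, not part of the article), exact fractions reproduce the 3/8:

```python
from fractions import Fraction
from math import comb

# P(exactly 2 heads in 4 fair tosses): heads = success, p = q = 1/2.
p = q = Fraction(1, 2)
prob = comb(4, 2) * p**2 * q**2   # C(4, 2) = 6 equally likely arrangements
```

The 6 in C(4, 2) counts the arrangements of two heads among four tosses, matching the outcome diagram in the figure above.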

Rolling dice[edit]

What is the probability that, when three independent fair six-sided dice are rolled, exactly two yield sixes?

Solution[edit]

Probabilities of rolling k sixes from n independent fair dice, with crossed-out dice denoting non-six rolls; the case of 2 sixes out of 3 dice is circled

On one die, the probability of rolling a six is p = 1/6. Thus, the probability of not rolling a six is q = 1 − p = 5/6.

As above, the probability of exactly two sixes out of three rolls is

    P(2) = C(3, 2) p^2 q = 3 × (1/6)^2 × (5/6) = 15/216 = 5/72.
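The same check for the dice example (again a sketch using exact fractions):

```python
from fractions import Fraction
from math import comb

# P(exactly 2 sixes in 3 fair rolls): rolling a six = success.
p = Fraction(1, 6)                # probability of rolling a six
q = 1 - p                         # probability of any other face, 5/6
prob = comb(3, 2) * p**2 * q      # 3 ways to choose which two dice show sixes
```
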

See also[edit]

References[edit]

1. ^ Papoulis, A. (1984). "Bernoulli Trials". Probability, Random Variables, and Stochastic Processes (2nd ed.). New York: McGraw-Hill. pp. 57–63.
2. ^ Uspensky, James Victor (1937). Introduction to Mathematical Probability. New York: McGraw-Hill. p. 45.
3. ^ Motwani, Rajeev; Raghavan, Prabhakar (1995). Randomized Algorithms. New York: Cambridge University Press. pp. 67–68.
External links[edit]


    Retrieved from "https://en.wikipedia.org/w/index.php?title=Bernoulli_trial&oldid=1220990851"


    This page was last edited on 27 April 2024, at 04:50 (UTC).

Text is available under the Creative Commons Attribution-ShareAlike License 4.0; additional terms may apply.