This article is rated Start-class on Wikipedia's content assessment scale.
Although I know this well, have used it many times, and have developed and implemented a separate EM algorithm for my own case, I am not able to follow the explanation that the article opens with. The opening formula,

<math>\mathrm{P}(X=x\ \mathrm{and}\ Y=y) = \mathrm{P}(Y=y \mid X=x) \cdot \mathrm{P}(X=x) = \mathrm{P}(X=x \mid Y=y) \cdot \mathrm{P}(Y=y)</math>,

is confusing at the last equals sign, where it tries to illustrate the symmetry between X and Y without explanation. At minimum it should be typeset with the <math> ... </math> tag. Niubrad (talk) 04:40, 21 October 2020 (UTC)
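For readers following this thread: the two factorizations being discussed are both instances of the chain rule, and equating them is exactly what yields Bayes' rule, which may be why the article states both. A short derivation:

```latex
\begin{align}
P(X=x,\, Y=y) &= P(Y=y \mid X=x)\, P(X=x) \\
              &= P(X=x \mid Y=y)\, P(Y=y)
\end{align}

% Equating the two right-hand sides and dividing by P(X=x), assuming P(X=x) > 0:
\begin{align}
P(Y=y \mid X=x) = \frac{P(X=x \mid Y=y)\, P(Y=y)}{P(X=x)}
\end{align}
```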
The formulae in the latest version that I see (dated 28 January 2009) are all erroneous. I am reverting to the previous version, where at least one can read the formulae! Noyder (talk) 12:27, 28 January 2009 (UTC)
I'm trying to find the answer to a problem I'm having. I collect Yu-Gi-Oh! cards, and I was making a spreadsheet to evaluate the probability of drawing cards out of packs. I have the ratios for each card and the probability of pulling each out of a single pack, but I'm finding it difficult to work out the probability of drawing every card I want from the list. I'm guessing the solution has to do with joint probability.
So far the data looks like this:
126 cards total in the set.
There are 9 cards per pack.
24 packs per box.
2 Secret Rares (probability of 1:31 packs)
10 Ultra Rares (probability of 1:12 packs)
10 Super Rares (probability of 1:6 packs)
22 Rares (probability of 5:7 packs)
82 commons (probability of 8:1 packs)
I want:
3 specific Commons
7 specific rares
1 specific super rare
1 specific ultra rare
1 specific secret rare
What are my odds of getting everything I want in a box of 24 packs?
174.24.99.41 (talk) 22:23, 27 July 2011 (UTC) Dragula42
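A rough way to estimate this, under strong simplifying assumptions (every card of a given rarity is equally likely in its slot, pulls are independent across packs and across cards, and the listed ratios are per-pack rates), is to compute for each wanted card the chance of seeing it at least once in 24 packs and then multiply those chances together. The per-pack rates below are read off the ratios listed above; they are modeling assumptions, not confirmed pack mechanics:

```python
# Rough estimate: P(get every wanted card in a box of 24 packs).
# Assumptions (not confirmed pack mechanics): each card of a rarity
# is equally likely, and pulls are independent across packs and cards.

PACKS = 24

# (number of specific cards wanted, per-pack chance of one SPECIFIC card)
wanted = [
    (3, 8 / 82),          # commons: 8 commons per pack, 82 in set
    (7, (5 / 7) / 22),    # rares: 5 rares per 7 packs, 22 in set
    (1, (1 / 6) / 10),    # super rares: 1 per 6 packs, 10 in set
    (1, (1 / 12) / 10),   # ultra rares: 1 per 12 packs, 10 in set
    (1, (1 / 31) / 2),    # secret rares: 1 per 31 packs, 2 in set
]

p_all = 1.0
for count, p_per_pack in wanted:
    # chance a specific card shows up at least once in 24 packs
    p_hit = 1 - (1 - p_per_pack) ** PACKS
    p_all *= p_hit ** count

print(f"Approximate probability: {p_all:.6f}")
```

Under these assumptions the answer comes out well under one percent, and the seven specific rares are the main bottleneck. Treating the cards as independent slightly overstates the true probability, since cards within the same rarity slot compete with each other.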
To anyone qualified to edit this article (definitely not me!), it is in serious need of some good examples and better description, as well as, perhaps, an explanation as to why this is important and where it fits in with the rest of statistics! — Preceding unsigned comment added by 99.181.61.118 (talk) 20:19, 2 August 2011 (UTC)
As you can see from this link, there are two ways that the joint probability distribution is represented. Please mention both. Aditya 09:43, 12 March 2017 (UTC) — Preceding unsigned comment added by Aditya8795 (talk • contribs)
I propose adding a section including the following (suggestions for improvements or expansions welcome):
Given two random variables with known marginal distributions, their joint distribution cannot be uniquely determined from the marginals alone. Suppose, for example, that we have two binary random variables X and Y, each equally likely to take either of its two values. All of the following are valid joint probability distributions for X and Y, and each is consistent with the same marginals.
In the case where X and Y are independent (and hence uncorrelated), the joint distribution is the product of the marginals:
| Y \ X | x1 | x2 | py(Y) ↓ |
| --- | --- | --- | --- |
| y1 | 1/4 | 1/4 | 1/2 |
| y2 | 1/4 | 1/4 | 1/2 |
| px(X) → | 1/2 | 1/2 | 1 |
If X and Y are perfectly correlated, the joint distribution would look like
| Y \ X | x1 | x2 | py(Y) ↓ |
| --- | --- | --- | --- |
| y1 | 1/2 | 0 | 1/2 |
| y2 | 0 | 1/2 | 1/2 |
| px(X) → | 1/2 | 1/2 | 1 |
If X and Y are perfectly negatively correlated, the joint distribution would look like
| Y \ X | x1 | x2 | py(Y) ↓ |
| --- | --- | --- | --- |
| y1 | 0 | 1/2 | 1/2 |
| y2 | 1/2 | 0 | 1/2 |
| px(X) → | 1/2 | 1/2 | 1 |
In this example, there are actually infinitely many valid joint probability distributions that could be created given only the marginal distributions of X and Y.
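To back up that claim, a quick sketch (illustrative only, not proposed article content) checks that all three tables above reproduce the same marginals, and that any convex mixture of two valid joints does too, which is one way to see that infinitely many valid joints exist:

```python
import numpy as np

# The three joint tables from the proposal (rows = y1, y2; cols = x1, x2).
independent = np.array([[0.25, 0.25], [0.25, 0.25]])
correlated  = np.array([[0.50, 0.00], [0.00, 0.50]])
anticorr    = np.array([[0.00, 0.50], [0.50, 0.00]])

for joint in (independent, correlated, anticorr):
    assert np.allclose(joint.sum(axis=0), [0.5, 0.5])  # marginal of X
    assert np.allclose(joint.sum(axis=1), [0.5, 0.5])  # marginal of Y

# Any convex mixture of valid joints is itself a valid joint with the
# same marginals, so the marginals admit infinitely many joints.
t = 0.3  # any t in [0, 1] works
mix = t * correlated + (1 - t) * anticorr
assert np.allclose(mix.sum(axis=0), [0.5, 0.5])
assert np.allclose(mix.sum(axis=1), [0.5, 0.5])
print("all three joints (and their mixtures) share the same marginals")
```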