
M11. Some basic terms of Probability:

Probability: A ratio that compares how many times an outcome (result) can occur to the total number of possible outcomes (results).
Probability$(P)$: $\frac{\text{total number of favourable cases }(m)}{\text{total number of cases }(n)}$
1) Experiment:
- A scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact. 
A process that produces one of several possible outcomes (cases) as a result is known as an experiment. If the result is not unique but may be any one of the possible outcomes, the experiment is called a random experiment.
2) Trial and Events:
Trial: The performance of a random experiment. 
Events: A set of outcomes (results) of an experiment; an important incident (an occurrence). 
3) Exhaustive cases:
An event is said to be exhaustive if we can identify all the possible outcomes. 
Example: In rolling a die, we can identify all the possible outcomes $(i.e.\; 1, 2, 3, 4, 5, 6)$. Thus the total number of exhaustive cases in rolling a die is $6$.
4) Equally likely cases:
Cases are equally likely if any one of the possible outcomes may occur, but no single case can be expected to occur in preference to the others. 
Example: In rolling a die, each of the six numbers $(1, 2, 3, 4, 5, 6)$ marked on its faces is equally likely to turn up.
5) Mutually exclusive cases:
Cases are mutually exclusive if it is not possible for all of them to occur at the same time (OR, if one and only one of them can occur in a single trial).
Example: In tossing a coin, it is not possible to get both head and tail at the same time. So head and tail are two mutually exclusive events. 
6) Favourable cases:
The cases in which the happening of an event satisfies the given condition are known as favourable cases. 
Example: In rolling a die, the number of cases favourable to getting an even number is $3$, i.e. $(2, 4, 6)$.
7) Dependent cases:
If $A$ and $B$ are two events and the occurrence of $A$ affects the probability of the occurrence of $B$, the events are said to be dependent. (OR, if the occurrence of one event is affected by the occurrence of the other).
Example: Imagine a bag containing $6$ white balls and $4$ red balls. The chance of getting a white ball on the first draw is $\frac{6}{10}$. If the ball drawn is not replaced and we draw again, the probability of getting a white ball is $\frac{5}{9}$. Hence, the probability of getting a $2^{nd}$ white ball depends upon the outcome of the $1^{st}$ draw.
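The ball-drawing example above can be checked with a short Python sketch (not part of the original post) using exact fractions:

```python
from fractions import Fraction

# Bag with 6 white and 4 red balls, as in the example above.
white, red = 6, 4
total = white + red

# Probability that the first ball drawn is white.
p_first_white = Fraction(white, total)            # 6/10 = 3/5

# Without replacement, one white ball is gone, so the
# probability for the second draw changes: dependent events.
p_second_white = Fraction(white - 1, total - 1)   # 5/9

print(p_first_white, p_second_white)  # 3/5 5/9
```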
8) Independent cases:
If $A$ and $B$ are two events and the occurrence of $A$ does not affect the probability of the occurrence of $B$, the events are said to be independent.  
Example: If a coin is tossed and a die is rolled, the head turning up on the coin does not affect getting a $6$ on the die.
Return to Main Menu

M11. Probability (Mathematical terms):

"Something likely to happen !!"
1. Classical (Priori) Definition of Probability:
If $n =$ number of exhaustive, mutually exclusive and equally likely cases, and
    $m =$ number of cases favourable to an event $(E)$
Then Probability of the happening of an event $(E)$ denoted by $P(E)$ is defined by,
$P(E) = \frac{m}{n}$ ................... (i)
The Probability $P(E)$ of happening of an event $E$ satisfies the following property:
$0 \leq  P(E) \leq 1$ 
Case 1: If $E$ is an impossible event, then $P(E) = 0$
             If $E$ is a sure event, then $P(E) = 1$
Case 2: The sum of the probabilities of the occurrence $P(E)$, and non-occurrence $P(\bar{E})$ of an event is unity. 
i.e. $P(E) + P(\bar{E}) = 1$
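As a quick illustration (my own sketch, not from the original post), the classical definition and the complement rule can be verified for the event "an even number on a fair die":

```python
from fractions import Fraction

# Rolling a fair die: n = 6 exhaustive, equally likely cases.
sample_space = {1, 2, 3, 4, 5, 6}
event = {x for x in sample_space if x % 2 == 0}  # favourable cases: {2, 4, 6}

# Classical definition: P(E) = m / n.
p = Fraction(len(event), len(sample_space))
print(p)      # 1/2

# Complement rule: P(E) + P(not E) = 1.
p_not = 1 - p
print(p + p_not)  # 1
```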
2. Probability:
If $S$ is the sample space of a random experiment,
$n(S)$ = number of sample points of the random experiment
$n(E)$ = number of sample points favourable to an event $(E)$, Then, 
The Probability of happening an event $P(E)$ is defined by:
$P(E) = \frac{n(E)}{n(S)}$ ............................ (ii) 

3. Two basic laws of Probability:
i) Addition theorem:
If $A$ and $B$ are two events with their respective Probabilities $P(A)$ & $P(B)$. 
» Then the probability of occurrence of at least one of these two events denoted by $P(A \cup B)$ is given by:
$P(A$ or $B)$ = $P(A \cup B)$ = $P(A) + P(B) - P(A \cap B)$ .......(iii)
where $P(A \cap B)$ is the probability of the simultaneous occurrence of the events $A$ and $B$ (i.e common to $A$ and $B$). 
ii) Multiplication theorem:
If two events $A$ and $B$ are independent, then the probability of their simultaneous occurrence is equal to the product of their individual probabilities. 
$P(A$ and $B)$ = $P(A \cap B)$ = $P(A) . P(B) $ ...........(iv)
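Both laws can be demonstrated with a small Python sketch (an illustration of my own, not from the original post), using a standard 52-card deck for the addition theorem and the coin-and-die example for the multiplication theorem:

```python
from fractions import Fraction
from itertools import product

# Addition theorem: draw one card from a 52-card deck.
# A = "heart" (13 cards), B = "king" (4 cards); they can occur
# together, since the king of hearts belongs to both events.
p_a = Fraction(13, 52)
p_b = Fraction(4, 52)
p_a_and_b = Fraction(1, 52)
p_a_or_b = p_a + p_b - p_a_and_b
print(p_a_or_b)  # 4/13

# Multiplication theorem: toss a coin and roll a die (independent).
coin, die = ["H", "T"], range(1, 7)
outcomes = list(product(coin, die))  # 12 equally likely pairs
favourable = [(c, d) for c, d in outcomes if c == "H" and d == 6]
p_joint = Fraction(len(favourable), len(outcomes))
print(p_joint)                        # 1/12
print(Fraction(1, 2) * Fraction(1, 6))  # P(A) * P(B) = 1/12
```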

4. Permutation and Combination:
i) Permutation: The arrangement of objects in some order.
If $n$ = total number of objects
    $r$ = number of objects taken at a time
» The number of permutations of a set of $(n)$, different objects taken $(r)$ at a time denoted by $P(n,r)$ or $^{n}\textrm{P}_r$ is:
$^{n}\textrm{P}_r$ = $P(n,r)$ = $\frac{n!}{(n-r)!}; (r\leq n)$ .......................... (v) 
Where $n!$ = factorial $n = 1 \times 2 \times 3 \times \cdots \times n.$
Also,
$P(n,n)$ = $^{n}\textrm{P}_n$ = $n!$; $(0! = 1)$

» The number of permutations of a set of $n$ objects taken all at a time, where $p$ of them are of one kind, $q$ of them of a second kind, and $r$ of them of a third kind, is:
Total number of permutation = $\frac{n!}{p!q!r!}$ .......................(vi)
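A short Python sketch (my own, not from the original post) computes both permutation formulas and cross-checks $^{n}\textrm{P}_r$ by enumeration:

```python
from math import factorial
from itertools import permutations

def nPr(n, r):
    """Number of permutations of n objects taken r at a time: n!/(n-r)!."""
    return factorial(n) // factorial(n - r)

print(nPr(5, 2))  # 20
# Cross-check by enumerating the arrangements themselves.
print(len(list(permutations("ABCDE", 2))))  # 20

# Permutations with repeated objects, e.g. the letters of "BANANA":
# n = 6 letters, with A repeated 3 times and N repeated 2 times.
print(factorial(6) // (factorial(3) * factorial(2)))  # 60
```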
ii) Combination: The selection of objects without regard to any order of arrangement.
If $n$ = total number of objects
    $r$ = number of objects taken at a time
» The total number of combinations of $n$ objects taken $r$ at a time,
$C(n,r)$ = $\frac{n!}{(n-r)!r!}$ ................(vii)
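The combination formula can likewise be sketched in Python (an illustration of my own) and checked against the standard library:

```python
from math import comb, factorial

def nCr(n, r):
    """Number of combinations of n objects taken r at a time: n!/((n-r)! r!)."""
    return factorial(n) // (factorial(n - r) * factorial(r))

print(nCr(5, 2))   # 10
print(comb(5, 2))  # 10, the same value from the standard library
```

Note that order does not matter here: $C(5,2) = 10$ while $P(5,2) = 20$, since each selection of two objects can be arranged in $2!$ orders.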

5. Binomial Distribution: 
An experiment consisting of only two outcomes (success and failure) is known as a "Bernoulli process". The discrete probability distribution derived from the Bernoulli process is known as the Binomial distribution.
Let $p$ be the probability of a success and $q$ be the probability of a failure in one trial. If $r$ is the number of successes in $n$ independent trials, then there would be $(n-r)$ failures.
The probability of getting exactly $r$ successes and consequently $(n-r)$ failures in $n$ independent trials is given by:

$P(r)$ = $^{n}\textrm{C}_r \;p^r \;q^{n-r}$; $(0\leq r\leq n)$ ............... (viii)
Where, $P(r)$ = probability of $r$ successes in $n$ trials,
                 $n$ = number of trials performed,
                 $p$ = probability of a success in a trial,
                 $q$ = probability of a failure in a trial such that $p + q = 1$,
                 $r$ = number of successes in $n$ trials $(r = 0, 1, 2, \ldots, n)$. 
Mean of binomial distribution = $np$ ........................ (ix)
Standard deviation of binomial distribution = $\sqrt{n\;p\;q}$ ..................(x)
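Formula (viii) and the mean and standard deviation can be tried out in a short Python sketch (my own illustration, not from the original post), using exact fractions so the probabilities sum to exactly $1$:

```python
from fractions import Fraction
from math import comb, sqrt

def binomial_pmf(r, n, p):
    """P(r) = nCr * p^r * q^(n-r): exactly r successes in n trials."""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

# Probability of exactly 2 heads in 4 tosses of a fair coin.
p = Fraction(1, 2)
print(binomial_pmf(2, 4, p))  # 3/8

# The probabilities over all r sum to 1, since (p + q)^n = 1^n = 1.
print(sum(binomial_pmf(r, 4, p) for r in range(5)))  # 1

# Mean np and standard deviation sqrt(npq) for n = 4, p = 1/2.
n = 4
print(n * p, sqrt(n * p * (1 - p)))  # 2 1.0
```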

Return to Main Menu