MTH-361A | Spring 2025 | University of Portland
February 21, 2025
The Law of Large Numbers
The Law of Large Numbers states that as the number of trials in a random experiment increases, the sample mean approaches the expected value.
Example Bernoulli Trials Simulation
Let \(p=0.60\) be the “success” probability of a Bernoulli r.v. \(X\), where \(\text{E}(X) = p\).
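A minimal R sketch of this simulation (the seed and the number of trials are assumptions, since the original code chunk is not shown):

```r
# Law of Large Numbers: the running sample mean of Bernoulli(p) trials
# approaches E(X) = p = 0.60 as the number of trials grows.
set.seed(361)                                  # assumed seed for reproducibility
p <- 0.60
n_trials <- 10000
x <- rbinom(n_trials, size = 1, prob = p)      # simulate Bernoulli trials
running_mean <- cumsum(x) / seq_len(n_trials)  # sample mean after each trial
running_mean[c(10, 100, 10000)]                # drifts toward E(X) = 0.60
```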
Exponential Distribution
Exponential R.V.
Let \(\lambda=\frac{1}{15}\) be the rate of “success”.
\[ \begin{aligned} \text{R.V. } & \longrightarrow X \sim \text{Exp}\left(\frac{1}{15}\right) \\ \text{PDF } & \longrightarrow f(x) = \frac{1}{15} e^{- \frac{1}{15} x} \\ \text{for } & x \in [0,\infty) \end{aligned} \]
Exponential Distribution
Example:
What is the probability that a “success” happens within \(15\) units of length, given \(\lambda=\frac{1}{15}\)? \[ \begin{aligned} P(X \le 15) & = \int_0^{15} f(x) \ dx \\ & = \int_0^{15} \frac{1}{15} e^{-\frac{1}{15} x} \ dx = 1 - e^{-1} \\ P(X \le 15) & \approx 0.632 \end{aligned} \]
Using R:
## [1] 0.6321206
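The value above can be reproduced with `pexp()` (the call is reconstructed from the printed output, as the original code chunk is not shown):

```r
# P(X <= 15) for X ~ Exp(rate = 1/15); analytically this is 1 - exp(-1)
pexp(15, rate = 1/15)
## [1] 0.6321206
```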
\(\star\) Note that the `pexp()` function computes the probability \(P(X \le x)\), meaning it computes the area under the PDF \(f(x)\) from \(0\) to \(x\). The `dexp()` function computes the density, not a probability, because \(P(X = x)=0\) at any \(x\).
Exponential Distribution with Expected Value
Exponential R.V.
Let \(\lambda=\frac{1}{15}\) be the rate of “success”.
\[ \begin{aligned} \text{R.V. } & \longrightarrow X \sim \text{Exp}\left(\frac{1}{15}\right) \\ \text{PDF } & \longrightarrow f(x) = \frac{1}{15} e^{- \frac{1}{15} x} \\ \text{for } & x \in [0,\infty) \\ \text{expected value} & \longrightarrow \text{E}(X) = 15 \end{aligned} \]
In general, the expected value of the exponential r.v. is given by \[\text{E}(X) = \frac{1}{\lambda},\] which is the reciprocal of the “success” rate.
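This follows from \(\text{E}(X) = \int_0^\infty x \lambda e^{-\lambda x} \, dx = \frac{1}{\lambda}\) (integration by parts), and it can also be checked by simulation; a sketch, with the seed and sample size as assumptions:

```r
# The sample mean of many Exp(rate = 1/15) draws should be close to
# E(X) = 1 / lambda = 15.
set.seed(361)                       # assumed seed
draws <- rexp(100000, rate = 1/15)
mean(draws)                         # near 15
```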
Random Sampling from the Exponential Distribution
Sample Mean vs the Expected Value
The sample mean of \(0.88\) is not exactly equal to the expected value of \(1\) due to sampling variability. As we increase the number of samples, the sample mean gets closer to the expectation.
Exponential Random Sampling using R
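A sketch of the sampling step, assuming the default rate \(\lambda = 1\) (which is consistent with the expected value of \(1\) quoted above; the seed and sample size are assumptions):

```r
set.seed(361)           # assumed seed
samples <- rexp(1000)   # default rate = 1, so E(X) = 1/1 = 1
mean(samples)           # close to 1, off by sampling variability
```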
Binomial Distribution
Binomial R.V.
Let \(p=0.50\) be the success probability and \(n=10\) the number of trials.
\[ \begin{aligned} \text{R.V. } & \longrightarrow X \sim \text{Binom}(n,p) \\ \text{PMF } & \longrightarrow P(X=k) = \binom{n}{k} p^k (1-p)^{n-k} \\ \text{for } & k = 0,1,2,3, \cdots, n \\ \text{expected value} & \longrightarrow \text{E}(X) = np = 5 \end{aligned} \]
Suppose we conduct an experiment of flipping \(n\) fair coins in a sequence, where \(n\) is an integer. The sample space \(S\) contains all possible sequences of \(H\) and \(T\), so the number of possible outcomes is \(|S| = 2^n\).
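For example, with \(n = 4\) coins, R confirms the count, and row \(n\) of Pascal's triangle sums to the same \(2^n\):

```r
n <- 4
2^n                  # |S| = 16 possible H/T sequences
sum(choose(n, 0:n))  # row n of Pascal's triangle also sums to 2^n
```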
Visualizing the possible outcomes using Pascal’s triangle
\(\star\) Key Idea: Pascal’s Triangle helps us visualize the total possible sequences of “success” (\(H\)) outcomes given \(n\) independent trials.
Let \(X\) be the r.v. that counts the number of \(H\) outcomes in \(n\) trials.
Pascal’s triangle helps us count the number of such sequences.
\(\dagger\) Can you determine the ways \(H\) can occur in \(4\) trials using Pascal’s triangle?
Compute the probability of observing a certain number of “success” (\(H\)) outcomes in \(n\) trials.
\(\dagger\) Can you determine the probabilities of observing \(H\) outcomes in \(4\) trials?
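One way to check both exercises in R, using \(n = 4\) trials and \(p = 0.5\):

```r
n <- 4
choose(n, 0:n)                     # ways H can occur 0..4 times: 1 4 6 4 1
dbinom(0:n, size = n, prob = 0.5)  # matching probabilities: counts / 2^4
```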
How many \(H\) outcomes do we expect to have in \(n\) independent Bernoulli trials with “success” probability \(p\)?
Example
In general:
\(\star\) Key Idea: Over many repetitions, the long-run average number of “success” outcomes is \(n \times p\), reflecting the frequentist interpretation of probability.
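A quick simulation of this long-run average (the seed and the number of repetitions are assumptions):

```r
# Repeat the 10-coin experiment many times; the average number of H
# outcomes settles near n * p = 10 * 0.5 = 5.
set.seed(361)                                # assumed seed
n <- 10; p <- 0.5
flips <- rbinom(100000, size = n, prob = p)  # H count in each repetition
mean(flips)                                  # near n * p = 5
```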
\(\star\) Key Idea: The Binomial distribution is approximately the normal distribution given large enough samples because of the Central Limit Theorem.
A normal r.v. is a type of continuous r.v. whose probability distribution follows the normal distribution, also known as the Gaussian distribution. The normal distribution is characterized by two parameters, \(\mu\) as the mean and \(\sigma^2\) as the variance: \[X \sim \text{N}(\mu,\sigma^2)\]
Sample Space:
Parameters
The normal r.v. \(X \sim \text{N}(\mu,\sigma^2)\) has infinite possible outcomes (or infinite sized sample space) where \(\mu\) is the mean and \(\sigma^2\) is the variance with PDF given as \[f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \ -\infty < x < \infty\]
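In R, `dnorm()` implements exactly this PDF; a quick check against the formula, using the example values from these slides:

```r
mu <- 10; sigma <- 2.24; x <- 13  # example values from the slides
manual <- 1 / sqrt(2 * pi * sigma^2) * exp(-(x - mu)^2 / (2 * sigma^2))
all.equal(manual, dnorm(x, mean = mu, sd = sigma))  # TRUE
```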
\(\star\) Key Idea: The normal r.v. often approximates the distribution of many types of data, especially when there are large numbers of independent factors contributing to the outcome.
Normal Distribution
Example:
What is \(P(X \le 13)\) for \(X \sim \text{N}(10,2.24^2)\)? \[ \begin{aligned} P(X \le 13) & = \int_{-\infty}^{13} f(x) \ dx \\ & = \int_{-\infty}^{13} \frac{1}{\sqrt{2 \pi (2.24)^2}} e^{-\frac{(x-10)^2}{2(2.24)^2}} \ dx \\ P(X \le 13) & \approx 0.9098 \end{aligned} \]
Using R:
## [1] 0.9097612
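The value above can be reproduced with `pnorm()` (the call is reconstructed from the printed output, as the original code chunk is not shown):

```r
# P(X <= 13) for X ~ N(10, 2.24^2)
pnorm(13, mean = 10, sd = 2.24)
## [1] 0.9097612
```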
\(\star\) Note that the `pnorm()` function computes the probability \(P(X \le x)\), meaning it computes the area under \(f(x)\) from \(-\infty\) to \(x\) using the Normal PDF. The `dnorm()` function computes the density, not a probability, because \(P(X = x)=0\) at any \(x\).
Normal Distribution with Expected Value
Normal R.V.
Let \(\mu=10\) and \(\sigma=2.24\) be the mean and standard deviation, respectively.
\[ \begin{aligned} \text{R.V. } & \longrightarrow X \sim \text{N}\left(10,2.24^2\right) \\ \text{PDF } & \longrightarrow f(x) = \frac{1}{\sqrt{2 \pi (2.24)^2}} e^{-\frac{(x-10)^2}{2(2.24)^2}} \\ \text{for } & x \in (-\infty,\infty) \\ \text{expected value} & \longrightarrow \text{E}(X) = 10 \end{aligned} \]
In general, the expected value of the normal r.v. is given by \[\text{E}(X) = \mu,\] which is the center of the normal distribution.
Random Sampling from the Normal Distribution
Sample Mean vs the Expected Value
The sample mean of \(10.07\) is not exactly equal to the expected value of \(10\) due to sampling variability. As we increase the number of samples, the sample mean gets closer to the expectation.
Normal Random Sampling using R
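A sketch of the sampling step (the seed and sample size are assumptions, since the original code chunk is not shown):

```r
set.seed(361)                                # assumed seed
samples <- rnorm(1000, mean = 10, sd = 2.24) # draw from N(10, 2.24^2)
mean(samples)                                # near E(X) = 10, off by
                                             # sampling variability
```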
When can we use it?
The binomial distribution \(X \sim \text{Binom}(n,p)\) can be approximated by a normal distribution when:
Approximation Formula
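The approximation as it is commonly stated (the slide's own formula is not shown in this extract) matches the mean and variance of the binomial:

\[X \sim \text{Binom}(n,p) \approx \text{N}\left(np,\ np(1-p)\right)\]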
\(\star\) The normal approximation simplifies binomial probability calculations for large \(n\).
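A sketch comparing the exact and approximate probabilities, with \(n=100\) and \(p=0.5\) as assumed example values (a continuity correction of \(0.5\) is applied to the normal tail):

```r
n <- 100; p <- 0.5                        # assumed example values
exact  <- pbinom(55, size = n, prob = p)  # exact binomial P(X <= 55)
approx <- pnorm(55 + 0.5,                 # continuity-corrected normal
                mean = n * p,
                sd = sqrt(n * p * (1 - p)))
c(exact = exact, approx = approx)         # the two agree closely
```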
R.V. \(X\) | Exponential | Normal
---|---|---
Description | Unit length until a “success” event happens | Approximation to the Binomial with a sufficiently large number of \(n\) independent trials
Sampling | With replacement | With replacement
Parameters | \(\lambda \longrightarrow\) rate of “success” | \(\mu \longrightarrow\) mean, \(\sigma^2 \longrightarrow\) variance
PDF | \(f(x) = \lambda e^{- \lambda x}\), \(x \in [0,\infty)\) | \(f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\), \(x \in (-\infty,\infty)\)
Expected Value \(\text{E}(X)\) | \(\frac{1}{\lambda}\) | \(\mu\)
Density \(f(x)\) | `dexp(x, rate)` | `dnorm(x, mean, sd)`
\(P(X \le x)\) | `pexp(x, rate)` | `pnorm(x, mean, sd)`
\(N\) Simulations | `rexp(N, rate)` | `rnorm(N, mean, sd)`
Pascal’s Triangle and Combinations
The binomial coefficient \[\binom{n}{k} = \frac{n!}{k!(n-k)!}\] calculates the number of ways to choose \(k\) elements from a set of \(n\). Each number in Pascal’s Triangle corresponds to a combination.
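In R, `choose()` computes this coefficient directly; a quick check against the factorial formula, with example values:

```r
n <- 5; k <- 2                                    # example values
choose(n, k)                                      # 10
factorial(n) / (factorial(k) * factorial(n - k))  # same: n! / (k!(n-k)!)
```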