Random Sampling &
Law of Large Numbers

Applied Statistics

MTH-361A | Spring 2026 | University of Portland

Objectives

Visualizing the Bernoulli PMF

The Bernoulli PMF can be visualized using a vertical line plot because the Bernoulli r.v. is discrete.

Example: Suppose that \(p=0.60\), meaning that the “success” probability is \(P(X=1)=0.60\) and the “failure” probability is \(P(X=0)=0.40\).
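A minimal sketch of how this vertical line plot could be drawn with ggplot2 (the tibble and label names here are illustrative, not from the original code):

library(tidyverse) # provides tibble and ggplot2

# PMF of X ~ Bern(0.60) as a tibble
bern_tbl <- tibble(
  x = c(0, 1),         # possible outcomes
  prob = c(0.40, 0.60) # P(X = 0) and P(X = 1)
)

# vertical line plot of the PMF
ggplot(bern_tbl, aes(x = x, y = prob)) +
  geom_segment(aes(xend = x, yend = 0)) + # vertical lines from 0 up to each probability
  geom_point() +                          # points marking the probabilities
  scale_x_continuous(breaks = c(0, 1)) +  # only 0 and 1 on the x-axis
  labs(x = "x", y = "P(X = x)")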

Simulating Bernoulli Trials

A Bernoulli trial is a random experiment with two possible outcomes: success (\(1\)) or failure (\(0\)). The following R code uses the tidyverse package.

# set hyperparameters
set.seed(42) # for reproducibility
n <- 10 # number of trials

# set parameter and PMF of the Bernoulli r.v.
X <- c("1","0") # outcomes ("1"="success","0"="failure")
p <- 0.6 # probability of success
bern_pmf <- c(p,1-p) # an ordered list of outcome probabilities

# simulate n Bernoulli trials
samples <- sample(
  X, # set sample space
  size = n, # set number of samples
  prob = bern_pmf, # set probabilities for each outcome
  replace = TRUE # sampling with replacement
  )

# convert samples into tibble form and compute proportions
samples_tib <- tibble(X = samples) %>% 
  # group by outcome
  group_by(X) %>% 
  # summarise by counting each outcome
  summarise(
    count = n(), # compute frequencies
    proportion = count / n # compute relative frequencies
    )

# view tibble
samples_tib
## # A tibble: 2 × 3
##   X     count proportion
##   <chr> <int>      <dbl>
## 1 0         7        0.7
## 2 1         3        0.3

Visualizing the Simulated Bernoulli Trials

The simulated Bernoulli trials are nominal categorical data, so a bar plot showing the proportions of “success” and “failure” outcomes is most appropriate.
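One way to draw this bar plot from samples_tib with ggplot2 (a sketch; axis labels are illustrative):

# bar plot of the simulated outcome proportions
ggplot(samples_tib, aes(x = X, y = proportion)) +
  geom_col() +                          # bar heights are the precomputed proportions
  labs(x = "outcome", y = "proportion")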

Comparing the Bernoulli PMF vs the Trials

We can compare the Bernoulli PMF and the simulated trials by placing the two plots side by side.
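A sketch of one way to do this, assuming the patchwork package is installed (it is not part of the tidyverse) and reusing bern_tbl from the earlier PMF sketch:

library(patchwork) # for combining ggplot objects

# vertical line plot of the theoretical PMF
p_pmf <- ggplot(bern_tbl, aes(x = x, y = prob)) +
  geom_segment(aes(xend = x, yend = 0)) +
  geom_point() +
  scale_x_continuous(breaks = c(0, 1)) +
  labs(title = "Bernoulli PMF", x = "x", y = "P(X = x)")

# bar plot of the simulated proportions
p_sim <- ggplot(samples_tib, aes(x = X, y = proportion)) +
  geom_col() +
  labs(title = "Simulated trials (n = 10)", x = "outcome", y = "proportion")

# place the two plots side by side
p_pmf + p_sim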

\(\star\) Since we only have \(10\) trials, the sample proportions are not quite the same as the parameters of the Bernoulli r.v., due to sampling variability.

Increasing the Number of Bernoulli Trials

Suppose we increase the number of Bernoulli trials and compare the resulting proportions to the Bernoulli PMF.

Let \(p = 0.60\) (“success” probability) and \(n = 1000\) (number of Bernoulli trials).
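A sketch of the same simulation with more trials, reusing X, p, and bern_pmf from the earlier code:

set.seed(42) # for reproducibility
n <- 1000    # number of trials

# simulate n Bernoulli trials
samples_1000 <- sample(X, size = n, prob = bern_pmf, replace = TRUE)

# compute counts and proportions of each outcome
samples_1000_tib <- tibble(X = samples_1000) %>% 
  group_by(X) %>% 
  summarise(
    count = n(),            # compute frequencies
    proportion = count / n  # compute relative frequencies
    )

# view tibble
samples_1000_tib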

\(\star\) As we increase the number of Bernoulli trials from \(n = 10\) to \(n = 1000\), the proportions of \(1\) and \(0\) outcomes move closer to the Bernoulli PMF, which illustrates the frequentist interpretation of probability.

The Law of Large Numbers

The Law of Large Numbers states that as the number of trials in a random experiment increases, the sample mean approaches the expected value.
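In symbols: if \(X_1, X_2, \ldots, X_n\) are independent trials of a random experiment with expected value \(\text{E}(X)\), then the sample mean

\[
\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \longrightarrow \text{E}(X) \quad \text{as } n \to \infty.
\]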

Example Bernoulli Trials Simulation

Let \(p = 0.60\) be the “success” probability of a Bernoulli r.v., \(X \sim \text{Bern}(0.60)\), where \(\text{E}(X) = 0.60\).
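One way to see this convergence (a sketch; object names are illustrative): simulate the trials as numeric \(0\)/\(1\) values and plot the running proportion of “success” against the number of trials.

set.seed(42) # for reproducibility
n <- 1000    # number of trials
p <- 0.6     # probability of success

# simulate trials as numeric 0/1 values
trials <- sample(c(1, 0), size = n, prob = c(p, 1 - p), replace = TRUE)

# running proportion of "success" after each trial
running_tib <- tibble(
  trial = 1:n,
  running_prop = cumsum(trials) / (1:n)
)

# plot the running proportion with E(X) = 0.60 as a dashed reference line
ggplot(running_tib, aes(x = trial, y = running_prop)) +
  geom_line() +
  geom_hline(yintercept = p, linetype = "dashed") +
  labs(x = "number of trials", y = "proportion of successes")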

\(\star\) By the Law of Large Numbers, as the number of Bernoulli trials increases, the sample proportion of “success” converges to the expected value.