Category Archives: Statistics

Simulating Waiting Times with R

In my last post, I used Mathematica to show the connection between events that follow a Poisson process (i.e. occur independently at a constant rate per unit time/space) and the waiting time between successive events, which is exponentially distributed.  By the memoryless property, the waiting time between successive events also happens to be the time until the first event occurs – making the exponential distribution a reasonable model of the time until failure of batteries, electronic components, machines, and so forth.  Using the exponential model, I showed that the probability that a component lasts longer than average (i.e. exceeds its expected lifetime of 15 months in the example) is only 36.79%, which tells us the distribution is skewed (i.e. the average and the median aren’t equal).  Below is the equation for this probability, followed by plots of the theoretical probability density function and cumulative distribution function.

$latex P(X > 15) = e^{-15/15} = e^{-1} \approx 0.3679$

[Figures: theoretical pdf and cdf of the Exponential(rate = 1/15) distribution]

In this post I’m going to reproduce that result by simulation in R.  I’ve included the R code that generates the results; I’m not a polished programmer, so look elsewhere for programming methodology, but the code does produce the intended result.  Before starting, I loaded the MASS library, whose truehist function is a quick way to make plots look better if you’re not an R pro.

First, I simulated 1,000 random draws from an exponential distribution with rate parameter 1/15 and assigned the result to a variable called sample:

n <- 1000

sample <- rexp(n, rate=1/15)

Then I made a histogram of ‘sample’ using the truehist command (which requires the MASS library):

[Figure: histogram of the 1,000 simulated waiting times]

Now I want to count how many of the random waiting times are greater than the average, 15.  The graph gives us an idea – it doesn’t look like more than a third – but it isn’t precise.  To know for sure, I create a vector called upper_sample containing only the sample values that exceed 15, with the help of a logical vector that is TRUE where the corresponding value of ‘sample’ is greater than 15 and FALSE otherwise:

above_avg <- sample > 15

upper_sample <- sample[above_avg]

truehist(upper_sample, main="Sample Waiting Times > 15")

[Figure: histogram of the sample waiting times greater than 15]

The number of elements in the upper_sample vector divided by the number of trials gives the simulated probability that a component lasts more than 15 months, which I named psuccess:

psuccess <- length(upper_sample)/n
psuccess

There were 377 observations above 15 in my sample, so the simulated probability that a component lasts longer than 15 months is 37.7%, about 1% off from the true value.
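For readers without R handy, here is the same simulation sketched in Python using only the standard library (the variable names mirror the R code; the exact count will vary with the random seed):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 1000
mean_wait = 15  # the 15-month expected lifetime from the example

# Draw n waiting times from an Exponential distribution with rate 1/15.
sample = [random.expovariate(1 / mean_wait) for _ in range(n)]

# Keep only the waiting times that exceed the mean, mirroring the
# logical-vector trick used in the R code.
upper_sample = [x for x in sample if x > mean_wait]
psuccess = len(upper_sample) / n

print(psuccess)  # should land near exp(-1), about 0.368
```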


The Maximum Likelihood Estimate of an unknown parameter


The Maximum Likelihood Estimate is one way to estimate a parameter’s value.  It answers the question: “Given my data, what’s the most likely value of the unknown parameter?”  To use it, we compute the likelihood function: take a sample of n observations from the same distribution, evaluate each of the n observations, (x1, …, xn), at the pdf pX(x; θ), and take the product of the results over all n.

$latex L(\theta) = \prod_{i=1}^{n} p_X(x_i; \theta)$

Note: The expression pX(x; θ) is just the distribution of the population from which we’re sampling – the parameter is usually added after a semicolon to emphasize that the distribution is characterized by the unknown parameter θ.

The likelihood function is just the joint density of the random sample.  Since sample observations are independent and identically distributed (iid), the joint pdf of all n observations is the product of the individual densities.  This is the same principle that lets us multiply P(A), P(B), and P(C) together to find P(A ∩ B ∩ C) when events A, B, and C are independent.  Suppose we take a sample of size n = 3 from the distribution, and the resulting values are x1, x2, and x3.  What’s the probability associated with the three sample values?  That is, what’s the joint density of the three sample values, pX(x1, x2, x3)?

$latex p_X(x_1, x_2, x_3) = p_X(x_1)\,p_X(x_2)\,p_X(x_3)$

Generalizing to arbitrary n gives the result that the joint density of n randomly drawn sample values is the product of the individual densities, so the likelihood function is nothing more than the joint pdf of the sample – a multivariate probability density function of the values taken on by the n random variables in the sample.

The likelihood function is a function of the unknown parameter.  The Maximum Likelihood Estimate of the unknown parameter is the parameter value that maximizes the likelihood function.

We use calculus to find this value: take the derivative of the likelihood function with respect to the unknown parameter, set it equal to 0, and solve for the parameter.  Don’t forget to verify second-order conditions to make sure you are indeed finding a maximum.

$latex \frac{d}{d\theta} L(\theta) = 0$

This will usually involve complicated, messy math.  To mitigate this, we often work with the logarithm of the likelihood function and use the properties of logs to simplify computations.  This won’t change our answer – because the logarithm is monotonically increasing, taking the log of a function doesn’t change the point at which its maximum is achieved.

$latex \frac{d}{d\theta} \ln L(\theta) = 0$

The value of the parameter that you end up with maximizes the probability of your sample values x1, …,xn.  You could say it’s the value “most consistent” with the observed sample – the Maximum Likelihood Estimate.
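To make the procedure concrete, here is a hypothetical example in Python: for a sample assumed to come from an Exponential(λ) distribution, the calculus above yields the closed-form estimate λ̂ = 1/x̄, and a brute-force search over the log-likelihood agrees (the data values are made up for illustration):

```python
import math

# A small, made-up sample of waiting times, modeled as Exponential(lambda).
data = [3.1, 9.8, 14.2, 20.5, 7.7, 31.0]
n = len(data)

def log_likelihood(lam):
    # ln L(lambda) = n*ln(lambda) - lambda * sum(x_i)
    return n * math.log(lam) - lam * sum(data)

# The calculus answer: set d/d(lambda) ln L = n/lambda - sum(x_i) = 0.
lam_hat = n / sum(data)  # = 1 / sample mean

# Sanity check: a grid search over lambda should peak at the same spot.
grid = [i / 1000 for i in range(1, 1001)]
lam_grid = max(grid, key=log_likelihood)

print(lam_hat, lam_grid)
```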


A Post on Measuring Historical Volatility

I’ve reblogged a concise yet thorough explanation of the calculation of market volatility. The post makes very clear how input parameters (weighting, time frame, etc.) affect its validity as an estimate of future market movements (link).  The phrase “Fat Tails” is often thrown around like a meaningless buzzword in financial media (Squawk Box, for example), but the concept is explained intuitively here. In a separate post, market data from the S&P500 is used to demonstrate the decay factor’s effect on log returns (link).

 

mathbabe

Say we are trying to estimate risk on a stock or a portfolio of stocks. For the purpose of this discussion, let’s say we’d like to know how far up or down we might expect to see a price move in one day.

First we need to decide how to measure the upness or downness of the prices as they vary from day to day. In other words we need to define a return. For most people this would naturally be defined as a percentage return, which is given by the formula:

$latex (p_t – p_{t-1})/p_{t-1},$

where $latex p_t$ refers to the price on day $latex t$. However, there are good reasons to define a return slightly differently, namely as a log return:

$latex \mbox{log}(p_t/p_{t-1})$

If you know your power series expansions, you will quickly realize there is not much difference between these two definitions for small returns- it’s only…

View original post 807 more words
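The near-equality of the two return definitions for small moves is easy to check numerically; a quick Python sketch with made-up prices:

```python
import math

# Illustrative (made-up) prices: a 1% daily move.
p_yesterday = 100.0
p_today = 101.0

pct_return = (p_today - p_yesterday) / p_yesterday  # 0.01
log_return = math.log(p_today / p_yesterday)        # about 0.00995

# The two definitions differ by roughly x^2/2 for a move of size x,
# exactly as the power-series expansion log(1+x) = x - x^2/2 + ... predicts.
print(pct_return, log_return)
```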


The Central Limit Theorem

Suppose we’re sampling from a population with mean μ and variance σ². Formally, the independent random variables {X1, X2, …, Xn} comprise our sample of size n from the distribution of interest. The random variables observed in the sample will likely all be different, but they all come from the same distribution. The stipulation that sample observations are independent is important.

Though it’s often stated in other terms, the Central Limit Theorem is a statement about the sum of the independent random variables observed in the sample. What can we say about the quantity below?

$latex S_n = X_1 + X_2 + \cdots + X_n$

We could start by standardizing the quantity Sn; this is done in the same manner that one would standardize a test statistic while testing a hypothesis. To standardize the sum, we subtract its mean and divide by its standard deviation. The standardized quantity can be interpreted as the number of standard deviations the quantity lies above or below its mean. Some people call this a Z score. We will make use of the fact that the sample random variables are independent in deriving the mean and variance of Sn, which is why the independence assumption is so important. So, our standardized sum of the random variables in the sample takes the form:

$latex \frac{S_n - E(S_n)}{\sqrt{\mathrm{Var}(S_n)}}$

By independence, E(Sn) = nμ and SD(Sn) = (√n)σ. If we substitute these values for the expected value and standard deviation of Sn, we have the expression:

$latex \frac{S_n - n\mu}{\sigma\sqrt{n}}$

The CLT is a statement about this standardized sum of observations from a population. Intuitively, the theorem states that as the sample size (n) grows arbitrarily large, the distribution of the sum of the sample values (the Xi’s) tends towards the normal distribution. Mathematically, this means:

$latex \lim_{n \to \infty} P\!\left(a \le \frac{S_n - n\mu}{\sigma\sqrt{n}} \le b\right) = \int_a^b \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx$

In the expression above, the sample values are {X1, X2, …, Xn}, the expected value of their sum is nμ, and the standard deviation of their sum is (√n)σ. In words, the expected value of the sum of all n sample observations is n times the population mean μ, and the standard deviation of the sum of all n sample observations is the square root of n times the population standard deviation σ. I know I’m beating a dead horse, but it’s important that the CLT is a statement about the sum of observations in a sample. The quantity between the numbers a and b in the inequality is therefore the standardized sum of sample observations – a z score, if you want to call it that. On the right side is the probability density function (pdf) of the standard normal distribution. Integrating any continuous pdf over a region (a, b) gives the probability of attaining a value greater than a and less than b, so the right-hand side of the equality can be interpreted as the probability that a standard normal random variable takes on a value between the numbers a and b.

Putting everything together, the theorem states that if we take the sum of n observations from any distribution and standardize it, the probability that this quantity lies in a given range can be approximated with the normal distribution, and the approximation improves as n increases. That is, the distribution of the standardized sum of sample values approaches the normal distribution as the size of the sample increases.

The best way to illustrate this is probably with the discrete uniform distribution. Consider rolling a die – it’s a trivial example, but it’s a familiar concept.  The outcome of rolling a fair die is uniformly distributed over {1, 2, …, 6}, because the probability that you roll a 1, 2, 3, 4, 5, or 6 is 1/6, or approximately .167.  It’s easy to show that the random variable X, where X corresponds to the dots showing after rolling one die, has a mean of E(X) = 3.5 and a standard deviation of approximately 1.7078.
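Those two moments follow directly from the definition of a discrete distribution (note the die is discrete uniform on {1, …, 6}, so its variance is 35/12, not the continuous-uniform value (6 − 1)²/12); a quick Python check:

```python
import math

# A fair die is discrete uniform on {1, ..., 6}.
values = range(1, 7)
p = 1 / 6

mean = sum(v * p for v in values)               # E(X) = 3.5
var = sum((v - mean) ** 2 * p for v in values)  # Var(X) = 35/12
sd = math.sqrt(var)                             # about 1.7078

print(mean, sd)
```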

[Figure: pmf of a single die roll]

Now let the random variable X be the sum of the dots showing when you roll two dice.  X can take on a minimum of 2 (if you roll ‘snake eyes’) and a maximum of 12 (if you roll two sixes).  X has a mean of 7 and a standard deviation of approximately 2.4152, and is distributed like the graph below:

[Figure: pmf of the sum of two dice]

To demonstrate the CLT, we want to increase the number of dice we roll when we compute the sum.  Below is the pdf for n = 3, that is, rolling three dice and computing their sum.

[Figure: pmf of the sum of three dice]

And as we let n get larger and larger, the histogram looks a lot like the normal curve:

[Figures: pmf of the sum for n = 20 and n = 30 dice]

So when we consider the sum of more rolls, the curve looks normal.  This is the CLT working!  In another post, I’ll talk about ways that we can test the normality of a given distribution and apply the theorem to the sample mean.
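A simulation makes the same point without plots; a Python sketch (the standardization follows the (Sn − nμ)/(σ√n) formula above, with μ = 3.5 and σ² = 35/12 per die):

```python
import random

random.seed(7)  # fixed seed for reproducibility

def standardized_sum(n_dice):
    # Sum n_dice fair-die rolls, then standardize using
    # E(S) = 3.5 * n and SD(S) = sqrt(n * 35/12).
    s = sum(random.randint(1, 6) for _ in range(n_dice))
    mu = 3.5 * n_dice
    sigma = (n_dice * 35 / 12) ** 0.5
    return (s - mu) / sigma

# Many standardized sums of 30 dice; by the CLT they should look
# approximately standard normal: mean near 0, variance near 1.
draws = [standardized_sum(30) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)

print(round(mean, 3), round(var, 3))
```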


The Sampling Distribution of the Sample Mean

Before talking about the Central Limit Theorem (CLT), it’s necessary to distinguish between the probability distribution of the population of interest and the sampling distribution of the sample mean. Suppose we draw n independent observations from a population with mean μ and standard deviation σ. Each observation is a random variable, so the sample that we draw from the population is a collection of n random variables, all of which have the same probability distribution. We can compute the sample mean, X̄, from the observed sample values. We use the sample values drawn from the population of interest to compute the sample mean as a way of approximating the true mean μ, which we don’t know.

The true mean μ is a parameter, because it is a characteristic of the entire population. It is not feasible (or even possible, in most cases) to determine the parameter directly, so we develop mathematical functions with the purpose of approximating it. The inputs to such a function are the sample values – the n observations described previously. So the sample mean is a function of the random variables we observe in a given sample. Any function of random variables is a random variable itself, so the sample mean is a random variable, complete with all the properties that random variables are endowed with.

The sample mean – a function of random variables, or, using proper terminology, a statistic – gives us a point estimate of the true population mean μ. It is an unbiased estimator of the population mean, so we know that the expected value of the sample mean is equal to the true population mean. All this tells us is that our guesses will be “centered around” μ; the sample mean alone gives no indication of dispersion about the population parameter. To address this concern, we appeal to the idea that the sample mean is a random variable, and is thus governed by a probability distribution that can tell us about its dispersion.
We call the probability distribution of the sample mean the sampling distribution of the sample mean – but it’s really nothing more than the probability distribution of X̄. Like I mentioned earlier, the sample mean is an unbiased estimator of μ; in other words, E(X̄) = μ. This is easy to prove. The expected value of each of the observations we draw from the population {X1, X2, …, Xn} is the unknown parameter μ, so for the sum of the observations:

$latex E(X_1 + X_2 + \cdots + X_n) = n\mu$

Notice we’re NOT talking about the sample mean yet – this is simply the sum of n independent observations from the population with mean μ and variance σ². Therefore:

$latex E(\bar{X}) = E\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}(n\mu) = \mu$

We’ve shown that the mean of the sampling distribution of the sample mean is the parameter μ. However, the same is not true for the standard deviation, as demonstrated below:

$latex \mathrm{Var}(\bar{X}) = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}(X_i) = \frac{\sigma^2}{n}, \qquad \mathrm{SD}(\bar{X}) = \frac{\sigma}{\sqrt{n}}$

So the sample mean has a mean of μ and a standard deviation of σ divided by the square root of the sample size, n (and when the population is normal, the sample mean is itself normally distributed). The sample size here is not the number of sample means we take from the sampling distribution; it corresponds to the number of observations used to calculate each sample mean. A different value of n corresponds to a different sampling distribution of the sample mean, because it has a different variance.

The standard deviation of the sampling distribution of the sample mean is usually called the standard error of the mean. Intuitively, it tells us by how much we deviate from the true mean, μ, on average. So when we take a sample of size n from the population and compute the sample mean, we know two things: we expect, on average, to compute the true population mean, μ; and the average amount by which we will deviate from the parameter μ is σ/Sqrt(n), the standard error of the mean.
Clearly, the standard error of the mean decreases as n increases, which implies that larger samples lead to greater precision (i.e. less variability) in estimating μ.

Suppose repair costs for a TV are normally distributed with a population mean of $100 and a standard deviation of $20. To simplify notation, we can write: Repair Cost~N(100, 20). The sampling distribution of the sample mean is the probability distribution of all sample means computed from samples of a fixed size n; in this case, average cost~N(100, 20/Sqrt(n)). If you want to visualize the distribution of repair costs for the TV in your bedroom, you’d plot a normal curve with a mean of 100 and a standard deviation of 20. If, however, you’d like to visualize the average repair cost of the four TVs in your house, the correct curve would still be normal, but it would have a mean of 100 and a standard deviation (in this case, more correctly called a standard error) of 20/Sqrt(4). Now if you include your neighbor’s TVs as well, your sample size will be eight, and the probability density function describing the average cost of the eight TV repairs is ~N(100, 20/Sqrt(8)). The three curves are plotted below.

[Figure: sampling distributions of the mean repair cost for n = 1, 4, and 8]

To demonstrate the idea that the variance and the sample size are inversely related, consider the probability that a TV will require more than $110 worth of repairs. The probability that one TV requires repairs in excess of $110 (i.e. sample size = 1) is P(Z > (110 – 100)/20) = P(Z > 0.5) = 30.85%. The probability that the average repair cost of a sample of four TVs exceeds $110 is P(Z > (110 – 100)/(20/Sqrt(4))) = P(Z > 1) = 15.87%. For the sample of size n = 8, we get 7.86%. What’s happening is that the value $110 becomes more ‘extreme’ relative to the distribution as we increase the sample size: since the standard error is smaller for a sample of size 8 than for a sample of size 4, an average of $110 is less likely to be observed.
[Figure: tail areas beyond $110 for the three sampling distributions]

We can use the sample size n in a more deliberate way if we consider its impact on the precision of the sample mean. We know a larger n corresponds to a tighter distribution of the sample mean around the true mean, so we can achieve a desired level of precision with our point estimator (the sample mean) by manipulating n. Since increasing n generally costs money in practice (think about surveying more respondents, etc.), we want to find the minimum value of n that gives us the desired condition. Let’s say I want the sample mean to be within 10 units of the true parameter with probability 0.97, and I’d like the smallest n that achieves this, since I want to spend as little money as possible. In other words, I want to be 97% confident that the value I compute for the sample mean is within 10 units of the true mean. Here is the Mathematica input:

[Mathematica input: solving P(|X̄ − μ| ≤ 10) = 0.97 for n]

Below, sample sizes are plotted against the probability of being within 10 units of the parameter μ:

[Figure: probability of being within 10 units of μ as a function of n]

Solving for n gives 18.84, so n = 19 is the smallest sample size that gives us the desired precision. Below, the sampling distribution of the sample mean for n = 19 is plotted against the normal curve with mean $100 and standard deviation $20:

[Figure: sampling distribution of the sample mean for n = 19 vs. the population curve N(100, 20)]

In the next post, I’ll talk about how the properties of the sampling distribution of the sample mean relate to one of the most important theorems in all of mathematics, the Central Limit Theorem.
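The same minimum-n search can be reproduced outside Mathematica; a Python sketch using only the standard library (σ = 20 and the $10 tolerance come from the TV example above):

```python
from math import erf, sqrt

def prob_within(n, sigma=20.0, tol=10.0):
    # P(|Xbar - mu| <= tol) when SD(Xbar) = sigma / sqrt(n):
    # equals 2*Phi(tol * sqrt(n) / sigma) - 1, with Phi the standard normal CDF.
    z = tol * sqrt(n) / sigma
    phi = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * phi - 1

# Find the smallest n that puts the sample mean within $10 of mu
# with probability at least 0.97.
n = 1
while prob_within(n) < 0.97:
    n += 1

print(n)  # 19, matching the analytic answer of 18.84 rounded up
```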


Modeling Stock Market Behavior

In the finance world, there’s some debate about whether or not the daily closing prices for various stock market indices convey useful information.  Some financiers believe that the daily close price reflects market trends and impacts the probability of realizing a good return.  Others disagree, claiming that the day-to-day movements in the stock market are completely random and convey no useful information.  If the latter is true, then the pattern of daily rises and falls should mirror a geometric random variable.  In this post I’ll explain why the geometric model would imply that stock market fluctuations are random and then test the validity of the model empirically.

Suppose the outcome of some event is binary, and success occurs with probability p.  Failure must then occur with probability 1 – p.  A geometric random variable models the number of trials up to and including the first success: it takes on the value k when the first success occurs on the kth trial.  Trials are assumed to be independent, so we can write the probability density function of the random variable X as follows:

$latex P(X = k) = P(k - 1 \text{ failures}) \cdot P(\text{success on trial } k)$

We used the independence assumption to rewrite the event “k-1 failures and a success on trial k” as the product of two distinct groups of events, namely k -1 failures and then 1 success.  Now we use the fact that success occurs with probability p (and the independence assumption, again) to write the following:

$latex p_X(k) = (1 - p)^{k-1}\, p$

To model the behavior of the stock market as a geometric random variable, assume that on day 1 the market has fallen from the previous day.  We’ll call this fall in the closing price a “failure” that occurs with probability 1 – p.  Let the geometric random variable X represent the number of subsequent failures that occur until the stock market rises (“success”).  For example, if on the second day the stock market rises, the random variable X takes on the value 1, because there was only one decline (failure) until the rise (success) that occurred on the second day.  Similarly, if the market declines on days 2, 3, and 4 and rises on day 5, then it has declined on four occasions before rising on the fifth day and thus the random variable X takes on the value 4.  Keep in mind that it is stipulated in the formulation of the random variable that the market declined on day one, and therefore a fall on days 2, 3, and 4 is a sequence of four failures, not three.

To determine whether a geometric model fits the daily behavior of the stock market, we have to estimate the parameter p.  In our model, we are addressing the question of whether stock market price fluctuations are geometric.  Geometric random variables can take on infinitely many values of p (so long as p is between 0 and 1), so our model doesn’t address the probability with which the stock market rises and falls; the geometric model addresses the behavior of the random variable for a given p.  The value p takes on may be of interest in formulating other questions, but here its job is to create a realistic geometric model that we can compare to empirical stock market data.  If the stock market data fits the geometric model, the implication is that stock markets tend to rise and fall randomly with a constant probability of success.  This suggests that daily stock market quotes are meaningless in that today’s price does not reflect historical prices.  One could say that if this model fits stock markets don’t “remember” yesterday, but that sounds a lot like something called the memoryless property, an important characteristic of the exponential distribution, so we should be careful to not confuse the two.

Once we get some empirical data, we’re going to estimate the probability of success p.  So let’s solve the general case and then compute an actual value with data afterwards.  There is no one way to estimate the value of a parameter, but one good way is the maximum likelihood estimator of the parameter.  The idea is simple, but sometimes computationally difficult: we find the value of p for which the observed sample is most likely to have occurred.  We are basically maximizing the “likelihood” that the sample data come from a distribution with parameter p.  To do this, we take the likelihood function, which is the product of the probability density function of the underlying distribution evaluated at each sample value:

$latex L(\theta) = \prod_{i=1}^{n} p_X(k_i; \theta)$

For our model, we just need to substitute in the pdf of a geometric random variable for the generic pdf above and replace theta with p, the probability of success:

$latex L(p) = \prod_{i=1}^{n} (1 - p)^{k_i - 1}\, p = p^{n} (1 - p)^{\sum_{i=1}^{n} k_i - n}$

To find the maximum likelihood estimate for p, we maximize the likelihood function with respect to p.  That is, we take its derivative with respect to p and set it equal to 0.  However, it’s computationally simpler to work with the natural logarithm of the likelihood function.  This won’t affect the value of p that maximizes L(p), since the natural logarithm is a monotonically increasing function of L(p).  Sometimes you’ll hear of “log-likelihood functions”, and this is precisely what they are – just the log of a likelihood function, which makes for easier calculus.

$latex \ln L(p) = n \ln p + \left(\sum_{i=1}^{n} k_i - n\right) \ln(1 - p)$

Taking the derivative of this function is a lot easier than the likelihood function we had before:

$latex \frac{d}{dp} \ln L(p) = \frac{n}{p} - \frac{\sum_{i=1}^{n} k_i - n}{1 - p} = 0 \;\Rightarrow\; \hat{p} = \frac{n}{\sum_{i=1}^{n} k_i} = \frac{1}{\bar{k}}$

So our maximum likelihood estimate of p (the probability of success) is one divided by the sample average – equivalently, n divided by the sum of all the k values in our sample.  This gives us the value of p that is most consistent with the n observations k1, …, kn.  Below is a table of k values derived from closing data for the Dow Jones over the course of the year from 2006 to 2007.

Recall that the random variable X takes on the value k when k – 1 failures (market price decreases) occur before a success (price increase) on trial k.  For example, X takes on the value k = 1 on 72 occasions in our dataset, which means that on 72 occasions over the course of the year there was only one failure before the first success; that is, the market declined on day 1 (by definition) and subsequently rose on day 2.  Similarly, there were 35 occasions where two failures were realized before a success, because the random variable X took on the value k = 2 on 35 occasions.

K    Observed Frequency
1    72
2    35
3    11
4    6
5    2
6    2

We now have the necessary data to compute p.  We have 128 observations (values of k), so n = 128.  There are two ways to compute p: take the sample mean of the dataset as we normally would for a discrete random variable and then take its reciprocal, per the formula above, or compute n divided by the sum of the k values directly:

$latex \hat{p} = \frac{n}{\sum_{i=1}^{n} k_i} = \frac{128}{221} \approx 0.5792$

The second formula obviously yields the same result, as you directly compute 128/221 instead of first computing its reciprocal.  So we now have a maximum likelihood estimate for the parameter p, and we can use it to model the stock price movement as a geometric random variable.  First let’s assume that the stock market can in fact be modeled this way.  Given our value of p, what would we expect for the values of k?  That is, with what frequency do we expect X to take on the values k = 1, 2, …?  First we’ll compute this, and then compare to the empirical data.

$latex P(X = 1) = \hat{p} \approx 0.5792$

The probability that X takes on the value one is equal to the probability of success, which is to be expected since X = 1 corresponds to the situation in which success is realized on the day immediately following the initial failure.

$latex P(X = 2) = (1 - \hat{p})\,\hat{p} \approx 0.2437$

And the rest are computed the same way.  Now since we have 128 observations, we can multiply each expected percentage by the number of observations to come up with an expected frequency.  Then, we can compare these to the observed frequencies and judge how well the model fits.

K    N      Expected %    Expected Frequency
1    128    .5792         74.14
2    128    .2437         31.19
3    128    .1027         13.13
4    128    .0432         5.52
5    128    .0182         2.32
6    128    .0132         1.69

Now that we know what we should expect if the geometric model is a valid representation of the stock market, let’s compare the expected frequencies to the observed frequencies:

Expected Frequency    Observed Frequency
74.14                 72
31.19                 35
13.13                 11
5.52                  6
2.32                  2
1.69                  2

The geometric model appears to be a very good fit, which suggests that daily fluctuations in stock market prices are random.  Furthermore, stock indices don’t ‘remember’ yesterday – the probability of the market rising or falling is constant, and whether it actually rises or falls on a given day is subject to random chance.
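The whole comparison can be regenerated from the observed frequencies alone; a Python sketch (the comment about the k = 6 row is my own reading of the numbers, not something stated in the original):

```python
# Observed frequencies of k from the Dow Jones data above.
observed = {1: 72, 2: 35, 3: 11, 4: 6, 5: 2, 6: 2}

n = sum(observed.values())                         # 128 observations
total_k = sum(k * f for k, f in observed.items())  # 221
p_hat = n / total_k                                # MLE: n / sum(k_i), about 0.5792

# Expected frequencies under Geometric(p_hat): P(X = k) = (1 - p)^(k - 1) * p.
# (The 1.69 in the post's k = 6 row appears to be the tail n*(1 - p)^5,
# i.e. P(X >= 6); this sketch uses the exact pmf for every k.)
expected = {k: n * (1 - p_hat) ** (k - 1) * p_hat for k in observed}

for k in sorted(observed):
    print(k, observed[k], round(expected[k], 2))
```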


Binomial Distribution Basics

The Binomial Distribution is a discrete probability distribution of the number of successes in n independent trials with a binary outcome and probability of success p.  There are three important conditions that must be met for the binomial distribution to be used correctly:

  • Trials are independent
  • The probability of success p is constant for all trials
  • Each trial has only two possible outcomes (binary)

One way to describe a Binomial random variable is in terms of Bernoulli Trials.  A Bernoulli trial is a random experiment in which there are only two possible outcomes – success and failure.  If we let success = 1 and failure = 0, then a Bernoulli Random Variable looks like this:

P(X = 1) = p

P(X = 0) = 1 – p

So really a Binomial experiment is just a sequence of n repeated Bernoulli trials.  Formally, a random variable X that counts the number of successes, k, in n trials, has a Binomial Distribution with parameters n and p, written X~Bin(n, p).  Let p = Probability that success occurs at any given trial.  Then the probability of k successes is:

$latex P(X = k) = \binom{n}{k} p^{k} (1 - p)^{n-k}$

  • The first part of the equation is “n choose k” or a combination of k objects from a population of n objects.  This expression is the number of ways we can arrange k successes in n trials, where the order of the k successes is unimportant.  This is called a Binomial Coefficient.
  • The second part of the equation is p raised to the power k, where p is the probability of success on any given trial.  This expression takes advantage of the independence assumption – independent probabilities are multiplicative, so the probability of success occurring k times is p*p*p*…*p (k times) = p raised to the kth power.
  • The third part of the equation is the probability of realizing n-k failures where trials are independent and the probability of failure is (1-p).  Intuitively, if there are n trials and k of those are successes, the remaining trials, of which there are n-k, must result in failure.

It might help to think backwards and ignore the binomial coefficient for a moment.  If k independent events each occur with probability p, and n – k independent events each occur with probability (1 – p), then the probability that all mentioned events occur is the product p^k·(1 – p)^(n – k).  While this is accurate for one specific sequence of k successes and n – k failures, we must consider that there are a number of orderings that yield k successes and n – k failures.  If we flip a coin three times and define success as landing on heads, there is more than one outcome for k = 2: we could have HHT, HTH, or THH.  The exact number of outcomes that yield k successes and n – k failures is the binomial coefficient “n choose k”.  That is why it is multiplied by the probability of any one such sequence of successes and failures.

Ex) Three coins are tossed, each with probability p that the coin lands on heads (it may not be a fair coin, so p may not be 0.5).  Since coin tosses are independent, we can write:

$latex P(HHT) = P(H)\,P(H)\,P(T) = p^{2}(1 - p)$

The table below shows the probabilities associated with each sequence:

[Table: the eight possible sequences of three tosses and their probabilities]

The table above gives the probability for each outcome – that is, each specific sequence of heads and tails.  It would be incorrect to take the probability of one particular sequence, such as TTH, to be the probability of the more general event of obtaining two heads.

But if our main interest is the number of heads that occur, the sequence HHT is equivalent to HTH.  Note that there are three outcomes with exactly two heads, each with probability p^2(1 – p).  This is where the “n choose k” term becomes necessary: exactly k successes may be achieved in a number of different ways.  In the context of this question, the binomial probability is:

P(k heads) = (# ways to arrange k heads and n-k tails)*(prob. of any specific sequence w/ k heads and n-k tails)

P(k heads) = (# ways to arrange k heads and n-k tails)*p^k (1-p)^(n-k)
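The formula above is easy to check numerically.  Below is a quick Python sketch (the function name `binom_pmf` is my own) using the standard library's `math.comb` for "n choose k":

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials, each with success prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Three tosses of a fair coin: P(2 heads) = 3 * (1/2)^2 * (1/2) = 3/8
print(binom_pmf(2, 3, 0.5))  # 0.375

# Sanity check: the probabilities over k = 0, ..., n sum to 1
print(sum(binom_pmf(k, 3, 0.5) for k in range(4)))  # 1.0
```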

Implicit in the operations just performed is that a random variable was used to map outcomes in the sample space to the real numbers.  Basically, we used a random variable to assign a real number (0, 1, 2, or 3 in this case) to each outcome (each sequence of H and T).  A random variable is neither random nor a variable: it is a function that maps sample outcomes to the real numbers.  This is a simplified definition that will not always suffice in a mathematical statistics course, but that's basically what's going on here.  The outcomes were grouped in a way that facilitates problem solving – in this case, to give us the number of heads rather than a specific sequence.

Just as the probability function assigns probabilities to sample outcomes in the sample space, a probability density function assigns probabilities to values that a random variable takes on.  Let X be the number of heads that occur in a sequence of 3 coin tosses.  Then the probability density function is:

Screen shot 2013-11-14 at 2.13.51 AM

For example:

Screen shot 2013-11-14 at 2.14.30 AM

And here is a chart of the actual probabilities:

Screen shot 2013-11-14 at 2.15.01 AM
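The mapping from sequences to head counts can be made concrete by brute-force enumeration.  The Python sketch below (variable names are my own) builds the distribution by walking every sequence and letting the random variable be the count of H's:

```python
from itertools import product

p = 0.5  # probability of heads; need not be 0.5
pmf = {k: 0.0 for k in range(4)}

# Enumerate all 2^3 sequences; the random variable X is the number of H's.
for seq in product("HT", repeat=3):
    heads = seq.count("H")
    prob = p**heads * (1 - p)**(3 - heads)  # probability of this exact sequence
    pmf[heads] += prob

print(pmf)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```

Each value matches 3-choose-k times the probability of any one sequence with k heads, as derived above.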

It should be clear now why independence of trials is such an important criterion for using the binomial distribution.  If trials are not independent, i.e. the outcome of one trial affects the probability of subsequent outcomes, then the binomial model does not apply.  In the common case of sampling without replacement from a finite population, the hypergeometric distribution should be used instead.


Paradoxical expected values

We tend to think of the expected value of a random variable as a standard, practical, and simple measure of the variable’s central tendency.  However, there are situations in which the expected value is misleading or even nonexistent.  One famous case, commonly called the “St. Petersburg Paradox”, illustrates perfectly the limitations of the expected value as a measure of central tendency.

Suppose a fair coin is flipped until the first tail appears.  You win $2 if a tail appears on the first toss, $4 if a tail appears on the second toss, $8 if a tail appears on the third toss, and so on.  The sample space for this experiment is infinite: S = {T, HT, HHT, HHHT, …}.  A game is called fair if the ante, or amount you must pay to play the game, is exactly equal to the expected value E(X) of the game.  Casino games are obviously never fair, because casinos stand to earn a profit (on average) from the games they offer; otherwise they would not have a viable business model, and they couldn't afford to give you free drinks and erect fancy statues.  (While we're on the subject, craps is the "fairest" casino game in the sense that the odds are skewed least heavily in the house's favor.)

Let the random variable W denote the winnings from playing in the game described above.   We want to categorize W such that we can mathematically evaluate its expected value.  Once we find this expected value, we know how much we’d have to ante for this game to be considered fair.  We know how much we’ll win on the first, second, and third toss, but let’s generalize for k tosses:

  • T on 1st toss -> $2
  • T on 2nd toss -> $4
  • T on 3rd toss -> $8
  • T on kth toss -> $2^k

So we win $2^k if the first tail appears on the kth toss.  But what is the probability of each of those outcomes?  For toss one, there is a 1/2 chance of getting a tail.  Since trials are independent, the probability of the first tail appearing on the second toss is (1/2)*(1/2) = 1/4.  For the third toss, the probability is 1/8.  So the probability of winning $2^k is the probability of getting the first tail on the kth toss, which is:

Screen shot 2013-11-01 at 7.01.14 PM

To find the expected value of the random variable W, we sum, over all values of k (remember that k is the toss on which the first tail appears), the payoff $2^k times the probability of that payoff.  It's evident from the sample space that k can take on infinitely many values.  The expected value is:

Screen shot 2013-11-01 at 7.01.21 PM

So the expected value of the random variable W (winnings) is the sum over all k of the payoff (2^k dollars) times the probability of realizing that payoff.  The first factor in each term is the payoff; the second is its probability.  Plugging in what we know about the probability of the first tail appearing on toss k, and summing over the infinite sample space:

Screen shot 2013-11-01 at 7.01.30 PM

The sum diverges!  Each term equals 2^k * (1/2)^k = 1, and there are infinitely many terms, so there is no finite expected value.  That is, the expected value of the game is infinite, which means we'd have to pay an infinite amount of money for the game to be fair.

The point is not that you should wager an infinite amount of money in order to play the aforementioned game.  The point is that the expected value is often an inappropriate measure of central tendency that leads to an inaccurate characterization of a distribution.


Criminology and Combinatorial Probability

I just read a really interesting application of combinatorics to criminology in my stats book, An introduction to Mathematical Statistics and Its Applications, by Larsen and Marx.  Alphonse Bertillon was a French criminologist who developed a system to identify criminals.  He chose 11 anatomical characteristics that supposedly remain somewhat unchanged throughout a person’s adult life – such as ear length – and divided each characteristic into three classifications: small, medium, and large.  So each of the 11 anatomical variables are categorized according to their respective size, and the ordered sequence of 11 letters (s for ‘small’, m for ‘medium’, and l for ‘large’) comprised a “Bertillon Configuration”.

This seems rudimentary and imprecise, perhaps, but keep in mind this was before the discovery of fingerprinting and so forth.  One glaring issue with this system is the obvious subjectivity involved with assigning a classification to a given anatomical variable.  Also, I’d imagine it’s kind of hard to get all of this information about a person, especially if they’re suspected of having committed a crime but haven’t ever been in custody.

Issues aside, the Bertillon configuration is an interesting idea I think.  But the obvious question to ask is how likely is it that two people have the exact same Bertillon configuration?  The implications are two people who, from a legal classification standpoint, are exactly the same (and you can use your imagination from there).  So, how many people must live in a municipality before two of them necessarily share a Bertillon configuration?

The question is actually very simple to answer via the multiplication rule:

If Operation A can be performed in m different ways, and Operation B can be performed in n different ways, the sequence {Operation A, Operation B} can be performed in (m)(n) different ways.

Corollary: If Operation A_i can be performed in n_i different ways, i = 1, 2, …, k, then the ordered sequence {Operation A_1, Operation A_2, …, Operation A_k} can be performed in (n_1)(n_2)···(n_k) different ways.

Think of Operation A as the anatomical variable and Operation B as assigning it a value (s,m,l).

Screen shot 2013-10-17 at 10.08.05 AM

Each operation can be performed in three different ways, namely s, m, and l.  Since there are 11 anatomical variables, by the multiplication rule there are (3)*(3)*(3)*(3)*(3)*(3)*(3)*(3)*(3)*(3)*(3) = 3^11 = 177,147 possible Bertillon configurations.  Therefore, if 177,148 people live in a town, at least two of them must, by the pigeonhole principle, have the same Bertillon configuration.
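A brute-force count confirms the multiplication rule here.  The Python sketch below enumerates every possible configuration directly:

```python
from itertools import product

# 11 anatomical variables, each classified small (s), medium (m), or large (l)
configs = list(product("sml", repeat=11))

print(len(configs))  # 177147
print(3**11)         # 177147
```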


Birthday Statistics

Today is my roommate’s birthday and three of my friends’ birthdays.  There are 365 days in a year, so what are the chances that so many birthdays are on the same day?  More importantly, what are the chances that I have to spend so much time and money on gifts?

The probability is higher than you'd expect – so high, in fact, it has a special name: the birthday paradox.  Informally, the birthday paradox is the probabilistic phenomenon that for a relatively small group of people, the probability that at least two share a birthday is surprisingly high.  For example, if you have 23 people in your stats class, there's roughly a 50% chance that at least two people have the same birthday.

One way to think about this result is that, at first, it seems like there are only 22 comparisons to be made – essentially 22 ‘chances’ that one person shares his or her birthday with someone else in the room.  This is true for the first person, who we’ll call Bob, but what happens when Bob is done comparing his birthday to everyone else’s?  Well, then the second person, who we’ll call Mary, compares her birthday to everyone in the room except for Bob, who she has already compared birthdays with.  So Mary has 21 people to compare with.  The third person has 20 potential matches, the fourth has 19, and so forth.  So what you really have is 22+21+…+1 = 253 comparisons, not 22.

Each individual comparison has only a 1/365 chance of being a match – equivalently, each trial has a 99.726% chance of not being a match.  This is probably why we’re so surprised by this paradox – we’re conditioned to think it’s rare to have the same birthday as someone, because when you ask any given person when their birthday is there is only a .274% chance you’ll have the same birthday as him or her.  But each time we compare, there is one less possibility – i.e. there is one more birth date that isn’t available, so to speak.  Below is the probability that no two people in the room share a birthday.  Notice that the first term is 365/365 because the first person has 365 birthdays to choose from without ‘taking’ someone else’s.  The second person, however, has one less choice, because the first person’s birthday has already been taken.  The third person has one less choice than the second person.  This continues until the last person, person number 23, has 22 fewer choices (365-22=343) of birthdays if he is to avoid having the same birthday as another person in the room.

Screen shot 2013-09-17 at 8.21.26 AM

The probability that two people in the room do share a birthday is obviously the complement of this, so it’s equal to 1 – 0.4927 = 50.73%.  You can write the expression above with factorials etc., but I just wanted to sort of explain why (though not actually justify in a rigorously mathematical sense) the birthday paradox is a thing.  For a more hardcore explanation, you can visit Wolfram Mathworld here.
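The product above is easy to evaluate directly.  Here is a Python sketch (the function name is my own) that multiplies the terms (365 − i)/365 and takes the complement:

```python
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday (365-day year)."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365  # person i+1 avoids the i taken dates
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 4))  # 0.5073
```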

