Lucky 8?

My state government’s lottery administration, SA Lotteries, makes the results from its various games available online, including tables of how frequently the various lottery numbers were drawn.

For example, you can see here how frequently the numbers 1 through 45 have been drawn in the Saturday X Lotto. At the time of writing, Number 8 was the most frequently drawn, recorded as occurring a total of 289 times between Draws 351 and 3265. Note that the Saturday X Lotto draws are odd-numbered, so Draws 351 to 3265 actually comprise 1458 (i.e. (3265-351)/2+1) weekly games.

Of course there will be random variation in how frequently balls are drawn over time, just as there’s random variation in heads and tails in the toss of a coin. But is it particularly unusual for the Number 8 ball to be drawn 289 times in 1458 games of Saturday X Lotto?

Is South Australia’s Saturday X Lotto biased towards the number 8?

Now, before we can determine whether the Number 8 being drawn 289 times in 1458 games of X Lotto is an extraordinary event, it helps to first work out how many times we would expect it to happen. In X Lotto, a total of eight balls (6 balls for the main prize and then 2 supplementary balls) are selected without replacement from a spinning barrel of 45 balls. The probability of any single number of interest being selected in eight attempts without replacement from a pool of 45 can be calculated using a Hypergeometric Calculator as P(X=1)=0.17778 (i.e. just under 18%). Therefore we expect the Number 8 (or any other number, for that matter) to be drawn 0.17778 x 1458 = 259 times in 1458 games.
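As a quick sanity check, the same figure can be reproduced in a few lines of Python using only the standard library (no hypergeometric calculator needed); `math.comb` is available from Python 3.8:

```python
from math import comb

# Hypergeometric: probability that one particular ball (K=1) appears
# among the 8 drawn without replacement from 45
p = comb(44, 7) / comb(45, 8)   # ways to pick the other 7 balls / all draws
# The expression simplifies to exactly 8/45

draws = 1458
expected = p * draws
print(round(p, 5), round(expected, 1))  # 0.17778 259.2
```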

So observing 289 occurrences when we were only expecting 259 certainly seems unusual, but is it extraordinary?

To answer this, I’ll employ the Binomial test to evaluate the null hypothesis, H0:

H0: Observed frequency of the Number 8 being drawn in SA Lotteries’ Saturday X Lotto is within the expected limits attributable to chance (i.e. the lotto is a fair draw)

vs. the alternative hypothesis, H1:

H1: Observed frequency is higher than would be expected from chance alone (i.e. the lotto is not a fair draw)

The statistics package, R, can be used to run the Binomial test:

> binom.test(289, 1458, 0.17778, alternative="greater")

        Exact binomial test

data:  289 and 1458
number of successes = 289, number of trials = 1458, p-value = 0.02353
alternative hypothesis: true probability of success is greater than 0.17778

So we can reject the null hypothesis of a fair draw at the alpha=0.05 level of significance.  The p-value is small enough to conclude that the true probability of Number 8 being drawn is higher than expected based on chance alone.
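For readers without R to hand, the same one-sided exact test can be sketched in plain Python by summing the upper tail of the Binomial distribution (the pmf is computed in log space to avoid underflow); it should reproduce R's p-value:

```python
from math import lgamma, log, exp

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    q = 1.0 - p
    total = 0.0
    for i in range(k, n + 1):
        # log of the Binomial pmf at i, via log-gamma for the binomial coefficient
        logpmf = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                  + i * log(p) + (n - i) * log(q))
        total += exp(logpmf)
    return total

pval = binom_sf(289, 1458, 0.17778)
print(round(pval, 5))  # ≈ 0.0235, matching binom.test()
```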

However, please note that I am definitely not suggesting that anything untoward is going on at SA Lotteries, or that you’ll improve the odds of winning the lottery by including the number 8 in your selection. For a start, rejection of the null hypothesis of a fair system occurs at the standard, but fairly conservative, alpha=0.05 level. What if I had decided to use alpha=0.01 instead? The null hypothesis of a fair system would be retained. Things are all rather arbitrary in the world of statistics.

Still, it was a curious result, drawing on several statistical concepts that I thought would be interesting to blog about.



How poker machines vacuum up your money

Poker machines are unique in the gambling world. They are the only form of gambling that has been designed and crafted for the purpose of making money, and where there is absolutely no chance of influencing the outcome.

Tom Cummings, Poker machine maths, 27 May 2011

The article, linked above, on poker machines is an excellent insight into the mathematics of poker machine gaming and how, despite a high guaranteed return-to-player percentage, a punter who plays long enough ends up with nothing. The vital point that Tom Cummings makes is this:

It’s common knowledge that poker machines have a return-to-player percentage of anywhere between 85 per cent and 90 per cent, depending on where you live and what kind of establishment you’re playing in. For the sake of the story, let’s assume that Gladys’ poker machine was set to 90 per cent. That means that over a long period of time, the machine will return 90 per cent of money gambled to players, and keep 10 per cent as profit.

But wait, I hear you cry. Gladys didn’t lose 10 per cent; she lost it all! Well, no… not according to poker machine mathematics. The rule is that the poker machine has to return 90 per cent of money gambled… not money inserted. And there’s a huge difference.

The way it works, mathematically, is, I think, very interesting. Take the example used in the article of a 2-cent gaming machine, at $1 per play, with guaranteed return of 90%. When you load that dollar in the slot, sometimes you win, and sometimes you lose, but the long-term average dictates that at each “turn” the dollar is losing 10% of its value. $1 turns into 90c, turns into 81c, turns into 72.9c, … , turns into 2c at which point the game ends.

More generally, at “turn” n, an amount A1 initially fed into a slot machine with return R is worth:

A_{n}=A_{1}R^{n-1}

or solving for n:

n=\frac{\displaystyle ln(A_{n}) - ln(A_{1})}{\displaystyle ln(R)} + 1

What does this mean? It means that it takes, on average

n = [ ln(0.02) – ln(1.00) ] / ln(0.90) + 1 ≈ 38

iterations to turn $1.00 into 2 cents on a 90% return poker machine. More importantly, the total amount gambled isn’t $1.00, it’s actually (because you’re re-investing your winnings and following your losses until the game ends): $1.00 + 90c + 81c + 72.9c + … + 2c. Or more generally,

A_{1}\sum_{i=1}^{n} R^{i-1}

which, being a finite geometric series, sums to:

A_{1} \frac{\displaystyle 1-R^{n}}{\displaystyle 1-R}

So a single dollar coin will generate, on average (on a 2-cent, 90% return machine):

[ 1-0.90^38 ] / [ 1-0.90 ] ≈ $9.82 in total bets.

At $1.00 per game, at 10 games per minute, that’s about 1 minute of play for every dollar put in, or 5 hours to burn through Gladys’ $300.
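The arithmetic above can be sketched in a few lines of Python; the figures assume, as in the article, a 90% return machine and a $1 starting stake:

```python
from math import log

A1, An, R = 1.00, 0.02, 0.90

# Turns for $1 to decay to 2c, losing 10% of its value per turn on average
n = (log(An) - log(A1)) / log(R) + 1
turns = round(n)          # ≈ 38

# Total wagered is the geometric series A1 + A1*R + ... + A1*R**(n-1)
total_bet = A1 * (1 - R**turns) / (1 - R)   # ≈ $9.82

games_per_min = 10
minutes_for_300 = 300 * total_bet / games_per_min  # ≈ 295 min, about 5 hours
print(turns, round(total_bet, 2), round(minutes_for_300))
```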

Those who defend poker machines often point to the high rate of return as one of the reasons that pokies are just “good, clean fun” for most people. The reality is that every poker machine can meet this “rate of return” requirement while still leaving the gambler broke. That’s poker machine mathematics.


Further reading:

ABC Hungry Beast, The Beast File: Pokies

Will the 43rd parliament of Australia survive the next two years?

Good question. I mean, what are the odds?

Australia’s 43rd parliament is a precarious one. Last year’s federal election resulted in a hung parliament in the House of Representatives, with a minority Labor government clinging to power with the help of the Greens and a few Independents. Now I don’t want to sound morbid but, Heaven forbid, should any one of the Honourable Members die, the resulting by-election could conceivably bring down the government. So finely balanced are the numbers.

So, I wonder, what IS the probability that this will happen before the next election, due in 2013?

Statisticians who devote themselves to thinking about exactly this sort of question are called Actuaries, and when calculating risk of mortality they reference a thing called a Life Table. Life tables list the probability that a person aged x will survive to age x+1. For example, looking at the most recent Life Tables for Australia, the probability that a female aged 40 will die before turning 41 is 0.00078 (i.e. 0.078%). Or, looking at it in a more positive light, the chance a 40-year-old female will live to see her 41st birthday is 1-0.00078, or a healthy 99.922%.

So turning our attention back to the 150-member House of Representatives, what is the probability that one or more of them might shuffle off this mortal coil in the next two years, resulting in a by-election and/or a change of government?

To be rigorous I should aggregate individual probabilities of survival based on each member’s age and gender. But I can’t be arsed, so I’ll talk about it in terms of generalities.

If I recall correctly, the average age of a politician is 51 years. Despite this being a modern society in the year 2011, our parliament is still a massive sausage-fest. Men in government significantly outnumber women. So I’ll just be lazy and use the qx column in the Australian Life Tables linked above for males aged 51 and 52.

From the Life Table, the probability that a male aged 51 will survive to his 53rd birthday is (1-0.00332)*(1-0.00359)=0.993. Therefore the chance that every member of the 150-seat parliament survives for the next two years can be approximated as 0.993^150=0.349 (i.e. 34.9%). The likelihood that one or more parliamentarians die within the next two years is the complement of everyone surviving, calculated as 1-0.349=0.651 (65.1%).
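The calculation is easy to sketch in Python. Note that the figure above rounds the two-year survival probability to 0.993 before raising it to the 150th power; keeping the unrounded value gives a slightly higher all-survive probability (≈0.354):

```python
# qx values for males aged 51 and 52, from the Australian Life Tables
q51, q52 = 0.00332, 0.00359

p_survive_2y = (1 - q51) * (1 - q52)      # ≈ 0.9931
p_all = round(p_survive_2y, 3) ** 150     # 0.993**150, as in the post
p_by_election = 1 - p_all

print(round(p_all, 3), round(p_by_election, 3))  # 0.349 0.651
```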

65%. That’s a pretty high risk that the parliament won’t see out its full term.


The NBN, CVC and burst capacity

Late last year, NBN Co (the body responsible for rolling out Australia’s National Broadband Network) released more detail on its wholesale products and pricing. You can download their Product and Pricing Overview here. The pricing component that I wanted to analyse in this post is NBN Co’s additional charge for “Connectivity Virtual Circuit” (CVC) capacity.

CVC is bandwidth that ISPs will need to purchase from NBN Co, charged at the rate of $20 (ex-GST) per Mbps per month. Note that this CVC is on top of the backhaul and international transit required to pipe all those interwebs into your home. And just like backhaul and international transit, if an ISP doesn’t buy enough CVC from NBN Co to cover peak utilisation, its customers will experience a congested service.

The problem with the CVC tax, priced as it is by NBN Co, is that it punishes small players. By my calculations, an ISP of (say) 1000 subscribers will need to spend proportionally a lot more on CVC than an ISP of 1,000,000 subscribers if they want to provide a service that delivers the speeds it promises.

Here come the statistics.

Consider NBN Co’s theoretical 12 megabit service with 10GB of monthly quota, the example used in the document I linked to above. 10GB per month at 12Mbps gives you 6,827 seconds (a bit under 2 hours) at full speed before you’re throttled. There are 2,592,000 seconds in a 30-day month, so if I choose a random moment in time there is a 6827/2592000 = 0.263% chance that I’ll find you downloading at full speed.

That’s on average. The probability would naturally be higher during peak times. But let’s assume in this example that our theoretical ISP has a perfectly balanced network profile (no peak or off-peak periods). It doesn’t affect the point I’ll labour to make.

A mid-sized ISP with (let’s say) 100,000 subscribers can expect, on average, to have only 100,000*0.263% = 263 of those customers downloading at full speed simultaneously at any particular second. However, the Binomial distribution tells us that there is a small but non-negligible probability (about 5%, the upper tail at the alpha=0.05 level) that 290 or more customers are downloading at the same time.

So a quality ISP of 100,000 subscribers will plan to buy enough CVC bandwidth to service 263 customers at any one time. But a statistician would advise the ISP to buy enough CVC bandwidth to service 290 subscribers, an additional (290-263)/263 = 10% , or find itself with a congested service about one day in every 20.

This additional “burst headroom”, as a percentage, increases as the size of the ISP decreases. From above, an ISP of 10,000 subscribers can expect to have 26 customers downloading simultaneously at any random moment in time. But there’s about a 5% chance this could be 35 or more, requiring them to buy an additional (35-26)/26 ≈ 35% in CVC over and above what was expected, to cover peak bursts.
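These burst figures can be reproduced with a sketch of the 95th-percentile calculation in standard-library Python (the function name and the list of ISP sizes are mine; the per-subscriber probability is the 6827/2592000 from above):

```python
from math import lgamma, log, exp

def binom_ppf95(n, p):
    """Smallest k with P(X <= k) >= 0.95, for X ~ Binomial(n, p)."""
    q = 1.0 - p
    cdf, k = 0.0, 0
    while cdf < 0.95:
        # log of the Binomial pmf at k, via log-gamma to avoid overflow
        logpmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                  + k * log(p) + (n - k) * log(q))
        cdf += exp(logpmf)
        k += 1
    return k - 1

p = 6827 / 2592000   # chance a random subscriber is downloading right now
for subs in (1_000, 10_000, 100_000, 1_000_000):
    mean = subs * p
    burst = binom_ppf95(subs, p)
    headroom = (burst - mean) / mean
    print(f"{subs:>9}  mean {mean:8.1f}  95th pct {burst:>5}  headroom {headroom:.0%}")
```

The headroom percentage shrinks as the subscriber base grows, which is the point being laboured: small ISPs must over-provision CVC proportionally far more than large ones.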

The table below summarises, for ISPs of various sizes, how much additional CVC would need to be purchased over and above the expected amount, to provide an uncontended service 95%+ of the time.

Graphically it looks a bit like this…

As you can see, things only really start to settle down for ISPs larger than 100,000 subscribers. Any smaller than that and your relative cost of CVC per subscriber per month is disproportionally large.


Further reading:

Rebalancing NBNCo Pricing Model

NBN Pricing – Background & Examples, Part 1


The Gambler’s Fallacy

Roulette is no match for geeks with a basic knowledge of the fundamentals of statistics

The quote above popped up on my Twitter timeline a few days ago.  It’s not my purpose to embarrass the person who wrote it; their statement simply reflects several very common misconceptions about gambling probability, concepts that I’ve been meaning to write about for a while.  Probability is often unintuitive, and when applied to gambling people can quickly come unstuck.

I think the most fundamental error here, more observational than mathematical, is the idea that casinos could somehow have a poor grasp of probability theory, and that roulette can therefore be beaten with a “system”.  Notwithstanding that casinos must, over centuries, have evolved a powerful grip on probability theory or quickly perished, whenever I confront people with this notion I find their strategy boils down to a variation on the debunked Martingale System.  One of the simplest Martingale strategies, as it applies to a roulette player betting on red or black, is to wait for a run of one colour (say 5 to 7 reds in a row) and then place a wager on the opposite outcome (i.e. black).  This strategy lends itself to the Gambler’s Fallacy: the mistaken belief that a future outcome in a random, independent process is somehow “more likely” based on past behaviour.  And while it’s true that, given infinite time, the roulette ball in a fair game must fall as often on black as it does on red, you’ll go broke long before then for reasons I hope to make clear below.

The casino’s strategy, on the other hand, is simple, foolproof and ruthless: keep you playing.  Like a predator stalking its prey to exhaustion, the casino can wear you down to broke using a statistical concept known as Expected Value.  Put simply, the expected value, as it applies to gambling, is the average payoff over time.

Take, for example, the roulette wheels at Adelaide’s “SkyCity” casino.  These tables have 37 numbers consisting of 18 red, 18 black, and one “0” slot.  Even money bets such as black/red pay even money, meaning that if you bet $1 and win (with probability of 18/37) you gain an additional $1.  Conversely, there’s a 19/37 chance you will lose your $1 stake.  So the punters’ average payoff over time is:

Expected Value [$1 even-money bet] = Prob(loss) x Payoff(loss) + Prob(win) x Payoff(win)

= (19/37 x -$1) + (18/37 x $1)

=  -$0.0270

Therefore you can expect to lose, on average, about 2.7 cents for every $1 even-money bet over the long term.

In other words, the house doesn’t always win, but it will ultimately prevail.
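The expected-value calculation is trivial to verify; here’s a minimal sketch using exact fractions:

```python
from fractions import Fraction

# Single-zero wheel: 18 red, 18 black, 1 zero; even-money bet of $1
p_win, p_lose = Fraction(18, 37), Fraction(19, 37)
ev = p_win * 1 + p_lose * (-1)   # exactly -1/37 per dollar bet

print(float(ev))         # ≈ -0.0270 (lose about 2.7c per $1, on average)
print(float(ev) * 1000)  # expect to be about $27 down after 1000 such bets
```

No betting pattern changes this figure, because each spin is independent and every even-money wager carries the same -1/37 expectation.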


Here’s a clip showing how it can all go horribly, horribly wrong.  Note how the player starts with a wager of just €1 and watch the losses increase exponentially [warning: some language].


Sex, Lies and Polygraph Tests

The Premier of South Australia, Mike Rann, is embroiled in something of a sex scandal at the moment.  I won’t go into all the sordid details, but, briefly, a woman by the name of Michelle Chantelois is claiming that Rann had sex with her several years ago (he was single but she was married at the time), a claim he is emphatically denying.  The political vultures are circling because, given the denial, if a “smoking gun” (or perhaps more aptly a “blue dress”) is produced then Rann is cactus.

Anyway, none of this is really interesting to me.  Actually, it’s hardly anybody’s business.  The shenanigans these two people did, or did not, get up to in their private lives years ago is entirely between them and their families as far as I’m concerned.  But today Chantelois has come out and volunteered to take a lie detector test to determine who is telling the truth.  So I wonder, statistically what’s the probability that she will pass the test?

To answer this question we’ll use Bayes’ Theorem and some rather dodgy data points I picked up around the internet.  It’s on the internet, you see, so it must be true.

The first bit of information we need is the probability that Chantelois is actually telling the truth.  In a recent, and utterly meaningless straw poll conducted by the Adelaide Advertiser, this probability is precisely 53%.

Shortly before 3pm, 53 per cent of respondents to an AdelaideNow poll believed former Parliament House waitress Ms Chantelois was telling the truth about claims of a sexual relationship with Mr Rann, while 47 per cent believed the Premier’s rejection of the allegations.

Therefore P(Chantelois is telling the truth) = P(T) = 0.53;

and P(Chantelois is not telling the truth) = P(N) = 1-P(T) = 0.47.

The next bit of information we need concerns the reliability of polygraph tests themselves.  Personally I’ve always been more than a little sceptical of the infernal things.  Polygraphs smell like voodoo science to me, and according to Wikipedia,

Polygraph testing has little credibility among scientists. Despite claims of 90-95% validity by polygraph advocates, critics maintain that rather than a “test”, the method amounts to an inherently unstandardizable interrogation technique whose accuracy cannot be established.  A 1997 survey of 421 psychologists estimated the test’s average accuracy at about 61%, a little better than chance.

Therefore P(polygraph says you’re telling the truth, given that you’re telling the truth) = P(+|T) = 0.61; and

P(polygraph says you’re telling the truth, given that you’re lying) = P(+|N) = 1-P(+|T) = 0.39 (assuming, for simplicity, that the test is equally accurate on truth-tellers and liars).

Now using Bayes’ Theorem, we can calculate Chantelois’ chance of evading the lie detector test.

P(Chantelois is not telling the truth, given that the polygraph says she is)

= P(N|+)

= P(+|N) x P(N) / P(+)

= [ P(+|N) x P(N) ] / [ P(+|T)xP(T) + P(+|N)xP(N) ]

= [ 0.39 x 0.47 ] / [ 0.61 x 0.53 + 0.39 x 0.47 ]

= 0.362 (i.e. 36.2%)

Too high to put any kind of faith in the results of the test.
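The Bayes’ Theorem arithmetic above, as a short Python sketch (variable names are mine; the inputs are the post’s admittedly dodgy data points):

```python
p_T = 0.53               # prior: Chantelois is telling the truth (straw poll)
p_N = 1 - p_T            # prior: she is not
p_pos_T = 0.61           # P(polygraph says "truthful" | truthful)
p_pos_N = 1 - p_pos_T    # P(polygraph says "truthful" | lying), simplified

# Total probability of passing the test, then Bayes' Theorem
p_pos = p_pos_T * p_T + p_pos_N * p_N
p_N_given_pos = (p_pos_N * p_N) / p_pos

print(round(p_N_given_pos, 3))  # 0.362
```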

The calculations above were all done with tongue planted firmly in cheek and are not to be taken seriously.  Whether it’s Rann or Chantelois really telling the truth I don’t know or care.  What is important is that Bayes’ Theorem shows us that, even with accurate tests, there is a good chance of a misclassification.  A single test is usually not enough.


The randomness of iTunes

In 1998, a rather awkward 25-year-old male walked into a CD store (this was in the day when music was sold on CDs, in stores, to 25-year-olds) and purchased Whitey Ford Sings the Blues by Everlast.  Here’s what the indubitable Wikipedia has to say about said album and artist:

Whitey Ford Sings the Blues was both a commercial and critical success (selling more than 3 million copies).  It was hailed for its blend of rap with acoustic and electric guitars, developed by Everlast together with producers Dante Ross and John Gamble (aka SD50).  The album’s genre-crossing lead single “What It’s Like” proved to be his most popular and successful song, although the follow up single, “Ends”, also reached the rock top 10.

Several years later Apple launched iTunes, which also proved to be a commercial and critical success, and the awkward male promptly loaded Whitey Ford Sings the Blues into the song library.  iTunes seemed to take a particular shine to this album, apparently favouring it with much more frequent plays, when iTunes was set to “shuffle”, than any of the other 100 or more albums in the collection.  At least that’s how it appeared to the awkward male, who seemed to notice it come up much more often than expected.

In a strange twist of fate I also just happen to have Whitey Ford Sings the Blues in my iTunes collection.  In another strange coincidence, just like that awkward male from a decade ago, I’ve noticed that iTunes tends to favour it over other albums in the song list when iTunes is set to shuffle.

Life is certainly full of strange coincidences, but does iTunes really favour certain songs/ artists/ albums over others?  Let’s test it scientifically…

I set iTunes to shuffle and counted the number of tracks I had to skip before I hit Whitey Ford Sings the Blues.  The results are below:

32, 65, 181, 67, 77, 152, 50, 46, 230, 64

In other words, Whitey Ford Sings the Blues played randomly 10 times in 964 attempts (i.e. 1.037% of the sample).  I have 119 albums in iTunes, so theoretically I should be hearing it 1/119=0.840% of the time.  So the sample is a little higher than expected, but is it statistically significantly higher?

This question can be answered using the probability mass function of the Binomial Distribution.  The probability of exactly 10 “successes” out of 964 “attempts”, given that the probability of a success is 1/119 is, using the very fine SpeedCrunch calculator:

binompmf(10; 964; 1/119) = 0.102 (i.e. 10.2%)

This is well above the standard p=0.05 (5%) significance level.  (Strictly speaking, a one-sided test would use the upper-tail probability P(X ≥ 10) rather than the pmf, and at roughly 0.3 that is even further from significance.)  I have to conclude that Whitey Ford Sings the Blues doesn’t play any more or less frequently than any other album in my iTunes collection when the playlist is set to shuffle.
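The SpeedCrunch figure is easy to check in Python, along with the upper-tail probability that a formal one-sided test would use:

```python
from math import comb

n, k = 964, 10
p = 1 / 119

# Exact Binomial pmf: probability of exactly 10 plays in 964 tracks
pmf = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(pmf, 3))   # ≈ 0.102, matching binompmf(10; 964; 1/119)

# Upper tail P(X >= 10), via the complement to keep the sum short
tail = 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))
print(round(tail, 2))  # ≈ 0.3, nowhere near significance either
```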

Humans are very bad at gauging randomness.  Or rather, probably like most predators, we’re very good at detecting patterns, and tend to see patterns even when they’re not really there.  Luckily we have statistics to sort it all out for us.

And Whitey Ford Sings the Blues is still an awesome album.