The Tipping Point

I recently finished reading Malcolm Gladwell’s The Tipping Point: How Little Things Can Make A Big Difference.  The basic premise behind the book is best summarised in the author’s own words:

…the best way to understand the emergence of fashion trends, the ebb and flow of crime waves, or, for that matter, the transformation of unknown books into bestsellers, or the rise of teenage smoking, or the phenomena of word of mouth, or any number of the other mysterious changes that mark everyday life is to think of them as epidemics.  Ideas and products and messages and behaviors spread just like viruses do.

Under Gladwell’s hypothesis, there are three Agents of Change behind any tipping point: The actions of a Few, a Stickiness Factor and the right Context.

1. Law of the Few

The Law of the Few states that three personality types are required to work together before a trend will “tip”: Connectors, Mavens, and Salesmen.  I’ll try to summarise them as best I can below…

1a. Connector – essentially a person who “accumulates” people.  The type of person that loves to know everybody, and everybody loves to know.  Connectors move up and down and between disparate social circles.  They are experts at networking, forming friendships and cultivating acquaintances.  Connectors are the people who really spread the message from one group to another.  They act as the “feet” of a tipping point.

1b. Maven – “one who accumulates knowledge”.  Mavens are experts in all the technical details about products, services and pricing.  Importantly, they enjoy helping other people with purchasing decisions by sharing this unique knowledge of the marketplace.  Mavens allow you to compare the relative benefits of one thing to another in an objective, quantitative way.  Continuing the body analogy from above, I would describe mavens as the “brains” behind a tipping point.

1c. Salesman – as the name implies, salesmen are experts at selling a concept, product, service or movement.  However, Gladwell uses the term in the informal sense in that they “sell” an idea to those around them, not in the formal sense as in that they are employed by a company to push a particular product.  Essentially the “heart” behind a tipping point, salesmen are those charismatic people with powerful negotiation skills that can successfully persuade people.

So, according to Gladwell, a trend really needs these “champions” to help push it out to the masses and hence over the tipping point.  Of the three, it seems to me that the Salesman would be the most important.  And I’d say it’s possible to be two or even three of the above personality types at once.  In fact, wouldn’t most salesmen also be connectors, and vice versa?

2. Stickiness Factor – OK, it’s one thing for The Few to get your attention.  It is another thing entirely to keep that attention.  This is where Gladwell’s “stickiness factor” comes in.  There has to be a certain je ne sais quoi about the concept that resonates with the target audience and holds its attention.  I think Gladwell focuses far too much on children’s TV (Sesame Street and Blue’s Clues) in this section of the book to stake a solid claim.  But he at least highlights something curious: when watching TV, young children don’t stop viewing because they’re bored.  They stop paying attention because they don’t understand.  Personally, I think this applies to adults too.  If you want to sell something – an idea, product or concept – then your message needs to follow the KISS principle.  Keep It Simple, Stupid.

3. Power of Context – An idea, product or societal movement can be a bit like Goldilocks – all the conditions need to be just right before a big change goes over its tipping point.  Gladwell argues that people behave according to their surroundings and to those around them (particularly their peers).  It’s these environmental factors that will ultimately determine whether a tipping point can be reached.  Perhaps this is obvious – after all, you’re hardly going to see an explosion of air conditioner sales during a Siberian winter.  It’s intuitive that a person will respond, sometimes in a very primal way, to their conditions.  Gladwell uses examples from New York City’s “Fixing Broken Windows” anti-crime strategy and the famous Stanford Prison Experiment to support his case.  I found The Power of Context to be the most engaging section of The Tipping Point.  I particularly enjoyed Gladwell’s discussions on how our peer groups play a far more significant role in our upbringing than our parents do, and how groups and organisations tend to become dysfunctional beyond a seemingly universal size of 150 people.

In summary, I thought Gladwell’s Tipping Point was a very interesting concept, and presented some compelling arguments.  Tipping Point is a worthwhile read, if only because it’s a meme that has effectively entered our collective consciousness.  And it does make you think, which is never a bad thing.

Having said that, I remain unconvinced that tipping points, in fact, follow Gladwell’s “laws” in any kind of consistent, predictable fashion.  Perhaps the three “Agents of Change” that Gladwell defines are elusive and impossible to measure.  But it seems to me that sometimes a tipping point can be reached without all of Gladwell’s factors being present.  Conversely, sometimes a tipping point fails to be reached with all of the necessary agents in place.

At the end of the day, it all seems rather random.

Final note: The website FastCompany has published a comprehensive rebuttal to The Tipping Point here.

How to talk back to a statistic

In my previous blog entry I briefly reviewed Darrell Huff’s excellent book, How to Lie with Statistics.  In the closing chapter Huff summarises the lessons by explaining How to Talk Back to a Statistic.  Or, in Huff’s own words, “how to look a phoney statistic in the eye and face it down”.

Not all the statistical information that you may come upon can be tested with the sureness of chemical analysis or of what goes on in an assayer’s laboratory.  But you can prod the stuff with five simple questions, and by finding the answers avoid learning a remarkable lot that isn’t so.
How to Lie with Statistics, Chapter 10, p110

The five simple questions are:

  1. Who Says So?
  2. How Does He Know?
  3. What’s Missing?
  4. Did Somebody Change the Subject?
  5. Does It Make Sense?

Armed with these five questions I thought it might be interesting to examine a real world example.

You might have seen recently the news report that “75% of ex-Bush officials are still unemployed”.  The source of the story was the Wall Street Journal article of 21 Feb 2009: Jobs Still Elude Some Bush Ex-Officials

The jobless rate is hanging high for many of the roughly 3,000 political appointees who served President George W. Bush.  Finding work has proved a far tougher task than those appointees expected …

Only 25% to 30% of ex-Bush officials seeking full-time jobs have succeeded … much, much worse than when Ronald Reagan, George H.W. Bush and Bill Clinton left the White House …

Let’s put this 75% unemployment rate of ex-Bush officials to the Huff Test.

Who Says So?

The first sleight of hand you notice about the statistic is that it hides behind what Huff calls an “OK Name”.  In this case the “OK Name” is the Wall Street Journal, a well known and reputable news source.  But it’s not the WSJ who actually “says so”.  It’s not a piece of their own independent investigative journalism.  In this case the WSJ is merely reporting on a statistic prepared by somebody else.  It’s therefore worthwhile considering if this third party actually has any expertise in the areas of data collection and statistical analysis.  Are they impartial?  Could they be biased?  Do they have a hidden agenda or ulterior motive behind presenting these figures?

I don’t want to drink from a poisoned well.  So I’m going to approach this source with a healthy dose of scepticism.

How Does He Know?

How did the researchers arrive at their “estimate”?  Via robust statistical sampling?  Rumour mill?  Reading tea leaves?  On the face of it, the data look anecdotal at best.  The WSJ article doesn’t go into any details.  This is enough to raise a second doubt about the statistic.

What’s Missing?

What kind of error margin is there in the estimates?  If the estimate was based on a sample, how big was it?  How was it selected?  Is it representative?  When comparing ex-Bush officials with previous administrations are they comparing apples with apples in terms of such things as ages and career ambitions?  Were ex-Bush officials more likely to be heading into retirement or satisfied with a bit of part time work?  Not to mention the vastly different employment situation that exists right now as the U.S., and indeed the world, enters the Second Great Depression.

Did Somebody Change the Subject?

Huff warns that “when assaying a statistic, watch out for a switch somewhere between the raw figure and the conclusion.  One thing is all too often reported as another.”  Although it doesn’t explicitly say so, the implication behind the WSJ article is that ex-Bush officials are having a hard time finding employment because they’re ex-Bush officials.  It’s fair to say that George W. Bush is regarded as being one of the worst U.S. presidents in history.  Certainly he left office with some of the lowest approval ratings of all time.  So it’s only natural that nobody from his administration could ever find gainful employment again.  They’re hopeless and everybody hates them, right?  Well, maybe.  But the truth is probably far more complex.  There are so many external variables in play that such a conclusion represents a leap of faith.

Does It Make Sense?

For any statistic to “make sense” it needs context.  A comparison of yourself to a group, or your suburb to the country, or a trend over time are examples of data context.  In my opinion any kind of real, meaningful context is missing from the statistic reported by the WSJ.

All in all I believe the figure of “75% of ex-Bush officials are unemployed”, as reported by the WSJ, fails Huff’s basic five-point “how to talk back to a statistic” test.  This particular statistic is like a bad smell in an elevator.  Source and purpose unknown, it hangs in the air demanding our attention.  But any kind of meaningful comment is impossible.  The only sensible course of action is to ignore it.

How to Lie with Statistics

Statistics is hard.  Let’s all go to the beach.

You know, I really enjoy being an information analyst.  Statistics has been a very rewarding career choice.  Over time I’ve learnt to swim through data like a fish through water.  In fact, remove me from statistics and I’d probably flap around gasping for breath just like a landed fish.  But after many years I’ve come to accept that the vast majority of the population simply don’t “trust” statistics.  And I admit, not without good reason.  On the one hand we’re bombarded with statistics every day, mostly by the media (both “as reported” and in advertising).  On the other hand statistics are too often twisted, corrupted, misrepresented, biased, misused, falsified, misreported or sometimes simply ignored (not by me, of course, heh).  No wonder some people throw up their hands and declare it’s all too hard.  Why bother paying attention anyway, when 83.7% of all statistics are simply made up on the spot?

With that in mind I’ve just finished reading How to Lie with Statistics by Darrell Huff.

I understand that How to Lie with Statistics is one of the highest selling books on statistics ever written, if not the highest.  An extraordinary achievement, especially considering Huff had no formal training in statistics.  The concepts are all too familiar to me, but of course How to Lie with Statistics is not aimed at the professional.  It’s very much an introductory text aimed squarely at a non-technical audience.  My copy was a mere 124 pages long, making How to Lie with Statistics something that can be digested in just a couple of hours.  First published in 1954, it’s striking how, even though some of the language has dated terribly (“Negro”? “Mongolism”?), the basic ideas expressed inside are timeless.  Warnings to beware of such things as hidden bias, inappropriate sampling, “conveniently” omitted details, and inappropriate measures (e.g. using the mean when the median is more appropriate) remain as relevant in 2009 as in 1954.  They’ll still be relevant in 2059.

Huff certainly writes entertainingly and with good humour throughout, making How to Lie with Statistics a very accessible and enjoyable read.  More than 50 years after it was first published, many of the statistical “sins” highlighted by Huff are still being committed today – correlation used to imply causation, graph scales used to exaggerate minor differences, and “OK names” used to mask dodgy sources.  In conclusion, How to Lie with Statistics will help the average reader identify the various statistical “sharks” that lurk in these waters.

Safe swimming.

In future blog entries I’d like to expand further on some of the concepts that Huff wrote about in How to Lie with Statistics, hopefully using some real world examples.

The Wisdom of Gummy Bears

A couple of weeks ago I reviewed James Surowiecki’s The Wisdom of Crowds.  To briefly recap:

Under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them.  Groups do not need to be dominated by exceptionally intelligent people in order to be smart.  Even if most of the people within a group are not especially well-informed or rational, it can still reach a collectively wise decision.

The book opens with an example of an impromptu experiment conducted by the scientist Francis Galton in 1906:

Galton was at a country fair where a live ox was placed on display.  Fairgoers were invited to guess the weight of the ox after it had been slaughtered and dressed.  Eight hundred ordinary people from all walks of life tried their luck.  They included experts such as butchers and farmers, as well as non-experts.  Out of interest, when the contest was over, Galton collected the used tickets and averaged the punters’ individual guesses.  This figure represented the “wisdom of the crowd”, and in this case the crowd had guessed the ox would weigh 1197 pounds.  After it had been slaughtered and dressed the ox weighed 1198 pounds.  The crowd’s judgement was essentially perfect.

I think this is amazing.  The “experiment” was a success because the “crowd” in this example had met Surowiecki’s conditions for making a “wise” decision. That is, they:

  1. were diverse, from a range of backgrounds, including experts and non-experts
  2. understood the process and outcome.  That is, the task was simply to guess the weight of an ox.
  3. acted independently using their own judgement to come to a personal decision, free of undue external influence
  4. made a decision that was aggregatable, in this case via Galton’s calculation of the average
  5. produced a result. The competition was run and when it was over there was a definitive winner.

Several weeks ago I had an opportunity to re-create Galton’s 100+ year old experiment.  I was at a party and one of the games was to guess the number of gummy bears in a big glass jar.  Like Galton in 1906, I was curious to test the decision-making ability of a group of people.  So after the competition was over I asked the host for a list of all the contestants’ individual guesses.  A total of twenty-three people had participated in the game, making the following estimates.

Contestant    Guess
A               203
B               215
C               306
D               295
E               237
F               500
G               251
H              1000
I (winner)      369
J               450
K               150
L               300
M              1002
N               462
O               200
P               174
Q               295
R               187
S               305
T               420
U               483
V               250
W              1200
Average         402

In fact there were 387 gummy bears in the jar.  So contestant “I” was the clear winner (it wasn’t me, by the way!) with a guess of 369 gummy bears.  Not a bad effort – only 18 away from the true value.  But the really striking outcome for me was that the “crowd” fared even better.  The average of everyone’s guesses was 402, just 15 off the actual number.  In other words, the crowd was smarter than the smartest individual member.
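The arithmetic is easy to verify.  Here’s a quick sketch in Python, using the twenty-three guesses exactly as listed above (including the winner’s guess of 369):

```python
# The contestants' guesses, in the order listed above
guesses = [203, 215, 306, 295, 237, 500, 251, 1000, 369, 450, 150, 300,
           1002, 462, 200, 174, 295, 187, 305, 420, 483, 250, 1200]
actual = 387  # gummy bears actually in the jar

# The crowd's collective estimate is simply the mean of all guesses
crowd_average = sum(guesses) / len(guesses)

# The best individual is the guess closest to the true count
best_individual = min(guesses, key=lambda g: abs(g - actual))

print(round(crowd_average), abs(round(crowd_average) - actual))  # 402, off by 15
print(best_individual, abs(best_individual - actual))            # 369, off by 18
```

The crowd’s average beats every single contestant, winner included.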

An extraordinary result.

It also suggests an optimum strategy, if you ever find yourself participating in one of these “guess how much/how many” games: wait until the last minute and take the average of everybody else’s guesses.  In all likelihood that will be closer than any other individual estimate.

I didn’t follow my own brilliant strategy, by the way, which is why I’m not eating gummy bears at the moment.

The Wisdom of Crowds

James Surowiecki’s The Wisdom of Crowds turned the way I view the world completely on its head.  Concise, well thought out, and sharply written, The Wisdom of Crowds rests on this premise:

Under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them.  Groups do not need to be dominated by exceptionally intelligent people in order to be smart.  Even if most of the people within a group are not especially well-informed or rational, it can still reach a collectively wise decision.

Obvious?  No.  This is a contradiction to the way many of us think of crowds.  Most of us, myself included before I read the book, might consider an individual person quite clever but would never describe a crowd as “wise”.  No doubt this is because we typically equate crowds with mobs.  And (as Surowiecki himself emphasises early on in the piece) a mob, of course, is profoundly stupid.  However, stop to consider situations where crowds prove to be very astute decision makers.  A classic example of this is horse race betting.  It is no accident that the short-odds horses consistently finish so well.  Rather, a group of people is making a collective decision on an uncertain outcome that, time and time again, turns out to be remarkably accurate.

Which leads me to the critical conditions underlying Surowiecki’s hypothesis.  For a crowd to be “wise” it must be under the right circumstances.  In order for a group of people to come to a collectively good decision, it must:

  1. Be diverse. That is, a wide range of backgrounds is essential, including experts and non-experts.
  2. Have a basic grasp of the process and outcome. You wouldn’t necessarily ask a group of non-doctors to make a decision on a surgical procedure, at least not the way you would ask a crowd to pick the outcome of a horse race.
  3. Act independently. That is, individuals in the group must be allowed to use their own internal judgement systems to come to a personal decision, without influence from each other and “outside”.
  4. Be aggregatable. There must be some way of crystallising the crowd’s collective “decision”.  In the horse racing example the aggregating mechanism is the odds calculation.  The horse with the shortest odds is the group’s pick to win.
  5. Produce a result. For example, a race will be run and when it’s over there will be a definitive winner.  The stock market, on the other hand, isn’t as suitable as it’s in perpetual motion.
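Condition 3 is the statistical heart of the argument: when individual errors are independent and roughly unbiased, they largely cancel out in the average.  A minimal simulation sketch of that cancellation (the true value, crowd size and noise level here are my own arbitrary assumptions, not figures from the book):

```python
import random

random.seed(42)

true_value = 1000   # the quantity being guessed (arbitrary)
crowd_size = 800    # roughly the size of Galton's crowd
noise = 150         # spread of each individual's guessing error (assumed)

# Independent, unbiased guessers: each is noisy, but the errors are uncorrelated
guesses = [true_value + random.gauss(0, noise) for _ in range(crowd_size)]

crowd_error = abs(sum(guesses) / crowd_size - true_value)
typical_error = sum(abs(g - true_value) for g in guesses) / crowd_size

# The averaged estimate lands far closer than a typical individual guess
print(crowd_error < typical_error)
```

The independence condition is what makes this work: averaging shrinks uncorrelated error roughly with the square root of the crowd size, whereas a mob – everyone copying everyone else – shares one big correlated error that no amount of averaging can remove.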

In essence Surowiecki details where group decision making can work, when it works best, and also how it can fail.  Based on a simple but counter-intuitive theory that seems obvious by the end, it presents a compelling argument.  You don’t need a degree in statistics to follow it.  The Wisdom of Crowds is aimed squarely at a non-technical audience and it hits that target.  It includes plenty of supporting evidence in the form of entertaining anecdotes that press home its case.  It’s a great read, a real page-turner, and I highly recommend it.  If you’re looking for gift ideas this Christmas then The Wisdom of Crowds would be an excellent addition to the wish list.


“Meet me by the tram stop.  I’ll be the one wearing a hat.”