"Figures often beguile me, particularly when I have the arranging of them to myself; in which case the remark attributed to Disraeli would often apply with justice and force: 'There are three kinds of lies: lies, damned lies and statistics'." - Autobiography of Mark Twain

Well, I have to ask if the infrequent Halloween/full moon convergence is having an effect on y'all! I usually think I'm way, way up there on the curmudgeon/cynic scale for this BBS, but Tony and greggerm are making me want to pull on a nice flower-print dress and a blonde wig with pigtails and break out in song!

In case anyone thought I was serious (how often has *that* happened?), my reference to 96.3 percent of computer users was a perversion of a favorite Doonesbury (roughly):

Reporter Hedley: "President Reagan, some citizens have questioned your use of statistics..."
Reagan: "Well, that's not true. 99 percent of Americans think my statistics are fine!"

Tony, you'll be pleased to know that Infoworld stole your line in that article: "In Search of a Better Benchmark: We've said for years that there are lies, damned lies, and benchmarks. Popular benchmarks (...) but are designed around linear scripts..." Their professed cynicism does not make their testing valid. I did take another look at their methods, and they did not seem obviously tilted. *However*, they use a "better, more realistic", vendor-provided test suite that is essentially a black box, so I suppose first you have to decide whether they are measuring things that matter (looks like it to me) and then whether the testing tool is valid (No idea. Could it be biased toward one OS or the other with the calls it's making to MAPI, DB, etc.?)

In defense of statistics -- one of my concentrations in a grad degree and something I dealt with during my time as an epidemiologist -- I would say that it is entirely possible to conduct valid and meaningful surveys, benchmarks, and studies. Does this mean that people don't intentionally or inadvertently bias the methodology, pick the wrong methodology, overextend the methodology, design it poorly, and/or misinterpret their results or the results of others? Does it mean that there aren't complex phenomena that are very challenging to study well (from a statistical standpoint)? No. On top of all the methodological issues are certainly all the issues of motive. Why did X do this study? Who paid them? What were their biases and interests going in? Of the two ways to analyze this data, why did they pick the second? What did they omit? These things aren't the fault of poor, harassed Statistics.

On a very crude level, I think Tim and I just conducted a benchmark. It has limited validity, but that doesn't make it entirely invalid. It's a data point. For better or worse, many of the opinion polls that are conducted follow well-tested random sampling methodologies that account for their own weaknesses (e.g., increasing sample sizes to adjust for non-response rates); then you get to deal with the issues of leading/slanted questions, etc. You just have to keep those "non-scientific poll" and "sponsored by" filters working all the time.

Pollyanna Jim
_________________________
Jim


'Tis the exceptional fellow who lies awake at night thinking of his successes.