Toward a "robust" anti-doping testing program

It was with great fanfare earlier this year, upon the unveiling of the London anti-doping laboratory, that organizers said a record 6,250 doping tests would be carried out at the 2012 Olympics and Paralympics. That's up from 5,600 in Beijing four years ago, an 11.6 percent jump. In opening the facility, London 2012 chief executive Paul Deighton praised the commitment to a "robust testing system" and declared, "Our message to any athlete thinking about doping is simple -- we'll catch you."
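(For readers inclined to check the math, here is a minimal back-of-the-envelope sketch of that percentage jump; the only inputs are the two test counts cited above.)

```python
# Back-of-the-envelope check of the reported increase in tests,
# using only the figures cited in this column.
beijing_tests = 5_600   # tests reported for Beijing 2008
london_tests = 6_250    # tests announced for London 2012

jump = (london_tests - beijing_tests) / beijing_tests
print(f"Increase: {jump:.1%}")  # -> Increase: 11.6%
```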

All of which is entirely well-meaning.

But -- is it meaningful, or relevant?

The U.S. sprinter Marion Jones passed 160 drug tests, yet was later revealed to be a chronic doper. What, if anything, did all those tests prove?

Anyone who knows the first thing about the way the anti-doping system works knows two things:

One, you have to be a complete and utter fool to get caught doping at the Olympic Games. If you're juicing, the time to be on a program that offers maximum benefit is weeks or months beforehand. If you're so stupid that you've still got something in your system come Games time, you deserve to be caught.

That's why, at every edition of the Games, for all the talk about thousands and thousands of tests, there are relatively few positives -- and, in 2012, hardly any involving significant names.

Two, there's another set of numbers out there that surprisingly hasn't gained widespread attention.

These numbers, though, are well-known among senior leaders of Olympic and international sport. And there may yet be a nexus between the Lance Armstrong case, which is due in the coming days to take its next turns, and these figures.

It should be well understood, too, that the Armstrong matter has seized the attention of the Olympic movement at the highest levels.

For the year 2010, as very publicly reported by the World Anti-Doping Agency, its 35 accredited laboratories worldwide performed tests on 258,267 samples, returning positives -- if you include both what are called "adverse analytical findings" and "atypical findings" -- on 4,820 samples. That's a return rate of 1.87 percent.

That rate was down from 2.02 percent in 2009, when there were 5,610 positive findings out of 277,928 samples tested.
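(Again, purely as an arithmetic check on the figures as WADA reported them, not a recomputation from the underlying tables, a minimal sketch:)

```python
# Positive "return rate" = findings (adverse + atypical) / samples tested,
# using the totals quoted from the WADA annual reports above.
findings_2010, samples_2010 = 4_820, 258_267
findings_2009, samples_2009 = 5_610, 277_928

print(f"2010: {findings_2010 / samples_2010:.2%}")  # -> 2010: 1.87%
print(f"2009: {findings_2009 / samples_2009:.2%}")  # -> 2009: 2.02%
```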

Breaking it down further:

Table E from the 2010 report -- a caution here: the numbers don't add up to 4,820, for a variety of reasons -- details that nearly 61 percent of those findings were for "anabolic agents." That means steroids. Another 10.3 percent were for stimulants. And the third-most-common class of substance on the list, at 9.6 percent?

"Cannabinoids." That means marijuana.

There's a real question here, ladies and gentlemen, about a system that spends a lot of money, isn't especially effective and, when it does turn up positives, turns them up roughly one time in 10 for a substance that a significant number of people believe ought to be legalized and don't consider a performance-enhancer in any way.

The anti-doping system depends, first and foremost, on credibility. These kinds of numbers, one could reasonably argue, do not especially promote credibility.

The challenge is fundamental:

The general public wants -- and by extension the governments and sports officials who fund the anti-doping system want -- to believe in tests that can detect performance-enhancing drugs. But the most sophisticated people at work in the system understand that the tests can only do so much, can only go so far.

Those people also understand that WADA does not itself do the testing. WADA may bear the brunt of the public-relations pressure, fairly or unfairly, but the testing is carried out by the sports federations and national anti-doping agencies in the field; WADA's job is to push them to do that work as well as possible or, obviously, better.

All of this, by the way, rests on the assumption that there are more drug cheats out there than are being caught; that thesis is at the core of the whole enterprise. Some people are absolutely certain it's true. Others ask why it should be treated as a given.

This is in part why WADA, at its May 18 meeting, launched a working group to assess what, if anything, can be done to enhance testing effectiveness. Unclear is the working group's precise mandate or time frame for reporting; uncertain, too, is its membership, although one of those under consideration is U.S. federal agent Jeff Novitzky, who played a key role in both the BALCO case and the investigation of Armstrong led by the U.S. attorney's office in Los Angeles, an inquiry that was abruptly dropped earlier this year without the filing of charges.

It's not clear whether the U.S. government would even allow Novitzky to participate in such a working group; so far the question is moot because he has not accepted the invitation.

This is all complex, often intensely political and nuanced stuff.

At the same time, it's reasonable to expect that if governments and sports officials are going to spend millions of dollars in an effort to promote drug-free sport, that system ought to be, truly, "robust."

It has been said by others in this context before but bears repeating here -- if you went out and did a job that came back with a one or two percent return rate, what would your boss say to you?

Would it be -- let's keep doing exactly what we're doing?

Doubtful, right?