Six months on from the publication of the Hargreaves Review in the UK, I wonder if its seemingly most insipid recommendation may turn out to be the most controversial. Amid radical proposals regarding copyright licensing and private copying, Hargreaves's suggestion that "evidence should drive policy" barely raised an eyebrow.
Yes, we all nodded at the time, of course policy should be evidence-driven. But unpack that statement and you find it raises all sorts of uncomfortable questions: what evidence is there? How is it compiled? Where does it come from? And, above all, whose evidence is it?
Attending various conferences over the past month, I have been struck by how lazily speakers (often lawyers and patent attorneys who, in other circumstances, would be reluctant to write their own name on a document without footnoting it) roll out statistics without any reference or support, when even the most cursory examination calls for more analysis. More worryingly, proponents of new laws or agreements justify them on the back of specious percentages and dubious dollar figures.
Here are three examples: counterfeiting makes up 5% to 7% of world trade; IP licensing is worth "more than" $600 billion annually (a figure cited twice in the Hargreaves Review); and a quarter of all branded products sold online are fake. Where do these figures come from? The first, still widely quoted, seems first to have appeared in an ICC document in 1997 (making it at least 14 years old). The second is a combination of two separate reports, by the OECD in 2009 and UNCTAD in 2010. And the third I have yet to trace satisfactorily.
There are several difficulties here. The first is that counterfeiting, being illegal, is hard to quantify: it is hidden, and the perpetrators have no interest in counting it. That makes it ripe for guesswork and exaggeration. As Reuters blogger Felix Salmon cogently argues: "All counterfeiting statistics are bullshit." The second is that figures are often subservient to the agenda of whoever is paying for the research: policy driving evidence rather than the other way around. And the third is that too many of us are happy to parrot figures without examination, perhaps in the hope that repetition will lend them credence.
The problem with all this is that we are heading into a downward spiral of credibility. Take two recent examples. The Business Software Alliance's report last month, titled "Inside a $59 billion heist", generated a storm over its definition of piracy and its allegedly inflated figures for criminal activity. On the other hand, lawyers dismissed a detailed study by the Social Science Research Council ("Media Piracy in Emerging Economies") as being driven by an "anti-business agenda".
If there is a lesson here, it is perhaps that we all need to think smaller: sacrifice the spectacular for the useful. Figures such as $600 billion and $250 billion sound impressive, but they are meaningless as a basis for policy. Detailed analysis of small sets of companies, countries and industries may be both simpler to assemble and more useful. That is true whether we are debating the best way to fight piracy, whether to establish an EU patent system or how to value brands on the balance sheet.
All of us have an interest in assembling credible IP evidence – not just statistics, but case studies, surveys and polls. Managing IP will work harder to do this in the future, and it's reassuring to see that national and international IP offices have hired economists and are taking evidence collection more seriously. We particularly look forward to the publication of WIPO's first World Intellectual Property Report later this month, which promises to shed some well-founded light on innovation.
But we cannot leave all the work to the economists. Everyone, including IP owners and their advisers, has a responsibility to use their expertise and overcome the proposition that (to misquote Benjamin Disraeli) there are lies, damned lies and IP statistics.
James Nurton
Managing editor
Managing IP