A fundamental business problem right now is caused by the rapidly decreasing costs of storing and sharing data. The combination of these price drops with the increasing ease of sharing digital information has disrupted several industries – music, journalism, and book publishing among them. So everyone with a business model based on information being scarce and expensive is doomed, right?
Maybe not. There are some useful lessons to learn from one information-based industry that so far has avoided major turbulence – scientific publishing. Michael Clarke has written an excellent post addressing this called Why Hasn’t Scientific Publishing Been Disrupted Already? Here is how he sets up the issue:
When Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research. Put another way, the Web was designed to disrupt scientific publishing. It was not designed to disrupt bookstores, telecommunications, matchmaking services, newspapers, pornography, stock trading, music distribution, or a great many other industries.
And yet it has.
The one thing that one could have reasonably predicted in 1991, however, was that scientific communication—and the publishing industry that supports the dissemination of scientific research—would radically change over the next couple decades.
And yet it has not.
In his explanation of why scientific publishing has not yet been disrupted, he outlines three key roles for journals.
- Validation: Peer review before publication is designed to validate the findings reported in each article. There are some obvious problems with the peer review process, but in general it does a good job of ensuring that published research meets at least minimal standards of rigor in terms of research design.
- Filtration: As scientific journals proliferate, it is increasingly difficult to keep track of what is going on in any one particular field. Specialist journals allow researchers to keep track of what is happening by aggregating research findings concentrated in one particular area, which makes it a bit easier to cope with the amount of data being produced.
- Designation: Researchers are generally judged by the quality of their output. One way that this is measured is by the quantity of their publications, and by the quality of the journals in which they publish.
Clarke argues that new technology does not disrupt any of these three functions, and that consequently scientific publishing will continue to function much as it does now.
I’ve seen all sides of this process – I read a lot of journal articles, I’ve written a few, and I’ve processed a lot as Managing Editor of Innovation: Management, Policy & Practice. Overall, I’m not quite as confident as Clarke is concerning the robustness of this business model. On the other hand, there are reasons for optimism.
I’ve talked before about how successful business models in information-based industries must be built around three functions: aggregate, filter and connect. You can see from the brief discussion of Clarke’s post how scientific publishers are doing all three things. In particular, there is a strong filtering element in all three of his key functions. Validation is meant to filter out sub-standard research, Filtration is meant to filter out irrelevant research findings, and Designation is meant to filter out researchers that perform poorly. Journals also do some aggregating, and referencing is pure connecting – there is a common thread through all of the lines of scientific enquiry. This suggests to me that scientific publishing may well have a future that looks somewhat similar to its present.
This might all seem fairly boring to someone who isn’t in the middle of performing and publishing research, but I think there is an important general lesson here. I would argue that the main reason that the scientific publication business model has survived the internet relatively intact is that scientific publishers have explicitly recognised that their aggregating, filtering and connecting functions are central to their business model. Several of the industries that have been severely disrupted by the internet have not understood this – with the music industry being a prime example of that failure.
In order to survive disruptive change in your industry, you must first know how you add value for your customers. The value in information-based industries is driven by firms that successfully aggregate, filter and connect. If you understand this, you have better chances of survival. If you fail to understand this, you’ll end up protecting the wrong parts of your business model – and that’s fatal.
Another reason why the structure of the academic publishing game might have been more resilient: it has an unusual revenue model. Although the primary consumers of these articles are usually other researchers, the bill for subscriptions is paid by a central library. This creates a significant disconnect in the performance feedback process. If you buy the March and Simon argument, the performance feedback process is the key mechanism driving organisational adaptation. Such a disconnect could explain why this industry has lagged the others, and why such strange practices as printing journals still exist in this day and age!
With this said, I think there are a range of productivity reasons that are now pushing us much closer to a point where we’ll start seeing this industry get disrupted. Just look at the rise of academic blogs as a means of distributing research output and ideas!
There have been some huge changes in the industry already (which Clarke outlines very effectively). In the big picture, they’re probably mostly incremental rather than radical.
Not sure where blogs fit in. Ultimately, do they drive people to published articles? Orgtheory certainly seems to. A blog is also a way of establishing priority, though it’s not necessarily widely accepted as such. But if push came to shove, you could certainly use a blog to demonstrate a history of working on a line of thought, if that’s what you’re doing.