For Most Researchers, “Practice” is Harder than “Deep”

I’ve been invited to participate in a symposium on Deep Practice, organised around the theme “Deepening knowledge and innovation through design practice.” The program includes a free talk by Professor Mark Burry on March 24th in Melbourne.

To prepare for the event, we’ve been asked to write a brief piece on the topic. This is mine:

Researchers do not have enough impact on practice. In trying to encourage Deep Practice, it is important to address this problem.

Too often, going deeply into a subject leads to abstraction. Model building is an essential part of the discovery process. Unfortunately, a focus on abstract modelling often comes at the expense of practical application.

There are multiple reasons for this. One is that there are no academic metrics for practical impact. The primary measures of research excellence include publication volume, quality of publication outlets, citations, and research grant income. All of these measures are based on research conversations taking place exclusively between academics. This encourages depth in research, but it also encourages abstraction.

The recent Excellence in Research for Australia exercise also included “Application Measures”, such as commercialisation income and patents. These measures are based on an outmoded view of how research can drive innovation.

Both types of research metrics fail to address impact on practice. If we are to encourage Deep Practice, we must encourage greater interaction with practice on the part of researchers.


3 thoughts on “For Most Researchers, “Practice” is Harder than “Deep””

  1. Tim,

    When I think about deep practice, I do not necessarily invoke abstraction. In particular, “deep” means to me putting things in context, examining precursor conditions, implications, etc. Something a lot like a well-thought-out piece of research.
    So, for me, deep practice *is* research.
    Having said that, there are two chasms that you point to in your post. One is conceptual: how can we bridge the abstract model and the practical problem? Here the word “deep” (in the sense I just proposed) comes to the rescue.
    The other is sociological: how can we connect researchers to practitioners? I think this is a very interesting problem with no obvious solution.
    You rightly point to the lack of meaningful metrics. But even if we find some, this can only help us measure how wide (or narrow) the chasm is. It does not tell us how to narrow the gap. What we need is a way (or several) to bring these two groups together.
    Perhaps the operative word here is “practice”. You have already discussed boundary-spanning, network weaving, and embedded personnel. I think this is where researchers are lacking (and lack incentive). In particular, the metrics you point to, patent filings and commercialisation income, measure the flow of information in exactly the wrong direction. All they tell us (if anything) is how much research-led practice we have achieved. What we need is a metric for practice-led research.

  2. Ralph – that’s a brilliant post – I wish I had remembered it when writing this.

    Marco – all good points. I agree with you that network-related tools that work well elsewhere are not put to good use in research. It would definitely be good to devise some way to do this.
