Media

The following videos and PR articles have been put together to provide a better understanding of the services and skills offered by RJ Hill Innovation.  Should you require further clarification of any of our media, we invite you to contact us directly.


Commentary published in The Australian in 2012

Measuring the Impact of Research – It’s Not That Difficult

The Minister for Innovation, Industry, Science and Research, Senator Kim Carr, has recently released the Review of Publicly Funded Research – Maximising the Innovation Dividend.  One of the major recommendations of this review is that a committee be established to have another look at the value and feasibility of developing a set of metrics that seek to capture the “Innovation Dividend”, or impact, of research in Australia.

This is a rather contentious matter, as the Minister scrapped impact measures from the previous Government’s Research Quality Framework (RQF) when developing his own Excellence in Research for Australia (ERA) framework.  One of the complexities is that the impact measures need to be applicable to the social sciences and humanities, as well as to the technological sciences.  Despite a great deal of analysis by the higher education sector in Australia at the time of the RQF discussions, including a comprehensive report by the Council for Humanities, Arts and the Social Sciences in 2005, and overseas studies such as that by Library House (Cambridge) in 2009, the issue is now very much on the table again as the Government seeks a return-on-investment formula to evaluate research funding and to develop its customised “compacts” with universities.

Indeed, the whole matter was discussed in great detail again at a workshop in Canberra this month, organised by the Australian Technology Network and the Group of Eight university groups.  Comparisons were made with the Research Excellence Framework (REF) in the UK, where impact has been allocated a 20 per cent weight in the assessment of research excellence.  It was also a topic of animated discussion at the Knowledge Commercialisation Australasia (KCA) meeting in Auckland last week.

It is clear that impact metrics have been around for a while, are on everyone’s lips at the moment, and are not going away anytime soon.

Personally, I can’t see why everyone is attaching such a high degree of difficulty to the measurement of social, economic and environmental impact – it’s really not that hard.  One just needs to move away from the rigid concept of ‘metrics’ to a more generous focus on ‘assessment’.  It also requires acknowledgement that the impact of research really only comes into play when one moves away from the outputs of the research to the point where they have a clear ‘effect’ on individuals, society or firms.

The implication of this is that some elements of the impact will necessarily be subjective (just like the journal ratings in ERA), some will take a long time to be fully realised (but time to publication can also take many years), we may need to use case studies in order to describe complex examples, end-users may have to be involved in the assessment, and we will probably have to look backwards as well as forwards in time (the UK model goes back 17 years to 1994!).

Nevertheless, there are several objective components of impact that immediately spring to mind, such as: patents granted (where the objectivity and depth of analysis on novelty and inventiveness leaves journal paper refereeing for dead); external research funding received (here, the market is assigning a value to the work); the amount of repeat funding from external sources (an excellent measure of value delivered); licences granted (indicating that the work has reached the market); royalty income received (which is directly related to sales achieved in the real world of commerce); sustainable employment and market value created from spin-out companies; and, perhaps, operating and capital benefits specifically achieved by business (when not withheld due to commercial sensitivity).

In the social sciences and humanities one can use measures such as government references to the work in shaping public policy and legislation (for example, the US Government’s acknowledgement of the effect on immigration policy of the work of the Centre of Policy Studies at Monash University), or the number of books sold (for example, the more than 800,000 copies of the CSIRO Total Wellbeing Diet that have been bought).

I also see great value in using an assessment of the “expected impact if the research and its translation are successful”.  This removes from the research assessment the effect of impediments to the ultimate delivery of impact that are unrelated to the relevance or quality of the research per se, including, for example, lack of investment capital, widespread public opposition (for example, in the adoption of genetically modified crops or nuclear energy), collapse of the anticipated market (as in the case of the abandonment of an Australian magnesium metal production industry in the late 1990s), or the emergence of a major competitor.  These matters are beyond the control of the researchers and should not diminish the scale and scope of the impact that would have been achieved under different circumstances of time and place.

Of course, any assessment of expected or potential impact, since it has not yet been achieved, must include a qualification for practicality, degree of difficulty, and likelihood of success.  This would avoid the complete dominance of research proposals that would have massive impact but which are very, very unlikely to succeed and may require enormous financial investment and timescales, such as an increase in the efficiency of solar cells to 50 per cent.
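One simple way to picture such a qualification (a minimal illustrative sketch assuming a notional 0–100 potential-impact scale, not a formula drawn from any of the frameworks discussed here) is to weight the potential impact of a proposal by its estimated likelihood of success:

\[
\text{risk-adjusted impact} = \text{potential impact} \times \text{likelihood of success}
\]

On this sketch, a proposal with a potential impact score of 100 but only a 1 per cent likelihood of success would score 1, while a more modest proposal scoring 20 with a 50 per cent likelihood would score 10, so the long shot no longer dominates the assessment.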

It has troubled me that there appears to be such strong resistance to the assessment of impact and its formal inclusion in measures of research; even in the case of the UK REF, the impact component was relegated to only 20 per cent of the overall measure of research quality or excellence.  Could it be that, at some level, the inclusion of an impact parameter challenges the independence and “purity” of the research?  Does the inclusion of an end user somehow degrade the status of the research and cause it to be irretrievably sullied in some way?  Is it seen as the thin edge of the wedge of greater social accountability of research?  Does it reflect the deep-seated culture in academe that payment for research, or “research for commission”, is of an inherently lower quality than pure, undirected research?

I hope that the new Innovation Dividend Committee is given the chance to do its job through the inclusion of some of these broader and more flexible parameters, free of the entrenched cultural constraints of research.


Commentary published in The Age, 2 August 2011

Summary of the Major Innovation Capability & Infrastructure in the SE of Melbourne
