“India does not need any more journals, especially localized institution-based ones, if that is what you mean. We already have too many journals, most of them third rate. What we need is to look for ways by which we can convince many of these journals to close down. Instead, we should try to identify the better ones and persuade their editors and publishers to make them Open Access…”
My challenging correspondent has permitted me, in this case, to share his comments and my response on this matter of journal proliferation. I have chosen to do so as it has become a common enough concern, and one that I may otherwise seem to be exacerbating through the Public Knowledge Project’s lowering of the cost barrier for scholarly publishing by developing and distributing its open source (free) Open Journal Systems.
Unlike my correspondent, I feel that there may well be value in supporting the spread of journals and a research culture. As a result, I have grown concerned with how government agencies in South Africa, Brazil, Australia, and elsewhere are increasing the recognition and reward for researchers who publish in the "best" journals, and in the best journals alone (as defined by a ranked listing such as the ISI Web of Science). This may well be a way to convince third-rate, as well as second-rate, journals to close down, which my correspondent suggests is a desirable thing.
I do appreciate that there may be risks in enabling more faculty members in more institutions to participate in a research and review culture, and I am open to learning more about the deleterious effects, perhaps around misrepresentations of work and faulty studies. My concern, however, is that recognizing only work in the best journals may well prove a terribly inefficient approach, from a human capital perspective, as it unnecessarily restricts the development of a research and review culture. It sends a message to those who are not given a step up through mentoring, training abroad, or fortuitous collaborations, and that message is, “don’t bother.” This can only discourage, for example, the intellectual innovation and breakthroughs that often lead to the development of new journal titles; it can only reduce the public reach of research. Let me explain why this might be the case.
To officially identify the better journals, through measures such as the ISI Web of Science, effectively rewards an already existing elite of well-published scholars, who usually possess strong international connections and collaborations. While this might well stem the brain drain among these researchers, I worry that it will also further isolate these scholars within their own communities, providing a disincentive to reach out and mentor others who are not part of this journal club. It also sets apart locally published journals, reducing their ability to attract submissions and reviewers, as it reinforces the "value" of US and European journals and puts pressure on research agendas to align with the interests of those titles.
What I believe, at this point, is that what is needed are many publishing opportunities at varying levels of selectivity (rejection rates). This will allow a greater number of researchers (if only another 10–20% beyond the accredited elite) to make their way up the long ladder to the submission upload page of the best journals, raising the quality of at least the second-tier journals in the process.
For government agencies to, in effect, cut the rungs below the top of this research ladder needlessly reduces the chances of their research communities making it up the ladder at all. That is, the idea that the best value is achieved by concentrating resources and opportunities on the very best is countered by the idea that the best arises out of the surplus production of knowledge, along with a few surprises from unexpected sources; it is that surplus, or normal science, in Kuhn’s terms, that needs supporting. As well, if only the best journals are recognized, starting a new title becomes all the more difficult, even though such an act is often a common, if not necessary, step in developing new, innovative fields of research, and at least part of how science develops.
Meanwhile, the risky sort of journal proliferation that I am envisioning, and in a sense encouraging, will help spread the scope and capacity of a research and review culture, enabling within post-secondary education a greater appreciation and take-up of current research (with, of course, greater interest in the better journals). This can improve teaching more broadly, especially at the graduate level, while greater involvement of faculty in the reviewing process can raise the level of research carried out in non-research-intensive institutions, especially when the prospects of getting it published are not out of reach. The editorial standards of even so-called third-rate journals may well rise somewhat, with a greater spread of journal culture and through the use of standardizing systems for indexing and journal management (which is, of course, the area we are working on).
As for the public reach and impact of research, it is again a matter of the proliferation of research leading to more work on local topics, responding in some cases to local questions of interest. It is worth remembering that the measures of journal quality have to do with the use of research by other researchers, and not the value or take-up of research by groups outside the academy, including teachers, professionals, policy-makers and the public.
You may still ask: if the research is in a third-rate journal, why would we want it to have any public impact at all? The quality of the research is always a question, but it also happens that work on local, practical questions may not get published in the best journals for reasons unrelated to quality. To return to the case for open access, I would also argue that a greater availability of research — of varying qualities — contributes to greater interest in comparing studies and their quality in the public use of this work. This could well lead to comparing locally focused research with other related studies (with regard to consistency, reliability, etc.).
Much of the case I am making is still hypothetical and, as such, remains to be demonstrated. Yet against the weight of current initiatives to "recognize" only those who are publishing in the best journals, given how this will indeed undermine the rest, I think it worth reconsidering the value of research proliferation. It holds some promise for increasing the contribution of research and scholarship to the public good.