Picking up from many of the discussions about academic publishing that have occurred in the sciences, I recently made this suggestion on the (primarily) Australian Fibreculture email list:
Imagine a database which publishes peer-reviewed work via HTTP. The system is designed to offer structured feedback and mentoring not only to those receiving reviews, but also to those doing the reviewing.
- I submit a paper and nominate the fields and/or disciplines (from set choices) that I think it fits into
- as a possible author I agree to also be a reviewer (this is a requirement)
- where I also nominate a series of fields and/or disciplines that I have expertise in
- these fields, which represent the disciplines, would be derived from an existing metadata standard,
- and if one doesn’t exist then the project would develop one
- the system automatically allocates 3 anonymous reviewers based on the preferences made by each contributor
- my paper is anonymously reviewed by 3 others
- these reviews are well structured (template and process driven)
- and the author gets access to these reviews and
- the author is able to rate these reviews (template and process driven)
- during this process, which would support resubmission, the paper must meet minimum requirements for publication
- this minimum threshold would be determined by averaging the reviews received, weighted by the reviewers’ ranks as reviewers
- if accepted the paper is published and identified as peer reviewed
- probably only after you have completed your 3 reviews (quid pro quo)
- since reviewers are rated by authors and authors are rated by reviewers
- over time an expert peer driven system is built which can then weight participants so that
- reviewer A is known as high quality and receives a high review rank, while
- reviewer B who is not much chop receives a low review rank
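The allocation and acceptance steps above can be sketched in code. This is a minimal sketch, not a specification: the `Participant` class, the overlap-based matching, and the 3.5 cut-off are all assumptions introduced for illustration, standing in for whatever matching and threshold rules the system would actually adopt.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    expertise: set              # discipline tags this person claims expertise in
    review_rank: float = 1.0    # reputation as a reviewer, built up over time from author ratings

def allocate_reviewers(paper_fields, pool, author, n=3):
    """Pick the n reviewers whose declared expertise best overlaps the paper's nominated fields."""
    candidates = [p for p in pool if p is not author]
    candidates.sort(key=lambda p: len(p.expertise & set(paper_fields)), reverse=True)
    return candidates[:n]

def meets_threshold(scores, reviewers, minimum=3.5):
    """Average the review scores, weighted by each reviewer's rank as a reviewer."""
    total_weight = sum(r.review_rank for r in reviewers)
    weighted = sum(s * r.review_rank for s, r in zip(scores, reviewers)) / total_weight
    return weighted >= minimum
```

Because the average is weighted by reviewer rank, a score from a highly rated reviewer counts for more than the same score from a reviewer who is, as above, not much chop.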
The benefits of this are multiple.
First of all, the academic labour that constitutes a great deal of scholarly publishing is made visible, not only through the requirement of becoming a reviewer in order to publish, but through the use of straightforward and standardised feedback protocols to structure feedback. This models good practice, so it provides professional development for new academics, and may also improve the quality of feedback generally (which in the humanities can be abysmal).
Once established, there is virtually zero cost involved, apart from bandwidth (which could admittedly be considerable), as the system is more or less self-organising and self-sustaining.
The engine could be entirely scalable so that new discipline groups could be easily added.
It moves scholarly publication into a quite different temporal model: rather than being volume based, articles would appear whenever sufficient reviews had been completed and the appropriate criteria met. This would mean that in some cases publication would in fact be issue based, though in the more usual sense of timely interest, as a spate of papers might appear dealing with a specific theme because of current interest or debate.
Finally, the criteria used throughout this system would be explicit, and though qualitative they would have some quantitative index (a rank), so that not only the papers but the values that constitute ‘good’ work would also be subject to peer review and discussion. In the same manner, reviewing as a professional practice would be subject to review. Of course all indicators would be anonymous, so that my reviewer identity would be some sort of number, for instance, which only I would know.
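One way the anonymous reviewer number could work, as a hedged sketch: derive a stable pseudonym from a secret the reviewer holds plus a system-wide salt, so review rankings attach to the pseudonym while only the secret-holder can link it back to themselves. The function name and the 12-character truncation are assumptions for illustration, not a design decision.

```python
import hashlib

def reviewer_pseudonym(secret: str, system_salt: str) -> str:
    """Derive a stable, opaque reviewer ID from a private secret and a system salt.
    The same secret always yields the same ID, but the ID reveals nothing about the person."""
    return hashlib.sha256((system_salt + secret).encode()).hexdigest()[:12]
```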
This would, of course, be just the beginning. Complex visualisation strategies could be employed to represent content and the emergent clusters that developed through such a system, generating in itself a whole series of new research problems and programs. For example the system could easily build and represent citational frequency clusters, visualise link patterns, and so forth. It might even help some parts of the humanities catch up to the sciences in reimagining what constitutes knowledge production, expression, and dissemination.
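Citational frequency clusters of the kind mentioned above could start from something as simple as counting how often pairs of works are cited together across the corpus; the pair counts then feed whatever clustering or visualisation layer sits on top. A minimal sketch, assuming each paper is represented as a list of its cited works:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(papers):
    """Count how often each pair of works is cited together across a set of papers.
    `papers` is an iterable of reference lists; returns a Counter keyed by sorted pairs."""
    pairs = Counter()
    for refs in papers:
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Frequently co-cited pairs suggest emergent clusters of debate, which is exactly the sort of pattern a visualisation layer would surface.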