The (Fake) Academic Publishing Game

Last month Vox reported on a “scientific paper” written by Maggie Simpson, et al., being accepted by two scientific journals. The paper, a spoof generated by engineer Alex Smolyanitsky using a random text generator, was allegedly peer reviewed and accepted for publication by two of the many for-profit open access science journals that have sprung up over the past decade. The article (here) provides a nice overview of how rampant the trolling by fake scientific journals has become and some of the economic incentives behind them.

If you’re in academia, you probably receive email solicitations from these predatory journals regularly; I delete a handful of them every day. I had always assumed they were bogus, but the Vox article also provided a link to a useful listing of suspected predatory publishers compiled by Jeffrey Beall. Sure enough, my most recent email was from one of the publishers on this list.

While the article focuses on the problems these journals create for trust in scientific publications, for the credibility of genuinely peer-reviewed research, and for the evaluation of a given scholar’s publication record, it fails to mention the complementary cause of the problem: an obsession with journal publication counts in evaluating faculty for tenure and promotion. A market for bogus journal publications requires more than a supply of publishers willing to fake peer review, publish papers, and collect “publication fees” from authors to generate profits. It also requires demand: scholars willing to submit their papers and pay those fees, something they do only to pad their resumes.

Scholarly research and productivity need to be measured on some basis, no doubt. Unfortunately, the transaction costs of substantive evaluation are non-trivial, and simply counting publications is a lot easier. Having scholars report impact factors (a citation-based metric of a journal’s influence in academic publishing circles) for the journals in which they publish helps establish that the outlets are credible, but impact factors are not perfect measures, are not always available (especially for newer journals), and are not always comparable across journals (particularly if one’s research sits in an important but more specialized academic niche). And given the profitability of the predatory publishing industry, there is now a supporting industry that generates and sells fake impact factors to predatory journals to burnish their credibility, making impact factors even harder to rely on.
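For readers who have not encountered the metric, the standard two-year impact factor (the version reported in Journal Citation Reports) is computed roughly as:

impact factor for year Y = (citations received in year Y by articles the journal published in years Y-1 and Y-2) / (number of citable articles the journal published in years Y-1 and Y-2)

So a journal whose recent articles are cited, on average, about twice within two years of publication earns an impact factor of roughly 2.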

Organizations like the Social Science Research Network and ResearchGate are creating their own evaluation metrics, based on things like downloads and citations among the papers in their own collections. Some criticize these measures because many of the papers involved are working papers that have not gone through formal peer review. The question is which is the better barometer of quality for an individual paper: the judgment of a handful of journal reviewers and editors, or the open market of ideas, downloads, and citations captured by sources like SSRN, Google Scholar, or ResearchGate?