Open Access “Sting” Reveals Deception, Missed Opportunities

On Thursday, Science journalist John Bohannon (some of you will recognize his work organizing the annual “Dance Your PhD” Contest) released the findings of the largest study of the peer review systems of open access journals, and it didn’t look good: The majority of publishers tested in his study accepted a bogus scientific paper, most with little (if any) peer review.

Critics of the investigation were quick to point out that the experiment lacked a control group–a group of subscription-based journals against which the open access journals could be compared. Without a control, it is impossible to say whether open access journals, as a group, do a worse job vetting the scientific literature than journals operating under a subscription-access model. The study does reveal, however, that many of the new publishers conduct peer review badly, some deceptively, and that there is a geographic pattern to where new open access publishers are located.

Some critics assailed Bohannon (and Science) for undertaking such a study in the first place, accusing Bohannon of “drawing the flagrantly unsupported concluding [sic] that open-access publishing is flawed,” using the opportunity to advance an equally unsubstantiated conclusion, that “peer review is a joke,” or arguing that by publishing the piece, Science failed to properly conduct its own peer review (ignoring the fact that Bohannon’s article was a piece of investigative journalism published in a special News section). Science does indeed lend the article credibility, just as Nature, The Lancet, and Physics Today convey credibility upon news reported by their journalists.

While I agree that Bohannon missed a great opportunity to include a control group in his study, this is not grounds to dismiss his investigation completely. Previous attempts to unearth unscrupulous publishers or a flawed peer review process provided little more than anecdotal evidence. Bohannon approached and documented his investigation systematically, and while the lack of a control group clearly limits what can be concluded from his study, much can be learned.

First, there is evidence that a large number of open access publishers are willfully deceiving readers and authors into believing that articles published in their journals passed through a peer review process–or any review, for that matter. It is simply not enough for a journal to declare that it abides by a rigorous review process.

Similarly, the results show that neither the Directory of Open Access Journals (DOAJ) nor Beall’s List is accurate in predicting which journals are likely to provide peer review. In spite of the DOAJ’s editorial and advisory board, nearly half (45%) of the listed journals that received the bogus manuscript accepted it for publication. And while Bohannon reports that Beall was good at spotting publishers with poor quality control (82% of the publishers on his list accepted the manuscript), that means Beall is falsely accusing nearly one in five of being a “potential, possible, or probable predatory scholarly open access publisher” on appearances alone.

It would be unfair to conclude from Bohannon’s work that open access publishers, as a class, are untrustworthy and provide little (or no) quality assurance through peer review–only that there are a lot of them, that their numbers are growing very quickly, and that many willfully deceive authors and readers with false promises, descriptions copied verbatim from successful journals, and fake contact information. Bohannon refers to this new landscape as an “emerging Wild West in academic publishing.”

New frontiers eventually become tame when groups of civic-minded individuals get together to develop laws and norms of good conduct. In the publishing world, there are organizations like COPE (Committee on Publication Ethics) and OASPA (Open Access Scholarly Publishers Association), both of which include, as members, several of the deceptive publishers revealed in Bohannon’s investigation. The real test will be to see how these membership organizations react to the investigation. If they are to uphold their credibility, they will need to censure and delist the offenders until those publishers can provide evidence that they are abiding by the guidelines of the organization. This means stripping these publishers of the logos many display proudly on their web pages.

It also means that the DOAJ, if it is to remain a directory of open access journals that uses a “quality control system to guarantee the content,” must provide stricter guidelines and require evidence that publishers are doing what they purport to be doing. The DOAJ may also require periodic independent auditing to verify a publisher’s claims. It is simply not enough to take promises of quality control on word alone. Finally, it means that librarian Jeffrey Beall should reconsider listing publishers on his “predatory” list until he has evidence of wrongdoing. Being mislabeled as a “potential, possible, or probable predatory publisher” on circumstantial evidence alone is like the sheriff of a Wild West town throwing a cowboy into jail just ‘cuz he’s a little funny lookin.’

Civility requires due process.


