
Duration and quality of the peer review process: the author's perspective

Abstract

To gain insight into the duration and quality of the scientific peer review process, we analyzed data from 3500 review experiences submitted by authors to the SciRev.sc website. Aspects studied are the duration of the first review round, total review duration, immediate rejection time, the number, quality, and difficulty of referee reports, the time it takes authors to revise and resubmit their manuscript, and the overall quality of the experience. We find clear differences in these aspects between scientific fields, with Medicine, Public health, and Natural sciences showing the shortest durations and Mathematics and Computer sciences, Social sciences, Economics and Business, and Humanities the longest. One third of journals take more than 2 weeks for an immediate (desk) rejection and one sixth even more than 4 weeks. This suggests that, besides the time reviewers take, inefficient editorial processes also play an important role. As might be expected, shorter peer review processes and those of accepted papers are rated more positively by authors. More surprising is that peer review processes in the fields linked to long processes are rated highest and those in the fields linked to short processes lowest. Hence authors' satisfaction is apparently influenced by their expectations regarding what is common in their field. Qualitative information provided by the authors indicates that editors can raise author satisfaction by taking an independent position vis-à-vis reviewers and by communicating well with authors.

Introduction

The scientific peer review process is one of the weakest links in the process of scientific knowledge production. While it is possible to review a paper in less than a day (Ware and Mabe 2015), it may often lie untouched on reviewers' desks and in editorial offices for extended periods before it is evaluated. This means a substantial loss of time for the scientific process, which has otherwise become much more efficient in the last decades. There are even indications that the duration of the peer review process may have increased in the last decades (Ellison 2002a; Azar 2007). Hence there are good reasons for a critical look at this process.

To gain insight into the duration and other key aspects of the peer review process, we analyze data from 3500 review experiences submitted by authors to the SciRev.sc website (www.scirev.sc). On this website, researchers can share their experiences with the peer review process regarding manuscripts they have submitted to scientific journals. This information can afterwards be used by their colleagues when selecting a journal to submit their work. Information is available on several important aspects of the peer review process, including the duration of the first review round, total review duration, the time editors take to inform authors about an immediate (desk) rejection of a manuscript, the number and quality of referee reports, the time authors take to revise and resubmit their manuscript, and the overall quality of the process as experienced by the authors.

Duration of the first review round—or first response time (Azar 2007)—is probably most important for scientific authors, as it determines how much time may be lost if the outcome is negative (Solomon and Björk 2012). The number of review rounds and the time journals take to manage these rounds are also important, as these aspects significantly affect the time that elapses until the author(s) are informed of the final editorial decision. Another important duration indicator is the immediate (desk) rejection time, i.e., the time taken by an editor to inform authors that the manuscript is not considered fit for the journal. If this only takes a few days, authors can send the manuscript to another journal without much time loss. Yet, quite often, editors may take weeks or even months for a desk rejection. This seems unacceptable and may point to a less than efficient organization of the editorial process. If editors take much time to inform authors that they are not interested in the manuscript, they probably will also be rather slow in other aspects of manuscript handling, such as assigning reviewers and processing review reports. The immediate rejection time is thus a major indicator of a journal's performance.

Besides the duration of the different steps of the peer review process, total publication time is also influenced by revision time, i.e., the time taken by authors to revise and resubmit the manuscript. This factor is therefore also included in our analysis. It is influenced by the time authors are able and prepared to spend on the revision of the manuscript and by the difficulty of the revisions required. In this connection, it is important also to include aspects of the referee reports. Constructive comments by reviewers may substantially contribute to the quality of scientific papers, while low quality and contradictory referee reports may be a major source of frustration among authors (Nicholas et al. 2015). In the SciRev questionnaire, authors are asked about the number of reports they received and how they experienced the quality of the reports and the difficulty of the changes they were required to make.

Besides the measurable factors, such as the duration of the different phases of the peer review process and the number of referee reports, there are also aspects of the process that are more difficult to quantify. Does the editor take questions of the author(s) seriously? Is a reasonable motivation for a (desk) rejection given? Does the editor take an independent position vis-à-vis reviewers when making important decisions? Does the editor advise authors on the importance of specific reviewer comments? Together these aspects affect the author's experience with the journal and to a certain extent may turn a rejection into a good experience or an acceptance into a bad one. We therefore also analyze the authors' overall evaluation scores given to the journals for their peer review performance as well as the motivations given by authors for their scores. Because an author's review experience is influenced by many factors (e.g., the outcome of the review process, the impact factor of the journal, and differences in expectations between scientific fields), we study the overall scores in a multivariate way and also analyze the authors' scoring motivations.

Background

There are around 28,000 scientific journals worldwide, which publish 2.5 million scientific articles annually, produced by a research community of 6–9 million scientists (Ware and Mabe 2015; Jinha 2010; Björk et al. 2009; Plume and Van Weijen 2014; Etkin 2014). Many of the published articles have been rejected at least once before they reached the editor's desk of the journal in which they were published. This means that each year many more manuscripts pass through peer review than are published.

Although there is some variation among journals, the peer review process typically starts with a first evaluation of the manuscript by the editor, followed by a decision to accept the manuscript for peer review or immediately (desk) reject it. If desk rejected, the corresponding author receives a message from the editor that the manuscript is considered not fit for publication in the journal, with or without a brief motivation given for the rejection. A manuscript that has passed this first stage will then be sent out for peer review, whereby experts in the field (peers of the authors) evaluate the manuscript and write a referee report. On the basis of these reports, the editor decides either to reject the manuscript, gives the author(s) an opportunity to revise and resubmit it, or—in exceptional cases—directly accepts it. In case of a revise-and-resubmit, several additional review rounds may follow before a final decision regarding acceptance or rejection is made. If the process takes exceptionally long, the author may decide to withdraw the manuscript and submit it to another journal.

Process too slow

Given the fact that reviewers are often overloaded with academic work, that they are generally not paid for their review work, and that reviews are mostly anonymous, there are few incentives to give high priority to this work (Azar 2007; Moizer 2009). Hence, while the actual time it takes to write a referee report may vary between a few hours and a day (Ware and Mabe 2015), reviewers tend to take several weeks to several months to submit their reports. Apart from the time reviewers take to deliver their reports, the total manuscript processing time of journals is influenced by the duration of the various stages of manuscript handling at editorial offices. Given that these offices often have limited resources and many editors do this work alongside busy academic careers, waiting times at the different stages are often (much) longer than strictly necessary.

It is therefore not surprising that one of the most important criticisms of the peer review system is that it is much too slow (Lotriet 2012). There are even indications that it has been getting slower in recent decades (Alberts et al. 2008). Ellison (2002a, 2002b) documents a slowdown since the 1970s in submission-acceptance duration in economics and suggests a similar slowdown in other fields. A major cause for this is that authors are required to revise their manuscripts more often and more extensively (Ellison 2002a, 2002b; Azar 2007; Cherkashin et al. 2009; Björk and Solomon 2013). According to Ellison (2002a), review rounds are of quite recent date. In the early 1950s, 'nearly all submissions were either accepted or rejected: the noncommittal "revise-and-resubmit" was reserved for exceptional cases (p. 948).'

From the author's perspective, first response time is particularly important, i.e., the time that elapses between submission and the first response from the editor, be it rejection, acceptance, or a revise-and-resubmit. First response time is important because it often delays the publication of an article more than once, as many manuscripts are rejected once or several times before acceptance (Azar 2007; Etkin 2014; Pautasso and Schäfer 2010). There are indications that the duration of the first review round has increased, at least in some fields. Azar (2007) finds that first response time for economics journals "grew from about 2 months circa 1960 to about 3–6 months in the early 2000s (Azar 2007, p. 182)". However, as Azar points out, a longer first response time is in itself not necessarily negative. Economics manuscripts have become longer over time and have more mathematical content, which means it is more time-consuming to evaluate them.

Field differences

Durations vary substantially between scientific fields and even within the same broader discipline. Kareiva et al. (2002), for instance, studying conservation biology, found that the process from submission to publication took on average 572 days for conservation and applied ecology journals compared to 249 days for genetics and evolution journals.

With respect to the number of times the average manuscript is rejected before it reaches the journal that will publish it, Azar (2004) arrives at a figure of three to six rejections. Similar to the increase in first response time, there also seems to be an increase in the number of rejections prior to publication. Thomson Reuters (in Ware and Mabe 2015, p. 51) reports an increase in the rejection rate from 59 to 63% between 2005 and 2010. Regarding the desk rejection rate, Lewin (2014) reports an increase of up to three times for some journals. Lewin attributes this to increased publication pressure, whereby "governments in countries outside of the USA engage in a process of quantifying the scholarship of scientists in their countries as a way of rationalizing the allocation of national resources to institutions of higher learning in their countries. The unsurprising consequence has been a dramatic increase in submissions to the top journals by scholars from emerging economies as well as from European countries" (Lewin 2014, p. 169).

Editors are also worried about these developments. 'Among journal editors there are growing concerns that the quality—and duration—of the review process is being negatively affected as "referees are stretched thin by other professional commitments". This often leads to "challenges in finding sufficient numbers of reviewers in a timely manner" (Lotriet 2012, p. 27).' Once reviewers have been found, other problems may emerge, such as poor reviewer agreement on submissions (Peters and Ceci 1982; Onitilo et al. 2014) or ethical problems (Resnik et al. 2008). Reviewers who make contradictory comments are a major source of frustration for authors as well as editors. Regarding unethical practices, Resnik et al. (2008) mention (in order of frequency) reviewers asking authors to include 'unnecessary references to their publication(s), personal attacks, reviewers delaying publication to publish a paper on the same topic, breach of confidentiality and using ideas, data, or methods without permission (p. 305)'.

Ways to improve

Several suggestions have been made to make it more attractive for scientists to act as reviewers. These include free subscription to journal content, annual acknowledgement on the journal's website, more feedback about the outcome of the submission and the quality of the review, appointment of reviewers to the journal's editorial board, and financial incentives (Tite and Schroter 2007). A noteworthy initiative in this respect is Publons (www.publons.com), a website where reviewers can upload information on anonymous review work they performed. This information is then verified with the journals and can subsequently be used as 'proof' of the peer review work done by the reviewer. This initiative provides a solution to the recognition problem. However, it does not help solve the problems of duration and quality, as neither the time reviewers spend writing the reports nor the quality of their reports is registered.

As to financial incentives, Thompson et al. (2010) found a statistically significant reduction in review duration when referees were paid for their efforts. 'Median first response time was reduced from 90 to 70 days, a 22% reduction in the presence of payments. With payments, only 1% of first response times exceeded six months; without payments, 16% exceeded 6 months (Thompson et al. 2010, p. 678).' Although it was not possible to compare the quality of referee reports submitted with or without payment, they thought it likely that if the length of referee reports was an indication of quality, payment might even have led to an increase in referee reports' quality: "[r]eferees did not dash off shorter reports to meet the deadline for payment; in fact, reports were statistically significantly longer with payments than they were prior to payments" (Thompson et al. 2010, p. 690).

An earlier study by Hamermesh (1994) of seven journals in 1989 also found an increase in timely referee reports for journals offering payments. However, since "some empirical evidence suggests that when voluntary economic activities—giving blood, volunteering to work for public or private institutions, and collecting donations for charity, for example—are rewarded with relatively low payment levels, low-paid performance is inferior to voluntary performance" (Thompson et al. 2010, p. 680), most likely reviewers would have to receive a realistic rather than a symbolic payment for their efforts.

It seems natural to expect that authors of papers that have been accepted are happier with the review experience when they look back at it. Authors tend to suffer from attributional bias. If their paper is rejected, many authors tend to blame this on situational factors, such as incompetent reviewers or uninterested editors, but in case of acceptance tend to attribute this to their own expertise and competence in writing high-quality papers (Garcia et al. 2016). The difference in ratings between authors of accepted and rejected manuscripts might also be greater, the longer the duration of the peer review process. The more time and energy authors invest in a manuscript, the more likely it is they will be disappointed by a rejection, and even more so if rejection follows after several review rounds.

Methods

The data used in this paper are based on 3500 review experiences, reported by authors between 2013 and 2016 by filling in a questionnaire on the SciRev.sc website. The SciRev questionnaire contains questions on the duration of the different phases of the peer review process of research articles, on the number, quality, and difficulty of the received referee reports, on the outcome of the peer review process, and on whether the manuscript had previously been submitted to another journal. It also asks authors to provide an overall rating of the review experience and gives them the opportunity to motivate their rating. Research articles may include any paper submitted to a scientific journal (regular research papers, review articles, rapid communications, research notes, etc.), provided it has been subjected to peer review.

Authors who submitted a review to SciRev.sc were asked about their affiliation, which was checked by asking them for their institutional email address and sending a confirmation link to that address. Authors who registered with a noninstitutional email address, because for various reasons they could not provide an institutional one (e.g., job change or working in a non-Western institute without good ICT services), were asked for additional information to check their identity. Reviews were only accepted if the author's identity was confirmed. Reviews of accepted papers were additionally checked at the journals' websites; these reviews were only included if the author had indeed published a paper in the journal during the period mentioned.

Although the data are not based on a representative sample of author experiences, they are interesting because they paint a broad picture of the range of author experiences from different fields of study. Each submitted review represents the experience of an author and is important as such. If other authors report similar experiences, this would point toward a specific pattern. And if the resulting patterns differ among scientific fields, this would indicate that the prevalence of specific experiences differs among those fields.

There is little reason to expect authors from different fields to be fundamentally different in the way they experience the different aspects of the peer review process. However, there might be different expectations between fields about review duration and hence about what is considered a long process. Besides field differences, experiences may also be colored by the process outcome and the journal's impact factor. We therefore split the figures presented in this paper according to scientific field and process outcome (accepted/rejected) and also study relationships with the journal's impact factor. Information on the impact factor was derived from the journal's website and other Internet sources. This information could be found for 3126 reviews. In our analysis, we use the natural logarithm of the impact factor, as journals are concentrated in the lower ranges of the impact factor.

Of the 3500 review experiences, 572 (16.3%) referred to manuscripts that were rejected without being sent to reviewers, 693 (19.8%) that were rejected after the first review round, 2128 (60.8%) that were accepted after one or more review rounds, 43 (1.2%) that were immediately accepted without a peer review process, and 64 (1.8%) that were withdrawn by the author. Given the relatively small number of reported cases of manuscripts that were withdrawn or immediately accepted, these were not included in our analysis. We also removed some extreme cases regarding immediate rejection time (>62 days; 53 cases), duration of first review round and total review duration (>100 weeks; 15 cases), and duration of revision after the first review round (>300 days; 6 cases). The extreme cases were not concentrated in specific fields.
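As a minimal illustration of this case selection, the sketch below applies the same exclusion rules with pandas. The file name and column names (outcome, immediate_rejection_days, first_round_weeks, total_review_weeks, revision_days) are hypothetical and not taken from the SciRev data.

# Hypothetical sketch of the case selection described above; all names are assumed.
import pandas as pd

reviews = pd.read_csv("scirev_reviews.csv")  # assumed export of the 3500 review experiences

# Drop withdrawn and immediately accepted manuscripts
reviews = reviews[~reviews["outcome"].isin(["withdrawn", "immediately_accepted"])]

# Remove the extreme cases, using the thresholds reported in the text
# (comparisons with NaN are False, so manuscripts without, e.g., a desk
# rejection are kept)
reviews = reviews[~(reviews["immediate_rejection_days"] > 62)]
reviews = reviews[~(reviews["first_round_weeks"] > 100)]
reviews = reviews[~(reviews["total_review_weeks"] > 100)]
reviews = reviews[~(reviews["revision_days"] > 300)]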

Information on the various aspects of the peer review process is presented for all review experiences, separately for accepted and rejected papers and for ten major scientific fields: (1) General journals (n = 172), (2) Natural sciences (n = 1408), (3) Engineering (including applied science; n = 518), (4) Mathematics and Computer sciences (n = 375), (5) Medicine (n = 640), (6) Public health (including health professions; n = 348), (7) Psychology (including education; n = 355), (8) Economics and Business (including law; n = 318), (9) Social sciences (n = 553), and (10) Humanities (n = 178). Given that a substantial number of journals have a broad scope and therefore include more than one scientific field, the sum of the reviews in the different fields is higher than the total number of reviews.

At the end of the SciRev questionnaire, authors are asked to give an overall rating of their review experience. Because this experience is influenced by many aspects of the peer review process, besides providing descriptive figures, a multivariate regression analysis is also performed. In this analysis, the variation in the rating is explained on the basis of relevant characteristics of the process, i.e., whether or not the paper was accepted or rejected, the duration of the first review round, the number of review rounds, the number of referee reports received in the first review round, the journal's impact factor, whether the author is from an English-speaking country, and the scientific field of the journal. We present both direct effects of these factors and significant interactions between them. For journals covering several scientific fields, we only included the journal's main field in this analysis.
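A minimal sketch of this kind of model, assuming the cleaned reviews DataFrame from the sketch above and hypothetical column names (rating, accepted, first_round_days, n_rounds, n_reports, english_country, log_impact_factor, field); C(field, Sum) applies the deviation (effects) coding described further below. This is an illustration, not the authors' exact specification.

# Illustrative regression sketch with statsmodels; column names are assumed.
import statsmodels.formula.api as smf

# Model 1: main effects of the process characteristics on the overall rating (0-5)
model1 = smf.ols(
    "rating ~ accepted + first_round_days + n_rounds + n_reports"
    " + english_country + log_impact_factor + C(field, Sum)",
    data=reviews,
).fit()

# Model 2: the same variables plus example interactions, e.g. acceptance
# moderating the effects of first-round duration and number of rounds
model2 = smf.ols(
    "rating ~ accepted * first_round_days + accepted * n_rounds + n_reports"
    " + english_country + log_impact_factor + C(field, Sum)",
    data=reviews,
).fit()

print(model1.summary())
print(model2.summary())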

In the multivariate analysis, we excluded reviews of papers that were withdrawn, immediately accepted, or desk rejected. Among the remaining 2821 reviews, there were some missing values. Five reviews for which the duration of the first review round was missing were given the average duration of the first review round. Two reviews for which the language of the reviewer was missing were included in the non-English (largest) category. For 289 cases the impact factor was missing. These missing values were addressed using the dummy variable adjustment procedure [imputing the mean and including a dummy indicating the missing cases (cf. Allison 2001)]. Results of the analysis with missing values dealt with in this way were substantially the same as those with all missing cases removed from the data.
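The dummy-variable adjustment can be illustrated in a few lines of pandas; log_if is a hypothetical column name for the (log) impact factor, and the values shown are made up.

# Sketch of the dummy-variable adjustment for missing impact factors (cf. Allison 2001).
import numpy as np
import pandas as pd

df = pd.DataFrame({"log_if": [0.9, np.nan, 1.7, np.nan, 2.3]})

df["log_if_missing"] = df["log_if"].isna().astype(int)   # 1 = impact factor unknown
df["log_if"] = df["log_if"].fillna(df["log_if"].mean())  # impute the mean of observed values

# Both 'log_if' and 'log_if_missing' then enter the regression as predictors.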

The overall rating of the review experience is measured on a scale running from 0 (very bad) to 5 (excellent). The outcome of the peer review process is a dummy indicating whether the paper was accepted (1) or rejected (0). The duration of the first review round is measured in days. To indicate language background, we included a dummy indicating whether (1) or not (0) the organization where the author works is located in a country where English is the main language used in daily life (i.e., Great Britain, Ireland, USA, Canada, Australia, New Zealand, South Africa, and Biot). Of the 3500 reviews, 2516 were submitted by authors from non-English-speaking countries. Regarding the distribution of reviews over continents, 557 were obtained from Canada and the USA, 96 from Latin America and the Caribbean, 2099 from Europe, 470 from Asia and the Pacific, 190 from the Middle East, 83 from Africa, and for 5 the continent is not known. For the dummies for scientific field, deviation from mean (effects) coding is used. The dummies therefore indicate to what extent the overall rating within the field is higher or lower than the mean of the fields (Hardy 1993).
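A short sketch of deviation-from-mean (effects) coding, assuming a pandas Series of field labels; the reference category and column names are chosen arbitrarily for illustration. In a formula interface the same coding is obtained with C(field, Sum), as in the regression sketch above.

# Sketch of effects coding: coefficients express deviation from the mean of the fields.
import pandas as pd

fields = pd.Series(["Medicine", "Social sciences", "Humanities", "Medicine"])
dummies = pd.get_dummies(fields, prefix="field").astype(float)

# Drop one reference category and code its rows as -1 in the remaining columns,
# so each coefficient measures a field's deviation from the unweighted field mean.
reference = "field_Medicine"
effects = dummies.drop(columns=reference)
effects.loc[dummies[reference] == 1.0, :] = -1.0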

After rating the overall review experience, authors are given the opportunity to motivate their rating in a few words or sentences. These motivations are published online with the reviews, if permission is given by the author. They paint a sometimes revealing picture of what researchers experience in their attempts to get their work published. To supplement the figures presented in this paper with qualitative information, we analyzed the 1879 motivations available in the 3500 reviews studied.

Results

First response time

For authors, the duration of the first review round, or first response time, is probably the factor they are most interested in, as it takes up a substantial part of the total manuscript evaluation time and to a large extent determines how much time is lost if the outcome is negative. First response time includes the time taken by the journal for a first evaluation of the manuscript, the time needed to find reviewers, the time the latter require to do their work, and the time the editor then requires to evaluate the manuscript in light of the referee reports and to inform authors about the decision.

As can be seen in Table 1, the reported first response time in the SciRev data is on average 13 weeks and varies considerably among scientific fields. It took 8–9 weeks in Medicine and Public health related journals, 11 weeks in Natural sciences and General journals, 14 in Psychology, and 16–18 weeks in Social sciences, Humanities, Mathematics and Computer sciences, and Economics and Business. These figures differ between accepted and rejected manuscripts, with the first response time of rejected manuscripts taking, on average, 4 weeks longer.

Table 1 First response time


While writing a peer review may take between 4 and 8 hours, in only 19% of all reported cases were authors informed about the outcome in less than a month. In about one third of the cases (32%) authors had to wait 3 months or more, and in 10% of the cases even more than 6 months before being informed. Duration differs widely between scientific fields. In Social sciences and Humanities, only 7–8% of the authors were informed within 1 month versus 25% in Natural sciences and 27–28% in Medicine and Public health. In Economics and Business and Mathematics and Computer sciences over one sixth (18%) of authors had to wait 6 months or longer.

It is, however, unclear to what extent the long duration of the first review round is the result of the peer review process as such and to what extent it is due to (in)efficient manuscript handling at editorial offices. Given that immediate rejection times are often long (see Table 3 and its discussion below), it seems that inefficiencies at editorial offices also play an important role. The finding that in Medicine and Public health—where professionalization of journals is relatively high—first response times are the shortest also points in this direction.

To test this idea further, we looked at the relationship between the journal's impact factor and first response time. As highly ranked journals generally have more resources at their disposal and thus probably better organized editorial offices, and as reviewers are more motivated to review for those journals, we expected to find a negative relationship. Pearson correlations between first response time and impact factor indeed confirm this expectation. These correlations are significantly negative for all scientific fields combined (r = −0.29) as well as for all scientific fields separately, with General journals (r = −0.51), Mathematics and Computer sciences (r = −0.27), and Natural sciences (r = −0.26) having the strongest correlations. The only exception was Humanities, where no significant correlation between first response time and impact factor was found. This might be because this field traditionally values publishing books more than publishing in journals (Ware and Mabe 2015).
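The correlation check can be sketched as follows, again assuming the hypothetical reviews DataFrame and column names (first_round_days, impact_factor, main_field) from the earlier sketches.

# Sketch of the Pearson correlation between first response time and log impact factor.
import numpy as np
from scipy.stats import pearsonr

valid = reviews.dropna(subset=["first_round_days", "impact_factor"]).copy()
valid["log_if"] = np.log(valid["impact_factor"])

r_all, p_all = pearsonr(valid["first_round_days"], valid["log_if"])
print(f"All fields combined: r = {r_all:.2f} (p = {p_all:.3f})")

# Per-field correlations
for field, grp in valid.groupby("main_field"):
    r, p = pearsonr(grp["first_round_days"], grp["log_if"])
    print(f"{field}: r = {r:.2f} (p = {p:.3f})")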

Total review duration

Total review duration refers to the time a manuscript is under the responsibility of the journal. Besides the duration of the first review round, total review duration is also determined by the number and duration of subsequent review rounds. Total review duration does not include the time taken by authors to revise and resubmit their manuscript. Given that rejected manuscripts have on average fewer review rounds, we restrict this analysis to accepted papers.

Table 2 shows that the reported total review duration of accepted manuscripts is on average 17 weeks. Again there are substantial differences between scientific fields. With 12–14 weeks, average total review duration is shortest in Medicine, Public health, and the Natural sciences. It is longest in Economics and Business, where the process takes on average 25 weeks, twice as long. In Mathematics and Computer sciences, Social sciences, and Humanities, total review duration is also long, i.e., 22–23 weeks. Hence the differences in the duration of the review processes we observed for the first review round are also present in the other aspects of the process.

Table 2 Total review duration of accepted papers

If we split out the data further, we note that in Natural sciences, Medicine, and Public health 13–16% of the manuscripts pass through the entire peer review process within 1 month, that this applies to nearly two thirds of the manuscripts after 3 months, and to 87–92% of the manuscripts after 6 months. In Mathematics and Computer sciences, Social sciences, and Humanities, these figures are 3–4%, one third, and slightly above two thirds, respectively. Whereas only 8% of the authors in Medicine had to wait more than 6 months, this applies to one third of authors in Social sciences and Economics and Business.

The total time a manuscript is with the journal is determined by the time a journal takes for a review round and by the number of review rounds. As mentioned in the Background section, there are indications that the number of review rounds has increased in recent years. In our data, the number of review rounds on average amounts to 2.03, with Psychology (2.23), General journals (2.18), Economics and Business (2.16), and Social sciences (2.15) showing a higher average number of review rounds.

Total review duration correlates significantly and negatively (−0.27) with a journal's impact factor, indicating that total review duration is shorter for higher impact factor journals.

Immediate (desk) rejection time

Immediate rejection time is the time an editor takes to inform authors that he or she is not interested in the manuscript (and will therefore not send it to reviewers). Our figures clearly show that immediate rejection time is a major source of unnecessary time loss in the peer review process (Table 3). On average, an immediate rejection in Medicine takes 10 days, closely followed by Natural sciences, Public health, and Engineering, taking 11–12 days. Journals in Psychology, Social sciences, and Mathematics and Computer sciences take half as long again, i.e., 15–17 days. These are relatively high averages, given that in many cases an inspection of the abstract is sufficient to decide that a paper does not fit.

Table 3 Immediate rejection time

On the positive side, in half (50%) of the reported immediate rejection cases, the editor informed the author(s) within 1 week. However, the data also show that in 17% of cases authors had to wait more than 4 weeks to be informed of the rejection. Several authors even had to wait for more than 3 months, or withdrew their manuscripts after hearing nothing for an even longer period. These are clearly unacceptable practices.

The situation is best in Medicine, where 62% of authors are informed about an immediate rejection within 7 days, followed by Natural sciences and Public health, where this figure is 54%. Immediate rejection time is longest for authors in the Social sciences and Mathematics and Computer sciences, where in nearly 30% of reported cases it took the editor 4 weeks or more to inform the author(s) that he or she was not interested in the manuscript and would not send it to reviewers. There is a significant negative correlation (−0.18) between immediate rejection time and the journal's impact factor, which indicates that journals with a higher impact factor have editors who work faster and editorial offices that are more professionally organized.

Reviewers are generally blamed for long processing times, but our findings indicate that manuscript handling at editorial offices plays an important role too. If editors take a month for an immediate rejection decision, they are probably also slow in finding reviewers and processing referee reports.

Referee reports

The average number of referee reports is about 2.2 in all scientific fields (see Table 4). This correspondence is remarkable, given the substantial differences between fields in other respects. There is slight variation in the experienced quality of the referee reports between the fields [as indicated on a scale running from 0 (very bad) to 5 (excellent)]. Authors report the quality of the reports to be somewhat higher in Natural sciences, Engineering, and Public health (3.7), and lower in General journals, Psychology, and Economics and Business (3.4). It is interesting that the long review duration in Economics and Business did not translate into referee reports experienced as being of higher quality.

Table 4 Referee reports: number, quality, and difficulty of requested changes (scale 0–5)

Authors who were given the opportunity to revise and resubmit their papers were also asked to what extent they perceived the requested changes as difficult and whether they thought their manuscript had improved as a result of the revision. There is a significant positive correlation (0.40) between these factors. When the revision was experienced as more difficult, authors were also more satisfied with the improvement. Regarding the difficulty experienced, revision processes were perceived as easiest in Mathematics and Computer sciences and in Public health (2.6), and as most difficult in Economics and Business (3.3). Regarding the experienced improvement of the manuscript as a result of the revision, authors from Social sciences, Economics and Business, and Humanities reported somewhat higher figures (3.8 and 3.9) compared to the other scientific fields (3.7).

There is a small positive correlation (0.07) between the difficulty experienced regarding the referee reports and the impact factor of the journal. Thus, reviewers of more highly ranked journals tend to make somewhat greater demands on the authors. The degree of improvement experienced regarding the manuscript is not significantly related to impact factor.

Revision time

The time from the first submission date to the final decision date is not only influenced by the time the manuscript is at the editorial office or being reviewed, but also by the time authors take to revise their manuscript. It is therefore important to also look at revision time. Table 5 shows that authors who received a revise-and-resubmit on average take 39 days to revise their manuscript, but there is substantial variation among the fields. Authors in Economics and Business take longest to revise their manuscripts: on average 64 days to prepare and submit a revised version. This is substantially longer than authors in Natural sciences, Engineering, and Mathematics and Computer sciences (32–34 days) and in Public health (29 days). Apparently, in Economics and Business it is not only the editors who take more time.

Table 5 Revision time

Table 5 also shows the percentage of manuscripts revised within a specific number of days. While 18% of authors in Engineering, Mathematics and Computer sciences, and Public health revise their manuscript within 7 days, this applies to 9–10% of authors in Social sciences and Humanities and only 3% of authors in Economics and Business.

Regarding the relationship between the journal's impact factor and the time authors take to revise their manuscript, we expected authors who received a revise-and-resubmit from a high-level journal to be more motivated to complete the revision of their manuscript quickly. However, no significant correlation was found between revision time and the journal's impact factor.

Rating of peer review experience

The SciRev questionnaire gives authors the opportunity to provide an overall rating of the review experience on a scale from 0 (very bad) to 5 (excellent); see Table 6 for details. Authors of accepted manuscripts give the peer review process a much higher rating (4) than authors of rejected manuscripts (2.2). Moreover, the rating of the peer review process is negatively related to total review duration. This correlation is −0.43 for both accepted and rejected manuscripts.

Table 6 Rating of review process of accepted and rejected papers per field (scale 0–5)

To determine how the various factors might affect the satisfaction of authors with the peer review process, we turn to the results of the multivariate analyses (see Table 7). The first columns show the results of Model 1, which contains all relevant variables. Model 2 contains the same variables plus the significant interactions between the variables.

Table 7 Regression analysis with overall rating as dependent variable (scale 0–5)

As can be seen in Model 1, all variables, except impact factor, are significantly related to authors' rating of the peer review process of their manuscript. As expected, authors of accepted manuscripts rate the process significantly more positively than authors of rejected manuscripts. Authors tend to suffer from attributional bias: if their paper is rejected, they often blame this on situational factors such as incompetent reviewers and uninterested editors; but if it is accepted they tend to attribute this to their own expertise and competence in writing high-quality papers (Garcia et al. 2016).

Authors also value the speed of the peer review process. When the duration of the first review round is shorter and there are fewer review rounds, authors give the process a significantly higher rating. Authors who receive more referee reports also tend to be more positive about the process. Their perception might be that their manuscript has been dealt with more seriously and thoroughly. Authors from countries where English is the first language rate the peer review process less positively than authors from other countries. It is possible that these authors have higher expectations of the process and are more critical regarding aspects that do not meet their expectations.

Taking other factors into account, authors in Economics and Business, Social sciences, Psychology, and Mathematics and Computer sciences are more positive about the peer review process than authors in Natural sciences, Medicine, Public health, and especially General journals.

When we include the significant interactions in the model (Model 2), the sign and significance of the main effects stay the same. The interaction analysis shows that the negative effect of a longer duration of the first review round and the negative effect of more review rounds are less pronounced for accepted papers. Hence it seems that authors are willing to accept extensive revision work if this is rewarded with the acceptance of their paper. At the same time, they seem especially disappointed if the manuscript is nonetheless rejected after a long review process.

The negative interaction between a paper being accepted and the number of referee reports indicates that authors of rejected papers may consider a higher number of reports as a sign that their paper was taken seriously and might be content with extensive feedback. For obvious reasons, authors of accepted papers are more positive when the journal has a higher impact factor. Authors from English-speaking countries are less negative about the peer review process when their paper is accepted and when they receive more referee reports, but find a long process more problematic. This might reflect that they have higher expectations that their paper will be accepted and that the peer review process will be short and efficient compared to authors from non-English-speaking countries.

When the duration of the first review round is longer, or when the impact factor of the journal is higher, authors are more concerned about a higher number of review rounds. In those cases, they might expect a smooth continuation of the process and be more disappointed when this proves not to be the case. A longer duration of the first review round is considered less negative by authors who receive more referee reports.

Qualitative findings

The motivations authors give for their rating of the peer review process on SciRev.sc contain important qualitative information on author experiences. We analyzed these motivations and registered the author's major concern(s). A first important observation is that nearly half (918) of the 1879 comments are positive. Many authors, in particular of accepted papers, are satisfied with the process and express their gratitude in their motivations. Of the 961 comments with a negative connotation, 371 (39%) express concerns about the duration of the review process. This aspect of long review duration is included in the quantitative outcomes and has been discussed in the preceding sections.

A more informative source of discontent, mentioned 437 times (45%), concerns the role of editors and editorial offices. Poor communication by editors/offices—in particular not reacting to information requests—is a major source of frustration mentioned by authors. We received reports of authors who waited over 6 months without hearing anything from the journal or receiving reactions to information requests. Also editors who 'hide' behind reviewers and do not take an independent position vis-à-vis them are perceived as problematic. In particular when referee reports are contradictory—as often happens—it is important that editors provide guidance and indicate the comments on which authors should focus in their revision.

Poor quality of referee reports is mentioned in 141 (15%) of the critical comments. Referee reports are often perceived to be superficial, contradictory, or unreadable, to ask for unreasonable modifications, or to convey the impression that the reviewer did not read or understand the paper. Other problems mentioned are the addition of completely new comments in the second review round, the theft of ideas, and requests for unnecessary references.

Conclusion

In this paper we study various aspects of the peer review process on the basis of 3500 review experiences reported in the last 3 years on the SciRev.sc website. Aspects discussed include the first response time (duration of the first review round), total review duration (the time the manuscript is at the editorial office or with reviewers), the immediate rejection time, the time authors take for their first revision (revision time), the number, quality, and difficulty of referee reports received, and the overall rating of the process.

We find considerable variation between the ten scientific fields distinguished. Whereas the reported first response time is 8–9 weeks for Medicine and Public health, it is 11–14 weeks in Natural sciences, Engineering, Psychology, and General journals and 16–18 weeks in Economics and Business, Social sciences, Mathematics and Computer sciences, and Humanities (Table 1). There is also considerable variation around these averages. While 27–28% of authors in Medicine and Public health were informed within a month, 18% of authors in Mathematics and Computer sciences and Economics and Business had to wait more than 6 months for a decision. As expected, these figures also translate into longer total review durations reported for the scientific fields with longer first review rounds (Table 2).

The long duration of the peer review process is often blamed on reviewers taking much time to complete their reports. However, our figures indicate that inefficient editorial processes are also important. The reported immediate rejection time (Table 3), which is not influenced by reviewers, shows substantial variation among the fields and is often unreasonably long. Whereas in half of the immediate rejection cases authors were informed within a week, in nearly one sixth of these cases authors had to wait for more than 4 weeks. Medicine performs best with an average of 10 days; Natural sciences, Public health, and Engineering come second with 11–12 days. Psychology, Social sciences, and Mathematics and Computer sciences take longest with 15–17 days. If editors take much time for a desk rejection, it is likely they also take much time finding reviewers and processing incoming referee reports. Immediate rejection time is therefore a powerful indicator of the overall performance of editorial offices.

The total time between submission of a manuscript and the final decision of the editor is not only influenced by the time reviewers take to submit their reports and the time editorial offices take to handle the manuscript, but also by the time authors take to revise and resubmit their manuscript (Table 5). In this respect, the situation is similar to that of the other durations. While, on average, authors take 39 days to revise their manuscript, authors in Psychology and Social sciences take 50 days, and those in Economics and Business even 64 days. On the other hand, authors in Public health, Engineering, Mathematics and Computer sciences, and Natural sciences take only 29–34 days for a revision. The longer duration in some fields is not associated with a higher number of referee reports (2.0–2.3) nor with more difficult referee reports (2.6–3.3).

Most characteristics of the peer review process studied are related to the journal's impact factor. More highly ranked journals have a shorter duration of the first review round (r = −0.29), total review duration (r = −0.27), and immediate rejection time (r = −0.18), all indicating that review processes of more highly ranked journals are more efficient. We also found a small but significant positive correlation (r = 0.08) between the experienced difficulty of the referee reports and impact factor, indicating that reviewers of more highly ranked journals are somewhat more demanding.

As expected, authors of accepted manuscripts are more satisfied with the peer review experience than authors of rejected papers (Table 6). On a scale from 0 (very bad) to 5 (excellent), they rate the process a 4, compared to a 2.2 for authors of rejected manuscripts. A longer duration of the process is negatively associated with the rating, independent of the process outcome. For both accepted and rejected manuscripts the Pearson correlation coefficient between total review duration and rating is −0.37.

To assess the independent associations between the characteristics of the process and the satisfaction of authors, a multivariate regression analysis was performed with the overall rating of the process as dependent variable (Table 7). This analysis shows that even when the other variables are taken into account, all three aspects, i.e., a shorter duration of the first review round, a lower number of review rounds, and acceptance of the paper, are associated with a significantly higher overall rating of the experience. Interestingly, it also shows that, in spite of the longer duration in Economics and Business, Social sciences, and Mathematics and Computer sciences, authors in those fields are more positive about the process than authors in General journals, Medicine, and Public health, where processes are shorter. Expectations thus clearly play a role.

As expected, authors of accepted papers are even more positive if the journal has a higher impact factor. They are (afterwards) also less bothered by a longer duration of the first review round and by more than one review round. We also find that authors rate the process more positively if they receive more referee reports, in particular after a long first review round and when the manuscript is rejected. This indicates that authors appreciate the work of reviewers and the feedback given on their manuscripts. Compared to authors from non-English-speaking countries, those from English-speaking countries are generally less satisfied with the process, especially when their manuscript is rejected or in case of more than one review round. This suggests that authors from English-speaking countries have higher expectations of the peer review process.

References

  • Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing peer review. Science, 321, 15.

  • Allison, P. (2001). Missing data. London: Sage Publications Ltd.

  • Azar, O. H. (2004). Rejections and the importance of first response times. International Journal of Social Economics, 31(3), 259–274.

  • Azar, O. H. (2007). The slowdown in first-response times of economics journals: Can it be beneficial? Economic Inquiry, 45(1), 179–187.

  • Björk, B., Roos, A., & Lauri, M. (2009). Scientific journal publishing: Yearly volume and open access availability. Information Research, 14(1).

  • Björk, B., & Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics, 7, 914–923.

  • Cherkashin, I., Demidova, S., Imai, S., & Krishna, K. (2009). The inside scoop: Acceptance and rejection at the Journal of International Economics. Journal of International Economics, 77, 120–132.

  • Ellison, G. (2002a). The slowdown of the economics publishing process. Journal of Political Economy, 110(5), 947–993.

  • Ellison, G. (2002b). Evolving standards for academic publishing: A q-r theory. Journal of Political Economy, 110(5), 994–1034.

  • Etkin, A. (2014). A new method and metric to evaluate the peer review process of scholarly journals. Publishing Research Quarterly, 30, 23–38.

  • García, J. A., Rodriguez-Sánchez, R., & Fdez-Valdivia, J. (2016). Why the referees' reports I receive as an editor are so much better than the reports I receive as an author? Scientometrics, 106, 967–986.

  • Hamermesh, D. S. (1994). Facts and myths about refereeing. Journal of Economic Perspectives, 8(1), 153–163.

  • Hardy, M. A. (1993). Regression with dummy variables. Sage.

  • Jinha, A. E. (2010). Article 50 million: An estimate of the number of scholarly articles in existence. Learned Publishing, 23, 258–263.

  • Kareiva, P., Marvier, M., West, S., & Hornisher, J. (2002). Slow-moving journals hinder conservation efforts. Nature, 420, 15.

  • Lewin, A. Y. (2014). The peer-review process: The good, the bad, the ugly, and the extraordinary. Management and Organization Review, 10(2), 167–173.

  • Lotriet, C. J. (2012). Reviewing the review process: Identifying sources of delay. Australasian Medical Journal, 5(1), 26–29.

  • Moizer, P. (2009). Publishing in accounting journals: A fair game? Accounting, Organizations and Society, 34, 285–304.

  • Nicholas, D., Watkinson, A., Jamali, H. R., Herman, E., Tenopir, C., Volentine, R., et al. (2015). Peer review: Still king in the digital age. Learned Publishing, 28(1), 15–21.

  • Onitilo, A. A., Engel, J. M., Salzman-Scott, S. A., Stankowski, R. V., & Suhail, A. R. (2014). A core-item reviewer evaluation (CoRE) system for manuscript peer review. Accountability in Research: Policies and Quality Assurance, 21, 109–121.

  • Park, I.-U., Peacey, M. W., & Munafò, M. R. (2014). Modelling the effects of subjective and objective decision making in scientific peer review. Nature, 506, 93–98.

  • Pautasso, M., & Schäfer, H. (2010). Peer review delay and selectivity in ecology journals. Scientometrics, 84, 307–315.

  • Peters, D., & Ceci, S. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. The Behavioral and Brain Sciences, 5, 187–255.

  • Plume, A., & van Weijen, D. (2014). Publish or perish? The rise of the fractional author. Research Trends, 38.

  • Resnik, D. B., Gutierrez-Ford, C., & Peddada, S. (2008). Perceptions of ethical issues with scientific journal peer review: An exploratory study. Science and Engineering Ethics, 14, 305–310.

  • Solomon, D., & Björk, B. (2012). Publication fees in open access publishing: Sources of funding and factors influencing choice of journal. Journal of the American Society for Information Science and Technology, 63(1), 98–107.

  • Thompson, G. D., Aradhyula, S. V., Frisvold, G., & Frisvold, R. (2010). Does paying referees expedite reviews?: Results of a natural experiment. Southern Economic Journal, 76(3), 678–692.

  • Tite, L., & Schroter, S. (2007). Why do peer reviewers decline to review? A survey. Continuing Professional Education, 61, 9–12.

  • Ware, M., & Mabe, M. (2015). The STM report: An overview of scientific and scholarly journal publishing. The Hague: International Association of Scientific, Technical and Medical Publishers. http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf.


Acknowledgement

This article is based upon work from COST Action TD1306 "New Frontiers of Peer Review", supported by COST (European Cooperation in Science and Technology).

Author information


Corresponding author

Correspondence to Jeroen Smits.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Huisman, J., & Smits, J. Duration and quality of the peer review process: the author's perspective. Scientometrics 113, 633–650 (2017). https://doi.org/10.1007/s11192-017-2310-5


  • DOI: https://doi.org/10.1007/s11192-017-2310-5

Keywords

  • Peer review process
  • Duration
  • Quality
  • Author's experience
