
Open Peer Review
Post-Publication Peer Review

Retraction studies

  • Anderson Caleb, Nugent Kenneth, Peterson Christopher, 2021, “Academic Journal Retractions and the COVID-19 Pandemic”, Journal of Primary Care & Community Health, 12, 1–6,
    Abstract. The 2020 COVID-19 pandemic has produced an unprecedented amount of scientific research, with over 100,000 articles on the SARS-CoV-2 virus or the associated pandemic published within the first year. To effectively disseminate such a large volume of research, some academic journal publishers altered their review criteria, and many articles were made available before undergoing a traditional review process. However, with this rapid influx of information, multiple COVID-19 articles have been retracted or withdrawn. Some researchers have expressed concern that these retractions call into question the validity of an expedited review process and the overall quality of the larger body of COVID-19 research. We examined 68 removed articles and determined that many of the articles were removed for unknown reasons (n = 22) or as duplications (n = 12); 24 papers were retracted for more significant reasons (data integrity, plagiarism, reporting or analysis, and IRB or privacy issues). The majority of removed papers were from the USA (n = 23) and China (n = 19).
  • Bar-Ilan Judith, Halevi Gali, 2017, “Post retraction citations in context: A case study”, Scientometrics, 113, 547–565,
    Abstract. This study examines the nature of citations to articles that were retracted in 2014. Out of 987 retracted articles found in ScienceDirect, an Elsevier full text database, we selected all articles that received more than 10 citations between January 2015 and March 2016. Since the retraction year was known for only about 83% of the retracted articles, we chose to concentrate on recent citations, which certainly appeared after the cited paper was retracted. Overall, we analyzed 238 citing documents and identified the context of each citation as positive, negative or neutral. Our results show that the vast majority of citations to retracted articles are positive despite the clear retraction notice on the publisher’s platform and regardless of the reason for retraction. Positive citations can also be seen to articles that were retracted due to ethical misconduct, data fabrication and false reports. In light of these results, we listed some recommendations for publishers that could potentially minimize the referral to retracted studies as valid.
  • Bar-Ilan Judith, Halevi Gali, 2018, “Temporal characteristics of retracted articles”, Scientometrics, 116, 1771–1783,
    Abstract. There are three main reasons for retraction: (1) ethical misconduct (e.g. duplicate publication, plagiarism, missing credit, no IRB, ownership issues, authorship issues, interference in the review process, citation manipulation); (2) scientific distortion (e.g. data manipulation, fraudulent data, unsupported conclusions, questionable data validity, non-replicability, data errors—even if unintended); (3) administrative error (e.g. article published in wrong issue, not the final version published, publisher errors). The first category, although highly deplorable, has almost no effect on the advancement of science; the third category is relatively minor. The papers belonging to the second category are most troublesome from the scientific point of view, as they are misleading and have serious negative implications not only on science but also on society. In this paper, we explore some temporal characteristics of retracted articles, including time of publication, years to retract, growth of post retraction citations over time and social media attention by the three major categories. The data set comprises 995 retracted articles retrieved in October 2014 from Elsevier’s ScienceDirect. Citations and Mendeley reader counts were retrieved four times within 4 years, which allowed us to examine post-retraction longitudinal trends not only for citations, but also for Mendeley reader counts. The major findings are that both citation counts and Mendeley reader counts continue to grow after retraction.
  • Casadevall Arturo, Steen Grant R., Fang Ferric C., 2014, “Sources of error in the retracted scientific literature”, The FASEB Journal, 28 (9), 3847–3855,
    Abstract. Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process.
  • Castillo Mauricio, 2014, “The Fraud and Retraction Epidemic”, American Journal of Neuroradiology, 35 (9), 1653–1654,
  • Coudert François-Xavier, 2019, “Correcting the Scientific Record: Retraction Practices in Chemistry and Materials Science”, Chemistry of Materials, 31 (10), 3593–3598,
  • Dal-Ré Rafael, Ayuso Carmen, 2021, “For how long and with what relevance do genetics articles retracted due to research misconduct remain active in the scientific literature?”, Accountability in Research, 28 (5), 280–296,
    Abstract. We aimed to quantify the number of pre- and post-retraction citations obtained by genetics articles retracted due to research misconduct. All retraction notices available in the Retraction Watch database for genetics articles published in 1970–2016 were assessed. The reasons for retraction were fabrication/falsification and plagiarism. The endpoints were the number of citations of retracted articles and when and how journals reported on retractions and whether this was published on PubMed. Four hundred and sixty retracted genetics articles were cited 34,487 times; 7,945 (23%) were post-retraction citations. Median time to retraction and time to last citation were 3.2 and 3 years, respectively. Most (96%) had a PubMed retraction notice; 1% were removed from journal websites altogether, and 4% had no information available on either the online or PDF versions. Ninety percent of citations were from articles retracted due to falsification/fabrication. The percentage of post-retraction citations was significantly higher in the case of plagiarism (42%) than in the case of fabrication/falsification (21.5%) (p<0.001). Median time to retraction was shorter (1.3 years) in the case of plagiarism than for fabrication/falsification (4.8 years, p<0.001). The retraction was more frequently reported in the PDFs (70%) for the fabrication/falsification cases than for the plagiarism cases (43%, p<0.001). The highest rate of retracted papers due to falsification/fabrication was among authors in the USA, and the highest rate for plagiarism was in China. Although most retractions were appropriately handled by journals, the gravest issue was that median time to retraction for articles retracted for falsification/fabrication was nearly 5 years, earning close to 6800 post-retraction citations. Journals should implement processes to speed up the retraction process that will help to minimize post-retraction citations.
  • Decullier Evelyne, Maisonneuve Hervé, 2018, “Correcting the literature: Improvement trends seen in contents of retraction notices”, BMC Research Notes, 11, 490,
    Abstract. Objective: To analyse retraction notices from 2016 and compare their quality to the 2008 notices. Results: From 146 retractions retrieved, only 123 were included; a clear reason for retraction was available for 122 (99.2%) and no reason was given for one (0.8%). The main reasons for retraction were mistakes 26.0% (n = 32), fraud 26.0% (n = 32), plagiarism 20.3% (n = 25), and overlap 8.1% (n = 10). In 100 (81.3%) cases, a mention of retraction was available on the original paper, in 15 (12.2%) there was no mention of retraction, and 8 (6.5%) papers were deleted. Compared to the previous cohorts, management of retraction has improved because 99.2% provided a clear reason, and 81.3% of original articles were available with a mention of the retraction.
  • Dobránszki Judit, Teixeira da Silva Jaime A., 2019, “Corrective factors for author- and journal-based metrics impacted by citations to accommodate for retractions”, Scientometrics, 121, 387–398,
    Abstract. Citation-based metrics are frequently used to evaluate the level, or quality, of a researcher, or their work, often as a function of the ranking of the journal in which they publish, and broadly tend to be divided into journal-based metrics (JBMs) and author-based metrics (ABMs). Despite wide knowledge of the gaming of such metrics, in particular the Clarivate Analytics journal impact factor (JIF), no suitable substitute concept has yet emerged, nor has any corrective measure been developed. In a post-publication peer review world of increasing retractions, and within a framework of open science, we propose correction factors for JBMs and ABMs that take into account retractions. We describe ways to correct the JIF, CiteScore, the 5-year Impact Factor, Immediacy Index, Cited Half-Life, Raw Impact per Paper and other JBMs (Eigenfactor Score and Article Influence Score) as well as the h-index, one of the most widespread ABMs, depending on the number of retractions for that journal or individual, respectively. The existence of such corrective factors could make the use of these metrics more transparent, and might allow them to be used in a world that is adapting to an increase in retractions and corrective measures to deal with erroneous scientific literature. We caution that such correction factors should be used exclusively as such, and should not be viewed, or used, as punitive factors.
  • Drimer-Batca Daniel, Iaccarino Jonathan M., Fine Alan, 2019, “Status of retraction notices for biomedical publications associated with research misconduct”, Research Ethics, 15 (2), 1–5,
    Abstract. In order to assess the status of retraction notices for publications involving research misconduct, we collected and analyzed information from the Office of Research Integrity website. This site lists confirmed instances of misconduct in research supported by the National Institutes of Health. Over a 10-year period, 200 publications derived from misconduct were identified. For 20.5% of those papers, no retraction notice was published. We found that the majority of these cases were from investigations concluded at least two years before our analysis, and thus are unlikely to be explainable by timing considerations. These findings demonstrate that retraction notices for papers associated with misconduct are often not published and suggest that clear, adherent policies are needed in this circumstance to correct the scientific record.
  • Fanelli Daniele, 2013, “Why Growing Retractions Are (Mostly) a Good Sign”, PLoS Medicine, 10 (12),
    Summary Points. – Corrections to scientific papers have been published for much longer than retractions, and show little sign of a recent increase. – The number of journals issuing retractions has grown dramatically in recent years, but the number of retractions per retracting journal has not increased. – The number of queries and allegations made to the US Office of Research Integrity has grown, but the frequency of its findings of misconduct has not increased. – Therefore, the rising number of retractions is most likely caused by a growing propensity to retract flawed and fraudulent papers, and there is little evidence of an increase in the prevalence of misconduct. – Statistics on retractions and findings of misconduct are best used to make inferences about weaknesses in the system of scientific self-correction.
  • Fanelli Daniele, Ioannidis John P.A., Goodman Steven, 2018, “Improving the integrity of published science: An expanded taxonomy of retractions and corrections”, European Journal of Clinical Investigation, 48 (4),
  • Fanelli Daniele, Wong Julie, Moher David, 2021, “What difference might retractions make? An estimate of the potential epistemic cost of retractions on meta-analyses”, Accountability in Research,
    Abstract. The extent to which a retraction might require revising previous scientific estimates and beliefs – which we define as the epistemic cost – is unknown. We collected a sample of 229 meta-analyses published between 2013 and 2016 that had cited a retracted study, assessed whether this study was included in the meta-analytic estimate and, if so, re-calculated the summary effect size without it. The majority (68% of N = 229) of retractions had occurred at least one year prior to the publication of the citing meta-analysis. In 53% of these avoidable citations, the retracted study was cited as a candidate for inclusion, and in only 34% of these meta-analyses (13% of total) was the study explicitly excluded because it had been retracted. Meta-analyses that included retracted studies were published in journals with significantly lower impact factors. Summary estimates without the retracted study were lower than the original if the retraction was due to issues with data or results and higher otherwise, but the effect was small. We conclude that meta-analyses have a problematically high probability of citing retracted articles and of including them in their pooled summaries, but the overall epistemic cost is contained.
  • Furman Jeffrey L., Jensen Kyle, Murray Fiona, 2012, “Governing knowledge in the scientific community: Exploring the role of retractions in biomedicine”, Research Policy, 41 (2), 276–290,
    Abstract. Although the validity of knowledge is critical to scientific progress, substantial concerns exist regarding the governance of knowledge production. While research errors are as relevant to the knowledge economy as defects are to the manufacturing economy, mechanisms to identify and signal “defective” or false knowledge are poorly understood. In this paper, we investigate one such institution – the system of scientific retractions. We analyze the universe of peer-reviewed scientific articles retracted from the biomedical literature between 1972 and 2006, comparing them with a matched control sample in order to identify the correlates, timing, and causal impact of scientific retractions. This effort provides insight into the workings of a distributed, peer-based system for the governance of validity in scientific knowledge. Our findings suggest that attention is a key predictor of retraction – retracted articles arise most frequently among highly-cited articles. The retraction system is expeditious in uncovering knowledge that is ever determined to be false (the mean time to retraction is less than two years) and democratic (retraction is not systematically affected by author prominence). Lastly, retraction causes an immediate, severe, and long-lived decline in future citations. Conditional on the obvious limitation that we cannot measure the absolute amount of false science in circulation, these results support the view that distributed governance systems can be designed to uncover false knowledge relatively swiftly and to mitigate the costs that false knowledge imposes on future generations of producers.
  • Hsiao Tzu-Kun, Schneider Jodi, 2021, “Continued use of retracted papers: Temporal trends in citations and (lack of) awareness of retractions shown in citation contexts in biomedicine”, Quantitative Science Studies, 2 (4), 1144–1169,
    Abstract. We present the first database-wide study on the citation contexts of retracted papers, which covers 7,813 retracted papers indexed in PubMed, 169,434 citations collected from iCite, and 48,134 citation contexts identified from the XML version of the PubMed Central Open Access Subset. Compared with previous citation studies that focused on comparing citation counts using two time frames (i.e., preretraction and postretraction), our analyses show the longitudinal trends of citations to retracted papers in the past 60 years (1960–2020). Our temporal analyses show that retracted papers continued to be cited, but that old retracted papers stopped being cited as time progressed. Analysis of the text progression of pre- and postretraction citation contexts shows that retraction did not change the way the retracted papers were cited. Furthermore, among the 13,252 postretraction citation contexts, only 722 (5.4%) citation contexts acknowledged the retraction. In these 722 citation contexts, the retracted papers were most commonly cited as related work or as an example of problematic science. Our findings deepen the understanding of why retraction does not stop citation and demonstrate that the vast majority of postretraction citations in biomedicine do not document the retraction.
  • Chen Chaomei, Hu Zhigang, Milbank Jared, Schultz Timothy, 2013, “A visual analytic study of retracted articles in scientific literature”, Journal of The American Society for Information Science and Technology, 64 (2), 234–253,
    Abstract. Retracting published scientific articles is increasingly common. Retraction is a self-correction mechanism of the scientific community to maintain and safeguard the integrity of scientific literature. However, a retracted article may pose a profound and long-lasting threat to the credibility of the literature. New articles may unknowingly build their work on false claims made in retracted articles. Such dependencies on retracted articles may become implicit and indirect. Consequently, it becomes increasingly important to detect implicit and indirect threats. In this article, our aim is to raise the awareness of the potential threats of retracted articles even after their retraction and demonstrate a visual analytic study of retracted articles with reference to the rest of the literature and how their citations are influenced by their retraction. The context of highly cited retracted articles is visualized in terms of a co-citation network as well as the distribution of articles that have high-order citation dependencies on retracted articles. Survival analyses of time to retraction and postretraction citation are included. Sentences that explicitly cite retracted articles are extracted from full-text articles. Transitions of topics over time are depicted in topic-flow visualizations. We recommend that new visual analytic and science mapping tools should take retracted articles into account and facilitate tasks specifically related to the detection and monitoring of retracted articles.
  • Katavic Vedran, 2014, “Retractions of scientific publications: responsibility and accountability”, Biochemia Medica, 24 (2), 217–222,
    Abstract. This evidence-based opinion piece gives a short overview of the increase in retractions of publications in scientific journals and discusses various reasons for that increase. Also discussed are some of the recent prominent cases of scientific misconduct, the number of authors with multiple retractions, and problems with reproducibility of published research. Finally, some of the effects of faulty research on science and society, as well as possible solutions are discussed.
  • Mebane Christopher A., Sumpter John P., Fairbrother Anne, Augspurger Thomas P., Canfield Timothy J., Goodfellow William L., Guiney Patrick D., LeHuray Anne, Maltby Lorraine, Mayfield David B., McLaughlin Michael J., Ortego Lisa S., Schlekat Tamar, Scroggins Richard P., Verslycke Tim, 2019, “Scientific integrity issues in Environmental Toxicology and Chemistry: Improving research reproducibility, credibility, and transparency”, Integrated Environmental Assessment and Management, 15 (3), 320–344,
    Abstract. High-profile reports of detrimental scientific practices leading to retractions in the scientific literature contribute to lack of trust in scientific experts. Although the bulk of these have been in the literature of other disciplines, environmental toxicology and chemistry are not free from problems. While we believe that egregious misconduct such as fraud, fabrication of data, or plagiarism is rare, scientific integrity is much broader than the absence of misconduct. We are more concerned with more commonly encountered and nuanced issues such as poor reliability and bias. We review a range of topics including conflicts of interests, competing interests, some particularly challenging situations, reproducibility, bias, and other attributes of ecotoxicological studies that enhance or detract from scientific credibility. Our vision of scientific integrity encourages a self-correcting culture that promotes scientific rigor, relevant reproducible research, transparency in competing interests, methods and results, and education.
  • Montgomery Kathleen, Oliver Amalya L., 2017, “Conceptualizing Fraudulent Studies as Viruses: New Models for Handling Retractions”, Minerva, 55, 49–64,
    Abstract. This paper addresses the growing problem of retractions in the scientific literature of publications that contain bad data (i.e., fabricated, falsified, or containing error), also called “false science.” While the problem is particularly acute in the biomedical literature because of the life-threatening implications when treatment recommendations and decisions are based on false science, it is relevant for any knowledge domain, including the social sciences, law, and education. Yet current practices for handling retractions are seen as inadequate. We use the metaphor of a virus to illustrate how such studies can spread and contaminate the knowledge system, when they continue to be treated as valid. We suggest drawing from public health models designed to prevent the spread of biological viruses and compare the strengths and weaknesses of the current governance model of professional self-regulation with a proposed public health governance model. The paper concludes by considering the value of adding a triple-helix model that brings industry into the university-state governance mechanisms and incorporates bibliometric capabilities needed for a holistic treatment of the retraction process.
  • Moylan Elizabeth C., Kowalczuk Maria K., 2016, “Why articles are retracted: a retrospective cross-sectional study of retraction notices at BioMed Central”, BMJ Open,
    Abstract. Objectives: To assess why articles are retracted from BioMed Central journals, whether retraction notices adhered to the Committee on Publication Ethics (COPE) guidelines, and whether retractions are becoming more frequent as a proportion of published articles. Design/setting: Retrospective cross-sectional analysis of 134 retractions from January 2000 to December 2015. Results: 134 retraction notices were published during this timeframe. Although they account for 0.07% of all articles published (190 514 excluding supplements, corrections, retractions and commissioned content), the rate of retraction is rising. COPE guidelines on retraction were adhered to in that an explicit reason for each retraction was given. However, some notices did not document who retracted the article (eight articles, 6%) and others were unclear whether the underlying cause was honest error or misconduct (15 articles, 11%). The largest proportion of notices was issued by the authors (47 articles, 35%). The majority of retractions were due to some form of misconduct (102 articles, 76%), that is, compromised peer review (44 articles, 33%), plagiarism (22 articles, 16%) and data falsification/fabrication (10 articles, 7%). Honest error accounted for 17 retractions (13%) of which 10 articles (7%) were published in error. The median number of days from publication to retraction was 337.5 days. Conclusions: The most common reason to retract was compromised peer review. However, the majority of these cases date to March 2015 and appear to be the result of a systematic attempt to manipulate peer review across several publishers. Retractions due to plagiarism account for the second largest category and may be reduced by screening manuscripts before publication although this is not guaranteed. Retractions due to problems with the data may be reduced by appropriate data sharing and deposition before publication. Adopting a checklist (linked to COPE guidelines) and templates for various classes of retraction notices would increase transparency of retraction notices in future.
  • Oransky Ivan, Fremes Stephen E, Kurlansky Paul, Gaudino Mario, 2021, “Retractions in medicine: the tip of the iceberg”, European Heart Journal, 42 (41), 4205–4206,
  • Pulverer Bernd, 2015, “When things go wrong: correcting the scientific record”, The EMBO Journal, 34, 2483-2485,
  • Redman B.K., Yarandi H.N., Merz Jon F., 2008, “Empirical developments in retraction”, Journal of Medical Ethics, 34, 807–809,
    Abstract. This study provides current data on key questions about retraction of scientific articles. Findings confirm that the rate of retractions remains low but is increasing. The most commonly cited reason for retraction was research error or inability to reproduce results; the rate from research misconduct is an underestimate, since some retractions necessitated by research misconduct were reported as being due to inability to reproduce. Retraction by parties other than authors is increasing, especially for research misconduct. Although retractions are on average occurring sooner after publication than in the past, citation analysis shows that they are not being recognised by subsequent users of the work. Findings suggest that editors and institutional officials are taking more responsibility for correcting the scientific record but that reasons published in the retraction notice are not always reliable. More aggressive means of notification to the scientific community appear to be necessary.
  • Rivera Horacio, Teixeira da Silva Jaime A., 2021, “Retractions, Fake Peer Reviews, and Paper Mills”, Journal of Korean Medical Science, 36 (24),
  • Schneider Jodi, Ye Di, Hill Alison M., Whitehorn Ashley S., 2020, “Continued post-retraction citation of a fraudulent clinical trial report, 11 years after it was retracted for falsifying data”, Scientometrics, 125, 2877–2913,
    Abstract. This paper presents a case study of long-term post-retraction citation to falsified clinical trial data (Matsuyama et al. in Chest 128(6):3817–3827, 2005), demonstrating problems with how the current digital library environment communicates retraction status. Eleven years after its retraction, the paper continues to be cited positively and uncritically to support a medical nutrition intervention, without mention of its 2008 retraction for falsifying data. To date no high quality clinical trials reporting on the efficacy of omega-3 fatty acids on reducing inflammatory markers have been published. Our paper uses network analysis, citation context analysis, and retraction status visibility analysis to illustrate the potential for extended propagation of misinformation over a citation network, updating and extending a case study of the first 6 years of post-retraction citation (Fulton et al. in Publications 3(1):7–26, 2015). The current study covers 148 direct citations from 2006 through 2019 and their 2542 second-generation citations and assesses retraction status visibility of the case study paper and its retraction notice on 12 digital platforms as of 2020. The retraction is not mentioned in 96% (107/112) of direct post-retraction citations for which we were able to conduct citation context analysis. Over 41% (44/107) of direct post-retraction citations that do not mention the retraction describe the case study paper in detail, giving a risk of diffusing misinformation from the case paper. We analyze 152 second-generation citations to the most recent 35 direct citations (2010–2019) that do not mention the retraction but do mention methods or results of the case paper, finding 23 possible diffusions of misinformation from these nondirect citations to the case paper. Link resolving errors from databases show a significant challenge in a reader reaching the retraction notice via a database search. Only 1/8 databases (and 1/9 database records) consistently resolved the retraction notice to its full-text correctly in our tests. Although limited to evaluation of a single case (N = 1), this work demonstrates how retracted research can continue to spread and how the current information environment contributes to this problem.
  • Balhara Yatan Pal Singh, Mishra Ashwani, 2014, “Compliance of retraction notices for retracted articles on mental disorders with COPE guidelines on retraction”, Current Science, 107 (5), 757-76
    Abstract. The current study is aimed at assessment of compliance of retraction notices for articles on mental disorders with COPE guidelines and the impact of open access on postretraction citation of retracted articles on mental disorders. A bibliometric search was carried out for retraction notices for articles on mental disorders using PubMed. Twenty-four (43.63%) articles were retracted in the year 2010 or later and 31 (56.36%) were retracted before 2010. A significantly higher proportion of articles cited at least once postretraction were without a freely accessible retraction notice (chi square = 10.06, df = 1, P = 0.002). Open access status of the article did not influence the time (in months) to retraction after publication (U = 321.00, P = 0.73).
  • Steen R. Grant, 2011, “Retractions in the scientific literature: do authors deliberately commit research fraud?”, Journal of Medical Ethics, 37 (2), 113-117,
    Abstract. Background Papers retracted for fraud (data fabrication or data falsification) may represent a deliberate effort to deceive, a motivation fundamentally different from papers retracted for error. It is hypothesised that fraudulent authors target journals with a high impact factor (IF), have other fraudulent publications, diffuse responsibility across many co-authors, delay retracting fraudulent papers and publish from countries with a weak research infrastructure. Methods All 788 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Data pertinent to each retracted paper were abstracted from the paper and the reasons for retraction were derived from the retraction notice and dichotomised as fraud or error. Data for each retracted article were entered in an Excel spreadsheet for analysis. Results Journal IF was higher for fraudulent papers (p<0.001). Roughly 53% of fraudulent papers were written by a first author who had written other retracted papers (‘repeat offender’), whereas only 18% of erroneous papers were written by a repeat offender (χ2 = 88.40; p<0.0001). Fraudulent papers had more authors (p<0.001) and were retracted more slowly than erroneous papers (p<0.005). Surprisingly, there was significantly more fraud than error among retracted papers from the USA (χ2 = 8.71; p<0.05) compared with the rest of the world. Conclusions This study reports evidence consistent with the ‘deliberate fraud’ hypothesis. The results suggest that papers retracted because of data fabrication or falsification represent a calculated effort to deceive. It is inferred that such behaviour is neither naïve, feckless nor inadvertent.
  • Steen R. Grant, Casadevall Arturo, Fang Ferric C., 2013, “Why Has the Number of Scientific Retractions Increased?”, PLoS ONE, 8 (7), e68397,
    Abstract. Background: The number of retracted scientific publications has risen sharply, but it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn. Methods and Findings: We examined the interval between publication and retraction for 2,047 retracted articles indexed in PubMed. Time-to-retraction (from publication of article to publication of retraction) averaged 32.91 months. Among 714 retracted articles published in or before 2002, retraction required 49.82 months; among 1,333 retracted articles published after 2002, retraction required 23.82 months (p<0.0001). This suggests that journals are retracting papers more quickly than in the past, although recent articles requiring retraction may not have been recognized yet. To test the hypothesis that time-to-retraction is shorter for articles that receive careful scrutiny, time-to-retraction was correlated with journal impact factor (IF). Time-to-retraction was significantly shorter for high-IF journals, but only <1% of the variance in time-to-retraction was explained by increased scrutiny. The first article retracted for plagiarism was published in 1979 and the first for duplicate publication in 1990, showing that articles are now retracted for reasons not cited in the past. The proportional impact of authors with multiple retractions was greater in 1972–1992 than in the current era (p<0.001). From 1972–1992, 46.0% of retracted papers were written by authors with a single retraction; from 1993 to 2012, 63.1% of retracted papers were written by single-retraction authors (p<0.001). Conclusions: The increase in retracted articles appears to reflect changes in the behavior of both authors and institutions. Lower barriers to publication of flawed articles are seen in the increase in number and proportion of retractions by authors with a single retraction. Lower barriers to retraction are apparent in an increase in retraction for “new” offenses such as plagiarism and a decrease in the time-to-retraction of flawed work.
  • van der Vet Paul E., Nijveen Harm, 2016, “Propagation of errors in citation networks: a study involving the entire citation network of a widely cited paper published in, and later retracted from, the journal Nature”, Research Integrity and Peer Review, 1, 3,
    Abstract. Background: In about one in 10,000 cases, a published article is retracted. This very often means that the results it reports are flawed. Several authors have voiced concerns about the presence of retracted research in the memory of science. In particular, a retracted result is propagated by citing it. In the published literature, many instances are given of retracted articles that are cited both before and after their retraction. Even worse is the possibility that these articles in turn are cited in such a way that the retracted result is propagated further. Methods: We have conducted a case study to find out how a retracted article is cited and whether retracted results are propagated through indirect citations. We have constructed the entire citation network for this case. Results: We show that directly citing articles is an important source of propagation of retracted research results. In contrast, in our case study, indirect citations do not contribute to the propagation of the retracted result. Conclusions: While admitting the limitations of a study involving a single case, we think there are reasons for the non-contribution of indirect citations that hold beyond our case study.
  • Vuong Quan-Hoang, 2019, “The limitations of retraction notices and the heroic acts of authors who correct the scholarly record: An analysis of retractions of papers published from 1975 to 2019”, Learned Publishing, 33 (2), 119-130,
    Abstract. While researchers with retracted papers – publications that are withdrawn because of significant errors or scientific misconduct – carry a permanent stain on their publishing records, understanding the causes and initiators of such retractions can shed a different light on the matter. This paper, based on a random sample of 2,046 retracted papers, which were published between 1975 and 2019, extracted from Retraction Watch and the websites of major publishers, shows that 53% of the retraction notices do not specify who initiated the retraction. Nearly 10% of the retraction notes either omit or do not contain information related to reasons for retractions. Furthermore, most of the retracted papers in our sample have no limitation section; those who do are commonly unhelpful or irrelevant. The results carry three implications for scientific transparency: retraction notices need to be more informative; limitation sections ought to be a required and even an open section of all published articles; and finally, promoting ‘heroic acts’ in science can positively change the current publishing culture.
  • Wray K. Brad, Andersen Line Edslev, 2018, “Retractions in Science”, Scientometrics, 117, 2009–2019,
    Abstract. Retractions are rare in science, but there is growing concern about the impact retracted papers have. We present data on the retractions in the journal Science, between 1983 and 2017. Each year, approximately 2.6 papers are retracted; that is about 0.34% of the papers published in the journal. 30% of the retracted papers are retracted within 1 year of publication. Some papers are retracted almost 12 years after publication. 51% of the retracted papers are retracted due to honest mistakes. Smaller research teams of 2–4 scientists are responsible for a disproportionately larger share of the retracted papers especially when it comes to retractions due to honest mistakes. In 60% of the cases all authors sign the retraction notice.
  • Xu Shaoxiong (Brian), Hu Guangwei, 2018, “Retraction Notices: Who Authored Them?”, Publications, 6 (1), 2,
    Abstract. Unlike other academic publications whose authorship is eagerly claimed, the provenance of retraction notices (RNs) is often obscured presumably because the retraction of published research is associated with undesirable behavior and consequently carries negative consequences for the individuals involved. The ambiguity of authorship, however, has serious ethical ramifications and creates methodological problems for research on RNs that requires clear authorship attribution. This article reports a study conducted to identify RN textual features that can be used to disambiguate obscured authorship, ascertain the extent of authorship evasion in RNs from two disciplinary clusters, and determine if the disciplines varied in the distributions of different types of RN authorship. Drawing on a corpus of 370 RNs archived in the Web of Science for the hard discipline of Cell Biology and the soft disciplines of Business, Finance, and Management, this study has identified 25 types of textual markers that can be used to disambiguate authorship, and revealed that only 25.68% of the RNs could be unambiguously attributed to authors of the retracted articles alone or jointly and that authorship could not be determined for 28.92% of the RNs. Furthermore, the study has found marked disciplinary differences in the different categories of RN authorship. These results point to the need for more explicit editorial requirements about RN authorship and their strict enforcement.
  • Editorial, 2021, “Breaking the stigma of retraction”, Nature Human Behaviour, 5, 1591,
    Abstract. Retractions are a key tool for maintaining the integrity of the published record. We need to recognize and reward researchers, especially early-career researchers, who do the right thing in coming forward with a request to retract research that cannot be relied upon due to honest error.