
The Difference Between a Published & Unpublished Dissertation


A dissertation is the central element in the completion of a Ph.D. The quality that differentiates a doctoral dissertation from a master's thesis or an undergraduate thesis is that it must make an original contribution to its field, usually through primary research. The structure and content of a completed doctoral dissertation are often very different from the structure required for articles or books that are based on it.

Unpublished Dissertations

When a Ph.D. candidate completes her dissertation, this usually results in three or four copies: one each for the candidate, the dissertation supervisor, the university library and sometimes an archive. Unless a dissertation is subsequently published, these are the only copies that are ever created. What this means in practical terms is that unpublished dissertations are almost never widely read. The vast majority of dissertations serve their purpose of gaining a Ph.D. for their author and then fade into obscurity. If you write a dissertation that you want to have an impact, you will need to revise it and publish it in some form.

One of the easiest options for getting your research into published form is to revise a single chapter into an article for a peer-reviewed journal in your field. The difference between this article and an unpublished dissertation is clear: the article appears in a journal that is printed in thousands of copies and distributed to influential academics around the world. In most cases, the journal's editors will want the dissertation chapter reworked to some extent to make it more accessible to readers who are probably not experts in that particular subject matter.

Motivated dissertation authors often seek to have their dissertations published in book form. As with journal articles, books that are based on dissertations need to be reworked. This usually takes the form of downplaying the methodology and literature-review sections, cutting down on the density of footnotes and references and generally making the text more readable to non-specialists. A published book can get your name out in your academic field and to the world in general. Having a book and some published articles in your field will be helpful to you in advancing your academic career. Within academia, an unpublished dissertation is really nothing more than a prerequisite.

Online Publishing

The Internet has opened up tremendous new opportunities for academic publishing. While having your work accessible online doesn't carry the same weight with hiring committees as an article in a peer-reviewed journal, or better yet a book, it is an effective way to make yourself and your work known, as long as you get it published in the right places. Making contacts through online publishing can be an effective stepping stone toward breaking into journals and book publishing. It's also a useful way to get feedback from other academics about your work.




APA Referencing - Education & CCSC students: Unpublished or informally published work


Unpublished or informally published work

How to reference an unpublished or informally published work.

As with all referencing in academic writing, the point is to establish the authority of the source you are relying upon as evidence to support the claims you make. This is the reason for peer review: it is a process that establishes the authority of a work through expert checking, and peer-reviewed published works are accepted as having greater authority than works that are not peer reviewed. Sometimes, however, the most useful research article is not available as a peer-reviewed publication but is available in an unpublished form. Use other peer-reviewed articles if possible, but if published research reports are lacking and, for example, a pre-press version is available directly from the author, you may use it. Check whether the article has been published before submitting your final assignment or thesis and, if it has, reference the final version, taking into account any changes that the editors may have required in the peer-review process.

Unpublished and informally published works include:

  • work in progress
  • work submitted for publication
  • work prepared for publication but not submitted

Such works may be found on:

  • a university website
  • an electronic archive such as academia.edu or ResearchGate
  • the author's personal website

Reference list format:

Author, A. A. (Year). Title of manuscript. Unpublished manuscript [or "Manuscript submitted for publication," or "Manuscript in preparation"].

If the unpublished manuscript is from a university, give this information at the end.

If you locate the work on an electronic archive, give this information at the end.

If a URL is available, give it at the end. 

If you use a pre-print version of an article that is later published, reference the published version.
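For anyone assembling bibliographies programmatically, the template above can be built up mechanically. The following is a minimal Python sketch; the function and field names are our own illustration, not part of APA guidance.

```python
# Minimal sketch: build an APA-style reference for an unpublished
# manuscript following the template above. Names are illustrative.

def apa_unpublished_reference(author, year, title,
                              status="Unpublished manuscript",
                              university=None, url=None):
    """Return 'Author, A. A. (Year). Title. Status. [University.] [URL]'."""
    parts = [f"{author} ({year}). {title}. {status}."]
    if university:            # university information goes at the end
        parts.append(f"{university}.")
    if url:                   # a URL, if available, goes last
        parts.append(url)
    return " ".join(parts)

print(apa_unpublished_reference(
    "Doe, J.", 2018, "How to take over the world",
    status="Manuscript submitted for publication"))
# -> Doe, J. (2018). How to take over the world. Manuscript submitted for publication.
```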


Robert Gordon University


APA Referencing: Unpublished Works


Unpublished piece of writing (book, article, etc.)

If you download an article from a web repository, such as a preprint, postprint or e-print, you should reference it as an e-print (see the page on referencing preprints/eprints). An article on the internet is considered to be informally published. An example of unpublished work might be an article that you have written, or one sent to you by the author, which has not been published or has been submitted for publication with no decision yet.

Author(s) (Year). Title of manuscript. [Unpublished manuscript] or [Manuscript in preparation] or [Manuscript submitted for publication].

Doe, J. (2018). How to Take Over the World. [Unpublished manuscript].

The citation in your text will be:

(Doe, 2018)

or, if you have quoted directly,

(Doe, 2018, p. 16).

If you have used the author's name in your sentence, then only the year of publication, with a page reference if necessary, is placed after it in brackets, e.g.

Doe (2018) suggests that ...

Doe (2018, p. 16) states that ...

Other Examples

Unpublished manuscript associated with university (example from the APA Manual)

Blackwell, E. & Concord, P. J. (2003). A Five-Dimensional Measure of Drinking Motives [Unpublished manuscript]. Department of Psychology, University of British Columbia.

Manuscript submitted to a journal for publication (example from the APA Manual)

Ting, J. Y., Florsheim, P. & Huang, W. (2008). Mental health help-seeking in ethnic minority populations: A theoretical perspective [Manuscript submitted for publication].

(Ting, Florsheim & Huang, 2008)

Informally published or work published by self on website, not dated

Informally published work (e.g. on an author's website) is not unpublished, so this is not indicated in square brackets. Such work is often cited like a webpage.

Ajzen, I. (n.d.). Designing a TPB Intervention. http://people.umass.edu/aizen/tpb.html

(Ajzen, n.d.)



Copyright and Unpublished Material

An introduction for users of archives and manuscript collections, this text is intended to answer questions you may have about archives and manuscript collections that may be protected by copyright. Because copyright law is constantly evolving, this text is provided for introductory and educational purposes only. It is not intended to be a complete discussion of the subject and is not a substitute for qualified legal advice. Other countries have different rules; this document applies only to U.S. law.

Frequently Asked Questions

I want to use material from the archives. What do I need to know?

U.S. Copyright law governs, among other things, using copyrighted material in research papers, published books and articles, web pages, exhibits, plays, songs, etc. Ultimately, you are responsible for determining whether you need permission to make use of a work.

What is protected?

Copyright protects works of original authorship the moment a work is fixed in some tangible form. Exceptions are works produced by the U.S. government and some state governments. Under U.S. law, the simple act of fixing the work in a “tangible medium” is sufficient to establish the creator’s copyright in unpublished material—no copyright statement (e.g., © 2014) is mandated, nor does the item need to be registered with the Copyright Office. The law distinguishes between published and unpublished material and the courts often afford more copyright protection to unpublished material when an asserted fair use is challenged.

How can I tell if something is published or unpublished?

The law defines “publication” as offering for distribution or actually distributing copies of a work to the public by sale or other transfer of ownership, or by rental, lease, or lending. Publication has been interpreted by the courts as distribution to numerous individuals who are under no explicit or implicit restrictions with respect to the use of the contents. An informational text, such as this one, is published if it is distributed to the public, whether or not it is offered for sale. Generally, material is considered unpublished if it was not intended for public distribution or if only a few copies were created and distribution was limited.

How long does copyright in an unpublished work last?

Copyright in an unpublished work lasts for the life of the author plus 70 years. If the author (or the author’s death date) is unknown or if the author is a corporate body, then the term is 120 years from the creation date for the work. Therefore much unpublished material in archives or manuscript collections is likely to still be under copyright.
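To make the two term rules concrete, here is a minimal sketch (in Python; the function is hypothetical and deliberately simplified, an illustration rather than legal advice):

```python
# Simplified sketch of U.S. copyright terms for UNPUBLISHED works,
# per the two rules above. Real determinations have many more wrinkles;
# this is an illustration, not legal advice.

def unpublished_copyright_expiry(creation_year, author_death_year=None):
    """Return the last year an unpublished work remains under copyright."""
    if author_death_year is not None:
        return author_death_year + 70      # life of the author + 70 years
    # Author unknown (or a corporate body): 120 years from creation.
    return creation_year + 120

# A letter whose author died in 1960 is protected through 2030:
print(unpublished_copyright_expiry(1950, author_death_year=1960))  # 2030
# An anonymous 1950 manuscript is protected through 2070:
print(unpublished_copyright_expiry(1950))                          # 2070
```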

Can the archives or manuscript repository give me permission to publish an unpublished work?

The fact that the archives or repository holds the physical document does not mean it also owns the copyright. Many donors or sellers, when they transfer collections, retain the creative rights to the material for which they are the rights holder. (Archives that serve as the repository of record for materials created by a parent organization will be able to communicate the organization’s procedures for managing copyright permission requests.) Only when rights holders assign the copyright in the work to a repository can the latter (and only it) give you permission to publish. But even when copyrights are transferred along with a collection, the repository may not receive copyright in all of the material, whether analog or born-digital. This is because rights holders can transfer only the copyrights they own, and in most cases donors will own copyright only in material they created. For example, donors would generally own copyright in photographs they took or in letters they wrote to others; however, they may not own (and therefore could not transfer) copyrights in photographs taken of them by someone else or letters they received written by others. The original rights holders may also have transferred copyright to a third party, such as a publisher, and thus no longer own the rights to works they originally made.

Note that the repository that owns the item you wish to publish may charge fees for publication (even if it may not own copyright in the work) in addition to any fees a rights holder might charge. Any such stipulation is separate from copyright permission and is determined by a repository’s use policies.

Why can you give me a copy of an unpublished work but not give me permission to publish it?

Sections 107 and 108 of copyright law provide archives and libraries with a limited authority to make copies of copyrighted material without permission under certain conditions, such as when the copy is to be used for private study, scholarship, or research.

Is there any way I can use an unpublished work without permission from the copyright holder?

The fair use doctrine (as codified in Section 107) recognizes that there are uses that do not infringe on the rights of copyright holders and provides a defense for the use of copyrighted works without permission from the copyright owner. The statute does not say what is or is not fair. Rather, courts evaluate fair use cases based on four factors, no one of which is determinative in and of itself:

  • The purpose and character of the use: How are you using the copyrighted work, and in what context? The statute lists several examples of the kinds of uses that might be fair—“criticism, comment, news reporting, teaching, scholarship, and research.” This list is not all-inclusive and some uses that fall under one of these might not be fair. Commercial uses can be fair, but courts tend to give more weight to noncommercial uses. Recently, courts have primarily been asking if the use is transformative; does it “merely supersede” the original work or does it add “something new, with a further purpose or different character, altering [it] with new expression, meaning, or message?” [Campbell v. Acuff-Rose Music]
  • The nature of the copyrighted work: Is the work you are using published or unpublished? Is it highly creative or primarily factual? Courts give more protection to works that are “closer to the core of copyright protection,” such as unpublished or highly creative works. [Campbell v. Acuff-Rose Music]
  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole: There is no predetermined amount of a work that constitutes fair use or that is automatically an unfair use. Determining factors include how much of the copyrighted work was used, the relative importance of the amount used to the work as a whole (whether the portion used constituted “the heart of the work,” for example), and whether the amount used was justified by the purpose and character of the use. [Harper & Row v. Nation; Campbell]
  • The effect of the use upon the potential market for, or value of, the copyrighted work: This factor assesses how, and to what extent, the use damages the existing and potential market for the original. Courts have recognized that where uses are highly transformative under the first factor, the affected markets receive less protection. [Castle Rock v. Carol]

How can I determine if a proposed use is fair?

Determining that a use may be fair involves conducting an analysis along a continuum of “less likely fair” to “more likely fair.” Helpful aids in conducting such an analysis can be found at the resource page of the Society of American Archivists’ Intellectual Property Working Group.

Could I be sued for using someone else’s work even though it seems a fair use?

Yes, it is possible. Because of the case-specific nature of fair use, you can only know whether a use is fair if a court rules it to be so. However, several authorities have produced guides to help people take advantage of this vague, but very useful, exception to copyright. If there is a question about whether a particular use is fair, it is always safe to seek permission.

What if I cannot determine who owns the copyright or if I am unable to locate a known copyright holder?

The Society of American Archivists’ Intellectual Property Working Group has produced a document that is designed to provide guidance on this dilemma—commonly known as the Orphan Works problem—and that suggests search strategies for identifying the creator of a work, identifying the work’s copyright holder, and for locating the copyright holder. 

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

This brochure was created by the Intellectual Property Working Group of the Society of American Archivists.


Guide to Sources for Finding Unpublished Research

This guide takes you through the tools and resources for finding research in progress and unpublished research in Paramedicine. It covers:

  • Research Networks
  • Conference Proceedings
  • Clinical Research in Progress
  • Grey Literature
  • Institutional Repositories
  • Preprint Servers
  • Finding Theses

What do we mean by unpublished?  

Typically we mean anything that is publicly available on the internet but isn't published formally as a journal article or in conference proceedings. By their nature these "unpublications" are varied, but they might include things like:

  • preprints: work in progress or an early version of an article intended for publication, made available for comment by interested researchers
  • presentations, posters and conference papers published on personal websites or research networks like ResearchGate or Mendeley
  • theses and dissertations published on the web or through repositories

Unpublished research can be harder to find, for a number of reasons. There is no one place to look, so you have to dig a little deeper; the tools you can use to do this are covered in this guide. There also isn't that much of it: Paramedic researchers are relatively few and widely dispersed, both geographically and across different organizations (academic and EMS/ambulance services), and compared with similar areas Paramedic research is in the early stages of development. To use an analogy, Paramedic research is still taxiing up the runway while other areas are already up and flying. Finding it is not impossible; it's just harder than in more established research areas.

Why would you want to look?

If you are wondering why you would want to search for unpublished material, there could be a number of reasons:

  • Completeness: you may need to cover a topic completely, including work in progress and projects and ideas that haven't made it to formal publication.
  • Real-world examples and case studies: not every project or implementation will make it to formal publication, but it may be reported informally as a presentation, thesis or dissertation.
  • Currency: the lengthy publication process encourages researchers to find alternative routes to promote research in progress, share ideas and inform current practice. Typically this means preprints, but there are other informal methods such as copies of posters and presentations.


Copyright Alliance


Copyright Published vs. Unpublished Work

Often, when and how a copyright owner registers a copyrighted work will depend on whether that work is published or unpublished: the registration requirements differ under the law. The phrase "under the law" is important here because what an ordinary person might consider published does not necessarily correspond to the Copyright Office's definition of the term. Moreover, sometimes even knowing these definitions doesn't help, because the term is somewhat ambiguous in how it applies to new digital environments. So this is one area in which to proceed with caution.

Copyright Published Work Definition

The Copyright Office's definition of published includes the distribution of copies of a work to “the public by sale or other transfer of ownership, or by rental, lease, or lending,” and offering to distribute copies … “to a group of persons for purposes of further distribution, public performance, or public display, constitutes publication.” A public performance or display of a work does not by itself constitute publication.

Online Work: Published vs. Unpublished

In the online environment this gets confusing. For example, a blog post or a photo posted on a website might be considered a “distribution of copies,” which would mean it is a published work under the definition, or it could be a “public display,” which would mean it is unpublished.


10.3.2  Including unpublished studies in systematic reviews

Publication bias clearly is a major threat to the validity of any type of review, but particularly of unsystematic, narrative reviews. Obtaining and including data from unpublished trials appears to be one obvious way of avoiding this problem. Hopewell and colleagues conducted a review of studies comparing the effect of the inclusion or exclusion of ‘grey’ literature (defined here as reports that are produced by all levels of government, academics, business and industry in print and electronic formats but that are not controlled by commercial publishers) in meta-analyses of randomized trials (Hopewell 2007b). They included five studies (Fergusson 2000, McAuley 2000, Burdett 2003, Hopewell 2004), all of which showed that published trials had an overall greater intervention effect than grey trials. A meta-analysis of three of these studies suggested that, on average, published trials showed a 9% larger intervention effect than grey trials (Hopewell 2007b).
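The comparison behind that 9% figure is a standard meta-analytic one: pool the log effect estimates within each subgroup by inverse-variance weighting, then take the ratio of the pooled estimates. The sketch below illustrates the calculation in Python with invented numbers; it is a generic fixed-effect illustration, not a reproduction of the Hopewell analysis.

```python
import math

# Fixed-effect (inverse-variance) pooling of log odds ratios, then the
# ratio of pooled odds ratios between published and grey trials.
# All study numbers below are invented, purely to illustrate the method.

def pool(log_ors, standard_errors):
    """Inverse-variance weighted mean of log odds ratios."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    return sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

published = pool([math.log(0.70), math.log(0.75)], [0.10, 0.12])
grey = pool([math.log(0.80), math.log(0.85)], [0.15, 0.20])

# A ratio of odds ratios below 1 means published trials show a stronger
# (more beneficial) effect than grey trials.
ror = math.exp(published - grey)
print(f"ratio of odds ratios: {ror:.2f} "
      f"(published effect ~{(1 - ror) * 100:.0f}% larger)")
```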

The inclusion of data from unpublished studies can itself introduce bias. The studies that can be located may be an unrepresentative sample of all unpublished studies. Unpublished studies may be of lower methodological quality than published studies: a study of 60 meta-analyses that included published and unpublished trials found that unpublished trials were less likely to conceal intervention allocation adequately and to blind outcome assessments (Egger 2003). In contrast, Hopewell and colleagues found no difference in the quality of reporting of this information (Hopewell 2004).

A further problem relates to the willingness of investigators of located unpublished studies to provide data. This may depend upon the findings of the study, more favourable results being provided more readily. This could again bias the findings of a systematic review. Interestingly, when Hetherington et al., in a massive effort to obtain information about unpublished trials in perinatal medicine, approached 42,000 obstetricians and paediatricians in 18 countries, they identified only 18 unpublished trials that had been completed for more than two years (Hetherington 1989).

A questionnaire assessing the attitudes toward inclusion of unpublished data was sent to the authors of 150 meta-analyses and to the editors of the journals that published them (Cook 1993). Researchers and editors differed in their views about including unpublished data in meta-analyses. Support for the use of unpublished material was evident among a clear majority (78%) of meta-analysts while journal editors were less convinced (47%) (Cook 1993).  This study was recently repeated, with a focus on the inclusion of grey literature in systematic reviews, and it was found that acceptance of inclusion of grey literature has increased and, although differences between groups remain (systematic review authors: 86%, editors: 69%), they may have decreased compared with the data presented by Cook et al. (Tetzlaff 2006).

Reasons for reluctance to include grey literature included the absence of peer-review of unpublished literature. It should be kept in mind, however, that the refereeing process has not always been a successful way of ensuring that published results are valid (Godlee 1999). The team involved in preparing a Cochrane review should have at least a similar level of expertise with which to appraise unpublished studies as a peer reviewer for a journal. On the other hand, meta-analyses of unpublished data from interested sources are clearly a cause for concern.


Searching practices and inclusion of unpublished studies in systematic reviews of diagnostic accuracy

Daniël A. Korevaar
1 Department of Respiratory Medicine, Amsterdam University Medical Centres, University of Amsterdam, the Netherlands

Jean‐Paul Salameh
2 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada

Yasaman Vali
3 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, the Netherlands

Jérémie F. Cohen
4 Department of General Pediatrics and Pediatric Infectious Diseases, Necker‐Enfants Malades Hospital, Assistance Publique‐Hôpitaux de Paris, Paris, France
5 Inserm UMR 1153 (Centre of Research in Epidemiology and Statistics), Paris Descartes University, France

Matthew D. F. McInnes
6 Department of Radiology, University of Ottawa, Ottawa, Canada

René Spijker
7 Cochrane Netherlands, Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, the Netherlands
8 Medical Library, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands

Patrick M. Bossuyt

Associated Data

The full data set is available in the Supporting Information S1.

Abstract

Many diagnostic accuracy studies are never reported in full in a peer‐reviewed journal. Searching for unpublished studies may avoid bias due to selective publication, enrich the power of systematic reviews, and thereby help to reduce research waste. We assessed searching practices among recent systematic reviews of diagnostic accuracy.

We extracted data from 100 non‐Cochrane systematic reviews of diagnostic accuracy indexed in MEDLINE and published between October 2017 and January 2018 and from all 100 Cochrane systematic reviews of diagnostic accuracy published by December 2018, irrespective of whether meta‐analysis had been performed.

Non‐Cochrane and Cochrane reviews searched a median of 4 (IQR 3‐5) and 6 (IQR 5‐9) databases, respectively; most often MEDLINE/PubMed (n = 100 and n = 100) and EMBASE (n = 81 and n = 100). Additional efforts to identify studies beyond searching bibliographic databases were performed in 76 and 98 reviews, most often through screening reference lists (n = 71 and n = 96), review/guideline articles (n = 18 and n = 52), or citing articles (n = 3 and n = 42). Specific sources of unpublished studies were searched in 22 and 68 reviews, for example, conference proceedings (n = 4 and n = 18), databases only containing conference abstracts (n = 2 and n = 33), or trial registries (n = 12 and n = 39). At least one unpublished study was included in 17 and 23 reviews. Overall, 39 of 2082 studies (1.9%) included in non‐Cochrane reviews were unpublished, and 64 of 2780 studies (2.3%) in Cochrane reviews, most often conference abstracts (97/103).

Searching practices vary considerably across systematic reviews of diagnostic accuracy. Unpublished studies are a minimal fraction of the evidence included in recent reviews.

1. INTRODUCTION

Systematic reviews aim to provide a comprehensive and informative summary of the evidence on a certain topic, for example, the effectiveness of an intervention or the accuracy of a diagnostic test. 1 , 2 Unfortunately, a reviewer's job is impeded by the fact that approximately half of all initiated biomedical studies are never reported in full in a peer‐reviewed journal. 3 Unpublished studies are often difficult to identify, making the inclusion of their results in systematic reviews a hazardous task. This may lead to flawed and overoptimistic review conclusions, when studies with more optimistic results are published more often. Among systematic reviews of therapeutic interventions, it has been documented that published trials report, on average, a 9% greater treatment effect than unpublished ones. 4

For diagnostic accuracy studies, evidence of similar selective publication practices is still scarce, yet increasing. In recent years, a number of evaluations assessed publication rates among completed diagnostic accuracy studies, identifying that approximately a quarter to half of them failed to reach full‐text publication in a peer‐reviewed journal. 5 , 6 , 7 , 8 , 9 Two studies evaluated time from study completion to publication among published diagnostic accuracy studies, both concluding that those reporting higher estimates of diagnostic accuracy were published more rapidly. 10 , 11 It seems plausible that studies reporting higher estimates of diagnostic accuracy also more often reach publication, although this has yet to be demonstrated. 6 , 7 , 8 , 9

To prevent the potential bias from relying only on published evidence in systematic reviews, guidance documents invite reviewers to search for studies that are not reported in peer‐reviewed journals but may be identifiable in, for example, proceedings of scientific conferences or prospective trial registries. 12 , 13 , 14 , 15 , 16 Making efforts to identify unpublished data may also result in more precise estimates of diagnostic accuracy after meta‐analysis and provide better opportunities to investigate sources of heterogeneity in meta‐regression, which is not always possible in standard systematic reviews of diagnostic accuracy, typically due to small numbers of primary studies. 17 As such, including unpublished studies may help to reduce avoidable research waste due to a failure to report studies in full. 18 , 19

The objective of this study was to assess searching practices among recent systematic reviews of diagnostic accuracy, with a special focus on the identification and inclusion of unpublished studies. We suspected that, despite the accumulating evidence that many studies remain unreported, a majority of systematic reviews fail to search for or identify such studies. Given the explicit guidance provided in the Cochrane Handbook for Diagnostic Test Accuracy Reviews and the thorough peer‐review process that protocols for Cochrane systematic reviews undergo before they are initiated, 16 we also evaluated a set of Cochrane reviews.

In this evaluation, unpublished studies were defined as those that had not been reported in full in a peer‐reviewed journal but had only been described or mentioned in, for example, conference abstracts, trial registries, dissertations, repositories, book chapters, or unpublished manuscripts obtained through contact with investigators.

2.1. Selection of systematic reviews

Two sets of systematic reviews of diagnostic accuracy were obtained. First, we used a set of systematic reviews identified in a recently published project on reporting quality of systematic reviews of diagnostic accuracy, for which the full search details have been reported elsewhere. 20 In short, MEDLINE had been searched for systematic reviews of diagnostic accuracy published between 31 October 2017 and 20 January 2018, where the time span had been modulated to reach a convenience sample size of 100 systematic reviews, using the following search strategy: “systematic[sb] AND (sensitivity and specificity[mesh] OR sensitivit*[tw] OR specifit*[tw] OR accur*[tw] OR ROC[tw] OR AUC[tw] OR likelihood[tw]).”
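For readers who want to reproduce such a search programmatically, a query like this can be sent to PubMed's E-utilities API. The following Python sketch is our own boilerplate; only the search term and date window come from the text above.

```python
import requests

# Reproduce the review's MEDLINE/PubMed search via NCBI E-utilities.
# The term and date window are quoted from the text; retmax etc. are ours.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

term = ("systematic[sb] AND (sensitivity and specificity[mesh] OR "
        "sensitivit*[tw] OR specifit*[tw] OR accur*[tw] OR ROC[tw] OR "
        "AUC[tw] OR likelihood[tw])")

resp = requests.get(ESEARCH, params={
    "db": "pubmed",
    "term": term,
    "datetype": "pdat",          # filter on publication date
    "mindate": "2017/10/31",
    "maxdate": "2018/01/20",
    "retmax": 200,
    "retmode": "json",
})
resp.raise_for_status()
result = resp.json()["esearchresult"]
print(result["count"], "records; first PMIDs:", result["idlist"][:5])
```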

In addition, we obtained a set of Cochrane systematic reviews of similar size by searching the Cochrane Library ( www.cochranelibrary.com/cdsr/reviews ) filtering the “type” of systematic review by “diagnostic,” without any additional search terms. We searched from inception onwards until we arrived at a convenience sample of 100 Cochrane systematic reviews. The first Cochrane systematic review of diagnostic accuracy was published in October 2009; the 100th in December 2018.

Both non‐Cochrane and Cochrane systematic reviews were included if they had evaluated the diagnostic accuracy of one or more index tests against a reference standard in humans, independent of whether they had been able to include studies or to perform meta‐analysis. Systematic reviews published in languages other than English were excluded.

2.2. Data extraction

All data extraction was performed by one investigator (DAK) and all extracted information was checked by a second investigator (JPS or YV), who marked 44 datapoints (out of a total of 10,800) for discussion. Disagreements were resolved through discussion. The complete report of each systematic review was read, and the following characteristics were extracted:

2.2.1. General characteristics of included systematic reviews

We extracted type of systematic review (non‐Cochrane vs Cochrane), first author, number of authors, country of corresponding author, year of publication, type of index test under evaluation (imaging test, laboratory test, another type of test, or multiple types of tests), target condition, language restrictions applied, and whether efforts were made to contact authors of included studies for additional data (eg, in case of incomplete reporting). We also extracted all bibliographic databases searched for the review and whether unpublished studies were explicitly eligible for inclusion.

2.2.2. Additional efforts to identify studies

We extracted whether additional efforts were made to identify potentially eligible (published or unpublished) studies beyond searching bibliographic databases (categorized as screening of reference lists of included studies, screening of review articles or clinical guidelines, screening of articles citing included studies, contacting authors or experts, using a “related articles” search feature, contacting device manufacturers, or other), and whether specific sources of unpublished studies were searched (categorized as sources of conference abstracts, trial registries, or other [including specific sources of theses, dissertations, studies in‐progress or other grey literature]).

2.2.3. Systematic review results

Finally, we also extracted total number of studies included in the systematic review, number of unpublished studies included and through which sources these had been identified, number of identified ongoing unpublished studies (ie, studies that fulfilled the inclusion criteria of the systematic review but had not yet been completed) and their sources, whether at least one meta‐analysis had been performed, whether unpublished studies had been included in a meta‐analysis, and whether the authors had pre‐planned a comparison between published and unpublished studies (or a sensitivity analysis excluding unpublished studies) and what the results of this comparison were.

2.3. Data analysis

Quantitative analysis consisted of descriptive statistics. Data on practices for including unpublished studies were reported as frequencies and percentages, or as medians and interquartile ranges (IQR). Data were analyzed for non‐Cochrane and Cochrane systematic reviews separately, as we expected considerable differences in searching practices between the two groups, as has been found for systematic reviews of therapeutic studies. 21 We did not attempt a statistical comparison between non‐Cochrane and Cochrane systematic reviews, as they covered different timeframes; in addition, because we included all published Cochrane systematic reviews, inference to a larger population does not apply. A comparison between published vs unpublished studies among meta‐analyses containing at least three published and three unpublished studies was pre‐planned but not performed due to limited data, as only seven systematic reviews fulfilled this criterion.
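For illustration, the medians and interquartile ranges reported in the results below can be computed as in this small Python sketch (the input values are invented placeholders, not the review's data):

```python
import numpy as np

# Descriptive statistics as used in the review: median and IQR.
# The values below are invented placeholders, not extracted data.
databases_searched = np.array([3, 4, 4, 5, 5, 6, 9, 2, 4, 5])

median = np.median(databases_searched)
q1, q3 = np.percentile(databases_searched, [25, 75])
print(f"median {median:.0f} (IQR {q1:.0f}-{q3:.0f})")
```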

3.1. General characteristics of included systematic reviews

We included 100 non‐Cochrane systematic reviews and 100 Cochrane systematic reviews. An overview of systematic review characteristics and results is provided in Table 1.

[Table 1. Characteristics of included systematic reviews of diagnostic accuracy. Data are absolute numbers, unless otherwise indicated. Abbreviation: IQR, interquartile range.]

The median number of authors was 5 (IQR 4‐7) for non‐Cochrane systematic reviews and 7 (IQR 6‐8) for Cochrane systematic reviews. Corresponding authors were mostly from China (n = 28), United States (n = 13) and South Korea (n = 12) for non‐Cochrane systematic reviews, and from the United Kingdom (n = 50), the Netherlands (n = 9) and Australia (n = 8) for Cochrane systematic reviews. The type of index test under investigation was most often an imaging test (n = 60 and n = 34), followed by a laboratory test (n = 27 and n = 33).

Of the non‐Cochrane systematic reviews, 37/100 explicitly reported in their methods section that no language restrictions were applied, compared to 90/100 Cochrane systematic reviews; those that had applied language restrictions usually restricted inclusion to English only (43 of 56, and 4 of 6). Only seven and four systematic reviews did not report whether language restrictions were applied. Efforts to contact authors in case of incomplete or unclear data were announced or reported by 31 non‐Cochrane systematic reviews and by 78 Cochrane systematic reviews. Of these, 13 and 63 reported that the authors of at least one primary study had actually been contacted, whereas the remaining did not report this information. In addition, 8 and 52 reported that at least some requested data had been obtained after contacting authors of primary studies, whereas the remaining 23 and 26 reported that no data had been obtained or did not report this information.

Non‐Cochrane and Cochrane systematic reviews had searched a median of 4 (IQR 3‐5) and 6 (IQR 5‐9) bibliographic databases, respectively. Databases most often searched were MEDLINE/PubMed (n = 100 and n = 100), Embase (n = 81 and n = 100), at least one database within the Cochrane Library (n = 68 and n = 71), and at least one database within Web of Science (n = 42 and n = 65). Regional databases such as Latin American and Caribbean Health Sciences Literature (LILACS) (n = 13 and n = 39) and African Index Medicus (n = 2 and n = 4) were less often searched. This also applied to Chinese databases such as CNKI and WanFang (n = 11 and n = 0 systematic reviews searched at least one Chinese database).

Of the non‐Cochrane systematic reviews, 10 explicitly reported that they considered (at least one type of) unpublished studies for inclusion, or that they had searched for studies independent of publication status/type. In contrast, 36 systematic reviews explicitly reported that (at least one source of) unpublished studies were not eligible for inclusion: 23 referred to conference abstracts and 13 to unpublished, non‐peer reviewed or grey literature studies in general. The remaining 54 non‐Cochrane systematic reviews did not make explicit comments about whether (a type of) unpublished studies were eligible for inclusion, although 13 of these reported having searched in one or more specific sources of unpublished studies, and 10 included at least one unpublished study.

Of the Cochrane systematic reviews, 42 explicitly reported that they considered (at least one type of) unpublished studies for inclusion, or that they searched for studies independent of publication status/type. In contrast, 10 systematic reviews explicitly reported that (at least one source of) unpublished studies were not eligible for inclusion: eight referred to conference abstracts and two to unpublished studies in general. The remaining 48 Cochrane systematic reviews did not make explicit comments about whether (a type of) unpublished studies were eligible for inclusion, although 35 of these reported having searched in one or more specific sources of unpublished studies (eg, conference proceedings or trial registries), and 9 had included one or more unpublished studies.

3.2. Additional efforts to identify studies

Additional efforts to identify potentially eligible (published or unpublished) studies beyond searching bibliographic databases were performed by 76 non‐Cochrane systematic reviews and by 98 Cochrane systematic reviews: screening of reference lists of included studies (n = 71 and n = 96), searching of relevant review articles or clinical guidelines (n = 18 and n = 52), screening of articles citing included studies (n = 3 and n = 42), contacting authors or experts (n = 6 and n = 37), using a “related articles” search feature (n = 6 and n = 32), or contacting device manufacturers (n = 0 and n = 9). Other efforts to identify studies included screening reports from World Health Organization (WHO; n = 0 and n = 5), websites such as Food and Drug Administration (FDA; n = 1 and n = 3), or specific journals (n = 3 and n = 2).

Specific sources of unpublished studies were searched by 22 non‐Cochrane systematic reviews, and by 68 Cochrane systematic reviews. These included conference proceedings of specific conferences (n = 4 and n = 18), databases only containing conference abstracts (ie, CPCI and/or British Library Zetoc conference search; n = 2 and n = 33), or trial registries (n = 12 and n = 39), most often ClinicalTrials.gov (n = 7 and n = 33). Other efforts to identify unpublished studies included searching ProQuest Dissertations and Theses (n = 3 and n = 6) and OpenGREY (n = 6 and n = 4).

3.3. Systematic review results

The median total number of primary studies included in the systematic reviews was 14.5 (IQR 10‐23) in non‐Cochrane systematic reviews and 15.5 (IQR 8‐41) in Cochrane systematic reviews. At least one unpublished study was included in 17 and 23 systematic reviews; the median number of unpublished studies among these systematic reviews was 1 (IQR 1‐2) and 3 (IQR 1‐3).

In the non‐Cochrane systematic reviews, a total of 2082 primary studies were included. Of these, 39 (1.9%) were unpublished studies; these were conference abstracts (n = 36), a dissertation (n = 1), an unpublished study from the review authors themselves (n = 1), or not reported (n = 1). In the Cochrane systematic reviews, a total of 2780 primary studies were included. Of these, 64 (2.3%) were unpublished studies; these were conference abstracts (n = 61), identified in a trial registry (n = 1), or included in a previous systematic review (n = 2). None of the systematic reviews explicitly reported through which source they had identified the included conference abstracts. Characteristics of the three systematic reviews including the largest numbers of unpublished studies are provided in Table 2.

[Table 2. Systematic reviews of diagnostic accuracy including the largest numbers of unpublished studies. Abbreviations: EUS, endoscopic ultrasound; MRCP, magnetic resonance cholangiopancreatography.]

At least one meta‐analysis was performed in 89 non‐Cochrane systematic reviews vs 71 Cochrane systematic reviews. However, only 14 non‐Cochrane systematic reviews included at least one unpublished study in at least one meta‐analysis, vs 18 of the Cochrane systematic reviews. Overall, eight systematic reviews included at least one unpublished study but did not include them in a meta‐analysis; six of these did not perform meta‐analysis at all, and the other two only performed meta‐analysis on the small proportion of included studies providing sufficient data. A comparison between the results of published vs those of unpublished studies (or a sensitivity analysis excluding unpublished studies) was pre‐planned, as described in Section 2, in 1 and 11 systematic reviews, respectively. However, only three systematic reviews actually reported such an analysis; one did not observe a significant difference between published and unpublished studies, and two identified no influence on the results when excluding unpublished studies. For the remaining nine systematic reviews, the authors indicated that the small number or the absence of unpublished studies prevented them from performing the analysis.

Of the non‐Cochrane systematic reviews, only three explicitly reported whether they had identified ongoing eligible studies (ie, studies that fulfilled the inclusion criteria of the systematic review but had not yet been completed), identifying 0, 2, and 6 ongoing studies. In contrast, 24 Cochrane systematic reviews reported this information: five reported to have identified 0 ongoing studies; the remaining 19 reported to have identified at least one ongoing study (ranging from 1 to 25). Sources through which these 80 ongoing studies were identified were trial registries (n = 56), conference abstracts (n = 5), contact with researchers (n = 2), published in journals (n = 1), and not reported (n = 16).

4. DISCUSSION

We observed that efforts to identify eligible studies varied considerably across recently published systematic reviews of diagnostic accuracy. Only a minority of non‐Cochrane systematic reviews reported having searched for studies not reported in journals, and only a small number of systematic reviews had actually included unpublished studies.

This study is not without limitations. Many systematic reviews did not explicitly report whether they had included unpublished studies. We carefully screened the references of studies included in the systematic reviews to check whether these had been published. Although this was done by two authors, we may have missed unpublished studies, which may have led to an underestimation of the number of unpublished studies included in the evaluated systematic reviews.

We acknowledge that our definition of “unpublished studies” may refer to data that is in fact publicly available, for example, reported in conference abstracts or dissertations. Authors of systematic reviews who explicitly reported that unpublished studies were or were not eligible for inclusion may have used a different definition of “unpublished.” This is, for example, illustrated by the fact that we found a systematic review that explicitly excluded “unpublished studies” but had actually included a conference abstract. 25 Some systematic reviews reported to have obtained additional unreported data by contacting authors of studies published in peer‐reviewed journals. We considered such studies as “published” although the unreported data may have included 2 × 2 tables that ended up in the meta‐analysis.

The adequacy of data extracted in our review completely relies on completeness of reporting in the included systematic reviews. Research has shown that authors of systematic reviews often fail to report critical information. 20 , 26 , 27 , 28 In such cases, the extracted data may not represent the actual methodology used by the reviewers.

Our search for non‐Cochrane and Cochrane systematic reviews covered different timeframes: October 2017 to January 2018 vs October 2009 to December 2018, respectively. For this reason, we did not perform a formal statistical comparison between the two groups. It seems unlikely that non‐Cochrane systematic reviews published prior to 2017 made more efforts to identify unpublished studies.

The Cochrane Handbook for Diagnostic Test Accuracy Reviews explicitly recommends that reviewers locate unpublished studies and include them in a systematic review to minimize risk of bias. 16 Our findings show that, even among Cochrane systematic reviews, efforts to identify unpublished studies are often absent or minimal. The fact that only 1.9% of all primary studies included in the non‐Cochrane systematic reviews were unpublished, and only 2.3% of those included in the Cochrane systematic reviews, indicates that such reviews are highly likely to omit a considerable number of completed diagnostic accuracy studies.

This is worrying for multiple reasons. First, despite the fact that time and effort have been put into performing these studies, and patients may have been exposed to risk by participating in them, their added value to clinical practice is likely to be nil. This is a major source of avoidable research waste and can be considered unethical. 18 , 19 Identifying and including such studies may lead to more precise meta‐analysis results and provide more room for investigating sources of heterogeneity, thereby increasing research value. Second, publication bias looms in meta‐analyses when the results of unpublished studies are systematically different from those of published studies. Among trials of interventions, for example, it has been shown that those with significant findings are more often published than those without. 29 , 30 , 31 Whether this phenomenon also occurs among diagnostic accuracy studies is largely unclear. Some evidence hints at similar selective reporting practices, although other studies could not confirm this. 6 , 7 , 8 , 9 , 10 , 11 In our study, among the three systematic reviews that compared published and unpublished diagnostic accuracy studies included in the meta‐analysis (or performed a sensitivity analysis excluding unpublished studies), none found a significant difference.

Systematic reviewers should be aware of other sources of reporting bias as well. Although almost none of the Cochrane systematic reviews applied language restrictions, the same was true for only 37% of non‐Cochrane systematic reviews. This may introduce language bias, which arises when studies published in languages other than English report systematically less optimistic results. Chinese and other regional databases, which have been shown to contain large numbers of studies not available through databases such as Medline and Embase,32 were searched in only a minority of systematic reviews.

How do systematic reviews of diagnostic accuracy compare with other types of systematic reviews? Several evaluations of search methods among systematic reviews in different fields of research have been performed, with varying results. These also showed that efforts to identify all eligible studies were in many cases suboptimal. Among systematic reviews of adverse effects of medical interventions, for example, 39% searched at least one source of unpublished studies, and 48% of these were able to include unpublished data.33 A recent assessment found that 39% of systematic reviews published in 2014 explicitly reported that both published and unpublished studies were eligible for inclusion, whereas 27% explicitly restricted eligibility to published studies and 34% did not report this information.21 Sources of unpublished data, however, were rarely searched; for example, only 19% of systematic reviewers screened trial registries. Another evaluation, of grey literature in child‐relevant Cochrane systematic reviews, found that only 5.6% were able to include an unpublished study, and such studies represented only 1.9% of all included studies.34

Assessing the risk of publication bias and other reporting biases in a systematic review of diagnostic accuracy is not an easy task. In a set of 114 such systematic reviews, 47 were shown to have used statistical methods to investigate publication bias.35 However, the use of such methods is generally not advised, as they may produce inconsistent results (ie, different statistical methods applied to the same dataset may lead to conflicting inferences) and because heterogeneity in test accuracy may lead to funnel plot asymmetry that does not necessarily imply publication bias.35, 36 For this reason, the Preferred Reporting Items for Systematic Reviews of Diagnostic Test Accuracy (PRISMA‐DTA) guideline does not invite authors to report statistical analyses of publication bias.26 Rather than assessing the risk of publication bias statistically, it seems preferable to limit the potential for such bias by making considerable efforts to identify and include unpublished studies.
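
Purely to illustrate what such statistical methods involve (the paragraph above explains why they are generally not advised), here is a minimal sketch of a Deeks-style funnel plot asymmetry test, the regression approach most often applied to DTA meta-analyses. All numbers are hypothetical.

```python
# Deeks-style asymmetry test: regress lnDOR on 1/sqrt(effective sample size),
# weighting by effective sample size; a slope significantly different from
# zero suggests funnel plot asymmetry. Hypothetical data only.
import numpy as np
import statsmodels.api as sm

lndor = np.array([2.3, 2.0, 2.6, 1.8, 3.1, 2.9])  # per-study lnDOR
n_dis = np.array([40, 80, 25, 120, 15, 20])        # diseased participants
n_non = np.array([60, 90, 30, 150, 20, 25])        # non-diseased participants

ess = 4 * n_dis * n_non / (n_dis + n_non)          # effective sample size
x = 1.0 / np.sqrt(ess)

fit = sm.WLS(lndor, sm.add_constant(x), weights=ess).fit()
print(f"slope = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.3f}")
# As noted above, heterogeneity in test accuracy can itself produce
# asymmetry, so a significant slope is not proof of publication bias.
```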

Previous evaluations have shown that conference proceedings and trial registries in particular are excellent sources of unpublished diagnostic accuracy studies. An evaluation of diagnostic accuracy studies registered in ClinicalTrials.gov found that only 54% reached full‐text publication in a peer‐reviewed journal.5 Similar evaluations of publication rates among diagnostic accuracy studies presented at international conferences in the fields of dementia, ophthalmology, radiology, and stroke found that, respectively, 39%, 57%, 71%, and 76% reached full‐text publication.6, 7, 8, 9 Unfortunately, our evaluation shows that only a minority of systematic reviews of diagnostic accuracy searched these sources.

Still, identifying unpublished studies may be difficult and time‐consuming, as illustrated by the fact that even among systematic reviews that made considerable efforts to identify unpublished studies, most included only a small number, if any. Information reported in conference abstracts is often limited, so they may not be picked up by literature searches.37, 38 In addition, although databases such as the Conference Proceedings Citation Index (CPCI), BIOSIS Previews, and EMBASE contain large numbers of conference abstracts, many conferences are not covered by these databases. In such cases, proceedings of specific conferences may be difficult to access and, in the absence of an electronic search feature, may need to be browsed manually. Trial registries such as ClinicalTrials.gov contain large numbers of ongoing and completed diagnostic accuracy studies, but a literature review found that only 15% of diagnostic accuracy studies published in high‐impact journals were actually registered in a trial registry.39 Even when a conference abstract or registered record of an unpublished study is identified, it may be difficult to include the study in a meta‐analysis because sparse or absent reporting of methodological features or results prohibits proper quality assessment or data extraction.37, 40 An additional concern is that these unpublished studies usually have not undergone a thorough peer‐review process and that their data may be preliminary.41 Future research may focus on establishing the optimal sources for identifying unpublished diagnostic accuracy studies.
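
As one example of how a registry can be searched programmatically, the sketch below queries the ClinicalTrials.gov v2 API for completed studies matching a test-accuracy search term. The endpoint and field names reflect our understanding of the public v2 API and should be verified against its current documentation; the search term is made up.

```python
# Hedged sketch: query the ClinicalTrials.gov v2 API (endpoint and JSON field
# names as we understand them; verify against the current API docs).
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={
        "query.term": "diagnostic accuracy ultrasound appendicitis",  # made-up example
        "filter.overallStatus": "COMPLETED",
        "pageSize": 20,
    },
    timeout=30,
)
resp.raise_for_status()
for study in resp.json().get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], "-", ident.get("briefTitle", ""))
# Each hit must still be checked against bibliographic databases to see
# whether a corresponding full-text publication exists.
```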

In recent years, registration of clinical trials before enrolment of the first participant has been enforced by numerous organizations, such as the International Committee of Medical Journal Editors (ICMJE).42, 43 A major advantage of such registration is that all ongoing, completed, and terminated trials can be identified and included in evidence syntheses. It is highly recommended that researchers also start registering their diagnostic accuracy studies.44, 45, 46 The Standards for Reporting of Diagnostic Accuracy Studies (STARD) group recently established guidance on how to register a diagnostic accuracy study in an informative manner in existing clinical trial registries.40, 47 It found that the majority of existing clinical trial registries accept registration of such studies.

In conclusion, although large numbers of diagnostic accuracy studies are never reported in full in a peer‐reviewed journal, such studies make up only a tiny fraction of the evidence included in systematic reviews. This represents a major source of avoidable waste of research efforts and funds. Failure to include unpublished studies may lead to a partial and biased view of the available evidence. We recommend that reviewers increase their efforts to identify unpublished diagnostic accuracy studies and include them in their evidence syntheses.

5. POTENTIAL IMPACT FOR RSM READERS?

Including unpublished studies in systematic reviews may reduce bias due to selective publication, can increase the power of explorations of heterogeneity in meta‐analysis, and should help in reducing avoidable research waste.

Supporting information

Appendix S1: Supporting Information

Korevaar DA, Salameh J‐P, Vali Y, et al. Searching practices and inclusion of unpublished studies in systematic reviews of diagnostic accuracy. Res Syn Meth. 2020;11:343–353. doi:10.1002/jrsm.1389


Publication and related biases in health services research: a systematic review of empirical evidence

  • Abimbola A. Ayorinde 1 ,
  • Iestyn Williams 2 ,
  • Russell Mannion 2 ,
  • Fujian Song 3 ,
  • Magdalena Skrybant 4 ,
  • Richard J. Lilford 4 &
  • Yen-Fu Chen   ORCID: orcid.org/0000-0002-9446-2761 1  

BMC Medical Research Methodology, volume 20, Article number: 137 (2020)


Background

Publication and related biases (including publication bias, time-lag bias, outcome reporting bias and p-hacking) have been well documented in clinical research, but relatively little is known about their presence and extent in health services research (HSR). This paper aims to systematically review evidence concerning publication and related bias in quantitative HSR.

Methods

Databases including MEDLINE, EMBASE, HMIC, CINAHL, Web of Science, Health Systems Evidence, Cochrane EPOC Review Group and several websites were searched to July 2018. Information was obtained from: (1) Methodological studies that set out to investigate publication and related biases in HSR; (2) Systematic reviews of HSR topics which examined such biases as part of the review process. Relevant information was extracted from included studies by one reviewer and checked by another. Studies were appraised according to commonly accepted scientific principles due to the lack of suitable checklists. Data were synthesised narratively.

Results

After screening 6155 citations, four methodological studies investigating publication bias in HSR and 184 systematic reviews of HSR topics (including three comparing published with unpublished evidence) were examined. Evidence suggestive of publication bias was reported in some of the methodological studies, but the evidence presented was weak and limited in both quality and scope. Reliable data on outcome reporting bias and p-hacking were scant. HSR systematic reviews that compared published literature with unpublished evidence found significant differences in the estimated intervention effects or associations in some but not all cases.

Conclusions

Methodological research on publication and related biases in HSR is sparse. Evidence from available literature suggests that such biases may exist in HSR but their scale and impact are difficult to estimate for various reasons discussed in this paper.

Systematic review registration

PROSPERO 2016 CRD42016052333.


Background

Publication bias occurs when the publication, non-publication or late publication of research findings is influenced by the direction or strength of the results; consequently, the findings that are published, or published early, may differ systematically from those that remain unpublished or whose publication is delayed [1, 2]. Related biases may also arise between the generation of research evidence and its eventual publication. These include p-hacking, which involves repeated analyses using different methods or subsets of data until statistically significant results are obtained [3], and outcome reporting bias, whereby only the favourable outcomes among those examined are reported [4]. For brevity, we use the term “publication and related bias” in this paper to encompass these various types of bias (Fig. 1).

[Figure 1: Publication and related biases and other biases at various stages of research]
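
The p-hacking mechanism is easy to demonstrate by simulation. The sketch below, added here for illustration, re-analyses null data across arbitrary subgroups and counts how often at least one "significant" result emerges, far more often than the nominal 5%.

```python
# Simulate p-hacking: slice a null dataset into subgroups and test each one,
# declaring success if any subgroup reaches p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_group, n_subgroups = 2000, 200, 10
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(size=n_per_group)  # two groups with identical outcomes:
    b = rng.normal(size=n_per_group)  # the true effect is exactly zero
    labels = rng.integers(0, n_subgroups, size=n_per_group)
    pvals = [stats.ttest_ind(a[labels == g], b[labels == g]).pvalue
             for g in range(n_subgroups)]
    if min(pvals) < 0.05:
        false_positives += 1

print(f"'Significant' subgroup found in "
      f"{100 * false_positives / n_experiments:.0f}% of null experiments")
```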

Publication bias is a major concern in health care, as biased evidence available to decision makers may lead to suboptimal decisions that (a) negatively affect the care and health of patients and (b) lead to an inefficient and inequitable allocation of scarce resources. This problem has been documented extensively in the clinical research literature [2, 4, 5], and several high-profile cases of non-publication of studies showing unfavourable results have led to the introduction of mandatory prospective registration of clinical trials [6]. By comparison, publication bias appears to have received scant attention in health services research (HSR). A recent methodological study of Cochrane reviews of HSR topics found that fewer than one in 10 of the reviews explicitly assessed publication bias [7].

However, it is unlikely that HSR is immune from publication and related biases, and these problems may be anticipated on theoretical grounds. In contrast with clinical research, where mandatory registration of all studies involving human subjects has long been advocated through the Declaration of Helsinki [8] and publication of the results of commercial trials is increasingly enforced by regulatory bodies, the registration and regulation of HSR studies are much more variable. In addition, HSR studies often examine a large number of factors (independent variables, mediating variables, contextual variables and outcome variables) along a long service-delivery causal chain [9]. The scope for ‘data dredging’ associated with the use of multiple subsets of data and analytical techniques is therefore substantial [10]. Furthermore, there is a grey area between research and non-research, particularly in the evaluation of quality improvement projects [11], which are usually initiated under a service imperative rather than to produce generalizable knowledge. In these settings there are fewer checks against the post hoc temptation to selectively publish “newsworthy” findings from evaluations showing promising results.

The first step towards improving our understanding of publication and related biases in HSR, which is the main aim of this review, is to systematically examine the existing literature. We anticipated that we might find two broad types of literature: (1) methodological research that set out with the prime purpose of investigating publication and related bias in HSR; (2) systematic reviews of substantive HSR topics but in which the authors had investigated the possibility of publication and related biases as part of the methodology used to explore the validity of their findings.

We adopted the definition of HSR used by the United Kingdom’s National Institute for Health Research Health Services & Delivery Research (NIHR HS & DR) Programme: “research to produce evidence on the quality, accessibility and organisation of health services”, including evaluation of how healthcare organizations might improve the delivery of services. The definition is deliberately broad in recognition of the many associated disciplines and methodologies, and is compatible with other definitions of HSR such as those offered by the Agency for Healthcare Research and Quality (AHRQ). We were aware that publication bias may arise in qualitative research [ 12 ], but as the mechanisms and manifestations are likely to be very different, we focused on publication bias related to quantitative research in this review. The protocol for this systematic review was pre-registered in the PROSPERO International prospective register of systematic reviews (2016:CRD42016052333). We followed the PRISMA statement [ 13 ] for undertaking and reporting this review where applicable (see Additional file 1 for the PRISMA checklist).

Methods

Inclusion criteria

Included studies needed to be concerned with HSR related topics based on the NIHR HS & DR Programme’s definition described above. The types of study included were either:

(1) methodological studies that set out to investigate data dredging/p-hacking, outcome reporting bias or publication bias by one or more of: a) tracking a cohort of studies from inception or from a pre-publication stage such as conference presentation to publication (or not); b) surveying researchers about their experiences related to research publication; c) investigating statistical techniques to prevent, detect or mitigate the above biases;

(2) systematic reviews of substantive HSR topics that provided empirical evidence concerning publication and related biases. Such evidence could take various forms such as comparing findings in published vs. grey literature; statistical analyses (e.g. funnel plots and Egger’s test); and assessment of selective outcome reporting within individual studies included in the reviews.

Exclusion criteria

Articles were excluded if they assessed publication and related biases in subject areas other than HSR (e.g. basic sciences; clinical and public health research) or publication bias purely in relation to qualitative research. Biases in the dissemination of evidence following research publication, such as citation bias and media attention bias, were not included since they can be alleviated by systematic search [ 2 ]. Studies of bias relating to study design (such as recall bias) were also excluded. No language restriction was applied.

Search strategy

We used a judicious combination of information sources and searching methods to ensure that our coverage of the relevant HSR literature was as comprehensive as possible. MEDLINE (1946 to 16 March 2017), EMBASE (1947 to 16 March 2017), Health Management Information Consortium (HMIC, 1979 to January 2017), CINAHL (1981 to 17 March 2017), and Web of Science (all years) were searched using indexed terms and text words related to HSR [ 14 ], combined with search terms relating to publication bias. In April 2017 we searched HSR-specific databases including Health Systems Evidence (HSE) and the Cochrane Effective Practice and Organisation of Care (EPOC) Review Group using publication bias related terms. The search strategy for MEDLINE is provided in Appendix 1 (see Additional file  2 ).

For the included studies, we used forward and backward citation searches (using Google Scholar/PubMed and manual check of reference lists) to identify additional studies that had not been captured in the electronic database searches. We searched the webpages of major organizations related to HSR, including the Institute for Healthcare Improvement (USA), The AHRQ (USA), and the Research and Development (RAND) Corporation (USA), Health Foundation (UK), King’s Fund (UK) (last searched on 20th September 2017). We also searched the UK NIHR HSDR Programme website and the US HSRProj (Health Services Research Projects in Progress) database for previously commissioned and ongoing studies (last searched on 20th February 2018). All the searches were updated between 30th July and 2nd August 2018 in order to identify any new relevant methodological studies. Members of the project steering and management committees were consulted to identify any additional studies.

Citations retrieved were imported and de-duplicated in the EndNote software, and were screened for relevance based on titles and abstracts. Full-text publications were retrieved for potentially relevant records and articles were included/excluded based on the selection criteria described above. The screening and study selection were carried out by two reviewers independently, with any disagreement resolved by discussion with the wider research team.

Data extraction

Methodological studies

For the included methodological studies that set out to examine publication and related biases, a data extraction form was designed to collect the following information: citation details; methods of selecting the study sample; characteristics of the study sample; methods of investigating publication and related biases; key findings; limitations; and conclusions. Data extraction was conducted by one reviewer and checked by another reviewer.

Systematic reviews of substantive HSR topics

For systematic reviews that directly compared published literature with grey literature/unpublished studies, the following data were collected by one reviewer and checked by another: the topic being examined; methods used to identify grey literature and unpublished studies; findings of comparisons between published and grey/unpublished literature; limitations and conclusions. A separate data extraction form was used to collect data from the remaining HSR systematic reviews. Information concerning techniques used to investigate publication bias and outcome reporting bias was extracted along with findings of these investigations. Due to the large number of identified HSR systematic reviews falling into this category, the data extraction was carried out only by a single reviewer.

Risk of bias assessment

No single risk of bias assessment tool could capture the dimensions of quality for the types of methodological studies included [ 2 ]. We therefore critically appraised individual methodological studies and systematic reviews directly comparing published vs unpublished evidence on the basis of adherence to commonly accepted scientific principles, including: representativeness of published/unpublished HSR studies being examined or health services researchers being surveyed; rigour in data collection and analysis; and whether attention was paid to factors that could confound the association between study findings and publication status. Each study was read by at least two reviewers and any methodological issues identified are presented as commentary alongside study findings in the results section. No quality assessment was carried out for the remaining HSR systematic reviews, as we were only interested in their findings in relation to publication and related biases rather than the effects or associations examined in these reviews per se. We anticipated that it would not be feasible to use quantitative methods (such as funnel plots) for evaluating potential publication bias across studies due to heterogeneous methods and measures adopted to assess publication bias in the methodological studies included in this review.

Data synthesis and presentation

As included studies used diverse approaches and measures to investigate publication and related biases, meta-analyses could not be performed. Findings were therefore presented narratively [ 15 ].

Results

Literature search and selection

The initial searches of the electronic databases yielded 6155 references, which were screened on the basis of titles/abstracts. The full texts of 422 of these, plus six additional articles identified from other sources, were then retrieved and assessed (Fig. 2). Two hundred and forty articles did not meet the inclusion criteria, primarily because they reported no empirical evidence on publication and related biases or because their subject areas lay outside the domain of HSR as described above. An updated search yielded 1328 new records, but no further relevant methodological studies were identified.

[Figure 2: Flow diagram showing the study selection process]

We found four methodological studies that set out with the primary purpose of investigating publication and related biases in HSR [16, 17, 18, 19]. We identified 184 systematic reviews of HSR topics in which the review authors looked for evidence of publication and related biases. Three of these 184 systematic reviews provided direct evidence on publication bias by comparing findings of published articles with those of grey literature and unpublished studies [20, 21, 22]. The remaining 181 reviews provided only indirect evidence on publication and related biases (Fig. 2).

Methodological studies setting out to investigate publication and related biases

The characteristics of the four included methodological studies are presented in Table  1 . Three studies [ 16 , 17 , 19 ] explored the presence or absence of publication bias in health informatics research. The remaining study [ 18 ] focused on p-hacking or reporting bias that may arise when authors of research papers compete by reporting ‘more extreme and spectacular results’ in order to optimize chances of journal publication. A brief summary of each of the studies is provided below.

Only one of the four studies was an inception cohort study, tracking individual research projects from their start; such a design provides direct evidence of publication bias [19]. This study assessed publication bias in clinical trials of electronic health records registered with ClinicalTrials.gov between 2000 and 2008 and reported that results from 76% (47/62) of completed trials were subsequently published. Of the published studies, 74% (35/47) reported predominantly positive results, 21% (10/47) reported neutral results (no effect) and 4% (2/47) reported negative/harmful results. Data were available from investigators for seven of the 15 unpublished trials: four reported neutral results and three reported positive results. Based on these data, the authors concluded that trials with positive results are more likely to be published than those with null results, although we noticed that this finding was not statistically significant (see Table 1). The authors cautioned that few trials were registered in the early years of ClinicalTrials.gov, and those that were registered may be more likely to have published their findings and may thus be systematically different from those not registered. They further noted that the registered data were often unreliable during that period.
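
The non-significance the review authors noticed can be checked directly from the counts reported above (35/47 published trials vs 3/7 unpublished trials with positive results):

```python
# Fisher exact test on the 2x2 table implied by the reported counts.
from scipy.stats import fisher_exact

table = [[35, 12],   # published:   positive, not positive
         [3, 4]]     # unpublished: positive, not positive
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p:.2f}")  # p > 0.05
```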

The second study reported a pilot survey of academics designed to assess rates of non-publication in IT evaluation studies and the reasons for any non-publication [16]. The survey asked what information systems the respondents had evaluated in the past 3 years, whether the results of the evaluation(s) were published and, if not, the reasons for non-publication. The findings showed that approximately 50% of the identified evaluation studies were published in peer-reviewed journals, proceedings or books. Of the remaining studies, some were published in internal reports and/or local publications (such as masters' theses and local conference proceedings), and approximately one third were unpublished at the time of the survey. The reasons cited for non-publication included: “results not of interest for others”; “publication in preparation”; “no time for publication”; “limited scientific quality of study”; “political or legal reasons”; and “study only conducted for internal use”. The main limitation of this study is its low response rate, with only 118 of 722 (18.8%) targeted participants providing valid responses.

The third methodological study used three different approaches to assess publication bias in health informatics [17]. However, for one of the approaches (statistical analysis of publication bias/small-study effects) the authors were unable to find enough studies reporting the same outcome measures, while the remaining two approaches (examining the percentage of HSR evaluation studies reporting positive results and the percentage of HSR reviews reaching positive conclusions) provided little information on publication bias, since there is no estimate of what the “unbiased” proportion of positive findings should be for such studies and reviews (Table 1).

The fourth methodological study included in this review examined quantitative estimates of the income elasticity of health care and the price elasticity of prescription drugs reported in the published literature [18]. Using funnel plots and meta-regressions, the authors identified a positive correlation between effect sizes and the standard errors of the income/price elasticity estimates, which suggests potential publication bias [18]. In addition, they found an independent association between effect size and journal impact factor, indicating that, given similar standard errors (which reflect sample sizes), studies reporting larger effect sizes (ie, more striking findings) were more likely to be published in ‘high-impact’ journals. As other confounding factors could not be ruled out for these observed associations and no unpublished studies were examined, the evidence is suggestive rather than conclusive.
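
The analysis described, a regression of effect size against standard error, is closely related to the classic Egger test. A minimal sketch with hypothetical elasticity estimates:

```python
# Egger-type regression: regress the standardized effect (effect / SE) on
# precision (1 / SE); an intercept far from zero indicates funnel plot
# asymmetry, i.e. small-study effects. Hypothetical data only.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.90, 0.60, 1.40, 0.30, 1.10, 0.25, 1.80, 0.35])
ses     = np.array([0.45, 0.30, 0.70, 0.10, 0.50, 0.08, 0.90, 0.12])

z = effects / ses
precision = 1.0 / ses
fit = sm.OLS(z, sm.add_constant(precision)).fit()
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
# As with funnel plots, asymmetry can have causes other than publication
# bias, so the result is suggestive rather than conclusive.
```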

Systematic reviews of HSR topics providing evidence on publication and related bias

We identified 184 systematic reviews of HSR topics in which empirical evidence on publication and related bias was reported. Three of these reviews provided direct evidence on publication bias by comparing evidence from studies published in academic journals with those from grey literature or unpublished studies [ 20 , 21 , 22 ]. These reviews are described in detail in the next sub-section. The remaining 181 reviews only provided indirect evidence and are summarised briefly in the subsequent sub-section and in Appendix 2 (see Additional file  2 ).

HSR systematic reviews comparing published and grey/unpublished evidence

Three HSR systematic reviews made such comparisons [20, 21, 22]. The topics of these reviews and their findings are summarised in Table 2. The first review evaluated the effectiveness of mass mailings for increasing the utilization of influenza vaccine [22], focusing on evidence from controlled trials. The authors found one published study reporting statistically significant intervention effects, but additionally identified five unpublished studies through a Medicare quality improvement project database. All the unpublished studies reported clinically trivial intervention effects (no effect, or an increase of less than two percentage points in uptake). This case illustrates the practical implications of publication bias: the authors highlighted that further mass mailing interventions were being considered by service planners on the basis of the first published study's results at the time the review findings were presented.

The second review compared the grey literature [20] with the published literature [23] on the effectiveness and cost-effectiveness of strategies to improve immunization coverage in developing countries. It found that the quality and nature of the evidence differed between the two sources, and that recommendations about the most cost-effective interventions would differ between the two reviews (Table 2).

The third review assessed nine associations between various measures of organisational culture, organisational climate and nurses' job satisfaction [21]. The author included both published literature and doctoral dissertations in the review; statistically significant differences in the pooled estimates between these two types of literature were found for three of the nine associations (Table 2).

Findings from other systematic reviews of HSR topics

Of the 181 remaining systematic reviews, 100 examined potential publication bias across the included studies using funnel plots and related techniques, and 108 attempted to assess outcome reporting bias within individual included studies, generally as part of the risk of bias assessment. The methods used in these reviews and the key findings in relation to publication bias and outcome reporting bias are summarised in Appendix 2 (see Additional file 2). Fifty-one of the 100 reviews that attempted to assess publication bias found some evidence of its existence (on the assumption that observed small-study effects were caused by publication bias).

For the assessment of outcome reporting bias, reviewers frequently reported difficulty in judging the risk due to the absence of a published protocol for the included studies. For instance, a Cochrane review of the effectiveness of interventions to enhance medication adherence included 182 RCTs and judged eight and 32 RCTs to be at high and low risk of outcome reporting bias, respectively, but the remaining 142 RCTs were judged to be at unclear risk, primarily due to the unavailability of protocols [24]. In the absence of a protocol, some reviewers assessed outcome reporting bias by comparing the outcomes specified in the methods with those presented in the results section, or made subjective judgements on the extent to which all important outcomes were reported. However, the validity of such approaches remains unclear. All but one of the reviews that assessed outcome reporting bias used either the Cochrane risk of bias tool (the checklist developed by the Cochrane Collaboration for assessing the internal validity of individual RCTs) or bespoke tools derived from it. The remaining review, of the effectiveness of interventions for hypertension care in the community, undertook a sensitivity analysis to explore the influence of studies that otherwise met the inclusion criteria but did not provide sufficient data on relevant outcomes [25]. This was achieved by imputing zero effects (with average standard deviations) for the studies with missing outcomes (40 to 49% of potentially eligible studies), including them in the meta-analysis and recalculating the pooled effect, which was considerably reduced although still statistically significant [25]. These reviews illustrate the challenges of assessing outcome reporting bias in HSR and of identifying its potential consequences.
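
The imputation approach used in that review [25] can be sketched as follows; the numbers are hypothetical stand-ins, not data from the review:

```python
# Sensitivity analysis: impute a zero effect (with a typical standard error)
# for eligible studies that reported no usable outcome data, then re-pool.
import math

reported = [(-8.0, 2.0), (-5.5, 1.5), (-7.0, 2.5), (-4.0, 1.8)]  # (effect, SE)
avg_se = sum(se for _, se in reported) / len(reported)
imputed = [(0.0, avg_se)] * 3   # assume three studies with missing outcomes

def pooled(data):
    w = [1 / se ** 2 for _, se in data]
    est = sum(wi * y for wi, (y, _) in zip(w, data)) / sum(w)
    return est, math.sqrt(1 / sum(w))

for label, data in [("reported only", reported),
                    ("with imputed nulls", reported + imputed)]:
    est, se = pooled(data)
    print(f"{label}: {est:.2f} "
          f"(95% CI {est - 1.96 * se:.2f} to {est + 1.96 * se:.2f})")
```

As in the review, the pooled effect moves towards the null once the missing studies are counted as showing no effect.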

Time-lag bias, that is, delay in publication arising from the direction or strength of the study findings, was assessed in one review, which evaluated the effectiveness of interventions for increasing the uptake of mammography in low- and middle-income countries [26]. The authors classified the time lag from the end of the intervention to publication as ≤4 years or >4 years and reported that studies published within 4 years showed a stronger association between intervention and mammography uptake (risk difference: 0.10, 95% CI 0.08 to 0.12) than studies published more than 4 years after completion (0.08, 95% CI 0.04 to 0.11). However, the difference between the two subgroups was very small and not statistically significant (F ratio = 2.94, p = 0.10), and it was not clear whether this analysis and the cut-off defining the subgroups were specified a priori.

Discussion

This systematic review examined current empirical evidence on publication and related biases in HSR. Very few methodological studies directly investigating these issues were found. Nonetheless, the small number of available studies focusing on publication bias suggested its existence: findings of studies were not always reported or published; those that were published often reported positive results and sometimes differed in nature from unpublished evidence, which could affect their applicability and relevance for different users of the evidence. There was also evidence suggesting that studies reporting larger effect sizes were more likely to be published in high-impact journals. However, the methodological weaknesses behind these pieces of evidence do not allow firm conclusions to be drawn.

The reasons for non-publication of HSR findings described in the only survey we found appear similar to those reported for clinical research [27]. Lack of time and interest on the part of the researcher appears to be a major factor, which may be exacerbated when the study findings are uninteresting. Also of note are comments such as “not of interest for others” and “only meant for internal use”. These not only illustrate the context-sensitive nature of evidence in HSR, but also highlight issues arising from the hazy boundary between research and non-research that characterises many evaluations undertaken in healthcare organizations, such as quality improvement projects and service audits. As promising findings are likely to motivate publication of such quality improvement projects, caution is required in interpreting and particularly in generalizing their findings. Another reason given for non-publication in HSR is “political and legal reasons”. Publication bias and restriction of access to data arising from conflicts of interest are well documented in clinical research [2], and one might expect similar issues in HSR. We did not identify methodological research specifically related to the impact of conflicts of interest on publication of findings in HSR, although anecdotal evidence of financial arrangements influencing the editorial process exists [28], and there are debates concerning the public's access to information related to health services and policy [29].

It is currently difficult to gauge the true scale and impact of publication and related biases given the sparsity of high-quality evidence. Among the four methodological studies identified in this review, only one was an inception cohort study providing direct evidence. This paucity of evidence is in stark contrast with a methodological review assessing publication bias and outcome reporting bias in clinical research, which found 20 inception cohort studies of RCTs [4]. The difference between the two fields is likely to be partly attributable to the less frequent use of RCTs in HSR and the lack of requirements for study registration. The lesser reliance on RCTs and the lack of study registration present a major methodological challenge in studying publication bias in HSR, as there is no reliable way to identify studies that have been conducted but not subsequently published.

The lack of prospective study registration poses further challenges in assessing outcome reporting bias, which could be a greater concern for HSR than for clinical research, given HSR's more exploratory approach of examining larger numbers of variables and associations. Empirical evidence on selective outcome reporting has primarily been obtained from RCTs, as study protocols are made available through the trial registration process [4]. Calls for prospective registration of protocols for observational studies have been made [30], and repositories of quality improvement projects are emerging [31]. The HSR and quality improvement communities will need to consider and evaluate the feasibility and value of adopting these practices.

Statistical techniques such as funnel plots and regression methods are commonly used in HSR systematic reviews to identify potential publication bias, as in clinical research. The assumptions (eg, that any observed small-study effects are caused by publication bias) and conditions (eg, at least 10 studies measuring the same effect) attached to the appropriate use of these techniques apply equally to HSR, but the heterogeneity commonly found among HSR studies, resulting from the inherent complexity and variability of service delivery interventions and their interaction with contextual factors [32, 33], may further undermine the validity of funnel plots and related methods [34], so findings from these methods should be treated with caution [35].

In addition to the conventional methods discussed above, newer methods such as p-curves for detecting p-hacking have emerged in recent years [36, 37]. P-curves have been tested in various scientific disciplines [3, 38, 39], although none of the HSR studies we examined used this technique. The validity and usefulness of p-curves remain subject to debate and to the accumulation of further empirical evidence [40, 41, 42, 43].
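
The intuition behind p-curve can be conveyed with a much simpler stand-in: among results reported as significant, genuine effects produce right-skewed p-values (many well below .025), whereas p-hacked null effects pile up just under .05. The sketch below uses a plain binomial test as that stand-in; it is not the full p-curve method of Simonsohn et al., and the p-values are invented.

```python
# Crude p-curve-style diagnostic: test whether significant p-values (< .05)
# cluster below .025, as they should when a genuine effect is present.
from scipy.stats import binomtest

significant_ps = [0.003, 0.011, 0.021, 0.034, 0.041, 0.044, 0.046, 0.049]
k = sum(p < 0.025 for p in significant_ps)
result = binomtest(k, n=len(significant_ps), p=0.5, alternative="greater")
print(f"{k}/{len(significant_ps)} below .025; right-skew p = {result.pvalue:.2f}")
# Here most p-values sit just under .05, so the test finds no evidence of
# evidential value, the pattern p-hacking tends to produce.
```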

Given the limitations of statistical methods, searching grey literature and contacting stakeholders to unearth unpublished studies remain important means of mitigating publication bias, although they are often resource intensive and do not completely eliminate the risk. The findings of Batt et al. (2004) described above highlight that published and grey literature can differ in their geographical coverage and the nature of their evidence [20]. This has important implications given the context-sensitive nature of HSR.

The limited evidence that we found does not allow us to estimate precisely the scale and impact of publication and related biases in HSR. It may be argued that publication bias is less prevalent in HSR than in clinical research because the complexity of health systems often makes it necessary to investigate associations between a large number of variables along the service delivery causal pathway; as a result, HSR studies may be less likely to have completely null results or to depend for their contribution on a single outcome. Conversely, this heterogeneity and complexity may increase the scope for p-hacking and outcome reporting bias in HSR, which are even more difficult to prevent and detect.

A major challenge for this review was to delineate a boundary between HSR and other health/medical research. We used a broad range of search terms and identified a large number of studies, many of which were subsequently excluded after screening. We used the definition of HSR provided by the UK NIHR, so our review may not have covered some areas of HSR if defined more broadly. We combined publication bias related terms with HSR related terms in our searches; as a result, we may not have captured HSR studies that investigated publication and related biases without mentioning them in their titles, abstracts or indexed terms. This is most likely to have occurred for systematic reviews of substantive HSR topics, in which funnel plots and related methods might have been deployed as a routine procedure to examine potential publication bias. Nevertheless, it is well known that funnel plots and related tests have low statistical power, and publication bias is just one of many potential reasons behind the ‘small study effects’ that these methods actually detect [34]. Findings from these systematic reviews are therefore of limited value in confirming or refuting the existence of publication bias. Despite this limitation of the search strategy, we identified and briefly examined more than 180 systematic reviews, as shown in Appendix 2 in the supplementary file, but except for the small number of systematic reviews highlighted in the Results section, very few conclusions in relation to publication bias could be drawn from these reviews.

A further limitation of this study is that we focused on publication and related biases in quantitative studies and did not cover qualitative research, which plays an important role in HSR. It is also worth noting that three of the four included methodological studies relate to the specific sub-field of health informatics, which limits the extent to which our conclusions can be generalised to other subfields of HSR. Lastly, although we searched several databases as well as grey literature, the possibility that the evidence included in this review is itself subject to publication and related biases cannot be ruled out.

Conclusions

There is a paucity of empirical evidence and methodological literature addressing publication and related biases in HSR. While the available evidence suggests the presence of publication bias in this field, its magnitude and impact are yet to be fully explored and understood. Further research evaluating the existence of publication and related biases in HSR, the factors contributing to their occurrence, their impact, and the range of potential strategies to mitigate them is therefore warranted.

Availability of data and materials

All data generated and/or analysed during this review are included within this article and its additional files. This systematic review was part of a large project investigating publication and related bias in HSR. The full technical report for the project will be published in the UK National Institute for Health Research (NIHR) Journals Library: https://www.journalslibrary.nihr.ac.uk/programmes/hsdr/157106/#/

Abbreviations

AHRQ: Agency for Healthcare Research and Quality
EPOC: Effective Practice and Organisation of Care
HSE: Health Systems Evidence
HSR: Health services research
NIHR HS&DR: National Institute for Health Research Health Services & Delivery Research Programme
RCT: Randomised controlled trial

References

1. Hopewell S, Clarke M, Stewart L, Tierney J. Time to publication for results of clinical trials. Cochrane Database Syst Rev. 2007;2:MR000011.
2. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C, Harvey I. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):1–193.
3. Head ML, Holman L, Lanfear R, Kahn AT, Jennions MD. The extent and consequences of p-hacking in science. PLoS Biol. 2015;13(3):e1002106.
4. Dwan K, Gamble C, Williamson PR, Kirkham JJ. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS One. 2013;8(7):e66844.
5. Kicinski M, Springate DA, Kontopantelis E. Publication bias in meta-analyses from the Cochrane database of systematic reviews. Stat Med. 2015;34(20):2781–93.
6. Gulmezoglu AM, Pang T, Horton R, Dickersin K. WHO facilitates international collaboration in setting standards for clinical trial registration. Lancet. 2005;365(9474):1829–31.
7. Li X, Zheng Y, Chen T-L, Yang K-H, Zhang Z-J. The reporting characteristics and methodological quality of Cochrane reviews about health policy research. Health Policy. 2015;119(4):503–10.
8. The World Medical Association. WMA declaration of Helsinki - ethical principles for medical research involving human subjects. In: Current policies. The World Medical Association; 2013. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 26 Apr 2020.
9. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010;341:c4413.
10. Gelman A, Loken E. The garden of forking paths: why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time (2013). http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf. Accessed 25 July 2018.
11. Smith R. Quality improvement reports: a new kind of article. They should allow authors to describe improvement projects so others can learn. BMJ. 2000;321(7274):1428.
12. Toews I, Glenton C, Lewin S, Berg RC, Noyes J, Booth A, Marusic A, Malicki M, Munthe-Kaas HM, Meerpohl JJ. Extent, awareness and perception of dissemination bias in qualitative research: an explorative survey. PLoS One. 2016;11(8):e0159290.
13. Liberati A, Altman DG, Tetzlaff J. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
14. Wilczynski NL, Haynes RB, Lavis JN, Ramkissoonsingh R, Arnold-Oatley AE, the HSR Hedges team. Optimal search strategies for detecting health services research studies in MEDLINE. CMAJ. 2004;171(10):1179–85.
15. Mays N, Pope C, Popay J. Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. J Health Serv Res Policy. 2005;10(Suppl 1):6–20.
16. Ammenwerth E, de Keizer N. A viewpoint on evidence-based health informatics, based on a pilot survey on evaluation studies in health care informatics. JAMIA. 2007;14(3):368–71.
17. Machan C, Ammenwerth E, Bodner T. Publication bias in medical informatics evaluation research: is it an issue or not? Stud Health Technol Inform. 2006;124:957–62.
18. Costa-Font J, McGuire A, Stanley T. Publication selection in health policy research: the winner's curse hypothesis. Health Policy. 2013;109(1):78–87.
19. Vawdrey DK, Hripcsak G. Publication bias in clinical trials of electronic health records. J Biomed Inform. 2013;46(1):139–41.
20. Batt K, Fox-Rushby JA, Castillo-Riquelme M. The costs, effects and cost-effectiveness of strategies to increase coverage of routine immunizations in low- and middle-income countries: systematic review of the grey literature. Bull World Health Organ. 2004;82(9):689–96.
21. Fang Y. A meta-analysis of relationships between organizational culture, organizational climate, and nurse work outcomes (PhD thesis). Baltimore: University of Maryland; 2007.
22. Maglione MA, Stone EG, Shekelle PG. Mass mailings have little effect on utilization of influenza vaccine among Medicare beneficiaries. Am J Prev Med. 2002;23(1):43–6.
23. Pegurri E, Fox-Rushby JA, Damian W. The effects and costs of expanding the coverage of immunisation services in developing countries: a systematic literature review. Vaccine. 2005;23(13):1624–35.
24. Nieuwlaat R, Wilczynski N, Navarro T, Hobson N, Jeffery R, Keepanasseril A, Agoritsas T, Mistry N, Iorio A, Jack S, et al. Interventions for enhancing medication adherence. Cochrane Database Syst Rev. 2014;11:CD000011.
25. Lu Z, Cao S, Chai Y, Liang Y, Bachmann M, Suhrcke M, Song F. Effectiveness of interventions for hypertension care in the community--a meta-analysis of controlled studies in China. BMC Health Serv Res. 2012;12:216.
26. Gardner MP, Adams A, Jeffreys M. Interventions to increase the uptake of mammography amongst low income women: a systematic review and meta-analysis. PLoS One. 2013;8(2):e55574.
27. Song F, Loke Y, Hooper L. Why are medical and health-related studies not being published? A systematic review of reasons given by investigators. PLoS One. 2014;9(10):e110418.
28. Homedes N, Ugalde A. Are private interests clouding the peer-review process of the WHO bulletin? A case study. Account Res. 2016;23(5):309–17.
29. Dyer C. Information commissioner condemns health secretary for failing to publish risk register. BMJ. 2012;344:e3480.
30. Swaen GMH, Urlings MJE, Zeegers MP. Outcome reporting bias in observational epidemiology studies on phthalates. Ann Epidemiol. 2016;26(8):597–599.e594.
31. Bytautas JP, Gheihman G, Dobrow MJ. A scoping review of online repositories of quality improvement projects, interventions and initiatives in healthcare. BMJ Qual Saf. 2017;26(4):296–303.
32. Long KM, McDermott F, Meadows GN. Being pragmatic about healthcare complexity: our experiences applying complexity theory and pragmatism to health services research. BMC Med. 2018;16(1):94.
33. Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.
34. Sterne JAC, Sutton AJ, Ioannidis JPA, Terrin N, Jones DR, Lau J, Carpenter J, Rücker G, Harbord RM, Schmid CH, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343:d4002.
35. Lau J, Ioannidis JPA, Terrin N, Schmid CH, Olkin I. The case of the misleading funnel plot. BMJ. 2006;333(7568):597–600.
36. Simonsohn U, Nelson LD, Simmons JP. P-curve: a key to the file-drawer. J Exp Psychol Gen. 2014;143(2):534–47.
37. Simonsohn U, Nelson LD, Simmons JP. P-curve and effect size: correcting for publication bias using only significant results. Perspect Psychol Sci. 2014;9(6):666–81.
38. Carbine KA, Larson MJ. Quantifying the presence of evidential value and selective reporting in food-related inhibitory control training: a p-curve analysis. Health Psychol Rev. 2019;13(3):318–43.
39. Carbine KA, Lindsey HM, Rodeback RE, Larson MJ. Quantifying evidential value and selective reporting in recent and 10-year past psychophysiological literature: a pre-registered p-curve analysis. Int J Psychophysiol. 2019;142:33–49.
40. Bishop DV, Thompson PA. Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value. PeerJ. 2016;4:e1715.
41. Bruns SB, Ioannidis JPA. P-curve and p-hacking in observational research. PLoS One. 2016;11(2):e0149144.
42. Simonsohn U, Simmons JP, Nelson LD. Better P-curves: making P-curve analysis more robust to errors, fraud, and ambitious P-hacking, a reply to Ulrich and Miller (2015). J Exp Psychol Gen. 2015;144(6):1146–52.
43. Ulrich R, Miller J. Some properties of p-curves, with an application to gradual publication bias. Psychol Methods. 2018;23(3):546–60.


Acknowledgements

We are grateful for the advice and guidance provided by members of the Study Steering Committee for the project.

Funding

This project is funded by the UK NIHR Health Services and Delivery Research Programme (project number 15/71/06). The authors are required to notify the funder prior to the publication of study findings, but the funder does not otherwise have any role in the preparation of the manuscript or the decision to submit and publish it. MS and RJL are also supported by the NIHR Applied Research Collaboration (ARC) West Midlands. The views and opinions expressed herein are those of the authors and do not necessarily reflect those of the HS&DR Programme, NIHR, National Health Service or the Department of Health.

Author information

Authors and affiliations

Warwick Centre for Applied Health Research & Delivery, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, UK

Abimbola A. Ayorinde & Yen-Fu Chen

Health Services Management Centre, School of Social Policy, University of Birmingham, Birmingham, UK

Iestyn Williams & Russell Mannion

Norwich Medical School, University of East Anglia, Norwich, UK

Fujian Song

Institute of Applied Health Research, University of Birmingham, Birmingham, UK

Magdalena Skrybant & Richard J. Lilford


Contributions

YFC and RJL conceptualised the study. AAA and YFC contributed to all stages of the review and drafted the paper. IW, RM, FS, MS and RJL were involved in planning the study, advised on the conduct of the review and the interpretation of the findings. All authors reviewed and helped revise drafts of this paper and approved its submission.

Corresponding author

Correspondence to Yen-Fu Chen .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.


Supplementary information

Additional file 1: PRISMA checklist.

Additional file 2: Appendices.


Cite this article

Ayorinde AA, Williams I, Mannion R, et al. Publication and related biases in health services research: a systematic review of empirical evidence. BMC Med Res Methodol. 2020;20:137. https://doi.org/10.1186/s12874-020-01010-1

Received: 28 January 2019; Accepted: 07 May 2020; Published: 01 June 2020


Keywords

  • Publication bias
  • Outcome reporting bias
  • Dissemination bias
  • Grey literature
  • Research publication
  • Research registration
  • Health services research
  • Systematic review
  • Research methodology
  • Funnel plots


University Library, University of Illinois at Urbana-Champaign


Library Research for Undergraduate History Students: An Introduction


What are Primary Sources?



Primary sources are produced at the time of the event or phenomenon you are investigating, and they purport to document it. They reflect what someone observed or believed about an event at the time it occurred or soon afterwards. These sources provide raw material that you will analyze and interpret. Primary sources can be published or unpublished. 

There are different types of primary sources for different historical periods. For example, church documents and saints' lives serve as primary sources for the study of medieval history, while newspapers, government reports, and photographs serve as primary sources for the modern period. Moreover, what constitutes a primary source depends in part on how you have formulated your research topic. An article in an academic journal from 1984 could be a secondary source because it is part of an ongoing scholarly analysis of your topic, or it could be a primary source because it provides evidence of attitudes and opinions held by people in 1984. In other words, there is no intrinsic or distinguishing feature of a text that makes it a primary, rather than a secondary, source. In fact, many sources, whether visual or textual, can serve as either primary or secondary sources. The key is how you use the material. To determine whether a source might be primary or secondary for your purposes, you must consider it in relation to your particular topic.

Published vs. Unpublished Primary Sources

Unpublished primary sources are original documents and artifacts of all kinds that were created by individuals but not published (that is, made public by being issued in a format that could be widely distributed) during the period you are studying. In the past, only archives and museums preserved these kinds of primary source materials, and researchers had to travel all over the world to use them. With the invention of microfilming and, later, digitization, it became possible to create facsimiles of large collections of primary source materials. Large research libraries like the UIUC Library have extensive collections of microfilm and digital facsimiles of unpublished primary sources. Universities also have rare book libraries and university archives, which hold original unpublished primary source materials.

In general, published primary source material covers a wide range of publications, including first-person accounts, memoirs, diaries, letters, newspapers, statistical reports, government documents, court records, reports of associations, organizations and institutions, treatises and polemical writings, chronicles, saints' lives, charters, legal codes, maps, graphic material (e.g. photographs, posters, advertising images, paintings, prints, and illustrations), literary works and motion pictures. Some of these materials were not published at the time of their creation (e.g. letters) but have subsequently been published in a book. For example, The Selected Papers of Margaret Sanger is a selection from birth control activist Margaret Sanger's letters and other unpublished papers, presented in chronological order, with contextual information provided by expert editors.


How Do I Find Primary Sources?

There are many ways to find digitized primary sources, both published and unpublished, starting with our Digital Collections guide:

  • Digital Collections Guide (History, Philosophy and Newspaper Library)

You can find published primary sources by using library catalogs, research guides, and published bibliographies. You can also look at secondary literature on your topic to ascertain what sources other scholars have used in their research. Our Guide to Primary Source Reprints is another good place to look for published primary sources:

  • Primary Source Reprints (History, Philosophy and Newspaper Library)

To find published primary sources in library catalogs, try these strategies:

  • Search by date of publication to find sources that were published during the time period you're researching; you can also use this strategy in full-text digital collections such as ProQuest Historical Newspapers.

  • Use the library catalog's advanced search option and include one or more of these Library of Congress Subject Heading form subdivisions as subject search terms:

  • Correspondence
  • Personal narratives
  • Early works to 1800
  • Manuscripts

You can find unpublished primary sources in the University of Illinois Library in the library catalog and in the University Archives Holdings Database. You can find materials held by other archives and museums using ArchiveGrid (an inventory of archival finding aids), or using the "archival material" format in WorldCat. Microfilm facsimiles of primary source materials are also included in WorldCat and other library catalogs:

  • University of Illinois Library Catalog Use the Library Catalog to identify books, journals (but not journal articles), microform collections, and digital collections owned by the University of Illinois. The Library Catalog is the primary tool for exploring the collections of the University of Illinois Library, the second largest academic library collection in the United States. In the Library Catalog you can search for books by subject and identify the location within the Library of a particular book or journal. Books and journals are organized in the library by subject: each item is assigned one or more subject headings and a unique call number. Subject headings are standardized terms from the Library of Congress; the call number is based on the Dewey Decimal Classification or Library of Congress Classification. Boolean operators (AND, OR, NOT) must be capitalized if used, and the interface automatically truncates some search terms unless Boolean operators are used within the same query line. You can also browse catalog records by call number, creating a "virtual shelf browsing" experience.
  • University of Illinois Archives Holdings Database Finding aids for archival collections held by the University Archives and the Student Life and Culture Archive.
  • HathiTrust Over 17 million books, periodicals, and government documents digitized by Google, the Internet Archive, Microsoft, and research libraries. About 2,500,000 of these works can be read online (because they are public domain works, and therefore freely accessible).
  • WorldCat WorldCat is a worldwide union catalog created and maintained collectively by more than 9,000 member institutions. With millions of online records built from the bibliographic and ownership information of contributing libraries, it is the largest and most comprehensive database of its kind. You can also use WorldCat on the FirstSearch platform.
  • ArchiveGrid A destination for searching through historical documents, personal papers, and family histories held in archives around the world. Thousands of libraries, museums, and archives have contributed nearly a million collection descriptions to ArchiveGrid. Researchers searching ArchiveGrid can learn about the many items in each of these collections, contact archives to arrange a visit to examine materials, and order copies.

Shelf Browsing

In order to browse the shelves, you need to know the “classification number” for your topic. Once a new book is assigned subject headings, it is then “classified” according to the Dewey Decimal Classification. In Dewey, the first three numbers indicate the main subject, and additional numbers are added after a decimal point to narrow the subject. Books and journals on historical topics are usually classified in the 900s, although much of social history gets classified in the 300s, and film is classified in the 700s.

Once you have identified a few books on your topic by doing a subject search in the online catalog, you can browse the shelf under the same general number(s) to find related works. For example, if you know that the book Slaves on Screen, by Natalie Z. Davis, has the call number 791.43655 D29s, you could go to the Main Stacks to browse the shelves under the same Dewey number to find related material.
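To make the call-number logic concrete: grouping items by their leading Dewey digits is essentially what shelf browsing, physical or virtual, does. The sketch below is purely illustrative; the helper function and the choice of three decimal digits are our own assumptions for demonstration, not an actual library tool:

    # Illustrative sketch only: group call numbers by their leading Dewey digits,
    # the way shelf browsing groups related books. Not an actual library tool.
    def dewey_class(call_number: str, decimals: int = 3) -> str:
        """Return the leading Dewey digits of a call number, e.g. '791.436'."""
        number = call_number.split()[0]            # '791.43655 D29s' -> '791.43655'
        whole, _, frac = number.partition(".")
        return f"{whole}.{frac[:decimals]}" if frac else whole

    shelf = ["791.43655 D29s", "791.4365 T37f", "973.7 M46l"]
    target = dewey_class("791.43655 D29s")         # '791.436' (film, in the 700s)
    print([c for c in shelf if dewey_class(c) == target])
    # ['791.43655 D29s', '791.4365 T37f']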

Because so much of the Library collection is now stored in a high-density, off-site storage facility, it's no longer possible to browse the collection as completely as it once was. You can, however, do "virtual shelf browsing" using the Library Catalog:

  • University of Illinois Library Catalog: Browse Search Use this interface to browse the catalog by author, subject heading, or call number. Choose the type of browse you want to conduct using the drop down menu at the left of the search box.

Struggling?

If you're having trouble finding primary sources for a topic you've already started researching, go back to your secondary literature: what sources have other scholars consulted? These should be cited in the footnotes or endnotes and/or described in an essay in the back of the book.

If you haven't decided on your topic yet, browsing the primary source collections described in the Digital Collections Guide can be a good way to find inspiration. Find a source that interests you, whether it's something you're surprised by, something that doesn't make sense, or just something you'd like to know more about.

  • Guide to Digital Collections (History, Philosophy, and Newspaper Library) Guide to digitized primary source collections, for the most part collections that are owned or licensed by the University of Illinois at Urbana-Champaign Library. Organized by broad discipline (History, Philosophy, Religious Studies, and African American Studies). History is subdivided by region and time period.

Learn More About Primary Sources

If you have time, one of the best guides to conducting serious library research is the Oxford Guide to Library Research:

  • The Oxford Guide to Library Research This guide has long been the standard introduction to library research methods since its first edition in 1993 (Library Research Models).

Don't forget that you can Ask a Librarian for assistance at any stage of your research or, for more in-depth assistance, Schedule a Research Consultation with a subject specialist librarian:

  • Schedule a Research Consultation Browse from the list of the Library's subject specialist librarians, and schedule an appointment with the subject librarian in your field.

Primary Source Village. Urbana, Ill.: University of Illinois, 2006.

Williams, Robert C. "Sources and Evidence." The Historian's Toolbox: A Student's Guide to the Theory and Craft of History. Armonk, N.Y.: M.E. Sharpe, 2003.

Booth, Wayne C., Gregory G. Colomb, and Joseph M. Williams. "From Problems to Sources." The Craft of Research. 3rd ed. Chicago: University of Chicago Press, 2008.

  • Research Methods: Primary Sources Guides, tutorials, case studies, and videos that teach students how to find and use primary sources for historical research.
  • Last Updated: Feb 6, 2024 1:02 PM
  • URL: https://guides.library.illinois.edu/historicalresearch


Plagiarism and duplicate publication


Plagiarism and fabrication

Plagiarism is unacknowledged copying or an attempt to misattribute original authorship, whether of ideas, text or results. As defined by the ORI (Office of Research Integrity), plagiarism can include, "theft or misappropriation of intellectual property and the substantial unattributed textual copying of another's work". Plagiarism can be said to have clearly occurred when large chunks of text have been cut-and-pasted without appropriate and unambiguous attribution. Such manuscripts would not be considered for publication in a Nature Portfolio journal. Aside from wholesale verbatim reuse of text, due care must be taken to ensure appropriate attribution and citation when paraphrasing and summarising the work of others. "Text recycling", or reuse of parts of text from an author's previous research publication, is a form of self-plagiarism. Here too, due caution must be exercised. When reusing text, whether from the author's own publication or that of others, appropriate attribution and citation is necessary to avoid creating a misleading perception of unique contribution for the reader.

Duplicate (or redundant) publication occurs when an author reuses substantial parts of their own published work without providing the appropriate references. This can range from publishing an identical paper in multiple journals, to only adding a small amount of new data to a previously published paper.

Nature Portfolio journal editors assess all such cases on their individual merits. When plagiarism becomes evident post-publication, we may correct, retract or otherwise amend the original publication depending on the degree of plagiarism, its context within the published article and its impact on the overall integrity of the published study. Nature Portfolio is part of Similarity Check, a service that uses software tools to screen submitted manuscripts for text overlap.
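Similarity Check itself is a Crossref service built on commercial software, so the details of its matching are not public. As a rough illustration of the general idea behind text-overlap screening, the sketch below compares word n-grams ("shingles") between two texts; the five-word window and the sample strings are assumptions for demonstration only, not Similarity Check's actual algorithm:

    # Minimal sketch of n-gram ("shingle") overlap between two texts.
    # This is NOT Similarity Check's algorithm -- only the general principle.
    def shingles(text: str, n: int = 5) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission: str, source: str, n: int = 5) -> float:
        """Fraction of the submission's n-grams that also appear in the source."""
        sub, src = shingles(submission, n), shingles(source, n)
        return len(sub & src) / len(sub) if sub else 0.0

    submission = ("plagiarism is unacknowledged copying or an attempt to "
                  "misattribute original authorship of ideas text or results")
    source = ("as defined here plagiarism is unacknowledged copying or an "
              "attempt to misattribute original authorship")
    print(f"{overlap_score(submission, source):.0%} five-word overlap")  # 58% here

Real screening services produce calibrated similarity reports rather than a single score, and editors then judge flagged passages in context, as the policy above describes.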


Due credit for others' work

Discussion of unpublished work

Manuscripts are sent out for review on the condition that any unpublished data cited within are properly credited and the appropriate permission has been obtained. Where licensed data are cited, authors must include at submission a written assurance that they are complying with the originators' data-licensing agreements.

Discussion of published work

When discussing the published work of others, authors must properly describe the contribution of the earlier work. Both intellectual contributions and technical developments must be acknowledged as such and appropriately cited.

Nature Portfolio journals' policy on duplicate publication

Material submitted to a Nature Portfolio journal must be original and not published or concurrently submitted for publication elsewhere.

Authors submitting a contribution to a Nature Portfolio journal who have related material under consideration or in press elsewhere should upload a clearly marked copy at the time of submission, and draw the editors' attention to it in their cover letter. Authors must disclose any such information while their contributions are under consideration by a Nature Portfolio journal, for example, if they submit a related manuscript elsewhere that was not written at the time of the original Nature Portfolio journal submission.

If part of a contribution that an author wishes to submit to a Nature Portfolio journal has appeared or will appear elsewhere, the author must specify the details in the covering letter accompanying the Nature Portfolio submission. Consideration by the Nature Portfolio journal is possible if the main result, conclusion, or implications are not apparent from the other work, or if there are other factors, for example if the other work is published in a language other than English.

Nature Portfolio will consider submissions containing material that has previously formed part of a PhD or other academic thesis which has been published according to the requirements of the institution awarding the qualification.

The Nature Portfolio journals support prior publication on recognized community preprint servers for review by other scientists in the field before formal submission to a journal. More information about our policies on preprints can be found here.

Nature Portfolio journals allow publication of meeting abstracts before the full contribution is submitted. Such abstracts should be included with the Nature Portfolio journal submission and referred to in the cover letter accompanying the manuscript.

In case of any doubt, authors should seek advice from the editor handling their contribution.

If an author of a submission is re-using a figure or figures published elsewhere, or that is copyrighted, the author must provide documentation that the previous publisher or copyright holder has given permission for the figure to be re-published. The Nature Portfolio journal editors consider all material in good faith, on the assumption that their journals have full permission to publish every part of the submitted material, including illustrations.

Nature Portfolio journals' editorials

  • There are tools to detect non-originality in articles, but instilling ethical norms remains essential. Nature. Plagiarism pinioned, 7 July 2010.
  • Scientific plagiarism—a problem as serious as fraud—has not received all the attention it deserves. Nature Medicine. The insider’s guide to plagiarism, July 2009.
  • Tackling plagiarism is becoming an easier fight. Nature Physics. The truth will out, July 2009.
  • Accountability of coauthors for scientific misconduct, guest authorship and deliberate or negligent citation plagiarism highlight the need for accurate author contribution statements. Nature Photonics. Combating plagiarism, May 2009.
  • Plagiarism is on the rise, thanks to the Internet. Universities and journals need to take action. Nature. Clamp down on copycats, 3 November 2005.

Fraud and replication

  • When it comes to research misconduct, burying one's head in the sand and pretending it doesn't exist is the worst possible plan. Nature Chemistry. They did a bad bad thing, May 2011.
  • Commit to promoting best practice in research and education in research ethics. Nature Cell Biology . Combating scientific misconduct, January 2011.
  • Scientific misconduct may be more prevalent than most researchers would like to admit. The solution needs to be wide-ranging yet nuanced. Nature . Solutions, not scapegoats, 19 June 2008.
  • Related Commentary by S. Titus et al. in the same issue of Nature: Repairing research integrity .
  • The use of electronic laboratory notebooks should be supported by all concerned. Nature . Share your lab notes, 3 May 2007.
  • Record-keeping in the lab has stayed unchanged for hundreds of years, but today's experiments are putting huge pressure on the old ways. Nature News Feature. Electronic notebooks: a new leaf, 7 July 2005.
  • The true extent of plagiarism is unknown, but rising cases of suspect submissions are forcing editors to take action. Nature special report. Taking on the cheats , 19 May 2005.

Duplicate publication

  • Clarifying journal policies on overlapping or concurrent submissions and embargo. Nature Neuroscience . Navigating issues of related submission and embargo, July 2014.
  • Duplicate publication dilutes science. Nature Photonics. Quality over quantity , September 2011.
  • On fragmenting one coherent body of research into as many publications as possible. Nature Materials . The cost of salami slicing , January 2005.


Peabody Library, Vanderbilt University Libraries

Tests and Measurements

Published vs. Unpublished Tests

Published Tests


Published or standardized tests are instruments that have been commercially published by a test publisher. These instruments are administered and scored in a consistent, or "standard," manner. The validity and reliability of the instrument are two essential elements in defining the quality of a standardized test. These tests are generally only available from the publisher and often come in the form of kits or multiple booklets. They can be very costly if purchased.

Famous examples of published tests:

  • ACT (formerly American College Testing)
  • GRE (Graduate Record Examination)
  • MCAT (Medical College Admission Test)
  • SAT (formerly Scholastic Aptitude Test)

Unpublished Tests

Unpublished or non-standardized tests are instruments that have been published in books and journals but have not been published by a test publishing company. If you decide to use an unpublished test, there are a few ethical responsibilities involved. First, contact the author of the test to request permission to use it; second, if the material is copyrighted, permission to use the test must be obtained in writing.

Please note: The process to locate the original author of a test or measure may be difficult, but every effort should be made to contact the author or copyright holder if possible.

  • Last Updated: Oct 19, 2023 1:35 PM
  • URL: https://researchguides.library.vanderbilt.edu/PBDY_test


Quetext

How to Cite Yourself

  • Posted on December 22, 2021

You already know that it’s unethical and, in some cases, illegal to use another person’s work without giving them credit. Plagiarism is intellectual theft, whether you’re a professional writer or a student. But did you also know that it’s possible to plagiarize yourself?

Like other forms of plagiarism,  self-plagiarism  can cause severe problems for you professionally and academically if you’re not careful. Here’s everything you need to know about citing yourself so you can avoid unintentionally plagiarizing yourself.

Why Self-Citations Are Important

There are several reasons why it’s essential to self-cite when referencing your prior work. If that previous work is published, for example, then quoting from it without proper citation could be a violation of your publishing agreement.

Even if your work is unpublished, it’s inappropriate to reuse prior work without proper citation and identification. If you’re a student, reusing work from a previous assignment without doing any new work deprives you of the learning opportunity, and it may also be a violation of your high school or college’s academic integrity policy.

There’s not much difference between citing your work and citing someone else’s work in most style guides. As a general rule, you cite your previous work in the same way you cite a similar work by another author.

Let’s say you wrote and published a novel. Under most style guides, if you wanted to quote or reference a novel you wrote, you would cite it in the same format as you would cite a novel by anyone else. Likewise, if you wanted to reference a research paper you wrote, you would cite it using the same format as a research paper completed by someone else.

Published vs. Unpublished Work

Whether you’re a content creator getting paid for a piece of work or a student submitting a paper for high school or college credit, you must cite every piece you reference—including published, scholarly sources as well as unpublished works. This is true whether you’re citing your own work or someone else’s.

However, this issue comes up more often with your own work simply because you're more likely to possess your own unpublished work than another person's. Note, though, that the American Psychological Association (APA) Style Guide and the Modern Language Association (MLA) Style Guide differ somewhat in handling citations of unpublished work.

How do you determine whether your work is published or unpublished? In most cases, it’s pretty straightforward. If your work has appeared in an anthology, journal, or otherwise been made public, it’s a published work. If it hasn’t appeared anywhere and is solely in your possession, then it’s an unpublished work.

Things get a little fuzzier when you consider the work that you’ve shared online. If you’ve posted it somewhere that it can be accessed by the general public, like an online forum, then it’s been informally published and should be cited as a website.

On the other hand, a private document that can only be accessed by people you authorize using a private link is generally considered unpublished. Unless a stranger could access it without your authorization, your own private work is unpublished.

Under the APA Style Guide, a published work is always cited the same way, whether it's your own or someone else's: cite it as you would a similar publication by another author.

However, if you cite your unpublished work, the APA citation style requires you to specify that the work is unpublished. In addition, if you created the work for a particular purpose, you must state that purpose in the citation.

Published Research Paper  – Walter Wombat, a researcher, previously published a research paper in a wildlife journal. Now Walter wants to cite that study in a new paper. He will cite it in the same way as he would another researcher’s published study:

Last Name, Initial(s). (Year of publication). Title of study. Title of Journal, volume number(issue number), page numbers. http://webaddress.com

Wombat, W. (2018). Wombats in the wild: A study. Wildlife Journal, 47(3), 48-63. http://wombatstudies.org

Unpublished Assignment – Walter also wants to cite a previous assignment from his graduate school coursework in the new research study. Under the APA Style Guide, to cite an unpublished student assignment, Walter must identify that the study is unpublished, as well as its purpose:

Last Name, Initial(s). (Year authored). Title of study [Unpublished study submitted for course]. University Name.

Wombat, W. (2020). Wombat teeth grow forever [Unpublished study submitted for Biology 1001]. Marsupial University.

If Walter also cited other sources in his unpublished study, then he must also cite those sources in the reference list of his new work.

Other examples – Citation styles for different types of sources can be found in the complete 7th edition APA Style Guide at the Purdue OWL website.
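To make the two APA patterns above concrete, here is a small hypothetical helper that renders both a published article and an unpublished assignment. It is not an official APA tool; the function names and fields are our own, and the output simply mirrors the templates shown above:

    # Hypothetical helper rendering the two APA patterns described above.
    # Not an official APA tool; function names and fields are illustrative.
    def apa_article(author, year, title, journal, volume, issue, pages, url):
        return f"{author} ({year}). {title}. {journal}, {volume}({issue}), {pages}. {url}"

    def apa_unpublished(author, year, title, description, institution):
        # The bracketed description marks the work as unpublished and states its purpose.
        return f"{author} ({year}). {title} [{description}]. {institution}."

    print(apa_article("Wombat, W.", 2018, "Wombats in the wild: A study",
                      "Wildlife Journal", 47, 3, "48-63", "http://wombatstudies.org"))
    print(apa_unpublished("Wombat, W.", 2020, "Wombat teeth grow forever",
                          "Unpublished study submitted for Biology 1001",
                          "Marsupial University"))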

As with the APA Style Guide, under the MLA Style Guide the rules for citing your own published work are the same as for citing someone else's. A published work is cited the same way, whether you're self-citing or citing someone else's work.

The MLA Style Guide doesn’t explicitly require you to identify an unpublished manuscript or unpublished paper in the reference list. However, you still must identify the origin of an unpublished document, such as the collection where it’s housed or the reason for its creation.

There are plenty of unpublished documents available in public and private museums and personal collections around the world. Your unpublished work is most likely either from your personal collection or submitted for a high school or university assignment.

Published Research Paper  – Walter Wombat is writing an article for a popular science magazine and wants to reference a study he published previously in a peer-reviewed science journal. He will cite it in the same way as if he were citing another person’s published study:

Last Name, First Name. “Title of Article.” Title of Journal, vol. Volume, no. Issue, Year, pp. Pages.

Wombat, Walter. “Wombats in the Wild.” Wildlife Journal, vol. 47, no. 3, 2018, pp. 48-63.

Unpublished Assignment – Suzie Scholar is writing a reflective piece on her growth as a writer for a college assignment. In it, she wants to reference a piece she wrote for an assignment in high school. While she doesn’t have to specify that the piece is unpublished, Suzie does have to identify its source:

Last Name, First Name. “Paper Title.” Date authored. Class, School, assignment type.

Scholar, Suzie. “My Final Paper.” 1 May 2010. 12th Grade English, Wisdom High School, student paper.

Other Examples – You can find citation formats for different types of sources in the complete MLA Style Guide at the Purdue OWL website.
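The same kind of hypothetical sketch works for the MLA patterns above: a published article carries the journal's volume, issue, and year, while an unpublished paper carries its origin instead. Again, the helper below is illustrative only, not an official MLA tool:

    # Hypothetical helper for the two MLA patterns above; illustrative only.
    def mla_article(author, title, journal, vol, no, year, pages):
        return f'{author}. "{title}." {journal}, vol. {vol}, no. {no}, {year}, pp. {pages}.'

    def mla_unpublished(author, title, date, origin):
        # For unpublished work, MLA identifies the origin rather than a publisher.
        return f'{author}. "{title}." {date}. {origin}.'

    print(mla_article("Wombat, Walter", "Wombats in the Wild",
                      "Wildlife Journal", 47, 3, 2018, "48-63"))
    print(mla_unpublished("Scholar, Suzie", "My Final Paper", "1 May 2010",
                          "12th Grade English, Wisdom High School, student paper"))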

Avoiding Self-Plagiarism

To avoid self-plagiarism under any style guide, you must cite all of your sources using in-text citations and a list of works cited. This is true whether you’re citing your own work or someone else’s.

One way to ensure that you’re not accidentally committing self-plagiarism or any other kind of plagiarism is to use a plagiarism checker like Quetext. If you’ve unintentionally quoted or paraphrased from a source without citing it, a good plagiarism checker will flag it for you so you can cite it appropriately.

Quetext’s plagiarism checker takes this one step further by automatically generating the appropriate citation for you, making it easy to avoid unintentional plagiarism.


