How to Synthesize Written Information from Multiple Sources

Shona McCombes

Content Manager

B.A., English Literature, University of Glasgow

Shona McCombes is the content manager at Scribbr, Netherlands.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, Ph.D., is a qualified psychology teacher with over 18 years' experience working in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


When you write a literature review or essay, you have to go beyond just summarizing the articles you’ve read – you need to synthesize the literature to show how it all fits together (and how your own research fits in).

Synthesizing simply means combining. Instead of summarizing the main points of each source in turn, you put together the ideas and findings of multiple sources in order to make an overall point.

At the most basic level, this involves looking for similarities and differences between your sources. Your synthesis should show the reader where the sources overlap and where they diverge.

Unsynthesized Example

Franz (2008) studied undergraduate online students. He looked at 17 females and 18 males and found that none of them liked APA. According to Franz, the evidence suggested that all students are reluctant to learn citation style. Perez (2010) also studied undergraduate students. She looked at 42 females and 50 males and found that males were significantly more inclined to use citation software (p < .05). Findings suggest that females might graduate sooner. Goldstein (2012) looked at British undergraduates. Among a sample of 50, all female, all were confident in their abilities to cite and were eager to write their dissertations.

Synthesized Example

Studies of undergraduate students reveal conflicting conclusions regarding relationships between advanced scholarly study and citation efficacy. Although Franz (2008) found that no participants enjoyed learning citation style, Goldstein (2012) determined in a larger study that all participants felt comfortable citing sources, suggesting that variables among participant and control group populations must be examined more closely. Although Perez (2010) expanded on Franz’s original study with a larger, more diverse sample…

Step 1: Organize your sources

After collecting the relevant literature, you’ve got a lot of information to work through, and no clear idea of how it all fits together.

Before you can start writing, you need to organize your notes in a way that allows you to see the relationships between sources.

One way to begin synthesizing the literature is to put your notes into a table. Depending on your topic and the type of literature you’re dealing with, there are a couple of different ways you can organize this.

Summary table

A summary table collates the key points of each source under consistent headings. This is a good approach if your sources tend to have a similar structure – for instance, if they’re all empirical papers.

Each row in the table lists one source, and each column identifies a specific part of the source. You can decide which headings to include based on what’s most relevant to the literature you’re dealing with.

For example, you might include columns for things like aims, methods, variables, population, sample size, and conclusion.

For each study, you briefly summarize each of these aspects. You can also include columns for your own evaluation and analysis.

[Figure: summary table for synthesizing the literature]

The summary table gives you a quick overview of the key points of each source. This allows you to group sources by relevant similarities, as well as to notice important differences or contradictions in their findings.
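If you keep your notes digitally, the same row-and-column structure can be mocked up in a few lines of code. The sketch below is purely illustrative, not a prescribed format: the column headings are ones we chose, and the entries paraphrase the example studies discussed earlier on this page.

```python
# Illustrative summary table: one row (dict) per source, with the same
# column headings throughout. Entries paraphrase the example studies above.
summary_table = [
    {"source": "Franz (2008)", "population": "online undergraduates",
     "sample": "mixed-gender", "sample_size": 35,
     "conclusion": "students reluctant to learn citation style"},
    {"source": "Perez (2010)", "population": "undergraduates",
     "sample": "mixed-gender", "sample_size": 92,
     "conclusion": "males more inclined to use citation software"},
    {"source": "Goldstein (2012)", "population": "British undergraduates",
     "sample": "female only", "sample_size": 50,
     "conclusion": "participants confident in their citation abilities"},
]

# Grouping rows on a shared column is a quick way to surface similarities
# (and, by contrast, differences) across sources.
by_sample = {}
for row in summary_table:
    by_sample.setdefault(row["sample"], []).append(row["source"])

print(by_sample)
# {'mixed-gender': ['Franz (2008)', 'Perez (2010)'], 'female only': ['Goldstein (2012)']}
```

In practice a spreadsheet does the same job; the point is simply that consistent headings make the sources directly comparable.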

Synthesis matrix

A synthesis matrix is useful when your sources are more varied in their purpose and structure – for example, when you’re dealing with books and essays making various different arguments about a topic.

Each column in the table lists one source. Each row is labeled with a specific concept, topic or theme that recurs across all or most of the sources.

Then, for each source, you summarize the main points or arguments related to the theme.

[Figure: synthesis matrix]

The purpose of the table is to identify the common points that connect the sources, as well as the points where they diverge or disagree.
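The matrix layout can be sketched digitally in the same spirit as the summary table. Again this is only an illustration: the theme labels are our own invention, and the cells paraphrase the example studies from earlier on this page.

```python
# Illustrative synthesis matrix: each row is a recurring theme, each
# column a source; a cell summarizes what that source says on the theme.
matrix = {
    "citation confidence": {
        "Franz (2008)": "no participants enjoyed learning citation style",
        "Goldstein (2012)": "all participants felt comfortable citing",
    },
    "sample characteristics": {
        "Franz (2008)": "35 mixed-gender online undergraduates",
        "Goldstein (2012)": "50 female British undergraduates",
    },
}

# Reading across one row shows where the sources converge or disagree.
for source, summary in matrix["citation confidence"].items():
    print(f"{source}: {summary}")
```

Reading down a column recovers one source's overall position; reading across a row gives you the raw material for a synthesized paragraph on that theme.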

Step 2: Outline your structure

Now you should have a clear overview of the main connections and differences between the sources you’ve read. Next, you need to decide how you’ll group them together and the order in which you’ll discuss them.

For shorter papers, your outline can just identify the focus of each paragraph; for longer papers, you might want to divide it into sections with headings.

There are a few different approaches you can take to help you structure your synthesis.

If your sources cover a broad time period, and you found patterns in how researchers approached the topic over time, you can organize your discussion chronologically.

That doesn’t mean you just summarize each paper in chronological order; instead, you should group articles into time periods and identify what they have in common, as well as signalling important turning points or developments in the literature.

If the literature covers various different topics, you can organize it thematically.

That means that each paragraph or section focuses on a specific theme and explains how that theme is approached in the literature.

[Figure: synthesizing the literature using themes]

Source Used with Permission: The Chicago School

If you’re drawing on literature from various different fields, or your sources use a wide variety of research methods, you can organize them methodologically.

That means grouping together studies based on the type of research they did and discussing the findings that emerged from each method.

If your topic involves a debate between different schools of thought, you can organize it theoretically.

That means comparing the different theories that have been developed and grouping together papers based on the position or perspective they take on the topic, as well as evaluating which arguments are most convincing.

Step 3: Write paragraphs with topic sentences

What sets a synthesis apart from a summary is that it combines various sources. The easiest way to think about this is that each paragraph should discuss a few different sources, and you should be able to condense the overall point of the paragraph into one sentence.

This is called a topic sentence, and it usually appears at the start of the paragraph. The topic sentence signals what the whole paragraph is about; every sentence in the paragraph should be clearly related to it.

A topic sentence can be a simple summary of the paragraph’s content:

“Early research on [x] focused heavily on [y].”

For an effective synthesis, you can use topic sentences to link back to the previous paragraph, highlighting a point of debate or critique:

“Several scholars have pointed out the flaws in this approach.”

“While recent research has attempted to address the problem, many of these studies have methodological flaws that limit their validity.”

By using topic sentences, you can ensure that your paragraphs are coherent and clearly show the connections between the articles you are discussing.

As you write your paragraphs, avoid quoting directly from sources: use your own words to explain the commonalities and differences that you found in the literature.

Don’t try to cover every single point from every single source – the key to synthesizing is to extract the most important and relevant information and combine it to give your reader an overall picture of the state of knowledge on your topic.

Step 4: Revise, edit and proofread

Like any other piece of academic writing, synthesizing literature doesn’t happen all in one go – it involves redrafting, revising, editing and proofreading your work.

Checklist for Synthesis

  •   Do I introduce the paragraph with a clear, focused topic sentence?
  •   Do I discuss more than one source in the paragraph?
  •   Do I mention only the most relevant findings, rather than describing every part of the studies?
  •   Do I discuss the similarities or differences between the sources, rather than summarizing each source in turn?
  •   Do I put the findings or arguments of the sources in my own words?
  •   Is the paragraph organized around a single idea?
  •   Is the paragraph directly relevant to my research question or topic?
  •   Is there a logical transition from this paragraph to the next one?

Further Information

How to Synthesise: a Step-by-Step Approach

Help…I’ve Been Asked to Synthesize!

Learn how to Synthesise (combine information from sources)

How to write a Psychology Essay



Synthesizing Sources | Examples & Synthesis Matrix

Published on July 4, 2022 by Eoghan Ryan. Revised on May 31, 2023.

Synthesizing sources involves combining the work of other scholars to provide new insights. It’s a way of integrating sources that helps situate your work in relation to existing research.

Synthesizing sources involves more than just summarizing. You must emphasize how each source contributes to current debates, highlighting points of (dis)agreement and putting the sources in conversation with each other.

You might synthesize sources in your literature review to give an overview of the field or throughout your research paper when you want to position your work in relation to existing research.


Let’s take a look at an example where sources are not properly synthesized, and then see what can be done to improve it.

This paragraph provides no context for the information and does not explain the relationships between the sources described. It also doesn’t analyze the sources or consider gaps in existing research.

Research on the barriers to second language acquisition has primarily focused on age-related difficulties. Building on Lenneberg’s (1967) theory of a critical period of language acquisition, Johnson and Newport (1988) tested Lenneberg’s idea in the context of second language acquisition. Their research seemed to confirm that young learners acquire a second language more easily than older learners. Recent research has considered other potential barriers to language acquisition. Schepens, van Hout, and van der Slik (2022) have revealed that the difficulties of learning a second language at an older age are compounded by dissimilarity between a learner’s first language and the language they aim to acquire. Further research needs to be carried out to determine whether the difficulty faced by adult monoglot speakers is also faced by adults who acquired a second language during the “critical period.”


To synthesize sources, group them around a specific theme or point of contention.

As you read sources, ask:

  • What questions or ideas recur? Do the sources focus on the same points, or do they look at the issue from different angles?
  • How does each source relate to others? Does it confirm or challenge the findings of past research?
  • Where do the sources agree or disagree?

Once you have a clear idea of how each source positions itself, put them in conversation with each other. Analyze and interpret their points of agreement and disagreement. This displays the relationships among sources and creates a sense of coherence.

Consider both implicit and explicit (dis)agreements. Whether one source specifically refutes another or simply reaches different conclusions without directly engaging with it, you can mention the relationship in your synthesis either way.

Synthesize your sources using:

  • Topic sentences to introduce the relationship between the sources
  • Signal phrases to attribute ideas to their authors
  • Transition words and phrases to link together different ideas

To more easily determine the similarities and dissimilarities among your sources, you can create a visual representation of their main ideas with a synthesis matrix. This is a tool that you can use when researching and writing your paper, not a part of the final text.

In a synthesis matrix, each column represents one source, and each row represents a common theme or idea among the sources. In the relevant rows, fill in a short summary of how the source treats each theme or topic.

This helps you to clearly see the commonalities or points of divergence among your sources. You can then synthesize these sources in your work by explaining their relationship.

Synthesizing sources means comparing and contrasting the work of other scholars to provide new insights.

It involves analyzing and interpreting the points of agreement and disagreement among sources.

You might synthesize sources in your literature review to give an overview of the field of research or throughout your paper when you want to contribute something new to existing research.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

Topic sentences help keep your writing focused and guide the reader through your argument.

In an essay or paper, each paragraph should focus on a single idea. By stating the main idea in the topic sentence, you clarify what the paragraph is about for both yourself and your reader.

At college level, you must properly cite your sources in all essays, research papers, and other academic texts (except exams and in-class exercises).

Add a citation whenever you quote, paraphrase, or summarize information or ideas from a source. You should also give full source details in a bibliography or reference list at the end of your text.

The exact format of your citations depends on which citation style you are instructed to use. The most common styles are APA, MLA, and Chicago.

Cite this Scribbr article


Ryan, E. (2023, May 31). Synthesizing Sources | Examples & Synthesis Matrix. Scribbr. Retrieved February 22, 2024, from https://www.scribbr.com/working-with-sources/synthesizing-sources/


Eoghan Ryan



Perspective

Published: 30 June 2022

Chemical synthesis and materials discovery

Anthony K. Cheetham (ORCID: orcid.org/0000-0003-1518-4845), Ram Seshadri (ORCID: orcid.org/0000-0001-5858-4027) & Fred Wudl (ORCID: orcid.org/0000-0002-2140-5301)

Nature Synthesis, volume 1, pages 514–520 (2022)


  • History of chemistry
  • Materials chemistry
  • Materials for devices
  • Materials for energy and catalysis

Functional materials impact every area of our lives, from electronic and computing devices to transportation and health. Here we examine the relationship between synthetic discoveries and the scientific breakthroughs that they have enabled. By tracing the development of some important examples, we explore how and why the materials were initially synthesized and how their utility was subsequently recognized. Three common pathways to materials breakthroughs are identified. In a small number of cases, such as the aluminosilicate zeolite catalyst ZSM-5, an important advance is made by using design principles based on earlier work. There are also rare cases of breakthroughs that are serendipitous, such as the buckyball and Teflon. Most commonly, however, the breakthrough repurposes a compound that is already known and was often made out of curiosity or for a different application. Typically, the synthetic discovery precedes the discovery of functionality by many decades; key examples include conducting polymers, topological insulators and electrodes for lithium-ion batteries.


Acknowledgements

A.K.C. thanks the Ras al Khaimah Centre for Advanced Materials for financial support. R.S. gratefully acknowledges the US Department of Energy, Office of Science, Basic Energy Sciences, for support under award no. DE-SC-0012541.

Author information

Authors and Affiliations

Materials Research Laboratory, University of California, Santa Barbara, CA, USA

Anthony K. Cheetham, Ram Seshadri & Fred Wudl

Department of Materials Science and Engineering, National University of Singapore, Singapore, Singapore

Anthony K. Cheetham

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Anthony K. Cheetham .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Synthesis thanks Linda Nazar, Michael Hayward and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Alison Stoddart, in collaboration with the Nature Synthesis team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary information.

References for Fig. 2.

Rights and permissions

Reprints and permissions

About this article

Cite this article.

Cheetham, A.K., Seshadri, R. & Wudl, F. Chemical synthesis and materials discovery. Nat. Synth 1 , 514–520 (2022). https://doi.org/10.1038/s44160-022-00096-3

Download citation

Received : 03 February 2022

Accepted : 10 May 2022

Published : 30 June 2022

Issue Date : July 2022

DOI : https://doi.org/10.1038/s44160-022-00096-3

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

This article is cited by

A robotic platform for the synthesis of colloidal nanocrystals.

  • Haitao Zhao
  • Xue-Feng Yu

Nature Synthesis (2023)

Combinatorial synthesis for AI-driven materials discovery

  • John M. Gregoire
  • Joel A. Haber

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

research paper on synthesis

Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Synthesizing Sources


When you look for areas where your sources agree or disagree and try to draw broader conclusions about your topic based on what your sources say, you are engaging in synthesis. Writing a research paper usually requires synthesizing the available sources in order to provide new insight or a different perspective into your particular topic (as opposed to simply restating what each individual source says about your research topic).

Note that synthesizing is not the same as summarizing.  

  • A summary restates the information in one or more sources without providing new insight or reaching new conclusions.
  • A synthesis draws on multiple sources to reach a broader conclusion.

There are two types of syntheses: explanatory syntheses and argumentative syntheses. Explanatory syntheses seek to bring sources together to explain a perspective and the reasoning behind it. Argumentative syntheses seek to bring sources together to make an argument. Both types of synthesis involve looking for relationships between sources and drawing conclusions.

In order to successfully synthesize your sources, you might begin by grouping your sources by topic and looking for connections. For example, if you were researching the pros and cons of encouraging healthy eating in children, you would want to separate your sources to find which ones agree with each other and which ones disagree.

After you have a good idea of what your sources are saying, you want to construct your body paragraphs in a way that acknowledges different sources and highlights where you can draw new conclusions.
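The grouping step described above can even be sketched as a tiny script. This is purely illustrative; the source names and the stance labels below are hypothetical annotations, not a real bibliography:

```python
from collections import defaultdict

# Hypothetical working notes for the healthy-eating example: each
# source is tagged with the claim it supports.
sources = [
    ("Just & Price", "incentives encourage healthy eating"),
    ("Ben-Joseph", "parents should model healthy attitudes"),
    ("Nepper & Chai", "parents should model healthy attitudes"),
]

# Group sources by the claim they support: agreeing sources can then
# be discussed together, and disagreements surface immediately.
groups = defaultdict(list)
for name, stance in sources:
    groups[stance].append(name)

for stance, names in groups.items():
    print(f"{stance}: {', '.join(names)}")
```

Each printed line corresponds to one potential body paragraph: sources that share a stance are synthesized together, and a stance with a single source flags an outlier worth addressing.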

As you continue synthesizing, here are a few points to remember:

  • Don’t force a relationship between sources if there isn’t one. Not all of your sources have to complement one another.
  • Do your best to highlight the relationships between sources in very clear ways.
  • Don’t ignore any outliers in your research. It’s important to take note of every perspective (even those that disagree with your broader conclusions).

Example Syntheses

Below are two examples of synthesis: one where synthesis is NOT utilized well, and one where it is.

Parents are always trying to find ways to encourage healthy eating in their children. Elena Pearl Ben-Joseph, a doctor and writer for KidsHealth , encourages parents to be role models for their children by not dieting or vocalizing concerns about their body image. The first popular diet began in 1863. William Banting named it the “Banting” diet after himself, and it consisted of eating fruits, vegetables, meat, and dry wine. Despite the fact that dieting has been around for over a hundred and fifty years, parents should not diet because it hinders children’s understanding of healthy eating.

In this sample paragraph, the writer begins with one idea and then shifts drastically to another. Rather than comparing the sources, the author simply describes their content. This causes the paragraph to veer in a different direction at the end, and it prevents the paragraph from expressing any strong arguments or conclusions.

An example of a stronger synthesis can be found below.

Parents are always trying to find ways to encourage healthy eating in their children. Different scientists and educators have different strategies for promoting a well-rounded diet while still encouraging body positivity in children. David R. Just and Joseph Price suggest in their article "Using Incentives to Encourage Healthy Eating in Children" that children are more likely to eat fruits and vegetables if they are given a reward (855-856). Similarly, Elena Pearl Ben-Joseph, a doctor and writer for KidsHealth, encourages parents to be role models for their children. She states that "parents who are always dieting or complaining about their bodies may foster these same negative feelings in their kids. Try to keep a positive approach about food" (Ben-Joseph). Martha J. Nepper and Weiwen Chai support Ben-Joseph's suggestions in their article "Parents' Barriers and Strategies to Promote Healthy Eating among School-age Children." Nepper and Chai note, "Parents felt that patience, consistency, educating themselves on proper nutrition, and having more healthy foods available in the home were important strategies when developing healthy eating habits for their children." By following some of these ideas, parents can help their children develop healthy eating habits while still maintaining body positivity.

In this example, the author puts different sources in conversation with one another. Rather than simply describing the content of the sources in order, the author uses transitions (like "similarly") and makes the relationship between the sources evident.

Nanomaterials (Basel)

Green Synthesis of Nanomaterials

Matthew Huston

1 Internal Medicine-Infectious Disease, University of Michigan, Ann Arbor, MI 48109, USA; hustonma@umich.edu

Melissa DeBella

Maria DiBella

2 Department of Pharmaceutical Sciences, University of Saint Joseph, Hartford, CT 06117, USA; mdebella@usj.edu (M.D.); mdibella@usj.edu (M.D.)

Anisha Gupta

Associated Data

Data sharing not applicable.

Abstract

Nanotechnology has been one of the foremost frontiers in science over the last decade. Its versatile applications and fast-growing demand have paved the way for innovative measures for the synthesis of higher-quality nanomaterials. In the early stages, traditional synthesis methods were utilized; they relied on both carcinogenic chemicals and high energy input for the production of nano-sized material. The pollution produced by traditional synthesis methods creates a need for environmentally safer alternatives. As the consequences of climate change mount, the scientific community is persistently seeking solutions to combat the damage caused by toxic production methods. Green methods for nanomaterial synthesis apply natural biological systems to nanomaterial production. The present review highlights the history of nanoparticle synthesis, starting with traditional methods and progressing towards green methods. Green synthesis is just as effective as traditional synthesis, if not more so; it provides a sustainable approach to nanomaterial manufacturing by using naturally sourced starting materials and relying on low-energy processes. Recent work using active molecules in natural biological systems such as bacteria, yeast, algae and fungi reports successful synthesis of various nanoparticle systems. Thus, the integration of green synthesis into scientific research and mass production offers a potential solution to the limitations of traditional synthesis methods.

1. Introduction

Over the past few years, a large amount of attention has been directed towards nanotechnology. The classification of nano-sized technology encompasses materials between 1 and 100 nanometers [ 1 ]. Though the size of the compound is what classifies it as a nanomaterial, its morphology and geometry also play a significant role in its characteristics. Nano-sized materials have applications in almost every sector including, but not limited to, electronics, agriculture, and medicine ( Figure 1 ). Nanotechnology allows nanoparticles to revolutionize the materials designed for use, resulting in noteworthy improvements in thermal, mechanical, and barrier properties [ 2 ]. The finely controlled development of various nanoparticle morphologies, such as spheres, rods, quantum dots and particles, allows for variety in applications and can arguably result in limitless opportunity for technological advancement [ 3 ].
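The conventional 1–100 nm definition above amounts to a one-line size check; a trivial sketch (the function name is ours, not a standard API):

```python
def is_nanoscale(size_nm: float) -> bool:
    """True if a dimension lies in the conventional 1-100 nm range."""
    return 1.0 <= size_nm <= 100.0

print(is_nanoscale(50.0))   # a 50 nm particle falls in the range
print(is_nanoscale(500.0))  # a 500 nm (sub-micron) particle does not
```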

Figure 1.

Applications of Nanomaterials. 0D and 1D nanomaterials have extensive applications throughout multiple sectors. Nanosheets, a 1D nanomaterial, are highly utilized in electronics as they can be highly conductive and sensitive. Nanoparticles are a 0D nanomaterial that are highly employed for drug delivery. Most notably, nanoparticle technology is used in the delivery of SARS-CoV-2 vaccines. Nanotubes are long, 1D nanomaterials that can vary in thickness depending on their application. For bio-imaging, single walled carbon nanotubes can be used to target specific tissues and emit a fluorescent signal bright enough to be detected. Finally, quantum dots, or buckyballs, vary slightly from nanoparticles in their composition. They are smaller and emit a light when excited, making them useful for lasers, amongst other applications.

Nano-sized materials are synthesized in a multitude of ways, but there are generally two approaches to their creation: top-down and bottom-up synthesis ( Figure 2 ). The former takes larger bulk materials and breaks them down into nano-sized particles, while the latter takes individual atoms and builds them up into larger nanomaterials. Metal nanomaterial products such as silver (Ag), gold (Au) [ 4 ], selenium (Se) [ 5 ], cadmium sulfide (CdS) [ 6 ], lead sulfide (PbS) [ 7 ] and iron oxide (Fe3O4) [ 8 ] provide useful properties for diverse applications.

Figure 2.

Top-down vs. Bottom-up synthesis schemes. There are two methods by which nanomaterials can be synthesized. Top-down synthesis refers to the process by which bulk materials are broken down into their monomers. Laser ablation is an example of a top-down synthesis method. Bottom-up synthesis refers to the process by which atoms are reacted with other substrates to create the desired nanomaterials. Reactions can be catalyzed by an outside force such as in hydrothermal synthesis, or the introduction of volatile compounds such as in chemical vapor deposition.

Nanomaterial synthesis can be subdivided into two main categories: traditional methods and green methods. Traditional nanomaterial synthesis methods offer many attractive benefits. They produce a large variety of nanoparticles with vast applications. Some methods offer extensive scalability [ 9 ] and high control over nanoparticle morphology [ 10 , 11 , 12 ], with applications in innovative battery conduction and electrical applications [ 13 , 14 , 15 , 16 , 17 ], targeted disease therapy [ 18 , 19 ] and energy storage/conservation [ 20 , 21 , 22 ]. However, the negative effects of employing these traditional methods are undeniable. Organic solvents are heavily utilized in the synthesis of these nanomaterials, posing major neurobehavioral and reproductive risks during synthesis [ 23 , 24 , 25 ]; additionally, the use of high-pressure and high-heat conditions can create dangerous working conditions [ 26 , 27 , 28 ]. The release of volatile vapors [ 29 ] and of excess carbon dioxide, which contributes remarkably to the greenhouse effect, is the adverse effect of highest priority in these syntheses [ 30 , 31 ]. Overall, these methods pose irreversible risks to both the scientists conducting the synthesis and the environment, and these potential harms outweigh their benefits. For these reasons, traditional synthesis methods have fallen out of favor, which has paved the way for green synthesis. With the current climate crisis, the development of new, forward-looking methods that follow the 12 Principles of Green Chemistry is of vital importance.

Green synthesis employs a clean, safe, cost-effective and environmentally friendly process for constructing nanomaterials. Microorganisms such as bacteria, yeast, fungi and algal species, as well as certain plants, act as substrates for the green synthesis of nanomaterials ( Figure 3 ). Different active molecules and precursors, such as metal salts, determine the final morphology and size of the nanoparticle. Additionally, green synthesis provides nanomaterial benefits ranging from antimicrobial properties [ 32 ] to natural reducing and stabilizing properties. The active molecules of the microorganisms utilized as green synthesis substrates give rise to these properties, a development reported since the last comprehensive nanomaterial review by Saratale, R., et al. [ 33 ]. The part of the green species utilized in the synthesis of nanomaterials often consists of specific enzymes [ 34 ], amino acid groups [ 35 ], proteins [ 36 ], or chemical structures [ 37 ].

Figure 3.

Representation of the role of active molecules in green metallic nanoparticle synthesis. (A) A microorganism obtained from a raw sample is cultured on a plate. (B) The culture is harvested and purified before it is inoculated into sterile nutrient broth with a metal ion solution. Steps are taken to promote homeostasis for subsequent nanoparticle synthesis. (C) Metal ions are reduced to metal nanoparticles, facilitated by the microorganism's active molecules. The proposed active-molecule mechanism reflected in this figure is the intracellular conversion of metal ions to metal nanoparticles through an enzymatic reduction-oxidation process. (D) Nanoparticles are collected and analyzed for purity and formation. Stability and reducing properties from the microorganism can be observed in the final product.

In this review, we highlight traditional synthesis methods and applications in addition to green synthesis methods and applications of nanomaterials. We pay specific attention to the active molecules, produced by a variety of microorganisms, that contribute to specialized nano-sized material production. Substantial benefit exists in pinpointing these active molecules, as they are responsible for determining the specific morphology, size, and application of the nanoparticles produced. Understanding the role of active molecules in nanomaterial production can pave the way for manipulating these natural chemical properties to continue nanostructure advancements in the scientific community. Overall, the role of active molecules provides a more refined insight into the capability of green synthesis methods for nanoparticle production in future applications.

2. Traditional Synthesis Methods

2.1. Sol-Gel Synthesis

Sol-Gel synthesis is a common method for the synthesis of nanomaterials. This relatively simple method can be easily utilized for the synthesis of nanomaterials from a variety of different metal oxides such as TiO2, ZnO, SnO2, WO3 and Fe2O3, as well as silica and platinum [ 38 , 39 , 40 , 41 ]. The process usually progresses over a series of five steps, beginning with hydrolysis of the precursors using either water or an organic solvent. Next, adjacent molecules begin to form linkages as the process continues into the condensation step. The resulting "gel" is then aged and dried by supercritical drying, thermal drying, or freeze drying, with each producing slightly different products. Finally, calcination is performed in order to drive off residues and remove any remaining water [ 38 ].

Nanomaterials synthesized using Sol-Gel have widespread applications including drug delivery, wastewater treatment, construction materials, and a variety of sensors [ 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 ]. This method can be applied at an industrial level due in part to the limited number of ingredients required to produce the final product [ 49 , 50 ]. Additionally, Sol-Gel synthesis can be carried out as a one-pot process, which adds to its allure [ 44 ].

As effective as the Sol-Gel synthesis method can be for the manufacturing of nanomaterials, it has several shortcomings regarding environmental and personal safety. Firstly, organic solvents that are typically used for the hydrolysis of the nanomaterial precursors pose enormous health and environmental risks [ 23 , 24 ]. It has been shown that organic solvents can affect a variety of different bodily systems, including neurobehavioral and reproductive systems [ 23 , 25 , 51 ]. Although this method is effective and efficient, it poses significant risks that cannot be overlooked.

2.2. Chemical Vapor Deposition

In general, Chemical Vapor Deposition (CVD) is the process by which a substrate, such as nickel, iron, or zinc, is introduced to one or more volatile compounds (vapor or gas) which react with the substrate to produce a final 2D product [ 8 , 52 , 53 , 54 , 55 ]. The reaction between the substrate and the volatile compound is contained within a vacuum and conducted at high temperature, in the presence of N2 gas and often a catalyst [ 56 ]. The temperature, substrates and precursors can be altered in order to produce products with different morphologies, sizes, and geometries [ 8 ]. Carbon nanomaterials are just one example of the degree of control that the scientist has over the final product synthesized by CVD. Graphene, fullerene, carbon nanotubes, and diamond-like carbon films are some of the nanostructures that can be created via CVD synthesis [ 53 , 57 , 58 , 59 ].

Because of the wide variety of nanostructures that can be created via CVD synthesis, there also exists a wide variety of applications of the nanomaterials. Though a majority of these applications overlap with nanomaterials created by other synthesis methods, there exist some interesting and unique applications as well. In the past ten years, a number of groups have utilized slightly different versions of CVD in order to produce graphene glass [ 60 , 61 , 62 , 63 , 64 , 65 ]. This graphene glass can be used in transparent electrodes, windows, and touch panels, amongst other applications [ 60 , 63 , 64 ]. Other applications of CVD synthesized nanomaterials include semiconductors, nanosensors, conductive electrodes, and optics [ 66 , 67 , 68 ].

This technique of nanomaterial synthesis is quickly evolving and has several derivatives of the original method, each of which poses its own risks. One risk that spans these related techniques is the use of volatile vapors and gasses. Though the gas or vapor in and of itself is not necessarily harmful, the side products produced by the reaction with the substrate or catalyst often are harmful to both the environment and the individual conducting the synthesis [ 29 ]. Plata et al. identified over 45 different compounds produced during the synthesis of carbon nanotubes [ 69 ]. Although these side products are worrisome, they are not the primary risk associated with this method: CVD synthesis also requires large amounts of energy to heat the vacuum chambers to their final temperature (~1000 °C) [ 70 ].

2.3. Hydrothermal Synthesis

Hydrothermal synthesis, also known as solvothermal synthesis, is an overarching term for techniques that take advantage of materials' solubility by placing them in intensely hot water under high pressure, which results in crystalline structures. To form the products, precursors, water (or another solvent) and stabilizing agents are combined in a steel autoclave, which is heated and left to run for a predetermined amount of time. To change the morphology, size and geometry, the individual operating the autoclave can change the precursors, alter the temperature and/or change the pH of the solution in the autoclave. Upon completion of the autoclave cycle, the products are cooled to room temperature, washed, and finally dried [ 71 , 72 , 73 ].

As with other methods of nanomaterial synthesis, structures produced by hydrothermal synthesis have applications in a wide variety of sectors, and the materials' applications are largely based upon their size, morphology, geometry and surface coatings. One of the more exciting applications of hydrothermal synthesis is the manufacturing of components of Na-ion and K-ion batteries. Specifically, the hydrothermal method is utilized in the synthesis of different nanostructures such as nanorods (non-conductive), nanowires (conductive), and nanosheets for the electrodes of the batteries [ 13 , 14 , 15 , 16 , 17 ]. Beyond electrical applications, nanomaterials synthesized via hydrothermal synthesis can be applied in a number of different ways in healthcare, sensing devices, and electric media storage, amongst others. Darr et al. produced a comprehensive review of the applications of hydrothermally produced nanomaterials that covers this more extensively [ 74 ].

Compared to other methods of nanomaterial synthesis, hydrothermal synthesis is much cleaner and more energy efficient, achieved primarily through the use of lower temperatures in the autoclave [ 30 , 75 , 76 ]. Although this method is a step in the right direction for large-scale production of nanomaterials, it does not fully satisfy the 12 Principles of Green Chemistry. Caramazana et al. reported that hydrothermal synthesis produced 10.86 kg of CO2 per kg of Ag2S nanoparticles [ 30 ], which is significantly less than the 543 kg of CO2 per kg of Ag2S nanoparticles produced by flame spray pyrolysis [ 31 ].
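The two emission figures quoted in this section imply a roughly fifty-fold gap between the methods; a quick check of the arithmetic:

```python
# kg of CO2 emitted per kg of Ag2S nanoparticles, as quoted in the text
co2_hydrothermal = 10.86   # hydrothermal synthesis (Caramazana et al.)
co2_flame_spray = 543.0    # flame spray pyrolysis (Eckelman et al.)

ratio = co2_flame_spray / co2_hydrothermal
print(f"flame spray pyrolysis emits ~{ratio:.0f}x more CO2")  # ~50x
```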

2.4. Ultrasound Synthesis

Ultrasound, or sonochemistry, is a common laboratory technique used for the creation of nanomaterials. This technique manipulates acoustic waves to cause cavitation events, which in turn drive chemical reactions that result in the growth of nano-scale materials. Cavitation is the process by which microbubbles in a liquid rapidly store ultrasonic energy, grow, and subsequently collapse, releasing the ultrasonic energy back into the environment. Each cavitation event is highly localized and generates intense heat (5000 K) and pressure (1000 bar) for a very short amount of time [ 77 , 78 , 79 , 80 , 81 ]. The cavitation event drives an interaction and/or a chemical reaction with a precursor in the environment, and this chemical reaction produces the final nanomaterial [ 78 ]. The size and morphology of the produced nanomaterial can be controlled by varying the precursors, the liquid in which the reaction takes place, and the frequency of the ultrasound waves.

The precursor of the nanocomposite is crucial to its eventual application; transition metal carbides, for instance, are highly effective precursors for the creation of nanoparticles with applications in chemical refinement and other uses, including magnets. Mo2C and W2C nanoparticles are highly effective catalysts for hydrodehalogenation and are used to keep our bodies and environment clean of dangerous chlorofluorocarbons and other hazardous halogenated organic chemicals [ 82 , 83 , 84 ]. Sonochemically synthesized nanomaterials can also be utilized as magnets in a variety of applications, including drug delivery. A drug attached to a magnetic nanoparticle is transported through the body using local magnets placed at the site where the drug is required [ 85 , 86 , 87 ].

Ultrasound/sonochemistry requires little to no organic solvents or other harsh chemicals and is one of the most environmentally friendly methods of "traditional" nanomaterial synthesis. Additionally, the energy required to produce ultrasonic waves is meager compared to the energy required by other systems such as flame spray pyrolysis and Sol-Gel synthesis [ 77 , 88 ]. However, one shortcoming of the ultrasonic method is its scalability, and large-scale ultrasonic synthesis that does not require chemical catalysts is under investigation. Recently, Hujjatul Islam et al. determined that large-scale synthesis of nanoparticles was possible using an ultrasonic one-pot synthesis method [ 89 ]. This is just one of many steps opening the door for large-scale synthesis of nanomaterials.

2.5. Laser Ablation

Laser ablation is the process by which bulk material is broken down by laser pulses into its smaller constituents. The constituents released during ablation are nano-sized and are collected as the final product. This process takes place in either a gaseous or liquid medium to control the size and shape of the resulting nanomaterial. In gas or a vacuum, the nanomaterial collects on a surface as a thin film; in contrast, when performed in a liquid, a colloidal structure is formed [ 90 ]. Further, by altering the intensity, pulse length and wavelength of the laser, the user can "fine tune" the shape and size of the nanomaterial, giving immense control over the final product [ 10 , 11 , 12 ].

Recently, it was identified that laser ablated nanomaterials are functional in the treatment of a number of diseases, namely, certain cancers. Walter et al. determined that AuNP generated by laser ablation in Tris buffer, with conjugated aptamers, were effective in the detection of human prostate cancers [ 18 ]. Additionally, Salmaso et al. were successful in detecting human breast adenocarcinoma in culture with laser ablated AuNP with a thermoresponsive polymer coating [ 19 ]. Beyond cancer, laser ablation synthesized nanomaterials can be applied to luminescent semiconductors, biosensing and imaging, and as nanofertilizers for seed germination [ 11 , 91 , 92 , 93 ].

Though laser ablation can be utilized in a “greener” fashion, it is often coupled with organic solvents in order to further control the morphology of the product, which is often the case for a variety of metal nanoparticles [ 94 , 95 , 96 ]. As previously mentioned, organic solvents are not only dangerous for the individual using them, but are harmful to the environment as well [ 23 , 25 ]. Further, laser ablation is one of the most energy consuming methods of nanomaterial synthesis, which has secondary effects on the environment, as coal and gas are often used to generate the electricity consumed in powering the laser [ 97 ].

2.6. Flame Spray Pyrolysis

Flame spray pyrolysis (FSP) is a process by which nanomaterials are synthesized by combining a high-enthalpy precursor (typically an organic solvent) with oxygen and hydrocarbons in a flame [ 73 ]. Once these ingredients mix and nanomaterials begin to form, they pass through a filter and are collected on a substrate ( Figure 4 ). The size, shape and morphology of the final product can be controlled by modulating the precursor, the oxygen level and the temperature, the last via the amount of hydrocarbons released into the environment. One of the more important facets of creating the desired nanomaterial is the precursor selected. To produce a homogenous morphology of nanomaterials, precursors with high enthalpies and low melting points must be selected; otherwise (without fine-tuning the processing conditions), the final products will be a heterogenous mixture of nanomaterials that are largely unusable. This process can be dangerous to the individual(s) operating the furnace, as temperatures inside can reach as high as 2800 K [ 26 , 27 , 28 ]. Further, this process is scalable and is done at an industrial level [ 9 ].

Figure 4.

Nanoparticle formation by Flame Spray Pyrolysis. Flame spray pyrolysis is a complicated process that involves the breakdown of a precursor (typically an organic solvent) into its monomers, which are then reacted with hydrocarbons and catalyzed by very high temperatures. The resulting nanomaterials are collected on a substrate. FSP is scalable to industrial levels, but it is highly dangerous, and its carbon dioxide by-products are major contributors to the greenhouse effect.

Versions of FSP have been around for decades, and as such a diverse field of applications has been discovered [ 28 ]. One such application is catalysts, such as photocatalysts (from TiO2 and ZnO), CO oxidation catalysts (from Au/TiO2) and dehydrogenation catalysts (from Pt-Sn/Al2O3), amongst others [ 27 , 98 , 99 ]. Further, Eckert et al. determined a low-cost manner of producing metal oxides through FSP that have applications in lasers [ 100 ]. Beyond catalysts and lasers, nanomaterials synthesized by FSP have applications in energy storage and energy conversion, solar cells, and dye degradation [ 20 , 21 , 22 ].

FSP is, without a doubt, one of the most environmentally harmful and dangerous methods of nanomaterial synthesis. According to Eckelman et al., FSP produces over 50× more carbon dioxide than a hydrothermal approach [ 31 ]. Different variants of FSP can have different adverse environmental and personal effects. The basis of FSP is the burning of hydrocarbon fuel to create the environment for nanomaterial creation, and the burning of hydrocarbons produces carbon dioxide, a primary factor in the greenhouse effect [ 101 , 102 , 103 , 104 ]. Further, FSP commonly uses organic solvents to carry the precursors, and these solvents are harmful to both the body and the environment.

3. Green Synthesis

3.1. Background

As a population of scientists, we can no longer treat green science as merely one avenue we might take; it must be the avenue. As the scientific community's knowledge of nanomaterial synthesis grows, so do the means by which nanomaterials can be created without harming the Earth and the people who live on it.

The basis of green chemistry was established by Anastas and Warner in their 1998 book, Green Chemistry: Theory and Practice, in which they published the 12 Principles of Green Chemistry [105]. The Principles are as follows:

  • Prevention—Steps must be taken to prevent the production of waste.
  • Atom Economy—As much as possible, the materials used in the synthesis should be incorporated into the final product.
  • Less Hazardous Chemical Synthesis—Synthesis methods that require materials with minimal or no toxicity to the environment or individual should be prioritized.
  • Designing Safer Chemicals—Chemicals should be designed to achieve their function with limited or no toxicity.
  • Safer Solvents—Solvents and auxiliary chemicals should be avoided whenever possible.
  • Design for Energy Efficiency—Energy usage should be limited for synthesis.
  • Use of Renewable Feedstocks—Feedstocks should be renewable, and depletion should be avoided whenever possible.
  • Reduce Derivatives—Derivatives such as blocking agents and protecting/deprotecting groups should be avoided whenever possible as they cause additional waste.
  • Catalysis—Catalysis agents are preferable to stoichiometric agents.
  • Design for Degradation—Chemicals should be designed so that at the end of synthesis, they will break down into non-toxic derivatives.
  • Real-time Analysis for Pollution Prevention—Synthesis should be monitored in real-time for toxic chemical production.
  • Inherently Safer Chemistry for Accident Prevention—Agents used in product synthesis should be selected to limit the possibility of hazardous accidents.

These 12 principles should be followed whenever possible in order to limit the release of hazardous materials into the environment as well as human exposure to these chemicals [ 105 ]. These 12 principles were expanded upon in 2012 by Gałuszka et al. in a review of green analytical chemistry [ 106 ]. In this review, the group proposed the mnemonic “SIGNIFICANCE” as an easy way to remember the 12 Principles of Green Chemistry.

  • S—Select direct analytical technique
  • I—Integrate analytical processes and operations
  • G—Generate as little waste as possible and treat it properly
  • N—Never waste energy
  • I—Implement automation and miniaturization of methods
  • F—Favor agents obtained from renewable sources
  • I—Increase safety of operator
  • C—Carry out in-situ measurements
  • A—Avoid derivatization
  • N—Note that sample number and size should be minimal
  • C—Choose multi-analyte or multi-parameter methods
  • E—Eliminate or replace toxic reagents

A key point that both of these groups touch on is the use of agricultural waste as reducing and capping agents. This waste often takes the form of plant materials such as onion peels, banana peels, and honey, among many other waste products [1,107,108,109,110]. Further, many nanomaterials are synthesized using solvents, many of which (as previously noted) are organic in nature; however, this does not have to be the case, as it is possible to use H2O to form nanomaterials. Water can support nanomaterial formation even when only slightly heated. Additionally, when water is heated to a supercritical state (above its critical temperature and pressure), its properties change and it is able to function as the solvent in the reaction [111,112,113]. Other compounds that can be used as supercritical fluids for nanomaterial synthesis include ethanol and carbon dioxide [112,114].
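As a minimal illustrative sketch of the supercritical criterion above: water is supercritical when both its temperature and pressure exceed the critical point (standard values: Tc ≈ 647.096 K, Pc ≈ 22.064 MPa). The function name and example conditions below are our own, purely for illustration:

```python
# Standard critical constants for water.
WATER_TC_K = 647.096     # critical temperature, kelvin
WATER_PC_MPA = 22.064    # critical pressure, megapascals

def is_supercritical(temp_k: float, pressure_mpa: float,
                     tc_k: float = WATER_TC_K,
                     pc_mpa: float = WATER_PC_MPA) -> bool:
    """Return True when both T and P exceed the fluid's critical point."""
    return temp_k > tc_k and pressure_mpa > pc_mpa

# Ambient water is not supercritical; 700 K at 25 MPa is.
print(is_supercritical(298.15, 0.101))  # False
print(is_supercritical(700.0, 25.0))    # True
```

The same check applies to the other supercritical fluids mentioned (ethanol, CO2) once their own critical constants are substituted.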

There are many facets of nanomaterials that play into their efficacy in different systems. For instance, silver-nanoparticle-coated nanosheets are often utilized in electronics, as they have highly sought-after electrocatalytic potential [115]. However, these same sheets would not be effective in biomedical imaging techniques such as MRI. Thus, the morphology, size, and composition of a nanomaterial are incredibly important to its function.

There are typically three general types of morphologies recognized for nanomaterials, each of which builds on the previous: 0D, 1D, and 2D. The "dimensions" refer to any part of the nanomaterial that stretches beyond the 100 nanometer range, sometimes into the micrometers. Typically, 0D nanomaterials are nanoparticles that can take a variety of shapes, from triangles and hexagons to spherical, polymeric forms. The 0D nanoparticles are often utilized in drug delivery, medical imaging, and other biomedical applications, though they also have optical and electronic applications, amongst others. 1D nanomaterials typically take the form of nanorods, nanowires, or nanotubes. These materials vary slightly in their formation and in the materials they are made of; for instance, nanotubes are typically composed of carbon and are formed from graphene sheets that wrap into a cylinder. The last type of nanomaterial is 2D. These nanomaterials have two dimensions that stretch beyond the 100 nanometer mark and include graphene, nanofilms, and nanocoatings. Each of these nanomaterials can be synthesized using green substrates and techniques.
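The dimensionality convention above (count how many dimensions exceed 100 nm) can be sketched as a small, hypothetical classifier; the function name and labels are our own:

```python
def classify_nanomaterial(dims_nm):
    """Classify a material by how many of its three dimensions exceed 100 nm:
    0 -> 0D (nanoparticle), 1 -> 1D (rod/wire/tube), 2 -> 2D (sheet/film),
    3 -> bulk (no nanoscale dimension remains)."""
    n_large = sum(1 for d in dims_nm if d > 100)
    return {0: "0D", 1: "1D", 2: "2D", 3: "bulk"}[n_large]

print(classify_nanomaterial((20, 20, 20)))      # 0D, a nanoparticle
print(classify_nanomaterial((30, 30, 5000)))    # 1D, a nanowire
print(classify_nanomaterial((2000, 2000, 10)))  # 2D, a nanosheet
```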

3.2. 0D Nanomaterials

A large portion of nanosystems, regardless of their final form, begin as 0D nanomaterials. These systems can be synthesized using a number of different methods and substrates, such as bacteria, yeast, fungi, plant material, live plants, viruses, and pure enzymes, amongst others (Table 1, Table 2, Table 3, Table 4 and Table 5). Each of these substrates requires slightly different methods in order to properly synthesize the nanomaterial.

Table 1. Nanoparticles produced by bacteria.

Table 2. Nanoparticles produced by actinomycetes and yeast.

Table 3. Nanoparticles produced by fungi.

Table 4. Nanoparticles produced by algae.

Table 5. Nanoparticles produced by plant extracts.

3.2.1. Bacteria

Bacteria that can be utilized in the green synthesis of nanomaterials belong to a large group of unicellular organisms that have cell walls but lack organelles and an organized nucleus. Although some strains of bacteria can be very dangerous, many strains occur naturally in the body and pose little to no harm to someone working with them. Further, many of these strains, such as Escherichia coli (E. coli) and Bacillus subtilis, are very easy to culture, and their genetic code can be easily altered. Because of these characteristics, nanoparticle synthesis in bacteria is a feasible process. To utilize bacteria for the synthesis of nanomaterials, the bacteria are first grown aerobically to a desired optical density, and then the growth medium containing the cells is combined with the nanoparticle precursor. After an incubation period, and a visible color change in the media, the media is centrifuged at high speed (≥10,000 RPM); the supernatant from this spin contains a suspension of the nanomaterials [4]. Different strains of bacteria and different precursors determine the final morphology and size of the nanoparticle (Table 1). For instance, Gurunathan et al. and Sweeney et al. reported synthesis of different shapes of silver, gold, and cadmium nanoparticles due to their interaction with different biomolecules. Gurunathan et al. reported that the proteins on the exterior of the cell wall of E. coli interacted with the silver nitrate and chloroauric solutions to produce irregular and triangular morphologies of silver and gold nanoparticles [4].

However, in the synthesis of cadmium nanoparticles, glutathione and cysteine desulfhydrase in the interior of the E. coli were primarily involved in the formation of the spherical morphology [4]. Whether synthesis is intracellular or extracellular, and how the precursor interacts with bioactive molecules, can also be linked to the final size of the nanoparticle: in extracellular synthesis using bacteria, nanoparticles are typically larger than those synthesized intracellularly. Nanomaterials made of selenium, ferrous oxide, zinc sulfide, and lead sulfide have been synthesized using bacterial systems (Table 1). Nanoparticles can be stabilized and reduced by a number of different active molecules common in living organisms. For bacteria, amino acids of proteins present on the cell wall and in the cytosol, such as tyrosine and tryptophan, are able to reduce the nanoparticles and keep them stable. Further, sugars such as aldoses and ketoses can serve as reducing/stabilizing agents. Specifically, the amino acids in the cell walls and inside the cells function as a protective capping layer, which renders the nanoparticles non-toxic to mammalian cells [116]. These active molecules that exist in and on different bacterial species react with the metal ions and reduce them, allowing the metal ions to react with one another to facilitate the creation of higher-ordered structures such as spherical nanoparticles.

3.2.2. Yeast (Live and Extract)

Similar to bacteria, yeast are unicellular organisms; they belong to the fungus family. The most traditional and common use of yeast involves the species Saccharomyces cerevisiae, which converts carbohydrates to alcohols and carbon dioxide. This species is used in baking and in the creation of alcoholic beverages through a process known as fermentation. Not all species of yeast are as benign as those used in baking; some, such as Candida albicans, can cause life-threatening systemic and bloodstream infections [132].

The use of yeast cells allows for the synthesis of some nanosystems different from those made with bacteria. Silver, gold, cadmium sulfide, lead sulfide, ferrous oxide, selenium, and antimony nanoparticles have all been synthesized using yeast species (Table 2). With the more common nanomaterial compositions, such as silver and gold, nanosystems can be synthesized using live cells or cell extracts. Sivaraj et al. reported successful synthesis of silver chloride nanoparticles with proteins extracted from commercial yeast. To form the nanoparticles, the group treated commercial yeast extracts with precursor solutions and allowed them to incubate for 24 h. After this incubation, the group collected the solution and sterile-filtered it to obtain a solution containing only the nanoparticles. The group also showed that, specifically, the primary amines of certain proteins embedded in the cell wall of the yeast were responsible for the reduction of the silver chloride into nanoparticles. This particular nanoparticle was also shown to have advantageous anti-mycobacterial properties [32]. In an early report, Kowshik et al. showed nanoparticles synthesized by silver-tolerant MKY-3 yeast cells [133]. These nanoparticles had different morphologies and sizes depending on their synthesis conditions (concentration of silver chloride, pH, etc.). This particular study suggests that excreted biochemical reducing agents were responsible for the extracellular reduction of silver chloride [133]. However, extracellular synthesis is not the most common method of nanoparticle synthesis by yeast; most other groups reported that synthesis, in their models, occurred intracellularly, with enzymes within the cell responsible for the creation of the nanosystem. Kowshik et al. followed up their prior work by reporting that Schizosaccharomyces pombe and Torulopsis sp. were able to intracellularly synthesize cadmium sulfide and lead sulfide nanoparticles, respectively.
Differing from their prior work, Kowshik et al. showed that phytochelatin synthase was responsible for the intracellular synthesis of these nanoparticles [134]. Interestingly, almost all of the publications analyzed reported that the nanoparticles they synthesized functioned exceptionally well in different biomedical applications. The broad term "biological applications" reflects the wide variety of ways that nanoparticles can be utilized; for instance, Saccharomyces cerevisiae produces spherical silver nanoparticles that effectively eliminate mycobacteria in culture [32]. Similar to other living organisms, yeast are capable of producing proteins with specific amino acids that are able to reduce and stabilize the nanoparticle. Specific to yeast, quinones, organic molecules derived from aromatic compounds, are reported to assist in the production of nanoparticles. The oxidoreductases are activated when the internal pH becomes more basic, which subsequently allows them to reduce the metal ions. The quinones are strong nucleophiles with redox properties suitable for facilitating the conversion from simple metal ions to higher-ordered nanoparticles [5,135].

3.2.3. Fungi

An umbrella term that technically includes yeast, fungi are eukaryotic organisms that acquire their food by secreting digestive enzymes into their immediate environment and then absorbing the dissolved molecules. Their defining characteristic, however, is chitin, a long-chain polymer and derivative of glucose that reinforces their cell walls. The cell walls of fungi, in addition to containing chitin, can also facilitate the formation of nanoparticles of different shapes, sizes, and compositions. Production of nanoparticles can occur both intracellularly and extracellularly, driven by enzymes and protein residues. To form nanomaterials from fungi, the fungus is first retrieved and incubated in broth, shaken for 72 h to produce a biomass, which is then filtered. After extensive washing, the biomass is incubated with the nanoparticle precursor, and the resulting solution, after 24 h, contains the nanoparticles [141]. Mukherjee et al. reported that Verticillium, a genus most commonly known for the Verticillium wilt that can decimate crops worldwide, can form silver nanoparticles on the cell wall by reducing aqueous silver nitrate [142]. Sanghi et al., Ingle et al., and Gade et al., amongst others noted in Table 3, showed extracellular silver nanoparticle synthesis by different fungal species [143,144,145]. Additionally, it has been shown that intracellular proteins can facilitate nanoparticle synthesis: Ahmad et al. and Gericke et al. showed that intracellular enzymes of Trichothecium and Verticillium luteoalbum, respectively, form gold nanospheres and nanorods [146,147]. As with nanoparticles formed by other green methods, nanoparticles formed by fungi have applications in many different fields, ranging from medicine to optoelectronics [34,148]. Medical and therapeutic applications of nanoparticles are exciting and warrant future study. Phillip et al.
[127] reported that edible mushroom extracts have chemotherapeutic properties, amongst other helpful characteristics, and stated that nanoparticles synthesized from these extracts carry similar properties. Phillip et al., amongst others, also reported that amino acid residues in fungi, such as cysteine, are able to assist in the production of nanoparticles [127]. As in bacteria, the amino acids in the fungal cell wall function as capping and stabilizing agents. Further, when these nanoparticles are applied in therapeutics, as referenced by Phillip et al., they are non-toxic, which is not the case for traditionally synthesized nanoparticles.

3.2.4. Algal Species

Algae are a group of photosynthetic, eukaryotic organisms that are not typically considered plants. These chlorophyll-containing single-celled or multicellular (depending on the species) organisms grow in water but lack the true stems, leaves, and vascular tissues that characterize plants. Their effects on humans range from therapeutic, as with Spirulina, which contains a high concentration of natural nutrients [167], to lethal if their cells or toxins are ingested, as with Anabaena [168]. In recent years, a considerable number of algal species have been identified for their ability to catalyze the synthesis of nanomaterials (Table 4). To form nanomaterials from algal species, the samples are first thoroughly dried, then ground into a fine powder, added to water, incubated for 24 h, and filtered. Once filtered, the biomass filtrate is combined with the nanomaterial precursor and incubated at room temperature until the color of the solution changes, indicating the formation of the nanomaterial. Similar to fungi and bacteria, some algal species, such as Tetraselmis kochinensis, facilitate the formation of gold nanoparticles intracellularly, through enzymes on the cell wall and in the cytoplasm [169]. Compared with those classes of agents, however, algal species, on average, produce a greater diversity of nanoparticle morphologies. For instance, species such as Cystophora moniliformis [170], Scenedesmus sp. [171], and Leptolyngbya valderianum [172] all form spherical nanoparticles, consistent with fungus- and bacteria-catalyzed nanoparticles. Beyond nanospheres, Sinha et al. showed that Pithophora oedogonia is capable of synthesizing cubical and hexagonal silver nanoparticles [173]. Beyond morphology, algae and other green agents share similar bioactive molecules.
A large majority of the nanoparticles synthesized by algal species recorded in Table 4 are reduced/stabilized by enzymes or proteins in the cytosol or on the membrane. Finally, a majority of nanoparticles synthesized by algal species function as potent bactericides. As for the bioactive molecules involved in the production of nanoparticles, algae utilize many of the same molecules as other classes, but also some slightly different ones. In algae, polysaccharides, as well as protein residues, are able to reduce and stabilize the nanoparticles. One major advantage of employing algae is the presence of a wide variety of phytochemicals: amino acids, alkaloids, carbohydrates, flavonoids, saponins, sterols, tannins, and phenolic compounds are all present in certain algal species, such as Sargassum tenerrimum. Each of these compounds, once purified, can function in its own way to further manipulate the size, shape, and active properties of the nanomaterial. This characterization by Kumar et al. paves the way for future work and applications of algae in nanomaterial synthesis [174].

3.2.5. Plant and Plant Extract

Arguably the most interesting and most environmentally friendly form of green synthesis uses plant or food scraps to create nanomaterials. Typically, the plant or food scraps undergo a process to extract certain chemical compounds from them (Figure 5). In general, the plant or food scrap is dried, ground or cut up, then immersed in hot water for a period of time, and finally filtered and stored at 4 °C [1].


Figure 5. Creating nanoparticles from plant material. The creation of nanomaterials can be an incredibly complicated process, as when FSP is employed, but it can also be very straightforward: by drying onion peels, grinding them into a powder, and introducing that powder to a heated solution containing metal substrates, nanoparticles can be created.

This filtered reagent can contain a number of different bioactive molecules depending on the specific plant material from which it is extracted. Commonly, flavonoids, terpenoids, and phenols are present in plant extracts [189,190], although proteins, glucosides, and polysaccharides are also implicated in the synthesis of nanoparticles [1,191,192]. These bioactive molecules contain functional groups that act as reducing and stabilizing agents for the nanoparticle precursors. Further, this method of nanoparticle synthesis eliminates the need for harsh chemicals that pose risks to both the user and the environment. Additionally, because the bioactive molecules are extracted using only heated water, the need for energy-intensive techniques is also eliminated. Because of the lack of harsh reagents, a majority of the synthesized nanoparticles can be utilized in biomedical applications.

According to cited literature, silver nanoparticles are the most common type of nanoparticle synthesized by plant material ( Table 5 ).

Besides silver, copper, gold, and selenium nanoparticles have also been reported to be produced from plant material. The synthesis of silver nanoparticles can be accomplished with a number of different plant materials whose extracts contain reducing, stabilizing, and capping agents. For instance, in addition to the agents mentioned in the first paragraph, tea polyphenols, vegetable oil, Carpesium cernuum (flora native to China), Cannabis sativa, and black currant (a berry native to northern Europe and Asia), amongst many others, can function as reducing, stabilizing, and capping agents. Moulton et al. reported in 2010 that colloidal silver nanoparticles could be synthesized using tea leaves containing polyphenols. Their method of synthesis mirrors others that have been reported: the group took tea powder (dried and ground tea leaves), boiled it, filtered it, and added silver nitrate to produce silver nanoparticles, which were confirmed by transmission electron microscopy (TEM) [108]. The group then went on to test the toxicity of the nanoparticles on biological systems through cell viability and membrane integrity assays. The results were promising, as the nanoparticles showed no toxicity and showed potential biocompatibility [108]. Another common plant material that many people have in their kitchen is vegetable oil. Kumar et al. took a slightly different approach to the green synthesis of metallic nanoparticles by utilizing free radicals that are commonly present in household paints made from certain vegetable oils, including cashew oil [193]. The group took advantage of the naturally occurring free-radical exchange during the oxidative drying of oils to reduce silver benzoate (a silver salt commonly used in silver nanoparticle synthesis) to silver nanoparticles. The group also utilized alkyd resin as the protecting agent, and fatty acids and aldehydes from the oils as stabilizing agents for the reaction.
The result of their reaction was silver-nanoparticle-embedded paint with antimicrobial properties [193]. Another plant agent that can be used in the synthesis of nanoparticles is aloe vera. Aloe vera is commonly used in traditional medicines to alleviate a number of different symptoms; it is also commonly used today to remedy sunburns. Fardsadegh et al. successfully employed aloe vera in the synthesis of selenium nanoparticles that carried antifungal and antibacterial properties. Although the Fardsadegh group used the hydrothermal approach, highlighted above as a classic synthesis method, their method is more environmentally friendly than others have noted, as its most substantial downfall is high energy consumption [191]. Just as the bioactive molecules in plant leaves can be taken advantage of, so can the molecules present in our favorite spices. The Myristica fragrans fruit comes from a species of evergreen tree in Indonesia. This fruit, when dried and ground, produces nutmeg and mace, common cooking spices. When the pericarp, the non-seed part of the fruit, is dried, crushed, added to water, and boiled with cupric oxide or silver nitrate, metallic nanoparticles form. Sasidharan et al. went on to show that flavonoids, quercetin, and phenols from the Myristica fragrans were primarily involved in the stabilization and reduction of the nanoparticles. Further, they found that the silver nanoparticles were particularly effective at breaking down bacterial cell walls. Additionally, Sasidharan et al. found that the copper nanoparticles were effective as catalysts for the construction of triazole rings [194].

Beyond plant extracts, live plants (Table 6) have also been implicated in the synthesis of nanoparticles. Utilizing live plants is one of the greenest methods, if not the greenest, by which to synthesize nanomaterials. Although this method is not very common and is seemingly not as reproducible as some other methods, it has tremendous upsides if it can be mastered. The first report of synthesis of nanoparticles in a living plant came from Gardea-Torresdey et al., who showed in 2003 that silver and gold nanosystems could be synthesized using living alfalfa sprouts. The group showed that silver from silver nitrate present in the soil in which the plant was grown was transported in its original oxidation state up the shoot of the plant. At that point, the silver was reduced inside the plant and formed into nanoparticles. Once the nanoparticles began to form, they conglomerated into nanowire-like systems [195]. The other living plant species shown to be able to synthesize nanoparticles are black mustard (Brassica juncea) and red fescue (Festuca rubra), as observed by Marchiol et al. After growing the plants to full size, they exposed them to 1000 ppm of silver nitrate for 24 h. During this time, the plants absorbed the silver nitrate into their roots and stems. While the silver nitrate was in the plants' systems, it was stabilized and reduced by sugars (glucose and fructose), phenols, and citric and ascorbic acids, all of which are commonly present in plants. After the reduction, the silver began to form nanoparticles that were visible by TEM [196].

Table 6. Nanoparticles produced by live plants.

3.3. 1D Nanomaterials

Of the nanomaterials, 0D structures (where all dimensions of the product are below 100 nm) are the most common form, since other, higher-level structures are often composed of smaller building blocks, such as nanoparticle-studded nanorods for electrical conduction (discussed further in the higher-ordered structures section). Nanorods themselves, however, are not considered 0D nanomaterials; they fall into the category of 1D nanomaterials, as their lengths extend beyond the 100 nm range. The structures shown in Table 7 all fall into this category. Structures synthesized to sizes in the micron range are often built from smaller, 0D structures. For instance, Lin et al. determined that the fabrication of silver nanowires by extracts of the Cassia fistula leaf evolved from individual nanoparticles [204]. This is not uncommon for higher-ordered structures; other groups, such as Nadagouda et al., witnessed a similar phenomenon. Specifically, they noted that the solvent medium was critical for 1D nanomaterial self-assembly: when water was employed as the solvent, Ag and Pd began to form rod-like structures, whereas isopropanol yielded wire-like structures. This difference in structure is a result of the different polarities of the solvents and their subsequent interactions with the substrate. Further, the combination of multiple forms of nanomaterials can yield interesting applications. Horta-Piñeras et al. were successful in decorating silver nanowires with spherical silver nanoparticles that showed exceptional inhibition of E. coli [205]. This application, though common for nanoparticles, is not as common for 1D nanomaterials. Because of their diameter, length, and conductivity, metallic nanowires and nanorods are often utilized in electrical and optical applications [204,206,207,208].

Table 7. 1D nanomaterials produced from different plant material or other species.

3.4. Higher Ordered Structures

After nanoparticles and other 0D structures form from their respective reagents, there exists the possibility, given the right conditions, that they can form higher-level structures outside the 100 nm range. Such higher-ordered structures include nanowires, nanotubes, and nanorods (1D) and nanosheets (2D). These nanosystems possess different functional properties than their less-ordered relatives, such as the ability to transmit electricity, which makes them excellent electrodes in batteries [224]. As with nanoparticles and other low-dimensional nanomaterials, higher-ordered nanosystems can also be synthesized using environmentally friendly agents such as plant material, which is the most common agent used in their synthesis. The higher-ordered structures can be composed of metals such as silver, lead, and gold, as well as other elements such as carbon. Lin et al. were able to show that broth from Cassia fistula was effective as a capping and reducing agent for silver nitrate. The group was able to synthesize nanowires up to 10 micrometers long with diameters between 50 nm and 60 nm. The group determined that when the Cassia fistula broth and silver nitrate are first mixed, small nanoparticles form; however, when the solution was left to shake for multiple hours, and eventually days, the nanoparticles grew onto each other in an Ostwald ripening process. When the mixture was subjected to high temperatures (>400 °C), irregularly shaped nanowires formed [204]. Another green synthesis "reagent" is viruses. In 2003, Mao et al. reported the successful synthesis of zinc sulfide (ZnS) and cadmium sulfide (CdS) nanowires. The group found that the ZnS and CdS originally formed quantum dots before becoming further organized into nanowires. However, the viruses used in the synthesis were engineered to express pVIII fusion proteins that effectively guided the formation of the nanowires [225].
Synthesis of higher-ordered nanosystems has also been seen in algae, bacteria, and fungi. In algae, Parial et al. observed that when the solution containing the algal biomass and hydrogen tetrachloroaurate had a lower pH (pH 5), the resulting nanomaterials were nanorods, as compared to the nanoparticles observed at higher pH [215]. Bacteria are equally capable of producing gold nanowires. He et al. found that Rhodopseudomonas capsulata was able to facilitate the formation of gold nanowires from nanoparticles when the concentration of HAuCl4 was varied; they believe that proteins from the bacteria were the major bioactive molecules involved in the nanoparticle and nanowire formation [214]. In fungi, Das et al. found that when a solution of chloroauric acid and cell-free extract of Rhizopus oryzae was left to incubate for more than 24 h, gold nanowires began to form as a result of Ostwald ripening; this process, however, subsequently reduced the concentration of nanoparticles in the solution [216]. A common feature of the synthesis of higher-ordered structures is that they form from lower-ordered structures when certain conditions are modulated.
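The Ostwald ripening invoked above is often modeled with the Lifshitz-Slyozov-Wagner (LSW) coarsening law, in which the cube of the mean particle radius grows linearly in time: r(t)³ = r₀³ + Kt. As a hedged sketch (the 10 nm seed radius and the rate constant K below are hypothetical values chosen purely for illustration):

```python
def lsw_mean_radius(r0_nm: float, k_nm3_per_h: float, t_h: float) -> float:
    """LSW coarsening law: mean particle radius obeys r^3 = r0^3 + K*t."""
    return (r0_nm ** 3 + k_nm3_per_h * t_h) ** (1.0 / 3.0)

# Hypothetical 10 nm seeds with K = 500 nm^3/h: the mean radius grows,
# but ever more slowly, as small particles dissolve to feed larger ones.
for t in (0, 24, 72):  # hours of incubation/shaking
    print(f"{t:3d} h -> {lsw_mean_radius(10.0, 500.0, t):.1f} nm")
```

The sub-linear growth of r(t) is consistent with the observation that wire formation requires hours to days of continued incubation, and that the nanoparticle population is consumed as the larger structures grow.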

4. Conclusions

The field and application of nanomaterials is expanding, and more research continues to be conducted. Traditional synthesis methods for nanomaterials (including sol-gel, chemical vapor deposition, laser ablation, flame spray pyrolysis, ultrasound, and hydrothermal) are harmful to the environment due to their use of toxic reagents and high energy requirements. The usage of these methods is both ignorant and negligent, considering their distressing effects on the environment during the current rise in climate change. Green synthesis methods pose minimal to no harm to the environment, or indeed to the individuals involved in the fabrication, and are as efficacious as the traditional synthesis methods. It is our belief that nanomaterials will revolutionize our daily lives, and we have seen first-hand the power of nanoparticles with the SARS-CoV-2 vaccines. It is also our belief that humanity is heading towards disaster with regard to uncontrolled climate change. With these two points in mind, we believe that it is critical to conduct this groundbreaking research in an environmentally conscious way. Research into the green synthesis of nanotechnology will lead us to develop more impactful medical equipment, create new supercomputer conductors, and examine space, the final frontier, with revolutionized sensors. Our article reviewed the methods of green nanomaterial production with a focus on the active molecules of the various microorganism substrates that facilitate the synthesis. The set of active molecules from natural biological systems continues to broaden with the number of organisms that can be utilized. Manipulation of the active molecules in nanomaterial synthesis results in refinement and precision of nanomaterial morphology, along with antimicrobial, stabilizing, reducing, and capping properties. The tailoring of nanosystem synthesis can yield small 0D nanoparticles, micron-long nanowires, or even nanosheets for specific applications.
Overall, pinpointing the active molecules in the green synthesis of nanomaterials allows continual advancements and manipulation of physical and chemical properties of nanomaterials applicable to the wider scientific community. As the world continues to adjust to climate change, continuing development in green synthesis of nanomaterials is of great importance for the preservation of environmental health, energy and ethics of scientific research.

Acknowledgments

We are grateful for the BioRender software used to create the graphical abstract and figures.

Author Contributions

Conceptualization, M.H. and A.G.; literature gathering, M.H.; writing—original draft preparation, M.H. and A.G.; writing—review and editing, M.H., M.D. (Melissa DeBella), M.D. (Maria DiBella) and A.G.; project administration and supervision, A.G. All authors have read and agreed to the published version of the manuscript.

This work received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Data Availability Statement and Conflicts of Interest

The authors declare no competing financial interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Proposed AASHTO Guidelines for Performance-Based Seismic Bridge Design (2020)

Chapter 2: Literature Review and Synthesis


Literature Review

Purpose of Literature Review

Performance-based seismic design (PBSD) for infrastructure in the United States is a developing field, with new research, design, and repair technologies; definitions; and methodologies being advanced every year. A synthesis report, NCHRP Synthesis 440: Performance-Based Seismic Bridge Design (Marsh and Stringer 2013), was created to capture PBSD understanding up to that point. This synthesis report described the background, objectives, and research up until 2011 to 2012 and synthesized the information, including areas where knowledge gaps existed. The literature review in this research report focuses on new information developed after the efforts of NCHRP Synthesis 440. The intention is that this research report will fuel the next challenge: developing a methodology to implement PBSD for bridge design.

Literature Review Process

Marsh and Stringer (2013) performed an in-depth bridge practice review by sending a questionnaire to all 50 states, with particular attention to regions with higher seismic hazards. The survey received responses from a majority of those agencies. This process was continued in the current project with a request for new information or research that the state department of transportation (DOT) offices have participated in or are aware of through other organizations. The research team reached out to the list of states and researchers in Table 1; an X is placed in front of the names of those who responded. The team also examined the websites of the state DOTs that participated to investigate whether something was studied locally, especially work being developed in California. The research team made an additional effort to perform a practice review of bridge designs, research, and other design industries, specifically the building industry.
The building industry has been developing PBSD for more than 20 years, and some of its developments are applicable to bridge design. These combined efforts have allowed the research team to assemble an overview of the state of PBSD engineering details and deployment since Marsh and Stringer's (2013) report was published. NCHRP Synthesis 440 primarily dealt with the effects of strong ground motion shaking. Secondary effects such as tsunami/seiche, ground failure (surface rupture, liquefaction, or slope failure), fire, and flood were outside the scope of this study. Regardless, their impact on bridges may be substantial, and investigation into their effects is undoubtedly important.

The following e-mail was sent to the owners and researchers.

Dear (individual):

We are assisting Modjeski & Masters with the development of proposed guidelines for Performance-Based Seismic Bridge Design, as part of NCHRP [Project] 12-106. Lee Marsh and our Team at BergerABAM are continuing our efforts from NCHRP Synthesis 440, which included a literature review up to December of 2011. From this timeframe forward, we are looking for published research, contractual language, or owner documents that deal with the following categories:

1. Seismic Hazards (seismic hazard levels, hazard curves, return periods, geo-mean vs. maximum direction, probabilistic vs. deterministic ground motions, conditional mean spectrum, etc.)
2. Structure Response (engineering design parameters, materials and novel columns, isolation bearings, modeling techniques, etc.)
3. Damage Limit States (performance descriptions, displacement ductility, drift ratios, strain limits, rotation curvature, etc.)
4. Potential for Loss (damage descriptions, repairs, risk of collapse, economical loss, serviceability loss, etc.)
5. Performance Design Techniques (relating hazard to design to performance to risk, and how to assess [these] levels together)

If you are aware of this type of resource, please provide a contact that we can work with to get this information or provide a published reference we can gather. Your assistance is appreciated. We want to minimize your time, and ask that you respond by Wednesday, 8 February 2017.

Thank you again,
Research Team

Synthesis of PBSD (2012–2016)

Objectives of NCHRP Synthesis 440

The synthesis gathered data from a number of different but related areas. Marsh and Stringer (2013), herein referred to as NCHRP Synthesis 440, set the basis for this effort. The research report outline follows what has been added to the NCHRP Synthesis 440 effort since 2012.
The information gathered that supplements NCHRP Synthesis 440 includes, but is not limited to, the following topics:

• Public and engineering expectations of seismic design and the associated regulatory framework

Table 1. List of state DOT offices and their participation: Alaska DOT, Arkansas DOT, California DOT (Caltrans), Illinois DOT, Indiana DOT, Missouri DOT, Montana DOT, Nevada DOT, Oregon DOT, South Carolina DOT, Utah DOT, and Washington State DOT.

• Seismic hazard analysis
• Structural analysis and design
• Damage analysis
• Loss analysis
• Organization-specific criteria for bridges
• Project-specific criteria

Where new or updated information is available for these areas, a summary is included. Marsh and Stringer (2013) also identified gaps in the knowledge base of PBSD, current as of 2012, that need to be closed. Knowledge gaps certainly exist in all facets of PBSD; however, the key knowledge gaps that should be closed in order to implement PBSD are covered:

• Gaps related to seismic hazard prediction
• Gaps related to structural analysis
• Gaps related to damage prediction
• Gaps related to performance
• Gaps related to loss prediction
• Gaps related to regulatory oversight and training
• Gaps related to decision making

These knowledge gaps have been filled in somewhat in this research report but, for the most part, these areas are still the key concepts that require additional development to further the development of a PBSD guide specification.

Public and Engineering Expectations of Seismic Design and the Associated Regulatory Framework

The public expectation of a structure, including a bridge, is that it will withstand an earthquake, but there is limited understanding of what that actually means. Decision makers struggle to understand how a bridge meeting the current requirements of the AASHTO Guide Specifications for LRFD Seismic Bridge Design (2011), herein referred to as the AASHTO guide specifications, will perform after either the expected (design) earthquake or a higher-level earthquake. Decision makers understand the basis of life safety, wherein the expectation is that no one will perish from a structure collapsing, but often mistakenly believe that the structure will also be usable after the event. In higher-level earthquakes, and even in some lower-level events, this is not true without repair, retrofit, or replacement.
In the past decade, there has been increased awareness among owners and decision makers of the basis of seismic design. As a result, a need has developed for performance criteria so that economic and social impacts can be interwoven with seismic design in the decision process (see Figure 1). Several states, including California, Oregon, and Washington, are working toward resiliency plans, although these are developed under different titles or programs within the states. Resiliency has been defined in several ways: (1) the amount of damage from an event, measured in fatalities, structural replacement cost, and recovery time, and (2) the time to restoration of lifelines, reoccupation of homes and structures, and, in the short term, resumption of normal living routines. The California DOT (Caltrans) has generated risk models and is in the process of developing a new seismic design specification to address PBSD in bridge design. The risk models and specifications are not yet published, but their use in PBSD is discussed in greater detail later in this chapter.

The State of Washington

The State of Washington's resiliency plan, outlined in Washington State Emergency Management Council–Seismic Safety Committee (2012), works to identify actions and policies before, during, and after an earthquake event that can leverage existing policies, plans, and initiatives to realize disaster resilience within a 50-year life cycle. The hazard level used for transportation planning is the 1000-year event. The goals for transportation systems vary depending on the type of service a route provides, as shown in the following components of the plan. For major corridors such as Interstates 5, 90, and 405 and the SR 520, I-90, and Hood Canal floating bridges, the target timeframe for response and recovery is between 1 to 3 days and 1 to 3 months, depending on location. The anticipated timeframe based on current capacity, without modifications, is between 3 months to 1 year and 1 to 3 years, depending on location. The actual response and recovery time will depend on a number of factors. For example:

1. The number of Washington State DOT personnel who are able to report to work may be limited by a variety of circumstances, including where personnel were at the time of the earthquake and whether they sustained injuries.
2. Bridges and roadways in earthquake-affected areas must be inspected. How long this takes will depend on the number and accessibility of the structures and the availability of qualified inspectors.
3. Some bridges and segments of road may be rendered unusable or only partially usable as a result of the earthquake or secondary effects. The response and recovery timeframe will depend on the number, the location, and the extent of the damage.
4. Certain earthquake scenarios could result in damage to the Ballard Locks and cause the water level in Lake Washington to drop below the level required to operate the floating bridges.
5. Depending on the scenario and local conditions, liquefaction and slope failure could damage both interstates and planned detours.

Figure 1. PBSD decision-making process (Guidelines Figure 2.0-1). References to guidelines figures and tables within parentheses indicate the proposed AASHTO guidelines.

During the first 3 days after the event, the Washington State Department of Transportation (Washington State DOT) will inspect bridges and begin repairs as needed. Washington State DOT's first priority will be to open key routes for emergency response vehicles. Subsequent phases of recovery will include setting up detours where necessary and regulating the type and

volume of traffic, to give the public as much access as possible while damaged roads and bridges are repaired. For major and minor arterials, which encompass arterial roadways (including bridges) other than the interstates, and therefore include state highways and many city and county roads, the target timeframe for response and recovery is between 0 to 24 hours and 3 months to 1 year, depending on location; the percentage of roadways that are open for use will increase over this period. The anticipated timeframe based on current capacity is between 1 week to 1 month and 1 to 3 years, depending on location; again, the percentage of roadways that are open for use will increase over this period. The goal of the Washington State Emergency Management Council's resiliency plan is to establish a means to coordinate agencies, public–private partnerships, and standards toward these resiliency goals. The plan outlines goals for recovery times for transportation systems in terms of hours, days, weeks, months, and years, with targets to achieve different levels of recovery (see Table 2). Similar recovery timeframe processes were established for service sectors (e.g., hospitals, law enforcement, and education); utilities; ferries, airports, ports, and navigable waterways; mass transit; and housing. The overall resiliency plan also discusses the degree to which the recovery of one component or sector would depend on the restoration of another. The key interdependencies that the participants identified include information and communication technologies, transportation, electricity, fuel, domestic water supplies, wastewater systems, finance and banking, and planning and community development. It appears that the implementation of the Washington State Emergency Management Council's initiative, originally assumed in 2012 to take 2.5 to 3 years, has not seen significant development since then.
However, the State's initiative to develop a more resilient community has been extended down to the county level, with King County's efforts referenced in Rahman et al. (2014), and to the city level, with the City of Seattle referenced in CEMP (2015). This reflects the commitment needed not only by the legislature and the state departments but also by other agencies (e.g., county, city, or utilities) and the public to take an interest in, and provide funding for, the development of a resiliency plan. The recovery continuum is presented graphically in Figure 2. Developing this relationship with other agency plans is an iterative process that will take time, as shown in Figure 3. Identifying the critical sectors of the agency is necessary to develop a resiliency model and determine how to approach a disaster recovery framework. King County worked from Washington State's initiative to develop Figure 4.

The Oregon DOT

Oregon DOT has developed a variation of the approach identified by the State of Washington; further discussion is found later in this chapter.

Other Resilience Documents

The building industry has recently seen the development of two additional documents that address PBSD in terms of expectations and process. The REDi Rating System from REDi (2013) sets an example for incorporating resilience-based design into the PBSD process. This document outlines structural resilience objectives for organizational resilience, building resilience, loss assessment, and ambient resilience to evaluate and rate the decision making and design methodology using PBSD for a specific project.

The document is one of the only references that addresses a system to develop probabilistic methods to estimate downtime. The overall intent is to provide a roadmap to resilience. This roadmap is intended to allow owners to resume business operations and to provide livable conditions quickly after an earthquake.

Table 2. Washington State's targets of recovery. Source: Washington State Emergency Management Council–Seismic Safety Committee (2012). The recovery levels are defined as follows:
• Minimal: a minimum level of service is restored, primarily for the use of emergency responders, repair crews, and vehicles transporting food and other critical supplies.
• Functional: although service is not yet restored to full capacity, it is sufficient to get the economy moving again; for example, some truck/freight traffic can be accommodated. There may be fewer lanes in use, some weight restrictions, and lower speed limits.
• Operational: restoration is up to 80 to 90 percent of capacity; a full level of service has been restored and is sufficient to allow people to commute to school and to work.
The table reports the time needed for recovery to 80 to 90 percent operational given current conditions.

The Los Angeles Tall Buildings Structural Design Council (LATBSDC 2014) created an alternative procedure specific to their location. Design specification criteria are identified and modifications are described as appropriate for the PBSD approach to tall buildings in this localized

region. This procedure is a good example of how PBSD criteria and methodology need to be established locally, with knowledge of risk, resources, and performance needs, in order to set the criteria for true PBSD.

Figure 2. Recovery continuum process. Source: Adapted from FHWA by CEMP (2015).

Figure 3. Relationship of disaster recovery framework to other city plans. Source: CEMP (2015).

Seismic Hazard Prediction

As outlined in NCHRP Synthesis 440, the seismic hazard includes the regional tectonics and the local site characteristics, from either a deterministic or probabilistic viewpoint. The deterministic form allows the assessment of shaking at a site as a function of the controlling earthquake that can occur on all the identified faults or sources. The probabilistic approach

defines an acceleration used in design that would be exceeded during a given window of time (e.g., a 7% chance of exceedance in 75 years). The following subsections provide a summary of procedures currently used within AASHTO, as well as new issues that should eventually be addressed in light of approaches used by the building industry.

AASHTO Probabilistic Approach

As summarized in the AASHTO guide specifications, the current approach used by AASHTO involves the use of a probabilistic hazard model with a nominal return period of 1000 years. Baker (2013) noted that probabilistic seismic hazard analysis involves the following five steps:

1. Identify all earthquake sources capable of producing damaging ground motions.
2. Characterize the distribution of earthquake magnitudes (the rates at which earthquakes of various magnitudes are expected to occur).
3. Characterize the distribution of source-to-site distances associated with potential earthquakes.
4. Predict the resulting distribution of ground motion intensity as a function of earthquake magnitude, distance, and so forth.
5. Combine uncertainties in earthquake size, location, and ground motion intensity, using a calculation known as the total probability theorem.

While implementation of the five steps in the probabilistic approach is beyond what most practicing bridge engineers can easily perform, AASHTO, working through the U.S. Geological Survey, developed a website hazard tool that allows implementation of the probabilistic procedure based on the latitude and longitude of a bridge site. The product of the website includes peak ground acceleration (PGA), spectral acceleration at 0.2 s (Ss), and spectral acceleration at 1 s (S1). These values are for a reference-site condition comprising soft rock/stiff soil, having a time-averaged shear wave velocity (Vs) over the upper 100 feet of the soil profile equal to 2500 feet per second (fps).
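As a sanity check on the hazard-level arithmetic, the "7% chance of exceedance in 75 years" and the nominal 1000-year return period are related under the usual Poisson (memoryless) occurrence assumption. The following sketch is illustrative only and is not part of any AASHTO or U.S. Geological Survey tool:

```python
import math

def return_period(p_exceed: float, t_years: float) -> float:
    """Return period (years) implied by a probability of exceedance
    p_exceed over a window of t_years, assuming Poisson occurrence:
    p = 1 - exp(-t / T)  =>  T = -t / ln(1 - p)."""
    return -t_years / math.log(1.0 - p_exceed)

# A 7% chance of exceedance in 75 years corresponds to a return period
# of roughly 1033 years, i.e., the nominal "1000-year" hazard level.
T = return_period(0.07, 75.0)
```

The same relation can be run in reverse to check that a quoted return period reproduces the stated exceedance probability over the design life.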
The Geological Survey website can also correct for local site conditions following procedures in the AASHTO Guide Specifications for LRFD Seismic Bridge Design. One of the limitations of the current U.S. Geological Survey hazard website is that it is based on a seismic hazard model developed in 2002. The Geological Survey updated its seismic model in 2008 and again in 2014; however, these updates are currently not implemented within the AASHTO hazard model on the Geological Survey's website.

Figure 4. Resilient King County critical sectors and corresponding subsectors. Source: Rahman et al. (2014).

Oregon and the State of Washington have updated the seismic hazard map used by the Oregon DOT and the Washington State

DOT to include the 2014 U.S. Geological Survey hazard model; however, most state DOTs are still using the out-of-date hazard model. Use of the outdated hazard model introduces some inconsistencies in ground motion prediction, relative to the current Geological Survey hazard website tool, at some locations. Discussions are ongoing between NCHRP and the U.S. Geological Survey to update the 2002 website tool. Another issue associated with the current AASHTO probabilistic method is that it is based on the geomean of the ground motion. In other words, the ground motion prediction equations in the hazard model are based on the geomean of recorded earthquake motions. These motions are not necessarily the largest motions. The building industry recognized that the maximum direction could result in larger ground motions and introduced maximum direction corrections. These corrections increase Ss by a factor of 1.1 and S1 by a factor of 1.3. The relevance of this correction to bridges is discussed in the next subsection of this review. The building industry also introduced a risk-of-collapse correction to the hazard model results. This correction is made to Ss and S1. The size of the correction varies from approximately 0.8 to 1.2 within the continental United States. It theoretically adjusts the hazard curves to provide a 1% risk of collapse in 50 years. The risk-of-collapse corrections were developed by the U.S. Geological Survey for a range of building structures located throughout the United States. Although no similar corrections have been developed for bridges, the rationale for the adjustment needs to be further evaluated to determine whether it should be applied to bridge structures. As a final point within this discussion of probabilistic methods within the AASHTO guide specifications, there are several other areas of seismic response that need to be considered.
These include near-fault and basin effects on ground motions, as well as a long-period transition factor. The near-fault and basin adjustments correct the Ss and S1 spectral accelerations for locations near active faults and at the edges of basins, respectively. These adjustments typically increase spectral accelerations at longer periods (> 1 s) by 10% to 20%, depending on the specifics of the site. The long-period transition identifies the point at which response spectral ordinates no longer decay in proportion to 1/T with increasing period. These near-fault, basin, and long-period adjustments have been quantified within the building industry guidance documents but remain, for the most part, undefined within the AASHTO guide specifications. As bridge discussions and research move closer to a true probabilistic format for PBSD, these issues need to be addressed as part of a future implementation process.

Correction for Maximum Direction of Motion

Over the last decade, a debate has been under way within the building industry regarding the appropriate definition of design response spectra (Stewart et al. 2011). The essence of the argument relates to the representation of bidirectional motion via response spectra. In both the AASHTO LRFD Bridge Design Specifications (2014) and the AASHTO Guide Specifications for LRFD Seismic Bridge Design (SGS), response spectra are established by defining spectral ordinates at two or three different periods from design maps developed by the U.S. Geological Survey for a return period of 1000 years. The resulting spectra are then adjusted for local site conditions, resulting in the final design spectra. In establishing the design maps for parameters such as Ss and S1, the U.S. Geological Survey has traditionally relied upon probabilistic seismic hazard analysis, which utilizes ground motion prediction equations (GMPEs) defined by the geometric mean of the two principal directions of recorded motion.
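Before turning to the formal spectral definitions, the scalar corrections discussed earlier can be illustrated with a minimal sketch. The factor values (1.1 on Ss, 1.3 on S1, and site-dependent risk coefficients of roughly 0.8 to 1.2) come from the discussion above; the function itself is an illustration, not an AASHTO or NEHRP procedure:

```python
def adjust_geomean_ordinates(ss: float, s1: float,
                             crs: float = 1.0, cr1: float = 1.0):
    """Sketch of the building-industry corrections described in the text:
    maximum-direction factors (1.1 on Ss, 1.3 on S1) combined with
    risk-of-collapse coefficients crs and cr1 (site dependent, roughly
    0.8 to 1.2; taken as 1.0 here for illustration)."""
    ss_adj = 1.1 * ss * crs
    s1_adj = 1.3 * s1 * cr1
    return ss_adj, s1_adj

# Example: geomean Ss = 1.00 g and S1 = 0.40 g become 1.10 g and 0.52 g
# when only the maximum-direction factors are applied.
ss_d, s1_d = adjust_geomean_ordinates(1.00, 0.40)
```

The larger factor at 1 s reflects the observation that the maximum-direction effect grows with period.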
In 2006, Boore introduced a new rotation-independent geometric mean definition termed GMRotI50 (Boore et al. 2006). Then, in 2010, Boore developed a new definition that does not rely upon the geometric mean, termed RotD50 spectra, which can be generically expressed as RotDNN spectra, where NN represents the percentile of response (i.e., 50 is

consistent with the median, 0 is the minimum, and 100 is the maximum). The NGA–West2 project GMPEs utilized RotD50 spectra for the ground motion models; however, the 2009 National Earthquake Hazards Reduction Program (NEHRP) provisions adopted a factor to modify the median response, RotD50, to the maximum possible response, RotD100, as the spectra for the design maps (Stewart et al. 2011). Introducing RotD100 resulted in a 10% to 30% increase in spectral ordinates relative to the geometric mean, which has traditionally been used as the basis of seismic design. In order to appreciate the impact of these choices, a brief discussion of RotDNN spectra is warranted. As described in Boore (2010), for a given recording station, the two orthogonal-component time series are combined into a single time series corresponding to different rotation angles, as shown in Equation 1:

aROT(t; θ) = a1(t)cos(θ) + a2(t)sin(θ)    (1)

where a1(t) and a2(t) are the orthogonal horizontal component acceleration time series and θ is the rotation angle. For example, consider the two orthogonal horizontal component time series, H1 and H2, shown in Figure 5. The single time series corresponding to the rotation angle θ is created by combining the Direction 1 and Direction 2 time series. Then, the response spectrum for that single time series can be obtained, as shown in the figure. The process is repeated for a range of azimuths from 0° to one rotation-angle increment less than 180°. If the rotation-angle increment is Δθ, then there will be 180/Δθ single time series, as well as 180/Δθ corresponding response spectra. For example, if Δθ = 30°, then there will be six single time series (the original two, as well as four generated time series) and six response spectra, as shown in Figure 6.
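The rotate-and-recompute procedure just described can be sketched numerically. The code below is an illustrative implementation, not taken from the report: it rotates the component pair per Equation 1, computes a 5%-damped pseudo-spectral acceleration for each rotation angle using a simple central-difference SDOF integrator, and extracts RotD50 (median) and RotD100 (maximum) at each period.

```python
import math
from statistics import median

def sdof_peak_disp(ag, dt, period, zeta=0.05):
    """Peak displacement of a damped linear SDOF oscillator under base
    acceleration ag, via an explicit central-difference scheme (the
    damping term uses a backward-difference velocity). Requires
    dt << period for stability and accuracy."""
    w = 2.0 * math.pi / period
    u_prev, u, peak = 0.0, 0.0, 0.0
    for a_g in ag:
        v = (u - u_prev) / dt
        acc = -a_g - 2.0 * zeta * w * v - w * w * u
        u_prev, u = u, 2.0 * u - u_prev + dt * dt * acc
        peak = max(peak, abs(u))
    return peak

def rotdnn_spectra(a1, a2, dt, periods, dtheta_deg=5):
    """RotD50 and RotD100 pseudo-spectral accelerations from two
    orthogonal components, following Equation 1: rotate the pair over
    0 <= theta < 180 degrees, compute Sa for each angle, then take the
    median and maximum over angles at each period (so the governing
    rotation angle may differ from period to period)."""
    rotd50, rotd100 = [], []
    for T in periods:
        w2 = (2.0 * math.pi / T) ** 2
        sa = []
        for deg in range(0, 180, dtheta_deg):
            th = math.radians(deg)
            a_rot = [x * math.cos(th) + y * math.sin(th)
                     for x, y in zip(a1, a2)]
            sa.append(w2 * sdof_peak_disp(a_rot, dt, T))  # pseudo-Sa
        rotd50.append(median(sa))
        rotd100.append(max(sa))
    return rotd50, rotd100
```

As a check on the behavior, if the second component is zero, each rotated record is just the first component scaled by |cos θ|, so RotD100 recovers the as-recorded spectrum while RotD50 sits near 71% of it.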
Figure 5. Combination of time series to generate rotation-dependent spectra. Source: Palma (2019).

Once the response spectra for all rotation angles are obtained, the nth percentile of the spectral amplitude over all rotation angles is computed for each period (e.g., RotD50 is the median value and RotD100 is the largest value over all rotation angles). For example, at a given period of 1 s, the response spectral values for all rotation angles are sorted; the RotD100 value is the largest value, while RotD50 is the median. This is repeated for all periods, with potentially different rotation angles, producing the largest

response at any given period (a period-dependent rotation angle). Figure 7 shows an example of the two orthogonal horizontal components, as well as the RotD50 and RotD100 spectra, for the as-recorded ground motion from the 2011 Christchurch, New Zealand, earthquake at the Kaiapoi North School station.

Figure 6. Example of time series rotations with an angle increment (Δθ) of 30°. Source: Palma (2019).

Figure 7. Sample spectra for a recorded ground motion pair. Source: Palma (2019).

As can be seen in the sample spectra (see Figure 7), the RotD100 spectrum represents a substantial increase in demand when compared with the RotD50 spectrum. The main question facing the bridge community from this point onward is the appropriate selection of the response spectra definition. This question can only be answered by developing sample designs to both the RotD50 and RotD100 spectra, which would then be evaluated via nonlinear time history analysis. Such a study will require multiple bridge configurations and multiple ground motions. As an example of the potential impact, Figure 8 shows the results of a single-degree-of-freedom bridge column designed according to both the RotD50 and RotD100 spectra, along with the resulting nonlinear time history analyses. The column was designed using direct displacement-based design to achieve a target displacement of 45 cm. It is clear from the results in Figure 8d that the nonlinear response of the column designed to the RotD100 spectrum matches the target

reasonably well, while designing to the RotD50 spectrum results in displacements that are much greater than expected. This is, of course, only one result for an axisymmetric system. In the future (and outside the scope of this project), a systematic study could be conducted for both single-degree-of-freedom and multiple-degree-of-freedom systems.

The literature on this topic can be divided into two categories: (1) response spectra definitions and (2) impact on seismic response. The majority of the literature addresses the former. For example, Boore et al. (2006) and Boore (2010) introduced orientation-independent measures of seismic intensity from two horizontal ground motions. Boore et al. (2006) proposed two measures of the geometric mean of the seismic intensity that are independent of the in-situ orientations of the sensors. One measure uses period-dependent rotation angles to quantify the spectral intensity, denoted GMRotDnn. The other measure is GMRotInn, where I stands for period-independent.

Figure 8. Single bridge column designed according to both RotD50 and RotD100 spectra (Tabas EQ = Tabas earthquake and displ. = displacement).

The ground motion prediction equations of Abrahamson and Silva (1997),

Boore et al. (1997), Campbell and Bozorgnia (2003), and Sadigh et al. (1997) have been updated using GMRotI50 as the dependent variable. Since more users within the building industry expressed the desire to use the maximum spectral response over all rotation angles, without geometric means, Boore (2010) introduced measures of ground-shaking intensity irrespective of the sensor orientation. These measures are RotDnn and RotInn, whose computation is similar to GMRotDnn and GMRotInn but without computing the geometric means. With regard to impact on seismic response, the opinion paper by Stewart et al. (2011) and the work by Mackie et al. (2011) on the impact of incidence angle on bridge response are relevant. Specifically, Stewart et al. (2011) noted the importance of computational analysis of structures (which had not been done as of 2011) in proposing appropriate spectra definitions.

Other Methodologies for Addressing Seismic Ground Motion Hazards

There are several other reports that address the question of the methodology that may be utilized in developing the seismic hazard. These recent studies endeavored to create a methodology that makes it easier for engineers, as users, to understand how to tie the seismic hazard to the performance expectation. The variability of these approaches also demonstrates the broad range of options and, therefore, a limited understanding among practitioners in the bridge design industry. Following are some examples that apply to PBSD.

Wang et al. (2016) performed a probabilistic seismic risk analysis (SRA) based on a single ground motion parameter (GMP). For structures whose responses can be better predicted using multiple GMPs, a vector-valued SRA (VSRA) gives accurate estimates of risk. A simplified approach to VSRA, which can substantially improve computational efficiency without losing accuracy, and a new seismic hazard de-aggregation procedure are proposed.
This approach and the new seismic hazard de-aggregation procedure would allow an engineer to determine a set of controlling earthquakes, in terms of magnitude, source–site distance, and occurrence rate, for the site of interest. Wang et al. presented two numerical examples to validate the effectiveness and accuracy of the simplified approach and discussed the factors affecting its approximations.

Kwong and Chopra (2015) investigated the selection and scaling of ground motions as input excitations for response history analyses of buildings in performance-based earthquake engineering. Many ground motion selection and modification procedures have been developed to select ground motions for a variety of objectives. Their report focuses on the selection and scaling of single, horizontal components of ground motion for estimating seismic demand hazard curves of multistory frames at a given site.

Worden et al. (2012) used a database of approximately 200,000 modified Mercalli intensity (MMI) observations of California earthquakes collected from U.S. Geological Survey reports, along with a comparable number of peak ground motion amplitudes from California seismic networks, to develop probabilistic relationships between MMI and peak ground velocity (PGV), PGA, and 0.3-s, 1-s, and 3-s 5%-damped pseudo-spectral acceleration. After associating each ground motion observation with an MMI computed from all the seismic responses within 2 km of the observation, a joint probability distribution between MMI and ground motion was derived. A reversible relationship was then derived between MMI and each ground motion parameter by using total least squares regression to fit a bilinear function to the median of the stacked probability distributions. Among the relationships, the fit to PGV has the smallest errors, although linear combinations of PGA and PGV give nominally better results.
The magnitude and distance terms also reduce the overall residuals and are justifiable on an information-theoretic basis.
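The bilinear, reversible form of such ground-motion-to-intensity conversions can be sketched as follows. The coefficients and corner location here are hypothetical placeholders chosen only to illustrate the functional form; they are not the values fitted by Worden et al. (2012).

```python
import math

# Bilinear ground-motion-to-intensity conversion of the kind fitted with
# total least squares by Worden et al. (2012). All coefficients below are
# HYPOTHETICAL placeholders illustrating the functional form.

def mmi_from_pgv(pgv_cm_s, c1=3.8, c2=1.5, c3=2.9, t=0.5):
    """Median MMI from PGV (cm/s) via a bilinear function of log10(PGV).

    Below the corner log10(PGV) = t the slope is c2; above it the slope is
    c3, with the two branches forced to meet at the corner (so the relation
    is continuous and invertible).
    """
    x = math.log10(pgv_cm_s)
    if x <= t:
        return c1 + c2 * x
    return c1 + c2 * t + c3 * (x - t)
```

Because each branch is monotonic and the branches meet at the corner, the same relation can be inverted to recover a median PGV from a reported MMI.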

Another approach to developing the appropriate seismic hazard comes out of Europe. Delavaud et al. (2012) presented a strategy to build a logic tree for ground motion prediction in European countries. Ground motion prediction equations (GMPEs) and weights have been determined so that the logic tree captures epistemic uncertainty in ground motion prediction for six different tectonic regions in Europe. The approach involves selecting candidate GMPEs and simultaneously having a panel of six experts generate independent logic trees and rank the GMPEs against available test data. This combined information is used to weight the GMPEs and create a consensus logic tree. The output is then run through a sensitivity analysis of the proposed weights on the seismic hazard before a final logic tree for the GMPEs is set.

Tehrani and Mitchell (2014) used updated seismic hazard maps for Montreal, Canada, to develop a uniform hazard spectrum for Site Class C and a seismic hazard curve for analyzing bridges in the localized area.

Kramer and Greenfield (2016) evaluated three case studies following the 2011 Tohoku earthquake to better understand and design for liquefaction. Existing case history databases are incomplete with respect to many conditions for which geotechnical engineers are often required to evaluate liquefaction potential, including liquefaction at depth, liquefaction of relatively dense soils, and liquefaction of gravelly soils. Kramer and Greenfield's investigation of the three case histories adds to the sparse existing data for those conditions, and their interpretations will aid in the validation and development of predictive procedures for liquefaction potential evaluation.

Structural Analysis and Design

Predicting the structural response to earthquake ground motions is critical for the PBSD process. NCHRP Synthesis 440 outlined several analysis methods that can be used to accomplish this task.
The multimodal linear dynamic procedures are outlined in AASHTO LRFD Bridge Design Specifications (AASHTO 2014) and AASHTO Guide Specifications for LRFD Seismic Bridge Design (AASHTO 2011), although the Guide Specifications also include the parameters for performing a pushover analysis, in addition to prescriptive detailing practices to ensure that energy-dissipating systems behave as intended and that other elements are capacity-protected. Other methods of analysis may be better suited for PBSD, but the initial PBSD approach will likely follow the procedures of the AASHTO Guide Specifications, with multilevel hazards and performance expectations. Limited research and code development have been accomplished since NCHRP Synthesis 440, but one new analysis method, outlined in Babazadeh et al. (2015), uses a three-dimensional finite element simulation to predict intermediate damage limit states efficiently and in a manner consistent with experimental observations from tested columns. Other recent articles on structural analysis identified areas of improvement in the current design methodology that may be beneficial to PBSD. Huff and Pezeshk (2016) compared the substitute-structure method for isolated bearings with the displacement-based design methodology for ordinary bridges and showed that the two methodologies differ in their estimates of inelastic displacement. Huff (2016a) identified issues that are generally simplified or ignored in current practice when predicting the inelastic behavior of bridges during earthquakes, on both the capacity side (selection of element type and geometric nonlinearities) and the demand side (issues related to viscous damping levels). The current SGS methodology for nonlinear static procedures was compared in Hajihashemi et al.
(2017) with recent methodologies for multimodal pushover procedures that take into account all significant modes of the structure and with modified equivalent linearization procedures developed for

FEMA-440 (FEMA 2005). All of these analysis articles identify areas of ongoing discussion on how to improve the analytical procedures proposed in the SGS.

NCHRP Synthesis 440 focused primarily on new analysis methods, but a recent increased focus, in both academia and industry, has been on new materials and systems and their impacts on PBSD. The evolution of enhanced seismic performance has been wrapped into several research topics, such as accelerated bridge construction (ABC), novel columns, and PBSD. The following are several aspects, though not an all-encompassing list, that have been improved upon in the last 6 years or so.

Improving Structural Analysis Through Better Material Data

The analysis and performance of a bridge are controlled by the material property parameters incorporated into the seismic analysis models, particularly for the pushover analysis method. AASHTO Guide Specifications for LRFD Seismic Bridge Design (AASHTO 2011) specifies the strain limits to use for ASTM A706 (Grade 60) and ASTM A615 Grade 60 reinforcement. These strain limits come from a mid-1990s Caltrans study of 1,100 mill certificates for ASTM A706 Grade 60 reinforcement used in Caltrans bridge construction. The results were reported as elongation—not strain—at peak stress, so select bar pull tests were performed to correlate elongation to strain at peak stress. This was assumed to be a conservative approach, and it has recently been validated by a new ASTM A706 Grade 80 study at North Carolina State University (Overby et al. 2015a), which showed by comparison that the Caltrans numbers for Grade 60 are reasonable and conservative. Overby et al. (2015b) developed stress–strain parameters for ASTM A706 Grade 80 reinforcing steel. Approximately 800 tests were conducted on bars ranging from #4 to #18, from multiple heats, from three producing mills.
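Parameter sets of this kind are typically consumed in analysis as a piecewise stress–strain model: an elastic branch, a yield plateau, and parabolic strain hardening up to the strain at peak stress. The sketch below uses this standard form with illustrative round numbers for a Grade 80 bar; they are not the statistics reported by Overby et al.

```python
# Piecewise reinforcing-steel stress-strain model of the kind calibrated
# from bar tests: linear elastic, yield plateau, then parabolic strain
# hardening to the peak stress. Parameter defaults are ILLUSTRATIVE round
# numbers for a Grade 80 bar, not reported test statistics.

def steel_stress(strain, E=29000.0, fy=85.0, fu=112.0, e_sh=0.008, e_u=0.09):
    """Tensile stress (ksi) at a given strain (in./in.)."""
    e_y = fy / E
    if strain <= e_y:                 # elastic branch
        return E * strain
    if strain <= e_sh:                # yield plateau
        return fy
    if strain <= e_u:                 # parabolic hardening up to peak stress
        return fu - (fu - fy) * ((e_u - strain) / (e_u - e_sh)) ** 2
    return fu                         # at/after peak (softening ignored)
```

The five parameters map directly onto the quantities tabulated in such studies: elastic modulus, yield stress, strain-hardening strain, strain at maximum stress, and ultimate stress.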
Statistical results were presented for elastic modulus, yield strain and stress, strain-hardening strain, strain at maximum stress, and ultimate stress. Research is currently under way at North Carolina State University that aims to identify strain limit states, plastic hinge lengths, and equivalent viscous damping models for bridge columns constructed from A706 Grade 80 reinforcing steel. Work is also under way at the University of California, San Diego, on applications of Grade 80 rebar for capacity-protected members such as bridge cap beams.

Design Using New Materials and Systems

Structural analysis and design are fundamentally about the structural response to earthquake ground motion and the analysis methods used to develop this relationship. The complexity of the analysis depends on the geometry of the structure and its elements and on the extent of inelastic behavior. This topic is coupled with the damage, or performance, criteria but has been broken out separately for the purposes of this report and NCHRP Synthesis 440.

Next-generation bridge columns, often referred to as novel columns, are improving as a tool for engineers to control both the structural analysis, as the makeup of the material changes the inelastic behavior, and the element performance of bridges in higher seismic hazards. The energy-dissipating benefits of low-damage materials—such as ultrahigh-performance concrete (UHPC), engineered cementitious composites (ECC), shape memory alloys (SMAs), fiber-reinforced polymer (FRP) wraps and tubes, elastomeric bearings, and post-tensioned strands or bars—can be utilized by engineers to improve seismic performance and life-cycle costs after a significant seismic event. Recent studies (Saiidi et al. 2017) tested various combinations of these materials to determine whether columns can be built with them that are equivalent to, or better than, conventional reinforced concrete columns in terms of cost, complexity, and construction duration while improving seismic performance: providing greater ductility, reducing damage, and accommodating a quicker recovery time, with reduced losses to both the bridge and the economic environment.

Accelerated bridge construction is also a fast-developing field in bridge engineering, with draft guide specifications for design and construction currently being developed for adoption by AASHTO into AASHTO LRFD Bridge Design Specifications (AASHTO 2014). ABC has economic impacts that go beyond seismic engineering, but research is focusing on details and connections for accelerated construction in higher seismic regions, moving the two research paths forward at the same time. Tazarv and Saiidi (2014) incorporated ABC research with novel column research to evaluate combined novel column materials that can be constructed quickly. The research focused on the performance of the materials and how to incorporate them into practice. Key mechanical properties of reinforcing SMA were defined as follows:

• Observed yield strength (fyo) is the stress at the initiation of nonlinearity on the first cycle of loading to the upper plateau.
• Austenite modulus (k1) is the average slope between 15% and 70% of fyo.
• Post-yield stiffness (k2) is the average slope of the curve between 2.5% and 3.5% strain on the upper plateau of the first cycle of loading to 6% strain.
• Austenite yield strength (fy) is the stress at the intersection of the line passing through the origin with slope k1 and the line passing through the stress at 3% strain with slope k2.
• Lower plateau inflection strength (fi) is the stress at the inflection point of the lower plateau during unloading from the first cycle to 6% strain.
• Lower plateau stress factor: β = 1 − (fi/fy).
• Residual strain (eres) is the tensile strain after one cycle to 6% strain and unloading to 1 ksi (7 MPa).
• Recoverable superelastic strain (er) is the maximum strain with at least 90% strain recovery capacity. Using the ASTM standard for tensile testing, er ≤ 6%.
• Martensite modulus (k3) is the slope of the curve between 8% and 9% strain, subsequent to one cycle of loading to 6% strain, unloading to 1 ksi (7 MPa), and reloading to the ultimate stress.
• Secondary post-yield stiffness ratio: α = k3/k1.
• Ultimate strain (eu) is the strain at failure.

A graphical representation is shown in Figure 9, and minimum and expected mechanical properties are listed in Table 3.

Figure 9. NiTi SE SMA nonlinear model.

Other researchers, such as those at the University of Washington, are currently testing grouted bars using conventional grouts and finding that development lengths can be reduced greatly. However, it is the force transfer from the grouted duct to the reinforcing outside the duct that may

require additional length to adequately develop the energy-dissipating or capacity-protecting system that was intended by the designer for the performance of the bridge in a high seismic event.

Table 3. Minimum and expected reinforcing NiTi SE SMA mechanical properties.

Tazarv and Saiidi (2014) identified other material properties, such as those of UHPC and ECC, shown in Tables 4 and 5, respectively. Tazarv and Saiidi (2014) also addressed grouted splice sleeve couplers, self-consolidating concrete (SCC), and other connection types that could be used in ABC and novel column configurations, testing these materials in the laboratory to see whether various combinations produced a logical system to be carried forward in research, design, and implementation.

Table 4. UHPC mechanical properties.

Property                               Range
Compressive strength (f′UHPC)          20 to 30 ksi (140 to 200 MPa); a time-dependent equation for UHPC strength is available
Tensile cracking strength (ft,UHPC)    0.9 to 1.5 ksi (6 to 10 MPa); ft,UHPC = 6.7√f′UHPC (psi)
Modulus of elasticity (EUHPC)          6,000 to 10,000 ksi (40 to 70 GPa); EUHPC = 49,000√f′UHPC (psi)
Poisson's ratio                        0.2
Coefficient of thermal expansion       (5.5 to 8.5)×10⁻⁶/°F ((10 to 15)×10⁻⁶/°C)
Creep coefficient*                     0.2 to 0.8
Specific creep*                        (0.04 to 0.3)×10⁻⁶/psi ((6 to 45)×10⁻⁶/MPa)
Total shrinkage**                      Up to 900×10⁻⁶

*Depends on curing conditions and age of loading.
**Combination of drying shrinkage and autogenous shrinkage; depends on curing method.

Trono et al. (2015) studied a rocking post-tensioned hybrid fiber-reinforced concrete (HyFRC) bridge column that was designed to limit damage and residual drifts and that was tested dynamically under earthquake excitation. The column utilized post-tensioned strands, HyFRC, and a combination of unbonded and headed longitudinal reinforcement.

There have been two projects related to the field of novel columns and ABC through the National Cooperative Highway Research Program. One was NCHRP Project 12-101, which resulted in the two-volume NCHRP Report 864 (Saiidi et al. 2017), and the other was NCHRP Project 12-105, which resulted in NCHRP Research Report 935 (Saiidi et al. 2020). NCHRP Project 12-101 identified three novel column systems—specifically, SMA and ECC, ECC and FRP, and a hybrid rocking column using post-tensioned strands and fiber-reinforced

polymer confinement—and compared them to a conventional reinforced column. The research findings and material properties are provided; incorporating laboratory tests and calibration, design examples were created to help engineers understand how to use these advanced materials in a linear elastic seismic demand model and how to determine performance using a pushover analysis. It is worth noting that ductility requirements do not accurately capture the performance capabilities of these novel columns, and drift ratio limits are being used instead, similar to the building industry. NCHRP Project 12-101 also provided evaluation criteria that can be evaluated and incorporated by AASHTO into a guide specification or directly into AASHTO Guide Specifications for LRFD Seismic Bridge Design (AASHTO 2011).

NCHRP Project 12-105 synthesized research, design codes, specifications, and contract language throughout all 50 states and combined the knowledge base and lessons learned for ABC into proposed guide specifications for both design and construction. This work focused on connections, and most of that information is related to the seismic performance of ABC elements and systems. Earthquake resisting elements (ERE) and earthquake resisting systems (ERS) are specifically identified, defined, and prescribed for performance in the AASHTO guide specifications (AASHTO 2011) but only implicitly applied in AASHTO LRFD Bridge Design Specifications (AASHTO 2014). Since NCHRP Project 12-105 is applicable to both of these design resources, ERE and ERS are discussed in terms of how to apply performance to the force-based seismic design practice of AASHTO LRFD Bridge Design Specifications (AASHTO 2014).
The proposed guide specification language also identifies when the performance of materials has to be incorporated into the design (say, in higher seismic hazards) and when it is acceptable to apply ABC connections and detailing practices with prescriptive design methodologies. As the industry's understanding of performance increases, the engineering community is accepting the benefits that come from a more user-defined engineering practice that is implemented by identifying material properties; evaluating hazards and soil and structural responses; and verifying performance through strain limits, damage limit states, moment curvature, displacements, and ductility. These tools and advancements in ABC and novel column designs, along with other material property performance and analytical methodologies, are allowing PBSD to advance in other areas, such as hazard prediction, loss prediction, and the owner decision-making process.

Feng et al. (2014a) studied the application of fiber-based analysis to predict the nonlinear response of reinforced concrete bridge columns. Specifically considered were predictions of overall force–deformation hysteretic response and strain gradients in plastic hinge regions. The authors also discussed the relative merits of force-based and displacement-based fiber elements and proposed a technique for predicting nonlinear strain distribution based on the modified compression field theory.

Fulmer et al. (2013) developed a new steel bridge system, based on ABC techniques, that employs an external socket to connect a circular steel pier to a cap beam through the use of grout and shear studs. The resulting system develops a plastic hinge in the pipe away from the column-to-cap interface.
Table 5. ECC mechanical properties.

Property                     Range
Flexural strength            1.5 to 4.5 ksi (10 to 30 MPa)
Modulus of elasticity        2,600 to 5,000 ksi (18 to 34 GPa)
Ultimate tensile strain      1 to 8%
Ultimate tensile strength    0.6 to 1.7 ksi (4 to 12 MPa)
First crack strength         0.4 to 1.0 ksi (3 to 7 MPa)
Compressive strength         3 to 14 ksi (20 to 95 MPa)

An advantage of the design is ease of construction, as no field welding

is required: the two assemblies are placed together, and the annular space between the column and cap is filled with grout. Figure 10 shows the details of this connection, and Figure 11 shows a test of the system.

Other systems being investigated are isolation bearings and damping devices. Xie and Zhiang (2016) investigated the effectiveness and optimal design of protective devices for the seismic protection of highway bridges. Fragility functions are first derived through probabilistic seismic demand analysis; repair cost ratios are then derived using a performance-based methodology and the associated component failure probabilities. Subsequently, the researchers sought to identify the optimal design parameters of protective devices for six design cases with various combinations of isolation bearings and fluid dampers and discussed the outcomes.

Damage mitigation through isolation and energy dissipation devices is continually improving based on research, development, and implementation in the field. Recent experience in Washington State, Alaska, and other state agencies has shown that the benefits of these tools can be compromised if the intended performance cannot be sustained for the 75-year design life of the structure. Mackie and Stojadinovic (2015) outlined performance criteria for fabrication and construction that need to be administered properly, and engineers should consider the effects of moisture, salts, or other corrosive environmental conditions that can affect the performance of the isolation or energy-dissipating system. Another constraint with these systems can be the proprietary nature that arises when a specific isolation or energy-dissipating system is utilized to develop a specific performance expectation that can only be accomplished with the prescribed system.
The proprietary nature of these systems can create issues for funding sources that require equal bidding opportunities, along with the added project expense that can accompany a proprietary system.

Figure 10. Grouted shear stud bridge system (Source: Fulmer et al. 2013).

To address this type of design constraint, Illinois DOT has been developing an earthquake-resisting system (ERS) that leverages the displacement capacity available at typical bearings in order to provide seismic protection to the substructures of typical bridges. LaFave et al. (2013a) identified the effects and design parameters,

such as fuse capacity, shear response, and sliding response, which can be used to account for more standard bearing configurations in seismic analysis, especially in lower seismic hazard regions.

A variation on the use of bearings to improve the seismic performance of a pier wall configuration was outlined in Bignell et al. (2006). Historically, pinned, rocking, and sliding bearings have been used with interior pier walls and steel girder superstructures. These bearing configurations were compared with replacement elastomeric bearing configurations and details; structural analysis techniques, damage limit states, structural fragility, and performance expressed through probability distributions were utilized as a PBSD process for determining solutions for seismic isolation and enhanced seismic performance. The foundation conditions, pier wall effects, bearing type, and even embankment effects on structural performance were included in this evaluation.

Another approach to enhanced performance is modification of foundation elements or increased understanding and modeling of soil–structure interaction, specifically where lateral spread or liquefaction design conditions make conventional bridge design and elements impractical. One example of this is the seismic design and performance of bridges constructed with rocking foundations, as evaluated in Antonellis and Panagiotou (2013). This type of rocking goes beyond the loss of contact area currently allowed in the guide specifications. Columns supported on rocking foundations accommodate large deformations with far less damage and can re-center after large earthquakes. Another approach is to tie a tolerable displacement of an individual deep foundation element to a movement that would cause adverse performance, excessive maintenance issues, or functionality problems for the bridge structure. Roberts et al.
(2011) established a performance-based soil–structure interaction design approach for drilled shafts. Chiou and Tsai (2014) evaluated the displacement ductility of a fixed-head pile with in-ground hinging. Assessment formulas were developed for the displacement ductility capacity of a fixed-head pile in cohesionless soils. The parameters in the formulas include the sectional overstrength ratio and curvature ductility capacity, as well as a modification factor for considering soil nonlinearity. The modification factor is a function of the ratio of the pile's ultimate displacement to the effective soil yield displacement, and it was constructed through a number of numerical pushover analyses.

Figure 11. Photograph of completed system before seismic testing showing hinge locations (Source: Fulmer et al. 2013).
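At the member level, displacement ductility of the kind assessed in these studies is commonly obtained from curvature ductility through an equivalent plastic hinge length. The sketch below uses the classic cantilever-column relation (Priestley-style); it is not the Chiou–Tsai pile formulation, which further modifies the result for soil nonlinearity.

```python
# Classic conversion from curvature ductility (mu_phi) to member
# displacement ductility (mu_delta) for a cantilever column, using an
# equivalent plastic hinge length lp over member length L:
#   mu_delta = 1 + 3 (mu_phi - 1) (lp/L) (1 - 0.5 lp/L)
# This is the standard textbook relation, shown here only to illustrate
# how sectional and member-level ductility are linked.

def displacement_ductility(mu_phi, lp, length):
    """mu_delta for a cantilever: curvature ductility, hinge length, length."""
    ratio = lp / length
    return 1.0 + 3.0 * (mu_phi - 1.0) * ratio * (1.0 - 0.5 * ratio)
```

A soil-nonlinearity modification factor, as in the pile assessment formulas, would then scale this member-level capacity.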

Damage Analysis

As stated in NCHRP Synthesis 440, a fundamental need of the PBSD methodology is to determine the type of damage and the likelihood that such damage will occur in particular components of the structural system. This determination is of vital importance, as the damage sustained by a structure (and its nonstructural components) is directly relatable to the use or loss of the system after an earthquake. Therefore, there is a need to reliably link structural and nonstructural response (internal forces, deformations, accelerations, and displacements) to damage. This is the realm of damage analysis, where damage is defined in terms of discrete observable damage states (e.g., yield, spalling, longitudinal bar buckling, and bar fracture). Although the primary focus of this discussion is on structural components, similar considerations must be made for nonstructural components as well.

NCHRP Synthesis 440 outlined an initial discussion of the types of structural damage observed during historic earthquakes and laboratory experiments, prefaced the methods that have been developed to predict damage, identified structural details and concepts that could be used to reduce damage even in strong ground shaking, and reviewed post-event inspection tools. The new materials discussed in previous sections also apply to this discussion but are not repeated herein. Accurate damage prediction relies upon accurate definitions of performance limit states at the material level (i.e., strain limits) and the corresponding relationship between strain and displacement. Examples of recent research follow.

Research by Feng et al. (2014b, 2014c) used finite element analysis validated by experimental test results to develop a model for predicting the tension strain corresponding to bar buckling.
The model considers the impact of loading history on the boundary conditions of longitudinal bar restraint provided by the transverse steel. Goodnight et al. (2016a) identified strain limits for the initiation of bar buckling based on experimental results from 30 column tests (Equation 2). Following additional bidirectional tests on 12 columns, Equation 2 was revised to Equation 3. In addition, strain limit state equations were proposed for the compression strain in concrete that causes spiral yielding (Goodnight et al. 2017a). Goodnight et al. (2016b) also developed a new plastic hinge length model based on the data collected during those tests, which accounts for the actual curvature distribution in RC bridge columns. The revised model separates the strain penetration component from the flexural component while also recognizing that the hinge length for compression is smaller than that for tension. Brown et al. (2015) developed a strain limit state equation for tube wall local buckling (Equation 4) and equivalent viscous damping equations for reinforced concrete filled steel tubes (RCFSTs). The authors' recommendations were based upon reversed cyclic tests of 12 RCFSTs with variable D/t (diameter-to-thickness) ratios.

ε_bar buckling = 0.03 + 700 ρ_s (f_yhe / E_s) − 0.1 [P / (f′_ce A_g)]    (2)

ε_bar buckling = 0.032 + 790 ρ_s (f_yhe / E_s) − 0.14 [P / (f′_ce A_g)]    (3)

ε_tension buckling = 0.021 − (D/t) / 9100 ≥ ε_y    (4)
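Equations 2 through 4 can be evaluated directly in code; the symbols and units (ksi, kip, in.) are those defined with the equations. The numerical inputs in the tests below are illustrative, not values from the cited experiments.

```python
# Direct evaluation of the bar-buckling strain limits (Equations 2 and 3)
# and the RCFST tube-wall local buckling limit (Equation 4). Units follow
# the report: stresses and moduli in ksi, axial load in kip, area in in.^2.

def bar_buckling_strain(rho_s, f_yhe, E_s, P, f_ce, A_g, revised=True):
    """Tension strain at the onset of longitudinal bar buckling."""
    if revised:  # Equation 3, after the additional bidirectional tests
        return 0.032 + 790.0 * rho_s * f_yhe / E_s - 0.14 * P / (f_ce * A_g)
    return 0.03 + 700.0 * rho_s * f_yhe / E_s - 0.1 * P / (f_ce * A_g)  # Eq. 2

def tube_buckling_strain(D, t, eps_y):
    """Tension strain limit for tube wall local buckling (Equation 4),
    floored at the steel yield strain per the inequality in the equation."""
    return max(0.021 - (D / t) / 9100.0, eps_y)
```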

where
ρ_s = reinforcement ratio,
f_yhe = expected yield strength of the transverse steel (ksi),
E_s = elastic modulus of steel (ksi),
P = axial load (kip),
f′_ce = expected concrete strength (ksi),
A_g = gross area of concrete (in.²),
D = diameter of tube (in.),
t = thickness of tube (in.), and
ε_y = yield strain for steel (in./in.).

Loss Analysis

PBSD combines the seismic hazard, structural, and damage analyses into a performance matrix that can be translated into a loss metric. There are many loss metrics that can be used by, and that are important to, stakeholders and decision makers (discussed in detail in NCHRP Synthesis 440), but all of these metrics can be boiled down to three main categories: deaths, dollars, and downtime.

Bertero (2014) discussed earthquake lessons, in terms of loss, to be considered in both the design and construction of buildings. At the beginning of 2010, two large earthquakes struck the Americas. The January 12, 2010, Haiti earthquake, with a magnitude of 7.0, produced about 300,000 deaths (second in number of fatalities in world history after the 1556 Shaanxi, China, earthquake). A month later, the February 27, 2010, Maule, Chile, earthquake, with a magnitude of 8.8 (an energy release 500 times larger than that of the Haiti earthquake), produced about 500 deaths, most due to the resulting tsunami. However, the Chilean earthquake caused more than $30 billion in direct damage; left dozens of hospitals and thousands of schools nonoperational; and caused a general blackout for several hours, as well as the loss of service of essential communications facilities crucial to taking control of the chaotic post-earthquake situation. Bertero (2014) compared the severity of the two earthquakes and commented on their effects on life and the economies of the affected countries, as well as on the features of the seismic codes or the absence of codes. An example of risk analysis with PBSD is provided in Bensi et al.
(2011), who developed a Bayesian network (BN) methodology for performing infrastructure seismic risk assessment and providing decision support, with an emphasis on immediate post-earthquake applications. A BN is a probabilistic graphical model that represents a set of random variables and their probabilistic dependencies. The proposed methodology consists of four major components: (1) a seismic demand model of ground motion intensity as a spatially distributed random field, accounting for multiple sources and including finite fault rupture and directivity effects; (2) a model for the seismic performance of point-site and distributed components; (3) models of system performance as a function of component states; and (4) models of post-earthquake decision making for inspection and operation or shutdown of components.

The use of the term Bayesian to describe this approach comes from the well-known Bayes rule, attributed to the 18th-century mathematician and philosopher Thomas Bayes:

Pr(A | B) = Pr(AB) / Pr(B) = Pr(B | A) Pr(A) / Pr(B)    (5)

Pr(AB) is the probability of joint occurrence of Events A and B; Pr(A) is the marginal probability of Event A; Pr(A | B) is the conditional probability of Event A, given that Event B

has occurred; and Pr(B) is the marginal probability of Event B. The quantity Pr(B | A) is known as the likelihood of the observed Event B. Note that the probability of Event A appears on both sides of Equation 5. The Bayes rule describes how the probability of Event A changes given information gained about the occurrence of Event B.

For discrete nodes, a conditional probability table is attached to each node that provides the conditional probability mass function (PMF) of the random variable represented by the node, given each of the mutually exclusive combinations of the states of its parents. For nodes without parents (e.g., X1 and X2 in Figure 12), known as root nodes, a marginal probability table is assigned. The joint PMF of all random variables X in the BN is constructed as the product of the conditional PMFs:

p(x) = ∏_{i=1}^{n} p(x_i | Pa(x_i))    (6)

where Pa(x_i) denotes the values of the parent nodes of X_i.

Bensi et al. (2011) go on to introduce BN models further and discuss how to incorporate BN-based seismic demand models into bridge design. The BN methodology is applied to the modeling of random fields, construction of an approximate transformation matrix, and numerical investigation of approximation methods, including a discussion of the effect of correlation approximations on system reliability. Modeling component performance with BNs to capture the seismic fragility of point-site and distributed components, as well as modeling system performance with BNs by both qualitative and conventional methods, is explained. The reference goes on to identify efficient minimal link set (MLS) and minimal cut set (MCS) formulations, optimal ordering of efficient MLS and MCS formulations, and heuristic augmentation that can be utilized with the BN methodology. Bensi et al. (2011) continue the PBSD process by addressing the owner decision-making process (see more discussion later in the report) and how to incorporate this model into that process.
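Equations 5 and 6 can be made concrete with a two-node example. The network, its probabilities, and the damage/shaking interpretation below are invented for illustration; they are not from Bensi et al. (2011).

```python
# Bayes' rule (Eq. 5): update the probability of bridge damage (Event A)
# given an observation of strong recorded shaking nearby (Event B).
# All probabilities are invented for illustration.
p_A = 0.05           # Pr(A): marginal probability of damage
p_B_given_A = 0.90   # Pr(B | A): likelihood of strong shaking given damage
p_B = 0.20           # Pr(B): marginal probability of strong shaking
p_A_given_B = p_B_given_A * p_A / p_B   # posterior Pr(A | B)

# Joint PMF of a two-node BN (Eq. 6): the product of each node's
# conditional PMF given its parents. X1 is a root node (marginal table);
# X2 has X1 as its only parent (conditional probability table).
p_x1 = {0: 0.8, 1: 0.2}                   # Pr(X1)
p_x2_given_x1 = {0: {0: 0.95, 1: 0.05},   # Pr(X2 | X1 = 0)
                 1: {0: 0.30, 1: 0.70}}   # Pr(X2 | X1 = 1)

def joint(x1, x2):
    """p(x1, x2) = p(x1) * p(x2 | x1)."""
    return p_x1[x1] * p_x2_given_x1[x1][x2]
```

Observing the shaking raises the damage probability from 5% to 22.5%, which is exactly the kind of post-earthquake updating the BN methodology automates across many components at once.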
Two example problems are provided utilizing this methodology, including a California high-speed rail system that incorporates the bridge modeling into the example.

Similarly, in Tehrani and Mitchell (2014), the seismic performance of 15 continuous four-span bridges with different arrangements of column heights and diameters was studied using incremental dynamic analysis (IDA). These bridges were designed using the Canadian Highway Bridge Design Code provisions (CSA 2006). The IDA procedure has been adopted by some guidelines to determine the seismic performance, collapse capacity, and fragility of buildings. Similar concepts can be used for the seismic assessment of bridges. Fragility curves can be developed using the IDA results to predict the conditional probability that a certain damage state is exceeded at a given intensity measure value. Assuming that the IDA data are lognormally distributed, it is possible to develop the fragility curves at collapse (or any other damage state) by computing only the median collapse capacity and the logarithmic standard deviation of the IDA results for any given damage state. The fragility curves can then be analytically computed using Equation 7 as follows:

P(failure | Sa = x) = Φ[(ln x − ln SCa50%) / βTOT]  (7)

where Φ = cumulative normal distribution function, SCa50% = median capacity determined from the IDA, and βTOT = total uncertainty caused by record-to-record variability, design requirements, test data, and structural modeling.

Figure 12. A simple BN.
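Equation 7 is straightforward to evaluate with a standard normal CDF. The sketch below uses made-up values for the median capacity and dispersion, purely to illustrate the form of the lognormal fragility function:

```python
from math import log
from statistics import NormalDist

def fragility(x, median_capacity, beta_tot):
    """P(damage state exceeded | Sa = x) in the Equation 7 form:
    Phi((ln x - ln median) / beta_tot)."""
    return NormalDist().cdf((log(x) - log(median_capacity)) / beta_tot)

# At the median capacity the exceedance probability is exactly 50 percent.
print(round(fragility(1.2, 1.2, 0.5), 3))  # 0.5
# Doubling the intensity pushes the exceedance probability above 90 percent.
print(round(fragility(2.4, 1.2, 0.5), 3))
```

Note that only two statistics of the IDA results are needed (the median and the logarithmic standard deviation), which is exactly the economy the text describes.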

Literature Review and Synthesis

The seismic risk associated with exceeding different damage states in the columns, including yielding, cover spalling, bar buckling, and structural collapse (i.e., dynamic instability), was predicted. Some simplified equations were derived for Montreal, Quebec, Canada, to estimate the mean annual probability of exceeding different damage states in the columns using the IDA results.

Repair and retrofit procedures are linked to loss predictions, as outlined in the FHWA's retrofitting manual (Buckle et al. 2006). Several chapters/articles address analysis, methodologies, effects, analytical tools, and costs for retrofit and repairs to mitigate damage or return a structure to a serviceable condition. Zimmerman et al. (2013) is one example, in which numerical techniques and seismic retrofit solutions for shear-critical reinforced concrete columns were investigated, utilizing test data from a reinforced concrete column with widely spaced transverse reinforcement. The study focused on the analysis method of nonlinear trusses and the retrofit option known as supplemental gravity columns, which is an example of how loss prediction and the analysis process are linked and should be iterated through PBSD.

Organization-Specific Criteria for Bridges and Project-Specific Criteria

NCHRP Synthesis 440 has two sections of criteria: organization-specific criteria for bridges and project-specific criteria. New information for both of these sections published since NCHRP Synthesis 440 is combined here.

The California DOT (Caltrans)

Caltrans is currently updating its Seismic Design Criteria (SDC) to specify requirements to meet the performance goals for newly designed Ordinary Standard and Recovery Standard concrete bridges. Nonstandard bridges require Project-Specific Seismic Design Criteria, in addition to the SDC, to address their nonstandard features.
For both standard and nonstandard bridges, Caltrans is also categorizing its inventory in terms of Ordinary Bridges, Recovery Bridges, and Important Bridges. Some states have had issues with terms like Important or Essential, as a bridge is considered important to those who use it. Caltrans is using these terms to correlate with loss analysis of an owner's infrastructure and the time to reopen the bridge to support lifeline and recovery corridors. Bridge performance is also evaluated using a dual seismic hazard: in the Caltrans SDC, this is a Safety Evaluation Earthquake (SEE) for Ordinary Bridges, and both the SEE and a Functional Evaluation Earthquake (FEE) for Recovery Bridges, as summarized in Table 6.

Caltrans SDC revisions will also provide updates to the design parameters in Chapter 3 of the SDC and updates to both the analysis methods and displacement ductility demand values in Chapter 4 of the SDC. The adjustments to the displacement ductility demand values are revised to limit bridge displacements beyond the initial yielding point of the ERE, specifically if a Recovery Standard bridge is being designed. The revisions to the SDC are an example of how PBSD is being gradually introduced as a better method of dealing with the hazards, soil–structure interaction, analysis tools, methodologies, material properties, damage states, performance, and loss.

Similar revisions are being made to the Seismic Design Specifications of Highway Bridges, as detailed in Japan Road Association (JRA) revisions in 2012. A synopsis of the revisions is provided in Kuwabara et al. (2013). The JRA specifications apply to Japanese road bridges and consist of five parts: Part I, Common; Part II, Steel Bridges; Part III, Concrete Bridges; Part IV, Substructures; and Part V, Seismic Design. The revisions are based on improvements in terms of safety,

serviceability, and durability of bridges. Based on those lessons, design earthquake ground motions corresponding to the subduction-type earthquake were revised, and the requirements for easy and secure maintenance (inspection and repair works) for the bridges were clearly specified. JRA has clarified the performance of ERE conventionally reinforced columns for a dual-level (SPL 2 and SPL 3) seismic performance evaluation, as summarized in Table 7.

The JRA 2012 revisions also address connection failures between reinforced concrete steel piles and the pile-supported spread footing to improve structural detailing and performance at the head of the piles. This is similar to research performed by the University of Washington (see Stephens et al. 2015 and Stephens et al. 2016, for Caltrans and Washington State DOT, respectively) to evaluate capacity protecting this region and even to consider the development of plastic hinges at these locations for combined hazard events or large lateral spreading and liquefaction occurrences.

Caltrans also funded a study by Saini and Saiidi (2014) to address probabilistic seismic design of bridge columns using a probabilistic damage control approach and reliability analysis.

Table 6. Caltrans draft proposed seismic design bridge performance criteria. (Columns: Bridge Category; Seismic Hazard Evaluation Level; Post-Earthquake Damage State; Expected Post-Earthquake Service Level. Source: Caltrans.)

Table 7. Seismic performance of bridge and limit states of conventionally reinforced concrete bridge column (SPL 2 and SPL 3). (Note: SPL 1: Fully operational is required. Limit state of bridge is serviceability limit state. Negligible structural damage and nonstructural damage are allowed.)

The probabilistic damage control approach uses the extent of lateral displacement nonlinearity, defined by a Damage Index (DI), to measure the performance of bridge columns. DI is a measure of damage ranging from the lower bound of zero damage to the ultimate state of a collapse mechanism for an element that has been subjected to base excitations. The performance objective was defined based on predefined apparent Damage States (DS), and the DS were correlated to DIs based on a previous study at the University of Nevada, Reno (Figure 13) (Vosooghi and Saiidi 2010). A statistical analysis of the demand damage index (DIL) was performed to develop fragility curves (load model) and to determine the reliability index for each DS. The results of the reliability analyses were used to develop a direct probabilistic damage control approach that calibrates the design DI to obtain a desired reliability index against failure. The calculated reliability indices and fragility curves showed that the proposed method could be effectively used in seismic design of new bridges, as well as in seismic assessment of existing bridges. The DS and DI are summarized with performance levels defined by Caltrans in Table 8, which shows the correlation between DS and DI. Figure 14 shows a fragility curve using a lognormal distribution. Figure 15 shows both the fragility curves (upper two graphs) and reliability indices (lower two graphs) for four-column bents (FCBs), with 4-foot diameter columns that are 30 feet in length in Site D, for both the 1,000-year and 2,500-year seismic events.

Table 8. Design performance levels. (Columns: Damage State (DS); Service to Public; Service to Emergency; Emergency Repair; Design Damage Index (DI); Earthquake Levels (Years). Note: O-ST = ordinary standard bridge, O-NST = ordinary nonstandard bridge, Rec. = recovery bridge, Imp. = important bridge, and NA = not applicable.)

Figure 13. Correlation between DS and DI.
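The reliability indices plotted against damage states in Figure 15 relate to failure probabilities through the standard normal distribution. A minimal sketch of that standard relation follows, using illustrative numbers rather than values from Saini and Saiidi (2014):

```python
from statistics import NormalDist

def reliability_index(p_failure):
    """beta = -Phi^{-1}(P_f): a smaller failure probability maps to a
    larger reliability index."""
    return -NormalDist().inv_cdf(p_failure)

print(round(reliability_index(0.0013499), 2))  # 3.0
```

A reliability index of about 3 thus corresponds to a failure probability near 0.13 percent, which is why calibrating the design DI to a target index is equivalent to targeting a probability of exceeding the damage state.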

Figure 14. Fragility curve.

Figure 15. Fragility curves and reliability indices for FCBs with 4-foot columns in Site D.

The Oregon DOT

The Oregon DOT is developing a global plan for addressing resiliency in order to improve recovery from the next Cascadia earthquake and tsunami, using PBSD in terms of applying applicable hazards, identifying critical services, developing a comprehensive assessment of structures and systems, and updating public policies. The resilience goals are similar to those discussed at the beginning of this chapter, with the following statement:

Oregon citizens will not only be protected from life-threatening physical harm, but because of risk reduction measures and pre-disaster planning, communities will recover more quickly and with less continuing vulnerability following a Cascadia subduction zone earthquake and tsunami.

Research has shown that the next great (magnitude 9.0) Cascadia subduction zone earthquake is pending, as shown in Figure 16. This comparison of historical subduction zone earthquakes in northern California, Oregon, and Washington covers 10,000 years of seismic history. The evidence of a pending event has made decision makers and the public take notice and put forth resources to develop strategies revolving around PBSD.

Oregon's performance-based features are modified from NCHRP Synthesis 440 to account for a third hazard condition, the Cascadia Subduction Zone Earthquake (CSZE), in Oregon DOT's Bridge Design and Drafting Manual—Section 1, Design (Oregon DOT 2016a; see also Oregon DOT 2016b). Design of new bridges on and west of US 97 references two levels of performance criteria: life safety and operational. Design of new bridges east of US 97 requires life safety criteria only. Seismic design criteria for life safety and operational criteria are described as follows.

  • "Life Safety" Criteria: Design all bridges for a 1,000-year return period earthquake (7 percent probability of exceedance in 75 years) to meet the "Life Safety" criteria using the 2014 USGS Hazard Maps. The probabilistic hazard maps for average return periods of 1,000 years and 500 years are available at the ODOT Bridge Section website, but not on the USGS website. To satisfy the "Life Safety" criteria, use Response Modification Factors from LRFD Table 3.10.7.1-1 using an importance category of "other."

  • "Operational" Criteria: Design all bridges on and west of US 97 to remain "Operational" after a full rupture of the Cascadia Subduction Zone Earthquake (CSZE). The full-rupture CSZE hazard maps are available at the ODOT Bridge Section website.
To satisfy the "Operational" criteria, use Response Modification Factors from LRFD Table 3.10.7.1-1 using an importance category of "essential." When requested in writing by a local agency, the "Operational" criteria for local bridges may be waived.

The CSZE is a deterministic event, and a deterministic design response spectrum must be generated. To allow for consistency and efficiency in design for the CSZE, an application for generating the design response spectra has been developed by Portland State University (Nako et al. 2009). AASHTO guide specifications values for Table 3.4.2.3-1 are modified into two tables for (1) values of Site Factor, Fpga, at zero-period on the acceleration spectrum and (2) values of Site Factor, Fa, for the short-period range of the acceleration spectrum. Table 3.4.2.3-2 is replaced with values of Site Factor, Fv, for the long-period range of the acceleration spectrum.

For seismic retrofit projects, the lower level ground motion is modified to be the CSZE with full rupture, as seen in Table 9. Performance levels, including performance level zero (PL0), are specified based on bridge importance and the anticipated service life (ASL) category required.

Figure 16. Cascadia earthquake timeline. (Source: OSSPAC 2013.)
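The hazard level quoted for the "Life Safety" criterion can be checked with a short calculation: under the commonly assumed Poisson (memoryless) occurrence model, a 1,000-year average return period corresponds to roughly a 7 percent probability of exceedance over a 75-year design life.

```python
from math import exp

def prob_exceedance(return_period_years, exposure_years):
    # Poisson occurrence model: P = 1 - exp(-t / T)
    return 1.0 - exp(-exposure_years / return_period_years)

print(round(prob_exceedance(1000, 75), 3))  # 0.072, i.e., about 7 percent
```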

The South Carolina DOT

The South Carolina Department of Transportation (South Carolina DOT) has updated its geotechnical design manual (South Carolina DOT 2019). Chapters 12, 13, and 14, covering geotechnical seismic analysis, hazard, and design, respectively, have been updated to current practices and research, including incorporation of PBSD hazard prediction. South Carolina DOT is also updating its site coefficients to be more appropriate for South Carolina's geologic and seismic conditions; see Andrus et al. (2014). Note that with the revisions, South Carolina DOT issued a design memorandum in November 2015 that revised the substructure unit quantitative damage criteria (maximum ductility demand) table (Table 7.1 of the SCDOT Seismic Design Specifications for Highway Bridges). See Table 10.

The Utah DOT

The Utah DOT and Brigham Young University (see Franke et al. 2014a, 2014b, 2015a, 2015b, 2015c, 2016) are researching the ability of engineers to apply the benefits of full performance-based probabilistic earthquake analysis without requiring specialized software, training, or education. There is an emphasis on differences between deterministic and performance-based procedures for assessing liquefaction hazards and how the output can vary significantly between these two methodologies, especially in areas of low seismicity. Guidance is provided regarding when to use each of the two methodologies and how to bound the analysis effort. Additionally, a simplified performance-based procedure for assessment of liquefaction triggering using liquefaction loading maps was developed with this research. The components of this tool, as well as step-by-step procedures for the liquefaction initiation and lateral spread displacement models, are provided. The tool incorporates the simplified performance-based procedures determined with this research.

National Highway Institute

Marsh et al. (2014) referenced a manual for the National Highway Institute's training course for engineers to understand displacement-based LRFD seismic analysis and design of bridges, which is offered through state agencies and open to industry engineers and geotechnical engineers. This course helps designers understand the principles behind both force-based AASHTO (AASHTO 2014) and displacement-based AASHTO (AASHTO 2011) methodologies, including a deeper understanding of what performance means in a seismic event. Other similar courses are also being offered to industry and are improving the understanding of practicing engineers.

Table 9. Modifications to minimum performance levels for retrofitted bridges. (Columns: Earthquake Ground Motion; Bridge Importance and Service Life Category.)

Federal Emergency Management Agency

The Federal Emergency Management Agency (FEMA) has developed a series of design guidelines for seismic performance assessment of buildings, and three of the five documents

are referenced in FEMA (2012a, 2012b, 2012c). A step-by-step methodology and explanation of implementation are provided for an intensity-based assessment and for a time-based assessment. The process of identifying and developing appropriate fragility curves is demonstrated. A software program called the Performance Assessment Calculation Tool has also been developed, with a user manual included in the FEMA documents to help engineers apply PBSD to the building industry.

Japan Road Association

The Japan Road Association (JRA) Design Specifications have been revised based on the performance-based design code concept in response to the international harmonization of design codes and the flexible employment of new structures and new construction methods. Figure 17 shows the code structure for seismic design using the JRA Design Specifications. The performance matrix is based on a two-level ground motion (Earthquakes 1 and 2), with the first based on an interplate-type earthquake with a magnitude of around 8, and the second with a magnitude of around 7 at a short distance to the structure.

Kuwabara et al. (2013) outlined the incremental revisions to the JRA Design Specifications between 2002 and 2012. These revisions include, but are not limited to, the ductility design method of reinforced concrete bridges, the plastic hinge length equation, the evaluation of hollow columns, and the introduction of high-strength steel reinforcement. Following the 2016 earthquake in Kumamoto, Japan, a new version of the JRA Design Specifications is under development.

Table 10. South Carolina DOT substructure unit quantitative damage criteria (maximum ductility demand). (Columns: Design Earthquake; Bridge Systems; Operational Classification (OC). Note: Analysis for FEE is not required for OC III bridges. Source: South Carolina DOT 2015.)

Identification of Knowledge Gaps

The resources available to develop guide specifications for PBSD are improving, with examples such as the upcoming Seismic Design Criteria, Version 2, from Caltrans, which will address aspects of PBSD, and the building industry's efforts to develop PBSD practices and tools for engineers and owners to collaborate on solutions based on performance criteria and expectations. There is still a perception that the bridge industry could better predict likely performance in large, damaging earthquakes than it does at present, and there are still gaps in that knowledge base that need to be closed. Most of the knowledge gaps listed in Marsh and Stringer (2013) are still applicable today; see Table 11. The technology readiness levels represent what has been developed and used; what research is done, ongoing, and being discussed; and what only exists in concept. Knowledge gaps certainly exist in all facets of PBSD; however, other key knowledge gaps beyond those listed in NCHRP Synthesis 440 (Marsh and Stringer 2013) that should be closed in order to improve the implementation of PBSD are covered.

Figure 17. Code structure for seismic design using JRA design specifications. (Elements shown include: objectives of codes; overall goals; functional (basic) requirements; performance requirement level; verification methods and acceptable solutions, which can be modified or selected with necessary verifications; importance, loads, design ground motion, and limit states; principles of performance verification; verifications of seismic performances (static and dynamic); evaluation of limit states of members (RC and steel columns, bearings, foundations, and superstructure); unseating prevention systems; and principles of seismic design.)
Table 11. Technology readiness levels for PBSD. (Columns: TRL; Description; % of Development Complete, binned as 0–25, 25–50, 50–75, 75–100.)

TRL 1: PBSD concept exists
TRL 2: Seismic hazard deployable
TRL 3: Structural analysis deployable
TRL 4: Damage analysis deployable
TRL 5: Loss analysis deployable
TRL 6: Owners willing and skilled in PBSD
TRL 7: Design guidelines
TRL 8: Demonstration projects
TRL 9: Proven effectiveness in earthquake

Gaps related to structural analysis can include minimum and expected properties for reinforcing steel greater than Grade 80, stainless steel, and other materials that can improve serviceability and, in some conditions, performance. Oregon DOT has been using stainless steel in bridges located along the coastline and in other highly corrosive environments to extend the service life of the bridge; however, many of these locations are also prone to a large CSZE, and the use of these materials in earthquake resisting elements is still being developed.

In the State of Washington's resiliency plan, outlined in Washington State Emergency Management Council–Seismic Safety Committee (2012), what is missing is a link between damage levels and return to service. This is a knowledge gap, given what we know structurally and what this report suggests as a desired goal for post-earthquake recovery.

Gaps related to decision makers can include bridge collapse caused by effects other than ground motion. It is not intended that the PBSD guide specifications will address tsunami events, but the JRA specifications do address tsunami as well as landslide effects. Figures 18 and 19 are examples of these other types of failure systems and show the collapse of bridges caused by effects other than ground motion (Kuwabara et al. 2013). The decision to combine these types of effects with a seismic hazard, even combining liquefaction, downdrag, and lateral spreading effects, needs additional clarification and is currently left up to the owner to assess implications of probability, safety, and cost ramifications.

Liang and Lee (2013) summarized that in order to update the extreme event design limit states in AASHTO (2014), combinations of all nonextreme and extreme loads need to be formulated on the same probability-based platform.
Accounting for more than one time-variable load creates a complex situation in which all of the possible load combinations, even many that are not needed for the purpose of bridge design, have to be determined. A criterion is therefore formulated to determine whether a specific term needs to be included or can be rejected; comparing a given term's failure probability to the total pre-set permissible design failure probability can be chosen as this criterion.

Figure 18. Collapse of bridge due to landslide. (Note: Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States. Source: Kuwabara et al. 2013.)
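The screening idea described by Liang and Lee (2013) can be sketched as a simple threshold test. The permissible probability and cutoff fraction below are invented placeholders, not values from their work:

```python
# Retain a candidate load-combination term only if its estimated failure
# probability is non-negligible relative to the pre-set permissible total
# design failure probability. Both constants are illustrative.
PERMISSIBLE_TOTAL = 1e-4     # total permissible design failure probability
NEGLIGIBLE_FRACTION = 0.01   # terms under 1 percent of the budget are dropped

def retain_term(term_failure_probability):
    return term_failure_probability >= NEGLIGIBLE_FRACTION * PERMISSIBLE_TOTAL

print(retain_term(5e-6))  # True: large enough to matter
print(retain_term(1e-8))  # False: negligible, rejected
```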

While the seismic hazard definition was once thought to be relatively well understood, there is a growing knowledge gap related to the effect of rotation angle on the intensity of ground motions and how the use of a geometric mean of the motions, or other methods of including the effect of rotation angle (RotDxx), should be incorporated into seismic design. This issue is not specific to PBSD; like all seismic design methods, PBSD relies on a full understanding of the hazard definition for proper implementation.

The knowledge gaps identified in NCHRP Synthesis 440 are still applicable. Many of these knowledge gaps will become evident to both engineers and decision makers as the PBSD guidelines are developed. Overall, the baseline information needed to develop PBSD guide specifications is in place. Reaching industry's end goal of understanding the relationship between risk-based decision making, design decisions, and methodologies to meet performance goals is going to be an iterative process.

Figure 19. Collapse of bridge due to tsunami. (Note: Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States. Source: Kuwabara et al. 2013.)

Performance-based seismic design (PBSD) for infrastructure in the United States is a developing field, with new research, design, and repair technologies; definitions; and methodologies being advanced every year.

The TRB National Cooperative Highway Research Program's NCHRP Research Report 949: Proposed AASHTO Guidelines for Performance-Based Seismic Bridge Design presents a methodology to analyze and determine the seismic capacity requirements of bridge elements expressed in terms of service and damage levels of bridges under a seismic hazard. The methodology is presented as proposed AASHTO guidelines for performance-based seismic bridge design with ground motion maps and detailed design examples illustrating the application of the proposed guidelines and maps.

Supplemental materials to the report include Appendix A: SDOF Column Investigation Sample Calculations and Results, and Appendix B: Hazard Comparison.


Academic Success Center


Learning about Synthesis and Analysis

What Do Synthesis and Analysis Mean?

Synthesis: the combination of ideas to

  • show commonalities or patterns

Analysis: a detailed examination

  • of elements, ideas, or the structure of something
  • can be a basis for discussion or interpretation

Synthesis and Analysis: combine and examine ideas to

  • show how commonalities, patterns, and elements fit together
  • form a unified point for a theory, discussion, or interpretation
  • develop an informed evaluation of the idea by presenting several different viewpoints and/or ideas

Key Resource: Synthesis Matrix

A synthesis matrix is an excellent tool for organizing sources by theme and for seeing the similarities and differences, as well as any important patterns, in the methodology and recommendations for future research. Using a synthesis matrix can assist you not only in synthesizing and analyzing, but also in finding a researchable problem and gaps in methodology and/or research.

Use the Synthesis Matrix Template attached below to organize your research by theme and look for patterns in your sources. Use the companion handout, "Types of Articles," to help you identify the different article types for the sources you are using in your matrix. If you have any questions about how to use the synthesis matrix, sign up for the synthesis and analysis group session to practice using them with Dr. Sara Northern!


Open access | Published: 11 August 2009

Methods for the synthesis of qualitative research: a critical review

Elaine Barnett-Page & James Thomas

BMC Medical Research Methodology, volume 9, Article number: 59 (2009)


In recent years, a growing number of methods for synthesising qualitative research have emerged, particularly in relation to health-related research. There is a need for both researchers and commissioners to be able to distinguish between these methods and to select which method is the most appropriate to their situation.

A number of methodological and conceptual links between these methods were identified and explored, while contrasting epistemological positions explained differences in approaches to issues such as quality assessment and extent of iteration. Methods broadly fall into 'realist' or 'idealist' epistemologies, which partly accounts for these differences.

Methods for qualitative synthesis vary across a range of dimensions. Commissioners of qualitative syntheses might wish to consider the kind of product they want and select their method – or type of method – accordingly.


The range of different methods for synthesising qualitative research has been growing over recent years [ 1 , 2 ], alongside an increasing interest in qualitative synthesis to inform health-related policy and practice [ 3 ]. While the terms 'meta-analysis' (a statistical method to combine the results of primary studies), or sometimes 'narrative synthesis', are frequently used to describe how quantitative research is synthesised, far more terms are used to describe the synthesis of qualitative research. This profusion of terms can mask some of the basic similarities in approach that the different methods share, and can also lead to confusion regarding which method is most appropriate in a given situation. This paper does not argue that the various nomenclatures are unnecessary, but rather seeks to draw together and review the full range of methods of synthesis available to assist future reviewers in selecting a method that is fit for their purpose. It also represents an attempt to guide the reader through some of the varied terminology that has sprung up around qualitative synthesis. Other helpful reviews of synthesis methods have been undertaken in recent years with slightly different foci from this paper. Two recent studies have focused on describing and critiquing methods for the integration of qualitative research with quantitative [ 4 , 5 ] rather than exclusively examining the detail and rationale of methods for the synthesis of qualitative research. Two other significant pieces of work give practical advice for conducting the synthesis of qualitative research, but do not discuss the full range of methods available [ 6 , 7 ]. We begin our Discussion by outlining each method of synthesis in turn, before comparing and contrasting characteristics of these different methods across a range of dimensions.
Readers who are more familiar with the synthesis methods described here may prefer to turn straight to the 'dimensions of difference' analysis in the second part of the Discussion.

Overview of synthesis methods

Meta-ethnography

In their seminal work of 1988, Noblit and Hare proposed meta-ethnography as an alternative to meta-analysis [ 8 ]. They cited Strike and Posner's [ 9 ] definition of synthesis as an activity in which separate parts are brought together to form a 'whole'; this construction of the whole is essentially characterised by some degree of innovation, so that the result is greater than the sum of its parts. They also borrowed from Turner's theory of social explanation [ 10 ], a key tenet of which was building 'comparative understanding' [[ 8 ], p22] rather than aggregating data.

To Noblit and Hare, synthesis provided an answer to the question of 'how to "put together" written interpretive accounts' [[ 8 ], p7], where mere integration would not be appropriate. Noblit and Hare's early work synthesised research from the field of education.

Three different methods of synthesis are used in meta-ethnography. One involves the 'translation' of concepts from individual studies into one another, thereby evolving overarching concepts or metaphors. Noblit and Hare called this process reciprocal translational analysis (RTA). Refutational synthesis involves exploring and explaining contradictions between individual studies. Lines-of-argument (LOA) synthesis involves building up a picture of the whole (i.e. culture, organisation etc) from studies of its parts. The authors conceptualised this latter approach as a type of grounded theorising.

Britten et al [ 11 ] and Campbell et al [ 12 ] have both conducted evaluations of meta-ethnography and claim to have succeeded, by using this method, in producing theories with greater explanatory power than could be achieved in a narrative literature review. While both these evaluations used small numbers of studies, more recently Pound et al [ 13 ] conducted both an RTA and an LOA synthesis using a much larger number of studies (37) on resisting medicines. These studies demonstrate that meta-ethnography has evolved since Noblit and Hare first introduced it. Campbell et al claim to have applied the method successfully to non-ethnographical studies. Based on their reading of Schutz [ 14 ], Britten et al have developed both second and third order constructs in their synthesis (Noblit and Hare briefly allude to the possibility of a 'second level of synthesis' [[ 8 ], p28] but do not demonstrate or further develop the idea).

In a more recent development, Sandelowski & Barroso [ 15 ] write of adapting RTA by using it to ' integrate findings interpretively, as opposed to comparing them interpretively' (p204). The former would involve looking to see whether the same concept, theory etc exists in different studies; the latter would involve the construction of a bigger picture or theory (i.e. LOA synthesis). They also talk about comparing or integrating imported concepts (e.g. from other disciplines) as well as those evolved 'in vivo'.

Grounded theory

Kearney [ 16 ], Eaves [ 17 ] and Finfgeld [ 18 ] have all adapted grounded theory to formulate a method of synthesis. Key methods and assumptions of grounded theory, as originally formulated and subsequently refined by Glaser and Strauss [ 19 ] and Strauss and Corbin [ 20 , 21 ], include: simultaneous phases of data collection and analysis; an inductive approach to analysis, allowing the theory to emerge from the data; the use of the constant comparison method; the use of theoretical sampling to reach theoretical saturation; and the generation of new theory. Eaves cited grounded theorists Charmaz [ 22 ] and Chesler [ 23 ], as well as Strauss and Corbin [ 20 ], as informing her approach to synthesis.

Glaser and Strauss [ 19 ] foresaw a time when a substantive body of grounded research would need to be pushed towards a higher, more abstract level. As a piece of methodological work, Eaves synthesised the synthesis methods used by these authors to produce a clear and explicit guide to synthesis in grounded formal theory. Kearney stated that 'grounded formal theory', as she termed this method of synthesis, 'is suited to study of phenomena involving processes of contextualized understanding and action' [[ 24 ], p180] and, as such, is particularly applicable to nurses' research interests.

As Kearney suggested, the examples examined here were largely dominated by research in nursing. Eaves synthesised studies on care-giving in rural African-American families for elderly stroke survivors; Finfgeld on courage among individuals with long-term health problems; Kearney on women's experiences of domestic violence.

Kearney explicitly chose 'grounded formal theory' because it matches 'like' with 'like': that is, it applies the same methods that have been used to generate the original grounded theories included in the synthesis – produced by constant comparison and theoretical sampling – to generate a higher-level grounded theory. The wish to match 'like' with 'like' is also implicit in Eaves' paper. This distinguishes grounded formal theory from more recent applications of meta-ethnography, which have sought to include qualitative research using diverse methodological approaches [ 12 ].

Thematic Synthesis

Thomas and Harden [ 25 ] have developed an approach to synthesis which they term 'thematic synthesis'. This combines and adapts approaches from both meta-ethnography and grounded theory. The method was developed out of a need to conduct reviews that addressed questions relating to intervention need, appropriateness and acceptability – as well as those relating to effectiveness – without compromising on key principles developed in systematic reviews. They applied thematic synthesis in a review of the barriers to, and facilitators of, healthy eating amongst children.

Free codes of findings are organised into 'descriptive' themes, which are then further interpreted to yield 'analytical' themes. This approach shares characteristics with later adaptations of meta-ethnography, in that the analytical themes are comparable to 'third order interpretations' and the development of descriptive and analytical themes through coding invokes reciprocal 'translation'. It also shares much with grounded theory, in that the approach is inductive and themes are developed using a 'constant comparison' method. A novel aspect of their approach is the use of computer software to code the results of included studies line-by-line, thus borrowing another technique from methods usually used to analyse primary research.
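The three-stage data flow of thematic synthesis can be sketched in code. This is purely an illustrative sketch, not the authors' software: all codes, extracts and themes below are invented, and in a real review the third (analytical) stage is generated by reviewers' interpretation rather than computed.

```python
# Illustrative sketch of thematic synthesis: free line-by-line codes ->
# descriptive themes -> analytical themes. All codes/themes are invented.
from collections import defaultdict

# Stage 1: each line of a study's findings is tagged with one or more free codes.
line_codes = {
    "Children said fruit was 'boring'":            ["taste", "food_image"],
    "Parents controlled what was in the lunchbox": ["parental_control"],
    "Vegetables seen as 'adult' food":             ["food_image"],
}

# Stage 2: free codes are grouped into descriptive themes that stay
# close to the primary studies.
descriptive_theme_of = {
    "taste": "children's food preferences",
    "food_image": "children's food preferences",
    "parental_control": "family influences",
}

themes = defaultdict(set)
for line, codes in line_codes.items():
    for code in codes:
        themes[descriptive_theme_of[code]].add(code)

# Stage 3: analytical themes 'go beyond' the primary studies; in practice
# these come from reviewer interpretation, shown here only as a mapping.
analytical_theme = {
    "children's food preferences": "interventions should appeal to children's own priorities",
    "family influences": "interventions need to engage parents as gatekeepers",
}

for desc, codes in sorted(themes.items()):
    print(desc, sorted(codes), "->", analytical_theme[desc])
```

The point of the sketch is the direction of travel: descriptive themes remain organised around the primary data, while analytical themes re-describe them at a higher level of abstraction.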

Textual Narrative Synthesis

Textual narrative synthesis is an approach which arranges studies into more homogeneous groups. Lucas et al [ 26 ] comment that it has proved useful in synthesising evidence of different types (qualitative, quantitative, economic etc). Typically, study characteristics, context, quality and findings are reported according to a standard format, and similarities and differences are compared across studies. Structured summaries may also be developed, elaborating on and contextualising the extracted data [ 27 ].

Lucas et al [ 26 ] compared thematic synthesis with textual narrative synthesis. They found that 'thematic synthesis holds most potential for hypothesis generation' whereas textual narrative synthesis is more likely to make transparent heterogeneity between studies (as does meta-ethnography, with refutational synthesis) and issues of quality appraisal. This is possibly because textual narrative synthesis makes clearer the context and characteristics of each study, while the thematic approach organises data according to themes. However, Lucas et al found that textual narrative synthesis is 'less good at identifying commonality' (p2); the authors do not make explicit why this should be, although it may be that organising according to themes, as the thematic approach does, is comparatively more successful in revealing commonality.

Meta-study

Paterson et al [ 28 ] have evolved a multi-faceted approach to synthesis, which they call 'meta-study'. Drawing on the work of the sociologist Zhao [ 29 ], who in turn drew on Ritzer [ 30 ], they outlined three components of analysis to be undertaken prior to synthesis. These are meta-data-analysis (the analysis of findings), meta-method (the analysis of methods) and meta-theory (the analysis of theory). Collectively, these three elements of analysis, culminating in synthesis, make up the practice of 'meta-study'. Paterson et al pointed out that the different components of analysis may be conducted concurrently.

Paterson et al argued that primary research is a construction; secondary research is therefore a construction of a construction. There is need for an approach that recognises this, and that also recognises research to be a product of its social, historical and ideological context. Such an approach would be useful in accounting for differences in research findings. For Paterson et al, there is no such thing as 'absolute truth'.

Meta-study was developed to study the experiences of adults living with a chronic illness. Meta-data-analysis was conceived of by Paterson et al in similar terms to Noblit and Hare's meta-ethnography (see above), in that it is essentially interpretive and seeks to reveal similarities and discrepancies among accounts of a particular phenomenon. Meta-method involves the examination of the methodologies of the individual studies under review. Part of the process of meta-method is to consider different aspects of methodology such as sampling, data collection, research design etc, similar to procedures others have called 'critical appraisal' (CASP [ 31 ]). However, Paterson et al take their critique to a deeper level by establishing the underlying assumptions of the methodologies used and the relationship between research outcomes and methods used. Meta-theory involves scrutiny of the philosophical and theoretical assumptions of the included research papers; this includes looking at the wider context in which new theory is generated. Paterson et al described meta-synthesis as a process which creates a new interpretation which accounts for the results of all three elements of analysis. The process of synthesis is iterative and reflexive and the authors were unwilling to oversimplify the process by 'codifying' procedures for bringing all three components of analysis together.

Meta-narrative

Greenhalgh et al [ 32 ]'s meta-narrative approach to synthesis arose out of the need to synthesise evidence to inform complex policy-making questions and was assisted by the formation of a multi-disciplinary team. Their approach to review was informed by Thomas Kuhn's The Structure of Scientific Revolutions [ 33 ], in which he proposed that knowledge is produced within particular paradigms which have their own assumptions about theory, about what is a legitimate object of study, about what are legitimate research questions and about what constitutes a finding. Paradigms also tend to develop through time according to a particular set of stages, central to which is the stage of 'normal science', in which the particular standards of the paradigm are largely unchallenged and seen to be self-evident. As Greenhalgh et al pointed out, Kuhn saw paradigms as largely incommensurable: 'that is, an empirical discovery made using one set of concepts, theories, methods and instruments cannot be satisfactorily explained through a different paradigmatic lens' [[ 32 ], p419].

Greenhalgh et al synthesised research from a wide range of disciplines; their research question related to the diffusion of innovations in health service delivery and organisation. They thus identified a need to synthesise findings from research which contains many different theories arising from many different disciplines and study designs.

Based on Kuhn's work, Greenhalgh et al proposed that, across different paradigms, there were multiple – and potentially mutually contradictory – ways of understanding the concept at the heart of their review, namely the diffusion of innovation. Bearing this in mind, the reviewers deliberately chose to select key papers from a number of different research 'paradigms' or 'traditions', both within and beyond healthcare, guided by their multidisciplinary research team. They took as their unit of analysis the 'unfolding "storyline" of a research tradition over time' [[ 32 ], p417] and sought to understand the diffusion of innovation as it was conceptualised in each of these traditions. Key features of each tradition were mapped: historical roots, scope and theoretical basis; research questions asked and methods/instruments used; main empirical findings; the historical development of the body of knowledge (how earlier findings led to later findings); and the strengths and limitations of the tradition. This exercise produced maps of 13 'meta-narratives' in total, from which seven key dimensions, or themes, were identified and distilled for the synthesis phase of the review.

Critical Interpretive Synthesis

Dixon-Woods et al [ 34 ] developed their own approach to synthesising multi-disciplinary and multi-method evidence, termed 'critical interpretive synthesis', while researching access to healthcare by vulnerable groups. Critical interpretive synthesis is an adaptation of meta-ethnography, as well as borrowing techniques from grounded theory. The authors stated that they needed to adapt traditional meta-ethnographic methods for synthesis, since these had never been applied to quantitative as well as qualitative data, nor had they been applied to a substantial body of data (in this case, 119 papers).

Dixon-Woods et al presented critical interpretive synthesis as an approach to the whole process of review, rather than to just the synthesis component. It involves an iterative approach to refining the research question and searching and selecting from the literature (using theoretical sampling) and defining and applying codes and categories. It also has a particular approach to appraising quality, using relevance – i.e. likely contribution to theory development – rather than methodological characteristics as a means of determining the 'quality' of individual papers [ 35 ]. The authors also stress, as a defining characteristic, critical interpretive synthesis's critical approach to the literature in terms of deconstructing research traditions or theoretical assumptions as a means of contextualising findings.

Dixon-Woods et al rejected reciprocal translational analysis (RTA) as this produced 'only a summary in terms that have already been used in the literature' [[ 34 ], p5], which was seen as less helpful when dealing with a large and diverse body of literature. Instead, Dixon-Woods et al adopted a lines-of-argument (LOA) synthesis, in which – rejecting the difference between first, second and third order constructs – they instead developed 'synthetic constructs' which were then linked with constructs arising directly from the literature.

The influence of grounded theory can be seen in particular in critical interpretive synthesis's inductive approach to formulating the review question and to developing categories and concepts, rejecting a 'stage' approach to systematic reviewing, and in selecting papers using theoretical sampling. Dixon-Woods et al also claim that critical interpretive synthesis is distinct in its 'explicit orientation towards theory generation' [[ 34 ], p9].

Ecological Triangulation

Jim Banning is the author of 'ecological triangulation' or 'ecological sentence synthesis', applying this method to the evidence for what works for youth with disabilities. He borrows from Webb et al [ 36 ] and Denzin [ 37 ] the concept of triangulation, in which phenomena are studied from a variety of vantage points. His rationale is that building an 'evidence base' of effectiveness requires the synthesis of cumulative, multi-faceted evidence in order to find out 'what intervention works for what kind of outcomes for what kind of persons under what kind of conditions' [[ 38 ], p1].

Ecological triangulation unpicks the mutually interdependent relationships between behaviour, persons and environments. The method requires that, for data extraction and synthesis, 'ecological sentences' are formulated following the pattern: 'With this intervention, these outcomes occur with these population foci and within these grades (ages), with these genders ... and these ethnicities in these settings' [[ 39 ], p1].
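Because the 'ecological sentence' is a fixed pattern, it lends itself to being treated as a data-extraction template. The sketch below is only an illustration of that idea: the field names and the example study record are invented, not drawn from Banning's own materials.

```python
# Illustrative sketch: Banning's 'ecological sentence' treated as a
# fill-in-the-blanks extraction template. Field names and the example
# study record are hypothetical.

ECOLOGICAL_SENTENCE = (
    "With {intervention}, {outcomes} occur with {population} and within "
    "{ages}, with {genders} and {ethnicities} in {settings}"
)

def ecological_sentence(record: dict) -> str:
    """Render one extracted study as an ecological sentence."""
    return ECOLOGICAL_SENTENCE.format(**record)

study = {
    "intervention": "peer-mentoring",
    "outcomes": "improved school attendance",
    "population": "youth with learning disabilities",
    "ages": "grades 6-8",
    "genders": "both genders",
    "ethnicities": "a predominantly Hispanic sample",
    "settings": "urban middle schools",
}

print(ecological_sentence(study))
```

Formulating every included study against the same sentence frame is what lets the method compare 'what works, for whom, under what conditions' across a cumulative evidence base.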

Framework Synthesis

Brunton et al [ 40 ] and Oliver et al [ 41 ] have applied a 'framework synthesis' approach in their reviews. Framework synthesis is based on framework analysis, which was outlined by Pope, Ziebland and Mays [ 42 ], and draws upon the work of Ritchie and Spencer [ 43 ] and Miles and Huberman [ 44 ]. Its rationale is that qualitative research produces large amounts of textual data in the form of transcripts, observational fieldnotes etc. The sheer wealth of information poses a challenge for rigorous analysis. Framework synthesis offers a highly structured approach to organising and analysing data (e.g. indexing using numerical codes, rearranging data into charts etc).

Brunton et al applied the approach to a review of children's, young people's and parents' views of walking and cycling; Oliver et al to an analysis of public involvement in health services research. Framework synthesis is distinct from the other methods outlined here in that it utilises an a priori 'framework' – informed by background material and team discussions – to extract and synthesise findings. As such, it is largely a deductive approach although, in addition to topics identified by the framework, new topics may be developed and incorporated as they emerge from the data. The synthetic product can be expressed in the form of a chart for each key dimension identified, which may be used to map the nature and range of the concept under study and find associations between themes and exceptions to these [ 40 ].
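The charting step of framework synthesis can be pictured as rearranging indexed extracts into one chart (matrix) per framework topic, with a row per study. The sketch below is a hypothetical illustration only: the topics, studies and extracts are invented, and it simply shows the data structure, including how a topic that emerges from the data can be added alongside the a priori framework.

```python
# Illustrative sketch of framework synthesis 'charting'. The framework
# topics, studies and findings are invented for illustration.

framework = ["attitudes to walking", "attitudes to cycling"]  # a priori topics

# Indexed extracts: (study, topic, summarised finding)
indexed = [
    ("Smith 2001", "attitudes to walking", "walking seen as unsafe after dark"),
    ("Jones 2003", "attitudes to cycling", "cycling seen as 'childish'"),
    ("Smith 2001", "emergent: parental fears", "parents restrict independent travel"),
]

# One chart per topic; each chart maps study -> list of findings.
charts: dict[str, dict[str, list[str]]] = {t: {} for t in framework}
for study, topic, finding in indexed:
    # Topics not in the a priori framework are incorporated as they emerge.
    chart = charts.setdefault(topic, {})
    chart.setdefault(study, []).append(finding)

for topic, rows in charts.items():
    print(topic, rows)
```

Reading down a chart's column compares studies on one dimension; reading across a study's rows maps the range of the concept, supporting the search for associations between themes and exceptions to them.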

'Fledgling' approaches

There are three other approaches to synthesis which have not yet been widely used. One is an approach using content analysis [ 45 , 46 ], in which text is condensed into fewer content-related categories. Another is 'meta-interpretation' [ 47 ], featuring: an idiographic rather than pre-determined approach to the development of exclusion criteria; a focus on meaning in context; interpretations as raw data for synthesis (although this feature does not distinguish it from other synthesis methods); an iterative approach to the theoretical sampling of studies for synthesis; and a transparent audit trail demonstrating the trustworthiness of the synthesis.

In addition to the synthesis methods discussed above, Sandelowski and Barroso propose a method they call 'qualitative metasummary' [ 15 ]. It is mentioned here as a new and original approach to handling a collection of qualitative studies, but it differs from the other methods described here in being aggregative; that is, findings are accumulated and summarised rather than 'transformed'. Metasummary is a way of producing a 'map' of the contents of qualitative studies and – according to Sandelowski and Barroso – 'reflect [s] a quantitative logic' [[ 15 ], p151]. The frequency of each finding is determined, and the higher the frequency of a particular finding, the greater its validity. The authors even discuss the calculation of 'effect sizes' for qualitative findings. Qualitative metasummaries can be undertaken as an end in themselves or may serve as a basis for a further synthesis.
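The 'quantitative logic' of metasummary can be made concrete with a small sketch. Assuming the general idea of a frequency effect size as the proportion of reports containing a given abstracted finding (the specific findings and reports below are invented for illustration):

```python
# Illustrative sketch: frequency 'effect sizes' for abstracted qualitative
# findings, computed as (reports containing the finding) / (total reports).
# Findings and reports are hypothetical.

def frequency_effect_sizes(report_findings: dict[str, set[str]]) -> dict[str, float]:
    """Map each finding to the proportion of reports that contain it."""
    n_reports = len(report_findings)
    all_findings = set().union(*report_findings.values())
    return {
        f: sum(f in findings for findings in report_findings.values()) / n_reports
        for f in all_findings
    }

reports = {
    "Study A": {"stigma", "cost barriers"},
    "Study B": {"stigma"},
    "Study C": {"stigma", "family support"},
    "Study D": {"cost barriers"},
}

for finding, es in sorted(frequency_effect_sizes(reports).items()):
    # On the metasummary logic, higher frequency is read as greater validity.
    print(f"{finding}: {es:.2f}")
```

The aggregative character of the method is visible here: findings are counted and ranked rather than reinterpreted into higher-order constructs.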

Dimensions of difference

Having outlined the range of methods identified, we now turn to an examination of how they compare with one another. It is clear that they have come from many different contexts and have different approaches to understanding knowledge, but what do these differences mean in practice? Our framework for this analysis is shown in Additional file 1 : dimensions of difference [ 48 ]. We have examined the epistemology of each of the methods and found that, to some extent, this explains the need for different methods and their various approaches to synthesis.

Epistemology

The first dimension that we will consider is that of the researchers' epistemological assumptions. Spencer et al [ 49 ] outline a range of epistemological positions, which might be organised into a spectrum as follows:

Subjective idealism : there is no shared reality independent of multiple alternative human constructions

Objective idealism : there is a world of collectively shared understandings

Critical realism : knowledge of reality is mediated by our perceptions and beliefs

Scientific realism : it is possible for knowledge to approximate closely an external reality

Naïve realism : reality exists independently of human constructions and can be known directly [ 49 , 45 , 46 ].

Thus, at one end of the spectrum we have a highly constructivist view of knowledge and, at the other, an unproblematized 'direct window onto the world' view.

Nearly all of the positions along this spectrum are represented in the range of methodological approaches to synthesis covered in this paper. The originators of meta-narrative synthesis, critical interpretive synthesis and meta-study all articulate what might be termed a 'subjective idealist' approach to knowledge. Paterson et al [ 28 ] state that meta-study shies away from creating 'grand theories' within the health or social sciences and assumes that no single objective reality will be found. Primary studies, they argue, are themselves constructions; meta-synthesis, then, 'deals with constructions of constructions' (p7). Greenhalgh et al [ 32 ] also view knowledge as a product of its disciplinary paradigm and use this to explain conflicting findings: again, the authors neither seek, nor expect to find, one final, non-contestable answer to their research question. Critical interpretive synthesis is similar in seeking to place literature within its context, to question its assumptions and to produce a theoretical model of a phenomenon which – because it is highly interpretive – may not be reproducible by different research teams at alternative points in time [[ 34 ], p11].

Methods used to synthesise grounded theory studies in order to produce a higher level of grounded theory [ 24 ] appear to be informed by 'objective idealism', as does meta-ethnography. Kearney argues for the near-universal applicability of a 'ready-to-wear' theory across contexts and populations. This approach is clearly distinct from one which recognises multiple realities. The emphasis is on examining commonalities amongst, rather than discrepancies between, accounts. This emphasis is similarly apparent in most meta-ethnographies, which are conducted either according to Noblit and Hare's 'reciprocal translational analysis' technique or to their 'lines-of-argument' technique and which seek to provide a 'whole' which has a greater explanatory power. Although Noblit and Hare also propose 'refutational synthesis', in which contradictory findings might be explored, there are few examples of this having been undertaken in practice, and the aim of the method appears to be to explain and explore differences due to context, rather than multiple realities.

Despite an assumption of a reality which is perhaps less contestable than those of meta-narrative synthesis, critical interpretive synthesis and meta-study, both grounded formal theory and meta-ethnography place a great deal of emphasis on the interpretive nature of their methods. This still supposes a degree of constructivism. Although less explicit about how their methods are informed, it seems that both thematic synthesis and framework synthesis – while also involving some interpretation of data – share an even less problematized view of reality and a greater assumption that their synthetic products are reproducible and correspond to a shared reality. This is also implicit in the fact that such products are designed directly to inform policy and practice, a characteristic shared by ecological triangulation. Notably, ecological triangulation, according to Banning, can be either realist or idealist. Banning argues that the interpretation of triangulation can either be one in which multiple viewpoints converge on a point to produce confirming evidence (i.e. one definitive answer to the research question) or an idealist one, in which the complexity of multiple viewpoints is represented. Thus, although ecological triangulation views reality as complex, the approach assumes that it can be approximately knowable (at least when the realist view of ecological triangulation is adopted) and that interventions can and should be modelled according to the products of its syntheses.

While pigeonholing different methods into specific epistemological positions is a problematic process, we do suggest that the contrasting epistemologies of different researchers are one way of explaining why we have – and need – different methods for synthesis.

Iteration

Variation in the extent of iteration during the review process is another key dimension. All synthesis methods include some iteration, but the degree varies. Meta-ethnography, grounded theory and thematic synthesis all include iteration at the synthesis stage; both framework synthesis and critical interpretive synthesis involve iterative literature searching (in the case of critical interpretive synthesis, it is not clear whether iteration occurs during the rest of the review process). Meta-narrative also involves iteration at every stage. Banning does not mention iteration in outlining ecological triangulation, and neither do Lucas et al or Thomas and Harden for textual narrative synthesis.

It seems that the more idealist the approach, the greater the extent of iteration. This might be because a large degree of iteration does not sit well with a more 'positivist' ideal of procedural objectivity; in particular, the notion that the robustness of the synthetic product depends in part on the reviewers stating up front in a protocol their searching strategies, inclusion/exclusion criteria etc, and being seen not to alter these at a later stage.

Quality assessment

Another dimension along which we can compare synthesis methods is quality assessment. Examining how each method assesses the quality of the studies retrieved for review again reveals wide methodological variation. It might be expected that the further towards the 'realism' end of the epistemological spectrum a method of synthesis falls, the greater its emphasis on quality assessment. In fact, this is only partially the case.

Framework synthesis, textual narrative synthesis and thematic synthesis – methods which might be classified as sharing a 'critical realist' approach – all have highly specified approaches to quality assessment. The review in which framework synthesis was developed applied ten quality criteria: two on the quality and reporting of sampling methods, four on the quality of the description of the sample in the study, two on the reliability and validity of the tools used to collect data, and one on whether studies used appropriate methods for helping people to express their views. Studies which did not meet a certain number of quality criteria were excluded from contributing to findings. Similarly, in the example review for thematic synthesis, 12 criteria were applied: five related to the reporting of aims, context, rationale, methods and findings; four related to reliability and validity; and three related to the appropriateness of methods for ensuring that findings were rooted in participants' own perspectives. Studies deemed to have significant flaws were excluded, and sensitivity analyses were used to assess the possible impact of study quality on the review's findings. The textual narrative synthesis reported by Lucas et al [ 26 ] similarly applied quality criteria, and developed criteria additional to those found in the literature on quality assessment, relating to the extent to which people's views and perspectives had been privileged by researchers.

It is worth noting not only that these methods apply quality criteria but that they are explicit about what those criteria are: assessing quality is a key component of the review process for each of these methods. Likewise, Banning – the originator of ecological triangulation – sees quality assessment as important and adapts the Design and Implementation Assessment Device (DIAD) Version 0.3 (a quality assessment tool for quantitative research) for use when appraising qualitative studies [ 50 ]. Again, Banning writes of excluding studies deemed to be of poor quality.
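The exclusion-and-sensitivity logic described above is simple enough to express as code. The sketch below is a hypothetical illustration (the studies, scores and thresholds are invented, and the real reviews describe their criteria qualitatively rather than as scripts): score each study against explicit criteria, exclude those below a threshold, and re-run with a stricter cut-off as a crude sensitivity check.

```python
# Illustrative sketch (studies, scores and thresholds invented): excluding
# studies that meet too few quality criteria, plus a crude sensitivity set.

studies = {
    "Study A": 11,  # number of the 12 criteria met
    "Study B": 7,
    "Study C": 4,
}

THRESHOLD = 6  # hypothetical cut-off: fewer criteria met -> excluded

included = {s for s, met in studies.items() if met >= THRESHOLD}
excluded = set(studies) - included

# A simple sensitivity analysis: would the synthesis change if the
# lowest-scoring included studies were also dropped?
strict = {s for s, met in studies.items() if met >= THRESHOLD + 3}

print("included:", sorted(included))
print("excluded:", sorted(excluded))
print("strict sensitivity set:", sorted(strict))
```

The substantive point survives the simplification: these methods make both the criteria and the exclusion rule explicit, so the effect of quality appraisal on the review's findings can itself be examined.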

Greenhalgh et al's meta-narrative review [ 32 ] modified a range of existing quality assessment tools to evaluate studies according to the validity and robustness of their methods, sample size and power, and the validity of their conclusions. The authors imply, but are not explicit, that this process formed the basis for the exclusion of some studies. Although not quite so clear about its quality assessment methods as framework synthesis and thematic synthesis, meta-narrative synthesis arguably shows a greater commitment to the concept that research can and should be assessed for quality than either meta-ethnography or grounded formal theory. The originators of meta-ethnography, Noblit and Hare [ 8 ], originally discussed quality in terms of the quality of metaphor, while more recent use of this method has employed amended versions of CASP (the Critical Appraisal Skills Programme tool [ 31 ]), yet has only referred to studies being excluded on the basis of lack of relevance or because they were not 'qualitative' studies [ 8 ]. In grounded formal theory, quality assessment is discussed only in terms of a 'personal note' being made on the context, quality and usefulness of each study. However, contrary to expectation, meta-narrative synthesis lies at the extreme end of the idealism/realism spectrum – as a subjective idealist approach – while meta-ethnography and grounded theory are classified as objective idealist approaches.

Finally, meta-study and critical interpretive synthesis – two more subjective idealist approaches – look to the content and utility of findings rather than methodology in order to establish quality. While earlier forms of meta-study included only studies which demonstrated 'epistemological soundness', in its most recent form [ 51 ] this method has sought to include all relevant studies, excluding only those deemed not to be 'qualitative' research. Critical interpretive synthesis also conforms to what we might expect of its approach to quality assessment: quality of research is judged as the extent to which it informs theory. The threshold of inclusion is informed by expertise and instinct rather than being articulated a priori.

In terms of quality assessment, it might be important to consider the academic context in which these various methods of synthesis developed. The reason why thematic synthesis, framework synthesis and ecological triangulation have such highly specified approaches to quality assessment may be that each of these was developed for a particular task, i.e. to conduct a multi-method review in which randomised controlled trials (RCTs) were included. The concept of quality assessment in relation to RCTs is much less contested and there is general agreement on criteria against which quality should be judged.

Problematizing the literature

Critical interpretive synthesis, the meta-narrative approach and the meta-theory element of meta-study all share some common ground in that their review and synthesis processes include examining all aspects of the context in which knowledge is produced. In conducting a review on access to healthcare by vulnerable groups, critical interpretive synthesis sought to question 'the ways in which the literature had constructed the problematics of access, the nature of the assumptions on which it drew, and what has influenced its choice of proposed solutions' [[ 34 ], p6]. Although not claiming to have been directly influenced by Greenhalgh et al's meta-narrative approach, Dixon-Woods et al do cite it as sharing similar characteristics in the sense that it critiques the literature it reviews.

Meta-study uses meta-theory to describe and deconstruct the theories that shape a body of research and to assess its quality. One aspect of this process is to examine the historical evolution of each theory and to put it in its socio-political context, which invites direct comparison with meta-narrative synthesis. Greenhalgh et al. put a similar emphasis on placing research findings within their social and historical context, often as a means of seeking to explain heterogeneity of findings. In addition, meta-narrative shares with critical interpretive synthesis an iterative approach to searching and selecting from the literature.

Framework synthesis, thematic synthesis, textual narrative synthesis, meta-ethnography and grounded theory do not share the same approach to problematizing the literature as critical interpretive synthesis, meta-study and meta-narrative. In part, this may be explained by the extent to which studies included in the synthesis represented a broad range of approaches or methodologies. This, in turn, may reflect the broadness of the review question and the extent to which the concepts contained within the question are pre-defined within the literature. In the case of both the critical interpretive synthesis and meta-narrative reviews, terminology was elastic and/or the question was formed iteratively. Similarly, both reviews placed great emphasis on employing multi-disciplinary research teams. Approaches which do not critique the literature in this way tend to have more narrowly-focused questions. They also tend to include a more limited range of studies: grounded theory synthesis includes only grounded theory studies, and meta-ethnography (in its original form, as applied by Noblit and Hare) only ethnographies. The thematic synthesis incorporated studies based on only a narrow range of qualitative methodologies (interviews and focus groups), which were informed by a similarly narrow range of epistemological assumptions. It may be that the authors of such syntheses saw no need to include such a critique in their review process.

Similarities and differences between primary studies

Most methods of synthesis are applicable to heterogeneous data (i.e. studies which use contrasting methodologies), apart from early meta-ethnography and synthesis informed by grounded theory. All methods of synthesis state that studies are compared at some level, although many are not explicit about how this is done. Meta-ethnography is one of the most explicit: it describes the act of 'translation', in which terms and concepts that have resonance with one another are subsumed into 'higher order constructs'. Grounded theory, as represented by Eaves [ 17 ], is undertaken according to a long list of steps and sub-steps, and includes the production of generalizations about concepts/categories, which comes from classifying these categories. In meta-narrative synthesis, comparable studies are grouped together at the appraisal phase of the review.

Perhaps more interesting are the ways in which differences between studies are explored. Methods with a greater emphasis on critical appraisal may tend (although this is not always made explicit) to use differences in method to explain differences in findings. Meta-ethnography proposes 'refutational synthesis' to explain differences, although there are few examples of this in the literature. Other synthesis methods – thematic synthesis, for example – look at further characteristics of the studies under review, such as whether the types of participants and their contexts vary, and whether this can explain differences in perspective.

All of these methods, then, look within the studies to explain differences. Other methods look beyond the study itself to the context in which it was produced. Critical interpretive synthesis and meta-study look at differences in theory or in socio-economic context. Critical interpretive synthesis, like meta-narrative, also explores epistemological orientation. Meta-narrative is unique in concerning itself with disciplinary paradigm (i.e. the story of the discipline as it progresses). It is also distinctive in that it treats conflicting findings as 'higher order data' [[ 32 ], p420], so that the main emphasis of the synthesis appears to be on examining and explaining contradictions in the literature.

Going 'beyond' the primary studies

Synthesis is sometimes defined as a process resulting in a product, a 'whole', which is more than the sum of its parts. However, the methods reviewed here vary in the extent to which they attempt to 'go beyond' the primary studies and transform the data. Some methods – textual narrative synthesis, ecological triangulation and framework synthesis – focus on describing and summarising their primary data (often in a highly structured and detailed way) and translating the studies into one another. Others – meta-ethnography, grounded theory, thematic synthesis, meta-study, meta-narrative and critical interpretive synthesis – seek to push beyond the original data to a fresh interpretation of the phenomena under review. A key feature of thematic synthesis is its clear differentiation between these two stages.

Different methods have different mechanisms for going beyond the primary studies, although some are more explicit than others about what these entail. Meta-ethnography proposes a 'Line of Argument' (LOA) synthesis in which an interpretation is constructed to both link and explain a set of parts. Critical interpretive synthesis based its synthesis methods on those of meta-ethnography, developing an LOA using what the authors term 'synthetic constructs' (akin to 'third order constructs' in meta-ethnography) to create a 'synthesising argument'. Dixon-Woods et al. claim that this is an advance on Britten et al.'s methods, in that they reject the distinction between first, second and third order constructs.

Meta-narrative, as outlined above, focuses on conflicting findings and constructs theories to explain these in terms of differing paradigms. Meta-study derives questions from each of its three components, to which it subjects the dataset, and inductively generates a number of theoretical claims in relation to it. According to Eaves' model of grounded theory [ 17 ], mini-theories are integrated to produce an explanatory framework. In ecological triangulation, the 'axial' codes – second-level codes evolved from the initial deductive open codes – are used to produce Banning's 'ecological sentence' [ 39 ].

The synthetic product

In overviewing and comparing different qualitative synthesis methods, the ultimate question relates to the utility of the synthetic product: what is it for? It is clear that some methods of synthesis – namely, thematic synthesis, textual narrative synthesis, framework synthesis and ecological triangulation – view themselves as producing an output that is directly applicable to policy makers and designers of interventions. The example of framework synthesis examined here (on children's, young people's and parents' views of walking and cycling) involved policy makers and practitioners in directing the focus of the synthesis and used the themes derived from the synthesis to infer what kind of interventions might be most effective in encouraging walking and cycling. Likewise, the products of the thematic synthesis took the form of practical recommendations for interventions (e.g. 'do not promote fruit and vegetables in the same way in the same intervention'). The extent to which policy makers and practitioners are involved in informing either synthesis or recommendation is less clear from the documents published on ecological triangulation, but the aim certainly is to directly inform practice.

The outputs of synthesis methods which have a more constructivist orientation – meta-study, meta-narrative, meta-ethnography, grounded theory, critical interpretive synthesis – tend to look rather different. They are generally more complex and conceptual, sometimes operating on the symbolic or metaphorical level, and requiring a further process of interpretation by policy makers and practitioners in order for them to inform practice. This is not to say, however, that they are not useful for practice, more that they are doing different work. However, it may be that, in the absence of further interpretation, they are more useful for informing other researchers and theoreticians.

Looking across dimensions

After examining the dimensions of difference of our included methods, what picture ultimately emerges? It seems clear that, while similar in some respects, there are genuine differences in approach to the synthesis of what is essentially textual data. To some extent, these differences can be explained by the epistemological assumptions that underpin each method. Our methods split into two broad camps: the idealist and the realist (see Table 1 for a summary). Idealist approaches tend to take a more iterative approach to searching (and to the review process), have fewer a priori quality assessment procedures and are more inclined to problematize the literature. Realist approaches are characterised by a more linear approach to searching and review, have clearer and better-developed approaches to quality assessment, and do not problematize the literature.

Mapping the relationships between methods

What is interesting is the relationship between these methods of synthesis, the conceptual links between them, and the extent to which their originators cite – or, in some cases, do not cite – one another. Some methods directly build on others – framework synthesis builds on framework analysis, for example, while grounded theory synthesis and constant comparative analysis build on the methods of primary grounded theory research. Others further develop existing methods – meta-study, critical interpretive synthesis and meta-narrative all adapt aspects of meta-ethnography, while also importing concepts from other theorists (critical interpretive synthesis also adapts grounded theory techniques).

Some methods share a clear conceptual link, without directly citing one another: for example, the analytical themes developed during thematic synthesis are comparable to the third order interpretations of meta-ethnography. The meta-theory aspect of meta-study is echoed in both meta-narrative synthesis and critical interpretive synthesis (see 'Problematizing the literature', above); however, the originators of critical interpretive synthesis only refer to the originators of meta-study in relation to their use of sampling techniques.

While methods for qualitative synthesis have many similarities, there are clear differences in approach between them, many of which can be explained by taking account of a given method's epistemology.

However, within the two broad idealist/realist categories, any differences between methods in terms of outputs appear to be small.

Since many systematic reviews are designed to inform policy and practice, it is important to select a method – or type of method – that will produce the kind of conclusions needed. However, it is acknowledged that this is not always simple or even possible to achieve in practice.

The approaches that result in more easily translatable messages for policy-makers and practitioners may appear more attractive than the others; but we need to take account of lessons from the more idealist end of the spectrum: that some perspectives are not universal.

Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A: Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Pol. 2005, 10 (1): 45-53. 10.1258/1355819052801804.


Barbour RS, Barbour M: Evaluating and synthesizing qualitative research: the need to develop a distinctive approach. J Eval Clin Pract. 2003, 9 (2): 179-186. 10.1046/j.1365-2753.2003.00371.x.


Mays N, Pope C, Popay J: Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. J Health Serv Res Pol. 2005, 10 (Suppl 1): 6-20. 10.1258/1355819054308576.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Shaw RL, Smith J, Sutton A, Young B: How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res. 2006, 6: 27-44. 10.1177/1468794106058867.

Pope C, Mays N, Popay J: Synthesizing Qualitative and Quantitative Health Evidence: a Guide to Methods. 2007, Maidenhead: Open University Press


Thorne S, Jenson L, Kearney MH, Noblit G, Sandelowski M: Qualitative metasynthesis: reflections on methodological orientation and ideological agenda. Qual Health Res. 2004, 14: 1342-1365. 10.1177/1049732304269888.

Centre for Reviews and Dissemination: Systematic Reviews. CRD's Guidance for Undertaking Reviews in Health Care. 2008, York: CRD

Noblit GW, Hare RD: Meta-Ethnography: Synthesizing Qualitative Studies. 1988, London: Sage


Strike K, Posner G: Types of synthesis and their criteria. Knowledge Structure and Use. Edited by: Ward S, Reed L. 1983, Philadelphia: Temple University Press

Turner S: Sociological Explanation as Translation. 1980, New York: Cambridge University Press

Britten N, Campbell R, Pope C, Donovan J, Morgan M, Pill R: Using meta-ethnography to synthesise qualitative research: a worked example. J Health Serv Res Pol. 2002, 7: 209-15. 10.1258/135581902320432732.

Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J: Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003, 65: 671-84. 10.1016/S0277-9536(02)00064-3.

Pound P, Britten N, Morgan M, Yardley L, Pope C, Daker-White G, Campbell R: Resisting medicines: a synthesis of qualitative studies of medicine taking. Soc Sci Med. 2005, 61: 133-155. 10.1016/j.socscimed.2004.11.063.

Schutz A: Collected Papers, Vol. 1. 1962, The Hague: Martinus Nijhoff

Sandelowski M, Barroso J: Handbook for Synthesizing Qualitative Research. 2007, New York: Springer Publishing Company

Kearney MH: Enduring love: a grounded formal theory of women's experience of domestic violence. Res Nurs Health. 2001, 24: 270-82. 10.1002/nur.1029.


Eaves YD: A synthesis technique for grounded theory data analysis. J Adv Nurs. 2001, 35: 654-63. 10.1046/j.1365-2648.2001.01897.x.


Finfgeld D: Courage as a process of pushing beyond the struggle. Qual Health Res. 1999, 9: 803-814. 10.1177/104973299129122298.

Glaser BG, Strauss AL: The Discovery of Grounded Theory: Strategies for Qualitative Research. 1967, New York: Aldine De Gruyter

Strauss AL, Corbin J: Basics of Qualitative Research: Grounded Theory Procedures and Techniques. 1990, Newbury Park, CA: Sage

Strauss AL, Corbin J: Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 1998, Thousand Oaks, CA: Sage

Charmaz K: The grounded theory method: an explication and interpretation. Contemporary Field Research: A Collection of Readings. Edited by: Emerson RM. 1983, Waveland Press: Prospect Heights, IL, 109-126.

Chesler MA: Professionals' Views of the Dangers of Self-Help Groups: Explicating a Grounded Theoretical Approach. 1987, [Michigan]: Department of Sociology, University of Michigan, Ann Arbour Centre for Research on Social Organisation, Working Paper Series

Kearney MH: Ready-to-wear: discovering grounded formal theory. Res Nurs Health. 1998, 21: 179-186. 10.1002/(SICI)1098-240X(199804)21:2<179::AID-NUR8>3.0.CO;2-G.

Thomas J, Harden A: Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Meth. 2008, 8: 45-10.1186/1471-2288-8-45.

Lucas PJ, Arai L, Baird, Law C, Roberts HM: Worked examples of alternative methods for the synthesis of qualitative and quantitative research in systematic reviews. BMC Med Res Meth. 2007, 7 (4):

Harden A, Garcia J, Oliver S, Rees R, Shepherd J, Brunton G, Oakley A: Applying systematic review methods to studies of people's views: an example from public health research. J Epidemiol Community H. 2004, 58: 794-800. 10.1136/jech.2003.014829.

Paterson BL, Thorne SE, Canam C, Jillings C: Meta-Study of Qualitative Health Research. A Practical Guide to Meta-Analysis and Meta-Synthesis. 2001, Thousand Oaks, CA: Sage Publications

Zhao S: Metatheory, metamethod, meta-data-analysis: what, why and how?. Sociol Perspect. 1991, 34: 377-390.

Ritzer G: Metatheorizing in Sociology. 1991, Lexington, MA: Lexington Books

CASP (Critical Appraisal Skills Programme). date unknown, [ http://www.phru.nhs.uk/Pages/PHD/CASP.htm ]

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, Peacock R: Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Soc Sci Med. 2005, 61: 417-30. 10.1016/j.socscimed.2004.12.001.

Kuhn TS: The Structure of Scientific Revolutions. 1962, Chicago: University of Chicago Press

Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Hsu R, Katbamna S, Olsen R, Smith L, Riley R, Sutton AJ: Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Meth. 2006, 6 (35):

Gough D: Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Applied and Practice-based Research. Edited by: Furlong J, Oancea A. 2007, Special Edition of Research Papers in Education, 22 (2): 213-228.

Webb EJ, Campbell DT, Schwartz RD, Sechrest L: Unobtrusive Measures. 1966, Chicago: Rand McNally

Denzin NK: The Research Act: a Theoretical Introduction to Sociological Methods. 1978, New York: McGraw-Hill

Banning J: Ecological Triangulation. [ http://mycahs.colostate.edu/James.H.Banning/PDFs/Ecological%20Triangualtion.pdf ]

Banning J: Ecological Sentence Synthesis. [ http://mycahs.colostate.edu/James.H.Banning/PDFs/Ecological%20Sentence%20Synthesis.pdf ]

Brunton G, Oliver S, Oliver K, Lorenc T: A Synthesis of Research Addressing Children's, Young People's and Parents' Views of Walking and Cycling for Transport. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London

Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley A, Gabbay J, Stein K, Buchanan P, Gyte G: A multidimensional conceptual framework for analysing public involvement in health services research. Health Expect. 2008, 11: 72-84. 10.1111/j.1369-7625.2007.00476.x.

Pope C, Ziebland S, Mays N: Qualitative research in health care: analysing qualitative data. BMJ. 2000, 320: 114-116. 10.1136/bmj.320.7227.114.


Ritchie J, Spencer L: Qualitative data analysis for applied policy research. Analysing Qualitative Data. Edited by: Bryman A, Burgess R. 1993, London: Routledge, 173-194.

Miles M, Huberman A: Qualitative Data Analysis. 1984, London: Sage

Evans D, Fitzgerald M: Reasons for physically restraining patients and residents: a systematic review and content analysis. Int J Nurs Stud. 2002, 39: 739-743. 10.1016/S0020-7489(02)00015-9.

Suikkala A, Leino-Kilpi H: Nursing student-patient relationships: a review of the literature from 1984–1998. J Adv Nurs. 2001, 33: 42-50. 10.1046/j.1365-2648.2001.01636.x.

Weed M: 'Meta-interpretation': a method for the interpretive synthesis of qualitative research. Forum: Qual Soc Res. 2005, 6: Art 37-

Gough D, Thomas J: Dimensions of difference in systematic reviews. [ http://www.ncrm.ac.uk/RMF2008/festival/programme/sys1 ]

Spencer L, Ritchie J, Lewis J, Dillon L: Quality in Qualitative Evaluation: a Framework for Assessing Research Evidence. 2003, London: Government Chief Social Researcher's Office

Banning J: Design and Implementation Assessment Device (DIAD) Version 0.3: A response from a qualitative perspective. [ http://mycahs.colostate.edu/James.H.Banning/PDFs/Design%20and%20Implementation%20Assessment%20Device.pdf ]

Paterson BL: Coming out as ill: understanding self-disclosure in chronic illness from a meta-synthesis of qualitative research. Reviewing Research Evidence for Nursing Practice. Edited by: Webb C, Roe B. 2007, [Oxford]: Blackwell Publishing Ltd, 73-83.


Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/9/59/prepub


Acknowledgements

The authors would like to acknowledge the helpful contributions of the following in commenting on earlier drafts of this paper: David Gough, Sandy Oliver, Angela Harden, Mary Dixon-Woods, Trisha Greenhalgh and Barbara L. Paterson. We would also like to thank the peer reviewers: Helen J Smith, Rosaline Barbour and Mark Rodgers for their helpful reviews. The methodological development was supported by the Department of Health (England) and the ESRC through the Methods for Research Synthesis Node of the National Centre for Research Methods (NCRM). An earlier draft of this paper currently appears as a working paper on the National Centre for Research Methods' website http://www.ncrm.ac.uk/ .

Author information

Authors and affiliations

Social Science Research Unit, Evidence for Policy and Practice Information and Co-ordinating (EPPI-) Centre, 18 Woburn Square, London, WC1H 0NS, UK

Elaine Barnett-Page & James Thomas


Corresponding author

Correspondence to Elaine Barnett-Page .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Both authors made substantial contributions, with EBP taking a lead on writing and JT on the analytical framework. Both authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Dimensions of difference. Ranging from subjective idealism through objective idealism and critical realism to scientific realism to naïve realism (DOC 46 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article.

Barnett-Page, E., Thomas, J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol 9 , 59 (2009). https://doi.org/10.1186/1471-2288-9-59


Received : 09 March 2009

Accepted : 11 August 2009

Published : 11 August 2009

DOI : https://doi.org/10.1186/1471-2288-9-59



BMC Medical Research Methodology

ISSN: 1471-2288


  • Systematic review
  • Open access
  • Published: 19 February 2024

‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice

  • Annette Boaz   ORCID: orcid.org/0000-0003-0557-1294 1 ,
  • Juan Baeza 2 ,
  • Alec Fraser   ORCID: orcid.org/0000-0003-1121-1551 2 &
  • Erik Persson 3  

Implementation Science volume  19 , Article number:  15 ( 2024 ) Cite this article


Abstract

Background

The gap between research findings and clinical practice is well documented and a range of strategies have been developed to support the implementation of research into clinical practice. The objective of this study was to update and extend two previous reviews of systematic reviews of strategies designed to implement research evidence into clinical practice.

Methods

We developed a comprehensive systematic literature search strategy based on the terms used in the previous reviews to identify studies that looked explicitly at interventions designed to turn research evidence into practice. The search was performed in June 2022 in four electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched from January 2010 up to June 2022 and applied no language restrictions. Two independent reviewers appraised the quality of included studies using a quality assessment checklist. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Data were synthesised using descriptive and narrative techniques to identify themes and patterns linked to intervention strategies, targeted behaviours, study settings and study outcomes.

Results

We identified 32 reviews conducted between 2010 and 2022. The reviews are mainly of multi-faceted interventions ( n  = 20) although there are reviews focusing on single strategies (ICT, educational, reminders, local opinion leaders, audit and feedback, social media and toolkits). The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Furthermore, a lot of nuance lies behind these headline findings, and this is increasingly commented upon in the reviews themselves.

Conclusions

Combined with the two previous reviews, 86 systematic reviews of strategies to increase the implementation of research into clinical practice have been identified. We need to shift the emphasis away from isolating individual and multi-faceted interventions to better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of research perspectives (including social science) in primary studies and diversifying the types of synthesis undertaken to include approaches such as realist synthesis which facilitate exploration of the context in which strategies are employed.


Contribution to the literature

Considerable time and money are invested in implementing and evaluating strategies to increase the implementation of research into clinical practice.

The growing body of evidence is not providing the anticipated clear lessons to support improved implementation.

What is needed instead is a better understanding of, and investment in building, more situated, relational and organisational capability to support the use of research in clinical practice.

This would involve a more central role in implementation science for a wider range of perspectives, especially from the social, economic, political and behavioural sciences and for greater use of different types of synthesis, such as realist synthesis.

Introduction

The gap between research findings and clinical practice is well documented and a range of interventions has been developed to increase the implementation of research into clinical practice [ 1 , 2 ]. In recent years researchers have worked to improve the consistency in the ways in which these interventions (often called strategies) are described to support their evaluation. One notable development has been the emergence of Implementation Science as a field focusing explicitly on “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice” ([ 3 ] p. 1). The work of implementation science focuses on closing, or at least narrowing, the gap between research and practice. One contribution has been to map existing interventions, identifying 73 discrete strategies to support research implementation [ 4 ] which have been grouped into 9 clusters [ 5 ]. The authors note that they have not considered the evidence of effectiveness of the individual strategies and that a next step is to understand better which strategies perform best in which combinations and for what purposes [ 4 ]. Other authors have noted that there is also scope to learn more from other related fields of study such as policy implementation [ 6 ] and to draw on methods designed to support the evaluation of complex interventions [ 7 ].

The increase in activity designed to support the implementation of research into practice, and improvements in reporting, provided the impetus for an update of a review of systematic reviews of the effectiveness of interventions designed to support the use of research in clinical practice [ 8 ], which was itself an update of the review conducted by Grimshaw and colleagues in 2001. The 2001 review [ 9 ] identified 41 reviews considering a range of strategies including educational interventions, audit and feedback, computerised decision support, financial incentives and combined interventions. The authors concluded that all the interventions had the potential to promote the uptake of evidence in practice, although no one intervention seemed to be more effective than the others in all settings. They concluded that combined interventions were more likely to be effective than single interventions. The 2011 review identified a further 13 systematic reviews containing 313 discrete primary studies. Consistent with the previous review, four main strategy types were identified: audit and feedback; computerised decision support; opinion leaders; and multi-faceted interventions (MFIs). Nine of the reviews reported on MFIs. The review highlighted the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. MFIs claimed an improvement in effectiveness over single interventions, although effect sizes remained small to moderate, and this improvement in effectiveness relating to MFIs has been questioned in a subsequent review [ 10 ]. In updating the review, we anticipated a larger pool of reviews and an opportunity to consolidate learning from more recent systematic reviews of interventions.

Methods

This review updates and extends our previous review of systematic reviews of interventions designed to implement research evidence into clinical practice. To identify potentially relevant peer-reviewed research papers, we developed a comprehensive systematic literature search strategy based on the terms used in the Grimshaw et al. [ 9 ] and Boaz, Baeza and Fraser [ 8 ] overview articles. To ensure optimal retrieval, our search strategy was refined with support from an expert university librarian, considering the ongoing improvements in the development of search filters for systematic reviews since our first review [ 11 ]. We also wanted to include technology-related terms (e.g. apps, algorithms, machine learning, artificial intelligence) to find studies that explored interventions based on the use of technological innovations as mechanistic tools for increasing the use of evidence into practice (see Additional file 1 : Appendix A for full search strategy).

The search was performed in June 2022 in the following electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched for articles published since the 2011 review. We searched from January 2010 up to June 2022 and applied no language restrictions. Reference lists of relevant papers were also examined.

We uploaded the results using EPPI-Reviewer, a web-based tool that facilitated semi-automation of the screening process and removal of duplicate studies. We made particular use of a priority screening function to reduce screening workload and avoid ‘data deluge’ [ 12 ]. Through machine learning, one reviewer screened a smaller number of records ( n  = 1200) to train the software to predict whether a given record was more likely to be relevant or irrelevant, thus pulling the relevant studies towards the beginning of the screening process. This automation did not replace manual work but helped the reviewer to identify eligible studies more quickly. During the selection process, we included studies that looked explicitly at interventions designed to turn research evidence into practice. Studies were included if they met the following pre-determined inclusion criteria:

The study was a systematic review

The search terms were reported

The review focused on the implementation of research evidence into practice

The methodological quality of the included studies was assessed as part of the review

Study populations included healthcare providers and patients

The EPOC taxonomy [13] was used to categorise the strategies. The EPOC taxonomy has four domains: delivery arrangements, financial arrangements, governance arrangements and implementation strategies. The implementation strategies domain includes 20 strategies targeted at healthcare workers. Numerous EPOC strategies were assessed in the review, including educational strategies, local opinion leaders, reminders, ICT-focused approaches and audit and feedback. Some strategies that did not fit easily within the EPOC categories were also included: social media strategies, toolkits and multi-faceted interventions (MFIs) (see Table 2). Some systematic reviews included comparisons of different interventions, while other reviews compared one type of intervention against a control group. Outcomes related to improvements in healthcare processes or patient well-being. Numerous individual study types (RCT, CCT, BA, ITS) were included within the systematic reviews.

We excluded papers that:

Focused on changing patient rather than provider behaviour

Had no demonstrable outcomes

Made unclear or no reference to research evidence

The last of these criteria was sometimes difficult to judge, and there was considerable discussion amongst the research team as to whether the link between research evidence and practice was sufficiently explicit in the interventions analysed. As we discussed in the previous review [8], in the field of healthcare the principle of evidence-based practice is widely acknowledged, and tools to change behaviour, such as guidelines, are often seen as an implicit codification of evidence, even though this is not always the case.

Reviewers employed a two-stage process to select papers for inclusion. First, all titles and abstracts were screened by one reviewer to determine whether the study met the inclusion criteria, with each paper rated as include, exclude or maybe. Two papers [14, 15] published just before the 2010 cut-off were identified; as they had not been picked up in the searches for the first review [8], they were included and progressed to assessment. The full texts of 111 relevant papers were then assessed independently by at least two authors. To reduce the risk of bias, papers were excluded only following discussion between all members of the team. Thirty-two papers met the inclusion criteria and proceeded to data extraction. The study selection procedure is documented in a PRISMA literature flow diagram (see Fig. 1). Reflecting the language skills in the study team, we were able to consider French, Spanish and Portuguese papers, but none of those identified met the inclusion criteria. Other non-English-language papers were excluded.
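The staged selection logic can be sketched as a pair of decision functions over record metadata. The field names, the sample record and the mechanical rules below are hypothetical simplifications: in the actual review these judgments were made by human reviewers reading titles, abstracts and full texts, with disagreements resolved by whole-team discussion.

```python
# Pre-determined inclusion criteria from the Methods, as boolean fields;
# None means the point cannot be judged from the title/abstract alone.
CRITERIA = ("is_systematic_review", "search_terms_reported",
            "focus_on_implementation", "quality_assessed")

def title_abstract_screen(record):
    """Stage 1: one reviewer rates each record include / maybe / exclude."""
    flags = [record.get(c) for c in CRITERIA]
    if all(flags):
        return "include"
    if any(f is None for f in flags):   # undecidable at abstract level
        return "maybe"
    return "exclude"

def full_text_screen(ratings_by_reviewer):
    """Stage 2: at least two reviewers assess the full text; any
    disagreement goes to discussion among the whole team."""
    if len(set(ratings_by_reviewer)) == 1:
        return ratings_by_reviewer[0]
    return "discuss"

# A hypothetical record whose quality-assessment criterion is unclear
paper = {"is_systematic_review": True, "search_terms_reported": True,
         "focus_on_implementation": True, "quality_assessed": None}
print(title_abstract_screen(paper))            # "maybe"
print(full_text_screen(["include", "exclude"]))  # "discuss"
```

The key design point mirrored here is that exclusion is never a single reviewer's unilateral call at full-text stage: split ratings route the paper to team discussion.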

Fig. 1 PRISMA flow diagram. Source: authors

One reviewer extracted data from the included studies on strategy type, number of included studies, locale, target population, effectiveness and scope of impact. Two reviewers then independently read each paper and noted key findings and broad themes of interest, which were then discussed amongst the wider authorial team. Two independent reviewers appraised the quality of the included studies using a Quality Assessment Checklist based on Oxman and Guyatt [16] and Francke et al. [17]. Each study was assigned a quality score ranging from 1 (extensive flaws) to 7 (minimal flaws) (see Additional file 2: Appendix B). All disagreements were resolved through discussion. Studies were not excluded from this updated overview on the basis of methodological quality, as we aimed to reflect the full extent of current research on this topic.

The extracted data were synthesised using descriptive and narrative techniques to identify themes and patterns in the data linked to intervention strategies, targeted behaviours, study settings and study outcomes.

Thirty-two studies were included in the systematic review. Table 1 provides a detailed overview of the included systematic reviews, comprising reference, strategy type, quality score, number of included studies, locale, target population, effectiveness and scope of impact (see Table 1 at the end of the manuscript). Overall, the quality of the studies was high: twenty-three studies scored 7, six scored 6, and one each scored 5, 4 and 3. The primary focus was on reviews of effectiveness studies, but a small number of reviews included data from a wider range of methods, including qualitative studies, which added to the analysis in the papers [18, 19, 20, 21]. The majority of reviews report strategies achieving small impacts (normally on processes of care); there is much less evidence that these strategies have shifted patient outcomes. In this section, we discuss each of the EPOC-defined implementation strategies in turn. Interestingly, we found only two 'new' approaches in this review that did not fit into the existing EPOC categories: a review focused on the use of social media and a review considering toolkits. In addition to single interventions, we also discuss multi-faceted interventions, which were the most common intervention approach overall. A summary is provided in Table 2.
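As a quick consistency check, the quality-score distribution reported above can be tallied; the figures below are taken directly from the text (no new data).

```python
from collections import Counter

# Quality scores (1 = extensive flaws, 7 = minimal flaws) as reported:
# 23 studies scored 7, six scored 6, and one each scored 5, 4 and 3.
scores = Counter({7: 23, 6: 6, 5: 1, 4: 1, 3: 1})

total = sum(scores.values())            # should match the 32 included reviews
high_quality = scores[7] + scores[6]    # reviews with at most minor flaws
print(total, high_quality)  # 32 29
```

The tally confirms the counts sum to the 32 included reviews, with 29 of 32 scoring 6 or 7, supporting the statement that the overall quality was high.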

Educational strategies

The overview identified three systematic reviews focusing on educational strategies. Grudniewicz et al. [22] explored the effectiveness of printed educational materials on primary care physician knowledge, behaviour and patient outcomes and concluded that they were not effective in any of these respects. Koota, Kääriäinen and Melender [23] focused on educational interventions promoting evidence-based practice among emergency room/accident and emergency nurses and found that interventions involving face-to-face contact led to significant or highly significant effects on patient benefits and on emergency nurses' knowledge, skills and behaviour. Interventions using written self-directed learning materials also led to significant improvements in nurses' knowledge of evidence-based practice. Although the quality of the studies was high, the review primarily included small studies with low response rates, and many relied on self-assessed outcomes; consequently, the strength of the evidence for these outcomes is modest. Wu et al. [20] asked whether educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes. Although based on evaluation projects and qualitative data, their results also suggest that positive changes in patient outcomes can follow the implementation of specific evidence-based approaches (or projects). The differing positive outcomes for educational strategies aimed at nurses might indicate that the target audience is important.

Local opinion leaders

Flodgren et al. [24] was the only systematic review focusing solely on opinion leaders. It found that local opinion leaders, alone or in combination with other interventions, can be effective in promoting evidence-based practice, but this varies both within and between studies, and the effect on patient outcomes is uncertain. Overall, any intervention involving opinion leaders probably improves healthcare professionals' compliance with evidence-based practice, though the effect varies within and across studies. However, how opinion leaders had an impact could not be determined because insufficient details were provided, illustrating that reporting specific details in published studies is important if effective methods of increasing evidence-based practice are to be diffused across a system. The usefulness of this review is limited because it cannot provide evidence of what makes an effective opinion leader, whether teams of opinion leaders or a single opinion leader are most effective, or which methods used by opinion leaders are most effective.

Reminders

Pantoja et al. [26] was the only systematic review in the overview focusing solely on manually generated reminders delivered on paper. The review explored how these affected professional practice and patient outcomes. It concluded that manually generated reminders delivered on paper as a single intervention probably lead to small to moderate increases in adherence to clinical recommendations and could be used as a single quality-improvement intervention. However, the authors indicated that this intervention would make little or no difference to patient outcomes. They note that such a low-tech intervention may be useful in low- and middle-income countries, where paper records are more likely to be the norm.

ICT-focused approaches

The three ICT-focused reviews [14, 27, 28] showed mixed results. Jamal, McKenzie and Clark [14] explored the impact of health information technology on the quality of medical and health care, examining the impact of electronic health records, computerised provider order entry and decision support systems. These showed a positive improvement in adherence to evidence-based guidelines but not in patient outcomes. The number of studies included in the review was low, so a conclusive recommendation could not be reached on this basis. Similarly, Brown et al. [28] found that technology-enabled knowledge translation interventions may improve the knowledge of health professionals, but all eight included studies raised concerns about bias. The De Angelis et al. [27] review was more promising, reporting that ICT can be a good way of disseminating clinical practice guidelines, but concluded that it is unclear which type of ICT method is the most effective.

Audit and feedback

Sykes, McAnuff and Kolehmainen [29] examined whether audit and feedback were effective in dementia care and concluded that it remains unclear which ingredients of audit and feedback are successful, as the reviewed papers showed large variation in the effectiveness of interventions using audit and feedback.

Non-EPOC listed strategies: social media, toolkits

There were two new (non-EPOC listed) intervention types identified in this review compared to the 2011 review, fewer than anticipated. We categorised a third, 'care bundles' [36], as a multi-faceted intervention on the basis of its description in practice, and a fourth, 'Technology Enhanced Knowledge Transfer' [28], as an ICT-focused approach. The first new strategy was identified in Bhatt et al.'s [30] systematic review of the use of social media for the dissemination of clinical practice guidelines. They reported that the use of social media resulted in a significant improvement in knowledge of, and compliance with, evidence-based guidelines compared with more traditional methods. They noted that a wide selection of healthcare professionals and patients engaged with this type of social media, and that its global reach may be significant for low- and middle-income countries. This review was also noteworthy for developing a simple stepwise method for using social media to disseminate clinical practice guidelines. However, it is debatable whether social media is an intervention in itself or just a different way of delivering one; for example, the review discussed involving opinion leaders and patient advocates through social media. It was also a small review, including only five studies, so further research in this new area is needed. Yamada et al. [31] drew on 39 studies to explore the application of toolkits, 18 of which had toolkits embedded within larger KT interventions and 21 of which evaluated toolkits as standalone interventions. The individual component strategies of the toolkits were highly variable, though the authors suggest that they align most closely with educational strategies. The authors conclude that toolkits, either as standalone strategies or as part of MFIs, hold some promise for facilitating evidence use in practice, but caution that the quality of many of the included primary studies was considered weak, limiting these findings.

Multi-faceted interventions

The majority of the systematic reviews (n = 20) reported on more than one intervention type. Some of these reviews focus exclusively on multi-faceted interventions, whilst others compare different single or combined interventions aimed at achieving similar outcomes in particular settings. While these two approaches are often described in a similar way, they are quite distinct: the former report how multiple strategies may be strategically combined in pursuit of an agreed goal, whilst the latter report how different strategies may be incidentally used, in sometimes contrasting settings, in pursuit of similar goals. Ariyo et al. [35] helpfully summarise five key elements often found in effective MFI strategies in LMICs, which may also be transferable to HICs. First, effective MFIs encourage a multi-disciplinary approach, acknowledging the roles played by different professional groups in collectively incorporating evidence-informed practice. Second, they utilise leadership, drawing on a wide set of clinical and non-clinical actors, including managers and even government officials. Third, multiple types of educational practice are utilised, in some cases including input from patients as stakeholders. Fourth, protocols, checklists and bundles are used, most effectively when local ownership is encouraged. Finally, most MFIs include an emphasis on monitoring and evaluation [35]. In contrast, other studies offer little information about the nature of the different MFI components of included studies, which makes it difficult to extrapolate much learning from them about why or how MFIs might affect practice (e.g. [28, 38]). Ultimately, context matters, which some review authors argue makes it difficult to say with real certainty whether single or MFI strategies are superior (e.g. [21, 27]). Taking all the systematic reviews together, we may conclude that MFIs appear more likely to generate positive results than single interventions (e.g. [34, 45]), though other reviews should make us cautious (e.g. [32, 43]).

While multi-faceted interventions still seem to be more effective than single-strategy interventions, there were important distinctions in how the results of reviews of MFIs are interpreted in this review compared to the previous reviews [8, 9], reflecting greater nuance and debate in the literature. This was particularly noticeable where the effectiveness of MFIs was compared to single strategies, reflecting developments widely discussed in previous studies [10]. We found that most systematic reviews are bounded by their clinical, professional, spatial, system or setting criteria and often seek to draw out implications for the implementation of evidence in their areas of specific interest (such as nursing or acute care). Frequently this means combining all relevant studies to explore the respective foci of each systematic review. Most reviews we categorised as MFIs therefore include highly variable numbers and combinations of intervention strategies and highly heterogeneous original study designs, which makes statistical analyses of the type used by Squires et al. [10] on the three reviews in their paper impossible. It also makes extrapolating findings and commenting on broad themes complex and difficult. This may suggest that future research should shift its focus from merely examining 'what works' to 'what works where and what works for whom', perhaps pointing to the value of realist approaches to these complex review topics [48, 49] and other more theory-informed approaches [50].

Some reviews include a relatively small number of studies (fewer than 10), and their authors are often understandably reluctant to engage with wider debates about the implications of their findings. Other, larger studies do engage in deeper internal comparisons of findings across included studies and also contextualise these in wider debates. Some of the most informative studies (e.g. [35, 40]) move beyond EPOC categories and contextualise MFIs within wider systems thinking and implementation theory. The distinction between MFIs and single interventions can be very useful, as it offers lessons about the contexts in which individual interventions might have bounded effectiveness (e.g. educational interventions for individual change). Taken as a whole, this may also help in understanding how and when to conjoin single interventions into effective MFIs.

In the two previous reviews, a consistent finding was that MFIs were more effective than single interventions [8, 9]. However, like Squires et al. [10], this overview is more equivocal on this important issue. Four points may help account for the differences in findings. Firstly, the diversity of the systematic reviews in terms of clinical topic or setting is an important factor. Secondly, there is heterogeneity among the studies within the included systematic reviews themselves. Thirdly, there is a lack of consistency with regard to the definition of MFIs and the strategies included within them. Finally, there are epistemological differences across the papers and the reviews, meaning that the results presented depend on the methods used to measure, report and synthesise them. For instance, some reviews highlight that educational strategies can be useful for improving provider understanding, but that without wider organisational or system-level change they may struggle to deliver sustained transformation [19, 44].

It is also worth highlighting the importance of the theory of change underlying the different interventions. Where authors of the systematic reviews draw on theory, there is space to discuss and explain findings. We note a distinction between theoretical and atheoretical systematic review discussion sections. Atheoretical reviews tend to present acontextual findings (for instance, one study found very positive results for one intervention, and this gets highlighted in the abstract), whilst theoretically informed reviews attempt to contextualise and explain patterns within the included studies. Theory-informed systematic reviews seem more likely to offer profound and useful insights (see [19, 35, 40, 43, 45]). We find that the most insightful systematic reviews of MFIs engage in theoretical generalisation: they attempt to go beyond the data of individual studies and, drawing on implementation theory, discuss the wider implications of the findings of the studies within their reviews. At the same time, they highlight the active role of context and the wider relational and system-wide issues linked to implementation. It is these types of investigation that can help providers further develop evidence-based practice.

This overview has identified a small, but insightful set of papers that interrogate and help theorise why, how, for whom, and in which circumstances it might be the case that MFIs are superior (see [ 19 , 35 , 40 ] once more). At the level of this overview — and in most of the systematic reviews included — it appears to be the case that MFIs struggle with the question of attribution. In addition, there are other important elements that are often unmeasured, or unreported (e.g. costs of the intervention — see [ 40 ]). Finally, the stronger systematic reviews [ 19 , 35 , 40 , 43 , 45 ] engage with systems issues, human agency and context [ 18 ] in a way that was not evident in the systematic reviews identified in the previous reviews [ 8 , 9 ]. The earlier reviews lacked any theory of change that might explain why MFIs might be more effective than single ones — whereas now some systematic reviews do this, which enables them to conclude that sometimes single interventions can still be more effective.

As Nilsen et al. ([ 6 ] p. 7) note ‘Study findings concerning the effectiveness of various approaches are continuously synthesized and assembled in systematic reviews’. We may have gone as far as we can in understanding the implementation of evidence through systematic reviews of single and multi-faceted interventions and the next step would be to conduct more research exploring the complex and situated nature of evidence used in clinical practice and by particular professional groups. This would further build on the nuanced discussion and conclusion sections in a subset of the papers we reviewed. This might also support the field to move away from isolating individual implementation strategies [ 6 ] to explore the complex processes involving a range of actors with differing capacities [ 51 ] working in diverse organisational cultures. Taxonomies of implementation strategies do not fully account for the complex process of implementation, which involves a range of different actors with different capacities and skills across multiple system levels. There is plenty of work to build on, particularly in the social sciences, which currently sits at the margins of debates about evidence implementation (see for example, Normalisation Process Theory [ 52 ]).

There are several changes that we have identified in this overview of systematic reviews in comparison to the review we published in 2011 [8]. A consistent and welcome finding is that the overall quality of the systematic reviews themselves appears to have improved between the two reviews, although this is not reflected upon in the papers. This is exhibited through better, clearer reporting of the mechanics of the reviews, alongside greater attention to, and deeper description of, how potential biases in included papers are handled. Additionally, there is an increased, but still limited, inclusion of original studies conducted in low- and middle-income countries as opposed to just high-income countries. Importantly, we found that many of these systematic reviews are attuned to, and comment upon, the contextual distinctions of pursuing evidence-informed interventions in healthcare settings in different economic contexts. Furthermore, the systematic reviews included in this updated article cover a wider set of clinical specialities (both within and beyond hospital settings) and a wider set of healthcare professions, discussing the similarities, differences and inter-professional challenges faced therein, compared to the earlier reviews. This wider range of studies highlights that a particular intervention or group of interventions may work well for one professional group but be ineffective for another. The diversity of study settings allows us to consider the important role context (in its many forms) plays in implementing evidence into practice. Examining the complex and varied context of health care will help us address what Nilsen et al. ([6] p. 1) described as 'society's health problems [that] require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies'.
This will help us shift implementation science to move, ‘beyond a success or failure perspective towards improved analysis of variables that could explain the impact of the implementation process’ ([ 6 ] p. 2).

This review brings together 32 papers considering individual and multi-faceted interventions designed to support the use of evidence in clinical practice. The majority of reviews report strategies achieving small impacts (normally on processes of care); there is much less evidence that these strategies have shifted patient outcomes. Combined with the two previous reviews, 86 systematic reviews of strategies to increase the implementation of research into clinical practice have been conducted. As a whole, this substantial body of knowledge struggles to tell us more about the use of individual and multi-faceted interventions than: 'it depends'. To really move forward in addressing the gap between research evidence and practice, we may need to shift the emphasis away from isolating individual and multi-faceted interventions and towards better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of perspectives, especially from the social, economic, political and behavioural sciences, in primary studies, and diversifying the types of synthesis undertaken to include approaches such as realist synthesis, which facilitate exploration of the context in which strategies are employed. Harvey et al. [53] suggest that when context is likely to be critical to implementation success, a range of primary research approaches (participatory research, realist evaluation, developmental evaluation, ethnography, quality/rapid-cycle improvement) are likely to be appropriate and insightful. While these approaches often form part of implementation studies as process evaluations, they are usually relatively small scale in relation to implementation research as a whole, and as a result their findings often do not make it into subsequent systematic reviews. This review provides further evidence that we need to bring qualitative approaches in from the periphery to play a central role in many implementation studies and subsequent evidence syntheses. It would be helpful for systematic reviews, at the very least, to include more detail about the interventions and their implementation, in terms of how and why they worked.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BA: Before and after study

CCT: Controlled clinical trial

EPOC: Effective Practice and Organisation of Care

HICs: High-income countries

ICT: Information and Communications Technology

ITS: Interrupted time series

KT: Knowledge translation

LMICs: Low- and middle-income countries

RCT: Randomised controlled trial

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225–30. https://doi.org/10.1016/S0140-6736(03)14546-1 .

Green LA, Seifert CM. Translation of research into practice: why we can’t “just do it.” J Am Board Fam Pract. 2005;18:541–5. https://doi.org/10.3122/jabfm.18.6.541 .

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1–3. https://doi.org/10.1186/1748-5908-1-1 .

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:2–14. https://doi.org/10.1186/s13012-015-0209-1 .

Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:1–8. https://doi.org/10.1186/s13012-015-0295-0 .

Nilsen P, Ståhl C, Roback K, et al. Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Sci. 2013;8:2–12. https://doi.org/10.1186/1748-5908-8-63 .

Rycroft-Malone J, Seers K, Eldh AC, et al. A realist process evaluation within the Facilitating Implementation of Research Evidence (FIRE) cluster randomised controlled international trial: an exemplar. Implementation Sci. 2018;13:1–15. https://doi.org/10.1186/s13012-018-0811-0 .

Boaz A, Baeza J, Fraser A, European Implementation Score Collaborative Group (EIS). Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes. 2011;4:212. https://doi.org/10.1186/1756-0500-4-212 .

Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, et al. Changing provider behavior – an overview of systematic reviews of interventions. Med Care. 2001;39(8 Suppl 2):II2–45.

Squires JE, Sullivan K, Eccles MP, et al. Are multifaceted interventions more effective than single-component interventions in changing health-care professionals’ behaviours? An overview of systematic reviews. Implement Sci. 2014;9:1–22. https://doi.org/10.1186/s13012-014-0152-6 .

Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Development of an efficient search filter to retrieve systematic reviews from PubMed. J Med Libr Assoc. 2021;109:561–74. https://doi.org/10.5195/jmla.2021.1223 .

Thomas JM. Diffusion of innovation in systematic review methodology: why is study selection not yet assisted by automation? OA Evid Based Med. 2013;1:1–6.

Effective Practice and Organisation of Care (EPOC). The EPOC taxonomy of health systems interventions. EPOC Resources for review authors. Oslo: Norwegian Knowledge Centre for the Health Services; 2016. epoc.cochrane.org/epoc-taxonomy . Accessed 9 Oct 2023.

Jamal A, McKenzie K, Clark M. The impact of health information technology on the quality of medical and health care: a systematic review. Health Inf Manag. 2009;38:26–37. https://doi.org/10.1177/183335830903800305 .

Menon A, Korner-Bitensky N, Kastner M, et al. Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009;41:1024–32. https://doi.org/10.2340/16501977-0451 .

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271–8. https://doi.org/10.1016/0895-4356(91)90160-b .

Francke AL, Smit MC, de Veer AJ, et al. Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Med Inform Decis Mak. 2008;8:1–11. https://doi.org/10.1186/1472-6947-8-38 .

Jones CA, Roop SC, Pohar SL, et al. Translating knowledge in rehabilitation: systematic review. Phys Ther. 2015;95:663–77. https://doi.org/10.2522/ptj.20130512 .

Scott D, Albrecht L, O’Leary K, Ball GDC, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci. 2012;7:1–17. https://doi.org/10.1186/1748-5908-7-70 .

Wu Y, Brettle A, Zhou C, Ou J, et al. Do educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes? A systematic review. Nurse Educ Today. 2018;70:109–14. https://doi.org/10.1016/j.nedt.2018.08.026 .

Yost J, Ganann R, Thompson D, Aloweni F, et al. The effectiveness of knowledge translation interventions for promoting evidence-informed decision-making among nurses in tertiary care: a systematic review and meta-analysis. Implement Sci. 2015;10:1–15. https://doi.org/10.1186/s13012-015-0286-1 .

Grudniewicz A, Kealy R, Rodseth RN, Hamid J, et al. What is the effectiveness of printed educational materials on primary care physician knowledge, behaviour, and patient outcomes: a systematic review and meta-analyses. Implement Sci. 2015;10:2–12. https://doi.org/10.1186/s13012-015-0347-5 .

Koota E, Kääriäinen M, Melender HL. Educational interventions promoting evidence-based practice among emergency nurses: a systematic review. Int Emerg Nurs. 2018;41:51–8. https://doi.org/10.1016/j.ienj.2018.06.004 .

Flodgren G, O’Brien MA, Parmelli E, et al. Local opinion leaders: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD000125.pub5 .

Arditi C, Rège-Walther M, Durieux P, et al. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2017. https://doi.org/10.1002/14651858.CD001175.pub4 .

Pantoja T, Grimshaw JM, Colomer N, et al. Manually-generated reminders delivered on paper: effects on professional practice and patient outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD001174.pub4 .

De Angelis G, Davies B, King J, McEwan J, et al. Information and communication technologies for the dissemination of clinical practice guidelines to health professionals: a systematic review. JMIR Med Educ. 2016;2:e16. https://doi.org/10.2196/mededu.6288 .

Brown A, Barnes C, Byaruhanga J, McLaughlin M, et al. Effectiveness of technology-enabled knowledge translation strategies in improving the use of research in public health: systematic review. J Med Internet Res. 2020;22:e17274. https://doi.org/10.2196/17274 .

Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35. https://doi.org/10.1016/j.ijnurstu.2017.10.013 .

Bhatt NR, Czarniecki SW, Borgmann H, et al. A systematic review of the use of social media for dissemination of clinical practice guidelines. Eur Urol Focus. 2021;7:1195–204. https://doi.org/10.1016/j.euf.2020.10.008 .

Yamada J, Shorkey A, Barwick M, Widger K, et al. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open. 2015;5:e006808. https://doi.org/10.1136/bmjopen-2014-006808 .

Afari-Asiedu S, Abdulai MA, Tostmann A, et al. Interventions to improve dispensing of antibiotics at the community level in low and middle income countries: a systematic review. J Glob Antimicrob Resist. 2022;29:259–74. https://doi.org/10.1016/j.jgar.2022.03.009 .

Boonacker CW, Hoes AW, Dikhoff MJ, Schilder AG, et al. Interventions in health care professionals to improve treatment in children with upper respiratory tract infections. Int J Pediatr Otorhinolaryngol. 2010;74:1113–21. https://doi.org/10.1016/j.ijporl.2010.07.008 .

Al Zoubi FM, Menon A, Mayo NE, et al. The effectiveness of interventions designed to increase the uptake of clinical practice guidelines and best practices among musculoskeletal professionals: a systematic review. BMC Health Serv Res. 2018;18:2–11. https://doi.org/10.1186/s12913-018-3253-0 .

Ariyo P, Zayed B, Riese V, Anton B, et al. Implementation strategies to reduce surgical site infections: a systematic review. Infect Control Hosp Epidemiol. 2019;3:287–300. https://doi.org/10.1017/ice.2018.355 .

Borgert MJ, Goossens A, Dongelmans DA. What are effective strategies for the implementation of care bundles on ICUs: a systematic review. Implement Sci. 2015;10:1–11. https://doi.org/10.1186/s13012-015-0306-1 .

Cahill LS, Carey LM, Lannin NA, et al. Implementation interventions to promote the uptake of evidence-based practices in stroke rehabilitation. Cochrane Database Syst Rev. 2020. https://doi.org/10.1002/14651858.CD012575.pub2 .

Pedersen ER, Rubenstein L, Kandrack R, Danz M, et al. Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression. Implement Sci. 2018;13:1–30. https://doi.org/10.1186/s13012-018-0788-8 .

Jenkins HJ, Hancock MJ, French SD, Maher CG, et al. Effectiveness of interventions designed to reduce the use of imaging for low-back pain: a systematic review. CMAJ. 2015;187:401–8. https://doi.org/10.1503/cmaj.141183 .

Bennett S, Laver K, MacAndrew M, Beattie E, et al. Implementation of evidence-based, non-pharmacological interventions addressing behavior and psychological symptoms of dementia: a systematic review focused on implementation strategies. Int Psychogeriatr. 2021;33:947–75. https://doi.org/10.1017/S1041610220001702 .

Noonan VK, Wolfe DL, Thorogood NP, et al. Knowledge translation and implementation in spinal cord injury: a systematic review. Spinal Cord. 2014;52:578–87. https://doi.org/10.1038/sc.2014.62 .

Albrecht L, Archibald M, Snelgrove-Clarke E, et al. Systematic review of knowledge translation strategies to promote research uptake in child health settings. J Pediatr Nurs. 2016;31:235–54. https://doi.org/10.1016/j.pedn.2015.12.002 .

Campbell A, Louie-Poon S, Slater L, et al. Knowledge translation strategies used by healthcare professionals in child health settings: an updated systematic review. J Pediatr Nurs. 2019;47:114–20. https://doi.org/10.1016/j.pedn.2019.04.026 .

Bird ML, Miller T, Connell LA, et al. Moving stroke rehabilitation evidence into practice: a systematic review of randomized controlled trials. Clin Rehabil. 2019;33:1586–95. https://doi.org/10.1177/0269215519847253 .

Goorts K, Dizon J, Milanese S. The effectiveness of implementation strategies for promoting evidence informed interventions in allied healthcare: a systematic review. BMC Health Serv Res. 2021;21:1–11. https://doi.org/10.1186/s12913-021-06190-0 .

Zadro JR, O’Keeffe M, Allison JL, Lembke KA, et al. Effectiveness of implementation strategies to improve adherence of physical therapist treatment choices to clinical practice guidelines for musculoskeletal conditions: systematic review. Phys Ther. 2020;100:1516–41. https://doi.org/10.1093/ptj/pzaa101 .

Van der Veer SN, Jager KJ, Nache AM, et al. Translating knowledge on best practice into improving quality of RRT care: a systematic review of implementation strategies. Kidney Int. 2011;80:1021–34. https://doi.org/10.1038/ki.2011.222 .

Pawson R, Greenhalgh T, Harvey G, et al. Realist review–a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10Suppl 1:21–34. https://doi.org/10.1258/1355819054308530 .

Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research. Implementation Sci. 2012;7:1–10. https://doi.org/10.1186/1748-5908-7-33 .

Johnson MJ, May CR. Promoting professional behaviour change in healthcare: what interventions work, and why? A theory-led overview of systematic reviews. BMJ Open. 2015;5:e008592. https://doi.org/10.1136/bmjopen-2015-008592 .

Metz A, Jensen T, Farley A, Boaz A, et al. Is implementation research out of step with implementation practice? Pathways to effective implementation support over the last decade. Implement Res Pract. 2022;3:1–11. https://doi.org/10.1177/26334895221105585 .

May CR, Finch TL, Cornford J, Exley C, et al. Integrating telecare for chronic disease management in the community: What needs to be done? BMC Health Serv Res. 2011;11:1–11. https://doi.org/10.1186/1472-6963-11-131 .

Harvey G, Rycroft-Malone J, Seers K, Wilson P, et al. Connecting the science and practice of implementation – applying the lens of context to inform study design in implementation research. Front Health Serv. 2023;3:1–15. https://doi.org/10.3389/frhs.2023.1162762 .

Download references

Acknowledgements

The authors would like to thank Professor Kathryn Oliver for her support in planning the review, Professor Steve Hanney for reading and commenting on the final manuscript, and the staff at the LSHTM library for their support in planning and conducting the literature search.

This study was supported by LSHTM’s Research England QR strategic priorities funding allocation and the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. Grant number NIHR200152. The views expressed are those of the author(s) and not necessarily those of the NIHR, the Department of Health and Social Care or Research England.

Author information

Authors and Affiliations

Health and Social Care Workforce Research Unit, The Policy Institute, King’s College London, Virginia Woolf Building, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz

King’s Business School, King’s College London, 30 Aldwych, London, WC2B 4BG, UK

Juan Baeza & Alec Fraser

Federal University of Santa Catarina (UFSC), Campus Universitário Reitor João Davi Ferreira Lima, Florianópolis, SC, 88.040-900, Brazil

Erik Persson


Contributions

AB led the conceptual development and structure of the manuscript. EP conducted the searches and data extraction. All authors contributed to screening and quality appraisal. EP and AF wrote the first draft of the methods section. AB, JB and AF performed result synthesis and contributed to the analyses. AB wrote the first draft of the manuscript and incorporated feedback and revisions from all other authors. All authors revised and approved the final manuscript.

Corresponding author

Correspondence to Annette Boaz .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix A.

Additional file 2: Appendix B.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Boaz, A., Baeza, J., Fraser, A. et al. ‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice. Implementation Sci 19, 15 (2024). https://doi.org/10.1186/s13012-024-01337-z

Download citation

Received: 01 November 2023

Accepted: 05 January 2024

Published: 19 February 2024

DOI: https://doi.org/10.1186/s13012-024-01337-z

Keywords

  • Implementation
  • Interventions
  • Clinical practice
  • Research evidence
  • Multi-faceted

Implementation Science

ISSN: 1748-5908



Journal of Materials Chemistry A

In situ confined synthesis of interlayer-riveted carbon shell encapsulated PdZnBi alloy as highly active and durable oxygen reduction reaction catalyst

Stability has long been the main problem limiting the practical development of oxygen reduction reaction (ORR) catalysts. A PdZnBi alloy ORR catalyst with an interlayer-riveted carbon-shell encapsulation structure is synthesized via a small-molecule in-situ confinement effect. A reactive molecular dynamics (RMD) method reveals the mechanism by which acetic acid molecules form a uniform thin carbon layer on the metal surface during heat treatment. The riveted porous carbon-shell structure allows rapid electron and mass transfer while acting as a protective layer that limits oxidation of the alloy surface and maintains the initial catalyst structure. The catalyst exhibits excellent ORR durability, retaining 94% of its initial mass activity (MA) after 60,000 accelerated durability test (ADT) cycles, and when applied to zinc-air batteries it delivers stable high-current-density discharge (>100 h at 100 mA cm−2).

  • This article is part of the themed collection: Journal of Materials Chemistry A HOT Papers

Supplementary files

  • Supplementary information PDF (2548K)
  • Supplementary movie MP4 (2156K)
  • Supplementary movie MP4 (1888K)

Article information

Download citation

Permissions


L. Chang, K. Zhou, W. Si, C. Wang, C. Wang, M. Zhang, X. Ke, G. Chen and R. Wang, J. Mater. Chem. A, 2024, Accepted Manuscript, DOI: 10.1039/D3TA07060C

To request permission to reproduce material from this article, please go to the Copyright Clearance Center request page .

If you are an author contributing to an RSC publication, you do not need to request permission provided correct acknowledgement is given.

If you are the author of this article, you do not need to request permission to reproduce figures and diagrams provided correct acknowledgement is given. If you want to reproduce the whole article in a third-party publication (excluding your thesis/dissertation for which permission is not required) please go to the Copyright Clearance Center request page .

Read more about how to correctly acknowledge RSC content .


