
LT 5: Designing Differentiated Class Activities


edTPA Tips and Examples for edTPA Lesson Plans


December 30, 2014

Teaching Strategies and Learning Tasks

In preparing your lesson plans and commentaries, you will need to include and describe teaching strategies and learning tasks.

Teaching Strategies are the actions you, the teacher, take during a lesson.

In your plans, detail what teaching strategies you will use and when. In your commentaries, describe why you chose those strategies.

Having trouble deciding on your teaching strategies? An easy way to spur your thinking is to ask yourself: What will I be doing during the lesson?

Learning Tasks are opportunities you create for students to engage with the content you’re teaching. You want to be sure your plans and commentaries clearly describe the learning tasks you create.

In designing your learning tasks, ask yourself:

  • What will students be doing during the lessons?
  • Will you have them “turn and talk”?
  • Will they use individual white boards or response cards?
  • Will they work with partners?
  • Will they use a graphic organizer?
  • Will they do an experiment?


Who made the tips?

The tips were authored by Nancy Casey, Ed.D.

Nancy is a professor of teacher education at St. Bonaventure University.


Who made this site?

This site was made by the team at Edthena. Edthena is a classroom observation and video coaching platform used by teacher education programs across the country. Edthena also provides free tools for building an edTPA® portfolio.



© 2024 edTPA Tips and Examples for edTPA Lesson Plans



How to Learn More Effectively

10 Learning Techniques to Try

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Amy Morin, LCSW, is a psychotherapist and international bestselling author. Her books, including "13 Things Mentally Strong People Don't Do," have been translated into more than 40 languages. Her TEDx talk, "The Secret of Becoming Mentally Strong," is one of the most viewed talks of all time.


Knowing the most effective strategies for how to learn can help you maximize your efforts when you are trying to acquire new ideas, concepts, and skills. If you are like many people, your time is limited, so it is important to get the most educational value out of the time you have.

Speed of learning is not the only important factor, however. It is important to be able to accurately remember the information that you learn, recall it at a later time, and use it effectively in a wide variety of situations.

How can you teach yourself to learn? As you approach a new subject, incorporate some of the following tactics:

  • Find ways to boost your memory
  • Always keep learning new things
  • Use a variety of learning techniques
  • Try teaching it to someone else
  • Connect new information to things you already know
  • Look for opportunities to have hands-on experiences
  • Remember that mistakes are part of the process
  • Study a little bit every day
  • Test yourself
  • Focus on one thing at a time

Knowing how to learn well is not something that happens overnight, but putting a few of these learning techniques into daily practice can help you get more out of your study time.

Improve Your Memory

There are a number of different strategies that can boost memory. Basic tips such as improving your focus, avoiding cram sessions, and structuring your study time are good places to start, but there are even more lessons from psychology that can dramatically improve your learning efficiency.

Strategies that can help improve your memory include:

  • Getting regular physical exercise, which is linked to improvements in memory and brain health
  • Spending time socializing with other people
  • Getting enough sleep
  • Eliminating distractions so you can focus on what you are learning
  • Organizing the information you are studying to make it easier to remember
  • Using elaborative rehearsal when studying; when you learn something new, spend a few moments describing it to yourself in your own words
  • Using visual aids like photographs, graphs, and charts
  • Reading the information you are studying out loud

For example, you might use general learning techniques like setting aside quiet time to study, rehearsing, and reading information aloud. You might combine this with strategies that can foster better memory, such as exercising and socializing.

If you're pressed for time, consider combining study strategies. Listen to a podcast while you're taking a walk or join a group where you can practice your new skills with others.

Keep Learning New Things


One sure-fire way to become a more effective learner is to simply keep learning. Research has found that the brain is capable of producing new brain cells, a process known as neurogenesis. However, many of these cells will eventually die unless a person engages in some type of effortful learning.

By learning new things, these cells are kept alive and incorporated into brain circuits.

So, if you are learning a new language, it is important to keep practicing the language in order to maintain the gains you have achieved. This "use-it-or-lose-it" phenomenon involves a brain process known as "pruning."

In pruning, certain pathways in the brain are maintained, while others are eliminated. If you want the new information you just learned to stay put, keep practicing and rehearsing it.

Learn in Multiple Ways

Another one of the best ways to learn is to focus on learning in more than one way. For example, instead of just listening to a podcast, which involves auditory learning, find a way to rehearse the information both verbally and visually.

This might involve describing what you learned to a friend, taking notes, or drawing a mind map. By learning in more than one way, you’re further cementing the knowledge in your mind.

For example, if you are learning a new language, try varying techniques such as listening to language examples, reading written language, practicing with a friend, and writing down your own notes.

One helpful tip is to try writing out your notes on paper rather than typing on a laptop, tablet, or computer. Research has found that longhand notes can help cement information in memory more effectively than digital note-taking.

Varying your learning techniques and giving yourself the opportunity to learn in different ways and in different contexts can help make you a more efficient learner.

Teach What You Are Learning

Educators have long noted that one of the best ways to learn something is to teach it to someone else. Remember your seventh-grade presentation on Costa Rica? By having you teach the material to the rest of the class, your teacher hoped you would gain even more from the assignment.

You can apply the same principle today by sharing newly learned skills and knowledge with others. Start by translating the information into your own words. This process alone helps solidify new knowledge in your brain. Next, find some way to share what you’ve learned.

Some ideas include writing a blog post, creating a podcast, or participating in a group discussion.

Build on Previous Learning


Another great way to become a more effective learner is to use relational learning, which involves relating new information to things that you already know.

For example, if you are learning a new language, you might associate the new vocabulary and grammar you are learning with what you already know about your native language or other languages you may already speak.

Gain Practical Experience


For many students, learning typically involves reading textbooks, attending lectures, or doing research in the library or online. While seeing information and then writing it down is important, actually putting new knowledge and skills into practice can be one of the best ways to improve learning.

If it is a sport or athletic skill, perform the activity on a regular basis. If you are learning a new language, practice speaking with another person and surround yourself with language-immersion experiences. Watch foreign-language films and strike up conversations with native speakers to practice your budding skills.

If you are trying to acquire a new skill or ability, focus on gaining practical experience.

Don't Be Afraid to Make Mistakes

Research suggests that making mistakes when learning can improve learning outcomes. According to one study, trial-and-error learning where the mistakes were close to the actual answer was actually a helpful part of the learning process.

Another study found that mistakes followed by corrective feedback can be beneficial to learning. So if you make a mistake when learning something new, spend some time correcting the mistake and examining how you arrived at the incorrect answer.

This strategy can help foster critical thinking skills and make you more adaptable in learning situations that require being able to change your mind.

Research suggests that making mistakes when learning can actually help improve outcomes, especially if you correct your mistake and take the time to understand why it happened.

Use Distributed Practice


Another strategy that can help is known as distributed practice. Instead of trying to cram all of your learning into a few long study sessions, try a brief, focused session, and then take a break.

So if you were learning a new language, you might devote a period of time to an intensive session of studying. After a break, you would then come back and rehearse your previous learning while also extending it to new learning.

This process of returning for brief sessions over a long period of time is one of the best ways to learn efficiently and effectively. 

What is the best way to learn?

Research suggests that this type of distributed learning is one of the most effective learning techniques. Focus on spending a little time studying each topic every day.

While it may seem that spending more time studying is one of the best ways to maximize learning, research has demonstrated that taking tests actually helps you better remember what you've learned, even if it wasn't covered on the test.

This phenomenon, known as the testing effect, suggests that spending time retrieving information from memory improves the long-term memory of that information. This retrieval practice makes it more likely that you will be able to remember that information again in the future.

Stop Multitasking

For many years, it was thought that people who multitask (perform more than one activity at once) had an edge over those who did not. However, research now suggests that multitasking can actually make learning less effective.

Multitasking can involve trying to do more than one thing at the same time, but it can also involve quickly switching back and forth between tasks or trying to rapidly perform tasks one after the other. 

According to research, doing this not only makes people less productive when they work but also impairs attention and reduces comprehension. Multitasking when you are studying makes it harder to focus on the information and reduces how much you understand it.

Research has also found that media multitasking, or dividing attention between different media sources, can also have a detrimental impact on learning and academic performance.

To avoid the dangers of multitasking, start by focusing your attention on the task at hand and continue working for a predetermined amount of time.

If you want to know how to learn, it is important to explore learning techniques that have been shown to be effective. Strategies such as boosting your memory and learning in multiple ways can be helpful. Regularly learning new things, using distributed practice, and testing yourself often can also be helpful ways to become a more efficient learner.

A Word From Verywell

Becoming a more effective learner can take time, and it always takes practice and determination to establish new habits. Start by focusing on just a few of these tips to see if you can get more out of your next study session.

Perhaps most importantly, work on developing the mindset that you are capable of improving your knowledge and skills. Research suggests that believing in your own capacity for growth is one of the best ways to take advantage of the learning opportunities you pursue.

Frequently Asked Questions

Create a study schedule, eliminate distractions, and try studying frequently for shorter periods of time. Use a variety of learning methods such as reading the information, writing it down, and teaching it to someone else.

Learning techniques that can help when you have ADHD include breaking up your study sessions into small blocks, giving yourself plenty of time to prepare, organizing your study materials, and concentrating on information at times when you know that your focus is at its best.

Practice testing and distributed practice have been found to be two of the most effective learning strategies. Test yourself in order to practice recalling information and spread your learning sessions out into shorter sessions over a longer period of time.

The easiest way to learn is to build on the things that you already know. As you gradually extend your knowledge a little bit at a time, you'll eventually build a solid body of knowledge around that topic.

Five ways to learn include visual, auditory, text-based, kinesthetic, and multimodal learning. The VARK model of learning styles suggests that people tend to have a certain preference for one or more of these ways to learn.

Chaire A, Becke A, Düzel E. Effects of physical exercise on working memory and attention-related neural oscillations. Front Neurosci. 2020;14:239. doi:10.3389/fnins.2020.00239

Mazza S, Gerbier E, Gustin M-P, et al. Relearn faster and retain longer: Along with practice, sleep makes perfect. Psychol Sci. 2016;27(10):1321-1330. doi:10.1177/0956797616659930

Manning JR, Kahana MJ. Interpreting semantic clustering effects in free recall. Memory. 2012;20(5):511-517. doi:10.1080/09658211.2012.683010

Forrin ND, MacLeod CM. This time it's personal: the memory benefit of hearing oneself. Memory. 2018;26(4):574-579. doi:10.1080/09658211.2017.1383434

Shors TJ, Anderson ML, Curlik DM 2nd, Nokia MS. Use it or lose it: how neurogenesis keeps the brain fit for learning. Behav Brain Res. 2012;227(2):450-458. doi:10.1016/j.bbr.2011.04.023

Mueller PA, Oppenheimer DM. The pen is mightier than the keyboard: advantages of longhand over laptop note taking. Psychol Sci. 2014;25(6):1159-1168. doi:10.1177/0956797614524581

Cyr AA, Anderson ND. Learning from your mistakes: does it matter if you’re out in left foot, I mean field? Memory. 2018;26(9):1281-1290. doi:10.1080/09658211.2018.1464189

Metcalfe J. Learning from errors. Annu Rev Psychol. 2017;68(1):465-489. doi:10.1146/annurev-psych-010416-044022

Dunlosky J, Rawson KA, Marsh EJ, Nathan MJ, Willingham DT. Improving students’ learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychol Sci Public Interest. 2013;14(1):4-58. doi:10.1177/1529100612453266

Pastotter B, Bauml KHT. Retrieval practice enhances new learning: the forward effect of testing. Front Psychol. 2014;5. doi:10.3389/fpsyg.2014.00286

Jeong S-H, Hwang Y. Media multitasking effects on cognitive vs. attitudinal outcomes: A meta-analysis. Hum Commun Res. 2016;42(4):599-618. doi:10.1111/hcre.12089

May K, Elder A. Efficient, helpful, or distracting? A literature review of media multitasking in relation to academic performance. Int J Educ Technol High Educ. 2018;15(1):13. doi:10.1186/s41239-018-0096-z

Sarrasin JB, Nenciovici L, Foisy LMB, Allaire-Duquette G, Riopel M, Masson S. Effects of teaching the concept of neuroplasticity to induce a growth mindset on motivation, achievement, and brain activity: A meta-analysis. Trends Neurosci Educ. 2018;12:22-31. doi:10.1016/j.tine.2018.07.003



Encyclopedia of the Sciences of Learning, pp. 1606–1610

Interactive Learning Tasks

Antje Proske, Hermann Körndle & Susanne Narciss

Reference work entry


Synonyms: interactive exercises; interactive questions; interactive quizzes; interactive problems

Basically, a task is a piece of work to be done consisting of two elements – the question or problem and the solution. From a cognitive perspective, a task is a stimulus to which an individual responds in a certain way. Solving a task is thus an activity which requires individuals to perform a series of cognitive processes and actions in order to solve the problem and to produce an outcome representing the problem solution. A learning task is a specifically designed task in which the series of cognitive operations and actions conducing to the production of the learning task outcome lead learners to be actively engaged in knowledge construction at various levels. First, learning task processing itself can directly contribute to knowledge construction by stimulating cognitive processes which strengthen or modify learners’ current mental representation of the particular subject...


Anderson, R. C., & Biddle, W. B. (1975). On asking people questions about what they are reading. In G. Bower (Ed.), The psychology of learning and motivation (Vol. 9, pp. 89–132). New York: Academic Press.


Merrill, M. (2002). First principles of instruction. Educational Technology Research and Development, 50 (3), 43–59.


Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, J. J. G. van Merrienboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 125–144). Mahwah: Lawrence Erlbaum.

Proske, A., Körndle, H., & Narciss, S. (2004). The exercise format editor: A multimedia tool for the design of multiple learning tasks. In H. Niegemann, D. Leutner, & R. Brünken (Eds.), Instructional design for multimedia learning (pp. 149–164). Münster: Waxmann.

van Merriënboer, J. J. G., & Dijkstra, S. (1997). The four-component instructional design model for training complex cognitive skills. In R. D. Tennyson, N. Seel, S. Dijkstra, & F. Schott (Eds.), Instructional design: International perspectives (Vol. 1, pp. 427–445). Hillsdale: Lawrence Erlbaum.


Author information

Antje Proske, Hermann Körndle & Susanne Narciss
Psychology of Learning and Instruction, TU Dresden, Helmholtzstr. 10, 01062 Dresden, Germany

Correspondence to Antje Proske.

Editor information

Norbert M. Seel
Faculty of Economics and Behavioral Sciences, Department of Education, University of Freiburg, 79085 Freiburg, Germany


Copyright information

© 2012 Springer Science+Business Media, LLC

About this entry

Cite this entry:

Proske, A., Körndle, H., Narciss, S. (2012). Interactive Learning Tasks. In: Seel, N.M. (eds) Encyclopedia of the Sciences of Learning. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-1428-6_1100


DOI: https://doi.org/10.1007/978-1-4419-1428-6_1100

Publisher: Springer, Boston, MA

Print ISBN: 978-1-4419-1427-9

Online ISBN: 978-1-4419-1428-6

eBook Packages: Humanities, Social Sciences and Law


Chapter 6: Designing Engaging Tasks

  • Tasks are designed to help students meet objectives.
  • Tasks must be engaging in order for students to learn.
  • Engaging tasks make pedagogical connections between students’ backgrounds and needs in relation to the objectives.
  • Tasks should incorporate culture and be culturally responsive.
  • Students can help design and carry out tasks.

As you read the scenario below, think about issues that the principal needs to address with Mr. Carhart.

Having students present to the class is a technique that is commonly used in schools. In the chapter-opening scenario, however, the principal, Dr. Johnson, has some justifiable concerns. If students were not spending time on task, and the ELLs were not engaged at all, chances are that they were not learning as much as they could. The amount of time that students spend on task is clearly related to the amount of engagement that they feel (Christenson, Reschly, & Wylie, 2013). Creating language objectives to help students access and understand goals and making connections in the lesson’s introduction to help initiate engagement are important steps in helping students engage. However, the design of learning tasks must also emphasize access and engagement.

STOP AND THINK

Before reading the rest of Chapter 6, think about how you might change the task that Mr. Carhart’s students are involved in so that students are more engaged in the content and language and can meet lesson objectives.

Understanding Engagement and Tasks

An engaging task does not necessarily mean one that is fun but rather one that is worth doing because it is inherently interesting or meaningful to students in some other way. Decades of studies in learning, brain research, psychology, motivation, and second language acquisition clearly show that engaged students achieve more (Bender, 2017; Bruner, 1961; Christenson, Reschly, & Wylie, 2013; Egbert & Borysenko, 2019; Meltzer & Hamman, 2004). This is particularly true for ELLs and other diverse students because engagement in tasks can mediate the effects of factors outside school that may otherwise interfere with achievement (Csikszentmihalyi, 1990; Guthrie, Shafer, & Huang, 2001). As Egbert (2007) notes:

Engagement includes student involvement and ownership. . . . An engaging task means that students spend more time on task and have deeper focus, leading to greater success. In order to engage students, teachers should understand their needs, wants, and interests as relevant to their [learning]; in other words, to comprehend their learning goals. (n.p.)

Meltzer and Hamman (2004) refer to engagement as “persistence in and absorption with reading, writing, speaking, listening, and thinking even when there are other choices available” (p. 10). They propose three strategies, supported throughout the literature on engagement, for engaging students in tasks that integrate content and language:

  • Making connections to students’ lives by creating opportunities for authentic interactions with people, objects, and experiences that initiate student interest. In other words, tasks should be authentic and relevant for learners.
  • Having students interact with each other and with language. Tasks should be cooperative and/or collaborative in both focusing on language and using language for authentic purposes.
  • Creating responsive classrooms, or considering students’ needs, wants, abilities, and interests.

In other words, tasks should be differentiated, challenging, and scaffolded (Egbert, 2007). Clearly, understanding students’ backgrounds and interests, as suggested in Chapter 3, is central to student engagement.

Elements of Tasks

An understanding of tasks is also crucial to creating engaging ones. Tasks can be divided into two overlapping components, process and product. Task process is what the students do and how they do it during the task. Process can include whether students work in groups, what kind of language they use, and what tools they employ in doing a task. Task product can be seen as the outcome of this process or the end result of the task. Products can include written essays, plays, art pieces, dioramas, and many other (usually concrete and graded) artifacts. In the past, more emphasis was typically placed on task products, but the process is equally important because engagement and learning depend on what happens during it.

STOP AND DO

Before reading further, list the elements of task process and product that you know (in other words, think about what is involved in designing tasks). Elements of task process and product that teachers can consider intentionally in their task design are listed in Figure 6.1.

Figure 6.1 Elements of task process and product.

Elements of Task Process

Regardless of the content of the task, the elements of the process that require thought and careful design are the same. Each of these elements will be described next.

  • Instructional grouping. Grouping includes how many students work together and also with whom they work. In different tasks or different parts of one task, students can work individually, in dyads or trios, in large groups, or as a whole class. In addition, students can work in either homogeneous or heterogeneous groups that should be determined by aspects such as ability level, first language, interest, and/or skill. Which of these groupings is part of the design of a specific task depends on what the task is meant to accomplish. It also depends on how students connect to the groupings. For example, students who come from educational backgrounds where group work was prevalent may prefer collaboration and may need help working individually, and students who are used to working individually may prefer that approach and also need to learn skills for working in groups. Students in diverse classrooms benefit from teachers balancing the use of many participation structures (Peregoy & Boyle, 2016): from teacher-directed activities to small cooperative groups, to solo work. Students also profit from frequent opportunities to interact with each other and with the teacher during instructional activities.
  • Modes. In addition to the basic modes [1] of reading, writing, listening, and speaking, teachers and students can use graphics, video, art, music, storytelling and other modes that incorporate student backgrounds and help students access the content and language of the lesson. Students learn by interacting in all of these modes. Completing written worksheets, while useful for remediation and practice, should not be the main task of a lesson.
  • Task structure . Tasks can be open, partially structured, or highly structured. The task structure can determine how students get information and how they express themselves during the task. For example, in a structured task, the teacher may ask students to complete individually a predetermined set of task steps using specific materials, or in a more open task, students may choose which materials they use and how they arrive at the product. Whether the structure is cooperative or competitive, open or structured, or some combination, teachers can make sure that students understand how to participate via explicit modeling or instruction of group processes and language.
  • Time and pacing . Because they are such a diverse group, students do not get the same work done in the same amount of time. Some students work faster, some slower; some have language or content barriers; others complete the overall requirements but do not get deep into the topic. In designing a task, teachers need to consider how much time different students need while also considering how to provide enough scaffolding that students can complete their tasks. Having a set of task extensions or additional tasks that students are expected to tackle when they complete the required task sooner than expected can help them spend classroom time to the best advantage.
  • Scaffolding . Teachers can scaffold [2] student learning with such strategies as modeling, eliciting, probing, restating, clarifying, questioning, and praising, as appropriate. This can be done in a carefully planned way and when the teacher sees that students need help during a task. Students can be scaffolded in both content and language, particularly in the informal, intercultural, instructional, and academic language to which they have not previously been exposed. These kinds of scaffolds can also be provided by other students and paraprofessionals, class guests, carefully constructed computer programs, and the use of dictionaries and other reference works. If students are given too much scaffolding, however, they may not feel challenged and may become bored; if too little scaffolding is available, the task may seem too difficult and some students may flounder. The idea is to plan scaffolding so that there is just enough challenge to keep students engaged, regardless of their level. Understanding students’ backgrounds helps in designing lessons that have the appropriate amount of scaffolding.

Think about the ways that you scaffold instruction or have had instruction scaffolded for you. Make a list of scaffolds that may work for different groups of students.

  • Resources/texts. Lesson texts and other content and language resources must be at appropriate levels. Text sets, consisting of texts with similar content but a variety of language levels, can be assembled from different sources. Other resources should be used if they help students meet the objectives and can engage students in doing so. (For example, National Geographic Explorer magazine comes in Pioneer and Pathfinder editions, both with the same cover and illustrations so that elementary school students do not know who has the easier text; the focus of both is on content and language. Newsela (newsela.com) offers the same news articles at six language levels.)

Search the Internet for a text set centered around a specific content topic. Find at least one reading that can be used with each of the following three student groups: improving, grade level, and above grade level in language ability and knowledge.

  • Teacher/student roles. Who is the expert? Who gives help? Who asks questions? Who talks? Research shows that when the answer to most of these questions is “the student,” students are more likely to be engaged and to achieve (Meltzer & Hamman, 2004). Tasks should be developed with the intention that students will be active and engaged in learning rather than passive recipients of it. For example, instead of lecturing, teachers can ask essential questions like What?, How?, and Why? (Prensky, 2007) that lead students to create, with the teacher, a process for answering them.
  • Procedural tools. Tools that can support ELLs’ processes range from books and pencils to visitors and blogging software. Teachers need to determine which tools best fit the task. If computers are not really necessary, they probably should not be used. Likewise, if a book cannot give the best idea of the content or language, a different tool should be chosen. This tool–task fit is important because it takes the focus off the tool and keeps it on the content and language of the lesson. In other words, tools should not get in the way of learning.

One, some, or all of the elements of the task process can be designed to be engaging based on a teacher’s understanding of her students and the curriculum. In addition, by allowing students to make some of the design choices, teachers can differentiate [3] both task process and product. Differentiation, in turn, promotes greater access and engagement. [4]

Elements of Task Product

The elements of task process are clearly instrumental in engaging students and supporting achievement. Several aspects of the task product are also important and will be discussed next.

  • Audience. Students are typically more engaged in their products when an audience other than the teacher will view them. A letter written to a scientist or politician, a book to be read to students in other classes or placed in the library, or a model entered in a competition is more likely to engage students than a worksheet or writing assignment that the teacher grades and students then “file” in the nearest trash can.
  • Mode. How can students complete their products? As in the task process, modes play an important role. Speaking, writing, drawing, acting, singing, constructing, and creating are among the many choices teachers can offer. While deciding what students will produce, teachers can review the lesson objective verbs (see Figure 4.1) and define the products broadly enough that students have a chance to express themselves in ways that can be understood. Students can also be given choices about how to represent their learning.

Assessment of both the process and the product should help students see the relationships among the objectives, the connections, and the task. Assessment is discussed further in Chapter 7.

Pedagogical Connections

Engagement comes when task elements—of both process and product—are designed to work for students. To design effective tasks, teachers can make pedagogical connections; in other words, they should think about the educational backgrounds and interests of their students while designing tasks. Making and explaining such connections can help lead to student success. For example, Oh (2005) notes that successful learning tasks in her classroom were those in which her students were encouraged to produce products using their creativity and experiences, including creating short stories, poems, raps, mobiles, video clips, quilts, puppet shows, and PowerPoint presentations. Murray (1999) likewise describes projects in which students chose the topic or procedure for their learning and recorded in some way how the course content connected to their daily lives. When making the connection, the teacher can tell the students, “We are working in groups today because I know that you learn best that way,” or “We will be working individually on this project so that you each get to present your own ideas, which I know you like to do.” These ideas, and the general techniques described below, are based on the teacher’s understanding of the diversity of learners within the classroom.

Techniques for Making Pedagogical Connections

In 1998, the Center for Research on Education, Diversity & Excellence recommended the pedagogical strategies in Figure 6.2, which teachers can still use to connect instruction to students’ backgrounds. An additional principle is to use culturally relevant resources such as minority and first-language literature, film, and artifacts.

  • Listen to students talk about familiar topics, such as home and community.
  • Respond to students’ talk and questions, making on-the-spot changes that relate directly to their comments.
  • Interact with students in ways that respect their speaking styles, which may be different from the teacher’s, such as paying attention to wait-time, eye contact, turn taking, and spotlighting.
  • Connect student language with literacy and content-area knowledge through speaking, listening, reading, and writing activities.
  • Encourage students to use content vocabulary to express their understanding.
  • Encourage students to use their first and second languages in instructional activities (p. 2).

Figure 6.2 Principles for connecting instruction to students’ lives.

Teachers can also promote cultural awareness, engage students, and enrich the presentation of content by integrating facts from a variety of cultures where they naturally fit into the lesson. Figure 6.3 presents examples of tasks into which teachers have integrated cultural facts.

Figure 6.3 Integrating cultural facts.

Pedagogical connections, or the design of tasks that support achievement for all learners, work with personal and academic connections to provide students with both access and reasons to engage.

What topics do you know enough about to include cross-cultural facts? Which do you need to learn more about?

Guidelines for Task Design

In addition to the suggestions above, two further guidelines can help teachers create effective tasks.

Guideline 1: Give students a reason to listen. In the chapter-opening scenario, students listened to practically the same presentation over and over. From their reactions, it is clear that they had little incentive to listen, even though the teacher had asked them to. Mr. Carhart has many options for making this task more engaging. For example, he could ask the students to take notes for an upcoming test, or to list differences in the information that the groups found. Even better, he could design a jigsaw activity, asking each group to present on a different aspect of the war so that peers would need to synthesize the information to complete their final products. Whether they are asked to fill out a graphic organizer or to pose two questions to the presenters, students always need a reason to listen.

Guideline 2: Do not do what students can do. The more students have invested in a task or lesson, the more engaged they tend to be. Teachers who give students choices and allow them more autonomy in making instructional decisions will find them more involved in their learning. By understanding students’ backgrounds, teachers can design roles for students in tasks and lessons that those students might not previously have considered. [5] The list below presents some tasks that students can do but that teachers typically take responsibility for:

  • Write test questions.
  • Help their peers review.
  • Lead a brainstorming session.
  • Explain tasks.
  • Form effective groups.
  • Decorate the classroom.
  • Provide feedback.
  • Search for resources.
  • Find cultural facts.
  • Create choices for products.

With a partner, list other tasks that students can do that teachers often do not allow them to do.

Providing students with reasons to listen and letting them participate in instructional planning can facilitate student engagement and thereby their success. Figure 6.4 summarizes these guidelines. Additional guidelines are presented throughout this book.

Figure 6.4 Guidelines for designing engaging tasks.

After reading the chapter, what advice would you give to the principal and teacher in the chapter-opening scenario?

The careful design of task processes and products can result in student engagement, particularly when the backgrounds and needs of all students are considered. Instructional connections, the integration of cultural knowledge, and a focus on student autonomy contribute to achievement for all students. The assessment of lessons and of student processes and outcomes is the subject of Chapter 7.

For Reflection

  • Task process. Think about times that you have given students worksheets or been given worksheets by a teacher. How might students engage with the information they must learn in a more active way?
  • Task product . What’s the most interesting product you have created? What made it engaging to you?
  • Organizing task design. Use the elements chart in Figure 6.1 to make a checklist of elements you want to remember to include in your lessons.
  • Standards and culture . Look at the standards for your content area and/or grade level. Find cultural facts that you could integrate into lessons on the topics that the standards require.

References

Bender, W. (2017). 20 strategies for increasing student engagement. West Palm Beach, FL: Learning Sciences International.

Christenson, S., Reschly, A., & Wylie, C. (Eds.). (2013). Handbook of research on student engagement. New York: Springer.

Csikszentmihalyi, M. (1990). Literacy and intrinsic motivation. Daedalus, 119, 115–140.

Egbert, J. (2007). Asking useful questions: Goals, engagement, and differentiation in technology-enhanced language learning. Teaching English with Technology, 7(1), n.p. Available at http://www.iatefl.org.pl/call/j_article27.htm

Egbert, J., & Borysenko, N. (2019, October). Standards, engagement, and Minecraft: Optimizing experiences in language teacher education. Teaching and Teacher Education, 85, 115–124.

Guthrie, J. T., Schafer, W. D., & Huang, C. (2001). Benefits of opportunity to read and balanced reading instruction for reading achievement and engagement: A policy analysis of state NAEP in Maryland. Journal of Educational Research, 94(3), 145–162.

Meltzer, J., & Hamann, E. (2004). Meeting the literacy development needs of adolescent English language learners through content area learning. Part one: Focus on motivation and engagement. Providence, RI: The Brown University Education Alliance/Northeast and Islands Regional Education Laboratory.

Oh, J. (2005). Connecting learning with students’ interests and daily lives with project assignment: “It is my project.” Proceedings of the 2005 American Society for Engineering Education Annual Conference and Exposition. Available at www.aaee.com.au/conferences/papers/2005/Paper/Paper253.pdf

Peregoy, S., & Boyle, O. (2016). Reading, writing and learning in ESL: A resource book for K–12 teachers (7th ed.). Boston, MA: Pearson.

Prensky, M. (2007). New issues, new answers: Changing paradigms. Educational Technology, 47(4), 64.

Vygotsky, L. (1986). Thought and language. Boston: MIT Press.

  • Language modes include listening, speaking, reading, writing, viewing, and representing. Multiple modes should be integrated in all tasks, unless the task is specifically designed to focus on one mode. ↵
  • Scaffolding means providing support of the appropriate type and level of difficulty. ↵
  • Differentiation of instruction means designing instruction based on student abilities, interests, and backgrounds. The purpose is to help all students reach the same goal but to do so in a way that works for each student. ↵
  • See these useful texts and websites:  Differentiated Instruction by T. Hall, 2002 ( http://www.cast.org/publications/ncac/ncac_diffinstruc.html ); How to Differentiate Instruction in Mixed-Ability Classrooms (2nd ed.), by C. Tomlinson, 2001, Alexandria, VA: ASCD; and many other useful resources from the Association for Supervision and Curriculum Development (ASCD) (www.ascd.org). ↵
  • Learner autonomy refers to the amount of responsibility that learners take or are given for their own learning, including the extent to which they make choices about task process and product. ↵


Our next-generation model: Gemini 1.5

Feb 15, 2024

The model delivers dramatically enhanced performance, with a breakthrough in long-context understanding across modalities.


A note from Google and Alphabet CEO Sundar Pichai:

Last week, we rolled out our most capable model, Gemini 1.0 Ultra, and took a significant step forward in making Google products more helpful, starting with Gemini Advanced. Today, developers and Cloud customers can begin building with 1.0 Ultra too — with our Gemini API in AI Studio and in Vertex AI.

Our teams continue pushing the frontiers of our latest models with safety at the core. They are making rapid progress. In fact, we’re ready to introduce the next generation: Gemini 1.5. It shows dramatic improvements across a number of dimensions and 1.5 Pro achieves comparable quality to 1.0 Ultra, while using less compute.

This new generation also delivers a breakthrough in long-context understanding. We’ve been able to significantly increase the amount of information our models can process — running up to 1 million tokens consistently, achieving the longest context window of any large-scale foundation model yet.

Longer context windows show us the promise of what is possible. They will enable entirely new capabilities and help developers build much more useful models and applications. We’re excited to offer a limited preview of this experimental feature to developers and enterprise customers. Demis shares more on capabilities, safety and availability below.

Introducing Gemini 1.5

By Demis Hassabis, CEO of Google DeepMind, on behalf of the Gemini team

This is an exciting time for AI. New advances in the field have the potential to make AI more helpful for billions of people over the coming years. Since introducing Gemini 1.0, we’ve been testing, refining and enhancing its capabilities.

Today, we’re announcing our next-generation model: Gemini 1.5.

Gemini 1.5 delivers dramatically enhanced performance. It represents a step change in our approach, building upon research and engineering innovations across nearly every part of our foundation model development and infrastructure. This includes making Gemini 1.5 more efficient to train and serve, with a new Mixture-of-Experts (MoE) architecture.

The first Gemini 1.5 model we’re releasing for early testing is Gemini 1.5 Pro. It’s a mid-size multimodal model, optimized for scaling across a wide range of tasks, and performs at a similar level to 1.0 Ultra, our largest model to date. It also introduces a breakthrough experimental feature in long-context understanding.

Gemini 1.5 Pro comes with a standard 128,000 token context window. But starting today, a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens via AI Studio and Vertex AI in private preview.

As we roll out the full 1 million token context window, we’re actively working on optimizations to improve latency, reduce computational requirements and enhance the user experience. We’re excited for people to try this breakthrough capability, and we share more details on future availability below.

These continued advances in our next-generation models will open up new possibilities for people, developers and enterprises to create, discover and build using AI.

Context lengths of leading foundation models

Highly efficient architecture

Gemini 1.5 is built upon our leading research on Transformer and MoE architecture. While a traditional Transformer functions as one large neural network, MoE models are divided into smaller “expert” neural networks.

Depending on the type of input given, MoE models learn to selectively activate only the most relevant expert pathways in their neural networks. This specialization massively enhances the model’s efficiency. Google has been an early adopter and pioneer of the MoE technique for deep learning through research such as Sparsely-Gated MoE, GShard-Transformer, Switch-Transformer, M4 and more.
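The routing idea can be sketched in a few lines of NumPy. This is an illustrative toy, not Gemini’s architecture: a small gating network scores the experts, only the top-k experts are evaluated for a given input, and their outputs are mixed with softmax weights. All dimensions and weights here are invented.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Top-k mixture-of-experts layer: score experts with a gate,
    run only the k best, and mix their outputs with softmax weights."""
    logits = x @ gate_w                       # one gating score per expert
    top_k = np.argsort(logits)[-k:]           # indices of the k best experts
    scores = logits[top_k]
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    # Only the selected experts are evaluated; the rest are skipped.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

Because only k of the expert matrices are multiplied per input, compute grows with k rather than with the total parameter count, which is the source of the efficiency gain described above.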

Our latest innovations in model architecture allow Gemini 1.5 to learn complex tasks more quickly and maintain quality, while being more efficient to train and serve. These efficiencies are helping our teams iterate, train and deliver more advanced versions of Gemini faster than ever before, and we’re working on further optimizations.

Greater context, more helpful capabilities

An AI model’s “context window” is made up of tokens, which are the building blocks used for processing information. Tokens can be entire parts or subsections of words, images, videos, audio or code. The bigger a model’s context window, the more information it can take in and process in a given prompt — making its output more consistent, relevant and useful.

Through a series of machine learning innovations, we’ve increased 1.5 Pro’s context window capacity far beyond the original 32,000 tokens for Gemini 1.0. We can now run up to 1 million tokens in production.

This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens.
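The figures quoted above imply roughly the following per-unit token rates for a 1-million-token window (back-of-envelope arithmetic only, not official conversion factors):

```python
# Rough per-unit token rates implied by the quoted 1M-token capacities.
TOKENS = 1_000_000
print(TOKENS / (60 * 60))    # ~278 tokens per second for 1 hour of video
print(TOKENS / (11 * 3600))  # ~25 tokens per second for 11 hours of audio
print(TOKENS / 30_000)       # ~33 tokens per line for 30,000 lines of code
print(TOKENS / 700_000)      # ~1.4 tokens per word for 700,000 words
```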

Complex reasoning about vast amounts of information

1.5 Pro can seamlessly analyze, classify and summarize large amounts of content within a given prompt. For example, when given the 402-page transcripts from Apollo 11’s mission to the moon, it can reason about conversations, events and details found across the document.

Reasoning across a 402-page transcript: Gemini 1.5 Pro Demo

Gemini 1.5 Pro can understand, reason about and identify curious details in the 402-page transcripts from Apollo 11’s mission to the moon.

Better understanding and reasoning across modalities

1.5 Pro can perform highly-sophisticated understanding and reasoning tasks for different modalities, including video. For instance, when given a 44-minute silent Buster Keaton movie , the model can accurately analyze various plot points and events, and even reason about small details in the movie that could easily be missed.

Multimodal prompting with a 44-minute movie: Gemini 1.5 Pro Demo

Gemini 1.5 Pro can identify a scene in a 44-minute silent Buster Keaton movie when given a simple line drawing as reference material for a real-life object.

Relevant problem-solving with longer blocks of code

1.5 Pro can perform more relevant problem-solving tasks across longer blocks of code. When given a prompt with more than 100,000 lines of code, it can better reason across examples, suggest helpful modifications and give explanations about how different parts of the code work.

Problem solving across 100,633 lines of code | Gemini 1.5 Pro Demo

Gemini 1.5 Pro can reason across 100,000 lines of code giving helpful solutions, modifications and explanations.

Enhanced performance

When tested on a comprehensive panel of text, code, image, audio and video evaluations, 1.5 Pro outperforms 1.0 Pro on 87% of the benchmarks used for developing our large language models (LLMs). And when compared to 1.0 Ultra on the same benchmarks, it performs at a broadly similar level.

Gemini 1.5 Pro maintains high levels of performance even as its context window increases. In the Needle In A Haystack (NIAH) evaluation, where a small piece of text containing a particular fact or statement is purposely placed within a long block of text, 1.5 Pro found the embedded text 99% of the time, in blocks of data as long as 1 million tokens.
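A needle-in-a-haystack evaluation is straightforward to construct: plant one distinctive sentence at a random position in a long run of filler text, ask a question only that sentence answers, and measure how often the answer is recovered. The sketch below is a toy harness; the needle text and the `grep_model` stand-in (which simply scans the prompt rather than calling a real model) are invented for illustration.

```python
import random

NEEDLE = "The secret passphrase is harbor-salt."
QUESTION = "What is the secret passphrase?"

def build_prompt(doc_len, needle, position):
    """Bury the needle sentence among doc_len filler sentences."""
    filler = [f"This is filler sentence number {i}." for i in range(doc_len)]
    filler.insert(position, needle)
    return " ".join(filler) + "\n\n" + QUESTION

def grep_model(prompt):
    """Stand-in for a real long-context model call: scans the prompt."""
    for sentence in prompt.split("."):
        if "passphrase" in sentence and "filler" not in sentence:
            return sentence.strip() + "."
    return "I don't know."

def niah_recall(model, trials=20, doc_len=500):
    """Fraction of random placements at which the model finds the needle."""
    hits = 0
    for _ in range(trials):
        pos = random.randrange(doc_len + 1)
        if "harbor-salt" in model(build_prompt(doc_len, NEEDLE, pos)):
            hits += 1
    return hits / trials

print(niah_recall(grep_model))  # 1.0 for this perfect-recall stand-in
```

Swapping `grep_model` for a real model call, and sweeping `doc_len` and `position`, yields the kind of recall-versus-depth grid used in NIAH reports.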

Gemini 1.5 Pro also shows impressive “in-context learning” skills, meaning that it can learn a new skill from information given in a long prompt, without needing additional fine-tuning. We tested this skill on the Machine Translation from One Book (MTOB) benchmark, which shows how well the model learns from information it’s never seen before. When given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person learning from the same content.

As 1.5 Pro’s long context window is the first of its kind among large-scale models, we’re continuously developing new evaluations and benchmarks for testing its novel capabilities.

For more details, see our Gemini 1.5 Pro technical report.

Extensive ethics and safety testing

In line with our AI Principles and robust safety policies, we’re ensuring our models undergo extensive ethics and safety tests. We then integrate these research learnings into our governance processes and model development and evaluations to continuously improve our AI systems.

Since introducing 1.0 Ultra in December, our teams have continued refining the model, making it safer for a wider release. We’ve also conducted novel research on safety risks and developed red-teaming techniques to test for a range of potential harms.

In advance of releasing 1.5 Pro, we've taken the same approach to responsible deployment as we did for our Gemini 1.0 models, conducting extensive evaluations across areas including content safety and representational harms, and will continue to expand this testing. Beyond this, we’re developing further tests that account for the novel long-context capabilities of 1.5 Pro.

Build and experiment with Gemini models

We’re committed to bringing each new generation of Gemini models to billions of people, developers and enterprises around the world responsibly.

Starting today, we’re offering a limited preview of 1.5 Pro to developers and enterprise customers via AI Studio and Vertex AI. Read more about this on our Google for Developers blog and Google Cloud blog.

We’ll introduce 1.5 Pro with a standard 128,000 token context window when the model is ready for a wider release. Coming soon, we plan to introduce pricing tiers that start at the standard 128,000 context window and scale up to 1 million tokens, as we improve the model.

Early testers can try the 1 million token context window at no cost during the testing period, though they should expect longer latency times with this experimental feature. Significant improvements in speed are also on the horizon.

Developers interested in testing 1.5 Pro can sign up now in AI Studio, while enterprise customers can reach out to their Vertex AI account team.

Learn more about Gemini’s capabilities and see how it works .


  • Open access
  • Published: 24 February 2024

A novel interpretable deep transfer learning combining diverse learnable parameters for improved T2D prediction based on single-cell gene regulatory networks

  • Sumaya Alghamdi 1, 2 &
  • Turki Turki 1

Scientific Reports volume 14, Article number: 4491 (2024)


  • Computational biology and bioinformatics
  • Computer science

Accurate deep learning (DL) models to predict type 2 diabetes (T2D) are concerned not only with targeting the discrimination task but also with learning useful feature representations. However, existing DL tools are far from perfect and do not provide appropriate interpretation as a guideline to explain and promote superior performance in the target task. Therefore, we provide an interpretable approach for our presented deep transfer learning (DTL) models to overcome such drawbacks, working as follows. We utilize several pre-trained models, including SEResNet152 and SEResNeXT101. Then, we transfer knowledge from the pre-trained models by keeping the weights in the convolutional base (i.e., the feature extraction part) unchanged while modifying the classification part, using the Adam optimizer, to classify healthy controls and T2D based on single-cell gene regulatory network (SCGRN) images. Other DTL models work in a similar manner but keep only the weights of the bottom layers in the feature extraction part unaltered while updating the weights of subsequent layers through training from scratch. Experimental results on the whole set of 224 SCGRN images using five-fold cross-validation show that our model (TFeSEResNeXT101) achieved the highest average balanced accuracy (BAC) of 0.97, thereby significantly outperforming the baseline, which resulted in an average BAC of 0.86. Moreover, a simulation study demonstrated that this superiority is attributable to the distributional conformance of model weight parameters obtained with the Adam optimizer when coupled with weights from a pre-trained model.

Introduction

Type 2 diabetes (T2D) is a common condition that, when left untreated, can over time damage various organs, including the kidneys, eyes, and heart, to name just a few 1, 2. Patients with diabetes incur overall average medical expenditures more than twice those of patients without diabetes. Diabetes is therefore considered a burden associated with higher medical costs and increased mortality rates 3. A highly accurate tool to discriminate between healthy and T2D subjects can aid in disease diagnosis, management, prevention, and understanding 4. Scientific efforts have therefore been made to detect T2D using computational methods 5, 6, 7, 8.

Pyrros et al. 9 employed a deep learning (DL)-based approach to identify T2D from chest X-ray (CXR) images of healthy and T2D patients, working as follows. They used the ResNet34 DL model with typical data augmentation and the Adam optimizer 10. The dataset used to train the ResNet34 model from scratch consisted of 271,065 CXR training images, of which 45,961 were T2D CXR images and the remaining 225,104 were CXR images of healthy control subjects. The trained model was then applied to a testing set of 9943 CXR images. Results showed that the DL model achieved an AUC of 0.84, compared with an AUC of 0.79 for a baseline linear regression (LR) using only clinical information. An ensemble of LR and ResNet34 generated an AUC of 0.85, a marginal improvement in prediction performance. These results demonstrate the feasibility of using DL to screen T2D patients with CXR images. Wachinger et al. 11 presented a DL approach to predict T2D from neck-to-knee MRI images and clinical information. The MRI dataset consisted of 3406 images with a uniform class distribution (i.e., 1703 T2D images and 1703 images of healthy control subjects). The DL approach consists of convolutional layers, max-pooling layers, batch-normalization layers, a dropout layer, and a dynamic affine feature map transform (DAFT) within the convolutional layers to combine features from the clinical and MRI image data. Five-fold cross-validation was used to assess performance on the whole dataset. Results demonstrated the superiority of CNN-DAFT, which achieved an AUC of 0.871, significantly outperforming a CNN using only MRI images and linear regression using only clinical information.

Das et al. 12 presented a learning-based approach combining deep and machine learning for the diagnosis of T2D based on DNA sequences, working as follows. First, they transformed DNA sequences from healthy and T2D subjects into images, which were provided as input to the ResNet 13 and VGG19 DL models to extract features. The extracted features, along with the corresponding class labels, were then provided to machine learning algorithms, namely support vector machines (SVM) and KNN. Experimental results using cross-validation on the whole image dataset demonstrate the good performance of SVM when coupled with features extracted using the ResNet DL model. Naveed et al. 14 employed DL to predict T2D. The dataset consisted of 19,181 patient records, of which 7715 were for diabetic patients and the remaining 11,466 were for non-diabetic patients. The dataset was split into 80% for training and 20% for testing. The DL models included CNN, LSTM, and CNN-LSTM 15. Experimental results demonstrated the superior performance of CNN-LSTM, which achieved the highest results compared with other models, including decision trees and SVM. Specifically, CNN-LSTM generated an accuracy of 91.6 and an F1-score of 89.2. These results demonstrate the feasibility of DL for early prediction of T2D. Other AI-driven computational methods have also been proposed to aid in predicting T2D 8, 16.

Because inferred single-cell gene regulatory networks (SCGRNs) encode the molecular interactions among components of specific cell types and can thereby aid in characterizing cellular differentiation in healthy and diseased subjects 17, 18, Turki et al. 19 presented a novel DL approach to discriminate between healthy controls and T2D based on SCGRN images, working as follows. Because rapid progress in single-cell technologies has made biological experiments pertaining to gene regulatory networks (GRNs) widely available, single-cell gene expression data from the ArrayExpress repository were processed with the bigSCale and NetBioV packages 20, 21, 22, generating 224 SCGRN images. The class distribution was even between healthy control and T2D images. The RMSprop optimizer 15 was then used with the following DL models to discriminate between healthy control and T2D SCGRN images: VGG16 23, VGG19 23, Xception 24, ResNet50 13, ResNet101 13, DenseNet121 25, and DenseNet169 25. Experimental results demonstrated that VGG19 performed better than the other studied DL models. However, no interpretation was provided to explain the prediction performance.
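The two optimizers compared in this line of work differ in one respect: RMSprop divides each gradient by a running root-mean-square of past gradients, while Adam additionally maintains a bias-corrected momentum average of the gradient itself. The sketch below implements the standard update rules on a toy quadratic objective; the target weights and hyperparameter choices are invented, and the example illustrates only the mechanics of the updates, not the weight-distribution analysis reported in this paper.

```python
import numpy as np

def rmsprop_step(w, g, state, lr=0.01, rho=0.9, eps=1e-8):
    """RMSprop: divide the gradient by a running RMS of past gradients."""
    state["v"] = rho * state["v"] + (1 - rho) * g**2
    return w - lr * g / (np.sqrt(state["v"]) + eps)

def adam_step(w, g, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: RMSprop-style scaling plus bias-corrected momentum."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g**2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Toy objective: pull zero-initialized weights toward "pre-trained"
# weights w_pre by minimizing ||w - w_pre||^2 (gradient = 2(w - w_pre)).
w_pre = np.array([1.0, -2.0, 0.5])
w_adam, w_rms = np.zeros(3), np.zeros(3)
s_adam = {"m": np.zeros(3), "v": np.zeros(3), "t": 0}
s_rms = {"v": np.zeros(3)}
for _ in range(2000):
    w_adam = adam_step(w_adam, 2 * (w_adam - w_pre), s_adam)
    w_rms = rmsprop_step(w_rms, 2 * (w_rms - w_pre), s_rms)
print(np.round(w_adam, 2), np.round(w_rms, 2))
```

Both rules converge on this simple objective; the paper's claim concerns how the resulting weight distributions align with pre-trained weights in the far less benign setting of deep network training.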

Although these recently developed methods aim to address the task of predicting T2D, they are still far from perfect and do not provide interpretations for practical deep transfer learning (DTL) models that would help explain their performance superiority. Therefore, this study is unique in the following aspects: (1) we present highly accurate DTL models that combine weight parameters from pre-trained models with weights obtained using the Adam optimizer; (2) we provide, to the best of our knowledge, the first interpretation of DTL models that inspects and quantifies the conformance of pre-trained model weight parameters with the weight parameters obtained using the Adam and RMSprop optimizers; this interpretation framework can guide the design of highly efficient DTL models applicable to a wide range of problems; and (3) we conduct an experimental study reporting the prediction performance and computational running time for the task of predicting single-cell gene regulatory network images pertaining to healthy controls and T2D. Experimental results demonstrate the superiority of our DTL model, TFeSEResNeXT101, which performs better than the baseline with an 11% improvement. In terms of running time, our DL models exhibited a significant reduction in training time attributable to transfer learning, which reduced the number of trainable weight parameters. In addition, a simulation study revealed the conformance of the weight parameters transferred from pre-trained models with the weights obtained from the Adam optimizer, as compared with RMSprop, which was used by the baseline and resulted in inferior prediction performance attributable to the divergence of its weight parameters from those of the pre-trained models.
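Balanced accuracy (BAC), the metric reported above, is the mean of the per-class recalls. The toy below (with invented labels) shows why it is preferred when classes are imbalanced: a degenerate classifier that always predicts the majority class scores 0.5 rather than a flattering 0.9.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; equals plain accuracy only when the
    classifier performs equally well on every class."""
    recalls = []
    for c in np.unique(y_true):
        mask = y_true == c
        recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Imbalanced toy labels and a classifier that always predicts class 0.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)
print((y_true == y_pred).mean())            # plain accuracy: 0.9
print(balanced_accuracy(y_true, y_pred))    # balanced accuracy: 0.5
```

On the 224-image SCGRN dataset the classes are evenly split, so BAC and plain accuracy coincide in expectation, but BAC remains the safer metric to report.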

Materials and methods

Biological networks.

We provide an illustration in Fig.  1 for the biological network images used in this study, which were downloaded from 19 and consist of 224 SCGRN images pertaining to healthy controls and T2D patients. The class distribution for these biological network images is balanced (i.e., the 224 images are divided evenly between the two classes). These biological network images were produced with the help of the bigSCale package, which processes the single-cell gene expression data and builds regulatory networks, which are then visualized via the NetBioV package. The single-cell gene expression data pertaining to healthy controls and T2D patients were obtained from the ArrayExpress repository under accession number E-MTAB-5061 26 .

figure 1

Flowchart of the deep transfer learning-based approach for predicting T2D using SCGRNs. Biological Networks: To infer a single-cell gene regulatory network (SCGRN), gene expression data are provided to bigSCale (performing clustering and differential expression analysis), which converts measured correlations between genes from expression values to Z-scores, followed by retaining significant correlations to guide the building of a regulatory network. Visualization is performed using NetBioV. Deep Transfer Learning: Transfer learning applying feature extraction with a new classifier (TFe) to distinguish between T2D and healthy control SCGRNs.

Deep transfer learning

Figure  1 demonstrates how our deep transfer learning (DTL) approach is performed. First, we adapt the following pre-trained models: VGG19, DenseNet201 25 , InceptionV3 27 , ResNet50V2 28 , ResNet101V2 28 , SEResNet152 29 , and SEResNeXT101 29 . Each pre-trained model has a feature extraction part (i.e., a series of convolutional and pooling layers) and a densely connected classifier. Then, we keep the weights of the feature extraction part of a pre-trained model unchanged and replace the densely connected classifier to deal with binary classification instead of 1000 classes. Therefore, when feeding in the SCGRN image dataset, we extract features using the weights of the pre-trained models while training the densely connected classifier from scratch and performing prediction. We refer to models using this type of DTL computation as TFeVGG19, TFeDenseNet201, TFeInceptionV3, TFeResNet50V2, TFeResNet101V2, TFeSEResNet152, and TFeSEResNeXT101 (see Fig.  1 ). For the other type of DTL computation, we keep the weights of the bottom layers of the feature extraction part unchanged while training from scratch the top layers of the feature extraction part and the densely connected layers.
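As a minimal illustration of the TFe computation (a numpy sketch, not the actual pre-trained Keras models used in this study), a fixed random projection stands in for the frozen convolutional base while a small densely connected classifier is trained from scratch on the extracted features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extraction part": a fixed random projection standing in for
# the convolutional base of a pre-trained model; its weights are never updated.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)   # scaled for stability
W_snapshot = W_frozen.copy()

def extract_features(x):
    # ReLU activations mimic the output of convolutional/pooling layers.
    return np.maximum(x @ W_frozen, 0.0)

# Toy stand-in for the SCGRN image dataset (each "image" flattened to 64 values).
X = rng.normal(size=(200, 64))
F = extract_features(X)
y = (F[:, 0] > np.median(F[:, 0])).astype(float)     # toy binary labels

# Densely connected classifier trained from scratch: only w and b change.
w, b, lr = np.zeros(F.shape[1]), 0.0, 0.1

def log_loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

initial_loss = log_loss(w, b)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))           # sigmoid output
    w -= lr * F.T @ (p - y) / len(y)                 # gradient step on the head only
    b -= lr * (p - y).mean()
final_loss = log_loss(w, b)
```

Only the classifier weights w and b are updated during training; the transferred weights W_frozen remain exactly as they were before training.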

As in the TFe-based models, we modify the densely connected classifier to deal with the binary classification problem before performing the training phase. As seen in Fig.  2 , we refer to models employing this type of deep transfer learning as TFtVGG19, TFtDenseNet201, TFtInceptionV3, TFtResNet50V2, TFtResNet101V2, TFtSEResNet152, and TFtSEResNeXT101.
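The TFt idea of selective freezing can be sketched with a hypothetical list of layers, where a flag controls which weights an update step may touch (the gradient here is a placeholder, not a real backpropagated gradient):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of layer weight matrices standing in for a pre-trained
# network: bottom layers are frozen, top layers and the classifier are trainable.
layers = [
    {"name": "bottom_conv", "W": rng.normal(size=(8, 8)), "trainable": False},
    {"name": "top_conv",    "W": rng.normal(size=(8, 8)), "trainable": True},
    {"name": "classifier",  "W": rng.normal(size=(8, 1)), "trainable": True},
]
snapshots = [layer["W"].copy() for layer in layers]

# One illustrative update step: only trainable layers are modified.
lr = 0.01
for layer in layers:
    if layer["trainable"]:
        fake_gradient = np.ones_like(layer["W"])  # placeholder gradient
        layer["W"] -= lr * fake_gradient

frozen_unchanged = np.array_equal(layers[0]["W"], snapshots[0])
top_changed = not np.array_equal(layers[1]["W"], snapshots[1])
head_changed = not np.array_equal(layers[2]["W"], snapshots[2])
```

After the step, the bottom layer still carries the transferred weights, while the top layer and classifier have moved, which is exactly the TFt weight-update pattern.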

figure 2

Transfer learning applying fine tuning with new classifier (TFt) to distinguish between T2D and healthy control SCGRNs.

When changing weights during training, we employed three optimizers: Adam, RMSprop, and SGD 30 . Weights that are kept unchanged correspond to knowledge transferred from the pre-trained models, which were trained using the SGD optimizer. In terms of predictions of unseen SCGRN images, a prediction is mapped to a healthy control subject if the predicted value is greater than 0.5; otherwise, it is mapped to T2D.
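The 0.5 thresholding rule can be written directly; the label strings below are illustrative:

```python
def map_prediction(p: float) -> str:
    """Map a model output in [0, 1] to a class label: values greater
    than 0.5 are healthy controls, all other values are T2D."""
    return "healthy control" if p > 0.5 else "T2D"
```

Note that a prediction of exactly 0.5 falls on the T2D side of the rule as stated.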

Classification methodology

In this study, we considered seven pre-trained models, namely VGG19, DenseNet201, InceptionV3, ResNet50V2, ResNet101V2, SEResNet152, and SEResNeXT101. Each pre-trained model was trained on 1.28 million images from the ImageNet database to classify images into 1000 different categories. In the TFe-based models, the feature extraction part of the pre-trained models was kept unchanged and used to extract features from the SCGRN images, while the densely connected classifier was trained from scratch to handle the binary classification problem. In the TFt-based models, we trained the top layers and the densely connected classifier from scratch while keeping the weights of the bottom layers of the feature extraction part unchanged. For both TFt-based and TFe-based models, we employed the Adam optimizer when updating layer weights. Moreover, we compared the performance of our deep transfer learning approaches using different optimizers, including the baseline (i.e., the RMSprop optimizer), as well as against models trained from scratch. We set the optimization parameters as follows: 0.00001 for the learning rate, 10 for the number of epochs, and 32 for the batch size. In terms of the loss function, we utilized categorical cross-entropy 31 .

To assess the performance of studied models, we employed Balanced Accuracy (BAC), Accuracy (ACC), Precision (PRE), Recall (REC), and F1 computed as follows:

where TN designates true negative, corresponding to the number of T2D images that were correctly predicted as T2D. FP designates false positive, corresponding to the number of T2D images that were incorrectly predicted as healthy controls. TP designates true positive, corresponding to the number of healthy control images that were correctly predicted as healthy controls. FN designates false negative, corresponding to the number of healthy control images that were incorrectly predicted as T2D.
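Since the metric equations themselves are omitted above, the standard definitions, with healthy controls as the positive class as described, can be sketched as:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard classification measures, treating healthy control as the
    positive class and T2D as the negative class."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    pre = tp / (tp + fp)                    # precision
    rec = tp / (tp + fn)                    # recall (sensitivity)
    spec = tn / (tn + fp)                   # specificity
    bac = (rec + spec) / 2                  # balanced accuracy
    f1 = 2 * pre * rec / (pre + rec)        # harmonic mean of PRE and REC
    return {"ACC": acc, "BAC": bac, "PRE": pre, "REC": rec, "F1": f1}
```

For a balanced dataset such as the one used here, ACC and BAC take similar values; BAC remains informative when the class counts in a test fold are uneven.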

To evaluate the results on the whole SCGRN image dataset, we employed five-fold cross-validation as follows. We partitioned the SCGRN image dataset by randomly assigning images to 5 folds. During the first run of five-fold cross-validation, we used 4 of the folds to train our deep learning models, performed predictions on the remaining fold for testing, and recorded the performance results. This process was repeated for an additional 4 runs, with performance results recorded each time. Finally, we report the average performance results across the five runs.
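The cross-validation procedure can be sketched in plain Python (the model training and evaluation inside the loop is left as a placeholder):

```python
import random

def five_fold_indices(n_images: int = 224, seed: int = 0):
    """Randomly shuffle image indices and partition them into 5 folds."""
    indices = list(range(n_images))
    random.Random(seed).shuffle(indices)
    return [indices[i::5] for i in range(5)]

folds = five_fold_indices()
runs = []
for test_fold in range(5):
    train_idx = [i for f in range(5) if f != test_fold for i in folds[f]]
    test_idx = folds[test_fold]
    # Placeholder for the real pipeline: train the DTL model on train_idx,
    # predict on test_idx, and record the performance results of this run.
    runs.append((len(train_idx), len(test_idx)))
```

Each of the 224 images appears in exactly one test fold, so the averaged results cover the whole dataset.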

Implementation details

All experiments were run on the central processing unit (CPU) runtime offered by Google Colab, equipped with a two-core Intel Xeon processor at 2.30 GHz and 13 GB of RAM, with Python 3.10.11 installed. For the analysis of models, we used the R statistical software 32 and utilized the optimg package in R to run the Adam optimizer 33 . All plots were produced using the Matplotlib package in Python 34 .

Classification results

Training results.

In Fig.  3 , we illustrate the training accuracy results obtained when running five-fold cross-validation. It can be seen that our models outperformed all models trained from scratch. Specifically, TFeVGG19 and TFtVGG19 achieved average accuracies of 0.976 and 0.962, respectively, while VGG19 achieved an average accuracy of 0.530. TFeDenseNet201 outperformed DenseNet201, achieving an average accuracy of 0.988, while DenseNet201 performed better than TFtDenseNet201, achieving an average accuracy of 0.982 compared to 0.946. The TFe- and TFt-based models coupled with ResNet101V2, SEResNet152, and SEResNeXT101 outperformed their counterparts that did not apply deep transfer learning (DTL). These superior results are attributed to the representations learned via transfer learning.

figure 3

The boxplots present the average five-fold cross-validation results using the ACC measure for the training folds. ( a ) Deep transfer learning models using feature extraction (referred to with the prefix TFe). ( b ) Deep transfer learning models using fine tuning (referred to with the prefix TFt). ( c ) Deep learning models trained from scratch. ACC is accuracy.

Testing results

Figures  4 and 5 report the generalization (i.e., test) accuracy performance results and combined confusion matrices, respectively, when five-fold cross-validation is utilized. TFeSEResNeXT101 achieved the highest average accuracy of 0.968.

figure 4

The boxplots present the average five-fold cross-validation results using the ACC measure for the testing folds. ( a ) Deep transfer learning models using feature extraction (referred to with the prefix TFe). ( b ) Deep transfer learning models using fine tuning (referred to with the prefix TFt). ( c ) Deep learning models trained from scratch. ACC is accuracy.

figure 5

Combined confusion matrices for all methods during the running of five-fold cross-validation.

The second-best model is TFeDenseNet201, achieving an average accuracy of 0.958, followed by TFeVGG19, TFeResNet50V2, TFeSEResNet152, TFeInceptionV3, and TFeResNet101V2 (generating average accuracies of 0.946, 0.940, 0.936, 0.930, and 0.918, respectively). TFt-based models also outperformed all models trained from scratch (see Fig.  4 b,c). In particular, TFt-based models generated average accuracies between 0.864 and 0.916, while models trained from scratch generated average accuracies between 0.468 and 0.590. These results demonstrate the superior performance of models employing our DTL computations.

In terms of testing performance results using different metrics, our model TFeSEResNeXT101 outperforms all other models (see Table 1 ), achieving an average BAC of 0.97, an average PRE of 0.97 (tied with our model TFeSEResNet152), and an average F1 of 0.97. Moreover, TFeVGG19 and TFtVGG19 perform better than VGG19. Similarly, TFeDenseNet201, TFeInceptionV3, TFeResNet50V2, TFeResNet101V2, and TFeSEResNet152 performed better than DenseNet201, InceptionV3, ResNet50V2, ResNet101V2, and SEResNet152, respectively. The same holds true for the TFt-based models outperforming their counterparts (i.e., VGG19, DenseNet201, InceptionV3, ResNet50V2, ResNet101V2, SEResNet152, and SEResNeXT101).

Table 2 reports our best DTL models with different optimizers. It can be seen that TFeSEResNeXT101 and TFtSEResNeXT101 generate the highest performance results when coupled with the Adam optimizer. Specifically, TFeSEResNeXT101 with the Adam optimizer generates the highest average BAC (and F1) of 0.97 (and 0.97). TFtSEResNeXT101 with the Adam optimizer achieves the highest average BAC of 0.91 and the highest average F1 of 0.90 (tied with the SGD optimizer). When TFeSEResNeXT101 and TFtSEResNeXT101 are coupled with the SGD optimizer, they generate inferior performance results.

In Table 3 , we compare our model TFeVGG19 with the Adam optimizer against the best-performing baseline, TFeVGG19 with the RMSprop optimizer, named VGG19 in 19 . It is evident that our model TFeVGG19 with the Adam optimizer achieves the highest average BAC of 0.94, while the baseline obtained an average BAC of 0.86. Moreover, when the F1 performance measure is considered, TFeVGG19 with the Adam optimizer attains the highest average F1 of 0.94, while the baseline achieved an average F1 of 0.88. The same holds true for TFtVGG19, which achieved the highest average BAC of 0.91 and the highest average F1 of 0.90.

In Fig.  6 , we report the running time in seconds for running five-fold cross-validation with our best model (TFeSEResNeXT101) and TFtSEResNeXT101 compared to their peer SEResNeXT101. Our model TFeSEResNeXT101 is 208.45 × faster than SEResNeXT101, and TFtSEResNeXT101 is 3.82 × faster than SEResNeXT101. Moreover, TFeVGG19 and TFtVGG19 are 802.67 × and 2.53 × faster than VGG19, respectively. These results demonstrate the computational efficiency of the DTL models, in addition to their high prediction performance.

figure 6

Running time comparisons in seconds for selected models when running five-fold cross-validation.

Model introspection

Stochastic gradient descent (SGD).

To minimize the objective function \(Q({\theta }_{0},{\theta }_{1})\) for the parameters \({\theta }_{0} \ {\text{and}} \ {\theta }_{1}\) of the model \(H\left({x}_{i}\right)\) , we employ gradient descent optimization algorithms to find the minimizing \({\theta }_{0} \ {\text{and}} \ {\theta }_{1}\) . The optimization problem can be formulated as follows:

We utilize the SGD, RMSprop, and Adam optimization algorithms to minimize the objective function and estimate the model parameters. For SGD, we initialize the parameters \({\theta }_{0} \ {\text{and}} \ {\theta }_{1}\) according to the uniform distribution \(U\left(\mathrm{0,1}\right)\) , set the learning rate \(\eta =0.001\) , and set the maximum number of iterations to 3000. Then, at each iteration, we shuffle the data of m examples and loop m times over the following updates to the model parameters. After the loop ends, the algorithm stops if the maximum number of iterations is reached or \(\Vert \nabla Q\left({\theta }_{0},{\theta }_{1}\right)\Vert \le 0.001\) :
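The per-example update equations do not survive in the text above; the following sketch assumes a mean squared error objective for the two-parameter model H(x) = θ0 + θ1·x, with the stated U(0,1) initialization, η = 0.001, 3000-iteration cap, and gradient-norm stopping rule:

```python
import random

def sgd_fit(xs, ys, lr=0.001, max_iter=3000, tol=1e-3, seed=0):
    """Per-example SGD for H(x) = t0 + t1*x under an assumed squared-error objective."""
    rng = random.Random(seed)
    t0, t1 = rng.random(), rng.random()          # U(0, 1) initialization
    m = len(xs)
    data = list(zip(xs, ys))
    for _ in range(max_iter):
        rng.shuffle(data)                        # shuffle the m examples
        for x, y in data:                        # loop m times over the updates
            err = (t0 + t1 * x) - y
            t0 -= lr * err                       # gradient of the per-example error w.r.t. t0
            t1 -= lr * err * x                   # gradient of the per-example error w.r.t. t1
        # stopping rule: full-gradient norm small enough
        g0 = sum((t0 + t1 * x) - y for x, y in data) / m
        g1 = sum(((t0 + t1 * x) - y) * x for x, y in data) / m
        if (g0 * g0 + g1 * g1) ** 0.5 <= tol:
            break
    return t0, t1

# Hypothetical noiseless target y = 2 + 3x for demonstration.
rng = random.Random(42)
xs = [rng.random() for _ in range(100)]
ys = [2 + 3 * x for x in xs]
t0, t1 = sgd_fit(xs, ys)
```

On this noiseless example, the recovered parameters land close to the generating values (2, 3).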

Root mean square propagation (RMSprop)

For RMSprop, we initialize the model parameters according to the uniform distribution \(U\left(\mathrm{0,1}\right)\) , set the learning rate \(\eta =0.001\) , the maximum number of iterations to 3000, \(\beta =0.9\) , \(\epsilon ={10}^{-6}\) , and BatchSize = 16, referred to in the following as | S |. Then, we loop to update the model parameters according to each selected batch. After looping over all selected batches, the algorithm terminates when \(\Vert \nabla Q\left({\theta }_{0},{\theta }_{1}\right)\Vert \le 0.001\) or the maximum number of iterations is reached:
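Under the same assumed squared-error objective, the mini-batch RMSprop loop with the stated β = 0.9, ε = 1e-6, and |S| = 16 can be sketched as:

```python
import random

def rmsprop_fit(xs, ys, lr=0.001, max_iter=3000, tol=1e-3,
                beta=0.9, eps=1e-6, batch_size=16, seed=0):
    """Mini-batch RMSprop for H(x) = t0 + t1*x under an assumed squared-error objective."""
    rng = random.Random(seed)
    t0, t1 = rng.random(), rng.random()          # U(0, 1) initialization
    v0 = v1 = 0.0                                # running averages of squared gradients
    data = list(zip(xs, ys))
    m = len(data)
    for _ in range(max_iter):
        rng.shuffle(data)
        for start in range(0, m, batch_size):    # loop over batches of size |S|
            batch = data[start:start + batch_size]
            g0 = sum((t0 + t1 * x) - y for x, y in batch) / len(batch)
            g1 = sum(((t0 + t1 * x) - y) * x for x, y in batch) / len(batch)
            v0 = beta * v0 + (1 - beta) * g0 * g0
            v1 = beta * v1 + (1 - beta) * g1 * g1
            t0 -= lr * g0 / ((v0 + eps) ** 0.5)  # gradient scaled by RMS of past gradients
            t1 -= lr * g1 / ((v1 + eps) ** 0.5)
        G0 = sum((t0 + t1 * x) - y for x, y in data) / m
        G1 = sum(((t0 + t1 * x) - y) * x for x, y in data) / m
        if (G0 * G0 + G1 * G1) ** 0.5 <= tol:
            break
    return t0, t1

rng = random.Random(7)
xs = [rng.random() for _ in range(100)]
ys = [2 + 3 * x for x in xs]                     # hypothetical noiseless target
t0, t1 = rmsprop_fit(xs, ys)
```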

Adaptive moment estimation (Adam)

In terms of Adam, \({{m}_{\theta }}_{0}, {{m}_{\theta }}_{1}, {{v}_{\theta }}_{0}, {{ v}_{\theta }}_{1},{\beta }_{1}, {\beta }_{2}, \epsilon , \text{and} \ \eta\) are initialized as in 35 . Then, we loop to update the model parameters over all selected batches of examples. After looping over all selected batches, the algorithm stops if the maximum number of iterations (i.e., 3000) is reached or \(\Vert \nabla Q\left({\theta }_{0},{\theta }_{1}\right)\Vert \le 0.001\) :
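The Adam loop, with the default moment initialization and hyperparameters of 35 (β1 = 0.9, β2 = 0.999, ε = 1e-8) and the same assumed squared-error objective, can be sketched as:

```python
import random

def adam_fit(xs, ys, lr=0.001, max_iter=3000, tol=1e-3,
             beta1=0.9, beta2=0.999, eps=1e-8, batch_size=16, seed=0):
    """Mini-batch Adam (defaults as in Kingma & Ba) for H(x) = t0 + t1*x."""
    rng = random.Random(seed)
    t0, t1 = rng.random(), rng.random()          # U(0, 1) initialization
    m0 = m1 = v0 = v1 = 0.0                      # first/second moment estimates
    data = list(zip(xs, ys))
    n, t = len(data), 0
    for _ in range(max_iter):
        rng.shuffle(data)
        for start in range(0, n, batch_size):
            batch = data[start:start + batch_size]
            g0 = sum((t0 + t1 * x) - y for x, y in batch) / len(batch)
            g1 = sum(((t0 + t1 * x) - y) * x for x, y in batch) / len(batch)
            t += 1
            m0 = beta1 * m0 + (1 - beta1) * g0   # biased first moments
            m1 = beta1 * m1 + (1 - beta1) * g1
            v0 = beta2 * v0 + (1 - beta2) * g0 * g0   # biased second moments
            v1 = beta2 * v1 + (1 - beta2) * g1 * g1
            mh0, mh1 = m0 / (1 - beta1 ** t), m1 / (1 - beta1 ** t)  # bias correction
            vh0, vh1 = v0 / (1 - beta2 ** t), v1 / (1 - beta2 ** t)
            t0 -= lr * mh0 / (vh0 ** 0.5 + eps)
            t1 -= lr * mh1 / (vh1 ** 0.5 + eps)
        G0 = sum((t0 + t1 * x) - y for x, y in data) / n
        G1 = sum(((t0 + t1 * x) - y) * x for x, y in data) / n
        if (G0 * G0 + G1 * G1) ** 0.5 <= tol:
            break
    return t0, t1

rng = random.Random(7)
xs = [rng.random() for _ in range(100)]
ys = [2 + 3 * x for x in xs]                     # hypothetical noiseless target
t0, t1 = adam_fit(xs, ys)
```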

Simulated data

To demonstrate the efficiency of the proposed deep transfer learning (DTL) models incorporating mixed parameters derived from both SGD and Adam, we conducted simulation studies to explain the superiority of the proposed models as well as to imitate their numerical behavior. In particular, we consider the following four predictive models:

where \(X\sim U(\mathrm{0,1})\) and \(\epsilon \sim N\left(\mathrm{0,0.2}\right)\) in which \(U()\) and N () are uniform and normal distributions, respectively. For F 1 , when we have X and Y  = F 1 ( X ), we perform the following steps. Let \({H}_{SGD}\left({x}_{i}\right)={\theta }_{0}+{\theta }_{1}{x}_{i}\) (for i  = 1.. m ) be the model in which we want to estimate parameters using ( X , Y ) data from F 1 coupled with Eq. ( 7 ). Similarly, let \({H}_{RMSprop}\left({x}_{i}\right) \text{and} \ {H}_{Adam}\left({x}_{i}\right)\) be models in which we want to estimate their parameters using Eqs. ( 8 ) and ( 9 ), respectively, coupled with ( X , Y ) data from F 1 . Then, we provide each \({x}_{i}\in X\) to perform predictions corresponding to \({y}_{i}^{\prime}\) . In Fig.  7 , we report 2D plots for X and predicted \({Y}^{\prime}=\{{y}_{1}^{\prime},\dots ,{y}_{m}^{\prime}\}\) via each model using data generated according to Eq. ( 10 ), where SGD refers to plotting \(({x}_{i},{H}_{SGD}\left({x}_{i}\right))\) while Adam and RMSprop refer to plotting results obtained via \(({x}_{i},{H}_{Adam}\left({x}_{i}\right))\) and \(({x}_{i},{H}_{RMSprop}\left({x}_{i}\right))\) , respectively, and i  = 1.. m . We then repeat this process for an additional 8 runs. Therefore, we have 9 runs in total.

figure 7

Plots for the three models as \(({x}_{i},{H}_{SGD}\left({x}_{i}\right))\) , \(({x}_{i},{H}_{Adam}\left({x}_{i}\right))\) , and \(({x}_{i},{H}_{RMSprop}\left({x}_{i}\right))\) for i  = 1.. m according to \({x}_{i}\) generated using F 1 .

It can be seen from Fig.  7 that model induced via RMSprop has more distributional differences compared to those obtained via SGD and Adam. To quantify distributional differences between SGD and Adam against SGD and RMSprop, we perform the following computations:

where d SA measures the distance between data associated with SGD and Adam. Similarly, d SR measures the distance between data associated with SGD and RMSprop. The lower the distance value, the smaller the distributional difference. Figure  11 a plots d SA and d SR for the 9 runs. It can be seen that \({H}_{Adam}\left({x}_{i}\right)\) is closer to \({H}_{SGD}\left({x}_{i}\right)\) than \({H}_{RMSprop}\left({x}_{i}\right)\) is in most runs. Moreover, the distributional differences are statistically significant ( P -value = \(7.28 \times {10}^{-14}\) from t -test).
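The distance computations themselves are omitted above; one plausible instantiation (our assumption, not the paper's stated formula) is the Euclidean distance between the prediction vectors of the induced models:

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length prediction vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def conformance_distances(h_sgd, h_adam, h_rmsprop):
    """d_SA: SGD vs Adam predictions; d_SR: SGD vs RMSprop predictions.
    Smaller values indicate smaller distributional differences."""
    d_sa = euclidean(h_sgd, h_adam)
    d_sr = euclidean(h_sgd, h_rmsprop)
    return d_sa, d_sr
```

Under this reading, d_SA < d_SR over a run means the Adam-induced model conforms more closely to the SGD-induced model than the RMSprop-induced model does.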

These results demonstrate the conformance of the weight parameters of models utilizing the Adam and SGD optimizers. Figure  8 reports the 2D plots of the three induced models as \(({x}_{i},{H}_{SGD}\left({x}_{i}\right))\) , \(({x}_{i},{H}_{Adam}\left({x}_{i}\right))\) , and \(({x}_{i},{H}_{RMSprop}\left({x}_{i}\right))\) for i  = 1.. m using data generated according to Eq. ( 11 ) (i.e., F 2 ) 36 . It can be clearly seen that the data distribution of results via SGD is closer to that of Adam than to results obtained with the help of RMSprop. In Fig.  11 b, we quantify the distributional differences using Eqs. ( 14 ) and ( 15 ). It can be seen that Adam is closer to SGD (shown as AdamSGD) than RMSprop is to SGD (i.e., RMSpropSGD) over the 9 runs. The AdamSGD quantification corresponds to d SA , while RMSpropSGD corresponds to d SR . In addition, the distributional differences between AdamSGD and RMSpropSGD are statistically significant ( P -value = \(7.01\times {10}^{-7}\) from t -test).

figure 8

Plots for the three models as \(({x}_{i},{H}_{{\text{SGD}}}\left({x}_{i}\right))\) , \(({x}_{i},{H}_{{\text{Adam}}}\left({x}_{i}\right))\) , and \(({x}_{i},{H}_{{\text{RMSprop}}}\left({x}_{i}\right))\) for i  = 1.. m according to \({x}_{i}\) generated using F 2 .

Figures  9 and 10 report 2D plots of \(({x}_{i},{H}_{SGD}\left({x}_{i}\right))\) , \(({x}_{i},{H}_{Adam}\left({x}_{i}\right))\) , and \(({x}_{i},{H}_{RMSprop}\left({x}_{i}\right))\) for i  = 1.. m using data generated according to Eqs. ( 12 ) (F 3 ) and ( 13 ) (F 4 ) 37 , where models were induced with the SGD, Adam, and RMSprop optimizers. The alignment of Adam with SGD shows that Adam has a closer data representation to SGD than RMSprop does. When quantifying the data distributional differences in Fig.  11 c and d, it can be clearly seen that the distributional differences between SGD and Adam (referred to as AdamSGD) are smaller than those between SGD and RMSprop over the 9 runs.

figure 9

Plots for the three models as \(({x}_{i},{H}_{{\text{SGD}}}\left({x}_{i}\right))\) , \(({x}_{i},{H}_{{\text{Adam}}}\left({x}_{i}\right))\) , and \(({x}_{i},{H}_{{\text{RMSprop}}}\left({x}_{i}\right))\) for i  = 1.. m according to \({x}_{i}\) generated using F 3 .

figure 10

Plots for the three models as \(({x}_{i},{H}_{{\text{SGD}}}\left({x}_{i}\right))\) , \(({x}_{i},{H}_{{\text{Adam}}}\left({x}_{i}\right))\) , and \(({x}_{i},{H}_{{\text{RMSprop}}}\left({x}_{i}\right))\) for i  = 1.. m according to \({x}_{i}\) generated using F 4 .

figure 11

Boxplots of the four studied models, F 1 -F 4 , showing the distance distribution over nine runs for AdamSGD and RMSpropSGD. ( a ) results for F 1 . ( b ) results for F 2 . ( c ) results for F 3 . ( d ) results for F 4 .

These quantified results for AdamSGD and RMSpropSGD correspond to d SA and d SR , respectively. Additionally, the distributional differences between AdamSGD and RMSpropSGD were statistically significant ( P -value = \(3.48\times {10}^{-12}\) from t -test when F 3 is used, while P -value = \(1.49\times {10}^{-3}\) from t -test when F 4 is used). These results demonstrate the stable performance when SGD is coupled with Adam.

Our deep transfer learning (DTL) models work as follows. In the TFe-based models, the convolutional base (also called the feature extraction part) of the pre-trained model is left unchanged while the densely connected classifier is modified to deal with the binary classification task at hand. Therefore, we applied the feature extraction part of the pre-trained models to the SCGRN images to extract features, followed by a flattening step, and trained the densely connected classifier from scratch. Note that only the weights of the densely connected classifier are changed according to the Adam optimizer, while the knowledge (i.e., the weights) of the feature extraction part is transferred from the pre-trained models. In the TFt-based models, we keep the weights of the bottom layers of the feature extraction part of the pre-trained models unchanged while modifying the weights of the subsequent layers, including the densely connected classifier, according to the Adam optimizer. Moreover, the densely connected classifier was altered to deal with the binary classification problem of distinguishing between healthy control and T2D SCGRN images. In both cases, model weight parameters are updated during training with the use of the Adam optimizer.

When conducting the experimental study to assess the deep transfer learning models, we used different optimizers, including SGD, RMSprop, and Adam. With the SGD optimizer, the model weight parameters of our DTL models differed from those of DTL models coupled with the RMSprop and Adam optimizers. Therefore, we induced three sets of models corresponding to the three optimizers. Experimental results demonstrate the superiority of DTL models combining the SGD and Adam optimizers over those combining the SGD and RMSprop optimizers. We report the training loss per epoch for TFe-based and TFt-based models when running five-fold cross-validation in Supplementary Fig. S1 . For each DTL model, the number of layers, including frozen and unfrozen layers, is reported in Supplementary Table S1 .

In our study, the mitigation of overfitting is attributed to (1) transfer learning, in which many layers of the DTL models are frozen, thereby reducing the number of trainable parameters; and (2) applying dropout to the last fully connected layer, with the dropout rate set to 0.5 38 . It is worth mentioning that we also assessed the performance of other deep learning (DL) models such as ConvNeXtTiny and ConvNeXtLarge 39 . Although ConvNeXtLarge outperformed ConvNeXtTiny, neither exhibited superior performance compared to TFeSEResNeXT101. We therefore include their performance results in Supplementary Tables S2 and S3 (and Supplementary Fig. S2 ). In terms of running time, TFeSEResNeXT101 was 1.54 × faster than TFeConvNeXtLarge. We report the running times for ConvNeXtTiny-based and ConvNeXtLarge-based models in Supplementary Fig. S3 .

In our DTL models, we transferred weights from pre-trained models and coupled them with weights obtained with the help of the Adam optimizer. If the weight parameters obtained using two optimizers are close, the two models behave almost identically; when they are not close, the two models behave differently. To mimic this real scenario and investigate the effects of coupling different model weight parameters, we performed a simulation study. In Figs.  7 , 8 , 9 , 10 and 11 , we showed that a model induced with the help of the SGD optimizer is closer to a model induced with the Adam optimizer than to a model induced with the help of the RMSprop optimizer. It is evident from the visualized results in our study that SGD and Adam had smaller distributional differences than SGD and RMSprop. This resembles the case of having two related datasets for SGD and Adam versus unrelated datasets for SGD and RMSprop. As a result, the inferior performance of models utilizing RMSprop is attributed to the large distributional differences in model weight parameters.

It is worth noting that our DTL models keep the weights of many layers unchanged. Therefore, when we trained our models, far fewer weights were updated than in models trained from scratch. As seen in Fig.  6 , our DTL models are fast and can be adopted in mobile applications. It can be noticed from Tables 2 and 3 that leveraging source-task knowledge contributed to improved prediction performance when coupled with weight parameters updated in the target task using the Adam optimizer. On the other hand, the transferred knowledge from the source task contributed to degraded performance when coupled with weight parameters updated in the target task using the RMSprop and SGD optimizers. Also, when we assessed additional DL models such as ConvNeXtLarge and ConvNeXtTiny, the knowledge transfer maintained the same performance behavior, in which leveraging source-domain knowledge coupled with updated weights in the target task remained the best (see Supplementary Tables S2 and S3 ).

Conclusions and future work

In this paper, we present and analyze deep transfer learning (DTL) models for the task of classifying 224 SCGRN images pertaining to healthy controls and T2D patients. First, we utilized seven pre-trained models (including SEResNet152 and SEResNeXT101) already trained on more than a million images from the ImageNet dataset. Then, we left the weights of the convolutional base (i.e., the feature extraction part) unchanged, thereby transferring knowledge from the pre-trained models, while modifying the densely connected classifier with the use of the Adam optimizer to discriminate between healthy and T2D SCGRN images. The other presented DTL models work as follows: we kept the weights of the bottom layers of the feature extraction part of the pre-trained model unchanged while modifying the subsequent layers, including the densely connected classifier, with the use of the Adam optimizer. Experimental results on the whole 224 SCGRN image dataset using five-fold cross-validation demonstrate the superiority of TFeSEResNeXT101, achieving the highest average BAC of 0.97 and thereby significantly surpassing the performance of the baseline, which resulted in an average BAC of 0.86. Furthermore, our simulation study showed that the highly accurate performance of our models is attributed to the distributional conformance of the weights obtained with the use of the Adam optimizer with the weights of the pre-trained models.

Future work includes (1) adopting our computational framework to analyze DTL models with different network topologies and thereby identifying the best practice for DTL; (2) incorporating multi-omics datasets with images to improve the prediction performance using DTL models; (3) developing a boosting mechanism to improve the performance of DTL models in different biological problems 40 , 41 ; (4) incorporating feature representation obtained via our DTL models with machine learning algorithms for the task of inferring SCGRNs; and (5) utilizing our framework to speed up the learning process, e.g., TFeVGG19 was 802.67 × faster than VGG19, trained from scratch.

Data availability

The dataset analyzed during the current study is available in the dataset folder within supplementary material at https://www.biorxiv.org/content/10.1101/2020.08.30.273839v1.supplementary-material . The single-cell gene expression data is available in the ArrayExpress repository under accession number E-MTAB-5061 ( https://www.ebi.ac.uk/biostudies/arrayexpress/studies/E-MTAB-5061 ).

Hemerich, D. et al. Effect of tissue-grouped regulatory variants associated to type 2 diabetes in related secondary outcomes. Sci. Rep. 13 (1), 3579 (2023).


Xie, D. et al. Global burden and influencing factors of chronic kidney disease due to type 2 diabetes in adults aged 20–59 years, 1990–2019. Sci. Rep. 13 (1), 20234 (2023).


Parker, E. D. et al. Economic costs of diabetes in the US in 2022. Diabetes Care 47 (1), 26–43 (2024).


Mohsen, F. et al. A scoping review of artificial intelligence-based methods for diabetes risk prediction. NPJ Dig. Med. 6 (1), 197 (2023).


Su, X. et al. Ten metabolites-based algorithm predicts the future development of type 2 diabetes in Chinese. J. Adv. Res. https://doi.org/10.1016/j.jare.2023.11.026 (2023).


He, Y. et al. Comparisons of polyexposure, polygenic, and clinical risk scores in risk prediction of type 2 diabetes. Diabetes Care 44 (4), 935–943 (2021).

Edlitz, Y. & Segal, E. Prediction of type 2 diabetes mellitus onset using logistic regression-based scorecards. Elife 11 , e71862 (2022).


Kokkorakis, M. et al. Effective questionnaire-based prediction models for type 2 diabetes across several ethnicities: A model development and validation study. EClinicalMedicine 64 , 102235 (2023).

Pyrros, A. et al. Opportunistic detection of type 2 diabetes using deep learning from frontal chest radiographs. Nat. Commun. 14 (1), 4039 (2023).

Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. in 3rd International Conference on Learning Representations (2015).

Wachinger, C., Wolf, T. N. & Pölsterl, S. Deep learning for the prediction of type 2 diabetes mellitus from neck-to-knee Dixon MRI in the UK biobank. Heliyon 9 (11), e22239 (2023).

Das, B. A deep learning model for identification of diabetes type 2 based on nucleotide signals. Neural Comput. Appl. 34 (15), 12587–12599 (2022).

He, K. et al . Deep residual learning for image recognition. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . (2016).

Naveed, I. et al. Artificial intelligence with temporal features outperforms machine learning in predicting diabetes. PLOS Dig. Health 2 (10), e0000354 (2023).

Bengio, Y., Goodfellow, I. & Courville, A. Deep Learning Vol. 1 (MIT Press, 2017).


Wu, D. et al. Multi-feature map integrated attention model for early prediction of type 2 diabetes using irregular health examination records. IEEE J. Biomed. Health Inform. 65 , 1–10 (2023).

Shu, H. et al. Modeling gene regulatory networks using neural network architectures. Nat. Comput. Sci. 1 (7), 491–501 (2021).

Badia-i-Mompel, P. et al. Gene regulatory network inference in the era of single-cell multi-omics. Nat. Rev. Genet. 24 , 739–754 (2023).


Turki, T. & Taguchi, Y. H. Discriminating the single-cell gene regulatory networks of human pancreatic islets: A novel deep learning application. Comput. Biol. Med. 132 , 104257 (2021).

Iacono, G. et al. bigSCale: An analytical framework for big-scale single-cell data. Genome Res. 28 (6), 878–890 (2018).

Iacono, G., Massoni-Badosa, R. & Heyn, H. Single-cell transcriptomics unveils gene regulatory network plasticity. Genome Biol. 20 (1), 110 (2019).

Tripathi, S., Dehmer, M. & Emmert-Streib, F. NetBioV: An R package for visualizing large network data in biology and medicine. Bioinformatics 30 (19), 2834–2836 (2014).

Simonyan, K. & Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition , in 3rd International Conference on Learning Representations (ICLR) . (2015).

Chollet, F. Xception: Deep learning with depthwise separable convolutions. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . (2017).

Huang, G. et al . Densely connected convolutional networks. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . (2017).

Segerstolpe, Å. et al. Single-cell transcriptome profiling of human pancreatic islets in health and type 2 diabetes. Cell Metab. 24 (4), 593–607 (2016).

Szegedy, C. et al . Rethinking the inception architecture for computer vision. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . (2016).

He, K. et al . Identity mappings in deep residual networks. in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 . (Springer, 2016).

Hu, J., Shen, L. & Sun, G. Squeeze-and-excitation networks. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . (2018).

Bottou, L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade 2nd edn 421–436 (Springer, 2012).

Chapter   Google Scholar  

Chollet, F. Deep Learning with Python (Manning Publications Co., 2017).

RC Team. R: A language and environment for statistical computing. J. Stat. Softw. 25 (1), 1–10 (2008).

Franco, V. R. Optimg: General-Purpose Gradient-Based Optimization . (2021).

Hunter, J. D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 9 (03), 90–95 (2007).

Ruder, S. An Overview of Gradient Descent Optimization Algorithms. arXiv:1609.04747 (2016).

Currin, C. et al. A Bayesian Approach to the Design and Analysis of Computer Experiments (Oak Ridge National Lab, 1988).

Book   Google Scholar  

Forrester, A., Sobester, A. & Keane, A. Engineering Design Via Surrogate Modelling: A Practical Guide (Wiley, 2008).

Gao, H., Pei, J. & Huang, H. Demystifying dropout. in International Conference on Machine Learning . (PMLR, 2019).

Liu, Z. et al . A convnet for the 2020s. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . (2022).

Turki, T. & Wei, Z. Boosting support vector machines for cancer discrimination tasks. Comput. Biol. Med. 101 , 236–249 (2018).

Turki, T. & Wei, Z. Improved deep convolutional neural networks via boosting for predicting the quality of in vitro bovine embryos. Electronics 11 (9), 1363 (2022).

Download references

Funding

This study received no funding.

Author information

Authors and affiliations

Department of Computer Science, King Abdulaziz University, 21589, Jeddah, Saudi Arabia

Sumaya Alghamdi & Turki Turki

Department of Computer Science, Albaha University, 65799, Albaha, Saudi Arabia

Sumaya Alghamdi


Contributions

T.T. conceived and designed the study. S.A. performed the deep learning experiments and the visualization of results. T.T. performed the analysis. T.T. and S.A. wrote the manuscript. T.T. supervised the study. All authors have read and agreed to the revised version of the manuscript.

Corresponding author

Correspondence to Turki Turki .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary figures.

Supplementary tables.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Alghamdi, S., Turki, T. A novel interpretable deep transfer learning combining diverse learnable parameters for improved T2D prediction based on single-cell gene regulatory networks. Sci Rep 14 , 4491 (2024). https://doi.org/10.1038/s41598-024-54923-y

Download citation

Received : 12 September 2023

Accepted : 18 February 2024

Published : 24 February 2024

DOI : https://doi.org/10.1038/s41598-024-54923-y


Keywords

  • Explainable AI
  • T2D prediction
  • Single-cell gene regulatory network




  • Open access
  • Published: 20 February 2024

An intelligent grasper to provide real-time force feedback to shorten the learning curve in laparoscopic training

  • Xuemei Huang 1   na2 ,
  • Pingping Wang 1   na2 ,
  • Jie Chen 1 ,
  • Yuxin Huang 1 ,
  • Qiongxiu Liao 1 ,
  • Yuting Huang 1 ,
  • Zhengyong Liu 2   na1 &
  • Dongxian Peng 1   na1  

BMC Medical Education volume  24 , Article number:  161 ( 2024 ) Cite this article


A lack of force feedback in laparoscopic surgery often leads to a steep learning curve for novices, and traditional training systems equipped with force feedback come at a high educational cost. This study aimed to use a laparoscopic grasper that provides force feedback during laparoscopic training, which can assist in controlling gripping forces and improve the learning process of novices.

First, we conducted a pre-experiment to verify the role of force feedback in gripping operations and to establish the safe gripping force threshold for the tasks. We then proceeded with a four-week training program: unlike the novices without feedback (Group A 2 ), the novices receiving feedback (Group B 2 ) underwent training that included force feedback. Finally, we completed a follow-up period without providing force feedback to assess the training effect under the different conditions. Real-time force parameters were recorded and compared.

In the pre-experiment, we set the gripping force threshold for the tasks based on the experienced surgeons’ performance. This is reasonable, as the experienced surgeons have acquired adequate skill in handling the grasper. The thresholds for tasks 1, 2, and 3 were set as 0.731 N, 1.203 N and 0.938 N, respectively. With force feedback, the gripping force applied by the novices with feedback (Group B 1 ) was lower than that of the novices without feedback (Group A 1 ) ( p  < 0.005). During the training period, Group B 2 took 6 trials to achieve a gripping force of 0.635 N, below the threshold line, whereas Group A 2 needed 11 trials, meaning that the learning curve of Group B 2 was significantly shorter than that of Group A 2 . Additionally, during the follow-up period, there was no significant decline in force learning, and Group B 2 demonstrated better control of gripping operations. The training with force feedback received positive evaluations.

Our study shows that using a grasper that provides force feedback in laparoscopic training can help to control the gripping force and shorten the learning curve. The laparoscopic grasper equipped with an FBG sensor is a promising means of providing force feedback during laparoscopic training and ultimately shows great potential in laparoscopic surgery.

Peer Review reports

Introduction

In the past few decades, Minimally Invasive Surgery (MIS) has changed the surgical process [ 1 , 2 , 3 ], greatly expanding the implementation of MIS in general surgery, gynecology, cardiothoracic surgery, colorectal surgery and urological surgery [ 4 , 5 , 6 , 7 ]. MIS has many advantages, such as small wounds, quick recovery, etc. [ 1 , 8 ]. However, the lack of force feedback during operation, which is even completely absent in robot-assisted minimally invasive surgery (RMIS), restricts its development [ 9 ] and affects the training of novices’ operating skills in MIS to some extent. The lack of force feedback in MIS means novices must undergo more training before operating successfully on patients [ 2 ]. Generally, this skill acquisition involves a steep learning curve [ 10 ]. Most residents who receive training in laparoscopic surgical skills take more than 3 years to master the operative skills [ 11 ]. Performing complex surgical tasks in laparoscopic surgery requires more precise control and extensive training [ 12 ].

Indeed, in order to meet the precise requirements of MIS, more and more simulation training and education methods outside of the operating room have been proposed to improve the surgical skills of surgeons, particularly novice surgeons. With the increasing availability and use of laparoscopic training models, more and more novice surgeons are able to acquire the necessary skills for MIS through simulation-based training programs. This has the potential to improve patient outcomes by reducing the risk of surgical errors and complications during actual procedures. A series of training models has been developed for simulation-based surgical training. Currently, the most widely used simulation methods can be classified into three categories: box training (BT), virtual reality (VR) and augmented reality (AR) training. However, traditional BT models do not provide force feedback, which limits their ability to provide a realistic training context and objective results. VR training, on the other hand, is able to provide objective results, but these results are typically not available to operators in real time. AR, which provides both haptic feedback and objective results during training [ 13 ], is costly. Therefore, it is essential to develop cost-effective training systems that can provide force feedback in order to enhance the effectiveness and realism of surgical simulation training. Some such systems have appeared, but they rely on external force-measuring platforms and may yield imprecise results [ 14 ]. For example, Luis et al. [ 15 ] used an intelligent trainer equipped with a gripping sensor to measure the gripping force, but the measured force is not very accurate. Hardon et al. [ 16 ] used a box trainer with a built-in force tracking system to monitor force and assess the operative skill of residents. In fact, given the current state of such training systems, the MIS training offered to novices is limited [ 17 ].
Regarding current laparoscopic training systems with force feedback, their high training cost and absence of objective real-time assessment have resulted in a prolonged learning curve for novice surgeons [ 18 ]. Therefore, there is an urgent need to develop an effective and low-cost training system. In the MIS training process, it is preferable to use a laparoscopic grasper with real-time force feedback to achieve targeted training and increase the efficiency of every surgical operation. The training process will then be effectively standardized and the training period shortened [ 19 ].

In the past few years, fiber Bragg grating (FBG) sensors have been widely applied in minimally invasive surgery to offer force feedback because of their small size, high sensitivity, good biocompatibility, light weight, immunity to electromagnetic interference (EMI), etc. [ 20 , 21 , 22 , 23 , 24 ]. A surgical instrument integrated with an FBG sensor provides the surgeon with force information during MIS, facilitating more precise and accurate operations. For example, Li et al. [ 25 ] proposed a three-axis tactile probe based on fiber gratings, which can accurately identify long blood vessels in a prosthesis and locate wrapped tissue in three dimensions; they verified its effectiveness and feasibility in isolated porcine kidney tissue. Xue et al. [ 26 ] introduced FBGs into grooves in a laparoscopic surgery robot to estimate gripping force and perform precise force control. Besides, Imbrie-Moore et al. [ 27 ] mounted pig mitral valves in a cardiac simulator, where each valve was repaired with Teflon sutures; in their work, an FBG sensor was used to measure real-time suture force. In addition, Scott et al. [ 28 ] developed an FBG-based sensor and measured, in real time, the force at the tip of an electrode array during insertion into the cochlea of guinea pigs. Although FBGs have been proposed for use in medical surgery, the additional value of FBG-based force feedback in MIS training has not been established.

In our previous study, we designed an intelligent laparoscopic grasper integrated with an FBG-based tactile sensor, which can provide real-time force feedback to the novice operators and has shown excellent performance in the laparoscopic training box [ 29 ]. As a continuation of this work, we utilized the laparoscopic grasper to provide real-time force feedback to the trainers during laparoscopic training, allowing for quantification of force information obtained during training. Results indicate that training system with an FBG force sensor has significant potential to shorten the learning curve of laparoscopic training by providing real-time force feedback to trainees through an intelligent laparoscopic grasper.

Methods

The procedures, methods, and consent forms employed in this study received approval from the Ethics Review Board of Zhujiang Hospital, Southern Medical University. The training program, which was based on Fundamentals of Laparoscopic Surgery (FLS) skills, closely simulated real-world clinical circumstances.

The proposed laparoscopic training system and gripping tasks setting

The proposed training system is shown in Fig.  1 . To imitate laparoscopic surgery, we use a validated Lap Game box trainer (Lap Game Inc., Hangzhou, China). The device (Fig. 1 ) includes a light source, an internal camera for imaging and a display screen for operation on the computer. Unlike a traditional laparoscopic training box, we used a previously designed smart laparoscopic grasper with a fiber Bragg grating sensor during training [ 29 ]. The intelligent laparoscopic grasper is used to clamp the specified objects, and the real-time force information obtained by the fiber Bragg grating sensor was demodulated by an optical spectrometer (I-MON 51 USB, Ibsen Photonics, Denmark). In principle, an FBG reflects a specific wavelength (the Bragg wavelength λ B ) of broadband light, which is determined by the effective refractive index (n eff ) of the fundamental mode propagating in the fiber core and the grating period (Λ), as expressed in Eq. ( 1 ). When a force is applied to the FBG, the Bragg wavelength shows a corresponding shift due to the deformation of the grating as well as a change in the refractive index. This relationship is defined by Eq. ( 2 ), in which P e is the photo-elastic coefficient, η is a factor of force transferred to strain and F is the applied force.

figure 1

Instrument connection diagram

In our study, the gripping force is obtained by demodulating the Bragg wavelength change in the reflection spectrum detected by the spectrometer during the operation. The real-time force information is displayed and stored on the computer.
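The demodulation step described above can be sketched in a few lines, assuming the standard FBG relations λ B = 2 n eff Λ (Eq. 1) and Δλ B = λ B (1 − P e )ηF (Eq. 2). The constant values below are illustrative placeholders, not the calibration actually used in the study:

```python
# Illustrative conversion of a Bragg-wavelength shift into gripping force.
# All constants are assumed typical values, NOT the paper's calibration.

LAMBDA_B = 1550.0e-9   # nominal Bragg wavelength in metres (typical telecom-band FBG)
P_E = 0.22             # photo-elastic coefficient of silica fibre (typical value)
ETA = 1.0e-4           # force-to-strain transfer factor, strain per newton (assumed)

def force_from_shift(delta_lambda_m: float) -> float:
    """Invert Eq. (2): F = delta_lambda / (lambda_B * (1 - P_e) * eta)."""
    return delta_lambda_m / (LAMBDA_B * (1.0 - P_E) * ETA)

# Under these assumed constants, a 0.1 nm shift corresponds to roughly 0.83 N:
f = force_from_shift(0.1e-9)
```

In practice the spectrometer stream would be sampled continuously and each wavelength sample mapped to a force reading on the display.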

The experiment consisted of 3 gripping-and-transfer tasks (Table  1 ). These tasks are designed based on Fundamentals of Laparoscopic Surgery (FLS) training [ 30 ] and are close to real clinical circumstances [ 31 ].

Participant selection and baseline characteristics

We recruited medical students with no prior training in laparoscopic surgery from our institution’s medical school through a virtual announcement. Gynecologists with extensive experience (having performed over 100 advanced procedures) were selected as the experienced surgeons for the study. The participants comprised 6 experienced surgeons and 42 novices. The novices were divided randomly into novices without force feedback (Group A) and novices with force feedback (Group B). All participants filled out a personal questionnaire before the experiment, and the basic information of all subjects was documented in Table  2 . To ensure that the novice participants had a similar level of laparoscopic surgery skill, the 42 novices were required to complete Task 1 without feedback for a baseline assessment. Only novices at the same level were included in this study.

Study protocol

Preliminary experiment.

We conducted a pre-experiment to verify the role of force feedback in laparoscopic gripping and to set the thresholds for the gripping tasks. Without force feedback, the experienced surgeons (Group C) and six novices of Group A (Group A 1 ) were asked to complete the three tasks, to see whether there was a difference between the two groups and to determine the task thresholds. The threshold of a task was defined as the force level of safe grasping, set as the average of the experienced surgeons’ maximum gripping forces. The thresholds for task 1, task 2 and task 3 are 0.731 N, 1.203 N and 0.938 N, respectively. Then, with real-time force feedback, six novices of Group B (Group B 1 ) carried out the three tasks. Unlike the novices without force feedback (Group A 1 ), the novices with force feedback (Group B 1 ) could quantify the grasping force during the tasks and adjust the force they applied according to the task threshold. Throughout the operation, all grasping data were collected by the computer, and the grasping forces of Group A 1 and Group B 1 were compared. The purpose of the preliminary experiment was to determine whether force feedback during gripping operations has a positive influence on MIS. Finally, laparoscopic training sessions were conducted for the remaining novices in the laparoscopic training box.
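The threshold rule above (average of the experienced surgeons’ per-surgeon maximum gripping forces) can be sketched as follows; the force values are invented for illustration and are not the study’s data:

```python
# Sketch of the task-threshold derivation: the safe-grasp threshold for a task
# is the mean of the experienced surgeons' maximum gripping forces on that task.
from statistics import mean

def task_threshold(max_forces_newton):
    """Return the safe-grasp threshold (N) from surgeons' maximum forces."""
    return mean(max_forces_newton)

# Hypothetical maxima for six experienced surgeons on one task (N):
surgeon_max_forces = [0.70, 0.75, 0.72, 0.74, 0.73, 0.745]
threshold = task_threshold(surgeon_max_forces)  # close to task 1's 0.731 N
```

During training, a trainee’s real-time force reading would simply be compared against this per-task constant.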

Training program

The other thirty novices, all at the same level in laparoscopic surgery, entered the laparoscopic training as Group A 2 and Group B 2 , i.e., the remaining 15 novices each of Group A and Group B, respectively. A four-week laparoscopic training program was conducted. Taking the novices’ schedules into consideration, we ensured that all novices completed an equal number of gripping trials during the training period: the 30 novices each conducted 10 trials at different time intervals, and each trial required them to successfully complete the task three times. It is important to note that all novices used the same smart laparoscopic grasper. With real-time force feedback, participants in Group B 2 were able to quantitatively measure the force applied during tasks and adjust it in time according to the predefined threshold. In contrast, Group A 2 completed the training without force feedback, relying solely on subjective perception for force adjustment. All grasping force data during the training were collected by the FBG force sensor.

Follow-up test

Follow-up testing was carried out 1 week after the completion of training to evaluate the retention of the grasping skills acquired during the training period. Both Group A 2 and Group B 2 were required to complete the task without force feedback in the follow-up period. In addition, all participants were asked to complete the NASA Task Load Index [ 32 ]; only Group B 2 additionally filled out the Force Feedback System Evaluation Survey. Figure  2 shows the schematic flowchart of the study protocol.

figure 2

Schematic flowchart of the study protocol

Outcome evaluation

The maximum absolute force and the standard deviation of the gripping force collected during the experiment are used to evaluate the effectiveness of the training. Additionally, the scales filled out by the participants are analyzed to assess the training system from a subjective point of view. A detailed overview and description of these parameters are provided in Table  3 . To better analyze the control and learning progress of the two groups during the training period, we plotted individual learning curves for each group and conducted a comparative analysis. Specifically, we recorded the maximum and the standard deviation of the gripping force during the 10 laparoscopic training sessions of novices under the different feedback conditions, and included the final follow-up stage in which both groups performed gripping operations without force feedback. The training curves were plotted at each data point; drawing the learning curves during laparoscopic training allows a more intuitive observation of the impact of real-time force feedback. All participants were asked to fill out a questionnaire and the NASA Task Load Index after completion of the four-week training to obtain their general impression of the force feedback system and training tasks. General comments on the training were obtained from participants and presented using a 5-point Likert scale.
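The per-trial summary behind the learning curves can be sketched as follows: for each trial, the maximum and the standard deviation of the sampled gripping force are computed and then plotted trial-by-trial against the task threshold. The force traces below are synthetic, not the study’s recordings:

```python
# Sketch of the learning-curve data points: (max force, SD of force) per trial.
from statistics import pstdev

def trial_summary(force_trace):
    """Return (maximum force, standard deviation) for one trial's samples."""
    return max(force_trace), pstdev(force_trace)

# Synthetic force samples (N) for three trials of one trainee:
trials = [
    [1.2, 1.5, 1.1, 1.4],    # early trial: high, unsteady force
    [0.8, 0.9, 0.7, 0.85],   # mid-training
    [0.6, 0.62, 0.58, 0.6],  # late training: below a ~0.731 N threshold
]
curve = [trial_summary(t) for t in trials]  # points for the learning curve
```

Plotting the first element of each pair against the trial index, with a horizontal line at the threshold, reproduces the style of Fig. 4A; the second element gives Fig. 4B.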

Data were analyzed using IBM SPSS Statistics version 23.0 (IBM Corp, Armonk, NY). The t-test was employed for normally distributed data, while the nonparametric Mann-Whitney U test was used for non-normally distributed data. A probability of p < 0.05 was considered statistically significant [ 33 ].
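For the non-normal case, the Mann-Whitney U statistic counts how often a value from one sample exceeds a value from the other. The study ran its analysis in SPSS; the minimal pure-Python sketch below, on synthetic data, only illustrates the statistic itself (not the p-value computation):

```python
# Minimal sketch of the Mann-Whitney U statistic used for non-normal data.
# The force values are synthetic illustrations, not the study's measurements.

def mann_whitney_u(a, b):
    """U statistic for a vs b: pairs (x, y) with x > y, ties counted as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

group_a = [3.7, 3.5, 3.9, 3.6]   # synthetic max forces without feedback (N)
group_b = [1.0, 0.9, 1.1, 0.95]  # synthetic max forces with feedback (N)
u_ab = mann_whitney_u(group_a, group_b)  # 16.0: every A value exceeds every B value
```

An extreme U (near 0 or near len(a)·len(b), as here) corresponds to a small p-value, matching the significant group differences the study reports.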

Baseline assessment of the novices

At the baseline evaluation, none of the novices had prior experience in laparoscopic operation. Figure  3 shows the box plots of gripping force representing the baseline assessment of the novices ( n  = 42). None of the peak, mean or standard deviation of the gripping force showed a significant difference between the groups, indicating that the two groups of novices were at the same level of laparoscopic grasping ( p  = 0.653, 0.996 and 0.831, respectively).

figure 3

Box plots of the gripping force among novices for baseline assessment

Analysis results of the preliminary experiment

Table  4 presents the gripping forces of the three groups in the different tasks of the preliminary experiment, together with the statistical comparisons among the three groups. Taking task 1 as an example, the average maximum gripping force of the experienced surgeons was 0.731 N, while that of the novices without feedback was 3.686 N; the standard deviations of the gripping force for the two groups were 0.096 and 0.468, respectively. The force values for the task exhibited significant differences ( p  < 0.001), indicating that the experienced surgeons applied significantly lower force and showed better stability of control than the novices without feedback. Similar results were observed across the other tasks. Therefore, we established the average maximum value of the experienced surgeons as the threshold for each task: 0.731 N, 1.203 N and 0.938 N for task 1, task 2 and task 3, respectively. With real-time force feedback provided to Group B 1 , the maximum and the standard deviation of the gripping force in task 1 were 0.979 N and 0.112, versus 3.686 N and 0.486 for Group A 1 . In the comparison between Group A 1 and Group B 1 , the force values of all tasks were compared, yielding p values of less than 0.05, which indicates that the introduction of force feedback has obvious advantages in maintaining the gripping force. Additionally, the maximum values for all tasks differed significantly between the novices with feedback and the experienced surgeons ( p  < 0.001). Although the novices with feedback exhibited better control of gripping force, this result suggests that, compared to the experienced surgeons, novices still have room for improvement in controlling gripping force. Therefore, implementing a standardized training process is necessary to further enhance the control of gripping force.

Analysis results of the training program

To gain a deeper understanding of the impact of force feedback on the learning curve, we presented the learning curves for the entire training process and subsequent follow-up trial. Figure  4 illustrates that in both groups, the maximum gripping force and standard deviation of the gripping force exhibit a gradual decrease throughout the training period.

figure 4

( A ) and ( B ) represent the learning curves of maximum force and standard deviation of the gripping force, respectively (SD, standard deviation)

In Fig.  4 A, at the sixth training trial the maximum gripping force of Group B 2 was 0.635 N, below the threshold, while Group A 2 still failed to reach a level below the threshold even by the tenth trial. Furthermore, in the final training trial, the maximum gripping force of Group B 2 was only 0.363 N, while that of Group A 2 was 0.765 N, still surpassing the threshold level. Group B 2 thus demonstrated a significantly shorter learning curve than Group A 2 . Therefore, the use of a smart grasper with an FBG force sensor can effectively expedite the training process and lead to a shorter learning curve.

In terms of the standard deviation of gripping force during training, which indicates gripping stability, Fig.  4 B demonstrates that Group B 2 exhibited a smaller standard deviation than Group A 2 . This implies that, throughout the training process, Group B 2 demonstrated superior stability and less fluctuation in gripping force than Group A 2 .

In the comparison of training effects across the training stages (Table  5 ), the gripping force exhibited significant differences between Group A 2 and Group B 2 ( p  < 0.05). These results indicate that the novices who received feedback performed better during laparoscopic gripping training.

Analysis results of follow-up test

Throughout the follow-up period, there was no significant decline in force learning, and the gripping force continued to decrease. Without force feedback, the maximum gripping force of Group A 2 was 0.706 N, while that of Group B 2 was only 0.316 N. Notably, the maximum gripping force in both groups fell below the threshold level, indicating good retention of force control, and Group B 2 maintained superior gripping performance compared to Group A 2 after training. Additionally, the standard deviation of the gripping force for Group A 2 increased slightly to 0.089 N, whereas that of Group B 2 continued to decrease, to 0.035 N.

In Fig.  5 , we employed statistical methods to analyze the impact of force feedback on laparoscopic training across the baseline, training and follow-up stages. During the baseline stage, there were no statistically significant differences in either the maximum or the standard deviation of gripping force between Group A 2 and Group B 2 ( p  > 0.05), suggesting that the novices in both groups performed at a similar level of gripping proficiency. Compared to experienced surgeons, novices tend to apply higher gripping force during the initial stage of an operation. This often causes unnecessary tissue damage in clinical operations, so it is essential to train novices and to verify whether incorporating force feedback has a positive effect on their training. During the training phase, the maximum force and standard deviation of the novices decreased over the training process. Moreover, the maximum gripping forces of Group B 2 on days 1, 5 and 10 were 1.281 N, 0.796 N and 0.363 N, respectively, versus 1.811 N, 1.263 N and 0.765 N for Group A 2 . The difference between Group A 2 and Group B 2 was statistically significant ( p  < 0.05), indicating that the training effect was better when force feedback was provided. The novices then completed the follow-up period without force feedback. The maximum gripping force of Group A 2 was 0.706 N, while that of Group B 2 was only 0.316 N; the difference was statistically significant ( p  < 0.05), indicating that Group B 2 maintained superior gripping performance after training.

figure 5

Gripping force at different stages under various conditions. ( a ) The maximum gripping force of Group C, Group A 2 and Group B 2 during baseline assessment stage, training stage and follow-up stage, ( b ) the standard deviation of gripping force of Group C, Group A 2 and Group B 2 during these three stages

All thirty novices enrolled in the training period completed the questionnaire. The evaluation scales for both the force feedback system and the force feedback training used a 5-point Likert scale, with a maximum score of 5 points per question. As illustrated in Table  6 , the system design received a score of 4.53 ± 0.52, indicating high satisfaction with the system. Similarly, the scores for visualization, instrumentation, user-friendliness, task description, accuracy of the force feedback, and necessity of the system were close to the maximum of 5 points. Regarding the evaluation of the force feedback training, participants self-rated an improvement in their technical skills and self-confidence as a result of the training. Moreover, trainees found the training necessary and stated their willingness to recommend it to others. In terms of the results from the NASA Task Load Index (Table  7 ), which assesses the novices’ subjective feelings during the training, the novices who received feedback obtained lower scores for mental stress, psychological burden, operating time and effort than those without feedback. These results suggest that the use of the smart laparoscopic grasper, which provides real-time force feedback, can lead to a more comfortable and confident gripping experience for novices. As for satisfaction, the scores of the novices with feedback were higher than those without feedback, indicating that the incorporation of feedback resulted in a more positive emotional experience during the training.

Discussion

This study included a preliminary experiment to evaluate the impact of real-time force feedback on gripping, which demonstrated improved control of gripping force during MIS. Furthermore, a four-week laparoscopic training program was conducted to confirm that force feedback helps to shorten the learning curve in laparoscopic training. Through the use of the proposed smart laparoscopic grasper integrated with an FBG-based tactile sensor, novice participants received real-time force feedback, resulting in a shorter learning curve and improved control of the gripping force during training. As a further assessment, a follow-up gripping operation was carried out, and several scales were administered to gauge the participants’ subjective perception of the training process. During the follow-up period, novices who had trained with force feedback demonstrated improved control of gripping even without force feedback. As for the scale assessments, the training system itself, the emotional experience of the training process, and the benefits derived from the training were all highly rated. Because the novices may have varied in gripping skill at the outset, which could have influenced the results, we performed a baseline assessment and enrolled only participants at the same level in the experiment. While individual abilities may differ, there were no significant differences in the overall evaluation, suggesting that the shortened learning curve is mainly due to the use of force feedback rather than differences in individual participants’ abilities. Moreover, taking the novices’ schedules into consideration, we ensured that all novices had the same number of gripping trials during the four-week training, and novices were allowed to complete the gripping tasks at their individual pace.

As pointed out by Hopper et al. [ 34 ], the ideal surgical learning curve (LC) tends to be steep at first and then flattens into a more gradual LC as the plateau phase is approached; at that point, technical skills are sufficiently established to operate independently and safely [ 35 ]. In our study, both the maximum force and the standard deviation of force decreased significantly throughout the training period, as shown in Fig.  4 . Furthermore, during the follow-up period, there was no significant increase in gripping force. On the contrary, gripping force decreased further owing to improved control of the grasper. These results confirm that the training effect can be preserved and that the LC of novices reaches a fairly stable stage. Furthermore, compared to novices without feedback (Group A2), novices with feedback (Group B2) achieved better control of gripping force during the training. The learning curve of Group B2 was also noticeably shortened in comparison to that of Group A2. These results further demonstrate the effectiveness of real-time force feedback in improving the learning process and enhancing the performance of novices in laparoscopic training. According to our results, the novices' gripping skills in the training box improved because of the training and the real-time force feedback. The integration of a smart laparoscopic grasper with force feedback capability helped maintain optimal gripping force levels and resulted in a shortened learning curve. These results highlight the potential benefits of incorporating force feedback technology into laparoscopic training to improve the performance and skill acquisition of novices. The introduction of force feedback is a feasible and valuable mechanism for enhancing laparoscopic training among novices.
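The two learning-curve metrics tracked here, the peak gripping force and the standard deviation of force within a trial, can be sketched as follows. The force traces below are hypothetical stand-ins for the FBG sensor output, not measurements from the study:

```python
from statistics import stdev

def trial_metrics(force_trace):
    """Summarize one gripping trial from sampled forces (in newtons):
    returns (maximum force, standard deviation of force)."""
    return max(force_trace), stdev(force_trace)

# Hypothetical traces: an early trial (high, erratic force) versus a
# late trial after training (lower, steadier force).
early = [2.1, 2.8, 3.0, 2.6, 2.9]
late = [1.0, 1.1, 1.2, 1.1, 1.0]

early_max, early_sd = trial_metrics(early)
late_max, late_sd = trial_metrics(late)

# The trend reported in Fig. 4: both metrics fall over the training period.
assert late_max < early_max and late_sd < early_sd
```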
In the future, force feedback should be considered an integral part of laparoscopic training programs, allowing for the development of individualized courses based on each trainee's learning curve. Eventually, the cost of laparoscopic training is expected to decrease. Moreover, setting a threshold for gripping tasks raises novices' awareness of the potential risks of applying excessive force in the box trainer, which can lead to unnecessary tissue damage. Real-time force feedback allows them to adjust their gripping force promptly according to the threshold, which is likely to be beneficial in clinical surgery [ 36 ]. If laparoscopy novices can perceive and adjust their force in time, tissue damage [ 37 ] will be reduced. Furthermore, by analyzing the learning curves of novices in laparoscopic training, we can distinguish surgeons' skill levels to some extent [ 38 ]. Thus, a more targeted and individualized training plan can be established to realize precise training [ 39 ]. These promising results demonstrate the feasibility and value of integrating force feedback into laparoscopic training for novices. This study highlights the significant potential of the smart laparoscopic grasper with an FBG force sensor in improving training quality and shortening the learning curve. Ultimately, it offers a robust force feedback system for MIS.

In summary, the results of our study show that using an intelligent laparoscopic grasper with real-time force feedback in laparoscopic training improves control of the gripping force and shortens the learning curve.

A few limitations were encountered in this research. First, the use of two screens may increase mental load and cause distraction during gripping. The real-time force feedback of the training system needs to be further optimized, for example by presenting the operational screen in the form of "traffic lights" [ 40 ] or by adding audio reminders. Second, clinical operations involve multiple gripping motions, and training tasks should be designed to imitate these movements. Finally, in-vivo experiments will be necessary to determine whether the positive effects observed in our study transfer to real surgical scenarios. While our current results are not based on in-vivo testing, they provide promising evidence of the potential benefits of incorporating force feedback into laparoscopic training.
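The "traffic lights" display suggested above would map the measured gripping force to a color relative to the task threshold. A minimal sketch of that mapping; the 1.0 N threshold and the 20% warning margin are assumptions for illustration, not values from the study:

```python
def traffic_light(force_n, threshold_n):
    """Map a measured gripping force (N) to a feedback color.
    Green: well below threshold; yellow: within 20% of the threshold
    (an assumed margin); red: threshold exceeded."""
    if force_n > threshold_n:
        return "red"
    if force_n > 0.8 * threshold_n:
        return "yellow"
    return "green"

# With a hypothetical 1.0 N task threshold:
print(traffic_light(0.5, 1.0))  # prints "green"
print(traffic_light(0.9, 1.0))  # prints "yellow"
print(traffic_light(1.2, 1.0))  # prints "red"
```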

In conclusion, using a grasper that provides real-time force feedback in laparoscopic training helps to control the gripping force and shorten the learning curve. The laparoscopic grasper equipped with a fiber Bragg grating sensor is a promising means of providing force feedback during laparoscopic training and ultimately shows significant potential in the field of laparoscopic surgery.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

Minimally Invasive Surgery

Robot-assisted Minimally Invasive Surgery

Box Training

Virtual Reality

Augmented Reality

Fiber Bragg grating

Electromagnetic Interference

Learning Curve

Othman W, Lai ZA, Abril C, Barajas-Gamboa JS, Corcelles R, Kroh M, et al. Tactile sensing for minimally invasive surgery: conventional methods and potential emerging tactile technologies. Front Robot AI. 2021;8:705662.

Jung Kim MM, Kim H, Srinivasan MA. Haptics in minimally invasive surgical simulation and training. IEEE Comput Graph Appl. 2004;24:56–64.

De Win G, Van Bruwaene S, Allen C, De Ridder D. Design and implementation of a proficiency-based, structured endoscopy course for medical students applying for a surgical specialty. Adv Med Educ Pract. 2013;4:103–15.

Zhu R, Marechal M, Suetsugu M, Yamamoto I, Lawn MJ, Matsumoto K, et al. Research and Development of testing device for evaluating force transmission and grasping pressure of laparoscopic forceps. Sensor Mater. 2019;31:4205–14.

Ukai T, Tanaka Y, Fukuda T, Kajikawa T, Miura H, Terada Y. Softness sensing probe with multiple acoustic paths for laparoscopic surgery. Int J Comput Assist Radiol Surg. 2020;15:1537–47.

Ebina K, Abe T, Higuchi M, Furumido J, Iwahara N, Kon M, et al. Motion analysis for better understanding of psychomotor skills in laparoscopy: objective assessment-based simulation training using animal organs. Surg Endosc. 2021;35:4399–416.

Bandari N, Dargahi J, Packirisamy M. Image-based optical-Fiber force sensor for minimally invasive surgery with ex-vivo validation. J Electrochem Soc. 2020;167:127504.

Othman W, Vandyck KE, Abril C, Barajas-Gamboa JS, Pantoja JP, Kroh M, et al. Stiffness assessment and lump detection in minimally invasive surgery using in-house developed smart laparoscopic forceps. IEEE J Transl Eng Health Med. 2022;10:2500410.

Smyk NJ, Weiss SM, Marshall PJ. Sensorimotor oscillations during a reciprocal touch paradigm with a human or robot partner. Front Psychol. 2018;9:2280.

Halim J, Jelley J, Zhang N, Ornstein M, Patel B. The effect of verbal feedback, video feedback, and self-assessment on laparoscopic intracorporeal suturing skills in novices: a randomized trial. Surg Endosc. 2021;35:3787–95.

Wenger L, Richardson C, Tsuda S. Retention of fundamentals of laparoscopic surgery (FLS) proficiency with a biannual mandatory training session. Surg Endosc. 2015;29:810–4.

Horeman T, van Delft F, Blikkendaal MD, Dankelman J, van den Dobbelsteen JJ, Jansen FW. Learning from visual force feedback in box trainers: tissue manipulation in laparoscopic surgery. Surg Endosc. 2014;28:1961–70.

Botden SM, Torab F, Buzink SN, Jakimowicz JJ. The importance of haptic feedback in laparoscopic suturing training and the additive value of virtual reality simulation. Surg Endosc. 2008;22:1214–22.

Hernandez R, Onar-Thomas A, Travascio F, Asfour S. Attainment and retention of force moderation following laparoscopic resection training with visual force feedback. Surg Endosc. 2017;31:4805–15.

Olivas-Alanis LH, Calzada-Briseno RA, Segura-Ibarra V, Vazquez EV, Diaz-Elizondo JA, Flores-Villalba E, et al. LAPKaans: tool-motion tracking and gripping force-sensing modular smart laparoscopic training system. Sensors (Basel). 2020;20(23):6937.

Hardon SF, Horeman T, Bonjer HJ, Meijerink W. Force-based learning curve tracking in fundamental laparoscopic skills training. Surg Endosc. 2018;32:3609–21.

Rekman JF, Alseidi A. Training for minimally invasive cancer surgery. Surg Oncol Clin N Am. 2019;28:11–30.

Elessawy M, Mabrouk M, Heilmann T, Weigel M, Zidan M, Abu-Sheasha G, et al. Evaluation of laparoscopy virtual reality training on the improvement of Trainees' surgical skills. Medicina (Kaunas). 2021;57:130.

Dai Y, Abiri A, Pensa J, Liu S, Paydar O, Sohn H, et al. Biaxial sensing suture breakage warning system for robotic surgery. Biomed Microdevices. 2019;21:10.

Deng Y, Yang T, Dai S, Song G. A miniature Triaxial Fiber optic force sensor for flexible Ureteroscopy. IEEE Trans Biomed Eng. 2021;68:2339–47.

Abushagur AA, Arsad N, Reaz MI, Bakar AA. Advances in bio-tactile sensors for minimally invasive surgery using the fibre Bragg grating force sensor technique: a survey. Sensors. 2014;14:6633–65.

He X, Handa J, Gehlbach P, Taylor R, Iordachita I. A submillimetric 3-DOF force sensing instrument with integrated fiber Bragg grating for retinal microsurgery. IEEE Trans Biomed Eng. 2014;61:522–34.

Ping Z, Zhang T, Gong L, Zhang C, Zuo S. Miniature flexible instrument with fibre Bragg grating-based Triaxial force sensing for intraoperative gastric Endomicroscopy. Ann Biomed Eng. 2021;49:2323–36.

Bandari N, Dargahi J, Packirisamy M. Tactile sensors for minimally invasive surgery: a review of the state-of-the-art, applications, and perspectives. IEEE Access. 2020;8:7682–708.

Li T, Pan A, Ren H. Reaction force mapping by 3-Axis tactile sensing with arbitrary angles for tissue hard-inclusion localization. IEEE Trans Biomed Eng. 2021;68:26–35.

Xue R, Ren B, Huang J, Yan Z, Du Z. Design and evaluation of FBG-based tension sensor in laparoscope surgical robots. Sensors (Basel). 2018;18:2067.

Aminebili-moore ZY, Yaoming P, Paulsen WH, Yao-jie W. Artificial papillary muscle device for off pump transapical mitral. J Cardiothorac Surg. 2020;4:e133–e41.

Scott A, Wade JBF, Wise AK, Shepherd RK, James NL, Stoddart PR. Force measurements at the tip of the cochlear implant during insertion. IEEE Trans Biomed Eng. 2014;61:1177–86.

Wang P, Zhang S, Liu Z, Huang Y, Huang J, Huang X, et al. Smart laparoscopic grasper integrated with fiber Bragg grating based tactile sensor for real-time force feedback. J Biophotonics. 2022;15:e202100331.

Derossis AM, Fried GM, Abrahamowicz M, Sigman HH, Barkun JS, Meakins JL. Development of a model for training and evaluation of laparoscopic skills. Am J Surg. 1998;175:482–7.

Fried GM, Derossis AM, Bothwell J, Sigman HH. Comparison of laparoscopic performance in vivo with performance measured in a laparoscopic simulator. Surg Endosc. 1999;13:1077–81. discussion 82

Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol. 1988;52:139–83.

Liu W, Bretz F, Cortina-Borja M. Reference range: which statistical intervals to use? Stat Methods Med Res. 2021;30:523–34.

Hopper AN, Jamison MH, Lewis WG. Learning curves in surgical practice. Postgrad Med J. 2007;83:777–9.

Szasz P, Louridas M, Harris KA, Aggarwal R, Grantcharov TP. Assessing technical competence in surgical trainees: a systematic review. Ann Surg. 2015;261:1046–55.

Rodrigues SP, Horeman T, Sam P, Dankelman J, van den Dobbelsteen JJ, Jansen FW. Influence of visual force feedback on tissue handling in minimally invasive surgery. Br J Surg. 2014;101:1766–73.

Wottawa CR, Genovese B, Nowroozi BN, Hart SD, Bisley JW, Grundfest WS, et al. Evaluating tactile feedback in robotic surgery for potential clinical application using an animal model. Surg Endosc. 2016;30:3198–209.

Sugiyama T, Lama S, Gan LS. Forces of tool-tissue interaction to assess surgical skill level. JAMA Surg. 2018;153:234–42.

Scott DJ, Dunnington GL. The new ACS/APDS skills curriculum: moving the learning curve out of the operating room. J Gastrointest Surg. 2008;12:213–21.

Smit D, Spruit E, Dankelman J, Tuijthof G, Hamming J, Horeman T. Improving training of laparoscopic tissue manipulation skills using various visual force feedback types. Surg Endosc. 2017;31:299–308.

Acknowledgements

The authors would like to sincerely thank the gynecologists and other medical students who participated in this study.

The authors would like to thank the National Natural Science Foundation of China for funding (61975250).

Author information

Dongxian Peng and Zhengyong Liu are senior authors who contributed equally to this paper.

Xuemei Huang, Pingping Wang contributed equally to this work as co-first authors.

Authors and Affiliations

Obstetrics and Gynecology Center, Department of Gynecology, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China

Xuemei Huang, Pingping Wang, Jie Chen, Yuxin Huang, Qiongxiu Liao, Yuting Huang & Dongxian Peng

Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-Sen University, Guangzhou, 510275, China

Zhengyong Liu

Contributions

Yuxin Huang and Jie Chen collected and analyzed the data. Qiongxiu Liao and Yuting Huang organized the participants' involvement in the study. Xuemei Huang and Pingping Wang contributed to designing the research and writing the manuscript; Zhengyong Liu and Dongxian Peng were major contributors in revising and editing the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Zhengyong Liu or Dongxian Peng .

Ethics declarations

Ethics approval and consent to participate.

This project was approved by the Ethics Review Board of Zhujiang Hospital, Southern Medical University (2022-KY-081-01). Written informed consent was obtained from all participants. All methods and procedures carried out in this study were in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Huang, X., Wang, P., Chen, J. et al. An intelligent grasper to provide real-time force feedback to shorten the learning curve in laparoscopic training. BMC Med Educ 24 , 161 (2024). https://doi.org/10.1186/s12909-024-05155-1

Received : 02 June 2023

Accepted : 09 February 2024

Published : 20 February 2024

DOI : https://doi.org/10.1186/s12909-024-05155-1

Keywords

  • Minimally invasive surgery
  • Force feedback
  • Laparoscopic training
  • Learning curve

BMC Medical Education

ISSN: 1472-6920
