Having a machine learning agent interact with its environment requires true unsupervised learning, skill acquisition, active learning, exploration, and reinforcement: ingredients of human learning that are still not well understood or well exploited by the supervised approaches that dominate deep learning today. Our goal is to improve robotics via machine learning, and to improve machine learning via robotics. We foster close collaborations between machine learning researchers and roboticists to enable learning at scale on real and simulated robotic systems.



NATURE INDEX
12 October 2022

Growth in AI and robotics research accelerates

It may not be unusual for burgeoning areas of science, especially those tied to rapid technological change in society, to take off quickly, but even by these standards the rise of artificial intelligence (AI) has been impressive. Together with robotics, AI now represents an increasingly significant share of research output at various levels, as these charts show.

Across the field

The number of AI and robotics papers published in the 82 high-quality science journals in the Nature Index (Count) has been rising year-on-year — so rapidly that it resembles an exponential growth curve. A similar increase is also happening more generally in journals and proceedings not included in the Nature Index, as is shown by data from the Dimensions database of research publications.
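The "resembles an exponential growth curve" claim can be checked by fitting a straight line to log counts. The yearly counts below are hypothetical, illustrative numbers (not actual Nature Index data); the fit is an ordinary least-squares regression on the log scale:

```python
import math

# Hypothetical yearly paper counts (illustrative only; NOT Nature Index data).
years = [2015, 2016, 2017, 2018, 2019, 2020, 2021]
counts = [120, 180, 260, 400, 590, 870, 1300]

# Least-squares fit of log(count) = a + b * (year - year0); exp(b) - 1 is
# the implied annual growth rate if growth really is exponential.
xs = [y - years[0] for y in years]
ys = [math.log(c) for c in counts]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
annual_growth = math.exp(b) - 1
print(f"Implied annual growth rate: {annual_growth:.0%}")
```

If the points fall close to the fitted line on a log scale, "exponential" is a reasonable description; large systematic deviations would suggest otherwise.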

Bar charts comparing AI and robotics publications in Nature Index and Dimensions

Source: Nature Index, Dimensions. Data analysis by Catherine Cheung; infographic by Simon Baker, Tanner Maxwell and Benjamin Plackett

Leading countries

Five countries — the United States, China, the United Kingdom, Germany and France — had the highest AI and robotics Share in the Nature Index from 2015 to 2021, with the United States leading the pack. China has seen the largest percentage change (1,174%) in annual Share over the period among the five nations.
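For reference, a percentage change like China's 1,174% is computed as (end − start) / start × 100. The Share values in this sketch are hypothetical, chosen only so the arithmetic reproduces the cited figure:

```python
def percentage_change(start, end):
    """Percentage change from start to end, as used in year-on-year comparisons."""
    return (end - start) / start * 100

# Hypothetical Share values (illustrative; not actual Nature Index figures):
# a rise from 100 to 1274 reproduces the 1,174% change cited for China.
change = percentage_change(100, 1274)
print(f"{change:.0f}%")  # prints 1174%
```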

Line graph showing the rise in Share for the top 5 countries in AI and robotics

AI and robotics infiltration

As the field of AI and robotics research grows in its own right, leading institutions such as Harvard University in the United States have increased their Share in this area since 2015. But such leading institutions have also seen an expansion in the proportion of their overall index Share represented by research in AI and robotics. One possible explanation for this is that AI and robotics is expanding into other fields, creating interdisciplinary AI and robotics research.

Graphs showing Share of the 5 leading institutions in AI and robotics

Nature 610, S9 (2022)

doi: https://doi.org/10.1038/d41586-022-03210-9

This article is part of Nature Index 2022 AI and robotics , an editorially independent supplement. Advertisers have no influence over the content.



IEEE Transactions on Robotics (T-RO)


The IEEE Transactions on Robotics (T-RO) publishes research papers that represent major advances in the state of the art in all areas of robotics. The Transactions welcomes original papers that report on any combination of theory, design, experimental studies, analysis, algorithms, and integration and application case studies involving all aspects of robotics. You can learn more about T-RO's scope, paper length policy, open access option, and preparation of papers for submission at the Information for Authors page.

As of late May 2020, T-RO no longer has a "short paper" category for new submissions.  Papers that are short may still be published, but they are treated as Regular paper submissions, and they are subject to the same standards for significance.  Authors of short papers (8 pages or fewer) may consider our sister journal, the  IEEE Robotics and Automation Letters  (RA-L).



Presenting Your Transactions on Robotics Paper at ICRA, IROS, and CASE

Any IEEE Transactions on Robotics (T-RO) paper, other than communication items and survey papers, may be presented at an upcoming IEEE International Conference on Robotics and Automation (ICRA), IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), or International Conference on Automation Science and Engineering (CASE), provided most of the key ideas of the paper have never appeared at a conference with published proceedings (i.e., the paper is a "new" paper and not the evolved version of a previous conference paper or papers). For conference eligibility deadlines, see the RAS conference dates.

Authors may not request any acceleration or delay of the review process based on these criteria.

Upon final notification of acceptance, eligible papers will be offered the option to present at a conference via the author's workspace within the PaperCept platform. The prompt within the workspace will include an option to transfer the paper directly to the conference organizers. Authors will have a window of one month to select and accept the conference at which they will present. Authors are expected to pay the conference fee. Eligible papers may be presented at only one conference.

Historically, papers in the Transactions on Robotics have been either "evolutionary" papers (papers extended, with new results, from previously presented conference papers by the same authors) or "new" direct-to-journal papers (papers that are not evolved from conference papers). Since the introduction of the Robotics and Automation Letters (RA-L), the robotics community has demonstrated strong support for direct-to-journal papers (maximum of eight pages) with the possibility of presentation at a conference.

This IEEE RAS policy, adopted by AdCom in September 2017 and formalizing pilots of the policy at ICRA 2017 and 2018, provides a conference presentation option for "new" direct-to-journal T-RO papers. Authors are no longer forced to write two versions of a paper (a short one for conference presentation and a longer one for the "final" journal version) if they want the work both to be presented at a conference and to appear in a journal. This saves author and reviewer effort, eliminates confusion over which paper to cite, and reduces the stress on authors and reviewers arising from submission deadlines for ICRA, IROS, or CASE. The policy gives a new benefit to T-RO authors and brings high-quality T-RO papers to ICRA, IROS, or CASE without harming the traditional evolutionary model.

Is My Paper "Evolved" or "New?"

This initiative distinguishes between papers that have evolved directly from conference papers ("evolved" papers) and papers that have not ("new" direct-to-journal papers).  Of course the distinction is not always clear-cut, since almost all of one's research has evolved in some way from one's previous papers.

Below are some criteria to consider in the judgment of whether a paper is evolved or new.  If the answer to one or more of these questions is "yes," this is a good sign that your paper should be considered to be evolved.

  • Does the journal paper have the same title as the previous conference paper?
  • Is there a direct lineage from the conference paper(s) to the journal paper?
  • Typically a paper has one or a small number of key new ideas.  (There may be many supporting details.)  Does a majority of the key ideas in the T-RO paper appear in the previous conference paper(s)?
  • Would the T-RO paper have been rejected without the content of the previous conference paper(s)?
  • Does the T-RO paper use a significant amount of text, results, data, or figures from the previous conference paper(s)?

An advantage of having your paper be considered "evolved" is that you are free to incorporate much of the material from your conference paper(s) without penalty in the review process, provided the new paper provides a significant contribution beyond the conference paper(s) (see the guidance here for more details).  The disadvantage is that your "evolved" paper is not eligible for presentation at ICRA, IROS, or CASE.  The disadvantage of declaring your paper "new" is that you cannot reuse significant portions of the material from your conference paper(s), but the advantage is that the new paper (if accepted) is eligible for presentation at ICRA, IROS, or CASE.
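As a sketch only, the checklist above can be mirrored as a small decision helper. The criterion names are informal paraphrases of the questions, not official T-RO terminology, and borderline cases should still go to the Editor-in-Chief:

```python
# Informal helper mirroring the evolved-vs-new checklist; the criterion names
# are paraphrases of the questions, not official T-RO terminology.
CRITERIA = (
    "same_title_as_conference_paper",
    "direct_lineage_from_conference_paper",
    "majority_of_key_ideas_previously_published",
    "would_be_rejected_without_conference_content",
    "reuses_significant_text_results_or_figures",
)

def classify_submission(answers):
    """Return 'evolved' if any checklist question is answered True, else 'new'.

    A single 'yes' is a good sign the paper should be considered evolved.
    """
    if any(answers.get(criterion, False) for criterion in CRITERIA):
        return "evolved"
    return "new"

# A direct-to-journal paper sharing no material with prior conference work:
print(classify_submission({criterion: False for criterion in CRITERIA}))  # prints new
```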

Note that no submission can be considered to be "evolved" from a paper that previously appeared in a journal (including the IEEE Robotics and Automation Letters).

If you are in doubt, send your brief analysis along with the T-RO paper and the relevant conference paper(s) to the Editor-in-Chief for an evaluation.  It is unethical to withhold relevant previous conference paper(s) in this analysis.

IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award

2022:  " Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems "   by Yulun Tian; Yun Chang; Fernando Herrera Arias; Carlos Nieto-Granda; Jonathan P. How; Luca Carlone   vol. 38, no. 4, pp. 2022-2038, August 2022, [ Xplore Link ]

Honorable Mention

"Stabilization of Complementarity Systems via Contact-Aware Controllers"   [ Xplore Link ]

"Autonomous Cave Surveying With an Aerial Robot"   [ Xplore Link ]

"Prehensile Manipulation Planning: Modeling, Algorithms and Implementation"   [ Xplore Link ]

"Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control"   [ Xplore Link ]

"Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications"   [ Xplore Link ]

2021:  " Collision Resilient Insect-scale Soft-actuated Aerial Robots With High Agility "   by YuFeng Chen; Siyi Xu; Zhijian Ren; Pakpong Chirarattananon   vol. 37, no. 5, pp. 1752-1764, October 2021, [ Xplore Link ]

"A Backdrivable Kinematically Redundant (6+3)-dof Hybrid Parallel Robot for Intuitive Sensorless Physical Human-Robot Interaction"   [ Xplore Link ]

"Stochastic Dynamic Games in Belief Space"   [ Xplore Link ]

"ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM"   [ Xplore Link ]

"Active Interaction Force Control for Contact-Based Inspection with a Fully Actuated Aerial Vehicle"   [ Xplore Link ]

"Distributed Certifiably Correct Pose-Graph Optimization"   [ Xplore Link ]

2020: "TossingBot: Learning to Throw Arbitrary Objects With Residual Physics"   by Andy Zeng; Shuran Song; Johnny Lee; Alberto Rodriguez; Thomas Funkhouser vol. 36, no. 4, pp. 1307-1319, August 2020, [ Xplore Link ]

"Design and Validation of a Powered Knee-Ankle Prosthesis With High-Torque, Low-Impedance Actuators"    [ Xplore Link ]

"Quantifying Hypothesis Space Misspecification in Learning From Human-Robot Demonstrations and Physical Corrections"    [ Xplore Link ]

"Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments"    [ Xplore Link ]

"Deep Drone Racing: From Simulation to Reality With Domain Randomization"    [ Xplore Link ]

2019: "Active Learning of Dynamics for Data-Driven Control Using Koopman Operators"   by Ian Abraham and Todd D. Murphey   vol. 35, no. 5, pp. 1071-1083, October 2019, [ Xplore Link ]

2018: "Grasping Without Squeezing: Design and Modeling of Shear-Activated Grippers"   by Elliot Wright Hawkes, Hao Jiang, David L. Christensen, Amy K. Han, and Mark R. Cutkosky   vol. 34, no. 2, pp. 303-316, April 2018, [ Xplore Link ]

"Exploiting Elastic Energy Storage for “Blind” Cyclic Manipulation: Modeling, Stability Analysis, Control, and Experiments for Dribbling"   [ Xplore Link ]

"VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator"  [ Xplore Link ]

2017: "On-Manifold Preintegration for Real-Time Visual-Inertial Odometry"   by Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza   vol. 33, no. 1, pp. 1-21, February 2017, [ Xplore Link ]

2016: "Rapidly Exploring Random Cycles: Persistent Estimation of Spatiotemporal Fields With Multiple Sensing Robots"   by Xiaodong Lan and Mac Schwager   vol. 32, no. 5, pp. 1230-1244, October 2016, [ Xplore Link ]

2015:  " ORB-SLAM: A Versatile and Accurate Monocular SLAM System" by  Raul Mur-Artal, J. M. M. Montiel and Juan D. Tardos vol. 31, no. 5, pp. 1147-1163, 2015 [ Xplore Link ].

2014:  " Catching Objects in Flight" by  Seungsu Kim, Ashwini Shukla, Aude Billard vol. 30, no. 5, pp. 1049-1065, 2014 [ Xplore Link ].

2013: " Robots Driven by Compliant Actuators: Optimal Control under Actuation Constraints" by  David J. Braun, Florian Petit, Felix Huber, Sami Haddadin, Patrick van der Smagt, Alin Albu-Schäffer, Sethu Vijayakumar vol. 29, no. 5, pp. 1085-1101, 2013 [ Xplore Link ].

2012: " Reinforcement Learning With Sequences of Motion Primitives for Robust Manipulation" by  Freek Stulp, Evangelos A. Theodorou, Stefan Schaal vol. 28, no. 6, pp. 1360-1370, 2012 [ Xplore Link ].

2011: " Human-Like Adaptation of Force and Impedance in Stable and Unstable Interactions" by  Chenguang Yang, Gowrishankar Ganesh, Sami Haddadin, Sven Parusel, Alin Albu-Schaeffer, Etienne Burdet vol. 27, no. 5, pp. 918-930, 2011 [ Xplore Link ].

2010: " Design and Control of Concentric-Tube Robots" by  Pierre E. Dupont, Jesse Lock, Brandon Itkowitz, Evan Butler vol. 26, no. 2, pp. 209-225, 2010 [ Xplore Link ].

2009: "Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing" by Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis, Andrew E. Johnson, Adnan Ansar, Larry Matthies vol. 25, no. 2, pp. 264-280, 2009 [ Xplore Link ].

2008: " Smooth Vertical Surface Climbing with Directional Adhesion" by  Sangbae Kim, Matthew Spenko, Salomon Trujillo, Barrett Heyneman, Daniel Santos, Mark R. Cutkosky vol. 24, no. 1, pp. 65-74, 2008 [ Xplore Link ].

2007: " Manipulation Planning for Deformable Linear Objects" by  Mitul Saha, Pekka Isto vol. 23, no. 6, pp. 1141-1150, 2007 [ Xplore Link ].

2006: " Exactly Sparse Delayed-State Filters for View-Based SLAM" by  Ryan M. Eustice, Hanumant Singh, John J. Leonard vol. 22, no. 6, pp. 1100-1114, 2006 [ Xplore Link ].

2005: " Active Filtering of Physiological Motion in Robotized Surgery Using Predictive Control" by  Romuald Ginhoux, Jacques Gangloff, Michel de Mathelin,Luc Soler, Mara M. Arenas Sanchez, Jacques Marescaux vol. 21, no. 1, pp. 67-79, 2005 [ Xplore Link ].

2004: " Reactive Path Deformation for Nonholonomic Mobile Robots" by  Florent Lamiraux, David Bonnafous, Olivier Lefebvre vol. 20, no. 6, pp. 967-977, 2004 [ Xplore Link ].



Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure

  • Open access
  • Published: 29 April 2022
  • Volume 2, article number 8 (2022)


  • Michael O. Macaulay (ORCID: orcid.org/0000-0003-2027-0545)
  • Mahmood Shafiee


Machine learning, and in particular deep learning, techniques have demonstrated great efficacy in training, learning, analyzing, and modelling large complex structured and unstructured datasets. These techniques have recently been widely deployed across industries to support robotic and autonomous system (RAS) requirements and applications, ranging from planning and navigation to machine vision and robot manipulation in complex environments. This paper reviews the state of the art in RAS technologies (including unmanned marine robot systems, unmanned ground robot systems, climbing and crawler robots, unmanned aerial vehicles, and space robot systems) and their application to the inspection and monitoring of mechanical systems and civil infrastructure. We explore the various types of data provided by such systems and the analytical techniques being adopted to process and analyze these data. The paper provides a brief overview of machine learning and deep learning techniques and, more importantly, a classification of the literature that has reported the deployment of such techniques for RAS-based inspection and monitoring of utility pipelines, wind turbines, aircraft, power lines, pressure vessels, bridges, etc. Our research provides documented information on the use of advanced data-driven technologies in the analysis of critical assets and examines the main challenges to the application of such technologies in industry.


1 Introduction

There has been considerable literature concerning the deterioration of critical systems and infrastructure around the world, and the resulting health and safety implications, whether these are roads, bridges, or energy-related infrastructure. As reported by [ 1 ], there are at least 150,000 bridges in the United States alone that have lost their structural integrity and are no longer fit for purpose. Mechanical systems and civil infrastructure, deemed critical assets by both government and industry, are vulnerable to damage mechanisms that can adversely affect social services and the overall productivity of an economy.

This has ensured that regular inspection and maintenance is now standard practice. The operation and maintenance (O&M) costs resulting from standardized inspection and maintenance practices have been quite considerable for government and industry. O&M cost accounts for a large proportion of lifecycle costs in critical systems; for instance, O&M expenditure in the wind energy industry amounts to 25%-30% of total costs [ 2 ]. The challenges to conventional maintenance and inspection practices for civil infrastructure and mechanical systems stem from the fact that most methods and protocols employed are bureaucratic and labour intensive. The inspection and monitoring of assets are usually undertaken manually, with technicians and operators sometimes having to travel to distant locations hundreds of miles away. In some cases, operators and technicians must work in environments subject to intense conditions caused by heat, cold, noise, wetness, dryness, etc. In other cases, the location may be inaccessible to human technicians, as in the case of large storage tanks or underground pipelines.

Technological advancements and the emergence of robotics and autonomous systems (RAS) have begun to revolutionize the monitoring and inspection of mechanical systems and civil infrastructure. This revolution has generated interest in and demand for the use of RAS technologies to support the monitoring, inspection and maintenance of offshore wind farms, gas and utility pipelines, power lines, bridges, railways, high-rise buildings, vessels, storage tanks, underwater infrastructure, etc., in order to mitigate the health and safety risks that human operators currently experience while inspecting or monitoring such infrastructure within the energy, transport, aerospace and manufacturing sectors [ 3 ]. There is a drive in both industry and government for the development and availability of RAS technologies that can be deployed to provide data on the condition of assets and help technicians undertake actions deemed necessary, based on the information provided by RAS. This information can be signals provided by hardware instruments, or images taken by cameras of damaged, shadowed, rough, or rusty surfaces.

Robot inspections have been proven to be more efficient and faster than human inspections. For instance, the inspection of wind turbines using unmanned aerial vehicles (UAVs) takes considerably less time than conventional visual inspection [ 3 ]. As indicated in a case study reported by [ 4 ], the traditional rope access method can inspect only one wind turbine per day, whereas a UAV can inspect up to three wind turbines in a day. Vast amounts of data in diverse formats (such as audio, video, or digital codes) can be collected by the numerous RAS technologies deployed to monitor and inspect infrastructure. However, it would be quite time-consuming, if not impossible, for human operators to analyze this volume of incoming data using conventional computing models. Machine learning (ML) techniques provide advanced computational tools to process and analyze all the data provided by RAS technologies efficiently, speedily, and accurately. The evolution from teleoperated robot systems, which require remote human control, to autonomous systems, which once pre-programmed can operate without human intervention, has helped the maturity and ascendance of RAS technologies, which remove the need for travel, bureaucratic paperwork requirements, etc. While a number of RAS technologies may still work offline, a growing number are wireless, remotely transferring data (e.g., images of structures and materials) to a control office through inter-networks for analysis.
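A back-of-the-envelope calculation using the rates cited above (about one turbine per day by rope access versus up to three per day by UAV) shows how the gap scales with farm size; the 90-turbine farm is a hypothetical example:

```python
# Back-of-the-envelope comparison using the inspection rates cited above:
# rope access ~1 turbine/day, UAV up to ~3 turbines/day.
def days_to_inspect(num_turbines: int, turbines_per_day: float) -> float:
    """Days needed to inspect a wind farm at a given inspection rate."""
    return num_turbines / turbines_per_day

farm_size = 90  # hypothetical wind farm size, for illustration only
rope_days = days_to_inspect(farm_size, 1.0)
uav_days = days_to_inspect(farm_size, 3.0)
print(f"Rope access: {rope_days:.0f} days, UAV: {uav_days:.0f} days")
```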

The aim of this paper is to provide an academic contribution by reporting on literature and research related to the use of ML in RAS-based inspection and monitoring of mechanical systems and civil infrastructure. It also proposes a classification and analysis of the different ML techniques used for the analysis of data yielded from RAS-based inspections. In other words, the research in this paper investigates and identifies which study, in which literature, has used which ML technique to support the different RAS technologies deployed for inspection purposes. To achieve this aim, we identify the relevant literature using keywords including: robotics, inspection, machine learning, maintenance, mechanical engineering, civil infrastructure, and asset. We will also provide a review and classification of ML techniques; the types of damage mechanisms being considered (e.g., corrosion, erosion, fatigue, cracks, etc.); the types of inspections; and the robotic platforms that have been used to support both industry and academic research. In addition, a review will be conducted of the characteristics of datasets collected during RAS inspections of civil and mechanical infrastructure, including: sources of data (public or non-public); types of data (e.g., image, video, documents, etc.); size of data; velocity or rate of data generation and transmission; and variety of data (structured or unstructured). Following on from this, there will be an evaluation of the results and findings. Finally, there will be an exploration of potential developments in RAS for the inspection and monitoring of future assets.

The rest of this paper is organized as follows. Section  2 reviews different types of RAS technologies that have been proposed and designed to support the inspection and monitoring of mechanical systems and civil infrastructure. Section  3 reviews the characteristics of the data collected by RAS systems for inspection and monitoring purposes. Section  4 reviews various types of ML techniques that can and have been used to process and analyze data from RAS inspections. Section  5 discusses the findings of the literature review undertaken in this research and then finally, Section  6 reviews some of the current technology gaps and challenges in the application of ML techniques for RAS based inspection of mechanical systems and civil infrastructure. The organization of this literature review is schematically illustrated in Fig.  1 .

Figure 1: Schematic illustration of the organization of the literature review

2 RAS technologies for monitoring and inspection

Today, there are a variety of robotic and autonomous systems being developed and deployed in various industries, including aerospace, manufacturing, energy, transport, agriculture, healthcare, etc. RAS systems are widely used to support monitoring, maintenance and inspection of mechanical systems and civil infrastructure. These technologies are provided with artificial intelligence (AI), sometimes referred to as machine learning (ML), to enable and complete complex tasks, as well as process vast amounts of data. The mechanical design for an RAS system used to support inspection, monitoring and maintenance purposes, can be categorized by specific locomotion and adhesion mechanisms. The adoption of an inspection robot’s locomotion and adhesion mechanisms is sometimes offset against task or application specific requirements such as payload, power requirements, velocity, and mobility [ 5 , 6 ].

Locomotion in robotics refers to directional movement that makes it possible for a robot to move from one location to another: the mechanism that makes a robot capable of moving in its environment. The literature identifies four main types of locomotion that a robot system could be fitted with, depending on the task and environment it is being built to support [ 5 ]: arms and legs, wheels and chains, sliding frame, and wires and rails. Considering the pros and cons of each type, arm- and leg-based robots are better suited to maneuvering around obstacles in the environment than other locomotion systems. Conversely, wheel- and chain-driven locomotion is best suited to environments with a flat and even surface and ill-suited to navigating obstacles. The sliding frame mechanism comprises a mechanical design with two frames that move against one another in rotation; this design, however, allows only low speeds. Finally, wire-and-rail locomotion comprises a simple system in which the robot is held in place by wires and rails [ 5 ]. Adhesion in robotics refers to the mechanism by which robot systems attach or cling to surfaces in their environment. Common adhesion mechanisms for RAS systems that support inspection tasks range from magnetic adhesion and pneumatic adhesion (whether the passive suction cup type, active suction chamber type, or vortex thrust type) to vacuum suckers, propellers, and dry adhesion.
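The locomotion and adhesion taxonomy above could be encoded as a simple lookup, for instance when tagging inspection robots in a survey database. The structure and suitability notes below are an informal paraphrase of the review, not a formal standard:

```python
# Informal encoding of the locomotion and adhesion taxonomy described above.
# The suitability notes paraphrase the review; they are not a formal standard.
LOCOMOTION_TYPES = {
    "arms_and_legs": "best for maneuvering around obstacles",
    "wheels_and_chains": "best on flat, even surfaces; poor with obstacles",
    "sliding_frame": "two frames moving against one another; low speed",
    "wires_and_rails": "simple; robot held in place by wires and rails",
}

ADHESION_MECHANISMS = [
    "magnetic",
    "pneumatic (passive suction cups / active suction chambers / vortex thrust)",
    "vacuum sucker",
    "propeller",
    "dry adhesion",
]

def locomotion_note(locomotion_type: str) -> str:
    """Look up the suitability note for a given locomotion type."""
    return LOCOMOTION_TYPES[locomotion_type]

print(locomotion_note("wheels_and_chains"))
```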

In the following subsections, we review RAS-based inspection systems currently used in different industry sectors. These systems range from platforms operating below sea level, through those operating within the troposphere (ground level to about 10-20 km above sea level), to those purposed for the thermosphere and exosphere (space and beyond). We therefore suggest five robot categories: unmanned marine robots, ground-based robots, climbing and crawler inspection robots, unmanned aerial robots, and space inspection robot systems.

2.1 Unmanned marine robot systems

Some literature uses the term 'unmanned marine vehicle (UMV)' as an umbrella term for unmanned surface vehicles (USV) and unmanned underwater vehicles (UUV) [ 6 ]. UUVs can be further classified as either autonomous underwater vehicles (AUV) or remotely operated vehicles (ROV) [ 7 ].

AUVs are unmanned, pre-programmed robot vehicles deployed autonomously into the ocean depths, without cabling or human intervention. When an AUV completes its task, it returns to a pre-programmed location where its data can be retrieved, downloaded, processed, and analyzed. An ROV is likewise an unmanned robot deployed into the ocean depths; the difference is that an ROV is connected to a ship by cables and piloted by an operator on board, with the cables transmitting commands and data between the operator and the robot. AUVs can be deployed to inspect hazardous objects, survey and map wrecks, and examine deep underwater infrastructure (e.g., subsea cables). ROVs are usually deployed into deep-water environments that are dangerous or challenging for human divers. Both AUVs and ROVs are fitted with a variety of sensors to collect data, which may be used for military or civilian surveys, inspections, surveillance, and exploration. AUVs and ROVs are usually equipped with cameras for capturing video imagery underwater; ROVs use theirs to transmit video telemetry to human operators for analysis and decision-making. Sound navigation and ranging (sonar) and fiber-optic gyros (FOG) support object detection, obstacle avoidance, and navigation. ROVs may also be fitted with robotic arms for collecting underwater samples [ 8 ].

2.2 Unmanned ground robot systems

Unmanned ground-based robots operate autonomously on ground surfaces. They are sometimes referred to in the academic literature as mobile robots, unmanned ground vehicles (UGV), or land-based robots. Ground-based robots are also sometimes categorized by their locomotion, which among other criteria depends on the deployment environment, usually an even, stable one. A ground-based robot has the advantage of being able to carry comparatively large payloads where appropriate; the disadvantage of this robot type is its limited mobility on uneven terrain [ 6 ].

Unmanned ground-based robots can be categorized as wheeled robots, walking (or legged) robots, tracked robots, or hybrids of wheeled, legged, and tracked designs [ 9 ]. Wheeled robots navigate on the ground using motorized wheels to propel themselves [ 9 ]. The literature identifies four types of wheel, differentiated by the number of degrees of freedom (DOF) they provide, where DOF is the number of independent variables needed to define the motion or position of an object (or mechanism) in space. These four types are the fixed standard wheel, the castor wheel, the Swedish wheel, and the ball (or spherical) wheel. There are also several configurations of wheeled robot, including single-wheeled, two-wheeled, three-wheeled, and so on, each with its own mobility characteristics [ 9 ].

Legged (or walking) robots, unlike wheeled robots, can navigate both even and uneven surfaces, hard and soft, and can detect obstacles in their path or environment. Legged robots can be classified as one-legged (hoppers), two-legged (humanoid), three-legged, four-legged (quadruped), five-legged, six-legged (hexapod), and so on [ 9 ]. Hybrid ground-based robots combine legged, wheeled, and tracked locomotion systems in any given configuration [ 9 ].

The applications of unmanned ground robots are numerous, ranging from the nuclear industry, where robots replace human operators in radioactive environments, to military operations such as surface repairs, navigating minefields, explosive ordnance disposal (EOD), and carrying and transporting payloads. Other state-of-the-art applications include reconnaissance, surveillance, and target acquisition operations, and space exploration, as in the case of NASA's planetary rovers [ 10 ]. Unmanned ground vehicles are also fitted with an array of sensor payload options to support autonomous operation, navigation through the environment, and data collection. Cameras scan the robot's environment and support calculation of its position. Motion detectors, infrared (IR) sensors, and temperature and contact sensors support object detection, obstacle and collision avoidance, and obstacle localization. Laser range finders, which use a laser beam to generate distance measurements and produce range data, also support object detection and obstacle avoidance [ 8 ].

2.3 Climbing and crawler inspection robots

Wall-climbing and crawler robots were developed for movement on vertical planes for the inspection and maintenance of assets such as storage tanks, nuclear power facilities, and high-rise buildings [ 11 , 12 ]. Oil refineries contain storage tanks that require cleaning, along with routine inspection and non-destructive testing (NDT) to check for cracks and leaks. Carrying out these routine inspection and maintenance tasks manually incurs very high labor and financial costs; the development and deployment of climbing and crawling robot systems can help automate them [ 12 ].

Climbing robots adopt an adhesion mechanism based on the type of environment they are deployed into [ 6 ], employing magnetic, pneumatic, or negative-pressure adhesion (depending on the suction or thrust type). According to the literature, climbing robot systems are usually fitted with arm-and-leg locomotion mechanisms; the number of limbs varies from two to eight, although eight-legged robots are uncommon. Alternatively, climbing and crawler robots can be fitted with wheeled or chain-driven locomotion [ 5 ]. The adhesion mechanisms available today enable climbing and crawling robots to attach to structures and materials while providing a reliable platform for attached payloads and tools [ 6 ].

The literature reports some information about the velocity and mobility of climbing and crawler robots. Climbing robot systems may need to reach high velocity on a vertical plane for efficient movement between inspection locations. With respect to mobility, climbing robots fitted with arms and legs can navigate uneven surfaces, steps, and other objects in the environment [ 5 ]. Regarding payload, sensors such as ultrasonic, gravity, and acceleration sensors are used to measure and report the distance to objects or obstacles ahead [ 5 , 6 , 11 – 13 ]. The literature offers varied advice on payload capacity, ranging between 10 kg and 30 kg; naturally, the weight requirement depends on the tasks the robot is deployed to complete [ 5 , 11 , 12 ]. Robotics engineers must nevertheless contend with the problem that climbing and crawling robots need to carry heavy payloads while maintaining secure adhesion on the challenging surfaces they are designed for [ 6 ].

2.4 Unmanned aerial robot systems (or drones)

Unmanned aerial robot systems are interchangeably referred to as unmanned aerial vehicles (UAVs), and more commonly as drones, in the literature. Disciplines ranging from environmental monitoring to civil engineering are increasingly deploying drones for various inspection applications [ 14 ], because research has consistently shown that using drones for inspection tasks reduces the need for human actors, their risk of injury and fatality, and maintenance and downtime costs [ 14 – 16 ]. Seo et al . [ 14 ] indicate that the selection of a drone for a particular application is based on criteria including mission duration, battery life, camera and video resolution, payload capacity, GPS and collision avoidance, and cost performance [ 14 ]. Locomotion in drones is provided by propellers, also referred to as rotors in some of the literature. The two terms are used interchangeably, although technically a rotor could be considered a horizontal propeller (such as those mounted on a helicopter), while a propeller could be considered a vertical rotor (such as that mounted on an airplane); they are essentially identical devices seen from different angles. A propeller propels an object using thrust for horizontal movement and lift for vertical movement, providing a vertical take-off and landing (VTOL) capability. Motors power the propellers and spin them at high speed, which in turn creates the thrust or lift that gives the drone its locomotion [ 15 ]. Lattanzi et al . [ 6 ] contend that drones trade reduced stability and payload capacity for increased movement and mobility [ 6 ]. In the literature, flight time (duration of operation) is inextricably linked to the battery life or number of batteries available in a drone: the battery powers the drone, and most drones can only provide enough power for 20-30 minutes of flight [ 14 ].

Drones are described in the literature as technology platforms that can support a variety of applications and carry a variety of sensor payloads, from thermal and infrared to optical cameras. Light detection and ranging (LIDAR) is used to measure the distance to objects and, combined with radar, their angle and velocity as well [ 16 ]. The literature indicates that the greater the number of propellers (or rotors) supporting the drone, the greater the drone's payload capacity, although one caveat is that increased payload reduces the drone's flight time and range [ 17 ]. Drones can support payloads from 150 g to 830 g, depending on battery life [ 17 ]. Studies that have deployed drones for infrastructure inspection report using commercial cameras with resolutions of 12 to 18 megapixels, with each 15-minute flight providing over 1200 images [ 18 ].

2.5 Space inspection robot systems

RAS have increasingly become a critical aspect of space technology, supporting a variety of space missions. Space robots can be classified as small or large manipulators, or humanoid robots [ 19 ]. Space robots, also referred to as space manipulators in some literature, experience reduced gravitational forces; in most cases they rotate, hover, and glide in orbit [ 20 ]. The literature describes two types of application for space-based robotics: on-orbit assembly and on-orbit maintenance (OMM). OMM applications involve repair, refueling, debris removal, inspection, and so on.

Since the scope of this paper is concerned with inspection-purposed platforms, we review only those space robot systems developed to support inspection tasks. These include the orbiter boom sensor system (OBSS), developed by the Canadian Space Agency (CSA) and deployed to inspect the surface of the thermal protection system of space shuttles [ 19 ]. Space robots can also be deployed to ferry payloads ranging from kilograms to tons on space installations [ 21 ].

The authors of [ 22 ] described a teleoperated robotic flying camera, the Autonomous Extra Vehicular Robotic Camera (AERCam), used to give astronauts a way to inspect and monitor the shuttle and space station. The first version of the robot, AERCam Sprint, was deployed on a shuttle in 1997 and was fitted with a ring of twelve infrared detectors and two color cameras to provide vision [ 22 ]. The authors of [ 23 ] reported on a space inspection robot developed by NASA called Tendril, a manipulator purposed to support space missions by inspecting difficult-to-reach locations, e.g., fissures and craters. Nishida et al . [ 24 ] reported a prototype model of what they describe as an 'end-effector' for an inspection and diagnosis space robot. Pedersen et al . [ 25 ] mentioned a space robot called Inspector, designed to inspect the Mir station, noting however that it failed while in flight.

3 Features of data collected from RAS-based inspections

This section reviews the characteristics of the data collected by RAS technologies, which are then processed and analyzed by analytical methods and techniques to support the monitoring and inspection of mechanical systems and civil infrastructure. We explore the characteristics of the input data collected by RAS using the four Vs data model [ 26 – 29 ]. The literature indicates that data can be characterized by its volume, the quantity of data generated and stored; variety, the type and format of the data collected; velocity, the speed at which data is produced; and veracity, the quality of the data collected as input. These four characteristics are briefly described below:

3.1 Volume (quantity or size of data)

Meyrowitz et al . [ 8 ] advised that there is a direct relationship between the type of sensor fitted to a robotic autonomous system and the volume of data produced: certain sensors by default generate larger quantities of data than others, e.g., cameras producing video data can generate millions of bits of data.

3.2 Variety (type and format of data)

The literature review showed that most research deploying climbing and crawler robots fits them with an array of sensors to collect a variety of data types, including sound waves and distances to objects, via ultrasonic sensors, and acceleration and velocity, via accelerometers and gravity sensors [ 13 ]. Climbing and crawling robots also collect image and video data using cameras [ 11 , 13 ]. Reviews of the data collected by UAVs show that they have mostly been purposed to collect image and video data [ 15 , 18 , 30 – 32 ]; in a study by Alharam et al . [ 33 ], UAVs were also purposed to collect gas-leakage data, specifically methane (CH4), from oil and gas pipelines. The data collected by UUVs, specifically ROVs for underwater inspection, include image and video data, plus angular velocity, orientation, depth, and pressure data from optical and gyro sensors [ 7 , 13 ]. The data collected by unmanned ground robots (UGRs) vary from images and videos captured by cameras to distance measurements from range-finder sensors and sound-wave data from ultrasonic sensors [ 10 ].

3.3 Velocity (speed of data generation)

While the literature indicates that certain types of sensors produce higher volumes of data than others, it also indicates that the speed of data generation and transmission correlates directly with the transmission medium or link used, and sometimes with the environment the data travels through [ 8 ]. Except where data is processed and analyzed on board the RAS, data rates are slowed by the environment: Meyrowitz et al . [ 8 ] demonstrated that, with current technology, underwater RAS operate in an environment that reduces the rate of data transmission [ 8 ].

3.4 Veracity (quality and accuracy of data)

The literature indicates that the quality and accuracy of data can be directly linked to the frequency and type of transmission link used by the RAS. For UUVs, Meyrowitz et al . [ 8 ] gave the example of sonar imaging, where the combination of low frequency and poor transmission links results in reduced resolution and substantial interference, requiring the image and acoustic data to be cleaned [ 8 ]. This has driven research into higher-performance data transmission links, from fibre optics to laser links. Alternatively, RAS with ML technology on board can process and analyze data with greater veracity, because the data has not yet been subjected to the degradation associated with transmitting it.

4 Machine learning techniques

A review of the literature provides a distinct categorization of ML algorithms into supervised learning, unsupervised learning, and reinforcement learning. This section provides an overview of some of the popular ML techniques documented in the literature and used to process and analyze data collected from RAS-based inspection operations.

4.1 Supervised learning

Supervised learning techniques provide prediction-based solutions for problems that can be categorized as either classification or regression, and they require vast amounts of labelled data as input. In this approach to ML, the outputs (sometimes referred to as targets in the literature) are pre-determined and directed towards interpretation and prediction. The dataset is separated into a training set and a test set, and labelled with the features of interest. Once trained, the system can identify and apply these labels to new data; e.g., a system taken through the supervised ML process can be trained to identify new images of an object. Thus, given input (x) and output (y), a supervised learning algorithm learns the mapping function y = f(x), so that given a new input (x) it can predict the output (y) [ 34 – 37 ].

There are two types of supervised learning. The first is regression, where a model predicts a continuous quantity from its input variables. Regression predicts continuous values such as height or weight; these values are called continuous because they can take an infinite number of possibilities, e.g., weight is continuous because there are infinitely many possible values for a person's weight [ 38 , 39 ]. The second type is classification, where the output or target is categorical (a discrete, finite number of distinct groups or classes). Classification is the process of identifying a model that takes a given dataset and sorts its records into distinct classes or labels; classification models in supervised machine learning are thus often described as techniques for predicting a class or label [ 34 – 37 , 40 , 41 ].
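The supervised workflow described above — split the labelled data, fit on the training set, evaluate on the held-out test set — can be sketched in plain Python. The dataset and the simple threshold "model" below are invented purely for illustration; they are not drawn from the reviewed literature.

```python
import random

# Toy labelled dataset: 1-D sensor reading -> binary label (illustrative)
data = [(1, 0), (2, 0), (3, 0), (4, 0), (10, 1), (11, 1), (12, 1), (13, 1)]

random.seed(0)
random.shuffle(data)
train, test = data[:6], data[6:]          # hold out a test set

# "Fit": learn y = f(x) as a threshold halfway between the class means
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def predict(x):
    return 1 if x > threshold else 0

# Evaluate on data the model has never seen
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

On this trivially separable toy data the held-out accuracy is perfect; real inspection data would of course behave far less cleanly.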

While most ML algorithms can be applied to both classification and regression problems, algorithms best suited to classification include K-nearest neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, naive Bayes, random forests, and artificial neural networks (ANN). Conversely, algorithms best suited to regression include linear regression as well as random forests [ 39 ]. These techniques are briefly reviewed in the following.

4.1.1 Linear regression

Linear regression is a supervised model used to predict the value of one variable from the value of another. The model assumes a linear relationship between the input variable (x) and a single output (y), so that the output (y) can be calculated as a linear combination of the input (x); this is referred to as simple linear regression. The literature refers to the case of multiple input variables as multiple linear regression. Linear regression models fit a straight line to a dataset to describe the relationship between two variables.
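The straight-line fit above has a closed-form least-squares solution, which this minimal sketch computes directly (the data is invented so that y = 2x + 1 exactly):

```python
# Simple linear regression: fit y = slope*x + intercept by least squares
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # illustrative data, exactly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form estimates: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
# slope -> 2.0, intercept -> 1.0
```

For noisy data the same formulas return the best-fitting line in the least-squares sense rather than an exact interpolation.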

4.1.2 Support vector machine (SVM)

SVM is a supervised machine learning algorithm that can be applied to both classification and regression. The SVM algorithm identifies a "decision boundary" or "hyperplane" that separates a dataset into two distinct classes, and attempts to maximize the distance between the boundary and the nearest data points of the two classes. The support vectors are the data points nearest to the decision boundary; a change in the position of the support vectors results in a change in the position of the boundary. The further data points lie from the decision boundary, the more confident their classification; the distance between the decision boundary and the nearest data point is called the margin. SVM is very accurate and works well on small datasets, but on large datasets it usually incurs long training times [ 33 , 35 , 42 – 44 ].
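A linear SVM can be trained with subgradient descent on the regularized hinge loss, where the regularization term is what pushes the margin to be as wide as possible. This is a minimal sketch on invented 2-D data (labels must be -1 or +1), not a production SVM solver:

```python
# Minimal linear SVM: subgradient descent on the regularized hinge loss
points = [((2.0, 3.0), 1), ((3.0, 3.0), 1), ((-2.0, -1.0), -1), ((-3.0, -2.0), -1)]
w, b = [0.0, 0.0], 0.0
lr, reg = 0.01, 0.01

for _ in range(1000):
    for (x1, x2), y in points:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:                      # inside the margin: hinge gradient
            w[0] += lr * (y * x1 - 2 * reg * w[0])
            w[1] += lr * (y * x2 - 2 * reg * w[1])
            b += lr * y
        else:                               # correctly classified: only shrink w
            w[0] -= lr * 2 * reg * w[0]
            w[1] -= lr * 2 * reg * w[1]

def classify(x1, x2):
    # sign of the distance to the learned hyperplane w.x + b = 0
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1
```

On linearly separable data like this, the learned hyperplane classifies every training point correctly; kernel SVMs extend the same idea to non-linear boundaries.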

4.1.3 Decision trees (DT)

Decision trees (DTs) are a supervised ML algorithm that builds classification or regression models. A DT takes the form of a tree diagram, breaking a dataset down into smaller subsets to develop a tree with a root node and decision nodes, with each outward branch of a node representing a possible decision, outcome, or reaction. The tree comprises decision nodes and leaf nodes, with the leaf nodes representing a classification or decision. DTs are typically used to determine a statistical probability or, more simply, a course of action for a complex problem. They provide a visual representation of a decision-making process and can handle both numerical and categorical data. However, DTs are susceptible to unbalanced datasets, which generate biased models, and to overfitting, which occurs when a model fits the training data too closely and becomes less accurate on new, previously unseen data [ 33 – 35 , 42 – 44 ].
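The core step of building a decision tree is choosing the split that best separates the classes. This sketch fits a single depth-one tree (a "stump") by picking the threshold with the lowest weighted Gini impurity; the data and the one-feature setting are illustrative simplifications:

```python
# One decision-tree split: choose the threshold minimizing weighted Gini impurity
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)           # fraction of class 1
    return 2 * p * (1 - p)                  # 0 when pure, 0.5 when 50/50

best_threshold, best_impurity = None, float("inf")
for i in range(len(xs) - 1):
    t = (xs[i] + xs[i + 1]) / 2             # candidate split: midpoint
    left = [y for x, y in zip(xs, ys) if x <= t]
    right = [y for x, y in zip(xs, ys) if x > t]
    impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(xs)
    if impurity < best_impurity:
        best_threshold, best_impurity = t, impurity
```

A full tree applies this search recursively to each resulting subset until the leaves are (nearly) pure, which is exactly where the overfitting risk noted above comes from.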

4.1.4 Random forest (RF)

Random forest (RF) is a supervised ML algorithm that can be applied to both classification and regression problems. It grows multiple individual decision trees for a given problem and merges their outputs to make a more accurate prediction. The RF technique uses randomness and ensemble learning to produce uncorrelated forests of decision trees. Ensemble learning is a method that combines various classifiers, such as decision trees, and aggregates their predictions to provide a solution. The most commonly known ensemble methods are bagging and boosting. Bagging creates different subsets of the training data, with the final output based on majority voting. Boosting, on the other hand (e.g., AdaBoost, XGBoost), combines "weak learners" into "strong learners" by creating sequential models so that the final model delivers the highest accuracy. Random forests, unlike individual DTs, are far less susceptible to overfitting; however, they are a time-consuming and resource-intensive technique [ 33 ].
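The bagging step used by random forests — draw bootstrap samples, fit one learner per sample, majority-vote their predictions — can be sketched with a trivial threshold "stump" standing in for a full decision tree. All data and the stump learner are illustrative:

```python
import random

# Bagging sketch: bootstrap-sample the data, fit one simple stump per sample,
# then majority-vote the stumps' predictions (stumps stand in for decision trees)
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
random.seed(0)

stumps = []                                  # each stump is just a threshold
while len(stumps) < 15:
    sample = [random.randrange(len(xs)) for _ in xs]   # bootstrap indices
    if {ys[i] for i in sample} != {0, 1}:
        continue                             # degenerate one-class sample: skip
    mean0 = sum(xs[i] for i in sample if ys[i] == 0) / sum(1 for i in sample if ys[i] == 0)
    mean1 = sum(xs[i] for i in sample if ys[i] == 1) / sum(1 for i in sample if ys[i] == 1)
    stumps.append((mean0 + mean1) / 2)       # "fit" on this bootstrap sample

def predict(x):
    votes = sum(1 for t in stumps if x > t)  # votes for class 1
    return 1 if votes > len(stumps) / 2 else 0
```

A real random forest additionally randomizes the features considered at each split, which further decorrelates the trees.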

4.1.5 XGBoost (Extreme Gradient Boosting)

XGBoost is short for 'Extreme Gradient Boosting'. XGBoost is a supervised ML algorithm implementing the gradient-boosted decision tree framework, and it can be applied to classification, regression, and prediction problems. It creates and optimizes (through boosting) each successive decision tree so that the errors of each tree are reduced relative to the tree before it. The boosting technique involves gradual learning from the data, with each iteration improving the predictions used to build the subsequent trees.
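The boosting loop described above — each new learner is fitted to the residual errors of the ensemble so far — can be sketched for regression with one-split stumps. This is the generic gradient-boosting idea, not the actual XGBoost implementation, and the data, stump learner, and learning rate are illustrative:

```python
# Gradient-boosting sketch: each stage fits a one-split stump to the current
# residuals and is added to the ensemble with a small learning rate
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 3.0, 4.0]
lr = 0.5

pred = [sum(ys) / len(ys)] * len(xs)         # stage 0: predict the mean

def fit_stump(residuals):
    # choose the midpoint split minimizing squared error of per-side means
    best = None
    for i in range(len(xs) - 1):
        t = (xs[i] + xs[i + 1]) / 2
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

initial_mse = sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)
for _ in range(50):                          # boosting rounds
    residuals = [y - p for y, p in zip(ys, pred)]
    t, lm, rm = fit_stump(residuals)
    pred = [p + lr * (lm if x <= t else rm) for p, x in zip(pred, xs)]
final_mse = sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)
```

Each round shrinks the residuals, so the training error falls monotonically; XGBoost adds regularization and second-order gradient information on top of this basic loop.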

4.1.6 K-Nearest Neighbor (K-NN)

The KNN algorithm is a supervised ML algorithm best suited to classification models. It estimates the probability that a new data point belongs to a particular group by looking at the data points in its proximity and identifying which of them have features similar to the new point; the new data point is then assigned to the group containing the most similar nearby points. KNN is very easy to implement and requires no explicit training phase, although prediction can be slow on large datasets; furthermore, it does not always classify data points well, and its accuracy depends on the quality of the dataset [ 35 , 42 – 44 ].
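The neighbor-voting rule is short enough to write out in full. This sketch classifies a new 2-D point by majority vote among its k = 3 nearest training points; the data and labels are invented for illustration:

```python
from collections import Counter

# k-nearest-neighbour sketch: label a new point by majority vote of the
# k closest training points (toy 2-D data, Euclidean distance, k = 3)
train = [((1.0, 1.0), "a"), ((1.5, 2.0), "a"), ((2.0, 1.0), "a"),
         ((8.0, 8.0), "b"), ((8.5, 9.0), "b"), ((9.0, 8.0), "b")]

def knn_predict(point, k=3):
    # sort training points by squared distance to the query point
    nearest = sorted(train, key=lambda item: (item[0][0] - point[0]) ** 2 +
                                             (item[0][1] - point[1]) ** 2)
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

Note that every prediction scans the whole training set, which is why KNN slows down as the dataset grows.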

4.1.7 Naive Bayes

Naive Bayes is a supervised ML technique used to solve classification problems, based on counting and conditional probability. It uses Bayes' theorem to classify data, and 'naively' assumes that all features of a data point are independent of one another. Bayes' theorem rests on the understanding that the probability of an event may need to be updated as new data becomes available. The algorithm tends to perform much better with categorical data (for example, it works well for document classification and spam filtering) than with numerical data [ 34 , 35 , 42 ].
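The counting-and-conditional-probability idea can be sketched for the spam-filtering use case mentioned above. The toy documents and labels are invented, and Laplace smoothing is added so that unseen words do not zero out a class score:

```python
from collections import Counter
from math import log

# Naive Bayes sketch for text classification with Laplace smoothing
docs = [("buy cheap meds", "spam"), ("cheap meds now", "spam"),
        ("meeting tomorrow", "ham"), ("project meeting notes", "ham")]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in docs:
    class_counts[label] += 1
    word_counts[label].update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        # log P(label) + sum of log P(word | label), assuming word independence
        score = log(class_counts[label] / len(docs))
        for w in text.split():
            score += log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

Working in log-probabilities avoids numerical underflow when documents contain many words.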

4.1.8 Logistic regression

Logistic regression is a supervised learning and classification algorithm for predicting a binary outcome: an event occurs (True) or does not occur (False). The algorithm is used to distinguish between two distinct classes. It is considered a supervised ML algorithm because it has X input features and a y target value, and uses the labels in the dataset for training; it works to find the logistic function of best fit describing the relationship between X and y . Logistic regression is similar to linear regression, except that linear regression works with continuous target variables (numbers within a range), while logistic regression is used when the target variable is categorical. The algorithm transforms its output using the sigmoid function to return a value that is then mapped to two or more discrete classes. Binary, multinomial, and ordinal regression are the three main types of logistic regression: binary regression processes Boolean values, multinomial regression processes n ≥ 3 unordered classes, and ordinal regression processes n ≥ 3 ordered classes. Logistic regression has supported applications from medical diagnosis to fraud detection in banking [ 34 , 42 ].
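The sigmoid transform and the fitting procedure can be sketched with gradient descent on the log loss for toy 1-D data; the data, learning rate, and iteration count are illustrative:

```python
from math import exp

# Logistic regression sketch: fit w, b by gradient descent on the log loss
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]

def sigmoid(z):
    # squashes any real value into (0, 1), interpreted as P(y = 1)
    return 1 / (1 + exp(-z))

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x        # gradient of the log loss w.r.t. w
        b -= lr * (p - y)            # gradient of the log loss w.r.t. b

def predict(x):
    return 1 if sigmoid(w * x + b) > 0.5 else 0
```

The 0.5 cut-off on the sigmoid output is what maps the continuous probability back to a discrete class.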

4.1.9 Artificial Neural Network (ANN)

Artificial neural networks (ANNs), which can solve both regression and classification problems, are modelled on the neural networks of the human brain. Just as the brain contains billions of connected neurons that distribute signals, ANNs are made up of artificial neurons, called units, grouped into three kinds of layer. The first, the 'input layer', receives data and forwards it to the second, the 'hidden layer', which performs mathematical computations on the data received. The last layer is the output layer, which returns the result. Deep neural networks (whose use is often called deep learning in the academic literature) are neural networks containing multiple hidden layers [ 35 , 42 – 46 ].
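The layer structure just described can be sketched as a single forward pass through a tiny network. The layer sizes and random weights are illustrative, and no training is shown here (training adjusts the weights via back-propagation):

```python
import numpy as np

# Forward pass through a minimal ANN: input -> hidden (ReLU) -> output (sigmoid)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))       # input layer: 3 features -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))       # hidden layer -> 1 output unit
b2 = np.zeros(1)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)              # hidden-layer computation
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))     # output layer (sigmoid)

out = forward(np.array([0.5, -1.0, 2.0]))
```

Stacking further weight matrices between the input and output is all it takes to turn this into a "deep" network.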

4.1.10 Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a type of ANN that detects patterns and supports the processing of vision-based tasks; like other ANNs, it is built from ML units modelled on the perceptron. A CNN can make predictions by analysing an image, identifying its features, and classifying the image on the basis of that analysis. A CNN consists of multiple layers that process and extract features from the data: the convolutional layer, the rectified linear unit (ReLU) layer, the pooling layer, and the fully connected network (FCN). The convolutional layer contains filters that perform the convolution operation, while the ReLU layer applies an element-wise activation and outputs a rectified feature map. The pooling layer takes the rectified feature map as input and performs a 'down-sampling' operation that reduces its dimensions; the pooled feature map is then flattened from a two-dimensional array into a linear vector. The FCN layer takes this flattened vector as input and classifies the image [ 45 – 48 ].
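The convolution → ReLU → pooling → flatten pipeline can be traced on a tiny fixed input. The 5x5 "image", the 2x2 filter, and the stride choices are all illustrative, and the FCN classification stage is omitted:

```python
import numpy as np

# CNN pipeline sketch: convolution -> ReLU -> 2x2 max pooling -> flatten
image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 single-channel input
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])       # 2x2 diagonal-difference filter

def conv2d(img, k):
    # valid convolution (really cross-correlation, as in most CNN frameworks)
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = conv2d(image, kernel)                      # convolutional layer -> 4x4
rectified = np.maximum(0, feature_map)                   # ReLU layer
pooled = rectified.reshape(2, 2, 2, 2).max(axis=(1, 3))  # 2x2 max pool -> 2x2
flattened = pooled.flatten()                             # vector for the FCN layer
```

Each stage shrinks the spatial dimensions (5x5 → 4x4 → 2x2 → a length-4 vector), which is exactly the down-sampling behaviour described above.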

4.2 Unsupervised learning

The literature tells us that unsupervised learning is where algorithms identify patterns within a given dataset, searching for similarities that can be used to group the data. Some of the most used unsupervised learning algorithms include K-means clustering, hierarchical clustering, anomaly detection, principal component analysis (PCA), independent component analysis (ICA), the Apriori algorithm, and singular value decomposition [ 34 – 37 ].
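K-means, the first algorithm listed, illustrates the grouping-by-similarity idea well: assign each point to its nearest center, move each center to the mean of its cluster, and repeat until nothing changes. The 1-D data and the deterministic initialization are illustrative simplifications:

```python
# 1-D k-means sketch (k = 2): assign points to the nearest center, then move
# each center to the mean of its cluster, until the centers stop moving
points = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
centers = [min(points), max(points)]        # deterministic initialization

while True:
    clusters = [[], []]
    for p in points:
        nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[nearest].append(p)
    new_centers = [sum(c) / len(c) for c in clusters]
    if new_centers == centers:              # converged: no center moved
        break
    centers = new_centers
```

No labels are involved at any point; the two groups emerge purely from the distances between the data points.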

4.3 Reinforcement learning

Reinforcement learning (RL) takes a different approach from supervised and unsupervised learning. RL does not require the system to learn from a dataset; instead, learning results from feedback and reward, through a series of trial-and-error interactions by a software agent. Some of the most common RL algorithms include SARSA(λ), Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), and Asynchronous Advantage Actor-Critic (A3C) [ 35 – 37 ].
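The trial-and-error-with-reward loop can be sketched with tabular Q-learning, the classical precursor of the DQN approach listed above. The five-state corridor environment and all hyper-parameters are invented for illustration:

```python
import random

# Tabular Q-learning sketch: a 5-state corridor where stepping right from the
# last state earns reward 1; the agent learns purely from trial and error
random.seed(0)
n_states, actions = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(500):                                  # episodes
    s = random.randrange(n_states)                    # exploring starts
    for _ in range(100):                              # step cap per episode
        if random.random() < epsilon:
            a = random.choice(actions)                # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])   # exploit
        s_next = max(0, s - 1) if a == "left" else s + 1
        if s_next == n_states:                        # goal reached: reward 1
            Q[(s, a)] += alpha * (1.0 - Q[(s, a)])
            break
        # feedback: move the estimate toward reward + discounted future value
        Q[(s, a)] += alpha * (gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next

greedy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states)}
```

After enough episodes the greedy policy heads right from every state; DQN replaces the Q table with a neural network so the same idea scales to large state spaces.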

4.4 Deep learning

The term deep learning refers to a subset of ML techniques that require vast amounts of data to train models to output values, interpretations, or predictions. Deep learning methods are ANNs with more than one hidden layer, and they can be supervised or unsupervised. Applications of deep learning as supervised learning include image classification, object detection, and face recognition; applications as unsupervised learning are usually instances where there is no labelled data, and clustering problems, e.g., image encoding and word embedding.

4.4.1 Deep Neural Network (DNN)

Deep neural networks (DNNs) are ANNs with more than one hidden layer (hence the term "deep") that are trained with vast amounts of data. Each hidden layer comprises neurons that map a function over their input to provide an output. DNNs are trained by adjusting their neurons' weights and biases, supported by techniques such as the back-propagation algorithm and optimization methods such as stochastic gradient descent. Three types of deep neural network are multi-layer perceptrons (MLP), convolutional neural networks (CNN), and recurrent neural networks (RNN). DNNs underpin speech recognition systems and translation systems such as Google Translate [ 49 , 50 ].

4.4.2 Deep Belief Networks (DBNs)

Deep belief networks (DBNs) are unsupervised networks comprising a stack of connected restricted Boltzmann machines (RBMs). A DBN trains each RBM layer until it converges; the output of one RBM is fed as input to the next in the sequence, which is in turn trained until convergence. This process is repeated for each RBM until the whole network has been trained. Applications of DBNs range from generating images to video sequences and motion capture [ 51 – 53 ].

4.4.3 Region-Based Convolutional Neural Networks (R-CNN)

Region-based convolutional neural network (R-CNN) algorithms detect and localize objects in an image. This is done by drawing rectangular bounding boxes around candidate objects contained within the image, extracting features from each boxed region using a pre-trained CNN, and then labelling, or categorizing, each region using the SVM algorithm. The last stage in the process brings the separate regions together to obtain the original image with the objects within it identified [ 47 , 54 – 58 ].

4.4.4 Fast R-CNN

An iteration and improvement of the R-CNN model can be found in the Fast R-CNN algorithm. The Fast R-CNN model takes the image as a whole and passes it through its neural network once; the resulting output is then sliced into regions of interest (RoIs).

4.4.5 Faster R-CNN

A further evolution of the R-CNN model is the Faster R-CNN algorithm [ 59 ]. Faster R-CNN performs better and faster than R-CNN and Fast R-CNN because it uses only CNNs (no SVMs) and extracts features from an image once, instead of region by region as R-CNN does. According to the literature, this results in Faster R-CNN training networks at least nine times faster, and with more accuracy, than R-CNN [ 59 , 60 ]. What makes Faster R-CNN distinct from its predecessor Fast R-CNN, however, is its use of the Region Proposal Network (RPN) technique [ 60 ].

4.4.6 Mask R-CNN

Mask R-CNN is an extension of the Faster R-CNN technique. The literature describes Mask R-CNN as an advanced image segmentation method, which takes a digital image, breaks it down into pixel-level segments, and then categorizes the segments. For example, a single image is segmented and categorized to identify multiple objects within it [ 61 ].

4.4.7 R-FCN

The literature describes the R-FCN model as being based on region proposals. The difference between the R-FCN and R-CNN techniques (the latter also being based on region proposals) is that R-FCN applies a selective pooling technique that extracts features for prediction from the last layer of its network [ 62 ].

4.4.8 Single Shot Detector (SSD)

The Single Shot Detector (SSD) is an ML technique that breaks an image down into a grid of cells. Each cell is responsible for detecting objects, by predicting the category and location of objects whose regions fall within it. The literature indicates that the SSD model is faster than the Faster R-CNN model; however, its performance decreases when object sizes are small [ 63 ].

4.4.9 You Only Look Once (YOLO)

The YOLO (You Only Look Once) algorithm uses CNNs to detect and recognize objects in a picture in real time. YOLO takes an entire image as input and divides it into a grid (unlike R-CNN, which uses regions to localize objects in an image); image classification and localization are then applied to each grid cell. The algorithm then predicts the rectangular (bounding) boxes and their associated classes. The YOLO model does, however, find it more difficult than R-CNN to localize objects precisely [ 37 , 64 ].
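The grid-based division of labour described above can be sketched in a few lines: the grid cell containing an object's centre is the one responsible for predicting it. The (x1, y1, x2, y2) corner box format, the 448-pixel image, and the 7 × 7 grid below are illustrative assumptions, not details taken from the cited papers.

```python
def cell_for_box(box, image_size, grid=7):
    """Return the (column, row) grid cell responsible for this box.

    In a YOLO-style detector, the cell containing the box centre is the
    one that predicts the object; `box` is (x1, y1, x2, y2) in pixels.
    """
    cx = (box[0] + box[2]) / 2  # box centre, x
    cy = (box[1] + box[3]) / 2  # box centre, y
    return int(cx / image_size * grid), int(cy / image_size * grid)

# A defect box centred at (224, 96) in a 448 x 448 image falls in cell (3, 1).
cell = cell_for_box((200, 80, 248, 112), image_size=448)
```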

4.4.10 Recurrent Neural Networks (RNNs)

Classic neural networks are described as ‘feed forward’ networks because they channel information in a single forward direction, through a series of mathematical operations performed at the nodes of the network. Data is fed through each node as input, never visiting a node more than once, before being processed and converted into an output. Feed forward networks perceive only the current data sample presented to them and have no memory of previously processed samples. In other words, classic neural networks have no facility for data persistence.

RNNs are a type of deep neural network; unlike classic neural networks, they take both the current data sample and previously received samples as input. RNNs process data from the first input to the last output while maintaining feedback loops throughout the computation, enabling data to loop back into the network. What distinguishes RNNs from feed forward networks is this feedback loop connected to their past decisions: RNNs allow previous outputs to be provided as inputs, while also maintaining hidden states. RNN models are commonly used in the natural language processing (NLP) and speech recognition domains [ 65 , 66 ].
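The carrying of state from one sample to the next can be illustrated with a minimal scalar RNN step; the weights and the input sequence below are made-up values for illustration only.

```python
import math

def rnn_step(x_t, h_prev, w_x=0.8, w_h=0.5, b=0.0):
    # The new hidden state mixes the current input with the previous state,
    # which is how the network "remembers" earlier samples in the sequence.
    return math.tanh(w_x * x_t + w_h * h_prev + b)

h = 0.0  # initial hidden state
for x_t in [1.0, 0.5, -1.0]:  # process the sequence one sample at a time
    h = rnn_step(x_t, h)
```

Because each step feeds the previous hidden state back in, the final value of `h` depends on the order of the inputs, unlike a feed forward pass over the same values.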

4.4.11 Long Short-Term Memory Networks (LSTMs)

LSTMs are a special type of RNN that help preserve the error that is back-propagated through layers and time, giving recurrent networks the ability to learn over long spans. This is made possible in large part by the LSTM’s gated cell, to which data can be written, from which it can be read, and in which it can be stored, all external to the back and forth of the recurrent network [ 67 ].
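A single step of the gated cell can be sketched as follows. This is a simplified scalar version with hand-picked weights and no bias terms, intended only to show the forget/input/output gating structure, not a faithful implementation of any library's LSTM.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w):
    f = sigmoid(w["fx"] * x_t + w["fh"] * h_prev)    # forget gate: keep old state?
    i = sigmoid(w["ix"] * x_t + w["ih"] * h_prev)    # input gate: write new data?
    o = sigmoid(w["ox"] * x_t + w["oh"] * h_prev)    # output gate: expose state?
    g = math.tanh(w["gx"] * x_t + w["gh"] * h_prev)  # candidate cell values
    c = f * c_prev + i * g   # gated cell state: the LSTM's "memory"
    h = o * math.tanh(c)     # hidden state exposed to the rest of the network
    return h, c

w = {"fx": 0.5, "fh": 0.1, "ix": 0.6, "ih": 0.2,
     "ox": 0.7, "oh": 0.3, "gx": 0.9, "gh": 0.4}
h, c = 0.0, 0.0
for x_t in [1.0, -0.5, 2.0]:
    h, c = lstm_step(x_t, h, c, w)
```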

4.4.12 Generative Adversarial Networks (GANs)

GANs are generative, unsupervised deep learning algorithms. The technique was introduced in 2014 by Ian Goodfellow. The premise of GANs involves a neural network called a generator, which produces fake data samples. The generator works in concert with another network called the discriminator, which has to differentiate between two kinds of input: the original data samples, and the fake data samples created and output by the generator. The discriminator has to evaluate, learn, and decide which data samples come from the actual training set and which come from the generator [ 68 , 69 ].

4.4.13 Multilayer Perceptrons (MLPs)

A perceptron consists of a fully connected input layer and output layer, and comprises input values, weights and a bias, a net sum, and an activation function. A fully connected neural network with multiple layers is called a multilayer perceptron (MLP). The MLP is a supervised-learning, feed forward deep neural network that connects multiple layers in a directed graph, meaning the signal passes through the nodes in a single direction, from the input layer to the output layer. In this network, every node except the input nodes applies a non-linear activation function. MLPs can be used to build speech-recognition, image-recognition, and machine-translation applications [ 70 , 71 ].
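A forward pass through a tiny MLP can be written directly from the description above; the layer sizes, weights, biases, and activation choices are arbitrary illustrative values, not learned parameters.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, activation):
    # Each output neuron applies the activation to its weighted net sum plus bias.
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# 2 inputs -> 2 hidden neurons (ReLU) -> 1 output neuron (sigmoid).
x = [1.0, 2.0]
hidden = dense(x, weights=[[0.5, -0.2], [0.3, 0.8]],
               biases=[0.1, -0.1], activation=relu)
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0], activation=sigmoid)
```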

4.4.14 Restricted Boltzmann Machines (RBMs)

Boltzmann machines are non-deterministic, generative deep learning models with only two types of nodes: hidden and visible. Having no output nodes is what gives them their non-deterministic character. A Boltzmann machine has connections among all of its nodes, with every node connected to every other node, whether visible or hidden. This allows universal sharing of parameters, patterns, and correlations in the data. Restricted Boltzmann machines (RBMs) are a special class of Boltzmann machines. An RBM is an unsupervised, two-layered (visible layer and hidden layer) neural network, characterized by the restriction that every node in the visible layer is connected to every node in the hidden layer, but no two nodes in the same layer are connected to each other [ 51 – 53 ].

4.4.15 Autoencoders

Autoencoders are an unsupervised type of neural network that can detect patterns or structure within data in order to learn a compressed representation of the data provided as input. During training, the autoencoder learns how to compress the data based on its attributes. An autoencoder is a feed forward neural network whose target output is the same as its input. It is made up of encoder and decoder models: the encoder compresses the input, and the decoder works to re-create the input from the compressed version provided by the encoder. Applications of autoencoders range from anomaly detection and data denoising (audio and images) to dimensionality reduction [ 72 – 76 ].
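The encoder/decoder structure can be illustrated with a minimal linear autoencoder forward pass. The weights here are hand-picked for illustration (in practice they would be learned during training), compressing a 4-value input to a 2-value code and reconstructing an approximation of the input.

```python
def encode(x, w_enc):
    # Compress: each code value is a weighted sum of the inputs.
    return [sum(w * xi for w, xi in zip(row, x)) for row in w_enc]

def decode(z, w_dec):
    # Reconstruct: map the compressed code back to the input dimension.
    return [sum(w * zi for w, zi in zip(row, z)) for row in w_dec]

w_enc = [[0.5, 0.5, 0.0, 0.0],   # code[0] averages the first two inputs
         [0.0, 0.0, 0.5, 0.5]]   # code[1] averages the last two inputs
w_dec = [[1, 0], [1, 0],
         [0, 1], [0, 1]]         # each input is rebuilt from its code value

x = [2.0, 2.0, 6.0, 8.0]
z = encode(x, w_enc)       # compressed representation: [2.0, 7.0]
x_hat = decode(z, w_dec)   # reconstruction: [2.0, 2.0, 7.0, 7.0]
recon_error = sum((a - b) ** 2 for a, b in zip(x, x_hat))
```

The reconstruction error (2.0 here) is what training would minimize; inputs that reconstruct poorly under a trained autoencoder are candidates for anomaly detection.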

Table  1 reviews the advantages and disadvantages of different ML techniques reviewed in Section  4 of this paper.

4.5 Performance evaluation metrics for machine learning techniques

In this section, we examine the academic conventions in the literature for measuring the performance of machine learning techniques in the detection of material defects and equipment failures. Current academic literature provides methods for evaluating the performance of computer vision and, especially, machine learning methods and techniques when applied to extract, process, and analyze datasets. The literature indicates that while ML techniques can be used to extract and analyze data, they can also generate false results through misclassification or misinterpretation of the data collected. This is the reason for performance measurement metrics, which evaluate the performance of ML methods in terms of the ratio of correct predictions or classifications to incorrect ones.

Classifiers, or classification-based ML techniques, in the literature usually use the confusion matrix, accuracy/error, precision, recall, F1 measure (or F-measure), ROC, AUC, and hypothesis tests (t-test, Wilcoxon signed-rank test, Kappa test) as evaluation measurements [ 78 ]. Regression-based problems use MSE (mean squared error), MAE (mean absolute error), MAPE (mean absolute percentage error), RMSE (root mean squared error), which is the square root of the average squared difference between the actual and predicted scores, and quantile errors for evaluating the performance of machine learning methods applied as solutions [ 79 ].

4.5.1 Confusion matrix

Classification is when a model's output is one or more discrete labels, whereas regression is when the output or prediction is a continuous quantity or value. Binary classification is where there are only two classes or categories of the target-label vector y in the dataset (e.g., True or False, 1 or 0); conversely, multi-class classification is when there are three or more classes or categories of the vector y. The performance of ML techniques on classification tasks (binary or multi-class) is commonly analyzed using a confusion matrix, a two-dimensional matrix that records the False Negatives (FN), True Negatives (TN), False Positives (FP), and True Positives (TP) for each model used. TP refers to the positive points that are correctly labelled by the classifier, and TN to the negative points that are correctly labelled. FP are the negative points that were incorrectly labelled as positive, and FN are the positive points that were mislabelled as negative.

These counts are then used to calculate the performance metrics: accuracy, precision, recall, and F1 score. Accuracy is the proportion of all predictions that are correct (TP + TN over all predictions). Precision measures how accurate the model's positive predictions are. Recall measures the sensitivity of the model in finding the positive cases. The F-measure combines the precision and recall scores [ 33 , 80 ].
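The four counts translate into these metrics directly; the counts below are made-up example values for a hypothetical 100-sample test set.

```python
def classification_metrics(tp, tn, fp, fn):
    accuracy  = (tp + tn) / (tp + tn + fp + fn)  # share of all predictions correct
    precision = tp / (tp + fp)                   # share of predicted positives correct
    recall    = tp / (tp + fn)                   # share of actual positives found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

# 100 test samples: 40 TP, 45 TN, 5 FP, 10 FN.
acc, prec, rec, f1 = classification_metrics(tp=40, tn=45, fp=5, fn=10)
# acc = 0.85, prec ≈ 0.889, rec = 0.80, f1 ≈ 0.842
```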

4.5.2 Mean Squared Error (MSE) and Mean Absolute Error (MAE)

The performance of a regression model is commonly analyzed using either the mean squared error (MSE) or the mean absolute error (MAE). The MSE is the average of the squared errors, that is, the average squared difference between the predicted value and the target value; the lower the MSE, the better. The MAE is the average of the absolute differences between the model's predictions and the target values.
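Both measures are one-liners over paired predictions and targets; the example values below are made up.

```python
def mse(y_true, y_pred):
    # Average of squared differences; penalizes large errors heavily.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # Average of absolute differences; each error counts in proportion to its size.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0]
y_pred = [2.0, 5.0, 4.0]          # errors of 1, 0, and -2
error_mse = mse(y_true, y_pred)   # (1 + 0 + 4) / 3
error_mae = mae(y_true, y_pred)   # (1 + 0 + 2) / 3 = 1.0
```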

4.5.3 Mean Average Precision (MAP)

Some studies have recommended the evaluation metric called mean average precision (MAP) for analyzing the performance of object detection (localization and classification) models such as SSD, R-CNN, Faster R-CNN, and YOLO [ 31 ]. MAP is also commonly applied to analyze the performance of computer vision models and image segmentation problems. In MAP, a predicted object is counted as accurate on the condition that its overlap with the labelled ground truth (the original damage annotated by human inspectors) is greater than a given threshold. The overlap is calculated using the Intersection over Union measure, expressed as the area of overlap (between the predicted and target regions) over the area of their union [ 81 ].
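The Intersection over Union calculation can be sketched as follows; the (x1, y1, x2, y2) corner format for boxes and the example coordinates are illustrative assumptions.

```python
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box covering half of a 10 x 10 ground-truth box: IoU = 50/150 = 1/3.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

With a typical threshold of 0.5, this prediction (IoU ≈ 0.33) would be counted as a miss when computing MAP.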

5 Findings of literature review

This section provides an overview of the findings of the literature review in terms of types of mechanical systems and civil infrastructure assets. We then proceed to examine different types of damage mechanisms that can be found on these assets. This is followed by a review of our findings regarding the application of ML techniques used to support RAS-based monitoring and inspection.

5.1 Types of assets under inspection

We begin the review of our findings by discussing the various types of mechanical systems and civil infrastructure assets that the literature indicates are subject to routine inspection and monitoring due to their vulnerability to catastrophic damage.

5.1.1 Pipelines

Pipelines in the energy industry support the transport and distribution of water, oil, and gas. Like most infrastructure, pipelines are subject to internal and external mechanical stresses, which can lead to damage mechanisms ranging from corrosion and cracks to scale formation. These phenomena can be mitigated through regular monitoring, inspection, and maintenance. There has been some research into methods and techniques to support the more efficient inspection of pipes and pipelines. Mohamed et al . [ 82 ] looked at the use of mobile in-pipe inspection robots (IPIR) to inspect pipelines for corrosion and cracks using NDT sensors, e.g., ultrasonic. There is also the work carried out by Bastian et al . [ 30 ], who applied CNN-architecture-based techniques to detect corrosion in pipeline images and DNNs to extract features from those images.

5.1.2 Wind turbines

Wind turbines are another part of the energy industry and are generally located in remote environments subject to extreme external stresses, e.g., wind, water, and heat. Just like pipelines, wind turbine infrastructure is subject to mechanical stresses resulting in damage mechanisms such as erosion and cracks. This is the reason for the emphasis placed on regular inspection, monitoring, and maintenance in industry planning and cost allocation. While wind turbine inspection is generally carried out using traditional methods, involving the manual climbing of heights by technical inspection teams in sometimes hazardous conditions, the last decade has seen considerable research into the development of cost-effective and safe methods and techniques to support the inspection and monitoring of asset infrastructure. This research has provided the alternative of robot platforms, varying from magnetic climbing robots to unmanned aerial drones. Current research has also seen the development of artificial-intelligence-based algorithms and techniques to analyze the vast amounts of data collected by robot platforms during inspection. Wang and Zhang [ 2 ] researched the use of Haar-like feature algorithms for the automatic detection of surface cracks in images collected by UAVs, while Franko et al . [ 83 ] proposed applying a deep convolutional neural network (DCNN) technique to images of a wind turbine taken by a multi-robot system. Shihavuddin et al . [ 31 ] carried out research using convolutional neural network (CNN) techniques to extract feature descriptors from images of wind turbines taken by drones, and the Faster R-CNN technique to train for object detection.

5.1.3 Aircraft fuselage

In the aerospace industry, inspection of the aircraft fuselage is one of the core and most important regular tasks performed by maintenance technicians. The process involves deploying platforms that elevate technicians so that they can reach and inspect the external surface of the aircraft's fuselage, searching for damage mechanisms or defects. The main damage mechanism found on aircraft surfaces is corrosion, a principal cause of fuselage fatigue. Alongside checks for corrosion, technicians also look for rust, cracks, and deformation of the aircraft surface during their inspections. This has traditionally been a painstaking, methodical task undertaken by a technician equipped with a flashlight and a mobile elevation platform [ 84 ]. Research into alternative methods of inspection has been published in recent years. This includes the work of Malekzadeh et al . [ 85 ], who applied the SURF feature descriptor together with deep learning models (AlexNet and VGG-F) to images of an aircraft fuselage taken by a custom-made platform. Miranda et al . [ 86 ] did similar work, applying a CNN-based application along with an SVM model to images of an aircraft fuselage taken by UAVs. The most recent work is that of Brandoli et al. [ 84 ], who applied CNN models (DenseNet and SqueezeNet) to detect corrosion pillowing in images taken of an aircraft fuselage.

5.1.4 Power lines

In the energy industry, power transmission lines act as connections between the source of power (the power plants) and the endpoints (the consumers) [ 87 ]. The regular inspection of power transmission lines is considered vital to ensuring uninterrupted power supply, as damage to this part of the electrical infrastructure, for instance through rusted conductors, can result in downtime and power interruption [ 88 ]. As indicated by Titov et al. [ 88 ], traditional power line inspection is characterized by high cost and safety risk, with human technicians manually taking images from the ground or from above with the support of a helicopter. Research in recent years has demonstrated the use of UAVs (drones) to support the collection of image data. Jalil et al . [ 87 ] carried out work on power lines, employing a drone fitted with image capture equipment to collect data, which is then passed to a neural network model for detection and analysis of damage or defects in the image dataset. Research by Titov et al. [ 88 ] employed UAVs to take images of power lines for the detection and analysis of cracks on concrete poles, missing or dirty insulator plates, etc., using the YOLO version 3 deep learning technique.

5.1.5 Vessels

The maintenance of vessels, especially maritime transport ships such as oil tankers and very large crude carriers (VLCC), requires regular monitoring and inspection schedules [ 89 , 90 ]. These vessels are subject to typical internal and external phenomena such as cracks and corrosion. Current inspection procedures for vessels are expensive and require the vessel to dock at a shipyard, where inspectors, with the support of various mobile platforms, undertake a visual assessment of the structural health and condition of the vessel. Recent research has studied the use of robot platforms to support vessel inspections [ 89 , 90 ], accompanied by the development of various image processing and damage detection algorithms (e.g., for corrosion and cracks) to analyze image datasets. However, there is evidence of very little to no research on the use of deep learning (neural network) based techniques.

5.1.6 Bridges

Bridges are typical civil infrastructure subject to external phenomena such as wind, heat, water, and vibration. The inspection of bridges currently requires manual visual inspection by human inspectors, working at different levels of elevation, with varying levels of associated risk, to access and view parts of the bridge. This manual approach is time-consuming and costly, in some cases requires road closures, and, given the vast number of bridges in cities today, demands a lot of manpower. The last decade has seen a significant amount of research into the use of robot and autonomous systems (RAS) to support bridge inspections. These robot platforms are fitted with sensors ranging from infrared (IR) cameras to ultrasonic sensors [ 14 , 91 ]. This has been accompanied by research into computer vision and image processing techniques that detect damage mechanisms, defects, and features (e.g., cracks) [ 1 ]. This research has recently moved towards the use of deep learning methods to analyze images and detect cracks [ 49 , 92 , 93 ].

5.1.7 Automotive vehicles (cars)

Most of the available literature on the analysis of automotive image datasets has focused on vehicle make and model classification in support of the transport and security industries [ 94 ], and on defect and damage detection in support of the automotive industry and the insurance sector behind it [ 95 , 96 ]. Most automotive inspection is focused on the analysis of vehicular accidents, requiring image analysis of damage such as bumper dents, door dents, shattered glass, broken head lamps and tail lamps, and scratches [ 95 – 97 ]. Recent years have produced research on computer vision and neural network techniques for the image analysis of car damage [ 95 – 99 ].

5.2 Damage mechanisms on mechanical systems and civil infrastructure

Corrosion and cracks are the damage mechanisms associated with bridges, roads, rail, and levees in the literature [ 1 , 49 , 93 , 100 ]. Most of the current literature on wind turbine inspection focuses on surface damage, specifically leading edge erosion, surface cracks, damaged lightning receptors, and damaged vortex generators [ 2 , 31 , 83 ]. In the case of pipelines, the literature focuses on inspection for cracks, corrosion, or erosion [ 30 , 33 , 37 ]. Literature discussions of aircraft inspection consistently focus on the fuselage, for damage mechanisms such as corrosion pillowing and surface cracks. Some literature on aircraft inspection, however, does not specify damage mechanisms, instead referring uniformly to defect regions [ 84 – 86 ].

Power lines and transmission lines in the literature are associated with cracks on concrete poles, identification of missing or dirty insulator plates, rusted conductors, broken cables, insulator damage, conductor corrosion, and cracks on insulator surfaces [ 87 , 88 , 101 , 102 ]. The literature on the inspection of vessels and ships concentrates on the detection, localization, and classification of cracks, corrosion or coating breakdown, pitting, and buckling [ 89 , 90 , 103 ].

While there could be a case for the monitoring and inspection of automotive vehicles during servicing and checkups for damage mechanisms caused by internal and external phenomena (e.g., wind, water, heat), such as corrosion and fatigue, there seems to be little to no literature on such research available at this time. The literature available is mostly focused on damage to cars resulting from vehicular accidents [ 95 – 99 ]. A review of the literature shows that corrosion and cracks are the most addressed damage mechanisms, while erosion and fatigue seem to be the least addressed. Table  2 summarizes the typical research into damage mechanisms on mechanical infrastructure that machine learning techniques have been used to detect, classify, and model.

Based on the literature reviewed, manual visual data collection is still the norm for bridge inspection; however, in research where a robot platform is deployed to collect image datasets of damage mechanisms on bridges, most papers have reported the deployment of UAVs [ 49 , 93 , 100 ]. Likewise, the literature on wind turbine damage inspection shows a majority preference for drones to support data collection [ 17 , 31 , 83 ].

In the case of pipelines, the literature shows a mixed picture, with some preferring to deploy drones and others mobile in-pipe inspection robots (IPIR) [ 33 , 37 ]. The literature, however, presents a different landscape for the use of RAS systems in aircraft inspections. While some literature has demonstrated the use of UAVs in recent years, the literature indicates a preference for D-Sight Aircraft Inspection System (DAIS) platforms: portable non-destructive devices that support the visual analysis of surface areas of the fuselage [ 84 – 86 ].

While piloted helicopters are still used in some cases to gather data, the literature demonstrates that power line and transmission line inspections overwhelmingly deploy UAVs for the data collection of damage mechanisms [ 87 , 88 , 101 , 102 ]. The literature regarding vessels and ships presents a very mixed preference between semi-autonomous micro-aerial vehicles (MAVs) and, more recently, climber or UAV robot platforms for the inspection and data collection of damage mechanisms [ 89 , 90 , 103 ]. The literature review of automotive vehicles (cars) has unfortunately provided very little to no research with respect to the use of robot and autonomous system platforms for damage inspection.

5.3 Application of machine learning techniques for RAS inspection

This section reviews the application of machine learning techniques used to support RAS system inspection of civil and mechanical infrastructure (wind turbines, pipelines, rail, aircraft fuselage, power lines, vessels, and automobiles) in the literature, with performance evaluations of their use where documented. Gopalakrishnan et al . [ 49 ] looked at the processing and analysis of images of cracks on civil and mechanical infrastructure obtained through UAVs (drones). They applied deep convolutional neural network (DCNN) models to process and analyze the image data and lauded DCNNs as the more efficient technique for processing and analyzing both image and video data [ 49 ]. Nguyen et al . [ 104 ] reviewed the use of various deep learning techniques in support of RAS system inspection of power line infrastructure and identified Region-Based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLO) as the optimal techniques for object detection in inspection tasks by RAS systems.

Shihavuddin et al . [ 31 ] explored the use of drones to obtain image data of damage on wind turbines (WT) for processing, analysis, and classification using deep learning techniques. Their paper indicates that the research used convolutional neural networks (CNN) as the backbone framework for processing and extracting features from the image data obtained from WTs using drones. Shihavuddin et al. report that their research then used the Faster R-CNN technique to train models for object detection, achieving high accuracy compared to other deep learning algorithms used to train models during their research. Shihavuddin et al . [ 31 ] also indicated that employing a technique called advanced image augmentation allows the dataset to be expanded, as this technique creates additional images for the training model by altering existing image data fed into the training sets. The utility of this technique is invaluable, as the larger the dataset, the more effective the training.

Franko et al . [ 83 ]'s research provided findings on the use of a combined, multiple-RAS platform, ranging from climbing to multicopter robots, fitted with LiDAR, RGB and ZED cameras, ultrasonic, radar, and other vision-type sensors, to inspect and detect corrosion and welding line damage mechanisms on the tower surfaces of WTs [ 83 ]. Alharam et al . [ 33 ] provided a case study on the use of UAVs to inspect oil and gas pipelines in Bahrain. The UAVs are fitted with GPS, thermal cameras, and gas detectors to obtain images and methane (CH4) readings from gas and oil pipelines. The research looked at the use of the Decision Tree (DT), Support Vector Machine (SVM), and Random Forest (RF) techniques to process and analyze the data obtained from the drones. Alharam et al . reported that the RF technique provided 93% accuracy and much better performance than the other classification techniques used in their research [ 33 ].

Bastian et al . [ 30 ] studied external corrosion on pipelines and used deep neural networks to process and analyze the image and video data obtained from inspections. In their paper, they proposed a DNN technique, based on the CNN architecture, to extract and distinguish between images with and without corrosion from image data taken of pipelines by UAV. They report that CNNs give the most optimal results in terms of object detection and image classification.

Table  3 shows that the literature demonstrates the DCNN architecture providing up to 90% accuracy for the detection of cracks in civil infrastructure [ 49 ] and 92% accuracy for defect detection in rail infrastructure [ 105 ]. Table  1 also indicates that the Random Forest algorithm is the best performing of the Decision Tree, Support Vector Machine, and Random Forest algorithms for the detection of cracks, corrosion, and erosion on pipelines, with SVM yielding the least precision and accuracy of the three [ 33 ]. Custom CNN architectures, however, have been reported to provide over 93% with respect to precision, accuracy, and other metrics within the confusion matrix [ 30 ]. Table  3 also shows that research on the RAS inspection and monitoring of aircraft fuselage has demonstrated that CNNs can provide up to 92% accuracy in the detection of surface and joint corrosion [ 84 ], while DNNs can provide a better-performing accuracy of 96% [ 85 ]. In the case of power lines, Table  1 shows a preference for custom CNNs, Faster R-CNN, or the YOLO v3 technique for extracting, analyzing, and classifying data collected from the RAS inspection of power lines, with these techniques reported to provide over 90% precision or accuracy in classifying image data [ 30 , 88 , 104 ]. There currently seems to be very little literature on the use of machine learning techniques in support of the RAS inspection of vessels; one exception is the research documented by Ortiz et al. [ 103 ], in which the ANN technique was used to extract, classify, and analyze corrosion, cracks, and coating breakdown from image data collected by a micro-aerial vehicle [ 103 ].

Most of the limited literature examining the application of machine learning techniques to the RAS inspection of automobiles focuses on the classification of vehicle damage resulting from accidents and the associated insurance claims. The research shows a preference for CNNs or Mask R-CNN for object recognition and damage detection on automobiles, with CNNs providing accuracy as high as 87% and Mask R-CNN as high as 94% [ 95 , 96 ].

Figure  2 provides an illustration of our findings regarding the frequency of use of popular ML techniques to process, analyze and model damage mechanisms on mechanical systems and civil infrastructure.

figure 2

The use of different Machine Learning techniques in literature

Table  3 lists and maps the machine learning methods used in the robot inspection of mechanical systems and infrastructure reviewed in this paper. Following on from the performance evaluation metrics discussed in Section  4, Table  3 also provides the performance evaluation figures and results of the machine learning techniques deployed to process and analyze data collected by RAS platforms for civil and mechanical infrastructure in the reviewed papers.

6 Technology gaps and challenges

This section reviews technology gaps and challenges in the application of machine learning techniques for robotic inspection of mechanical systems and civil infrastructure.

6.1 Challenges of small object detection for deep learning techniques

Object detection of small (perhaps even invisible to the human eye) damage mechanisms on mechanical and civil infrastructure has invaluable applications in industries ranging from aerospace (detecting cracks on aircraft) to energy and utilities (detecting erosion or corrosion). A small object is defined in [106, 107] as an object occupying roughly 32 × 32 pixels within an image. Current literature acknowledges that while object detection of medium to large objects in image data is now a proven technology, accurate detection of small objects has not yet been mastered and remains a challenge for researchers [106, 108–111].
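As a minimal illustration of the size convention cited above, the sketch below buckets bounding boxes by pixel area using the thresholds of the COCO evaluation protocol (small below 32 × 32, large above 96 × 96); the box format and the sample boxes are assumptions made for the example.

```python
# Sketch: bucketing detections by the COCO object-size convention,
# under which a "small" object occupies less than 32 x 32 = 1024 pixels.
# Box format (x, y, width, height) and the sample boxes are invented.

SMALL_MAX = 32 ** 2    # area < 1024 px^2  -> small
MEDIUM_MAX = 96 ** 2   # area < 9216 px^2  -> medium, else large

def size_bucket(box):
    """box = (x, y, w, h) in pixels; returns 'small', 'medium' or 'large'."""
    _, _, w, h = box
    area = w * h
    if area < SMALL_MAX:
        return "small"
    if area < MEDIUM_MAX:
        return "medium"
    return "large"

# A hairline crack a few pixels wide falls firmly in the "small" bucket,
# which is where detectors lose the most accuracy.
print(size_bucket((10, 10, 28, 20)))    # area 560 px^2
print(size_bucket((0, 0, 120, 150)))    # area 18000 px^2
```

A dataset audit using such a bucketing function is a simple first step when diagnosing whether small-object performance will be a problem for a given inspection corpus.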

This research gap stems from several constraints of current state-of-the-art object detection technology. The first is that the high-level feature maps characteristic of CNN architectures, which are effective for identifying large objects in images, are too coarse to support the identification of small objects, because small objects appear mostly at low resolution. The second is limited context: there are significantly fewer pixels associated with small objects, leaving little to nothing for the detection algorithms to identify. Furthermore, there is a class imbalance in the datasets currently used to train deep learning models: popular image datasets comprise mostly large and medium-sized objects, so the size groups of objects available to deep learning models for training are unbalanced.
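The size-group imbalance described above can be quantified, and one common mitigation is inverse-frequency weighting of training samples. The sketch below assumes invented counts for a dataset dominated by large objects; the weighting scheme is a standard technique, not one prescribed by the works reviewed here.

```python
# Sketch: size-group imbalance in a (fictional) training set, and
# inverse-frequency weights so rare small objects contribute as much
# to the training loss as the plentiful large ones.

counts = {"small": 120, "medium": 850, "large": 2030}  # boxes per size group
total = sum(counts.values())

# weight_k = total / (n_groups * count_k): rarer groups get larger weights
weights = {k: total / (len(counts) * v) for k, v in counts.items()}
print({k: round(w, 2) for k, w in weights.items()})
```

With these counts, small objects receive a weight roughly seventeen times that of large ones, which makes the imbalance the surrounding text describes concrete.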

There is thus a gap in the research and development of deep learning techniques and models that can provide the higher precision required for accurate localization of small objects in images, and an ongoing effort among researchers to improve current object detection algorithms for small objects in image datasets [106, 108–111].

6.2 Evaluating accuracy and performance of machine learning techniques

Following this paper’s review of the methods and metrics used in the academic literature to evaluate the performance of machine learning methods, techniques, and models trained on datasets to output values, interpretations, or predictions, this section briefly reviews criticisms of these metrics.

There are arguments in the literature that the current methods for evaluating the performance and utility of machine learning techniques applied to extract or analyze data (such as the confusion matrix, accuracy, precision, mAP, RMSE, and quantile error) can only be understood and applied by subject matter experts in statistics, computer science, artificial intelligence (AI), and related fields [78, 112].

Both Shen et al. [112] and Beauxis-Aussalet et al. [78] contended that non-experts do not always have the background knowledge to understand terminology such as true negative (TN) or false positive (FP), which forms part of the underlying framework for evaluating machine learning techniques. Furthermore, Shen et al.’s [112] research found that non-experts struggled both to use some of the evaluation metrics and to relate them back to the problem the techniques are being applied to solve. Beauxis-Aussalet et al. [78] underscored that some existing evaluation metrics can be misunderstood, misinterpreted, or deployed incorrectly by non-experts [78]. It is therefore a contention in the literature that more accessible methods are needed for evaluating the performance of machine learning techniques, methods that can be understood and used by both AI subject matter experts and their lay colleagues.
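The terminology at issue (TP, FP, TN, FN) and the metrics built on it can be stated compactly. The sketch below computes the standard confusion-matrix metrics for an illustrative crack-detection run; the counts are invented for the example.

```python
# Sketch: accuracy, precision, recall and F1 from raw confusion-matrix
# counts (TP, FP, TN, FN). The counts below are invented.

def confusion_metrics(tp, fp, tn, fn):
    """Return (accuracy, precision, recall, f1) from raw counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)   # of frames flagged as cracked, how many are
    recall = tp / (tp + fn)      # of truly cracked frames, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# e.g. 90 cracks found, 10 false alarms, 880 clean frames, 20 cracks missed
acc, prec, rec, f1 = confusion_metrics(tp=90, fp=10, tn=880, fn=20)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.3f}")
```

Note how accuracy (0.97) flatters this imbalanced example while recall (about 0.82) exposes the missed cracks, which is precisely the kind of distinction the cited authors found non-experts struggle to interpret.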

6.3 Machine learning challenges with unstructured data

Throughout this paper, we have reviewed the data collected by RAS systems during inspection of mechanical infrastructure, the types of data collected, and the techniques deployed to process and analyze them. This section extends that review to examine the structure of this data and the gaps in our ability to work with it.

Structured and unstructured are categories that data scientists and researchers use to describe data. Structured data conforms to a schema, meaning the data has some form of logical organization. It is quantitative and is usually expressed as numbers, dates, values, and strings. Because it is organized in rows and columns (e.g., CSV files, spreadsheets, SQL databases), structured data can be queried, searched, and analyzed. Traditional sources of structured data include sensors, weblogs, and network traffic.

Unstructured data, by contrast, cannot be contained in rows and columns and has no readily discernible structure or logic. It is qualitative data, comprising video, audio, images, and similar content, and it cannot be processed or analyzed with the methods used for structured data, such as row-and-column queries or relational databases.

The challenge for computer scientists and data and AI scientists is that most machine learning tools available today are better suited to training on structured datasets, yet most of the world’s data is unstructured. As indicated by Rai et al. [113], the literature estimates that over 80% of the data collected worldwide is unstructured [113, 114]. Traditional sources of unstructured data include social media platforms, images, videos, and audio [113]. Some AI techniques can process, analyze, and train on unstructured data; for example, Natural Language Processing (NLP) techniques add structure, such as context and syntax, to unstructured text, and autoencoders can extract and analyze features from unstructured inputs. However, these tools are few and the technology has not yet sufficiently matured. There is therefore a gap, and a requirement, for research and development into effective machine learning tools and techniques that can process and analyze unstructured data [113, 114].
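The structured/unstructured contrast described in this section can be made concrete. In the sketch below (all data invented), a structured inspection log answers a query directly through its schema, whereas a raw image has no schema and must first be reduced to features before any analysis is possible.

```python
# Sketch: structured data is directly queryable by field name;
# unstructured data (raw pixels) must be summarised into features first.
# The inspection log and the toy image are invented for the example.

import csv, io

# --- structured: rows and columns, directly queryable -----------------
log = io.StringIO("asset,defect,severity\npipeline-7,corrosion,3\ntank-2,crack,5\n")
rows = list(csv.DictReader(log))
severe = [r["asset"] for r in rows if int(r["severity"]) >= 4]
print(severe)  # answered straight from the schema

# --- unstructured: raw pixels, no schema to query ---------------------
image = [[0, 0, 255, 0], [0, 255, 255, 0], [0, 0, 0, 0]]  # toy grayscale frame
flat = [p for row in image for p in row]
# the only way in is to compute a feature, e.g. the bright-pixel fraction
bright_fraction = sum(p > 128 for p in flat) / len(flat)
print(round(bright_fraction, 2))
```

The second half is, in miniature, what feature-extraction networks such as autoencoders automate at scale for RAS image data.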

6.4 Big data and challenges with real-time data analytics

This paper has reviewed how the literature characterizes the data collected by RAS systems in terms of volume, veracity, variety, and velocity. It is also noteworthy that the current literature indicates that the volume, variety, and complexity of data collected by modern sensors and robotic and autonomous systems have led today’s large datasets to be referred to as ‘Big Data’. The term Big Data refers to a variety of high-volume, high-velocity datasets, comprising structured, semi-structured, and unstructured data, collected from, and feeding into, social networks, academia, sensor networks, international trading markets, surveillance, and communication networks [82, 115–117].

Big Data is exceeding the capacity of current technology to contain, process, and analyze it in real time, without resorting to data storage and batch processing, in support of time-critical applications such as international trading markets, smart city infrastructure, and autonomous robotic systems, e.g., self-driving cars [82, 115, 116, 118]. The very fundamentals of neural network (deep learning) techniques make them well suited to processing and analyzing Big Data, as neural network algorithms require vast amounts of training data to provide meaningful predictions, pattern recognition, or representations of any real use. While these techniques have produced technologies such as speech recognition, computer vision, and natural language processing (NLP) that can be applied to volumes of unsupervised and unstructured data, these technologies are still in their infancy and have not yet matured to the point of coping with the complex variety, high volume, and velocity of Big Data [82, 115–118].
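One ingredient of the real-time analytics this section calls for is incremental computation: some statistics can be updated as each reading arrives, with no stored stream and no batch pass. The sketch below uses Welford’s online algorithm to maintain a running mean and (population) variance in constant memory; the sensor readings are invented.

```python
# Sketch: Welford's online algorithm, updating mean and variance per
# reading in O(1) memory, i.e. streaming rather than batch processing.
# The sensor readings are invented for the example.

class RunningStats:
    """Incrementally track count, mean and population variance of a stream."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for reading in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(reading)       # one pass, nothing buffered
print(round(stats.mean, 6), round(stats.variance, 6))
```

Techniques of this shape underlie streaming analytics generally, though they cover only simple summaries; the deep learning workloads discussed above remain far harder to run incrementally, which is precisely the gap the literature identifies.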

6.5 On-board integration of machine learning with RAS platforms

In this paper, we have described robotic platforms as autonomous systems, and we have discussed the machine learning techniques and algorithms that process and analyze the data they collect. However, Panella [119] argued that while UAVs are capable of semi-autonomous operation, there is still no integrated unitary technique providing complete autonomy for UAV platforms, one that allows real-time decision-making within their environment rather than responses to stimuli or events based on pre-programming. This was one of the stated motivations for the on-board integration of various AI and machine learning techniques to deliver a fully autonomous UAV system that can “think” like a human and make decisions within its environment.

Despite current strides in research and development, challenges remain in integrating machine learning techniques on-board RAS platforms. Ono et al. [120] noted that while there is a suite of on-board algorithms that can be integrated as part of, or alongside, the Robot Operating System (ROS) to give robotic systems the autonomy to respond to events in their environment and complete tasks (e.g., the Mars rover), there is still a gap in available algorithms and technology that could provide complete on-board autonomy for future rover missions [120].

Furthermore, Ono et al. [120] discussed the gaps in intelligent algorithms on-board robotic systems that can lead to what their paper calls the “unnoticed green monster problem”, where human decision-makers and operators are unable to take real-time action on events or stimuli detected by the RAS system (the Mars rover in this case study), owing to delay or loss of the data (imagery or otherwise) being fed from the robot, on Mars, to the human operator in a control centre on Earth. This demonstrates the need for the development and on-board integration of AI algorithms that provide on-board decision-making on the robot platform, enabling real-time responses to what Ono et al. [120] describe as “scientific opportunities” while avoiding the “green monster problem” [120].

Hillebrand et al. [121] and Contreras et al. [122] suggested the use of deep learning, specifically reinforcement learning, in response to the absence of a neural network design methodology for robotic systems [121, 122]. Chen et al. [123] noted that while autonomous robot navigation is now a mainstream technology, the current capability still struggles to manage complex, dynamic environments and to reduce misclassifications by current perception algorithms [123].
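The reinforcement learning paradigm the cited works build on can be shown in its minimal tabular form (the cited works use deep networks; the toy corridor environment, its rewards, and all constants below are invented for illustration). An agent in a five-cell corridor learns, from reward alone, to move toward the goal.

```python
# Sketch: tabular Q-learning on an invented 5-cell corridor. The agent
# starts at cell 0, the goal is cell 4, and the only reward is 1.0 for
# reaching the goal; actions are move-left (-1) and move-right (+1).

import random

N_STATES, ACTIONS = 5, (-1, +1)
GOAL = N_STATES - 1
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3          # learning rate, discount, exploration
q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action index]

random.seed(0)
for _ in range(200):                       # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPS else q[s].index(max(q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)   # clamp to corridor
        r = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning temporal-difference update
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

policy = [q[s].index(max(q[s])) for s in range(N_STATES - 1)]
print(policy)  # greedy action index per non-terminal state (1 = move right)
```

The value propagates backward from the goal one cell at a time, which is why such methods need many interactions; scaling this idea from a lookup table to neural network function approximation is exactly the deep reinforcement learning the cited works apply to robotic systems.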

7 Conclusions

This review has reported on the types of robotic platforms deployed for the inspection of different mechanical systems and civil infrastructure, such as storage tanks, high-rise facilities, and nuclear power plants. Unmanned marine vehicles are deployed for systems located underwater (such as subsea power cables), unmanned ground robots are better suited to horizontal ground-surface environments, and UAVs are mostly deployed for remote, hazardous environments, both indoor and outdoor.

This paper demonstrated through an extensive literature review that machine learning has been used, with varied efficacy, to support the processing, analysis, and classification of data collected by RAS systems during inspection of mechanical and civil infrastructure. The review revealed that there are few studies demonstrating the use of deep learning techniques for the analysis of datasets collected during structural health inspections. In these studies, deep learning techniques performed better than most other machine learning methods in processing and analyzing image (damage mechanism) datasets. Furthermore, almost all of the research reviewed has focused on the inspection, analysis, and classification of single damage mechanisms, e.g., corrosion, cracks, or erosion. This indicates a research gap in the use of machine learning techniques to analyze and classify multiple types of damage mechanisms from video or image datasets collected during the inspection of mechanical systems and civil infrastructure.

Availability of data and materials

The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials.

H.M. La, N. Gucunski, K. Dana, S.-H. Kee, Development of an autonomous bridge deck inspection robotic system. J. Field Robot. 2017 , 1489 (2017)


L. Wang, Z. Zhang, Automatic detection of wind turbine blade surface cracks based on UAV-taken images. IEEE Trans. Ind. Electron. 64 (9), 7293–7303 (2017)

S. Bernardini, F. Jovan, Z. Jiang, S. Watson, A. Weightman, P. Moradi, T. Richardson, R. Sadeghian, S. Sareh, A multi-robot platform for the autonomous operation and maintenance of offshore wind farms blue sky ideas track, in Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 2020 , May 9–13, 2020, Auckland, New Zealand (2020)


C. Stout, D. Thompson, UAV Approaches to Wind Turbine Inspection: Reducing Reliance on Rope-Access. Offshore Renewable Energy Catapult. (2019)

D. Schmidt et al., Climbing robots for maintenance and inspections of vertical structures—A survey of design aspects and technologies. Robot. Auton. Syst. (2013). https://doi.org/10.1016/j.robot.2013.09.002

D. Lattanzi et al., Review of Robotic Infrastructure Inspection Systems. J. Infrastruct. Syst. (2017). https://doi.org/10.1061/(ASCE)IS.1943-555X.0000353

M.A.M. Yusoff et al., Development of a Remotely Operated Vehicle (ROV) for underwater inspection. Jurutera (2013)

A.L. Meyrowitz et al., Autonomous vehicles, in Proceedings of the IEEE 1996 (1996). https://doi.org/10.1109/5.533960


F. Rubio et al., A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019 (2019). https://doi.org/10.1177/1729881419839596

D.W. Gage, A Brief History of Unmanned Ground Vehicle (UGV) Development Efforts (1995)

W. Shen et al., Proposed wall climbing robot with permanent magnetic tracks for inspecting oil tanks, in IEEE International Conference Mechatronics and Automation (2005). https://doi.org/10.1109/ICMA.2005.1626882

L.P. Kalra et al., A wall climbing robot for oil tank inspection, in 2006 IEEE International Conference on Robotics and Biomimetics (2006). https://doi.org/10.1109/ROBIO.2006.340155

S. Campbell et al., Sensor technology in autonomous vehicles: a review, in 2018 29th Irish Signals and Systems Conference , ISSC, 2018 (2018). https://doi.org/10.1109/ISSC.2018.8585340

J. Seo et al., Drone-enabled bridge inspection methodology and application. Autom. Constr. (2018). https://doi.org/10.1016/j.autcon.2018.06.006 . https://www.sciencedirect.com/science/article/pii/S0926580517309755

M. Shafiee et al., Unmanned Aerial Drones for Inspection of Offshore Wind Turbines: A Mission-Critical Failure Analysis. Robotics J. (2021). https://doi.org/10.3390/robotics10010026

M.H. Frederiksen et al., Drones for inspection of infrastructure: Barriers, opportunities and successful uses. Center for Integrative Innovation Management (2019)

M. Drones Lt, Best Commercial Drones for Beginners (2018). https://www.coptrz.com/best-commercial-drones-for-beginners/ . Accessed 2 Sep. 2019

C. Eschmann et al., High-resolution multisensor infrastructure inspection with unmanned aircraft systems, in ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2013 (2013). https://doi.org/10.5194/isprsarchives-XL-1-W2-125-2013 https://ui.adsabs.harvard.edu/abs/2013ISPAr.XL1b.125E

X.L. Ding et al., A review of structures, verification, and calibration technologies of space robotic systems for on-orbit servicing (2020). https://doi.org/10.1007/s11431-020-1737-4

A. Flores-Abad et al., A Review of Space Robotics Technologies for on-Orbit Servicing (Elsevier, Amsterdam, 2014). https://doi.org/10.1016/j.paerosci.2014.03.002


P.J. Staritz et al., Skyworker: A Robot for Assembly, Inspection and Maintenance of Large-Scale Orbital Facilities. IEEE (2001). https://doi.org/10.1109/ROBOT.2001.933271

H. Choset, D. Kortenkamp, Path planning and control for free-flying inspection robot in space. J. Aerosp. Eng. (1999). https://doi.org/10.1061/(ASCE)0893-1321(1999)12:2(74)

J.S. Mehling et al., A minimally invasive tendril robot for in-space inspection, in The First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics, BioRob 2006 (2006), pp. 690–695. https://doi.org/10.1109/BIOROB.2006.1639170

S.-I. Nishida et al., Prototype of an end-effector for a space inspection robot. Adv. Robot. (2012). https://doi.org/10.1163/156855301300235788

L. Pedersen et al., A survey of space robotics, in ISAIRAS (2003)

J. Redmon et al., You only look once: unified, real-time object detection, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)

W. Fan et al., Mining big data: current status, and forecast to the future, in 2013 Association for Computing Machinery (2013). https://doi.org/10.1145/2481244.2481246

B. Matturdi et al., Big data security and privacy: a review. China Commun. 11 (14), 135–145 (2014). https://doi.org/10.1109/CC.2014.7085614

D. Laney, 3-D Data Management: Controlling Data Volume, Velocity and Variety . META Group Research Note, February, vol. 6 (2001)

B.T. Bastian, J. N, S.K. Kumar, C.V. Jiji, Visual inspection and characterization of external corrosion in pipelines using deep neural network. NDT & E International Journal 107 , 102134 (2019)

A. Shihavuddin et al., Wind turbine surface damage detection by deep learning aided drone inspection analysis. Energies 12 (4), 676 (2019). https://doi.org/10.3390/en12040676

M. Hassanalian et al., Classifications, applications, and design challenges of drones: a review. Prog. Aerosp. Sci. (2017). https://doi.org/10.1016/j.paerosci.2017.04.003 . https://www.sciencedirect.com/science/article/pii/S0376042116301348

A. Alharam et al., Real time AI-based pipeline inspection using drone for oil and gas industries in Bahrain, in 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT) (2020)

V. Nasteski, An overview of the supervised machine learning methods. Horizons B 4 (2017). https://doi.org/10.20544/HORIZONS.B.04.1.17.P05

B. Mahesh, Machine Learning Algorithms – a Review (2019). https://doi.org/10.21275/ART20203995

A. Carrio et al., A review of deep learning methods and applications for unmanned aerial vehicles. Hindawi J. Sens. (2017). https://doi.org/10.1155/2017/3296874

M.N. Mohammed et al., Design and Development of Pipeline Inspection Robot for Crack and Corrosion Detection (2018)

https://www.analyticssteps.com/blogs/how-does-k-nearest-neighbor-works-machine-learning-classification-problem

F. Hoffmann et al., Benchmarking in classification and regression. WIREs Data Mining Knowl. Discov. 9 , e1318 (2019). https://doi.org/10.1002/widm.1318

A. Geron, Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow, 2nd edn. (2019)

I. Goodfellow et al., Deep Learning (MIT Press, Cambridge, 2016)


F.Y. Osisanwo et al., Supervised machine learning algorithms: classification and comparison. Int. J. Comput. Trends. Technol. (IJCTT) 48 (3) 128–138 (2017)

C.-F. Tsai et al., Intrusion detection by machine learning: a review. Expert Syst. Appl. 36 (10), 11994–12000 (2009)

A. Matsunaga et al., On the use of machine learning to predict the time and resources consumed by applications, in 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing (2010), pp. 495–504. https://doi.org/10.1109/CCGRID.2010.98

M. Jogin et al., Feature extraction using Convolution Neural Networks (CNN) and deep learning, in 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT) (2018), pp. 2319–2323. https://doi.org/10.1109/RTEICT42901.2018.9012507

https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/

https://morioh.com/p/73fce91e9846

Y. Guo et al., Deep learning for visual understanding: a review. Neurocomputing 187 , 27–48 (2016). https://doi.org/10.1016/j.neucom.2015.09.116

K. Gopalakrishnan et al., Crack damage detection in unmanned aerial vehicle images of civil infrastructure using pre-trained deep learning model. Int. J. Traffic Transp. Eng. (IJTTE) (2017)

H. Larochelle et al., Exploring strategies for training deep neural networks. J. Mach. Learn. Res. 1 , 1–40 (2009). https://doi.org/10.1145/1577069.1577070


A. Fischer, C. Igel, An Introduction to Restricted Boltzmann Machines . Iberoamerican Congress on Pattern Recognition (Springer, Berlin, 2012)

N. Agarwalla et al., Deep learning using restricted Boltzmann machines. Int. J. Comput. Sci. Inf. Secur. 7 (3), 1552–1556 (2016)

Y. Hua et al., Deep belief networks and deep learning, in Proceedings of 2015 International Conference on Intelligent Computing and Internet of Things (2015), pp. 1–4. https://doi.org/10.1109/ICAIOT.2015.7111524

https://blog.paperspace.com/faster-r-cnn-explained-object-detection/

https://neurohive.io/en/popular-networks/r-cnn/

Z.-Q. Zhao et al., Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Syst. 30 (11), 3212–3232 (2019). https://doi.org/10.1109/TNNLS.2018.2876865

https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4

P.S. Bithas et al., A Survey on Machine-Learning Techniques for UAV-Based Communications. Sensors (Basel, Switzerland) 26 November 2019 (2019). https://europepmc.org/articles/PMC6929112 . Accessed September 2020

R. Girshick et al., Rich feature hierarchies for accurate object detection and semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)

R. Girshick, Fast r-cnn, in Proceedings of the IEEE International Conference on Computer Vision (2015)

K. He et al., Mask R-CNN, in ICCV (2017)

J. Dai et al., R-FCN: Object Detection via Region-based Fully Convolutional Networks (2016). arXiv:1605.06409

W. Liu et al., Ssd: Single shot multibox detector (2015). Preprint arXiv:1512.02325

S. Ren et al., Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015 (2015)

S. Grossberg, Recurrent neural networks. Scholarpedia 8 (2), 1888 (2013)

J.A. Bullinaria, Recurrent neural networks. Neural Computation: Lecture 12 (2013)

S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Comput. 9 (8), 1735–1780 (1997). https://doi.org/10.1162/neco.1997.9.8.1735

I.J. Goodfellow et al., Generative Adversarial Networks (2014). arXiv, stat.ML

M. Mirza, S. Osindero, Conditional Generative Adversarial Nets (2014)

L. Noriega, Multilayer perceptron tutorial. School of Computing. Staffordshire University (2005)

H. Taud, J. Mas, Multilayer perceptron (MLP), in Geomatic Approaches for Modeling Land Change Scenarios . Lecture Notes in Geoinformation and Cartography. (Springer, Cham, 2018). https://doi.org/10.1007/978-3-319-60801-3_27

G. Alain, Y. Bengio, What Regularized Auto-Encoders Learn from the Data Generating Distribution (2014)

F.Q. Lauzon, An introduction to deep learning, in 2012 11th International Conference on Information Science, Signal Processing and Their Applications (ISSPA) , (2012), pp. 1438–1439. https://doi.org/10.1109/ISSPA.2012.6310529

P. Baldi, Autoencoders, unsupervised learning, and deep architectures, in Proceedings of ICML Workshop on Unsupervised and Transfer Learning (2012)

Q.V. Le, A tutorial on deep learning part 2: autoencoders, convolutional neural networks and recurrent neural networks. Google Brain 20 , 1–20 (2015)

A. Agarwal, A. Motwani, An Overview of Convolutional and AutoEncoder Deep Learning Algorithm (2016)

Y. Coadou, Boosted decision trees and applications. EPJ Web Conf. 55 , 02004 (2013). https://doi.org/10.1051/epjconf/20135502004

E. Beauxis-Aussalet et al., Visualization of confusion matrix for non-expert users, in IEEE Conference on Visual Analytics Science and Technology (VAST) - Poster Proceedings (2014)

G. Shobha et al., Machine learning, in Handbook of Statistics , vol. 38 (Elsevier, Amsterdam, 2018), pp. 197–228. https://doi.org/10.1016/bs.host.2018.07.004 . https://www.sciencedirect.com/science/article/pii/S0169716118300191 . ISSN 0169-7161. ISBN 9780444640420

A. Kulkarni et al., Foundations of data imbalance and solutions for a data democracy, in Data Democracy (Academic Press, San Diego, 2020), pp. 83–106. https://doi.org/10.1016/B978-0-12-818366-3.00005-8 . ISBN 9780128183663

https://towardsdatascience.com/map-mean-average-precision-might-confuse-you-5956f1bfa9e2

N. Mohamed et al., Real-time big data analytics: applications and challenges, in Proc. Int. Conf. High Perform. Comput. Simulation (2014), pp. 305–310

J. Franko et al., Design of a multi-robot system for wind turbine maintenance. Energies (2020)

B. Brandoli et al., Aircraft fuselage corrosion detection using artificial intelligence. Sensors 2021 (21), 4026 (2021). https://doi.org/10.3390/s21124026

T. Malekzadeh et al., Aircraft Fuselage Defect Detection using Deep Neural Networks (2017). arXiv:1712.09213

J. Miranda et al., Machine learning approaches for defect classification on aircraft fuselage images acquired by an UAV, in Fourteenth International Conference on Quality Control by Artificial Vision , 16 July 2019, Proc. SPIE, vol. 11172 (2019), p. 1117208. https://doi.org/10.1117/12.2520567

B. Jalil et al., Fault detection in power equipment via an unmanned aerial system using multi modal data. Sensors 2019 (19), 3014 (2019). https://doi.org/10.3390/s19133014

E. Titov et al., The deep learning based power line defect detection system built on data collected by the cablewalker drone, in 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON) (2019), pp. 0700–0704. https://doi.org/10.1109/SIBIRCON48586.2019.8958397

A. Ortiz et al., First steps towards a roboticized visual inspection system for vessels, in 2010 IEEE 15th Conference on Emerging Technologies & Factory Automation (ETFA 2010) (2010), pp. 1–6. https://doi.org/10.1109/ETFA.2010.5641246

F. Bonnin-Pascual et al., Semi-autonomous visual inspection of vessels assisted by an unmanned micro aerial vehicle, in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 3955–3961. https://doi.org/10.1109/IROS.2012.6385891

S. Kawabata et al., Autonomous flight drone with depth camera for inspection task of infra structure, in Proceedings of the International MultiConference of Engineers and Computer Scientists , vol. 2 (2018)

Y.-J. Cha et al., Deep learning-based crack damage detection using convolutional neural networks. Comput.-Aided Civ. Infrastruct. Eng. 32 , 361–378 (2017)

M.M. Karim et al., Modeling and simulation of a robotic bridge inspection system, in Procedia Computer Science (2020), pp. 177–185. https://doi.org/10.1016/j.procs.2020.02.276 . https://www.sciencedirect.com/science/article/pii/S1877050920304154 . ISSN 1877-0509

M.A. Manzoor et al., Vehicle make and model classification system using bag of SIFT features, in 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC) , (2017), pp. 1–5. https://doi.org/10.1109/CCWC.2017.7868475

P. Rakshata et al., Car damage detection and analysis using deep learning algorithm for automotive. Int. J. Sci. Technol. Res. 5 (6) (2019). Nov-Dec-2019, ISSN (Online): 2395-566X

Q. Zhang et al., Vehicle-damage-detection segmentation algorithm based on improved mask RCNN. IEEE Access 8 , 6997–7004 (2020). https://doi.org/10.1109/ACCESS.2020.2964055

H. Bandi et al., Assessing car damage with convolutional neural networks, in 2021 International Conference on Communication Information and Computing Technology (ICCICT) (2021), pp. 1–5. https://doi.org/10.1109/ICCICT50803.2021.9510069

C. Giovany Pachón-Suescún et al., Scratch detection in cars using a convolutional neural network by means of transfer learning. IJAER (2018)

K. Patil et al., Deep learning based car damage classification, in 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA) (2017), pp. 50–54. https://doi.org/10.1109/ICMLA.2017.0-179

R. Ali et al., Subsurface damage detection of a steel bridge using deep learning and uncooled micro-bolometer. Constr. Build. Mater. 226 , 376–387 (2019). https://doi.org/10.1016/j.conbuildmat.2019.07.293 . https://www.sciencedirect.com/science/article/pii/S0950061819319671

Y. Liu et al., The method of insulator recognition based on deep learning, in Proceedings of the 2016 4th International Conference on Applied Robotics for the Power Industry (CARPI) , Jinan, China, 11–13 October, 2016 (2016), pp. 1–5

Z. Zhao et al., Multi-patch deep features for power line insulator status classification from aerial images, in 2016 International Joint Conference on Neural Networks (IJCNN), 2016 (2016), pp. 3187–3194. https://doi.org/10.1109/IJCNN.2016.7727606

A. Ortiz et al., Vision-based corrosion detection assisted by a micro-aerial vehicle in a vessel inspection application. Sensors 2016 (16), 2118 (2016). https://doi.org/10.3390/s16122118

V.N. Nguyen et al., Automatic autonomous vision-based power line inspection: A review of current status and the potential role of deep learning. Int. J. Electr. Power Energy Syst. 2018 (2018)

S. Faghih-Roohi et al., Deep convolutional neural networks for detection of rail surface defects, in Neural Networks (IJCNN), 2016 International Joint Conference on, 2016 (2016), pp. 2584–2589

Z.-Q. Tong et al., Recent advances in small object detection based on deep learning: a review. Image Vis. Comput. 97 , 103910 (2020)

T.-Y. Lin et al., Microsoft Coco: Common Objects in Context . European Conference on Computer Vision (Springer, Cham, 2014)

Y. Liu et al., A survey and performance evaluation of deep learning methods for small object detection. Expert Syst. Appl. 172 , 114602 (2021)

N.-D. Nguyen et al., An evaluation of deep learning methods for small object detection. J. Electr. Comput. Eng. 2020 , 3189691 (2020)

C. Chenyi et al., R-CNN for small object detection, in Asian Conference on Computer Vision (Springer, Cham, 2016)

Z.-Q. Zhao et al., Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Syst. 30 (11), 3212–3232 (2019)

Y. Shen et al., Design alternative representations of confusion matrices to support non-expert public understanding of algorithm performance. Proc. ACM Hum. Comput. Interact. 4 (CSCW2), 153 (2020)

R.K. Rai et al., Intricacies of Unstructured Data. EAI Endorsed Transactions on Scalable Information Systems 4 (14) (2017). https://doi.org/10.4108/eai.25-9-2017.153151

A. Gandomi et al., Beyond the hype: big data concepts, methods, and analytics. Int. J. Inf. Manag. 35 (2), 137–144 (2015)

D.P. Acharjya et al., A survey on big data analytics: challenges, open research issues and tools. Int. J. Adv. Comput. Sci. Appl. 7 , 511–518 (2016)

A. Oussous et al., Big data technologies: a survey. J. King Saud Univ, Comput. Inf. Sci. 30 , 431–448 (2018)

X. Jin et al., Significance and challenges of big data research. Big Data Res. 2 (2), 59–64 (2015)

M.M. Najafabadi et al., Deep learning applications and challenges in big data analytics. Big Data 2 (1), 1–21 (2015)

I. Panella, Artificial intelligence methodologies applicable to support the decision-making capability on board unmanned aerial vehicles, in ECSIS Symposium on Bio-Inspired Learning and Intelligent Systems for Security , Edinburgh (2008), pp. 111–118. https://doi.org/10.1109/BLISS.2008.14

M. Ono et al., MAARS: machine learning-based analytics for automated rover systems, in Proc. IEEE Aerosp. Conf (2020), pp. 1–17

M. Hillebrand et al., A design methodology for deep reinforcement learning in autonomous systems. Procedia Manufacturing 52 (2020). https://doi.org/10.1016/j.promfg.2020.11.044 . https://www.sciencedirect.com/science/article/pii/S2351978920321879

S. Contreras et al., Using deep learning for exploration and recognition of objects based on images, in 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR) (2016), pp. 1–6. https://doi.org/10.1109/LARS-SBR.2016.8

W. Chen et al., Door recognition and deep learning algorithm for visual based robot navigation, in 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014) (2014), pp. 1793–1798. https://doi.org/10.1109/ROBIO.2014.7090595

Download references

Funding

This research was funded by the EPSRC (grant number 2466663; https://gtr.ukri.org/projects?ref=studentship-2466663).

Author information

Authors and Affiliations

Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury, CT2 7NT, United Kingdom

Michael O. Macaulay & Mahmood Shafiee


Contributions

Conceptualization, MM and MS; methodology, MM and MS; investigation, MM; resources, MM and MS; writing—original draft preparation, MM and MS; writing—review and editing, MM and MS; supervision, MS; project administration, MM; funding acquisition, MS. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Michael O. Macaulay.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Code availability (software application or custom code)

Not applicable.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Macaulay, M.O., Shafiee, M. Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure. Auton. Intell. Syst. 2, 8 (2022). https://doi.org/10.1007/s43684-022-00025-3


Received: 07 January 2022

Accepted: 05 April 2022

Published: 29 April 2022

DOI: https://doi.org/10.1007/s43684-022-00025-3


Keywords

  • Machine learning
  • Deep learning
  • Robotics and autonomous systems (RAS)
  • Mechanical systems
  • Civil infrastructure

Swarm Robotics: Past, Present, and Future [Point of View]



500 research papers and projects in robotics – Free Download


The recent history of robotics is full of fascinating moments that accelerated the rapid technological advances in artificial intelligence, automation, engineering, energy storage, and machine learning. These advances have transformed the capabilities of robots and their ability to take over tasks once carried out by humans in factories, hospitals, farms, and beyond.

These technological advances don't occur overnight; they require years of research and development to solve some of the biggest engineering challenges in navigation, autonomy, AI and machine learning, and to build robots that are safer and more efficient in real-world situations. Many universities, institutes, and companies across the world are working tirelessly in these research areas to make this a reality.

In this post, we have listed 500+ recent research papers and projects for those who are interested in robotics. These free, downloadable research papers can shed light on some of the complex areas in robotics, such as navigation, motion planning, robotic interactions, obstacle avoidance, actuators, machine learning, computer vision, artificial intelligence, collaborative robotics, nano robotics, social robotics, cloud robotics, swarm robotics, sensors, mobile robotics, humanoid robots, service robots, automation, and autonomy. Feel free to download, and share your own research papers with us to be added to this list. You can also ask a professional academic writer from CustomWritings – research paper writing service – to assist you online on any related topic.
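Many of the navigation and motion-planning papers listed below deal with finding collision-free paths. As a minimal, self-contained sketch of the underlying idea (not drawn from any listed paper; the grid encoding and the `bfs_path` helper are illustrative assumptions), breadth-first search recovers a shortest path on a small occupancy grid:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid via breadth-first search.

    grid: list of equal-length strings, '.' = free cell, '#' = obstacle.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []                  # walk parent links back to start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'
                    and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

grid = [
    "....",
    ".##.",
    "....",
]
path = bfs_path(grid, (0, 0), (2, 3))  # 6 cells, routed around the obstacle block
```

Real planners in the papers below replace this uniform-cost grid search with techniques such as potential fields, ant colony optimization, or sampling-based methods.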

Navigation and Motion Planning

  • Robotics Navigation Using MPEG CDVS
  • Design, Manufacturing and Test of a High-Precision MEMS Inclination Sensor for Navigation Systems in Robot-assisted Surgery
  • Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free Human Following Navigation in Outdoor Environment
  • One Point Perspective Vanishing Point Estimation for Mobile Robot Vision Based Navigation System
  • Application of Ant Colony Optimization for finding the Navigational path of Mobile Robot-A Review
  • Robot Navigation Using a Brain-Computer Interface
  • Path Generation for Robot Navigation using a Single Ceiling Mounted Camera
  • Exact Robot Navigation Using Power Diagrams
  • Learning Socially Normative Robot Navigation Behaviors with Bayesian Inverse Reinforcement Learning
  • Pipelined, High Speed, Low Power Neural Network Controller for Autonomous Mobile Robot Navigation Using FPGA
  • Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology
  • Optimality and limit behavior of the ML estimator for Multi-Robot Localization via GPS and Relative Measurements
  • Aerial Robotics: Compact groups of cooperating micro aerial vehicles in clustered GPS denied environment
  • Disordered and Multiple Destinations Path Planning Methods for Mobile Robot in Dynamic Environment
  • Integrating Modeling and Knowledge Representation for Combined Task, Resource and Path Planning in Robotics
  • Path Planning With Kinematic Constraints For Robot Groups
  • Robot motion planning for pouring liquids
  • Implan: Scalable Incremental Motion Planning for Multi-Robot Systems
  • Equilibrium Motion Planning of Humanoid Climbing Robot under Constraints
  • POMDP-lite for Robust Robot Planning under Uncertainty
  • The RoboCup Logistics League as a Benchmark for Planning in Robotics
  • Planning-aware communication for decentralised multi- robot coordination
  • Combined Force and Position Controller Based on Inverse Dynamics: Application to Cooperative Robotics
  • A Four Degree of Freedom Robot for Positioning Ultrasound Imaging Catheters
  • The Role of Robotics in Ovarian Transposition
  • An Implementation on 3D Positioning Aquatic Robot

Robotic Interactions

  • On Indexicality, Direction of Arrival of Sound Sources and Human-Robot Interaction
  • OpenWoZ: A Runtime-Configurable Wizard-of-Oz Framework for Human-Robot Interaction
  • Privacy in Human-Robot Interaction: Survey and Future Work
  • An Analysis Of Teacher-Student Interaction Patterns In A Robotics Course For Kindergarten Children: A Pilot Study
  • Human Robotics Interaction (HRI) based Analysis–using DMT
  • A Cautionary Note on Personality (Extroversion) Assessments in Child-Robot Interaction Studies
  • Interaction as a bridge between cognition and robotics
  • State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction
  • Eliciting Conversation in Robot Vehicle Interactions
  • A Comparison of Avatar, Video, and Robot-Mediated Interaction on Users’ Trust in Expertise
  • Exercising with Baxter: Design and Evaluation of Assistive Social-Physical Human- Robot Interaction
  • Using Narrative to Enable Longitudinal Human- Robot Interactions
  • Computational Analysis of Affect, Personality, and Engagement in HumanRobot Interactions
  • Human-robot interactions: A psychological perspective
  • Gait of Quadruped Robot and Interaction Based on Gesture Recognition
  • Graphically representing child- robot interaction proxemics
  • Interactive Demo of the SOPHIA Project: Combining Soft Robotics and Brain-Machine Interfaces for Stroke Rehabilitation
  • Interactive Robotics Workshop
  • Activating Robotics Manipulator using Eye Movements
  • Wireless Controlled Robot Movement System Desgined using Microcontroller
  • Gesture Controlled Robot using LabVIEW
  • RoGuE: Robot Gesture Engine

Obstacle Avoidance

  • Low Cost Obstacle Avoidance Robot with Logic Gates and Gate Delay Calculations
  • Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance
  • Controlling Obstacle Avoiding And Live Streaming Robot Using Chronos Watch
  • Movement Of The Space Robot Manipulator In Environment With Obstacles
  • Assis-Cicerone Robot With Visual Obstacle Avoidance Using a Stack of Odometric Data.
  • Obstacle detection and avoidance methods for autonomous mobile robot
  • Moving Domestic Robotics Control Method Based on Creating and Sharing Maps with Shortest Path Findings and Obstacle Avoidance
  • Control of the Differentially-driven Mobile Robot in the Environment with a Non-Convex Star-Shape Obstacle: Simulation and Experiments

Machine Learning

  • A survey of typical machine learning based motion planning algorithms for robotics
  • Linear Algebra for Computer Vision, Robotics , and Machine Learning
  • Applying Radical Constructivism to Machine Learning: A Pilot Study in Assistive Robotics
  • Machine Learning for Robotics and Computer Vision: Sampling methods and Variational Inference
  • Rule-Based Supervisor and Checker of Deep Learning Perception Modules in Cognitive Robotics
  • The Limits and Potentials of Deep Learning for Robotics
  • Autonomous Robotics and Deep Learning
  • A Unified Knowledge Representation System for Robot Learning and Dialogue

Computer Vision

  • Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot
  • Non-Euclidean manifolds in robotics and computer vision: why should we care?
  • Topology of singular surfaces, applications to visualization and robotics
  • On the Impact of Learning Hierarchical Representations for Visual Recognition in Robotics
  • Focused Online Visual-Motor Coordination for a Dual-Arm Robot Manipulator
  • Towards Practical Visual Servoing in Robotics
  • Visual Pattern Recognition In Robotics
  • Automated Visual Inspection: Position Identification of Object for Industrial Robot Application based on Color and Shape
  • Automated Creation of Augmented Reality Visualizations for Autonomous Robot Systems
  • Implementation of Efficient Night Vision Robot on Arduino and FPGA Board

Artificial Intelligence

  • On the Relationship between Robotics and Artificial Intelligence
  • Artificial Spatial Cognition for Robotics and Mobile Systems: Brief Survey and Current Open Challenges
  • Artificial Intelligence, Robotics and Its Impact on Society
  • The Effects of Artificial Intelligence and Robotics on Business and Employment: Evidence from a survey on Japanese firms
  • Artificially Intelligent Maze Solver Robot
  • Artificial intelligence, Cognitive Robotics and Human Psychology
  • Minecraft as an Experimental World for AI in Robotics
  • Impact of Robotics, RPA and AI on the insurance industry: challenges and opportunities

Probabilistic Programming

  • On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics
  • Exploration strategies in developmental robotics: a unified probabilistic framework
  • Probabilistic Programming for Robotics

Actuators

  • New design of a soft-robotics wearable elbow exoskeleton based on Shape Memory Alloy wires actuators
  • Design of a Modular Series Elastic Upgrade to a Robotics Actuator
  • Applications of Compliant Actuators to Wearing Robotics for Lower Extremity
  • Review of Development Stages in the Conceptual Design of an Electro-Hydrostatic Actuator for Robotics
  • Fluid electrodes for submersible robotics based on dielectric elastomer actuators
  • Cascaded Control Of Compliant Actuators In Friendly Robotics

Collaborative Robotics

  • Interpretable Models for Fast Activity Recognition and Anomaly Explanation During Collaborative Robotics Tasks
  • Collaborative Work Management Using SWARM Robotics
  • Collaborative Robotics : Assessment of Safety Functions and Feedback from Workers, Users and Integrators in Quebec
  • Accessibility, Making and Tactile Robotics : Facilitating Collaborative Learning and Computational Thinking for Learners with Visual Impairments
  • Trajectory Adaptation of Robot Arms for Head-pose Dependent Assistive Tasks

Mobile Robotics

  • Experimental research of proximity sensors for application in mobile robotics in greenhouse environment.
  • Multispectral Texture Mapping for Telepresence and Autonomous Mobile Robotics
  • A Smart Mobile Robot to Detect Abnormalities in Hazardous Zones
  • Simulation of nonlinear filter based localization for indoor mobile robot
  • Integrating control science in a practical mobile robotics course
  • Experimental Study of the Performance of the Kinect Range Camera for Mobile Robotics
  • Planification of an Optimal Path for a Mobile Robot Using Neural Networks
  • Security of Networking Control System in Mobile Robotics (NCSMR)
  • Vector Maps in Mobile Robotics
  • An Embedded System for a Bluetooth Controlled Mobile Robot Based on the ATmega8535 Microcontroller
  • Experiments of NDT-Based Localization for a Mobile Robot Moving Near Buildings
  • Hardware and Software Co-design for the EKF Applied to the Mobile Robotics Localization Problem
  • Design of a SESLogo Program for Mobile Robot Control
  • An Improved Ekf-Slam Algorithm For Mobile Robot
  • Intelligent Vehicles at the Mobile Robotics Laboratory, University of Sao Paulo, Brazil [ITS Research Lab]
  • Introduction to Mobile Robotics
  • Miniature Piezoelectric Mobile Robot driven by Standing Wave
  • Mobile Robot Floor Classification using Motor Current and Accelerometer Measurements

Sensors

  • Sensors for Robotics 2015
  • An Automated Sensing System for Steel Bridge Inspection Using GMR Sensor Array and Magnetic Wheels of Climbing Robot
  • Sensors for Next-Generation Robotics
  • Multi-Robot Sensor Relocation To Enhance Connectivity In A WSN
  • Automated Irrigation System Using Robotics and Sensors
  • Design Of Control System For Articulated Robot Using Leap Motion Sensor
  • Automated configuration of vision sensor systems for industrial robotics

Nano robotics

  • Light Robotics: an all-optical nano- and micro-toolbox
  • Light-driven Nano-robotics
  • Light Robotics: a new technology and its applications
  • Light Robotics: Aiming towards all-optical nano-robotics
  • NanoBiophotonics Applications of Light Robotics
  • System Level Analysis for a Locomotive Inspection Robot with Integrated Microsystems
  • High-Dimensional Robotics at the Nanoscale Kino-Geometric Modeling of Proteins and Molecular Mechanisms
  • A Study Of Insect Brain Using Robotics And Neural Networks

Social Robotics

  • Integrative Social Robotics Hands-On
  • ProCRob Architecture for Personalized Social Robotics
  • Definitions and Metrics for Social Robotics, along with some Experience Gained in this Domain
  • Transmedia Choreography: Integrating Multimodal Video Annotation in the Creative Process of a Social Robotics Performance Piece
  • Co-designing with children: An approach to social robot design
  • Toward Social Cognition in Robotics: Extracting and Internalizing Meaning from Perception
  • Human Centered Robotics : Designing Valuable Experiences for Social Robots
  • Preliminary system and hardware design for Quori, a low-cost, modular, socially interactive robot
  • Socially assistive robotics: Human augmentation versus automation
  • Tega: A Social Robot

Humanoid robot

  • Compliance Control and Human-Robot Interaction – International Journal of Humanoid Robotics
  • The Design of Humanoid Robot Using C# Interface on Bluetooth Communication
  • An Integrated System to approach the Programming of Humanoid Robotics
  • Humanoid Robot Slope Gait Planning Based on Zero Moment Point Principle
  • Literature Review Real-Time Vision-Based Learning for Human-Robot Interaction in Social Humanoid Robotics
  • The Roasted Tomato Challenge for a Humanoid Robot
  • Remotely teleoperating a humanoid robot to perform fine motor tasks with virtual reality

Cloud Robotics

  • CR3A: Cloud Robotics Algorithms Allocation Analysis
  • Cloud Computing and Robotics for Disaster Management
  • ABHIKAHA: Aerial Collision Avoidance in Quadcopter using Cloud Robotics
  • The Evolution Of Cloud Robotics: A Survey
  • Sliding Autonomy in Cloud Robotics Services for Smart City Applications
  • CORE: A Cloud-based Object Recognition Engine for Robotics
  • A Software Product Line Approach for Configuring Cloud Robotics Applications
  • Cloud robotics and automation: A survey of related work
  • ROCHAS: Robotics and Cloud-assisted Healthcare System for Empty Nester

Swarm Robotics

  • Evolution of Task Partitioning in Swarm Robotics
  • GESwarm: Grammatical Evolution for the Automatic Synthesis of Collective Behaviors in Swarm Robotics
  • A Concise Chronological Reassess Of Different Swarm Intelligence Methods With Multi Robotics Approach
  • The Swarm/Potential Model: Modeling Robotics Swarms with Measure-valued Recursions Associated to Random Finite Sets
  • The TAM: abstracting complex tasks in swarm robotics research
  • Task Allocation in Foraging Robot Swarms: The Role of Information Sharing
  • Robotics on the Battlefield Part II
  • Implementation Of Load Sharing Using Swarm Robotics
  • An Investigation of Environmental Influence on the Benefits of Adaptation Mechanisms in Evolutionary Swarm Robotics

Soft Robotics

  • Soft Robotics: The Next Generation of Intelligent Machines
  • Soft Robotics: Transferring Theory to Application, "Soft Components for Soft Robots"
  • Advances in Soft Computing, Intelligent Robotics and Control
  • The BRICS Component Model: A Model-Based Development Paradigm For Complex Robotics Software Systems
  • Soft Mechatronics for Human-Friendly Robotics
  • Seminar Soft-Robotics
  • Special Issue on Open Source Software-Supported Robotics Research.
  • Soft Brain-Machine Interfaces for Assistive Robotics: A Novel Control Approach
  • Towards a Robot Hardware Abstraction Layer (R-HAL) Leveraging the XBot Software Framework

Service Robotics

  • Fundamental Theories and Practice in Service Robotics
  • Natural Language Processing in Domestic Service Robotics
  • Localization and Mapping for Service Robotics Applications
  • Designing of Service Robot for Home Automation-Implementation
  • Benchmarking Speech Understanding in Service Robotics
  • The Cognitive Service Robotics Apartment
  • Planning with Task-oriented Knowledge Acquisition for A Service Robot

Cognitive Robotics

  • Meta-Morphogenesis theory as background to Cognitive Robotics and Developmental Cognitive Science
  • Experience-based Learning for Bayesian Cognitive Robotics
  • Weakly supervised strategies for natural object recognition in robotics
  • Robotics-Derived Requirements for the Internet of Things in the 5G Context
  • A Comparison of Modern Synthetic Character Design and Cognitive Robotics Architecture with the Human Nervous System
  • PREGO: An Action Language for Belief-Based Cognitive Robotics in Continuous Domains
  • The Role of Intention in Cognitive Robotics
  • On Cognitive Learning Methodologies for Cognitive Robotics
  • Relational Enhancement: A Framework for Evaluating and Designing Human-Robot Relationships
  • A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering
  • Spatial Cognition in Robotics
  • IOT Based Gesture Movement Recognize Robot
  • Deliberative Systems for Autonomous Robotics: A Brief Comparison Between Action-oriented and Timelines-based Approaches
  • Formal Modeling and Verification of Dynamic Reconfiguration of Autonomous Robotics Systems
  • Robotics on its feet: Autonomous Climbing Robots
  • Implementation of Autonomous Metal Detection Robot with Image and Message Transmission using Cell Phone
  • Toward autonomous architecture: The convergence of digital design, robotics, and the built environment

Automation

  • Advances in Robotics Automation
  • Data-centered Dependencies and Opportunities for Robotics Process Automation in Banking
  • On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications
  • Meshworm With Segment-Bending Anchoring for Colonoscopy (IEEE Robotics and Automation Letters, 2(3), pp. 1718–1724)
  • Recent Advances in Robotics and Automation
  • Key Elements Towards Automation and Robotics in Industrialised Building System (IBS)

Education

  • Knowledge Building, Innovation Networks, and Robotics in Math Education
  • The potential of a robotics summer course On Engineering Education
  • Robotics as an Educational Tool: Impact of Lego Mindstorms
  • Effective Planning Strategy in Robotics Education: An Embodied Approach
  • An innovative approach to School-Work turnover programme with Educational Robotics
  • The importance of educational robotics as a precursor of Computational Thinking in early childhood education
  • Pedagogical Robotics A way to Experiment and Innovate in Educational Teaching in Morocco
  • Learning by Making and Early School Leaving: an Experience with Educational Robotics
  • Robotics and Coding: Fostering Student Engagement
  • Computational Thinking with Educational Robotics
  • New Trends In Education Of Robotics
  • Educational robotics as an instrument of formation: a public elementary school case study
  • Developmental Situation and Strategy for Engineering Robot Education in China University
  • Towards the Humanoid Robot Butler
  • YAGI-An Easy and Light-Weighted Action-Programming Language for Education and Research in Artificial Intelligence and Robotics
  • Simultaneous Tracking and Reconstruction (STAR) of Objects and its Application in Educational Robotics Laboratories
  • The importance and purpose of simulation in robotics
  • An Educational Tool to Support Introductory Robotics Courses
  • Lollybot: Where Candy, Gaming, and Educational Robotics Collide
  • Assessing the Impact of an Autonomous Robotics Competition for STEM Education
  • Educational robotics for promoting 21st century skills
  • New Era for Educational Robotics: Replacing Teachers with a Robotic System to Teach Alphabet Writing
  • Robotics as a Learning Tool for Educational Transformation
  • The Herd of Educational Robotic Devices (HERD): Promoting Cooperation in Robotics Education
  • Robotics in physics education: fostering graphing abilities in kinematics
  • Enabling Rapid Prototyping in K-12 Engineering Education with BotSpeak, a Universal Robotics Programming Language
  • Innovating in robotics education with Gazebo simulator and JdeRobot framework
  • How to Support Students’ Computational Thinking Skills in Educational Robotics Activities
  • Educational Robotics At Lower Secondary School
  • Evaluating the impact of robotics in education on pupils’ skills and attitudes
  • Imagining, Playing, and Coding with KIBO: Using Robotics to Foster Computational Thinking in Young Children
  • How Does a First LEGO League Robotics Program Provide Opportunities for Teaching Children 21st Century Skills
  • A Software-Based Robotic Vision Simulator For Use In Teaching Introductory Robotics Courses
  • Robotics Practical
  • A project-based strategy for teaching robotics using NI’s embedded-FPGA platform
  • Teaching a Core CS Concept through Robotics
  • Ms. Robot Will Be Teaching You: Robot Lecturers in Four Modes of Automated Remote Instruction
  • Robotic Competitions: Teaching Robotics and Real-Time Programming with LEGO Mindstorms
  • Visegrad Robotics Workshop-different ideas to teach and popularize robotics
  • LEGO® Mindstorms® EV3 Robotics Instructor Guide
  • MOKASIT: Multi Camera System for Robotics Monitoring and Teaching
  • Autonomous Robot Design and Build: Novel Hands-on Experience for Undergraduate Students
  • Semi-Autonomous Inspection Robot
  • Sumo Robot Competition
  • Engagement of students with Robotics-Competitions-like projects in a PBL Bsc Engineering course
  • Robo Camp K12 Inclusive Outreach Program: A three-step model of Effective Introducing Middle School Students to Computer Programming and Robotics
  • The Effectiveness of Robotics Competitions on Students’ Learning of Computer Science
  • Engaging with Mathematics: How mathematical art, robotics and other activities are used to engage students with university mathematics and promote
  • Design Elements of a Mobile Robotics Course Based on Student Feedback
  • Sixth-Grade Students’ Motivation and Development of Proportional Reasoning Skills While Completing Robotics Challenges
  • Student Learning of Computational Thinking in A Robotics Curriculum: Transferrable Skills and Relevant Factors
  • A Robotics-Focused Instructional Framework for Design-Based Research in Middle School Classrooms
  • Transforming a Middle and High School Robotics Curriculum
  • Geometric Algebra for Applications in Cybernetics: Image Processing, Neural Networks, Robotics and Integral Transforms
  • Experimenting and validating didactical activities in the third year of primary school enhanced by robotics technology

Construction

  • Bibliometric analysis on the status quo of robotics in construction
  • AtomMap: A Probabilistic Amorphous 3D Map Representation for Robotics and Surface Reconstruction
  • Robotic Design and Construction Culture: Ethnography in Osaka University’s Miyazaki Robotics Lab
  • Infrastructure Robotics: A Technology Enabler for Lunar In-Situ Resource Utilization, Habitat Construction and Maintenance
  • A Planar Robot Design And Construction With Maple
  • Robotics and Automations in Construction: Advanced Construction and Future Technology
  • Why robotics in mining
  • Examining Influences on the Evolution of Design Ideas in a First-Year Robotics Project
  • Mining Robotics
  • TIRAMISU: Technical survey, close-in-detection and disposal mine actions in Humanitarian Demining: challenges for Robotics Systems
  • Robotics for Sustainable Agriculture in Aquaponics
  • Design and Fabrication of Crop Analysis Agriculture Robot
  • Enhance Multi-Disciplinary Experience for Agriculture and Engineering Students with Agriculture Robotics Project
  • Work in progress: Robotics mapping of landmine and UXO contaminated areas
  • Robot Based Wireless Monitoring and Safety System for Underground Coal Mines using Zigbee Protocol: A Review
  • Minesweepers uses robotics’ awesomeness to raise awareness about landmines and explosive remnants of war
  • Intelligent Autonomous Farming Robot with Plant Disease Detection using Image Processing
  • Automatic Pick And Place Robot
  • Video Prompting to Teach Robotics and Coding to Students with Autism Spectrum Disorder

Medical and Healthcare

  • Bilateral Anesthesia Mumps After Robot-Assisted Hysterectomy Under General Anesthesia: Two Case Reports
  • Future Prospects of Artificial Intelligence in Robotics Software, A healthcare Perspective
  • Designing new mechanism in surgical robotics
  • Open-Source Research Platforms and System Integration in Modern Surgical Robotics
  • Soft Tissue Robotics–The Next Generation
  • CORVUS Full-Body Surgical Robotics Research Platform
  • OP: Sense, a rapid prototyping research platform for surgical robotics
  • Preoperative Planning Simulator with Haptic Feedback for Raven-II Surgical Robotics Platform
  • Origins of Surgical Robotics: From Space to the Operating Room
  • Accelerometer Based Wireless Gesture Controlled Robot for Medical Assistance using Arduino Lilypad
  • The preliminary results of a force feedback control for Sensorized Medical Robotics
  • Medical robotics Regulatory, ethical, and legal considerations for increasing levels of autonomy
  • Robotics in General Surgery
  • Evolution Of Minimally Invasive Surgery: Conventional Laparoscopy To Robotics
  • Robust trocar detection and localization during robot-assisted endoscopic surgery
  • How can we improve the Training of Laparoscopic Surgery thanks to the Knowledge in Robotics
  • Discussion on robot-assisted laparoscopic cystectomy and Ileal neobladder surgery preoperative care
  • Robotics in Neurosurgery: Evolution, Current Challenges, and Compromises
  • Hybrid Rendering Architecture for Realtime and Photorealistic Simulation of Robot-Assisted Surgery
  • Robotics, Image Guidance, and Computer-Assisted Surgery in Otology/Neurotology
  • Neuro-robotics model of visual delusions
  • Neuro-Robotics
  • Robotics in the Rehabilitation of Neurological Conditions
  • What if a Robot Could Help Me Care for My Parents
  • A Robot to Provide Support in Stigmatizing Patient-Caregiver Relationships
  • A New Skeleton Model and the Motion Rhythm Analysis for Human Shoulder Complex Oriented to Rehabilitation Robotics
  • Towards Rehabilitation Robotics: Off-The-Shelf BCI Control of Anthropomorphic Robotic Arms
  • Rehabilitation Robotics 2013
  • Combined Estimation of Friction and Patient Activity in Rehabilitation Robotics
  • Brain, Mind and Body: Motion Behaviour Planning, Learning and Control in view of Rehabilitation and Robotics
  • Reliable Robotics – Diagnostics
  • Robotics for Successful Ageing
  • Upper Extremity Robotics Exoskeleton: Application, Structure And Actuation

Defence and Military

  • Voice Guided Military Robot for Defence Application
  • Design and Control of Defense Robot Based On Virtual Reality
  • AI, Robotics and Cyber: How Much will They Change Warfare
  • Border Security Robot
  • Brain Controlled Robot for Indian Armed Force
  • Autonomous Military Robotics
  • Wireless Restrained Military Discoursed Robot
  • Bomb Detection And Defusion In Planes By Application Of Robotics
  • Impacts Of The Robotics Age On Naval Force Design, Effectiveness, And Acquisition

Space Robotics

  • Lego robotics teacher professional learning
  • New Planar Air-bearing Microgravity Simulator for Verification of Space Robotics Numerical Simulations and Control Algorithms
  • The Artemis Rover as an Example for Model Based Engineering in Space Robotics
  • Rearrangement planning using object-centric and robot-centric action spaces
  • Model-based Apprenticeship Learning for Robotics in High-dimensional Spaces
  • Emergent Roles, Collaboration and Computational Thinking in the Multi-Dimensional Problem Space of Robotics
  • Reaction Null Space of a multibody system with applications in robotics

Other Industries

  • Robotics in clothes manufacture
  • Recent Trends in Robotics and Computer Integrated Manufacturing: An Overview
  • Application Of Robotics In Dairy And Food Industries: A Review
  • Architecture for theatre robotics
  • Human-multi-robot team collaboration for efficient warehouse operation
  • A Robot-based Application for Physical Exercise Training
  • Application Of Robotics In Oil And Gas Refineries
  • Implementation of Robotics in Transmission Line Monitoring
  • Intelligent Wireless Fire Extinguishing Robot
  • Monitoring and Controlling of Fire Fighting Robot using IOT
  • Robotics An Emerging Technology in Dairy Industry
  • Robotics and Law: A Survey
  • Increasing ECE Student Excitement through an International Marine Robotics Competition
  • Application of Swarm Robotics Systems to Marine Environmental Monitoring

Future of Robotics / Trends

  • The future of Robotics Technology
  • Robotics/Automation Are Killing Jobs: A Roadmap for the Future is Needed
  • The next big thing(s) in robotics
  • Robotics in Indian Industry-Future Trends
  • The Future of Robot Rescue Simulation Workshop
  • Quantum Robotics: Primer on Current Science and Future Perspectives
  • Emergent Trends in Robotics and Intelligent Systems



Robotic surgery: an evolution in practice


Elizabeth Z Goh, Tariq Ali, Robotic surgery: an evolution in practice, Journal of Surgical Protocols and Research Methodologies , Volume 2022, Issue 1, January 2022, snac003, https://doi.org/10.1093/jsprm/snac003

Robotic surgery is a progression on the minimally invasive spectrum and represents an evolution in practice across numerous disciplines.

From its origins in the late 1980s, pioneering technologies like the ROBODOC for hip replacements and the PROBOT for urological procedures were early iterations of the idea that mechanical augmentations could at the very least be useful adjuncts in the complex task that is surgery [ 1 ]. In the 1990s, researchers from the United States (US) National Aeronautics and Space Administration and Stanford Research Institute investigated the potential of robotics for telepresence surgery [ 1 ]. Subsequent US Army funding attempted to devise a system to remotely operate on wounded soldiers via robotic equipment, in hopes of decreasing battlefield mortality [ 1 ]. Commercial development introduced Automated Endoscopic System for Optimal Positioning (AESOP) (Computer Motion, CA), a voice-controlled robotic arm with an endoscopic camera, to the civilian surgical community [ 1 ]. This was superseded in the 2000s by two comprehensive master–slave platforms: the da Vinci system (Intuitive Surgical, CA), an eponymous nod to Leonardo da Vinci’s fifteenth-century ‘mechanical knight’ automaton [ 2 ], and the Zeus system (Computer Motion, CA), which was designed for cardiac surgery [ 1 ]. A company merger established the former as today’s main platform [ 1 ].

The da Vinci system consists of a console from which the surgeon remotely controls arms connected to a robotic cart beside the patient [ 3 ]. A dual-camera endoscope mounted on one arm transmits images of the surgical field to the console, providing the surgeon with a magnified three-dimensional (3D) view [ 3 ]. In response, the surgeon manipulates instruments attached to the other arms via the console [ 3 ]. The assistant is positioned beside the patient to suction and retract at the surgical field [ 3 ].

Robotic surgery offers advantages over conventional endoscopic surgery in visualization, dexterity and ergonomics, while maintaining the peri-operative benefits of minimally invasive surgery [ 1 ]. The dual-camera system offers 3D views with depth perception, unlike conventional endoscopic views [ 1 ]. Precision features include articulated ‘EndoWrist’ instruments with increased degrees of freedom, removal of the fulcrum effect and motion scaling with tremor filtration [ 1 , 3 ]. Accordingly, objective advantages over laparoscopic techniques in terms of dexterity and muscle fatigue have been demonstrated [ 4 ]. The remote console also allows an ergonomic operating position while optimizing visualization and manoeuvrability [ 1 ]. Recent da Vinci iterations have included a reconfigured robotic arm design to improve access; faster docking to reduce operative time; fluorescence detection to identify structures and lesions of interest; robotic staplers to overcome difficulties in endoscopic stapler positioning by the assistant; and a dual console for training [ 5 , 6 ].

Feasibility, efficacy and cost considerations remain. Access concerns may be ameliorated with a pre-operative screening endoscopy, whereas operative time reduces with experience [ 3 ]. Ongoing technological advances and global uptake of robotic surgery are expected to improve efficacy through optimization of case selection and equipment guided by growing longitudinal data [ 3 ]. Purchase and maintenance costs are significant, but may be offset by high-volume use as well as savings from reduced length of stay and improved clinical outcomes [ 3 ].

Cardiothoracic surgery

The benefits of 3D vision and enhanced manoeuvrability provided by robotic surgery are crucial in the mediastinum, which contains many vital structures. Myriad applications exist for cardiac surgery, including cardiac revascularization and mitral valve repair, which were some of the earliest robotic surgeries performed [ 7 ]. Robotic thymectomy for thymomas is aided by fluorescence-guided detection of the tumour and adjacent structures [ 5 ]. Robotic lobectomy for lung cancer is also gaining traction, with Yang et al.’s 10-year cohort study reporting comparable oncologic and peri-operative outcomes to video-assisted and open approaches [ 8 ].

General surgery

Robotic surgery is feasible for numerous general surgical procedures, pending cost and operative time considerations, which will improve with technological advances. It has been used for rectal cancer resection, with the 2017 ROLARR trial finding open conversion rates comparable to laparoscopic techniques [ 9 ], and Lee et al.’s large cohort study finding resection quality comparable to transanal techniques [ 10 ]. Robotic surgery is also a safe and effective clinical alternative for common operations such as gastrectomy [ 11 ], Roux-en-Y gastric bypass [ 12 ] and thyroidectomy [ 13 ]; as well as rare procedures such as median arcuate ligament (MAL) release in MAL syndrome [ 14 ]. Recent da Vinci iterations incorporate a more flexible robotic arm configuration to simplify set-up and facilitate four-quadrant access for complex procedures, and specific single-site surgery instruments with similar peri-operative benefits to single-port laparoscopic surgery [ 6 ].

Head and neck surgery

The head and neck area is difficult to access due to its complex anatomy and confined space. Transoral robotic surgery (TORS) is an emerging option for oropharyngeal carcinoma, as it enables minimally invasive access to the oropharynx without large and mutilating open procedures such as a mandibulotomy and/or pharyngotomy, which cause significant functional and aesthetic deficits [ 15 ]. It also offers similar oncologic and functional outcomes to radiotherapy, pending further comparisons [ 16 , 17 ]. In addition, TORS is being increasingly used for cancers of unknown origin. Systematic reviews by Farooq et al. [ 18 ] and Fu et al. [ 19 ] found that tongue base mucosectomies and lingual tonsillectomies performed with TORS and transoral laser microsurgery (TLM) identified the primary tumour in over 70% of cases with negative conventional diagnostic findings. Other indications for TORS include laryngeal tumours [ 20 ] and parapharyngeal space tumours [ 21 ]; salvage surgery [ 22 ]; free flap reconstruction [ 23 ] and sleep apnoea surgery [ 24 ].

Orthopaedic surgery

Various robotic systems for orthopaedic procedures exist. Haptic systems, which provide intra-operative feedback based on pre-operative data for accurate resection and reconstruction, are commonly used [ 25 ]. A common application is robotic-arm-assisted total knee arthroplasty, which has been found to result in decreased iatrogenic trauma to periarticular soft tissue and bone, increased accuracy of component positioning and improved peri-operative outcomes compared to conventional jig-based techniques [ 26 , 27 ]. Cost-effectiveness analysis of robotic arthroplasty is also in progress via the Robotic Arthroplasty: a Clinical and cost Effectiveness Randomised controlled (RACER) trial [ 28 ]. Still under investigation for clinical use are passive systems, such as the da Vinci platform for hip and shoulder arthroscopy, and active systems, which can independently perform procedures without surgeon input [ 29 ].

Urology and gynaecology

Robotic surgery is particularly suited to surgical access within the anatomically constrained pelvic space. Robotic-assisted radical prostatectomy is one of the most common robotic procedures. It is a widely accepted management option for prostate cancer, with Tewari et al.’s landmark meta-analysis reporting comparable oncologic and peri-operative outcomes to laparoscopic and open techniques [ 30 ]. Robotic partial nephrectomy is an emerging indication, with Bravi et al.’s prospective multicentre cohort study reporting better peri-operative outcomes than laparoscopic and open approaches for anatomically low-risk renal tumours [ 31 ]. Robotic surgery provides improved outcomes for complex benign hysterectomy, where superior post-operative quality-of-life may offset the increased operating time, and endometrial cancer staging, where obesity and other comorbidities are common in the patient population [ 32 ]. There is emerging evidence for its use in cervical and ovarian cancer [ 33 ], myomectomy and sacrocolpopexy [ 32 ].

Robotic surgery is an emerging modality across numerous surgical specialties. It offers advantages over conventional endoscopic surgery in visualization, dexterity and ergonomics, while maintaining the benefits of minimally invasive surgery. Feasibility, efficacy and cost concerns may be ameliorated with technological advances and increased uptake. Robust longitudinal comparisons with established treatment modalities are imperative to support this evolution in practice.

Conflict of interest statement

None declared.

Funding

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

References

1. Lanfranco AR, Castellanos AE, Desai JP, Meyers WC. Robotic surgery: a current perspective. Ann Surg 2004;239:14.
2. Moran ME. The da Vinci robot. J Endourol 2006;20:986-90.
3. Weinstein GS, O'Malley BW Jr, Desai SC, Quon H. Transoral robotic surgery: does the ends justify the means? Curr Opin Otolaryngol Head Neck Surg 2009;17:126-31.
4. Kuo L-J, Ngu JC-Y, Lin Y-K, Chen C-C, Tang Y-H. A pilot study comparing ergonomics in laparoscopy and robotics: beyond anecdotes, and subjective claims. J Surg Case Rep 2020;2020:rjaa005.
5. Zirafa CC, Romano G, Key TH, Davini F, Melfi F. The evolution of robotic thoracic surgery. Ann Cardiothorac Surg 2019;8:210.
6. Hagen ME, Tauxe WM, Morel P. Robotic applications in advancing general surgery. In: Technological Advances in Surgery, Trauma and Critical Care. New York: Springer, 2015, 377-90.
7. Doulamis IP, Spartalis E, Machairas N, Schizas D, Patsouras D, Spartalis M, et al. The role of robotics in cardiac surgery: a systematic review. J Robot Surg 2019;13:41-52.
8. Yang H-X, Woo KM, Sima CS, Bains MS, Adusumilli PS, Huang J, et al. Long-term survival based on the surgical approach to lobectomy for clinical stage I non-small cell lung cancer: comparison of robotic, video assisted thoracic surgery, and thoracotomy lobectomy. Ann Surg 2017;265:431.
9. Jayne D, Pigazzi A, Marshall H, Croft J, Corrigan N, Copeland J, et al. Effect of robotic-assisted vs conventional laparoscopic surgery on risk of conversion to open laparotomy among patients undergoing resection for rectal cancer: the ROLARR randomized clinical trial. JAMA 2017;318:1569-80.
10. Lee L, de Lacy B, Ruiz MG, Liberman AS, Albert MR, Monson JR, et al. A multicenter matched comparison of transanal and robotic total mesorectal excision for mid and low-rectal adenocarcinoma. Ann Surg 2019;270:1110-6.
11. Ojima T, Nakamura M, Hayata K, Kitadani J, Katsuda M, Takeuchi A, et al. Short-term outcomes of robotic gastrectomy vs laparoscopic gastrectomy for patients with gastric cancer: a randomized clinical trial. JAMA Surg 2021;156:954-63.
12. El Chaar M, King K, Salem JF, Arishi A, Galvez A, Stoltzfus J. Robotic surgery results in better outcomes following Roux-en-Y gastric bypass: Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program analysis for the years 2015-2018. Surg Obes Relat Dis 2021;17:694-700.
13. Chen Y-H, Kim H-Y, Anuwong A, Huang T-S, Duh Q-Y. Transoral robotic thyroidectomy versus transoral endoscopic thyroidectomy: a propensity-score-matched analysis of surgical outcomes. Surg Endosc 2021;35:6179-89.
14. Bustos R, Papamichail M, Mangano A, Valle V, Giulianotti PC. Robotic approach to treat median arcuate ligament syndrome: a case report. J Surg Case Rep 2020;2020:rjaa088.
15. Golusiński W, Golusińska-Kardach E. Current role of surgery in the management of oropharyngeal cancer. Front Oncol 2019;9:388.
16. De Virgilio A, Costantino A, Mercante G, Pellini R, Ferreli F, Malvezzi L, et al. Transoral robotic surgery and intensity-modulated radiotherapy in the treatment of the oropharyngeal carcinoma: a systematic review and meta-analysis. Eur Arch Otorhinolaryngol 2021;278:1321-35.
17. Nichols AC, Theurer J, Prisman E, Read N, Berthelet E, Tran E, et al. Radiotherapy versus transoral robotic surgery and neck dissection for oropharyngeal squamous cell carcinoma (ORATOR): an open-label, phase 2, randomised trial. Lancet Oncol 2019;20:1349-59.
18. Farooq S, Khandavilli S, Dretzke J, Moore D, Nankivell PC, Sharma N, et al. Transoral tongue base mucosectomy for the identification of the primary site in the work-up of cancers of unknown origin: systematic review and meta-analysis. Oral Oncol 2019;91:97-106.
19. Fu TS, Foreman A, Goldstein DP, de Almeida JR. The role of transoral robotic surgery, transoral laser microsurgery, and lingual tonsillectomy in the identification of head and neck squamous cell carcinoma of unknown primary origin: a systematic review. J Otolaryngol Head Neck Surg 2016;45:1-10.
20. Gorphe P. A contemporary review of evidence for transoral robotic surgery in laryngeal cancer. Front Oncol 2018;8:121.
21. De Virgilio A, Costantino A, Mercante G, Di Maio P, Iocca O, Spriano G. Trans-oral robotic surgery in the management of parapharyngeal space tumors: a systematic review. Oral Oncol 2020;103:104581.
22. Gazda P, Gauche C, Chaltiel L, Chabrillac E, Vairel B, De Bonnecaze G, et al. Functional and oncological outcomes of salvage transoral robotic surgery: a comparative study. Eur Arch Otorhinolaryngol 2021;1-10.
23. Chalmers R, Schlabe J, Yeung E, Kerawala C, Cascarini L, Paleri V. Robot-assisted reconstruction in head and neck surgical oncology: the evolving role of the reconstructive microsurgeon. ORL J Otorhinolaryngol Relat Spec 2018;80:178-85.
24. Meccariello G, Cammaroto G, Montevecchi F, Hoff PT, Spector ME, Negm H, et al. Transoral robotic surgery for the management of obstructive sleep apnea: a systematic review and meta-analysis. Eur Arch Otorhinolaryngol 2017;274:647-53.
25. Chen AF, Kazarian GS, Jessop GW, Makhdom A. Robotic technology in orthopaedic surgery. J Bone Joint Surg 2018;100:1984-92.
26. Kayani B, Konan S, Tahmassebi J, Pietrzak J, Haddad F. Robotic-arm assisted total knee arthroplasty is associated with improved early functional recovery and reduced time to hospital discharge compared with conventional jig-based total knee arthroplasty: a prospective cohort study. Bone Joint J 2018;100:930-7.
27. Kayani B, Tahmassebi J, Ayuob A, Konan S, Oussedik S, Haddad FS. A prospective randomized controlled trial comparing the systemic inflammatory response in conventional jig-based total knee arthroplasty versus robotic-arm assisted total knee arthroplasty. Bone Joint J 2021;103:113-22.
28. Parsons H, Smith T, Rees S, Fox J, Grant N, Hutchinson C, et al. Robotic Arthroplasty: a Clinical and cost Effectiveness Randomised controlled trial (RACER). Southampton: National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre (NETSCC), 2020. https://www.journalslibrary.nihr.ac.uk/programmes/hta/NIHR128768/.
29. Karthik K, Colegate-Stone T, Dasgupta P, Tavakkolizadeh A, Sinha J. Robotic surgery in trauma and orthopaedics: a systematic review. Bone Joint J 2015;97:292-9.
30. Tewari A, Sooriakumaran P, Bloch DA, Seshadri-Kreaden U, Hebert AE, Wiklund P. Positive surgical margin and perioperative complication rates of primary surgical treatments for prostate cancer: a systematic review and meta-analysis comparing retropubic, laparoscopic, and robotic prostatectomy. Eur Urol 2012;62:1-15.
31. Bravi CA, Larcher A, Capitanio U, Mari A, Antonelli A, Artibani W, et al. Perioperative outcomes of open, laparoscopic, and robotic partial nephrectomy: a prospective multicenter observational study (The RECORd 2 Project). Eur Urol Focus 2021;7:390-6.
32. Varghese A, Doglioli M, Fader AN. Updates and controversies of robotic-assisted surgery in gynecologic surgery. Clin Obstet Gynecol 2019;62:733.
33. Zanagnolo V, Garbi A, Achilarre MT, Minig L. Robot-assisted surgery in gynecologic cancers. J Minim Invasive Gynecol 2017;24:379-96.


Cureus, v.15(5); 2023 May, PMC10287569

Artificial Intelligence With Robotics in Healthcare: A Narrative Review of Its Viability in India

1 Medical School, Jawaharlal Nehru Medical College, Datta Meghe Institute of Medical Sciences, Wardha, IND

Ashish Anjankar

2 Biochemistry, Jawaharlal Nehru Medical College, Datta Meghe Institute of Medical Sciences, Wardha, IND

This short review focuses on the emerging role of artificial intelligence (AI) with robotics in the healthcare sector. It may have particular utility for India, which has limited access to healthcare providers for a large and growing population, and limited health resources in rural areas. AI combines enormous amounts of data with fast, complex algorithms, allowing the software to learn patterns in the data quickly. It has the potential to affect most facets of the health system, ranging from discovery to prediction and prevention. The use of AI with robotics in the healthcare sector has risen remarkably in the past few years. Functions such as assisting with surgery, streamlining hospital logistics, and conducting routine checkups are some of the tasks that may be managed with great efficiency using AI in urban and rural hospitals across the country. AI in the healthcare sector is advantageous in ensuring dedicated patient care, safer working conditions in which healthcare providers are at lower risk of infection, and well-organized operational tasks. As the healthcare segment is globally recognized as one of the most dynamic and largest industries, it tends to expedite development through modernization and original approaches. The future of this lucrative industry points toward a revolution aiming to create intelligent machines that work and respond like human beings. Future perspectives for AI and robotics in the healthcare sector encompass the care of elderly people, drug discovery, diagnosis of deadly diseases, a boost to clinical trials, remote patient monitoring, prediction of epidemic outbreaks, and more. However, the viability of using robotics in healthcare may be questioned in terms of expenditure, skilled workforce, and the conventional mindset of people. The biggest challenge is replicating these technologies in smaller towns and rural areas so that these facilities reach the larger segment of the country's population. This review aims to examine the adaptability and viability of these new technologies in the Indian scenario and identify the major challenges.

Introduction and background

The status of the healthcare sector in India is far from providing universal healthcare coverage to the entire population, and India lags behind many developing and some least-developed countries in terms of health indicators. In addition, there are large disparities among states in achieving the desired health outcomes, as well as in establishing a sound information system. The adoption of the National Health Policy of India in 2017 has largely facilitated bridging the gap among various stakeholders of national healthcare through the digital corridor. The policy recognizes the significant role of technology in healthcare delivery and advocates setting up a National Digital Health Authority (NDHA) to regulate, develop and deploy digital health within the field of care. The National Institution for Transforming India (NITI) Aayog, after being authorized by the Government of India to draft a National Strategy on Artificial Intelligence (AI), in 2018 identified five sectors that would benefit the most from AI, of which healthcare is one [ 1 ].

The application of AI in healthcare may be classified into four broad categories: descriptive, diagnostic, predictive, and prescriptive. The gap created by a lack of skilled healthcare professionals can only be bridged by enhancing the use of AI in the health sector. Common health issues can easily be diagnosed with the help of AI, reducing both the workload of expert health professionals and the cost of treatment in India [ 2 ]. It is envisaged that by the year 2035, AI could enhance the economy of India by adding 957 billion USD to it (Accenture, 2017) [ 2 ]. AI may also prove to be a medium for reducing economic disparity in the country. A report of the TCS global survey (TCS, 2017) projects that jobs visibly displaced by AI could be offset by new jobs created in upcoming AI-integrated healthcare projects [ 2 ].

As a matter of fact, the healthcare setup in India is imperfect. It is deficient in the numbers of doctors, nurses, medical technicians, and healthcare facilities needed to serve the community. The number of qualified doctors is insufficient for the rapidly growing needs of the Indian healthcare system, and these doctors are concentrated in urban areas, leaving a huge gap in medical personnel in rural areas compared with urban settings. Approximately 74% of graduate doctors in India work in urban areas, which hold only about one-fourth of the population [ 3 ]. Because of this maldistribution of resources, each doctor serves about 19,000 people [ 4 ]. India will need 2.3 million doctors by 2030 to reach the minimum doctor-patient ratio of 1:1000 recommended by the World Health Organization. Early ideas from a few dozen healthcare startups have the potential to boost Indian healthcare systems in the future and to reduce the burden on the healthcare system.

Recently, the coronavirus disease 2019 (COVID-19) pandemic posed a great challenge to the healthcare sector, creating huge demand for equipment, medicines, AI-based applications, and robotics. Many reputed hospitals all over the world switched to AI and robotic procedures during the COVID-19 pandemic for functions such as disinfection and screening of patients and employees at the entry point. Measures such as remotely supervised surgeries, distance education, telemedicine, and video conferencing with doctors were used during the pandemic. The experience gained during the pandemic has substantially enhanced the adaptability for the use of robotics in the healthcare sector [ 5 ].

AI's major forms of relevance in healthcare are as follows:

1. Machine learning: The use and development of computer systems that are able to learn and adapt without explicit instructions, analyzing and drawing inferences from patterns in data.
2. Natural language processing: A specialized branch of AI focused on the interpretation and manipulation of human-generated written or spoken data.
3. Robotic process automation: An automation technology that uses software to mimic the back-office tasks of human workers, such as extracting data, filling in forms, and moving files.
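As a toy illustration of the third category, the sketch below (not from this review; the file layout and field names are hypothetical examples) mimics a clerical back-office task in software: it extracts one field from each incoming form and files the form into a per-patient folder.

```python
# Minimal robotic-process-automation-style sketch: read each CSV "form"
# in an inbox, extract its patient_id field, and move the file into a
# per-patient folder under an archive directory. Illustrative only.
import csv
import shutil
from pathlib import Path

def file_patient_forms(inbox: Path, archive: Path) -> int:
    """Move every *.csv form from `inbox` into archive/<patient_id>/,
    returning how many forms were filed."""
    moved = 0
    for form in inbox.glob("*.csv"):
        with form.open(newline="") as fh:
            first_row = next(csv.DictReader(fh))  # read the form's fields
        dest = archive / first_row["patient_id"]
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(form), dest / form.name)
        moved += 1
    return moved
```

Real RPA products wrap the same pattern (watch a source, extract fields, route the record) in configurable tooling rather than hand-written scripts.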

In addition, AI supports the healthcare system in diagnosis and treatment applications, patient engagement and adherence, and administrative applications [ 6 ]. AI not only simplifies the work of doctors, nurses, and other healthcare workers but also saves an ample amount of time. The adoption of digital solutions for the prevention, diagnosis, and cure of various ailments is thus a wise route for India in pursuing the aim of providing health for all.

Research methodology

The present study was conducted between April and June 2022. Databases such as PubMed and Google Scholar were mainly used to search the literature; Scopus and Web of Science were excluded. Most of the research publications considered were from 2013 to 2022. Research papers on the use of robotics and artificial intelligence in healthcare were studied thoroughly, with special emphasis on viability in the Indian scenario. The search terms used included artificial intelligence, robotics, healthcare, and India. Exploring the required information was difficult, as meagre data are available on the use of robotics in the Indian healthcare sector, which requires greater attention from researchers.

Functioning of robotics in healthcare

The functioning of robotics in healthcare builds on AI applications such as machine learning and deep learning. AI combines vast amounts of data with fast, intelligent algorithms, permitting the software to learn patterns in the data quickly. AI execution is essentially program-oriented: the designed program encodes the basic information about how the system is to work. All the data are fed into web platforms such as the cloud, which can store massive amounts of data and information for use over the internet. There are immense possibilities for development in the healthcare sector through the use of AI in the future [ 7 ].

The main objective of AI is to solve problems by gathering and analyzing the information provided by the program and its sensors. Another goal is to learn to respond in uncommon situations by trying alternative approaches and remembering the successful alternative for use in similar situations. AI works toward proficient arrangements that can learn, think, and suggest the best possible options to users, aiming to achieve intelligence in machines so that they can perform much as human beings do [ 8 ].
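The "remember the successful alternative" idea described above can be sketched as a minimal memoizing agent. This is an illustrative toy under assumed names (`make_agent`, a `succeeds` callback), not a method from the article:

```python
# Toy agent: in a new situation it tries its alternatives in order until
# one succeeds, then caches that action so a similar situation is handled
# immediately next time -- a crude stand-in for "learning from experience".
def make_agent(alternatives):
    memory = {}  # situation -> action that previously succeeded

    def act(situation, succeeds):
        # Reuse a remembered success without re-trying alternatives.
        if situation in memory:
            return memory[situation]
        # Otherwise explore: try each alternative until one works.
        for action in alternatives:
            if succeeds(situation, action):
                memory[situation] = action
                return action
        return None  # no alternative worked

    return act
```

For example, an agent built with `make_agent(["retry", "reroute", "wait"])` that once found `"reroute"` successful for a blocked path will return `"reroute"` directly the next time that situation recurs.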

Artificial intelligence has the potential to affect most facets of the health system, from discovery to prediction and prevention. Although adoption of new technologies lags well behind their appearance, all healthcare professionals need to be trained uniformly to adopt technologies such as robotic process automation, natural language processing, and machine learning [9]. The interplay between artificial intelligence, machine learning, and deep learning ultimately underpins the working of robotics in healthcare, as shown in Figure 1.

[Figure 1: The interplay between artificial intelligence, machine learning, and deep learning underlying robotics in healthcare. Image not reproduced here.]

Use of robotics in healthcare

Assistance in Surgery

The application of robotics in surgery was first imagined in 1967, but it remained a dream for about 30 years until the United States defense department set up research organizations that gradually developed the first surgical robot designed to conduct different types of tasks. Initially, these robots were used on battlefields during wars [10].

Today, surgery is the most rapidly growing field for the application of robotics in healthcare. It aims to enhance human capabilities and overcome human limitations in surgery [11]. In India, the first urologic robot, the da Vinci S, was installed at the All India Institute of Medical Sciences, New Delhi, in 2006. This was followed by an exceptional expansion of robotic surgery in the country: by July 2019 there were 66 centers and more than 500 skilled robotic surgeons in India, who had successfully performed more than 12,800 robot-assisted surgeries [12]. This rapid expansion suggests that the future of robotic surgery in India is very bright. The introduction of the da Vinci Surgical System is one of the biggest inventions in surgery [8]. High-definition computer vision gives surgeons detailed information about the inner condition of the patient, which enhances their performance during surgery [13].

For many years, engineers and medical researchers have been trying to find ways in which robotics can be used in surgery, given advantages such as mechanical accuracy, endurance, and the ability to work in unsafe surroundings [14]. In the past few years, robot-assisted surgeries have played a significant role in boosting the Indian healthcare system, and reports show that hundreds of robotic surgeons are positioned at different hospitals in India. Surgeries performed with robotic assistance are considered better than conventional methods because of their precision, shorter recovery periods, and lesser pain and blood loss. They are also preferred because they save traveling and boarding costs [15].

Robotic surgery has successfully addressed the limitations of laparoscopic surgery, a big leap toward minimal-access surgery. If, as predicted, almost all surgeries will one day be performed with robotic assistance, a realistic training approach will be required to build surgeons' skills, reshaping trainees' learning curve through exposure to new methods like robotic surgical simulators and robotic telementoring [16]. The role of robotics in surgeon training is becoming increasingly crucial: virtual reality simulators, for example, provide realistic situations and genuine training experience, and practicing procedures becomes easy within the virtual environment [17].

Surgical robots are widely used in over a million surgical procedures across various departments of the healthcare sector. AI gives the surgeon timely warnings and appropriate suggestions during the procedure, and deep learning on accumulated data helps identify the surgical approach best suited to the patient [18]. Robotics also helps bring experts, who are often concentrated in big cities, to patients residing in small towns and rural areas.

Support to Healthcare Workers

In addition to assisting in the operating room, robots are useful in clinics and outpatient departments to enhance patient care. For example, robots were used to screen suspected patients at the entrances of health facilities during the COVID-19 pandemic. Automation and robots are also used in research laboratories to carry out many manual and repetitive tasks so that scientists can focus on more deliberate work and move faster towards discoveries. Remedial treatment after strokes, paralysis, traumatic brain injuries, and the like can be supported by therapeutic robots, which can monitor patients as they perform prescribed exercises and measure ranges of motion in various positions more accurately than the human eye. Social robots can also interact with patients and encourage them [19].

Logistic Arrangements

Medical robots efficiently streamline workflows and reduce risk, which makes them feasible for many purposes. For example, robots can clean and organize patients' rooms autonomously, lowering the risk of interpersonal contact in infectious disease wards; human support robots (HSRs) are used for such cleaning tasks [20]. Medicine-identifier software enables robots to distribute medicines to patients in hospitals. With this kind of support, hospital staff can devote more time to direct patient care.

Advantages of using robotics in healthcare

Exclusive Patient Care

Socially assistive robots (SARs) are the result of developments in AI combined with physically assistive technologies. SARs are emotionally intelligent machines that enable exclusive patient care: they can communicate with patients across a communicative range that lets them respond emotionally, through interaction, communication, companionship, and emotional attachment [12]. Judicious use of robotics in the healthcare system ensures excellent patient care, reliable processes in medical surroundings, and a secure atmosphere for patients and medical professionals. The chances of human error and negligence are slim with the use of automated robots in healthcare. The invention and continuous development of SARs is redefining the health and social care sector [12].

Protected Working Conditions

The roles of nurses, ward boys, receptionists, and other healthcare workers can be performed by robots: (i) receptionist robots, (ii) medical servers, and (iii) nurse robots, among others, are capable of performing these roles very efficiently [15]. Autonomous mobile robots (AMRs) are used in many health facilities to distribute medical supplies and linen, collect data and information about patients, and serve food and water to patients, keeping medical professionals safe from pathogen exposure and thus preventing the spread of infection. These robots were therefore used vigorously during the recent COVID-19 pandemic. According to Podpora et al., hospitality robots like Wegree and Pepper, the latter developed by SoftBank Robotics in Japan, were the most used robots during the pandemic, as they helped control the rate at which the disease spread [15]. During the pandemic, excellent work was done on preparedness, screening, contact tracing, disinfection, and enforcing quarantine and social distancing; the Aarogya Setu app, developed by the National Informatics Centre and the Information Technology Ministry, proved to be a boon in managing COVID-19. Social robots are also used for strenuous work like lifting heavy beds or transferring patients, reducing the physical strain on healthcare workers.

Organized Operational Tasks

Autonomous mobile robots (AMRs) standardize routine tasks, reduce the physical burden on health workers, and ensure that more precise procedures are followed. These robots can address staff shortages, keep track of records, and place orders on time, ensuring that medicines and other equipment are available as and when needed. Automated robots can quickly clean and sanitize rooms so they are ready for incoming patients on time, freeing health professionals for other important patient-related jobs. Robots can also be used efficiently for diagnosing different diseases using artificial intelligence: radiologist robots equipped with computational imaging capabilities make diagnoses with the help of AI through deep learning. These robots can also carry out imaging procedures such as MRIs and X-rays, which is a great advantage for healthcare workers, as it protects them from the harmful radiation used in procedures such as X-rays [15].

Future perspective

The healthcare segment is globally recognized as one of the biggest and most dynamic industries. It aims to expedite development through modernization and original approaches. Previously this sector relied on manual processes, which required more time and were prone to human error. The latest advances in machine learning have brought a revolution to the health sector, aiming to create intelligent machines that work and respond like people [8]. Although the application of AI and robotics in healthcare is still in its infancy, the future seems very bright in terms of acceptability and viability [21]. The fields most amenable to fast adoption of AI and robotics in healthcare are as follows:

Care for Elderly People

It is predicted that the global population of elderly people will double by 2050, and socially assistive robot technology may emerge as a solution to the growing demand for care. The major factors that increase loneliness among older people living alone are house ownership, marital status, poor health, and a lack of people to support them. A study by Abdi et al. revealed that social robots have a crucial role in the healthcare of elderly people [22]. Although many participants in the study were hesitant to accept robots as caretakers, they were equally apprehensive about having humans as caretakers, and many accepted that humanoid robots programmed with positive human qualities can be more reliable than humans. The role of robots in caring for elderly people could prove to be a milestone in the present scenario, where the number of elderly people in India is increasing due to improved health services and there is an apparent gap between the demand for and supply of trained professionals in hospitals [22].

Mental commit robots are being developed for the therapy of elderly patients in hospitals. These robots can provide psychological, physiological, and social stimulation to human beings through physical contact, and the mood of elderly people was observed to improve with this input [23]. Several studies are underway to explore ways of expanding the capabilities of social robots to improve their communication with human beings. The physical appearance of a robot largely influences its acceptability to elderly people. Positive results have been seen in older adults suffering from dementia when they were provided with companion animal robots: studies demonstrate that companion animal robots of appropriate size, weight, and shape can provide cognitive stimulation to elderly people with dementia [24]. Animal robots like the seal PARO, developed by Japan's National Institute of Advanced Industrial Science and Technology (AIST), have proven quite advantageous for improving the cognitive abilities and sleeping patterns of older adults [25].

Drug Discovery

One of the major areas where AI can prove to be a boon is drug discovery. It takes about 14 years and an average of 2.6 billion dollars for a new drug to reach the market through conventional procedures, whereas the same can be achieved using AI in much less time. In 2015, an outbreak of the Ebola virus in West Africa and some European countries was contained with the help of AI, which identified an appropriate drug in very little time and prevented the outbreak from becoming a global pandemic [8]. In addition, it has been shown that clinical trials of newly discovered drugs take far less time when AI is used [8]. AI can also be used to distinguish cardiotoxic from non-cardiotoxic drugs in the anticancer group. It is capable of identifying probable antibiotics from a list of thousands of molecules and can thus serve as a medium for discovering new antibiotics. Such algorithms are also being used to identify molecules with the potential to combat antimicrobial resistance, and studies are underway to explore the role of AI in fighting fast-growing antibiotic resistance [26].
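The screening idea described above, scoring thousands of candidate molecules with a trained model and keeping only the most promising, can be sketched in miniature. Everything below is hypothetical: the descriptor names, the weights, and the toy scoring function merely stand in for a real trained predictor.

```python
# A toy sketch of virtual screening: rank candidate molecules by a
# predicted-activity score and keep those above a cutoff. The scoring
# function is a stand-in for a trained model; all names are invented.

def screen(candidates, score_fn, cutoff=0.5, top_k=3):
    """Return up to top_k candidates whose predicted score exceeds cutoff."""
    scored = [(name, score_fn(features)) for name, features in candidates]
    hits = [(name, s) for name, s in scored if s > cutoff]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical "model": a weighted sum of two molecular descriptors.
def toy_model(features):
    solubility, binding_affinity = features
    return 0.4 * solubility + 0.6 * binding_affinity

library = [
    ("mol_A", (0.9, 0.8)),   # descriptor values scaled to [0, 1]
    ("mol_B", (0.2, 0.3)),
    ("mol_C", (0.7, 0.9)),
]
print(screen(library, toy_model))
```

In a real pipeline, `toy_model` would be replaced by a model trained on known actives and inactives, but the ranking-and-cutoff logic stays the same.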

AI in Diagnosis

Reports say that about 80,000 people die every year due to wrong diagnoses of illness, and overloads of cases with partial details have led to severe mistakes in the past. As AI is resistant to these errors, it can predict and diagnose diseases at a faster pace [27]. The use of AI is being extensively explored in cancer detection, where early detection and prediction are crucial, and many companies are using AI-supported tools for diagnosing and detecting different kinds of cancer [28].
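As a toy illustration of the kind of pattern matching such diagnostic tools rely on (and emphatically not a clinical method), a nearest-centroid classifier assigns a patient's feature vector to the class whose training average is closest. The feature names and values below are invented for illustration.

```python
# Illustrative sketch only, not a clinical tool: a nearest-centroid
# classifier labels a feature vector by the closest class average.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(labelled):
    """labelled: dict mapping label -> list of feature vectors."""
    return {label: centroid(rows) for label, rows in labelled.items()}

def predict(model, features):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical training data: [resting heart rate, systolic BP]
model = train({
    "low_risk":  [[62, 115], [68, 118], [64, 120]],
    "high_risk": [[95, 150], [102, 160], [98, 155]],
})
print(predict(model, [97, 152]))  # a vector near the high-risk centroid
```

Production systems use far richer models (deep networks over images or records), but the core idea of mapping patient features to learned patterns is the same.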

Boost in Clinical Trials

Previously, the process of clinical trials was very slow and success rates were poor: before 2000, only 13.8% of candidate drugs completed all three stages of clinical trials [29]. The adoption of AI has reduced cycle time and positively affected production cost and outcomes. AI helps ensure the continuous flow of clinical trial data and supports its coding, storage, and management. Patient details saved in the computer can be analyzed, and the lessons learned can be applied to future trials, saving time and cost [30]. AI also works efficiently to observe patients consistently and share the data across different computers, and its self-learning capacity enhances trial accuracy and foresees the chances of dropouts [31].

Consultation in Digital Mode

The idea of digital consultation is to reduce hospital visits for minor ailments, which can be cured easily at home under the guidance of a medical professional. Several apps use AI to collect information from patients via a questionnaire and then facilitate a consultation with a medical practitioner [32]. In the future, digital consultation through AI may become the most viable and efficient way to treat common diseases; it would also help people find good doctors near their homes with the help of AI and internet hospitals.

Remote Patient Monitoring

The concept of remote patient monitoring has evolved very fast with the application of AI sensors and advanced predictive analysis. Apart from personal sensors and devices for monitoring health, such as glucometers and blood pressure monitors, more advanced systems are now emerging, including smart implants and smart prosthetics used in post-operative rehabilitation to avoid complications after surgery. Smart implants help monitor aspects of the patient's condition, such as movement and muscle strength, which are important parameters for assessing the rate of recovery. Sensors implanted within muscles or nerves are quite helpful in providing consistent information about the patient's healing process.

In recent times, many new forms of patient monitoring have emerged, such as digital pills, nanorobots, and smart fabrics. These monitoring tools are used to ensure regular medication, wound management, and management of cardiac diseases by keeping track of patients' emotional, physiological, and mental status [33]. It is estimated that by 2025, AI-based monitoring tools and other wearables will be adopted by 50% of the population in developed countries [34]. Initial data and details at the time of discharge are collected through cell phones with Wi-Fi or Bluetooth, stored in the cloud, and monitored continuously to avoid complications and hospital readmissions; the review is shared with the patient, along with recommendations, over the internet [35].
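The monitoring-and-alerting loop described above can be reduced to a simple rule: compare each incoming reading against a safe range and raise an alert when it falls outside. The ranges below are illustrative placeholders, not clinical reference values.

```python
# A minimal sketch of a remote-monitoring alert rule: flag readings
# that leave a safe range for a given vital sign. The ranges are
# illustrative placeholders, not clinical reference values.

SAFE_RANGES = {              # hypothetical per-sign (low, high) bounds
    "heart_rate": (50, 110),
    "spo2": (92, 100),
}

def check_reading(sign, value):
    """Return an alert message if the reading is out of range, else None."""
    low, high = SAFE_RANGES[sign]
    if value < low or value > high:
        return f"ALERT: {sign}={value} outside [{low}, {high}]"
    return None

# Simulated stream of (sign, value) readings from a patient's devices.
stream = [("heart_rate", 72), ("spo2", 88), ("heart_rate", 130)]
alerts = [msg for sign, value in stream
          if (msg := check_reading(sign, value))]
print(alerts)
```

Real systems add trend analysis and predictive models on top, but a threshold pass like this is typically the first line of defense.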

AI in Nanotechnology Research

Recent advances have been made in medicine using nanotechnology. AI tools can be merged with nanotechnology to understand the various events happening in nanosystems, which can help in designing and developing drugs based on those nanosystems [36]. The field of nanomedicine has grown and continues to develop: numerous approaches have been tested successfully to deliver curative agents in predetermined doses, an advancement that has greatly helped in achieving efficient results in combination therapy [37].

Prediction of an Epidemic Outbreak

One of the most remarkable capabilities of AI in healthcare is forecasting the outbreak of an epidemic. Although it cannot control or mitigate the outbreak, it can warn us beforehand so that preparations can be made in time. It gathers, analyzes, and monitors incoming data through machine learning and social networking sites to locate the epicenter of the epidemic. The calculation is done by an algorithm built on data from news bulletins in all languages, airline ticketing, and reports of plant and animal diseases [38]. On 30 December 2019, the AI engine BlueDot detected clusters of unusual pneumonia cases around the markets of Wuhan, China, and alerted the government and other stakeholders; this was the first warning signal of the COVID-19 pandemic [39]. Figure 2 depicts the various future perspectives of AI and robotics in the field of healthcare.

[Figure 2: Future perspectives of AI and robotics in healthcare. Image not reproduced here.]
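The outbreak-warning idea, flagging an anomalous surge against a recent baseline, can be sketched with a simple statistical rule. Real systems such as the one cited combine many data streams; the window size and threshold here are arbitrary assumptions.

```python
# An illustrative outbreak-warning rule: flag the latest daily case
# count if it exceeds the trailing-window mean by more than `factor`
# standard deviations. Window and factor are hypothetical choices.

from statistics import mean, stdev

def outbreak_signal(counts, window=7, factor=3.0):
    """Return True if the last count is anomalously high vs the window."""
    if len(counts) <= window:
        return False              # not enough history for a baseline
    baseline = counts[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return counts[-1] > mu + factor * max(sigma, 1.0)  # floor the noise

quiet = [3, 4, 2, 5, 3, 4, 3, 4]    # ordinary day-to-day variation
spike = [3, 4, 2, 5, 3, 4, 3, 40]   # a sudden surge on the last day
print(outbreak_signal(quiet), outbreak_signal(spike))
```

A deployed system would run such a check per region and feed flagged locations to analysts rather than acting on a single noisy signal.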

Barriers to using AI in India

Besides the innumerable benefits of employing robotics in health facilities, there are chances of errors and mechanical failures too, and one mechanical breakdown can cost a precious human life. There are, in fact, several disadvantages of robots in the healthcare sector, especially in the Indian scenario.

High cost is the major limitation of introducing robotics in healthcare. In India, priority is given to the large burden of contagious diseases like tuberculosis and malaria, and the introduction of robotics would add a non-prioritized load to the sector's meager budget. The cost of buying and maintaining robots is very high, and the expenditure for setting up a unit suitable for robotic operations is also huge.

Another drawback of the present robotic systems used in healthcare is their narrow scope for customization. Every patient is different; hence, customization of healthcare service systems is the need of the hour, for patients as well as healthcare professionals, and the current healthcare system needs to be more flexible in providing robotic services that can be adapted to each patient's needs [40]. The use of surgical robots is practically limited to developed countries, advanced research centers, and super-specialty hospitals, putting it out of reach for the very large section of Indian society that actually needs it. Expensive robotic interventions are not feasible at small-town and village hospitals, where they are most needed due to the excessive workload and the lack of health professionals in government-owned health facilities.

Studies of adverse events in robotic surgery record several undesirable events, including injuries and deaths due to device faults; robots are mechanical devices and are susceptible to breakdowns and errors. Power shortages and the lack of other infrastructural facilities prevent universal access to robotics in the Indian healthcare system. In addition, positions for medical professionals at the grassroots level are largely vacant, and the lack of a trained and skilled workforce for operating and maintaining robotics and AI systems is a challenge. The interconnection between AI and computer programming has a major impact on health and care innovation, where benevolent service-delivery systems are increasingly important; these mechanical systems focus on affinity, including the essence of passionate and moral relationships alongside therapeutic considerations [12].

Due to its growing popularity, there is also a threat of irrational demand for robotic surgery in India, where literacy rates and awareness about health are poor. This may lead hospitals to buy robots for commercial publicity and push doctors into the unethical use of robotics.

The use of robotics in healthcare also raises major medico-legal problems. Like other computers, a surgical robot may be affected by viruses and fail to follow the surgeon's commands, leading to hazardous situations. The government has taken steps to strengthen medical education and healthcare delivery in rural areas, but introducing robotics, which works through mechanical procedures, may diminish the empathy and humanitarian aspect of treatment that is highly valued in the Indian scenario, where a large percentage of the population is illiterate and of low socio-economic status.

Apart from this, there are insufficient laws to address the security and privacy issues arising from data storage through artificial intelligence in the Indian healthcare sector [2]. Quality training of the huge and diversified workforce in the use of AI and robotics in healthcare is another major challenge. More simulation-based training is required at all levels to enhance surgeons' skills in minimally invasive and robotic colorectal surgery [18].

Conclusions

Although the introduction of robots in healthcare is in its infancy, it offers many opportunities for medical professionals, especially in urban settings. The significant role of AI in areas like drug discovery, diagnosis of diseases, digital medical consultations, robotic surgery, remote patient monitoring, and prediction of epidemic outbreaks cannot be denied. The emerging role of robotics in the care of elderly people has been recognized and is gradually being accepted by Indian society; in the present scenario it is hard to imagine implementing and monitoring health services without AI and robotics. Many new techniques for the use of robotics in the health sector are underway and may prove more cost-effective in the future, but the quality of robotic procedures needs to be controlled through a stringent, continuous monitoring system. The use of AI and robotics in the Indian healthcare sector may prove to be a milestone in improving the present state of healthcare services. It has certainly helped bridge the gap created by the lack of skilled health professionals and the huge number of vacancies for doctors, nurses, and paramedical staff. The main challenge is reaching the remote regions of the country, with their poor infrastructure and lack of advanced technologies; the high cost of AI and robotics stands as the major barrier to reaching disadvantaged communities. Besides this, there are chances of errors and mechanical failures due to improper maintenance, with potentially fatal consequences. The Indian government should support companies investing in AI and encourage public-private partnerships (PPPs) in the domain of AI and health, and policymakers must address the ethical issues to enhance the use of AI and robotics in the healthcare sector.
After considering these facts and practicalities, the use of robotics in India should be expanded in a phased manner, starting with reputed and well-equipped hospitals. It is viable only if used judiciously, with a standardized reporting and monitoring system in place.

The authors have declared that no competing interests exist.

Top 5 Robot Trends 2024

New Technology simplifies Automation

1 – Artificial Intelligence (AI) and machine learning

The trend of using artificial intelligence in robotics and automation keeps growing. The emergence of generative AI opens up new solutions. This subset of AI specializes in creating something new from what it has learned in training, and has been popularized by tools such as ChatGPT. Robot manufacturers are developing generative-AI-driven interfaces that allow users to program robots more intuitively, using natural language instead of code. Workers will no longer need specialized programming skills to select and adjust the robot's actions.
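The generative interfaces mentioned above are proprietary, but the underlying idea, turning a natural-language request into a sequence of robot commands, can be caricatured with simple keyword matching. The command vocabulary below is invented for illustration; a real system would use a language model rather than string matching.

```python
# A deliberately naive stand-in for a natural-language robot interface:
# map keywords in an utterance onto a small command vocabulary.
# Command names and signatures are hypothetical.

COMMANDS = {
    "pick": "PICK(item)",
    "place": "PLACE(target)",
    "weld": "WELD(seam)",
}

def to_commands(utterance):
    """Translate a free-text instruction into an ordered command list."""
    words = utterance.lower().split()
    return [cmd for word in words for key, cmd in COMMANDS.items()
            if word.startswith(key)]

print(to_commands("Pick the bracket and place it on the pallet"))
```

The value of the generative approach is precisely that it replaces brittle keyword tables like this with a model that handles phrasing it has never seen.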

Another example is predictive AI, which analyzes robot performance data to identify the future state of equipment. Predictive maintenance can save manufacturers the cost of machine downtime: in the automotive parts industry, each hour of unplanned downtime is estimated to cost US$1.3 million, the Information Technology & Innovation Foundation reports, which indicates the massive cost-saving potential of predictive maintenance. Machine learning algorithms can also analyze data from multiple robots performing the same process for optimization; in general, the more data a machine learning algorithm is given, the better it performs.
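The downtime figure above makes the economic case easy to check with back-of-the-envelope arithmetic. The avoided-hours and monitoring-cost numbers below are hypothetical assumptions; only the per-hour cost comes from the cited estimate.

```python
# Back-of-the-envelope savings from predictive maintenance.
# Only the per-hour downtime cost comes from the ITIF estimate cited
# in the text; avoided hours and monitoring spend are hypothetical.

DOWNTIME_COST_PER_HOUR = 1_300_000  # US$, automotive parts industry

def annual_savings(hours_avoided, monitoring_cost=500_000):
    """Net saving = avoided downtime cost minus the monitoring spend."""
    return hours_avoided * DOWNTIME_COST_PER_HOUR - monitoring_cost

print(annual_savings(4))  # 4 avoided hours: 4 * 1.3m - 0.5m
```

Even under conservative assumptions, a handful of avoided downtime hours per year dominates a plausible monitoring budget, which is the argument the text makes.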

2 – Cobots expanding to new applications

Human-robot collaboration continues to be a major trend in robotics. Rapid advances in sensors, vision technologies and smart grippers allow robots to respond in real-time to changes in their environment and thus work safely alongside human workers.

Collaborative robot applications offer a new tool for human workers, relieving and supporting them. They can assist with tasks that require heavy lifting, repetitive motions, or work in dangerous environments.

The range of collaborative applications offered by robot manufacturers continues to expand.

A recent market development is the rise of cobot welding applications, driven by a shortage of skilled welders. This demand shows that automation is not causing the labor shortage but rather offers a means to solve it. Collaborative robots will therefore complement, not replace, investments in traditional industrial robots, which operate at much faster speeds and will remain important for improving productivity in response to tight product margins.

New competitors are also entering the market with a specific focus on collaborative robots. Mobile manipulators, which combine collaborative robot arms with autonomous mobile robots (AMRs), offer new use cases that could substantially expand demand for collaborative robots.

3 – Mobile Manipulators

Mobile manipulators, so-called "MoMas", are automating material-handling tasks in industries such as automotive, logistics, and aerospace. They combine the mobility of robotic platforms with the dexterity of manipulator arms, enabling them to navigate complex environments and manipulate objects, which is crucial for applications in manufacturing. Equipped with sensors and cameras, these robots perform inspections and carry out maintenance on machinery and equipment. A significant advantage of mobile manipulators is their ability to collaborate with and support human workers; the shortage of skilled labor and the lack of staff applying for factory jobs are likely to increase demand.

4 – Digital Twins

Digital twin technology is increasingly used as a tool to optimize the performance of a physical system by creating a virtual replica. Since robots are increasingly digitally integrated in factories, digital twins can use their real-world operational data to run simulations and predict likely outcomes. Because the twin exists purely as a computer model, it can be stress-tested and modified with no safety implications while saving costs: all experimentation can be checked before the physical world itself is touched. Digital twins bridge the gap between the digital and physical worlds.
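The twin-as-safe-sandbox idea can be sketched as a small class: the twin mirrors state from telemetry and projects outcomes in simulation without touching the physical system. The wear model and its parameters below are invented for illustration.

```python
# A minimal digital-twin sketch: the twin holds a copy of the physical
# robot's state and can be stepped forward in simulation to test a
# change before it touches hardware. The wear model is hypothetical.

class JointTwin:
    def __init__(self, wear=0.0):
        self.wear = wear                 # state mirrored from telemetry

    def sync(self, telemetry):
        """Update the twin from real-world operational data."""
        self.wear = telemetry["wear"]

    def simulate(self, cycles, wear_per_cycle=0.001):
        """Project wear after `cycles` more cycles, with no side effects."""
        return self.wear + cycles * wear_per_cycle

twin = JointTwin()
twin.sync({"wear": 0.25})
projected = twin.simulate(cycles=100)
print(projected)    # projected wear after 100 simulated cycles
print(twin.wear)    # 0.25: simulation did not change the mirrored state
```

The key design point, as in the text, is that `simulate` is purely computational: experiments on the twin carry no safety implications for the physical robot.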

5 – Humanoid Robots

Robotics is witnessing significant advances in humanoids, which are designed to perform a wide range of tasks in various environments. Their human-like design, with two arms and two legs, allows these robots to be used flexibly in work environments that were created for humans, so they can be integrated easily into existing warehouse processes and infrastructure, for example.

The Chinese Ministry of Industry and Information Technology (MIIT) recently published detailed goals for the country’s ambitions to mass-produce humanoids by 2025. The MIIT predicts humanoids are likely to become another disruptive technology, similar to computers or smartphones, that could transform the way we produce goods and the way humans live.

The potential impact of humanoids on various sectors makes them an exciting area of development, but their mass market adoption remains a complex challenge. Costs are a key factor and success will depend on their return on investment competing with well-established robot solutions like mobile manipulators, for example.

“The five mutually reinforcing automation trends in 2024 show that robotics is a multidisciplinary field where technologies are converging to create intelligent solutions for a wide range of tasks,” says Marina Bill, President of the International Federation of Robotics. “These advances continue to shape the merging industrial and service robotics sectors and the future of work.”




Computer Science > Artificial Intelligence

Title: An Interactive Agent Foundation Model

Abstract: The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents across a wide range of domains, datasets, and tasks. Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction, enabling a versatile and adaptable AI framework. We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare. Our model demonstrates its ability to generate meaningful and contextually relevant outputs in each area. The strength of our approach lies in its generality, leveraging a variety of data sources such as robotics sequences, gameplay data, large-scale video datasets, and textual information for effective multimodal and multi-task learning. Our approach provides a promising avenue for developing generalist, action-taking, multimodal systems.


IMAGES

  1. (PDF) Medical Robotics: State-of-the-Art Applications and Research
  2. (PDF) The review of educational robotics research and the need for real
  3. Robotics Topics For Research Paper
  4. Robot Research: Introduction to Robotics
  5. (PDF) Artificial Intelligence and Robotics: A Research Overview
  6. ROBOTICS Paper Presentation

VIDEO

  1. ROBOTIC ENGINEER subscribe and stay for more updates

  2. Robotics

  3. vision based autonomous navigation

  4. A Robotics Job was a Portfolio Away

  5. Disney's New AI ROBOT SHOCKS The Entire Industry

  6. How Can I Effectively Read a Robotics Research Paper?

COMMENTS

  1. The International Journal of Robotics Research: Sage Journals

    First published Feb 20, 2024: RoBUTCHER: A novel robotic meat factory cell platform, by Alex Mason, Ian de Medeiros Esper, Olga Korostynska [...] First published Feb 15, 2024: The surface edge explorer (SEE): A measurement-direct approach to next best view planning, by Rowan Border and Jonathan D. Gammell (Open Access).

  2. Science Robotics

    Research, 21 Feb 2024: Tracking and navigation of a microswarm under laser speckle contrast imaging for targeted delivery. 21 Feb 2024: Magnetic soft microfiberbots for robotic embolization, by Xurui Liu, Liu Wang et al. 14 Feb 2024: Dexterous helical magnetic robot for improved endovascular access.

  3. Journal of Robotics

    Journal of Robotics publishes original research articles as well as review articles on all aspects of automated mechanical devices, from their design and fabrication to testing and practical implementation. Chief Editor: Professor Yangmin Li, The Hong Kong Polytechnic University, Hong Kong.

  4. IEEE Transactions on Robotics

    IEEE Transactions on Robotics | IEEE Xplore.

  5. Review of Robotics Technologies and Its Applications

    Based on a brief introduction to the development history of robotics, this paper reviews the classification of robot types, the key technologies involved, and their applications in various fields; analyzes development trends and recent research hotspots in robotics; and provides an outlook on the future development of robotics and its app...

  6. Robotics

    Latest articles (Open Access Review): Automation's Impact on Agriculture: Opportunities, Challenges, and Economic Effects, by Khadijeh Bazargani and Taher Deemyad. Robotics 2024, 13 (2), 33; https://doi.org/10.3390/robotics13020033, 19 Feb 2024.

  7. Robotics and Autonomous Systems

    Read the latest articles of Robotics and Autonomous Systems at ScienceDirect.com, Elsevier's leading platform of peer-reviewed scholarly literature ... Selected papers from the 18th International Conference on Intelligent Autonomous Systems (IAS18-2023) ... New technologies and application domains push the need for research and development ...

  8. Robotics

    Having a machine learning agent interact with its environment requires true unsupervised learning, skill acquisition, active learning, exploration and reinforcement, all ingredients of human learning that are still not well understood or exploited through the supervised approaches that dominate deep learning today.

  9. Growth in AI and robotics research accelerates

    The number of AI and robotics papers published in the 82 high-quality science journals in the Nature Index (Count) has been rising year-on-year — so rapidly that it resembles an exponential...

  10. Artificial intelligence, machine learning and deep learning in advanced

    1. Introduction. Artificial intelligence (AI), machine learning (ML), and deep learning (DL) are all important technologies in the field of robotics [1]. The term artificial intelligence (AI) describes a machine's capacity to carry out operations that ordinarily require human intellect, such as speech recognition, understanding of natural language, and decision-making.

  11. T-RO

    The IEEE Transactions on Robotics (T-RO) publishes research papers that represent major advances in the state-of-the-art in all areas of robotics. The Transactions welcomes original papers that report on any combination of theory, design, experimental studies, analysis, algorithms, and integration and application case studies involving all aspects of robotics.

  12. (PDF) ARTIFICIAL INTELLIGENCE IN ROBOTICS: FROM ...

    This research paper explores the integration of artificial intelligence (AI) in robotics, specifically focusing on the transition from automation to autonomous systems.

  13. Machine learning techniques for robotic and autonomous ...

    This paper reviews the state-of-the-art with regard to RAS technologies (including unmanned marine robot systems, unmanned ground robot systems, climbing and crawler robots, unmanned aerial vehicles, and space robot systems) and their application for the inspection and monitoring of mechanical systems and civil infrastructure.

  14. Augmented Reality Meets Artificial Intelligence in Robotics: A

    A total of 29 papers were analyzed from two perspectives: a theme-based perspective showcasing the relation between AR and AI, and an application-based analysis highlighting how the robotics application was affected. These two sections are further categorized based on the type of robotics platform and the type of robotics application, respectively.

  15. Swarm Robotics: Past, Present, and Future [Point of View]

    Swarm robotics deals with the design, construction, and deployment of large groups of robots that coordinate and cooperatively solve a problem or perform a task. It takes inspiration from natural self-organizing systems, such as social insects, fish schools, or bird flocks, characterized by emergent collective behavior based on simple local interaction rules [1], [2]. Typically, swarm robotics ...

  16. Frontiers in Robotics and AI

    301 Research Topics. Submission open: The Future of 3D Printing: In the Aspect of Machine Learning, Industry 4.0, and Industry 5.0 (Tariku Sinshaw Tamir, Jiewu Leng, Xijin Hua, Gang Xiong). Submission open: Advanced Sensing, Learning and Control for Effective Human-Robot Interaction (Zhenyu Lu, Lu Chen, Yanpeng Guan, Chao Zeng, Jing Luo).

  17. 500 research papers and projects in robotics

    These free, downloadable research papers can shed light on some of the complex areas in robotics, such as navigation, motion planning, robotic interactions, obstacle avoidance, actuators, machine learning, computer vision, artificial intelligence, collaborative robotics, nano robotics, social robotics, cloud, swarm robotics, sensors, mobile...

  18. Robotic surgery: an evolution in practice

    In the 1990s, researchers from the United States (US) National Aeronautics and Space Administration and Stanford Research Institute investigated the potential of robotics for telepresence surgery . Subsequent US Army funding attempted to devise a system to remotely operate on wounded soldiers via robotic equipment, in hopes of decreasing ...

  19. Soft Robotics: A Systematic Review and Bibliometric Analysis

    1. Introduction. The field of soft robotics is considered one of spectacular year-on-year development, owing to its potential to open new perspectives within robotics and many other fields.

  20. (PDF) The future of Robotics Technology

    The Future of Robotics Technology. Journal of Robotics, Networking and Artificial Life 3 (4): 270. License: CC BY-NC 4.0. Authors: Luigi Pagliarini, Accademia di Belle Arti di Macerata...

  21. Artificial Intelligence With Robotics in Healthcare: A Narrative Review

    Most of the research publications taken into account for gathering the data were from 2013 to 2022. Research papers related to the use of robotics and artificial intelligence in healthcare were thoroughly studied with special emphasis on its viability in the Indian scenario. The relevant search terms used were artificial intelligence, robotics ...

  22. (PDF) Research Paper on Robotics-New Era

    THE NEW ERA OF ROBOTICS: DEVICES AND SYSTEMS. December 2018. Alex Sergevich, Prabhu Prasad. This paper provides detailed information about robot devices and systems. As everyone knows, how ...

  23. Top 5 Robot Trends 2024

    "The five mutually reinforcing automation trends in 2024 show that robotics is a multidisciplinary field where technologies are converging to create intelligent solutions for a wide range of tasks," says Marina Bill, President of the International Federation of Robotics.

  24. [2402.05929] An Interactive Agent Foundation Model

    The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents across a wide range of domains, datasets, and tasks. Our training ...