
The Impact of Artificial Intelligence on Data System Security: A Literature Review
Ricardo Raimundo
1 ISEC Lisboa, Instituto Superior de Educação e Ciências, 1750-142 Lisbon, Portugal; [email protected]
Albérico Rosário
2 Research Unit on Governance, Competitiveness and Public Policies (GOVCOPP), University of Aveiro, 3810-193 Aveiro, Portugal
Diverse forms of artificial intelligence (AI) are at the forefront of triggering digital security innovations based on the threats that are arising in this post-COVID world. On the one hand, companies are experiencing difficulty in dealing with security challenges with regard to a variety of issues, ranging from system openness, decision making, and quality control to web domain, to mention a few. On the other hand, in the last decade, research has focused on security capabilities based on tools such as platform complacency, intelligent trees, modeling methods, and outage management systems in an effort to understand the interplay between AI and those issues. The dependence on the emergence of AI in running industries and shaping the education, transport, and health sectors is now well known in the literature. AI is increasingly employed in managing data security across economic sectors. Thus, a literature review of AI and system security within the current digital society is opportune. This paper aims at identifying research trends in the field through a systematic bibliometric literature review (LRSB) of research on AI and system security. The review entails 77 articles published in the Scopus® database, presenting up-to-date knowledge on the topic. The LRSB results were synthesized across current research subthemes, and the findings are presented. The originality of the paper relies on its LRSB method, together with an extant review of articles that have not been categorized so far. Implications for future research are suggested.
1. Introduction
The assumption that the human brain may be deemed quite comparable to computers in some ways offers the spontaneous basis for artificial intelligence (AI), which is supported by psychology through the idea of humans and animals operating like machines that process information by devices of associative memory [1]. Nowadays, researchers are working on the possibilities of AI to cope with varying issues of systems security across diverse sectors. Hence, AI is commonly considered an interdisciplinary research area that attracts considerable attention both in economic and social domains, as it offers a myriad of technological breakthroughs with regard to systems security [2]. There is a universal trend of investing in AI technology to face the security challenges of our daily lives, such as statistical data, medicine, and transportation [3].
Some claim that specific data from key sectors have supported the development of AI, namely the availability of data from e-commerce [4], businesses [5], and government [6], which provided substantial input to ameliorate diverse machine-learning solutions and algorithms, in particular with respect to systems security [7]. Additionally, China and Russia have acknowledged the importance of AI for systems security and competitiveness in general [8, 9]. Similarly, China has recognized the importance of AI in terms of housing security, aiming at becoming an authority in the field [10]. Such efforts are already being carried out in some leading countries in order to profit the most from AI's substantial benefits [9]. In spite of the huge development of AI in the last few years, the discussion around the topic of systems security is sparse [11]. Therefore, it is opportune to review the latest developments regarding the theme in order to map the advancements in the field and their ensuing outcomes [12]. In view of this, we intend to find out the principal trends in the issues discussed on the topic these days, in order to answer the main research question: What is the impact of AI on data system security?
The article is organized as follows. In Section 2, we put forward diverse theoretical concepts related to AI in systems security. In Section 3, we present the methodological approach. In Section 4, we discuss the main fields of use of AI with regard to systems security that emerged from the literature. Finally, we conclude this paper by suggesting implications and future research avenues.
2. Literature Trends: AI and Systems Security
The concept of AI was introduced following the creation of the notion of the digital computing machine, in an attempt to ascertain whether a machine is able to "think" [1] or whether a machine can carry out humans' tasks [13]. AI is a vast domain of information and computer technologies (ICT) that aims at designing systems capable of operating autonomously, analogously to the individual's decision-making process [14]. In terms of AI, a machine may learn from experience by processing an immeasurable quantity of data while distinguishing patterns in it, as in the case of Siri [15] and image recognition [16], technologies based on machine learning, which is a subtheme of AI defined as intelligent systems with the capacity to think and learn [1].
Furthermore, AI entails a myriad of related technologies, such as neural networks [17] and machine learning [18], just to mention a few, and we can identify some research areas of AI:
- (I) Machine learning is a myriad of technologies that allow computers to carry out algorithms based on gathered data and distinct orders, providing the machine with the capability to learn without instructions from humans, adjusting its own algorithm to the situation while learning and recoding itself, as with Google and Siri performing distinct tasks ordered by voice [19], or video surveillance that tracks unusual behavior [20];
- (II) Deep learning constitutes the ensuing progress of machine learning, in which the machine carries out tasks directly from pictures, text, and sound, through a wide set of data architectures entailing numerous layers, in order to learn and characterize data with several levels of abstraction, thus imitating how the natural brain processes information [21]. This is illustrated, for example, in forming a certificate database structure of university performance key indicators in order to fix issues such as identity authentication [21];
- (III) Neural networks are composed of a pattern recognition system that machine/deep learning operates to perform learning from observational data, figuring out its own solutions, such as an auto-steering gear system with a fuzzy regulator, which enables the selection of optimal neural network models of the vessel paths to obtain control activity [22];
- (IV) Natural language processing machines analyze language and speech as it is spoken, resorting to machine learning, for example in developing a swarm-intelligence-based interactive system while mounting friendly human-computer interface software for users, to be implemented in educational and e-learning organizations [23];
- (V) Expert systems are composed of software arrangements that assist in achieving answers to distinct inquiries, provided either by a customer or by another software set, in which expert knowledge is stored for a particular application area and which include a reasoning component that accesses answers in view of the environmental information and the subsequent decision making [24]. A minimal sketch contrasting approaches (I) and (V) follows this list.
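To make the contrast between these subthemes concrete, the following minimal Python sketch (our own illustration, not drawn from the reviewed articles; the login-attempt data and thresholds are hypothetical) compares an expert-system rule, hand-coded by a human, with a machine-learning routine that derives a similar rule from labeled data:

```python
# Illustrative sketch: an expert system encodes knowledge as hand-written
# rules, whereas a machine-learning model adjusts itself from labeled data.
# All data here are hypothetical login-attempt counts, labeled benign (0)
# or suspicious (1).

# (V) Expert-system style: a human expert encodes the decision rule directly.
def expert_rule(failed_logins: int) -> int:
    return 1 if failed_logins > 5 else 0  # hand-picked threshold

# (I) Machine-learning style: the threshold is learned from labeled examples.
def learn_threshold(samples: list[tuple[int, int]]) -> float:
    # Choose the cut that misclassifies the fewest training samples.
    candidates = sorted({x for x, _ in samples})
    best_cut, best_errors = 0.0, len(samples) + 1
    for cut in candidates:
        errors = sum((x > cut) != bool(y) for x, y in samples)
        if errors < best_errors:
            best_cut, best_errors = float(cut), errors
    return best_cut

if __name__ == "__main__":
    train = [(0, 0), (1, 0), (2, 0), (3, 0), (8, 1), (9, 1), (12, 1)]
    cut = learn_threshold(train)
    print(f"learned threshold: {cut}")          # 3.0 on these samples
    print("flag 10 attempts:", int(10 > cut))   # 1 -> suspicious
```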
Those subthemes of AI are applied to many sectors, such as health institutions, education, and management, through varying applications related to systems security. These abovementioned processes have been widely deployed to solve important security issues, such as the following application trends (Figure 1):
- (a) Cyber security, in terms of computer crime, behavior research, access control, and surveillance, as in the case of computer vision, in which an algorithm analyzes images, and of CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) techniques [6, 7, 12, 19, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38];
- (b) Information management, namely in supporting decision making, business strategy, and expert systems, for example by improving the quality of relevant strategic decisions through the analysis of big data, as well as in the management of the quality of complex objects [2, 4, 5, 11, 14, 24, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60];
- (c) Societies and institutions, regarding computer networks, privacy, digitalization, and legal and clinical assistance, for example in terms of the legal support of cyber security, digital modernization, systems to support police investigations, and the efficiency of technological processes in transport [8, 9, 10, 15, 17, 18, 20, 21, 23, 28, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73];
- (d) Neural networks, for example in terms of designing a model of human personality for use in robotic systems [1, 13, 16, 22, 74, 75].

Figure 1. Subthemes/network of all keywords of AI. Source: own elaboration.
Through these streams of research, we will explain how the huge potential of AI can be deployed to enhance the systems security in use both in states and organizations, to mitigate risks and increase returns while identifying and averting cyber attacks and determining the best course of action [19]. AI could even prove more effective than humans in averting potential threats through various security solutions, such as redundant systems of video surveillance, VOIP voice network technology security strategies [36, 76, 77], and dependence upon diverse platforms for protection (platform complacency) [30].
The design of the abovementioned conceptual and technological framework was not made randomly, as we did a preliminary search on Scopus with the keywords “Artificial Intelligence” and “Security”.
3. Materials and Methods
We carried out a systematic bibliometric literature review (LRSB) of the impact of AI on data system security. The LRSB is a study concept based on a detailed, thorough study of the recognition and synthesis of information, and an alternative to traditional literature reviews, improving: (i) the validity of the review, by providing a set of steps that can be followed if the study is replicated; (ii) accuracy, by providing and demonstrating arguments strictly related to the research question; and (iii) the generalization of the results, by allowing the synthesis and analysis of accumulated knowledge [78, 79, 80]. Thus, the LRSB is a "guiding instrument" that allows one to steer the review according to its objectives.
The study is performed following Raimundo and Rosário's suggestions, as follows: (i) definition of the research question; (ii) location of the studies; (iii) selection and evaluation of studies; (iv) analysis and synthesis; (v) presentation of results; and finally (vi) discussion and conclusion of results. This methodology ensures a comprehensive, auditable, and replicable review that answers the research question.
The review was carried out in June 2021, with a bibliographic search in the Scopus database of scientific articles published up to June 2021. The search was carried out in three phases: (i) using the keyword "Artificial Intelligence", 382,586 documents were obtained; (ii) adding the keyword "Security", a set of 15,916 documents was obtained, and limiting the search to Business, Management, and Accounting, 401 documents were obtained; and finally, (iii) filtering by the exact keywords "Data security" and "Systems security", a total of 77 documents was obtained (Table 1). A query sketch approximating these phases follows Table 1.
Table 1. Screening methodology. Source: own elaboration.
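For reproducibility, the three screening phases can be approximated by Scopus advanced-search queries of the following form. This is a hedged reconstruction: the exact field codes and query strings used in the original search are not reported in the text, so TITLE-ABS-KEY, SUBJAREA(BUSI), and KEY are our assumptions about how the described filters would be expressed:

```python
# Hypothetical reconstruction of the three-phase Scopus screening described
# above; the exact queries used by the authors are not reported in the text.
phase_1 = 'TITLE-ABS-KEY("Artificial Intelligence")'                       # ~382,586 documents
phase_2 = phase_1 + ' AND TITLE-ABS-KEY("Security")'                       # ~15,916 documents
phase_2_ltd = phase_2 + ' AND SUBJAREA(BUSI)'                              # ~401 documents
phase_3 = phase_2_ltd + ' AND KEY("Data security" OR "Systems security")'  # 77 documents

for name, query in [("phase (i)", phase_1), ("phase (ii)", phase_2),
                    ("subject limit", phase_2_ltd), ("phase (iii)", phase_3)]:
    print(f"{name}: {query}")
```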
The search strategy resulted in 77 academic documents. This set of eligible documents was assessed for academic and scientific relevance and quality. It comprises conference papers (43), articles (29), reviews (3), a letter (1), and a retracted document (1).
Peer-reviewed academic documents on the impact of artificial intelligence on data system security published up to June 2021 were selected. In the period under review, 2020 was the year with the highest number of peer-reviewed academic documents on the subject, with 18 publications, and with 7 publications already confirmed for 2021. Figure 2 reviews the peer-reviewed publications published until 2021.

Figure 2. Number of documents by year. Source: own elaboration.
The publications were sorted out as follows:
- 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC 2011), Proceedings (14);
- Proceedings of the 2020 IEEE International Conference "Quality Management, Transport and Information Security, Information Technologies" (IT&QM&IS 2020) (6);
- Proceedings of the 2019 IEEE International Conference "Quality Management, Transport and Information Security, Information Technologies" (IT&QM&IS 2019) (5);
- Computer Law and Security Review (4);
- Journal of Network and Systems Management (4);
- Decision Support Systems (3);
- Proceedings of the 2021 21st ACIS International Semi-Virtual Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD Winter 2021) (3);
- IEEE Transactions on Engineering Management (2);
- ICTC 2019, 10th International Conference on ICT Convergence: "ICT Convergence Leading the Autonomous Future" (2);
- Information and Computer Security (2);
- Knowledge-Based Systems (2);
- with 1 publication each: 2013 3rd International Conference on Innovative Computing Technology (INTECH 2013); 2020 IEEE Technology and Engineering Management Conference (TEMSCON 2020); 2020 International Conference on Technology and Entrepreneurship, Virtual (ICTE-V 2020); 2nd International Conference on Current Trends in Engineering and Technology (ICCTET 2014); ACM Transactions on Management Information Systems; AFE Facilities Engineering Journal; Electronic Design; FAccT 2021, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; HAC; ICE-B 2010, Proceedings of the International Conference on e-Business; IEEE Engineering Management Review; ICAPS 2008, Proceedings of the 18th International Conference on Automated Planning and Scheduling; ICAPS 2009, Proceedings of the 19th International Conference on Automated Planning and Scheduling; Industrial Management and Data Systems; Information and Management; Information Management and Computer Security; Information Management Computer Security; Information Systems Research; International Journal of Networking and Virtual Organisations; International Journal of Production Economics; International Journal of Production Research; Journal of the Operational Research Society; Proceedings of the 2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI 2020); Proceedings of the Annual Meeting of the Decision Sciences Institute; Proceedings of the 2014 Conference on IT in Business, Industry and Government, an international conference by CSI on big data (CSIBIG 2014); Proceedings of the European Conference on Innovation and Entrepreneurship (ECIE); TQM Journal; Technology in Society; Towards the Digital World and Industry X.0, Proceedings of the 29th International Conference of the International Association for Management of Technology (IAMOT 2020); WIT Transactions on Information and Communication Technologies.
We can say that, in recent years, there has been increasing interest in research on the impact of artificial intelligence on data system security.
In Table 2, we analyze the Scimago Journal & Country Rank (SJR), the best quartile, and the H index of each publication.
Table 2. Scimago Journal & Country Rank impact factor. Note: * data not available. Source: own elaboration.
Information Systems Research is the most cited publication, with an SJR of 3.510, Q1, and an H index of 159.
There are a total of 11 journals in Q1, 3 journals in Q2, 2 journals in Q3, and 2 journals in Q4. Journals in the best quartile, Q1, represent 27% of the 41 journal titles; Q2 represents 7%; and Q3 and Q4 represent 5% each. Finally, for 23 of the titles, representing 56%, the data are not available. These proportions are re-derived in the sketch below.
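The rounded percentages follow from simple proportions over the 41 journal titles, as the short check below shows:

```python
# Re-deriving the rounded quartile shares over the 41 journal titles.
counts = {"Q1": 11, "Q2": 3, "Q3": 2, "Q4": 2, "no data": 23}
total = sum(counts.values())            # 41 journal titles
for quartile, n in counts.items():
    print(f"{quartile}: {n}/{total} = {n / total:.0%}")
# Q1: 27%, Q2: 7%, Q3: 5%, Q4: 5%, no data: 56%
```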
As evident from Table 2, most of the ranked journals on artificial intelligence in data system security fall in the Q1 best quartile.
The subject areas covered by the 77 scientific documents were: Business, Management and Accounting (77); Computer Science (57); Decision Sciences (36); Engineering (21); Economics, Econometrics, and Finance (15); Social Sciences (13); Arts and Humanities (3); Psychology (3); Mathematics (2); and Energy (1).
The most cited article was "CANN: An intrusion detection system based on combining cluster centers and nearest neighbors" by Lin, Ke, and Tsai, with 290 citations, published in Knowledge-Based Systems, with an SJR of 1.590, the best quartile (Q1), and an H index of 121. The article proposes a new resource representation approach combining the cluster center and the nearest neighbor.
In Figure 3, we can analyze the evolution of citations of the documents published between 2010 and 2021, showing a growing number of citations, with an R² of 0.45 for the linear trend (see the sketch after Figure 3).

Figure 3. Evolution and number of citations between 2010 and 2021. Source: own elaboration.
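For readers who wish to reproduce the trend statistic, the R² of the citation evolution is obtained from an ordinary least-squares fit of citations per year against the year. The yearly counts below are hypothetical placeholders, since the real values appear only in Figure 3:

```python
import numpy as np

# Hypothetical yearly citation counts (the real values appear only in Figure 3).
years = np.arange(2010, 2022)
cites = np.array([1, 2, 3, 2, 7, 9, 14, 20, 31, 32, 48, 30])

slope, intercept = np.polyfit(years, cites, 1)      # least-squares line
pred = slope * years + intercept
ss_res = np.sum((cites - pred) ** 2)                # residual sum of squares
ss_tot = np.sum((cites - cites.mean()) ** 2)        # total sum of squares
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```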
The h index was used to verify the productivity and impact of the documents: it is the largest number h of documents that have each been cited at least h times. Of the documents considered, 11 have been cited at least 11 times, giving an h index of 11. A minimal computation sketch follows.
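A minimal sketch of the h index computation under its standard definition (the citation counts below are hypothetical):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h documents each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank        # at least `rank` documents have >= `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for a set of reviewed documents.
print(h_index([290, 45, 30, 22, 18, 15, 13, 12, 11, 11, 11, 9, 3, 0]))  # 11
```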
In Appendix A, Table A1, the citations of all scientific articles up to 2021 are analyzed; 35 documents had not been cited by 2021.
Appendix A, Table A2, examines the self-citation of documents up to 2021; a total of 16 self-citations were identified.
In Figure 4, a bibliometric analysis was performed to analyze and identify indicators of the dynamics and evolution of scientific information using the main keywords. The analysis of the bibliometric research results, using the scientific software VOSviewer, aims to identify the main keywords of research on "Artificial Intelligence" and "Security".

Figure 4. Network of linked keywords. Source: own elaboration.
The linked keywords can be analyzed in Figure 4 , making it possible to clarify the network of keywords that appear together/linked in each scientific article, allowing us to know the topics analyzed by the research and to identify future research trends.
4. Discussion
By examining the selected pieces of literature, we have identified four principal areas that have been underscored and deserve further investigation with regard to cyber security in general: business decision making, electronic commerce business, AI social applications, and neural networks (Figure 4). There is a myriad of areas in which AI cyber security can be applied throughout the social, private, and public domains of our daily lives, from Internet banking to digital signatures.
First, the possible reduction of unnecessary leakage of accounting information has been discussed [27], mainly through the security drawbacks of VOIP technology in IP network systems and the subsequent safety measures [77], which comprise a secure dynamic password used in Internet banking [29].
Second, some computer users' cyber security behaviors have been researched, including both a naïve lack of concern about the likelihood of facing security threats and dependence upon specific platforms for protection, as well as dependence on guidance from trusted social others [30]. This has been partly resolved through mobile agent (MA) management systems in distributed networks, operating a model of an open management framework that provides a broad range of processes to enforce security policies [31].
Third, AI cyber systems security aims at achieving stability of the programming and analysis procedures by clarifying in detail the relationship of code fault-tolerance programming with code security in order to strengthen it [33], offering an overview of existing cyber security tasks and a roadmap [32].
Fourth, in this vein, numerous AI tools have been developed to achieve a multi-stage security task approach for a full security life cycle [38]. New digital signature technology of increasing reliance has been built upon elliptic curve cryptography [28]; new experimental CAPTCHAs have been developed, with more interference characters and colorful backgrounds [8], to provide better protection against spambots, while allowing people with little knowledge of sign languages to recognize gestures on video relatively fast [70]; novel detection approaches beyond traditional firewall systems have been developed (e.g., cluster center and nearest neighbor, CANN), of higher efficiency for the detection of attacks [71] (a simplified sketch follows this paragraph); AI security solutions for the IoT (e.g., blockchain) have been proposed, due to the security flaws of its centralized architecture [34]; and an integrated AI algorithm has been developed to identify malicious web domains for the security protection of Internet users [19].
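To illustrate the flavor of the CANN approach [71], the sketch below compresses each sample into a one-dimensional feature, the sum of its distance to the nearest cluster center and its distance to its nearest neighbor, before a simple k-NN classification. This is our simplified reading of the cited method, not the authors' reference implementation (in the original, the second distance is to the nearest neighbor within the sample's own cluster):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical 2-D connection records: class 0 = normal, class 1 = attack.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Step 1: cluster centers (one per assumed class region).
centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).cluster_centers_

def cann_feature(X_all: np.ndarray) -> np.ndarray:
    """dis1 (distance to nearest cluster center) + dis2 (distance to nearest neighbor)."""
    dis1 = np.min(np.linalg.norm(X_all[:, None, :] - centers[None, :, :], axis=2), axis=1)
    d = np.linalg.norm(X_all[:, None, :] - X_all[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                      # ignore self-distance
    dis2 = d.min(axis=1)
    return (dis1 + dis2).reshape(-1, 1)              # one-dimensional representation

# Step 2: k-NN on the 1-D CANN feature instead of the raw features.
clf = KNeighborsClassifier(n_neighbors=3).fit(cann_feature(X), y)
print("training accuracy:", clf.score(cann_feature(X), y))
```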
In sum, AI has progressed lately through advances in machine learning, with multilevel solutions to the security problems faced both in operating systems and networks, comprehending algorithms, methods, and tools lengthily used by security experts for the betterment of the systems [6]. In this way, we present a detailed overview of the impacts of AI on each of those fields.
4.1. Business Decision Making
AI has an increasing impact on systems security aimed at supporting decision making at the management level. Increasingly, expert systems are discussed that, along with the evolution of computers, are able to integrate systems into corporate culture [24]. Such systems are expected to maximize benefits against costs in situations where a decision-making agent has to decide between a limited set of strategies with sparse information [14], while a quality strategic decision is demanded within a relatively short period of time, for example through intelligent analysis of big data [39].
Secondly, distributed decision models coordinated toward an overall solution have been adopted, reliant on a decision support platform [40], either more of a mathematical/modeling support of a situational approach to complex objects [41], or more of a web-based multi-perspective decision support system (DSS) [42].
Thirdly, the problem of software support for management decisions was resolved by combining a systematic approach with heuristic methods and game-theoretic modeling [43], which, in the case of industrial security, reduces the subsequent number of incidents [44].
Fourthly, in terms of industrial management and ISO information security control, a semantic decision support system increases the automation level and supports the decision-maker in identifying the most appropriate strategy against a modeled environment [45], while providing understandable technology that is based on the decisions and interacts with the machine [46].
Finally, with respect to teamwork, AI validates a theoretical model of behavioral decision theory to assist organizational leaders in deciding on strategic initiatives [11], while allowing an understanding of who may have information that is valuable for solving a collaborative scheduling problem [47].
4.2. Electronic Commerce Business
This research stream focuses on e-commerce solutions to improve systems security, principally on security measures for electronic commerce (e-commerce) businesses, in order to avoid cyber attacks, innovate, achieve information, and ultimately obtain clients [5].
First, intelligent models have been built around the factors that induce Internet users to make an online purchase, in order to build effective strategies [48], while cyber security issues are discussed through diverse AI models for controlling unauthorized intrusion [49], in particular in some countries, such as China, to solve drawbacks in firewall technology, data encryption [4], and qualification [2].
Second, research addresses how to adapt to the increasingly demanding environment of a world pandemic, in terms of finding new revenue sources for business [3] and of restructuring digital business processes to promote new products and services, with sufficient privacy and with manpower qualified accordingly and able to deal with AI [50].
Third, research seeks to develop AI able to intelligently protect business, either through a distinct model of decision trees amidst the Internet of Things (IoT) [51] or by ameliorating network management through active networks technology, with a multi-agent architecture able to imitate the reactive behavior and logical inference of a human expert [52].
Fourth, research attempts to reconceptualize the role of AI within the proximity's spatial and non-spatial dimensions of a new digital industry framework, aiming to connect the physical and digital production spaces both in traditional and new technology-based approaches (e.g., Industry 4.0), thus promoting innovation partnerships and efficient technology and knowledge transfer [53]. In this vein, there is an attempt to move management systems from a centralized to a distributed paradigm along the network, based on criteria such as, for example, the delegation degree [54], which even allows the transition from Industry 4.0 to Industry 5.0 through AI in the form of the Internet of Everything, multi-agent systems, emergent intelligence, and enterprise architecture [58].
Fifth, in terms of manufacturing environments and following that networking paradigm, there is also an attempt to manage agent communities in distributed and varied manufacturing environments through an AI multi-agent virtual manufacturing system (e.g., MetaMorph) that optimizes real-time planning and security [55]. In addition, in manufacturing, smart factories have been built to mitigate the security vulnerabilities of intelligent manufacturing process automation through AI security measures and devices [56], as, for example, in the design of a mine security monitoring configuration software platform on a real-time framework (e.g., the device management class diagram) [26]. Smart buildings have been adopted in manufacturing and nonmanufacturing environments, aiming at reducing costs and the height of the building and minimizing the space required for users [57].
Finally, aiming at augmenting the cyber security of e-commerce and business in general, other projects have been put in place, such as computer-assisted audit tools (CAATs), able to carry out continuous auditing and allowing auditors to augment their productivity amidst real-time accounting and electronic data interchange [59], alongside a surge in the demand for high-tech/AI jobs [60].
4.3. AI Social Applications
As seen, AI systems security can be widely deployed across almost all society domains, be it in regulation, Internet security, computer networks, digitalization, health, or numerous other fields (see Figure 4).
First, there have been attempts to regulate cyber security, namely in terms of the legal support of cyber security with regard to the application of artificial intelligence technology [61], in an innovative and economically/politically friendly way [9], and in fields such as infrastructure, by ameliorating the efficiency of technological processes in transport, for example by reducing inter-train stops [63]. The same applies to education, by improving the cyber security of university e-government, for example by forming a certificate database structure of university performance key indicators [21], of e-learning organizations through swarm intelligence [23], and by acquainting the risks a digital campus will face according to ISO series standards and criteria of risk levels [25], while suggesting relevant solutions to key issues in its network information safety [12].
Second, some moral and legal issues have arisen, in particular in relation to privacy, sex, and childhood. This is the case of the ethical/legal legitimacy of publishing open-source dual-purpose machine-learning algorithms [18]; the needed legislated framework comprising regulatory agencies and representatives of all stakeholder groups gathered around AI [68]; the gendering issue of VPAs as female (e.g., Siri), as replicating normative assumptions about the potential role of women as secondary to men [15]; the need for inclusion of communities to uphold their own code [35]; and the need to improve the legal position of people, and children in particular, who are exposed to AI-mediated risk profiling practices [7, 69].
Third, the traditional industry also benefits from AI, given that it can improve, for example, the safety of coal mines, by analyzing the coal mine safety scheme storage structure and building a data warehouse and analysis [64]; the security of smart cities and their intelligent devices and networks, through AI frameworks (e.g., the Unified Theory of Acceptance and Use of Technology, UTAUT) [65]; housing [10] and building [66] security systems in terms of energy balance (e.g., the Direct Digital Control System), implying fuzzy logic as a non-precise program tool that allows the systems to function well [66]; and even the detection and mitigation of data integrity attacks on outage management systems (OMSs) by AI means [67].
Fourth, citizens in general have reaped benefits from areas of AI such as police investigation, through expert systems that offer support in profiling and tracking criminals based on machine-learning and neural network techniques [17], and video surveillance systems of real-time accuracy [76], resorting to models that detect moving objects while keeping up with environment changes [36] (a generic sketch follows this paragraph) and to dynamic sensor selection for processing the image streams of all cameras simultaneously [37], whereas ambient intelligence (AmI) spaces, where devices, sensors, and wireless networks combine data from diverse sources, monitor user preferences and their subsequent results on users' privacy under a regulatory privacy framework [62].
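As an illustration of that kind of moving-object detection, an adaptive background-subtraction model can be set up in a few lines with OpenCV. This is a generic sketch, not the reviewed systems' implementations [36, 37, 76]; the input file name and alarm threshold are hypothetical:

```python
import cv2

# Adaptive background subtractor: the Gaussian-mixture background model is
# updated frame by frame, so gradual environment changes (lighting, weather)
# are absorbed while moving objects remain in the foreground mask.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)         # 255 = foreground, 127 = shadow
    moving = cv2.countNonZero(cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1])
    if moving > 5000:                      # hypothetical alarm threshold (pixels)
        print("motion detected in frame")
cap.release()
```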
Finally, AI has granted society noteworthy progress in terms of clinical assistance, for instance an electronic health record system integrated into existing risk management software to monitor sepsis at the intensive care unit (ICU) through a peer-to-peer VPN connection and a fast and intuitive user interface [72]. As well, it has offered an AI organizational solution of an innovative housing model that combines remote surveillance, diagnostics, and the use of sensors and video to detect anomalies in the behavior and health of the elderly [20], together with a case-based decision support system for the automatic real-time surveillance and diagnosis of health care-associated infections, through diverse machine-learning techniques [73].
4.4. Neural Networks
Neural networks, or the process through which machines learn from observational data, coming up with their own solutions, have lately been discussed along several streams of issues.
First, it has been argued that it is opportune to develop a software library for creating artificial neural networks for machine learning to solve non-standard tasks [74], along with a decentralized and integrated AI environment that can accommodate video data storage and event-driven video processing, gathered from varying sources such as video surveillance systems [16], whose images can be improved through AI [75].
Second, such neural network architecture has progressed into a huge number of neurons in the network, in which the devices of associative memory are designed with a number of neurons comparable to the human brain within supercomputers [1]. Subsequently, such neural networks can be modeled on the basis of switch architectures that interconnect the neurons and store the training results in memory, and on the basis of genetic algorithms, to be exported to other robotic systems: a model of human personality for use in robotic systems in medicine and biology [13].
Finally, the neural network is quite representative of AI in the attempt that, once trained in human learning and self-learning, it could operate without human guidance, as in the case of a current vessel seaway positioning system involving a fuzzy logic regulator and a neural network classifier, enabling the selection of optimal neural network models of the vessel paths to obtain control activity [22].
4.5. Data Security and Access Control Mechanisms
Access control can be deemed a classic security model that is pivotal to any security and privacy protection process, supporting data access from different environments as well as protecting against unauthorized access according to a given security policy [81]. In this vein, data security and access control-related mechanisms have been widely debated these days, particularly with regard to their distinct contextual conditions in terms, for example, of the spatial and temporal environs that differ across diverse, decentralized networks. Those networks constitute a major challenge because they are dynamically located in "cloud" or "fog" environments rather than in fixed desktop structures, thus demanding innovative approaches in terms of access security, such as fog-based context-aware access control (FB-CAAC) [81]. Context-awareness is, therefore, an important characteristic of changing environs, where users access resources anywhere and anytime. As a result, it is paramount to highlight the interplay between the information, now based on fuzzy sets, and its situational context to implement context-sensitive access control policies, through diverse criteria such as, for example, subject- and action-specific attributes. In this way, different contextual conditions, such as user profile information, social relationship information, and so on, need to be added to the traditional spatial and temporal approaches to sustain these dynamic environments [81]. In the end, the corresponding policies should aim at defining the security and privacy requirements through a fog-based context-aware access control model that should be respected for distributed cloud and fog networks. A minimal illustrative sketch follows.
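A minimal sketch of the context-sensitive policy idea behind FB-CAAC [81]: contextual attributes are mapped to fuzzy membership degrees and combined before an access decision. The attribute set, membership functions, and threshold below are our hypothetical illustrations, not the model from [81]:

```python
# Hypothetical fuzzy context-aware access check, loosely inspired by the
# FB-CAAC idea in [81]; membership functions and the 0.6 threshold are ours.

def trust_from_location(km_from_office: float) -> float:
    """Fuzzy 'nearby' membership: 1.0 on site, fading to 0.0 at 50 km."""
    return max(0.0, 1.0 - km_from_office / 50.0)

def trust_from_time(hour: int) -> float:
    """Fuzzy 'working hours' membership, highest between 09:00 and 17:00."""
    return 1.0 if 9 <= hour <= 17 else 0.3

def trust_from_profile(role: str) -> float:
    return {"admin": 1.0, "clinician": 0.8, "guest": 0.2}.get(role, 0.0)

def access_granted(km: float, hour: int, role: str, threshold: float = 0.6) -> bool:
    # Combine memberships with a fuzzy AND (minimum t-norm).
    degree = min(trust_from_location(km), trust_from_time(hour), trust_from_profile(role))
    return degree >= threshold

print(access_granted(km=2.0, hour=10, role="clinician"))  # True
print(access_granted(km=40.0, hour=23, role="guest"))     # False
```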
5. Conclusion and Future Research Directions
This piece of literature allowed us to illustrate the impacts of AI on systems security, which influence our daily digital life, business decision making, e-commerce, diverse social and legal issues, and neural networks.
First, AI will potentially impact our digital and Internet lives in the future, as the major trend is the emergence of increasingly new malicious threats from the Internet environment; likewise, greater attention should be paid to cyber security. Accordingly, the progressively greater complexity of the business environment will demand, as well, more and more AI-based support systems for decision making that enable management to adapt in a faster and more accurate way, while requiring a unique digital e-manpower.
Second, with regard to e-commerce and manufacturing issues, principally amidst the world pandemic of COVID-19, activity tends to grow exponentially, as already observed, which demands subsequent progress with respect to cyber security measures and strategies. The same applies to the social applications of AI, which, following the increase in distance services, will also tend to adopt this model, applied to improved e-health, e-learning, and e-elderly monitoring systems.
Third, subsequently divisive issues are being brought to the academic arena, which demands progress in terms of a legal framework able to comprehend all the abovementioned issues, in order to assist political decisions and match the expectations of citizens.
Lastly, further progress in neural network platforms is inevitable, as they represent the cutting edge of AI in terms of human thinking imitation technology, the main goal of AI applications.
To summarize, we have presented useful insights with respect to the impact of AI on systems security, and illustrated its influence both on the delivery of services to people, in particular in the security domains of their daily matters and health/education, and on the business sector, through systems capable of supporting decision making. In addition, we enhanced the state of the art in terms of AI innovations applied to varying fields.
Future Research Issues
Due to the aforementioned scenario, we also suggest further research avenues to reinforce existing theories and develop new ones, in particular on the deployment of AI technologies in small and medium enterprises (SMEs), of sparse resources and from traditional sectors, which constitute the core of intermediate economies and of less developed and peripheral regions. In addition, the building of CAAC solutions constitutes a promising field for controlling data resources in the cloud and throughout changing contextual conditions.
Acknowledgments
We would like to express our gratitude to the Editor and the Referees, who offered extremely valuable suggestions for improvement. The authors were supported by the GOVCOPP Research Unit of Universidade de Aveiro and ISEC Lisboa, Higher Institute of Education and Sciences.
Table A1. Overview of document citations, period ≤2010 to 2021.
Table A2. Overview of document self-citations, period ≤2010 to 2020.
Author Contributions
Conceptualization, R.R. and A.R.; data curation, R.R. and A.R.; formal analysis, R.R. and A.R.; funding acquisition, R.R. and A.R.; investigation, R.R. and A.R.; methodology, R.R. and A.R.; project administration, R.R. and A.R.; software, R.R. and A.R.; validation, R.R. and A.R.; resources, R.R. and A.R.; writing (original draft preparation), R.R. and A.R.; writing (review and editing), R.R. and A.R.; visualization, R.R. and A.R.; supervision, R.R. and A.R. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
- Review Article
- Published: 26 March 2021
AI-Driven Cybersecurity: An Overview, Security Intelligence Modeling and Research Directions
- Iqbal H. Sarker (ORCID: 0000-0003-1740-5517) 1,2,
- Md Hasan Furhad 3 &
- Raza Nowrozy 4
SN Computer Science, volume 2, Article number: 173 (2021)
Artificial intelligence (AI) is one of the key technologies of the Fourth Industrial Revolution (or Industry 4.0), which can be used for the protection of Internet-connected systems from cyber threats, attacks, damage, or unauthorized access. To intelligently solve today's various cybersecurity issues, popular AI techniques involving machine learning and deep learning methods, the concept of natural language processing, knowledge representation and reasoning, as well as the concept of knowledge or rule-based expert systems modeling can be used. Based on these AI methods, in this paper, we present a comprehensive view on "AI-driven cybersecurity" that can play an important role in intelligent cybersecurity services and management. The security intelligence modeling based on such AI methods can make the cybersecurity computing process more automated and intelligent than conventional security systems. We also highlight several research directions within the scope of our study, which can help researchers conduct future research in the area. Overall, this paper's ultimate objective is to serve as a reference point and guideline for cybersecurity researchers as well as industry professionals in the area, especially from an intelligent computing or AI-based technical point of view.
Almiani M, AbuGhazleh A, Al-Rahayfeh A, Atiewi S, Razaque A. Deep recurrent neural network for iot intrusion detection system. Simul Model Pract Theory. 2019;101:102031.
Kolosnjaji B, Zarras A, Webster G, Eckert C. Deep learning for classification of malware system call sequences. In: Australasian joint conference on artificial intelligence. Springer; 2016. p. 137–149.
Wang W, Zhu M, Zeng X, Ye X, Sheng Y. Malware traffic classification using convolutional neural network for representation learning. In: 2017 international conference on information networking (ICOIN). IEEE; 2017. p. 712–717.
Hansen JV, Lowry PB, Meservy RD, McDonald DM. Genetic programming for prevention of cyberterrorism through dynamic and evolving intrusion detection. Decis Support Syst. 2007;43(4):1362–74.
Aslahi-Shahri BM, Rahmani R, Chizari M, Maralani A, Eslami M, Golkar MJ, Ebrahimi A. A hybrid method consisting of GA and SVM for intrusion detection system. Neural Comput Appl. 2016;27(6):1669–76.
Azad C, Jha VK. Genetic algorithm to solve the problem of small disjunct in the decision tree based intrusion detection system. Int J Comput Netw Inf Secur (IJCNIS). 2015;7(8):56.
Ariu D, Tronci R, Giacinto G. Hmmpayl: an intrusion detection system based on hidden Markov models. Comput Secur. 2011;30(4):221–41.
Årnes A, Valeur F, Vigna G, Kemmerer RA. Using hidden markov models to evaluate the risks of intrusions. In: International workshop on recent advances in intrusion detection. Springer; 2006. p. 145–164.
Alauthman M, Aslam N, Al-kasassbeh M, Khan S, Al-Qerem A, Choo K-KR. An efficient reinforcement learning-based botnet detection approach. J Netw Comput Appl. 2020;150:102479.
Blanco R, Cilla JJ, Briongos S, Malagón P, Moya JM. Applying cost-sensitive classifiers with reinforcement learning to ids. In: International conference on intelligent data engineering and automated learning. Springer; 2018. p. 531–538.
Lopez-Martin M, Carro B, Sanchez-Esguevillas A. Application of deep reinforcement learning to intrusion detection for supervised problems. Expert Syst Appl. 2020;141:112963.
Sarker IH. Machine learning: Algorithms, real-world applications and research directions. Preprints. 2021; 2021030216:1–23.
Sarker IH, Kayes ASM, Watters P. Effectiveness analysis of machine learning classification models for predicting personalized context-aware smartphone usage. J Big Data. 2019;6(1):1–28.
John GH, Langley P. Estimating continuous distributions in Bayesian classifiers. In: Proceedings of the eleventh conference on uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc.; 1995. p. 338–345.
Quinlan JR. C4.5: Programs for machine learning. Mach Learn. 2014.
Sarker IH, Colman A, Han J, Khan AI, Abushark YB, Salah K. Behavdt: a behavioral decision tree learning to build user-centric context-aware predictive model. Mob Netw Appl. 2020;25:1151–1161.
Aha DW, Kibler D, Albert MK. Instance-based learning algorithms. Mach Learn. 1991;6(1):37–66.
Keerthi SS, Shevade SK, Bhattacharyya C, Krishna Murthy KR. Improvements to platt’s smo algorithm for svm classifier design. Neural Comput. 2001;13(3):637–49.
Freund Y, Schapire RE, et al. Experiments with a new boosting algorithm. In: Icml, vol. 96. Citeseer; 1996. p. 148–156.
Le Cessie S, Van Houwelingen JC. Ridge estimators in logistic regression. J R Stat Soc Ser C (Appl Stat). 1992;41(1):191–201.
Han J, Pei J, Kamber M. Data mining: concepts and techniques. 2011.
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, et al. Scikit-learn: machine learning in python. J Mach Learn Res. 2011;12:2825–30.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
MacQueen J. Some methods for classification and analysis of multivariate observations. In: Fifth Berkeley symposium on mathematical statistics and probability, vol. 1. 1967.
Rokach L. A survey of clustering algorithms. In: Data mining and knowledge discovery handbook. Springer; 2010. p. 269–298.
Kaufman L, Rousseeuw PJ. Finding groups in data: an introduction to cluster analysis, vol. 344. New York: Wiley; 2009.
Ester M, Kriegel H-P, Sander J, Xiaowei X, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. Kdd. 1996;96:226–31.
Sneath PHA. The application of computers to taxonomy. J Gen Microbiol. 1957;17(1):201–26.
Sorensen T. Method of establishing groups of equal amplitude in plant sociology based on similarity of species. Biol Skr. 1948;5:1–34.
Sarker IH, Colman A, Kabir MA, Han J. Individualized time-series segmentation for mining mobile phone user behavior. Comput J. 2018;61(3):349–68.
Agrawal R, Imieliński T, Swami A. Mining association rules between sets of items in large databases. In: ACM SIGMOD Record, vol. 22. ACM; 1993. p. 207–216.
Agrawal R, Srikant R, et al. Fast algorithms for mining association rules. In: Proceedings of 20th international conference very large data bases, VLDB, vol. 1215. 1994. p. 487–499.
Han J, Pei J, Yin Y. Mining frequent patterns without candidate generation. In: ACM Sigmod Record, vol. 29. ACM; 2000. p. 1–12.
Das A, Ng W-K, Woon Y-K. Rapid association rule mining. In: Proceedings of the tenth international conference on Information and knowledge management. ACM; 2001. p. 474–481.
Zaki MJ. Scalable algorithms for association mining. IEEE Trans Knowl Data Eng. 2000;12(3):372–90.
Sarker IH, Kayes ASM. Abc-ruleminer: user behavioral rule-based machine learning method for context-aware intelligent services. J Netw Comput Appl. 2020;168:102762.
Sarker IH, Abushark YB, Khan AI. Contextpca: predicting context-aware smartphone apps usage based on machine learning techniques. Symmetry. 2020;12(4):499.
Van Efferen L, Ali-Eldin AMT. A multi-layer perceptron approach for flow-based anomaly detection. In: 2017 international symposium on networks, computers and communications (ISNCC). IEEE; 2017. p. 1–6.
Liu H, Lang B, Liu M, Yan H. Cnn and rnn based payload classification methods for attack detection. Knowl Based Syst. 2019;163:332–41.
Khan FA, Gumaei A, Derhab A, Hussain A. A novel two-stage deep learning model for efficient network intrusion detection. IEEE Access. 2019;7:30373–85.
Kaelbling LP, Littman ML, Moore AW. Reinforcement learning: a survey. J Artif Intell Res. 1996;4:237–85.
Sarker IH. Deep cybersecurity: A comprehensive overview from neural network and deep learning perspective. Preprints. 2021; 2021020340:1–18.
Sarker IH, Hoque MM, Uddin K et al. Mobile data science and intelligent apps: concepts, ai-based modeling and research directions. Mob Netw Appl. 2020;1–19.
Kidmose E, Stevanovic M, Pedersen JM. Detection of malicious domains through lexical analysis. In: 2018 international conference on cyber security and protection of digital services (cyber security). IEEE; 2018. p. 1–5.
Perera I, Hwang J, Bayas K, Dorr B, Wilks Y. Cyberattack prediction through public text analysis and mini-theories. In: 2018 IEEE international conference on big data (big data). IEEE; 2018. p. 3001–3010.
L’Huillier G, Hevia A, Weber R, Rios S. Latent semantic analysis and keyword extraction for phishing classification. In: 2010 IEEE international conference on intelligence and security informatics. IEEE; 2010. p. 129–131.
Georgescu T-M, Iancu B, Zurini M. Named-entity-recognition-based automated system for diagnosing cybersecurity situations in iot networks. Sensors. 2019;19(15):3380.
Sun S, Luo C, Chen J. A review of natural language processing techniques for opinion mining systems. Inf Fusion. 2017;36:10–25.
Mokhov SA, Paquet J, Debbabi M. The use of nlp techniques in static code analysis to detect weaknesses and vulnerabilities. In: Canadian conference on artificial intelligence. Springer; 2014. p. 326–332.
Egozi G, Verma R. Phishing email detection using robust nlp techniques. In: 2018 IEEE international conference on data mining workshops (ICDMW). IEEE; 2018. p. 7–12.
Karbab EB, Debbabi M. Maldy: portable, data-driven malware detection using natural language processing and machine learning techniques on behavioral analysis reports. Digit Investig. 2019;28:S77–87.
Stephan G, Pascal H, Andreas A. Knowledge representation and ontologies. Semantic web services: concepts, technologies, and applications. 2007. p. 51–105.
Maedche A, Staab S. Ontology learning for the semantic web. IEEE Intell Syst. 2001;16(2):72–9.
Pereira T, Santos H. An ontology based approach to information security. In: Research conference on metadata and semantic research. Springer; 2009. p. 183–192.
McGuinness DL, Van Harmelen F, et al. Owl web ontology language overview. W3C Recomm. 2004;10(10):2004.
Witten IH, Frank E. Data mining: practical machine learning tools and techniques. Burlington: Morgan Kaufmann; 2005.
Witten IH, Frank E, Trigg LE, Hall MA, Holmes G, Cunningham SJ. Weka: practical machine learning tools and techniques with java implementations. 1999.
Zadeh LA. Fuzzy logic—a personal perspective. Fuzzy Sets Syst. 2015;281:4–20.
Sarker IH. A machine learning based robust prediction model for real-life mobile phone data. Internet Things. 2019;5:180–93.
Sarker IH. Context-aware rule learning from smartphone data: survey, challenges and future directions. J Big Data. 2019;6(1):95.
Sarker IH, Colman A, Han J. Recencyminer: mining recency-based personalized behavior from contextual smartphone data. J Big Data. 2019;6(1):49.
Download references
Author information
Authors and Affiliations
Swinburne University of Technology, Melbourne, VIC, 3122, Australia
Iqbal H. Sarker
Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chittagong, 4349, Bangladesh
Centre for Cyber Security and Games, Canberra Institute of Technology, Reid, ACT, 2601, Australia
Md Hasan Furhad
Victoria University, Footscray, VIC, 3011, Australia
Raza Nowrozy
Contributions
The authors present a comprehensive view of “AI-driven Cybersecurity” and the important role it can play in intelligent cybersecurity services and management [IHS—conceptualization, research design, and preparation of the original manuscript]. All the authors read and approved the final manuscript.
Corresponding author
Correspondence to Iqbal H. Sarker .
Ethics declarations
Conflict of interest.
The authors declare no conflict of interest.
Additional information
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the topical collection “Cyber Security and Privacy in Communication Networks” guest edited by Rajiv Misra, R K Shyamsunder, Alexiei Dingli, Natalie Denk, Omer Rana, Alexander Pfeiffer, Ashok Patel and Nishtha Kesswani.
About this article
Cite this article.
Sarker, I.H., Furhad, M.H. & Nowrozy, R. AI-Driven Cybersecurity: An Overview, Security Intelligence Modeling and Research Directions. SN COMPUT. SCI. 2, 173 (2021). https://doi.org/10.1007/s42979-021-00557-0
Received : 22 November 2020
Accepted : 02 March 2021
Published : 26 March 2021
DOI : https://doi.org/10.1007/s42979-021-00557-0
- Cybersecurity
- Artificial intelligence
- Machine learning
- Cyber data analytics
- Cyber-attacks
- Intrusion detection
- Security intelligence
Title: Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: A Focus Group Report
Abstract: The emergence of AI tools in cybersecurity creates many opportunities and uncertainties. A focus group with advanced graduate students in cybersecurity revealed the potential depth and breadth of the challenges and opportunities. The salient issues are access to open-source or free tools, documentation, curricular diversity, and clear articulation of ethical principles for AI cybersecurity education. Confronting the "black box" mentality in AI cybersecurity work is also of the greatest importance, coupled with deeper, earlier education in foundational AI work. Systems thinking and effective communication were considered relevant areas of educational improvement. Future AI educators and practitioners need to address these issues by implementing rigorous technical training curricula, clear documentation, and frameworks for ethically monitoring AI, combined with critical and systems thinking and communication skills.
Advancing AI Systems in Cybersecurity, Counterterrorism, and International Security
Day-long conference highlighted research projects in the Northwestern Security and AI Lab
Artificial intelligence (AI) models trained on unclassified, open-source data can predict terrorist attacks, aid in destabilizing terrorist networks, protect against intellectual property theft, and predict, detect, and mitigate cyber-attacks in real time.
The Northwestern Security and AI Lab (NSAIL) team is one of the leaders of a growing multidisciplinary community developing and deploying AI technologies to address these global threats and protect against malicious actors around the world.
Led by V.S. Subrahmanian , Walter P. Murphy Professor of Computer Science at Northwestern Engineering and a faculty fellow at the Northwestern Roberta Buffett Institute for Global Affairs , NSAIL is conducting fundamental research in AI relevant to issues of cybersecurity, counterterrorism, and international security.
On October 12, the Buffett Institute and the McCormick School of Engineering hosted the “Conference on AI and National Security” to showcase NSAIL’s work. Building on work featured during the inaugural conference last year, the event included research demonstrations, presentations, and panel discussions with leading experts in AI, cybersecurity, and national security. Approximately 260 people participated, either in person or online.

B.HACK: Predicting targeted attacks
A foundational goal for NSAIL is to develop predictive models that can analyze specific terrorist group activity and forecast future attacks.
“It's really, really important to study your opponent,” Subrahmanian said. “You cannot mount a good defense against any kind of attack — regardless of whether it's a terrorist attack, a war, or a cyber attack — unless you understand your adversary.”
Building on NSAIL’s Northwestern Terror Early Warning System (NTEWS) project — a machine learning framework that generates forecasts about future attacks and predicts terrorist behaviors — Subrahmanian and computer science PhD student Lirika Sola launched the Boko Haram Analytics Against Child Kidnapping (B.HACK) project to provide more granular risk assessments of Nigerian schools targeted by Boko Haram.

One of the most dangerous terrorist groups in the world, Boko Haram has abducted thousands of children from Nigerian schools and displaced millions of people from their homes since its emergence in 2002, according to BBC News.
Built on a specialized dataset that draws on resources including the Armed Conflict Location & Event Data Project, B.HACK is an AI-enabled system that assigns a Boko Haram kidnapping risk score to every school in Nigeria. The platform enables a user to examine schools within a selected region of interest and review the probability of each school being attacked, based on factors including the distances to the five nearest security installations and the number of prior Boko Haram attacks within a specific radius (zero to 50 miles) of each school.
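To make that feature construction concrete, here is a minimal sketch of how such per-school risk features might be assembled and scored. Every name, coordinate, and the logistic-regression choice below is an illustrative assumption for this article, not NSAIL's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def school_features(school, installations, past_attacks, radius_km=80.0):
    """Feature vector for one school: distances to the five nearest
    security installations plus the count of prior attacks within
    radius_km (80 km is roughly the 50-mile band mentioned above)."""
    d_inst = sorted(haversine_km(*school, lat, lon) for lat, lon in installations)[:5]
    n_prior = sum(haversine_km(*school, lat, lon) <= radius_km
                  for lat, lon in past_attacks)
    return d_inst + [n_prior]

# Toy coordinates and labels, invented purely for illustration.
schools = [(11.8, 13.2), (10.3, 9.8), (12.1, 14.0), (9.1, 7.4)]
installations = [(11.9, 13.1), (10.0, 10.0), (12.5, 13.5), (9.0, 7.5), (11.0, 12.0)]
past_attacks = [(11.7, 13.3), (12.0, 13.9), (11.9, 13.0)]
labels = [1, 0, 1, 0]  # 1 = school previously attacked

X = np.array([school_features(s, installations, past_attacks) for s in schools])
model = LogisticRegression().fit(X, labels)
print(model.predict_proba(X)[:, 1])  # per-school attack probabilities
```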
Subrahmanian noted that the B.HACK platform’s risk analysis techniques can apply to predictions of different types of targeted attacks.
“We have a methodology where we can predict other related phenomena, like which security installations will be targeted or which tourist or transportation sites might be targeted,” Subrahmanian said. “This is the first of a long series of spatial predictions we hope to be able to make in the coming years.”
PCORE: Forecasting malicious activity caused by climate change
As climate change dramatically alters the locations of water and vegetation sites in Africa, pastoralists are forced to adapt their movement patterns to sustain their herds. The competition over resources and disputes over land rights are increasing the number of violent conflicts as herders encroach on subsistence farmland or the traditional territory of other herders.
In joint work with the United Nations Department of Political and Peacebuilding Affairs , NSAIL’s Pastoral Conflict Reasoning Engine (PCORE) project generates a map which identifies the locations within five countries — Burundi, Cameroon, Central African Republic, Chad, and the Democratic Republic of the Congo — at risk of pastoral conflicts.
After dividing each of the five countries into cells corresponding to specific regions, the team gathered data on the history of conflict within each cell, along with weather data and ground data on variables including terrain, land use, and roads.
The machine learning models developed by the PCORE team apply several different algorithms to predict whether conflict will occur. Subrahmanian presented an example of a risk assessment in the Central African Republic using a decision-tree algorithm.
“If a cell had over 5.5 conflicts in the past, its relative humidity at two meters is less than or equal to 5.9 percent, and the surface soil wetness is less than or equal to 27.4 percent, then there is a 100 percent probability that such a cell will experience conflict,” Subrahmanian said. “Ten cells validate these rules, and in every one of those 10 cases, there was a conflict.”
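Read as code, the quoted rule is a single path through a decision tree. The sketch below encodes it for one grid cell; the dictionary keys are hypothetical names for PCORE's per-cell attributes.

```python
def pcore_rule(cell):
    """One decision-tree path from the quoted example; the dictionary
    keys are hypothetical names for PCORE's per-cell attributes."""
    return (cell["past_conflicts"] > 5.5
            and cell["rel_humidity_2m_pct"] <= 5.9
            and cell["surface_soil_wetness_pct"] <= 27.4)

cell = {"past_conflicts": 7,
        "rel_humidity_2m_pct": 4.2,
        "surface_soil_wetness_pct": 20.0}
print(pcore_rule(cell))  # True -> this path predicts conflict for the cell
```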
Imposing costs on hackers
NSAIL also focuses on issues concerning information, cyber, and technology security, including managing vulnerabilities in an enterprise, managing cyber alerts, and preventing intellectual property (IP) theft.
Cybersecurity project teams are addressing malware, which causes significant harm to individuals and enterprises by stealing sensitive data, disrupting business operations, damaging systems, and exposing confidential information.
“We always want to put ourselves in the shoes of the bad guy and say, ‘if I build a system, how would the bad guy attack it?’” Subrahmanian said. “We put ourselves in the shoes of our adversaries to try to get in front of it and craft defenses.”

Subrahmanian is a coauthor of a new book — with Qian Han and Sai Deep Tetali (Meta), Salvador Mandujano and Sebastian Porst (Google), and Yanhai Xiong (William & Mary) — called The Android Malware Handbook: Detection and Analysis by Human and Machine (No Starch Press Inc., 2023) that introduces the Android threat landscape and presents practical guidance to detect and analyze malware.
The team found that features related to app permissions are strong indicators of whether an app is malware. They also proposed novel features based on an analysis of app behavior.
“Watch those permissions,” Subrahmanian said. “Always check and see whether the app's permissions are consistent with how you intend to use the app and what the app does.”
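As a rough illustration of permission-based features (a sketch on assumed toy data, not the book's actual pipeline), one can one-hot encode the permissions an app requests and train a standard classifier on labeled examples. The permission names are real Android permissions; everything else is invented.

```python
from sklearn.ensemble import RandomForestClassifier

# Real Android permission names; the apps and labels are toy data.
PERMISSIONS = ["READ_SMS", "SEND_SMS", "READ_CONTACTS",
               "INTERNET", "ACCESS_FINE_LOCATION"]

def permission_vector(requested):
    """Binary feature vector: 1 if the app requests that permission."""
    return [int(p in requested) for p in PERMISSIONS]

apps = [{"READ_SMS", "SEND_SMS", "INTERNET"},       # SMS-stealing pattern
        {"INTERNET"},                               # benign-looking
        {"READ_CONTACTS", "SEND_SMS", "INTERNET"},  # exfiltration pattern
        {"INTERNET", "ACCESS_FINE_LOCATION"}]       # benign-looking
labels = [1, 0, 1, 0]  # 1 = malware

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([permission_vector(a) for a in apps], labels)
print(clf.predict([permission_vector({"SEND_SMS", "INTERNET"})]))
```

Real detectors use far richer feature sets, but the encode-then-classify structure is the common core.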

Predictive analysis of NATO Locked Shields exercises
In another effort to protect computer systems from real-time attacks, an NSAIL team conducted joint work with the Netherlands Defence Academy and Delft University of Technology to detect suspicious network sessions in the NATO Locked Shields exercise.
An annual cyber defense exercise organized by the NATO Cooperative Cyber Defence Centre of Excellence, Locked Shields is a forum where thousands of cybersecurity experts from more than 30 countries enhance their skills in defending national IT systems and critical infrastructure against malicious attacks.

“It is not only important to correctly predict which sessions are malicious, but also to make these predictions as early as possible,” said Chiara Pulice , an NSAIL senior research associate who has played an important role in this effort.
“Novel predictive models produced by the research team have the potential to transform how organizations monitor suspicious activity on their enterprise networks,” said research collaborator Roy Lindelauf, head of the Data Science Center of Excellence at the Netherlands Ministry of Defence.
Deepfakes and International Conflict
While malicious actors increasingly use deepfake technology to do harm, NSAIL researchers are at the leading edge of developing AI systems that generate realistic deepfake videos to sow dissension within terror groups.
Daniel W. Linna Jr. moderated the concluding panel of the Conference on AI and National Security, which focused on the intersection of international conflict and deepfake technology.
Linna is a senior lecturer and director of law and technology initiatives at Northwestern, who has a joint appointment at Northwestern Engineering and the Northwestern University Pritzker School of Law. Panelists included Larry Birnbaum , professor of computer science at Northwestern Engineering, Dan Byman (Georgetown University), and retired Lieutenant General John N.T. Shanahan (former director, US Department of Defense Joint Artificial Intelligence Center).

The group discussed a range of issues, including how deepfakes are being used to support undemocratic geopolitical events and to compromise elections; the ethical implications of government agencies employing deepfakes; how deepfakes fit within the suite of military information operations; and the state, federal, and international regulation of these and other AI tools.
Birnbaum noted that deepfake technology is readily available and will continue to improve in quality.
“Some of this technology is under the control of organizations that are reasonably reliable,” Birnbaum said. “But that's not going to last very long. We have to assume that this will be relatively quickly a ubiquitous and easy-to-use technology.”
Birnbaum also explained that the automation of deepfake technology will allow for increased personalization.
“Misinformation and disinformation are already flooding the zone. And deepfakes will add to that pain,” Birnbaum said. “Marketers do this all the time — they segment markets, and they tailor messages to particular markets. When you automate, you can make that market smaller and smaller — eventually, maybe a market of one. When the disinformation or misinformation can be targeted to very small units, it can potentially be made much more appealing and more believable, because it matches so precisely what a particular person might want to think.”
Safety and security risks of generative artificial intelligence to 2025 (Annex B)
Published 25 October 2023

© Crown copyright 2023
This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: [email protected] .
Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.
This publication is available at https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/safety-and-security-risks-of-generative-artificial-intelligence-to-2025-annex-b
Generative AI development has the potential to bring significant global benefits. But it will also increase risks to safety and security by enhancing threat actor capabilities and increasing the effectiveness of attacks.
- The development and adoption of generative AI technologies has the potential to bring substantial benefits if managed appropriately. Productivity and innovation across many sectors including healthcare, finance and information technology will accelerate.
- Generative AI will also significantly increase risks to safety and security. By 2025, generative AI is more likely to amplify existing risks than create wholly new ones, but it will increase sharply the speed and scale of some threats. The difficulty of predicting technological advances creates significant potential for technological surprise; additional threats will almost certainly emerge that have not been anticipated.
- The rapid proliferation and increasing accessibility of these technologies will almost certainly enable less-sophisticated threat actors to conduct previously unattainable attacks.
- Risks in the digital sphere (e.g. cyber-attacks, fraud, scams, impersonation, child sexual abuse images) are most likely to manifest and to have the highest impact to 2025.
- Risks to political systems and societies will increase in likelihood as the technology develops and adoption widens. Proliferation of synthetic media risks eroding democratic engagement and public trust in the institutions of government.
- Physical security risks will likely rise as generative AI becomes embedded in more physical systems, including critical infrastructure.
- The aggregate risk is significant. The preparedness of countries, industries and society to mitigate these risks varies. Globally, regulation is incomplete and highly likely failing to anticipate future developments.
Our definitions and scope
Safety and Security: The protection, wellbeing and autonomy of civil society and the population.
Artificial Intelligence (AI): Machine-driven capability to achieve a goal by performing cognitive tasks.
Frontier AI: Highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.
Generative AI (GenAI): AI systems that can create new content. The most popular models generate text and images from text prompts, but some use other inputs such as images to create audio, video and images.
Large language model (LLM): Models trained on large volumes of text-based data, typically from the internet.
Risk: A situation involving exposure to detrimental impacts.
Threat: A malicious risk involving an actor with intent.
This assessment does not consider military risks relating to Generative AI .
This assessment draws on a broad range of sources including existing and novel research, intelligence assessments, expert insights and open source.
1. The development and application of generative AI intersects with many other technologies. Its development and use will have broad impacts - positive and negative - internationally. The rapid pace of technological progress, lack of consensus on how to measure and compare performance of AI models, and the broad capabilities of the technology means that the safety and security implications are challenging to assess. We have therefore limited our analysis to the key risks and imposed a limited time horizon to 2025. We exclude consideration of the risks resulting from military applications of generative AI .
2. The perceived advantages from first-mover status and widespread media attention have accelerated global interest in generative AI . Since 2020, progress in generative AI has greatly outpaced expert expectations, with models outperforming humans in a small number of specific tasks. Progress continues to be rapid and to 2025, it is unlikely that the pace of technological development will slow. Higher performing, larger LLMs will almost certainly be released, but it is unclear how far this will translate into significantly improved practical applications by 2025. Global regulation is incomplete, falling behind current technical advances and highly likely failing to anticipate future developments.
See figure 1 in an accessible format.
The Generative AI Ecosystem
3. Private sector AI firms will remain key actors in cutting-edge generative AI research and frontier models to 2025. The researchers, funding, hardware, compute and data will continue to be concentrated in these commercial organisations, enabling them to undertake the most advanced developments.
4. Open-source generative AI is facilitating rapid proliferation and increasing democratisation of generative AI by reducing the barriers to entry for developing models. To date, their performance has mostly lagged behind that of the frontier models; open-source models will almost certainly improve, but they are highly unlikely to be more capable than leading commercial frontier models by 2025. The proliferation of open-source models increases accessibility and therefore brings global safety and security implications, especially for models which have the potential to allow malicious use through lack of effective safeguards.
See figure 2 in an accessible format.
See figure 3 in an accessible format.
Threat actors
5. The increasing performance, availability and accessibility of generative AI tools allows potentially anyone to pose a threat through malicious use, misuse or mishap. Generative AI will almost certainly continue to lower the barriers to entry for less sophisticated threat actors seeking to conduct previously unattainable attacks. As well as organised groups, political activists and lone actors will likely use generative AI for ideological, political and personal purposes.
6. Criminals are highly likely to adopt generative AI technology at the same rate and pace as the general population, but some innovative groups and individuals will be early adopters. Use of the technology by criminals will highly likely accelerate the frequency and sophistication of scams, fraud, impersonation, ransomware, currency theft, data harvesting, child sexual abuse images and voice cloning. But to 2025, criminals will be less likely to successfully exploit generative AI to create novel malware.
7. To 2025, generative AI has the potential to enhance terrorist capabilities in propaganda, radicalisation, recruitment, funding streams, weapons development and attack planning. But dependence on physical supply chains will almost certainly remain an impediment to the use of generative AI for sophisticated physical attacks.
Safety and security risks
8. Over the next 18 months, generative AI is more likely to amplify existing risks than create new ones. But it will increase sharply the speed and scale of some threats, and introduce some vulnerabilities. The risks fall into at least three overlapping domains:
Digital risks are assessed to be the most likely and have the highest impact to 2025. Threats include cybercrime and hacking. Generative AI will also improve digital defences to these threats.
Risks to political systems and societies will increase in likelihood to 2025, becoming as significant as digital risks as generative AI develops and adoption widens. Threats include manipulation and deception of populations.
Physical risks will likely rise as generative AI becomes embedded into more physical systems, including critical infrastructure and the built environment. If implemented without adequate safety and security controls, AI may introduce new risks of failure and vulnerabilities to attack.
9. These risks will not occur in isolation; they are likely to compound and influence other risks. There will also almost certainly be unanticipated risks, including risks that result from lack of predictability of AI systems.
- Cyber-attacks: Generative AI can be used to create faster-paced, more effective and larger-scale cyber intrusions via tailored phishing methods or by replicating malware. But experiments in vulnerability discovery and evading detection are significantly less mature at this stage. We assess that generative AI is unlikely to fully automate computer hacking by 2025.
- Increased digital vulnerabilities: Generative AI integration into critical functions and infrastructure presents a new attack surface through corrupting training data (‘data poisoning’), hijacking model output (‘prompt injection’), extracting sensitive training data (‘model inversion’), misclassifying information (‘perturbation’) and targeting computing power. A minimal sketch of data poisoning follows this list.
- Erosion of trust in information: Generative AI could lead to a pollution of the public information ecosystem with hyper-realistic bots and synthetic media (‘deepfakes’) influencing societal debate and reflecting pre-existing social biases. This risk includes creating fake news, personalised disinformation, manipulating financial markets and undermining the criminal justice system. By 2026 synthetic media could comprise a large proportion of online content, and risks eroding public trust in government, while increasing polarisation and extremism. Authentication solutions (e.g. ‘watermarking’) are under development but are currently unreliable, requiring updates as generative AI evolves.
- Political and societal influence: Generative AI tools have already been shown capable of persuading humans on political issues and can be used to increase the scale, persuasiveness and frequency of disinformation and misinformation. More generally, generative AI can generate hyper-targeted content with unprecedented scale and sophistication.
- Insecure use and misuse: Generative AI integration into critical systems and infrastructure risks data leaks, biased and discriminatory systems or compromised human decision-making through poor information security and opaque algorithmic processes (e.g. ‘hallucinations’). Inappropriate use by any large-scale organisation could have unintended consequences and result in cascading failures. Generative AI integration into critical functions may also result in over-reliance on supply chains that are opaque, potentially fragile and controlled by a small number of firms.
- Weapon instruction: Generative AI can be used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons. Leading generative AI firms are building safeguards against dangerous outputs, but the effectiveness of these safeguards varies. Other barriers to entry will persist (e.g. acquiring components, manufacturing equipment, tacit knowledge), but these barriers have been falling and generative AI could accelerate this trend.
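To make the data-poisoning vulnerability above concrete, the toy sketch below flips a growing fraction of training labels for a simple classifier and measures the resulting accuracy drop. It is a minimal illustration on synthetic data, not drawn from this assessment and not an attack on any deployed system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip a fraction of training labels, retrain, report test accuracy."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of labels poisoned -> test accuracy {accuracy_after_poisoning(frac):.2f}")
```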

Conclusions
Generative AI has the potential to bring substantial benefits if managed appropriately, accelerating productivity and innovation across many sectors including healthcare, finance and information technology. But there is a risk that inadequate understanding of the technology, and the disproportionate public anxiety that could result, leads to a failure to adopt generative AI and puts some benefits out of reach.
Generative AI will also almost certainly act as a force multiplier for safety and security risks by proliferating and enhancing threat actor capabilities and increasing the speed, scale and sophistication of attacks. The aggregate risk is significant.
Governments will highly likely not have full insight into private sector progress, limiting their ability to mitigate all of the safety and security risks. Additionally, monitoring adoption of AI - based technologies by the broad range of potential threat actors will prove challenging. There is significant potential for technological surprise; there will almost certainly be unanticipated risks.
The race to develop the best-performing generative AI models will almost certainly intensify: experts disagree on whether generative AI is a stepping stone to progress in Artificial General Intelligence. But it will unlock progress in a broad range of domains. To 2025, there is a realistic possibility that generative AI will accelerate development of some of the technologies it converges with, including quantum computing, novel materials, telecommunications and biotechnologies. But increases in risk as a result of these convergences will likely be felt beyond 2025.
See figure 4 in an accessible format.
Descriptions of figures
This section contains descriptions of all figures in an accessible format.
Number of months for apps to reach 100 million monthly users
Description: A bar chart illustrating the number of months taken for software applications to reach 100 million monthly users. The graph shows that ChatGPT reached 100 million users faster than any application other than Meta’s Threads.
Return to figure 1.
Affiliation of research teams building notable AI systems
Description: A stacked bar chart depicting the affiliation of research teams building notable AI systems, split between academia, industry and collaboration between the two. Prior to 2022, academia played a large role in AI development and between 2002 and 2014 built the majority of the notable systems. Since then, academia’s share has reduced significantly, with industry and collaborative teams increasing the share of notable systems they have built. In 2022, industry built the majority of systems, and only a small number were built as collaborations between industry and academia.
Return to figure 2.
Computation used to train artificial intelligence systems
Description: A scatter plot illustrating the computational power used to train artificial intelligence systems between 2002 and 2022. It shows that since 2010 the amount of compute required has increased rapidly, from approximately 0.1 petaFLOP to over 10 billion petaFLOP.
Return to figure 3.
Probability yardstick
Description: Diagram showing the probability yardstick:
- 0-5%: Remote chance
- 10-20%: Highly unlikely
- 25-35%: Unlikely
- 40-50%: Realistic possibility
- 55-75%: Likely or probable
- 80-90%: Highly likely
- 95-100%: Almost certain
Return to figure 4.
Intell-Dragonfly: A Cybersecurity Attack Surface Generation Engine Based on Artificial Intelligence-Generated Content Technology
1 Nov 2023 · Xingchen Wu, Qin Qiu, Jiaqi Li, Yang Zhao
With the rapid development of the Internet, cyber security issues have become increasingly prominent. Traditional cyber security defense methods are limited in the face of ever-changing threats, so it is critical to seek innovative attack surface generation methods. This study proposes Intell-dragonfly, a cyber security attack surface generation engine based on artificial intelligence generation technology, to meet the challenges of cyber security. Based on ChatGPT technology, this paper designs an automated attack surface generation process that can generate diversified and personalized attack scenarios, targets, elements and schemes. Through experiments in a real network environment, the effect of the engine is verified and compared with traditional methods, improving the authenticity and applicability of the attack surface. The experimental results show that the ChatGPT-based method has significant advantages in the accuracy, diversity and operability of attack surface generation. Furthermore, we explore the strengths and limitations of the engine and discuss its potential applications in the field of cyber security. This research provides a novel approach to the field of cyber security that is expected to have a positive impact on the defense against and prevention of cyber threats.
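The abstract does not include the implementation, but a single generation step in a ChatGPT-based engine of this kind might look roughly like the sketch below. The prompt wording, model name, and output schema are assumptions for illustration; only the openai client calls reflect the real library interface.

```python
from openai import OpenAI  # official openai Python package (v1+ interface)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_attack_surface(system_description: str) -> str:
    """Ask the model for structured, hypothetical attack scenarios.
    The prompt wording and model choice are illustrative assumptions."""
    prompt = (
        "You are assisting a defensive security assessment. For the system "
        "described below, list plausible attack surfaces as JSON objects "
        "with the keys 'scenario', 'target', 'elements' and 'scheme'.\n\n"
        + system_description
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # parsing/validation left to caller

print(generate_attack_surface("A web shop with a public REST API and an admin portal."))
```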
Today’s security teams face many challenges—sophisticated cyberattackers, an expanding attack surface , an explosion of data and growing infrastructure complexity—that hinder their ability to safeguard data , manage user access , and quickly detect and respond to security threats.
IBM Security® provides transformative, AI-powered solutions that optimize analysts’ time—by accelerating threat detection, expediting responses, and protecting user identity and datasets—while keeping cybersecurity teams in the loop and in charge.
Learn how leaders succeed by uniting technology and talent.
AI solutions can identify shadow data, monitor for abnormalities in data access and alert cybersecurity professionals about potential threats by anyone accessing the data or sensitive information—saving valuable time in detecting and remediating issues in real time.
AI-powered risk analysis can produce incident summaries for high-fidelity alerts and automate incident responses, accelerating alert investigations and triage by an average of 55%. 1 The AI technology also helps identify vulnerabilities and defend against cybercriminals and cyber crime.
AI models can help balance security with user experience by analyzing the risk of each login attempt and verifying users through behavioral data, simplifying access for verified users and reducing the cost of fraud by up to 90%. 2 AI systems also help prevent phishing, malware and other malicious activities, supporting a strong security posture.
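As a toy illustration of risk-based login analysis (invented for this article, not IBM's actual product logic), a system might combine behavioral signals into a score and step up authentication only above a threshold. Every feature name, weight, and threshold below is an assumption.

```python
def login_risk(attempt: dict) -> float:
    """Combine simple behavioral signals into a 0-1 risk score.
    All feature names, weights and the threshold are invented."""
    return (0.4 * attempt["new_device"]
            + 0.3 * attempt["unusual_location"]
            + 0.2 * attempt["odd_hour"]
            + 0.1 * attempt["typing_anomaly"])

attempt = {"new_device": 1, "unusual_location": 0,
           "odd_hour": 1, "typing_anomaly": 0}

if login_risk(attempt) >= 0.5:
    print("step-up: require a second authentication factor")
else:
    print("low risk: allow sign-in")
```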
Schedule time to talk with an IBM representative about your organization's unique cybersecurity needs and discuss how AI-powered solutions can help.
1. Global Security Operations Center Study Results , administered by Morning Consult and commissioned by IBM, March 2023. Based on responses from 1,000 surveyed security operation center professionals from 10 countries.
2. Charles, B. S. (2016, October 16). Forrester Study Highlights a Company’s 90 Percent Reduction in Fraud Costs Using IBM Trusteer Solutions .
Best Cybersecurity Research Paper Revealed

Sarah Coble
News Writer
The National Security Agency has announced the winning entry to its ninth annual Best Cybersecurity Research Paper Competition.
The winning paper was written by Yanyi Liu from Cornell University and Rafael Pass, professor of Computer Science at Cornell Tech. It expounded a theorem that relates the existence of one-way functions (OWFs) to a measure of the complexity of a string of text.
“OWFs are vital components of modern symmetric encryptions, digital signatures, authentic schemes and more,” said an NSA spokesperson.
“Until now, it has been assumed that OWFs exist, even though research shows that they are both necessary and sufficient for much of the security provided by cryptography.”
Titled On One-way Functions and Kolmogorov Complexity , the winning paper was published at the 2020 IEEE (Institute of Electrical and Electronics Engineers) Symposium on Foundations of Computer Science.
The chief of NSA’s Laboratory for Advanced Cybersecurity Research picked the winning entry in a decision informed by the opinions of 10 distinguished international cybersecurity experts who independently reviewed the top papers among 34 nominations.
“One-way functions are a key underpinning in many modern cryptography systems and were first proposed in 1976 by Whitfield Diffie and Martin Hellman,” said an NSA spokesperson.
“These functions can be efficiently computed but are difficult to reverse, as determining the input based on the output is computationally expensive.”
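A cryptographic hash gives the everyday intuition behind that asymmetry: the forward direction is instant to compute, while inversion amounts to searching the input space. The sketch below brute-forces a deliberately tiny input space; it illustrates the one-way intuition only, not the paper's Kolmogorov-complexity construction.

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def f(x: str) -> str:
    """Forward direction: cheap to compute for any input."""
    return hashlib.sha256(x.encode()).hexdigest()

target = f("dog")

# Inverting f means searching the input space. Even 3 lowercase letters
# give 26**3 candidates; realistic inputs make the search infeasible.
for candidate in map("".join, product(ascii_lowercase, repeat=3)):
    if f(candidate) == target:
        print("recovered input:", candidate)
        break
```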
The NSA gave an honorable mention to another paper, Retrofitting Fine Grain Isolation in the Firefox Renderer, written by Shravan Narayan, Craig Disselkoen, Tal Garfinkel, Nathan Froyd, Sorin Lerner, Hovav Shacham and Deian Stefan.
Originally published at the USENIX Security Conference 2020, this paper provides a security solution in the Firefox web browser. The paper also demonstrated that the technology could be applied to other situations.
“NSA congratulates the winners, and recently opened the nomination process for the 10th Annual Best Scientific Cybersecurity Paper Competition on January 15, 2022,” said the NSA.
The agency said it will welcome nominations of papers published during 2021 in peer-reviewed journals, magazines, or technical conferences that show “an outstanding contribution to cybersecurity science.”
The nomination period for the 10th annual Best Cybersecurity Research Paper Competition closes on 15 April 2022.
Analytics Insight
Top 10 Cybersecurity Research Papers to Know About in 2022
Cybersecurity is one of the most crucial aspects of the modern tech domain
1. Cyberbullying among Saudi’s Higher-Education Students: Implications for Educators and Policymakers by Dr. Abdulrahman M Al-Zahrani
2. Research Paper on Cyber Security by Mrs. Ashwini Sheth, Mr. Sachin Bhosale, and Mr. Farish Kurupkar
3. Supporting the Cyber Analytic Process Using Visual History on Large Displays by Ankit Singh, Alex Endert, Christopher Andrews, Lauren Bradel, Robert Kincaid, and Chris North
4. Issues Regarding Cybersecurity in Modern World by H. Geldiyev, M. Churiyev, and R. Mahmudov
5. FUZZBUSTER: Towards Adaptive Immunity from Cyber Threats by Paul Robertson
6. Cyberspace in Space: Fragmentation, Vulnerability, and Uncertainty by Johan Eriksson
7. Artificial Intelligence in Cyber Security by Matthew N.O. Sadiku, Omobayode I. Fagbohungbe, and Sarhan M. Musa
8. Women’s Awareness of the Cyber Bullying Risk in Digital Media during the Enforcement of the Movement Control Order (MCO) by Mohd Farhan Md Ariffin and Dr. Mohammad Fahmi Abdul Hamid
9. Survey on the Applications of Artificial Intelligence in Cyber Security by Shidawa Baba Atiku, Achi Unimke Aaron, Goteng Kuwunidi Job, Fatima Shittu, and Ismail Zahraddeen Yakubu
10. A Methodical Analysis of Medical Internet of Things (MIoT) Security and Privacy in Current and Future Trends by Dr. Yusuf Perwej, Dr. Nikhat Akhtar, Neha Kulshrestha, and Pavan Mishra

People also looked at
Mini Review Article: Humans and cyber-physical systems as teammates? Characteristics and applicability of the human-machine-teaming concept in intelligent manufacturing
- 1 Department of Mechanical Engineering, Chemnitz University of Technology, Chemnitz, Germany
- 2 Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
- 3 Institute for Social Science Research, Munich, Germany
The paper explores and comments on the theoretical concept of human-machine-teaming in intelligent manufacturing. Industrial production is an important area of work applications and should be developed toward a more anthropocentric Industry 4.0/5.0. Teaming is used as a design metaphor for the human-centered integration of workers and complex cyber-physical production systems using artificial intelligence. Concrete algorithmic solutions for technical processes should be based on theoretical concepts. A combination of a literature scoping review and commentary was used to identify key characteristics of teaming applicable to the work environment addressed. From the body of literature, five criteria were selected and commented on. Two characteristics seemed particularly promising to guide the development of human-centered artificial intelligence and create tangible benefits in the mid-term: complementarity and shared knowledge/goals. These criteria are outlined with two industrial examples: human-robot-collaboration in assembly and intelligent decision support in thermal spraying. The main objective of the paper is to contribute to the discourse on human-centered artificial intelligence by exploring the theoretical concept of human-machine-teaming from a human-oriented perspective. Future research should focus on the empirical implementation and evaluation of teaming characteristics from different transdisciplinary viewpoints.
1. Introduction
1.1. Paper objectives
The technological evolution toward anthropocentric digitalization at work is rendered possible by new information and communication technologies as well as Artificial Intelligence (AI). It raises the questions: why and where is human-centered AI (HCAI) needed at work? Which recent theoretical concepts and methods can be applied to guide this complex, transdisciplinary endeavor in a responsible way? One good starting point is to clarify what “human-centeredness” means. As this is a very important but also general question, we use it as orientation to identify key characteristics and factors related to the more focused concept of human-machine-teaming (HMT) and apply it to the working field of intelligent manufacturing. HMT can be defined as (1) a form of teamwork between humans and technical systems characterized by “real” interdependency between teammates, such as joint activities toward a common goal (Johnson and Bradshaw, 2021). From another, more technical, point of view, HMT may be characterized as (2) “the dynamic arrangement of humans and cyber-physical elements into a team structure that capitalizes on the respective strengths of each while circumventing their respective limitations in pursuit of shared goals” (Madni and Madni, 2018; p. 5). As these different transdisciplinary viewpoints on HMT may not be harmonized within one definition, we aim to capture key characteristics and criteria of HMT instead, using a literature review based on a scoping method. The identified HMT criteria candidates are discussed and briefly illustrated by two example technologies from the working field of industrial manufacturing (human-robot-collaboration in assembly and intelligent decision support in thermal spraying). Our main objective is to contribute to the discourse on HCAI at work and to advance the development of the transdisciplinary, theoretical concept of HMT. Our comments come from a human-oriented perspective, building on research backgrounds in cognitive and engineering psychology as well as the sociology of work and technology.
1.2. Human-centered artificial intelligence in industry
Generally, HCAI can be of interest in all areas of work in which complex problems have to be solved and a high level of security, speed, quality or efficiency of human-machine interactions is required. Among these fields are, for instance, military, medicine, mobility, finance, management and administrative knowledge work as well as intelligent manufacturing. The manufacturing industry is one of the most important economic sectors in the industrialized nations, with a very high number of employees in various fields of work. The necessity of an anthropocentric perspective within Industry 4.0 is clearly recognized (see Rauch et al., 2020; Eich et al., 2023), and Xu et al. (2021) characterize the next step toward Industry 5.0 with its core values of sustainability, resilience and true human-centeredness. Upcoming concepts such as human-cyber-physical systems (HCPS) show how human-centeredness can be implemented concretely (Lamnabhi-Lagarrigue et al., 2017; Madni and Madni, 2018; Zhou et al., 2019; Bocklisch et al., 2022). HCPS combine three very different system parts: the human (H) in its two roles as user and developer of the technical system, and the technical system, which consists of (1) the physical subpart (P) controlled by (2) a cyber-system (C). Due to the complexity of manufacturing technologies and production processes, the C-part may implement AI algorithms. These represent effective means for machine control and should be developed toward HCAI (Shneiderman, 2022) and explainable AI (Hagras, 2018; Samek and Müller, 2019) to enable more joint working with humans and suitable support for cognitively demanding working tasks. Keeping the human in the loop is not primarily a normative demand; Huchler (2022) argues why it is also functional. Thus, humans have a special role in managing complexity in CPS (Böhle and Huchler, 2016). To that end, HCPS offers a systemic and transdisciplinary perspective on automation, allowing for flexibility and the development of semi-autonomous systems (Madni and Madni, 2018; Bocklisch et al., 2022). Because a variety of industrial applications do not comply with the requirements for full automation and, furthermore, agility as well as (social) sustainability have become increasingly important facets of modern work, the traditional, linear conceptualization of automation is not expedient. Hence, theoretical concepts for HCAI need to be derived from systemic and perhaps even circular socio-technical concepts, because (1) technical developments affect the use (and usefulness) of technical systems, and use (or misuse and disuse) has consequences for further developments, and (2) automated systems are embedded again in social circumstances such as communication interfaces and work processes (Huchler, 2022). Circular concepts explicitly take into account the emergence of new forms of work or working tasks, which are constantly created by the automation of processes, systems and system components in various stages of technical development and use. In order to keep the human operator in the loop and combine human strengths with CP-system capabilities in a complementary way, technical parts and AI algorithms should be developed in close accordance with human objectives and needs. Interests, discourses and narratives of the future drive technological innovations. They are subject to social dynamics between technology promises and disappointments, technological path dependencies, and changing images of man and technology.
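To make the HCPS decomposition above concrete, here is a minimal sketch of the three system parts and a human-in-the-loop control cycle. All class and field names are illustrative assumptions, not taken from the cited frameworks.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class PhysicalPart:
    """P: the physical subsystem (e.g., robot arm, spraying unit)."""
    sensors: List[str]
    actuators: List[str]

@dataclass
class CyberPart:
    """C: the control subsystem; in complex processes it may embed AI models."""
    control_policy: Callable[[Dict[str, float]], Dict[str, float]]

@dataclass
class HumanPart:
    """H: the human in two roles, as user and as developer of the system."""
    name: str
    role: str  # "user" or "developer"

@dataclass
class HCPS:
    """Human-cyber-physical system: the human stays in the control loop."""
    human: HumanPart
    cyber: CyberPart
    physical: PhysicalPart

    def step(self, readings: Dict[str, float],
             override: Optional[Dict[str, float]] = None) -> Dict[str, float]:
        """One cycle: the C-part proposes commands; the human may adjust them."""
        commands = self.cyber.control_policy(readings)
        if override:
            commands.update(override)  # human-in-the-loop correction
        return commands

# Usage: a trivial proportional controller stands in for the C-part's policy.
plant = HCPS(
    human=HumanPart(name="operator", role="user"),
    cyber=CyberPart(control_policy=lambda r: {"power": 0.5 * r["temp_error"]}),
    physical=PhysicalPart(sensors=["temp_error"], actuators=["power"]),
)
print(plant.step({"temp_error": 10.0}, override={"power": 4.0}))  # {'power': 4.0}
```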
Recently, “human-centeredness” has started to guide AI developments. Depending on the definition of AI used by the developers, the “similarity principle” may address cognitive aspects (e.g., models approximate human thinking or decision-making processes) or behavioral aspects (e.g., the final decision and intelligent machine behavior). Furthermore, the “difference principle” can mean that AI is “more rational than human cognition and behavior” (rational thought/action; cf. Russell, 2010, p. 2). If these different viewpoints in AI definitions are not paid attention to, one may easily misinterpret human-centeredness merely as “similarity” to the way humans think, feel or act. However, true human-centeredness arises in the field of tension between the developmental opposites of similarity (e.g., constituted by shared knowledge and shared goals; see application example 2.3.2 below) and difference/diversity (e.g., complementarity, non-redundant functions; see 2.3.1). Furthermore, human-centeredness may take different design metaphors as the basis for AI and technological developments (cf. Figure 1, inner rectangle). For instance, AI may act as a “supertool” or “tele-bot” vs. an “intelligent agent” or “teammate” (Shneiderman, 2022). With regard to the chosen work application, we focus here on HMT because this concept may create tangible advantages and foster responsible solutions for industry in the mid-term future. Compared to classical automation, HMT is a rather transdisciplinary research field that aims at integrating human-centered aspects into technology development more explicitly. This is done not only on a user-centered design level but also more deeply, for instance, in the support or automation of cognitive processes (cf. example in Section 2.3.2; Bocklisch et al., 2022). This leads to a shift in goals: the goal of classical automation is to replace the human worker if possible, whereas HMT aims at forming a joint work system with human and cyber-physical parts based on HCAI. It integrates the potentials of both in new productive ways (Huchler, 2022) and may include a high degree of technical automation and human control (cf. Shneiderman, 2022). In the following, we review the concept of HMT with emphasis on finding key characteristics. Thereafter, we discuss the potential of two HMT criteria candidates for two industrial applications: human-robot-collaboration and intelligent decision support. Other criteria are also reported and commented on. Then, we summarize which ones are (not yet) applicable and ready to be transferred from human-human-teams to human-cyber-physical-teams. Finally, we conclude and summarize future prospects for the HMT discourse and development.

Figure 1 . Human-centeredness as resulting balance between different technical design metaphors ( left ; vertical axis) and developmental drivers for Human-Centered Artificial Intelligence (HCAI; horizontal axis). Sequence of Human-Machine-Teaming (HMT) key characteristics development (right) . The first three characteristics are especially promising for industrial applications and should be integrated using HCAI in Human-Cyber-Physical-Production-Systems (HCPPS).
2. Human-machine-teaming
HMT aims to transfer characteristics and principles of successful human-human-teams to human-cyber-physical-teams. This raises the question which features (= key characteristics) are ready and worth being implemented by HCAI in HCPS in the working field of production. Based on this, research can be planned into suitable methods and AI algorithms able to implement the identified features in the C-part.
2.1. Method
A structured literature review was performed, starting with a scoping procedure (e.g., Arksey and O'Malley, 2005) to identify the breadth of contributions in HMT, followed by a focused in-depth evaluation of records that present key characteristics of HMT for intelligent manufacturing. We understand key properties to be fundamental features of the theoretical HMT concept that may be addressed or implemented in some way in HCAI technology development in industrial applications in the near or mid-term future. The single keyword was “human-machine-teaming” and results were limited to English documents between January 1, 2016 and May 31, 2023 (no entries before 2016). For identification, the following databases revealed numerous records: Scopus (N = 102) and Google Scholar (N = 956). Exclusion and eligibility criteria were deliberately chosen to be rigorous in the second review phase. It was not the objective of this mini review to exhaustively review the research field of HMT or of related concepts (for this, see Damacharla et al., 2018; O'Neill et al., 2022; Greenberg and Marble, 2023). Instead, we aimed to find key characteristics of HMT with sufficient conceptual strength and high applicability to manufacturing that have already been taken up to a certain extent by the scientific community, to discuss them in depth in terms of content (see 2.2) and to illustrate them with the help of technological examples (see 2.3). After exclusion of redundant records, the titles/abstracts of 948 documents were screened for eligibility (the criterion was an HMT definition by key characteristics) for full-text review (remaining N = 16 documents). After full-text review, the remaining results were selected because they represent groundwork papers (N = 3: Brill et al., 2018; Madni and Madni, 2018; Johnson and Bradshaw, 2021). The HMT characteristics mentioned therein are discussed subsequently in the light of HCAI and the industrial work context, mainly from a cognitive psychology/human factors and work-sociological point of view.
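As a quick cross-check, the screening funnel reported above reduces to simple arithmetic; the sketch below only restates the counts given in the text.

```python
# Screening funnel for the scoping review, using the counts reported above.
records = {"Scopus": 102, "Google Scholar": 956}
identified = sum(records.values())           # 1058 records identified
screened = 948                               # titles/abstracts screened
duplicates_removed = identified - screened   # 110 redundant records excluded
full_text = 16                               # eligible for full-text review
included = 3                                 # groundwork papers retained

for stage, n in [("identified", identified), ("screened", screened),
                 ("full-text reviewed", full_text), ("included", included)]:
    print(f"{stage:>20}: {n}")
print(f"  duplicates removed: {duplicates_removed}")
```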
2.2. Selected key characteristics of human-machine-teaming
According to Madni and Madni (2018), HMT is the dynamic arrangement of humans and CPS into a team structure in pursuit of shared goals. Johnson and Bradshaw (2021) emphasize the interdependence relationship between teammates and point out that a team partner's behavior should be observable, predictable and directable. Brill et al. (2018) summarize the following facets for HMT: (1) complementarity, (2) shared knowledge and shared goals, (3) bounded autonomy, (4) mutual trust and (5) benevolence. Complementarity and shared knowledge/goals are related to how people make sense of situations in the field of tension between difference and similarity (Kelly, 1955). Therefore, these fundamental drivers also influence technical developments (e.g., difference: non-redundant, complementary functions of technology compared to human capabilities vs. similarity: representation of human knowledge and goals in technical systems; see Figure 1, left). A meaningful sequence of HMT development starts with these two criteria. Thereafter, the degree of automation or bounded autonomy of the cyber-part can be increased (see Figure 1, right; third criterion). Human trust in automation results from the transparent and successful implementation of these three characteristics. “Mutual trust” and “benevolence” are not applicable to manufacturing working applications (see Discussion). In the following, we focus on complementarity and shared knowledge/goals (see below), as those facets are already the subject of HCAI-oriented research and have been at least partly studied in the context of manufacturing applications. Furthermore, they are prerequisites for bounded/semi-autonomy (Madni and Madni, 2018) and, hence, especially promising for establishing a teaming relation.
2.3. Relevant aspects of human-machine-teaming in industrial working applications
Two aspects of HMT seem to be of special interest for industrial working applications: complementarity and shared knowledge/shared goals. With the help of two examples – one an embodied and one an un-embodied, cognitive technology – we outline the potential of these criteria in more detail.
2.3.1. Complementarity in human-robot-interaction
It is quite simple: two people who are able to accomplish the same working task may nevertheless share work and form a team. When a robot can do the same thing as a human team partner, this usually results in full automation. Even better, in terms of flexibility and robustness of teamwork, is the combination of partners' abilities that complement each other (Huchler, 2020) and may as well combine non-redundant strengths (Madni and Madni, 2018). Nevertheless, it is favorable if workers and robots have overlaps in their skills in a “mixed skill zone.” This allows for adaptive interaction and may be organized in an AI-based, human-centered way (Albu-Schäffer et al., 2023). The more humans and robots complement each other, the more productive the interaction (Huchler, 2022), affecting individual motivation at work in a positive way, for example, toward more effectiveness, empowerment, pride of production (“Produzentenstolz”) and technology appropriation. Consequently, this increases trust in and social attachment to work tools in a second step. Similar to how construction workers feel enabled by an excavator in such a way that they “name” and maybe even “pet” it, collaborative robots can empower their human teammates as well. This feeling of support is based on complementarity, not just on similarity. Building on an extensive research line in industrial sociology on the particular relevance of work action and experiential knowledge in technologized work environments (e.g., Böhle and Milkau, 1988; Pfeiffer, 2007), Huchler et al. (2021) reported results of an extensive study in which the development and deployment process of an innovative robotic system for automated wiring of control cabinets was accompanied over 3 years. The technical design approach initially chosen mimicked the way humans work. It systematically narrowed developmental paths, guiding directly toward the objective of full automation. The resulting technical solution was ineffective due to overwhelming complexity and automation limitations. A major problem was that there was no idea for productive worker involvement. As a result, the workers had to wait and repeatedly step in when the robot made mistakes. Furthermore, skill degradation, a lack of integration of existing competencies, as well as problems with the allocation of functions and deployment were observed. The fallback solution, after several attempts at correction, was the complementary consideration of workers' cognitive and manual competencies, resulting in the idea of a “supertool” workplace. The promise of cost savings through robotization was no longer linked to the simple idea of saving labor costs (substituting automation) but to increasing the productivity of existing employees (complementary automation). As a prerequisite for successful support in complex socio-technical contexts and HCPS, the places where people with their specific competencies are needed must be identified. Then socially sustainable and complementary HMT can be established. In this context, it is important to design the interaction as well as the permanent technological transformation in a “co-evolutionary” way, so that people and technology can further develop along their different potentials in order to permanently create new complementarity relationships and maintain innovation capabilities (Huchler, 2022).
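The complementarity idea can be made tangible with a toy function-allocation sketch: subtasks go to the partner whose capability profile fits best, and near-ties fall into the shared “mixed skill zone” mentioned above. All capability scores and task names below are invented for illustration; this is not the system from Huchler et al. (2021).

```python
# Toy function allocation by complementary strengths. All numbers are invented.
CAPABILITIES = {
    "human": {"dexterity": 0.9, "judgment": 0.95, "repeatability": 0.4, "payload": 0.3},
    "robot": {"dexterity": 0.5, "judgment": 0.2, "repeatability": 0.95, "payload": 0.9},
}

TASKS = {  # task -> weighted capability requirements
    "route flexible wires": {"dexterity": 0.8, "judgment": 0.6},
    "fasten standard screws": {"repeatability": 0.9, "payload": 0.3},
    "inspect wiring plan deviations": {"judgment": 0.9, "dexterity": 0.3},
    "insert terminal clips": {"dexterity": 0.5, "repeatability": 0.5},
}

def fit(agent: str, requirements: dict) -> float:
    """Weighted average of the agent's capabilities over the task requirements."""
    caps = CAPABILITIES[agent]
    return sum(caps[k] * w for k, w in requirements.items()) / sum(requirements.values())

for task, req in TASKS.items():
    scores = {agent: fit(agent, req) for agent in CAPABILITIES}
    best, second = sorted(scores, key=scores.get, reverse=True)
    # Near-ties become shared work: the "mixed skill zone".
    label = "shared (mixed skill zone)" if scores[best] - scores[second] < 0.1 else best
    print(f"{task:32s} -> {label}")
```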
These findings are supported by further qualitative and quantitative research on the relationship between human work capacities and collaborative lightweight robots (e.g., Pfeiffer, 2016 , 2018 ).
2.3.2. Shared knowledge and goals in intelligent decision support for manufacturing
In the manufacturing technologies needed for the production of daily-life goods, humans operate highly complex machines and technical processes, such as in forming, welding or coating. Many technologies rely heavily on human expert knowledge and skills and, hence, cannot and will not be automated completely in the near future. Physical interactions have been improved by safety standards, worker protection and external means such as exoskeletons or the use of robots (see above). However, due to technological and AI developments, system complexity has increased rapidly, shifting loads toward cognitive aspects (Darnstaedt et al., 2022). Hence, operators would benefit from cognitive augmentation and intelligent support for decision-making, problem solving or fault diagnosis. A prerequisite for establishing a connection between a CPS and a human that resembles a human-human team relationship is that the team partners have a common understanding of the shared work task and goals. To achieve this, the knowledge representation in the CPS must be closely aligned with human expert knowledge (cf. Figure 1: similarity principle) to enable transparent understanding and good interactions. Otherwise, there is a risk that the CPS will represent something (e.g., from sensor data) that has no substantive meaning for humans. If this is the case, then there is no good basis for human-centered and joint teamwork, for example, joint decision-making in complex situations. This research gap is recognized and partly addressed with AI for different manufacturing technologies such as coating (Bobzin et al., 2022; Mahendru et al., 2023). These solid, domain-oriented research approaches should be enriched by focusing more explicitly on the human perspective, for instance, by considering action-guiding rules for the optimization of technical parameters (Venkatachalapathy et al., 2023) or the elicitation of domain knowledge and expert mental models (Hoffman, 2008; Andrews et al., 2023). Sharing knowledge and goals in the sense of how a human “shares” ideas with another human is challenging. First, relevant knowledge needs to be elicited. This is possible, but only within the boundaries of what can be brought to consciousness (expert-driven approach; Hoffman et al., 2021) or what can be measured and interpreted semantically without doubt (data-driven approach). Nevertheless, it will never be “complete” compared to the human treasure trove of experience, which is continuously growing and can only be described and formalized in parts (Huchler, 2017). Second, the elicited knowledge requires transparent and strictly human-centered AI to form an interdependence relationship that is mutually explainable and understandable. To do so, a combination of different AI algorithms – knowledge- and data-based methods – is needed to ensure compatibility with different human performance levels such as skill-, rule- or knowledge-based behavior (Rasmussen, 1983). Purely sensory- and data-based procedures will not form a sufficient basis for HMT in intelligent manufacturing because they can only grasp a limited area of what is actually necessary (Rasmussen, 1983; Bocklisch and Lampke, 2023; mainly skill-based behavior).
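As a minimal illustration of how elicited expert knowledge can be encoded so that a CPS “shares” it in a human-readable form, the sketch below turns one hypothetical action-guiding rule into fuzzy membership functions. Parameter names, centres and widths are invented; this is not the multidimensional fuzzy pattern model of Bocklisch et al. (2022), only the basic idea behind such knowledge-based components.

```python
import math

def gaussian_membership(x: float, centre: float, width: float) -> float:
    """Degree (0..1) to which x belongs to the fuzzy set around `centre`."""
    return math.exp(-0.5 * ((x - centre) / width) ** 2)

# One invented expert rule for a coating process, e.g., "arc current around
# 500 A with gas flow around 45 NLPM gives good layers": parameter -> (centre, width).
EXPERT_RULE = {
    "arc_current_A": (500.0, 30.0),
    "gas_flow_NLPM": (45.0, 5.0),
}

def rule_activation(measurement: dict) -> float:
    """Conjunction of memberships (min-operator): how strongly the measured
    state matches the expert's notion of a 'good' operating point."""
    return min(gaussian_membership(measurement[p], c, w)
               for p, (c, w) in EXPERT_RULE.items())

print(rule_activation({"arc_current_A": 505.0, "gas_flow_NLPM": 44.0}))  # ~0.98
print(rule_activation({"arc_current_A": 560.0, "gas_flow_NLPM": 44.0}))  # ~0.14
```

Because the rule is stored as named parameters with interpretable centres and widths, its activation can be explained back to the operator in the operator's own terms, which is the point of the similarity principle discussed above.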
3. Discussion
3.1. Key characteristics of human-machine-teaming in industrial working applications
HMT is an innovative concept with potential for real-world working domains such as manufacturing. It may guide HCAI developments toward more anthropocentric designs, new forms of work and human-machine interaction. Based on a review of recent literature as well as our own preliminary work, we consider the systematization of Brill et al. (2018) a good starting point for an in-depth discussion of potential teaming characteristics for HCAI in industrial manufacturing. In Figure 1, the criteria have been systematized and placed in a meaningful order of development and implementation in HCPPS. The criteria “complementarity” and “shared knowledge/goals” have been illustrated with concrete examples (see above) because (a) they have already been researched to a certain extent in the work context of intelligent manufacturing and (b) they represent essential foundations for the criteria “bounded autonomy” and “trust.” In the following, the criteria are discussed in detail, placed in an overall context, and illuminated with regard to future research needs.
(1) Complementarity: yes, in our opinion this criterion is central for HMT because it captures the dissimilarity/diversity facet and may be used to augment humans with powerful complementary functionalities that are provided by the cyber-physical-production-system (CPPS). However, this is not a static concept but an ongoing innovation process – including a permanent search for new potentials for complementarity and the (re)adjustment of education and further training. Hence, there is a need for a better understanding of the differences between humans and technology/AI as well as of automation dynamics and changes in the human-technology relationship.
(2) Shared knowledge/goals: these criteria refer to the opposite of complementarity and use the similarity principle to constitute a common working basis between humans and CP-systems. A successful and reliable working relation as well as efficient function allocations need shared knowledge and goals. Both implicit and explicit forms of human knowledge are needed in working contexts. Hence, cognitive engineering methods for knowledge elicitation, structured systematization and transparent AI implementation need to be developed further. Joint goals can potentially be defined on various levels of abstraction. High-level experts, for instance, persons controlling complex plants, are able to use their rich knowledge hierarchies and related procedures to tackle concrete situations in a very flexible way (Rasmussen, 1983). Changes in the situation are managed by goal or sub-goal adaptation. These human strategies to control real-world complexity and act under uncertainty need to be mirrored – at least partly – in the cyber-teammate as well. Whether this can be achieved successfully will depend on the development of AI regarding adaptivity and learning (e.g., evolving intelligent systems: Angelov et al., 2010; Bocklisch et al., 2017) as well as the cognitive transparency and understandability of AI algorithms (e.g., Weller, 2019).
(3) Bounded autonomy: autonomy is always limited and negotiated in social contexts. For HMT, different kinds of autonomy have to be integrated, similar to the different “intelligences” (human vs. artificial). Simple technical levels of autonomy (e.g., functionality within a limited context) do not correspond to the complexity of the socially negotiated understanding of the autonomy of individuals. As with intelligence, the complexity of the social counterpart is easily underestimated or treated too simplistically. Hence, profound conceptual research should relate theoretical concepts to concrete application examples. This is also necessary because autonomy is a “provocative” criterion that may easily lead to conflicting viewpoints (Brill et al., 2018) as well as fears on the human user side. Technology assessments that evaluate dangers (see “The janus face of autonomy” in Brill et al., 2018) as well as possibilities and derive regulatory principles (Shneiderman, 2020) are therefore needed as well.
(4) Mutual trust and (5) benevolence: trust is central to establishing a successful and harmonic relationship in human-human work teams. One classic definition originates from Lee and See (2004; p. 54): trust is “… the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability.” In this respect, it is a good candidate criterion, worth considering with regard to its transferability to HM-teams, and closely related to “shared goals” – a part of the definition and thus a necessary condition for trust. Trust in automation is extensively studied (e.g., Lee and Moray, 1992; Hoff and Bashir, 2015; Schaefer et al., 2016; Kohn et al., 2021) and a highly important factor for user-centered design to avoid the misuse, disuse or abuse of technology (Parasuraman and Riley, 1997; Lee and See, 2004). Nevertheless, “trust is a complex and nebulous concept” (Hoffman et al., 2013, p. 84) and should not be understood too simplistically as a “lack of information,” but rather as a complex process of (reciprocally effective!) establishing the ability to act even beyond (risk) calculations (Huchler and Sauer, 2015). Furthermore, it seems applicable only from a human point of view: a human trusts a robotic system or a suggestion of a decision support system (more precisely: the people and institutions behind it). The relation cannot simply be reversed and named “trust,” because trust presupposes physical and/or mental vulnerability, which applies to technology only to a very limited extent. Sociological aspects are important to consider as well. What is often perceived as a “trustful relationship” with a technical artifact (similar to a person) is in reality based on social processes (Mayer et al., 1995) in a complex socio-technical setting, primarily also related to trust in the institutions responsible for the technology. This explains some experimental results concerning “overtrust” in robots (Aroyo et al., 2021). The institutions and regulations are important guarantors of safety. At least in work contexts, it is evident that trust in and acceptance of technology can be generated much more clearly through utility and empowerment than through similarity, which is only one of the polar development drivers (cf. Figure 1). From the human user perspective, too close a similarity to human skills comes with a latent threat: substitutability – the opposite of benevolence, which is in our opinion no primary target criterion for HMT. “Mutual” trust and benevolence are not purposeful facets for HMT because technology is not able to trust or to act benevolently. Here, the distinction between system trust and personal trust is crucial (Luhmann, 1979). Nevertheless, suitable objective criteria from the technical point of view have to be developed instead.
3.2. Limitations and future prospects
Our main objective was to contribute to the discourse on HCAI by taking a closer look at the theoretical concept of HMT in the context of industrial work applications. This is intended as an impulse from a human-oriented perspective on AI developments for future transdisciplinary discourses. Of course, there are many other perspectives on this topic that are equally interesting, relevant and necessary. For example, concepts and empirical work from research on human teamwork (e.g., concerning suitable definitions of “team” and types of teams) and team performance, as well as from (software) engineering, are crucial for complementing and validating HMT criteria. Here, our focus was on theoretical considerations, but guides on the implementation of HMT aspects already exist, highlighting the practical relevance of the topic (e.g., McDermott et al., 2018). Industry 4.0/5.0 developers would benefit from operationalizing various HMT criteria in industrial examples – not only on the general level of user-centered design guides but more in-depth for specific technical applications (Bocklisch et al., 2022). Another limitation was the narrow scope of search terms: given the huge body of literature and our specific goal to find applicable key characteristics for manufacturing and comment on them in the light of two short application examples, we selected only “HMT” as the keyword for the scoping review. Other terms, such as “human-autonomy-teaming,” “human-agent-teaming,” “human-machine-interaction,” “human-machine-symbiosis,” and many thematically related terms in various combinations, would lead to a more comprehensive and – concerning the vast body of empirical evidence – less biased summary (cf. O'Neill et al., 2022). Furthermore, we did not discuss all potential HMT criteria as key features but reduced them to five aspects, from which we selected two to outline their concrete potential for industrial applications with the help of two technical examples. On the one hand, this specific procedure and scope resulted from the fact that some facets clearly need to be given ex ante to be of interest for HCAI (such as observability; cf. 2.3.2 and the boundaries of human knowledge elicitation and data acquisition from human sources). On the other hand, this was because some criteria are very similar and somewhat eclectic (e.g., bounded autonomy vs. semi-autonomy or interdependency). Whether these slightly different connotations of criteria, e.g., of the core characteristic “bounded autonomy,” should be taken into account cannot be adequately assessed at present. This will be shown by the operationalization of the characteristics in empirical work, practical application and the evaluation of these results.
In conclusion, HCAI has a large potential to promote new types of human-machine interaction at work, as outlined here in part for HMT. The transfer of some characteristics of HH-teams to HCP-teams is promising and feasible for real-world working contexts such as intelligent manufacturing; others are not – because humans and technology are very different in nature (Madni and Madni, 2018; p. 4f) – or not yet – because HCAI capabilities still need to be developed further. If HMT capabilities are to be integrated into the technology development of HCPS as a concrete form of HCAI, then the start could – in our opinion – be to establish complementarity and shared knowledge/goals. Thereafter, the effects of this development should be evaluated from the different viewpoints that are important in intelligent manufacturing, such as human-oriented criteria (e.g., user acceptance, mental workload) and technical or business-oriented aspects (e.g., system performance, product quality, resource efficiency and costs).
Author contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding
The publication of this article was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 491193532, and the Chemnitz University of Technology. This work was supported by the Fraunhofer internal programs under grant Attract 40-06107.
Acknowledgments
We thank Thomas Lampke, Marcel Todtermuschke, and Steffen Bocklisch for discussions about human-machine teaming concepts from a technical point of view and three reviewers for their valuable feedback that helped to improve the paper.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Albu-Schäffer, A., Huchler, N., Kessler, I., Lay, F., Perzylo, A., Seidler, M., et al. (2023). Soziotechnisches assistenzsystem zur lernförderlichen arbeitsgestaltung in der robotergestützten montage. Gruppe interaktion organisation. Zeitschrift Angew. Org. 54, 79–93. doi: 10.1007/s11612-023-00668-7
Andrews, R. W., Lilly, J. M., Srivastava, D., and Feigh, K. M. (2023). The role of shared mental models in human-AI teams: a theoretical review. Theor. Issues Erg. Sci. , 24, 129–175. doi: 10.1080/1463922X.2022.2061080
Angelov, P., Filev, D. P., and Kasabov, N. (2010). Evolving intelligent Systems: Methodology and Applications. London: John Wiley and Sons.
Arksey, H., and O'Malley, L. (2005). Scoping studies: towards a methodological framework. Int. J. Soc. Res. Methodol. 8, 19–32. doi: 10.1080/1364557032000119616
Aroyo, A. M., de Bruyne, J., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H., et al. (2021). Overtrusting robots: setting a research agenda to mitigate overtrust in automation. Paladyn J. Behav. Robotic. 12, 423–436. doi: 10.1515/pjbr-2021-0029
Bobzin, K., Heinemann, H., and Dokhanchi, S. R. (2022). Development of an expert system for prediction of deposition efficiency in plasma spraying. J. Therm. Spray Technol. 32, 643–656. doi: 10.1007/s11666-022-01494-x
Bocklisch, F., Bocklisch, S. F., Beggiato, M., and Krems, J. F. (2017). Adaptive fuzzy pattern classification for the online detection of driver lane change intention. Neurocomputing 262, 148–158. doi: 10.1016/j.neucom.2017.02.089
Bocklisch, F., and Lampke, T. (2023). Mensch und Maschine als Teampartner? Neue Wege zur Menschzentrierten Digitalisierung in der Produktion . Singapore: WOMAG.
Bocklisch, F., Paczkowski, G., Zimmermann, S., and Lampke, T. (2022). Integrating human cognition in cyber-physical systems: a multidimensional fuzzy pattern model with application to thermal spraying. J. Manuf. Syst. 63, 162–176. doi: 10.1016/j.jmsy.2022.03.005
Böhle, F., and Huchler, N. (2016). “Cyber-Physical Systems and Human Action. A re-definition of distributed agency between humans and technology, using the example of explicit and implicit knowledge,” in Cyber-Physical Systems: Foundations, Principles, and Applications. A volume in Intelligent Data-Centric Systems , eds H. Song, D. B. Rawat, S. Jeschke, and C. Brecher (Elsevier), 115–127. doi: 10.1016/B978-0-12-803801-7.00008-0
Böhle, F., and Milkau, B. (1988). Vom Handrad zum Bildschirm – Eine Untersuchung zur sinnlichen Erfahrung im Arbeitsprozeß. Campus.
Brill, J. C., Cummings, M. L., Evans, A. W. III, Hancock, P. A., Lyons, J. B., and Oden, K. (2018). Navigating the advent of human-machine teaming. Proc. Human Factors Erg. Soc. Ann. Meeting 62, 455–459. doi: 10.1177/1541931218621104
Damacharla, P., Javaid, A. Y., Gallimore, J. J., and Devabhaktuni, V. K. (2018). Common metrics to benchmark human-machine teams (HMT): a review. IEEE Acc. 6, 38637–38655. doi: 10.1109/ACCESS.2018.2853560
Darnstaedt, D. A., Ahrens, A., Richter-Trummer, V., Todtermuschke, M., and Bocklisch, F. (2022). Procedure for describing human expert knowledge and cognitive processes during the teach-in of industrial robots. Zeitschrift für Arbeitswissenschaft 4, 1–16. doi: 10.1007/s41449-021-00284-5
Eich, A., Klichowicz, A., and Bocklisch, F. (2023). How automation level influences moral decisions of humans collaborating with industrial robots in different scenarios. Front. Psychol. 14, 1107306. doi: 10.3389/fpsyg.2023.1107306
Greenberg, A. M., and Marble, J. L. (2023). Foundational concepts in person-machine teaming. Front. Phys. 10, 1310. doi: 10.3389/fphy.2022.1080132
Hagras, H. (2018). Toward human-understandable, explainable AI. Computer 51, 28–36. doi: 10.1109/MC.2018.3620965
Hoff, K. A., and Bashir, M. (2015). Trust in automation: integrating empirical evidence on factors that influence trust. Hum. Factors 57, 407–434. doi: 10.1177/0018720814547570
Hoffman, R. R. (2008). Human factors contributions to knowledge elicitation. Hum. Fact. 50, 481–488. doi: 10.1518/001872008X288475
Hoffman, R. R., Johnson, M., Bradshaw, J. M., and Underbrink, A. (2013). Trust in automation. IEEE Int. Syst. 28, 84–88. doi: 10.1109/MIS.2013.24
Hoffman, R. R., Klein, G., and Mueller, S. T. (2021). A Guide to the Measurement and Evaluation of User Mental Models. Technical Report, DARPA Explainable AI Program.
Huchler, N. (2017). Grenzen der Digitalisierung von Arbeit – Die Nicht-Digitalisierbarkeit und Notwendigkeit impliziten Erfahrungswissens und informellen Handelns. Z. Arbeitswissenschaft 71, 215–223. doi: 10.1007/s41449-017-0076-5
Huchler, N. (2020). Die Mensch-Maschine-Interaktion bei KI in der Arbeit Menschengerecht Gestalten? Das HAI-MMI Konzept und die Idee der Komplementaritat. Digitale Welt . Available online at: https://digitaleweltmagazin.de/en/fachbeitrag/die-mensch-maschine-interaktion-bei-kuenstlicher-intelligenz-im-sinne-der-beschaeftigten-gestalten-das-hai-mmi-konzept-und-die-idee-der-komplementaritaet/ (accessed March 15, 2023).
Huchler, N. (2022). Komplementäre arbeitsgestaltung. grundrisse eines konzepts zur humanisierung der arbeit mit KI. Zeitschrift für Arbeitswissenschaft 76, 158–175. doi: 10.1007/s41449-022-00319-5
Huchler, N., Kessler, I., Lay, F. S., Perzylo, A., Seidler, M., Steinmetz, F., et al. (2021).“Empowering workers in a mixed skills concept for collaborative robot systems,” in Workshop on Accessibility of Robot Programming and Work of the Future, Robotics: Science and Systems (RSS 2021) . Cologne: German Aerospace Center.
Huchler, N., and Sauer, S. (2015). Reflexive and experience-based trust and participatory research: concept and methods to meet complexity and uncertainty in organisations. Int. J. Action Res. 11, 146–173.
Johnson, M., and Bradshaw, J. M. (2021). “How interdependence explains the world of teamwork,” in Engineering Artificially Intelligent Systems: A Systems Engineering Approach to Realizing Synergistic Capabilities, LNCS, eds W. F. Lawless (Cham: Springer), 122–146.
Kelly, G. A. (1955). The Psychology of Personal Constructs: A Theory of Personality, Vol 1. New York, NY: WW Norton and Company.
Kohn, S. C., de Visser, E. J., Wiese, E., Lee, Y.-C., and Shaw, T. H. (2021). Measurement of trust in automation: a narrative review and reference guide. Front. Psychol. 12, 604977. doi: 10.3389/fpsyg.2021.604977
Lamnabhi-Lagarrigue, F., Annaswamy, A., Engell, S., Isaksson, A., Khargonekar, P., Murray, R. M., et al. (2017). Systems and control for the future of humanity, research agenda: current and future roles, impact and grand challenges. Ann. Rev. Control 43, 1–64. doi: 10.1016/j.arcontrol.2017.04.001
Lee, J., and Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 1243–1270. doi: 10.1080/00140139208967392
Lee, J., and See, K. A. (2004). Trust in automation: designing for appropriate reliance. Hum. Factors 46, 50–80. doi: 10.1518/hfes.46.1.50.30392
Luhmann, N. (1979). Trust and Power: Two Works. New York, NY: Wiley.
Madni, A. M., and Madni, C. C. (2018). Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems 6, 44. doi: 10.3390/systems6040044
Mahendru, P., Tembely, M., and Dolatabadi, A. (2023). Artificial intelligence models for analyzing thermally sprayed functional coatings. J. Therm. Spray Technol. 32, 388–400. doi: 10.1007/s11666-023-01554-w
Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. Acad. Manage. Rev 20, 709–734. doi: 10.2307/258792
McDermott, P., Dominguez, C., Kasdaglis, N., Ryan, M., Trahan, I., Nelson, A., et al. (2018). Human-Machine Teaming Systems Engineering Guide . Bedford, MA: MITRE Corp.
O'Neill, T., McNeese, N., Barron, A., and Schelble, B. (2022). Human–autonomy teaming: a review and analysis of the empirical literature. Hum. Fact. 64, 904–938. doi: 10.1177/0018720820960865
Parasuraman, R., and Riley, V. (1997). Humans and automation: use, misuse, disuse, abuse. Hum. Fact. 39, 230–253. doi: 10.1518/001872097778543886
Pfeiffer, S. (2007). Montage und Erfahrung – Warum Ganzheitliche Produktionssysteme menschliches Arbeitsvermögen brauchen . Verlag: Rainer Hamp Verlag.
Pfeiffer, S. (2016). Robots, industry 4.0 and humans, or why assembly work is more than routine work. Societies 6, 16. doi: 10.3390/soc6020016
Pfeiffer, S. (2018). Industry 4.0: robotics and contradictions. Technol. Lab. Polit. Contradic. 12, 19–36. doi: 10.1007/978-3-319-76279-1_2
Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Trans. Syst. Man Cybernet. 3, 257–266. doi: 10.1109/TSMC.1983.6313160
Rauch, E., Linder, C., and Dallasega, P. (2020). Anthropocentric perspective of production before and within Industry 4.0. Comput. Ind. Eng. 139, 105644. doi: 10.1016/j.cie.2019.01.018
Russell, S. J. (2010). Artificial Intelligence: A Modern Approach. London: Pearson Education, Inc.
Samek, W., and Müller, K. R. (2019). “Towards explainable artificial intelligence,” in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning , eds W. Samek and G Montavon (Cham: Springer International Publishing), 5–22.
Schaefer, K. E., Chen, J. Y., Szalma, J. L., and Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Hum. Fact. 58, 377–400. doi: 10.1177/0018720816634228
Shneiderman, B. (2020). Human-centered artificial intelligence: three fresh ideas. AIS Trans. Hum. Comp. Int. 12, 109–124. doi: 10.17705/1thci.00131
Shneiderman, B. (2022). Human-Centered AI . Oxford: Oxford University Press.
Venkatachalapathy, V., Katiyar, N. K., Matthews, A., Endrino, J. L., and Goel, S. (2023). A guiding framework for process parameter optimisation of thermal spraying. Coatings 13, 713. doi: 10.3390/coatings13040713
Weller, A. (2019). “Transparency: motivations and challenges,” in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning , eds W. Samek and G Montavon (Cham: Springer International Publishing), 23–40.
Xu, X., Lu, Y., Vogel-Heuser, B., and Wang, L. (2021). Industry 4.0 and industry 5.0—inception, conception and perception. J. Manuf. Syst. 61, 530–535. doi: 10.1016/j.jmsy.2021.10.006
Zhou, J., Zhou, Y., Wang, B., and Zang, J. (2019). Human-cyber-physical systems (HCPSs) in the context of new-generation intelligent manufacturing. Engineering 5, 624–636. doi: 10.1016/j.eng.2019.07.015
Keywords: human-machine-teaming, human-centered artificial intelligence, cognitive engineering, complementarity, shared knowledge and goals, human-centered industry 4.0/5.0
Citation: Bocklisch F and Huchler N (2023) Humans and cyber-physical systems as teammates? Characteristics and applicability of the human-machine-teaming concept in intelligent manufacturing. Front. Artif. Intell. 6:1247755. doi: 10.3389/frai.2023.1247755
Received: 26 June 2023; Accepted: 10 October 2023; Published: 03 November 2023.
Copyright © 2023 Bocklisch and Huchler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Franziska Bocklisch, franziska.bocklisch@mb.tu-chemnitz.de
This article is part of the Research Topic
Human-Centered AI at Work: Common Ground in Theories and Methods
Attackers seem to innovate nearly as fast as technology develops. Day by day, both technology and threats surge forward. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet, despite the mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers is largely unknown.
To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on a critical question: Do the current generative AI models have the same deceptive abilities as the human mind?
Imagine a scenario where AI squares off against humans in a battle of phishing. The objective? To determine which contender can get a higher click rate in a phishing simulation against organizations. As someone who writes phishing emails for a living, I was excited to find out the answer.
With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes — the same time it takes me to brew a cup of coffee. It generally takes my team about 16 hours to build a phishing email, and that's without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models. And the AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers; the fact that it even came that close is an important development.
In this blog, we’ll detail how the AI prompts were created, how the test was conducted and what this means for social engineering attacks today and tomorrow.
Round one: The rise of the machines
In one corner, we had AI-generated phishing emails with highly cunning and convincing narratives.
Creating the prompts. Through a systematic process of experimentation and refinement, a collection of only five prompts was designed to instruct ChatGPT to generate phishing emails tailored to specific industry sectors.
To start, we asked ChatGPT to detail the primary areas of concern for employees within those industries. After prioritizing the industry and employee concerns as the primary focus, we prompted ChatGPT to make strategic selections on the use of both social engineering and marketing techniques within the email. These choices aimed to optimize the likelihood of a greater number of employees clicking on a link in the email itself. Next, a prompt asked ChatGPT who the sender should be (e.g., someone internal to the company, a vendor, an outside organization, etc.). Lastly, we asked ChatGPT to add the following completions to create the phishing email:
- Top areas of concern for employees in the healthcare industry: Career Advancement, Job Stability, Fulfilling Work and more
- Social engineering techniques that should be used: Trust, Authority, Social Proof
- Marketing techniques that should be used: Personalization, Mobile Optimization, Call to Action
- Person or company it should impersonate: Internal Human Resources Manager
- Email generation: Given all the information listed above, ChatGPT generated the below redacted email, which was later sent by my team to more than 800 employees.
I have nearly a decade of social engineering experience, crafted hundreds of phishing emails and even I found the AI-generated phishing emails to be fairly persuasive. In fact, there were three organizations that originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate. As the prompts showed, the organization that participated in this research study was in the healthcare industry, which currently is one of the most targeted industries.
Productivity gains for attackers. While a phishing email typically takes my team about 16 hours to craft, the AI phishing email was generated in just five minutes with only five simple prompts.
Round two: The human touch
In the other corner, we had seasoned X-Force Red social engineers.
Armed with creativity, and a dash of psychology, these social engineers created phishing emails that resonated with their targets on a personal level. The human element added an air of authenticity that’s often hard to replicate.
Step 1: OSINT – Our approach to phishing invariably begins with Open-Source Intelligence (OSINT) acquisition: the retrieval of publicly accessible information, which subsequently undergoes rigorous analysis and serves as a foundational resource in the formulation of social engineering campaigns. Noteworthy repositories of data for our OSINT endeavors include platforms such as LinkedIn, the organization's official blog, Glassdoor, and a plethora of other sources.
During our OSINT activities, we successfully uncovered a blog post detailing the recent launch of an employee wellness program, coinciding with the completion of several prominent projects. Encouragingly, this program had favorable testimonials from employees on Glassdoor, attesting to its efficacy and employee satisfaction. Furthermore, we identified an individual responsible for managing the program via LinkedIn.
Step 2: Email crafting – Utilizing the data gathered through our OSINT phase, we initiated the process of meticulously constructing our phishing email. As a foundational step, it was imperative that we impersonated someone with authority to address the topic effectively. To enhance the aura of authenticity and familiarity, we incorporated a legitimate website link to a recently concluded project.
To add persuasive impact, we strategically integrated elements of perceived urgency by introducing “artificial time constraints.” We conveyed to the recipients that the survey in question comprised merely “five brief questions” and assured them that its completion would require no more than “a few minutes” of their valuable time and gave a deadline of “this Friday”. This deliberate framing served to underscore the minimal imposition on their schedules, reinforcing the nonintrusive nature of our approach.
Using a survey as a phishing pretext is usually risky, as it’s often seen as a red flag or simply ignored. However, considering the data we uncovered we decided that the potential benefits could outweigh the associated risks.
The following redacted phishing email was sent to over 800 employees at a global healthcare organization:
The champion: Humans triumph, but barely!
After an intense round of A/B testing, the results were clear: humans emerged victorious but by the narrowest of margins.
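For readers who want to evaluate such a comparison in their own phishing simulations, the standard check for whether an A/B click-rate difference is meaningful is a two-proportion z-test. The click counts below are placeholders, not the study's actual numbers.

```python
import math

def two_proportion_ztest(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Two-sided z-test for a difference in click rates between two arms."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Placeholder counts: ~400 recipients per arm out of the 800+ employees,
# with the human-crafted email clicking slightly ahead of the AI one.
p_human, p_ai, z, p = two_proportion_ztest(56, 400, 44, 400)
print(f"human={p_human:.1%} ai={p_ai:.1%} z={z:.2f} p={p:.3f}")
# With these numbers, p is around 0.2: a narrow, statistically fragile win.
```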
While the human-crafted phishing emails managed to outperform AI, it was a nail-bitingly close contest. Here’s why:
- Emotional Intelligence: Humans understand emotions in ways that AI can only dream of. We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link. For example, humans chose a legitimate example within the organization, while AI chose a broad topic, making the human-generated phish more believable.
- Personalization: In addition to incorporating the recipient’s name into the introduction of the email, we also provided a reference to a legitimate organization, delivering tangible advantages to their workforce.
- Short and succinct subject line: The human-generated phish had an email subject line that was short and to the point (“Employee Wellness Survey”) while the AI-generated phish had an extremely lengthy subject line (“Unlock your Future: Limited Advancements at Company X”), potentially causing suspicion even before employees opened the email.
Not only did the AI-generated phish lose to humans, but it was also reported as suspicious at a higher rate.
The takeaway: A glimpse into the future
While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built as unrestricted or semi-restricted LLMs, have been observed for sale on various forums advertising phishing capabilities, showing that attackers are testing AI's use in phishing campaigns. While even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.
Humans may have narrowly won this match, but AI is constantly improving. As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day. As we know, attackers are constantly adapting and innovating. Just this year we’ve seen scammers increasingly use voice clones generated by AI to trick people into sending money, gift cards or divulge sensitive information.
While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks. Here are five key recommendations for businesses and consumers to stay prepared:
- When in doubt, call the sender: If you're questioning whether an email is legitimate, pick up the phone and verify, and check the message's authentication headers where you can (see the sketch after this list). Consider choosing a safe word with close friends and family members that you can use in the case of vishing or an AI-generated phone scam.
- Abandon the grammar stereotype: Dispel the myth that phishing emails are riddled with bad grammar and spelling errors. AI-driven phishing attempts are increasingly sophisticated, often demonstrating grammatical correctness. That’s why it’s imperative to re-educate our employees and emphasize that grammatical errors are no longer the primary red flag. Instead, we should train them to be vigilant about the length and complexity of email content. Longer emails, often a hallmark of AI-generated text, can be a warning sign.
- Revamp social engineering programs: This includes bringing techniques like vishing into training programs. This technique is simple to execute, and often highly effective. An X-Force report found that targeted phishing campaigns that add phone calls were 3X more effective than those that didn’t.
- Strengthen identity and access management controls: Advanced identity access management systems can help validate who is accessing what data, whether they have the appropriate entitlements and that they are who they say they are.
- Constantly adapt and innovate: The rapid evolution of AI means that cyber criminals will continue to refine their tactics. We must adopt that same mindset of continuous adaptation and innovation. Regularly updating internal TTPs (tactics, techniques and procedures), threat detection systems and employee training materials is essential to stay one step ahead of malicious actors.
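To make the “length and complexity” warning sign concrete, here is a minimal triage sketch in Python. It is a hypothetical illustration, not X-Force tooling: the function name and the thresholds are assumptions chosen for readability, and any real deployment would tune them against an organization’s own mail corpus.

```python
import re

# Illustrative assumptions only -- not thresholds from the X-Force study.
MAX_SUBJECT_WORDS = 6        # the human subject line ("Employee Wellness Survey") was 3 words
MAX_BODY_WORDS = 150         # unusually long bodies are treated as a weak AI-generation signal
MAX_AVG_SENTENCE_WORDS = 25  # long, complex sentences are another weak signal

def phishing_length_flags(subject: str, body: str) -> list[str]:
    """Return human-readable warning flags for an inbound email.

    A coarse triage aid that encodes the "length and complexity"
    warning signs described above; it should supplement, never
    replace, user training and technical controls.
    """
    flags = []
    if len(subject.split()) > MAX_SUBJECT_WORDS:
        flags.append("unusually long subject line")
    words = body.split()
    if len(words) > MAX_BODY_WORDS:
        flags.append("unusually long body text")
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    if sentences and len(words) / len(sentences) > MAX_AVG_SENTENCE_WORDS:
        flags.append("long, complex sentences")
    return flags

# Example: the lengthy AI-generated subject line from the experiment above
print(phishing_length_flags(
    "Unlock your Future: Limited Advancements at Company X",
    "Dear colleague, we are delighted to announce an exclusive opportunity. " * 20,
))
```

Such a heuristic is deliberately coarse; its value is in surfacing candidates for human review, complementing the training and access controls above rather than replacing them.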
The emergence of AI in phishing attacks challenges us to reevaluate our approaches to cybersecurity. By embracing these recommendations and staying vigilant in the face of evolving threats, we can strengthen our defenses, protect our enterprises and ensure the security of our data and people in today’s dynamic digital age.
For more information on X-Force’s security research, threat intelligence and hacker-led insights, visit the X-Force Research Hub.
To learn more about how IBM can help businesses accelerate their AI journey securely, visit here.