
Computer Science > Computation and Language

Title: Sparks of Artificial General Intelligence: Early Experiments with GPT-4

Abstract: Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.



Artificial intelligence and machine learning research: towards digital transformation at a global scale

  • Published: 17 April 2021
  • Volume 13, pages 3319–3321 (2022)


  • Akila Sarirete 1,
  • Zain Balfagih 1,
  • Tayeb Brahimi 1,
  • Miltiadis D. Lytras 1, 2 &
  • Anna Visvizi 3, 4


Artificial intelligence (AI) is reshaping how we live, learn, and work. Until recently, AI was a fanciful concept, associated more closely with science fiction than with everyday life. Driven by unprecedented advances in sophisticated information and communication technology (ICT), however, AI today is synonymous with technological progress, both already attained and yet to come, in all spheres of our lives (Chui et al. 2018; Lytras et al. 2018, 2019).

Considering that machine learning (ML) and AI are apt to reach unforeseen levels of accuracy and efficiency, this special issue sought to promote research on AI and ML as functions of data-driven innovation and digital transformation. The combination of expanding ICT-driven capabilities and capacities across our socio-economic systems, along with growing consumer expectations of technology and its value added for our societies, requires a multidisciplinary research agenda on AI and ML (Lytras et al. 2021; Visvizi et al. 2020; Chui et al. 2020). Such a research agenda should revolve around the following five defining issues (Fig. 1):

Fig. 1: An AI-driven digital transformation in all aspects of human activity (Source: The Authors)

  • Integration of diverse data warehouses into unified ecosystems of AI and ML value-based services
  • Deployment of robust AI and ML processing capabilities for enhanced decision-making and the generation of value out of data
  • Design of novel AI and ML applications for predictive and analytical capabilities
  • Design of sophisticated AI- and ML-enabled intelligence components with critical social impact
  • Promotion of digital transformation in all aspects of human activity, including business, healthcare, government, commerce, and social intelligence

Such development will also have a critical impact on governments, policies, regulations, and initiatives that aim to translate the value of the AI-driven digital transformation into sustainable economic development. Additionally, the disruptive character of AI and ML technology and research will require further work on business models and the management of innovation capabilities.

This special issue is based on submissions invited from the 17th Annual Learning and Technology Conference (2019), held at Effat University, together with an open call. Several very good submissions were received, all of which were subjected to the rigorous peer-review process of the Journal of Ambient Intelligence and Humanized Computing.

The published papers in this special issue cover a variety of innovative topics, including:

  • Stock market prediction using machine learning
  • Detection of apple diseases and pests based on multi-model LSTM-based convolutional neural networks
  • ML for searching
  • Machine learning for learning automata
  • Entity recognition and relation extraction
  • Intelligent surveillance systems
  • Activity recognition and K-means clustering
  • Distributed mobility management
  • Review rating prediction with deep learning
  • Cybersecurity: botnet detection with deep learning
  • Self-training methods
  • Neuro-fuzzy inference systems
  • Fuzzy controllers
  • Monarch butterfly optimized control with robustness analysis
  • GMM methods for speaker age and gender classification
  • Regression methods for permeability prediction of petroleum reservoirs
  • Surface EMG signal classification
  • Pattern mining
  • Human activity recognition in smart environments
  • Teaching–learning-based optimization algorithms
  • Big data analytics
  • Diagnosis based on event-driven processing and machine learning for mobile healthcare

Over a decade ago, Effat University envisioned a timely platform that would bring together educators, researchers, and tech enthusiasts under one roof and function as a fount of creativity and innovation. The dream was that such a platform would bridge the existing gap and become a leading hub where innovators across disciplines could share their knowledge and exchange novel ideas. That dream was realized in 2003, when the first Learning & Technology Conference was held. Since then, the conference has covered a variety of cutting-edge themes, such as digital literacy, cyber citizenship, edutainment, and massive open online courses. It has also attracted prominent figures in science and technology, such as Farouq El Baz from NASA and Queen Rania Al-Abdullah of Jordan, who addressed large, eager-to-learn audiences and inspired many with unique stories.

While emerging innovations such as AI technologies are seen today as promising instruments that could pave our way to the future, they have also been focal points of the fruitful discussions that have always taken place at the L&T Conference. AI was selected as the theme of this conference because of its great impact. The Saudi government has recognized this impact and has already taken concrete steps to invest in AI. The Kingdom's Vision 2030 states: "In technology, we will increase our investments in, and lead, the digital economy." Dr. Ahmed Al Theneyan, Deputy Minister of Technology, Industry and Digital Capabilities, stated: "The Government has invested around USD 3 billion in building the infrastructure so that the country is AI-ready and can become a leader in AI use." Vision 2030 programs also promote innovation in technologies. Another major step the country has taken is establishing NEOM, a model smart city.

Effat University recognized this ambition and started working to make it a reality by offering academic programs that support the sectors needed for such projects. For example, the master's program in Energy Engineering was launched four years ago to support the energy sector, and the bachelor's program in Computer Science added tracks in Artificial Intelligence and Cyber Security in the Fall 2020 semester. Additionally, the Energy & Technology and Smart Building Research Centers were established to support innovation in the technology and energy sectors. In general, Effat University works to support the KSA in achieving its vision during this time of national transformation by graduating skilled citizens in different fields of technology.

The guest editors would like to take this opportunity to thank all the authors for the effort they put into preparing their manuscripts and for their valuable contributions. We wish to express our deepest gratitude to the referees, who provided instrumental and constructive feedback to the authors. We also extend our sincere thanks and appreciation to the organizing team, led by the Chair of the L&T 2019 Conference Steering Committee and University President, Dr. Haifa Jamal Al-Lail, for her support and dedication.

Our sincere thanks go to the Editor-in-Chief for his kind help and support.

References

Chui KT, Lytras MD, Visvizi A (2018) Energy sustainability in smart cities: artificial intelligence, smart monitoring, and optimization of energy consumption. Energies 11(11):2869


Chui KT, Fung DCL, Lytras MD, Lam TM (2020) Predicting at-risk university students in a virtual learning environment via a machine learning algorithm. Comput Human Behav 107:105584

Lytras MD, Visvizi A, Daniela L, Sarirete A, De Pablos PO (2018) Social networks research for sustainable smart education. Sustainability 10(9):2974

Lytras MD, Visvizi A, Sarirete A (2019) Clustering smart city services: perceptions, expectations, responses. Sustainability 11(6):1669

Lytras MD, Visvizi A, Chopdar PK, Sarirete A, Alhalabi W (2021) Information management in smart cities: turning end users’ views into multi-item scale development, validation, and policy-making recommendations. Int J Inf Manag 56:102146

Visvizi A, Jussila J, Lytras MD, Ijäs M (2020) Tweeting and mining OECD-related microcontent in the post-truth era: A cloud-based app. Comput Human Behav 107:105958


Author information

Authors and Affiliations

Effat College of Engineering, Effat Energy and Technology Research Center, Effat University, P.O. Box 34689, Jeddah, Saudi Arabia

Akila Sarirete, Zain Balfagih, Tayeb Brahimi & Miltiadis D. Lytras

King Abdulaziz University, Jeddah, 21589, Saudi Arabia

Miltiadis D. Lytras

Effat College of Business, Effat University, P.O. Box 34689, Jeddah, Saudi Arabia

Anna Visvizi

Institute of International Studies (ISM), SGH Warsaw School of Economics, Aleja Niepodległości 162, 02-554, Warsaw, Poland


Corresponding author

Correspondence to Akila Sarirete.



About this article

Sarirete, A., Balfagih, Z., Brahimi, T. et al. Artificial intelligence and machine learning research: towards digital transformation at a global scale. J Ambient Intell Human Comput 13, 3319–3321 (2022). https://doi.org/10.1007/s12652-021-03168-y

Issue Date: July 2022




Tzu Chi Medical Journal, 32(4); Oct–Dec 2020

The impact of artificial intelligence on human society and bioethics

Michael Cheng-Tek Tai

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known to some as Industrial Revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes for humankind in the 21st century, and then propose a set of principles for AI bioethics. IR 1.0, the industrial revolution of the 18th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe, so that the world will benefit from the progress of this new intelligence.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) has many different definitions. Some see it as a created technology that allows computers and machines to function intelligently. Some see it as a machine that replaces human labor to deliver faster and more effective results. Others see it as "a system" with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [1].

Despite the different definitions, the common understanding is that AI is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe the functions of human-made tools that emulate the "cognitive" abilities of the natural intelligence of human minds [2].

Along with the rapid development of cybernetic technology in recent years, AI has appeared in almost all spheres of our lives, and some of it may no longer be regarded as AI because it has become so common in daily life, such as optical character recognition or Siri (speech interpretation and recognition interface), the information-search assistant on computers [3].

DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task, such as facial recognition, an Internet Siri search, or driving a car. Many currently existing systems that claim to use "AI" likely operate as weak AI focused on a narrowly defined specific function. Although weak AI seems helpful to human living, some still think it could be dangerous because, if it malfunctions, it could disrupt the electric grid or damage nuclear power plants.

The long-term goal of many researchers is to create strong AI, or artificial general intelligence (AGI): the speculative intelligence of a machine with the capacity to understand or learn any intellectual task a human being can, and thus to assist humans in unraveling the problems they confront. While narrow AI may outperform humans at specific tasks, such as playing chess or solving equations, its effect is still narrow. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different conception of AI: a machine that can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [4].

In summary, we can see these different functions of AI [5, 6]:

  • Automation: what makes a system or process function automatically
  • Machine learning and vision: the science of getting a computer to act through deep learning in order to predict and analyze, and to see through a camera, analog-to-digital conversion, and digital signal processing
  • Natural language processing: the processing of human language by a computer program, such as spam detection or instantly converting one language to another to help humans communicate (see the sketch after this list)
  • Robotics: a field of engineering focused on the design and manufacture of robots. They are used to perform tasks for humans' convenience, or tasks too difficult or dangerous for humans to perform, and can operate without stopping, such as on assembly lines
  • Self-driving cars: the use of a combination of computer vision, image recognition, and deep learning to build automated control of a vehicle.
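To make the natural-language-processing item above concrete, here is a minimal spam-detection sketch. It is illustrative only and not from the article; it assumes scikit-learn is available, and the tiny inline dataset is invented for the example.

```python
# Minimal spam-detection sketch (illustrative; assumes scikit-learn is installed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented dataset: 1 = spam, 0 = not spam.
messages = [
    "Win a free prize now",
    "Meeting moved to 3pm",
    "Claim your free lottery reward",
    "Lunch tomorrow?",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free reward waiting"]))  # expected: [1], i.e., spam
```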

DO HUMAN BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE?

Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the tasks they work on; therefore, the pressure for further development motivates humankind to look for new and better ways of doing things. Humankind, as Homo sapiens, discovered that tools could ease many hardships of daily living, and through the tools they invented, humans could complete their work better, faster, smarter, and more effectively. The drive to create new things became the incentive of human progress. We enjoy a much easier and more leisurely life today because of the contribution of technology. Human society has used tools since the beginning of civilization, and human progress depends on them. People living in the 21st century do not have to work as hard as their forefathers did, because they have new machines to work for them. This all seems well and good, but a warning came early in the 20th century as technology kept developing: Aldous Huxley warned in his book Brave New World that, with the development of genetic technology, humans might step into a world in which we create a monster or a superhuman.

Besides, up-to-date AI is breaking into the healthcare industry too, assisting doctors in diagnosis, finding the sources of diseases, suggesting various treatments, performing surgery, and predicting whether an illness is life-threatening [7]. In a recent study, surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [8, 9]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI, including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, and predicting flight delays. All of these have made human life much easier and more convenient, so much so that we are used to them and take them for granted. AI has become indispensable; even if it is not absolutely needed, without it our world would be in chaos in many ways today.

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact

Questions have been asked: with the progressive development of AI, human labor may no longer be needed, as everything can be done mechanically. Will humans become lazier and eventually degrade to the point where we return to our primitive form of being? The process of evolution takes eons, so we would not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to be in charge and disobey the orders of its master, humankind?

Let us look at the negative impacts AI may have on human society [10, 11]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has had to be industrious to make its living, but with the service of AI, we can simply program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face to exchange ideas. AI will stand in between people, as personal gatherings will no longer be needed for communication
  • Unemployment comes next, because much work will be replaced by machinery. Today, many automobile assembly lines are filled with machinery and robots, forcing traditional workers out of their jobs. Even in supermarkets, store clerks will no longer be needed, as digital devices can take over human labor
  • Wealth inequality will grow, as the investors in AI will take the major share of the earnings. The gap between rich and poor will widen, and the so-called "M-shaped" wealth distribution will become more pronounced
  • New issues will surface, not only in a social sense but also within AI itself, as an AI trained to carry out a given task may eventually reach a stage beyond human control, creating unanticipated problems and consequences: once loaded with all the needed algorithms, AI may automatically follow its own course, ignoring the commands given by the human controller
  • The human masters who create AI may build in racial bias or egocentric orientations that harm certain people or things. For instance, the United Nations has voted to limit the spread of nuclear power for fear of its indiscriminate use to destroy humankind or to target certain races or regions for domination. Similarly, AI could be programmed to target a certain race or certain programmed objects to carry out its programmers' commands of destruction, creating world disaster.

POSITIVE IMPACT

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, working together, can design AI aimed at medical diagnosis and treatment, offering reliable and safe systems of healthcare delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can digital computers assist in analysis, but robotic systems can also be created to perform delicate medical procedures with precision. Here, we see the contributions of AI to healthcare [7, 11]:

Fast and accurate diagnostics

IBM's Watson computer has been used for diagnosis with fascinating results. Loading data into the computer instantly yields an AI diagnosis, and AI can also propose various treatments for physicians to consider. The procedure is roughly this: the digital results of a physical examination are loaded into the computer, which considers all possibilities, automatically diagnoses whether or not the patient suffers from particular deficiencies or illnesses, and even suggests the various kinds of available treatment.
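The procedure just described is, in effect, a classification pipeline: examination results go in, and ranked candidate diagnoses come out. The sketch below shows that shape in miniature; it is a toy illustration with invented features and labels, not IBM Watson's actual method.

```python
# Toy diagnostic-ranking sketch (illustrative; not IBM Watson's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented examination features: [temperature_C, heart_rate, white_cell_count].
exams = np.array([
    [36.8, 70, 6.0],
    [39.1, 105, 14.0],
    [38.2, 90, 11.0],
    [36.6, 65, 5.5],
])
diagnoses = np.array(["healthy", "infection", "infection", "healthy"])

model = LogisticRegression(max_iter=1000).fit(exams, diagnoses)

# Rank candidate diagnoses for a new exam by predicted probability.
new_exam = np.array([[38.8, 98, 12.5]])
for label, p in sorted(zip(model.classes_, model.predict_proba(new_exam)[0]),
                       key=lambda t: -t[1]):
    print(f"{label}: {p:.2f}")
```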

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, and loneliness, and to increase social interaction. Now robots are being suggested to accompany lonely older people and even to help with house chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [12].

Reduce errors related to human fatigue

Human error in the workforce is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It reduces errors and can accomplish tasks faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures are now available for people to choose. Although such AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology that allows surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma occurs, with less blood loss and less anxiety for the patients.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging had become routine. The search continues for new algorithms to detect specific diseases and to analyze the results of scans [9]. All of these are contributions of AI technology.

Virtual presence

Virtual presence technology enables remote diagnosis of disease. The patient does not have to leave his or her bed; using a remote-presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

SOME CAUTIONS TO KEEP IN MIND

Despite all the positive promise of AI, human experts are still essential and necessary to design, program, and operate AI and to prevent unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience analyzing private and public technology companies, published a free newsletter indicating that although AI holds promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI meets an impasse and, to carry on its mission, may simply proceed indiscriminately, creating more problems. Thus, vigilant watch over AI's functioning cannot be neglected. This reminder is known as the physician-in-the-loop [13].

The question of ethical AI was consequently brought up by Elizabeth Gibney in an article published in Nature, cautioning against bias and possible societal harm [14]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, in 2020 took up the ethical controversies of applying AI technology, such as in predictive policing or facial recognition, where biased algorithms can end up hurting vulnerable populations [14]. For instance, such systems can be programmed to target a certain race, or to decree certain people probable suspects of crime or troublemakers.

THE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed

Bioethics is a discipline that focuses on the relationships among living beings. It accentuates the good and the right in biospheres and can be categorized into at least three areas: bioethics in health settings, concerning the relationship between physicians and patients; bioethics in social settings, concerning relationships among humankind; and bioethics in environmental settings, concerning the relationship between man and nature, including animal ethics, land ethics, and ecological ethics. All of these concern relationships within and among natural existences.

As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships within natural existences, whether humankind or its environment, which are parts of natural phenomena. But now people must deal with something human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have they had to think about how to relate ethically to their own creation. AI by itself has no feeling or personality. AI engineers have realized the importance of giving AI the ability to discern, so that it will avoid deviant activities that cause unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI does not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned as early as 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [15]. Humans, limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom argues that AI will pose a threat to humankind: sufficiently intelligent AI can exhibit convergent behavior, such as acquiring resources or protecting itself from being shut down, and might harm humanity [16].

The question is: do we have to think of bioethics for humans' own created product, which bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes "truly ubiquitous," it has tremendous potential to positively impact all manner of life, from industry to employment to healthcare and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: "I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them" [17]. The European Union's High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy AI in 2019, suggesting that AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful: respecting all applicable laws and regulations
  • Ethical: respecting ethical principles and values
  • Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective, while taking into account its social environment [18].

Seven requirements are recommended [18]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility, or "AI humanities." To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology exists to serve, not to manipulate, humans and society. Bostrom and Yudkowsky list responsibility, transparency, auditability, incorruptibility, and predictability [19] as criteria for a computerized society to consider.

SUGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said: "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient" [20]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how an analytic works, while interpretability is being able to understand a particular result produced by an analytic [20].
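To make that distinction concrete, the sketch below contrasts a global view of how a model works (explainability) with a trace of one particular prediction (interpretability). It is an illustrative sketch using scikit-learn's decision tree, not an example from the article.

```python
# Explainability vs. interpretability sketch (illustrative, using scikit-learn).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Explainability: a global view of how the analytic works, here via the
# importance the tree assigns to each input feature.
for name, importance in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Interpretability: understanding one particular result, here by tracing
# the tree nodes visited for a single sample.
sample = data.data[:1]
print("nodes visited:", model.decision_path(sample).indices)
print("prediction:", data.target_names[model.predict(sample)[0]])
```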

All the principles suggested by scholars for AI bioethics are well taken. Drawing on bioethical principles from all the related fields of bioethics, I suggest four principles here to guide the future development of AI technology. We must bear in mind, however, that the main attention should still be placed on humans, because AI, after all, is designed and manufactured by humans. AI proceeds with its work according to its algorithm. AI itself cannot empathize, nor does it have the ability to discern good from evil, and it may commit mistakes in its processes. All the ethical quality of AI depends on its human designers; therefore, this is an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good; here it refers to the purpose and functions of AI, which should benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including all life forms, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is none other than to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature
  • Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot elevate itself above social and moral norms and must be bias-free. Scientific and technological development must serve the enhancement of human well-being, which is the chief value AI must hold dear as it progresses
  • Lucidity: AI must be transparent, without any hidden agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and be subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't "explain its work" may pose an unacceptable risk. Thus, explainability and interpretability are absolutely required
  • Accountability: AI designers and developers must bear in mind that they carry a heavy responsibility on their shoulders for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create.

CONCLUSION

AI is here to stay in our world, and we must try to enforce an AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as compassion and the wisdom to morally discern and judge [10]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all information, data, and programming for AI to function like a human being, it is still a machine and a tool. AI will always remain AI, without authentic human feelings and the capacity to commiserate. Therefore, AI technology must be advanced with extreme caution. As Von der Leyen said in the White Paper on AI – A European Approach to Excellence and Trust: "AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market" [21].

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

REFERENCES


NEWS EXPLAINER

16 February 2024

What the EU’s tough AI law means for research and ChatGPT

  • Elizabeth Gibney



Representatives of EU member governments approved the EU AI Act this month. Credit: Jonathan Raa/NurPhoto via Getty

European Union countries are poised to adopt the world’s first comprehensive set of laws to regulate artificial intelligence (AI). The EU AI Act puts its toughest rules on the riskiest AI models, and is designed to ensure that AI systems are safe and respect fundamental rights and EU values.

“The act is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California.

The legislation comes as AI develops apace. This year is expected to see the launch of new versions of generative AI models — such as GPT, which powers ChatGPT, developed by OpenAI in San Francisco, California — and existing systems are being used in scams and to propagate misinformation. China already uses a patchwork of laws to guide commercial use of AI, and US regulation is under way. Last October, President Joe Biden signed the nation’s first AI executive order, requiring federal agencies to take action to manage the risks of AI.

EU nations’ governments approved the legislation on 2 February, and the law now needs final sign-off from the European Parliament, one of the EU’s three legislative branches; this is expected to happen in April. If the text remains unchanged, as policy watchers expect, the law will enter into force in 2026.

Some researchers have welcomed the act for its potential to encourage open science, whereas others worry that it could stifle innovation. Nature examines how the law will affect research.

What is the EU’s approach?

The EU has chosen to regulate AI models on the basis of their potential risk, by applying stricter rules to riskier applications and outlining separate regulations for general-purpose AI models, such as GPT, which have broad and unpredictable uses.

The law bans AI systems that carry ‘unacceptable risk’, for example those that use biometric data to infer sensitive characteristics, such as people’s sexual orientation. High-risk applications, such as using AI in hiring and law enforcement, must fulfil certain obligations; for example, developers must show that their models are safe, transparent and explainable to users, and that they adhere to privacy regulations and do not discriminate. For lower-risk AI tools, developers will still have to tell users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

Some don’t think the laws go far enough, leaving “gaping” exemptions for military and national-security purposes, as well as loopholes for AI use in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that studies the effects of automation on society.

How much will it affect researchers?

In theory, very little. Last year, the European Parliament added a clause to the draft act that would exempt AI models developed purely for research, development or prototyping. The EU has worked hard to make sure that the act doesn’t affect research negatively, says Joanna Bryson, who studies AI and its regulation at the Hertie School in Berlin. “They really don’t want to cut off innovation, so I’d be astounded if this is going to be a problem.”


The European Parliament must give the final green light to the law. A vote is expected in April. Credit: Jean-Francois Badias/AP via Alamy

But the act is still likely to have an effect, by making researchers think about transparency, how they report on their models and potential biases, says Hovy. “I think it will filter down and foster good practice,” he says.

Robert Kaczmarczyk, a physician at the Technical University of Munich in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization aimed at democratizing machine learning, worries that the law could hinder small companies that drive research, and which might need to establish internal structures to adhere to the laws. “To adapt as a small company is really hard,” he says.

What does it mean for powerful models such as GPT?

After heated debate, policymakers chose to regulate powerful general-purpose models — such as the generative models that create images, code and video — in their own two-tier category.

The first tier covers all general-purpose models, except those used only in research or published under an open-source licence. These will be subject to transparency requirements, including detailing their training methodologies and energy consumption, and must show that they respect copyright laws.

The second, much stricter, tier will cover general-purpose models deemed to have “high-impact capabilities”, which pose a higher “systemic risk”. These models will be subject to “some pretty significant obligations”, says Bommasani, including stringent safety testing and cybersecurity checks. Developers will be made to release details of their architecture and data sources.

For the EU, ‘big’ effectively equals dangerous: any model that uses more than 10^25 FLOPs (the number of computer operations) in training qualifies as high impact. Training a model with that amount of computing power costs between US$50 million and $100 million — so it is a high bar, says Bommasani. It should capture models such as GPT-4, OpenAI’s current model, and could include future iterations of Meta’s open-source rival, LLaMA. Open-source models in this tier are subject to regulation, although research-only models are exempt.
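As a rough illustration of how that threshold relates to model scale, the sketch below uses the common scaling-literature approximation that training compute is about 6 x parameters x training tokens. The approximation and the example model sizes are assumptions for illustration, not part of the Act.

```python
# Rough check against the EU AI Act's 10^25-FLOP threshold (illustrative).
# Assumes the common approximation: training FLOPs ~ 6 * parameters * tokens.
THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

# Invented example models, not real systems.
examples = {
    "mid-size model (7e9 params, 2e12 tokens)": training_flops(7e9, 2e12),
    "frontier-scale model (1e12 params, 15e12 tokens)": training_flops(1e12, 15e12),
}

for name, flops in examples.items():
    tier = "high-impact (systemic risk)" if flops > THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {tier}")
```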

Some scientists are against regulating AI models, preferring to focus on how they’re used. “Smarter and more capable does not mean more harm,” says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and another co-founder of LAION. Basing regulation on any measure of capability has no scientific basis, adds Jitsev. They use the analogy of defining as dangerous all chemistry that uses a certain number of person-hours. “It’s as unproductive as this.”

Will the act bolster open-source AI?

EU policymakers and open-source advocates hope so. The act incentivizes making AI information available, replicable and transparent, which is almost like “reading off the manifesto of the open-source movement”, says Hovy. Some models are more open than others, and it remains unclear how the language of the act will be interpreted, says Bommasani. But he thinks legislators intend general-purpose models, such as LLaMA-2 and those from start-up Mistral AI in Paris, to be exempt.

The EU’s approach of encouraging open-source AI is notably different from the US strategy, says Bommasani. “The EU’s line of reasoning is that open source is going to be vital to getting the EU to compete with the US and China.”

How is the act going to be enforced?

The European Commission will create an AI Office to oversee general-purpose models, advised by independent experts. The office will develop ways to evaluate the capabilities of these models and monitor related risks. But even if companies such as OpenAI comply with regulations and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize submissions adequately. “The demand to be transparent is very important,” they say. “But there was little thought spent on how these procedures have to be executed.”

doi: https://doi.org/10.1038/d41586-024-00497-8





https://www.nist.gov/news-events/news/2024/02/nist-researchers-suggest-historical-precedent-ethical-ai-research

NIST Researchers Suggest Historical Precedent for Ethical AI Research

The Belmont Report’s guidelines could help avoid repeating past mistakes in AI-related human subjects research.

  • A research paper suggests that a watershed report on ethical treatment of human subjects would translate well as a basis for ethical research in AI.
  • This 1979 work, the Belmont Report, has its findings codified in federal regulations, which apply to government-funded research.
  • Applying the Belmont Report’s principles to human subjects in AI research could bring us closer to trustworthy and responsible use of AI.


If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles? 

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI.

“We looked at existing principles of human subjects research and explored how they could apply to AI,” said Kristen Greene, a NIST social scientist and one of the paper’s authors. “There’s no need to reinvent the wheel. We can apply an established paradigm to make sure we are being transparent with research participants, as their data may be used to train AI.”

The Belmont Report arose from an effort to respond to unethical research studies, such as the Tuskegee syphilis study, involving human subjects. In 1974, the U.S. created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and it identified the basic ethical principles for protecting people in research studies. A U.S. federal regulation later codified these principles in 1991’s Common Rule, which requires that researchers get informed consent from research participants. Adopted by many federal departments and agencies, the Common Rule was revised in 2017 to take into account changes and developments in research.

There is a limitation to the Belmont Report and Common Rule, though: The regulations that require application of the Belmont Report’s principles apply only to government research. Industry, however, is not bound by them.  

The NIST authors are suggesting that the concepts be applied more broadly to all research that includes human subjects. Databases used to train AI can hold information scraped from the web, but the people who are the source of this data may not have consented to its use — a violation of the “respect for persons” principle.  

“For the private sector, it is a choice whether or not to adopt ethical review principles,” Greene said. 

While the Belmont Report was largely concerned with inappropriate inclusion of certain individuals, the NIST authors mention that a major concern with AI research is inappropriate exclusion, which can create bias in a dataset against certain demographics. Past research has shown that face recognition algorithms trained primarily on one demographic will be less capable of distinguishing individuals in other demographics.

Applying the report’s three principles to AI research could be fairly straightforward, the authors suggest. Respect for persons would require subjects to provide informed consent for what happens to them and their data, while beneficence would imply that studies be designed to minimize risk to participants. Justice would require that subjects be selected fairly, with a mind to avoiding inappropriate exclusion. 
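As a toy illustration of how those principles might be operationalized in an AI data pipeline, the sketch below keeps only records whose subjects gave informed consent (respect for persons) and then reports demographic representation so that inappropriate exclusion is visible (justice). The record format and field names are invented for the example and are not from the paper.

```python
# Toy data-screening sketch for the Belmont principles (illustrative only).
# Field names and records are invented; this is not from the NIST paper.
from collections import Counter

records = [
    {"id": 1, "consented": True,  "group": "A"},
    {"id": 2, "consented": False, "group": "B"},
    {"id": 3, "consented": True,  "group": "B"},
    {"id": 4, "consented": True,  "group": "A"},
    {"id": 5, "consented": True,  "group": "A"},
]

# Respect for persons: train only on data whose subjects gave informed consent.
usable = [r for r in records if r["consented"]]

# Justice: check demographic representation to surface inappropriate exclusion.
counts = Counter(r["group"] for r in usable)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    print(f"group {group}: {n}/{total} = {n / total:.0%} of usable data")
```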

Greene said the paper is best seen as a starting point for a discussion about AI and our data, one that will help companies and the people who use their products alike. 

“We’re not advocating more government regulation. We’re advocating thoughtfulness,” she said. “We should do this because it’s the right thing to do.”

Paper: K.K. Greene, M.F. Theofanos, C. Watson, A. Andrews and E. Barron. Avoiding Past Mistakes in Unethical Human Subjects Research: Moving from AI Principles to Practice. Computer. February 2024. DOI: 10.1109/MC.2023.3327653

EU AI Act: first regulation on artificial intelligence

The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. Find out how it will protect you.


As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the framework, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users, and the different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.

What Parliament wants in AI legislation

Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.

Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they still need to be assessed. The main tiers are summarized in the short sketch at the end of this section.

Unacceptable risk

AI systems in the unacceptable-risk category are considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

General purpose and generative AI

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Limited risk

Limited risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; after interacting with an application, they can then decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
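
To make the tiered structure easier to see at a glance, here is a minimal, purely illustrative Python sketch. The tier names follow the article above, but the obligation strings are loose paraphrases of the Act's requirements, not legal text.

```python
# Illustrative summary of the AI Act's risk tiers (paraphrased, not legal text).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring, manipulative systems)"
    HIGH = "assessed before market entry and throughout the lifecycle"
    LIMITED = "minimal transparency duties (users must know they are interacting with AI)"
    MINIMAL = "largely unregulated under the Act"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```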

On December 9, 2023, Parliament reached a provisional agreement with the Council on the AI Act. The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Before all MEPs have their say on the agreement, Parliament’s internal market and civil liberties committees will vote on it.

More on the EU’s digital measures

  • Cryptocurrency dangers and the benefits of EU legislation
  • Fighting cybercrime: new EU cybersecurity laws explained
  • Boosting data sharing in the EU: what are the benefits?
  • EU Digital Markets Act and Digital Services Act
  • Five ways the European Parliament wants to protect online gamers
  • Artificial Intelligence Act

Stanford Medicine study identifies distinct brain organization patterns in women and men

Stanford Medicine researchers have developed a powerful new artificial intelligence model that can distinguish between male and female brains.

February 20, 2024

A new study by Stanford Medicine investigators unveils a new artificial intelligence model that was more than 90% successful at determining whether scans of brain activity came from a woman or a man.

The findings, published Feb. 20 in the Proceedings of the National Academy of Sciences, help resolve a long-term controversy about whether reliable sex differences exist in the human brain and suggest that understanding these differences may be critical to addressing neuropsychiatric conditions that affect women and men differently.

“A key motivation for this study is that sex plays a crucial role in human brain development, in aging, and in the manifestation of psychiatric and neurological disorders,” said Vinod Menon, PhD, professor of psychiatry and behavioral sciences and director of the Stanford Cognitive and Systems Neuroscience Laboratory. “Identifying consistent and replicable sex differences in the healthy adult brain is a critical step toward a deeper understanding of sex-specific vulnerabilities in psychiatric and neurological disorders.”

Menon is the study’s senior author. The lead authors are senior research scientist Srikanth Ryali, PhD, and academic staff researcher Yuan Zhang, PhD.

“Hotspots” that most helped the model distinguish male brains from female ones include the default mode network, a brain system that helps us process self-referential information, and the striatum and limbic network, which are involved in learning and how we respond to rewards.

The investigators noted that this work does not weigh in on whether sex-related differences arise early in life or may be driven by hormonal differences or the different societal circumstances that men and women may be more likely to encounter.

Uncovering brain differences

The extent to which a person’s sex affects how their brain is organized and operates has long been a point of dispute among scientists. While we know the sex chromosomes we are born with help determine the cocktail of hormones our brains are exposed to — particularly during early development, puberty and aging — researchers have long struggled to connect sex to concrete differences in the human brain. Brain structures tend to look much the same in men and women, and previous research examining how brain regions work together has also largely failed to turn up consistent brain indicators of sex.

In their current study, Menon and his team took advantage of recent advances in artificial intelligence, as well as access to multiple large datasets, to pursue a more powerful analysis than has previously been employed. First, they created a deep neural network model, which learns to classify brain imaging data: As the researchers showed brain scans to the model and told it that it was looking at a male or female brain, the model started to “notice” what subtle patterns could help it tell the difference.

This model demonstrated superior performance compared with those in previous studies, in part because it used a deep neural network that analyzes dynamic MRI scans. This approach captures the intricate interplay among different brain regions. When the researchers tested the model on around 1,500 brain scans, it could almost always tell if the scan came from a woman or a man.
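
The article does not spell out the network’s architecture, but the general recipe — a deep model classifying sex from region-level fMRI time series — can be sketched in a few lines of PyTorch. Everything below (shapes, layer sizes, the GRU choice) is an assumption for illustration, not the authors’ model.

```python
# Minimal sketch of a sex classifier over fMRI time series. Illustrative only:
# shapes, architecture and hyperparameters are assumptions, not the study's model.
import torch
import torch.nn as nn

class BrainSexClassifier(nn.Module):
    def __init__(self, n_regions: int = 200, hidden: int = 64):
        super().__init__()
        # A recurrent layer models dynamics across scan time points, echoing the
        # article's point that the model analyzes *dynamic* MRI, not static images.
        self.rnn = nn.GRU(input_size=n_regions, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # two classes: female / male

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_points, brain_regions) region-averaged activity
        _, h = self.rnn(x)       # h: (num_layers, batch, hidden) final hidden state
        return self.head(h[-1])  # logits: (batch, 2)

model = BrainSexClassifier()
scans = torch.randn(8, 300, 200)    # 8 dummy scans, 300 time points, 200 regions
labels = torch.randint(0, 2, (8,))  # dummy sex labels
loss = nn.functional.cross_entropy(model(scans), labels)
loss.backward()                     # one illustrative training step
```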

The model’s success suggests that detectable sex differences do exist in the brain but just haven’t been picked up reliably before. The fact that it worked so well in different datasets, including brain scans from multiple sites in the U.S. and Europe, makes the findings especially convincing, as it controls for many confounds that can plague studies of this kind.

“This is a very strong piece of evidence that sex is a robust determinant of human brain organization,” Menon said.

Making predictions

Until recently, a model like the one Menon’s team employed would help researchers sort brains into different groups but wouldn’t provide information about how the sorting happened. Today, however, researchers have access to a tool called “explainable AI,” which can sift through vast amounts of data to explain how a model’s decisions are made.

Using explainable AI, Menon and his team identified the brain networks that were most important to the model’s judgment of whether a brain scan came from a man or a woman. They found the model was most often looking to the default mode network, striatum, and the limbic network to make the call.
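
“Explainable AI” is a family of techniques, and this article does not name the study’s specific method. One common approach, gradient-times-input saliency, conveys the idea: rank brain regions by how strongly they influenced the model’s decision. The sketch below continues the hypothetical classifier above and is illustrative only.

```python
# Gradient-x-input saliency (illustrative; the study's actual attribution method
# is not named in this article). Continues the BrainSexClassifier sketch above.
model.eval()
scan = torch.randn(1, 300, 200, requires_grad=True)  # one hypothetical scan
logits = model(scan)                                 # (1, 2) class scores
logits[0, logits.argmax()].backward()                # gradient of the winning class

# Attribution per brain region: average |gradient * input| across time points.
attribution = (scan.grad * scan.detach()).abs().mean(dim=1).squeeze(0)  # (200,)
print("Most influential regions:", attribution.topk(10).indices.tolist())
```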

The team then wondered if they could create another model that could predict how well participants would do on certain cognitive tasks based on functional brain features that differ between women and men. They developed sex-specific models of cognitive abilities: One model effectively predicted cognitive performance in men but not women, and another in women but not men. The findings indicate that functional brain characteristics varying between sexes have significant behavioral implications.
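
As a hedged sketch of that workflow, one could fit a separate regression per sex on functional brain features and compare within-sex fit against cross-sex generalization. The data, features, and ridge-regression choice below are all assumptions standing in for the study’s actual pipeline.

```python
# Sex-specific cognitive prediction, sketched with dummy data (the study's real
# features, task scores and estimator are not given in this article).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_f, y_f = rng.normal(size=(300, 50)), rng.normal(size=300)  # women: features, scores
X_m, y_m = rng.normal(size=(300, 50)), rng.normal(size=300)  # men: features, scores

model_f = Ridge(alpha=1.0).fit(X_f, y_f)  # trained on women only
model_m = Ridge(alpha=1.0).fit(X_m, y_m)  # trained on men only

# With real data, the article's claim is that each model scores well within its
# own sex and poorly across sexes; these dummies just demonstrate the workflow.
print("women-model, within-sex R^2:", round(model_f.score(X_f, y_f), 3))
print("women-model, cross-sex R^2: ", round(model_f.score(X_m, y_m), 3))
print("men-model,   within-sex R^2:", round(model_m.score(X_m, y_m), 3))
```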

“These models worked really well because we successfully separated brain patterns between sexes,” Menon said. “That tells me that overlooking sex differences in brain organization could lead us to miss key factors underlying neuropsychiatric disorders.”

While the team applied their deep neural network model to questions about sex differences, Menon says the model can be applied to answer questions regarding how just about any aspect of brain connectivity might relate to any kind of cognitive ability or behavior. He and his team plan to make their model publicly available for any researcher to use.

“Our AI models have very broad applicability,” Menon said. “A researcher could use our models to look for brain differences linked to learning impairments or social functioning differences, for instance — aspects we are keen to understand better to aid individuals in adapting to and surmounting these challenges.”

The research was sponsored by the National Institutes of Health (grants MH084164, EB022907, MH121069, K25HD074652 and AG072114), the Transdisciplinary Initiative, the Uytengsu-Hamilton 22q11 Programs, the Stanford Maternal and Child Health Research Institute, and the NARSAD Young Investigator Award.

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.

Penn Engineering launches first Ivy League undergraduate degree in artificial intelligence

The new degree will push the limits of AI’s potential and prepare students to lead the use of this world-changing technology.

The University of Pennsylvania School of Engineering and Applied Science today announced the launch of a Bachelor of Science in Engineering in Artificial Intelligence (AI) degree, the first undergraduate major of its kind among Ivy League universities and one of the very first AI undergraduate engineering programs in the U.S.

The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning, and even national security. This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way.

“Inventive at its core, Penn excels at the cutting edge,” says Interim President J. Larry Jameson. “Data, including AI, is a critical area of focus for our strategic framework, In Principle and Practice, and this new degree program represents a leap forward for the Penn engineers who will lead in developing and deploying these powerful technologies in service to humanity. We are deeply grateful to Raj and Neera Singh, whose leadership helps make this possible.”

The Raj and Neera Singh Program in Artificial Intelligence equips students to unlock AI’s potential to benefit our society. Students in the program will be empowered to develop responsible AI tools that can harness the full knowledge available on the internet, provide superhuman attention to detail, and augment humans in making transformative scientific discoveries, researching materials for chips of the future, creating breakthroughs in health care through new antibiotics, applying lifesaving treatments, and accelerating knowledge and creativity.

Raj and Neera Singh are visionaries in technology and a constant force for innovation through their philanthropy. Their generosity provides funding to support leadership, faculty, and infrastructure for the new program.

“Penn Engineering has long been a pioneer in computing and education, with ENIAC, the first digital computer, and the first Ph.D. in computer science,” says Raj Singh, who, together with his wife, Neera, has established the first undergraduate degree program in artificial intelligence within the Ivy League. “This proud legacy of innovation continues with Penn Engineering’s AI program, which will produce engineers that can leverage this powerful technology in a way that benefits all humankind.”

“We are thrilled to continue investing in Penn Engineering and the students who can best shape the future of this field,” says Neera Singh.

Preparing the next generation of AI engineers

The curriculum offers high-level coursework in topics including machine learning, computing algorithms, data analytics, and advanced robotics.

“The timing of this new undergraduate program comes as AI poses one of the most promising yet challenging opportunities the world currently faces,” says Vijay Kumar, Nemirovsky Family Dean of Penn Engineering. “Thanks to the generosity of Raj and Neera Singh to Penn Engineering’s B.S.E. in Artificial Intelligence program, we are preparing the next generation of engineers to create a society where AI isn’t just a tool, but a fundamental force for good to advance society in ways previously unimaginable.”

Leading the program will be George J. Pappas, UPS Foundation Professor of Transportation at Penn Engineering. “Realizing the potential of AI for positive social impact stands as one of the paramount challenges confronting engineering,” says Pappas, a 2024 National Academy of Engineering inductee. “We are excited to introduce a cutting-edge curriculum poised to train our students as leaders and innovators in the ongoing AI revolution.”

Ivy League coursework equipping students for the future

The new program’s courses will be taught by world-renowned faculty in the setting of Amy Gutmann Hall, Penn Engineering’s newest building. A hub for data science on campus and for the Philadelphia community when it officially opens this year, the state-of-the-art facilities in Amy Gutmann Hall will further transform the University’s capabilities in engineering education, research, and innovation as Penn Engineering advances the development of artificial intelligence.

“We are training students for jobs that don’t yet exist in fields that may be completely new or revolutionized by the time they graduate,” says Robert Ghrist, associate dean of Undergraduate Education in Penn Engineering and the Andrea Mitchell University Professor. “In my decades of teaching, this is one of the most exciting educational opportunities I’ve ever seen, and I can’t wait to work with these amazing students.”

More details about the AI curriculum and a full list of courses available within the program can be reviewed at Penn Engineering’s new artificial intelligence website.

“Our carefully selected curriculum reflects the reality that AI has come into its own as an academic discipline, not only because of the many amazing things it can do, but also because we think it’s important to address fundamental questions about the nature of intelligence and learning, how to align AI with our social values, and how to build trustworthy AI systems,” says Zachary Ives, Adani President’s Distinguished Professor and Chair of the Department of Computer and Information Science in Penn Engineering.

The new B.S.E. in Artificial Intelligence program will begin in fall 2024, with applications available this fall for current University of Pennsylvania students who would like to transfer into the 2024 cohort. Applications for all prospective students, for fall 2025 entry, will open in fall 2024.
