Responsible media technology and AI: challenges and research directions

  • Opinion Paper
  • Open access
  • Published: 20 December 2021
  • Volume 2, pages 585–594 (2022)


  • Christoph Trattner (ORCID: 0000-0002-1193-0508) 1 ,
  • Dietmar Jannach 2 ,
  • Enrico Motta 3 ,
  • Irene Costera Meijer 4 ,
  • Nicholas Diakopoulos 5 ,
  • Mehdi Elahi 1 ,
  • Andreas L. Opdahl 1 ,
  • Bjørnar Tessem 1 ,
  • Njål Borch 6 ,
  • Morten Fjeld 1 ,
  • Lilja Øvrelid 7 ,
  • Koenraad De Smedt 1 &
  • Hallvard Moe 1  


The last two decades have witnessed major disruptions to the traditional media industry as a result of technological breakthroughs. New opportunities and challenges continue to arise, most recently as a result of the rapid advance and adoption of artificial intelligence technologies. On the one hand, the broad adoption of these technologies may introduce new opportunities for diversifying media offerings, fighting disinformation, and advancing data-driven journalism. On the other hand, techniques such as algorithmic content selection and user personalization can introduce risks and societal threats. The challenge of balancing these opportunities and benefits against their potential for negative impacts underscores the need for more research in responsible media technology. In this paper, we first describe the major challenges—both for societies and the media industry—that come with modern media technology. We then outline various places in the media production and dissemination chain, where research gaps exist, where better technical approaches are needed, and where technology must be designed in a way that can effectively support responsible editorial processes and principles. We argue that a comprehensive approach to research in responsible media technology, leveraging an interdisciplinary approach and a close cooperation between the media industry and academic institutions, is urgently needed.


1 Introduction

The past two decades have been marked by a rapid and profound disruption of the traditional media industry. Today, the Internet is ubiquitous, practically everyone has a smartphone, the cloud reduces up-front investments in large computing infrastructures, processing power still doubles roughly every two years, and an increasing number of our physical assets are connected. These developments have provided the basis for new product and service innovations, which have made it possible to break up and restructure supply and demand, alter value chains, and create new business models [ 17 ].

One of the most visible effects of the changes in the last decades is that media content is now largely consumed through online channels, while technological developments continue to impact how media is distributed and consumed. For instance, the increased digitization of media has opened up a variety of opportunities for collecting and analyzing large amounts of audience and consumption data, which can be used to tailor services and content to the perceived interests of individual consumers. Beyond distribution, new technological developments have opened up opportunities to enhance the media production process, such as through the use of machine learning (ML) to sift through large numbers of documents, the application of analytic tools for audience understanding, the deployment of automated media analysis capabilities, the development of sociotechnical processes to support fact-checking, and so on [ 21 ].

At the same time, a number of new challenges also arise with these developments. Some of these challenges affect the industry, where media organizations have to keep up both with rapid technological developments and with new players that enter the market. However, other challenges are more societally oriented, such as the ways in which new technologies increasingly automate media personalization. One of the most pressing problems in this context is often seen in the increasing opportunities for spreading misinformation and disinformation. Whereas the former is false and misleading information not necessarily meant to deceive, the latter is intentionally created and communicated to deceive people [ 42 ]. While misinformation and disinformation have always been a feature of human society, modern technology has made it much easier for malicious actors anywhere in the world to reach the largest possible audience very quickly, something that would have been impossible in the past [ 5 ].

Overall, these challenges for industry and the potential threats to society create a need for more research in responsible media technology, which we define as technology that aims to maximize the benefits for news organizations and for society while minimizing the risks of potential negative effects. In this paper, we first review societal and industrial challenges in Sect. 2 . Afterwards, we outline a number of important research directions in responsible (AI-based) media technology in Sect. 3 , covering different aspects of the media production and dissemination process. Then, in Sect. 4 , we emphasize why an integrated approach is needed to address today’s challenges, one that requires not only the cooperation of technology experts in academia and media organizations, but also an in-depth understanding of how today’s media industry operates, e.g., with respect to its editorial ethics and processes. In this context, we also introduce a new research center on responsible media technology which we have recently set up in Norway. Norway is a small, wealthy, democratic nation state often described as a Nordic welfare state, with high ICT penetration and comparatively egalitarian media use patterns. With a strong legacy news industry and widely used public service broadcasters, it is a case characterized by a proactive media policy operating at arm’s length, with the main aim of providing media diversity to foster public debate [ 67 ]. In this context, the research center’s main goal is to foster interdisciplinary research and industry-academia cooperation to tackle the key sociotechnical challenges relevant to the new media landscape (Footnote 1).

2 Challenges for media industry and society

On the basis of the recent technological developments, this section introduces and discusses urgent challenges for the media industry and for society. Here, we give particular, but not exclusive, attention to the impact of artificial intelligence technologies.

2.1 Challenges for the media industry

A key consequence of digitalization and the new business models that have become possible is that new competition has emerged for the media industry. There are, for example, new niche players who are able to target specific user demands more accurately, thus threatening to take over positions previously held by traditional media houses and their established editorial processes. For example, finn.no has become the main platform for classified ads in Norway, a sector previously covered primarily by traditional media; Twitter has become a major debate platform, making it possible to bypass the traditional media; Facebook appears to give us far more insight into people’s lives than the personals sections in the newspapers ever did; and Netflix, HBO, Twitch, TikTok, and YouTube challenge the positions owned by the commercial and public broadcasters in the culture and entertainment sectors.

Large platforms, such as Facebook, aggregate content and services more efficiently than the media industry has been able to, capitalizing on both content curation by users and algorithms for predictive content personalization. Ultimately, these large platforms now act as powerful media distribution channels, while traditional media organizations have become content providers to these platforms, little different from just about anyone else with a smartphone.

In this weakened position, traditional media organizations also face new threats. Presented on an equal footing, it is easy for malicious editorial and non-editorial players alike to present misinformation and disinformation as news (“fake news”), which may soak up attention. As a result, it is often left to users to find out for themselves whether or not the news mirrors reality. This hurts responsible media organizations in terms of the attention they garner, while at the same time underscoring credibility as an important currency. To strengthen their position and maintain a comparative advantage in this new competitive landscape of untrustworthy sources, responsible media entities may benefit from fortifying their role as reliable sources of information.

In the context of meeting these challenges, we suggest that advanced media technologies that are deployed in responsible ways may be a meaningful way forward for traditional media organizations. For example, such organizations are in a strong position to understand the needs of their audiences in depth and to then personalize content to these needs and preferences while trying to minimize negative effects and create public benefits [ 70 ]. Likewise, they can leverage technology to scale their ability to fact-check the morass of content circulating on platforms to buttress both their own brand credibility and to increase the overall quality of information people encounter online. In the end, the use of such technologies may not only help to keep up with the competition for attention, but may also help to meet a media organization’s own goals in terms of editorial principles and ethics, including fulfilling any public service mandates.

2.2 Societal challenges

Societies and individuals may suffer in different ways from the negative effects that accompany the recent profound changes in the media landscape. For instance, the proliferation of misinformation and disinformation can threaten core democratic values by promoting political extremism, uninformed debate, and discrimination [ 13 ]. Unfortunately, while there is much work on tackling these issues, e.g., through fact-checking organizations that counter disinformation, more needs to be done before they are effectively addressed [ 9 ].

As viewership and readership of linear TV and physical newspapers drop, users are going online, where they are bombarded with choices. The editorial voices that have for so long decided what is relevant enough to publish and push have been challenged by a combination of algorithms and user choice, creating users who are empowered (or forced) to become their own editors. The world’s most frequented digital media platforms, such as Google, YouTube, Facebook, Twitter, Reddit, Netflix and others, use a variety of algorithms and machine learning in elaborate sociotechnical systems to decide which content is made visible and amplified, and which is suppressed. Beyond understanding how such AI technology impacts public discourse, traditional media also have an interest in making technology foster democratic values, to the benefit of individuals, communities, and society [ 36 ].

There are also a multitude of concerns about the degree to which media organizations, however unintentionally, may contribute to the polarization and radicalization of the public [ 72 ]. For example, an increased focus on AI-based personalization and recommendation technology could lead media organizations to contribute to the formation of so-called “echo chambers” [ 31 ]. These can potentially reduce the degree to which citizens are exposed to serendipitous information or information with which they disagree. In addition, media organizations are often concerned with freedom of speech and with facilitating public debate on important societal issues. As more technologically advanced services are created, care needs to be taken so that large groups of users are not alienated by their complexity.

There is also a policy dimension, as platforms might be held responsible for the views and statements of others. As such, content moderation will be necessary to limit the distribution of harmful content (e.g., inciting, fraudulent, exploitative, hateful, or manipulative content). Incoming EU legislation, such as ‘Article 13’ [ 58 ], increases the burden on media organizations that allow users to upload content: it will require them to make greater efforts to check for copyright violations and hate speech as media is produced, disseminated, and promoted.

3 Research directions

Next, we introduce and discuss five main research areas in responsible media technology, areas we consider as priorities for research and development efforts:

  • Understanding media experiences;
  • User modeling, personalization and engagement;
  • Media content analysis and production;
  • Media content interaction and accessibility;
  • Natural language technologies.

3.1 Understanding media experiences

New developments and technological innovations are changing how news is distributed, consumed, and experienced by users. However, we still lack knowledge about how users will interact with the media of the future, including highly personalized content [ 73 ], bots or other conversational agents [ 33 ], AI-mediated communication [ 35 ], augmented reality (AR) and virtual reality (VR), and so on. Research needs to establish to what extent the behavior and experiences of audiences can be meaningfully monitored, measured, and studied. The challenge remains to develop a more substantial picture and understanding of consumers’ media use across all available media and platforms, both online and offline, in high-choice media environments, and via new modalities and interfaces.

For instance, technological innovations such as news recommender systems [ 40 ] can have both positive and negative impacts on people’s consumption of news, and society in general, and so it is paramount to both understand user experiences and develop designs to shape those experiences to support a well-functioning public sphere.

Research on changing media use has recognized the need to trace and analyze users across media. This is methodologically challenging and must be carefully weighed against privacy concerns, but is key to understanding how people engage with media in their daily lives [ 47 ]. With the datafication of everyday life, increasingly powerful platforms [ 71 ] and intensified competition for attention [ 74 ], media users face a media environment which is increasingly perceived as intrusive and exploitative of their data traces [ 52 ]. This situation gives rise to ambivalence and resignation [ 24 ], but also to immersive and joyful media experiences. A comprehensive foresight analysis of the future of media use emphasizes the need to understand fragmented, hyper-connected and individualized experiences, but also to consider the agency and capabilities of users in the context of potentially intrusive media technologies, and to develop critical and trans-media research that speaks for the interests of users in datafied communicative conditions [ 16 ]. This challenge is crucial to democracy, as media use continues to be central for public connection and for enabling citizens to access information and engage fully in societal discourse [ 51 , 66 ]. Rather than predominantly making sense of media usage through quantitative metrics, such as clicks, time spent, shares or comments, critical attention to problematic representations of datafication [ 49 , 55 ] should be bridged with broader and deeper understandings of media as experience [ 15 ], using a range of mixed-methods approaches. In this context, responsible media innovation must build on knowledge that is attentive to diverse users’ cross-media experiences and to the democratic role of media use.

The main questions in this area include the following: How will users interact with the media of the future? How can we monitor and understand users across media, including groups who leave few data traces, and user experiences beyond metrics? When do users evaluate media (organizations, platforms, etc.) as responsible, and how can studying user experiences feed into responsible innovation? More research is needed to answer these questions, through the design and development of novel qualitative and quantitative approaches and metrics, in combination with existing research methods for understanding audiences.

3.2 User modeling, personalization and engagement

Many media sites nowadays provide content personalization for their online consumers, e.g., additional news stories to read or related videos to watch [ 32 , 39 ]. Such recommender systems, which typically rely both on individual user interests and on collective preference patterns in a community, are commonly designed to make it easier for consumers to discover relevant content. However, the use of recommendation technology may also lead to certain undesired effects, some of which only manifest themselves over time [ 26 ].
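
To make the combination of individual interests and collective preference patterns concrete, the following is a minimal sketch of user-based collaborative filtering on a toy interaction matrix. The data and the simple co-consumption similarity are illustrative assumptions, not a description of any production recommender.

```python
# Minimal user-based collaborative filtering sketch (toy data, illustrative).
# Rows are users, columns are news items; 1 means the user read the item.
import numpy as np

interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
])

def recommend(user: int, k: int = 2) -> list:
    """Rank items the user has not seen by similarity-weighted votes."""
    sims = interactions @ interactions[user]           # co-consumption counts
    sims[user] = 0                                     # ignore the user themself
    scores = (sims @ interactions).astype(float)       # votes from similar users
    scores[interactions[user] == 1] = -np.inf          # mask already-seen items
    return np.argsort(scores)[::-1][:k].tolist()

print(recommend(user=0))  # -> [2, 3]: unseen items favored by similar users
```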

Probably the best known example is the idea of filter bubbles [ 57 ], which may emerge when a system learns about user interests and opinions over time, and then starts to preferentially present content that matches these assumed interests and opinions. In conjunction with user-driven selective exposure [ 64 ], this can lead to self-reinforcing feedback loops which may then result in undesired societal effects, such as opinion polarization. While stark filter bubbles are not typically observed in empirical studies [ 8 ], some more subtle self-reinforcing tendencies have been observed in real systems such as Facebook [ 1 ] and Twitter [ 2 ], raising questions about the long-term implications of even slight shifts in user exposure.
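
The feedback-loop mechanism can be illustrated with a toy simulation. The dynamics below (greedy topic selection, learning only from shown items) are assumptions chosen purely for illustration and do not model any real platform; they merely show how exposure-driven learning can skew what a user gets to see.

```python
# Toy simulation of a self-reinforcing exposure loop (assumed dynamics).
# The system greedily shows the topic with the highest estimated interest
# and learns only from what it shows, so early luck can lock in one topic
# even though the user likes both topics equally.
import random

random.seed(1)
true_pref = {"politics": 0.5, "sports": 0.5}   # identical true preferences
estimate = dict(true_pref)                     # system's initial beliefs
shown = {"politics": 0, "sports": 0}

for _ in range(500):
    topic = max(estimate, key=estimate.get)    # greedy personalized pick
    clicked = random.random() < true_pref[topic]
    shown[topic] += 1
    estimate[topic] += 0.1 * (clicked - estimate[topic])  # update shown topic only

print(shown)  # exposure counts are often heavily skewed toward one topic
```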

Beyond the frequently discussed filter bubbles, echo chambers, as mentioned above, are another potential effect of recommendations that may lead to a polarized environment, where only certain viewpoints, information, and beliefs are shared [ 31 ] and where misinformation diffuses easily [ 18 ]. Such echo chambers are often seen as a phenomenon inherent to social media networks, where homogeneous and segregated communities are common. Recommender systems can reinforce such effects, e.g., by mainly providing users with content that supports the beliefs already existing in a community.

Looking beyond individual communities, recommender systems may also reinforce the promotion of content that is already generally popular, a phenomenon referred to as popularity bias. This phenomenon is well-studied in the e-commerce domain, where it was found that automated recommendations often focus more on already popular items than on promoting items from the “long tail” [ 29 ]. In the media domain, popularity biases may support the dominance of mainstream content in recommendations [ 69 ], thereby making it more difficult for consumers to discover niche or local content, and may furthermore have implications for the quality of the content surfaced [ 2 , 11 , 27 ]. There is also evidence that the algorithms used by dominant content sites, such as YouTube, can drive users towards extreme content, paradoxically also on the basis of popularity biases [ 54 ].
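
As a concrete illustration of how such a bias might be diagnosed, the sketch below compares the average catalog popularity of a recommendation list with the catalog mean. The items and click counts are invented, and real audits would use more refined long-tail metrics.

```python
# Illustrative popularity-bias diagnostic: compare the average popularity of
# recommended items with the catalog average (item names/counts are invented).
from statistics import mean

catalog_popularity = {"a": 900, "b": 850, "c": 40, "d": 25, "e": 10}

def popularity_lift(recommended):
    """Values above 1.0 indicate a skew toward already-popular items."""
    rec_avg = mean(catalog_popularity[i] for i in recommended)
    return rec_avg / mean(catalog_popularity.values())

print(round(popularity_lift(["a", "b"]), 2))  # 2.4: head-of-catalog skew
print(round(popularity_lift(["c", "e"]), 2))  # 0.07: long-tail oriented
```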

A strong focus on already over-represented items is often considered unfair; see, for example, the discussion in the music domain in [ 48 ]. In general, the problem of fairness has received increased attention in recent years in the recommender systems research community. While no consistent definition of fairness is yet established and the perception of fairness can vary across consumers [ 63 ], fairness is often considered as the absence of any bias, prejudice, favoritism, or mistreatment toward individuals, groups, classes, or social categories based on their inherent or acquired characteristics [ 10 ]. Fairness and unfairness are also often related to the problem of (digital) discrimination [ 25 , 28 ], which is commonly characterized as an unfair or unequal treatment of individuals, groups, classes, or social categories according to certain characteristics. Discrimination is another phenomenon that may be reinforced by recommender systems, in particular when they operate on data with inherent biases. In the context of industry challenges, fairness can come up in how national or local media are treated in recommendations on media platforms, with implications for how attention acquired through platforms converts to advertising or subscription revenue (Footnote 2).

Overall, the main questions in this context are the following: To what extent can we effectively and fairly model and predict the behavior of users accessing online media? To what extent can we personalize and engage media users online to keep them efficiently informed, and at the same time do this responsibly? In general, more research is required in the area of responsible recommender systems, i.e., systems designed to avoid reinforcing negative effects over time (such as filter bubbles or popularity biases), e.g., by striving to provide alternative viewpoints on the same issue, thus leading to fair outcomes for the media industry.

3.3 Media content analysis and production

Media content analysis and production are increasingly enabled by advanced AI techniques, which are used intensively for a variety of journalistic tasks, including data mining, comment moderation, news writing, story discovery, fact checking and content verification [ 3 , 21 ]. At the same time, deploying AI responsibly in the domain of news media requires close consideration of issues such as how to avoid bias, how to design hybrid human-AI workflows that reflect domain values, how journalists and technologists can collaborate in interdisciplinary ways, and how future generations of practitioners should be educated to design, develop, and use AI-driven media tools responsibly [ 7 , 20 ].

A crucial task that can be supported by AI technology is news writing. Reasonably straightforward techniques (e.g., the use of text templates filled in with data from rich databases) are already used routinely to produce highly automated stories about topics such as sports, finance, and elections [ 30 , 43 ]. Opportunities also exist for the automated generation of highly personalized content, such as articles that adapt to appeal to a user’s location or demographic background [ 73 ]. A challenge is to avoid bias in the resulting AI-automated or AI-augmented workflows; such bias can result from the selection of informants and other data sources, from the analysis techniques and training materials used, and from the language models that generate the final news text [ 65 ].
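
As an illustration of the straightforward template technique mentioned above, the sketch below fills a fixed text template with structured match data; the template, field names, and data are all invented for illustration, whereas real systems select among many templates based on conditions in the data.

```python
# Minimal template-based story generation (template and data are invented).
TEMPLATE = (
    "{home} beat {away} {home_goals}-{away_goals} on {date}. "
    "{scorer} scored the decisive goal in the {minute}th minute."
)

match = {
    "home": "Brann", "away": "Molde",
    "home_goals": 2, "away_goals": 1,
    "scorer": "A. Hansen", "minute": 87, "date": "Sunday",
}

print(TEMPLATE.format(**match))
# -> Brann beat Molde 2-1 on Sunday. A. Hansen scored the decisive goal
#    in the 87th minute.
```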

There is still quite a large gap between the domain- and story-specific news generation programs currently in use and the more ambitious technologies that can be found in the field of interactive computational creativity, where users collaborate with advanced AI software for text generation [ 37 ]. Newer approaches to controlled text synthesis using large language models in conjunction with knowledge bases are on the horizon [ 76 ], but have not yet been deployed by media organizations. End-user control and the ability to “edit at scale” will be essential to ensure the accuracy, credibility, and feasibility of deploying text synthesized using such techniques in the domain of news.

Another area of news production, referred to as computational news discovery, leverages AI techniques to help orient journalists towards new potential stories in vast data sets [ 22 ]. Such approaches can help journalists surveil the web, identify interesting patterns or documents, and alert them when additional digging may be warranted [ 23 ]. A concern is to detect and defuse biases in what the algorithms consider newsworthy. Related techniques for representing the news angles used by journalists to identify and frame newsworthy content are also under development [ 53 , 56 ]. The goal of this work is to provide computational support to generate interesting new stories that match the news values and angles of interest to a particular media organization. Similar techniques can also be explored to foster news diversity by generating stories that report alternative viewpoints on the same underlying event.

An area of content analysis that has received substantial attention is helping media detect and fight misinformation online. Multimedia forensic techniques, for example, are being used to uncover manipulated images and videos [ 14 ]. Moreover, automated fact checking uses machine learning and information retrieval to identify check-worthy claims, retrieve relevant evidence, classify claims, and explain decisions [ 68 ]. Research has also examined deep learning approaches to “fake news” detection [ 62 , 77 ], semi-supervised machine learning techniques that analyze message streams from social media such as Twitter [ 6 ], and the analysis of propagation patterns that can assist in differentiating fake from genuine news items [ 45 ].
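
The fact-checking stages named above (identifying check-worthy claims, retrieving evidence, classifying claims) can be sketched as a pipeline skeleton. Every function below is a placeholder heuristic standing in for a trained model; the sketch shows only the shape of such a system, not any published method.

```python
# Skeleton of an automated fact-checking pipeline: check-worthiness detection,
# evidence retrieval, verdict classification. All logic here is a placeholder;
# real systems use trained models and document indexes at each stage.

def is_check_worthy(sentence: str) -> bool:
    # Placeholder heuristic: sentences with numbers are more likely checkable.
    return any(ch.isdigit() for ch in sentence)

def retrieve_evidence(claim: str) -> list:
    # Placeholder: a real system queries a document index or knowledge base.
    return ["Official statistics report 3.1% unemployment for 2021."]

def classify(claim: str, evidence: list) -> str:
    # Placeholder: a real system uses a trained textual-entailment model.
    return "refuted" if "3.1%" in evidence[0] and "5%" in claim else "unverified"

claim = "Unemployment rose to 5% in 2021."
if is_check_worthy(claim):
    print(classify(claim, retrieve_evidence(claim)))  # -> refuted
```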

Overall, the problem of computational support for responsible media production is a complex one, requiring an interdisciplinary approach and the integration of different types of technologies. Some of the main open research questions in this context include: How can we computationally produce high-quality media content that can complement traditional news production? How can the biases inherent in AI systems be managed and mitigated when producing this content? And how can we analyze user-generated content accurately to generate more valuable insights?

Correspondingly, research is required in terms of (1) novel computational methods and AI-based models to generate high-quality, accurate content that is aligned with the values and standards of an editorial team, and on (2) novel algorithmic approaches for efficient media content analysis to support verification goals and content generation. In general, the integration of multimedia forensics techniques and fact checking into platforms that are used for content generation represents an important step in that direction.

The ultimate aim is then to develop sociotechnical systems that can effectively leverage AI to help produce newsworthy, interestingly-presented content that is verified, accurate, and generally adheres to the high quality standards of news media. Close collaboration with media production companies is crucial to ensure industry relevance and effective integration and testing of such methods and tools in realistic production settings.

3.4 Media content interaction and accessibility

Tomorrow’s media experiences will combine smart sensors with AI and personal devices to increase engagement and collaboration [ 75 , 79 ]. Enablers such as haptics, Augmented and Virtual Reality (AR/VR), conversational AI, tangible user interfaces, wearable sensors, and eyes-free interaction have made clear progress. Recent work has, for example, studied the use of drones for various types of media production, such as photography, cinematography, and film-making [ 46 ]. By employing a range of device categories, tomorrow’s media experiences will become further specialized and individualized, better targeting individuals’ needs and preferences. Research into adaptation includes responsive user interfaces (UIs), adaptive streaming, content adaptation, and multi-device adaptation [ 80 ]. Adaptation is also needed for collaborative and social use [ 34 ].

Another aspect of responsible media production is ensuring that users are able to understand the content. With the development of vastly more complex services and automated systems, ensuring that no user is left behind represents a major challenge. In a country like Norway, for example, 1 million people (19% of the population) have hearing disabilities, 180,000 (3%) are blind or have severely limited eyesight, 200,000 (4%) have reading disabilities, 870,000 (16%) are over 67 years old, and there are about 790,000 foreign workers. While these categories overlap somewhat, it is clear that content and services designed for highly able young users will under-deliver to a substantial number of people.

To ensure services are usable by all, it is not enough to simply add subtitles or audio descriptions. Cognitive limitations can be due to multitasking or age, but also to unfamiliarity with the content, e.g., when watching an unknown sport or a TV series with a very large cast. It is also important to limit bias in user engagement. For example, interactive participation may be heavily skewed towards younger users if it is non-trivial to locate or interact with a voting service.

As more content is consumed across different media types, the combined service can quickly become confusing or off-putting if it is perceived as inconsistent. As an example, breaking news will often report inconsistent numbers. Even a single content provider might have several news desks, each producing content for its own formats, with some content pieces fresher than others. This makes it difficult to trust the content, and could lead users to prefer less serious platforms that they find more consistent and thus easier to accept.

Research should, therefore, focus on different ways to interact with content and systems, providing personal adaptations of the content to match individual needs and wishes. Partially automating processes to cater to different wishes and needs is of high importance, as is understanding how smart sensors, specialized devices and varied setups can be integrated in the experience in an inclusive and engaging manner.

3.5 Natural language technologies

The automated analysis, generation, and transformation of textual content in different languages nowadays rely on Natural Language Processing (NLP) technologies. Current NLP methods are based almost exclusively on neural machine learning. Hence, the field is data-driven at its core, relying on large, unlabeled samples of raw text, as well as on manually annotated data sets for training supervised ML models. NLP models are increasingly being applied to content within the news domain as well as to user-generated media content [ 44 , 59 , 60 ]. Newsroom analysis of textual content can assist in text classification, extraction of keywords, summarization, event extraction, and other types of automated text processing. Sentiment analysis on user-generated content can be applied to monitor user attitudes, as input to recommender systems, and so on. Text generation models can assist journalists through the automatic or semi-automatic production of news stories. With the widespread use of NLP-based technology in the media sector, there are a number of open challenges that must be addressed to enable responsible media technology in the years to come.
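
As a small illustration of one such newsroom task, the sketch below extracts keywords with a TF-IDF baseline using scikit-learn's TfidfVectorizer. The article texts are invented, and production systems would typically rely on more sophisticated, often neural, models.

```python
# Keyword extraction with a TF-IDF baseline (invented article texts).
from sklearn.feature_extraction.text import TfidfVectorizer

articles = [
    "The council approved the new harbour budget after a long debate.",
    "The football club announced a new stadium and a record budget.",
    "Heavy rain closed the harbour and flooded the stadium car park.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(articles)
terms = vectorizer.get_feature_names_out()

for i, article in enumerate(articles):
    weights = tfidf[i].toarray().ravel()
    top = weights.argsort()[::-1][:3]          # three highest-weighted terms
    print(i, [terms[j] for j in top if weights[j] > 0])
```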

The rapid developments in the field of NLP come with important ethical considerations. Large-scale language models [ 19 ] that are built on an extensive corpus of news texts will inherit many of the same biases as their sources [ 4 ]. An example is gender bias in language models trained on large quantities of text [ 41 ], where biases have been shown to negatively affect downstream tasks [ 61 , 78 ]. In NLP, biases can be found in the data, the data annotation, and the model (pre-trained input representations, fine-tuned models) [ 38 ]. Proper data documentation and curation is key to studying bias and raising awareness of it [ 50 ]. Furthermore, research on how to mitigate bias in NLP constitutes a crucial direction for enabling responsible media technology [ 65 ].
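
To illustrate how such bias can be quantified, the sketch below computes a simple association score: the difference in cosine similarity between a profession word and two gendered anchor words, in the spirit of embedding-association tests. The two-dimensional vectors are invented for illustration; real audits use trained embeddings and much larger word sets [ 41 ].

```python
# Minimal sketch of quantifying gender bias in word embeddings: compare the
# cosine similarity of a profession word to gendered anchors (toy 2-d vectors).
import numpy as np

emb = {
    "he":       np.array([1.0, 0.1]),
    "she":      np.array([0.1, 1.0]),
    "nurse":    np.array([0.2, 0.9]),
    "engineer": np.array([0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word: str) -> float:
    """Positive: closer to 'he'; negative: closer to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

print(round(gender_lean("engineer"), 2))  # positive in this toy example
print(round(gender_lean("nurse"), 2))     # negative in this toy example
```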

Since current NLP technology is almost exclusively data driven, its quality is heavily reliant on the availability of language and domain specific data resources. Access to trusted NLP resources and tools for low-resource languages has become important not only for research but also from a democratic perspective. While NLP is a core activity in many large technology companies, their focus remains mainly on widely used languages, such as English and Chinese. The lack of task-related annotated training data and tools makes it difficult to apply novel algorithmic developments to the processing of news texts in smaller, low-resource languages and scenarios [ 12 ]. To address this challenge, a focus on data collection and annotation is important for a wide range of languages, language varieties and domains.

4 A call for interdisciplinary research

The described challenges cannot be addressed easily within a single scientific discipline or sub-discipline. On the contrary, they require the close collaboration of researchers from computer and information science (e.g., natural language processing, machine learning, recommender systems, human–computer interaction, and information retrieval) with researchers from other fields including, for example, communication sciences and journalism studies. Moreover, there are various interdependencies and cross-cutting aspects between the described research areas. Improved audience understanding, for example, can be seen as a prerequisite or input to personalized recommendation and tool-supported media production, and user modeling and personalization technology can be a basis for the synthesis of individualized and more accessible experiences.

Finally, the described research challenges cannot be reasonably addressed without a significant involvement of the relevant media industry and a corresponding knowledge transfer between academia and media organizations. To develop next-generation responsible media technology, it is of utmost importance to deeply understand the state-of-the-art, the value propositions, and the constraints under which today’s diverse media industry is operating and which goals they pursue. This in particular also includes the consideration of regional or national idiosyncrasies, as well as technologies that work appropriately for languages other than English.

To address the aforementioned issues in a holistic and interdisciplinary way, it is necessary to develop new organizational structures and initiatives that bring together the relevant stakeholders, knowledge, and technical capabilities. This is why MediaFutures, a joint academia-industry research center, was founded at Media City Bergen (Norway’s largest media cluster) in October 2020. The center aims to stimulate intensive collaboration between its partners and to bring together the multi-disciplinary range of expertise required to tackle the long-term challenges that the media industry faces. The center will develop advanced new media technology for responsible and effective media user engagement, media content production, and media content interaction and accessibility, and will research novel methods and metrics for precise audience understanding. It will deliver a variety of research outputs, e.g., in the form of patents, prototypes, papers, and software, and perform significant research training in media technology and innovation to ensure that its outputs have a lasting impact on the media landscape, including through the creation of start-up companies.

The center is a consortium of the most important media players in Norway. The University of Bergen’s Department of Information Science and Media Studies hosts and leads the center. User partners include NRK and TV 2, the two main TV broadcasters in Norway, Schibsted, including Bergens Tidende (BT), and Amedia, the two largest news media houses in Scandinavia/Norway, as well as the world-renowned Norwegian media tech companies Vizrt, Vimond, Highsoft, Fonn Group, and the global tech and media player IBM. The center further collaborates with other national research institutions, including the University of Oslo, the University of Stavanger and NORCE, and with well-regarded international research institutions.

5 Conclusion

Rapid developments in technology have significantly disrupted the media landscape. In particular, the latest advances in AI and machine learning have created new opportunities to improve and extend the range of news coverage and services provided by media organizations. These new technologies, however, also come with a number of yet-unresolved challenges and societal risks, such as biased algorithms, filter bubbles and echo chambers, and the massive and/or targeted spread of misinformation. In this paper, we have highlighted the need for responsible media technology and outlined a number of research directions, which will be addressed in the newly founded MediaFutures research center.

Footnote 1: MediaFutures, https://mediafutures.no.

Footnote 2: https://www.cjr.org/tow_center/apple-news-local-journalism.php.

References

Bakshy, E., Messing, S., Adamic, L.A.: Exposure to ideologically diverse news and opinion on Facebook. Science 348 (6239), 1130–1132 (2015). https://doi.org/10.1126/science.aaa1160

Bandy, J., Diakopoulos, N.: More accounts, fewer links: How algorithmic curation impacts media exposure in Twitter timelines. Proc. ACM Hum.-Comput. Interact. 5 (CSCW1), 1–28 (2021). https://doi.org/10.1145/3449152

Beckett, C.: New powers, new responsibilities: A global survey of journalism and artificial intelligence. (2019). https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/

Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: Can language models be too big? In: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623 (2021). https://doi.org/10.1145/3442188.3445922

Bergstrom, C.T., Bak-Coleman, J.B.: Information gerrymandering in social networks skews collective decision-making. Nature 573, 40–41 (2019). https://doi.org/10.1038/d41586-019-02562-z

Boididou, C., Middleton, S.E., Jin, Z., Papadopoulos, S., Dang-Nguyen, D.T., Boato, G., Kompatsiaris, Y.: Verifying information with multimedia content on twitter. Multimed. Tools Appl. 77 (12), 15545–15571 (2018). https://doi.org/10.1007/s11042-017-5132-9

Broussard, M., Diakopoulos, N., Guzman, A.L., Abebe, R., Dupagne, M., Chuan, C.H.: Artificial intelligence and journalism. J. Mass Commun. Q. 96 (3), 673–695 (2019). https://doi.org/10.1177/1077699019859901

Bruns, A.: Are Filter Bubbles Real? John Wiley and Sons, Amsterdam (2019)

Burel, G., Farrell, T., Mensio, M., Khare, P., Alani, H.: Co-spread of misinformation and fact-checking content during the COVID-19 pandemic. In: International Conference on Social Informatics, pp. 28–42 (2020)

Chen, J., Dong, H., Wang, X., Feng, F., Wang, M., He, X.: Bias and debias in recommender system: A survey and future directions. CoRR (2020). arXiv:2010.03240

Ciampaglia, G.L., Nematzadeh, A., Menczer, F., Flammini, A.: How algorithmic popularity bias hinders or promotes quality. Sci. Rep. 8 (1), 15951 (2018). https://doi.org/10.1038/s41598-018-34203-2

Cieri, C., Maxwell, M., Strassel, S., Tracey, J.: Selection criteria for low resource language programs. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation, vol. LREC’16, pp. 4543–4549. European Language Resources Association (ELRA) (2016)

European Commission: Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Tackling online disinformation: a European approach (2018). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236

Conotter, V., O’Brien, J.F., Farid, H.: Exposing digital forgeries in ballistic motion. IEEE Trans. Inf. Forensics Secur. 7 (2012). https://doi.org/10.1109/TIFS.2011.2165843

Costera Meijer, I.: Journalism, audiences and news experiences. In: Wahl-Jorgensen, K., Hanitzsch, T. (eds.) The Handbook of Journalism Studies. Routledge, New York (2020). https://doi.org/10.4324/9781315167497-25

Das, R., Ytre-Arne, B. (eds.): The Future of Audiences. Palgrave Macmillan, London (2018). https://doi.org/10.1007/978-3-319-75638-7

Dawson, A., Hirt, M., Scanlan, J.: The economic essentials of digital strategy. McKinsey Q. (2016). https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-economicessentials-of-digital-strategy

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H.E., Quattrociocchi, W.: The spreading of misinformation online. Proc. Natl. Acad. Sci. 113 (3), 554–559 (2016)

Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. Proc. Conf. N. Am. Chapter Assoc. Comput. Linguist. (2019)

Diakopoulos, N.: Towards a design orientation on algorithms and automation in news production. Digit. J. 7 (8), 1180–1184 (2019). https://doi.org/10.1080/21670811.2019.1682938

Diakopoulos, N.: Automating the News: How algorithms are Rewriting the Media. Harvard University Press, Cambridge (2019). https://doi.org/10.4159/9780674239302

Diakopoulos, N.: Computational News Discovery: Towards Design Considerations for Editorial Orientation Algorithms in Journalism. Digit. J. 8 (7), 1–23 (2020). https://doi.org/10.1080/21670811.2020.1736946

Diakopoulos, N., Trielli, D., Lee, G.: Towards understanding and supporting journalistic practices using semi-automated news discovery tools. In: Proceedings of the ACM (PACM): Human-Computer Interaction (CSCW), 5 (CSCW2) (2021)

Draper, N.A., Joseph, T.: The corporate cultivation of digital resignation. New Media Soc. 21 (8), 1824–1839 (2019). https://doi.org/10.1177/1461444819833331

Ekstrand, M.D., Burke, R., Diaz, F.: Fairness and discrimination in recommendation and retrieval. Proc. ACM Conf. Recomm. Syst. (2019). https://doi.org/10.1145/3331184.3331380

Elahi, M., Jannach, D., Skjærven, L., Knudsen, E., Sjøvaag, H., Tolonen, K., Holmstad, Ø., Pipkin, I., Throndsen, E., Stenbom, A., Fiskerud, E., Oesch, A., Vredenberg, L., Trattner, C.: Towards responsible media recommendation. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00107-7

Elahi, M., Kholgh, D.K., Kiarostami, M.S., Saghari, S., Rad, S.P., Tkalcic, M.: Investigating the impact of recommender systems on user-based and item-based popularity bias. Inf. Process. Manag. (2021). https://doi.org/10.1016/j.ipm.2021.102655

Ferrer, X., van Nuenen, T., Such, J.M., Coté, M., Criado, N.: Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technol. Soc. Mag. 40 (2), 72–80 (2021). https://doi.org/10.1109/MTS.2021.3056293

Fleder, D., Hosanagar, K.: Blockbuster culture’s next rise or fall: The impact of recommender systems on sales diversity. Manag. Sci. 55, 697–712 (2009). https://doi.org/10.2139/ssrn.955984

Galily, Y.: Artificial intelligence and sports journalism: Is it a sweeping change? Technol. Soc. (2018). https://doi.org/10.1016/j.techsoc.2018.03.001

Ge, Y., Zhao, S., Zhou, H., Pei, C., Sun, F., Ou, W., Zhang, Y.: Understanding echo chambers in e-commerce recommender systems. Proc. Int. ACM SIGIR Conf. Res. Dev. Inf. Retr. (2020). https://doi.org/10.1145/3397271.3401431

Gomez-Uribe, C.A., Hunt, N.: The Netflix recommender system: Algorithms, business value, and innovation. ACM Trans. Manag. Inf. Syst. 6 (4), 13:1–13:19 (2015). https://doi.org/10.1145/2843948

Gómez-Zará, D., Diakopoulos, N.: Characterizing communication patterns between audiences and newsbots. Digit. J. 8 (9), 1–21 (2020). https://doi.org/10.1080/21670811.2020.1816485

Hai, H.T., Dunne, M.P., Campbell, M.A., Gatton, M.L., Nguyen, H.T., Tran, N.T.: Temporal patterns and predictors of bullying roles among adolescents in Vietnam: A school-based cohort study. Psychol. Health Med. 22 , 107–121 (2017). https://doi.org/10.1080/13548506.2016.1271953

Hancock, J.T., Naaman, M., Levy, K.: AI-mediated communication: Definition, research agenda, and ethical considerations. J. Comput.-Mediat. Commun. 25 (1), 89–100 (2020). https://doi.org/10.1093/jcmc/zmz022

Helberger, N.: On the Democratic Role of News Recommenders. Digit. J. 5 (4), 1–20 (2019). https://doi.org/10.1080/21670811.2019.1623700

Hollister, J.R., Gonzalez, A.J.: The campfire storytelling system-automatic creation and modification of a narrative. J. Exp. Theor. Artif. Intell. 31 (1), 15–40 (2019). https://doi.org/10.1080/0952813X.2018.1517829

Hovy, D., Prabhumoye, S.: Five sources of bias in natural language processing. Lang. Linguist. Compass (2021). https://doi.org/10.1111/lnc3.12432

Jannach, D., Jugovac, M.: Measuring the business value of recommender systems. ACM Trans. Manag. Inf. Syst. (2019). https://doi.org/10.1145/3370082

Karimi, M., Jannach, D., Jugovac, M.: News recommender systems-survey and roads ahead. Inf. Process. Manag. 54 (6), 1203–1227 (2018). https://doi.org/10.1016/j.ipm.2018.04.008

Kurita, K., Vyas, N., Pareek, A., Black, A.W., Tsvetkov, Y.: Measuring bias in contextualized word representations. In: Proceedings of the 1st Workshop on Gender Bias in Natural Language Processing, pp. 166–172 (2019)

Lazer, D.M., Baum, M.A., Benkler, Y., Berinsky, A.J., Greenhill, K.M., Menczer, F., Metzger, M.J., Nyhan, B., Pennycook, G., Rothschild, D., et al.: The science of fake news. Science 359 (6380), 1094–1096 (2018). https://doi.org/10.1126/science.aao2998

Leppänen, L., Munezero, M., Granroth-Wilding, M., Toivonen, H.: Data-driven news generation for automated journalism. Proc. Int. Conf. Nat. Lang. Gener. (2017). https://doi.org/10.18653/v1/W17-3528

Li, C., Zhan, G., Li, Z.: News text classification based on improved Bi-LSTM-CNN. Int. Conf. Inf. Technol. Med. Educ. (ITME) (2018). https://doi.org/10.1109/ITME.2018.00199

Liu, Y., Wu, Y.-F.: Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: AAAI Conference on Artificial Intelligence (2018)

Ljungblad, S., Man, Y., Baytaş, M.A., Gamboa, M., Obaid, M., Fjeld, M.: What matters in professional drone pilots’ practice? An interview study to understand the complexity of their work and inform human-drone interaction research. Proc. CHI Conf. Hum. Fact. Comput. Syst. (2021). https://doi.org/10.1145/3411764.3445737

Lomborg, S., Mortensen, M.: Users across media: An introduction. Convergence 23 (4), 343–351 (2017). https://doi.org/10.1177/1354856517700555

Mehrotra, R., McInerney, J., Bouchard, H., Lalmas, M., Diaz, F.: Towards a fair marketplace: Counterfactual evaluation of the trade-off between relevance, fairness and satisfaction in recommendation systems. Proc. ACM Int. Conf. Inf. Knowl. Manag. (2018). https://doi.org/10.1145/3269206.3272027

Milan, S., Trere, E.: Big data from the south(s): Beyond data universalism. Telev. New Media 20 (4), 319–335 (2019). https://doi.org/10.1177/1527476419837739

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D., Gebru, T.: Model cards for model reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229 (2019). https://doi.org/10.1145/3287560.3287596

Moe, H.: Distributed readiness citizenship: A realistic, normative concept for citizens’ public connection. Commun. Theory 30, 205–225 (2020). https://doi.org/10.1093/ct/qtz016

Mollen, A., Dhaenens, F., Das, R., Ytre-Arne, B.: Audiences’ coping practices with intrusive interfaces: Researching audiences in algorithmic, datafied, platform societies. In: Das, R., Ytre-Arne, B. (eds.) The Future of Audiences. Palgrave Macmillan, London (2018). https://doi.org/10.1007/978-3-319-75638-7_3

Motta, E., Daga, E., Opdahl, A.L., Tessem, B.: Analysis and design of computational news angles. IEEE Access 8 (2020). https://doi.org/10.1109/ACCESS.2020.3005513

Nicas, J.: How YouTube drives people to the Internet’s darkest corners. The Wall Street Journal (2018)

Noble, S.U.: Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, New York (2018). https://doi.org/10.2307/j.ctt1pwt9w5

Opdahl, A.L., Tessem, B.: Ontologies for finding journalistic angles. Softw. Syst. Model. 20 (1), 71–87 (2021). https://doi.org/10.1007/s10270-020-00801-w

Pariser, E.: The Filter Bubble: What the Internet Is Hiding from You. The Penguin Group, London (2011)

European Parliament: Polarisation and the use of technology in political campaigns and communication (2019). https://www.europarl.europa.eu/RegData/etudes/STUD/2019/634414/EPRS_STU(2019)634414_EN.pdf

Petroni, F., Raman, N., Nugent, T., Nourbakhsh, A., Panic, Z., Shah, S., Leidner, J.L.: An extensible event extraction system with cross-media event resolution. Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (2018). https://doi.org/10.1145/3219819.3219827

Reuver, M., Fokkens, A., Verberne, S.: No NLP task should be an island: multi-disciplinarity for diversity in news recommender systems. Proc. Conf. N. Am. Chapter Assoc. Comput. Linguist. 2 , 45–55 (2021)

Rudinger, R., Naradowsky, J., Leonard, B., Van Durme, B.: Gender bias in coreference resolution. Proc. Conf. N. Am. Chapter Assoc. Comput. Linguist. (2018). https://doi.org/10.18653/v1/N18-2002

Singhania, S., Fernandez, N., Rao, S.: 3HAN: A deep neural network for fake news detection. Neural Inf. Process. (2017). https://doi.org/10.1007/978-3-319-70096-0_59

Sonboli, N., Smith, J.J., Cabral Berenfus, F., Burke, R., Fiesler, C.: Fairness and transparency in recommendation: The users’ perspective. Proc. ACM Conf. User Model. Adapt. Personal. (2021). https://doi.org/10.1145/3450613.3456835

Stroud, N.: Polarization and partisan selective exposure. J. Commun. (2010). https://doi.org/10.1111/j.1460-2466.2010.01497.x

Sun, T., Gaut, A., Tang, S., Huang, Y., ElSherief, M., Zhao, J., Mirza, D., Belding, E., Chang, K.W., Wang, W.Y.: Mitigating gender bias in natural language processing: Literature review. Proc. Annu. Meet. Assoc. Comput. Linguist. (2019). https://doi.org/10.18653/v1/P19-1159

Swart, J., Peters, C., Broersma, M.: Repositioning news and public connection in everyday life: A user-oriented perspective on inclusiveness, engagement, relevance, and constructiveness. Media Cult. Soc. 39 (6), 902–918 (2017). https://doi.org/10.1177/0163443716679034

Syvertsen, T., Enli, G., Mjøs, O.J., Moe, H.: The Media Welfare State: Nordic Media in the Digital Era. University of Michigan Press, Ann Arbor (2014). https://doi.org/10.3998/nmw.12367206.0001.001

Thorne, J., Vlachos, A.: Automated fact checking: Task formulations, methods and future directions. In: Proceedings of the 27th International Conference on Computational Linguistics, pp 3346–3359 (2018)

Trielli, D., Diakopoulos, N.: Search as news curator: The role of Google in shaping attention to news information. Proc. CHI Conf. Hum. Fact. Comput. Syst. (2019). https://doi.org/10.1145/3290605.3300683

Van den Bulck, H., Moe, H.: Public service media, universality and personalization through algorithms: Mapping strategies and exploring dilemmas. Media Cult. Soc. 40 (6), 875–892 (2018). https://doi.org/10.1177/0163443717734407

Van Dijck, J., Poell, T., de Waal, M.: The Platform Society Public Values in a Connective World. Oxford University Press, Oxford (2018). https://doi.org/10.1093/oso/9780190889760.001.0001

van Stekelenburg, J.: Going all the way: Politicizing, polarizing, and radicalizing identity offline and online. Sociol. Compass 8 (5), 540–555 (2014). https://doi.org/10.1111/soc4.12157

Wang, Y., Diakopoulos, N.: Readers’ perceptions of personalized news articles. In: Proceedings of the Computation + Journalism Symposium (2020)

Webster, J.G.: The Marketplace of Attention: How Audiences Take Shape in a Digital Age. The MIT Press, London (2014). https://doi.org/10.2307/j.ctt9qf9qj

Wozniak, A., Wessler, H., Lück, J.: Who prevails in the visual framing contest about the United Nations climate change conferences? J. Stud. 18 (11), 1433–1452 (2017). https://doi.org/10.1080/1461670X.2015.1131129

Xu, P., Patwary, M., Shoeybi, M., Puri, R., Fung, P., Anandkumar, A., Catanzaro, B.: MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In: Proceedings of EMNLP (2020). https://aclanthology.org/2020.emnlp-main.226.pdf

Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., Choi, Y.: Defending against neural fake news. Adv. Neural Inf. Process. Syst. 32 , 9054–9065 (2019)

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., Chang, K.W.: Gender bias in coreference resolution: Evaluation and debiasing methods. Proc. Conf. N. Am. Chapter Assoc. Comput. Linguist. (2018). https://doi.org/10.18653/v1/N18-2003

Zhu, K., Fjeld, M., Ünlüer, A.: WristOrigami: Exploring foldable design for multi-display smartwatch. Proc. Des. Interact. Syst. Conf. (2018). https://doi.org/10.1145/3196709.3196713

Zorrilla, M., Borch, N., Daoust, F., Erk, A., Florez, J., Lafuente, A.: A web-based distributed architecture for multi-device adaptation in media applications. Pers. Ubiquitous Comput. 19, 803–820 (2015). https://doi.org/10.1007/s00779-015-0864-x

Acknowledgements

This work was supported by industry partners and the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through the centers for Research-based Innovation scheme, project number 309339.

Open access funding provided by University of Bergen (incl Haukeland University Hospital).

Author information

Authors and Affiliations

University of Bergen, Bergen, Norway

Christoph Trattner, Mehdi Elahi, Andreas L. Opdahl, Bjørnar Tessem, Morten Fjeld, Koenraad De Smedt & Hallvard Moe

University of Klagenfurt, Klagenfurt, Austria

Dietmar Jannach

The Open University, Milton Keynes, UK

Enrico Motta

Vrije Universiteit Amsterdam, Amsterdam, Netherlands

Irene Costera Meijer

Northwestern University, Evanston, USA

Nicholas Diakopoulos

NORCE, Bergen, Norway

Njål Borch

University of Oslo, Oslo, Norway

Lilja Øvrelid

Corresponding author

Correspondence to Christoph Trattner .

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Trattner, C., Jannach, D., Motta, E. et al.: Responsible media technology and AI: challenges and research directions. AI Ethics 2, 585–594 (2022). https://doi.org/10.1007/s43681-021-00126-4

Received: 25 November 2021

Accepted: 28 November 2021

Published: 20 December 2021

Issue Date: November 2022

DOI: https://doi.org/10.1007/s43681-021-00126-4


Keywords

  • Media technology
  • Artificial intelligence
  • Find a journal
  • Publish with us
  • Track your research


The Evolution of Traditional Media to New Media

Rhea Vi Agot

Prehistory is the period of human activity between the use of the first stone tools, roughly 3.3 million years ago, and the invention of writing systems, the earliest of which appeared about 5,300 years ago; prehistoric technology is technology that predates recorded history. History is the study of the past using written records, and it is also the record itself; anything prior to the first written accounts of history is prehistoric (meaning "before history"), including earlier technologies. About 2.5 million years before writing was developed, technology began with the earliest hominids, who used stone tools, which they may have used to start fires, hunt, cut food, and bury their dead.

Related Papers

Aïna Centeno Lmr


Mikhail Pushkin

While digital technology has been with us since the early 19th century, its accelerated introduction into mass culture can be roughly attributed to the mass marketing of Personal Computers, the PCs, in the 1980s (Reimer). Since then, in a mere 30 years, every sphere of human life has been subjected to some degree of digitization, be it at the level of using a simple calculator, digital watch or mobile phone, active online "surfing" and blogging, partaking in multiuser networking and gaming, or working in an IT-related field. This situation not only signals an ever-accelerating, largely techno-centric social transformation, but also pinpoints the paradigmatic shift of global culture towards the Information Society, described by Toffler as the society of the Third Wave. "In a Third Wave economy, the central resource – a single word broadly encompassing data, information, images, symbols, culture, ideology, and values – is actionable knowledge" (Dyson, Gilder, Keyworth, Toffler). Even within this earliest phase one can already identify evolutionary stages of development, which can be very generally described by the concept of convergence, conglomeration or even consumption-unification. This convergence is taking place on the technological, economic, social and personal levels. Technologically, one witnesses a combination of miniaturization and multipurpose approaches, leading to ever more advanced mobile personal computers that at the same time possess the traits of communication devices, cameras, flashlights, navigation systems, television sets and entertainment platforms (the smartphone). Economically, in terms of management, control, production, time and location, we witness the convergence of the human worker with software and hardware, which are able to incorporate an ever-growing multitude of functions. A miniature device is able to provide one with the knowledge, and in part the skills, of an engineer, developer, designer and producer, gradually heading toward the substitution of the human workforce, or at least enabling the outsourcing of most projects on the local level (work from home) and the global level (research and development abroad, joint real-time international development). Now the central element in human experience is that of personal and social life, which, notwithstanding the importance of issues of class, labor, means of production, et cetera, is the core of everyone's experience (social networking). At the same time, personal computers have incorporated and largely substituted for diverse forms of entertainment (music, television, cinema, library, sports, sex…), work (coding, writing, researching, banking, calculating…) and communication (audio, video, textual, experimentally even sensory). As a consequence, while digitization did not replace physical production and existence, it certainly assumed an equally important role and altered the ways in which we perceive and evaluate reality (time, space, social norms).

Ray Edmondson

Is the “digital age”, like the industrial revolution, the atomic age and the space age, a historical era that has now ended? Has the digital revolution changed society so pervasively, universally and permanently that the term has become redundant? The effects on sound and audiovisual archiving have been immense, sweeping through every aspect of our work and our thinking. Yet as we adjust to constant technological change, we have little time left to ponder more fundamental values. This paper begins by looking at the digital revolution in retrospect, reviews its mythology, and discusses the sustainability of digital information. It questions whether the analogue/digital dichotomy is as stark as it is often represented. The revolution has been uneven, leaving a “digital divide” between rich and poor nations. The analogue disc, tape and film strip have surrendered their dominance to binary code; yet they have not disappeared, and some, like the vinyl disc, are now resurgent. Is this just reactionary nostalgia, or does it cater to something more fundamental? Does it change our concept of preservation? Will what some are now calling the “post-digital age” become the new paradigm?

TUBA LİVBERBER

tOnik'Z Lubaton

CONTENT STANDARDS: The learner grasps the historical background of media and information; basic theories of media and information systems; and ownership, control and regulation of media. PERFORMANCE STANDARDS: The learner examines the technology and identifies devices in traditional and new media through the different ages: prehistoric, industrial, electronic, and digital.

Theresa Giakoumatou

If we seek to determine the characteristics of the digital era, we will realize that the parameters mainly influenced are the speed and volume of information. A seminal consequence of the Information Society is the acceleration of all processes, a fact that keeps users vigilant and in a permanent process of updating their knowledge, in a constant state of alert. In the digital world, communication solutions that were until now inapplicable are beginning to materialize.

Gabriele Balbi, Nelson Ribeiro, Susan Aasman, Tim van der Heijden

Phillip McIntyre

Technology in Society

RODRIGO CALLIZAYA FLORES

Gabriele Balbi

On the road toward a cultural history of digital interconnectivity, there are many potholes. It is not easy to define digital culture itself and what a cultural history of the digital means. It is unclear what we have in mind with "the digital" (often used in an oversimplified sense to mean simply the Internet), the geographical horizon of this culture is unclear, and, lastly, the main trends are unclear. What is more, too many aspects of the contemporary digital landscape are taken for granted, as if "natural", while they are historically determined. This is why history is so helpful in reconstructing the origins, the changes, and the main trends of digital media: it shows how they came into being and how they were metabolized by different cultures. This discussion aims to clarify these elements or at least discuss them critically, whereby "critically" is meant first of all in the etymological sense of differentiating, and second in the sense of not accepting prima facie what is generally considered obvious. This is only a preliminary contribution that, far from being exhaustive, aims to start a reflection on the role and benefits of cultural history in analyzing the digital.




The disaster of misinformation: a review of research in social media

Sadiq Muhammed T.

Department of Management Studies (DoMS), Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036 India

Saji K. Mathew

The spread of misinformation in social media has become a severe threat to public interests. For example, several incidents of public health concern arose out of social media misinformation during the COVID-19 pandemic. Against the backdrop of the emerging IS research focus on social media and the impact of misinformation during recent events such as COVID-19, the Australian bushfires, and the USA elections, we identified disaster, health, and politics as specific domains for a research review on social media misinformation. Following a systematic review process, we chose 28 articles relevant to the three themes for synthesis. We discuss the characteristics of misinformation in the three domains, the methodologies that have been used by researchers, and the theories used to study misinformation. We adapt an Antecedents-Misinformation-Outcomes (AMIO) framework for integrating key concepts from prior studies. Based on the AMIO framework, we further discuss the inter-relationships of concepts and the strategies to control the spread of misinformation on social media. Ours is one of the early reviews focusing on social media misinformation research, particularly on three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in Data Science and social media and informs strategies to combat social media misinformation.

Introduction

Information disorder in social media

Rumors, misinformation, disinformation, and mal-information are common challenges confronting media of all types. The problem is, however, worse in the case of digital media, especially on social media platforms. Ease of access and use, the speed of information diffusion, and the difficulty of correcting false information make controlling undesirable information a horrid task [1]. Alongside these challenges, social media has also been highly influential in spreading timely and useful information. For example, the recent #BlackLivesMatter movement was enabled by social media, which united like-minded people across the world in solidarity when George Floyd was killed as a result of police brutality; so were the 2011 Arab Spring in the Middle East and the 2017 #MeToo movement against sexual harassment and abuse [2, 3]. Although scholars have addressed information disorder in social media, syntheses of the insights from these studies are rare.

Information that is false or misleading and spreads unintentionally is known as misinformation [4]. Prior research on misinformation in social media has highlighted various characteristics of misinformation and interventions thereof in different contexts. The issue of misinformation has become dominant with the rise of social media, attracting scholarly attention particularly after the 2016 USA Presidential election, when misinformation apparently influenced the election results [5]. The word 'misinformation' was listed as one of the global risks by the World Economic Forum [6]. A related term, popular and often confused with misinformation, is 'disinformation': information that is false or misleading and, unlike misinformation, spreads intentionally. Disinformation campaigns are often seen in a political context, where state actors create them for political gains. In India, during the initial stage of COVID-19, there was reportedly a surge in fake news linking the virus outbreak to a particular religious group. This disinformation gained media attention as it was widely shared on social media platforms. As a result of the targeting, it eventually translated into physical violence and discriminatory treatment against members of the community in some Indian states [7]. 'Rumors' and 'fake news' are similar terms related to misinformation. Rumors are unverified information or statements circulated with uncertainty, and fake news is misinformation distributed in an official news format. Source ambiguity, personal involvement, confirmation bias, and social ties are some of the rumor-causing factors. Yet another related term, mal-information, is accurate information that is used in different contexts to spread hatred or abuse of a person or a particular group. Our review focuses on misinformation that is spread through social media platforms. The words 'rumor' and 'misinformation' are used interchangeably in this paper. Further, we identify factors that cause misinformation based on a systematic review of prior studies.

Ours is one of the early attempts to review social media research on misinformation. This review focuses on the three sensitive domains of disaster, health, and politics, setting three objectives: (a) to analyze previous studies to understand the impact of misinformation on the three domains, (b) to identify theoretical perspectives used to examine the spread of misinformation on social media, and (c) to develop a framework to study key concepts and their inter-relationships emerging from prior studies. We identified these specific areas because the impact of misinformation in them, with regard to both speed of spread and scale of influence, is high and detrimental to the public and governments. To the best of our knowledge, reviews of the literature on social media misinformation themes are relatively scanty. This review contributes to an emerging body of knowledge in Data Science and informs the efforts to combat social media misinformation. Data Science is an interdisciplinary area which incorporates different areas like statistics, management, and sociology to study data and create knowledge out of it [8]. This review will also inform future studies that aim to evaluate and compare patterns of misinformation on sensitive themes of social relevance, such as disaster, health, and politics.

The paper is structured as follows. The first section introduces misinformation in the social media context. In Sect. 2, we provide a brief overview of prior research on misinformation and social media. Section 3 describes the research methodology, which includes details of the literature search and selection process. Section 4 discusses the analysis of the spread of misinformation on social media based on three themes (disaster, health, and politics) and the review findings. This includes the current state of research, theoretical foundations, determinants of misinformation on social media platforms, and strategies to control the spread of misinformation. Section 5 concludes with the implications and limitations of the paper.

Social media and spread of misinformation

Misinformation arises in uncertain contexts when people are confronted with a scarcity of information they need. During unforeseen circumstances, the affected individual or community experiences nervousness or anxiety. Anxiety is one of the primary reasons behind the spread of misinformation. To overcome this tension, people tend to gather information from sources such as mainstream media and official government social media handles to verify the information they have received. When they fail to receive information from official sources, they collect related information from their peer circles or other informal sources, which would help them to control social tension [ 9 ]. Furthermore, in an emergency context, misinformation helps community members to reach a common understanding of the uncertain situation.

The echo chamber of social media

Social media has increasingly grown in power and influence and has acted as a medium to accelerate sociopolitical movements. Network effects enhance participation on social media platforms, which in turn spreads information (good or bad) at a faster pace than traditional media. Furthermore, owing to a massive surge in online content consumption, primarily through social media, both business organizations and political parties have begun to share content that is ambiguous or fake to influence online users and their decisions for financial and political gains [9, 10]. On the other hand, people often approach social media with a hedonic mindset, which reduces their tendency to verify the information they receive [9]. Repeated exposure to content that coincides with pre-existing beliefs increases its believability and shareability. This process, known as the echo-chamber effect [11], is fueled by confirmation bias: the tendency to accept information that reinforces one's pre-existing beliefs and to neglect opposing perspectives and viewpoints.

Platforms' structure and algorithms also play an essential role in spreading misinformation. Tiwana et al. [12] have defined platform architecture as 'a conceptual blueprint that describes how the ecosystem is partitioned into a relatively stable platform and a complementary set of modules that are encouraged to vary, and the design rules binding on both'. The business models of these platforms are based on maximizing user engagement. In the case of Facebook or Twitter, for example, a user's feed is curated according to existing beliefs and preferences. Feeds supply users with content that matches those beliefs, thus contributing to the echo-chamber effect.
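To make the feed mechanism concrete, the toy sketch below (in Python, with entirely hypothetical belief vectors and item labels) ranks feed items by cosine similarity to a user's inferred preferences; belief-confirming content floats to the top, which is the feed-level behavior the echo-chamber argument describes. This is an illustration of the idea, not any platform's actual ranking code.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical two-topic stance space: [stance A, stance B].
user_belief = [0.9, 0.1]  # this user leans strongly toward stance A
items = {
    "pro-A post": [1.0, 0.0],
    "balanced explainer": [0.5, 0.5],
    "pro-B correction": [0.0, 1.0],
}

# Engagement-maximizing feed: rank items by similarity to inferred beliefs.
feed = sorted(items, key=lambda name: cosine(user_belief, items[name]),
              reverse=True)
print(feed)  # ['pro-A post', 'balanced explainer', 'pro-B correction']
```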

Platform architecture makes the transmission and retransmission of misinformation easier [12, 13]. For instance, WhatsApp has a one-touch forward option that enables users to forward messages to multiple users simultaneously. Earlier, a WhatsApp user could forward a message to 250 groups or users at a time; as a measure for controlling the spread of misinformation, this was limited to five in 2019. WhatsApp claimed that this restriction reduced message forwarding globally by 25% [14]. Apart from platform politics, users also have an essential role in creating and distributing misinformation. In a disaster context, people tend to share misinformation based on their subjective feelings [15].

Misinformation has the power to influence the decisions of its audience; it can change a citizen's approach toward a topic or a subject. The anti-vaccine movement on Twitter during the 2015 measles (a highly communicable disease) outbreak in Disneyland, California, serves as a good example. The movement created conspiracy theories and mistrust of the state, which increased the vaccine refusal rate [16]. Misinformation can even influence the election of governments by manipulating citizens' political attitudes, as seen in the 2016 USA and 2017 French elections [17]. Of late, people rely heavily on Twitter and Facebook to collect the latest happenings from mainstream media [18].

Combating misinformation on social media has been a challenging task for governments in several countries. When social media influences elections [17] and health campaigns (such as vaccination), governments and international agencies demand that social media owners take the necessary actions to combat misinformation [13, 15]. Platforms began to regulate bots that were used to spread misinformation. Facebook announced changes to its algorithms to combat misinformation, down-ranking posts flagged by its fact-checkers, which reduces the popularity of the post or page [17]. However, misinformation has become a complicated issue due to the growth of new users and the emergence of new social media platforms. Jang et al. [19] have suggested two approaches other than governmental regulation to control misinformation: literacy and corrective. The literacy approach proposes educating users to increase their cognitive ability to differentiate misinformation from information. The corrective approach provides more fact-checking facilities for users: warnings would be attached to potentially fabricated content based on crowdsourcing. Both approaches have limitations; the literacy approach has attracted criticism as it transfers responsibility for the spread of misinformation to citizens, while the corrective approach will have only a limited impact as the volume of fabricated content escalates [19–21].
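A minimal sketch of the corrective approach's flagging logic is shown below. The threshold, label texts, and function are hypothetical illustrations of the idea of combining crowdsourced reports with fact-checker verdicts; they do not reproduce any platform's actual policy.

```python
from typing import Optional

REPORT_THRESHOLD = 10  # illustrative value, not taken from the reviewed studies

def warning_label(report_count: int, fact_checked_false: bool) -> Optional[str]:
    """Return a warning tag for a post, or None if no tag applies."""
    if fact_checked_false:
        return "Disputed by independent fact-checkers"
    if report_count >= REPORT_THRESHOLD:
        return "Flagged by community reports; verify before sharing"
    return None

print(warning_label(report_count=12, fact_checked_false=False))
```

As discussed later in the review, untagged items may then enjoy an implied truth effect, which is a limitation of exactly this kind of rule.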

An overview of the literature on misinformation reveals that most investigations focus on examining methods to combat it. Social media platforms are still discovering new tools and techniques to mitigate misinformation; this calls for research to understand their strategies.

Review method

This research followed a systematic literature review process. The study employed a structured approach based on Webster's guidelines [22] to identify relevant literature on the spread of misinformation. These guidelines helped in maintaining a quality standard while selecting the literature for review. The initial stage of the study involved exploring research papers from relevant databases to understand the volume and availability of research articles. We extended the literature search to interdisciplinary databases too. We gathered articles from Web of Science, the ACM digital library, the AIS electronic library, EBSCOhost Business Source Premier, ScienceDirect, Scopus, and Springer Link. Apart from this, a manual search was performed in the Information Systems (IS) scholars' basket of journals [23] to ensure we did not miss any articles from these journals. We also preferred articles with a Data Science or Information Systems background. The systematic review process began with a keyword search using predefined keywords (Fig. 2). We identified related terms such as 'misinformation', 'rumors', 'spread', and 'social media', along with their combinations, for the search process. The keyword search covered the title, abstract, and list of keywords. The literature search was conducted in April 2020. Later, we revisited the literature in December 2021 to include the latest publications from 2020 to 2021.

Fig. 2 Systematic literature review process
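For concreteness, the sketch below shows what the keyword-screening step can look like in Python, applied to records exported from the databases as a CSV file. The file name, field names, and exact term lists are assumptions for illustration; the actual databases were queried through their own search interfaces.

```python
import csv

# Search terms and their synonyms, following the review's keyword strategy.
TERMS = ("misinformation", "rumor", "rumour", "fake news", "disinformation")
CONTEXT = ("social media", "twitter", "facebook", "whatsapp", "spread")

def matches(record):
    """True if the title/abstract/keywords contain both a misinformation
    term and a context term (case-insensitive substring match)."""
    text = " ".join(
        record.get(field, "") for field in ("title", "abstract", "keywords")
    ).lower()
    return any(t in text for t in TERMS) and any(c in text for c in CONTEXT)

with open("database_export.csv", newline="", encoding="utf-8") as f:
    candidates = [row for row in csv.DictReader(f) if matches(row)]

print(f"{len(candidates)} candidate articles retained for screening")
```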

It was observed that scholarly discussion about 'misinformation and social media' began to appear in research after 2008. The topic gained more attention in 2010, when Twitter bots were used for spreading fake news on the replacement of a USA Senator [24]. Hate campaigns and fake-follower activities were growing simultaneously during that period. As evident from Fig. 1, which shows the number of articles published between 2005 and 2021 on misinformation in three databases (Scopus, Springer, and EBSCO), academic engagement with misinformation gained further impetus after the 2016 US Presidential election, when social media platforms had apparently influenced the election [20].

Fig. 1 Articles published on misinformation during 2005–2021 (databases: Scopus, Springer, and EBSCO)
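A per-year tally of this kind can be produced in a few lines of Python; the CSV file and its 'database' and 'year' columns are hypothetical stand-ins for the real exports.

```python
import csv
from collections import Counter

with open("database_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Tally articles per (database, year), as plotted in Fig. 1.
per_year = Counter((r["database"], r["year"]) for r in rows)
for (db, year), n in sorted(per_year.items()):
    print(f"{db} {year}: {n}")
```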

As Data Science is an interdisciplinary field, the focus of our literature review goes beyond disciplinary boundaries. In particular, we focused on the three domains of disaster, health, and politics. This thematic focus has two underlying reasons: (a) the impact of misinformation through social media is sporadic and has the most damaging effects in these three domains, and (b) our selection criteria in the systematic review finally resulted in research papers related to these three domains. This review excluded platforms that are designed for professional and business users, such as LinkedIn and Behance. A rationale for the choice of these themes is discussed in the next section.

Inclusion–exclusion criteria

Figure 2 depicts the systematic review process followed in this study. In our preliminary search, 2148 records were retrieved from the databases; all of these articles were gathered onto a spreadsheet, which was manually cross-checked against the journals linked to the articles. Publication during 2005–2021, publication in the English language, publication in peer-reviewed journals, journal rating, and relevance to misinformation were used as the inclusion criteria. We excluded reviews, theses, dissertations, and editorials, as well as articles on misinformation not related to social media. To fetch the best from these articles, we selected articles from top journals, rated above three according to the ABS rating and A*, A, or B according to the ABDC rating. This process, while ensuring the quality of the papers, also effectively narrowed the purview of the study to 643 articles of acceptable quality. We did not perform backward or forward tracking of references. During this process, duplicate records were also identified and removed. Further screening of articles based on title, abstract, and full text (wherever necessary) brought the number down to 207 articles.

Further screening based on the three themes reduced the focus to 89 articles. We conducted a full-text analysis of these 89 articles, excluded those that had not considered misinformation as a central theme, and finally arrived at 28 articles for detailed review (Table 1).
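The screening logic above translates naturally into a small filtering pipeline. The sketch below is an illustration under stated assumptions: the CSV file, its column names, and the ratings lookup are hypothetical, and the numeric thresholds simply mirror the criteria described in the text.

```python
import csv

ABS_MIN = 3                      # illustrative threshold for the ABS rating
ABDC_ACCEPTED = {"A*", "A", "B"}
EXCLUDED_TYPES = {"review", "thesis", "dissertation", "editorial"}

def include(record, ratings):
    """Apply the inclusion-exclusion criteria to one record.
    `ratings` maps journal name -> (abs_rating, abdc_rating)."""
    if not (2005 <= int(record["year"]) <= 2021):
        return False
    if record["language"].strip().lower() != "english":
        return False
    if record["type"].strip().lower() in EXCLUDED_TYPES:
        return False
    abs_rating, abdc_rating = ratings.get(record["journal"], (0, None))
    return abs_rating >= ABS_MIN or abdc_rating in ABDC_ACCEPTED

seen_dois, kept = set(), []
with open("candidate_articles.csv", newline="", encoding="utf-8") as f:
    for record in csv.DictReader(f):
        if record["doi"] in seen_dois:      # drop duplicates across databases
            continue
        seen_dois.add(record["doi"])
        if include(record, ratings={}):     # supply the real ratings table here
            kept.append(record)

print(f"{len(kept)} articles retained after screening")
```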

Table 1 Reviewed articles

The selected studies used a variety of research methods to examine misinformation on social media. Experimentation and text mining of tweets emerged as the most frequent methods: 11 studies used experimental methods and eight used Twitter data analyses. Apart from these, there were three surveys, two mixed-methods studies, two case studies, and one study each using opportunistic sampling and exploratory methods. The literature selected for review includes nine articles on disaster, eight on healthcare, and eleven on politics. We preferred papers based on three major social media platforms: Twitter, Facebook, and WhatsApp. These are the three social media services with the highest transmission rates and most active users [25], and the most likely platforms for misinformation propagation.

Coding procedure

Initially, both authors manually coded the articles individually by reading the full text of each article, and then identified the three themes: disaster, health, and politics. We used an inductive coding approach to derive codes from the data. The intercoder reliability rate between the authors was 82.1%. Disagreements about which theme a few papers fell under were discussed and resolved. Later we used NVivo, a qualitative data analysis software package, to analyze the unstructured data and to encode and categorize the themes in the articles. The codes that emerged from the articles were categorized into sub-themes and later attached to the main themes: disaster, health, and politics. NVivo produced a ranked list of codes based on frequency of occurrence (“Appendix”). An intercoder reliability check on the data was completed by an external research scholar with a different area of expertise to ensure reliability. The coder agreed on 26 of the 28 articles (92.8%), which indicated a high level of intercoder reliability [49]. The independent researcher's disagreement about the codes for two articles was discussed with the authors, and a consensus was reached.
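The agreement figures reported here are simple percent agreement, which can be computed directly; the sketch below uses toy labels rather than the study's actual coding sheet. Chance-corrected statistics such as Cohen's kappa are a common complement to this measure.

```python
# Toy theme labels for five articles; in the study, each of the 28 articles
# was assigned one of the three themes by each coder.
coder_a = ["disaster", "health", "politics", "health", "disaster"]
coder_b = ["disaster", "health", "health", "health", "disaster"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Intercoder agreement: {agreement:.1%}")  # 80.0% for this toy data
```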

We initially reviewed articles separately under the categories of disaster, health, and politics. We first present emergent issues that cut across these themes.

Social media misinformation research

Disaster, health, and politics emerged as the three domains (“Appendix”) where misinformation can cause severe harm, often leading to casualties or even irreversible effects. Mitigating these effects can also demand a substantial financial or human-resource burden, considering the scale of the effects and the risk of spreading negative information to the public at large. All of these areas are sensitive in nature. Further, disaster, health, and politics have gained the attention of researchers and governments as the challenges of misinformation confronting these domains are rampant. Besides their sensitivity, misinformation in these areas has a higher potential to exacerbate existing crises in society. During the 2020 Munich Security Conference, WHO's Director-General noted: “We are not just fighting an epidemic; we are fighting an infodemic”, referring to the fact that COVID-19 misinformation spreads faster than the virus itself [50].

More than 6000 people were hospitalized due to COVID-19-related misinformation in the first three months of 2020 [51]. As COVID-19 vaccination began, one popular myth was that Bill Gates wanted to use vaccines to embed microchips in people to track them, which created vaccine hesitancy among citizens [52]. These reports show the severity of the spread of misinformation and how misinformation can aggravate a public health crisis.

Misinformation during disaster

In the context of emergency situations (unforeseen circumstances), the credibility of social media information has often been questioned [11]. When a crisis occurs, affected communities often experience a lack of the localized information they need to make emergency decisions. This accelerates the spread of misinformation, as people tend to fill this information gap with misinformation or 'improvised news' [9, 24, 25]. The broadcasting power of social media and the re-sharing of misinformation can weaken and slow down rescue operations [24, 25]. As local people have more access to the disaster area, they become the immediate reporters of a crisis through social media; mainstream media comes into the picture only later. However, recent incidents reveal that voluntary reporting of this kind has begun to affect rescue operations negatively, as it often acts as a collective rumor mill [9], which propagates misinformation. During the 2018 floods in the south Indian state of Kerala, a fake video about a leak in the Mullaperiyar Dam created unnecessary panic among citizens, negatively impacting the rescue operations [53]. Information from mainstream media is relatively more reliable, as mainstream media have traditional gatekeepers, such as peer reviewers and editors, who cross-check the information source before publication. Chua et al. [28] found that a major chunk of corrective tweets were retweets from mainstream news media; mainstream media is thus considered a preferred rumor-correction channel that attempts to correct misinformation with the right information.

Characterizing disaster misinformation

Oh et al. [9] studied citizen-driven information processing based on three social crises using rumor theory. The main characteristic of a crisis is the complexity of information processing and sharing [9, 24]. A task is considered complex when characterized by an increase in information load, information diversity, or the rate of information change [54]. Information overload and information dearth are the two grave concerns that interrupt communication between an affected community and a rescue team. Information overload, where too many enquiries and too much fake news distract the response team, slows down its recognition of valid information [9, 27]. According to Balan and Mathew [55], information overload occurs when the volume of information, with features such as complex wording and multiple languages, exceeds what a human being can process. Information dearth, in our context, is the lack of the localized information that is supposed to help the affected community make emergency decisions. When official government communication channels or mainstream media cannot fulfill citizens' needs, they resort to information from their social media peers [9, 27, 29].

In a social crisis context, Tamotsu Shibutani [56] defines rumoring as the collective sharing and exchange of information, which helps community members reach a common understanding of the crisis situation [30]. This mechanism also operates on social media, where both information dearth and information overload arise. Anxiety, information ambiguity (source ambiguity and content ambiguity), personal involvement, and social ties are the rumor-causing variables in a crisis context [9, 27]. In general, anxiety is a negative feeling caused by distress or a stressful situation, and it produces adverse outcomes [57]. In the context of a crisis or emergency, a community may experience anxiety in the absence of reliable information or, in other cases, when confronted with an overload of information, making it difficult to take appropriate decisions. Under such circumstances, people may tend to rely on rumors as a primary source of information. The influence of anxiety is higher during a community crisis than during a business crisis [9]. However, anxiety as an attribute varies with the nature of the platform. For example, Oh et al. [9] found that the Twitter community does not succumb to social pressure in the way the WhatsApp community does [30]. Simon et al. [30] developed a model of rumor retransmission on social media and identified information ambiguity, anxiety, and personal involvement as motives for rumormongering. Attractiveness is another rumor-causing variable: it arises when aesthetically appealing visual aids or designs capture a receiver's attention. Here believability matters more than the content's reliability or the truth of the information received.

The second stage of the spread of misinformation is misinformation retransmission. Apart from the rumor-causing variables reported in Oh et al. [9], Liu et al. [13] found the sender's credibility and attractiveness to be significant variables related to misinformation retransmission. Personal involvement and content ambiguity can also affect misinformation transmission [13]. Abdullah et al. [25] explored retweeters' motives for spreading disaster information on the Twitter platform. Content relevance, early information [27, 31], trustworthiness of the content, emotional influence [30], retweet count, pro-social behavior (altruistic behavior among citizens during the crisis), and the need to inform their circle are the factors that drive users' retweets [25]. Lee et al. [26] also examined the impact of Twitter features on message diffusion based on the 2013 Boston Marathon tragedy. The study reported that during crisis events (especially disasters), a tweet with a shorter reaction time (the time between the crisis and the initial tweet) had a higher impact than other tweets. This shows that, to an extent, misinformation can be controlled if officials communicate at the early stage of a crisis [27]. Liu et al. [13] showed that tweets with hashtags influence the spread of misinformation. Lee et al. [26], by contrast, found that tweets with no hashtags had more influence, due to contextual differences: usage of hashtags for marketing or advertising has a positive impact, while in disaster or emergency situations (as on Twitter), usage of hashtags has a negative impact. Messages without hashtags diffuse more widely than messages with them [26].
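The kind of comparison these studies report can be illustrated with a small computation over tweet records; the records, field names, and numbers below are hypothetical, purely to show the shape of the analysis (mean diffusion for tweets with and without hashtags, alongside reaction time).

```python
# Hypothetical crisis-tweet records; field names are illustrative.
tweets = [
    {"retweets": 210, "has_hashtag": False, "reaction_time_min": 2},
    {"retweets": 120, "has_hashtag": False, "reaction_time_min": 4},
    {"retweets": 45,  "has_hashtag": True,  "reaction_time_min": 30},
    {"retweets": 60,  "has_hashtag": True,  "reaction_time_min": 12},
]

def mean(values):
    """Arithmetic mean, 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

with_tag = mean([t["retweets"] for t in tweets if t["has_hashtag"]])
without_tag = mean([t["retweets"] for t in tweets if not t["has_hashtag"]])
print(f"mean retweets: with hashtag {with_tag:.0f}, without {without_tag:.0f}")
```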

Oh et al. [15] explored the behavioral aspects of social media participants that lead to the retransmission and spread of misinformation. They found that when people believe a threatening piece of misinformation they have received, they are more likely to spread it and to take what they see as necessary safety measures (sometimes even extreme actions). Repetition of the same misinformation from different sources also makes it more believable [28]. However, when people realized that the received information was false, they were less likely to share it with others [13, 26]. The characteristics of the platform used to deliver the misinformation also matter. For instance, the number of likes and shares increases the believability of a social media post [47].

In summary, we found that platform architecture also plays an essential role in the spread and believability of misinformation. While conducting this systematic literature review, we observed that most studies on disaster and misinformation are based on the Twitter platform: six of the nine papers we reviewed in the disaster area were based on it. When a message was delivered in video format, it had a higher impact than audio or text messages. If the message had a religious or cultural narrative, it led to behavioral action (danger control response) [15]. Users were more likely to spread misinformation through WhatsApp than Twitter, and it was difficult to find the source of information shared on WhatsApp [30].

Misinformation related to healthcare

From our review, we found two systematic literature reviews that discuss health-related misinformation on social media. Yang et al. [58] explore the characteristics, impact, and influences of health misinformation on social media. Wang et al. [59] address health misinformation related to vaccines and infectious diseases. Their review shows that health-related misinformation, especially on the MMR vaccine and autism, spreads widely on social media, and that governments have been unable to control it.

The spread of health misinformation is an emerging issue facing public health authorities. Health misinformation can delay proper treatment for patients, adding further casualties in the public health domain [28, 59, 60]. People often tend to believe health-related information that is shared by their peers, and some tend to share their treatment experiences or traditional remedies online. This information could belong to a different context and may not even be accurate [33, 34]. Compared to health-related websites, the language used for health information shared on social media tends to be simple and may omit essential details [35, 37]. Some studies report that conspiracy theories and pseudoscience have escalated casualties [33]. Pseudoscience refers to false claims that pretend to be backed by scientific evidence. The anti-vaccination movement on Twitter is one example of pseudoscience [61]. Here, users may share the information due to a lack of scientific knowledge [35].

Characterizing healthcare misinformation

The attributes that characterize healthcare misinformation are distinctly different from those of other domains. Chua and Banerjee [37] identified the characteristics of health misinformation as dread and wish. Dread is a rumor that creates panic and unpleasant consequences. For example, in the wake of COVID-19, misinformation was widely shared on social media claiming that children 'died on the spot' after the mass COVID-19 vaccination program in Senegal, West Africa [61]. This message created panic among citizens, as it was shared more than 7000 times on Facebook [61]. Wish is the type of rumor that gives hope to the receiver (e.g., a rumor about free medicine distribution) [62]. Dread rumors look more trustworthy and are more likely to go viral; a dread rumor was the cause of violence against a minority group in India during COVID-19 [7]. Chua and Banerjee [32] added pictorial and textual representation as characteristics of health misinformation: a rumor that contains only text is a textual rumor, while a pictorial rumor contains both text and images. However, Chua and Banerjee [32] found that users prefer textual rumors to pictorial ones. Unlike rumors circulated during a natural disaster, health misinformation is long-lasting and can spread across boundaries. Personal involvement (the importance of the information to both sender and receiver), rumor type, and the presence of counter-rumors are some of the variables that can escalate users' trusting and sharing behavior related to rumors [37]. Madraki et al.'s [46] study of COVID-19 misinformation/disinformation reported that COVID-19 misinformation on social media differs significantly across languages, countries, and their cultures and beliefs. The acceptance of social media platforms as well as governmental censorship also play an important role here.

Widespread misinformation can also change collective opinion [29]. Online users' epistemic beliefs can control their sharing decisions. Chua and Banerjee [32] argued that epistemologically naïve users (users who think knowledge can be acquired easily) are the type of users who accelerate the spread of misinformation on platforms. Those who read or share misinformation do not necessarily follow it [37]. Gu and Hong [34] examined health misinformation in the mobile social media context. Mobile internet users differ from large-screen users: mobile phone users may have a stronger emotional attachment to their device, which also motivates them to believe the misinformation they receive. Corrective efforts aimed at large-screen users may not work with mobile phone or small-screen users. Chua and Banerjee [32] suggested that platforms' simplified sharing options also motivate users to share received misinformation before validating it. Shahi et al. [47] found that misinformation is propagated or shared even by verified Twitter handles, which become part of misinformation transmission either by creating it or by endorsing it through likes and shares.

Existing studies rely heavily on data from social networking sites such as Facebook and Twitter, although other platforms also escalate the spread of misinformation. This phenomenon was evident in the wake of COVID-19, when an intense wave of misinformation was reported on WhatsApp, TikTok, and Instagram.

Social media misinformation and politics

There have been several studies on the influence of misinformation on politics across the world [43, 44]. Political misinformation has been predominantly used to influence voters. The 2016 USA Presidential election, the 2017 French election, and the 2019 Indian elections have been reported as examples where misinformation influenced the election process [15, 17, 45]. During the 2016 USA election, the partisan effect was a key challenge, with false information presented as if it were from an authoritative source [39]. Based on a user's prior behavior on the platform, algorithms can manipulate the user's feed [40]. In a political context, fake news can create more harm, as it can influence voters and the public. Although fake news has a short 'life', its consequences may not be short-lived. Verification of fake news takes time, and by the time verification results are shared, the fake news may have achieved its goal [43, 48, 63].

Characterizing misinformation in politics

Confirmation bias plays a dominant role in social media misinformation related to politics. Readers are more likely to read and engage with information that confirms their preexisting beliefs and political affiliations, and to reject information that challenges them [46, 48]. For example, in the 2016 USA election, pro-Trump fake news was accepted by Republicans [19]. Misinformation spreads quickly among people who share similar ideologies [19]. The nature of the interface can also escalate the spread of misinformation. Kim and Dennis [36] investigated the influence of platforms' information presentation formats and reported that social media platforms indirectly push users to accept certain information by presenting it in a way that gives little importance to its source. This presentation is manipulative, as people tend to believe information from a reputed source and are more likely to reject information from a lesser-known source [42].

Pennycook et al. [39] and Garrett and Poulsen [40] argued that warning tags (or flags) on headlines can reduce the spread of misinformation. However, it is not practical to assign warning tags to all misinformation, as it is generated faster than valid information, and the fact-checking process on social media takes time. Hence, people tend to believe that headlines without warning tags are true, and the warning tags then fail to serve their purpose [39]. Furthermore, tagging can lead to misperception [39]: readers tend to assume that all information has been verified and consider untagged false information more accurate. This phenomenon is known as the implied truth effect [39]. In this case, source reputation ratings influence the credibility of the information; readers give less importance to sources with low ratings [17, 50].

Theoretical perspectives of social media misinformation

We identified six theories used in the articles we reviewed in relation to social media misinformation. Rumor theory was used most frequently among the studies chosen for our review, serving as a theoretical foundation in several articles [9, 11, 13, 37, 43]. Oh et al. [9] studied citizen-driven information processing on Twitter using rumor theory in three social crises. This paper identified key variables (such as source ambiguity, personal involvement, and anxiety) that spread misinformation. The authors further examined the acceptance of hate rumors and the aftermath of a community crisis based on the Bangalore mass exodus of 2012. Liu et al. [13] examined the reasons behind the retransmission of messages during disasters using rumor theory. Hazel Kwon and Raghav Rao [43] investigated how internet surveillance by the government impacts citizens' involvement with cyber-rumors during a homeland security threat. Diffusion theory has been used in IS research to discern the adoption of technological innovation; researchers have used it to study retweeting behavior among Twitter users (tweet diffusion) during extreme events [26]. This research investigated information diffusion during extreme events based on four major elements of diffusion: innovation, time, communication channels, and social systems. Kim et al. [36] examined the effect of rating news sources on users' belief in social media articles based on three different rating mechanisms: expert rating, user article rating, and user source rating. Reputation theory was used to show how users discern cognitive biases in expert ratings.

Murungi et al. [38] used rhetorical theory to argue that fact-checkers are less effective against fake news that spreads on social media platforms. The study proposed a different approach, focusing on the underlying belief structures that accept misinformation, and used the theory to identify fake news and socially constructed beliefs in the context of Alabama's senatorial election in 2017. Using the third-person effect as the theoretical ground, the characteristics of rumor corrections on the Twitter platform have also been examined in the context of the death hoax of Singapore's first prime minister, Lee Kuan Yew [28]. This paper explored the motives behind collective rumor and identified the key characteristics of collective rumor correction. Using situational crisis communication theory (SCCT), Paek and Hove [44] examined how governments could effectively respond to risk-related rumors during national-level crises in the context of a food safety rumor. Refuting the rumor, denying it, and attacking the source of the rumor are the three rumor-response strategies suggested by the authors to counter rumor-mongering (Table 2).

Table 2 Theories used in social media misinformation research

Determinants of misinformation in social media platforms

Figure 3 depicts the concepts that emerged from our review, organized using the Antecedents-Misinformation-Outcomes (AMIO) framework, an approach we adapt from Smith et al. [66]. Originally developed to study information privacy, the Antecedents-Privacy Concerns-Outcomes (APCO) framework provided a nomological canvas to present the determinants, mediators, and outcome variables pertaining to information privacy. Following this canvas, we discuss the antecedents of misinformation, the mediators of misinformation, and misinformation outcomes as they emerged from prior studies (Fig. 3).

Fig. 3 Determinants of misinformation

Anxiety, source ambiguity, trustworthiness, content ambiguity, personal involvement, social ties, confirmation bias, attractiveness, illiteracy, ease of sharing options and device attachment emerged as the variables determining misinformation in social media.

Anxiety is the emotional state of the person who sends or receives the information: if a person is anxious about the information received, he or she is more likely to share or spread misinformation [9]. Source ambiguity concerns the origin of the message; when a person is convinced about the source of the information, its perceived trustworthiness increases and the person shares it. Content ambiguity concerns the clarity of the information's content [9, 13]. Personal involvement denotes how important the information is to both sender and receiver [9]. Social ties mean that information shared by a family member or social peers will influence a person to share it [9, 13]. From prior literature, it is understood that confirmation bias is one of the root causes of political misinformation. Research also reveals that users tend to believe and share information received on their personal devices [34]. After receiving misinformation from various sources, users accept it based on their existing beliefs and on social, cognitive, and political factors. Oh et al. [15] observed that during crises people by default tend to believe unverified information, especially when it helps them make sense of the situation. Misinformation has significant effects on individuals and society: loss of lives [9, 15, 28, 30], economic loss [9, 44], loss of health [32, 35], and loss of reputation [38, 43] are the major outcomes of misinformation that emerged from our review.
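To keep the groupings explicit, the AMIO concepts listed above can be restated as plain data; the snippet below simply collects the labels from the text (the 'misinformation' stage entries summarize the belief and retransmission behaviors discussed in the review).

```python
# The AMIO groupings restated as plain data; labels are taken from the text.
AMIO = {
    "antecedents": [
        "anxiety", "source ambiguity", "trustworthiness", "content ambiguity",
        "personal involvement", "social ties", "confirmation bias",
        "attractiveness", "illiteracy", "ease of sharing options",
        "device attachment",
    ],
    "misinformation": ["belief", "sharing", "retransmission"],
    "outcomes": [
        "loss of lives", "economic loss", "loss of health",
        "loss of reputation",
    ],
}

for stage, concepts in AMIO.items():
    print(f"{stage}: {', '.join(concepts)}")
```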

Strategies for controlling the spread of misinformation

Discourse on social media misinformation mitigation has resulted in the prioritization of strategies such as early communication from officials and the use of scientific evidence [9, 35]. When people realize that the received information or message is false, they are less likely to share it with others [15]. Another strategy is rumor refutation: reducing citizens' intention to spread misinformation by providing real information, which reduces their uncertainty and serves to control misinformation [44]. Rumor-correction models for social media platforms also employ algorithms and crowdsourcing [28]. The majority of the papers we reviewed suggested fact-checking by experts, source rating of the received information, attaching warning tags to headlines or entire news items [36], and flagging of content by platform owners [40] as strategies to control the spread of misinformation. Studies on controlling misinformation in the public health context showed that governments could also seek the help of public health professionals to mitigate misinformation [31].

However, the aforementioned strategies have been criticized for several limitations. Most papers mentioned confirmation bias as having a significant impact on misinformation mitigation strategies, especially in the political context, where people tend to believe information that matches their prior beliefs. Garrett and Poulsen [40] argued that during an emergency situation, a misinformation recipient may not be able to characterize the misinformation as true or false; thus, providing an alternative explanation or the real information to users has more effect than providing a fact-checking report. Studies by Garrett and Poulsen [40] and Pennycook et al. [39] reveal a drawback of attaching warning tags to news headlines: once flagging or tagging of information is introduced, information without tags will be considered true or reliable. This creates an implied truth effect. Further, it is not always practical to evaluate all social media posts. Similarly, Kim and Dennis [36] studied fake news flagging and found that fake news flags did not influence users' beliefs; however, they created cognitive dissonance, and users went in search of the truthfulness of the headline. Later, in 2017, Facebook discontinued its fake news flagging service owing to these limitations [45].

Key research gaps and future directions

Although misinformation is a multi-sectoral issue, our systematic review observed that interdisciplinary research on social media misinformation is relatively scarce. Confirmation bias is one of the most significant behavioral problems motivating the spread of misinformation; the lack of research on it reveals scope for future interdisciplinary research across the fields of Data Science, Information Systems, and Psychology in domains such as politics and healthcare. In the disaster context, there is scope to study the behavior of first responders and emergency managers to understand their information exchange patterns with the public. Similarly, future researchers could analyze communication patterns between citizens and frontline workers in the public health context, which may be useful for designing counter-misinformation campaigns and awareness interventions. Since information disorder is a multi-sectoral issue, researchers need to understand misinformation patterns across multiple government departments for coordinated counter-misinformation intervention.

There is a further dearth of studies on institutional responses to control misinformation. To fill this gap, future studies could concentrate on the analysis of governmental and organizational interventions to control misinformation at the level of policies, regulatory mechanisms, and communication strategies. For example, in India there is no specific law against misinformation, but there are provisions in the Information Technology Act (IT Act) and the Disaster Management Act which can control misinformation and disinformation. An example of an awareness intervention is an initiative named 'Satyameva Jayate', launched in the Kannur district of Kerala, India, which focused on sensitizing children at school to spot misinformation [67]. As noted earlier, within research on misinformation in the political context, there is a lack of research on the strategies adopted by the state to counter misinformation. Building on cases like 'Satyameva Jayate' would therefore further contribute to knowledge in this area.

Technology-based strategies adopted by social media platforms to control the spread of misinformation emphasize corrective algorithms, keywords, and hashtags as solutions [32, 37, 43]. However, these corrective measures have their own limitations. Misinformation-correcting algorithms are ineffective if not applied immediately after the misinformation has been created. Related hashtags and keywords are used by researchers to find content shared on social media platforms and retrieve data; however, it may not be possible for researchers to cover all the keywords or hashtags employed by users. Further, algorithms may not decipher content shared in regional languages. Another limitation of platform algorithms is that they recommend and often display content based on user activities and interests, which limits users' access to information from multiple perspectives and thus reinforces their existing beliefs [29]. A reparative measure is to display corrective information as 'related stories' alongside misinformation. However, Facebook's related-stories algorithm activates only when an individual clicks on an outside link, which limits the number of people who will see the corrective information, and this turns out to be a challenge. Future research could investigate the impact of related stories as a corrective measure by analyzing the relation between misinformation and the frequency of related stories posted vis-à-vis real information.

Our review also found a scarcity of research on the spread of misinformation on certain social media platforms, while studies are skewed toward a few others. Of the studies reviewed, 15 articles concentrated on misinformation spread on Twitter and Facebook. Recent news reports make it evident that misinformation and disinformation largely spread through popular messaging platforms such as WhatsApp, Telegram, WeChat, and Line, yet research using data from these platforms is scant. In the Indian context especially, the magnitude of the problems arising from misinformation on WhatsApp is overwhelming [ 68 ]. To address this lacuna, we suggest that future researchers concentrate on investigating the patterns of misinformation spreading on platforms like WhatsApp. Moreover, message diffusion patterns are unique to each social media platform, so it is useful to study misinformation diffusion patterns on different platforms, as sketched below. Future studies could also address the differential roles, patterns, and intensity of the spread of misinformation on various messaging and photo/video-sharing social networking services.
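
One way to make 'diffusion patterns' concrete is to reconstruct forwarding cascades per platform and compare a simple statistic such as maximum cascade depth. The sketch below does this with networkx; the forwarding records are invented for illustration.

```python
import networkx as nx

# Hypothetical forwarding records: (source message, forwarded copy, platform).
forwards = [
    ("m1", "m2", "whatsapp"), ("m2", "m3", "whatsapp"),
    ("m3", "m4", "whatsapp"), ("m1", "m5", "twitter"),
]

for platform in ("whatsapp", "twitter"):
    # Each forward is an edge in an acyclic diffusion cascade.
    g = nx.DiGraph((s, t) for s, t, p in forwards if p == platform)
    print(platform, "max cascade depth:", nx.dag_longest_path_length(g))
```

On real data, comparing such depth (and breadth) distributions across platforms would quantify how differently misinformation travels on each.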

As our review makes evident, most research on misinformation is based on the Euro-American context, and the dominant models proposed for controlling misinformation may have limited applicability to other regions. Moreover, the popularity of social media platforms and their usage patterns vary across the globe as a consequence of regional cultural differences and political regimes, so social media researchers need to take cognizance of the empirical experiences of 'left-over' regions.

To understand the spread of misinformation on social media platforms, we conducted a systematic literature review in three important domains where misinformation is rampant: disaster, health, and politics. We reviewed 28 articles relevant to the themes chosen for the study. This is one of the earliest reviews focusing on social media misinformation research, particularly one based on three sensitive domains. We have discussed how misinformation spreads in the three sectors, the methodologies and theoretical perspectives that researchers have used, the Antecedents-Misinformation-Outcomes (AMIO) framework for understanding key concepts and their inter-relationships, and strategies to control the spread of misinformation.

Our review also identified major gaps in IS research on misinformation in social media, including the need for methodological innovations beyond the experimental methods that have been widely used. This study has limitations that we acknowledge. We may not have identified all relevant papers on the spread of misinformation on social media, both because some authors may have used different keywords and because of our strict inclusion and exclusion criteria. There may also be relevant publications in languages other than English that this review does not cover. Our focus on three domains further restricted the number of papers we reviewed.

Author contributions

TMS: Conceptualization, Methodology, Investigation, Writing—Original Draft. SKM: Writing—Review & Editing, Supervision.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declarations

On behalf of both authors, the corresponding author states that there is no conflict of interest.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Sadiq Muhammed T, Email: [email protected] .

Saji K. Mathew, Email: saji@iitm.ac.in.


An Interactive Agent Foundation Model

Abstract: The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents across a wide range of domains, datasets, and tasks. Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction, enabling a versatile and adaptable AI framework. We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare. Our model demonstrates its ability to generate meaningful and contextually relevant outputs in each area. The strength of our approach lies in its generality, leveraging a variety of data sources such as robotics sequences, gameplay data, large-scale video datasets, and textual information for effective multimodal and multi-task learning. Our approach provides a promising avenue for developing generalist, action-taking, multimodal systems.
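
A minimal sketch, not the authors' code, of what such a unified multi-task objective could look like: one shared backbone trained jointly on masked visual reconstruction, language modeling, and next-action prediction. The layer sizes, the single-linear "backbone", and the equal loss weighting are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim, vocab, n_actions = 4, 32, 100, 8

backbone = torch.nn.Linear(dim, dim)        # stand-in for a shared transformer
pix_head = torch.nn.Linear(dim, dim)        # masked auto-encoder (reconstruction) head
lm_head = torch.nn.Linear(dim, vocab)       # language-modeling head
act_head = torch.nn.Linear(dim, n_actions)  # next-action prediction head

feats = backbone(torch.randn(batch, dim))   # one shared representation

# Dummy targets; real training would use masked pixels, next tokens, actions.
loss_mae = F.mse_loss(pix_head(feats), torch.randn(batch, dim))
loss_lm = F.cross_entropy(lm_head(feats), torch.randint(vocab, (batch,)))
loss_act = F.cross_entropy(act_head(feats), torch.randint(n_actions, (batch,)))

# Equal weighting is an assumption; the paper does not specify the balance.
loss = loss_mae + loss_lm + loss_act
loss.backward()
print(float(loss))
```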


How marketers can go beyond traditional ways of consumer segmentation


BestMediaInfo Bureau

In a time when obtaining a comprehensive omnichannel view of consumers appears challenging, it's advantageous to transcend traditional segmentation methods reliant solely on factors such as affluence, propensity to buy, and demographics.

Advertisers and market research experts proposed at a panel discussion, held alongside the launch of the Market Research Society of India (MRSI)'s new Socio-economic Classification System, 'ISEC', that segmenting consumers based on their behaviours, needs, and life stages offers a more effective approach over and above traditional means of segmentation.


Citing examples of how Dabur goes about segmenting for its healthcare and pharma range of products, Rajiv Dubey, Head of Media at the FMCG company, said, “Health-conscious individuals tend to gravitate towards healthier activities, making them prime candidates for health supplements consumption. Along with that, our research partners have provided invaluable data, including states with high disease indices and low disease indices, aiding us in segmenting the audience more effectively.”

He further emphasised that brands can effectively target consumers based on their current life stage, as product preferences tend to shift accordingly. For instance, Dubey highlighted that consumers often purchase baby products upon entering parenthood.

Along with Dubey, the other panellists were Muralidhar Salvateeswaran, Chief Operations Officer at Insights APAC, Kantar; Vivek Malhotra, Group CMO of India Today Group; Vinay Virwani, Head of Consumer Insights at Dabur India; Amit Adarkar, CEO, IPSOS, India and Jasmine Sachdeva, Managing Partner at Wavemaker India. The session was moderated by Shuvadip Banerjee, Chief Digital Marketing Officer at ITC and General Secretary of MRSI.


In addition to segmenting the audience according to behaviour, Salvateeswaran proposed that brands could further refine their segmentation by considering consumer needs and layering them with additional variables such as lifestyle.

“Brands must look at new spaces of growth, especially the categories which have reached a saturation in terms of penetration,” he added.

Wavemaker's Sachdeva echoed Salvateeswaran's sentiments, emphasising the importance of identifying the source of growth once a brand has completed segmentation based on demographics and psychographics: determining whether growth will stem from increased consumption or from deeper penetration.

“That's the brilliance of digital data today, which helps brands segment the consumers better. For example, open data sources, in which brands can link their API which helps them become more deterministic of the need gap. Open API layered with static and digital first-party data, helps segment better today,” she added.



Giving an example of need-based segmentation done for Luminous Inverters, Sachdeva explained, “Wavemaker used an API, which helped them determine areas with power cuts. Even in the scenario of power cuts, mobiles function. Through mobiles, we were able to send across our messaging in the power cut areas.”
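
A minimal sketch of that workflow, with a stub standing in for the outage API (the real endpoint, data format, and targeting threshold are not public):

```python
from datetime import date

def fetch_power_outages(day: date) -> list[dict]:
    """Stand-in for an open outage API; returns area-level outage records."""
    return [
        {"pincode": "110001", "outage_hours": 6.5},
        {"pincode": "560001", "outage_hours": 0.5},
        {"pincode": "400001", "outage_hours": 4.0},
    ]

# Target mobile messaging only in areas where outages are long enough to
# matter for an inverter purchase (the threshold is an illustrative guess).
THRESHOLD_HOURS = 3.0
targets = [r["pincode"] for r in fetch_power_outages(date.today())
           if r["outage_hours"] >= THRESHOLD_HOURS]
print(targets)  # ['110001', '400001']
```

The design point is the layering Sachdeva describes: an open signal (outages) joined with first-party data (areas, devices) to make the need gap deterministic rather than inferred.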


Why Paramount's problems should worry the rest of the media giants

Paramount is in trouble: The one-time media giant's ad sales are plummeting, and so is its stock price. Would-be buyers are kicking the tires, but no one seems in a hurry to make a deal — the price will surely keep going down. This week, the day after the company broadcast the Super Bowl to a record-setting number of viewers, it announced companywide layoffs.

But why should you, a person who doesn't work at Paramount, care about the future of the company?

Because, as Lucas Shaw explains in a new Bloomberg Businessweek story, it's a proxy for traditional media in general:

The company's troubles are also a warning sign for Hollywood, which looked to avoid the fate of newspapers, magazines and music—industries ravaged by the internet. But as media companies struggle to transition from cable to streaming, they're surrendering the next generation of TV viewers to short-form video apps and services that tech giants in Silicon Valley and China own. So far, Hollywood has relied on restructuring and layoffs rather than innovation and growth, leading to questions about whether we're in the last great age of TV.

As Shaw notes in his piece, Paramount's problems are both particularly acute and self-inflicted: Compared to the likes of Disney and Warner Bros Discovery, it has less room for error because it is less diversified. And under the leadership of longtime owner Sumner Redstone, the company stubbornly refused to accept the fact that its young audience was particularly likely to leave for digital alternatives; more recently, under the leadership of Redstone's daughter, Shari, it has missed opportunities to sell all or parts of the company at prices it has no hope of getting again.

But even under the best-case scenario, it would be hard for Paramount or any other traditional media company to survive the transition to streaming and digital. Which is why two of the biggest traditional giants — Time Warner and Rupert Murdoch's Fox — took the opportunity to sell most of themselves in 2016 and 2017.



Pics and it didn't happen —

OpenAI collapses media reality with Sora, a photorealistic AI video generator

Hello, cultural singularity—soon, every video you see online could be completely fake.

Benj Edwards - Feb 16, 2024 5:23 pm UTC

Snapshots from three videos generated using OpenAI's Sora.

On Thursday, OpenAI announced Sora, a text-to-video AI model that can generate 60-second-long photorealistic HD video from written descriptions. While it's only a research preview that we have not tested, it reportedly creates synthetic video (but not audio yet) at a fidelity and consistency greater than any text-to-video model available at the moment. It's also freaking people out.


"It was nice knowing you all. Please tell your grandchildren about my videos and the lengths we went to to actually record them," wrote Wall Street Journal tech reporter Joanna Stern on X.

"This could be the 'holy shit' moment of AI," wrote Tom Warren of The Verge.

"Every single one of these videos is AI-generated, and if this doesn't concern you at least a little bit, nothing will," tweeted YouTube tech journalist Marques Brownlee.

For future reference—since this type of panic will some day appear ridiculous—there's a generation of people who grew up believing that photorealistic video must be created by cameras. When video was faked (say, for Hollywood films), it took a lot of time, money, and effort to do so, and the results weren't perfect. That gave people a baseline level of comfort that what they were seeing remotely was likely to be true, or at least representative of some kind of underlying truth. Even when the kid jumped over the lava, there was at least a kid and a room.

The prompt that generated the video above: "A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors."

Technology like Sora pulls the rug out from under that kind of media frame of reference. Very soon, every photorealistic video you see online could be 100 percent false in every way. Moreover, every historical video you see could also be false. How we confront that as a society and work around it while maintaining trust in remote communications is far beyond the scope of this article, but I tried my hand at offering some solutions back in 2020, when all of the tech we're seeing now seemed like a distant fantasy to most people.

In that piece, I called the moment that truth and fiction in media become indistinguishable the "cultural singularity." It appears that OpenAI is on track to bring that prediction to pass a bit sooner than we expected.

Prompt: Reflections in the window of a train traveling through the Tokyo suburbs.

OpenAI has found that, like other AI models that use the transformer architecture, Sora scales with available compute. Given far more powerful computers behind the scenes, AI video fidelity could improve considerably over time. In other words, this is the "worst" that AI-generated video is ever going to look. There's no synchronized sound yet, but that might be solved in future models.

How (we think) they pulled it off

AI video synthesis has progressed by leaps and bounds over the past two years. We first covered text-to-video models in September 2022 with Meta's Make-A-Video. A month later, Google showed off Imagen Video. And just 11 months ago, an AI-generated version of Will Smith eating spaghetti went viral. In May of last year, what was previously considered to be the front-runner in the text-to-video space, Runway Gen-2, helped craft a fake beer commercial full of twisted monstrosities, generated in two-second increments. In earlier video-generation models, people pop in and out of reality with ease, limbs flow together like pasta, and physics doesn't seem to matter.

Sora (which means "sky" in Japanese) appears to be something altogether different. It's high-resolution (1920x1080), can generate video with temporal consistency (maintaining the same subject over time) that lasts up to 60 seconds, and appears to follow text prompts with a great deal of fidelity. So, how did OpenAI pull it off?

OpenAI doesn't usually share insider technical details with the press, so we're left to speculate based on theories from experts and information given to the public.

OpenAI says that Sora is a diffusion model, much like DALL-E 3 and Stable Diffusion. It generates a video by starting off with noise and "gradually transforms it by removing the noise over many steps," the company explains. It "recognizes" objects and concepts listed in the written prompt and pulls them out of the noise, so to speak, until a coherent series of video frames emerges.
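
As a rough illustration of that loop (a sketch, not Sora's actual sampler), the snippet below starts from pure noise and repeatedly subtracts a predicted noise component; the toy predict_noise function stands in for the learned, text-conditioned model.

```python
import torch

torch.manual_seed(0)
frames, height, width = 16, 8, 8        # tiny stand-in for a video tensor
x = torch.randn(frames, height, width)  # pure noise to begin with

def predict_noise(x_t: torch.Tensor, step: int) -> torch.Tensor:
    """Stand-in for the learned, text-conditioned noise predictor."""
    return 0.1 * x_t  # a real model would condition on the prompt and step

for step in reversed(range(50)):
    x = x - predict_noise(x, step)      # remove a little noise at each step

# After many steps a real model yields coherent frames; here x just shrinks.
print(x.shape, float(x.abs().mean()))
```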

Sora is capable of generating videos all at once from a text prompt, extending existing videos, or generating videos from still images. It achieves temporal consistency by giving the model "foresight" of many frames at once, as OpenAI calls it, solving the problem of ensuring a generated subject remains the same even if it falls out of view temporarily.

OpenAI represents video as collections of smaller groups of data called "patches," which the company says are similar to tokens (fragments of a word) in GPT-4. "By unifying how we represent data, we can train diffusion transformers on a wider range of visual data than was possible before, spanning different durations, resolutions, and aspect ratios," the company writes.
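
A minimal sketch of what such patchification could look like for a raw video tensor; the patch sizes and layout below are illustrative assumptions, since OpenAI has not published Sora's internals.

```python
import torch

video = torch.randn(3, 16, 64, 64)  # (channels, frames, height, width)
pt, ph, pw = 2, 8, 8                # patch extent in time, height, width

# Carve the video into non-overlapping spacetime blocks ...
patches = (video.unfold(1, pt, pt)   # split the time axis
                .unfold(2, ph, ph)   # split the height axis
                .unfold(3, pw, pw))  # split the width axis
# ... then flatten each block into one "token"-like vector.
patches = patches.permute(1, 2, 3, 0, 4, 5, 6).reshape(-1, 3 * pt * ph * pw)
print(patches.shape)  # torch.Size([512, 384]): 512 patch tokens of size 384
```

Because every clip, whatever its duration or aspect ratio, reduces to such a bag of uniform patch tokens, one transformer can train on all of them.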

An important tool in OpenAI's bag of tricks is that its use of AI models is compounding. Earlier models are helping to create more complex ones. Sora follows prompts well because, like DALL-E 3, it utilizes synthetic captions that describe scenes in the training data generated by another AI model like GPT-4V. And the company is not stopping here. "Sora serves as a foundation for models that can understand and simulate the real world," OpenAI writes, "a capability we believe will be an important milestone for achieving AGI."

One question on many people's minds is what data OpenAI used to train Sora. OpenAI has not revealed its dataset, but based on what people are seeing in the results, it's possible OpenAI is using synthetic video data generated in a video game engine in addition to sources of real video (say, scraped from YouTube or licensed from stock video libraries). Nvidia's Dr. Jim Fan, who is a specialist in training AI with synthetic data, wrote on X, "I won't be surprised if Sora is trained on lots of synthetic data using Unreal Engine 5. It has to be!" Until confirmed by OpenAI, however, that's just speculation.


Why Data Breaches Spiked in 2023

  • Stuart Madnick


And what companies can do to better secure users’ personal information.

In spite of recent efforts to beef up cybersecurity, data breaches — in which hackers steal personal data — continue to increase year-on-year: there was a 20% increase in data breaches from 2022 to 2023. There are three primary reasons behind this increased theft of personal data: (1) cloud misconfiguration, (2) new types of ransomware attacks, and (3) increased exploitation of vendor systems. Fortunately, there are ways to reduce the impact of each of these factors.

For many years, organizations have struggled to protect themselves from cyberattacks: companies, universities, and government agencies have expended enormous amounts of resources to secure themselves. But in spite of those efforts, data breaches — in which hackers steal personal data — continue to increase year-on-year: there was a 20% increase in data breaches from 2022 to 2023. Some of the trends around this uptick are disturbing. For example, globally, there were twice the number of victims in 2023 compared to 2022, and in the Middle East, ransomware gang activity increased by 77% in that same timeframe.
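
Of the three factors named above, cloud misconfiguration is the most directly auditable. As a minimal sketch (an illustration under stated assumptions, not a method from the article), the snippet below uses the standard boto3 client to flag S3 buckets whose public-access block is missing or incomplete; it requires AWS credentials to run.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        flags = cfg["PublicAccessBlockConfiguration"]
        if not all(flags.values()):
            print(f"{name}: public access only partially blocked: {flags}")
    except ClientError:
        # No configuration at all: the classic misconfiguration.
        print(f"{name}: no public access block configured")
```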

  • Stuart Madnick is the John Norris Maguire (1960) Professor of Information Technologies in the MIT Sloan School of Management, Professor of Engineering Systems in the MIT School of Engineering, and Director of Cybersecurity at MIT Sloan (CAMS): the Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity. He has been active in the cybersecurity field since co-authoring the book Computer Security in 1979.

