• Trending Now
  • Data Structures & Algorithms
  • Foundational Courses
  • Data Science
  • Practice Problem
  • Machine Learning
  • System Design
  • DevOps Tutorial
  • Web Development
  • Web Browser

Related Articles

  • Top 10 Tools For Monitoring as a Service (MaaS)
  • How Emerging Economies and Different Industries are Getting Benefit From Cloud Computing?
  • How to Make a Career in Cloud Computing?
  • Top 10 Cloud Services For Database-as-a-Service
  • What is GCP (Google Cloud Platform) Security?
  • Cloud Computing Platforms and Technologies
  • 10 Best Cloud Computing Project Ideas
  • Why Cloud Computing is the Best Choice for Small Businesses?
  • Top 10 Most Valuable Cloud Computing Certifications
  • What is Google Cloud Platform (GCP)?
  • Serverless Computing
  • Skills Required to Become a Cloud Engineer
  • How Cloud Storage Actually Works !!
  • Advantages and Disadvantages of Cloud Security
  • How to decide the right cloud for your requirements?
  • Portable applications in Cloud and their barriers
  • Economics of Cloud Computing
  • Top 10 Cloud Platform Service Providers in 2024
  • AWS vs Azure - Which One You Should Choose?

Top 10 Cloud Computing Research Topics in 2020

Cloud computing has suddenly seen a spike in employment opportunities around the globe with tech giants like Amazon, Google, and Microsoft hiring people for their cloud infrastructure. Before the onset of cloud computing, companies and businesses had to set up their own data centers, allocate resources and other IT professionals thereby increasing the cost. The rapid development of the cloud has led to more flexibility, cost-cutting, and scalability. 

Top-10-Cloud-Computing-Research-Topics-in-2020

The Cloud Computing market its an all-time high with the current market size at USD 371.4 billion and is expected to grow up to USD 832.1 billion by 2025! It’s quickly evolving and gradually realizing its business value along with attracting more and more researchers, scholars, computer scientists, and practitioners. Cloud computing is not a single topic but a composition of various techniques which together constitute the cloud. Below are 10 the most demanded research topics in the field of cloud computing:

1. Big Data

Big data refers to the large amounts of data produced by various programs in a very short duration of time. It is quite cumbersome to store such huge and voluminous amounts of data in company-run data centers. Also, gaining insights from this data becomes a tedious task and takes a lot of time to run and provide results, therefore cloud is the best option. All the data can be pushed onto the cloud without the need for physical storage devices that are to be managed and secured. Also, some popular public clouds provide comprehensive big data platforms to turn data into actionable insights. 

DevOps is an amalgamation of two terms, Development and Operations. It has led to Continuous Delivery, Integration, and Deployment and therefore reducing boundaries between the development team and the operations team. Heavy applications and software need elaborate and complex tech stacks that demand extensive labor to develop and configure which can easily be eliminated by cloud computing. It offers a wide range of tools and technologies to build, test, and deploy applications with a few minutes and a single click. They can be customized as per the client requirements and can be discarded when not in use hence making the process seamless and cost-efficient for development teams.

3. Cloud Cryptography

Data in the cloud is needed to be protected and secured from foreign attacks and breaches. To accomplish this, cryptography in the cloud is a widely used technique to secure data present in the cloud. It allows users and clients to easily and reliably access the shared cloud services since all the data is secured using either the encryption techniques or by using the concept of the private key. It can make the plain text unreadable and limits the view of the data being transferred. Best cloud cryptographic security techniques are the ones that do not compromise the speed of data transfer and provide security without delaying the exchange of sensitive data. 

4. Cloud Load Balancing

It refers to splitting and distributing the incoming load to the server from various sources. It permits companies and organizations to govern and supervise workload demands or application demands by redistributing, reallocating, and administering resources between different computers, networks, or servers. Cloud load balancing encompasses holding the circulation of traffic and demands that exist over the Internet. This reduces the problem of sudden outages, results in an improvement in overall performance, has rare chances of server crashes, and also provides an advanced level of security. Cloud-based servers farms can accomplish more precise scalability and accessibility using the server load balancing mechanism. Due to this, the workload demands can be easily distributed and controlled.

5. Mobile Cloud Computing

It is a mixture of cloud computing, mobile computing, and wireless network to provide services such as seamless and abundant computational resources to mobile users, network operators, and cloud computing professionals. The handheld device is the console and all the processing and data storage takes place outside the physical mobile device. Some advantages of using mobile cloud computing are that there is no need for costly hardware, battery life is longer, extended data storage capacity and processing power improved synchronization of data and high availability due to “store in one place, accessible from anywhere”. The integration and security aspects are taken care of by the backend that enables support to an abundance of access methods. 

6. Green Cloud Computing

The major challenge in the cloud is the utilization of energy-efficient and hence develop economically friendly cloud computing solutions. Data centers that include servers, cables, air conditioners, networks, etc. in large numbers consume a lot of power and release enormous quantities of Carbon Dioxide in the atmosphere. Green Cloud Computing focuses on making virtual data centers and servers to be more environmentally friendly and energy-efficient. Cloud resources often consume so much power and energy leading to a shortage of energy and affecting the global climate. Green cloud computing provides solutions to make such resources more energy efficient and to reduce operational costs. This pivots on power management, virtualization of servers and data centers, recycling vast e-waste, and environmental sustainability. 

7. Edge Computing

It is the advancement and a much more efficient form of Cloud computing with the idea that the data is processed nearer to the source. Edge Computing states that all of the computation will be carried out at the edge of the network itself rather than on a centrally managed platform or the data warehouses. Edge computing distributes various data processing techniques and mechanisms across different positions. This makes the data deliverable to the nearest node and the processing at the edge. This also increases the security of the data since it is closer to the source and eliminates late response time and latency without affecting productivity.

8. Containerization

Containerization in cloud computing is a procedure to obtain operating system virtualization. The user can work with a program and its dependencies utilizing remote resource procedures. The container in cloud computing is used to construct blocks, which aid in producing operational effectiveness, version control, developer productivity, and environmental stability. The infrastructure is upgraded since it provides additional control over the granular activities over the resources. The usage of containers in online services assists storage with cloud computing data security, elasticity, and availability. Containers provide certain advantages such as a steady runtime environment, the ability to run virtually anywhere, and the low overhead compared to virtual machines. 

9. Cloud Deployment Model

There are four main cloud deployment models namely public cloud, private cloud, hybrid cloud, and community cloud. Each deployment model is defined as per the location of the infrastructure. The public cloud allows systems and services to be easily accessible to the general public. Public cloud could also be less reliable since it is open to everyone e.g. Email. A private cloud allows systems and services to be accessible inside an organization with no access to outsiders. It offers better security due to its access restrictions. Hybrid cloud is a mixture of private and public clouds with the critical activities being performed using private cloud and non-critical activities being performed using the public cloud. Community cloud allows system and services to be accessible by a group of an organization.

10. Cloud Security

Since the number of companies and organizations using cloud computing is increasing at a rapid rate, the security of the cloud is a major concern. Cloud computing security detects and addresses every physical and logical security issue that comes across all the varied service models of code, platform, and infrastructure. It collectively addresses these services, however, these services are delivered in units, that is, the public, private, or hybrid delivery model. Security in the cloud protects the data from any leakage or outflow, theft, calamity, and removal. With the help of tokenization, Virtual Private Networks, and firewalls data can be secured. 

Please Login to comment...

author

  • Cloud-Computing

Improve your Coding Skills with Practice

 alt=

What kind of Experience do you want to share?

For enquiries call:

+1-469-442-0620

banner-in1

  • Cloud Computing

Top 10 Cloud Computing Research Topics of 2024

Home Blog Cloud Computing Top 10 Cloud Computing Research Topics of 2024

Play icon

Cloud computing is a fast-growing area in the technical landscape due to its recent developments. If we look ahead to 2024, there are new research topics in cloud computing that are getting more traction among researchers and practitioners. Cloud computing has ranged from new evolutions on security and privacy with the use of AI & ML usage in the Cloud computing for the new cloud-based applications for specific domains or industries. In this article, we will investigate some of the top cloud computing research topics for 2024 and explore what we get most out of it for researchers or cloud practitioners. To master a cloud computing field, we need to check these Cloud Computing online courses .

Why Cloud Computing is Important for Data-driven Business?

The Cloud computing is crucial for data-driven businesses because it provides scalable and cost-effective ways to store and process huge amounts of data. Cloud-based storage and analytical platform helps business to easily access their data whenever required irrespective of where it is located physically. This helps businesses to take good decisions about their products and marketing plans. 

Cloud computing could help businesses to improve their security in terms of data, Cloud providers offer various features such as data encryption and access control to their customers so that they can protect the data as well as from unauthorized access. 

Few benefits of Cloud computing are listed below: 

  • Scalability: With Cloud computing we get scalable applications which suits for large scale production systems for Businesses which store and process large sets of data.
  • Cost-effectiveness : It is evident that Cloud computing is cost effective solution compared to the traditional on-premises data storage and analytical solutions due to its scaling capacity which leads to saving more IT costs. 
  • Security : Cloud providers offer various security features which includes data encryption and access control, that can help businesses to protect their data from unauthorized access.
  • Reliability : Cloud providers ensure high reliability to their customers based on their SLA which is useful for the data-driven business to operate 24X7. 

Top 10 Cloud Computing Research Topics

1. neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing.

Cloud computing research topics are getting wider traction in the Cloud Computing field. These topics in the paper suggest a multi-objective evolutionary algorithm (NN-MOEA) based on neural networks for dynamic workflow scheduling in cloud computing. Due to the dynamic nature of cloud resources and the numerous competing objectives that need to be optimized, scheduling workflows in cloud computing is difficult. The NN-MOEA algorithm utilizes neural networks to optimize multiple objectives, such as planning, cost, and resource utilization. This research focuses on cloud computing and its potential to enhance the efficiency and effectiveness of businesses' cloud-based workflows.

The algorithm predicts workflow completion time using a feedforward neural network based on input and output data sizes and cloud resources. It generates a balanced schedule by taking into account conflicting objectives and projected execution time. It also includes an evolutionary algorithm for future improvement.

The proposed NN-MOEA algorithm has several benefits, such as the capacity to manage dynamic changes in cloud resources and the capacity to simultaneously optimize multiple objectives. The algorithm is also capable of handling a variety of workflows and is easily expandable to include additional goals. The algorithm's use of neural networks to forecast task execution times is a crucial component because it enables the algorithm to generate better schedules and more accurate predictions.

The paper concludes by presenting a novel multi-objective evolutionary algorithm-based neural network-based approach to dynamic workflow scheduling in cloud computing. In terms of optimizing multiple objectives, such as make span and cost, and achieving a better balance between them, these cloud computing dissertation topics on the proposed NN-MOEA algorithm exhibit encouraging results.

Key insights and Research Ideas:

Investigate the use of different neural network architectures for predicting the future positions of optimal solutions. Explore the use of different multi-objective evolutionary algorithms for solving dynamic workflow scheduling problems. Develop a cloud-based workflow scheduling platform that implements the proposed algorithm and makes it available to researchers and practitioners.

2. A systematic literature review on cloud computing security: threats and mitigation strategies 

This is one of cloud computing security research topics in the cloud computing paradigm. The authors then provide a systematic literature review of studies that address security threats to cloud computing and mitigation techniques and were published between 2010 and 2020. They list and classify the risks and defense mechanisms covered in the literature, as well as the frequency and distribution of these subjects over time.

The paper suggests the data breaches, Insider threats and DDoS attack are most discussed threats to the security of cloud computing. Identity and access management, encryption, and intrusion detection and prevention systems are the mitigation techniques that are most frequently discussed. Authors depict the future trends of machine learning and artificial intelligence might help cloud computing to mitigate its risks. 

The paper offers a thorough overview of security risks and mitigation techniques in cloud computing, and it emphasizes the need for more research and development in this field to address the constantly changing security issues with cloud computing. This research could help businesses to reduce the amount of spam that they receive in their cloud-based email systems.

Explore the use of blockchain technology to improve the security of cloud computing systems. Investigate the use of machine learning and artificial intelligence to detect and prevent cloud computing attacks. Develop new security tools and technologies for cloud computing environments. 

3. Spam Identification in Cloud Computing Based on Text Filtering System

A text filtering system is suggested in the paper "Spam Identification in Cloud Computing Based on Text Filtering System" to help identify spam emails in cloud computing environments. Spam emails are a significant issue in cloud computing because they can use up computing resources and jeopardize the system's security. 

To detect spam emails, the suggested system combines text filtering methods with machine learning algorithms. The email content is first pre-processed by the system, which eliminates stop words and stems the remaining words. The preprocessed text is then subjected to several filters, including a blacklist filter and a Bayesian filter, to identify spam emails.

In order to categorize emails as spam or non-spam based on their content, the system also employs machine learning algorithms like decision trees and random forests. The authors use a dataset of emails gathered from a cloud computing environment to train and test the system. They then assess its performance using metrics like precision, recall, and F1 score.

The findings demonstrate the effectiveness of the proposed system in detecting spam emails, achieving high precision and recall rates. By contrasting their system with other spam identification systems, the authors also show how accurate and effective it is. 

The method presented in the paper for locating spam emails in cloud computing environments has the potential to improve the overall security and performance of cloud computing systems. This is one of the interesting clouds computing current research topics to explore and innovate. This is one of the good Cloud computing research topics to protect the Mail threats. 

Create a stronger spam filtering system that can recognize spam emails even when they are made to avoid detection by more common spam filters. examine the application of artificial intelligence and machine learning to the evaluation of spam filtering system accuracy. Create a more effective spam filtering system that can handle a lot of emails quickly and accurately.

4. Blockchain data-based cloud data integrity protection mechanism 

The "Blockchain data-based cloud data integrity protection mechanism" paper suggests a method for safeguarding the integrity of cloud data and which is one of the Cloud computing research topics. In order to store and process massive amounts of data, cloud computing has grown in popularity, but issues with data security and integrity still exist. For the proposed mechanism to guarantee the availability and integrity of cloud data, data redundancy and blockchain technology are combined.

A data redundancy layer, a blockchain layer, and a verification and recovery layer make up the mechanism. For availability in the event of server failure, the data redundancy layer replicates the cloud data across multiple cloud servers. The blockchain layer stores the metadata (such as access rights) and hash values of the cloud data and access control information

Using a dataset of cloud data, the authors assess the performance of the suggested mechanism and compare it to other cloud data protection mechanisms. The findings demonstrate that the suggested mechanism offers high levels of data availability and integrity and is superior to other mechanisms in terms of processing speed and storage space.

Overall, the paper offers a promising strategy for using blockchain technology to guarantee the availability and integrity of cloud data. The suggested mechanism may assist in addressing cloud computing's security issues and enhancing the dependability of cloud data processing and storage. This research could help businesses to protect the integrity of their cloud-based data from unauthorized access and manipulation.

Create a data integrity protection system based on blockchain that is capable of detecting and preventing data tampering in cloud computing environments. For enhancing the functionality and scalability of blockchain-based data integrity protection mechanisms, look into the use of various blockchain consensus algorithms. Create a data integrity protection system based on blockchain that is compatible with current cloud computing platforms. Create a safe and private data integrity protection system based on blockchain technology.

5. A survey on internet of things and cloud computing for healthcare

This article suggests how recent tech trends like the Internet of Things (IoT) and cloud computing could transform the healthcare industry. It is one of the Cloud computing research topics. These emerging technologies open exciting possibilities by enabling remote patient monitoring, personalized care, and efficient data management. This topic is one of the IoT and cloud computing research papers which aims to share a wider range of information. 

The authors categorize the research into IoT-based systems, cloud-based systems, and integrated systems using both IoT and the cloud. They discussed the pros of real-time data collection, improved care coordination, automated diagnosis and treatment.

However, the authors also acknowledge concerns around data security, privacy, and the need for standardized protocols and platforms. Widespread adoption of these technologies faces challenges in ensuring they are implemented responsibly and ethically. To begin the journey KnowledgeHut’s Cloud Computing online course s are good starter for beginners so that they can cope with Cloud computing with IOT. 

Overall, the paper provides a comprehensive overview of this rapidly developing field, highlighting opportunities to revolutionize how healthcare is delivered. New devices, systems and data analytics powered by IoT, and cloud computing could enable more proactive, preventative and affordable care in the future. But careful planning and governance will be crucial to maximize the value of these technologies while mitigating risks to patient safety, trust and autonomy. This research could help businesses to explore the potential of IoT and cloud computing to improve healthcare delivery.

Examine how IoT and cloud computing are affecting patient outcomes in various healthcare settings, including hospitals, clinics, and home care. Analyze how well various IoT devices and cloud computing platforms perform in-the-moment patient data collection, archival, and analysis. assessing the security and privacy risks connected to IoT devices and cloud computing in the healthcare industry and developing mitigation strategies.

6. Targeted influence maximization based on cloud computing over big data in social networks

Big data in cloud computing research papers are having huge visibility in the industry. The paper "Targeted Influence Maximization based on Cloud Computing over Big Data in Social Networks" proposes a targeted influence maximization algorithm to identify the most influential users in a social network. Influence maximization is the process of identifying a group of users in a social network who can have a significant impact or spread information. 

A targeted influence maximization algorithm is suggested in the paper "Targeted Influence maximization based on Cloud Computing over Big Data in Social Networks" to find the most influential users in a social network. The process of finding a group of users in a social network who can make a significant impact or spread information is known as influence maximization.

Four steps make up the suggested algorithm: feature extraction, classification, influence maximization, and data preprocessing. The authors gather and preprocess social network data, such as user profiles and interaction data, during the data preprocessing stage. Using machine learning methods like text mining and sentiment analysis, they extract features from the data during the feature extraction stage. Overall, the paper offers a promising strategy for maximizing targeted influence using big data and Cloud computing research topics to look into. The suggested algorithm could assist companies and organizations in pinpointing their marketing or communication strategies to reach the most influential members of a social network.

Key insights and Research Ideas: 

Develop a cloud-based targeted influence maximization algorithm that can effectively identify and influence a small number of users in a social network to achieve a desired outcome. Investigate the use of different cloud computing platforms to improve the performance and scalability of cloud-based targeted influence maximization algorithms. Develop a cloud-based targeted influence maximization algorithm that is compatible with existing social network platforms. Design a cloud-based targeted influence maximization algorithm that is secure and privacy-preserving.

7. Security and privacy protection in cloud computing: Discussions and challenges

Cloud computing current research topics are getting traction, this is of such topic which provides an overview of the challenges and discussions surrounding security and privacy protection in cloud computing. The authors highlight the importance of protecting sensitive data in the cloud, with the potential risks and threats to data privacy and security. The article explores various security and privacy issues that arise in cloud computing, including data breaches, insider threats, and regulatory compliance.

The article explores challenges associated with implementing these security measures and highlights the need for effective risk management strategies. Azure Solution Architect Certification course is suitable for a person who needs to work on Azure cloud as an architect who will do system design with keep security in mind. 

Final take away of cloud computing thesis paper by an author points out by discussing some of the emerging trends in cloud security and privacy, including the use of artificial intelligence and machine learning to enhance security, and the emergence of new regulatory frameworks designed to protect data in the cloud and is one of the Cloud computing research topics to keep an eye in the security domain. 

Develop a more comprehensive security and privacy framework for cloud computing. Explore the options with machine learning and artificial intelligence to enhance the security and privacy of cloud computing. Develop more robust security and privacy mechanisms for cloud computing. Design security and privacy policies for cloud computing that are fair and transparent. Educate cloud users about security and privacy risks and best practices.

8. Intelligent task prediction and computation offloading based on mobile-edge cloud computing

This Cloud Computing thesis paper "Intelligent Task Prediction and Computation Offloading Based on Mobile-Edge Cloud Computing" proposes a task prediction and computation offloading mechanism to improve the performance of mobile applications under the umbrella of cloud computing research ideas.

An algorithm for offloading computations and a task prediction model makes up the two main parts of the suggested mechanism. Based on the mobile application's usage patterns, the task prediction model employs machine learning techniques to forecast its upcoming tasks. This prediction is to decide whether to execute a specific task locally on the mobile device or offload the computation of it to the cloud.

Using a dataset of mobile application usage patterns, the authors assess the performance of the suggested mechanism and compare it to other computation offloading mechanisms. The findings demonstrate that the suggested mechanism performs better in terms of energy usage, response time, and network usage.

The authors also go over the difficulties in putting the suggested mechanism into practice, including the need for real-time task prediction and the trade-off between offloading computation and network usage. Additionally, they outline future research directions for mobile-edge cloud computing applications, including the use of edge caching and the integration of blockchain technology for security and privacy. 

Overall, the paper offers a promising strategy for enhancing mobile application performance through mobile-edge cloud computing. The suggested mechanism might improve the user experience for mobile users while lowering the energy consumption and response time of mobile applications. These Cloud computing dissertation topic leads to many innovation ideas. 

Develop an accurate task prediction model considering mobile device and cloud dynamics. Explore machine learning and AI for efficient computation offloading. Create a robust framework for diverse tasks and scenarios. Design a secure, privacy-preserving computation offloading mechanism. Assess computation offloading effectiveness in real-world mobile apps.

9. Cloud Computing and Security: The Security Mechanism and Pillars of ERPs on Cloud Technology

Enterprise resource planning (ERP) systems are one of the Cloud computing research topics in particular face security challenges with cloud computing, and the paper "Cloud Computing and Security: The Security Mechanism and Pillars of ERPs on Cloud Technology" discusses these challenges and suggests a security mechanism and pillars for protecting ERP systems on cloud technology.

The authors begin by going over the benefits of ERP systems and cloud computing as well as the security issues with cloud computing, like data breaches and insider threats. They then go on to present a security framework for cloud-based ERP systems that is built around four pillars: access control, data encryption, data backup and recovery, and security monitoring. The access control pillar restricts user access, while the data encryption pillar secures sensitive data. Data backup and recovery involve backing up lost or failed data. Security monitoring continuously monitors the ERP system for threats. The authors also discuss interoperability challenges and the need for standardization in securing ERP systems on the cloud. They propose future research directions, such as applying machine learning and artificial intelligence to security analytics.

Overall, the paper outlines a thorough strategy for safeguarding ERP systems using cloud computing and emphasizes the significance of addressing security issues related to this technology. Organizations can protect their ERP systems and make sure the Security as well as privacy of their data by implementing these security pillars and mechanisms. 

Investigate the application of blockchain technology to enhance the security of cloud-based ERP systems. Look into the use of machine learning and artificial intelligence to identify and stop security threats in cloud-based ERP systems. Create fresh security measures that are intended only for cloud-based ERP systems. By more effectively managing access control and data encryption, cloud-based ERP systems can be made more secure. Inform ERP users about the security dangers that come with cloud-based ERP systems and how to avoid them.

10. Optimized data storage algorithm of IoT based on cloud computing in distributed system

The article proposes an optimized data storage algorithm for Internet of Things (IoT) devices which runs on cloud computing in a distributed system. In IoT apps, which normally generate huge amounts of data by various devices, the algorithm tries to increase the data storage and faster retrials of the same. 

The algorithm proposed includes three main components: Data Processing, Data Storage, and Data Retrieval. The Data Processing module preprocesses IoT device data by filtering or compressing it. The Data Storage module distributes the preprocessed data across cloud servers using partitioning and stores it in a distributed database. The Data Retrieval module efficiently retrieves stored data in response to user queries, minimizing data transmission and enhancing query efficiency. The authors evaluated the algorithm's performance using an IoT dataset and compared it to other storage and retrieval algorithms. Results show that the proposed algorithm surpasses others in terms of storage effectiveness, query response time, and network usage. 

They suggest future directions such as leveraging edge computing and blockchain technology for optimizing data storage and retrieval in IoT applications. In conclusion, the paper introduces a promising method to improve data archival and retrieval in distributed cloud based IoT applications, enhancing the effectiveness and scalability of IoT applications.

Create a data storage algorithm capable of storing and managing large amounts of IoT data efficiently. Examine the use of cloud computing to improve the performance and scalability of data storage algorithms for IoT. Create a secure and privacy-preserving data storage algorithm. Assess the performance and effectiveness of data storage algorithms for IoT in real-world applications.

How to Write a Perfect Research Paper?

  • Choose a topic: Select the topic which is interesting to you so that you can share things with the viewer seamlessly with good content. 
  • Do your research: Read books, articles, and websites on your topic. Take notes and gather evidence to support your arguments.
  • Write an outline: This will help you organize your thoughts and make sure your paper flows smoothly.
  • Start your paper: Start with an introduction that grabs the reader's attention. Then, state your thesis statement and support it with evidence from your research. Finally, write a conclusion that summarizes your main points.
  • Edit and proofread your paper. Make sure you check the grammatical errors and spelling mistakes. 

Cloud computing is a rapidly evolving area with more interesting research topics being getting traction by researchers and practitioners. Cloud providers have their research to make sure their customer data is secured and take care of their security which includes encryption algorithms, improved access control and mitigating DDoS – Deniel of Service attack etc., 

With the improvements in AI & ML, a few features developed to improve the performance, efficiency, and security of cloud computing systems. Some of the research topics in this area include developing new algorithms for resource allocation, optimizing cloud workflows, and detecting and mitigating cyberattacks.

Cloud computing is being used in industries such as healthcare, finance, and manufacturing. Some of the research topics in this area include developing new cloud-based medical imaging applications, building cloud-based financial trading platforms, and designing cloud-based manufacturing systems.

Frequently Asked Questions (FAQs)

Data security and privacy problems, vendor lock-in, complex cloud management, a lack of standardization, and the risk of service provider disruptions are all current issues in cloud computing. Because data is housed on third-party servers, data security and privacy are key considerations. Vendor lock-in makes transferring providers harder and increases reliance on a single one. Managing many cloud services complicates things. Lack of standardization causes interoperability problems and restricts workload mobility between providers. 

Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are the cloud computing scenarios where industries focusing right now. 

The six major components of cloud infrastructure are compute, storage, networking, security, management and monitoring, and database. These components enable cloud-based processing and execution, data storage and retrieval, communication between components, security measures, management and monitoring of the infrastructure, and database services.  

Profile

Vinoth Kumar P

Vinoth Kumar P is a Cloud DevOps Engineer at Amadeus Labs. He has over 7 years of experience in the IT industry, and is specialized in DevOps, GitOps, DevSecOps, MLOps, Chaos Engineering, Cloud and Cloud Native landscapes. He has published articles and blogs on recent tech trends and best practices on GitHub, Medium, and LinkedIn, and has delivered a DevSecOps 101 talk to Developers community , GitOps with Argo CD Webinar for DevOps Community. He has helped multiple enterprises with their cloud migration, cloud native design, CICD pipeline setup, and containerization journey.

Avail your free 1:1 mentorship session.

Something went wrong

Upcoming Cloud Computing Batches & Dates

Course advisor icon

cloud computing Recently Published Documents

Total documents.

  • Latest Documents
  • Most Cited Documents
  • Contributed Authors
  • Related Sources
  • Related Keywords

Simulation and performance assessment of a modified throttled load balancing algorithm in cloud computing environment

<span lang="EN-US">Load balancing is crucial to ensure scalability, reliability, minimize response time, and processing time and maximize resource utilization in cloud computing. However, the load fluctuation accompanied with the distribution of a huge number of requests among a set of virtual machines (VMs) is challenging and needs effective and practical load balancers. In this work, a two listed throttled load balancer (TLT-LB) algorithm is proposed and further simulated using the CloudAnalyst simulator. The TLT-LB algorithm is based on the modification of the conventional TLB algorithm to improve the distribution of the tasks between different VMs. The performance of the TLT-LB algorithm compared to the TLB, round robin (RR), and active monitoring load balancer (AMLB) algorithms has been evaluated using two different configurations. Interestingly, the TLT-LB significantly balances the load between the VMs by reducing the loading gap between the heaviest loaded and the lightest loaded VMs to be 6.45% compared to 68.55% for the TLB and AMLB algorithms. Furthermore, the TLT-LB algorithm considerably reduces the average response time and processing time compared to the TLB, RR, and AMLB algorithms.</span>

An improved forensic-by-design framework for cloud computing with systems engineering standard compliance

Reliability of trust management systems in cloud computing.

Cloud computing is an innovation that conveys administrations like programming, stage, and framework over the web. This computing structure is wide spread and dynamic, which chips away at the compensation per-utilize model and supports virtualization. Distributed computing is expanding quickly among purchasers and has many organizations that offer types of assistance through the web. It gives an adaptable and on-request administration yet at the same time has different security dangers. Its dynamic nature makes it tweaked according to client and supplier’s necessities, subsequently making it an outstanding benefit of distributed computing. However, then again, this additionally makes trust issues and or issues like security, protection, personality, and legitimacy. In this way, the huge test in the cloud climate is selecting a perfect organization. For this, the trust component assumes a critical part, in view of the assessment of QoS and Feedback rating. Nonetheless, different difficulties are as yet present in the trust the board framework for observing and assessing the QoS. This paper talks about the current obstructions present in the trust framework. The objective of this paper is to audit the available trust models. The issues like insufficient trust between the supplier and client have made issues in information sharing likewise tended to here. Besides, it lays the limits and their enhancements to help specialists who mean to investigate this point.

Cloud Computing Adoption in the Construction Industry of Singapore: Drivers, Challenges, and Strategies

An extensive review of web-based multi granularity service composition.

The paper reviews the efforts to compose SOAP, non-SOAP and non-web services. Traditionally efforts were made for composite SOAP services, however, these efforts did not include the RESTful and non-web services. A SOAP service uses structured exchange methodology for dealing with web services while a non-SOAP follows different approach. The research paper reviews the invoking and composing a combination of SOAP, non-SOAP, and non-web services into a composite process to execute complex tasks on various devices. It also shows the systematic integration of the SOAP, non-SOAP and non-web services describing the composition of heterogeneous services than the ones conventionally used from the perspective of resource consumption. The paper further compares and reviews different layout model for the discovery of services, selection of services and composition of services in Cloud computing. Recent research trends in service composition are identified and then research about microservices are evaluated and shown in the form of table and graphs.

Integrated Blockchain and Cloud Computing Systems: A Systematic Survey, Solutions, and Challenges

Cloud computing is a network model of on-demand access for sharing configurable computing resource pools. Compared with conventional service architectures, cloud computing introduces new security challenges in secure service management and control, privacy protection, data integrity protection in distributed databases, data backup, and synchronization. Blockchain can be leveraged to address these challenges, partly due to the underlying characteristics such as transparency, traceability, decentralization, security, immutability, and automation. We present a comprehensive survey of how blockchain is applied to provide security services in the cloud computing model and we analyze the research trends of blockchain-related techniques in current cloud computing models. During the reviewing, we also briefly investigate how cloud computing can affect blockchain, especially about the performance improvements that cloud computing can provide for the blockchain. Our contributions include the following: (i) summarizing the possible architectures and models of the integration of blockchain and cloud computing and the roles of cloud computing in blockchain; (ii) classifying and discussing recent, relevant works based on different blockchain-based security services in the cloud computing model; (iii) simply investigating what improvements cloud computing can provide for the blockchain; (iv) introducing the current development status of the industry/major cloud providers in the direction of combining cloud and blockchain; (v) analyzing the main barriers and challenges of integrated blockchain and cloud computing systems; and (vi) providing recommendations for future research and improvement on the integration of blockchain and cloud systems.

Cloud Computing and Undergraduate Researches in Universities in Enugu State: Implication for Skills Demand

Cloud building block chip for creating fpga and asic clouds.

Hardware-accelerated cloud computing systems based on FPGA chips (FPGA cloud) or ASIC chips (ASIC cloud) have emerged as a new technology trend for power-efficient acceleration of various software applications. However, the operating systems and hypervisors currently used in cloud computing will lead to power, performance, and scalability problems in an exascale cloud computing environment. Consequently, the present study proposes a parallel hardware hypervisor system that is implemented entirely in special-purpose hardware, and that virtualizes application-specific multi-chip supercomputers, to enable virtual supercomputers to share available FPGA and ASIC resources in a cloud system. In addition to the virtualization of multi-chip supercomputers, the system’s other unique features include simultaneous migration of multiple communicating hardware tasks, and on-demand increase or decrease of hardware resources allocated to a virtual supercomputer. Partitioning the flat hardware design of the proposed hypervisor system into multiple partitions and applying the chip unioning technique to its partitions, the present study introduces a cloud building block chip that can be used to create FPGA or ASIC clouds as well. Single-chip and multi-chip verification studies have been done to verify the functional correctness of the hypervisor system, which consumes only a fraction of (10%) hardware resources.

Study On Social Network Recommendation Service Method Based On Mobile Cloud Computing

Cloud-based network virtualization in iot with openstack.

In Cloud computing deployments, specifically in the Infrastructure-as-a-Service (IaaS) model, networking is one of the core enabling facilities provided for the users. The IaaS approach ensures significant flexibility and manageability, since the networking resources and topologies are entirely under users’ control. In this context, considerable efforts have been devoted to promoting the Cloud paradigm as a suitable solution for managing IoT environments. Deep and genuine integration between the two ecosystems, Cloud and IoT, may only be attainable at the IaaS level. In light of extending the IoT domain capabilities’ with Cloud-based mechanisms akin to the IaaS Cloud model, network virtualization is a fundamental enabler of infrastructure-oriented IoT deployments. Indeed, an IoT deployment without networking resilience and adaptability makes it unsuitable to meet user-level demands and services’ requirements. Such a limitation makes the IoT-based services adopted in very specific and statically defined scenarios, thus leading to limited plurality and diversity of use cases. This article presents a Cloud-based approach for network virtualization in an IoT context using the de-facto standard IaaS middleware, OpenStack, and its networking subsystem, Neutron. OpenStack is being extended to enable the instantiation of virtual/overlay networks between Cloud-based instances (e.g., virtual machines, containers, and bare metal servers) and/or geographically distributed IoT nodes deployed at the network edge.

Export Citation Format

Share document.

Journal of Cloud Computing

Advances, Systems and Applications

Journal of Cloud Computing Cover Image

Special Issues - Guidelines for Guest Editors

For more information for Guest Editors, please see our Guidelines

Special Issues - Call for Papers

We welcome submissions for the upcoming special issues of the Journal of Cloud Computing

Computational Intelligence Techniques for Pattern Recognition in Multimedia Data Guest Editors: Mughair Aslam Bhatti, Sibghat Ullah Bazai, Konstantinos E. Psannis Submission deadline:  3 May 2024 

Advanced Blockchain and Federated Learning Techniques Towards Secure Cloud Computing Guest Editors: Yuan Liu, Jie Zhang, Athirai A. Irissappane, Zhu Sun Submission deadline:  30 April 2024

Mobile Edge Computing Meets AI Guest Editors: Lianyong Qi, Maqbool Khan, Qiang He, Shui Yu, Wajid Rafique Submission deadline:  3 May 2024   Blockchain-enabled Decentralized Cloud/Edge Computing Guest Editors: Qingqi Pei, Kaoru Ota, Martin Gilje Jaatun, Jie Feng, Shen Su Submission deadline: 31 st  March 2023

  • Most accessed

Unified ensemble federated learning with cloud computing for online anomaly detection in energy-efficient wireless sensor networks

Authors: S. Gayathri and D. Surendran

Edge intelligence-assisted animation design with large models: a survey

Authors: Jing Zhu, Chuanjiang Hu, Edris Khezri and Mohd Mustafa Mohd Ghazali

Target tracking using video surveillance for enabling machine vision services at the edge of marine transportation systems based on microwave remote sensing

Authors: Meiyan Li, Qinyong Wang and Yuwei Liao

Multiple objectives dynamic VM placement for application service availability in cloud networks

Authors: Yanal Alahmad and Anjali Agarwal

Investigation on storage level data integrity strategies in cloud computing: classification, security obstructions, challenges and vulnerability

Authors: Paromita Goswami, Neetu Faujdar, Somen Debnath, Ajoy Kumar Khan and Ghanshyam Singh

Most recent articles RSS

View all articles

A quantitative analysis of current security concerns and solutions for cloud computing

Authors: Nelson Gonzalez, Charles Miers, Fernando Redígolo, Marcos Simplício, Tereza Carvalho, Mats Näslund and Makan Pourzandi

Critical analysis of vendor lock-in and its impact on cloud computing migration: a business perspective

Authors: Justice Opara-Martins, Reza Sahandi and Feng Tian

Future of industry 5.0 in society: human-centric solutions, challenges and prospective research areas

Authors: Amr Adel

Intrusion detection systems for IoT-based smart environments: a survey

Authors: Mohamed Faisal Elrawy, Ali Ismail Awad and Hesham F. A. Hamed

Load balancing in cloud computing – A hierarchical taxonomical classification

Authors: Shahbaz Afzal and G. Kavitha

Most accessed articles RSS

Aims and scope

The Journal of Cloud Computing: Advances, Systems and Applications (JoCCASA) will publish research articles on all aspects of Cloud Computing. Principally, articles will address topics that are core to Cloud Computing, focusing on the Cloud applications, the Cloud systems, and the advances that will lead to the Clouds of the future. Comprehensive review and survey articles that offer up new insights, and lay the foundations for further exploratory and experimental work, are also relevant.

Published articles will impart advanced theoretical grounding and practical application of Clouds and related systems as are offered up by the numerous possible combinations of internet-based software, development stacks and database availability, and virtualized hardware for storing, processing, analysing and visualizing data. Where relevant, Clouds should be scrutinized alongside other paradigms such Peer to Peer (P2P) computing, Cluster computing, Grid computing, and so on. Thorough examination of Clouds with respect to issues of management, governance, trust and privacy, and interoperability, are also in scope. The Journal of Cloud Computing is indexed by the Science Citation Index Expanded/SCIE. SCI has subsequently merged into SCIE.  

Cloud Computing is now a topic of significant impact and, while it may represent an evolution in technology terms, it is revolutionising the ways in which both academia and industry are thinking and acting. The Journal of Cloud Computing, Advances, Systems and Applications (JoCCASA) has been launched to offer a high quality journal geared entirely towards the research that will offer up future generations of Clouds. The journal publishes research that addresses the entire Cloud stack, and as relates Clouds to wider paradigms and topics.

Chunming Rong, Editor-in-Chief University of Stavanger, Norway

  • Editorial Board
  • Sign up for article alerts and news from this journal

Annual Journal Metrics

2022 Citation Impact 4.0 - 2-year Impact Factor 4.4 - 5-year Impact Factor 1.711 - SNIP (Source Normalized Impact per Paper) 0.976 - SJR (SCImago Journal Rank)

2022 Speed 12 days submission to first editorial decision for all manuscripts (Median) 86 days submission to accept (Median)

2022 Usage  458,186 downloads 124 Altmetric mentions 

  • More about our metrics
  • ISSN: 2192-113X (electronic)

Benefit from our free funding service

New Content Item

We offer a free open access support service to make it easier for you to discover and apply for article-processing charge (APC) funding. 

Learn more here

Book cover

  • Conference proceedings
  • © 2022

Cloud Computing – CLOUD 2021

14th International Conference, Held as Part of the Services Conference Federation, SCF 2021, Virtual Event, December 10–14, 2021, Proceedings

  • Kejiang Ye 0 ,
  • Liang-Jie Zhang   ORCID: https://orcid.org/0000-0002-6219-0853 1

Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

You can also search for this editor in PubMed   Google Scholar

Kingdee International Software Group Co., Ltd., Shenzhen, China

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12989)

Part of the book sub series: Information Systems and Applications, incl. Internet/Web, and HCI (LNISA)

Conference series link(s): CLOUD: International Conference on Cloud Computing

4898 Accesses

5 Citations

Conference proceedings info: CLOUD 2021.

  • Table of contents
  • Other volumes

About this book

Editors and affiliations, bibliographic information.

  • Publish with us

Buying options

  • Available as EPUB and PDF
  • Read on any device
  • Instant download
  • Own it forever
  • Compact, lightweight edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info

Tax calculation will be finalised at checkout

Other ways to access

This is a preview of subscription content, log in via an institution to check for access.

Table of contents (7 papers)

Front matter, a brokering model for the cloud market.

  • Georgios Chatzithanasis, Evangelia Filiopoulou, Christos Michalakelis, Mara Nikolaidou

An Experimental Analysis of Function Performance with Resource Allocation on Serverless Platform

  • Yonghe Zhang, Kejiang Ye, Cheng-Zhong Xu

Electronic Card Localization Algorithm Based on Visible Light Screen Communication

  • Kao Wen, Junjian Huang, Chan Zhou, Kejiang Ye

BBServerless: A Bursty Traffic Benchmark for Serverless

  • Yanying Lin, Kejiang Ye, Yongkang Li, Peng Lin, Yingfei Tang, Chengzhong Xu

Performance Evaluation of Various RISC Processor Systems: A Case Study on ARM, MIPS and RISC-V

  • Yu Liu, Kejiang Ye, Cheng-Zhong Xu

Comparative Analysis of Cloud Storage Options for Diverse Application Requirements

  • Antara Debnath Antu, Anup Kumar, Robert Kelley, Bin Xie

COS2: Detecting Large-Scale COVID-19 Misinformation in Social Networks

  • Hailu Xu, Macro Curci, Sophanna Ek, Pinchao Liu, Zhengxiong Li, Shuai Xu

Back Matter

Other volumes.

The 6 full papers and 1 short paper presented were carefully reviewed and selected from 25 submissions. They deal with the latest fundamental advances in the state of the art and practice of cloud computing, identify emerging research topics, and define the future of cloud computing.

  • cloud computing
  • Cloud Computing
  • communication systems
  • computer networks
  • Distributed Architecture
  • distributed computer systems
  • High Availability
  • Network performance analysis
  • Network performance modeling
  • network protocols
  • parallel processing systems
  • Reliability
  • signal processing
  • software architecture
  • software design
  • software engineering
  • telecommunication networks

Liang-Jie Zhang

Book Title : Cloud Computing – CLOUD 2021

Book Subtitle : 14th International Conference, Held as Part of the Services Conference Federation, SCF 2021, Virtual Event, December 10–14, 2021, Proceedings

Editors : Kejiang Ye, Liang-Jie Zhang

Series Title : Lecture Notes in Computer Science

DOI : https://doi.org/10.1007/978-3-030-96326-2

Publisher : Springer Cham

eBook Packages : Computer Science , Computer Science (R0)

Copyright Information : Springer Nature Switzerland AG 2022

Softcover ISBN : 978-3-030-96325-5 Published: 26 February 2022

eBook ISBN : 978-3-030-96326-2 Published: 25 February 2022

Series ISSN : 0302-9743

Series E-ISSN : 1611-3349

Edition Number : 1

Number of Pages : XIII, 105

Number of Illustrations : 13 b/w illustrations, 35 illustrations in colour

Topics : Computer Communication Networks , Security , Database Management , Information Systems Applications (incl. Internet) , Software Engineering/Programming and Operating Systems

Policies and ethics

  • Find a journal
  • Track your research

10Pie

Latest Research Topics on Cloud Computing (2022 Updated)

research topic

Cloud computing is now a vital online technology that is used worldwide. The market size of cloud computing is expected to reach $832.1 billion by 2025 . Its demand will always increase in the future, and there are many major reasons behind it. It has acquired popularity because it is less expensive for companies rather than setting up their on-site server implementations.

In this article, we’ve covered the top 14 in-demand research topics on cloud computing that you need to know.

📌 These cloud Computing research topics are:

  • Green cloud computing
  • Edge computing
  • Cloud cryptography
  • Load balancing
  • Cloud analytics
  • Cloud scalability
  • Mobile cloud computing
  • Cloud deployment model
  • Cloud security
  • Cloud computing platforms
  • Cloud service model
  • Containerization

Top 14 Cloud Computing Research Topics For 2022

1. green cloud computing.

Due to rapid growth and demand for cloud, the energy consumption in data centers is increasing. Green Cloud Computing is used to minimize energy consumption and helps to achieve efficient processing and reduce the generation of E-waste.

 It is also called GREEN IT. The goal is to go paperless and decrease the carbon footprint in the environment due to remote working.

Power management, virtualization, sustainability, and environmental recycling will all be handled by green cloud computing. 

2. Edge Computing

A rapidly growing field where the data is processed at the network’s edge instead of being processed in a data warehouse is known as edge computing. The real-time computing capacity is driving the development of edge-computing platforms. The data is processed from the device itself to the point of origin without relying on a central location which also helps to increase the system’s security. It gives certain benefits such as cost-effectiveness, powerful performance, and new functionality which wasn’t previously available.

Some innovations are made with the help of cloud computing by increasing the ability of network edge capabilities and expanding wireless connections.

3. Cloud Cryptography

Cloud Cryptography is a strong layer of protection through codes that helps to give security to the cloud storage and breach of the data. It saves sensitive data content without delaying the transmission of information. It can turn plain text into unreadable code with the help of computers and algorithms and restrict the view of data being delivered.

The clients can use the cryptographic keys only to access this data. The user’s information is kept private, which results in fewer chances of cybercrime from the hackers. 

4. Load Balancing

The workload distribution over the server for soft computing is called load balancing. It helps distribute resources over multiple PCs, networks, and servers and allows businesses to manage workloads and application needs. Due to the rapid increase in traffic over the Internet, the server gets overloaded—two ways to solve the problem of overload of the servers: single-server and multiple-server solutions.

Keeping the system stable, boosting the system’s efficiency, and avoiding system failures are some reasons to use load balancing. It can be balanced by using software-based and hardware-based load balancers.

5. Cloud Analytics

Cloud analytics is a set of societal and analytical tools that analyze data on a private or public cloud to reduce data storage costs and management. It is specially designed to help clients get information from massive data. It is widely used in industrial applications such as genomics research, oil and gas exploration, business intelligence, security, and the Internet of Things (IoT).

It can help any industry improve its organizational performance and drive new value from its data. It is delivered through various models: public, private, hybrid, and community models. 

6. Cloud Scalability

Cloud scalability refers to the capacity to scale up or down IT resources as per the need for change in computing. Scalability is usually used to fulfill the static needs where the workload is handled linearly when resource deployment is persistent.

The types of scalability are vertical, horizontal, and diagonal. Horizontal scaling is regarded as a long-term advantage; on the other hand, vertical scaling is considered a short-term advantage. The benefits of cloud scalability are reliability, cost-effectiveness, ease, and speed. It is critical to understand how much those changes will cost and how they will benefit the company.

It can be applied to Disk I/O, Memory, Network I/O, and CPU. 

7. Mobile Cloud Computing

Mobile cloud computing helps to deliver applications to mobile devices through cloud computing. It allows different devices with different operating systems to have operating systems, computing tasks, and data storage. Mobile cloud helps speed and flexibility, resource sharing, and integrated data. Mobile Cloud Computing advantages are:

  • Increased battery life
  • Improvement in reliability and scalability
  • Simple Integration
  • Low cost and data storage capacity
  • Processing power improvement

The only drawback is that the bandwidth and variability are limited. It has been chosen due to productivity and demand, increasing connectivity.

8. Big Data

Big data is a technology generated by large network-based systems with massive amounts of data produced by different sources. The data get classified through structured (organized data) and unstructured (unorganized data), and semi-structured forms. The data are analyzed through algorithms which may vary depending upon the data means. Its characteristics are Volume, Variety, Velocity, and Variability.

Organizations can make better decisions with the help of external intelligence, which includes improvements in customer service, evaluation of consumer feedback, and identification of any risks to the product/services.

9. Cloud Deployment Model

The way people use the cloud has evolved based on ownership, scalability, access, and the cloud’s nature and purpose. A cloud deployment model identifies a particular sort of cloud environment that determines the cloud infrastructure’s appearance.

Cloud computing deployment models are classified according to their geographical location. Deployment methods are available in public, private, hybrid, community, and multi-cloud models.

It depends on the firms to choose as per their requirements as each model has its unique value and contribution.

10. Cloud Security

Cloud security brings the revolution to the current business model through shifts in information technology. With the rapid increase in the number of cloud computing, the organization needs the security of the cloud, which has become a significant concern.

Cloud Security protects the data from any leakage or outflow, with the removal of theft and catastrophe. The cloud has public, private, and hybrid clouds for security purposes.

Cloud security is needed to secure clients’ data, such as secret design documents and financial records. Its benefits are lower costs, reduced ongoing operational and administrative expenses, increased data reliability and availability, and reduced administration.

11. Cloud Computing Platforms

In an Internet-based data center, a server’s operating system and hardware are referred to as a cloud platform. Cloud platforms work when a firm rents to access computer services, such as servers, databases, storage, analytics, networking, software, and intelligence. So the companies don’t have to set up their data centers or computing infrastructure; they need to pay for what they use. It is a very vast platform where we can do many types of research.

12. Cloud Service Model

The use of networks hosted on the Internet to store from remote servers used in managing and processing data, rather than from a local server or a personal computer. It has three models namely Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS),and Platform-as-a-Service (PaaS).Each type of cloud computing service provides different control, flexibility, and management levels to choose the right services for your requirements.

The ability to deliver applications and services increases an organization’s ability to evolve and improve products faster. This model helps the firms have their benefits more quickly and better than traditional software. In the DevOps approach, development and operations teams are integrated into a single unit, enabling them to develop diverse skills that aren’t limited to a particular task. The benefits of DevOps are rapidity, increase in frequency, reliability, scale, improved collaboration, and security.

It provides a wide range of tools and technologies to meet clients’ needs.

14. Containerization

Containerization is a popular software development technique that is rapidly evolving and can be used in addition to virtualization. It includes packaging software code and all of its components so that it may run consistently and uniformly across any infrastructure. The developers and operational teams see its benefit as it helps create and locate applications quickly and more securely. It benefits developers and development groups as it provides flexibility/ portability, the ability to move swiftly and efficiently, speed, fault isolation, efficiency, easily manageable, and security. 

Final Words

Hence, all the above are new technologies of cloud computing developed to benefit users worldwide. But there are some challenges that need to be overcome. People nowadays have become skeptical about whether their data is private, secure, or not. This research can make this security more advanced and help to provide innovations in cloud computing.

We hope this article helps you to know some best research topics on cloud computing and how they’re changing the world.

10Pie Editorial Team is a team of certified technical content writers and editors with experience in the technology field combined with expert insights . Learn more about our editorial process to ensure the quality and accuracy of the content published on our website.

10pie blog logo

10Pie is your go-to resource hub to start and grow your Tech Career.

Send us your queries at [email protected]

CAREER GUIDES

  • Data Science
  • Cyber Security
  • Cloud Computing
  • Artificial Intelligence
  • Business Intelligence
  • Contributors
  • Tech Glossary
  • Editorial Policy
  • Tech Companies
  • Privacy policy

📈 Tech career paths

  • AI career paths
  • Python career paths
  • DevOps career paths
  • Data engineer career paths
  • Data science career paths
  • Software testing career paths
  • Software engineer career paths

🏆 Tech courses

  • Cloud computing courses in Pune
  • Data analytics courses in Hyderabad
  • Data science courses in Mangalore
  • Cloud computing courses in Hyderabad
  • Data analytics courses in Indore
  • Data analytics courses in Mumbai
  • Data analytics courses in Pune

📌 Featured articles

  • AI seminar topics
  • Which tech career is right for me?
  • Will AI replace software engineers?
  • Top data annotation companies
  • Cyber security career roadmap
  • How Tesla uses Artificial Intelligence
  • Cloud computing seminar topics

© 2023 All rights reserved. All content is copyrighted, republication is prohibited.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings
  • Advanced Search
  • Journal List
  • Comput Intell Neurosci
  • v.2022; 2022

Logo of cin

This article has been retracted.

The rise of cloud computing: data protection, privacy, and open research challenges—a systematic literature review (slr), junaid hassan.

1 Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Chiniot-Faisalabad Campus, Chiniot 35400, Pakistan

Danish Shehzad

2 Department of Computer Science, Superior University, Lahore 54000, Pakistan

Usman Habib

3 Faculty of Computer Sciences and Engineering, GIK Institute of Engineering Sciences and Technology, Topi, Swabi 23640, Khyber Pakhtunkhwa, Pakistan

Muhammad Umar Aftab

Muhammad ahmad, ramil kuleev.

4 Institute of Software Development and Engineering, Innopolis University, Innopolis 420500, Russia

Manuel Mazzara

Associated data.

The data used to support the findings of this study are provided in this article.

Cloud computing is a long-standing dream of computing as a utility, where users can store their data remotely in the cloud to enjoy on-demand services and high-quality applications from a shared pool of configurable computing resources. Thus, the privacy and security of data are of utmost importance to all of its users regardless of the nature of the data being stored. In cloud computing environments, it is especially critical because data is stored in various locations, even around the world, and users do not have any physical access to their sensitive data. Therefore, we need certain data protection techniques to protect the sensitive data that is outsourced over the cloud. In this paper, we conduct a systematic literature review (SLR) to illustrate all the data protection techniques that protect sensitive data outsourced over cloud storage. Therefore, the main objective of this research is to synthesize, classify, and identify important studies in the field of study. Accordingly, an evidence-based approach is used in this study. Preliminary results are based on answers to four research questions. Out of 493 research articles, 52 studies were selected. 52 papers use different data protection techniques, which can be divided into two main categories, namely noncryptographic techniques and cryptographic techniques. Noncryptographic techniques consist of data splitting, data anonymization, and steganographic techniques, whereas cryptographic techniques consist of encryption, searchable encryption, homomorphic encryption, and signcryption. In this work, we compare all of these techniques in terms of data protection accuracy, overhead, and operations on masked data. Finally, we discuss the future research challenges facing the implementation of these techniques.

1. Introduction

Recent advances have given rise to the popularity and success of cloud computing. It is a new computing and business model that provides on-demand storage and computing resources. The main objective of cloud computing is to gain financial benefits as cloud computing offers an effective way to reduce operational and capital costs. Cloud storage is a basic service of cloud computing architecture that allows users to store and share data over the internet. Some of the advantages of cloud storage are offsite backup, efficient and secure file access, unlimited data storage space, and low cost of use. Generally, cloud storage is divided into five categories: (1) private cloud storage, (2) personal cloud storage, (3) public cloud storage, (4) community cloud storage, and (5) hybrid cloud storage.

However, when we outsource data and business applications to a third party, security and privacy issues become a major concern [ 1 ]. Before outsourcing private data to the cloud, there is a need to protect private data by applying different data protection techniques, which we will discuss later in this SLR. After outsourcing the private data to the cloud, sometimes the user wants to perform certain operations on their data, such as secure search. Therefore, while performing such operations on private data, the data needs to be protected from intruders so that intruders cannot hack or steal their sensitive information.

Cloud computing has many advantages because of many other technical resources. For example, it has made it possible to store large amounts of data, perform computation on data, and many other various services. In addition, the cloud computing platform reduces the cost of services and also solves the problem of limited resources by sharing important resources among different users. Performance and resource reliability requires that the platform should be able to tackle the security threats [ 2 ]. In recent years, cloud computing has become one of the most important topics in security research. These pieces of research include software security, network security, and data storage security.

The National Institute of Standards and Technology (NIST) defines cloud computing as [ 3 ] “a model for easy access, ubiquitous, resource integration, and on-demand access that can be easily delivered through various types of service providers. The Pay as You Go (PAYG) mechanism is followed by cloud computing, in which users pay only for the services they use. The PAYG model gives users the ability to develop platforms, storage, and customize the software according to the needs of the end-user or client. These advantages are the reason that the research community has put so much effort into this modern concept [ 4 ].

Security is gained by achieving confidentiality, integrity, and data availability. Cloud users want assurance that their data must be saved while using cloud services. There are various types of attacks that launch on a user's private data, such as intrusion attacks, hacking, stealing the user's private data, and denial of service attacks. 57% of companies report security breaches using cloud services [ 5 ]. Data privacy is more important than data security because cloud service providers (CSPs) have full access to all cloud user's data and can monitor their activities, because of which the cloud user privacy is compromised. For example, a user is a diabetic, and the CSP is analyzing their activities, such as what he is searching for more and what kind of medicine he is using the most. Because of this access, CSP can get all the sensitive information about an individual user and can also share this information with a medicine company or an insurance company [ 6 ]. Another problem is that the user cannot fully trust CSP. Because of this reason, there are many legal issues. Users cannot store their sensitive data on unreliable cloud services because of this mistrust. As a result, many users cannot use cloud services to store their personal or sensitive data in the cloud. There are two ways to solve this problem. One is that the user installs a proxy on his side, and this proxy takes the user's data, encrypts and saves their data using some data protection techniques, and then sends it to the untrusted CSP [ 7 ].

The recent Google privacy policy is that any user can use any Google service free of cost; however, Google monitors their activity by monitoring their data to improve their services [ 8 ]. In this paper, we compare different types of data protection techniques that provide privacy and security over the data stored on the cloud. Many papers discuss outsourcing data storage on the cloud [ 9 , 10 ], however, we also discuss how we can secure the outsourced data on the cloud. Most of the paper describes the data security on the cloud vs the external intruder attacks [ 11 , 12 ]. This paper not only discusses the security attacks from outside intruders and securing mechanisms but also inner attacks from the CSP itself. Many surveys cover data privacy by applying cryptographic techniques [ 13 , 14 ]. These cryptographic techniques are very powerful for the protection of data and also provide a very significant result. However, there is a problem as these cryptographic techniques require key management, and some of the cloud functionalities are not working on these cryptographic techniques. In this paper, we also discuss some steganographic techniques. To the best of our knowledge, no study discusses all the conventional and nonconventional security techniques. Therefore, all the data protection techniques need to be combined in one paper.

The rest of this paper is organized as follows: Section 3 of the paper describes the research methodology that consists of inclusion, exclusion criteria, quality assessment criteria, study selection process, research questions, and data extraction process. Also, we discuss assumptions and requirements for data protection in the cloud. Section 4 presents all the cryptographic and also noncryptographic techniques that are used for data protection over the cloud. Also, we discuss the demographic characteristics of the relevant studies by considering the following four aspects: (i) publication trend, (ii) publication venues (proceeding and journals), (iii) number of citations, and (iv) author information. Section 4 also compares all these data protection techniques. Lastly, in Section 5 , we discuss results and present conclusion and future work.

2. Related Work

The first access control mechanism and data integrity in the provable data possession (PDP) model is proposed in the paper [ 15 ], and it provides two mobile applications based on the RSA algorithm. Like the PDP, the author in the paper [ 16 ] proposed a proof of retrievability (PoR) scheme that is used to ensure the integrity of remote data. PoR scheme efficiency is improved using a shorter authentication tag that is integrated with the PoR system [ 17 ]. A more flexible PDP scheme is proposed by the author of the paper [ 18 ] that uses symmetric key encryption techniques to support dynamic operations. A PDP protocol with some flexible functionality is developed, in which, we can add some blocks at run time [ 19 ]. A new PDP system with a different data structure is introduced, and it improves flexibility performance [ 20 ]. Similarly, another PDP model with a different data structure is designed to handle its data functionality [ 21 ]. To improve the accuracy of the data, the author of the paper [ 22 ] designed a multireplicas data verification scheme that fully supports dynamic data updates.

A unique data integration protocol [ 23 ] for multicloud servers is developed. The author of the paper [ 24 ] also considers the complex area where multiple copies are stored in multiple CSPs and builds a solid system to ensure the integrity of all copies at once. A proxy PDP scheme [ 25 ] is proposed, which supports the delegation of data checking that uses concessions to verify auditor consent. In addition, the restrictions of the verifier are removed that strengthened the scheme, and it proposes a separate PDP certification system [ 26 ]. To maintain the security of information, a concept for information security is proposed and a PDP protocol for public research is developed [ 27 ]. To resolve the certification management issue, the PDP system with data protection is introduced [ 28 ].

Identity-based cryptography is developed, in which a user's unique identity is used as input to generate a secret key [ 29 ]. Another PDP protocol is recommended to ensure confidentiality [ 30 ]. The author of the paper [ 31 ] proposed a scheme, in which tags are generated through the ring signature technique for group-based data sharing that supports public auditing and maintains user privacy. A new PDP system is introduced for data sharing over the cloud while maintaining user privacy [ 32 ]. Additionally, it supports the dynamic group system and allows users to exit or join the group at any time. Another PDP system [ 33 ] that is based on broadcast encryption and supports dynamic groups [ 34 ] is introduced. The issue of user revocation has been raised [ 35 ], and to address this issue, a PDP scheme has been proposed, which removes the user from the CSP using the proxy signature method. A PDP-based group data protocol was developed to track user privacy and identity [ 36 ]. A PDP system [ 37 ] is proposed for data sharing between multiple senders. The author of the paper [ 38 ] provides SEPDP systems while maintaining data protection. However, the author of the paper [ 39 ] proved that the scheme proposed in [ 38 ] is vulnerable to malicious counterfeiting by the CSP. A collision-resistant user revocable public auditing (CRUPA) system [ 40 ] is introduced for managing the data that is shared in groups. Another scheme [ 41 ] is introduced as a way to ensure the integrity of mobile data terminals in cloud computing.

To address the PKI issue, identity-based encryption [ 42 ] is designed to enhance the PDP protocol and maintain user privacy in a dynamic community. Before sharing user-sensitive data with third parties or researchers, data owners ensure that the privacy of user-sensitive data is protected. We can do this using data anonymization techniques [ 43 ]. In recent years, the research community has focused on the PPDP search area and developed several approaches for tabular data and SN [ 44 – 49 ]. There are two popular settings in PPDP: one is interactive, and the other is noninteractive [ 50 ]. The K-anonymity model [ 51 ] and its effects are most commonly used in the noninteractive setting of PPDP [ 52 – 56 ]. Differential privacy (DP) [ 57 ] and an interactive configuration of PPDP make extensive use of DP-based methods [ 58 – 60 ]. Meanwhile, several studies for a noninteractive setting reported a PD-dependent approach [ 61 ]. Researchers have expanded the concepts used to anonymize tabular data to protect the privacy of SN users [ 62 – 64 ].

Most images on the internet are in a compressed form. Hence, various studies design some techniques for AMBTC-compressed images. Data concealment has become an active research area. We can hide the data by adding confidential information to the cover image, and as a result, we get the stego image. There are two types of data hiding schemes: one is irreversible [ 65 – 68 ], and the other is a reversible data hiding scheme [ 69 – 71 ]. A cipher text designated for data collection can be re-encrypted as designated for another by a semitrusted proxy without decryption [ 72 ]. The first concrete construction of collusion-resistant unidirectional identity-based proxy re-encryption scheme, for both selective and adaptive identity, is proposed in the paper [ 73 ]. One of the data hiding schemes is the histogram shifting scheme [ 74 – 76 ], and it is the most widely used. A histogram-shifting data hiding scheme [ 77 ] that detects pixel histograms in the cover image is introduced. When big and diverse data are distributed everywhere, we cannot control the vicious attacks. Therefore, we need a cryptosystem to protect our data [ 78 – 80 ].

Some identity-based signature (IBS) schemes [ 81 – 84 ] are introduced that are based on bilinear pairing. However, the authentication schemes based on bilinear pairing over elliptic curve are more efficient and safer than traditional public key infrastructure [ 85 , 86 ]. The paper [ 87 ] proposed a preserving proxy re-encryption scheme for public cloud access control. A differential attack is performed on one-to-many order preserving encryption OPE by exploiting the differences of the ordered ciphertexts in [ 88 ]. Another scheme is proposed, which consists of a cancelable biometric template protection scheme that is based on the format-preserving encryption and Bloom filters [ 89 ]. Some of the researchers also use the concept of paring free identity-based signature schemes [ 90 – 93 ]. A lightweight proxy re-encryption scheme with certificate-based and incremental cryptography for fog-enabled e-healthcare is proposed in [ 94 ].

3. Research Methodology

The objective of this SLR is to evaluate, investigate, and identify the existing research in the context of data storage security in cloud computing to find and evaluate all the existing techniques. SLR is a fair and unbiased way of evaluating all the existing techniques. This way provides a complete and evidence-based search related to a specific topic. At this time, there is no SLR conducted on data storage security techniques that explains all the cryptographic and noncryptographic techniques. Hence, this SLR fulfills the gap by conducting itself. This SLR aims to provide a systematic method using the guidelines of an SLR provided by Kitchenham [ 95 ]. Furthermore, to increase the intensity of our evidence, we follow another study that is provided by [ 96 ]. Our SLR consists of three phases, namely planning, conducting, and reporting. By following these three phases, we conduct our SLR, as shown in Figure 1 .

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.001.jpg

Review procedure.

3.1. Research Questions

The primary research question of this systematic literature review is “What types of data protection techniques have been proposed in cloud computing?” This primary research question is further divided into four RQs. All these four questions are enlisted below.

  •   RQ1: what types of data protection techniques have been proposed in cloud computing?
  •   RQ2: what are the demographic characteristics of the relevant studies?
  •   RQ3: which data protection technique provides more data protection among all the techniques?
  •   RQ4: what are the primary findings, research challenges, and directions for future research in the field of data privacy in cloud computing?

3.2. Electronic Databases

Six electronic databases were selected to collect primary search articles. All these six electronic databases are well-reputed in the domain of cloud computing. Most of the relevant articles are taken from two electronic databases, namely IEEE and Elsevier. All the electronic databases that we use in this research process are given in Table 1 .

Databases sources.

3.3. Research Terms

First of all, the title base search is done on the different electronic databases, which are given in Table 1 . After that, most related studies/articles are taken. Search is done using the string (p1 OR p2. . . . . .OR pn.) AND (t1 OR t2. . . . . . OR tn.). This string/query is constructed using a population, intervention, control, and outcomes (PICO) structure that consists of population, intervention, and outcome. Database search queries are given in Table 2 .

  •   Population : “cloud computing”
  •   Intervention : “data security,” “data privacy,” “data integrity”
  •   Using the PICO structure, we construct a general query for the electronic database. Generic: ((“Document Title”: cloud∗) AND (“Document Title”: data AND (privacy OR protect∗ OR secure∗ OR integrity∗))).

Databases search query.

3.4. Procedure of Study Selection

The procedure of study selection is described in Figure 2 . This procedure has three phases: the first one is exclusion based on the title, in which articles are excluded based on the title, and the relevant titles are included. The second is exclusion based on the abstract in which articles are excluded. By reading the abstract of the articles, the most relevant abstract is included, and the last one is exclusion based on a full text that also includes quality assessment criteria.

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.002.jpg

Study selection procedure.

3.5. Eligibility Control

In this phase, all the selected papers are fully readied, and relevant papers are selected to process our SLR further. Table 3 shows the final selected papers from each database based on inclusion and exclusion criteria. The related papers are selected based on inclusion and exclusion criteria, which are given in Table 4 .

Results from electronic databases.

Inclusion and exclusion criteria.

3.6. Inclusion and Exclusion Criteria

We can use the inclusion and exclusion criteria to define eligibility for basic study selection. We apply the inclusion and exclusion criteria to those studies that are selected after reading the abstract of the papers. The criteria for inclusion and exclusion are set out in Table 4. Table 4 outlines some of the conditions that we have applied to the articles. After applying the inclusion and exclusion criteria, we get relevant articles, which we finally added to our SLR. The search period is from 2010 to 2021, and most of the papers included in our SLR are from 2015 to onward.

We apply inclusion and exclusion criteria in the third phase of the study selection process, and we get 139 results. After that, we also apply quality criteria, and finally, we get 52 articles, which are included in this SLR. Most of the articles are taken from Elsevier and IEEE electronic databases. IEEE is the largest Venus for data storage security in cloud computing. The ratio of the selected articles from different electronic databases is shown in Figure 3 .

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.003.jpg

Percentage of selected studies.

3.7. Quality Assessment Criteria

Quality checking/assessment is done in the 3 rd phase of the study selection process. A scale of 0-1 is used for the quality assessment (QA) of the articles.

Poor-quality articles get 0 points on the scale, and good-quality articles get 1 point on the scale. The articles with 1 point on the scale are included in this SLR. Hence, by applying the quality checking/assessment criteria on all the articles, we finally get 52 articles. All the selected papers have validity and novelty for different data protection techniques, and also, we find the relevance of the articles in the quality assessment criteria, which ensures that all the articles are related to the SLR (data storage protection and privacy in cloud computing). The quality checking (QC) criteria are given in Table 5 .

Quality checking criteria.

3.8. Taxonomy of the Data Protection Techniques

In this section, all the data protection techniques are depicted in Figure 4 . All the data protection techniques are arranged and classified in their related categories. The purpose of the taxonomy is to give a presentational view of all the data protection techniques. The data protection techniques are mainly divided into two categories, namely (1) noncryptographic techniques and (2) cryptographic techniques.

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.004.jpg

Taxonomy of the data protection techniques.

4. Results and Discussions

Data protection on the cloud is done by developing a third-party proxy that is trusted by the user. The trusted proxy is not a physical entity. It is a logical entity that can be developed on the user end (like on the user's personal computer) or at that location on which the user can trust. Mostly, all the local proxies are used as an additional service or as an additional module (like browser plugins). To fulfill the objective of data protection by proxies, some requirements are needed to fulfill necessarily. The requirements are given below:

  • User privilege. There are several objectives of user privilege or user empowerment, however, the main objective is to increase the trust of the users in data protection proxies used by the cloud.
  • Transparency. Another important objective is that when users outsource their sensitive data to trusted proxies, their data should remain the same and should not be altered.
  • Cloud computing provides large computing power and cost saving resources. However, one concern is that if we increase data security, computation overhead should not increase. We want to minimize the computation overhead over the proxies.
  • Cloud functionalities preservation. Cloud functionalities preservation is the most important objective. The users encrypt their sensitive data on their personal computers by applying different encryption techniques to increase the protection of their data, however, by applying these different encryption techniques, they are not able to avail some of the cloud functionalities because of compatibility issues [ 97 ]. Hence, it is the main issue.

Figure 5 provides a data workflow for protecting sensitive data on the cloud using a local proxy. There are different types of the assumption that are made for data protection, and some of them are discussed below.

  • Curious CSPs, the most commonly used model in cloud computing, is given in the literature [ 98 ]. The cloud service provider honestly fulfills the responsibilities, i.e., they do not interfere in the user activities, and they only follow the stander protocols. The CSP is honest, however, sometimes, it is curious to analyze the users' queries and analyze their sensitive data, which is not good because it is against the protocol. Also, by this, the privacy of the user is compromised. Hence, we can avoid these things by applying some data protection techniques on the user end to protect the users' sensitive data from the CSPs.
  • In some cases, CSPs may collaborate with data protection proxies that are present on the users' sides to increase the level of trust between the users and CSPs because better trust can motivate more users to move to the cloud. This collaboration can be done if CSPs provide some services to the users with a stable interface for storing, searching, and computing their data.
  • A multicloud approach to cloud computing infrastructure has also been proposed to improve their performance. In this regard, multiple cloud computing services are provided in the same heterogeneous architecture [ 19 ]. A multicloud gives the user multiple different places to store their data at their desired location. There are several benefits to use a multicloud, e.g., it reduces reliance on a single CSP, which increases flexibility.

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.005.jpg

Data workflow on cloud using local proxy.

4.1. RQ1: What Type of Data Protection Techniques has Been Proposed in Cloud Computing?

In this session, we will discuss all the techniques for data storage security over the cloud. All these techniques are divided into two main categories, namely (i) cryptographic techniques and (ii) noncryptographic techniques. The local proxy uses different techniques to protect data that are stored on the cloud. Because of this reason, we cannot gain all the advantages of cloud services. Therefore, we analyze and compare all these techniques based on different criteria. These different criteria are as follows: (i) the data accuracy of all the techniques, (ii) the data protection level of all the techniques, (iii) all the functionalities these schemes allow on masked and unmasked data, and (iv) the overhead to encrypt and decrypt data over the cloud.

4.1.1. Noncryptographic Techniques

There are some noncryptographic techniques, and we discuss them in this paper as follows:

(1) Data Anonymization . Data anonymization is a data privacy technique used to protect a user's personal information. This technique hides the person's personal information by hiding the person's identifier or attributes that could reveal a person's identity. Data anonymization can be done by applying various mechanisms, for example, by removing or hiding identifiers or attributes. It can also be done by encrypting the user's personal information. The main purpose of performing data anonymization is that we can hide the identity of the person in any way. Data anonymity can be defined as the user's personal data being altered in such a way that we cannot directly or indirectly identify that person, and the CSP cannot retrieve any person's personal information. Data anonymization techniques have been developed in the field of statistical control disclosure. These techniques are most often used when we want to outsource sensitive data for testing purposes. Data anonymization is graphically represented in Figure 6 .

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.006.jpg

Data anonymization flow diagram.

Data anonymization techniques are most often used when we want to outsource sensitive data for testing purposes. For example, if some doctors want to diagnose certain diseases, some details of these diseases are required for this purpose. This information is obtained from the patients that suffer from these diseases, but it is illegal to share or disclose anyone's personal information. However, for this purpose, we use data anonymization technique to hide or conceal the person's personal information before outsourcing the data. In some cases, however, the CSP wants to analyze the user's masked data. In the data anonymization technique, attributes are the most important part. Attributes can include name, age, gender, address, salary, etc. Table 6 shows the identifiers classification.

Identifiers classification.

Data anonymization can be performed horizontally or vertically on this table and also on the record or group of records. The attributes are further classified into the following categories.

  • Sensitive Attributes: sensitive attributes possess sensitive information of the person, such as salary, disease information, phone number, etc. These attributes are strongly protected by applying some protection techniques.
  • Nonsensitive Attributes: these types of attributes do not belong to any type of category. Hence, they do not disclose the identity of a person.
  • Identifiers: identifier belongs to the identity of a person, such as Id card, name, social security number, etc. Because of the presence of these identifiers, the relationship between different attributes can be detected. Hence, these identifiers must be replaced or anonymized.
  • Quasi-Identifiers: quasi-identifiers are the group of identifiers that are available publicly, such as zip-code, designation, gender, etc. Separately, these identifiers cannot reveal the personal identity, however, by combining them, they may reveal the identity of the person. Hence, we want to separate these quasi-identifiers to avoid the discloser.

There are two main categories of data masking: (1) perturbative masking and (2) nonperturbative masking.

  • (1) Perturbative Masking
  • In perturbation, masking data is altered or masked with dummy datasets. Original data is replaced with dummy data, however, this data looks like the original data with some noise addition. The statistical properties of the original data are present in the masked data, however, nonperturbative masking does not contain the statistical properties of original data, because in perturbation masking, data is altered or masked with physically same but dummy data.
  • Data swapping
  • In data swapping, the data is randomly changed with the same but dummy data between different records [ 99 ]. However, if the numerical values are present in the dataset, then in certain limits, the values can be changed. Otherwise, the meaning of the data is changed. The masked data cannot look like the original data. For those attributes that can be ranked, the attribute is replaced with the nearby ranked attributes, and a very large difference between ranks is not suitable [ 100 ]. In data swapping, higher-level attributes are swapped [ 101 ] and individual values are not changed.
  • Noise Addition
  • In this mechanism, some noise is added to the original dataset to alter the original data. Noise is only added to the data that is continuous and divided into categories [ 102 ]. The noise is added into all the attributes that are present in the original dataset, such as sensitive attributes and also quasi-attributes.
  • Microaggregation
  • In this technique, all the relevant data is stored into different groups, and these different groups release average values from each record [ 103 ]. If a large number of similar records is present in different groups, then more data utility is done. We can cluster the data in many ways, e.g., in categorical versions [ 104 ]. Microaggregation is done on a quasi-attribute to protect these attributes from reidentification, and the quasi-attributes protect all the other attributes from reidentification. We can also minimize reidentification by data clustering [ 105 ].
  • Pseudonymization
  • In this method, the original data is replaced with artificial datasets [ 106 ]. In this technique, each attribute present in the original data is a pseudonym, and by doing this, data is less identifiable.
  • (2) Nonperturbative Masking
  • Nonperturbative masking does not change or alter the original data, however, it changes the statistical properties of the original data. Mask data is created by the reduction of the original data or suppressions of the original data [ 107 ].
  • Bucketization
  • In this method, original data is stored in different buckets, and these buckets are protected through encryption [ 108 ]. We can protect the sensitive attributes through bucketization.
  • Data slicing is a method in which a larger group of data is divided into smaller slices or segments [ 109 ]. Hence, we can slice the data, and in this way, the sensitive attribute and the quasi-attributes are divided into different slices. By identifying the individual slice, the identity of the person cannot be disclosed.
  • Sampling is a technique in which the population and sample concept is present. The entire data is called population, and the masked data is called a sample. In this technique, we make different samples of the original data. A smaller data sample provides more protection [ 110 ].
  • Generalization
  • It is a technique in which some additional attributes are added to the record. If the number of quasi-attributes is less rare, then some dummy attributes are added into the record, which look like the quasi-attributes. Hence, by doing this, reidentification becomes more difficult [ 111 ]. By applying generalization on data, we can protect the identity of a person because it hides the relationship between the quasi-attributes.

The summary of data anonymization techniques is given in Table 7 .

The summary of data anonymization techniques.

(2) Data Splitting . Data splitting is a technique in which sensitive data is divided into different fragments [ 112 ] to protect it from unauthorized access. In this technique, we first split the data into different fragments, then these fragments are randomly stored on different clouds. Even if the intruder gains access to a single fragment in any way, still the intruder will not be able to identify the person. For example, if an intruder gets a fragment from the cloud that contains the salary information of an organization, it is useless until he knows which salary belongs to which person. Hence, data splitting is a very useful technique for protecting data stored on the cloud.

Local proxies outsource data to the cloud without splitting the data, and they can also split the data first and then outsource to the same cloud using different accounts in the same CSP. It can also store data on different cloud platforms that run through different CSPs but provide some of the same services. Data is split before storing in different locations because even if some part or piece of data is known to an intruder, they will not be able to identify anyone.

Firstly, the local proxy retrieves sensitive data from the user and then calculates the risk factor for disclosure. In this method, the user can define the privacy level, and this privacy level provides information about all the sensitive attributes that can reveal someone's identity. These sensitive attributes are called quasi-attributes or quasi-identifiers. Next, the local proxy decides the number of pieces into which the sensitive data will be split and the number of locations that will be needed to store those pieces. Therefore, no one can reveal a person's identity, and all this information about the data splitting mechanism is stored at the local proxy. However, the system must be able to function properly and respond to the queries on time. After that, the local proxy stores these different data fragments in different cloud databases, and now, they are free from disclosure. The data-splitting mechanism supports almost all the functions of the cloud. Hence, we can use almost all the services provided by CSP using the data-splitting mechanism for storing data in the cloud.

When the users want to retrieve the original data, they process a query on a local proxy. The query is processed, and the data storage locations are retrieved from the local database. After that, the query is replicated as many times as the data is split into fragments, and these queries are forwarded to the relevant CSPs. As a result, each CSP provides a set of results that represent a partial view of the complete result. Finally, the proxy collects partial results according to the criteria used to split the data and provides the complete result to the user. Mostly, all these fragments are stored on different cloud databases in their original structure. Therefore, computation on these fragments can be performed easily. However, there is a problem if we want to perform computation separately on the individual fragment. Then, there is no algorithm that exists for this computation. Therefore, some algorithms are required to perform these types of computation as this computation requires communication between different CSPs. The redundancy of proxy metadata and backup policies must be essential to ensure the robustness of the mechanism. The data-splitting is graphically represented in Figure 7 .

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.007.jpg

Data-splitting flow diagram.

The summary of the data-splitting is given in Table 8 . Different data-splitting techniques are used for the protection of data stored on the cloud. Some of these are given below.

  • Byte level splitting
  • In this type, all the sensitive data is converted into bytes [ 113 ]. Then, these bytes are randomly shuffled with each other. After that, all the bytes are recombined. Fixed length fragments are made, and then, these fragments are stored on a different cloud.
  • Privacy level splitting
  • In this mechanism, the user chose the privacy level of each file [ 114 ] that is to be stored on a cloud database. Hence, a privacy level is attached with the file that is to be stored on the cloud. Using this privacy level, the user can decide that the higher privacy level files should be stored on the trusted cloud.
  • Byte level splitting with replication
  • Byte-level data splitting is combined with data replication to improve both performance and security. The author of the paper [ 115 ] proposed an algorithm to store the data fragments on different clouds, so that they are at a certain distance and by doing this; we can avoid confabulation attacks where the intruder can aggregate the split fragments.
  • Byte level splitting with encryption
  • Firstly, byte-level data splitting [ 116 , 117 ] is proposed. In this scheme, every fragment of data is encrypted to enhance the security of sensitive data. In this mechanism, the data is split into bytes, and these bytes are randomly shuffled and finally recombined. This type of data splitting is suitable for binary or multimedia files that are not processed through the cloud.
  • Another problem is the length of a fragment in which we can say that the data cannot be reidentified or the identity of a person cannot be revealed. If the length is too short, then the probability of disclosure increases, and if the length is too long, then it is difficult to handle these fragments. Hence, it should have a certain length so that we can also protect the identity of a person.
  • There is another type of data splitting in which we split data into attributes. The attribute level splitting is performed in two ways: one is horizontal splitting and the second is vertical splitting. These types of splitting are mostly done on structural databases, and they provide strong privacy.
  • Vertical splitting
  • In vertical data splitting [ 118 , 119 ], we divide quasi-identifiers or quasi-attributes in such a way that all the risky attributes are divided into different fragments to secure the reidentification. Some of the sensitive fragments required encryption on it. Hence, we can encrypt these fragments by applying some encryption algorithms or by applying some other privacy methods to increase the security level.

The summary of the data-splitting techniques.

A solution for sensitive data splitting without performing encryption on fragments is proposed [ 120 ]. This mechanism is suitable for data on which we want to perform some computation, because on encrypted data, we cannot perform computation directly. Another technique has been proposed [ 121 ], which demonstrates the redaction and sanitization of a document that identifies all sensitive attributes and protects the data in most documents.

The schemes that use vertical splitting to protect data are faster than other splitting techniques because data fragments consist of a single attribute or multiple attributes. It does not involve data masking or encryption. Hence, the computation is easy. There is another type of encryption in which we do not encrypt and decrypt every time to perform computation. It is called homomorphic encryption. In this case, all data modification is done on encrypted data, and actual data is not changed, however, the final result is preserved [ 122 ].

(3) Steganography . Steganography is the practice of concealing a message within another message or a physical object. In computing contexts, video, audio, image, message, or computer file is concealed within another image, message, or file. The steganography flow diagram is depicted in Figure 8 . There are two main types of steganography, namely (1) linguistic steganography and (2) technical steganography. These techniques are given as follows:

  • (1) Linguistic Steganography
  • It uses images and symbols alone to cover the data. There are two types of Semagrams [ 123 ]. The first is a visual Semagram. In this type, we can visualize the massage. The second type is a text Semagram. In this type, we change the font, color, or symbols of the text message.
  • In this case, we hide the real message from the intruder by installing the original massage in an authorized carrier [ 124 ]. Open code technique is further divided into two types: one is jargon code, and the second is covered ciphers.
  • (2) Technical Steganography
  • Text steganography
  • In this type, we change some textual characteristics of text, such as the font, color, or symbols of the text message [ 127 ]. Three coding techniques are used to change these textual features, which are as follows: (1) line-shift coding, (2) word-shift coding, and (3) feature coding.
  • Image steganography
  • It is the most popular type of steganography. Image steganography refers to the process of hiding sensitive data inside an image file [ 128 ]. The transformed image is expected to look very similar to the original image because the visible features of the stego image remain the same. The image steganography is divided into three parts, namely (1) least significant bits coding, (2) masking and filtering, and (3) transformations.
  • Audio steganography
  • Audio steganography is a technique that is used to transmit secret data by modifying a digitalized audio signal in an imperceptible manner [ 129 ]. Following types of audio steganography are given: (1) least significant bits coding, (2) phase coding, (3) spread spectrum, and (4) echo hiding.
  • Video steganography
  • In video steganography, both image and audio steganography are used [ 130 ]. A video consists of many frames. Hence, video steganography hides a large amount of data in carrier images. In this type of steganography, we select the specific frame in which we want to hide the sensitive data.
  • (ii) Methods
  • Frequency Domain
  • A frequency-domain steganography technique is used for hiding a large amount of data with no loss of secret message, good invisibility, and high security [ 131 ]. In the frequency domain, we change the magnitude of all of the DCT coefficients of the cover image. There are two types of frequency domain: (1) discrete cosine transformation and (2) discrete wavelet transformation.
  • Spatial Domain
  • The spatial domain is based on the physical location of pixels in an image [ 132 ]. A spatial domain technique gives the idea of pixel regulation, which minimizes the progressions of a stego image created from the spread image. Some methods of the spatial domain are given as follows: (1) least significant bit, (2) pixel value differencing, (3) pixel indicator, (4) gray level modification, and (5) quantized indexed modulation.

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.008.jpg

Steganography flow diagram.

The summary of the steganographic techniques is given in Table 9 .

The summary of the steganographic techniques.

4.1.2. Cryptographic Techniques

Cryptography is the most important and most widely used technique for security purposes. In cryptography, the plain text is converted into ciphertext using a key and some encryption algorithms. Cryptographic techniques are the most secure techniques among all the other security techniques. Hence, these cryptography techniques are widely used in data storage security over the cloud. The present day's cryptography techniques are more realistic. We can achieve different objectives by applying these cryptographic techniques, for example, data confidentiality and data integrity. Because of an increase in the number of data breaches in the last few years, some cloud service provider companies are shifting toward cryptographic techniques to achieve more security. The most commonly used cryptographic technique is AES [ 133 ]. Key management is an important issue in cryptographic techniques because if the key is hacked by an intruder, then all the data will be hacked or stolen by this intruder. Hence, key protection or key management is a very important issue. Therefore, it is mostly the responsibility of CSP to manage the key and also provide the protection of key. Cryptographic techniques also protect the user from an untrusted CSP because sometimes the CSP outsources sensitive data without taking the permission of users, and it is an illegal activity. Hence, to avoid these things and protect our sensitive data from untrusted CSPs, we use cryptographic techniques, and it is the best option for users. However, there are some difficulties the user has to face while using cryptographic techniques, i.e., if a user wants to update a small amount of data, the user needs to decrypt the data and then perform this minor update. Hence, this work is very costly. Over time, implementing cryptographic techniques gives us a higher level of security, however, we compromise on performance or speed. It all depends on the user, the standard, the performance, or the high level of security the user wants to achieve. In this paper, we are focusing on the four main functionalities that are required or needed on cloud computing when using cryptographic techniques. Figure 9 shows the flow diagram of encryption.

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.009.jpg

Encryption flow diagram.

Some of the main functionalities of cryptographic functions are given below.

  • Search on encrypted data
  • If a user wants to retrieve their data stored in a cloud database, they generate a query and run the query on a local proxy server and search for the data they want. Searching for encrypted data is a very important part of cryptography because every user who stores their sensitive data in a cloud database wants to retrieve it, and it is done by searching their sensitive data through queries. Therefore, the procedure of retrieving their data is very difficult.
  • Storage control
  • Sometimes the user wants to store data in a desired location or trusted database. Hence, the user must have full control over the storage of data.
  • Access control
  • It is a very important control and is referred to as data access restriction. Sometimes, the user does not want to share a private file publicly. Hence, access control is an important functionality.
  • Computation on data
  • Data computation is the main functionality of cloud computing. Sometimes, the user wants to perform some computation on data that are stored on a cloud database. For example, if a user wants to perform computation on encrypted data that is stored on cloud databases, then there are two ways. One is that the user, firstly, decrypts the entire data, performs computation on the data, and finally, the user encrypts the entire data and stores on the cloud database. This process is very expensive in terms of computation.

Some of the cryptographic techniques are as follows:

(1) Homomorphic Encryption . Homomorphic encryption is a form of encryption that permits users to perform computations on encrypted data without decrypting it. These resulting computations are left in an encrypted form, which, when decrypted, result in an identical output to that produced had the operations been performed on the unencrypted data. There are some types of homomorphic encryption that are described below.

  • Partial Homomorphic Encryption
  • In partial homomorphic encryption, only one arithmetic function addition or multiplication is performed at one time. If the resultant ciphertext is the addition of the plain text, then it is called an additive homomorphic scheme, and if the resultant ciphertext is the multiplication of the plaintext, then it is called the multiplicative homomorphic scheme. Two multiplicative homomorphic schemes are given as in [ 134 , 135 ]. There is one additive homomorphic scheme that is called Paillier [ 136 ].
  • Somewhat Homomorphic Encryption
  • This technique allows the user to perform the multiplication and subtraction mathematical operations. However, this scheme allows a limited number of arithmetic operations, because if it allows a large number of arithmetic operations, then it produces noise. This noise changes the structure of the original data. Hence, limited numerical math operations are allowed. There is a somewhat homomorphic encryption scheme that is presented by the authors of the papers [ 137 , 138 ]. In this scheme, the time of encryption and decryption is increased when multiplication operations are increased. To avoid this increase in time, we allow only a limited number of mathematical operations.
  • Fully Homomorphic Encryption
  • This technique allows a large number of arithmetic operations, namely multiplication and subtraction. Multiplication and addition in this technique are performed in the form of XOR and AND gates [ 139 ]. Completely homomorphic encryption techniques require a higher computation time to encrypt and decrypt data. Therefore, this technique is not applicable in real-life applications for implementation. This technique uses a bootstrapping algorithm when a large number of multiplication operations is performed on data and also for the decryption of the data it is used. Homomorphic encryption, on the other hand, represents the trade-off between operations and speed performance. Only a limited number of arithmetic operations are allowed if someone wants low computation, and a large number of arithmetic operations are allowed if someone wants high security. It depends on the needs of the user.

(2) Searchable Encryption . A searchable encryption technique is proposed by the author of the paper [ 140 ]. In this technique, before storing data on a cloud database, encryption is performed, and after that, it is stored on the cloud. The advantage of this technique is that when we search for some data over the cloud database, this technique provides a secure search over the cloud database.

  • Searchable Asymmetric Encryption
  • Over the past two decades, we have focused on searchable encryption. Much of the work is related to the multiwriter and single-reader cases. Searchable encryption is also called public keyword search encryption along with keyword search (PEKS) [ 141 ].
  • Searchable Symmetric Encryption
  • Symmetric-key algorithms use the same key for massage encryption and ciphertext decryption. The keys can be the same, or there can be a simple transformation to go between the two keys. Verifiable searchable symmetric encryption, as a key cloud security technique, allows users to retrieve encrypted data from the cloud with keywords and verify the accuracy of the returned results. Another scheme is proposed for keyword search over dynamic encrypted cloud data with a symmetric-key-based verification scheme [ 142 ].

(3) Encryption . In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information.

  • Symmetric Key Encryption
  • Only one key is used in symmetric encryption to encrypt and decrypt the message. Two parties that communicate through symmetric encryption should exchange the key so that it can be used in the decryption process. This method of encryption differs from asymmetric encryption, where a pair of keys is used to encrypt and decrypt messages. A secure transmission method of network communication data based on symmetric key encryption algorithm is proposed in [ 143 ].
  • Public Key Encryption
  • The public-key encryption scheme is proposed by the author of the paper [ 144 ]. In this scheme, a public key pair is created by the receiver. This public key pair consists of two keys. One is called a public key, which is known publicly to everyone, and the second is the private key, which is kept a secret. Hence, in this scheme, the sender performs encryption on the data using the public key of the receiver and then sends this encrypted data to the receiver. After receiving this encrypted data, the receiver can decrypt this data using the private key. Hence, in this way, we can perform secure communication between two parties.
  • Identity-Based Encryption
  • Identity-based encryption is proposed by the author of the paper [ 145 ]. In this technique, a set of users is registered on the database and a unique identity is assigned to all the registered users by an admin that controls this scheme. The identity of the users can be represented by their name or their e-mail address. Just like in a public-key encryption, there is a public key pair that consists of one public key, which is the identity of the user, and one private key, which is a secret key. Just like in public-key encryption, the receiver cannot generate their public key in identity-based encryption. The identity cannot be generated by the user. There is a central authority that generates and manage the user's identity. The identity-based encryption is improved by the author [ 146 ]. The main advantage of identity-based encryption is that anyone can generate the public key of a given identity with the help of the central main authority.
  • Attribute-Based Encryption
  • The authors of the papers [ 147 , 148 ] propose a technique called attribute-based encryption. Similar to identity-based encryption, attribute-based encryption also depends on the central main authority. The central main authority generates the private key and distributes it to all the registered users. It can be encrypting the messages, however, if it does not have this designation, then it cannot be generating the messages. Attribute-based encryption is used when the number of registered users is very large. Then, the attribute-based encryption is useful. The attribute-based encryption consists of two schemes, which are key policy and ciphertext policy.
  • Functional Encryption
  • A functional encryption technique [ 149 , 150 ] consists of identity-based encryption, attribute-based encryption, and public-key encryption. All the functionalities of these three techniques combinedly make function encryption. In this technique, all the private keys are generated by the central main authority, which is associated with a specific function. Functional encryption is a very powerful encryption technique that holds all the functionalities of three encryption techniques. A functional encryption technique is used in many applications.

(4) Signcryption . Cryptography is publicly open-source, and it functions simultaneously as a digital signature and cipher. Cryptography and digital signatures are two basic encryption tools that can ensure confidentiality, integrity, and immutability. In [ 151 ], a new scheme called signature, encryption and encryption is proposed, based on effectively verifiable credentials. The system not only performs encryption and encryption but also provides an encryption or signature form only when needed [ 152 ]. The paper proposes lightweight certificate-based encryption using a proxy cipher scheme (CSS) for smart devices connected to an IoT network to reduce computing and communications costs. To ensure the security and efficiency of the proposed CBSS project, we used a cipher system encoded with 80 bit subparameters. Reference [ 153 ] proposes an input control scheme for the IoT environment using a cryptographic scheme corresponding to the efficiency and robustness of the UK security system. The proposed scheme shows that besides security services, such as protection against attacks, confidentiality, integrity, nonblocking, nondisclosure, and confidentiality, accounting and communication costs are low compared to the current scheme. Document [ 154 ] gives the informal and formal security proof of the proposed scheme. Automated Validation of Internet Security Protocols and Applications (AVISPA) tool is used for formal security analysis, which confirms that the proposed CB-PS scheme can potentially be implemented for resource-constrained low-computing electronic devices in E-prescription systems. The proposed scheme [ 155 ] introduced a new concept that does not require a reliable channel. The main production center sends a part of the private key to the public consumers. The summary of the cryptographic schemes is given in Table 10 .

The summary of the cryptographic techniques.

All data storage protection on cloud computing is discussed in session 3. There are a lot of data protection techniques, however, all these techniques are only divided into three main categories, namely (i) data splitting, (ii) data anonymization, and (iii) cryptography. From different points views, we discuss all these techniques, e.g., overhead on the local proxy, computation cost, search on encrypted data, data accuracy all these techniques retained, and data protection level all these techniques have, and all the masked data techniques have the functionalities. These are some different views, and by considering them, we can analyze all the data protection techniques. Cryptography provides high-level security but limited cloud functionalities and a high cost of performing computation on cloud data. Data splitting provide low computation cost but a low level of security. Data anonymization is of two types: one is perturbative masking, and the second is nonperturbative masking. Hence, in perturbative masking, data is altered with dummy data. Hence, security is high, however, we cannot perform some functionalities.

4.2. RQ2: What are the Demographic Characteristics of the Relevant Studies?

We answer this question by considering the four following aspects: (i) publication trend, (ii) publication venues (proceeding and journals), (iii) number of citations, and (iv) author information.

4.2.1. Publication Trend

From 2010 to 2021, we found 52 papers that were of top ranked journals and conferences. From 2010 to 2017, there is linear work in cloud computing, however, after 2017, a lot of work is done in cloud computing data security. From 2018 to 2021, 37 papers are published. After 2018, the trend about data security in cloud computing increased very vastly. Most of the work is done in 2021. High-ranked studies are published in 2021. Figure 10 shows all trends of all the publications from 2010. Most of the articles are published in journals venue, and the highest number of papers have been published in IEEE Access journal. 6 papers were published in this journal.

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.010.jpg

Number of publications per year.

4.2.2. Publication Venues

There are different types of publication venues, and some of them are book articles, conference proceedings, journals, workshop proceedings, and symposium proceedings. Hence, in our SLR, the number of publications in a different venue is given in Figure 11 . We have a total of 52 papers after applying the inclusion and exclusion criteria in Section 2 .

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.011.jpg

Publication venues.

Out of 52 papers, 0 papers are published in book chapters. 1 paper is published in workshop proceedings. 0 papers are published in symposium proceedings. 43 papers are published in journals. 8 papers are published in conference proceedings. There are some most active journals in cloud data security, which are enlisted in Table 11 .

Top 5 most active journals.

The most active journal is the IEEE Access. In this journal, 6 papers are published. Journal of Cryptology is the second most active journal in the field of data storage, security, and privacy in cloud computing. In this journal, 3 papers are published. In the third journal, i.e., in the Journal of Information Fusion, 3 papers are published. The fourth journal is the Information Science. In this journal, 2 papers are published. The fifth journal is IEEE Transactions on Knowledge and Data Engineering, and in this journal, 2 papers are published. Most active conferences are given in Table 12 .

Top 5 most active conferences.

4.2.3. Number of Citations

The number of citations of a paper also tells the quality of the paper. The more the number of citations, the higher the quality, and the fewer the number of citations of the paper, the lower the paper quality. Table 13 shows the most influential authors, and Figure 12 shows the number of citations of all the papers that we have used in this SLR. Few papers have citations of more than 100. Hence, it shows that papers have a very high quality, and hence, the citation of those papers is very high. These papers are [ 105 , 118 , 124 , 139 ].

An external file that holds a picture, illustration, etc.
Object name is CIN2022-8303504.012.jpg

Number of citations of the papers.

Top 10 most influential authors in data protection in cloud computing.

4.2.4. Author Information

Some authors are most active in their publication. To identify these authors, we enlist the names of the top 10 authors that are more active in the field of data protection and privacy in cloud computing. Hence, we enlist the names of the top 10 authors and also their numbers of publications in Table 13 .

4.3. RQ3: Which Data Protection Technique Provides More Data Protection among all the Techniques?

We answer this question by considering the following four aspects: (i) publication trend, (ii) publication venues (proceeding and journals), (iii) number of citations, and (iv) author information.

4.3.1. Comparison of Data Protection Techniques

In this section, we compare all the data protection techniques that are discussed in this SLR, and finally, we review which technique is better and provides more protection among all these data protection techniques. We compare these techniques based on different functionalities, which are given as (i) local proxy overhead, (ii) data accuracy retain, (iii) level of data protection, (iv) transparency, and (v) operation supported, and finally, we discuss RQ2. Table 14 depicts a comparison of all the data protection techniques and provides a brief comparison of all the data protection techniques discussed in this SLR. Now, we discuss all these five functionalities one by one in more detail.

  • The overhead on the local proxy for encryption is very high because the data is encrypted. If the user wants to update the data, firstly, the user decrypts the data and then updates the data. After that, the user encrypts the data again. Hence, this operation requires a lot of time, and all this work is performed by the local proxy. It is the reason the overhead on the local proxy for encryption is very high for encryption.
  • Data Splitting
  • The overhead on a local proxy for data splitting is very low. The local proxy overhead remains constant while splitting data into fragments.
  • Anonymization
  • The overhead on a local proxy for anonymization is average because most of the anonymization methods require quasilinear computation in the number of records to generate the anonymized data set. Whenever the anonymized data is generated and stored in the cloud database, then there is no overhead on the local proxy.
  • Homomorphic Encryption
  • The overhead on local proxies for homomorphic encryption is very high because homomorphic encryption involves a large number of mathematical operations. Therefore, there is a lot of overhead on local proxies for homomorphic encryption.
  • Steganography
  • The overhead on the local proxy for steganography is not too much as the data is concealed inside the cover for secure communication. However, based on the complexity of the operation in the transformed domain technique, the local proxy overhead is more than the spatial domain technique.
  • Signcryption
  • The overhead on the local proxy for signcryption is high compared to the simple encryption because in signcryption, hashing and encryption are performed in a single logical step. Because of an extra operation in signcryption, the overhead on the local proxy is higher than the simple encryption.
  • The data accuracy level for encryption is very high because data is encrypted by applying some algorithms. The sensitive data is encrypted by the sender, and this data is decrypted by the receiver using a key. This data cannot be read by anyone who does not have the secret key. Therefore, data accuracy is very high for encryption.
  • The data accuracy level for data splitting is average because data-splitting data is present in the form of fragments. Therefore, CSP can easily access the fragments of data. Both encryption and data splitting are irreversible methods. Hence, we can retrieve the original data easily.
  • The data accuracy level for data anonymization is very low because anonymization is not irreversible. In anonymization, data is replaced with dummy data, and it cannot be retrieved back. Therefore, anonymization has a very low level of data accuracy.
  • The data accuracy level for homomorphic encryption is very high because data is encrypted by applying some algorithms.
  • The data accuracy level for steganography is very low as compared to the other cryptographic techniques because data is embedded inside the cover of another medium. Any change in the cover during transmission results in the change of the concealed data. Therefore, it is hard to ensure a high accuracy level in steganography. The stego image contains the secrete data that is transmitted over the communication channel. Data concealed by the sender is extracted from the cover by the receiver. Therefore, the concealment of data results in accurate data transmission.
  • The data accuracy level for signcryption is also very high, because in signcryption, confidentiality and authentication are achieved. Therefore, we can also verify the identity of the sender.
  • The level of data protection is very high for encryption techniques, because in encryption, data is changed into ciphertext, which cannot be understood. Therefore, we can say that the identification of data is impossible without decryption using a secret key because encryption is a one-way function that is easy to execute in one direction, however, it is impossible to execute in the opposite direction.
  • The level of data protection for data splitting is less high as compared to cryptographic techniques because data is split into different fragments, and these fragments contain original forms of data. Hence, if an intruder hacks or steal these fragments, then the untired data can be easily read. Hence, the data protection level is not high as compared to encrypted methods.
  • The level of data protection for data anonymization is less high as compared to cryptographic techniques, because in anonymization techniques, quasi-identifiers are protected if the quasi-identifiers are not protected strongly. Then, there is a change in the reidentification of person-sensitive data.
  • The level of data protection is very high for homomorphic encryption techniques because encryption data is changed into ciphertext, which cannot be understood.
  • The data protection level for steganography is medium because data is embedded inside the cover of another medium. The stego image contains the secrete data that is transmitted over the communication channel. Data concealed by the sender is extracted from the cover by the receiver. Therefore, the concealment of data results in secure data transmission.
  • The data protection level for signcryption is also very high, because in signcryption, both confidentiality and authentication are achieved. Therefore, we can also verify the identity of the sender.
  • There is no transparency for the encrypted data, because in encryption, there is a need for key management. Hence, the local proxy needs to keep the records of all the keys and manage all these keys. Therefore, there is no transparency for the encrypted data.
  • There is no transparency for the data-splitting mechanism, because in the data-splitting mechanism, data is split into different fragments, and the local proxy stores these fragments in different locations. Hence, there is a need to keep the record of the location of all the fragments that are stored on different locations.
  • Anonymization is fully transparent, because in anonymization, there is no need to keep the record of data storage by the local proxy. In anonymization, data is statistically similar to the original data. Hence, CSP also performs computation and some analysis on the anonymized data.
  • There is no transparency for the homomorphically encrypted data, because in encryption, there is a need for key management. Hence, the local proxy needs to keep the records of all the keys.
  • In steganography, as compared to other data protection techniques, the main aim is to transmit data without letting the attacker know about the data transmission as it is concealed inside the cover of another medium. The data transmission in steganography is fully transparent. No key management is required, and there is no need to keep track of data storage.
  • There is no transparency for the signcrypted data, because in signcryption, there is a need for key management. Hence, the local proxy needs to keep the records of all the keys and also manage all these keys.
  • Only the data storage operation is supported on the encrypted data, because if the user wants to update some encrypted data that are stored on a cloud database, firstly, the user needs to decrypt this data, and then the user performs an update on this data. We cannot perform any modification operation on encrypted data.
  • All the operations cloud be performed on data splitting, because in data splitting, the data is present in their original structure. Hence, we can perform data storage, search, data update, and also data computation.
  • In anonymization, there are two types of data anonymization: one is data masking, and the second is data nonmasking. If data is nonmasked, then we can perform data storage and search on this data. Otherwise, we can only perform data storage.
  • Only the data storage operation is supported on the encrypted data, because if the user wants to update some encrypted data that are stored on the cloud database, firstly, the user needs to decrypt this data, and then the user performs some updates on this data.
  • A stego image only supports data storage operations because if the user wants to update the data hidden in a stego image, the user, firstly, retrieves that data from the stego image, and the user can perform any modification on this data.
  • Only the data storage operation is supported on the signcrypted data, because if the user wants to update signcrypted data that are stored on the cloud database, firstly, the user needs to unsign this data, and then the user can perform any update on this data.

Comparison of data protection techniques.

5. Conclusion and Future Work

5.1. rq4: what are the primary findings, research challenges, and direction for future work in the field of data privacy in cloud computing, 5.1.1. conclusion and research challenges.

In this SLR, we have presented all the data privacy techniques related to data storage on cloud computing systematically, and we also present a comparison among all the protection techniques concerning the five finalities, which are the (i) local proxy overhead, (ii) data accuracy retains, (iii) level of data protection, (iv) transparency, and (v) operation supported. There are some research gaps we found in all these techniques of data splitting, anonymization, steganography, encryption, homomorphic encryption, and signcryption.

  • There is a very strong need to develop some ad hoc protocols for the communication of data splitting fragments that are stored on different CSPs, and also, there is a strong need to develop some protocol for the communication between different CSPs. Noncryptographic techniques are faster on different CSPs but do not provide enough security. Hence, we can improve security by developing some methods for data-splitting techniques.
  • Anonymity techniques work very effectively on a small amount of data but not for big data. Hence, there is a search gap in which we can develop some anonymity techniques to achieve more efficient performance. Therefore, some anonymous schemes need to be developed, which provide stronger protection to the quasi-identifier. Current anonymity techniques are very immature.
  • One of the limitations of steganography is that one can only use it to defend against a third party who does not know steganography. If the third party knows steganography, it can extract the data in the same way that the recipient extracts it. Therefore, we always use encryption with steganography. Therefore, there is a need to develop such steganography techniques that can protect sensitive data from third parties.
  • There is a need to develop some cryptographic techniques that can take less time than the existing cryptographic techniques to perform search and computation operation on encrypted data. Cryptographic techniques provide high security but low computational utility. Therefore, it is a search gap to develop some techniques that provide both high security with more efficiency.
  • The complexity of homomorphic encryption and decryption is far greater than that of normal encryption and decryption, and it is not applicable to many applications, such as healthcare and time-sensitive applications. Therefore, there is an urgent need to develop such homomorphic encryption schemes that have low complexity and computation cost.
  • Signcryption is used to verify and authenticate users. We can obtain confidentiality and authentication using signcryption, however, the main limitation of signcryption is that the calculation costs of the encryption algorithm used in signcryption are very high. Therefore, there is a need to develop such signcryption schemes that use such encryption algorithms, which have low computation cost.

Acknowledgments

This research was financially supported by The Analytical Center for the Government of the Russian Federation (Agreement nos. 70-2021- 00143 dd. 01.11.2021, IGK 000000D730321P5Q0002).

Data Availability

Conflicts of interest.

The authors declare that there are no conflicts of interest regarding the publication of this paper.

  • IEEE CS Standards
  • Career Center
  • Subscribe to Newsletter
  • IEEE Standards

cloud computing research paper topics

  • For Industry Professionals
  • For Students
  • Launch a New Career
  • Membership FAQ
  • Membership FAQs
  • Membership Grades
  • Special Circumstances
  • Discounts & Payments
  • Distinguished Contributor Recognition
  • Grant Programs
  • Find a Local Chapter
  • Find a Distinguished Visitor
  • Find a Speaker on Early Career Topics
  • Technical Communities
  • Collabratec (Discussion Forum)
  • Start a Chapter
  • My Subscriptions
  • My Referrals
  • Computer Magazine
  • ComputingEdge Magazine
  • Let us help make your event a success. EXPLORE PLANNING SERVICES
  • Events Calendar
  • Calls for Papers
  • Conference Proceedings
  • Conference Highlights
  • Top 2024 Conferences
  • Conference Sponsorship Options
  • Conference Planning Services
  • Conference Organizer Resources
  • Virtual Conference Guide
  • Get a Quote
  • CPS Dashboard
  • CPS Author FAQ
  • CPS Organizer FAQ
  • Find the latest in advanced computing research. VISIT THE DIGITAL LIBRARY
  • Open Access
  • Tech News Blog
  • Author Guidelines
  • Reviewer Information
  • Guest Editor Information
  • Editor Information
  • Editor-in-Chief Information
  • Volunteer Opportunities
  • Video Library
  • Member Benefits
  • Institutional Library Subscriptions
  • Advertising and Sponsorship
  • Code of Ethics
  • Educational Webinars
  • Online Education
  • Certifications
  • Industry Webinars & Whitepapers
  • Research Reports
  • Bodies of Knowledge
  • CS for Industry Professionals
  • Resource Library
  • Newsletters
  • Women in Computing
  • Digital Library Access
  • Organize a Conference
  • Run a Publication
  • Become a Distinguished Speaker
  • Participate in Standards Activities
  • Peer Review Content
  • Author Resources
  • Publish Open Access
  • Society Leadership
  • Boards & Committees
  • Special Technical Communities
  • Local Chapters
  • Governance Resources
  • Conference Publishing Services
  • Chapter Resources
  • About the Board of Governors
  • Board of Governors Members
  • Diversity & Inclusion
  • Open Volunteer Opportunities
  • Award Recipients
  • Student Scholarships & Awards
  • Nominate an Election Candidate
  • Nominate a Colleague
  • Corporate Partnerships
  • Conference Sponsorships & Exhibits
  • Advertising
  • Recruitment
  • Publications
  • Education & Career

8 Cloud Security Trends to Watch Out For in 2022

Cloud Security trends in 2022

But with major advantages follow some critical security threats. In the case of mismanagement, organizations can suffer from data breaches and leakage.

To avoid the same, let’s take a look at the most talked-about cloud security trends in recent times.

Want more tech news? Subscribe to ComputingEdge Newsletter Today!

Cloud Security Posture Management (CSPM)

Research has shown that misconfiguration, lack of visibility, identity, and unauthorized access comes under the highest-ranked cloud threats. Cloud Security Posture

cloud computing research paper topics

Management or CSPM looks at the configuration of your cloud platform accounts and identifies any possible misconfiguration leading to data breaches and leakage.

The cloud environment is dramatically expanding, and the identification of misconfiguration becomes increasingly difficult. Gartner identifies misconfiguration as the core reason behind data breaches. Reduction or complete elimination would ensure better functioning.

CSPM helps businesses develop trust with their users in terms of safety and security. It automates security and provides compliance assurance in the cloud.

Here is how CSPM proves to be effective for businesses:

  • Easy detection and remediation of cloud misconfigurations
  • Inventory of best practices of varied cloud configuration
  • Keep track of current configuration status
  • Efficiently work with SaaS and PaaS platforms even in a multi-cloud environment
  • Keeps a proper check on storage bucked, encryptions, and account permissions

Ensuring customer data protection before it reaches the cloud

Did you know that the top cloud security concerns in 2021 are data loss and leakage (69%) ?

cloud computing research paper topics

Cloud computing has numerous benefits but security is always at stake. The data is out of the direct control of the owner, thereby making security a top concern. Increasing data breaches calls out businesses to improve prior data protection.

Customers would hardly be interested in associating with companies that couldn’t guarantee data safety. Organizations must take all relevant steps to create new standards, rules, and regulations to protect crucial customer data.

Businesses are highly invested in encrypting data before sending it to the cloud. It’s not too late to introduce Bring Your Own Key (BYOK) encryption for the overall benefit of the organization and customers.

The BYOK encryption system encrypts the organization’s data, and the access to the information lies with the owner. But businesses need to be cautious while introducing this system as some plans upload the keys to the cloud security platform. This again makes the information vulnerable and prone to leakage.

Strictly follow the zero trust model

cloud computing research paper topics

The zero trust model offers complete security by assuring no one gets access to data until their identity is authenticated. It ensures the users get access only to the information that they need. No piece of extra information is offered in any scenario.

At every step, the user needs to authenticate their identity. This model gives the control back to the organization and increases accountability. By providing limited access, the possibility of data breaches reduces.

Adapting to this model becomes necessary with an increased number of insider attacks. Employees should never gain access to information that isn’t relevant to their area.

SDLC and DevSecOps within the cloud

The increased popularity of DevOps has helped companies release effective software programs with negligible risk.

cloud computing research paper topics

Now companies rely highly on DevSecOps, a new model that takes complete accountability for the security implementation. The implemented security and accountability to everyone ensures companies don’t suffer any trouble.

Some significant benefits of implementing it are reducing vulnerabilities present on your code, IaC technologies, ways to exploit your application, and downtime.

The overall security on your SDLC improves after integrating DevSecOps into your current DevOps pipeline. These security measures are crucial to ensure every phase of the SDLC pipeline goes smoothly.

Lack of consensus

The government has been working on implementing rules, regulations, and policies to ensure adequate cloud security. But countries often deal with issues differently, causing unavoidable security threats.

Businesses suffer trouble caused by diversity in tackling major issues and varying regulations around the world. Users are heavily invested in ensuring they receive proper security. The increased cybercrimes require businesses to invest their time and attention to strict adherence to clear regulations.

A drastic increase in cybercrimes

Cloud computing provides access to information at all times. But the users who are associated with the resources are responsible for the risk that follows. The exposure to cybercrime is drastic in the case of cloud computing due to decreased visibility and control. Even the individuals are least aware of the associated threats.

cloud computing research paper topics

The three types of data in cloud computing exposed to the risk of cybercrime are:

  • Data processed in the cloud
  • The idle or resting data
  • The data in transit

Due to the increased risk of cybercrime, companies cannot function without end-to-end encryption. Despite being aware of the severe threats, only one in five companies assess their cloud security posture from time to time. Make sure you don’t lag in this area to save your business from heavy losses.

The need for centralized platforms

Streamlining activities is crucial for businesses using more than one cloud provider. A centralized platform is the need of the hour to implement relevant measures and security controls.

To get rid of these issues, companies rely on a cloud security access broker (CSAB). This acts as a connector between cloud applications and users. CSAB leads to the smooth functioning and offers better visibility. Continuous scanning of the concerned activities followed by implementing major procedures and rules makes it a viable choice.

Increased investment in intelligent security

The continuous advancement in Artificial Intelligence and Machine learning requires businesses to rethink their security techniques. These technical advancements offer complete protection of the data, thereby saving businesses from severe cyber thefts. It’s crucial as undetected thefts could cause severe damages that take time to recover.

Businesses relying on them develop better customer trust and end up expanding their customer base. These technologies are slowly taking over various industries, including insurance, baking, etc.

Furthermore, a shortage of a cybersecurity workforce increases the demand for artificial intelligence and machine learning.

We know that every passing day brings forward a new sophisticated cyber threat to businesses. After analyzing the above trends, companies must prepare for the worst.

Taking strong security measures is crucial to save their integrity and develop a lasting relationship with the customers. Keep working and monitoring security considerations consistently to protect your business from severe threats.

About the Writer

Gaurav Belani is a senior SEO and content marketing analyst at Growfusely , a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves writing about AI, machine learning, data science, cloud security, and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter at @belanigaurav .

Recommended by IEEE Computer Society

Cloud Computing

Tools and Techniques to master the management of APIs in Production

Cloud Computing

Configuration as Code: A Practical Guide

Cloud Computing

Top 10 Microservices Design Patterns and Their Pros and Cons

Cloud Computing

Entry Level Accessibility Barriers - A Thought Provoking Challenge For Immersive Technologies

Cloud Computing

Sponsoring a Conference in 2024: Is It Worth It?

Cloud Computing

Teaching Like an Entrepreneur

Cloud Computing

Decoding Emotions: The AI Way – A New Era in Human-Technology Symbiosis

Cloud Computing

8 Strategies to Innovate and Achieve Diversity in Tech

Top 10 Cloud Computing Research Topics in 2022

Table of contents.

Cloud computing as a technology may have been in the cards for a long time, but its widespread application and popularity have increased in recent times. Moreover, at its current size, this industry is valued at approximately $850 billion. However, this number will not hold on for long as it is likely to go up in the coming years.

Nonetheless, if you are interested in this field and willing to learn more about it, here are 10 research topics on cloud computing that can help you start.

Top 10 Research Topics for Cloud Computing in 2022

Here are ten research topics for cloud computing to look forward to in 2022 –

  • Cloud analytics

Cloud analytics is a cloud-related analytical tool that helps to analyze data and reduce data storage costs. It is used for research in genomics, exploring oil and gas reserves, business intelligence, Internet of Things (IoT) and cybersecurity. It unleashes the power of data to improve the organizational performance of a company.

  • Load balancing

The workload distribution for soft computing over the server is known as load balancing. It helps in the distribution of resources over various local servers, networks and industrial servers for workload management and requirement of applications, and it also helps to keep the system stable and boost its efficiency so that there is no malfunctioning or failure of any type.

  • Green cloud computing

The consumption of energy consumption is increasing in data centres due to an increase in demand for cloud services. Green cloud computing will help to minimise the consumption of energy and reduce e-waste generation. Management of power, virtualisation of the system along with the computation of the system sustainability, and recycling of environmental resources will be handled by green cloud computing systems.

  • Edge computing

Processing of data at the edge of a network instead of a data warehouse is called edge computing. Some innovations are possible only due to cloud computing, which amplifies a network edge's capabilities and helps expand the domain of wireless connections.

  • Cloud cryptography

Cloud cryptography adds strong protection layers which help in giving security to the cloud storage infrastructure. It helps to prevent the breach of data by saving sensitive data containing any information transmitted to third parties. Cloud cryptography systems convert plain text into an unreadable form of code. It is helped by computers and algorithms that restrict the preview of data during its delivery.

  • Cloud scalability

Cloud scalability is the capability of scaling the IT resources over the cloud up or down as per the computing changes requirements. A system can be scaled horizontally, diagonally and vertically. Scalability can be applied to Memory and Disk I/O, CPU and Network I/O.

  • Mobile cloud computing

These refer to the cloud computing systems that are typically for the Mobile computing system, which allows different OS, computing tasks, and data storage. Mobile cloud has many advantages. It increases the speed and flexibility of the system. It enables resource sharing across multiple systems. Mobile Cloud Computing helps in the integration of data.

Big data is the technology that helps handle large network-based systems with copious amounts from different sources. All unstructured data is connected to structured data and organised in a particular way so that handling it becomes hassle-free. Moreover, it becomes easy to manage them from one dashboard. A lot of innovation is going into this field.

  • Cloud deployment model

Nowadays, a lot of apps are hosted and stored on cloud systems. So for each type of application, there needs to be a model which is based on scalability, access, scalability, ownership, cloud nature and purpose of the deployment. A cloud deployment model helps to find out which cloud environment determines the infrastructure of the cloud that suits the system best.

DevOps is all about delivering apps and services that enhance an organisation’s product, making it better and faster. The research in DevOps can help to achieve advanced security in cloud computing systems.

To conclude, this write-up has offered much-needed clarity regarding the cloud computing research topics that are popular nowadays. Hopefully, it will help you find your niche, get a more in-depth understanding of the topic, and build your career around it.

cloud computing research paper topics

Mastering Vector Embeddings: A Comprehensive Guide to Revolutionizing Data Science

cloud computing research paper topics

AI Day: February Edition - Bangalore‍

cloud computing research paper topics

Step-by-Step Guide for E-Commerce Startups to Create 3D Product Catalogs Using E2E Cloud

cloud computing research paper topics

AI Agents: Predictions for 2024

cloud computing research paper topics

A Comprehensive Guide to LLM Training: Overview of Different Methods to Train an LLM

cloud computing research paper topics

AI Coding Assistant for Enterprise Teams: Guide to Code Llama2 70B with Visual Studio Code Integration on E2E Cloud:

This is a decorative image for: A Complete Guide To Customer Acquisition For Startups

A Complete Guide To Customer Acquisition For Startups

Any business is enlivened by its customers. Therefore, a strategy to constantly bring in new clients is an ongoing requirement. In this regard, having a proper customer acquisition strategy can be of great importance.

So, if you are just starting your business, or planning to expand it, read on to learn more about this concept.

The problem with customer acquisition

As an organization, when working in a diverse and competitive market like India, you need to have a well-defined customer acquisition strategy to attain success. However, this is where most startups struggle. Now, you may have a great product or service, but if you are not in the right place targeting the right demographic, you are not likely to get the results you want.

To resolve this, typically, companies invest, but if that is not channelized properly, it will be futile.

So, the best way out of this dilemma is to have a clear customer acquisition strategy in place.

How can you create the ideal customer acquisition strategy for your business?

  • Define what your goals are

You need to define your goals so that you can meet the revenue expectations you have for the current fiscal year. You need to find a value for the metrics –

  • MRR – Monthly recurring revenue, which tells you all the income that can be generated from all your income channels.
  • CLV – Customer lifetime value tells you how much a customer is willing to spend on your business during your mutual relationship duration.  
  • CAC – Customer acquisition costs, which tells how much your organization needs to spend to acquire customers constantly.
  • Churn rate – It tells you the rate at which customers stop doing business.

All these metrics tell you how well you will be able to grow your business and revenue.

  • Identify your ideal customers

You need to understand who your current customers are and who your target customers are. Once you are aware of your customer base, you can focus your energies in that direction and get the maximum sale of your products or services. You can also understand what your customers require through various analytics and markers and address them to leverage your products/services towards them.

  • Choose your channels for customer acquisition

How will you acquire customers who will eventually tell at what scale and at what rate you need to expand your business? You could market and sell your products on social media channels like Instagram, Facebook and YouTube, or invest in paid marketing like Google Ads. You need to develop a unique strategy for each of these channels. 

  • Communicate with your customers

If you know exactly what your customers have in mind, then you will be able to develop your customer strategy with a clear perspective in mind. You can do it through surveys or customer opinion forms, email contact forms, blog posts and social media posts. After that, you just need to measure the analytics, clearly understand the insights, and improve your strategy accordingly.

Combining these strategies with your long-term business plan will bring results. However, there will be challenges on the way, where you need to adapt as per the requirements to make the most of it. At the same time, introducing new technologies like AI and ML can also solve such issues easily. To learn more about the use of AI and ML and how they are transforming businesses, keep referring to the blog section of E2E Networks.

Reference Links

https://www.helpscout.com/customer-acquisition/

https://www.cloudways.com/blog/customer-acquisition-strategy-for-startups/

https://blog.hubspot.com/service/customer-acquisition

This is a decorative image for: Constructing 3D objects through Deep Learning

Image-based 3D Object Reconstruction State-of-the-Art and trends in the Deep Learning Era

3D reconstruction is one of the most complex issues of deep learning systems . There have been multiple types of research in this field, and almost everything has been tried on it — computer vision, computer graphics and machine learning, but to no avail. However, that has resulted in CNN or convolutional neural networks foraying into this field, which has yielded some success.

The Main Objective of the 3D Object Reconstruction

Developing this deep learning technology aims to infer the shape of 3D objects from 2D images. So, to conduct the experiment, you need the following:

  • Highly calibrated cameras that take a photograph of the image from various angles.
  • Large training datasets can predict the geometry of the object whose 3D image reconstruction needs to be done. These datasets can be collected from a database of images, or they can be collected and sampled from a video.

By using the apparatus and datasets, you will be able to proceed with the 3D reconstruction from 2D datasets.

State-of-the-art Technology Used by the Datasets for the Reconstruction of 3D Objects

The technology used for this purpose needs to stick to the following parameters:

Training with the help of one or multiple RGB images, where the segmentation of the 3D ground truth needs to be done. It could be one image, multiple images or even a video stream.

The testing will also be done on the same parameters, which will also help to create a uniform, cluttered background, or both.

The volumetric output will be done in both high and low resolution, and the surface output will be generated through parameterisation, template deformation and point cloud. Moreover, the direct and intermediate outputs will be calculated this way.

  • Network architecture used

The architecture used in training is 3D-VAE-GAN, which has an encoder and a decoder, with TL-Net and conditional GAN. At the same time, the testing architecture is 3D-VAE, which has an encoder and a decoder.

  • Training used

The degree of supervision used in 2D vs 3D supervision, weak supervision along with loss functions have to be included in this system. The training procedure is adversarial training with joint 2D and 3D embeddings. Also, the network architecture is extremely important for the speed and processing quality of the output images.

  • Practical applications and use cases

Volumetric representations and surface representations can do the reconstruction. Powerful computer systems need to be used for reconstruction.

Given below are some of the places where 3D Object Reconstruction Deep Learning Systems are used:

  • 3D reconstruction technology can be used in the Police Department for drawing the faces of criminals whose images have been procured from a crime site where their faces are not completely revealed.
  • It can be used for re-modelling ruins at ancient architectural sites. The rubble or the debris stubs of structures can be used to recreate the entire building structure and get an idea of how it looked in the past.
  • They can be used in plastic surgery where the organs, face, limbs or any other portion of the body has been damaged and needs to be rebuilt.
  • It can be used in airport security, where concealed shapes can be used for guessing whether a person is armed or is carrying explosives or not.
  • It can also help in completing DNA sequences.

So, if you are planning to implement this technology, then you can rent the required infrastructure from E2E Networks and avoid investing in it. And if you plan to learn more about such topics, then keep a tab on the blog section of the website . 

https://tongtianta.site/paper/68922

https://github.com/natowi/3D-Reconstruction-with-Deep-Learning-Methods

This is a decorative image for: Comprehensive Guide to Deep Q-Learning for Data Science Enthusiasts

A Comprehensive Guide To Deep Q-Learning For Data Science Enthusiasts

For all data science enthusiasts who would love to dig deep, we have composed a write-up about Q-Learning specifically for you all. Deep Q-Learning and Reinforcement learning (RL) are extremely popular these days. These two data science methodologies use Python libraries like TensorFlow 2 and openAI’s Gym environment.

So, read on to know more.

What is Deep Q-Learning?

Deep Q-Learning utilizes the principles of Q-learning, but instead of using the Q-table, it uses the neural network. The algorithm of deep Q-Learning uses the states as input and the optimal Q-value of every action possible as the output. The agent gathers and stores all the previous experiences in the memory of the trained tuple in the following order:

State> Next state> Action> Reward

The neural network training stability increases using a random batch of previous data by using the experience replay. Experience replay also means the previous experiences stocking, and the target network uses it for training and calculation of the Q-network and the predicted Q-Value. This neural network uses openAI Gym, which is provided by taxi-v3 environments.

Now, any understanding of Deep Q-Learning   is incomplete without talking about Reinforcement Learning.

What is Reinforcement Learning?

Reinforcement is a subsection of ML. This part of ML is related to the action in which an environmental agent participates in a reward-based system and uses Reinforcement Learning to maximize the rewards. Reinforcement Learning is a different technique from unsupervised learning or supervised learning because it does not require a supervised input/output pair. The number of corrections is also less, so it is a highly efficient technique.

Now, the understanding of reinforcement learning is incomplete without knowing about Markov Decision Process (MDP). MDP is involved with each state that has been presented in the results of the environment, derived from the state previously there. The information which composes both states is gathered and transferred to the decision process. The task of the chosen agent is to maximize the awards. The MDP optimizes the actions and helps construct the optimal policy.

For developing the MDP, you need to follow the Q-Learning Algorithm, which is an extremely important part of data science and machine learning.

What is Q-Learning Algorithm?

The process of Q-Learning is important for understanding the data from scratch. It involves defining the parameters, choosing the actions from the current state and also choosing the actions from the previous state and then developing a Q-table for maximizing the results or output rewards.

The 4 steps that are involved in Q-Learning:

  • Initializing parameters – The RL (reinforcement learning) model learns the set of actions that the agent requires in the state, environment and time.
  • Identifying current state – The model stores the prior records for optimal action definition for maximizing the results. For acting in the present state, the state needs to be identified and perform an action combination for it.
  • Choosing the optimal action set and gaining the relevant experience – A Q-table is generated from the data with a set of specific states and actions, and the weight of this data is calculated for updating the Q-Table to the following step.
  • Updating Q-table rewards and next state determination – After the relevant experience is gained and agents start getting environmental records. The reward amplitude helps to present the subsequent step.  

In case the Q-table size is huge, then the generation of the model is a time-consuming process. This situation requires Deep Q-learning.

Hopefully, this write-up has provided an outline of Deep Q-Learning and its related concepts. If you wish to learn more about such topics, then keep a tab on the blog section of the E2E Networks website.

https://analyticsindiamag.com/comprehensive-guide-to-deep-q-learning-for-data-science-enthusiasts/

https://medium.com/@jereminuerofficial/a-comprehensive-guide-to-deep-q-learning-8aeed632f52f

This is a decorative image for: GAUDI: A Neural Architect for Immersive 3D Scene Generation

GAUDI: A Neural Architect for Immersive 3D Scene Generation

The evolution of artificial intelligence in the past decade has been staggering, and now the focus is shifting towards AI and ML systems to understand and generate 3D spaces. As a result, there has been extensive research on manipulating 3D generative models. In this regard, Apple’s AI and ML scientists have developed GAUDI, a method specifically for this job.

An introduction to GAUDI

The GAUDI 3D immersive technique founders named it after the famous architect Antoni Gaudi. This AI model takes the help of a camera pose decoder, which enables it to guess the possible camera angles of a scene. Hence, the decoder then makes it possible to predict the 3D canvas from almost every angle.

What does GAUDI do?

GAUDI can perform multiple functions –

  • The extensions of these generative models have a tremendous effect on ML and computer vision. Pragmatically, such models are highly useful. They are applied in model-based reinforcement learning and planning world models, SLAM is s, or 3D content creation.
  • Generative modelling for 3D objects has been used for generating scenes using graf, pigan, and gsn, which incorporate a GAN (Generative Adversarial Network). The generator codes radiance fields exclusively. Using the 3D space in the scene along with the camera pose generates the 3D image from that point. This point has a density scalar and RGB value for that specific point in 3D space. This can be done from a 2D camera view. It does this by imposing 3D datasets on those 2D shots. It isolates various objects and scenes and combines them to render a new scene altogether.
  • GAUDI also removes GANs pathologies like mode collapse and improved GAN.
  • GAUDI also uses this to train data on a canonical coordinate system. You can compare it by looking at the trajectory of the scenes.

How is GAUDI applied to the content?

The steps of application for GAUDI have been given below:

  • Each trajectory is created, which consists of a sequence of posed images (These images are from a 3D scene) encoded into a latent representation. This representation which has a radiance field or what we refer to as the 3D scene and the camera path is created in a disentangled way. The results are interpreted as free parameters. The problem is optimized by and formulation of a reconstruction objective.
  • This simple training process is then scaled to trajectories, thousands of them creating a large number of views. The model samples the radiance fields totally from the previous distribution that the model has learned.
  • The scenes are thus synthesized by interpolation within the hidden space.
  • The scaling of 3D scenes generates many scenes that contain thousands of images. During training, there is no issue related to canonical orientation or mode collapse.
  • A novel de-noising optimization technique is used to find hidden representations that collaborate in modelling the camera poses and the radiance field to create multiple datasets with state-of-the-art performance in generating 3D scenes by building a setup that uses images and text.

To conclude, GAUDI has more capabilities and can also be used for sampling various images and video datasets. Furthermore, this will make a foray into AR (augmented reality) and VR (virtual reality). With GAUDI in hand, the sky is only the limit in the field of media creation. So, if you enjoy reading about the latest development in the field of AI and ML, then keep a tab on the blog section of the E2E Networks website.

https://www.researchgate.net/publication/362323995_GAUDI_A_Neural_Architect_for_Immersive_3D_Scene_Generation

https://www.technology.org/2022/07/31/gaudi-a-neural-architect-for-immersive-3d-scene-generation/  

https://www.patentlyapple.com/2022/08/apple-has-unveiled-gaudi-a-neural-architect-for-immersive-3d-scene-generation.html

Build on the most powerful infrastructure cloud

A vector illustration of a tech city using latest cloud technologies & infrastructure

Cloud-Native Computing: A Survey From the Perspective of Services

Ieee account.

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

Cloud Computing Projects

Home / Cloud Computing Topics

Cloud Computing Topics

Cloud computing is the process of offering resources and services depends the user demand in regardless of location. It is the computing system which classify under of these 3 service models of cloud.

  • Platform as a Service (PAAS) – offers a well-established platform for developing the software / application
  • Software as a Service (SAAS) – provides a GUI based application for third party access through the internet
  • Infrastructure as a Service (IAAS) – affords the cloud-oriented service / infrastructure includes networking, virtualization  and storage

Innovative cloud computing topics for phd and ms scholars

As you know already, the cloud computing has in-build unique features to support revolutionary technological advancement in the modern society. Here, we have given the characteristics based on three major aspects as follows,

Key Features of Cloud Computing

  • Here, the user not tied up with the network and present applications accompany with the users by the cloud.
  • Support fast deployment of pay-as-you-go model
  • No need to care about installation updates
  • User can access the service or resource from any location
  • Improves the user’s capacity more than the standard local host
  • Support huge volume of processing power and storage

When dealing with cloud computing , it is must to be ware with the different varieties of cloud deployments. For any cloud developments, the environment is based on the any of the following type.

Types of Cloud Deployments

  • Recommended for Stand-alone Organization / Institution
  • Useful for Business People with Control Norms
  • Large-capacity of Storage Space with Scalability Provision
  • Useful for Software Project and other Developments
  • Multi-Tenant and Cooperative Platform
  • Useful to Industrialists to Share Data within Organization
  • Combing two different platforms as Public with Private
  • Suggested for enterprise to handle Big Data with Standard Privacy Norms

So far, we have debated the cloud service, service types and its supporting features. Now, we can see about the how the task is scheduled in cloud system. In this, the scheduling process takes place in following three phases. Scheduling in cloud can be generalized into three phases namely:

  • Task Allocation: Task is surrender to targeted resource
  • Resource Discovering And Filtering: Data Center Broker find out all available resources in the network with their current status information
  • Resource Selection: Decide on target resource depends on specific task metrics

Then, we are going to talk over about the most frequently asked cloud computing questions in google with the agreeing solutions. Since, it gives new dimension in creating novel Cloud Computing Topics .

FAQs in Cloud Computing

  • Solution: Use VM migration, load balancing or scheduling schemes to assign the tasks to the particular servers. It reduces the storage overhead and also uses the resources appropriately. Due to the optimization of storage servers, it can be scale and flexible well for the large volume of tasks.
  • Solution: Use blockchain or cryptography mechanisms for ensuring the security and privacy for both users, and cloud service providers. Application layer security is the crucial need for the current cloud applications and services.
  • Solution: Today, tasks from the users are heterogeneous (Video, Voice or multimedia applications) and the requirements of each task are different with respect to the QoS (response time, latency and resources). Optimally choose the service provider for the corresponding task.
  • Solution: At first, select the most apt approach to do each task depend on end-user or service provider request. Then, assign the resources to tasks by priority.
  • Solution: CPU, memory and network bandwidth usage is estimated for task and sensitive tasks are allocated by the resources and then non-sensitive tasks are assigned using appropriate method. Due to the estimation of accurate amount of resources for the required task, applications are executed without any error.

In addition, we have given the new future direction of the cloud computing. These are handpicked by our experts after the thorough research of current advancement in cloud computing.

What is the Next in cloud?

  • Cloud Computing Vendors
  • Virtualization and Containers
  • Server less and Microservices
  • Software Define Network and Storage
  • Migration of Internal Service and Private Cloud
  • Private Cloud Operation and Administration
  • Container Orchestration and Container as a Service
  • New Technologies for IaaS based Cloud Computing and Network Security
  • And other Upcoming Public Cloud Project

When working on cloud computing projects, make sure that the following things are answered in the selected research topic. Since, these aspects give the add-on score in your research work.

Research Issues in Cloud Computing

  • How to address the heterogeneity of mixed clouds?
  • How to incorporate multiple clouds to increase scalability
  • How to model high persistent and performance caching approaches?
  • How cache overcome the trustworthiness issues and enhance the performance

Just to let you know, we have given the top-demanding Cloud Computing Topics. These topics are suitable for all active cloud computing scholars as well as final year candidates. And, it is classified into 5 different categorizes for your ease.

Cloud computing project topics for final year

  • Cloud Service Composition
  • Business Process Management
  • Cloud Federation and Interoperability
  • Clouds Energy Issues
  • Adaptive and also Dynamic Services
  • Service Life Cycles and also Service Governance
  • Container Deployment
  • Architectural Models
  • Service Privacy and also Security
  • Building Service based Applications
  • Self-Organizing Service Architectures
  • Crowd and Social based Cloud
  • Cloud and also in Service Business Models
  • Scientific Computing
  • Smart Cities
  • RESTful Services and also clouds
  • DevOps in the cloud
  • Microservices Management and also Deployment
  • Trends in Computation, Storage and also in Network Clouds
  • Next Gen Service Repositories
  • Edge/Fog Computing
  • Real-Time and Embedded Services
  • Fog to Multi Cloud Services
  • IoT service Engineering
  • Cloud to Fog Computing Solutions
  • IoT Delivery Models
  • Mobile Services and also Clouds
  • PaaS and IaaS Cloud Services
  • Industry Clouds: Rapidly Growing Segments
  • Internet of Things and Artificial Intelligence
  • Hyper-scale Providers also on Cloud
  • Hybrid Cloud Platform and Services
  • Severless Computing
  • Workload Acceleration
  • Data Center Proliferation
  • Hardened/ Trusted/ Shield VMs
  • Resource Allocation

Cloud Computing Tools and Simulators

Now, shall we see about the implementation tools for cloud computing. Normal tools addresses the user demands based on resource features (storage size, processing ability in MIPs, deadlines). Further, advance tools analyze the task interdependence for real-world cloud communication.

In overall, Cloud simulation tools measures the performance and readiness of critical cloud system. In addition, it monitors, handles and assesses the applications, infrastructure, architecture and services. Below, we have suggested some important cloud monitoring tools,

  • Logic Monitor
  • AppDynamics
  • Amazon CloudWatch
  • Microsoft Cloud Monitoring

Moreover, cloud computing is incorporated with several advantages to assist both clients and hosts. And, it is easy to access, upload, download the info in very low time.

  • Internet of Things
  • Fog / Edge Computing
  • Green computing (Green cloud)

Before get into the cloud oriented research work, one should know the fundamental theories of Cloud computing. By referring the good textbook materials, one can strong build their technical skills. Also, the following things are very essential for active scholars to create a successful cloud system.

How to be Successful in the Cloud?

  • Software Defined Storage and Network
  • Various Container Solutions (rkt, docket and more)
  • Cloud based Applications Logging, Monitoring and Debugging
  • Cloud CI/CD Services (Continuous Integration and Developments)
  • Primary cloud offering models (private, public and hybrid) and services (SaaS, IaaS and PasS)

At the end of development phase, it is required to analysis the overall system perform to bring out the original worth of the selected research topic. Through, one can prove that their selected topic is better than other existing ones. Most importantly, it should meet the user QoS in all aspects.

Performance Analysis in Cloud Computing

Quality of Service

  • Measure the network overall performance and services
  • Parameters: bit rate, jitter, latency, availability, packet loss and throughput
  • Has varied priority based on users, applications and data flow. For instance: telecommunication – resource advance booking and traffic ranking using Voice over IP technology

Qualities of Traffic

Here, QoS factor is varied based on the human and technical factors. In some cases, the network may miss to drop some packets because of network traffic.

  • Human factors – service availability, stability, quality, waiting times and user data
  • Technical factors – scalability, reliability, maintainability, effectiveness and network jamming

Due to unpredictable user load the QoS metric may vary to assess the performance. Here, we have given some common parameters that being used in developing Cloud Computing Topics.

  • User (click rates, page views, etc.)
  • Throughput (http, cables, network, etc.)
  • Performance (response time, queries/sec, cpu, etc.)
  • Availability (link breaks, uptime, service and host failure, etc.)
  • Resource Utilization (disk, bandwidth, database tables, memory, etc.)
  • Performance Indicators (no. of. users, revenue / hour and cost / transaction, etc.)

VM Migration

Related Pages

Cloud Computing Based Projects

Cloud Based Projects

Cloud Computing Final Year Projects

Fog Computing Projects

Project On Cloud Computing Security

Cloud Computing Mini Projects

Projects On Cloud Computing Security

Cloud Computing Project Ideas

Project On Cloud Computing Final Year

Cloud Computing Open Source Projects

Projects Related To Cloud Computing

Cloud Computing Projects For Final Year Cse

Cloud Computing Projects In Java

Cloud Computing Projects For Engineering Students

Final Year Engineering Projects For Cse On Cloud Computing

Key Services

  • Literature Survey
  • Research Proposal
  • System Development
  • AWS Integration
  • Algorithm Writing
  • Paper Writing
  • Conference Paper
  • Thesis Writing
  • Dissertation Writing
  • Assignments

Testimonials

I really appreciate your project development team. Since, your source codes are very easy to understand and execute it. Thank you!

Happy Customer Wilson

You’re amazing and great working with you! I am totally satisfied with your paper writing. Keep up the best service for scholars!

Happy Client Lewis

Thank you so much for my project support and you guys are well done in project explanation. I get a clear vision about it.

Satisfied Client Eliza

You’ve been so helpful because my project is based on the AWS and HDFS integration. Before my commitment with you, I’ve a lot of fear, but you people rocked on my project.

Satisfied Customer Henry

Your project development is good and you made it so simple. Especially, codes are very new and running without any error.

Much Satisfied Client Frank

You exactly did my project according to my demand. I tried many services, but I get the correct result from you. So surely I will keep working with you!

Happy cloud Computing Project Customer

IMAGES

  1. Best Cloud Computing Research Topics by PhD Research Proposal

    cloud computing research paper topics

  2. Cloud Computing

    cloud computing research paper topics

  3. Cloud computing research paper 2018 pdf

    cloud computing research paper topics

  4. List of thesis topics in cloud computing for computer science

    cloud computing research paper topics

  5. (PDF) TOP 10 CLOUD COMPUTING PAPERS: RECOMMENDED READING

    cloud computing research paper topics

  6. (PDF) The Future of Cloud Computing: Opportunities, Challenges and

    cloud computing research paper topics

VIDEO

  1. Cloud computing previous year question paper || bteup #bteup #diploma #cloudcomputing

  2. Cloud Computing Research paper presentation

  3. Research Domain and Topic: Cloud Computing

  4. What is Cloud Computing?

  5. Cloud Computing Paper 2023-24 #AKTU #btech3rdyear #5thsem

  6. Cloud computing and distributed system

COMMENTS

  1. Top 10 Cloud Computing Research Topics in 2020

    Below are 10 the most demanded research topics in the field of cloud computing: 1. Big Data. Big data refers to the large amounts of data produced by various programs in a very short duration of time. It is quite cumbersome to store such huge and voluminous amounts of data in company-run data centers. Also, gaining insights from this data ...

  2. Top 10 Cloud Computing Research Topics of 2024

    4. Blockchain data-based cloud data integrity protection mechanism. The "Blockchain data-based cloud data integrity protection mechanism" paper suggests a method for safeguarding the integrity of cloud data and which is one of the Cloud computing research topics. In order to store and process massive amounts of data, cloud computing has grown ...

  3. IEEE Cloud Computing

    IEEE Cloud Computing. IEEE Cloud Computing is committed to the timely publication of peer-reviewed articles that provide innovative research ideas, applicati. IEEE Account. Change Username/Password; Update Address; Purchase Details. Payment Options; Order History; View Purchased Documents ...

  4. Articles

    Cloud computing technology offers flexible and expedient services that carry a variety of profits for both societies as well as individuals. De-duplication techniques were developed to minimize redundant data ... M. Pavithra, M. Prakash and V. Vennila. Journal of Cloud Computing 2024 13 :8.

  5. 12 Latest Cloud Computing Research Topics

    Learn about 12 latest cloud computing research topics, such as green cloud computing, edge computing, cloud cryptography, load balancing, cloud analytics, scalability, cloud platforms, mobile cloud computing, big data, cloud deployment model, cloud security and more. These topics are relevant for researchers, developers and students who want to explore the latest trends and challenges in cloud computing.

  6. cloud computing Latest Research Papers

    The paper further compares and reviews different layout model for the discovery of services, selection of services and composition of services in Cloud computing. Recent research trends in service composition are identified and then research about microservices are evaluated and shown in the form of table and graphs. Download Full-text.

  7. Cloud Computing Continuum Research Topics and Challenges. A Multi

    This paper has presented HUB4CLOUD's multi-source analysis for the identification of cloud computing research challenges. The paper presents the methodology followed and the main sources analysed. It also discusses the research topics identified and provides a graphical representation of the expected timeframe in which they could be realised ...

  8. Survey on serverless computing

    They proposed an open dataset for serverless computing papers. The dataset includes 60 papers for the period (2016-July 2018). Also, they have analyzed the dataset according to bibliometric, content, technology, and produced statistics about each section. In contrast, our paper aims to conduct a systematic survey.

  9. A Systematic Literature Review on Cloud Computing Security: Threats and

    Cloud computing has become a widely exploited research area in academia and industry. Cloud computing benefits both cloud services providers (CSPs) and consumers. The security challenges associated with cloud computing have been widely studied in the literature. This systematic literature review (SLR) is aimed to review the existing research studies on cloud computing security, threats, and ...

  10. Home page

    The Journal of Cloud Computing: Advances, Systems and Applications (JoCCASA) will publish research articles on all aspects of Cloud Computing. Principally, articles will address topics that are core to Cloud Computing, focusing on the Cloud applications, the Cloud systems, and the advances that will lead to the Clouds of the future.

  11. A COMPARATIVE STUDY ON THREE SELECTIVE CLOUD PROVIDERS

    topics to look into for further research. KEYWORDS Cloud Computing, Trending Cloud Providers, cloud Service feature. 1. INTRODUCTION Cloud Computing is being lauded as the next-generation shift that combines the internet and computing, allowing software, material and data to be kept on remote servers that are accessible via the web from ...

  12. Cloud Computing

    The 6 full papers and 1 short paper presented were carefully reviewed and selected from 25 submissions. They deal with the latest fundamental advances in the state of the art and practice of cloud computing, identify emerging research topics, and define the future of cloud computing.

  13. Cloud computing research: A review of research themes, frameworks

    This paper presents a meta-analysis of cloud computing research in information systems with the aim of taking stock of literature and their associated research frameworks, research methodology, geographical distribution, level of analysis as well as trends of these studies over the period of 7 years. ... Cloud computing research started to gain ...

  14. Key Topics in Cloud Computing Security: A Systematic Literature Review

    Based on a systematic literature review (hereafter SLR), this paper identifies the key themes and topics in cloud computing security. Findings from an analysis of 275 peer-reviewed publications show that cloud security solutions and cloud security challenges are the two most dominant themes. The other themes identified include guiding frameworks and methodologies and general security ...

  15. Latest Research Topics on Cloud Computing (2022 Updated)

    Learn about the top 14 in-demand research topics on cloud computing for 2022, such as green cloud computing, edge computing, cloud cryptography, load balancing, and more. These topics cover various aspects of cloud computing, such as its benefits, challenges, and applications. Find out how cloud computing can help you with your research or career goals.

  16. The Rise of Cloud Computing: Data Protection, Privacy, and Open

    From 2010 to 2021, we found 52 papers that were of top ranked journals and conferences. From 2010 to 2017, there is linear work in cloud computing, however, after 2017, a lot of work is done in cloud computing data security. From 2018 to 2021, 37 papers are published. After 2018, the trend about data security in cloud computing increased very ...

  17. 8 Cloud Security Trends in 2022

    After the COVID-19 pandemic, cloud adoption followed the increase in remote working. Increased flexibility, productivity, and reduced costs made it a viable option for businesses around the world. But with major advantages follow some critical security threats. In the case of mismanagement, organizations can suffer from data breaches and leakage.

  18. (PDF) Research Paper on Cloud Computing

    Cloud Computing An Empowering Technology: Architecture, Applications and Challenges. Conference Paper. Sep 2021. Meena Rani. Kalpna Guleria. Surya Narayan Panda. View. This paper contains of ...

  19. Research on Mobile Cloud Computing: Review, Trend and Perspectives

    Internet, Cloud Computing has become a significant research topic of the scientific and industrial communities since 2007. Commonly, cloud computing is described as a range of ser-vices which are provided by an Internet-based cluster system. Such cluster systems consist of a group of low-cost servers or

  20. A Review Paper on Cloud Computing

    Cloud computing has taken its place all over the IT industries. It is an on-demand internet-based computing service that provides the maximum result with minimum resources cloud computing provides a service that does not require any physical close to the computer hardware. Cloud Computing is a product of grid, distributed, parallel, and ubiquitous computing. This paper introduces the concepts ...

  21. Top 10 Cloud Computing Research Topics in 2022

    Here are ten research topics for cloud computing to look forward to in 2022 -. Cloud analytics. Cloud analytics is a cloud-related analytical tool that helps to analyze data and reduce data storage costs. It is used for research in genomics, exploring oil and gas reserves, business intelligence, Internet of Things (IoT) and cybersecurity.

  22. 545 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on GREEN CLOUD COMPUTING. Find methods information, sources, references or conduct a literature review ...

  23. Gartner Emerging Technologies and Trends Impact Radar for 2024

    Use this year's Gartner Emerging Tech Impact Radar to: ☑️Enhance your competitive edge in the smart world ☑️Prioritize prevalent and impactful GenAI use cases that already deliver real value to users ☑️Balance stimulating growth and mitigating risk ☑️Identify relevant emerging technologies that support your strategic product roadmap Explore all 30 technologies and trends: www ...

  24. Cloud costs continue to rise in 2024

    U.S. government economic data and vendor research point to a pattern of rising cloud costs. The Bureau of Labor Statistics' Producer Price Index (PPI) for January, released last week, reported a 0.6% month-over-month increase in data processing and related services, a category that includes cloud computing. The year-over-year uptick stands at 3.7%.

  25. Cloud-Native Computing: A Survey From the Perspective of Services

    The development of cloud computing delivery models inspires the emergence of cloud-native computing. Cloud-native computing, as the most influential development principle for web applications, has already attracted increasingly more attention in both industry and academia. Despite the momentum in the cloud-native industrial community, a clear research roadmap on this topic is still missing. As ...

  26. Cloud Computing Topics [Hot Topics for cloud computing Research]

    Cloud Computing Topics. Cloud computing is the process of offering resources and services depends the user demand in regardless of location. It is the computing system which classify under of these 3 service models of cloud. Platform as a Service (PAAS) - offers a well-established platform for developing the software / application.