Despite the surge in Vehicular Ad Hoc NETwork (VANET) research, future high-end vehicles are expected to under-utilize their on-board computation, communication, and storage resources. Olariu et al. envisioned the next paradigm shift from conventional VANETs to Vehicular Cloud Computing (VCC) by merging VANETs with cloud computing. But to date, there is no solid architecture in the literature for cloud computing from the VANET standpoint.
In this paper, we put forth a taxonomy of VANET-based cloud computing. It is, to the best of our knowledge, the first effort to define a VANET cloud architecture. Additionally, we divide VANET clouds into three architectural frameworks: Vehicular Clouds (VC), Vehicles using Clouds (VuC), and Hybrid Vehicular Clouds (HVC). We also outline the unique security and privacy issues and research challenges in VANET clouds.
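To make the three-way split concrete, the following minimal Python sketch (our own illustration; the class and field names are hypothetical, not from the paper) models the role a vehicle plays under each framework:

```python
from dataclasses import dataclass
from enum import Enum, auto

class CloudFramework(Enum):
    VC = auto()   # Vehicular Clouds: vehicles pool their own resources
    VUC = auto()  # Vehicles using Clouds: vehicles consume conventional clouds
    HVC = auto()  # Hybrid Vehicular Clouds: combines VC and VuC behavior

@dataclass
class VehicleNode:
    vehicle_id: str
    framework: CloudFramework

    def shares_resources(self) -> bool:
        # VC and HVC participants contribute on-board resources to the cloud.
        return self.framework in (CloudFramework.VC, CloudFramework.HVC)

    def uses_backend_cloud(self) -> bool:
        # VuC and HVC participants consume services from conventional clouds.
        return self.framework in (CloudFramework.VUC, CloudFramework.HVC)
```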
Cloud computing is an increasingly important solution for providing services deployed in dynamically scalable cloud networks. Services in cloud computing networks may be virtualized onto specific servers whose details are abstracted from the user. Some of these servers are active and available, others are busy or heavily loaded, and the remainder are offline for various reasons.
Users expect suitable, available servers to meet their application requirements. Therefore, in order to provide an effective control scheme with parameter guidance for cloud resource services, failure detection is essential to meeting users' service expectations; it can resolve potential performance bottlenecks in providing virtual services in cloud computing networks. Most existing Failure Detector (FD) schemes do not automatically adjust their detection parameters to dynamic network conditions, and thus cannot be used in practical applications.
This paper explores FD properties in relation to practical, automatically fault-tolerant cloud computing networks, and presents a general, non-manual analysis method for self-tuning the corresponding parameters to satisfy user requirements. Based on this general automatic method, we propose a specific, dynamic Self-tuning Failure Detector, called SFD, as a major improvement over existing schemes. We carry out extensive experiments on real systems to compare the quality-of-service performance of SFD with that of several existing FDs.
Our experimental results demonstrate that our scheme can automatically adjust the SFD control parameters to obtain the corresponding services and satisfy user requirements, while maintaining good performance. Such an SFD can be widely applied in industrial and commercial settings, and it can also significantly benefit cloud computing networks.
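As a rough illustration of the self-tuning idea, the sketch below adapts a heartbeat failure detector's timeout from observed inter-arrival times and widens or tightens its safety margin on feedback. This is a generic moving-average scheme assumed for illustration; it is not the paper's SFD algorithm:

```python
import time
from collections import deque

class SelfTuningFailureDetector:
    def __init__(self, window=100, margin=0.1):
        self.arrivals = deque(maxlen=window)  # recent heartbeat arrival times
        self.margin = margin                  # safety margin in seconds, self-tuned

    def heartbeat(self, now=None):
        self.arrivals.append(time.monotonic() if now is None else now)

    def _expected_next(self):
        # Predict the next arrival as last arrival + mean inter-arrival gap.
        if len(self.arrivals) < 2:
            return None
        gaps = [b - a for a, b in zip(self.arrivals, list(self.arrivals)[1:])]
        return self.arrivals[-1] + sum(gaps) / len(gaps)

    def suspect(self, now=None):
        """Return True if the monitored process is currently suspected."""
        now = time.monotonic() if now is None else now
        expected = self._expected_next()
        return expected is not None and now > expected + self.margin

    def report_wrong_suspicion(self):
        # A premature suspicion means the margin was too tight: enlarge it,
        # trading detection time for accuracy.
        self.margin *= 2

    def report_slow_detection(self):
        # Detection lagged user requirements: tighten the margin.
        self.margin = max(self.margin / 2, 0.01)
```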
Although the cloud computing model is considered to be a very promising Internet-based computing platform, it results in a loss of security control over cloud-hosted assets. This is due to the outsourcing of enterprise IT assets hosted on third-party cloud computing platforms. Moreover, the lack of security constraints in the Service Level Agreements between cloud providers and consumers results in a loss of trust as well. Obtaining a security certificate such as ISO 27000 or NIST-FISMA would help cloud providers improve consumers' trust in their cloud platforms' security.
However, such standards are still far from covering the full complexity of the cloud computing model. We introduce a new cloud security management framework based on aligning the FISMA standard to fit with the cloud computing model, enabling cloud providers and consumers to be security certified. Our framework is based on improving collaboration between cloud providers, service providers and service consumers in managing the security of the cloud platform and the hosted services.
It is built on top of a number of security standards that assist in automating the security management process. We have developed a proof of concept of our framework using .NET and deployed it on a testbed cloud platform. We evaluated the framework by managing the security of a multi-tenant SaaS application exemplar.
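To illustrate one piece of such automation, the sketch below (hypothetical structures of our own, not the paper's .NET implementation) models a shared registry of FISMA/NIST-style controls in which the cloud provider, service provider, and consumer each own controls, and certification gaps are reported per responsible party:

```python
from dataclasses import dataclass

@dataclass
class SecurityControl:
    control_id: str      # e.g. an ID from a catalog such as NIST SP 800-53
    description: str
    responsible: str     # "cloud_provider", "service_provider", or "consumer"
    implemented: bool = False
    assessed: bool = False

def certification_gaps(controls):
    """Controls blocking certification, grouped by the responsible party."""
    gaps = {}
    for c in controls:
        if not (c.implemented and c.assessed):
            gaps.setdefault(c.responsible, []).append(c.control_id)
    return gaps

registry = [
    SecurityControl("AC-2", "Account management", "service_provider", True, False),
    SecurityControl("SC-7", "Boundary protection", "cloud_provider", True, True),
]
print(certification_gaps(registry))  # {'service_provider': ['AC-2']}
```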
Electronic health is vital for enabling improved access to health records and boosting the quality of the health services provided. In this paper, a framework for an electronic health record system is developed that connects a nation's health care facilities together in a network using cloud computing technology.
Cloud computing ensures easy access to health records from anywhere and at any time, with easy scalability and prompt on-demand availability of resources. A hybrid cloud is adopted in modeling the system, and solutions are proposed for the main challenges faced by any typical electronic health record system.
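One way such a hybrid deployment might be realized is sketched below: identifiable records stay on the private cloud while de-identified data may use the public cloud for scalability. The routing policy and all names here are our own assumptions; the paper does not specify this logic:

```python
from dataclasses import dataclass

@dataclass
class HealthRecord:
    patient_id: str
    payload: dict
    contains_identifiers: bool

def storage_target(record: HealthRecord) -> str:
    # Hypothetical policy: identifiable records never leave the private cloud;
    # de-identified data may use the public cloud for elastic capacity.
    return "private-cloud" if record.contains_identifiers else "public-cloud"

r = HealthRecord("p-001", {"diagnosis": "..."}, contains_identifiers=True)
assert storage_target(r) == "private-cloud"
```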
With the rapid development of cloud computing, an architecture to follow in developing cloud computing applications has become necessary. Existing architectures neglect how cloud applications are developed: they focus on the structure of clouds and on using clouds as a tool in developing cloud computing applications, rather than on how the applications themselves are developed using clouds.
This paper presents a survey of key cloud computing concepts, definitions, characteristics, development phases, and architectures. It also proposes and describes a novel architecture that aids developers in developing cloud computing applications in a systematic way. It discusses how cloud computing transforms the way applications are developed and delivered, and describes the architectural considerations that developers must take into account when adopting and using cloud computing technology.
Cloud computing has become increasingly popular because it obviates the need for users to own and maintain complex computing infrastructure. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software failures. Dependability assurance is crucial for building sustainable cloud computing services. Although many techniques have been proposed to analyze and enhance the reliability of distributed systems, there is little work on understanding the dependability of cloud computing environments.
As virtualization has been an enabling technology for the cloud, it is imperative to investigate the impact of virtualization on cloud dependability, which is the focus of this work. In this paper, we present a cloud dependability analysis (CDA) framework with mechanisms to characterize failure behavior in cloud computing infrastructures. We design failure-metric DAGs (directed acyclic graphs) to analyze the correlation of various performance metrics with failure events in virtualized and non-virtualized systems. We study multiple types of failures.
By comparing the DAGs generated in the two environments, we gain insight into the impact of virtualization on cloud dependability. This paper is the first attempt to study this crucial issue. In addition, we exploit the identified metrics for failure detection. Experimental results from an on-campus cloud computing testbed show that our approach can achieve high detection accuracy while using a small number of performance metrics.
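The metric-selection idea behind failure-metric DAGs can be sketched as follows: correlate each performance-metric time series with a binary failure signal and keep the strongly correlated metrics for detection. The plain Pearson correlation and threshold here are simplifying assumptions of ours, not the exact CDA construction:

```python
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def failure_related_metrics(metrics, failure_signal, threshold=0.6):
    """metrics: {name: [samples]}; failure_signal: one 0/1 flag per sample."""
    selected = {}
    for name, series in metrics.items():
        r = pearson(series, failure_signal)
        if abs(r) >= threshold:
            selected[name] = r  # candidate node for the failure-metric DAG
    return selected

metrics = {
    "cpu_steal":  [0.1, 0.2, 0.9, 0.95, 0.1],
    "disk_queue": [1, 1, 8, 9, 1],
    "fan_speed":  [5, 5, 5, 5, 5],   # constant: uncorrelated, filtered out
}
failures = [0, 0, 1, 1, 0]
print(failure_related_metrics(metrics, failures))
```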
Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been widespread privacy concerns, as personal health information could be exposed to those third-party servers and to unauthorized parties.
To assure patients' control over access to their own PHRs, encrypting the PHRs before outsourcing is a promising method. Yet issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semi-trusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file.
Different from previous works in secure data outsourcing, we focus on the multiple-data-owner scenario and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.
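As a deliberately simplified picture of the access-control layer only, the sketch below evaluates an attribute policy against a user's attribute set. Real ABE enforces such a policy cryptographically via pairing-based encryption, so this toy check merely stands in for, and is not, the paper's construction:

```python
def satisfies(policy, attributes):
    """policy: an attribute string, or a nested ('AND'|'OR', [subpolicies])."""
    if isinstance(policy, str):
        return policy in attributes
    op, parts = policy
    results = (satisfies(p, attributes) for p in parts)
    return all(results) if op == "AND" else any(results)

# A public domain (e.g. doctors certified by a medical authority) and a
# personal domain (family, friends) keep per-owner key management small.
policy = ("OR", [("AND", ["doctor", "cardiology"]), "patient:alice"])
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"nurse"}))                 # False
```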
Cloud computing has been envisioned as the de facto solution to the rising storage costs of IT enterprises. With the high cost of data storage devices and the rapid rate at which data is being generated, it proves costly for enterprises or individual users to frequently update their hardware. Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance.
Cloud storage moves the user's data to large, remotely located data centers over which the user does not have any control. However, this unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved. We provide a scheme which gives a proof of data integrity in the cloud that the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA).
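A minimal sketch of the challenge-response flavor of such a proof follows: before uploading, the client keeps HMAC tags for a few randomly chosen blocks, then later challenges the cloud for those blocks and verifies the tags. Block size, tag count, and function names are illustrative assumptions, not the paper's exact protocol:

```python
import hmac, hashlib, secrets

BLOCK = 4096

def precompute_tags(data: bytes, key: bytes, num_challenges: int):
    # Client side, before outsourcing: tag a few random blocks and keep only
    # the (index, tag) pairs locally; the full data goes to the cloud.
    nblocks = max(1, (len(data) + BLOCK - 1) // BLOCK)
    tags = {}
    for _ in range(num_challenges):
        i = secrets.randbelow(nblocks)
        block = data[i * BLOCK:(i + 1) * BLOCK]
        tags[i] = hmac.new(key, block, hashlib.sha256).digest()
    return tags

def verify_challenge(block_from_cloud: bytes, key: bytes, expected_tag: bytes):
    # Client side, at audit time: recompute the tag over the returned block.
    tag = hmac.new(key, block_from_cloud, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)

key = secrets.token_bytes(32)
data = secrets.token_bytes(64 * 1024)
tags = precompute_tags(data, key, num_challenges=4)
i, t = next(iter(tags.items()))
assert verify_challenge(data[i * BLOCK:(i + 1) * BLOCK], key, t)
```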
With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for greater flexibility and economic savings. But to protect data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search.
Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is crucial for the search service to allow multi-keyword queries and provide result similarity ranking to meet the effective data retrieval need. Related works on searchable encryption focus on single-keyword search or Boolean keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality.
Among various multi-keyword semantics, we choose the efficient principle of “coordinate matching”, i.e., as many matches as possible, to capture the similarity between the search query and data documents, and further use “inner product similarity” to quantitatively formalize this principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements under two levels of threat models.
A thorough analysis of the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a real-world dataset further show that the proposed schemes indeed introduce low overhead in computation and communication.
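In plaintext form, coordinate matching via inner product similarity reduces to the dot product of 0/1 keyword vectors, as the sketch below shows. The actual MRSE scheme evaluates this inner product over encrypted vectors (secure inner product computation), which the sketch deliberately omits; the dictionary and documents are made up for illustration:

```python
DICTIONARY = ["cloud", "privacy", "search", "storage", "audit"]

def to_vector(keywords):
    # One 0/1 coordinate per dictionary keyword.
    return [1 if w in keywords else 0 for w in DICTIONARY]

def coordinate_match(query_vec, doc_vec):
    # Inner product = number of query keywords the document matches.
    return sum(q * d for q, d in zip(query_vec, doc_vec))

docs = {
    "doc1": {"cloud", "privacy", "search"},
    "doc2": {"storage", "audit"},
}
q = to_vector({"privacy", "search"})
ranking = sorted(docs, key=lambda d: coordinate_match(q, to_vector(docs[d])),
                 reverse=True)
print(ranking)  # ['doc1', 'doc2'] -- doc1 matches both query keywords
```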
Cloud computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources. By outsourcing data, users can be relieved of the burden of local data storage and maintenance.
Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third-party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) the third-party auditing process should bring in no new vulnerabilities toward user data privacy. Specifically, our contribution in this work can be summarized as the following three aspects:
1) We motivate the public auditing system of data storage security in Cloud Computing and provide a privacy-preserving auditing protocol, i.e., our scheme supports an external auditor to audit a user’s outsourced data in the cloud without learning the data content.
2) To the best of our knowledge, our scheme is the first to support scalable and efficient public auditing in Cloud Computing. In particular, our scheme achieves batch auditing, where multiple delegated auditing tasks from different users can be performed simultaneously by the TPA.
3) We prove the security and justify the performance of our proposed schemes through concrete experiments and comparisons with the state-of-the-art.
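A much-simplified sketch of third-party auditing by block sampling is given below: the TPA stores only a Merkle root, challenges random block indices, and verifies the cloud's returned blocks against their authentication paths. The actual protocol in this line of work uses homomorphic authenticators with random masking so the TPA also learns nothing about block contents; that part is omitted here:

```python
import hashlib, secrets

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(b) for b in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node at odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, idx):
    """Cloud side: return (block, sibling path) for the challenged index."""
    path, level, i = [], [h(b) for b in leaves], idx
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1
        path.append((level[sib], sib < i))  # (sibling hash, sibling-is-left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return leaves[idx], path

def verify(root, block, path):
    """TPA side: check the block against the stored root, no local data needed."""
    node = h(block)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [secrets.token_bytes(256) for _ in range(8)]
root = merkle_root(blocks)                 # all the TPA has to keep
idx = secrets.randbelow(len(blocks))       # random challenge
blk, path = prove(blocks, idx)             # cloud's response
assert verify(root, blk, path)
```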
Cloud computing is fundamentally altering expectations for how and when computing, storage, and networking resources should be allocated, managed, and consumed. End users are increasingly sensitive to the latency of the services they consume. Service developers want service providers to ensure or provide the capability to dynamically allocate and manage resources in response to changing demand patterns in real time.
Ultimately, service providers are under pressure to architect their infrastructure to enable real-time, end-to-end visibility and dynamic resource management with fine-grained control, in order to reduce total cost of ownership while also improving agility. The current approaches to enabling real-time, dynamic infrastructure are inadequate, expensive, and not scalable enough to support consumer mass-market requirements. Over time, server-centric infrastructure management systems have evolved into a complex tangle of layered systems designed to automate systems administration functions that are knowledge- and labor-intensive.
This expensive, non-real-time paradigm is ill suited to a world where customers are demanding communication, collaboration, and commerce at the speed of light. Thanks to hardware-assisted virtualization, and the resulting decoupling of infrastructure and application management, it is now possible to provide the dynamic visibility and control of service management needed to meet the rapidly growing demand for cloud-based services.
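As a toy illustration of the latency-driven control loop this implies, the sketch below scales a hypothetical instance pool against a target latency. All names, thresholds, and step sizes are our own assumptions, since the text describes no concrete API:

```python
def autoscale(current_instances, observed_latency_ms,
              target_ms=100.0, min_instances=1, max_instances=64):
    if observed_latency_ms > 1.2 * target_ms:   # too slow: scale out
        return min(current_instances + 1, max_instances)
    if observed_latency_ms < 0.5 * target_ms:   # over-provisioned: scale in
        return max(current_instances - 1, min_instances)
    return current_instances                    # within band: hold steady

pool = 4
for latency in [180, 150, 90, 40, 95]:          # sampled end-to-end latencies
    pool = autoscale(pool, latency)
print(pool)  # pool size after reacting to the latency samples
```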