Participatory Cloud Computing and the Privacy and Security of Medical Information Applied to A Wireless Smart Board Network

Lutando Ngqakaza ngqlut003@myuct.ac.za

UCT Department of Computer Science

Abstract: Cloud Computing is a fast-growing computing paradigm that is being increasingly adopted by companies and individuals who wish to use online services. With this rising adoption rate come security and sustainability issues that need to be addressed. Chief among them is dependence on large Cloud vendors such as Google, Amazon, and Microsoft, which raises concerns about data privacy. Added to the privacy concerns are sustainability issues: the more popular Cloud Computing services become, the more data centers must be built and the more energy consumed. As a result, Cloud Computing contributes to a growing global carbon footprint. To alleviate some of these issues, a participatory Cloud Computing paradigm shift aims to address privacy and sustainability while still achieving the same resilience and convenience that the traditional Cloud Computing architecture offers.

1. Introduction and Motivation

Cloud Computing: Cloud Computing offerings from commercial companies have become widely used and widely available to most people connected to the internet. The big web companies Google, Amazon, and Microsoft [2] each offer a variant of a Cloud Computing platform (App Engine, Elastic Compute Cloud, and Live Mesh respectively). The advantage of these platforms is that leveraging them requires little effort from the consumer: the services are relatively easy to set up and use, and they are convenient.

The main advantage of utilizing Cloud Computing platforms is that the price of using a Cloud Computing instance is a fraction of the price of buying and maintaining a datacenter or server. Amazon's EC2 platform owes its success to allowing consumers to instantiate machines or virtual machines within the cloud [2]; these instances can be used for any computational task. There are, however, some drawbacks. The consumer is at the mercy of the Cloud Computing vendor: the consumer has no assurances as to what will be done with associated data, and little or no control over what security policy is enforced upon it. It is very hard to ensure that your data will remain private or secure with commercial Cloud Computing platforms. There is also the concern that as Cloud Computing becomes more prominent, more datacenters will have to be deployed around the globe, which raises sustainability questions [4]. If current adoption trends continue, the Cloud Computing carbon footprint is expected to exceed that of the global airline industry by 2020 [4]. There are also efficiency issues which could be improved in the Cloud Computing space, but those will not be covered in this paper.

Community Cloud Computing: Community Cloud Computing was proposed as a solution to the issues surrounding the traditional, vendor-controlled Cloud Computing companies and to the issues surrounding sustainability. The Community Cloud aspires to unify distributed resource/provision management (Grid Computing), distributed control, and sustainability [2]. The Community Cloud would ideally be a self-managed system (gaining insight from Autonomic Computing). The basic idea is to leverage the underutilized resources of user machines (or smart devices) to form a Community Cloud, with nodes fulfilling all (or some) of the roles of consumer, producer, and coordinator, as shown in Figure 1.

Figure 1: Nodes potentially fulfilling all roles: consumer, producer, and coordinator. Green symbolizes resource consumption, yellow symbolizes resource provision, and red symbolizes resource coordination.

Participatory Cloud Computing: Participatory Cloud Computing builds upon the Community Cloud Computing principles to support smart networked infrastructures where:

• The computing devices are smart boards which are assumed to be deployed unattended to collect data.

• Using an opportunistic resource allocation model, the Participatory Cloud Computing infrastructure allows the network to grow and shrink in size.

• Using an opportunistic data dissemination model, the Participatory Cloud allows data to be stored in and fetched from the Cloud and forwarded to underutilized nodes for potential processing.
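A minimal sketch of how these opportunistic resource allocation and data dissemination models might fit together follows. All class and method names here are illustrative assumptions; the cited papers do not prescribe a concrete API.

```python
class Node:
    """A smart board that may act as consumer, producer, and/or coordinator."""
    def __init__(self, node_id, roles, capacity):
        self.node_id = node_id
        self.roles = set(roles)   # subset of {"consumer", "producer", "coordinator"}
        self.capacity = capacity  # free processing slots on this smart board
        self.queue = []           # data fragments awaiting processing

class Coordinator:
    """Opportunistically forwards data fragments to underutilized producers."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def join(self, node):
        # Opportunistic growth: any node may join the cloud at runtime.
        self.nodes.append(node)

    def leave(self, node_id):
        # Opportunistic shrinkage: departing nodes simply drop out.
        self.nodes = [n for n in self.nodes if n.node_id != node_id]

    def disseminate(self, fragment):
        # Forward the fragment to the producer with the most spare capacity.
        producers = [n for n in self.nodes
                     if "producer" in n.roles and n.capacity > len(n.queue)]
        if not producers:
            return None  # no spare capacity: the fragment stays in cloud storage
        target = max(producers, key=lambda n: n.capacity - len(n.queue))
        target.queue.append(fragment)
        return target.node_id

nodes = [Node("board-1", {"producer"}, capacity=2),
         Node("board-2", {"producer", "consumer"}, capacity=1)]
coord = Coordinator(nodes)
print(coord.disseminate("ecg-fragment-001"))  # board-1 (most spare capacity)
```

Because nodes can `join` and `leave` at any time, the same dissemination call naturally adapts as the network grows or shrinks, which is the point of the opportunistic models above.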

In essence, then, Community Cloud Computing aims to address the issues of the traditional Cloud Computing model, while Participatory Cloud Computing builds on Grid Computing and Community Cloud principles to maximize energy efficiency and device efficiency.

When Participatory Cloud Computing is applied to community healthcare, the Participatory Cloud raises numerous issues:


• Privacy and security policies related to storing and disseminating medical data

• The reliable transfer of medical data to and from the Participatory Cloud

• Effective resource allocation

Other individuals and consortia have made some advances on the issues mentioned above.

2. Related Work

A scalable and safe (in terms of security and privacy) systems architecture is needed for a participatory cloud computing setup of wireless smart boards. If we apply this principle to the field of community healthcare, the benefits become apparent. Medical doctors can monitor patients in remote or rural locations [6]. This can enable medical personnel to make decisions or prioritize patients by their level of need for medical attention without having the patient physically present with a doctor [6]. The main challenge is to come up with an architecture that is both scalable and secure, and the system is of a safety-critical nature: the information it handles is medical data, and this data could save patients' lives. The system could be similar to the one proposed by Heinze et al. [3], except that it should not be limited to the remote monitoring of patients with heart problems; it could be generic for all medical conditions. Some previous research in the areas of privacy, reliable transfer of data, and resource allocation has been conducted.

Privacy: In the paper Secure Cloud-Based Medical Data Exchange, Neuhaus et al. [5] argue that healthcare is becoming more of an inter-institutional joint effort. They also argue that with the ever increasing popularity of Cloud-based data storage, the security and privacy of medical data stored on these platforms remain a challenge. Neuhaus et al. propose a secure way to store and disseminate data on the cloud by applying rights management techniques to data fragments in the cloud. Possible attackers on this medically critical data include curious cloud infrastructure providers, curious network providers, and curious end users [7]. Data integrity could be verified by digital signatures and checksums. A typical Cloud Computing setup could resemble Figure 2.

Figure 2: Security Perimeter: Trusted and Untrusted Zones [1]
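The integrity checks mentioned above (digital signatures and checksums) can be sketched with Python's standard library. A real deployment would use public-key signatures; a keyed HMAC over each data fragment illustrates the idea, and the shared key shown here is a simplifying assumption.

```python
import hashlib
import hmac

def checksum(fragment: bytes) -> str:
    # Plain checksum: detects accidental corruption in transit.
    return hashlib.sha256(fragment).hexdigest()

def sign(fragment: bytes, key: bytes) -> str:
    # Keyed MAC: also detects tampering by untrusted storage or network parties.
    return hmac.new(key, fragment, hashlib.sha256).hexdigest()

def verify(fragment: bytes, key: bytes, tag: str) -> bool:
    # Timing-safe comparison of the recomputed tag against the stored one.
    return hmac.compare_digest(sign(fragment, key), tag)

key = b"shared-secret-between-board-and-centre"  # assumption for illustration
record = b'{"patient": "anon-17", "systolic": 128, "diastolic": 82}'
tag = sign(record, key)
print(verify(record, key, tag))                # True
print(verify(record + b"tampered", key, tag))  # False
```

A curious storage or network provider (the attacker model above) can read or alter a fragment, but without the key it cannot forge a valid tag, so tampering is detected on retrieval.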


Marinos et al. [4] made an argument from another angle. In the paper Community Cloud Computing, a discussion of convenience vs. control (applied to Cloud Computing) shows that the increasing adoption of vendor Cloud platforms is due to the convenience of the services that they (Amazon, Google, and Microsoft) provide. This convenience comes at the price of control. When it comes to critical data such as medical information, a well controlled cloud infrastructure is preferred, since whoever controls an infrastructure can also enforce a stringent security policy on it.

Project Fontane: Project Fontane is a large-scale research effort by a consortium to experimentally investigate the remote monitoring of heart patients.

Reliable Data Transmission: Medical data must also be transmitted reliably. The transfer of medical data between remote patient monitoring nodes and the cloud infrastructure must have a high quality of service, with high resilience, redundancy, and reliability. In the context of the Fontane project [6], mobile communications technology (UMTS) was used. This was, however, a live stream of medical data (medical data sampled at a high rate). A usability experiment on the Fontane architecture was conducted on 5 of the 40,000 Berlin-Marathon athletes, and the system was a success. This system, however, has some scalability drawbacks. There needs to be a rule-based, self-adaptive middleware that can cope with high volumes of patient data and suggest a review order for that data. Schacht et al. [7] have come up with such an architecture, the self-adaptive prioritizing middleware (SaPiMa), for remote monitoring in the Fontane project.
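The "suggested review order" idea can be sketched as a priority queue over incoming patient records. The scoring rule below is an illustrative assumption, not SaPiMa's actual rule set, which a rule-based middleware would let clinicians define.

```python
import heapq

def review_priority(record):
    # Illustrative rule: flagged abnormal readings first; among equals,
    # records that have waited longest come first (hence the negated age).
    return (0 if record["abnormal"] else 1, -record["age_minutes"])

records = [
    {"patient": "p1", "abnormal": False, "age_minutes": 50},
    {"patient": "p2", "abnormal": True,  "age_minutes": 5},
    {"patient": "p3", "abnormal": True,  "age_minutes": 30},
]

# The index i is a tie-breaker so heapq never compares the dicts themselves.
heap = [(review_priority(r), i, r) for i, r in enumerate(records)]
heapq.heapify(heap)
order = [heapq.heappop(heap)[2]["patient"] for _ in range(len(records))]
print(order)  # ['p3', 'p2', 'p1']: abnormal and longest-waiting first
```

This is the scalability lever the Fontane project needed: doctors review the queue head instead of every incoming fragment.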

Resource Allocation: In a Participatory Cloud, the system should automatically be able to allocate resources to nodes. Energy-efficient resource allocation algorithms need to be developed, and these algorithms should adapt to changes in the size of the Participatory Cloud.

CloudSim [1] is a toolkit designed to enable engineers to model and simulate Clouds as well as execute applications and processes on top of Clouds. It is a highly customizable tool which allows engineers to enforce arbitrary policies on components. CloudSim provides data allocation policy modeling. It is extremely important to have a data allocation policy which can be adapted to a Participatory Cloud scheme.
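CloudSim itself is a Java toolkit; as a language-neutral sketch, the kind of adaptive data allocation policy discussed above might look like the following, where fragments are rebalanced whenever the set of participating nodes changes. All names are illustrative assumptions.

```python
def allocate(fragments, nodes):
    """Round-robin data allocation over the current node set.

    Re-running this after nodes join or leave rebalances the fragments,
    which is the adaptivity a Participatory Cloud policy needs.
    """
    if not nodes:
        raise ValueError("no nodes available in the Participatory Cloud")
    placement = {node: [] for node in nodes}
    for i, fragment in enumerate(fragments):
        placement[nodes[i % len(nodes)]].append(fragment)
    return placement

fragments = [f"frag-{i}" for i in range(7)]
before = allocate(fragments, ["board-a", "board-b", "board-c"])
after = allocate(fragments, ["board-a", "board-b"])  # board-c left the cloud
print({n: len(fs) for n, fs in before.items()})  # {'board-a': 3, 'board-b': 2, 'board-c': 2}
print({n: len(fs) for n, fs in after.items()})   # {'board-a': 4, 'board-b': 3}
```

A production policy would also weight nodes by spare capacity and energy budget; the simulation value of a CloudSim-style tool is that such policies can be swapped in and compared without deploying real smart boards.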

An ideal Community Cloud should "gracefully fail" [4]: if a few nodes fail, the cloud should sustain only minimal damage or downtime, if any, with the unaffected nodes compensating for the fallen ones. The Community Cloud is expected to have a smaller carbon footprint than traditional data centers, since it merely makes use of underutilized machines. The quality of service of a Community Cloud can only be tested when the cloud is at maximum capacity. QoS with respect to data transmission is a huge concern, especially since the data is medical in nature: doctors and other medical professionals need it for diagnoses and decision making.

Experimental research has been conducted in the field of remote community healthcare; one such effort is Project Fontane. Project Fontane is a collaborative research effort by a consortium of numerous partners, among them Deutsche Telekom, the Hasso-Plattner-Institut, and some heart specialists [6]. The project was launched to increase the quality of medical care and to spare rural German citizens the inconvenience of commuting vast distances for medical checkups. Patients would be equipped with monitoring smart boards (blood pressure meters, mobile electrocardiographs, etc.) and the data would be sent to a tele-medical center monitored by medical experts. A challenge experienced by the Fontane project was the high volume of data being sent back to the tele-medical center, since patients are monitored on a 24x7 basis; this also meant that medical experts had to be available to diagnose patients and store the data on the system. The Fontane project realized that, due to the highly variable amount of data that had to be transmitted and displayed to doctors, the classification and prioritization of information could no longer be performed manually. The process had to be automated to make the system scalable [6]; the system also needed to be affordable, and adding more doctors would make it less scalable and more expensive. The paper by Schacht et al. [7] builds on the Fontane architecture by adapting it to perform well when live streaming medical data over a UMTS mobile communications network. The difficulty is that mobile communication networks are not reliable for critical data transmission. There were three challenges in live streaming the medical data [7]:

• Devices

• Connections

• Data Transmission

To get around these issues, the authors used a TCP-like protocol for streaming the data: each data packet had to be acknowledged to confirm its transmission.
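This acknowledgment scheme can be sketched as a stop-and-wait loop over an unreliable channel. The simulated lossy channel and retry limit below are illustrative assumptions, not the Fontane implementation.

```python
import random

def send_reliably(packets, channel, max_retries=5):
    """Stop-and-wait: resend each packet until the receiver acknowledges it."""
    delivered = []
    for seq, packet in enumerate(packets):
        for _attempt in range(max_retries):
            ack = channel(seq, packet)  # returns seq on success, None on loss
            if ack == seq:
                delivered.append(packet)
                break
        else:
            raise TimeoutError(f"packet {seq} not acknowledged after {max_retries} tries")
    return delivered

def lossy_channel(loss_rate=0.3, rng=random.Random(42)):
    # Simulated UMTS-like link that drops a packet (or its ACK) at random.
    def channel(seq, packet):
        return None if rng.random() < loss_rate else seq
    return channel

data = [b"ecg-sample-%d" % i for i in range(4)]
received = send_reliably(data, lossy_channel())
print(received == data)  # True: every packet was eventually acknowledged
```

The trade-off is the one the Fontane evaluation observed: per-packet acknowledgments buy reliability on a lossy mobile link at the cost of latency and throughput for high-rate medical streams.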

3. Critical Comparison

The authors of Secure Cloud-Based Medical Data Exchange [5] focused on the secure storage and secure communication of data held in the cloud. This is a crucial aspect not to overlook: the nature of the data is critically important to the lives and health of the subjects using a remote health monitoring system. Some of the papers covered in this synthesis [7], [3] either gloss over the privacy issues surrounding this data or mention the security policy only in passing. The paper by Schacht et al. [7] addresses QoS issues surrounding the transmission of this important data by considering the problems of sending data through a wireless community network (e.g. UMTS). The Fontane project realized that it would not be a scalable endeavor if every data fragment had to be analyzed by professional doctors; Heinze et al. [3] improved scalability by using a hybrid artificial intelligence classifier which combines a rule-based classifier with a traditional neural network to get results from patient data.

The Participatory Cloud system would be rather similar to the one presented by Heinze et al., the main difference being that the data would intentionally be stored in an untrusted but redundant cloud storage location. The participatory cloud component comes in where the actual data processing is conducted by participatory nodes (wireless smart boards). These smart boards would be both consumers and producers (and some coordinators): the consumers would pull the data from the cloud, and the producers would process and distribute data around the participatory cloud network. Processing this data can include running large-scale machine learning algorithms or rule-based classifiers over it to cope with large-scale operations. The processed data would then be sent back to the cloud (encrypted) for storage.


Sustainability was addressed by Marinos et al. [4] but not clearly substantiated: they stated, "We expect the Community Cloud to have a smaller carbon footprint than vendor clouds", but did not back up that claim. More research is needed to determine whether Community Cloud Computing / Participatory Cloud Computing has a smaller environmental impact than a traditional data center doing the same amount of work.

4. Conclusions

In conclusion, this paper outlined how Community Cloud Computing tries to improve on the disadvantages of traditional Cloud Computing, and suggested that Participatory Cloud Computing build upon the Community Cloud Computing scheme by utilizing underused nodes to process data; these nodes could be smart boards of any platform type.

The previous works outlined in this paper have some definite benefits, including architectural outlines which can be used for future work. The safety and privacy of medical data were also discussed. Major issues surrounding data safety include:

• Connection Reliability

• Privacy of Data

• Sustainability

There have been some experimental results surrounding the streaming of live medical data which can assist others in implementing future systems of a similar nature.

Designing an efficient architecture for Participatory Cloud Computing is an open-ended problem. Experimental analysis should be conducted on various architectures to find the most efficient one: an architecture which maximizes data safety, minimizes carbon footprint, and ensures that data transmissions run on top of a high quality of service communications backbone.


5. References

[1] Buyya, R. et al. 2009. Cloudbus Toolkit for Market-Oriented Cloud Computing. (2009), 24-44.

[2] Buyya, R. et al. 2008. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities. (2008).

[3] Heinze, T. et al. 2011. A Hybrid Artificial Intelligence System for Assistance in Remote Monitoring of Heart Patients. (2011), 413-420.

[4] Marinos, A. and Briscoe, G. 2009. Community Cloud Computing. (Dec. 2009), 1-4.

[5] Neuhaus, C. et al. Secure Cloud-based Medical Data Exchange.

[6] Polze, A. et al. A scalable , self-adaptive architecture for remote patient monitoring.

[7] Schacht, A. et al. 2011. Live Streaming of Medical Data - The Fontane Architecture for Remote Patient Monitoring and Its Experimental Evaluation. 2011 14th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops. (Mar. 2011), 306-312.
