In existing cloud storage systems, mobile devices can only back up data and retrieve it from the cloud; the same data cannot be accessed simultaneously from both a mobile device and a computer. In Android, there is no proper facility to save data to and retrieve data from the cloud. Memory cards offer no proper security when the user loses the mobile device or it suffers physical damage. The security of the cloud is essentially strong and the private internet access area is also secured. But the data
Dr. Gerard Steube has conducted research in a wide variety of areas including health, defense, and education. He has expertise in Information Technology and Management Science. He has participated in presentations on software complexity at major computer science conferences and conducted research that profiles computer hackers. In addition, he has provided consulting services for federal, state, and private organizations in developing information technology policies, software management plans, and outsourcing strategies. Dr. Steube provided statistical examination of Medicare and Medicaid data for the state of Maryland. He has worked on federal grants for investigating software complexity and networking and has developed software in C++, COBOL, FORTRAN, Perl, PHP, SQL, and Java. In industry he has held positions including the director of information technology, chief computer scientist, executive director of technology, director of software research, and research statistician. He was awarded the CCP (Certified Computer Professional) designation from the Institute for the Certification of Computer Professionals (ICCP) and is certified for Institutional Research Board work by the Collaborative Institutional Training Initiative. Dr. Steube is a member of MENSA and a certified MENSA test proctor; he has served as the editor for the MENSA International Journal. He serves as a reviewer and editorial board member for the Journal of Information Systems Education (JISE). Dr. Steube is a voting member in the American Psychological Association, the Mathematical Association of America, the Association for Computing Machinery, and the American Statistical Association. He is a published restoration photographer and a Web site designer. E-mail: firstname.lastname@example.org.
Flash memory devices called flash drives, with capacities of up to a few hundred GBs, are available for general mass storage applications. These units are packaged in small plastic cases approximately three inches long with a removable cap on one end to protect the unit's electrical connector when the drive is off-line. The high capacity of these portable units, as well as the fact that they are easily connected to and disconnected from a computer, makes them ideal for off-line data storage. However, the vulnerability of their tiny storage chambers dictates that they are not as reliable as optical disks for truly long-term applications. Another application of flash technology is found in SD (Secure Digital) memory cards (or just SD cards). These provide up to two GBs of storage and are packaged in a rigid plastic wafer about the size of a postage stamp (SD cards are also available in smaller mini and micro sizes). SDHC (High Capacity) memory cards can provide up to 32 GBs, and the next-generation SDXC (Extended Capacity) memory cards may exceed a TB. Given their compact physical size, these cards conveniently slip into slots of small electronic devices. Thus, they are ideal for digital cameras, smartphones, music players, car navigation systems, and a host of other electronic appliances.
Data access control is an effective way to ensure data security in the cloud. However, cloud storage services separate the role of the data owner from that of the data service provider, and the data owner does not interact with users directly to provide data access service, which makes data access control a challenging issue in cloud storage systems. Because the cloud server cannot be fully trusted by data owners, traditional server-based access control methods are no longer applicable to cloud storage systems. To block untrusted servers from accessing sensitive data, traditional methods usually encrypt the data so that only users holding valid keys can access it. Many access control schemes employing attribute-based encryption have been proposed; these adopt so-called key-policy attribute-based encryption (KP-ABE) to enforce fine-grained access control. However, KP-ABE lacks flexibility in attribute management and scalability in dealing with multiple levels of attribute authorities. Compared to KP-ABE, ciphertext-policy ABE (CP-ABE) turns out to be better suited for access control due to its expressiveness in describing access control policies.
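The key property of CP-ABE described above, namely that the ciphertext carries the access policy and a user's attribute set must satisfy it, can be illustrated without any cryptography. The sketch below evaluates a boolean policy tree against an attribute set; the tuple-based policy format and the attribute names are illustrative assumptions, not part of any real CP-ABE library.

```python
# Conceptual sketch of CP-ABE-style policy evaluation (no real cryptography):
# in CP-ABE the ciphertext embeds an access policy and a user's key embeds
# attributes; decryption succeeds only if the attributes satisfy the policy.

def satisfies(policy, attributes):
    """Return True if the attribute set satisfies the policy tree."""
    if isinstance(policy, str):              # leaf: a single required attribute
        return policy in attributes
    op, *children = policy                   # internal node: ("AND"|"OR", ...)
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "AND" else any(results)

# Example policy: (doctor AND cardiology) OR admin
policy = ("OR", ("AND", "doctor", "cardiology"), "admin")
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"doctor"}))                # False
print(satisfies(policy, {"admin"}))                 # True
```

In a real CP-ABE construction this check happens implicitly inside the pairing-based decryption algorithm rather than as an explicit boolean test.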
Sensor-Cloud infrastructure has been proposed and developed by several researchers in recent years. It is an extended form of cloud computing for managing the sensors scattered throughout a wireless sensor network (WSN). Due to the increasing demand for sensor network applications and the support cloud computing provides for a number of services, the Sensor-Cloud service architecture was introduced as an integration of cloud computing into the WSN to enable a number of new services. When a WSN is integrated with a cloud computing environment, several shortfalls of WSNs, such as limited storage capacity for the data collected on sensor nodes and limited processing of these data, become much easier to address. Since cloud computing provides vast storage capacity and processing capabilities, it enables the collection of huge amounts of sensor data by linking the WSN and the cloud through gateways on both sides, that is, a sensor gateway and a cloud gateway. The sensor gateway collects information from the sensor nodes of the WSN, compresses it, and transmits it to the cloud gateway, which in turn decompresses it and stores it in the sufficiently large cloud storage server. Sensor-Cloud can be used in many real-life applications such as environmental monitoring, disaster monitoring, telemetry, agriculture, irrigation, healthcare, and so forth. As an illustration, the Sensor-Cloud infrastructure can be used to deploy health-related applications such as monitoring patients with cardiovascular disease, blood sugar follow-up, sleep activity pattern monitoring, diabetes monitoring, and so forth. In the traditional approach, readings of an individual's data such as blood sugar level, weight, heart rate, and pulse rate are reported every day through some telemedicine interface. The patient's information is sent to a dedicated server and stored there for doctors or caregivers to analyse later.
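The two-gateway flow described above (sensor gateway compresses, cloud gateway decompresses before storage) can be sketched with standard-library compression. The function names and the JSON-over-zlib encoding are illustrative assumptions, not a prescribed Sensor-Cloud protocol.

```python
import json
import zlib

def sensor_gateway_pack(readings):
    """Serialize sensor readings and compress them for transmission."""
    return zlib.compress(json.dumps(readings).encode("utf-8"))

def cloud_gateway_unpack(payload):
    """Decompress a received payload back into sensor readings."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

# Repetitive sensor records compress well, saving WSN-to-cloud bandwidth.
readings = [{"node": i, "temp_c": 20 + i * 0.5} for i in range(100)]
payload = sensor_gateway_pack(readings)
assert cloud_gateway_unpack(payload) == readings
print(len(json.dumps(readings)), "bytes raw ->", len(payload), "bytes compressed")
```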
This system performs poorly when the patient moves away from the current location, that is, when a patient is "on the go." Thus, a more progressive, rapid, and mobile approach is needed, in which the recorded data from several sensor nodes of a WSN can be processed in a pipelined and parallel fashion, thereby making the system easier to scale and cost-effective in terms of the resources available. Pipelined processing of data sets or instructions enables overlapped operations in a conceptual pipe, with all the stages of the pipe processing
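The overlapped, staged processing described above can be sketched with generator-based stages: each stage consumes the previous stage's output as soon as items are available, rather than waiting for a complete batch. The stage names, calibration offset, and anomaly threshold are made-up illustrations.

```python
# Minimal pipeline sketch: three stages connected as generators, so an item
# flows through all stages before the next item is even read (overlapped work).

def read_samples(raw):
    for value in raw:
        yield value

def calibrate(samples, offset=0.5):
    for s in samples:
        yield s + offset

def detect_anomalies(samples, threshold=40.0):
    for s in samples:
        if s > threshold:
            yield s

raw = [36.5, 37.0, 41.2, 36.8, 42.5]        # e.g. body-temperature readings
pipeline = detect_anomalies(calibrate(read_samples(raw)))
print(list(pipeline))  # readings above the threshold after calibration
```

In a real Sensor-Cloud deployment the stages would run on separate workers (or cloud instances) connected by queues, but the dataflow structure is the same.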
The increased complexity of electrical power distribution systems has a direct impact on the reliability of the power supply and on power quality. The main causes of inefficient operation of power distribution systems are overloading of the power supply feeders, line losses, and uneven loading of the feeders. To handle these situations, which eventually lead to under-voltage or over-voltage conditions in a power distribution system and affect the reliability of the power supply, we need to continuously monitor system parameters such as voltage, current, power, and frequency in real time. Because the distribution points in a power distribution system are sparsely distributed, it is difficult for operators to monitor all of them in person. This raises the requirement for a system that can monitor the electrical parameters at all the distribution points and enable remote monitoring of the condition of the distribution system. Available monitoring systems such as SCADA, and those implemented using PLCs, are widely used for monitoring and control of large-scale power distribution systems and industrial setups. But the high cost of implementation and the need for skilled manpower to operate these systems make them unsuitable for small power distribution systems, such as those of an educational organization or a small industrial setup. Apart from the options discussed above, many metering devices are already available in the market, and these devices can be coupled with a specified third-party service for communication and data transmission. The problem with such an arrangement is the high cost of the available measuring devices; moreover, they do not have an integrated communication unit, which creates the need for external devices and a service provider for all communication and data transmission.
Abstract— With the fast evolution of digital data exchange, information security has become much more important in data storage and transmission. It is essential to protect confidential data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES). To secure confidential military data, we use an encryption algorithm that takes advantage of the strengths of a superior encryption algorithm. The proposed implementation was realized on an ARM7 processor to encrypt and decrypt confidential data on data storage devices such as an SD card or pen drive. The main objective of the proposed implementation is to provide protection for storage devices. The ARM processor and the encryption algorithm successfully protect data accessibility, reliability, and privacy. The AES algorithm is widely used in embedded systems and fixed installations, and is used in properly designed defense systems for security. The AES algorithm is a block cipher that can encrypt and decrypt digital information.
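The encrypt-before-store / decrypt-after-load workflow described above can be sketched as follows. To keep the example self-contained, it uses a toy hash-based XOR keystream as a stand-in for AES; this is NOT the paper's AES implementation, and a real system should use a vetted AES mode (e.g. AES-GCM) from a cryptographic library instead.

```python
import hashlib

# Toy keystream cipher used ONLY to illustrate the store/load flow; it is a
# stand-in for AES, not a secure or standards-compliant cipher.

def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric: the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared-secret-key"
plaintext = b"coordinates: 17.385N 78.486E"
ciphertext = xor_cipher(key, plaintext)   # what gets written to the SD card
recovered = xor_cipher(key, ciphertext)   # what the reader decrypts back
assert recovered == plaintext and ciphertext != plaintext
```

On the ARM7 target, the same pattern applies: only ciphertext ever reaches the removable storage device, and the key stays on the processor.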
Recent research in Mobile Cloud Computing has mainly focused on either full or partial offloading of applications from local devices to the cloud, without supporting a decentralized resource allocation mechanism. Most existing frameworks do not consider RAM size in the decision-making process for choosing an execution environment, even though RAM is one of the main resources for executing applications on mobile devices. Moreover, security concerns have received little attention when migrating images to clouds. This work therefore proposes a secured, dynamic, and decentralized model to overcome the constraints mentioned above, while also aiming to minimize response time and energy consumption. This research proposes a dynamic and novel framework that uses decentralized resource allocation for offloading the execution of computation-intensive tasks to clouds. The main objectives and contributions of this work are stated below:
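A RAM-aware offloading decision of the kind motivated above can be sketched as a simple rule: a task runs locally only if it fits in the device's free RAM and local execution beats cloud execution once transfer time is included. The function name, parameters, and the cost model are illustrative assumptions, not the framework's actual scheme.

```python
# Hedged sketch of a RAM-aware offloading decision (assumed cost model).

def decide_execution(task_ram_mb, local_free_ram_mb,
                     local_time_s, cloud_time_s, transfer_time_s):
    if task_ram_mb > local_free_ram_mb:
        return "offload"                  # task cannot fit in device RAM
    if cloud_time_s + transfer_time_s < local_time_s:
        return "offload"                  # cloud wins even with transfer cost
    return "local"

print(decide_execution(512, 256, 10.0, 2.0, 1.0))  # offload: not enough RAM
print(decide_execution(128, 256, 10.0, 2.0, 1.0))  # offload: cloud is faster
print(decide_execution(128, 256, 2.0, 2.0, 1.0))   # local: transfer not worth it
```

A full framework would also weigh energy consumption and network conditions, but RAM availability acts as the hard constraint, as the text argues.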
This module uses an Attribute-Based Data Storage (ABDS) scheme based on PP-CP-ABE to enable efficient, scalable data management and sharing. The ABDS system achieves scalable and fine-grained data access control using public cloud services. In the ABDS algorithm, user attributes are organized in a carefully constructed hierarchy so that the cost of membership revocation can be minimized. Moreover, ABDS is suitable for mobile computing: it balances communication and storage overhead, and thus reduces the cost of data management operations for both the mobile devices and the cloud infrastructure.
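The revocation-cost argument above can be illustrated with a toy attribute hierarchy: revoking an attribute node only requires reissuing keys for its subtree, not for every key in the system. The dictionary layout and attribute names below are assumptions for illustration, not the ABDS paper's actual construction.

```python
# Toy attribute hierarchy (parent -> children); names are made up.
hierarchy = {
    "hospital": ["cardiology", "radiology"],
    "cardiology": ["cardio_doctor", "cardio_nurse"],
    "radiology": ["radio_doctor"],
}

def affected_attributes(root):
    """All attributes whose keys must be reissued when `root` is revoked."""
    stack, seen = [root], []
    while stack:
        node = stack.pop()
        seen.append(node)
        stack.extend(hierarchy.get(node, []))
    return seen

# Revoking one department touches only its subtree, not the whole hierarchy.
print(sorted(affected_attributes("cardiology")))
print(len(affected_attributes("cardiology")), "<", len(affected_attributes("hospital")))
```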
By throwing a Halting-Problem wrench into the works of guessing that iteration count, we widen the security gap with any attacker to its theoretical optimum. Halting Key Derivation Functions are practical and universal: they work with any password and any hardware, and require only a minor change to the user interface. They show how "pure password"-based encryption can best withstand the most dedicated offline dictionary attacker, regardless of password strength. A password can be typed quickly and discreetly on a variety of devices, and the scheme remains effective in constrained environments with basic input and no output capabilities.
In this paper, we focus on a fuzzy-based approach for disk scheduling optimization. We apply fuzzy logic to the disk scheduling policy; the resulting algorithm is embedded firmware added to the hard disk controller. The time taken for the beginning of the desired sector to reach the head is known as the rotational delay or rotational latency. The sum of the seek time and the rotational delay equals the access time. We studied several scheduling policies: First Come First Serve (FCFS), Shortest Seek Time First (SSTF), the SCAN scheduling policy, Cyclic-SCAN (C-SCAN), LOOK, and Cyclic-LOOK (C-LOOK). These scheduling policies are used to access records from memory locations in operating systems and aim to provide quick response times for query processing on devices.
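Two of the classical policies listed above can be compared directly by total head movement. The sketch below uses the textbook request queue with the head starting at cylinder 53; the cylinder numbers are illustrative.

```python
# Compare total head movement (in cylinders) for FCFS vs SSTF scheduling.

def fcfs_distance(start, requests):
    """Serve requests in arrival order."""
    total, head = 0, start
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_distance(start, requests):
    """Always serve the pending request closest to the current head position."""
    total, head, pending = 0, start, list(requests)
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print("FCFS:", fcfs_distance(53, requests))   # 640 cylinders
print("SSTF:", sstf_distance(53, requests))   # 236 cylinders
```

A fuzzy-based policy of the kind the paper proposes would replace the crisp "nearest request" rule in SSTF with fuzzy membership functions over seek distance and waiting time, trading a little seek optimality for fairness.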
Based on the above requirements, among the numerous architectures proposed, one well-known framework divides E-Learning into three eras. E-Learning 1.0, the first era, makes use of a direct transfer model in which the instructor is the distributor of learning material. E-Learning 2.0 is built on wikis, blogs, podcasts, and other social web tools. E-Learning 3.0 is transformed by the emergence of cloud computing, the semantic web, increased data storage capacity, high screen resolutions, multi-gesture devices, 3D Touch, 3D avatars, and artificial intelligence systems. The integration of web services with these components can lead society toward the successful implementation of a virtual learning environment (VLE), where a continuous flow of knowledge can improve the learning environment, with the hope of creating an E-Learning society.
Tests conducted on the SCADA system aim to determine whether the designed system functions properly and in accordance with the recommended specifications. The tests performed include: first, making sure all devices on the central computer and the site plant are connected in accordance with the procedure; second, making sure the SCADA application operates in line with expectations; and third, testing the SCADA system's communication between the devices on the site plant, the central computer, and the user.
Abstract: ICT has brought many challenges and opportunities and has influenced the world like no other invention in the recent past. The field of education has also been greatly influenced by ICTs, which have certainly altered the whole education process. Hence, teacher-educators are required to have a positive attitude, along with adequate knowledge and use of ICT tools and devices in the educational process, if they wish to utilize current techniques and technologies effectively for prospective teachers. The purpose of this study is to investigate the attitude of teacher-educators towards the use of ICT, along with their knowledge and levels of usage of ICT tools and devices, in teacher-training colleges. A self-prepared interview guide was used in the present study. A purposive sampling technique was employed to select a sample of 50 teacher-educators working in different teacher-training colleges in the State of Haryana, India. The findings of the study revealed that the teacher-educators have, to some extent, a positive attitude towards ICT and the usage of its tools and devices in the teacher education process. The findings also disclose that teacher-educators lack training and technical support, and that they have some anxiety about using ICT tools and devices during the teaching-learning process. In addition, teacher-educators lack motivation and enthusiasm towards the use of ICT tools and devices in the teacher-education process. It was found that if ICT training, technical support, ICT resources, motivation, management support, and the benefits of ICT in the education process are made known to teacher-educators, they can successfully incorporate ICT into the teacher education process.
According to the "father of AI", John McCarthy, AI is "the science and engineering of making intelligent machines, especially intelligent computer programs". Artificial Intelligence is the process of machines making smart decisions in a manner similar to the way intelligent humans think. AI is concerned with learning how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. Essentially, it creates an expert system that incorporates human intelligence into machines, producing a system that understands, thinks, learns, and acts like humans. We have proposed the use of AI techniques in IoT systems to reduce the security issues discussed in the previous sections. A few AI techniques are listed in Table 4, with their advantages and disadvantages for reducing security issues in IoT.
Anytime the system loses power, is shut down, or becomes disabled because of a system crash, it usually needs to be rebooted or initial program loaded (IPLed). A system crash is the result of a hardware, software, or operation problem: a malfunction in the CPU, a programming error from which the operating system could not recover, or an operator error caused by an incorrect response to a message. Booting most systems resets all status indicators and reloads the supervisor (the executive-system program along with other resident routines) into the CPU memory. The manner in which the system is booted depends upon the computer system used and the software included in its operating system. Many of the larger mainframe computers store their operating systems on disk, and this disk is referred to as the SYStem RESident (SYSRES) pack. Once the disk unit with the SYSRES pack is in a ready status, you can then boot the system. Some systems are so simple to boot that all you need do is depress the start (or load) button on the CPU (or master console) and enter the date and time on the console keyboard. Some of the more complex systems may require you to take additional steps—assigning various I/O devices, partitioning (sectioning off) memory, and so on. It is because of these differences that boot procedures are well documented with each step explained to the point that
There is a variety of serial and parallel I/O channel formats that you may encounter as a technician. Do not take for granted the type of interface a computer uses. A single different pin in a connector or a different voltage level used by a computer can make a vast difference when you are performing maintenance. Your computer's technical manual will provide the standards to be used with the cabinet and cable connectors. They will match the standards that govern the requirements for parallel and serial interfacing. Table 7-1, from MIL-STD-2036, General Requirements For Electronic Equipment Specifications, provides you with some of the accepted standard external interfaces. We do not cover the General-Purpose Interface Bus (GPIB), Fiber Distributed Data Interface (FDDI), and TACTICAL. Other interfaces used but not listed in the table include RS-449, Centronics Parallel, ST-506/412, Enhanced Small Device Interface (ESDI), Integrated Drive Electronics (IDE), and Enhanced Integrated Drive Electronics (EIDE). We discuss signal designations in more detail later in this topic under serial and parallel I/O operations. First, let's look at the various interfaces and some of their applications and any unique characteristics. As stated, each interface is governed by a standard.
In software-defined networking (SDN), the control plane of a network element is separated from its data plane functions. SDN technology is used in data centers to manage network traffic effectively. SDN principles can also be applied to other areas such as storage, security, and service-level agreements. Software-defined cloud computing (SDCC) is an approach in which all aspects of a data centre providing services to the users are software-