Edge Computing Platforms and Protocols

Also referred to as Mobile Clouds (MC) in the research community [32], Mobile Cloud Computing represents a broader platform in which specialized compute servers are deployed at the edge of the ce[r]


Internet of Things future in Edge Computing

Defining characteristics of the Fog are: low latency and location awareness; widespread geographical distribution; mobility; a very large number of nodes; a predominant role of wireless access; a strong presence of streaming and real-time applications; and heterogeneity. Edge computing has the potential to address response-time requirements, battery-life constraints, bandwidth cost savings, and data safety and privacy. The remainder of this paper is organized as follows: Section II presents the IoT architecture model, Section III discusses the various application protocols in IoT, Section IV covers the need for edge computing and its definition, and Section V presents research directions in the field of edge computing, followed by Section VI with the conclusion.

Survey on Edge Cloud and Edge Computing with Emerging Technologies

(3) Monopoly vs. open IoT competition. Current centralized cloud infrastructure is usually expensive to build and affordable only to giant companies, which tend to define and use proprietary protocols. Customers become locked into specific infrastructures because the cost of switching to others can be prohibitive. Such lack of openness could lead to a monopoly and ossification of the Internet, and further inhibit innovation.


Cloud Computing at the Tactical Edge

VM synthesis is particularly useful in tactical environments characterized by unreliable networks and bandwidth, unplanned loss of cyber-foraging platforms, and a need for rapid deployment. For example, imagine a scenario in which a soldier must execute a computation-intensive app configured to work with cloudlets. At runtime, the app discovers a nearby cloudlet located on a Humvee and offloads the computation-intensive portion of code to it. However, due to enemy attacks, loss of network connectivity, or exhaustion of energy sources on the cloudlet, the mobile app is disconnected from the cloudlet. The mobile app can then locate a different cloudlet (e.g., in an unmanned aerial vehicle [UAV]) and have the app running in a short amount of time with no need for any configuration on the app or the cloudlet. This runtime flexibility enables the use of whatever resources become opportunistically available as well as replacement of lost cyber-foraging resources and dynamic customization of newly acquired cyber-foraging resources.
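The runtime failover the excerpt describes can be sketched in a few lines. The `Cloudlet` class, its fields, and the overlay name below are hypothetical stand-ins rather than the paper's actual API, and VM synthesis itself is reduced to a stub:

```python
class Cloudlet:
    """Hypothetical stand-in for a cyber-foraging platform."""
    def __init__(self, name, reachable=True):
        self.name, self.reachable = name, reachable

    def synthesize(self, overlay):
        # Stub for VM synthesis from an application overlay.
        if not self.reachable:
            raise ConnectionError(self.name)

    def execute(self, payload):
        return f"{payload} done on {self.name}"

def offload(payload, overlay, cloudlets):
    """Try each discovered cloudlet in turn; on disconnection (attack,
    lost connectivity, energy exhaustion), re-synthesize the same
    overlay on the next one with no reconfiguration."""
    for c in cloudlets:
        try:
            c.synthesize(overlay)
            return c.execute(payload)
        except ConnectionError:
            continue  # e.g. the Humvee cloudlet was lost
    raise RuntimeError("no cloudlet reachable")

print(offload("detect", "app-overlay.img",
              [Cloudlet("humvee", reachable=False), Cloudlet("uav")]))
# → detect done on uav
```

The key property is that the same overlay works on any discovered cloudlet, so failover needs no per-platform configuration.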

Cloud computing applications and platforms: a survey

Plug-in Hybrid Electric Vehicle (PHEV) owners can access real-time price and state information and charge or discharge their vehicles at any time. The utility can determine where PHEVs are located and how much energy is needed, so total charge optimization is feasible in the smart grid model [24]. The communication and optimization components use cloud data to support Advanced Metering Infrastructure (AMI); [25] describes AMI for a micro-grid exploiting a cognitive radio network in the cloud data center. AMI is able to support PHEVs, and its main advantage is that it works with existing Base Transceiver Station (BTS) cellular service, although Ethernet protocols are not supported. IaaS is used by [26] to process streams of smart-meter data, while PaaS is used for real-time distributed data management and parallel processing of information.

Edge and Fog Computing in Healthcare – A Review

Various efficient devices for recording physiological symptoms and signals are now available on the market and can easily be connected to the Internet via smartphones, computers, or other nodal devices. Contemporary surveys show a strong appeal for fog computing inside IoT-based monitoring systems. The Internet of Things refers to physical objects such as appliances, electronics, and devices, continually evolving throughout the world, that feature Internet connectivity with an IP address and unique identities, and that are interconnected so that the various objects can interact and act accordingly. WBAN is one of the basic prototypes in the IoT approach to healthcare, used to obtain essential vital signs in real time from efficient, actively operating medical devices such as ECG monitors or pressure and temperature sensors. A WBAN consists of a multitude of imperceptible or wearable sensor nodes that sense data and convey it over a wireless network; the transmission protocols used are Wi-Fi or IEEE 802.15.4. A WBAN-based structure is low-cost and power-efficient and plays a leading role in several areas of the healthcare environment, such as tracking, clinical care, and the management of and precaution against various diseases, including chronic ones. Many health monitoring systems use distant cloud servers to store and organize the huge volumes of data obtained from large numbers of sensor nodes. Cloud computing has a number of benefits, such as inexpensive services, capacity for storing huge amounts of data, and low maintenance cost, but challenges also exist, such as large data transmission, location awareness, and latency-sensitive applications. As data transmission slows down, the possibility of data errors and packet drops increases.
The more data is sent over a grid system, the greater the chance of mistakes, which may result in wrong or less precise treatment decisions, with adverse effects in emergency situations and critical illness. There is thus a strong requirement to minimize the excess flow of data absorbed by the system while preserving the quality of service. One solution to the problem of a conventional gateway and a remote cloud server is to add a layer between them. This added layer is termed the fog layer; it helps reduce the large chunks of data while still guaranteeing service quality, protecting the bandwidth of the network by pre-analysing the data.

Mobile-Edge Computing

This type of portability removes the need for dedicated development or integration efforts on a per- platform basis, which would unnecessarily encumber the software-application developers. It allows for the rapid transfer of applications (which may occur on the fly) between MEC servers, providing the freedom to optimize, without constraints, the location and required resources of the virtual appliances. The precise but extensible definition of the services provided by the application platform is the key to ensuring application portability. The platform-management framework needs to be consistent across the different solutions to ensure that diverse management environments do not complicate the application developers’ work on Mobile-edge Computing. The tools and mechanisms used to package, deploy and manage applications also need to be consistent across platforms and vendors. This will allow software application developers to ensure the seamless integration of their applications’ management frameworks.

Influence of Monitoring: Fog and Edge Computing

• In contrast with VMs, the use of containers, which do not require an operating system (OS) to boot up, is an additional method of server virtualization that is rapidly growing in acceptance. Another reason is that container images are tiny compared to full VM images. Container-based virtualization platforms such as Google Kubernetes Engine (GKE) and Amazon EC2 Container Service (ECS) can be used as alternatives to the hypervisor-oriented approach. It is easy to pull container images of the application components across the nodes in the cloud. Agility is also required, as migration is the best tool for numerous purposes such as load balancing, reallocating resources (scalability), and dealing with hardware failures; the main intention here should be to reduce downtime in the cloud. Table 2.2 and Table 2.3 indicate the common set of container-level policies that can be monitored and are useful with respect to the content of the task's adaptation.

Distributed microservices evaluation in edge computing

• Geographical: how distances affect communication latencies
• Administrative: how easily a system spanning multiple independent organizations can be managed
Of the three dimensions of scalability, centralized systems mainly scale in the size dimension. When traffic to a centralized server increases, the server can only be scaled up, i.e., by upgrading the CPU, adding memory and storage, etc. However, scaling up is effective only up to a certain point, and as request volume increases, even modern machines will eventually run into problems. If clients are connected to the server over a network, geographical scaling comes into play in the form of transfer speed and physical distance [20]. Aside from well-optimized transfer protocols, centralized systems have no good means to address increased latencies and lost messages caused by wide-area network unreliability. For centralized systems, administrative scalability only determines the effect of the increased workload on the administrative workload [21], making the administrative dimension the least significant.

Automating Software Development for Mobile Computing Platforms

The first individual works mostly on Google’s search products, and his team practices the process of mock-up driven development, where developers work in tandem with a dedicated UI/UX team. Overall, the developer was quite positive about ReDraw, explaining that it could help to improve the process of writing a new Android app activity from scratch; however, he noted that “It’s a good starting point... From a development standpoint, the thing I would appreciate most is getting a lot of the boilerplate code done [automatically]”. In the “boilerplate” code statement, the developer was referring to the large amount of layout and style code that must be written when creating a new activity or view. He also admitted that this code is typically written by hand, stating, “I write all my GUI-code in xml, I don’t use the Android Studio editor, very few people use it”. He also explained that this GUI-code is time-consuming to write and debug, stating, “If you are trying to create a new activity with all its components, this can take hours”, in addition to the time required for the UI/UX team to verify proper implementation. The developer did state that some GUI-hierarchies he examined tended to have redundant containers, but that these can be easily fixed, stating, “There are going to be edge cases for different layouts, but these are easily fixed after the fact”.

Computing the everyday: Social media as data platforms

Viewed in this light, social media establish online a drastically simplified version of social interaction and communication. Essentially, on social media basic things or entities such as users, comments, photos, and posts are all classified as data objects, and every activity connecting two objects as an action. For instance, Facebook defines status updates, pictures, videos, etc. as objects because in this way objects can be connected, or, as Facebook calls it, edged (Bucher 2012). Through this elementary syntax, every action undertaken on Facebook generates an edge, that is, a link connecting two objects. “Liking an object, tagging a photo, leaving a comment, these are all edge generators.” Encoding activities such as sharing, tagging, liking, and so on provide connections between two objects that can be further computed (see Figure 2). By processing the data resulting from the encoding of user interaction, the system is able to extract potentially meaningful sets of information on user behavior. For instance, in the case of Facebook, connections or edges are ranked under different criteria, such as how recent they are (what Facebook calls time decay), or how close the two end-users connected are (what Facebook calls affinity) (see Bucher 2012).
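The ranking criteria mentioned at the end (time decay, affinity) can be illustrated with a toy score. The multiplicative form, the action weight, and the exponential one-day half-life below are illustrative assumptions, not Facebook's actual formula:

```python
def edge_score(affinity, weight, age_seconds, half_life=86400.0):
    """Toy EdgeRank-style score: affinity x action weight x time decay.
    Exponential decay with a one-day half-life is our assumption."""
    decay = 0.5 ** (age_seconds / half_life)
    return affinity * weight * decay

# A fresh edge from a close connection outranks the same edge a day later.
fresh = edge_score(affinity=0.8, weight=1.0, age_seconds=0)
day_old = edge_score(affinity=0.8, weight=1.0, age_seconds=86400)
print(fresh, day_old)  # → 0.8 0.4
```

The point is only that recency and closeness enter the same computable score, which is what makes edges rankable at all.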

Multimodal neuroimaging computing: the workflows, methods, and platforms

The bias and artifacts in MRI are mainly system-related, e.g., RF inhomogeneity causing slice and volume intensity inconsistency. The nonparametric nonuniformity normalization (N3) algorithm and its variant based on Insight Toolkit [54, 55] (N4ITK) [56] are the de facto standard in this area. The acquisition protocols for dMRI are inherently complex, requiring fast gradient switching in Echo-Planar Imaging (EPI) and longer scanning time. dMRI is prone to many other types of artifacts, such as eddy current, motion artifacts and gradient-wise inconsistencies [57]. Tortoise [58] and the FSL diffusion toolbox (FDT) [59] are popular choices for eddy current correction and motion correction in dMRI data, and the recently proposed DTIPrep [60] offers a thorough solution for all known data quality problems of dMRI. Motion is a serious issue in fMRI, and may lead to voxel displacements in serial fMRI volumes and between slices. Therefore, serial realignment and slice timing correction are required to eliminate the effects of head motion during the scanning session. Linear transformation is usually sufficient for serial alignment, whereas a non-linear auto-regression model is often used for slice timing correction [61]. These two types of correction are commonly performed using SPM and FSL. Dedicated PET scanners have been replaced by hybrid PET/CT systems [62]. The most commonly seen artifacts on PET/CT are mismatches between CT and PET images caused by body motion due to the long acquisition time of

Contributions to Edge Computing

bytes of data on a daily basis. Likewise, cities like Chicago are deploying general purpose sensor arrays to “track the city's vitals” [59]. The so-called sensor Array of Things deployed in Chicago will be used to generate both information for research and also city-wide decision making. While data generated from general purpose sensor arrays is multipurpose in nature, the policies pertaining to usage will vary greatly based on application. For instance, aggregated air quality information used to detect real-time threats to public safety should have a higher computational and transmission priority than the same information used for a long-term climate change study. However, while it is possible to provide traffic priorities between applications, existing end-to-end infrastructures typically don't differentiate between intra-application communications and processing priorities. Edge computing frameworks must be able to manage sensor arrays, communication networks, and computational resources in large data-intensive heterogeneous environments. In addition, such frameworks must extend operations beyond interactions with end-devices to communication and computational infrastructures providing end-to-end policy enforcement. The net result is a requirement for edge frameworks to support a wide range of devices, including resource components used in the provisioning of infrastructure and services. There is a need for standards and protocols to support edge computing. Our work is a step in this direction.

Platforms and Protocols for the Internet of Things

Inquiries: in this pattern the end device (or the element in charge of connecting the end device to the IP world) sends requests to the central aggregator, which then answers with the required information. This is exactly the use case addressed by the REST architecture, since it can be seen as a traditional request-response pattern. Here the HTTP server must be placed in the central aggregator, while the end devices (or the access gateways) are equipped with an HTTP client, which is less demanding in terms of computing resources than the HTTP server. Instead, the MQTT protocol does not formalize a way to exchange messages in a request-response pattern, so the parties must agree beforehand on a pair of topics for this pattern: a topic for publishing requests and another for publishing responses. However, the OASIS MQTT Technical Committee is currently working on a mechanism to formally enable the request-response messaging pattern in MQTT. AMQP, instead, already supports a mechanism to enable the request-response message exchange, thus providing the flexibility needed to operate in this scenario.
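The agreed topic-pair convention described for MQTT can be sketched with a minimal in-memory broker. The `Broker` class and the topic names are illustrative; a real deployment would use an MQTT client library with the same two pre-agreed topics:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish/subscribe broker (illustration only)."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subs[topic]:
            cb(topic, payload)

# The convention from the text: two topics agreed on beforehand.
REQ, RESP = "device/42/request", "device/42/response"  # hypothetical names

broker = Broker()
replies = []
# Central aggregator: answer every request on the response topic.
broker.subscribe(REQ, lambda t, p: broker.publish(RESP, f"echo:{p}"))
# End device: collect responses.
broker.subscribe(RESP, lambda t, p: replies.append(p))

broker.publish(REQ, "get-temp")
print(replies)  # → ['echo:get-temp']
```

Correlating a response with its request (e.g. via an ID in the payload) is left to the parties, which is exactly the gap the OASIS work on formal request-response support addresses.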

ECDU: an edge content delivery and update framework in Mobile edge computing

Mobile edge computing (MEC) [14–21] provides cloud capabilities in close proximity to mobile users to make up for the shortcomings of CDNs. Driven by the potential capabilities of the MEC architecture, we propose an edge content delivery and update (ECDU) framework. In the ECDU framework, contents are not uploaded to the cloud data center first, but to the edge server first; thus, the load on the core network will be significantly reduced. The ECDU framework consists of two schemes, an edge content delivery (ECD) scheme and an edge content update (ECU) scheme. The ECD scheme prioritizes the top-ranking cache pools to store higher-priority contents. Note that the priority of contents is defined according to the number of times users request access to the contents within a certain period of time. The ECU scheme uploads popular contents to different levels of cache pools and the cloud data center. Note that popular contents refers to contents that are frequently requested by mobile users.
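The ECD scheme's frequency-based priority rule can be sketched as follows. The pool sizes and the request log are made-up, and the real framework's data structures are surely richer; only the rule itself (rank by request count in a window, fill top-ranking pools first) comes from the text:

```python
from collections import Counter

def assign_to_pools(request_log, pool_sizes):
    """Rank contents by request frequency within the observation window,
    then fill cache pools in priority order (pool 0 holds the
    highest-priority contents)."""
    ranked = [content for content, _ in Counter(request_log).most_common()]
    pools, start = [], 0
    for size in pool_sizes:
        pools.append(ranked[start:start + size])
        start += size
    return pools

# 'a' is requested most often, so it lands in the top-ranking pool.
log = ["a", "b", "a", "c", "a", "b", "d"]
pools = assign_to_pools(log, [1, 2, 4])
print(pools)  # → [['a'], ['b', 'c'], ['d']]
```

In the ECU scheme the same ranking would then decide which contents are pushed up to higher cache levels and to the cloud data center.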

Computing and relaying: utilizing mobile edge computing for P2P communications

In practical scenarios, Proposition 4 (b) does not always hold. The optimal cost of the MEC-assisted system is not always less than that of conventional P2P communication, due to the computing delay in the MEC server. The computing capacity of the server is higher than that of the devices but is still limited; hence the computation in the server takes time and delays the P2P communication. The MEC server may decompress the transmitted data and re-compress it, or may compress it directly to the target ratio, and the computing process may occupy one thread or multiple threads. Therefore, the computing delay in the server is quite complicated. Here, the influence of the computing delay in the server is analyzed by comparing it with the cost difference offered by the optimal relaying strategy. The desired solution with T_r is written as

Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing

As shown in Fig. 3, we choose two deep learning networks, CNN1 and CNN2, as examples to illustrate the reduced data-size ratio (blue plots) and the computational overhead (red plots). These two deep learning networks have five layers and different neuron settings. From the plots, the input data can be reduced by the deep learning networks, and more intermediate data is reduced by the lower layers. Meanwhile, the computational overhead increases quickly with more layers. We use Python 2.7 and networkx to develop the simulator and use the reduced ratio of the intermediate data from executing CNN tasks. In the simulations, we set the number of deep learning tasks to 1000. The service capability of each edge server is set to 290 Gflops according to the NVIDIA Tegra K1. We set the number of edge servers in the network from 20 to 90. The input data size of each task is set from 100 KB to 1 MB. The layer number of all CNN networks is set from 5 to 10. The bandwidth of each edge server is uniformly distributed from 10 Mbps to 1 Gbps. The required latency is set to 0.2 s.
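A toy version of the trade-off in Fig. 3 — earlier layers shrink the intermediate data but add local compute — can be written as a partition-point search. This is not the paper's simulator; apart from the 290-Gflops server figure quoted above, every number below is illustrative, and the 1-Gflops device speed is an added assumption:

```python
def split_latency(input_kb, ratios, layer_flops, uplink_kbps,
                  device_gflops=1.0, server_gflops=290.0):
    """Latency for each candidate split point k: run layers 1..k on the
    device, ship the (shrinking) intermediate data, and run the
    remaining layers on the edge server."""
    results = []
    data_kb, local_flops = input_kb, 0.0
    for k in range(len(ratios) + 1):
        remote_flops = sum(layer_flops[k:])
        t = (local_flops / (device_gflops * 1e9)
             + data_kb / uplink_kbps
             + remote_flops / (server_gflops * 1e9))
        results.append((k, round(t, 4)))
        if k < len(ratios):
            data_kb *= ratios[k]           # lower layers shrink the data
            local_flops += layer_flops[k]  # but cost device compute
    return results

lat = split_latency(input_kb=500,
                    ratios=[0.5, 0.5, 0.8, 1.0, 1.0],
                    layer_flops=[2e8, 4e8, 6e8, 6e8, 6e8],
                    uplink_kbps=1000)
best = min(lat, key=lambda kv: kv[1])
print(best)  # → (1, 0.4576)
```

With these numbers the best split is after the first layer: one layer of local compute halves the transfer cost, while deeper splits cost more on the slow device than they save in bandwidth.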

Smart Edge: A Software-Defined Infrastructure for Future Application Platforms

SAVI Kaleidoscope scenario:
• Smart Edge ensures that local security personnel continue to have their own secure/dedicated channels for the purpose of law enforcement
• As the crowd moves to local restaurants and bars to celebrate,


Spiking Neural Computing in Memristive Neuromorphic Platforms

b Neuromorphic architecture inspired by neural networks in the biological brain, capable of overcoming the von Neumann bottleneck, performing parallel and cognitive computing, as well a[r]


An Applied Evaluation and Assessment of Cloud Computing Platforms

One major motivator for using cloud computing is the economic aspect and the potential savings. The pay-as-you-go model of cloud computing entails exchanging capital expenditures for operational expenditures, and the economies of scale enjoyed by large corporations delivering cloud services are a potential cost saver for cloud users. The cloud model eliminates the need to invest in hardware for a private hosting datacenter, which is particularly advantageous for startups lacking the funds to make such investments. The elasticity of cloud computing can also be a great source of monetary gain. With a private datacenter, the server capacity for expected load needs to be provisioned in advance, since delivery, installation and configuration of new hardware take time. Predicting future load is difficult: [18] gives the example of a company that made its service available on Facebook and saw demand increase from 50 servers to 3500 servers in three days. Even though not all companies are likely to experience such surges in demand, under- or over-provisioning are still risks. Over-provisioning results in unnecessary expenses for hardware that is not needed. The consequences of under-provisioning are likely even worse, since insufficient capacity will result in a poor user experience that may lead to a loss of customers. Failing in the provisioning of the hardware leads to these undesired scenarios, but even if the prediction is correct and the hardware can handle the peak load, the capacity is likely unnecessarily high the majority of the time. The load on many systems varies over the course of each day, week, or year. The load during nights, weekends, or a particular season may be lower than average, so even successfully provisioning for peak load may leave a lot of unutilized capacity. In fact, an estimate of utilization of server capacity in datacenters puts the number at 6% [37].
This illustrates the benefit of being able to quickly scale up and down according to current load.
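The elasticity argument can be made concrete with back-of-the-envelope arithmetic using the 6% utilization estimate cited above; all prices below are made-up placeholders, not market rates:

```python
def yearly_cost(peak_servers, avg_utilization, cost_per_owned_server,
                cloud_cost_per_server_hour):
    """Owning peak capacity (utilized only a fraction of the time)
    vs. paying per hour for the capacity actually used."""
    owned = peak_servers * cost_per_owned_server
    cloud = (peak_servers * avg_utilization
             * cloud_cost_per_server_hour * 24 * 365)
    return owned, cloud

# 100-server peak, 6% average utilization, illustrative prices.
owned, cloud = yearly_cost(100, 0.06, 2000, 0.10)
print(owned, round(cloud))  # → 200000 5256
```

The exact figures are immaterial; the point is that at 6% utilization, paying only for used capacity undercuts owning peak capacity by more than an order of magnitude.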
