A watershed is near in the architecture of computer systems. There is overwhelming demand for systems that support a universal format for computer programs and software components so users may benefit from their use on a wide variety of computing platforms. At present this demand is being met by commodity microprocessors together with standard operating system interfaces. However, current systems do not offer a standard API (application program interface) for parallel programming, and the popular interfaces for parallel computing violate essential principles of modular or component-based software construction. Moreover, microprocessor architecture is reaching the limit of what can be done usefully within the framework of superscalar and VLIW processor models. The next step is to put several processors (or the equivalent) on a single chip.
Figure 1 An overview of the Bioclipse QSAR and OpenTox integration. Toxicological properties of molecules can be calculated in Bioclipse using the online computational services within the OpenTox cloud, in parallel to local services. When the user calculates these properties, Bioclipse will first query local and online service providers for available functionality. Example services in the OpenTox cloud are the ToxTree toxicology prediction models. The OpenTox cloud is queried by Bioclipse internally using the SPARQL query language. Once the user has selected the toxicological properties of interest (see Figure 2), these will be calculated by Bioclipse. Here, REST technologies are used to perform this computation in the OpenTox cloud. The computed results can then be used in Bioclipse.
services support generating and validating new predictive models via machine learning techniques or other methods. OpenTox services are independent, and can be mashed up or invoked serially or in parallel by explicit invocation from command-line tools, existing workflow systems, or custom user interfaces. SADI's strong point is its implicit invocation of web services, given a SPARQL query. The SHARE engine decides which services to invoke in order to fill in the missing data. The SADI services use HTTP, but define HTTP resources only for the processing elements, not for the data elements. Calculations are initiated by a POST command and the data is returned in the body, resembling a typical remote procedure call rather than a REST resource. Input data is subsumed into the output data, and neither has its own dereferenceable identifier. OpenTox services, by contrast, accept the URI of an input resource and return the URI of the resulting resource. The content of the latter can be retrieved by a subsequent GET operation if necessary, either as a whole or in parts. This allows processing of datasets with an arbitrary number of entries. The dataset is a central resource type in OpenTox, while we are not aware of a corresponding concept in SADI. Implementation-wise, SADI services require an RDF triple store as a backend, while OpenTox services do not mandate any particular backend representation; it is sufficient to serialize resources to RDF on input/output in order to be compatible with the OpenTox API. Another difference exists due to the requirement to define a custom input/output
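The OpenTox interaction pattern described above (POST the URI of an input resource to a processing service, receive the URI of the result resource, then dereference that URI with GET) can be sketched in Java. All URIs, the `dataset_uri` parameter name, and the media types below are illustrative placeholders, not the actual OpenTox endpoints:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the OpenTox REST pattern: resources are identified by URIs,
// and a calculation yields a new dereferenceable resource URI.
class OpenToxPattern {

    // Step 1: POST the URI of the input dataset to a model service.
    // The service's response body would carry the URI of the result resource.
    static HttpRequest startCalculation(String serviceUri, String datasetUri) {
        return HttpRequest.newBuilder()
                .uri(URI.create(serviceUri))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("dataset_uri=" + datasetUri))
                .build();
    }

    // Step 2: retrieve the content of the result resource with a plain GET,
    // as a whole or (with paging parameters) in parts.
    static HttpRequest fetchResult(String resultUri) {
        return HttpRequest.newBuilder()
                .uri(URI.create(resultUri))
                .header("Accept", "application/rdf+xml")
                .GET()
                .build();
    }
}
```

The requests would be dispatched with `HttpClient.newHttpClient().send(...)`; because the result is itself a URI-addressable resource rather than an RPC return value, the GET can be repeated or partitioned at any later time.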
The EPC consists of four network elements, namely the Serving Gateway (SGW), PDN Gateway (PGW), Mobility Management Entity (MME), and Home Subscriber Server (HSS). The UE connects to the eNB; the eNB directs data traffic to the SGW and PGW in a GTP tunnel, and directs control traffic to the MME. The MME acts as the manager of network connectivity and plays an important role in the LTE/EPC architecture. In fact, the MME is the main signaling node in the EPC. It is responsible for UE authentication and authorization, UE session setup, and intra-3GPP mobility management. The SGW and PGW are responsible for data forwarding, IP mobility, and QoS control at the data plane. The PGW communicates with the outside world (i.e. the PDN network) using the SGi interface. Each packet data network is identified by an access point name (APN). The QoS level that should be assigned to each bearer is decided by the PGW. The MME is connected to the SGW by means of the S11 interface. The SGW is connected to the PGW by means of the S5 interface.
This module accepts knowledge and transforms it into the ontology format through the graphical user interface (GUI). The accepted knowledge is automatically sent to OWL as production rules and stored in the program using the OWL knowledge representation language. The production rules are stored in two parts: the antecedent (the if part) and the consequent (the then part). For example: if a is X, then b is Y, where a and b are linguistic variables and X and Y are values obtained from the GUI, determined by previous experience. This module is contained in a Java class named MainProjectGUI. Through the knowledge acquisition module, system performance is enhanced: assessment and selection decisions become more robust as more data becomes available, equipping the ontology knowledge model with experience-based data.
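The antecedent/consequent structure of these production rules ("if a is X, then b is Y") can be sketched as a small Java class. This is a simplified illustration; the actual system stores the rules in OWL via the MainProjectGUI class, and the variable and value names here are hypothetical:

```java
import java.util.Map;

// Minimal model of a production rule with an antecedent (if part)
// and a consequent (then part).
class ProductionRule {
    final String variable;    // antecedent linguistic variable, e.g. "a"
    final String value;       // antecedent value, e.g. "X"
    final String consequent;  // consequent assignment, e.g. "b=Y"

    ProductionRule(String variable, String value, String consequent) {
        this.variable = variable;
        this.value = value;
        this.consequent = consequent;
    }

    // Fire the rule if the antecedent matches the known facts;
    // return the consequent on a match, or null otherwise.
    String apply(Map<String, String> facts) {
        return value.equals(facts.get(variable)) ? consequent : null;
    }
}
```

As more GUI-entered facts accumulate, a rule base of such objects can be re-evaluated against them, which is the mechanism by which the module's decisions grow more robust with additional data.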
Abstract—Search optimization methods are needed to solve optimization problems where the objective function and/or constraint functions might be non-differentiable or non-convex, or where their analytical expressions may be impossible to determine either due to their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, can be modelled by functions with complex expressions or by noisy functions, and it is impossible or very difficult to calculate their derivatives. Direct search optimization methods only use function values and do not need any derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Both traditional API access, by installing it on the developer's and/or user's computer, and remote API access using Web Services are presented. Remote access to the API has the advantage of always allowing access to its latest version. For users that simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods into applications, two applications were also developed: one is a standalone Java application and the other a Web-based application, both using the developed API.
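A minimal one-dimensional compass (coordinate) search illustrates the direct search family the abstract describes: the method samples function values at trial points and never touches a derivative. This sketch is not the paper's actual API; the method and parameter names are illustrative:

```java
import java.util.function.DoubleUnaryOperator;

// Derivative-free direct search in one dimension: probe left and right of
// the current point, move if the function value improves, otherwise halve
// the step size. Only function evaluations are used.
class CompassSearch {

    // Minimize f starting at x0 with initial step h,
    // stopping once the step shrinks below tol.
    static double minimize(DoubleUnaryOperator f, double x0, double h, double tol) {
        double x = x0;
        double fx = f.applyAsDouble(x);
        while (h > tol) {
            double right = f.applyAsDouble(x + h);
            double left = f.applyAsDouble(x - h);
            if (right < fx) {          // improvement to the right
                x += h; fx = right;
            } else if (left < fx) {    // improvement to the left
                x -= h; fx = left;
            } else {                   // no improvement: refine the step
                h *= 0.5;
            }
        }
        return x;
    }
}
```

Because the loop only compares function values, the same scheme works unchanged on noisy or simulation-derived objectives where gradients are unavailable.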
The interactions for the newly developed application are illustrated in Fig. 2, where the User, Client and Database Server interact with each other. The User speaks into a microphone connected to the Client. The Client recognizes the User's command and creates a request to the Database Server. When the result arrives, the Client updates its visual objects, generates a response text and sends it to the User via the speakers.
When developing the application, the methods used have been collaborative programming and pair programming. Pair programming involves two persons: one is the “Driver”, who writes the code, and the other is the “Observer”, who makes sure the code is correct. The roles are switched continuously. The integrated development environment has been Eclipse, and the plug-in Saros has been used for the collaborative programming. Saros gives the developers the possibility to edit the same code simultaneously. At the start, an Android emulator plug-in was used to test the code. After a while the authors realised that it was possible to test and run the code directly on the cell phone using HTC Sync. Since the application needed access to the cell phone’s contact list, this proved to be a much better way to test the application. It was also considerably quicker than the emulator. An extra bonus to the development process was the fact that the authors had devices with different versions of the Android platform, which proved useful when testing since several bugs only appeared in one of the versions.
The Android SDK provides a public static interface, GpsStatus.Listener, to report the current state of the GPS. Its method onGpsStatusChanged(int event) should be implemented in MainActivity to report changes in GPS status. LocationManager has a getGpsStatus() function to acquire the GPS status. There are four statuses in total:
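The four events can be modelled in plain Java as follows. The constant values mirror Android's android.location.GpsStatus constants; the class itself is a simplified stand-in for illustration, not the real SDK type:

```java
// Plain-Java model of the four GPS status events delivered to
// GpsStatus.Listener.onGpsStatusChanged(int event) on Android.
class GpsEvents {
    static final int GPS_EVENT_STARTED = 1;          // the GPS system has started
    static final int GPS_EVENT_STOPPED = 2;          // the GPS system has stopped
    static final int GPS_EVENT_FIRST_FIX = 3;        // first position fix acquired
    static final int GPS_EVENT_SATELLITE_STATUS = 4; // periodic satellite status report

    // Map an event code to a human-readable description,
    // as a status handler in MainActivity might do.
    static String describe(int event) {
        switch (event) {
            case GPS_EVENT_STARTED:          return "started";
            case GPS_EVENT_STOPPED:          return "stopped";
            case GPS_EVENT_FIRST_FIX:        return "first fix";
            case GPS_EVENT_SATELLITE_STATUS: return "satellite status";
            default:                         return "unknown";
        }
    }
}
```

In a real activity, the switch above would live inside onGpsStatusChanged(int event), registered via LocationManager.addGpsStatusListener(...).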
Abstract—In recent years, much has been made of the computing industry’s widespread shift to parallel computing. Nearly all consumer computers will ship with multicore processors. Parallel computing will no longer be relegated only to exotic supercomputers or mainframes; moreover, electronic devices such as mobile phones and other portable devices have begun to incorporate parallel computing capabilities. High-performance computing is becoming increasingly important. To maintain high-quality solutions, programmers have to efficiently parallelize and map their algorithms. This task is far from trivial, especially given the different existing parallel computer architectures and different parallel programming paradigms. To reduce the burden on programmers, automatic parallelization tools have been introduced. The purpose of this paper is to discuss different automatic parallelization tools for different parallel architectures.
Abstract: The development of network management interfaces (NMIs) involves a variety of software layers, application programming interfaces (APIs), specification languages and tools. In order to make NMI development easier and more efficient, we have developed Layla, a prototype application framework supporting Open Systems Interconnection (OSI) NMIs. Layla is based on a heterogeneous yet coherent system of design patterns that comprises previously published patterns, several new and domain-specific patterns taken from NMI standards, as well as a couple of basic patterns relevant in Layla’s API. Our research indicates that pattern-based frameworks can indeed be built for a domain as complex as NMIs, and that they have a positive impact on both the development process and the resulting NMI products. In this paper, we discuss APIs for NMIs and the need for application frameworks, describe and illustrate the pattern system underlying the Layla framework, detail three of its key patterns, and put the pattern system into perspective. Keywords: Design Pattern, Pattern System, Application Framework, Network Management Interface, API (Application Programming Interface), OSI (Open Systems Interconnection), Manager-Agent pattern, Managed Object pattern, Remote Operation pattern.
In the efforts to enhance oystersentinel.org, we have designed and implemented two models: the Perkinsus marinus model, which assesses the extent of oyster infection by the parasite Perkinsus marinus, and the Oil Spill model, which assesses the impact of oil spills on oyster habitat. In building these two capabilities, we have applied Web Mashup techniques and used large portions of data provided by external resources such as Google Maps, the Louisiana Oil Spill Coordinator's Office (LOSCO), the Environmental Response Management Application (ERMA), and the National Oceanic and Atmospheric Administration (NOAA). The Web Mashup technology gives developers tools to consume external resources, as well as tools to create interfaces to internal data. To investigate the latter capability, we have constructed a Web interface that will allow programs developed by third-party developers to use
Management is the most challenging issue in dealing with big data. This issue first surfaced ten years earlier in the UK eScience activities, where data was distributed geographically across numerous entities. Resolving issues of access, metadata, usage, updating, and influence has proven to be a significant hindrance. Unlike the collection of data by manual systems, where thorough protocols are regularly followed with the specific goal of guaranteeing correctness and validity, computerized data gathering is significantly looser. The wealth of digital data representations precludes a tailored technique for data gathering (Kwon et al. 2014). Data quality efforts frequently center more on missing data than on attempting to validate every item. Information is often fine-grained, for example clickstream or metering information. Given the volume, it is unrealistic to validate each data item: new methodologies for data validation are required. The sources of this data vary, both temporally and spatially, by organization as well as by system of collection. People contribute digital data in mediums convenient to them: reports, drawings, pictures, sound and multimedia recordings, models, software behaviours, user interface designs, and so on, with or without sufficient metadata describing what, when, where, who, why and how it was gathered, and its provenance. Still, these categories of data are readily accessible for assessment and analysis (Kwon et al. 2014).
Thus, our simulator relies on the measurements reported in Sections 3.3 to 3.5 to build a network bandwidth and latency model, and on Equation 3 to estimate the delay of the spoke and hub nodes, taking into account the variability introduced by the shared cloud infrastructure. The overall system delay, i.e. the packet round-trip time from a fabrication request issued by a client until the system acknowledgment, is computed using Equation 1. Experiments have been designed to analyze the behaviour of the NEWTON Fab Lab infrastructure with the following user distributions: 250, 500, 1000, and 1500 users. Each user can issue from one to five requests; moreover, for each load configuration, the number of containers allocated to the application scales in multiples of 8, from 8 to 128 (for 16 possible configurations). Finally, the simulated infrastructure must cover requests from four AWS availability zones (Europe, North and Central America, South America, and Asia-Pacific) in order to ensure a globally optimal service to all world regions. Table 14 summarizes the experiment configurations. The variable simulation parameters are the number of users, the number of requests per user,
This testing phase will be incorporated within the development phases under an agile development methodology, which according to Somerville (2004) relies on an iterative approach to software development. This will effectively allow features to be trialled as they are implemented and added, rather than at the end of development. However, it is most likely that the small-office testing will be conducted in two major stages: first, testing at the completion of the local application; and second, testing at the end of the development of the network component of the tool.
BlueDBM proposes a pure FPGA architecture using a Virtex-7 FPGA, which has enough resources for both the SSD controller and in-storage processing. In BlueDBM, storage units contain a network interface implemented in the FPGA to form a network for running user applications in a distributed fashion. The most notable drawback of an ISP-enabled storage architecture using a pure FPGA implementation is the time-consuming design process due to its reconfigurability issues. In other words, users have to go through the entire process of redesigning, synthesizing, placing and routing, and generating a bitstream to implement the desired ISP functionality. Jo et al. propose a heterogeneous platform composed of a CPU, a GPU, and an ISP-enabled SSD (called iSSD). Based on simulations, this platform can achieve up to 3.5× speedup for data-intensive algorithms. Although this method looks effective, it never went beyond simulation. In all the related works mentioned, the lack of an OS inside the ISP-enabled storage makes it harder to adapt applications to run in place.
The PBX can be directly connected to an external processor with a cable; the maximum length of the cable is 15 m (49 ft.). The PBX can also be connected to a modem to extend the distance between the PBX and the external processor; in this case, the maximum length of the cable from the PBX to the modem is 4 m (13 ft.). Figure 1-2 shows the external processor connection for the RS-232C interface.
This paper investigates how application programming interfaces can be used to improve the interoperability (or shareability) of health records. Electronic health records store health information that originates from various sources such as prescription order systems, medical devices, and even other EHRs. An API helps these disparate systems exchange information with one another. APIs can improve data sharing by using secure standards like FHIR. Having all of this integrated and usable data can aid the clinical decision process. It would also allow patients to have a more comprehensive view of their health data in patient portals.
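The FHIR-based exchange mentioned above follows REST conventions: a record is read with GET [base]/Patient/[id]. A minimal Java sketch of building such a request is shown below; the base URL and patient ID are hypothetical, and a real deployment would additionally attach an OAuth 2.0 bearer token for security:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of a FHIR "read" interaction: fetch a single Patient resource
// as JSON from a (hypothetical) EHR's FHIR endpoint.
class FhirClientSketch {

    // Build GET [base]/Patient/[id] with the FHIR JSON media type.
    static HttpRequest readPatient(String baseUrl, String patientId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/Patient/" + patientId))
                .header("Accept", "application/fhir+json")
                .GET()
                .build();
    }
}
```

Because every system exposes the same resource types and interactions, the same request shape works against any conformant EHR, which is precisely what makes the records shareable.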