Abstract: Software evolution is critical to extending the utility and life of distributed real-time and embedded (DRE) systems. Determining the optimal set of software and hardware components to evolve that (1) incorporate cutting-edge technology and (2) satisfy DRE system resource constraints, such as memory, power, and CPU usage, is an NP-hard problem. This article provides four contributions to evolving legacy DRE system configurations. First, we present the Software Evolution Analysis with Resources (SEAR) technique for converting legacy DRE system configurations, external resource availabilities, and candidate replacement components into multiple-choice multi-dimension knapsack problems (MMKP). Second, we present a formal methodology for assessing the validity of evolved system configurations. Third, we apply heuristic approximation algorithms to determine low-cost, high-value evolution paths in polynomial time. Finally, we analyze results of experiments that apply these techniques to determine which technique is most effective for given system parameters. Our results show that constraint solvers can only evolve small system configurations, whereas approximation techniques are needed to evolve larger system configurations.
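To make the MMKP formulation concrete, the following is a minimal sketch of casting an evolution decision as an MMKP and solving it with a simple greedy value-density heuristic. The item data, the function name `solve_mmkp`, and the heuristic itself are illustrative assumptions, not the article's actual SEAR algorithm or its approximation techniques.

```python
def solve_mmkp(groups, capacities):
    """Pick exactly one item per group, maximizing total value while
    keeping every resource dimension within its capacity.
    groups: list of groups, each a list of (value, [costs]) tuples.
    capacities: resource limits (e.g., memory, power, CPU)."""
    remaining = list(capacities)
    total_value = 0
    choice = []
    for group in groups:
        # Feasible candidates fit within every remaining resource.
        feasible = [it for it in group
                    if all(c <= r for c, r in zip(it[1], remaining))]
        if not feasible:
            return None  # no valid evolution path under this heuristic
        # Greedy: highest value per unit of aggregate resource cost.
        best = max(feasible, key=lambda it: it[0] / (sum(it[1]) or 1))
        choice.append(best)
        total_value += best[0]
        remaining = [r - c for r, c in zip(remaining, best[1])]
    return total_value, choice

# Each group is one legacy component and its candidate replacements.
groups = [
    [(5, [2, 1]), (8, [4, 3])],   # component A: keep vs. upgrade
    [(3, [1, 1]), (9, [5, 4])],   # component B: keep vs. upgrade
]
print(solve_mmkp(groups, [7, 5]))
```

Unlike a constraint solver, a greedy pass like this runs in polynomial time but may miss the optimal selection, which is the trade-off between exact and approximate techniques that the experiments above evaluate.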
Any control network has two types of connections: physical and logical. Physical connectivity is realized through transceivers and communication links, while logical connectivity is established by conventional methods. It should be noted that the configuration of logical connectivity depends on the control requirements of the target system. Making physical connections is a challenge; logical connections in a control network are, moreover, susceptible to error, and without an exact logical configuration the control task is not feasible. In a complex networked system, it is very difficult to guarantee logical connectivity among the variables defined at the application layer of the communication protocol stack. In a distributed system of a thousand nodes with multiple logical connections, detecting and isolating an illogical connection is both difficult and time consuming. The simulation software overcomes these problems by designing a virtual network before the real design begins. As a result, DCS validation was achieved, and the approach offered another level of flexibility in the design process of the DCS.
Distributed software systems are usually built without taking disconnections into consideration; they fail to operate when a disconnection occurs. Coda is a good example of a file system that handles disconnections. To support disconnections, data items are cached at the mobile device, either periodically or when a network disconnection is anticipated, to allow its autonomous operation during disconnection. Preloading data to survive a forthcoming disconnection is called hoarding. A critical issue during hoarding is how to anticipate future needs for data. While disconnected, the mobile unit can use only local data, and all updates are maintained locally. Upon reconnection, any updates performed at the mobile host are reintegrated with updates performed at other sites, while any conflicting updates are somehow resolved.
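The hoard/disconnect/reintegrate cycle described above can be sketched in a few lines. This is a hypothetical toy, not Coda's actual interfaces: the class name `MobileCache`, its methods, and the naive last-writer-wins conflict policy are all illustrative assumptions.

```python
class MobileCache:
    def __init__(self, server):
        self.server = server          # authoritative copy while connected
        self.cache = {}               # hoarded data items
        self.log = []                 # updates made while disconnected
        self.connected = True

    def hoard(self, keys):
        """Preload data anticipated to be needed during disconnection."""
        for k in keys:
            self.cache[k] = self.server[k]

    def disconnect(self):
        self.connected = False

    def write(self, key, value):
        if self.connected:
            self.server[key] = value
        else:                         # operate autonomously on local data
            self.cache[key] = value
            self.log.append((key, value))

    def reconnect(self):
        """Reintegrate logged updates; last-writer-wins stands in for the
        conflict resolution that a real system would perform."""
        for key, value in self.log:
            self.server[key] = value
        self.log.clear()
        self.connected = True

server = {"doc": "v1", "cfg": "a"}
m = MobileCache(server)
m.hoard(["doc"])          # anticipate needing "doc" while offline
m.disconnect()
m.write("doc", "v2")      # local update during disconnection
m.reconnect()             # logged update reintegrated at the server
print(server["doc"])
```

The hard part the paragraph highlights, predicting which keys to pass to `hoard`, is exactly what this sketch leaves to the caller.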
The QARMA visual modeling tool is based on the Generic Modeling Environment (GME) [6, 5], a general-purpose, configurable modeling environment with a modular, extensible architecture that enables users to create domain-specific modeling tools. VEST (Virginia Embedded Systems Toolkit), a GME-based tool, is a customized toolkit for constructing and analyzing component-based DRE systems [10, 11]. VEST provides a library that supports descriptions of hardware components and networks. The QARMA modeling tool extends the VEST library and, among other enhancements, adds support for descriptions of software applications and paths. The QARMA tool enables a DRE system designer to capture in a graphic model all aspects of a DRE system that are of interest to the QARMA resource manager.
Verification plays a vital role in the design cycle of any safety-critical system. The development of a system is not complete without careful testing and verification that the implementation satisfies the system requirements. In the past, verification was an informal process performed by the designer, but as the complexity of systems increased, it became necessary to treat verification as a separate step in the overall development cycle. Verification techniques are based either on simulation or on formal methods. Simulation relies on a model that describes the possible behavior of the system design at hand. This model is executable in some sense, such that a simulator can determine the system’s behavior on the basis of some scenarios. Formal verification is defined as “establishing properties of hardware or software designs using logic, rather than (just) testing or informal arguments. This involves formal specification of the requirement, formal modeling of the implementation, and precise rules of inference to prove that the implementation satisfies the specification”. Formal verification methods fall into three categories: equivalence checking, model checking, and theorem proving.
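Of the three categories, model checking is the most mechanical to illustrate: exhaustively explore the reachable state space and check a property in every state. The following is a hedged sketch of explicit-state model checking; the two-process toy protocol and all names are illustrative assumptions, not any particular tool's algorithm.

```python
from collections import deque

def model_check(initial, successors, safe):
    """Breadth-first search over reachable states; returns an unsafe
    state (a counterexample) if the property is violated, else None."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state          # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                   # property holds in every reachable state

# Toy system: two processes, each idle -> trying -> critical, with no
# lock at all, so mutual exclusion should fail and be caught.
def successors(state):
    for i in (0, 1):
        if state[i] == "idle":
            yield state[:i] + ("trying",) + state[i + 1:]
        elif state[i] == "trying":
            yield state[:i] + ("critical",) + state[i + 1:]

def mutual_exclusion(state):
    return state != ("critical", "critical")

cex = model_check(("idle", "idle"), successors, mutual_exclusion)
print(cex)  # the reachable state violating mutual exclusion
```

Unlike simulation, which checks only the scenarios it happens to run, this search visits every reachable state, which is the essential distinction the paragraph draws between the two verification styles.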
Real-time software; real-time operating systems (scheduling, virtual memory issues, and file systems); real-time databases; fault tolerance and exception handling techniques; reliability evaluation; data structures and algorithms for real-time/embedded systems; programming languages, compilers, and run-time environments for real-time/embedded systems; real-time system design; real-time communication and security; and real-time constraints in multiprocessing and distributed systems.
The first task, the news channel, involves several subtasks: scrolling a message while the news is displayed and updating the current news whenever flash news arrives. The second task is the notepad task, which is used simply to enter some information and save it. The final task is the Windows Media Player, which plays both audio and video files; these are treated as subtasks that depend on each other. After the tasks are simulated, the framework reports the processor time spent on each task during execution. The tasks exercise the logic of the DYTAS algorithm.
In recent years, a third testing method has also been considered: grey-box testing. It is defined as testing software while also having some knowledge of its internal logic and underlying code. It uses internal data structures and algorithms to design test cases, more so than black-box testing but much less than white-box testing. This method is particularly important when conducting integration testing between two or more modules of code written by different developers, where only their interfaces are exposed for testing (Redmill, Felix (2005), Theory and Practice of Risk-based Testing, Vol. 15, No. 1). This method includes reverse engineering to determine boundary values. Grey-box testing is unbiased and non-intrusive because it doesn't require that the tester have access to internal source code.
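The boundary-value idea above can be shown with a small example. The module under test and its internal 100-unit bulk threshold are hypothetical; the point is that the tester exploits partial knowledge of the internal logic to place test inputs just below, at, and just above the boundary, without owning the source.

```python
def shipping_cost(weight):
    """Module under test: flat rate below an internal 100 kg bulk
    threshold, discounted rate at or above it (assumed logic)."""
    if weight >= 100:
        return weight * 0.8
    return weight * 1.0

# Grey-box boundary-value cases derived from knowledge of the internal
# threshold: one input on each side of the boundary plus the boundary.
cases = {99: 99.0, 100: 80.0, 101: 80.8}
for weight, expected in cases.items():
    assert abs(shipping_cost(weight) - expected) < 1e-9
print("boundary cases pass")
```

A pure black-box tester, not knowing the threshold exists, might sample weights 10 and 500 and miss the off-by-one class of defects entirely.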
Feedback queue scheduling is considered the best algorithm for embedded systems. Despite this, many well-known RTOSs, e.g., Windows CE, Embedded NT, Linux, and PharLap, also utilize priority time slicing. Under this algorithm, when a higher-priority task becomes ready to run, it must wait until the end of the current time slice to be dispatched. Hence response time is governed by the granularity of the time slice.
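The dispatch-latency effect described above can be worked through numerically. This is a minimal sketch under assumed numbers; the function name and the 10 ms/2 ms slice lengths are illustrative, not measurements from any of the RTOSs named.

```python
def dispatch_latency(ready_time_ms, slice_ms):
    """Worst-case wait for a higher-priority task that becomes ready at
    ready_time_ms: it is dispatched only at the end of the time slice
    in progress at that moment."""
    elapsed_in_slice = ready_time_ms % slice_ms
    return (slice_ms - elapsed_in_slice) % slice_ms

# With a 10 ms slice, a task becoming ready 1 ms into a slice waits 9 ms.
print(dispatch_latency(11, 10))
# Shrinking the slice to 2 ms caps that same wait at 1 ms.
print(dispatch_latency(11, 2))
```

The worst case approaches one full slice length, which is why the paragraph notes that response time is governed by the slice granularity: finer slices buy responsiveness at the cost of more context-switch overhead.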
A popular and often easy-to-implement approach to retrieving management data in distributed systems is for a query unit to ask for the data from the remote nodes of interest. The requested data is then sent from the source nodes to the data sink, effectively implementing a pull-based collection strategy. A pull-based approach at query time has some severe limitations for larger or highly heterogeneous distributed systems. Primarily, there is the potential for very large latencies in getting the desired data out of a multitude of source nodes randomly distributed across the physical network. In a centralised approach like this there will always be a trade-off between the amount of work performed at the time the data is generated and the work performed at query time. Query-time processing generally introduces latencies and indeterminism that are not acceptable for monitoring applications. Secondly, real-time monitoring using a pull-based approach can result in excessive polling rates, leading to errors due to the perturbations introduced by the polling itself.
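The straggler problem described above can be made concrete with a toy pull collector. The node names and per-node latencies are assumed values for illustration, not measurements, and `pull_collect` is a hypothetical name.

```python
def pull_collect(nodes):
    """Pull each source node's metric at query time; with parallel
    requests the query latency is gated by the slowest node."""
    data = {name: value for name, (value, _) in nodes.items()}
    latency = max(lat for _, lat in nodes.values())
    return data, latency

# (metric value, response latency in ms) per node: one straggler node
# dominates the whole query, the primary limitation noted above.
nodes = {"n1": (0.4, 5), "n2": (0.7, 8), "n3": (0.2, 950)}
data, latency = pull_collect(nodes)
print(latency)
```

A push-based design would instead do the work when the data is generated, trading steady background traffic for predictable query-time latency, which is the trade-off the paragraph identifies.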
Clients should receive responses as soon as they are offered; any delay could force the server to wait, which is unacceptable in practice. This argues for the inclusion of a maximal progress assumption. Moreover, the example shows why we would like a localized version of the maximal progress assumption. Suppose that we have one client which engages in an infinite, internal computation. Although this situation seems artificial at first sight, it naturally occurs when abstracting from timing aspects of some part of a distributed system, e.g. from the timing behavior of a particular client. Ideally, this should not affect the proper work at other sites. Unfortunately, the usual maximal progress assumption has the side effect that no clock is able to tick in such a situation. Thus, by localizing the maximal progress assumption we formalize an important aspect of our intuition of distributed systems.
The PECOS component model was originally developed for field device systems, along with supporting tools such as the Component Composition (CoCo) description language for specifying components, a code generator that produces Java/C++ code skeletons from CoCo descriptions, and a runtime environment (RTE) that interfaces the generated code to the real-time operating system. However, many of these tools are incomplete, and information on the RTE is not publicly accessible since it is proprietary to ABB.
VII. REAL-TIME SPECIFICATION FOR JAVA (RTSJ) The RTSJ allows development of real-time applications by providing several classes for real-time threads, non-heap memory, real-time clocks, schedulable objects, scheduling, schedulability analysis, synchronization, asynchronous events, physical memory access, resource sharing, etc. RTSJ threads may be assigned any of 28 priority levels and are able to preempt the garbage-collector thread due to their higher priority. Objects in RTSJ can be allocated in immortal memory (non-heap memory), which the garbage collector does not reclaim. Various real-time algorithms can be easily implemented with the RTSJ, but multiprocessor real-time algorithm support is still under research to exploit multicore advantages.
Real-time Ethernet has grown into one of the core topics in current industrial automation research and application. A significant number of vendor-driven solutions have appeared on the market in recent years, claiming to replace traditional fieldbuses. The overview of available solutions on  currently lists 16 soft and hard real-time Ethernet variants. Most of them either require special hardware extensions to nodes or infrastructure components, or they provide only soft real-time guarantees. Academic approaches are typically designed to demonstrate specific concepts and lack common OS or hardware support. A broad overview of soft and hard real-time protocol research is given in . Some recent approaches are, for example, FTT-Ethernet , RT-EP , or the combination of switches and traffic shapers .
• What is a suitable engineering framework for decentralizing monitoring tasks? Our work shows that a key step in designing a solution for real-time monitoring under performance objectives is formulating a global optimization problem and solving that problem in a distributed way. This includes mapping the global problem onto a set of local problems, which can be solved independently and asynchronously. Currently, such a mapping is custom-made for each case and remains more an art than a craft. Several examples of such mappings can be found in the literature, but there is no fundamental understanding of the engineering principles behind this task. To exemplify the difficulties an algorithm designer faces when performing this mapping, consider the well-known “tragedy of the commons” , where individuals, by trying to maximize a local utility function, jeopardize the achievement of a global objective. While this phenomenon is known, there is no fundamental understanding of how to define the local problems in such a way that their solutions provide good approximations to the solution of the global problem.
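The global-to-local mapping difficulty can be illustrated with a deliberately simplistic decomposition. Everything here is an assumption for illustration: a global monitoring constraint (total reporting rate at most R) is split into uniform per-node budgets so each node decides locally and asynchronously.

```python
def decompose(global_budget, demands):
    """Map a global rate budget onto static local per-node budgets;
    the uniform split is an assumed, naive mapping."""
    share = global_budget / len(demands)
    return {node: share for node in demands}

def solve_local(demand, budget):
    """Each node independently maximizes its own reporting rate
    within its local budget."""
    return min(demand, budget)

demands = {"n1": 2.0, "n2": 9.0, "n3": 1.0}   # desired rates (msgs/s)
local = decompose(12.0, demands)               # 4.0 msgs/s per node
rates = {n: solve_local(d, local[n]) for n, d in demands.items()}
total = sum(rates.values())
print(rates, total)
```

Each local solution is feasible and the global constraint holds, yet the outcome is poor: n2 is capped at 4.0 while n1 and n3 leave 5.0 msgs/s of budget unused, so the total is 7.0 out of an allowed 12.0. Designing mappings that avoid such losses without global coordination is exactly the open engineering problem the paragraph describes.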
Real-time systems are computing environments in which the design of the system focuses not just on functional correctness but also on temporal correctness. This means that real-time systems provide execution platforms that can schedule tasks according to their temporal requirements. A real-time system is temporally correct if all the timing requirements of real-time tasks can be met. Control systems often impose real-time requirements to operate automobiles, aerial vehicles, nuclear systems, etc. A delayed response in such systems could impact the quality of service or even result in catastrophic consequences. For example, the anti-lock braking system, which monitors the brakes of automobiles and must react within milliseconds, could cause loss of life if it cannot respond in time and allows the wheels to lock up for too long. Depending upon the purpose of the systems and the implications of system correctness, real-time systems can be divided into two categories. Soft real-time systems can tolerate some misses of their temporal requirements, but the quality of service will eventually degrade if too many are missed. For example, an online video game that requires multiple players to cooperate in a timely fashion could present a lagged visual display and delay the actions of players if the system misses the deadlines of information exchanged between them, eventually decreasing the quality of the game experience. In contrast, hard real-time systems must absolutely guarantee the temporal requirements of their applications to prevent catastrophic consequences. These applications are safety critical; the anti-lock braking system is one such system.
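The soft/hard distinction above can be sketched by evaluating the same trace of job response times under both semantics. The 10 ms deadline, the trace, and the linear quality model are assumptions for illustration only.

```python
def evaluate(response_times, deadline, hard):
    """Judge a trace of job response times (ms) against a deadline.
    Hard semantics: any miss is a failure. Soft semantics: quality
    degrades with the fraction of deadlines missed (assumed model)."""
    misses = sum(1 for r in response_times if r > deadline)
    if hard:
        return "failure" if misses else "ok"
    quality = 1.0 - misses / len(response_times)
    return f"quality={quality:.2f}"

trace = [8, 9, 12, 7, 11]        # job response times (ms), deadline 10 ms
print(evaluate(trace, 10, hard=False))  # soft: degraded but usable
print(evaluate(trace, 10, hard=True))   # hard: any miss is a failure
```

The same two misses that merely lower a video game's quality score would be unacceptable for the anti-lock braking example, which is the point of the two categories.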