The ISO/IEC standard 9126, “Software engineering — Product Quality”, describes a two-part model for software product quality: a) internal and external quality, and b) quality in use. The first part of the model specifies six characteristics (see Figure 3) for internal and external quality, which are further subdivided into subcharacteristics. These subcharacteristics are manifested externally when the software is used as part of a computer system, and are a result of internal software attributes. The second part of the model specifies quality-in-use characteristics. Quality in use is the combined effect for the user of the six software product quality characteristics. The standard also provides metrics for each of the quality characteristics to measure the attributes. An explanation of how this quality model can be applied in software product evaluation is contained in ISO/IEC 14598-1. An evaluation according to ISO/IEC 9126 is mostly based on metrics, whereas our model uses a more rigid scale by providing yes/no checks. This yes/no scale leaves more room for expert opinion, but also allows for less precise comparison between two products. Whereas our model focuses on correctness and consistency, ISO/IEC 9126 does not address consistency between elements as a separate concern. In ISO/IEC 9126, correctness is mostly determined through indirect measures (e.g., measuring the number of defects found during a production period). We therefore believe that our model is more suitable for determining product quality (correctness and consistency), whereas the ISO/IEC model is better suited to specifying and measuring the desired product quality (all six characteristics). In the future, we could extend our model with other characteristics from ISO/IEC 9126.
Software wrapper technology has been investigated in many fields, including computer security, software engineering, database systems and software dependability. In the context of computer security, software wrappers have been used to enforce specific security policies and to protect vulnerable assets. It has also been shown that security wrappers deployed within an operating system kernel can be used to meet application-specific security requirements. Software wrappers have been widely applied in the integration of legacy systems, where they act as connectors that allow independent systems to interact and reconcile functionality. Examples of this can be found in the field of database systems, where software wrappers are used to encapsulate legacy databases. Software wrappers have been extensively investigated in the context of operating system dependability, where emphasis is placed on wrapping device drivers and shared libraries. Software wrappers have also been used to address the more general problem of improving dependability in commercial off-the-shelf software, as well as several more specific software dependability issues, such as the problem of non-atomic exception handling. The proposed methodology is related to earlier work in which wrappers were used to detect contract violations in component-based systems. In contrast, the proposed methodology combines software wrappers that implement standard predicates with variable replication to enhance dependability. The variable-centric approach, facilitated by the metrics developed in earlier work, also differentiates the proposed methodology.
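To make the combination of predicate-checking wrappers and variable replication concrete, the following is a minimal sketch of the idea, not the cited methodology's actual interface; the class name, predicate and triplication factor are illustrative assumptions.

```python
# Minimal sketch of a dependability wrapper: a guarded, replicated variable.
# The predicate and the triplication below are illustrative, not the cited API.

class ReplicatedVar:
    """Hold N copies of a value; reads return the majority copy."""

    def __init__(self, value, predicate, copies=3):
        if not predicate(value):
            raise ValueError("initial value violates predicate")
        self._predicate = predicate
        self._copies = [value] * copies

    def write(self, value):
        # The wrapper rejects writes that violate the standard predicate,
        # turning silent corruption into a detected error.
        if not self._predicate(value):
            raise ValueError(f"predicate violation: {value!r}")
        self._copies = [value] * len(self._copies)

    def read(self):
        # Majority vote masks corruption of a minority of copies.
        return max(set(self._copies), key=self._copies.count)

# Example: a temperature reading that must stay within sensor range.
temp = ReplicatedVar(20.0, lambda v: -40.0 <= v <= 125.0)
temp.write(25.5)
temp._copies[0] = 999.0          # simulate corruption of one copy
assert temp.read() == 25.5       # the vote masks the fault
```

The wrapper intercepts accesses to the variable rather than modifying the component itself, which is what makes the approach non-intrusive.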
The increased usage of mobile and portable devices has given rise over the last few years to a new market of mobile and pervasive applications. These applications may be executed on mobile computers (laptops, tablet PCs, etc.), wireless hand-held devices (PDAs, smart phones, etc.), or embedded systems (on-board computers, intelligent transportation or building systems, etc.), or may even use sensors or RFID tags. Their main goal is to provide connectivity and services at any time, adapting and monitoring when required and improving the user experience. These systems differ from traditional distributed computing systems. On the one hand, a mobile system is able to change location, communicating via mobile devices. On the other hand, a pervasive application attempts to create an ambient intelligence environment in which computing and its enabling technologies are essentially transparent. This results in some new issues related to inconsistencies, changes or faults, which arise dynamically and continuously while composing services in these systems, and which have to be detected and handled. These issues can be classified into four main categories: (i) mismatch problems, (ii) requirement and configuration changes, (iii) network and remote system failures, and (iv) internal service errors. The first refers to the problems that may appear at different interoperability levels (i.e., signature, behavioural or protocol, quality of service, and semantic or conceptual levels); the Software Adaptation paradigm tackles these problems in a non-intrusive way. The second is prompted by continuous changes over time (new requirements or services fully created at run-time), and Software Evolution (or Software Maintenance) focuses on solving them in an intrusive way. The third and fourth are related to networks (net-
Safety-critical software has traditionally been implemented using a cyclic executive. This uses a regular interrupt to access a table in which sequences of procedure calls are identified. By cycling through the table, periodic jobs are supported. Although conceptually simple, the cyclic executive approach suffers from a number of drawbacks: it uses resources inefficiently, it does not easily cater to sporadic jobs, and it gives poor support to more flexible and adaptive requirements. A more appropriate scheme, using priority-based scheduling of application tasks, is currently being used (or evaluated) in a number of application domains, for example in avionics or automotive electronics. Both preemptive and non-preemptive (cooperative) methods are under consideration. With cyclic executives, it is relatively easy to coordinate the executions of replicated jobs. With more flexible scheduling, where not all tasks are replicated, this coordination is problematic. In this paper, we propose a scheme for this coordination. The work presented is based on earlier independent treatments by Poledna and Barrett et al. The paper is organized as follows: In the following section, the issue of replica determinism is introduced. A system model is given in Section 3. Section 4 relates the formal concept of common knowledge in distributed computer systems to the problem of deterministic distributed scheduling. Timed messages are introduced in Section 5. It is shown that timed messages provide a very efficient and flexible means to achieve replica determinism in the presence of on-line scheduling, preemptions, and non-identically replicated task sets. Finally, Section 6 concludes the paper.
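The table-driven dispatching described above can be sketched as follows; the job names, rates and minor-cycle layout are hypothetical, chosen only to show how a timer tick indexes a fixed table of procedure calls.

```python
# Illustrative sketch of a cyclic executive: a regular timer tick indexes
# a fixed table of minor cycles, each listing procedures to call in order.
# Job names and the cycle layout are invented for illustration.

log = []

def sample_sensors():  log.append("sample")   # e.g. a 10 ms job
def run_control_law(): log.append("control")  # e.g. a 20 ms job
def update_display():  log.append("display")  # e.g. a 40 ms job

# Major cycle = 4 minor cycles: the fastest job appears in every slot,
# the 20 ms job in every second slot, the 40 ms job once per major cycle.
TABLE = [
    [sample_sensors, run_control_law, update_display],
    [sample_sensors],
    [sample_sensors, run_control_law],
    [sample_sensors],
]

def tick(minor_cycle):
    # Called from the regular timer interrupt.
    for job in TABLE[minor_cycle % len(TABLE)]:
        job()

for t in range(4):               # simulate one major cycle
    tick(t)

assert log.count("sample") == 4
assert log.count("control") == 2
assert log.count("display") == 1
```

Because the table is fixed at design time, the schedule is fully deterministic; the same rigidity is what makes sporadic jobs and adaptive requirements hard to accommodate.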
Furukawa, Sako, and Obana proposed an SSO system based on secret information shared between users' IC cards and portals. The Furukawa-Sako-Obana system stores a password and an authentication key for each registered user in a portal. The password and the authentication key are encrypted using the user's secret key, which prevents the portal from impersonating the users. The password authenticates the user, while the authentication key authenticates the user's IC card. If both the user and the IC card are authenticated, the portal transmits the encrypted secrets to the user via a secure channel. The IC card decrypts the encrypted secrets and uses them to authenticate the user to the service. Although the Furukawa-Sako-Obana system remains secure even if an attacker succeeds in mounting two attacks (e.g., stealing the password and compromising the portal), and although it was proven to be SSO-AKE-MA-secure, it involves the use of a multitude of signing/verifying keys and requires multiple runs of Authenticated Key Exchange (AKE) protocols and Password-Based Authenticated Key Exchange (PAKE) protocols, in addition to the use of a Tweakable Block Cipher, first introduced in . This heavy-duty usage of cryptographic primitives greatly complicates the system.
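The storage-and-recovery step of the flow described above can be walked through with a toy model. This is purely illustrative: real deployments use AKE/PAKE runs and a tweakable block cipher, whereas the XOR "cipher", key values and message layout here are invented assumptions.

```python
# Toy walk-through of the encrypted-secret storage in the scheme above.
# The XOR keystream "cipher" and all key/secret values are illustrative.

import hashlib

def keystream(key: bytes, n: int) -> bytes:
    return hashlib.shake_256(key).digest(n)

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: applying it twice with the same key recovers the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Registration: the portal stores the user's secrets only in encrypted
# form, so a compromised portal cannot impersonate the user.
card_secret = b"ic-card-secret-key"          # never leaves the IC card
password, auth_key = b"correct horse", b"card-auth-key"
portal_record = xor_crypt(card_secret, password + b"|" + auth_key)

# Login: after the password authenticates the user and the auth key
# authenticates the card, the portal returns the encrypted record over
# a secure channel; only the IC card can decrypt it.
decrypted = xor_crypt(card_secret, portal_record)
recovered_password, recovered_auth_key = decrypted.split(b"|", 1)
assert recovered_password == password
assert recovered_auth_key == auth_key
```

The design point the toy preserves is that the portal holds only ciphertext, so the two roles (user authentication and card authentication) must both succeed before the secrets become usable.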
There is currently much UK government and industry thinking towards the integration of complex computer-based systems, including those in the military domain. Such systems include applications of high safety criticality and must, therefore, be capable of providing the necessary predetermined levels of performance. Current systems requiring such assurances of performance are mostly based on parameters and system states decided at design time, thus allowing a predictable estimate of performance. The ability to dynamically reconfigure systems at run-time would, however, lead to increased flexibility and adaptability. These properties would allow for better use of existing assets and more sustainable expansion of system functionality.
An ability to detect, log and react to dependability-relevant events has been part of the design of, at least, high-end dependable computers since the 1970s and 1980s. During these decades, several computer vendors (IBM, Tandem, Digital Equipment Corporation (DEC), etc.) used electronic communication (usually a secure telephone link) to maintain and service their customers’ computer devices. As a result, data for maintenance as well as for statistical studies of field dependability were made available. For instance, IBM offered customers of its 3090 high-end mainframe series (which appeared around 1985) an option for automatic remote support capability (an automatic call was initiated to IBM), or a field service engineer was sent to the customer’s site after a maintenance job had been raised due to a processor controller detecting errors (see [Siewiorek and Swarz 1998, Chap. "General-purpose Computing"]). All this information was logged in a central database called RETAIN, and, for example, a service engineer could communicate with it to obtain the latest maintenance information. Another option offered with the 3090 series was that a field technical support specialist could establish a data link to a processor controller to remotely monitor and/or control the mainframe. A similar option existed as a part of the well-known VAX system from DEC (VAX-11/780, the first member of the VAX computer family, was introduced in 1977). The user mode diagnostic software of the VAX system reported errors, which initiated
We argue that, in principle, the inability to fully exploit a system’s degrees of freedom (DoF) is due to the connectivity-centric approach of the conventional paradigm, which ignores the role of computing and fails to efficiently substitute data transfer with a combination of computing, storage, and the transfer of part of the information (Fig. 2). Therefore, extensions to the conventional paradigm are needed in which, instead of allocating resources to transfer the information, resources are allocated to enable the tasks to obtain the required information through any possible means, e.g., processing and cooperation at different levels among tasks, objects, RANs, and computing. While there have been some developments in information exchange provisioning, no one has looked at all these perspectives in tandem. To address this issue, DeX is defined, which brings these techniques together and offers a manifesto for the development of dependable M-CPSs.
The succession of leading organizational models in the dynamic evolution of functional structures follows this hierarchy: functional hierarchy, divisional hierarchy, strategic business units, logistics matrix, and logistics-type network. The main attributes of knowledge-based corporations are the following: a logistics-type network in which each post has functional connections with all existing posts in organizational management; a strategy oriented towards knowledge-intensive intellectual work; dynamic change with a kaleidoscopic character; performance based on the impact of intellectual capacity on technological and managerial structures; a quality of human resources notable for professionals who lead themselves; efficient functioning based on knowledge flows from and to the outside; and technology incorporating intelligent knowledge processors. The managerial practices compared can be followed in Table 1:
As stated before, correct processors may become approximately synchronized by running a fault-tolerant clock synchronization algorithm. A great number of fault-tolerant clock synchronization algorithms have appeared in the literature (see [SiLL90] for an overview). Roughly, these algorithms can be divided into two groups: probabilistic and deterministic algorithms. Probabilistic fault-tolerant clock synchronization algorithms (e.g., [Cris89]) reach better average synchronization between processor clocks than deterministic algorithms [Jalo94, p.97]. However, in probabilistic algorithms there is always a probability that the clocks cannot be synchronized within finite time [Jalo94]. Thus, running a probabilistic fault-tolerant clock synchronization algorithm cannot guarantee that the synchronization required to execute a Byzantine Agreement Protocol can be obtained in finite time. Since in a dependable distributed system any Byzantine Agreement Protocol should terminate in finite time, probabilistic fault-tolerant clock synchronization algorithms are not applicable here. Deterministic fault-tolerant clock synchronization algorithms can guarantee approximate synchronization within finite time. These algorithms require that N ≥ 3T+1, unless authentication is used or there is a bound on the rate at which messages can be generated (proved in [DoHS86]). In general, it is impossible to indicate a priori an upper bound on the rate at which messages can be generated, so either we must require that N ≥ 3T+1, or authentication must be used.
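To illustrate why the N ≥ 3T+1 bound matters in the deterministic setting, the following is a sketch of one deterministic convergence step in the spirit of fault-tolerant averaging: discard the T largest and T smallest readings, then average the rest. The concrete readings and tolerance are invented for illustration.

```python
# Sketch of a deterministic fault-tolerant averaging step: with
# N >= 3T + 1 readings of which at most T are Byzantine, discarding the
# T largest and T smallest leaves only values bracketed by correct
# clocks, so faulty readings cannot drag the result arbitrarily far.

def ft_average(readings, t):
    n = len(readings)
    if n < 3 * t + 1:
        raise ValueError("need N >= 3T + 1 readings")
    trimmed = sorted(readings)[t:n - t]   # drop T extremes on each side
    return sum(trimmed) / len(trimmed)

# Four clock readings, at most one Byzantine (N = 4, T = 1): the wild
# reading 10_000 is discarded before averaging.
result = ft_average([100.2, 99.8, 10_000, 100.0], t=1)
assert abs(result - 100.1) < 1e-9
```

Unlike a probabilistic scheme, this step always terminates in one round with a bounded result, which is the property needed before a Byzantine Agreement Protocol can be run.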
With today's fast-growing technology, PLCs have made it possible for automation systems to become larger, and hence the complexity of the algorithms implemented in logic controllers has increased. At the same time, the demands on dependability are increasing due to rising user awareness, stricter legislation and, especially, new application areas of automatic control. This increases the vulnerability in the development of such systems, especially in systems with safety responsibility, where faults must not occur because they may lead to high costs, human injuries and material damage.
Developments in computer infrastructure and hardware have been followed by developments in computer programs. CASs have also diversified as the years have passed. Various CASs can be classified as free and open-source software (FOSS), like Maxima and Scilab, which are freely available to teachers and students, and proprietary ones, like Maple and Matlab, which have many interesting features and support for their users. But the cost of commercial software is an obstacle for teachers and students. Another classification distinguishes general-purpose systems, like Mathematica, Derive, Scilab and Maxima, from specific-purpose systems, like GAP and Cadabra. Yet another distinguishes numeric CASs, like Matlab and Scilab, from symbolic software, like Maxima, Maple and Mathematica. One can find many studies on different topics and tasks guiding the use of CAS to ease teaching and to improve students’ motivation and learning, both on the official sites of the software and throughout the web.
The Innova has also won wide respect and devotion for its strong emphasis on low operating costs and long-term reliability, as exhibited by its world-proven, time-tested 4-stroke engine technology and quality construction. For 2007, this strong, dependable performance has been brought further up to date with the addition of new electronic fuel injection and low-emissions systems for smoother and stronger operation, and significant improvements to its already low fuel consumption and exhaust emissions, ensuring full compliance with Europe’s strict EURO-3 emissions regulations, an unprecedented achievement for an air-cooled engine.
The development and evolution of informatics as a research area promises some sort of future for the discipline, even if we cannot accurately predict the details of that future. These changes will necessarily impact university-level education. Increasing pan-European collaborations in research need to be complemented by educational collaborations. This trend can only be accelerated by the demands of administrative changes brought about by activities such as the Bologna agreement. In the area of computer science educational research, the potential gains of collaborative working and learning are being demonstrated by activities such as the Disciplinary Commons [2, 10]. These activities are acknowledged to have impact beyond the scope of the original project domain in terms of information sharing and understandings of practice.
After the initial knowledge model has been elicited from the domain expert, the consultation component is created as a system of rules that maps to the expert’s diagnostic process. The consultation part can present a full complement of media in the form of examples to assist in the diagnostic interaction. The consultation component offers great promise to mitigate the need for access to human domain experts. By integrating pedagogically-motivated browsers with knowledge-based systems, just-in-time training is provided for the learner/practitioner who is not in a traditional setting - somebody, for example, who needs to learn how to fix a piece of equipment while on the job actually conducting the repair.
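A rule system that maps symptoms to an expert's diagnostic conclusions can be sketched minimally as follows; the equipment-repair rules and symptom names are invented for illustration, not drawn from the system described above.

```python
# Minimal sketch of a diagnostic rule system: each rule pairs a set of
# required symptoms with the expert's advice. Rules are ordered most
# specific first, mirroring how an expert narrows down a diagnosis.
# All rule content here is invented for illustration.

RULES = [
    ({"no_power", "fuse_blown"}, "replace the fuse"),
    ({"no_power"},               "check the power cable"),
    ({"overheating"},            "clean the cooling vents"),
]

def consult(symptoms):
    # Fire the first rule whose conditions are all present in the
    # observed symptoms; fall through to a human expert otherwise.
    for conditions, advice in RULES:
        if conditions <= symptoms:      # subset test
            return advice
    return "refer to a human expert"

assert consult({"no_power", "fuse_blown"}) == "replace the fuse"
assert consult({"no_power"}) == "check the power cable"
assert consult({"humming"}) == "refer to a human expert"
```

In a full consultation component, each rule would also carry the media (images, examples) presented to the learner when it fires, which is what turns the diagnosis into just-in-time training.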
To ensure that information systems function in an efficient and effective manner and help the organization achieve its strategic objectives, an audit process must be performed. This task involves analyzing information system processes. Individual activities within an information system can be grouped into processes. The fundamental feature of this framework is that audit domains and audit criteria can be combined to form a hierarchical tree consisting of domains, processes and activities.
methods. To develop their level of confidence, they require intensive guidance and practice, and also need these concepts reinforced in their minds. Teachers are not always available to guide students through their journey of knowledge discovery in financial analysis methods. This concept proposes the use of a web-based, problem-based interactive system called NeRePa to help students learn financial analysis methods. The application is aimed at improving students' understanding and learning experience of financial analysis methods, enabling learners to gain hands-on problem-solving skills and knowledge. The NeRePa application is currently under development; once it has been fully developed, the tool will be evaluated for its effectiveness in improving learners' experience of analysing financial methods in project management. A related study describes research on the use of a particular computer program group to teach project management to UK-based students of business management. This program group is widely used in the business world, but its accessibility and ease of understanding for students of business management had not been tested previously. An empirical study was undertaken in 1994 among third-year business studies undergraduates based at the University of Westminster, London. It discusses the approaches both students and staff members had to adopt to get the best out of the commercially produced computer-based training package. The difficulties in ascertaining the role of the computer-based training program group in any knowledge gained by students are acknowledged. Information used was based on student diaries, questions and group discussions.