The constant upgrading of automobile software is also a concern. Many problems with the entertainment and communications software have been reported in the literature. I have a personal example: my favorite car. After an upgrade of the satellite radio software at the provider’s site, the software on the car kept contacting the satellite for programming data even while the car was turned off. This depleted the battery and required two tows, two replacement batteries, and two days of work by the dealer’s top mechanic until the car was fixed. This is a problem in software configuration management, because the newest version of the satellite provider’s system did not interface correctly with existing satellite-enabled radios. It is also a problem in software design, because there was no safety mechanism, such as a time-out, to protect the battery. Deployment of software to multiple locations is a constant issue in modern software that uses networks and a variety of end-user devices. (You might see this particular car, a convertible with a vanity license plate. If you do, please smile and buy another copy of this book. Thank you.)
The fundamental notion behind model-driven engineering is that completely automated transformation of models to code should be possible. To achieve this, you have to be able to construct graphical models whose semantics are well defined. You also need a way of adding information to graphical models about the ways in which the operations defined in the model are implemented. This is possible using a subset of UML 2, called Executable UML or xUML (Mellor and Balcer, 2002). I don’t have space here to describe the details of xUML, so I simply present a short overview of its main features. UML was designed as a language for supporting and documenting software design, not as a programming language. The designers of UML were concerned not with the semantic details of the language but with its expressiveness. They introduced useful notions, such as use case diagrams, that help with the design but are too informal to support execution. To create an executable subset of UML, the number of model types has therefore been dramatically reduced to three key types: 1. Domain models identify the principal concerns in the system. These are defined
In the previous two chapters we have seen first FOL and then a version of it that was slightly changed with respect to notation and the number of features in the language (easier notation and fewer features, respectively), being the DL family of languages. They have not gotten us anywhere close to implementations, however. This is set to change in this chapter, where we will look at ‘implementation versions’ of DLs that have rich tooling support. We will take a look at the computational use of DLs with a so-called serialisation to obtain computer-processable versions of an ontology and automated reasoning over it. The language that we will use to serialise the ontology is the most widely used ontology language for computational purposes, being the Web Ontology Language OWL. OWL was first standardised in 2004 and a newer version was standardised in 2009, which has fuelled tool development and the deployment of ontologies in ontology-driven information systems. OWL may look like yet another language and notation to learn, but the ones that we will consider (the DL-based ones) have the same underlying principles. It does have a few engineering extras, which also has as a consequence that there are several ways to serialise an ontology so as to cater for software developers’ preferences. Thus, theoretically, there is not really anything substantially new in this chapter, but there will be many more options and exercises to practically engage with the ontology languages, automated reasoning, and toy ontologies to play with on the computer. Depending on your interests, things start to get ‘messy’ (for a theoretician) or finally concrete (for an engineer).
Automated reasoning, and deduction in particular, has found applications in ‘everyday life’. A notable example is hardware and (critical) software verification, which gained prominence after Intel had shipped its Pentium processors with a floating point unit error in 1994 that lost the company about $500 million. Since then, chips are routinely proven automatically to function correctly according to specification before being taken into production. A different scenario is scheduling problems at schools: finding an optimal combination of course, lecturer, and timing for a class or degree program used to take a summer to do manually, but can now be computed in a fraction of that time using constraint programming. In addition to such general application domains, automated reasoning is also used for specific scenarios, such as the demonstration of discovering (more precisely: deriving) novel knowledge about protein phosphatases [WSH07]. The authors represented the knowledge about the subject domain of protein phosphatases in humans in a formal bio-ontology and classified the enzymes of both human and the fungus Aspergillus fumigatus using an automated reasoner, which showed that (i) the reasoner was as good as human expert classification, (ii) it identified additional p-domains (an aspect of the phosphatases) so that the human-originated classification could be refined, and (iii) it identified a novel type of calcineurin-like phosphatase, as found in other pathogenic fungi. The fact that one can use an automated reasoner (in this case: deduction, using a Description Logics knowledge base) as a viable method in science is an encouragement to explore such avenues further.
Most shareware and package software adopts serial-number-based protection, either to recover development costs or to prevent unauthorized copying. Likewise, online games these days deploy game-security solutions to guard against hacking, and some vendors go further and protect their software through hardware protection devices. Most of these protection schemes, however, are extremely weak: depending on the individual’s skill, they can be cracked in anywhere from a few minutes to a few hours. Even today, many companies, Microsoft among them, publicly invite attempts to crack their own software. In Korea, too, there have been a few cases where a developer issued a public cracking challenge, but measured against such an open challenge the protection level was never particularly high. Since most of these challenges were merely a means of promoting the company’s own software, actual crackers presumably found them rather deflating.
Abstract— Introduction is an important part of research articles. It is the first opportunity to make a good impression and establish credibility with the reading audience. A well-written introduction increases a research article’s chances of acceptance and citation. The objective of this study is to produce a structure for writing the introduction section of software engineering (CARSSE) research articles. Nine highly cited research articles from “IEEE Transactions on Software Engineering” are selected for pattern extraction. The Creating a Research Space (CARS) model is kept as a baseline, with three additional parameters: sentences, paragraphs, and references. Keeping the CARS model in view, in the selected studies the “occupying a niche” move accounts for around 48% of appearances in the introduction section, whereas the “establishing a territory” and “establishing a niche” moves account for 34% and 18% respectively. The proposed structure can further be extended and made more precise by adding a few more studies.
The careful observer of language history can detect two ironies here. The first is that the designers of Ada were well aware of O-O ideas; although this is not widely known, Ichbiah had in fact written one of the first compilers for Simula 67, the original O-O language. As he has since explained when asked why he did not submit an O-O design to the DoD, he estimated that in the competitive bidding context of Ada’s genesis such a design would be considered so far off the mainstream as to stand no chance of acceptance. No doubt he was right; indeed one can still marvel at the audacity of the design accepted by the DoD. It would have been reasonable to expect the process to lead to something like an improvement of JOVIAL (a sixties’ language for military applications); instead, all four candidate languages were based on Pascal, a language with a distinct academic flavor, and Ada embodied bold new design ideas in many areas such as exceptions, genericity and concurrency. The second irony is that the Ada mandate, meant to force DoD software projects to catch up with progress in software engineering by retiring older approaches, has also had in the ensuing years the probably unintended effect of slowing down the adoption of newer (post-Ada) technology by the military-aerospace community.
More recently, some introductory programming textbooks have started to use object-oriented ideas right from the start, as there is no reason to let “ontogeny repeat phylogeny”, that is to say, take the poor students through the history of the hesitations and mistakes through which their predecessors arrived at the right ideas. The first such text (to my knowledge) was [Rist 1995]. Another good book covering similar needs is [Wiener 1996]. At the next level — textbooks for a second course on programming, discussing data structures and algorithms based on the notation of this book — you will find [Gore 1996] and [Wiener 1997]; [Jézéquel 1996] presents the principles of object-oriented software engineering.
This suggests a more general observation as to the intellectual value of our field. Over the years many articles and talks have claimed to examine how software engineers could benefit from studying philosophy, general systems theory, “cognitive science”, psychology. But to a practicing software developer the results are disappointing. If we exclude from the discussion the generally applicable laws of rational investigation, which enlightened minds have known for centuries (at least since Descartes) and which of course apply to software science as to anything else, it sometimes seems that experts in the disciplines mentioned may have more to learn from experts in software than the reverse. Software builders have tackled — with various degrees of success — some of the most challenging intellectual endeavors ever undertaken. Few engineering projects, for example, match in complexity the multi-million line software projects commonly being launched nowadays. Through its more ambitious efforts the software community has gained precious insights on such issues and concepts as size, complexity, structure, abstraction, taxonomy, concurrency, recursive reasoning, the difference between description and prescription, language, change and invariants. All this is so recent and so tentative that the profession itself has not fully realized the epistemological implications of its own work.
• For the standard production versions, decide whether to choose a no-check version or a protected version (usually at the precondition level) based on your assessment, from an engineering perspective, of the relative weight of the three factors cited at the beginning of this discussion: how much you trust the correctness of your software (meaning in part how hard you have worked at making it correct and convincing yourself and others that it is); how crucial it is to get the utmost efficiency; and how serious the consequences of an undetected run-time error can be. • If you decide to go for a no-check version, also include in your delivery a version that checks at least for preconditions. That way, if the system starts exhibiting abnormal behavior against all your expectations, you can ask the users — those at least who have not been killed by the first erroneous production runs — to switch to the checking version, helping you find out quickly what is wrong.
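The two delivery variants described above can be sketched in a few lines. This is a minimal illustration in Python, not the book’s own (Eiffel-based) assertion mechanism; the `require` helper, the environment flag, and `sqrt_floor` are all invented names for the sake of the example. The idea is simply that the same code base yields a checking build or a no-check build depending on one switch:

```python
import os

# Hypothetical toggle: precondition checks are on by default (the
# "protected" build) and can be disabled for the "no-check" build.
CHECK_PRECONDITIONS = os.environ.get("CHECK_PRECONDITIONS", "1") == "1"

def require(condition, message):
    """Fail loudly on a violated precondition, but only in the checking build."""
    if CHECK_PRECONDITIONS and not condition:
        raise AssertionError("Precondition violated: " + message)

def sqrt_floor(n):
    """Integer square root, with an explicit precondition on its argument."""
    require(isinstance(n, int) and n >= 0, "n must be a non-negative integer")
    # Simple search, favoring clarity over speed.
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r
```

In the checking version a call such as `sqrt_floor(-1)` fails immediately at the faulty call site, which is exactly the diagnostic information you want users to report back; in the no-check version the bad argument would silently propagate, and the failure, if any, would surface far from its cause.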
• By writing texts that conform to the syntax of the software notation, you can make use of all the tools of the supporting software development environment. In particular, the compiling mechanism will double up as a precious CASE (computer-aided software engineering) tool, applying type rules and other validity constraints to check the consistency of your specifications and detect contradictions and ambiguities; and the browsing and documentation facilities of a good O-O environment will be as useful for analysis as they are for design and implementation. • Using the software notation also means that, should you decide to proceed to the design and implementation of a software system, you will be able to follow a smooth transition path; your work will be to add new classes, effective versions of the deferred features and new features. This supports the seamlessness of the approach, discussed in the next chapter.
You should apply the style rules right from the time you start writing a class. For example you should never write a routine without immediately including its header comment. This does not take long, and is not wasted time; in fact it is time saved for all future work on the class, whether by you or by others, whether after half an hour or after half a decade. Using regular indentation, proper spelling for comments and identifiers, adequate lexical conventions — a space before each opening parenthesis but not after, and so on — does not make your task any longer than ignoring these rules, but compounded over months of work and heaps of software produces a tremendous difference. Attention to such details, although not sufficient, is a necessary condition for quality software (and quality, the general theme of this book, is what defines software engineering).
Cloud services and applications are becoming very popular and penetrative these days. Increasingly, both business and IT applications are being modernized appropriately and moved to clouds to be subsequently subscribed and consumed by global user programs and people directly anytime anywhere for free or a fee. The aspect of software delivery is henceforth set for a paradigm shift with the smart leverage of cloud concepts and competencies. Now there is a noteworthy trend emerging fast to inspire professionals and professors to pronounce the role and responsibility of clouds in software engineering. That is, not only cloud-based software delivery but also cloud-based software development and debugging are insisted upon as the need of the hour. On carefully considering the happenings, it is no exaggeration to say that end-to-end software production, provision, protection, and preservation are to happen in virtualized IT environments in a cost-effective, compact, and cognitive fashion. Another interesting and strategic pointer is that the number and the type of input/output devices interacting with remote, online, and on-demand clouds are on the climb. Besides fixed and portable computing machines, there are slim and sleek mobile, implantable, and wearable devices emerging to access, use, and orchestrate a wider variety of disparate and distributed professional as well as personal cloud services. The urgent thing is to embark on modernizing and refining the currently used application development processes and practices in order to make cloud-based software engineering simpler, successful, and sustainable.
A complementary approach to functional or black-box testing is called structural or white-box testing. In this approach, test groups must have complete knowledge of the internal structure of the software. We can say structural testing is an approach to testing where the tests are derived from knowledge of the software’s structure and implementation. Structural testing is usually applied to relatively small program units, such as subroutines, or the operations associated with an object. As the name implies, the tester can analyze the code and use knowledge about the structure of a component to derive test data. The analysis of the code can be used to find out how many test cases are needed to guarantee that all of the statements in the program are executed at least once during the testing process. It would not be advisable to release software that contains untested statements, as the consequences might be disastrous. This goal seems easy, but even the simple objectives of structural testing are harder to achieve than they may appear at first glance.
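The statement-coverage goal just described can be made concrete with a toy example. The function below and its inputs are invented purely for illustration; the point is that each test input is chosen, by reading the code, so that every statement is executed at least once:

```python
def classify(x):
    """A small unit with three distinct statements to cover."""
    if x < 0:
        return "negative"   # executed only for negative inputs
    if x == 0:
        return "zero"       # executed only for zero
    return "positive"       # executed only for positive inputs

# Tests derived from the structure: reading the code shows that one
# negative, one zero, and one positive input together execute every
# statement at least once.
tests = [(-5, "negative"), (0, "zero"), (3, "positive")]
for value, expected in tests:
    assert classify(value) == expected
```

Note that a black-box tester, knowing only the specification, might happen to try only positive inputs and leave two statements unexecuted; it is the view of the code’s structure that tells us three cases are both necessary and sufficient here.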
Software engineers sometimes distinguish revenue tasks, which contribute directly to the solution of a problem, from excise tasks, which do not. For example, compiling a Java class is a classic excise task because, although necessary for the class to become executable, compilation contributes nothing to the particular behavior of that class. In contrast, determining which methods are appropriate to define a given data abstraction as a Java class is a revenue task. Excise tasks are candidates for automation; revenue tasks are not. Software testing probably has more excise tasks than any other aspect of software development. Maintaining test scripts, re-running tests, and comparing expected results with actual results are all common excise tasks that routinely consume large chunks of a test engineer’s time. Automating excise tasks serves the test engineer in many ways. First, eliminating excise tasks eliminates drudgery, thereby making the test engineer’s job more satisfying. Second, automation frees up time to focus on the fun and challenging parts of testing, namely the revenue tasks. Third, automation can help eliminate errors of omission, such as failing to update all the relevant files with the new set of expected results. Fourth, automation eliminates some of the variance in test quality caused by differences in individuals’ abilities.
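One of the excise tasks mentioned above, comparing expected results with actual results and keeping the expected files up to date, is easy to automate. The following is a minimal sketch, not any particular test framework’s API; the function name and its parameters are invented for the example:

```python
import pathlib

def run_and_compare(produce_output, expected_file, update=False):
    """Excise-task automation sketch: rerun a test, compare its actual
    output against a stored expected-results file, and optionally
    refresh that file when the expected results legitimately change."""
    actual = produce_output()
    path = pathlib.Path(expected_file)
    if update or not path.exists():
        path.write_text(actual)   # record a new expected result
        return True
    return actual == path.read_text()
```

A first run records the baseline; every later run is a mechanical comparison, so reruns cost the engineer nothing, and updating all expected files after an intended behavior change is a single `update=True` pass rather than an error-prone manual edit.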
Before the 1960s, semiconductor engineering was regarded as part of low-current and low-voltage electronic engineering. The currents used in solid-state devices were below one ampere and voltages only a few tens of volts. The year 1970 began one of the most exciting decades in the history of low-current electronics. A number of companies entered the field, including Analog Devices, Computer Labs, and National Semiconductor. The 1980s represented high growth years for integrated circuit, hybrid, and modular data converters. In the 1990s, the major applications were industrial process control, measurement, instrumentation, medicine, audio, video, and computers. In addition, communications became an even bigger driving force for low-cost, low-power, high-performance converters in modems, cell-phone handsets, wireless infrastructure, and other portable applications. The trends toward more highly integrated functions and lower power dissipation have continued into the 2000s.