Open Source Software (OSS) projects, as complex software engineering systems, are an ideal domain for empirical software engineering: they provide abundant data and the opportunity to introduce new approaches that can be easily adapted and that enrich methods for improving software quality. As flexible, continuously evolving systems, OSS projects keep growing as new requirements arrive from users and new code arrives from developers. At some point, however, the project manager wants to know the status of the OSS project: whether it is heading in the right direction and whether the product is being delivered at good quality. To obtain this immediate status, several efforts have been made to observe the software engineering processes of OSS projects, but current approaches cover only limited areas of a project's health indicators. In this research, we improve on these approaches by proposing a framework that integrates different techniques for observing software engineering processes and monitoring the health status of an OSS project. Our objectives are to define observability factors of software engineering through a literature review of prior work, and to assess OSS engineering processes using those factors, showing that the approach works and can improve software quality. We use the OSS project domain as our context and other engineering domains as references, e.g., production automation and (software+) engineering domains. Our contributions are improvements to the data collection and data analysis steps.
The game is organized as a competitive game, in which students take on the roles of project leaders in the same company. They are both given the same project and are instructed to complete it as quickly as possible. The player who completes the project first will be the winner. However, players must balance several competing concerns as they work, including their budget and the client's demands regarding the reliability of the produced software. In essence, they must strive to follow proper software engineering practices in order to avoid any adverse consequences that might cause them to fall behind their opponent in the race to complete the project. What are considered proper and improper software engineering procedures is based upon a compendium of 85 "rules of software engineering" that we have collected by surveying software engineering literature (Abdel-Hamid and Madnick, 1991; Cook and Wolf, 1998; Dawson, 2000) and practitioners' experience reports (Brooks, 1995; Davis, 1995; Glass, 2003). These rules represent a mixture of both academic and industrial "best practices", and have been gathered with the intention of teaching important academic lessons while remaining faithful to reality. A full description of these rules is outside the scope of this paper, but is provided elsewhere at: http://www.ics.uci.edu/~emilyo/SimSE/se_rules.html.
Currently, we are starting to use the GOODSTEP platform within two case studies. The aim of the first case study is to customize the platform to an SDE for use within typical information system development processes. This SDE will then be used by an industrial partner for development of an information system supporting university administration. In a second case study we are going to customize the GOODSTEP platform for use within airline software projects of another industrial partner. These projects reuse C++ classes from a variety of class libraries. The customized SDE for this project is going to support the development and maintenance process of C++ class libraries.
Chapter 1 is a general introduction that introduces professional software engineering and defines some software engineering concepts. I have also written a brief discussion of ethical issues in software engineering. I think that it is important for software engineers to think about the wider implications of their work. This chapter also introduces three case studies that I use in the book, namely a system for managing records of patients undergoing treatment for mental health problems, a control system for a portable insulin pump and a wilderness weather system. Chapters 2 and 3 cover software engineering processes and agile development. In Chapter 2, I introduce commonly used generic software process models, such as the waterfall model, and I discuss the basic activities that are part of these processes. Chapter 3 supplements this with a discussion of agile development methods for software engineering. I mostly use Extreme Programming as an example of an agile method but also briefly introduce Scrum in this chapter.
I started my career in artificial intelligence (AI) creating rule-based expert systems. This involves listening to experts and creating models of their decision-making processes and then coding these models into rules in a knowledge-based system. As I built these systems, I began to see repeating themes: in common types of problems, experts tended to work in similar ways. For example, experts who diagnose problems with equipment tend to look for simple, quick fixes first, then they get more systematic, breaking the problem into component parts; but in their systematic diagnosis, they tend to try first inexpensive tests or tests that will eliminate broad classes of problems before other kinds of tests. This was true whether we were diagnosing problems in a computer or a piece of oil field equipment.
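The cheapest-test-first strategy described above can be sketched as a tiny rule engine. This is a minimal illustration, not a real expert-system shell; the rule names, costs and diagnoses are all invented.

```python
# Hypothetical sketch of the diagnostic strategy described above:
# rules are tried cheapest-first, mirroring how experts order their tests.

def diagnose(symptoms, rules):
    """Try rules in order of increasing cost; return the first diagnosis."""
    for rule in sorted(rules, key=lambda r: r["cost"]):
        if rule["condition"](symptoms):
            return rule["diagnosis"]
    return "unknown"

rules = [
    {"cost": 10, "condition": lambda s: "overheating" in s, "diagnosis": "blocked vent"},
    {"cost": 1,  "condition": lambda s: "unplugged" in s,   "diagnosis": "no power"},
    {"cost": 5,  "condition": lambda s: "beeping" in s,     "diagnosis": "memory fault"},
]

# The cheapest matching rule wins, even though it appears second in the list.
result = diagnose({"unplugged", "overheating"}, rules)
```

Real knowledge-based systems add conflict resolution, chaining and certainty factors, but the ordering heuristic is the same.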
Dynamic binding has a major influence on the structure of object-oriented applications, as it enables developers to write simple calls (meaning, for example, “call feature turn on entity my_boat”) to denote what is actually several possible calls depending on the corresponding run-time situations. This avoids the need for many of the repeated tests (“Is this a merchant ship? Is this a sports boat?”) which plague software written with more conventional approaches.
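The contrast above can be shown in a few lines. The sketch below uses invented class names; the single call `launch(boat)` stands in for "call feature turn on entity my_boat", and dynamic binding selects the right version at run time, replacing the chain of explicit type tests.

```python
# Dynamic binding: one simple call selects the right behaviour at run time.
class Boat:
    def turn_on(self):
        return "generic start-up"

class MerchantShip(Boat):
    def turn_on(self):
        return "start cargo systems, then engines"

class SportsBoat(Boat):
    def turn_on(self):
        return "start engines immediately"

def launch(boat: Boat) -> str:
    # No "Is this a merchant ship? Is this a sports boat?" tests needed.
    return boat.turn_on()
```

Adding a new kind of boat requires only a new class; `launch` is untouched.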
To the outsider it sometimes seems that scientific programming — the world of Fortran — has remained aloof from much of the evolution in softwareengineering. This is partly true, partly not. The low level of the language, and the peculiar nature of scientific computing (software produced by people who, although scientists by training, often lack formal software education), have resulted in some software of less than pristine quality. But some of the best and most robust software also comes from that field, including advanced simulations of extremely complex processes and staggering tools for scientific visualization. Such products are no longer limited to delicate but small numerical algorithms; like their counterparts in other application areas, they often manipulate complex data structures, rely on database technology, include extensive user interface components. And, surprising as it may seem, they are still often written in Fortran.
This technique is probably worse than the C-Unix signal mechanism, which at least picks up the computation where it left off. A when subclause that ends with return does not even continue the current routine (assuming there are more instructions to execute); it gives up and returns to the caller as if everything were fine, although everything is not fine. Managers — and, to continue with the military theme, officers — know this situation well: you have assigned a task to someone, and are told the task has been completed — but it has not. This leads to some of the worst disasters in human affairs, and in software affairs too. This counter-example holds a lesson for Ada programmers: under almost no circumstances should a when subclause terminate its execution with a return. The qualification "almost" is here for completeness, to account for a special case, the false alarm, discussed below; but that case is very rare. Ending exception handling with a return means pretending to the caller that everything is right when it is not. This is dangerous and unacceptable. If you are unable to correct the problem and satisfy the Ada routine's contract, you should make the routine fail. Ada provides a simple mechanism to do this: in an exception clause you may execute a raise instruction written as just raise.
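Python has a direct analogue of this advice: a bare `raise` inside an except block re-raises the current exception. The sketch below (with invented function names, and Python standing in for Ada) contrasts the dangerous pattern with the recommended one.

```python
# A handler that cannot correct the problem should re-raise, not return
# as if everything were fine.

def risky_operation():
    raise ValueError("contract cannot be satisfied")

def bad_handler():
    try:
        risky_operation()
    except ValueError:
        return "everything is fine"   # pretends success: the dangerous pattern

def good_handler():
    try:
        risky_operation()
    except ValueError:
        # Unable to correct the problem: make the routine fail,
        # propagating the exception to the caller (Python's bare raise).
        raise
```

The caller of `bad_handler` has no way to know anything went wrong; the caller of `good_handler` is forced to deal with the failure.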
The disadvantage of this approach is that it is modal: it forces you to select first what you want to do, then what you want to do it to. The practice of software development is different. During the course of a debugging session, you may suddenly need a browsing facility: for example you discover that a routine causing trouble is a redefined version, and you want to see the original. If you see that original you may next want to see the enclosing class, its short form, and so on. Modal environments do not let you do this: you will have to go away from the “debugger tool” to a “browser tool” and restart from scratch to look for the item of interest (the routine) even though you had it in the other window.
The retrospective is a critique of the most recent past sprint of the team. One approach is to ask ourselves two simple questions: What did we do well? What do we need to improve? Launch process gate reviews and other heavy-handed approaches can be improved by incorporating at least a brief retrospective so that team learning occurs (see Figure 2.20). Our experience with staged-gate processes suggests that once a customer or client has seen a schedule or time line, the schedule is not open to alteration. We suggest that the retrospective allow for both learning and recalibration of the overall project schedule. Software scrum teams will often develop a sprint velocity (or story point velocity) based on empirical measurements of their specific team, which allows for calculation of the probable conclusion date for the project or subproject. For those who think the classical approach is the way to operate, please take a look at a well-baselined timeline with both planned and actual start and finish dates. We normally begin to see variance between plan and reality within the first few weeks of the project, a situation that worsens as the project continues. With the
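The velocity calculation mentioned above amounts to simple arithmetic. The sketch below uses invented numbers: mean measured story points per sprint give the velocity, and the remaining sprint count is rounded up.

```python
# Back-of-the-envelope velocity projection from empirical sprint data.
import math

def sprints_remaining(points_left, completed_per_sprint):
    """Mean measured velocity, then round the remaining sprint count up."""
    velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    return math.ceil(points_left / velocity)

# A team finished 18, 22 and 20 points in its last three sprints
# (velocity 20); 120 points remain, so 6 more sprints are projected.
remaining = sprints_remaining(120, [18, 22, 20])
```

Multiplying the sprint count by the sprint length then yields the probable conclusion date; recalibrating at each retrospective keeps the projection honest.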
In ordinary approaches to software construction, although calls and other operations often (as in the various preceding examples) rely for their correctness on various assumptions, these assumptions remain largely implicit. The developer will convince himself that a certain property always holds at a certain point, and will put this analysis to good use in writing the software text; but after a while all that survives is the text; the rationale is gone. Someone — even the original author, a few months later — who needs to understand the software, perhaps to modify it, will not have access to the assumption and will have to figure out from scratch what in the world the author may have had in mind. The check instruction helps avoid this problem by encouraging you to document your non-trivial assumptions.
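Python's `assert` statement plays a role comparable to the check instruction described above: it records a non-trivial assumption in the software text itself rather than leaving it implicit. The function and its assumption below are invented for illustration.

```python
# An assumption documented in the text, where future readers will find it,
# instead of surviving only in the original author's head.

def monthly_average(totals):
    # Documented assumption: callers always supply a full year of data.
    assert len(totals) == 12, "expected one total per month"
    return sum(totals) / 12
```

Someone modifying this code months later does not have to figure out from scratch why dividing by 12 is safe; the assumption is stated, and violated assumptions fail loudly.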
Providing each computer user with a multi-windowing, multiprogramming interface is the responsibility of the operating system. But increasingly the users of the software we develop want to have concurrency within one application. The reason is always the same: they know that computing power is available by the bountiful, and they do not want to wait idly. So if it takes a while to load incoming messages in an e-mail system, you will want to be able to send an outgoing message while this operation proceeds. With a good Web browser you can access a new site while loading pages from another. In a stock trading system, you may at any single time be accessing market information from several stock exchanges, buying here, selling there, and monitoring a client’s portfolio.
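The e-mail scenario above can be sketched with a background thread: the slow load proceeds concurrently while the application remains free to do other work. The function names and timing are invented, and a real application would use an event loop or thread pool rather than a bare thread.

```python
# Concurrency within one application: load messages in the background
# while the rest of the program keeps working.
import threading
import time

inbox = []

def load_incoming():
    time.sleep(0.05)          # simulate a slow network fetch
    inbox.append("incoming message")

loader = threading.Thread(target=load_incoming)
loader.start()                # the fetch proceeds in the background...
sent = "outgoing message sent while loading"   # ...while we keep working
loader.join()                 # wait for the fetch before reading results
```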
The class corresponding to TX might not have a feature called f; the feature might exist but be secret; the number of arguments might not coincide with what has been declared for f in the class; the type for a or another argument might not be compatible with what f expects. In all such cases, letting the software text go through unopposed — as in a language without static type checking — would usually mean nasty consequences at run time, such as the program crashing with a diagnostic of the form "Message not understood" (the typical outcome in Smalltalk, a non-statically-typed O-O language). With explicit typing, the compiler will not let the erroneous construct through.
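The failure mode described above is easy to reproduce in a dynamically typed language; here Python's `AttributeError` plays the role of Smalltalk's "Message not understood". The class and feature names are invented.

```python
# Without static type checking, an erroneous call is detected only
# when it actually executes at run time.

class Account:
    def deposit(self, amount):
        return amount

def apply_call(obj):
    return obj.withdraw(10)   # no such feature exists on Account

error = None
try:
    apply_call(Account())
except AttributeError as exc:
    error = exc               # the failure surfaces only at run time
```

A statically typed compiler would reject the `withdraw` call before the program ever ran.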
Learning all the technical details of inheritance and related mechanisms, as we did in part C, does not automatically mean that we have fully grasped the methodological consequences. Of all issues in object technology, none causes as much discussion as the question of when and how to use inheritance; sweeping opinions abound, for example on Internet discussion groups, but the literature is relatively poor in precise and useful advice. In this chapter we will probe further into the meaning of inheritance, not for the sake of theory, but to make sure we use it best to benefit our software development projects. We will in particular try to understand how inheritance differs from the other inter-module relation in object-oriented system structures, its sister and rival, the client relation: when to use one, when to use the other, when both choices are acceptable. Once we have set the basic criteria for using inheritance — identifying along the way the typical cases in which it is wrong to use it — we will be able to devise a classification of the various legitimate uses, some widely accepted (subtype inheritance), others, such as implementation or facility inheritance, more controversial. Along the way we will try to learn a little from the experience in taxonomy, or systematics, gained from older scientific disciplines.
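The two relations contrasted above can be pinned down with a small sketch using invented classes: `SavingsAccount` *is an* `Account` (inheritance), while `Bank` merely *has* accounts (the client relation, realized here through composition).

```python
# Inheritance ("is-a") versus the client relation ("has-a").

class Account:
    def __init__(self, balance=0):
        self.balance = balance

class SavingsAccount(Account):     # inheritance: a savings account IS an account
    def add_interest(self, rate):
        self.balance += self.balance * rate

class Bank:                        # client relation: a bank HAS accounts
    def __init__(self):
        self.accounts = []

    def open(self, account):
        self.accounts.append(account)

    def total(self):
        return sum(a.balance for a in self.accounts)
```

A bank is not a kind of account, so inheritance would be wrong there; a savings account genuinely is a kind of account, so subtype inheritance fits.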
It would be foolish to dismiss this side-effect-full style as thoughtless; its widespread use shows that many people have found it convenient, and it may even be part of the reason for the amazing success of C and its derivatives. But what was attractive in the nineteen-seventies and eighties — when the software development population was growing by an order of magnitude every few years, and the emphasis was on getting some kind of job done rather than on long-term quality — may not be appropriate for the software technology of the twenty-first century. There we want software that will grow with us, software that we can understand, explain, maintain, reuse and trust. The Command-Query Separation principle is one of the required conditions for these goals.
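The Command-Query Separation principle mentioned above can be stated in a few lines with an invented counter class: queries return information without changing state; commands change state and return no result; nothing does both.

```python
# Command-Query Separation: asking a question must not change the answer.

class Counter:
    def __init__(self):
        self._count = 0

    def value(self):          # query: no side effect, safe to call repeatedly
        return self._count

    def increment(self):      # command: changes state, returns no result
        self._count += 1
```

Contrast this with side-effect-full idioms such as C's `i++` used inside an expression, where evaluating the expression also mutates state.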
Even if instead of a general-purpose mechanism retrieved were a retrieval function specific to your application and declared with the intended type, you could still not trust its result blindly. Unlike an object that the software creates and then uses during the same session, guaranteeing type consistency thanks to the type rules, this one comes from the outside world. You may have chosen the wrong file name and retrieved an EMPLOYEE object rather than a BOOK object; or someone may have tampered with the file; or, if this is a network access, the transmission may have corrupted the data.
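The precaution suggested above translates directly to any persistence mechanism. The sketch below uses Python's pickle as a stand-in retrieval mechanism and an invented `Book` class: the retrieved object's type is verified before it is trusted, since the bytes come from the outside world.

```python
# Never trust a retrieved object's type blindly: verify before use.
import pickle

class Book:
    def __init__(self, title):
        self.title = title

def retrieve_book(data):
    obj = pickle.loads(data)
    # The data comes from outside the current session: check it.
    if not isinstance(obj, Book):
        raise TypeError(f"expected Book, got {type(obj).__name__}")
    return obj

stored = pickle.dumps(Book("Object-Oriented Software Construction"))
book = retrieve_book(stored)
```

A real system would go further, validating the object's invariant as well, since a type check alone cannot detect a file whose contents were tampered with or corrupted in transmission.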
The discovery, in 1985, of this property — that even in the presence of multiple inheritance it was possible to implement a dynamically-bound feature call in constant time — was the key impetus for the project that, among other things, yielded both the first and the present editions of this book: to build a modern software development environment, starting from the ideas brilliantly introduced by Simula 67 and extending them to multiple inheritance (prolonged experience with Simula having shown that the limitation to single inheritance was unacceptable, as explained in the next chapter), reconciling them with modern principles of software engineering, and combining them with the most directly useful results of formal approaches to software specification, construction and verification. The design of an efficient, constant-time dynamic binding mechanism, which may at first sight appear to be somewhat peripheral in this set of goals, was in reality an indispensable enabler.