The course title "Concrete Mathematics" was originally intended as an antidote to "Abstract Mathematics," since concrete classical results were rapidly being swept out of the modern mathematical curriculum by a new wave of abstract ideas popularly called the "New Math." Abstract mathematics is a wonderful subject, and there's nothing wrong with it: It's beautiful, general, and useful. But its adherents had become deluded that the rest of mathematics was inferior and no longer worthy of attention. The goal of generalization had become so fashionable that a generation of mathematicians had become unable to relish beauty in the particular, to enjoy the challenge of solving quantitative problems, or to appreciate the value of technique. Abstract mathematics was becoming inbred and losing touch with reality; mathematical education needed a concrete counterweight in order to restore a healthy balance. When DEK taught Concrete Mathematics at Stanford for the first time, he explained the somewhat strange title by saying that it was his attempt
analogous to formulas and inquire about their interpretation in various model structures. Here we begin with a word of caution: clearly, people are often capable of assigning meaning to grammatically ill-formed utterances, and conversely, they can have trouble interpreting perfectly well-formed ones. But even with this caveat, there is a great deal of correlation between the native speaker's judgments of grammaticalness and their ability to interpret what is being said, so a program of research that focuses on meaning and treats grammaticality as a by-product may still make sense. In fact, much of modern syntax is an attempt, one way or the other, to do away with separate mechanisms in accounting for these three ranges of facts. In Section 5.1, we will discuss combinatorial approaches that put the emphasis on the way words combine. Although in formal theories of grammar the focus of research activity historically fell in this category, the presentation here can be kept brief because most of this material is now a standard part of the computer science curriculum, and the reader will find both classical introductions such as Salomaa (1973) and modern monographic treatments such as Kracht (2003). In Section 5.2, we turn to grammatical approaches that put the emphasis on the grammatical primitives; prominent examples are dependency grammar (Tesnière 1959), tagmemics (Brend and Pike 1976), case grammar (Fillmore 1968), and relational grammar (Perlmutter 1983), as well as classical Pāṇinian morphosyntax and its modern variants. These theories, it is fair to say, have received much less attention in the mathematical literature than their actual importance in the development of linguistic thought would warrant. In Section 5.3, we discuss semantics-driven theories of syntax, in particular the issues of frame semantics and knowledge representation.
In Section 5.4, we take up weighted models of syntax, which extend the reach of the theory to another range of facts, the weight a given string of words has. We will present this theory in its full generality, permitting as special cases standard formal language theory, where the weights 1 and 0 are used to distinguish grammatical from ungrammatical; the extension (Chomsky 1967) where weights between 1 and 0 are used to represent intermediate degrees of grammaticality; and the probabilistic theory, which plays a central role in the applications. In Section 5.5, we discuss weighted regular languages, giving an asymptotic characterization of these over a one-letter alphabet. Evidence of syntactic complexity that comes from external sources such as difficulty of parsing or acquisition is discussed in Section 5.6.
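To make the weighted setting concrete, here is a minimal sketch (an illustration of mine, not the book's formalism) of a deterministic weighted automaton over a one-letter alphabet {a}: the weight of a string is the product of transition weights along its unique run. Restricting weights to {0, 1} recovers the classical grammatical/ungrammatical split, while fractional weights yield degrees of grammaticality or probabilities.

```java
// Sketch of a deterministic weighted automaton over the alphabet {a}.
// All names and the construction are illustrative, not from the text.
public class WeightedAutomaton {
    private final double[] weight;      // weight[q]: weight of reading 'a' in state q
    private final int[] next;           // next[q]: successor state after reading 'a'
    private final double[] finalWeight; // finalWeight[q]: weight of stopping in state q

    public WeightedAutomaton(double[] weight, int[] next, double[] finalWeight) {
        this.weight = weight; this.next = next; this.finalWeight = finalWeight;
    }

    // Weight of the string a^n: product of weights along the unique run from state 0.
    public double weightOf(int n) {
        int q = 0;
        double w = 1.0;
        for (int i = 0; i < n; i++) {
            w *= weight[q];
            q = next[q];
        }
        return w * finalWeight[q];
    }

    public static void main(String[] args) {
        // Classical 0/1 case: strings of even length get weight 1, odd length 0.
        WeightedAutomaton parity = new WeightedAutomaton(
            new double[]{1, 1}, new int[]{1, 0}, new double[]{1, 0});
        System.out.println(parity.weightOf(4)); // 1.0
        System.out.println(parity.weightOf(3)); // 0.0
        // Probabilistic flavour: weight(a^n) = 0.5^n.
        WeightedAutomaton geo = new WeightedAutomaton(
            new double[]{0.5}, new int[]{0}, new double[]{1});
        System.out.println(geo.weightOf(3)); // 0.125
    }
}
```

The same product-along-a-run definition generalizes to larger alphabets and nondeterministic machines by summing over runs, which is the full generality the section refers to.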
Many built-in scalar variables represent fairly arcane systems-programming concepts, which at this introductory level we can afford to ignore. The most frequently-used built-in scalar variable of all, $_, will be passed over briefly here for a different reason. We encountered $_ once, in chapter 12, in connexion with the map() function (where it is indispensable). But the commonest use of $_ is to provide idiomatically brief alternatives to Perl constructions that take slightly longer to spell out explicitly. For seasoned programmers to whom brevity is important, this may be handy, but beginners are better advised to make their code fully explicit, and hence they should probably avoid using $_. (Actually, even professional programmers – not to speak of those who have to maintain their code after they have moved on – are probably better off in the long run making everything explicit at the cost of a few extra keystrokes. There is a geeky side to Perl which delights in terse obscurity for its own sake, and the symbol $_ is arguably a symptom of that.)
The true spirit of delight, the exaltation, the sense of being more than man, which is the touchstone of the highest excellence, is to be found in Mathematics as surely as in poetry. ... Real life is, to most men, a long second best, a perpetual compromise between the ideal and the possible; but the world of pure reason knows no compromise, no practical limitations, no barriers to the creative activity embodying in splendid edifices the passionate aspiration after the perfect, from which all great work springs.
If you want to implement this method in Java, it would probably be best to use a longer name than the single letter C used in mathematics. The one-character name might well cause confusion, if for no other reason than that its uppercase name suggests a constant (possibly the speed of light). As a general rule, method names used in this text tend to be longer and more expressive than variable names. Method calls often appear in parts of the program that are far removed from the point at which those methods are defined. Since the definition may be hard to locate in a large program, it is best to choose a method name that conveys enough information about the method so that the reader does not need to look up the definition. Local variables, on the other hand, are used only within the body of a single method, and it is therefore easier to keep track of what they mean. In the interest of having the name of the combinations method make sense immediately when anyone looks at it, we will use the name combinations as the
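For concreteness, here is one plausible Java rendering of such a method; the name combinations comes from the text, while the body below is a standard implementation of my own, not the book's.

```java
// Illustrative implementation: C(n, k), the number of ways to
// choose k items from a set of n. The method name follows the text;
// the algorithm is a standard one, shown here only as an example.
public class Combinatorics {
    public static long combinations(int n, int k) {
        if (k < 0 || k > n) return 0;
        long result = 1;
        for (int i = 1; i <= k; i++) {
            // Multiply before dividing: at each step result equals
            // C(n - k + i, i), so the division is always exact.
            result = result * (n - k + i) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(combinations(5, 2)); // 10
    }
}
```

A call like combinations(5, 2) reads clearly at the call site, which is exactly the point the passage makes about expressive method names.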
The lookup method is abstract in class Subst. There are two concrete forms of substitutions, which differ in how they implement this method. One form is defined by the emptySubst value, the other by the extend method in class Subst. The next data type describes type schemes, which consist of a type and a list of names of type variables that appear universally quantified in the type scheme. For instance, the type scheme ∀a ∀b. a → b would be represented in the type checker as: TypeScheme(List(Tyvar("a"), Tyvar("b")), Arrow(Tyvar("a"), Tyvar("b"))). The class definition of type schemes does not carry an extends clause; this means that type schemes directly extend class AnyRef. Even though there is only one possible way to construct a type scheme, a case class representation was chosen since it offers convenient ways to decompose an instance of this type into its parts.
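For readers more at home in Java, the same shape can be approximated with records; this is a rough analogue of my own (the names follow the text), with records standing in for case classes, since records likewise come with component accessors and structural decomposition for free.

```java
// Rough Java analogue of the Scala case-class representation above.
// Type names follow the text; the encoding itself is illustrative.
import java.util.List;

public class Types {
    sealed interface Type permits Tyvar, Arrow {}
    record Tyvar(String name) implements Type {}           // a type variable
    record Arrow(Type from, Type to) implements Type {}    // a function type
    // A type scheme: quantified variables plus the underlying type.
    record TypeScheme(List<Tyvar> quantified, Type tpe) {}

    public static void main(String[] args) {
        // The scheme ∀a ∀b. a → b from the text:
        TypeScheme s = new TypeScheme(
            List.of(new Tyvar("a"), new Tyvar("b")),
            new Arrow(new Tyvar("a"), new Tyvar("b")));
        System.out.println(s);
    }
}
```

As with the case class, there is only one way to build a TypeScheme, but the record's accessors (s.quantified(), s.tpe()) give the convenient decomposition the passage mentions.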
In particular, our decomposition has nothing to do with the possible division of a compiler into passes. (We consider a pass to be a single, sequential scan of the entire text in either direction. A pass either transforms the program from one internal representation to another or performs specified changes while holding the representation constant.) The pass structure commonly arises from storage constraints in main memory and from input/output considerations, rather than from any logical necessity to divide the compiler into several sequential steps. One module is often split across several passes, and/or tasks belonging to several modules are carried out in the same pass. Possible criteria will be illustrated by concrete examples in Chapter 14. Proven programming methodologies indicate that it is best to regard pass structure as an implementation question. This permits development of program families with the same modular decomposition but different pass organization. The above consideration of coroutines and other implementation models illustrates such a family.
This series of posts focuses on providing both imperative and functional algorithms and data structures. Many of the functional data structures can be found in Okasaki's book, while the imperative ones can be found in classic textbooks or even in Wikipedia. Multiple programming languages, including C, C++, Python, Haskell, and Scheme/Lisp, will be used. To make the posts easy to read for programmers with different backgrounds, pseudocode and mathematical functions are used as the regular description in each post.
System modelling is concerned with how systems are realised using technology. System modelling is largely a technological activity that attempts to translate the application model into a concrete, executable system. System modelling has to deal with artificial details that are not an inherent part of the application model, but a by-product of using specific technologies. For example, it has to deal with specific programming constructs, middleware services, data models, and so on. In other words, it produces an internal view of the solution, showing how its different parts interact in order to support the external, application view. System modelling is where the non-functional requirements (e.g., platform, performance, throughput, scalability, maintainability) are addressed. The system model is expressed in technical terms and is for the internal use of the technologists who work on it. It is inappropriate reading material for business users.
We arrive at the third Circle, filled with cold, unending rain. Here stands Cerberus barking out of his three throats. Within the Circle were the blasphemous wearing golden, dazzling cloaks that inside were all of lead—weighing them down for all of eternity. This is where Virgil said to me, “Remember your science—the more perfect a thing, the more its pain or pleasure.”
The first chapter solves an intriguing AI puzzle which was first published in the New Scientist magazine in 2003. The Prolog solution presented here combines problem-specific knowledge using Finite Mathematics with the well-known AI technique ‘generate-and-test’. Even though this chapter did not emanate from my teaching activities, the presentation follows a well-tested pattern: the problem is broken down into manageable and identifiable subproblems which then are more or less readily implemented in Prolog. Many interesting hurdles are identified and solved thereby. The availability of unification as a pattern matching tool makes Prolog uniquely suitable for solving such problems. This first chapter is an adaptation of work reported in . Further recent developments on solving this problem can be found in .
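Generate-and-test itself is easy to illustrate outside Prolog. The following sketch (a toy problem of my own, not the New Scientist puzzle) enumerates candidate digit pairs and keeps those that pass the constraints, which is the whole of the technique: generation of candidates is kept separate from testing them.

```java
// Illustrative generate-and-test: find digit pairs (x, y), x <= y,
// with x + y == 10 and x * y == 21. Toy problem, not from the book.
import java.util.ArrayList;
import java.util.List;

public class GenerateAndTest {
    static List<int[]> solve() {
        List<int[]> solutions = new ArrayList<>();
        for (int x = 0; x <= 9; x++) {            // generate candidates...
            for (int y = x; y <= 9; y++) {
                if (x + y == 10 && x * y == 21) { // ...and test the constraints
                    solutions.add(new int[]{x, y});
                }
            }
        }
        return solutions;
    }

    public static void main(String[] args) {
        for (int[] s : solve()) System.out.println(s[0] + " " + s[1]); // 3 7
    }
}
```

In Prolog the generation step is typically a nondeterministic predicate and the test a constraint that fails on bad candidates, with backtracking doing the enumeration that the nested loops do here.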
The order in which the books may be studied is fairly free even though an example introduced somewhere may serve in a later chapter to illustrate the generalization or improvement afforded by the material just covered. The SWI-Prolog compiler is used throughout: it has been around for quite some time; it is well documented; it is free; and it is being maintained with new, improved versions becoming available all the time. Furthermore, there is an object-oriented extension to SWI-Prolog (XPCE) for building graphical applications, useful if one wants to pursue this line further.
Likewise in the software world, there are objects that a user or programmer can make effective use of without having to know how the object has been implemented. On a very simple level, an Ada program may declare objects to hold floating-point numbers, which can then be summed, multiplied, and so on, using the arithmetic operations. Most programmers, however, do not know the exact details of how these operations are performed; they accept the interface provided by the programming language.
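The same point can be made with a small sketch (illustrative names of my own, in Java rather than Ada): the client code below works entirely through an interface and never sees, or needs to see, the implementation behind it.

```java
// Illustrative example of using an object purely through its interface.
import java.util.List;

public class Hiding {
    interface Accumulator {
        void add(double x);
        double total();
    }

    // One possible implementation; clients never depend on its internals.
    static class SimpleAccumulator implements Accumulator {
        private double sum = 0.0;
        public void add(double x) { sum += x; }
        public double total() { return sum; }
    }

    public static void main(String[] args) {
        Accumulator acc = new SimpleAccumulator();   // client sees only Accumulator
        for (double v : List.of(1.5, 2.5, 4.0)) acc.add(v);
        System.out.println(acc.total()); // 8.0
    }
}
```

Swapping SimpleAccumulator for, say, a compensated-summation version would leave the client loop untouched, which is exactly the benefit the passage describes.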
But word-processors, presentation graphics and interactive games are not the only type of software being developed. Computers are now controlling the most complex systems in the world: airplanes and spacecraft, power plants and steel mills, communications networks, international banks and stock markets, military systems and medical equipment. The social and economic environment in which these systems are developed is totally different from that of packaged software. Each project pushes back the limits of engineering experience, so delays and cost overruns are usually inevitable. A company’s reputation for engineering expertise and sound management is more important in winning a contract than a list of features. Consistent, up-to-date, technical competence is expected, not the one-time genius of a startup.