Computers also interact with the outside world. For a start, they need some means of accepting problems and delivering solutions. Many computer systems monitor and control industrial processes. This role of computers is familiar now, but was not envisaged at first. Modelling it requires a notion of states that can be observed and changed. Then we can consider updating the state by assigning to variables or performing input/output, finally arriving at conventional programs (familiar to those of you who know C, for instance) that consist of commands.
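The progression from observable state to commands can be sketched as a small, hypothetical illustration (the text itself has C-like languages in mind; the names here are invented):

```python
# A minimal sketch (not from the text): program state as a mapping
# from variable names to values, with commands that update it.

state = {"x": 0, "y": 0}

def assign(var, value):
    """Assignment command: change one variable of the state."""
    state[var] = value

assign("x", 3)
assign("y", state["x"] + 4)   # later commands observe the updated state
print(state["y"])             # output command: deliver a result (prints 7)
```

Each command either changes the state or performs input/output; a program is just a sequence of such commands.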
Thus although there will be many instances in which our practices may dispose us to regard two programs or machines as representing the same algorithm, the question of whether such judgements can be made precise does not seem to be of any independent mathematical significance. And there thus seems to be no abiding reason why ontological questions of type (Q1) need to be answered before we can provide detailed answers to epistemological questions of type (Q2). At least to date, the story of algorithmic realism might thus be taken to represent a cautionary tale for the study of ontological commitment. For it is indeed difficult to deny that our discourse is highly suggestive of the fact that parts of theoretical computer science are engaged in the study of a class of procedural entities which are closely related to but yet somehow distinct from those studied in classical mathematics. But at the same time, a more detailed appraisal of the methodologies of the relevant subjects suggests that there may be no way to take this language at face value without contravening some of the assumptions about the nature of algorithms in which these fields themselves seem to be grounded.
If computers, which at that time had only the most rudimentary “senses” and no emotions, could perceive and understand in the way humans did, then the rules-based approach of the rationalist philosophers would be vindicated. But when Dreyfus had examined the AI efforts, he wrote a paper titled “Alchemy and Artificial Intelligence.” His comparison of AI to alchemy was provocative in that it suggested that, like the alchemists, the modern AI researchers had met with only limited success in manipulating their materials (such as by teaching computers to perform such intellectual tasks as playing checkers and even proving mathematical theorems). However, Dreyfus concluded that the kind of flexible, intuitive, and ultimately robust intelligence that characterizes the human mind couldn’t be matched by any programmed system. Each time AI researchers demonstrated the performance of some complex task, Dreyfus examined the performance and concluded that it lacked the essential characteristics of human intelligence. Dreyfus expanded his paper into the book What Computers Can’t Do. Meanwhile, critics complained that Dreyfus was moving the goal posts after each play, on the assumption that “if a computer did it, it must not be true intelligence.”
In these notes, student activities alternate with explanations and extensions of the point of the activities. The best way to use these notes is to try to master the student activity before beginning the explanation that follows. The activities are largely meant to be done in groups in class; thus for activities done out of class we recommend trying to form a group of students to work together. The reason that the class and these notes are designed in this way is to help students develop their own habits of mathematical thought. There is considerable evidence that students who are actively discovering what they are learning remember it far longer and are more likely to be able to use it out of the context in which it was learned. Students are much more likely to ask questions until they understand a subject when they are working in a small group with peers rather than in a larger class with an instructor. There is also evidence that explaining ideas to someone else helps us organize these ideas in our own minds. However, different people learn differently. Also, the amount of material in discrete mathematics that is desirable for computer science students to learn is much more than can be covered in an academic term if all learning is to be done through small group interaction. For these reasons about half of each section of these notes is devoted to student activities, and half to explanation and extension of the lessons of these activities.
To start off, an HTML document is nothing more than a bunch of characters that someone has entered into a text editor or a word processor and saved as a file on a computer employed as a web server. When you request a page from a web site, such as the one and only www.ibm.com, the corresponding web server goes to a default file named index.html, retrieves the HTML file, and sends it back to your browser for rendering on your computer’s display. It is someone’s job to put the right stuff into index.html, and that stuff should be written in HTML. Now the file named index.html might have links to other pages that are returned in a similar manner when you click on them. Those links are referred to as hot links, because we get some action when we click on them – as we just mentioned. You can even put programs into an HTML document. These programs are executed by your browser, resulting in some visual or audio activity on the receiving end. The active behavior can result in a wide variety of audio, video, and data-oriented interactive forms.
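To see that an HTML document really is just characters in a file, here is a hedged sketch (the file contents and link target are invented for illustration) that builds a minimal index.html, including one hot link, and saves it the way a web server would expect to find it:

```python
# A hypothetical minimal index.html, built as a plain string to
# emphasize that an HTML document is just characters in a file.
index_html = """<!DOCTYPE html>
<html>
  <head><title>Example</title></head>
  <body>
    <p>Welcome.</p>
    <!-- a "hot link": clicking it requests another page -->
    <a href="about.html">About us</a>
  </body>
</html>"""

# Saved as a file on the machine acting as the web server; the
# server sends these bytes back to the browser for rendering.
with open("index.html", "w") as f:
    f.write(index_html)
```

When a browser requests this page, the server simply reads the file and returns its characters; the browser does all the rendering.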
This paper continues the conspectus of Service Science for academicians and practitioners. It follows the previous paper, entitled Foundations of Service Science: Concepts and Facilities, with the express purpose of defining the scope of the discipline. A thriving, flexible service economy has emerged through globalization and digitization, and as a direct result, the modern enterprise has a dynamically changing boundary based on a portfolio of services obtained through make, buy, or rent decisions. Through the application of information and communications technology (ICT), many organizations have adjusted everyday operations, undergoing a transformational process to achieve revenue growth by responding more quickly to changing market conditions and by being more effective and efficient in the application of services. The viewpoint taken here is that service management and modern business usually employ a complex computer infrastructure, but their domain is by no means restricted to computer-based services.
Recognizing that more than 80% of the country’s GNP results from services and also that more than 80% of the workforce is employed in services, Sam Palmisano, CEO of IBM, initiated a corporate-wide program in Service Science that has transformed IBM and many other organizations. The cornerstone of the program is the fact that even though most of us are engaged in services, we really know very little about the subject. At the time, there was no academic subject called “service science,” no principles of service science, no theorems, and, most importantly, no set of best practices. He has changed all of that. An important aspect of the IBM initiative is that it has enabled academic participation in the development of the subject matter through the establishment of a field called Service Science, in a similar manner to the way IBM assisted in the development of academic programs in Computer Science three decades ago. Responding to this situation, the IBM Corporation initiated a project in the years 2004-2007 to develop a science of services. The project has resulted in a tidal wave of activity within the business and university communities to study the subject and develop academic programs. With Service Science, we are interested in the underlying principles that define the subject matter and demonstrate its relationship to other disciplines.
Let us consider the resources used in computation; most important are those which seem to limit computation. In particular, we will examine time and space constraints during computation. This is very much in line with computer science practice, since many problems are costly to us or placed beyond our reach due to lack of time or space - even on modern computing equipment. We shall return to Turing machines in order to examine computational difficulty. This may seem rather arbitrary and artificial, but the choice is reasonable since most natural models of computation are not too far apart in the amounts of time and space used to compute the same functions. (For example, consider the space used by Turing machines and by programs that compute the same functions or decide membership in the same sets. They are very similar indeed!) In addition, the simplicity of the Turing machine model makes our study far less cumbersome.
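The two resources can be made concrete by instrumenting a simulator. The following sketch (the machine and function names are invented, not from the text) runs a trivial one-tape Turing machine and counts time as the number of steps taken and space as the number of tape cells visited:

```python
# A small sketch (assumed machine, not from the text): simulate a
# one-tape Turing machine and count time (steps executed) and
# space (tape cells visited).  The machine scans right, writing a
# 0 on every cell, and halts at the first blank.

def run(tape):
    tape = dict(enumerate(tape))       # sparse tape; missing cell = blank
    head, steps, visited = 0, 0, set()
    while tape.get(head) is not None:  # halt on blank
        visited.add(head)
        tape[head] = "0"               # write
        head += 1                      # move right
        steps += 1
    return steps, len(visited)

time_used, space_used = run("1111")
print(time_used, space_used)           # 4 steps, 4 cells
```

On input of length n this machine uses n steps and n cells; comparing such counts across models is exactly the kind of measurement the text has in mind.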
will ever help me. Rice, and Stanford who contributed the choice graffiti and helped to debug our first drafts. Our contacts at Addison-Wesley were especially efficient and helpful; in particular, we wish to thank our publisher (Peter Gordon), production supervisor (Bette Aaronson), designer (Roy Brown), and copy editor (Lyn Dupre). The National Science Foundation and the Office of Naval Research have given invaluable support. Cheryl Graham was tremendously helpful as we prepared the index. And above all, we wish to thank our wives (Fan, Jill, and Amy) for their patience, support, encouragement, and ideas. I had a lot of trou-
■ NATHANIEL T. SCHUTTA is a senior software engineer in the Twin Cities area of Minnesota with extensive experience developing Java Enterprise Edition–based Web applications. He has a master of science degree in software engineering from the University of Minnesota and for the last several years has focused on user interface design. Nathaniel has contributed to corporate interface guidelines and consulted on a variety of Web-based applications. A long-time member of the Association for Computing Machinery’s Computer-Human Interaction Special Interest Group and a Sun-certified Web component developer, Nathaniel believes that if the user can’t figure out your application, then you’ve done something wrong. Along with his user interface work, Nathaniel is the cocreator of the open-source Taconite framework, has contributed to two corporate Java frameworks, has developed training material, and has led several study groups. During the brief moments of warm weather found in his home state of Minnesota, he spends as much time on the golf course as his wife will tolerate. He’s currently exploring Ruby, Rails, and (after recently making the switch) Mac OS X. For more of his random thoughts, check out his blog at www.ntschutta.com/jat/.
Domain Specific Word Extraction from Hierarchical Web Documents: A First Step Toward Building Lexicon Trees from Web Corpora. Jing Shin Chang, Department of Computer Science & Infor[.]
Question Classification using Multiple Classifiers. LI Xin, Computer Science & Engineering Dept., FUDAN Univ., Shanghai, lixin@fudan.edu.cn; HUANG Xuan Jing, Computer Science & Engineering De[.]
7. It is easy to imagine financial or navigational disasters that may occur as the result of arithmetic errors due to overflow and truncation problems. What consequences could result from errors in image storage systems due to loss of image details (perhaps in fields such as reconnaissance or medical diagnosis)? 8. ARM Holdings is a small company that designs the processors for a wide variety of consumer electronic devices. It does not manufacture any of the processors; instead the designs are licensed to semiconductor vendors (such as Qualcomm, Samsung, and Texas Instruments) who pay a royalty for each unit produced. This business model spreads the high cost of research and development of computer processors across the entire consumer electronic market. Today, over 95 percent of all cellular phones (not just smartphones), over 40 percent of all digital cameras, and 25 percent of digital TVs use an ARM processor. Furthermore, ARM processors are found in mini-notebooks, MP3 players, game controllers, electronic book readers, navigation systems, and the list goes on. Given this, do you consider this company to be a monopoly? Why or why not? As consumer devices play an ever-increasing role in today’s society, is the dependency on this little-known company good, or does it raise concerns?
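The overflow and truncation errors mentioned in exercise 7 are easy to demonstrate. The sketch below (an illustration, not part of the exercises; the 16-bit width is an assumption) shows a sum wrapping around in fixed-width arithmetic, and a decimal fraction that cannot be represented exactly in binary:

```python
# A hedged illustration: how fixed-width overflow and floating-point
# truncation corrupt arithmetic.

def to_int16(n):
    """Wrap n into a 16-bit two's-complement integer."""
    n &= 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

# Overflow: 30000 + 30000 does not fit in 16 signed bits.
print(to_int16(30000 + 30000))   # -5536, not 60000

# Truncation: 0.1 has no exact binary representation.
print(0.1 + 0.2 == 0.3)          # False
print(0.1 + 0.2)                 # 0.30000000000000004
```

A negative balance appearing out of a large positive sum, or a position error accumulating from many tiny truncations, is exactly the kind of failure exercise 7 asks about.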
Case study. The Pentagon is the United States’ military headquarters. Located near Washington, D.C., the Pentagon has many computers and extensive networking equipment. Back in the 1970s, someone forgot to turn off a 300-watt light bulb in a vault where computer tapes were stored. The small bulb generated heat that had nowhere to go and started heating up the room and smoldering the ceiling. When the door was finally opened, the fresh air rushing into the room turned the smoldering heat into open fire. The fire spread to several adjoining rooms and caused damage in the millions of dollars. Theft should especially be mentioned, because personal computers are getting smaller and lighter all the time and are therefore easy to steal. There is a school of thought in law enforcement that says that if you want to catch a thief, you should think like one. We hear about sophisticated hackers who write viruses and spyware, but an unsophisticated thief can cause much harm by stealing computers, because all the data in a computer disappears with it. Such data may be slow and expensive to replace and may also be private and sensitive. We should always keep in mind the simple, straightforward brute-force approach that computer thieves often adopt: simply sneak in, take what you find, and get away quickly.
Artificial intelligence was formally founded in 1956, and very impressive progress has been made in many areas over the years since. Its achievements and techniques are in the mainstream of computer science and at the core of many systems. For example, a computer has beaten the world chess champion, commercial systems exploit voice and speech capabilities, and robots are running around on the surface of Mars. In the well-known TV quiz show “Jeopardy!”, the IBM supercomputer system Watson beat the two best human champions, Ken Jennings and Brad Rutter. But all these achievements still fall short of human-level machine intelligence.
FOSSACS 2005 consisted of one invited and 30 contributed papers, selected out of 108 submissions, yielding an acceptance rate of less than 28%. The quality of the manuscripts was very high indeed, and the Programme Committee had to reject several deserving ones. Besides making for a strong 2005 programme, this is an indication that FOSSACS is becoming an established point of reference in the international landscape of Theoretical Computer Science. This is a trend that I believe will continue in its forthcoming editions.
Châtelet (Zinsser, ed., 2009), I was immediately struck by the apparent overlap between topics treated by Du Châtelet and by the early Kant. Kant’s first publication (1749) was his contribution to the so-called “vis viva controversy”, and this same topic occupies the final chapters of Du Châtelet’s Foundations as part of a public dispute. Kant is explicitly continuing this debate. Du Châtelet submitted her “Dissertation on the Nature and Propagation of Fire” in 1737 for the 1738 Royal Academy of Sciences prize competition, and Kant published in 1755 on fire. Reading on, the similarities seem much deeper and more important than this. Schönfeld’s (2000, Introduction) description of Kant’s precritical project could be a description of what Du Châtelet sets out to do in her Foundations (and related texts, such as her manuscript “On Liberty”). It seems there is much work to be done on the
Computers and information technology have become an essential part of life in the current era. Developments in the field of computer science (CS) have influenced many aspects of human life, including education. To make students skillful in this field, computer science is offered as a subject at the secondary school level. A major reason for introducing CS at the secondary level was the assumption that students of this age are more motivated to learn new technologies. A second reason was the high dropout rate: almost half of these students enter the job market, so it was necessary to develop a technology-literate workforce. Now that technology has flooded in and the new generation is exposed to it from birth, it is necessary to know perceptions of the computer science curriculum. The major purpose of this research was to explore perceptions of the secondary school computer science curriculum. The population of the study comprised all students enrolled in the subject of computer science at the secondary school level, together with their teachers and principals. The sample was selected through convenience sampling, and forty students (20 male, 20 female) took part in focus group discussions. Eight teachers and four principals were also part of the study. A focus group discussion guide was prepared for the discussions with students, while interview schedules were used to collect data from teachers and principals. Students’ scores in the subject of computer science were also collected and cross-checked with the school administration to examine the effectiveness of the course. The results showed that students and teachers are not quite satisfied with the current curriculum. Moreover, scores in the subject were not very high, indicating a shortcoming in the effectiveness of the computer science curriculum.
On the basis of these results, it is suggested that measures be taken not only to improve the curriculum but also to provide the physical facilities required to teach computer science.
Context-free grammars are used to describe some aspects of the syntax of programming languages. However, the notation that is used for grammars in the context of programming languages is somewhat different from the notation introduced in the preceding section. The notation that is used is called Backus-Naur Form or BNF. It is named after computer scientists John Backus and Peter Naur, who developed the notation. Actually, several variations of BNF exist. I will discuss one of them here. BNF can be used to describe the syntax of natural languages, as well as programming languages, and some of the examples in this section will deal with the syntax of English. Like context-free grammars, BNF grammars make use of production rules, non-terminals, and terminals. The non-terminals are usually given meaningful, multi-character names. Here, I will follow a common practice of enclosing non-terminals in angle brackets, so that they can be easily distinguished. For example, ⟨noun⟩ and ⟨sentence⟩ could be non-terminals in a BNF grammar for English, while ⟨program⟩ and ⟨if-statement⟩ might be used in a BNF grammar for a programming language. Note that a BNF non-terminal usually represents a meaningful syntactic category, that is, a certain type of building block in the syntax of the language that is being described, such as an adverb, a prepositional phrase, or a variable declaration statement. The terminals of a BNF grammar are the things that actually appear in the language that is being described. In the case of natural language, the terminals are individual words.
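As a concrete illustration, here is a tiny, hypothetical BNF fragment for English (the particular rules and words are invented, not from the text), encoded as a Python dictionary together with a generator that expands non-terminals into terminal words:

```python
import random

# A hypothetical BNF fragment for English.  Non-terminals are
# written in angle brackets; each rule maps a non-terminal to its
# alternatives, and each alternative is a list of symbols.
grammar = {
    "<sentence>": [["<noun>", "<verb>"]],
    "<noun>": [["birds"], ["fish"]],
    "<verb>": [["fly"], ["swim"]],
}

def generate(symbol):
    """Expand a symbol into a list of terminal words.

    Symbols with no rule are terminals and are returned as-is.
    """
    if symbol not in grammar:
        return [symbol]
    production = random.choice(grammar[symbol])
    words = []
    for s in production:
        words.extend(generate(s))
    return words

print(" ".join(generate("<sentence>")))   # e.g. "birds swim"
```

The terminals ("birds", "fly", ...) are the words that actually appear in sentences of the language, while ⟨sentence⟩, ⟨noun⟩, and ⟨verb⟩ are syntactic categories.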
Then, why are computer graphics subjects compulsory? The problems solved in computer graphics enable us to practise and deepen the knowledge and skills that students obtained in the area of programming (Kozel, 2009) and algorithm development (Milkova et al., 2007) and simultaneously teach students to apply the mathematical tools gained in theoretical subjects. The tasks solved utilize mathematical knowledge and force students to design and correctly implement a software solution. The programs created have mainly graphical output, which gives both the students and their lecturers immediate answers to questions such as whether the algorithm is correctly implemented, or in which part of the algorithm and its implementation a mistake occurred. An additional advantage usually lies in an attractive formulation of the projects to be solved by the students and the very impressive results that students obtain by dealing with them. All of this attracts the students and stimulates them to further work.