UNIT – I

THE STATE OF COMPUTING

The State of Computing:

Today's computers are equipped with powerful hardware facilities driven by widespread software packages. To assess the state of the art in computing, we first review the historical milestones in the development of computers.

Before 1945, computers were built with mechanical or electromechanical parts. The first mechanical adder/subtractor was built in 1642; a difference engine for polynomial evaluation followed in 1827. The first binary mechanical computer was developed in 1941, and an electromechanical decimal computer was built in 1944.

In the early stages, computing and communication were carried out with moving mechanical parts, which severely limited the speed and reliability of these machines. Today's computers are electronic: the moving mechanical parts have been replaced by the high-speed motion of electrons, and information transmission by mechanical gears and levers has been replaced by electric signals that travel at nearly the speed of light.

Computer Generations: Over the past several decades, electronic computers have gone through five generations of development. Each of the first three generations spanned about 10 years, the fourth generation covered about 15 years, and today we use fifth-generation systems, which put more than a billion transistors on a single chip. The generations are marked by major changes in hardware and software.

The First Generation: In terms of hardware technology, first-generation computers used vacuum tubes and relay memories interconnected by insulated wires. These computers were built with a single central processing unit (CPU) which performed serial fixed-point arithmetic using a program counter, branch instructions, and an accumulator. The CPU was involved in all memory and input/output operations. Machine or assembly languages were used to program these computers, and subroutine linkage was provided in this generation.

Representative systems include the Electronic Numerical Integrator and Calculator (ENIAC), the Institute for Advanced Studies (IAS) computer, and the IBM 701, the first electronic stored-program commercial computer.

The Second Generation: Second-generation computers used discrete transistors, diodes, and magnetic ferrite cores interconnected by printed circuits. Machines of this generation could also have index registers, floating-point arithmetic, multiplexed memory, and I/O processors. High-level languages (HLLs) such as FORTRAN, ALGOL, and COBOL were used to program these computers, and compilers were introduced in this generation to translate high-level programs into machine code. For the systematic organization of these computers, subroutine libraries, batch processing monitors, and register transfer languages were developed.

Representative systems include the IBM 7030, which had lookahead and error-correcting memories, the Univac LARC (Livermore Atomic Research Computer), and the CDC 1604.

The Third Generation: Third-generation computers used Integrated Circuits (ICs) for both logic and memory, in small-scale or medium-scale integration, together with multilayered printed circuits. Microprogrammed control was introduced in this generation. Pipelining and cache memories were used to mitigate the speed gap between the CPU and main memory.

The multiprogramming idea was implemented to interleave CPU and I/O activities; this concept led to the development of time-sharing operating systems using virtual memory, with better sharing or multiplexing of resources.

Representative systems are the IBM 360/370 series, the CDC 6600/7600 series, the Texas Instruments ASC (Advanced Scientific Computer), and Digital Equipment's PDP-8 series.

The Fourth Generation: In this generation, parallel computers with various architectures were introduced. These systems used shared or distributed memory and optional vector hardware. Multiprogramming OS,

Representative systems are the VAX 9000, Cray X-MP, IBM 3090/VF, BBN TC-2000, etc.

The Fifth Generation: Computers of this generation consist of superscalar processors, cluster computers, and Massively Parallel Processing (MPP) systems, leading to scalable and latency-tolerant architectures. These systems use advanced VLSI (Very Large Scale Integration) technology and optical and high-density packaging. Fifth-generation machines achieved teraflops (10^12 floating-point operations per second) performance in the 1990s; today they may exceed petaflops (10^15 floating-point operations per second).

Heterogeneous processing, in which large-scale problems are solved by interconnecting heterogeneous systems over networks, is an emerging trend in this generation.

Elements Of Modern Computers

Modern computers consist of hardware, software (both system software and application programs), a sophisticated operating system, a rich instruction set, and user interfaces. All of these elements cooperate to solve various kinds of computing problems.

Computing Problems: Users of modern computers should understand that computer architecture means more than the structure of the bare machine or hardware. Numerical problems in science and technology demand complex mathematical formulations and intensive integer or floating-point computations. Problems in business and government demand efficient transaction processing, large database management, and quick information retrieval. Artificial intelligence problems demand logic inference and symbolic manipulation. Computing problems are therefore categorized into numerical computing problems, transaction processing problems, and logical reasoning problems. To solve all of these kinds of problems, the modern computer comprises a variety of computing elements for computing and communication. The following diagram shows the elements of a modern computer.

Algorithms and Data Structures: Most numerical problems are deterministic and use regularly structured data. Symbolic processing may use heuristics or nondeterministic operations over the data. Special algorithms and data structures are therefore needed to specify the computations and communications involved in solving these problems on a computer.

Hardware Resources: Today's computer exhibits its power through the coordinated efforts of hardware resources, an operating system, and application software. Generally, the processor, memory, and peripheral devices form the core of the hardware in a computer system. Special hardware interfaces are often built into I/O devices such as display terminals, workstations, optical page scanners, magnetic-ink character recognizers, modems, network adapters, voice data entry devices, printers, and plotters. These peripherals can be connected to mainframe computers directly or through local networks.

For the proper functioning of the hardware resources, software interface programs are needed. These software interfaces include file transfer systems, editors, word processors, device drivers, interrupt handlers, and network communication programs. These programs make it easier to execute user programs on different machine architectures.

Operating System: We know that an effective operating system manages the

The mapping of algorithmic structure onto hardware architecture is bidirectional, and an effective mapping leads to better source code. Mapping algorithms and data structures onto hardware architectures involves processor scheduling, memory maps, interprocessor communication, and so on; these activities are machine-architecture dependent. Optimal mappings are required for different computer architectures, and their implementation depends on compiler and operating-system support.

System Software Support: System software support is required for the development of efficient programs in high-level languages.

Source code written in an HLL is translated into object code by a compiler. The compiler assigns variables to registers or to memory and generates machine operations corresponding to the high-level language operations. In addition to the compiler, a loader is required to initiate program execution through the OS kernel. During program execution certain system resources are needed; resource binding demands the use of the compiler, assembler, loader, and OS to allocate physical resources to the executing program. The effectiveness of this resource-allocation process determines the efficiency of hardware utilization and the programmability of the computer.

Compiler Support: The compiler is one of the most important components of system software; three different approaches to compiler support can be distinguished.

a) Preprocessor: A preprocessor approach uses a sequential compiler together with low-level library functions of the target computer to implement high-level programs.

b) Precompiler: The precompiler approach is able to detect parallelism in high-level programs. It requires program flow analysis, dependence checking, and limited optimizations toward parallelism detection.

c) Parallelizing Compiler: This approach automatically detects parallelism in high-level programs and transforms sequential constructs into parallel constructs. To

Evolution of Computer Architecture:

Over the past few decades, computer architecture has undergone evolutionary changes. The study of computer architecture involves not only hardware organization but also software and programming requirements. From the programmer's point of view, computer architecture is the study of the instruction set, opcodes, addressing modes, register handling, virtual memory, and so on. From the hardware implementation point of view, it is the study of CPU organization, dealing with caches, buses, pipelines, and physical memories. The study of computer architecture therefore covers both instruction-set architecture and machine implementation organization.

The following diagram shows the evolution of architecture from the von Neumann machine to massively parallel processors.

The architecture evolved from the von Neumann machine, a sequential machine executing scalar data. The sequential machine was later improved to perform word-parallel operations and to move from fixed-point to floating-point arithmetic. The von Neumann architecture is slow because of the sequential execution of instructions.

Lookahead, Parallelism, and Pipelining: Lookahead techniques were introduced to prefetch instructions, overlapping the instruction fetch and execution phases and enabling functional parallelism. Functional parallelism is generally achieved in two ways: one is the use of multiple functional units, and the other is the use of pipelining at different processing levels.

Flynn's Classification: In 1972, Michael Flynn classified computer architectures based on the notions of instruction streams and data streams, giving four categories:

1. Single Instruction Stream, Single Data Stream (SISD).

2. Single Instruction Stream, Multiple Data Stream (SIMD).

3. Multiple Instruction Stream, Single Data Stream (MISD).

4. Multiple Instruction Stream, Multiple Data Stream (MIMD).

Of these four machine architectures, most parallel computers are built in the MIMD style for general-purpose computation, while the SIMD and MISD models are more suitable for the special-purpose computations found in commercial machines.

Single Instruction Stream, Single Data Stream (SISD): SISD refers to a computer architecture in which a single processor executes a single instruction stream to operate on data stored in a single memory. According to Michael J. Flynn, SISD machines can still have concurrent processing characteristics: overlapped instruction fetching and pipelined execution of instructions are common in most modern SISD computers.

Single Instruction Stream, Multiple Data Stream (SIMD): SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously; such machines exploit data-level parallelism. SIMD is particularly applicable to common tasks like adjusting the contrast of a digital image or the volume of digital audio, and most modern CPU designs include SIMD instructions to improve the performance of multimedia workloads. A sketch of this style of data-level parallelism is given below.
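The following sketch illustrates only the idea of data-level parallelism, not any particular machine's SIMD instruction set: a NumPy expression applies the same contrast adjustment to every pixel of a small image at once instead of looping over pixels one by one. The image values and contrast gain are made up for the example.

```python
# Data-level parallelism sketch: one operation applied to many data points at once.
import numpy as np

# A hypothetical 4x4 grayscale image with pixel values in the range 0..255.
image = np.array([[ 10,  60, 120, 200],
                  [ 30,  90, 150, 210],
                  [ 20,  70, 130, 190],
                  [ 40, 100, 160, 220]], dtype=np.float32)

contrast = 1.5  # assumed contrast gain

# The same multiply/clip operation is applied to all 16 pixels together;
# NumPy dispatches it to vectorized (often SIMD-accelerated) machine code.
adjusted = np.clip((image - 128) * contrast + 128, 0, 255)

print(adjusted.astype(np.uint8))
```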

Multiple Instruction Stream, Single Data Stream (MISD): In the MISD architecture, many functional units perform different operations on the same data; pipeline architectures belong to this type. One example of an MISD-style computer is a systolic array, a network of small computing elements connected in a regular grid and controlled by a global clock. On each cycle, an element reads a piece of data from one of its neighbors, performs a simple operation (e.g., adds the incoming element to a stored value), and prepares a value to be written to a neighbor on the next step.

Multiple Instruction Stream, Multiple Data Stream: In computing, MIMD (multiple instruction, multiple data) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. MIMD machines can be of either shared memory or distributed memory categories. These classifications are based on how MIMD processors access memory.

Parallel / Vector Computers: The basic parallel computers are those that function in MIMD mode. There are two classes of parallel computers: shared-memory multiprocessors and message-passing multicomputers. The major difference between multiprocessors and multicomputers lies in memory sharing and in the mechanisms used for interprocessor communication. The processors in a multiprocessor system communicate with each other through shared variables in a common memory. In a multicomputer system, each node has a local memory that is not shared with other nodes, and interprocessor communication is done through message passing between the nodes; a small sketch contrasting the two styles follows.
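As a minimal sketch of the two communication styles, the example below uses Python's multiprocessing module as a stand-in: worker processes either increment a shared variable (the shared-memory multiprocessor style) or send their results over a queue (the message-passing multicomputer style). The function and variable names are illustrative only.

```python
# Sketch: shared-variable communication vs. message passing, using Python processes.
from multiprocessing import Process, Queue, Value

def shared_worker(counter):
    # Multiprocessor style: communicate through a shared variable in common memory.
    with counter.get_lock():
        counter.value += 1

def message_worker(rank, queue):
    # Multicomputer style: each node keeps its data private and sends a message.
    queue.put((rank, rank * rank))

if __name__ == "__main__":
    counter = Value("i", 0)  # shared memory location
    workers = [Process(target=shared_worker, args=(counter,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("shared counter =", counter.value)  # expect 4

    queue = Queue()  # message-passing channel
    workers = [Process(target=message_worker, args=(r, queue)) for r in range(4)]
    for w in workers:
        w.start()
    results = sorted(queue.get() for _ in range(4))
    for w in workers:
        w.join()
    print("messages =", results)
```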

In vector computers, vector instructions are executed on vector processors. A vector processor contains multiple vector pipelines that can be used concurrently under hardware or firmware control. There are two types of pipelined vector processors:

Memory-to-Memory vector processors: This architecture supports the pipelined flow of vector operands directly from memory to the pipelines and then back to memory.

Register-to-Register processors: This architecture uses vector registers to interface between the memory and functional pipelines.

Another important architecture for achieving parallelism is the SIMD computer. An SIMD computer exhibits spatial parallelism rather than temporal parallelism. SIMD computers are designed using an array of processing elements synchronized by the same controller.

Development Layers: The following diagram shows a layered development model of parallel computers.

At the hardware level, the configuration of one machine can be entirely different from that of another, even when both belong to the same model; for example, the address space of a processor depends on the memory organization, which is machine dependent.

However, we would like the application programs and the programming environment to be machine independent; machine-independent development of user programs leads to program portability and lower conversion costs. High-level language support and communication models do depend on the system architecture, but from the programmer's point of view they should be architecture transparent.

System Attributes to Performance:

Ideal system performance is achieved through a perfect match between machine capability and program behavior. Machine capability can be improved with better hardware, innovative architectural features, and efficient resource management, but program behavior is difficult to predict because it depends on the type of application and on run-time conditions. Many other factors also affect program behavior, including algorithm design, data structures, language efficiency, programmer skill, and compiler technology. A perfect match between hardware and software is therefore not attainable in practice, but by improving some of these factors it is possible to approach the optimal performance of the system.

The simplest measure of program performance is the turnaround time, which includes disk and memory accesses, input and output activities,

In multiprogrammed computers, the I/O and system overheads of one program may overlap with the requirements of other programs, so to evaluate system performance we should consider only the CPU time required for program execution. The following are some factors used in evaluating the performance of a computer.

Clock Rate and CPI: A digital CPU is driven by a clock with a constant cycle time τ. The inverse of the cycle time is the clock rate (f = 1/τ), and the size of a program is measured by its instruction count (Ic), the number of machine instructions to be executed in the program. Different machine instructions may require different numbers of clock cycles to execute, so cycles per instruction (CPI) is an important parameter for measuring the time needed to execute each instruction. For a given instruction set, we can measure the average CPI over all instruction types for a particular CPU (architecture).

Let Ic be the number of instructions in a given program and T the time in seconds needed to execute the program. Then:

T = Ic × CPI × τ --- (1)

The execution of an instruction involves instruction fetch, decode, operand fetch, execution, and storing of the results; of these operations, only decode and execution are carried out within the CPU.

The remaining operations require access to memory, so we define a memory cycle as the time required to complete one memory reference.

Generally, a memory cycle is k times longer than a processor cycle, where the value of k depends on the speed of the cache and main memory technology and on the CPU–memory interconnection.

The CPI can therefore be divided into two components: the processor cycles and the memory cycles needed to complete the execution of an instruction. Equation (1) can then be rewritten as:

T = Ic × (p + m × k) × τ --- (2)

where p is the number of processor cycles needed for instruction decode and execution, m is the number of memory references needed, and k is the ratio of the memory cycle time to the processor cycle time.
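As a rough illustration of equation (2), the short sketch below computes the CPU time T for a hypothetical program; the instruction count, cycle counts, and clock rate are made-up values chosen only to show how the terms combine.

```python
# Sketch of equation (2): T = Ic * (p + m*k) * tau, with hypothetical numbers.

Ic  = 200_000_000   # instruction count of the program (assumed)
p   = 1.5           # average processor cycles per instruction (assumed)
m   = 0.4           # average memory references per instruction (assumed)
k   = 4             # memory-cycle / processor-cycle ratio (assumed)
f   = 500_000_000   # clock rate in Hz (assumed 500 MHz)
tau = 1 / f         # processor cycle time in seconds

effective_cpi = p + m * k        # cycles per instruction including memory references
T = Ic * effective_cpi * tau     # total CPU time, equation (2)

print(f"Effective CPI = {effective_cpi:.2f}")   # 3.10
print(f"CPU time T    = {T:.3f} seconds")       # 1.240
```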

These performance factors are influenced by four system attributes:

1. Instruction-set architecture.

2. Compiler technology.

3. CPU implementation and control.

4. Cache and memory hierarchy.

The instruction-set architecture affects the program length (Ic) and the processor cycles per instruction (p). The compiler technology affects not only Ic and p but also the memory reference count (m). The CPU implementation and control determine the total processor time (p·τ).

MIPS Rate: Let C be the total number of clock cycles needed to execute a given program. Then the CPU time in equation (2) can be rewritten as:

T = C × τ = C / f

Furthermore, CPI = C / Ic, so T = Ic × CPI × τ = Ic × CPI / f.

Processor speed is often measured in millions of instructions per second (MIPS), called the MIPS rate of the processor. The MIPS rate varies with a number of factors, such as the clock rate, the instruction count, and the CPI of a given machine. The MIPS rate can be defined as:

MIPS rate = Ic / (T × 10^6) = f / (CPI × 10^6) = (f × Ic) / (C × 10^6)

We conclude that the MIPS rate of a given computer is directly proportional to the clock rate and inversely proportional to the CPI. All system attributes, including instruction-set architecture, compiler, processor, and memory technology, affect the MIPS rate.
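The relationship between the clock rate, CPI, and MIPS rate can be checked numerically. The sketch below continues the hypothetical figures used above and verifies that Ic / (T × 10^6) and f / (CPI × 10^6) give the same MIPS value.

```python
# MIPS rate sketch: MIPS = Ic / (T * 10**6) = f / (CPI * 10**6); values are hypothetical.

f   = 500_000_000    # clock rate in Hz (assumed)
CPI = 3.1            # average cycles per instruction (assumed)
Ic  = 200_000_000    # instructions executed by the program (assumed)

T = Ic * CPI / f                    # CPU time in seconds
mips_from_time  = Ic / (T * 1e6)    # MIPS computed from instruction count and time
mips_from_clock = f / (CPI * 1e6)   # MIPS computed from clock rate and CPI

print(f"T = {T:.3f} s")
print(f"MIPS = {mips_from_time:.1f} (from Ic/T), {mips_from_clock:.1f} (from f/CPI)")
```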

Floating-Point Operations per Second: Most computer-based applications in science and engineering make heavy use of floating-point operations, so floating-point operations per second (flops) is the most relevant measure for estimating the performance of such systems. Today's computers perform floating-point operations at megaflops (10^6), gigaflops (10^9), teraflops (10^12), and petaflops (10^15) rates.

Throughput Rate: Another important measure is how many programs a system can execute per unit time, called the system throughput Ws. In a multiprogramming environment, the system throughput is often lower than the CPU throughput Wp, which is defined as:

Wp = f / (Ic × CPI)

From the above equations, Wp can also be written as Wp = (MIPS × 10^6) / Ic; it is likewise measured in programs per second. Usually Ws < Wp because of the additional overheads caused by the I/O, compiler, and OS when multiple programs are interleaved for CPU execution by multiprogramming or time-sharing operations. Suppose the CPU is kept busy in a perfect
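The CPU throughput Wp follows directly from the same quantities. The sketch below, again with hypothetical numbers, computes Wp in programs per second and shows that the form based on the MIPS rate gives the same value.

```python
# CPU throughput sketch: Wp = f / (Ic * CPI) programs/second; values are hypothetical.

f   = 500_000_000    # clock rate in Hz (assumed)
CPI = 3.1            # average cycles per instruction (assumed)
Ic  = 200_000_000    # instructions per program (assumed)

Wp = f / (Ic * CPI)              # programs executed per second by the CPU
mips = f / (CPI * 1e6)
Wp_from_mips = mips * 1e6 / Ic   # same value, via Wp = (MIPS * 10^6) / Ic

print(f"Wp = {Wp:.3f} programs/second (= {Wp_from_mips:.3f} via the MIPS rate)")
```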

Programming Environments: Generally the programmability of a computer depends on the programming environments provided to the users.

Conventional uniprocessor computers are programmed in a sequential environment in which instructions are executed one after another. The UNIX/OS kernel was designed in this fashion. When using a parallel computer, we need a parallel environment in which parallelism is exploited automatically.

In a parallel environment, language extensions or new constructs must be developed to specify parallelism or to facilitate its detection at various granularity levels by more intelligent compilers. Besides this, the operating system must support parallel processing and be able to manage resources for parallel execution. Two approaches to parallelism can be distinguished:

a) Implicit Parallelism: The implicit approach uses conventional languages such as C, C++, Fortran, or Pascal to write the source code. The sequentially written program is translated into parallel object code by a parallelizing compiler, which detects the parallelism and assigns target machine resources. This approach has been applied to shared-memory multiprocessors.

b) Explicit Parallelism: This approach requires more programmer effort to develop a source program using parallel languages. The use of parallel languages reduces the burden on the compiler to detect parallelism; instead, the compiler preserves the parallelism and

MULTIPROCESSORS AND MULTICOMPUTERS

Parallel computer architectures are divided into two categories: shared common-memory architectures and unshared distributed-memory architectures.

Shared Memory Multiprocessors: Shared-memory multiprocessor architectures are further categorized into three models:

1. Uniform Memory Access (UMA)

2. Non-Uniform Memory Access (NUMA)

3. Cache-Only Memory Architecture (COMA)

The UMA Model: In the UMA multiprocessor model, the physical memory is uniformly shared among all the processors: every processor takes an equal amount of time to access any memory word in the physical memory. Each processor may also have its own private cache.

Peripheral devices attached to a UMA multiprocessor can also be shared.

The following diagram shows UMA multiprocessor model.

Multiprocessors in the UMA model are also known as tightly coupled systems because of the high degree of resource sharing. The processors are interconnected by a common bus, a crossbar switch, or a multistage network. Synchronization, communication, and coordination of parallel events among the processors are done through shared variables in the common memory.

Applications: The UMA model is suitable for general-purpose and time-sharing applications by multiple users. It can also be used to speed up the execution of a single large program in time-critical applications.

Multiprocessors in the UMA model are further classified as symmetric or asymmetric. In a symmetric multiprocessor, all processors have equal access to all peripheral devices, which means all processors are equally capable of running executive programs such as the OS kernel and I/O service routines. In an asymmetric multiprocessor, only one processor or a subset of the processors is executive-capable. This executive, or master, processor executes the OS and handles I/O; the remaining processors, called attached processors (APs), execute user code under the supervision of the master processor.

The NUMA Model: A NUMA multiprocessor is a shared-memory system in which the access time varies with the location of the memory word. The shared memory is physically distributed among the processors as local memories, and the collection of all local memories forms a global address space accessible by all processors.

A processor accesses its own local memory faster; accessing remote memory attached to other processors takes longer because of the delay of the interconnection network. The BBN TC-2000 Butterfly multiprocessor is an example of the NUMA model. The above diagram shows the shared local memories.

Besides the distributed local memories, it is possible to attach a globally shared memory to the multiprocessor system. In that case memory access falls into three patterns: local memory access is the fastest, global memory access is next, and remote memory access is the slowest. NUMA models can easily be modified to allow a mixture of shared and private memory with pre-specified access rights.

In a hierarchically structured NUMA model, the processors are divided into several clusters. Each cluster is itself a UMA or NUMA multiprocessor, and the clusters are connected to global shared-memory modules; the entire system is considered a NUMA multiprocessor. All processors belonging to the same cluster are allowed uniform access to the cluster's shared-memory modules, and all clusters have equal access to the global memory. The access time to cluster memory is therefore shorter than the access time to global memory, and access rights must be specified for accessing memory in other clusters. The above diagram shows the hierarchical cluster model.

The COMA Model: The COMA model is a special case of a NUMA machine in which the distributed main memories are converted into caches. There is no memory hierarchy at each processor node, and all the caches together form a global address space. Remote cache access is assisted by distributed cache directories, which depend on the interconnection network used.

Initial data placement is not critical in this model because data eventually migrates to where it will be used.

Besides the UMA, NUMA, and COMA models, there are other variants of multiprocessors, such as cache-coherent non-uniform memory access (CC-NUMA).

Distributed Memory Multicomputers: A distributed-memory multicomputer system consists of multiple computers, called nodes, interconnected by a message-passing network. Each node is an autonomous computer consisting of a processor, local memory, and attached disks or I/O peripherals. The following diagram shows the general model of a message-passing multicomputer.

The message-passing network provides point-to-point static connections among the nodes. All local memories are private and accessible only by the local processor; for this reason, multicomputers are also called no-remote-memory-access (NORMA) machines. This model has gained importance with advances in interconnection and network technologies and is attractive for its suitability to certain applications, its scalability, and its fault tolerance.

Generations of Multicomputers: Modern multicomputers use hardware routers for message passing between the nodes. Each computer node is attached to a router, and boundary routers are connected to I/O and peripheral devices. Message passing between nodes generally involves a sequence of routers and channels.

Mixed types of nodes are also allowed in heterogeneous multicomputers.

The internode communication in a heterogeneous multicomputer is achieved through compatible data representations and message passing protocols.

The first generation multicomputers were based on the processor board

Second-generation multicomputers have a mesh-connected architecture, hardware message routing, and a software environment for medium-grain distributed computing.

The next generation, called fine-grain multicomputers, is implemented with the processor and communication hardware on the same VLSI chip.

Various topologies are used to interconnect multicomputer nodes, such as ring, tree, mesh, torus, hypercube, and cube-connected cycles, and various communication patterns are used among the nodes, such as one-to-one, broadcasting, permutations, and multicasting.

The most important issues for multicomputers are message routing schemes, network flow control strategies, deadlock avoidance, virtual channels, message-passing mechanisms, and program decomposition techniques.

Taxonomy of MIMD Computers: Parallel computers generally come in either SIMD or MIMD configurations. SIMD machines are mostly used for special-purpose applications; they are not size-scalable, though they are somewhat generation-scalable.

MIMD is the general-purpose architecture for parallel processing, and the various kinds of memory arrangement used with it have led to a taxonomy of MIMD machines. Gordon Bell considers shared-memory multiprocessors to have a single address space, while scalable multiprocessors or multicomputers use a distributed-memory structure. Multiprocessors that use a centrally shared memory have limited scalability; multicomputers use distributed memories with multiple address spaces and are scalable. The following diagram shows Bell's taxonomy of MIMD machines.

MULTIVECTOR AND SIMD COMPUTERS:

Supercomputers generally use multiple processors and vector processing to implement parallelism. They are classified as either pipelined vector machines, which use dedicated vector hardware, or SIMD computers, which exhibit massive data parallelism.

Vector Supercomputers: A vector computer is generally built on top of a scalar processor, with the vector processor attached to the scalar processor as an optional feature. Programs and data are first loaded into main memory through a host computer, and all instructions are first decoded by the scalar control unit.

scalar functional units. If an instruction is decoded as a vector operation, it is sent to the vector control unit, which supervises the flow of vector data between main memory and the vector functional pipelines. A number of vector functional pipelines may be built into a single vector processor. The following diagram shows the architecture of a vector supercomputer.

Vector supercomputers are broadly classified into two categories: register-to-register architecture and memory-to-memory architecture.

Register-to-Register Architecture: The above diagram represents a vector supercomputer with a register-to-register architecture. Vector registers are used to hold the vector operands and the intermediate and final vector results. The vector functional pipelines retrieve operands from, and put results into, the vector registers, and all vector registers are programmable through user instructions. In this architecture each vector register contains a component counter that keeps track of the component registers used in successive pipeline cycles.

The length of each vector register is fixed in this architecture; it may be

Generally, there is a fixed number of vector registers and functional pipelines in a vector processor, so these resources are reserved in advance to avoid resource conflicts between different vector operations.

Memory-to-Memory Architecture: In this architecture a vector stream unit replaces the vector registers. Vector operands and results are retrieved directly from, and stored directly into, main memory in the form of superwords, e.g., 512 bits at a time.

Pipelined vector supercomputers started as uniprocessor systems; subsequently these architectures have offered both uniprocessor and multiprocessor models.

SIMD Supercomputers: The following diagram shows the operational model of SIMD supercomputers presented by H. J. Siegel in 1979.

Generally, the operational model of an SIMD computer is specified by a 5-tuple:

M = (N, C, I, M, R), where:

N is the number of processing elements (PEs) in the machine. For example, the Illiac IV had 64 PEs and the CM-2 had 65,536 PEs.

C is the set of instructions directly executed by the control unit (CU), including scalar and program flow control instructions.

I is the set of instructions broadcast by the CU to all PEs for parallel execution. The instructions may be arithmetic, logic, data routing, masking, and other local operations in each active PE over the data within the PE.

M is the set of masking schemes, where each mask partitions the set of PEs into enabled and disabled subsets.

R is the set of data routing functions, specifying various patterns to be set up in the interconnection network for inter-PE communications.
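The 5-tuple can be made concrete with a small sketch: the class below holds the five components and applies a broadcast instruction only to the PEs enabled by the current mask. The class name, the tiny instruction sets, and the data values are all invented for illustration.

```python
# Sketch of the SIMD operational model M = (N, C, I, M, R); all details are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SIMDMachine:
    N: int                                   # number of processing elements (PEs)
    C: set = field(default_factory=lambda: {"branch", "halt"})  # CU-executed instructions
    I: set = field(default_factory=lambda: {"add", "mul"})      # instructions broadcast to PEs
    M: list = field(default_factory=list)    # masking schemes (a mask enables a subset of PEs)
    R: list = field(default_factory=list)    # data-routing functions for inter-PE communication

    def broadcast(self, op, operand, data, mask):
        # Apply the broadcast instruction 'op' to the local data of every enabled PE.
        assert op in self.I
        return [(x + operand if op == "add" else x * operand) if mask[pe] else x
                for pe, x in enumerate(data)]

machine = SIMDMachine(N=8, M=[[True] * 8], R=["shift-by-1"])
data = [1, 2, 3, 4, 5, 6, 7, 8]                               # one word held by each PE
mask = [True, True, True, True, False, False, False, False]   # only PEs 0-3 enabled
print(machine.broadcast("add", 10, data, mask))               # [11, 12, 13, 14, 5, 6, 7, 8]
```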
