A magnetic compass doesn’t make a very convenient meter. It has to be lying flat, and the coil has to be aligned with the compass needle when there is no current. But of course, electrical and electronic devices aren’t all oriented so as to be aligned with the north geomagnetic pole! Fortunately, the external magnetic field doesn’t have to come from the earth; it can be provided by a permanent magnet near or inside the meter. A permanent magnet supplies a stronger magnetic force than the earth’s field does, which makes it possible to build a meter that can detect much weaker currents. Such a meter can be turned in any direction without affecting its operation. The coil can be attached directly to the meter pointer and suspended by means of a spring in the field of the magnet. This type of metering scheme, called the D’Arsonval movement, has been around since the earliest days of electricity, but it is still used in some metering devices today. The assembly is shown in Fig. 3-4. This is the basic principle of the ammeter.
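The proportionality between current and pointer deflection can be made explicit. In the usual idealized model (the symbols below, a coil of N turns and area A in a radial field of flux density B, restrained by a spring of torsional constant k, are assumptions not stated in the text), the magnetic torque on the coil balances the spring torque, so the deflection angle is directly proportional to the current:

```latex
\tau_{\text{magnetic}} = N B A I,
\qquad
\tau_{\text{spring}} = k\,\theta,
\qquad
N B A I = k\,\theta
\;\Longrightarrow\;
\theta = \frac{N B A}{k}\, I .
```

This linear relation is why the D’Arsonval scale can be marked off in equal current intervals.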
The Navy Electricity and Electronics Training Series (NEETS) was developed for use by personnel in many electrical- and electronic-related Navy ratings. Written by, and with the advice of, senior technicians in these ratings, this series provides beginners with fundamental electrical and electronic concepts through self-study. The presentation of this series is not oriented to any specific rating structure, but is divided into modules containing related information organized into traditional paths of instruction. The series is designed to give small amounts of information that can be easily digested before advancing further into the more complex material. For a student just becoming acquainted with electricity or electronics, it is highly recommended that the modules be studied in their suggested sequence. While there is a listing of NEETS by module title, the following brief descriptions give a quick overview of how the individual modules flow together.
The reason I mention semiconductors at all is because they are the building block of the computer. Electricity can have one of two polarities, plus or minus. The electronics in the PC are designed to take advantage of this by storing electricity in one polarity or the other and assigning a numerical value to each. In the PC, these numerical values are the ones and zeroes of binary data (see "Reading binary numbers" later in this chapter). Because a semiconductor can be toggled between two electrical states, it is a perfect place to store all of the binary values that course around inside of the PC. Confused? Don't be. It's actually very simple: a semiconductor is just an extremely simple on/off switch. Zap it once, it's on; zap it again, it's off. Zap it, on; zap it, off, and so on.
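The on/off idea above scales directly into data. As a sketch (the particular byte chosen here is an illustrative assumption, not from the text), eight such switches side by side store one byte, and reading their states most-significant-bit first yields a number:

```python
# Each semiconductor switch stores one bit: "on" = 1, "off" = 0.
# Eight switches side by side make one byte (illustrative values).
bits = [0, 1, 0, 0, 0, 0, 0, 1]

# Read the switch states as a binary number, most significant bit first.
value = 0
for b in bits:
    value = value * 2 + b

print(value)       # 65
print(chr(value))  # 'A' -- the very same byte read as an ASCII character
```

The same eight switch states mean "65" to an arithmetic unit and "A" to a text routine; the interpretation, not the storage, differs.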
A static synchronous compensator (STATCOM), also known as a static synchronous condenser (STATCON), is a regulating device used on alternating-current electricity transmission networks. It is based on a power-electronics voltage-source converter and can act as either a source or a sink of reactive AC power to an electricity network. If connected to a source of power, it can also provide active AC power. It is a member of the FACTS family of devices. A static VAR compensator (SVC) can also be used for voltage stability. However, a STATCOM has better characteristics than an SVC. When the system voltage drops sufficiently to force the STATCOM output current to its ceiling, the maximum reactive output current is not affected by the voltage magnitude; the STATCOM therefore exhibits constant-current characteristics when the voltage falls below this limit. In addition, a STATCOM responds faster than an SVC, and its harmonic emission is lower.
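The low-voltage behavior described above can be sketched numerically. In this hypothetical per-unit comparison (all values and function names are illustrative assumptions, not from the text), the STATCOM at its ceiling holds a constant current as the bus voltage sags, whereas an SVC at its ceiling behaves as a fixed shunt susceptance whose current falls with voltage:

```python
# Illustrative per-unit values (assumed, not from the text).
I_MAX = 1.0   # STATCOM maximum reactive output current, per unit
B_SVC = 1.0   # SVC capacitive susceptance at its ceiling, per unit

def statcom_reactive_current(v_pu):
    # Once at its ceiling, STATCOM output current is independent of
    # the voltage magnitude (constant-current characteristic).
    return I_MAX

def svc_reactive_current(v_pu):
    # An SVC at its ceiling is a fixed susceptance, so I = B * V:
    # its reactive support collapses just when the grid needs it most.
    return B_SVC * v_pu

for v_pu in (1.0, 0.8, 0.5):
    print(v_pu, statcom_reactive_current(v_pu), svc_reactive_current(v_pu))
```

At 0.5 per-unit voltage the sketch gives the STATCOM its full 1.0 pu of current but the SVC only 0.5 pu, which is the advantage the paragraph describes.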
The idea of implementing computational engines using an encoded data format is by no means an idea of our times. In the early nineteenth century, Babbage envisioned large-scale mechanical computing devices, called Difference Engines [Swade93]. Although these engines used the decimal number system rather than the binary representation now common in modern electronics, the underlying concepts are very similar. The Analytical Engine, developed in 1834, was conceived as a general-purpose computing machine, with features strikingly close to modern computers. Besides executing the basic repertoire of operations (addition, subtraction, multiplication, and division) in arbitrary sequences, the machine operated in a two-cycle sequence, called “store” and “mill” (execute), similar to current computers. It even used pipelining to speed up the execution of the addition operation! Unfortunately, the complexity and the cost of the designs made the concept impractical. For instance, the design of Difference Engine I (part of which is shown in Figure 1.1) required 25,000 mechanical parts at a total cost of £17,470 (in 1834!).
This book is designed to serve as a first course in an electrical engineering or an electrical engineering and computer science curriculum, providing students at the sophomore level a transition from the world of physics to the world of electronics and computation. The book attempts to satisfy two goals: combine circuits and electronics into a single, unified treatment, and establish a strong connection with the contemporary worlds of both digital and analog systems. These goals arise from the observation that the approach of introducing electrical engineering through a course in traditional circuit analysis is fast becoming obsolete. Our world has gone digital. A large fraction of the student population in electrical engineering is destined for industry or graduate study in digital electronics or computer systems. Even those students who remain in core electrical engineering are heavily influenced by the digital domain.
Semiconductors have a number of parameters that vary linearly with temperature. Normally the reference voltage of a zener diode or the variation of a junction voltage is used for temperature sensing. Semiconductor temperature sensors have a limited operating range, from –50 to 150 °C, but are very linear, with accuracies of ±1 °C or better. Other advantages are that electronics can be integrated onto the same die as the sensor, giving high sensitivity, easy interfacing to control systems, and a choice of digital output configurations. The thermal time constant varies from 1 to 5 s, and internal dissipation can also cause an offset of up to 0.5 °C. Semiconductor devices are also rugged, with good longevity, and are inexpensive. For these reasons the semiconductor sensor is used extensively in many applications, including the replacement of the mercury-in-glass thermometer.
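The quoted thermal time constant can be illustrated with the standard first-order step-response model. This is a sketch under assumed values (a 3 s time constant within the 1–5 s range above, and hypothetical start and ambient temperatures):

```python
import math

def sensor_reading(t, t0=25.0, t_env=100.0, tau=3.0):
    """First-order response of a sensor with thermal time constant tau:
    T(t) = T_env + (T0 - T_env) * exp(-t / tau).
    t0, t_env, and tau are illustrative assumptions, not from the text."""
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# After one time constant the sensor has covered ~63% of the 75-degree step:
print(round(sensor_reading(3.0), 1))   # 72.4
# After five time constants it has effectively settled:
print(round(sensor_reading(15.0), 1))  # 99.5
```

The model shows why a 1–5 s time constant matters in practice: a fast process can change faster than the sensor can follow.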
Before the 1960s, semiconductor engineering was regarded as part of low-current and low-voltage electronic engineering. The currents used in solid-state devices were below one ampere and the voltages only a few tens of volts. The year 1970 began one of the most exciting decades in the history of low-current electronics. A number of companies entered the field, including Analog Devices, Computer Labs, and National Semiconductor. The 1980s were high-growth years for integrated-circuit, hybrid, and modular data converters. In the 1990s, the major applications were industrial process control, measurement, instrumentation, medicine, audio, video, and computers. In addition, communications became an even bigger driving force for low-cost, low-power, high-performance converters in modems, cell-phone handsets, wireless infrastructure, and other portable applications. The trends toward more highly integrated functions and lower power dissipation have continued into the 2000s.
The growing sensitivity to the technologies on Wall Street is clear evidence that the electrical/electronics industry is one that will have a sweeping impact on future development in a wide range of areas that affect our life style, general health, and capabilities. Even the arts, initially so determined not to utilize technological methods, are embracing some of the new, innovative techniques that permit exploration into areas they never thought possible. The new Windows approach to computer simulation has made computer systems much friendlier to the average person, resulting in an expanding market which further stimulates growth in the field. The computer in the home will eventually be as common as the telephone or television. In fact, all three are now being integrated into a single unit. Every facet of our lives seems touched by developments that appear to surface at an ever-increasing rate. For the layperson, the most obvious improvement of recent years has been the reduced size of electrical/electronics systems. Televisions are now small enough to be hand-held and have a battery capability that allows them to be more portable. Computers with significant memory capacity are now smaller than this textbook. The size of radios is limited simply by our ability to read the numbers on the face of the dial. Hearing aids are no longer visible, and pacemakers are significantly smaller and more reliable. All the reduction in size is due primarily to a marvelous development of the last few decades—the integrated circuit (IC). First developed in the late 1950s, the IC has now reached a point where cutting 0.18-micrometer lines is commonplace. The integrated circuit shown in Fig. 1.1 is the Intel® Pentium® 4 processor, which has 42 million transistors in an area measuring only 0.34 square inches. Intel Corporation recently presented a technical paper describing 0.02-micrometer (20-nanometer) transistors, developed in its silicon research laboratory.
These small, ultra-fast transistors will permit placing nearly one billion transistors on a sliver of silicon no larger than a fingernail. Microprocessors built from these transistors will operate at about 20 GHz. Considering the changes of the last few decades, it is natural to wonder what the limits of such growth may be. Rather than following a steady, somewhat predictable growth curve, the industry is subject to surges that revolve around significant developments in the field. Present indications are that the level of miniaturization will continue, but at a more moderate pace. Interest has turned toward increasing quality and yield levels (the percentage of good integrated circuits in the production process).
The primary aim of the material in this text is to provide the fundamental analytical and underpinning knowledge and techniques needed to successfully complete scientific and engineering principles modules of Degree, Foundation Degree and Higher National Engineering programmes. The material has been designed to enable students to use techniques learned for the analysis, modelling and solution of realistic engineering problems at Degree and Higher National level. It also aims to provide some of the more advanced knowledge required for those wishing to pursue careers in mechanical engineering, aeronautical engineering, electronics, communications engineering, systems engineering and all variants of control engineering.
Abstract Monitoring the electrical activity of multiple neurons in the brain could enable a wide range of scientific and clinical endeavors. An enabling technology for neural monitoring is the interface amplifier. Current amplifier research is focused on two paradigms of chronically sensing neural activity: one is the measurement of ‘spike’ signals from individual neurons to provide high-fidelity control signals for neuroprostheses, while the other is the measurement of bandpower fluctuations from cell ensembles that convey general information such as the intention to move. In both measurement techniques, efforts to merge neural recording arrays with integrated electronics have revealed significant circuit design challenges. For example, weak neural signals, on the order of tens of microvolts rms, must be amplified prior to analysis and often lie in frequency bands dominated by 1/f and popcorn noise in CMOS technologies. To ensure the highest-fidelity measurement, micropower chopper stabilization is often required to provide immunity from this excess noise. Another difficulty is that strict power constraints place severe limitations on the signal processing, algorithms, and telemetry capabilities available in a practical system. These constraints motivate the design of the interface amplifier as part of a total system-level solution. In particular, the system solutions we pursued are driven by the key neural signal of interest, and we use the characteristics of the neural code to guide the partitioning of the signal chain. To illustrate the generality of this design philosophy, we discuss state-of-the-art design examples from a spike-based, single-cell system and a field-potential, ensemble neuronal measurement system, both intended for practical and robust neuroprosthesis applications.
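The chopper principle mentioned above can be sketched in a few lines. In this toy model (the chop frequency, signal level, and function names are assumptions for illustration, not from the text), multiplying the input by a ±1 square wave shifts a near-DC neural signal up to the chop frequency, above the amplifier's 1/f noise corner; multiplying by the same carrier again brings it back, while any noise added in between is pushed up to the chop frequency where it can be low-pass filtered away:

```python
import math

F_CHOP = 1000.0  # chopper frequency in Hz (assumed)

def carrier(t):
    # +/-1 square wave at the chop frequency
    return 1.0 if math.sin(2 * math.pi * F_CHOP * t) >= 0 else -1.0

def chop(v, t):
    # Up-modulate the input above the 1/f noise corner.
    return v * carrier(t)

def demodulate(v, t):
    # carrier(t)**2 == 1, so the wanted signal is restored exactly,
    # while noise injected after the first chop lands at F_CHOP instead.
    return v * carrier(t)

v_in = 50e-6  # a 50 uV signal, within the "tens of microvolts rms" range
t = 0.0001
recovered = demodulate(chop(v_in, t), t)
print(recovered == v_in)  # True: the signal survives the round trip
```

A real chopper amplifier interposes the gain stage between the two multiplications and follows them with a low-pass filter; this sketch shows only why the wanted signal is unharmed by the modulation.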