VLC-OFDM is a modification of conventional OFDM. The main goal of O-OFDM is to increase the data rate in accordance with established optical modulation techniques. Compared with conventional OFDM, the additional blocks are the Hermitian symmetry stage, the LED, which converts the electrical signal into an optical signal, and the photodetector, which converts the optical signal back into an electrical signal. In the transmitter section, a stream of data is first received as the input signal. A serial-to-parallel converter separates the data stream into parallel form. The mapping process then produces symbols that depend on the modulation used; in OFDM the modulations are usually PSK and QAM. Hermitian symmetry is then imposed so that the output of the IFFT contains only real values. Once the modulation is done, parallel-to-serial conversion is performed and the cyclic prefix is added. The output of this process is passed to a digital-to-analog converter (DAC). Each signal is then filtered using a low-pass filter (LPF) to shape the pulses and avoid aliasing (aliasing is an effect in signal processing that causes different signals to become indistinguishable from each other when sampled; one signal becomes an alias of the other). The LED then transmits the data through the optical channel in the form of light, and the photodetector in the receiver section receives the signal. The receiver then performs the reverse process, and the transmitted data are finally recovered.
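The Hermitian symmetry step can be sketched as follows. This is an illustrative numpy example (IFFT size and QPSK mapping are assumptions, not taken from the text): the subcarrier vector is mirrored in conjugate form so the IFFT output is purely real, as required to drive an LED.

```python
import numpy as np

# Illustrative sketch (assumed parameters): imposing Hermitian symmetry
# on the subcarrier vector so the IFFT output is purely real.

rng = np.random.default_rng(0)

N = 16                      # IFFT size (assumed)
n_data = N // 2 - 1         # usable data subcarriers

# Random QPSK symbols for the data subcarriers
qpsk = (rng.choice([-1, 1], n_data) + 1j * rng.choice([-1, 1], n_data)) / np.sqrt(2)

# Build the Hermitian-symmetric frequency-domain vector:
# X[0] = X[N/2] = 0, and X[N-k] = conj(X[k])
X = np.zeros(N, dtype=complex)
X[1:N // 2] = qpsk
X[N // 2 + 1:] = np.conj(qpsk[::-1])

x = np.fft.ifft(X)          # time-domain OFDM symbol

print(np.max(np.abs(x.imag)))  # imaginary part is numerically zero
```

Note the cost of this construction: half of the subcarriers carry conjugate copies, which is why O-OFDM trades spectral efficiency for a real-valued drive signal.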
Governments worldwide are investing much time and effort in developing efficient traffic-management capabilities to overcome growing problems with traffic congestion, accident rates, long travel times, and air pollution from auto emissions. One approach is to use optical sensors to monitor traffic and establish a dynamic road traffic-management system that operates in near real time. Such a system requires the design of new software and hardware components for signal processing. Traffic-data measurement systems often used today are induction loops embedded in the pavement and the so-called floating-car-data technique, in which the desired data, such as the position and velocity of the vehicles, are collected by several mobile units at the same time and transmitted to a central system via mobile communication. These types of systems have disadvantages, including the inability to calculate all the necessary traffic parameters, such as object-related features (location, speed, size, and shape) and region-related features (traffic speed and density, queue length at stop lines, waiting time, and origin-destination matrices), or to evaluate the behavior of nonmotorized road users. Optical systems can overcome these limitations and thus optimize traffic flow at intersections during busy periods, identify accidents quickly, and provide a forecast of changes in traffic patterns.
Digital audio processing systems require input in the form of digitally encoded values of sampled analog signals. A digital audio system thus starts with sound converted into an analog signal by a microphone. The analog signal is then encoded into a digital signal using an analog-to-digital converter (ADC). Normally, the digital signal is a pulse-code modulated (PCM) signal whose resolution depends on the resolution of the ADC used. In typical digital audio systems the resolution is 16 bits, and the audio signal is sampled at 44.1 kHz, which gives 44,100 samples per second in the case of CD audio. Other formats use different sampling frequencies and resolutions. With common digital tools and techniques, the encoded digital values can be stored and/or processed. To output the processed signal, a digital-to-analog converter (DAC) produces an analog equivalent of the digital signal, which is then passed through a power amplifier and fed to a loudspeaker or headphones. The entire process of recording to reproducing an audio signal can be visualized as an audio reproduction chain, as shown in Figure 2.1.
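The sampling and quantization step can be illustrated numerically. In this minimal sketch (the 440 Hz tone is an assumption; only the 44.1 kHz / 16-bit parameters come from the text), one second of a sine wave is quantized to signed 16-bit PCM as on a CD:

```python
import numpy as np

# Minimal sketch (assumed test tone): one second of a 440 Hz sine,
# quantized to 16-bit PCM at the CD-audio rate of 44.1 kHz.

fs = 44100                             # sampling rate (Hz)
bits = 16                              # ADC resolution
t = np.arange(fs) / fs                 # one second of sample instants
analog = np.sin(2 * np.pi * 440 * t)   # "analog" source signal

# Quantize to signed 16-bit PCM, the format described in the text
pcm = np.round(analog * (2 ** (bits - 1) - 1)).astype(np.int16)

print(len(pcm))        # 44100 samples per second
print(pcm.dtype)       # int16, i.e. 16-bit resolution
```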
Hongbin Li received the B.S. and M.S. degrees from the University of Electronic Science and Technology of China, Chengdu, in 1991 and 1994, respectively, and the Ph.D. degree from the University of Florida, Gainesville, in 1999, all in electrical engineering. From July 1996 to May 1999, he was a Research Assistant with the Department of Electrical and Computer Engineering, University of Florida. He was a Summer Visiting Faculty Member of the Air Force Research Laboratory, Rome, NY, in summers 2003 and 2004. Since July 1999, he has been with the Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, where he is an Associate Professor. His current research interests include wireless communications and networking, statistical signal processing, and radars. Dr. Li is a member of Tau Beta Pi and Phi Kappa Phi. He received the Harvey N. Davis Teaching Award in 2003 and the Jess H. Davis Memorial Award for excellence in research in 2001 from Stevens Institute of Technology, and the Sigma Xi Graduate Research Award from the University of Florida in 1999. He is a member of the Sensor Array and Multichannel (SAM) Technical Committee of the IEEE Signal Processing Society. He is an Associate Editor for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (1/2003 to 12/2006), the IEEE SIGNAL PROCESSING LETTERS (1/2005 to 12/2006), and the IEEE TRANSACTIONS ON SIGNAL PROCESSING (10/2006 to present), and serves as a Guest Editor for the EURASIP Journal on Applied Signal Processing Special Issue on Distributed Signal Processing Techniques for Wireless Sensor Networks.
Hence an amplitude of 0.5 has been taken as the threshold to detect the R peak; we then locate the R peaks, and the difference between two R peaks along the x-axis gives the R-R interval. From the R-wave detection we can calculate the heart rate; the R peak is an essential part of the ECG wave. High-resolution electrocardiography has become an important clinical tool for analyzing the high-frequency content of electrocardiograms (ECGs). Recent emphasis has been on the detection of ventricular late potential activity due to its ability to predict ventricular tachycardia (VT) in myocardial infarction (MI) patients. To accentuate the high-frequency components, the signal-averaged ECG data are filtered using high-pass filters. Two types of filters used in commercial systems, bidirectional Butterworth and Fourier transform filters, are compared using a common signal-averaged ECG database. Signal-averaged ECG data acquired at two clinical sites (Mayo Clinic and Bowman Gray School of Medicine) using the MAC15 HIRES system were filtered using a 40-Hz fast Fourier transform (FFT) filter with a 6 dB/octave roll-off on an IBM-compatible personal computer. The same averaged data were filtered using a 40-Hz bidirectional Butterworth filter with a similar roll-off. Using a common algorithm, the outputs of both filters were used to compute the vector magnitude and to obtain measurements that quantify high-frequency, low-amplitude (HFLA) signals. The measurements include total QRS duration, duration of HFLA signals, root-mean-square voltage, and mean voltage in the terminal 40 ms. The results were very similar, and both filters were found to be functionally equivalent.
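The thresholding step can be sketched in code. This is an illustrative example on a synthetic signal, not clinical data; only the 0.5 threshold comes from the text, and every other parameter (sampling rate, peak shape, refractory period) is an assumption. Real detectors such as Pan-Tompkins are more involved.

```python
import numpy as np

# Illustrative sketch (synthetic signal): detect R peaks with the 0.5
# amplitude threshold, then compute R-R intervals and heart rate.

fs = 250                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of signal
# Synthetic "ECG": one narrow peak per second (i.e. 60 bpm) plus noise
ecg = np.exp(-((t % 1.0 - 0.5) ** 2) / (2 * 0.005 ** 2))
ecg += 0.05 * np.random.default_rng(1).standard_normal(t.size)

thr = 0.5                                   # amplitude threshold from the text
above = ecg > thr
rises = np.flatnonzero(np.diff(above.astype(int)) == 1)   # run starts
# Take the local maximum just after each threshold crossing
peaks = np.array([s + np.argmax(ecg[s:s + int(0.2 * fs)]) for s in rises])
# Enforce a refractory period so one beat is never counted twice
peaks = peaks[np.insert(np.diff(peaks) > int(0.3 * fs), 0, True)]

rr = np.diff(peaks) / fs                    # R-R intervals (s)
heart_rate = 60.0 / rr.mean()               # beats per minute
print(round(heart_rate))
```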
The final ADC output is obtained by averaging the two outputs of the two ADCs. A difference of zero between the two outputs for the same input indicates that the ADCs are well calibrated; a non-zero difference indicates that calibration is required, and its value determines the amount of correction. Since the analog circuit is split in two, the total analog area remains the same. Bandwidth is proportional to gm/C, power is proportional to gm, and noise is proportional to √(kT/C); splitting into two halves, each with capacitance C/2, leaves the bandwidth, and hence the power, unchanged. The overall noise is also unchanged because the results are obtained by averaging the two halves. For a 10-bit, 1-MS/s algorithmic ADC, self-calibration is performed with around 10,000 conversions. By using the split-ADC concept, the analog area of a single ADC is essentially divided in two, so the effect on analog complexity is insignificant in terms of power, overall area, noise performance, and bandwidth. However, the split-ADC concept alone is not enough for error estimation: if both the A and B sides have the same error and the comparator outputs are also the same, the error cannot be identified. This problem can be solved by forcing the two sides to take different decisions, which is realized with a multiple-residue-mode cyclic amplifier. This amplifier combines the dual-residue approach with the 1.5-bit/stage amplifier. A 2-bit path address decides which comparator output is used for the digital output. By choosing the best among the four residue modes, this technique allows the digital output to be selected without interacting with the analog circuit, at the cost of one additional comparator, with minimal impact on overall area and power. Because of the multiple residue modes, a wide variety of decision paths is available, so calibration information can be extracted easily even for a DC input.
The receiving circuit mainly includes the Sensing Receiver (SR), Signal Amplification Circuit (SAC), Band Pass Filter (BPF), and Amplitude Detection Circuit (ADC). In the design, we select Honeywell's HMC1043 three-axis magnetoresistive sensor as the sensing receiver; it has the following characteristics: low cost, high sensitivity, small size, low noise, high reliability, strong adaptability, easy installation, and so on. The amplitude detection circuit provides the signal amplitude we need. The signal is then transmitted to the DSP for software processing.
The sound transmission path in a normal human auditory system is as follows: sound → auricle → external auditory canal → tympanic membrane → ossicles → oval window → cochlea → auditory nerve → brain. This is the path whereby each human being receives an auditory sensation. Without passing through the auditory structures before the oval window, bone conduction directly transmits sound through bone vibration. In this case, an auditory sensation travels by the following path: skull → cochlea → auditory nerve → brain. People with hearing loss who have a normal auditory nerve can still utilize the bone conduction path to receive an auditory sensation [1, 2]. According to domestic and foreign research on bone conduction hearing aids, such a device usually includes a signal converter electrically connected to a vibrator. The signal converter receives the ambient sound signal, converting it into a corresponding electrical signal. A vibrating sheet is then driven by the electrical signal to generate a vibration. Users would simply press the vibrating sheet against the bones in their middle ear area to utilize the bone as a medium to transmit the vibration frequency of the vibrating sheet to the user’s cochlear structure and auditory nerve, enabling perception of the sound received by the signal converter. However, the signal converter of a bone conduction hearing aid can receive sounds from all directions. Excessive sound sources or continual noise can prevent users from correctly receiving the target sound, resulting in the low accuracy of sound transmission. Additionally, when an audio signal is transmitted through the air, its intensity attenuates with distance. Excessive distance between the user and a sound source can thus prevent smooth transmission of the audio signal to the signal converter, and the user will therefore be unable to receive sounds from distant locations, thus diminishing usability. 
Conventional bone conduction devices therefore require a redesign to resolve these defects of low accuracy and poor usability. The proposed transmission device employs dentary bone vibration for sound conduction. It receives an optical signal that carries auditory information and restricts users to receiving only the audio information carried by that optical signal. This not only improves sound transmission accuracy but also improves usability by preventing audio signal attenuation with distance.
In many real situations, outliers can substantially affect the performance of the pdf transfer technique. The notion of outliers refers here to the data samples that do not follow the mapping, as opposed to inliers, which do follow the mapping. In archive footage, for example, outliers can be data missing randomly in each frame. If the missing patches are large (which is frequently the case), this biases the measurement of the image pdf away from the true underlying signal. In the colour transfer case, it is clear that large differences in content can affect the suitability of the mapping generated. Content mismatches can be considered as outlier contamination in general, and this applies also when there is motion between frames in the case of flicker: the motion can cause content changes where they are not expected. An example of this problem is shown in Figure 5.4. Of course, it is key to establish exactly what the outliers and inliers are in the case of pdf transfer. This chapter therefore considers techniques for handling outliers in the transfer problem. Results are shown for both the flicker and colour transfer applications.
Cardiac disease classification algorithms begin with the separation or delineation of the individual ECG signal components. The ECG signal comprises the QRS complex and the P and T waves, as shown in Figure 3. Occasionally a U wave may also be present, lying after the T wave. The QRS complex is the most distinguishable component in the ECG because of its spiked nature and high amplitude, as it indicates depolarization of the ventricles of the heart, which have greater muscle mass and therefore produce more electrical activity. Detection of the QRS complex is of vital importance for the subsequent processing of the ECG signal, such as calculation of the RR interval and the definition of search windows for detection of the P and T waves. In terms of disease classification, the QRS complex is of pathological importance, and its detection serves as an entry point for almost all automated ECG analysis algorithms.
proach: it aims at the direct reconstruction of the different statistically independent bioelectric source signals, as well as the characteristics of their propagation to the electrodes, each revealing important medical information. It is nonparametric and is not based on pattern averaging, which could hamper the detection and analysis of typical fetal heartbeats. Barros and Cichocki (72) proposed a semi-blind source separation algorithm to solve the FECG extraction problem. This algorithm requires a priori information about the autocorrelation function of the primary sources to extract the desired signal (FECG). They did not assume the sources to be statistically independent, but they did assume that the sources have a temporal structure and different autocorrelation functions. The main problem with this method is that, in the presence of FHR variability, an a priori estimate of the autocorrelation function of the FECG may not be appropriate for FHR analysis. D. E. Marossero et al. (73) argued that ICA can be an efficient method for extracting the FECG from composite electrocardiogram signals. They demonstrated an information-theoretic ICA algorithm named Minimum Renyi's Mutual Information (Mermaid) (74), which is based on minimizing Renyi's mutual information, and evaluated its performance. The effectiveness and data efficiency of Mermaid, and its superiority over alternative information-theoretic BSS algorithms, are illustrated using artificially mixed ECG signals as well as FHR estimates in real ECG mixtures. In 2003, Ping Gao et al. (75) employed a combined method of singular value decomposition (SVD) and ICA for the separation of the FECG from the mixture of ECG signals measured on the abdomen of the mother.
They mainly applied a blind source separation method using an SVD of the spectrogram, followed by an iterative application of ICA on both the spectral and a temporal representation of the ECG signals. The SVD contributes to the separability of each component, and the ICA contributes to the independence of the two components from the mixtures. In 2003, Vigneron et al. (76) also applied BSS methods for FECG extraction. They showed that the FECG could be reconstructed by means of higher-order statistical tools exploiting ECG nonstationarity associated with post-denoising wavelets.
Based on this previous analysis, we assume there is only one energy centre in a light source representing the position under monitoring. This approach comes from the analysis of the signal energy centre through the several methods just mentioned, where we found that even when the optoelectronic signal has more than one peak (light-emitter source energy peaks), the signal energy centre can still be found by centroid calculation in both the time and frequency domains, using the geometric centroid and the power spectrum centroid respectively. However, although the power spectrum centroid and the geometric centroid coincide well in our experiments, we have observed that noise on either side of the signal displaces the centroid towards the noisy side. The proposed method therefore improves on this by saturating the signal and establishing a threshold below which noise is excluded before calculating the half-time interval, which corresponds to the signal energy centre concept we seek to correlate with the position under monitoring.
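The two centroid calculations can be sketched numerically. This is an illustrative example on a synthetic two-peaked pulse (all parameters are assumptions, not the authors' data): the temporal (geometric) centroid is the energy-weighted mean of the sample instants, and the power-spectrum centroid is the power-weighted mean frequency.

```python
import numpy as np

# Illustrative sketch (synthetic pulse): temporal and spectral centroids
# of a sampled optoelectronic signal with two energy peaks.

fs = 1000.0                              # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
# Two-peaked pulse centred around t = 0.5 s
x = np.exp(-((t - 0.45) ** 2) / 2e-3) + 0.8 * np.exp(-((t - 0.55) ** 2) / 2e-3)

# Geometric centroid in time: energy-weighted mean of the sample instants
p = x ** 2
t_centroid = np.sum(t * p) / np.sum(p)

# Power-spectrum centroid: power-weighted mean frequency
X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size, 1 / fs)
f_centroid = np.sum(f * np.abs(X) ** 2) / np.sum(np.abs(X) ** 2)

# A single energy centre emerges despite the two peaks
print(round(t_centroid, 3), round(f_centroid, 1))
```

Note that any one-sided noise added to x would pull t_centroid toward that side, which is the sensitivity the thresholding improvement above addresses.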
The data was collected at the J.S.S. 'Sahana' integrated and special school located in Bangalore, India. Prior to the data collection, the teachers were requested to train and acclimatize the disabled children, aged between five and eight years and facing speech disability problems, to read a few sentences written in the Kannada language. The experiment was conducted in a pleasant atmosphere; the children were made to read the sentences, and the sentences were recorded. Similarly, the normal samples were obtained at 'Vinayaka' school, Bangalore, by following the same procedure with normal children in the same age group. About 30 samples were taken in all for the normal cases. The averages of these samples were taken, and the child's voice that most nearly coincides with the average is considered the reference data, referred to as 'normal'. The data after conversion should, however, be easy to use and manipulate, low-loss, and compatible with the Windows platform. The recorded digitized signal is therefore transformed into a WAV file, the most commonly used digitized sound format, with the help of the downloadable software known as GOLDWAVE. The transformation procedure in the GOLDWAVE software results in a .wav file. Table 1 shows the parameters of the .wav file obtained.
Signals in an electrical power system are time- and frequency-dependent. Frequency-domain analysis is used to extract features and information about possible transient conditions. These transient conditions are associated with the presence of high-frequency harmonics and other disturbances. As the electric smart grid of the future becomes more complex in terms of the variability of loads and generation, growth in response to market incentives and the use of power electronics for energy processing are required. Electrical signals will therefore require a broader set of tools and methods for signal processing. The basic bridge between the time and frequency domains is the Fourier transform (FT). The FT is not the best tool for analyzing power system signals, because these signals are non-stationary while the FT assumes that the signals under analysis are stationary. To overcome this limitation, alternative methods have been proposed, such as the short-time Fourier transform (STFT), wavelets, and filter banks. These techniques are commonly known as joint time-frequency analysis.
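The STFT's advantage over the plain FT can be demonstrated with a small numpy-only sketch (all parameters are assumptions): a 50 Hz fundamental with a brief 800 Hz transient burst. The FT would show the 800 Hz energy but not when it occurred; the STFT localizes it in time.

```python
import numpy as np

# Minimal sketch (assumed parameters): a numpy-only short-time Fourier
# transform applied to a non-stationary power-system-like signal.

fs = 3200                                   # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)              # 50 Hz fundamental
burst = (t > 0.5) & (t < 0.55)
x[burst] += 0.5 * np.sin(2 * np.pi * 800 * t[burst])   # transient harmonic

nperseg, hop = 256, 128
win = np.hanning(nperseg)
frames = np.array([x[i:i + nperseg] * win
                   for i in range(0, x.size - nperseg + 1, hop)])
Z = np.abs(np.fft.rfft(frames, axis=1))     # |STFT|: frames x frequency bins
freqs = np.fft.rfftfreq(nperseg, 1 / fs)

# The 800 Hz bin is strong only in the frames that cover the burst
bin800 = np.argmin(np.abs(freqs - 800))
t_peak = np.argmax(Z[:, bin800]) * hop / fs
print(round(t_peak, 2))                     # near the 0.5 s burst onset
```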
With a sampling rate of 2048 Hz, amplitudes of the order of microvolts from the PO7 channel are considered in this proposed work. Random noise is added to the loaded EEG signals, and digital filter structures (IIR and FIR) are implemented to identify the best filter structure. Noise is a random signal that occurs naturally. There are many types of random noise, such as square-root noise, pink noise, white noise, proportional noise, blue noise, and thermal noise. Random noise occurs in any conducting medium owing to the random motion of electrons. When random noise signals are combined with electronic circuits, the resulting noise is the combined power of the individual signals. In this work, white noise consisting of distributed random numbers is used. The MATLAB function "randn" is used to generate a 1-by-N vector of random numbers with a standard deviation of 0.1. The randn function generates random numbers with no defined pattern. The extent of signal fluctuation from the mean is quantified by the standard deviation. Random noise has no precise peak-to-peak value; it is approximately 6 to 8 times the standard deviation, so random noise with a peak-to-peak value of 0.6 to 0.8 is generated in this work for analysis. In this research, the EEG signals are loaded from the PhysioBank ATM database. The recorded EEG signal is exported to the MATLAB (R2016a) workspace, as shown in Fig. 2.
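The noise-addition step can be reproduced outside MATLAB. This sketch translates the randn usage to numpy on a synthetic stand-in signal (the 10 Hz, 20 µV component is an assumption, not PhysioBank data); the 2048 Hz rate and the 0.1 standard deviation come from the text.

```python
import numpy as np

# Sketch of the noise-addition step, translated from MATLAB "randn" to
# numpy (synthetic EEG stand-in, not PhysioBank data).

rng = np.random.default_rng(0)

fs = 2048                                   # sampling rate from the text (Hz)
t = np.arange(fs) / fs
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t)    # assumed 10 Hz, 20 uV component

noise = 0.1 * rng.standard_normal(eeg.size) # white noise, std = 0.1
noisy = eeg + noise

print(round(noise.std(), 2))                # close to 0.1
# Peak-to-peak spread is roughly 6-8 times the standard deviation
print(round(np.ptp(noise), 2))
```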
The SmartFusion technology combines analog capability, Flash memory, and FPGA fabric along with a microcontroller subsystem in a PSoC. Microsemi is a company that manufactures FPGA chips. It uses a low-power CMOS technology and the company's proprietary interconnection fusing technique, called the programmable low-impedance circuit element (PLICE), to produce its version of an FPGA. Microsemi provides software for system co-design and the complete FPGA development environment Libero SoC v10.1. Libero SoC v10.1 is a true IDE (integrated design environment) that integrates a number of tools in one package. It is suited to designing with all Microsemi FPGAs, including the new SmartFusion2 intelligent FPGA, and manages the entire design flow from design entry, synthesis and
Resolution is an important parameter of radar, and waveform design plays an important role in radar applications. These waveform designs can be evaluated using signal processing tools such as the autocorrelation function and the ambiguity function. In this project, signal processing techniques have been developed using the above functions. These techniques are most useful in the multi-target scenario of radar. In this project, signals such as the burst signal and linear frequency modulated (LFM) signals are used to determine radar resolution, and these waveforms are also implemented with popular codes such as the COSTAS code. Three-dimensional plots are generated to evaluate both range and Doppler resolution using the ambiguity function. Results are presented for the COSTAS code using LFM signals, and the performance of these waveforms is compared with conventional waveforms.
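The range-resolution side of this can be sketched with the autocorrelation of an LFM pulse, which is the zero-Doppler cut of its ambiguity function. All chirp parameters below are assumptions for illustration; the point is that the compressed mainlobe width is set by the swept bandwidth B, not by the pulse duration T.

```python
import numpy as np

# Illustrative sketch (assumed chirp parameters): autocorrelation of a
# linear frequency modulated (LFM) pulse, i.e. the zero-Doppler cut of
# its ambiguity function, showing the compressed mainlobe.

fs = 1e6                                  # sampling rate (Hz)
T = 1e-3                                  # pulse duration (s)
B = 100e3                                 # swept bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # complex LFM pulse

# Matched-filter output = autocorrelation of the pulse
ac = np.abs(np.correlate(chirp, chirp, mode="full"))
ac /= ac.max()

# -3 dB mainlobe width is on the order of 1/B, far narrower than T
above = np.flatnonzero(ac > 1 / np.sqrt(2))
width = (above[-1] - above[0]) / fs
print(width < 2 / B, width < T / 10)
```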
Interactive computer gaming offers another interesting application of bio-signal-based interfaces. The game system would have access to heart rate, galvanic skin response, and eye-movement signals, so the game could respond to a player's emotional state or estimate his or her level of situation awareness by monitoring eye movements. An interactive game character could respond to a user who stares or one who looks around, depending on the circumstances. This use of eye tracking is easier than using the eyes as a precision pointing device, which is difficult because the eyes constantly explore the environment and do not offer a stable reference for a screen pointer. To provide more fun and strategy, there are usually two styles of attack in fighting games: the weak attack and the strong attack. Common input devices for fighting action games are the joypad and joystick. These use a stick to move the character and a button to make a certain type of attack, for example a punch or a kick. To make a strong attack, the user has to input a complex key sequence that makes that motion difficult to invoke, thereby achieving a balance between the two types of attack. Though these devices are cheap and easy to use, they have disadvantages: the interfaces are not intuitive for controlling human fighting movements, and the user has much to memorize, such as the meaning of each button and the input sequence for a strong attack motion. A human-computer interface device designed for a fighting action game, "Muscleman," has been developed by D. G. Park and H. C. Kim in Korea. The game characters are usually depicted as making an isometric contraction of their arms as an expression of power concentration to make a strong attack like a fireball (75).
In 2008, Wei Zhang et al. used the multiresolution concept along with adaptive filters (LMS) to effectively detect weak ECG signals in strongly noisy environments; the method is simple, fast, and effective. In 2009, Arman Sargolzaei et al. proposed a new automatic baseline wander removal algorithm that preserves the clinical information and morphology of the ECG with a high signal-to-noise ratio.
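The LMS building block behind such adaptive ECG denoising can be sketched as an adaptive noise canceller. This is not the cited algorithms themselves: all signals and parameters below are synthetic assumptions. A reference input correlated with the noise is filtered adaptively so that the estimated noise can be subtracted from the corrupted observation.

```python
import numpy as np

# Minimal sketch (synthetic data): an LMS adaptive noise canceller.

rng = np.random.default_rng(2)
n = 4000
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 200)              # stand-in for the ECG
ref = rng.standard_normal(n)                     # noise reference input
noise = np.convolve(ref, [0.8, -0.4, 0.2])[:n]   # coloured noise from ref
d = clean + noise                                # corrupted observation

L, mu = 8, 0.01                                  # filter length, step size
w = np.zeros(L)
err = np.zeros(n)
for i in range(L, n):
    u = ref[i - L + 1:i + 1][::-1]               # ref[i], ref[i-1], ...
    e = d[i] - w @ u                             # subtract estimated noise
    w += mu * e * u                              # LMS weight update
    err[i] = e

# After convergence, the error output tracks the clean signal
mse = np.mean((err[n // 2:] - clean[n // 2:]) ** 2)
print(mse < np.mean(noise[n // 2:] ** 2))        # residual below noise power
```

Since the clean component is uncorrelated with the reference, minimizing the output power drives the filter to model only the noise path, leaving the clean signal in the error output.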
Abstract—Operational amplifiers (OPAMPs) are basic building blocks for implementing a variety of analog circuits such as amplifiers, filters, integrators, differentiators, summers, and oscillators. OPAMPs work well for low-frequency applications such as audio and video systems. At higher frequencies, however, OPAMP design becomes difficult because of their frequency limit, and operational transconductance amplifiers (OTAs) are deemed promising replacements as building blocks. This paper illustrates the application of an OTA as an active low-pass filter. The primary building block of an OTA is the current mirror. In this paper, different current mirrors are used to design the LPF, and the corresponding frequency and phase responses are comparatively studied. A comparative study of a CMOS OTA and an NMOS OTA is also presented. Finally, applications of the OTA-based LPF are studied.
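For a first-order Gm-C low-pass filter, an OTA driving a grounded capacitor, the cutoff frequency follows directly from the transconductance and the load capacitance: f_c = gm / (2πC). The component values below are assumptions for illustration, not values from the paper.

```python
import math

# Illustrative sketch (assumed component values): cutoff frequency of a
# first-order Gm-C low-pass filter, f_c = gm / (2 * pi * C).

gm = 100e-6        # OTA transconductance, 100 uA/V (assumed)
C = 10e-12         # load capacitance, 10 pF (assumed)

f_c = gm / (2 * math.pi * C)
print(round(f_c / 1e6, 2), "MHz")   # 1.59 MHz
```

This relation is why OTA-based filters suit high frequencies: tuning gm (via bias current) moves f_c without changing the passive components.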