advanced signal processing algorithms

Top PDF documents for "advanced signal processing algorithms":

The Personal Hearing System—A Software Hearing Aid for a Personal Communication System

…about 10 ms, the signal path needs to remain completely on the audio headsets. Processing on the central processor is then restricted to signal analysis schemes that control the processing parameters of the signal path, for example, classification of the acoustic environment, direction-of-arrival estimation, and parameter extraction for blind source separation. In general, it seems feasible that these complex signal analysis schemes, as well as upcoming computationally demanding algorithms for Auditory Scene Analysis [15], need not be part of the signal path. The projected architecture is therefore well suited for such algorithms, which can benefit from the high signal-processing and battery capacity of the central processor. Other requirements for the link are bandwidth and low power consumption: to allow for multichannel audio processing, several (typically two or three) microphone signals from each ear are required, which calls for sufficient link bandwidth. Additionally, if signals are transmitted in compressed form, the link's signal encoder should not alter the signal, to avoid artifacts and performance degradation in multichannel processing. To ensure long battery life, the link should consume little power. To reduce link power consumption, the PHS could provide advanced processing only on demand; switching on the advanced processing and the link might be controlled either manually or by an automatic audio analysis in the headsets.
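
A minimal sketch of the on-demand switching idea described above: the headset runs a cheap acoustic-scene classifier and enables the power-hungry link only when advanced processing would pay off. The class names, features, and thresholds are illustrative assumptions, not PHS specifics.

```python
# Hypothetical on-demand link control for a headset/central-processor split:
# enable the wireless link only when the acoustic scene warrants advanced
# processing. Class names, features, and thresholds are assumptions.

def classify_environment(frame_energy: float, spectral_flatness: float) -> str:
    """Toy stand-in for the headset-side acoustic scene classifier."""
    if frame_energy < 0.01:
        return "quiet"
    return "babble" if spectral_flatness > 0.5 else "speech"

def link_should_be_active(scene: str) -> bool:
    # Enable the link (and central-processor analysis) only for scenes where
    # multichannel schemes such as blind source separation pay off.
    return scene == "babble"

for energy, flatness in [(0.001, 0.2), (0.3, 0.7), (0.2, 0.1)]:
    scene = classify_environment(energy, flatness)
    print(scene, "-> link on" if link_should_be_active(scene) else "-> link off")
```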

Advanced Signal Processing Methods for Planetary Radar Sounders Data

…and the Viterbi Algorithm (VA) for locating each of the target features. In this context, our approach addresses a critical limitation of the algorithms in the literature, whose computational complexity scales with both the state dimension and the number of targets. This issue is crucial for radargrams acquired by orbiting planetary sounders, which are very large and usually contain a high number of layer boundaries in the subsurface. To avoid inconsistencies and false alarms in the detection procedure, we also propose a novel adaptive denoising and enhancement technique that acts as a pre-processing step. This technique adaptively exploits the implicit information in the radargram signal samples to estimate the conditional density distributions of both the noise and the signal-plus-noise, which are then used in the framework of radar detection theory. The retrieved statistical information is combined with observations on the sensor deviations (e.g., accuracy) to produce a denoised radargram that enhances the subsurface layering with respect to noise and unwanted signal artefacts (e.g., side lobes). The denoising performance is characterized statistically in terms of false-alarm rate and detection probability.
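
The excerpt mentions estimating noise and signal-plus-noise densities and characterizing denoising by false-alarm rate and detection probability. A minimal numpy/scipy sketch of that detection-theoretic thresholding, assuming (unlike the paper's adaptive estimator) a Gaussian fit to signal-free samples:

```python
# Minimal sketch of detection-theoretic thresholding: estimate the noise
# density from signal-free samples, then pick the threshold that yields a
# desired false-alarm rate. The Gaussian noise fit is an assumption here; the
# paper estimates the conditional densities adaptively from the radargram.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 5000)            # stand-in for signal-free samples
signal_plus_noise = rng.normal(3.0, 1.0, 500)

mu, sigma = noise.mean(), noise.std()
p_fa = 1e-3                                   # target false-alarm rate
threshold = stats.norm.ppf(1.0 - p_fa, loc=mu, scale=sigma)

measured_pfa = np.mean(noise > threshold)
p_d = np.mean(signal_plus_noise > threshold)  # empirical detection probability
print(f"threshold={threshold:.2f}  P_FA~{measured_pfa:.4f}  P_D~{p_d:.2f}")
```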

Advanced Electromyogram Signal Processing with an Emphasis on Simplified, Near-Optimal Whitening

In this chapter, a method to calibrate an IIR whitening filter is introduced. To evaluate the performance of the IIR whitening filter, we also show the performance of the original whitening filter, calibrated to each subject, and of the FIR whitening filter. It is hypothesized that by implementing a "universal" IIR whitening filter close enough to the "universal" FIR whitening filter that has already been proven feasible, estimating torque from surface EMG will become much more convenient and achievable. Designing an IIR filter always presents some obstacles [2, 3, 4]. The first is that the filter may become unstable; limiting the coefficients can solve this problem. Another problem is that the error surface may have multiple minima [5], so basic conventional methods can easily get stuck at a local minimum and fail to find the global minimum. To solve this problem, several algorithms have been introduced, such as ant colony optimization (ACO) [6], simulated annealing (SA) [7, 8, 9], and the genetic algorithm (GA) [10, 11, 12, 13, 14, 15]. Among these, the GA is the one most often applied to IIR filter design.
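
A compact sketch of GA-based IIR design with the stability constraint mentioned above: candidate denominators whose poles leave the unit circle are rejected outright. The toy target magnitude response and GA settings are assumptions, not the chapter's calibrated whitening filter.

```python
# Sketch of GA-based second-order IIR design: fit [b0, b1, b2, a1, a2] to a
# toy "whitening"-like magnitude target, rejecting unstable candidates.
import numpy as np
from scipy.signal import freqz

rng = np.random.default_rng(1)
w = np.linspace(0.01, np.pi, 128)
target = np.sqrt(w / np.pi)                  # illustrative rising magnitude

def fitness(theta):
    b, a = theta[:3], np.concatenate(([1.0], theta[3:]))
    if np.any(np.abs(np.roots(a)) >= 1.0):   # stability constraint
        return np.inf
    _, h = freqz(b, a, worN=w)
    return np.mean((np.abs(h) - target) ** 2)

pop = rng.uniform(-0.9, 0.9, (60, 5))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:20]]     # selection
    parents = elite[rng.integers(0, 20, (60, 2))]
    mask = rng.random((60, 5)) < 0.5         # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0.0, 0.02, pop.shape)  # mutation
    pop[0] = elite[0]                        # elitism

print("best MSE:", fitness(min(pop, key=fitness)))
```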

DESIGN AND EVALUATION OF TEST BED SOFTWARE FOR A SMART ANTENNA SYSTEM SUPPORTING WIRELESS COMMUNICATION IN RURAL AREAS. Michael David Panique

The beam patterns of the array were measured in the anechoic chamber at MSU. The chamber measures 41.74" x 79.5" x 51.5" (W x L x H), its frequency range is 2 GHz to 30 GHz, and the microwave absorber thickness is 5 inches (-42 dB) and 8 inches (-52 dB). A 5.8 GHz resonant horn antenna (WR159, f = 4.9-7.05 GHz) is located at one end of the chamber. The other end of the chamber has a rotating pedestal with a stepper motor attached. The antenna array is mounted on the rotating pedestal; the stepper motor is controlled from a PC through another DAQ card, allowing power measurements to be collected over a full 360-degree rotation. The pedestal is built to hold the complete antenna array system. Additional pieces of equipment attached to the chamber are an Anritsu 68369A RF signal generator and an Advantest R3272 spectrum analyzer. These devices are controlled by the PC via a General Purpose Interface Bus (GPIB). A DC power source is also used to power the RF board, which requires 5 V, 3.3 V, and ground connections.
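
A hypothetical automation sketch of the measurement loop just described: rotate the pedestal in steps and read received power from the spectrum analyzer over GPIB. The GPIB address, the instrument query string, and the rotate_pedestal() stub are assumptions; the real Anritsu/Advantest command sets and the DAQ-card stepper interface would have to be substituted.

```python
# Hypothetical sketch of the pattern-measurement loop over GPIB using pyvisa.
# Address, query command, and pedestal control are placeholders, not the
# actual MSU test-bed software.
import pyvisa

def rotate_pedestal(degrees: float) -> None:
    """Placeholder for the DAQ-card stepper-motor control."""
    ...

rm = pyvisa.ResourceManager()
analyzer = rm.open_resource("GPIB0::18::INSTR")    # assumed GPIB address

pattern = []
for angle in range(0, 360, 5):                     # 5-degree steps
    rotate_pedestal(5.0)
    power_dbm = float(analyzer.query("MKR? PWR"))  # hypothetical query command
    pattern.append((angle, power_dbm))

for angle, p in pattern:
    print(f"{angle:3d} deg: {p:6.1f} dBm")
```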

Adaptive independent sticky MCMC algorithms

The range of applicability of the sticky MCMC methods is briefly discussed below. On the one hand, sticky MCMC methods can be employed as stand-alone algorithms. Indeed, in many applications it is necessary to draw samples from a complicated univariate target pdf (for example, in signal processing; see [38]). In this case, the sticky schemes provide virtually independent samples (i.e., with correlation close to zero) very efficiently. It is also important to remark that AISM and AISMTM automatically provide an estimate of the normalizing constant of the target (a.k.a. the marginal likelihood or Bayesian evidence), since, with a suitable choice of the update test, the proposal approaches the target pdf almost everywhere. This is usually a hard task for MCMC methods [1, 2, 11].
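
A minimal sketch of an independence Metropolis-Hastings sampler for an unnormalized univariate target, plus an importance-sampling estimate of the normalizing constant from the same proposal draws. In AISM/AISMTM the proposal adapts ("sticks") to the target, which is what makes the samples nearly independent and the constant estimate accurate; the fixed Gaussian proposal here is a simplifying assumption.

```python
# Independence MH with a fixed proposal, plus an importance-sampling estimate
# of the normalizing constant Z. Sticky schemes would adapt the proposal
# toward the target; the fixed proposal is a simplification for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
p_tilde = lambda x: np.exp(-0.5 * (x - 1.0) ** 2)  # unnormalized N(1,1), Z = sqrt(2*pi)
proposal = stats.norm(loc=0.0, scale=2.0)

x, chain, weights = 0.0, [], []
for _ in range(20000):
    y = proposal.rvs(random_state=rng)
    weights.append(p_tilde(y) / proposal.pdf(y))   # importance weight
    alpha = (p_tilde(y) * proposal.pdf(x)) / (p_tilde(x) * proposal.pdf(y))
    if rng.random() < alpha:                        # independence MH accept step
        x = y
    chain.append(x)

print("posterior mean ~", np.mean(chain))           # ~ 1.0
print("Z estimate ~", np.mean(weights), " true:", np.sqrt(2 * np.pi))
```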

A REVIEW ON VARIOUS APPROACHES FOR IMAGE ENHANCEMENT

1.1 Digital image processing: An image is digitized to convert it into a form that can be stored in a computer's memory or on some type of storage medium, such as a hard disk or CD-ROM. This digitization can be performed by a scanner, or by a video camera combined with a frame-grabber board in a PC. Once the image has been digitized, it can be operated on by various image processing operations. These operations can be broadly divided into three major categories: image compression, image enhancement and restoration, and measurement extraction. Image compression is familiar to most people; it involves reducing the amount of memory needed to store a digital image. Image defects, which may be introduced by the digitization process or by shortcomings in the imaging set-up, can be corrected using image enhancement techniques. Once the image is in good condition, measurement extraction operations can be used to obtain useful information from it.
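
As a concrete instance of the image-enhancement category, a minimal histogram-equalization sketch in pure numpy; the low-contrast test image is synthetic.

```python
# Minimal example of the "image enhancement" category: global histogram
# equalization, which corrects poor contrast from the imaging set-up.
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(90, 140, size=(64, 64)).astype(np.uint8)  # low contrast

hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum() / img.size                  # cumulative distribution
equalized = np.round(255 * cdf[img]).astype(np.uint8)

print("input range:", img.min(), "-", img.max())
print("output range:", equalized.min(), "-", equalized.max())
```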

A Dataflow based Hardware Design Methodology for Digital Signal Processing Algorithms.

…estimate performance metrics for refining DSP algorithms in order to select appropriate HW architectures. At present, this methodology focuses on streaming applications such as image processing, including Synthetic Aperture Radar (SAR), and video compression algorithms. We used a pre-built dataflow template library, which enabled rapid construction of high-level dataflow models compared to conventional HW/SW co-design approaches. Our methodology begins with a dataflow description, as presented earlier in this section. We selected the Caltrop language (CAL) as a starting point for describing a candidate algorithm using the dataflow concept. CAL is an efficient candidate for studying and analyzing the behavior of dataflow models for performance estimation [26]. Additionally, CAL has been adopted as a standard language in the video compression community, which should make our methodology more acceptable to designers. We also determined that CAL offers efficient modeling capability, since it comes with automated design tool sets, including a CAL language parser and a high-level synthesis tool.
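
CAL syntax is not reproduced here; instead, a minimal Python sketch of the dataflow model of computation that CAL actors express: actors connected by FIFO channels that fire only when input tokens are available.

```python
# Illustration of the dataflow model of computation: actors fire when their
# firing rule (enough input tokens) is satisfied, consuming and producing
# tokens on FIFO channels. This shows the concept in Python, not CAL syntax.
from collections import deque

src_to_scale = deque()          # FIFO channels carrying tokens
scale_to_sink = deque()

def source(n: int) -> None:
    for i in range(n):
        src_to_scale.append(i)  # produce one token per firing

def scale_actor() -> bool:
    if not src_to_scale:        # firing rule: one input token required
        return False
    scale_to_sink.append(2 * src_to_scale.popleft())
    return True

source(5)
while scale_actor():            # fire until no tokens remain
    pass
print(list(scale_to_sink))      # [0, 2, 4, 6, 8]
```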

Optimization and Implementation of the Wavelet Based Algorithms for Embedded Biomedical Signal Processing

One of the first references to the use of Digital Signal Processors (DSPs) for real-time, wavelet-based processing of the ECG signal is given in [14]. In particular, QRS complexes and P and T waves are distinguished from noise, baseline drift, and artefacts by an SPROC-1400 DSP running at 50 MHz. Implementations on modern DSPs followed, such as the TI TMS320C6713 [15], where the ECG signal is processed in real time using the DWT and an adaptive weighting scheme. In recent years, increasing emphasis has been placed on approaches based on highly integrated, low-power, low-cost microcontrollers such as PICs (from Microchip) [16] or the MSP430 family (from TI). However, their algorithms still rely on traditional methods built on a cascade of derivative and averaging filters.
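
A sketch of the traditional derivative-plus-averaging cascade mentioned above (the kind of scheme run on low-power microcontrollers), applied to a synthetic ECG; the signal, window length, and threshold rule are assumptions.

```python
# Derivative -> squaring -> moving-average cascade for QRS detection, the
# traditional low-cost scheme. Synthetic 60-bpm ECG; thresholds are toy values.
import numpy as np

fs = 250                                          # Hz, a common ECG rate
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[(t % 1.0) < 0.02] = 1.0                       # crude QRS spikes, 1 per second
ecg += 0.05 * np.random.default_rng(4).normal(size=t.size)

deriv = np.diff(ecg, prepend=ecg[0])              # derivative filter
energy = deriv ** 2                               # emphasize steep QRS slopes
window = int(0.12 * fs)                           # ~120 ms averaging window
mwa = np.convolve(energy, np.ones(window) / window, mode="same")

threshold = 0.5 * mwa.max()
above = mwa > threshold
onsets = np.flatnonzero(above[1:] & ~above[:-1])  # rising edges
print("detected beats:", len(onsets))             # expect ~10
```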

Advanced flooding based routing protocols for underwater sensor networks

In this work, we have proposed some advanced upgrades of a flooding-based protocol (Dflood) for underwater sensor networks. Our first idea was to incorporate node position information into the relaying process. In this way, the participation of nodes farther from the destination can be avoided. Simulation results show that a considerable amount of energy can be saved and that the PDR improves as well. The price to pay is the end-to-end delay, which increases with respect to the original protocol. The use of an implicit ACK keeps the PDR close to the maximum for low packet error rates, but our geographic approach outperforms the standard one in terms of energy consumption.
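
A minimal sketch of the geographic relaying rule described above: a node rebroadcasts only if it is closer to the destination than the previous hop, keeping farther nodes out of the flood. The coordinates and the specific rule are illustrative assumptions about the Dflood upgrade.

```python
# Geographic gating for a flooding protocol: relay only when this node makes
# progress toward the destination. Positions are illustrative (x, y, depth).
import math

def should_relay(my_pos, prev_hop_pos, dest_pos) -> bool:
    # Forward progress check: strictly closer to the destination than the
    # previous hop, otherwise stay silent and save energy.
    return math.dist(my_pos, dest_pos) < math.dist(prev_hop_pos, dest_pos)

dest = (0.0, 0.0, 0.0)
prev = (500.0, 0.0, 100.0)
print(should_relay((300.0, 50.0, 80.0), prev, dest))   # True: closer, relay
print(should_relay((700.0, 0.0, 120.0), prev, dest))   # False: farther, drop
```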

The use of Digital Signal Processing Algorithms for Electrophysiological Diagnostics of Cardiovascular Diseases

The relevance of the study is determined by the need for automatic pre-diagnosis and the introduction of diagnostic algorithms into appropriate software. The article presents a method of processing the electrocardiogram (ECG), along with the results of applying this method to real ECGs taken from public databases. Fourier and wavelet spectra of these signals are proposed for digital signal processing and automated diagnostics, and a number of methods for their use are described. The problems of software development for a mobile medical electrocardiographic system are considered. It is proposed to diagnose diseases of the cardiovascular system by comparing a normalized ECG signal with an interpolation model. The outcomes of the research are of practical value for the needs of medicine, in particular cardiology.
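
A minimal sketch of the proposed comparison step: normalize a beat and measure its distance to an interpolated reference model. The reference template and the RMS criterion are illustrative assumptions.

```python
# Compare a normalized ECG beat against an interpolation model. The knot
# values below are a hypothetical template, not the article's actual model.
import numpy as np

def normalize(beat: np.ndarray) -> np.ndarray:
    beat = beat - beat.mean()
    return beat / np.max(np.abs(beat))

knots_x = np.linspace(0.0, 1.0, 9)
knots_y = np.array([0.0, 0.1, 0.0, 1.0, -0.3, 0.0, 0.2, 0.1, 0.0])

def model(n: int) -> np.ndarray:
    # Interpolate the reference model to the beat's length.
    return np.interp(np.linspace(0.0, 1.0, n), knots_x, knots_y)

beat = model(200) + 0.05 * np.random.default_rng(5).normal(size=200)
residual = normalize(beat) - normalize(model(200))
rms = np.sqrt(np.mean(residual ** 2))
print(f"RMS deviation from model: {rms:.3f}")   # small -> close to the model
```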

Advanced signal processing methods for plane-wave color Doppler ultrasound imaging

We have shown in [2] that spread-spectrum beamforming improves spatial resolution without reducing the maximum measurable Doppler shift, thereby allowing the imaging of high flow rates with high spatial resolution. In Section 3.2, we presented frequency- and time-domain formulations of the PRCF method and showed them to be equivalent. In actuality, our original intent was to develop two separate methods based on the segmented sweep and compare their performance. The frequency-domain formulation was our first attempt: we wished to improve on the original reshuffling clutter filter from [2] by removing the need for thresholding, and we expected that a segmented sweep that repeats the tilt angle would make stationary echoes periodic, and hence representable by a subset of the FFT coefficients, allowing their complete removal without manual threshold calibration. After developing the segmented sweep, we then considered a second, time-domain approach: treating each tilt angle as a separate channel, running a low-pass mean filter on each, and subtracting the mean value from the original signal, since the mean value represents stationary clutter.
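
A minimal numpy sketch of that time-domain formulation: with a segmented sweep that repeats each tilt angle, treat every angle as its own channel and subtract that channel's mean to remove stationary clutter. Array sizes and the synthetic data are assumptions.

```python
# Per-tilt-angle mean subtraction as a clutter filter: stationary echoes are
# constant within each angle channel, so removing the channel mean removes
# them while largely preserving the Doppler (flow) component.
import numpy as np

n_angles, n_repeats = 4, 32                  # tilt angles, sweeps per angle
rng = np.random.default_rng(6)
clutter = rng.normal(size=(n_angles, 1)) * np.ones((n_angles, n_repeats))
flow = 0.1 * np.exp(1j * 2 * np.pi * 0.2 * np.arange(n_repeats))  # Doppler tone
ensemble = clutter + flow                    # shape: (angle, slow time)

filtered = ensemble - ensemble.mean(axis=1, keepdims=True)  # per-angle mean

print("per-angle mean before:", np.abs(ensemble.mean(axis=1)).round(3))
print("per-angle mean after :", np.abs(filtered.mean(axis=1)).round(6))
```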

ACO

Marco Dorigo was the first to propose and publish an ACO algorithm, Ant System (AS), as a set of three algorithms, namely ant-quantity, ant-density, and ant-cycle, as part of his doctoral thesis [3]. A few years later, these three algorithms appeared first as a technical report [4] and then in the IEEE Transactions on Systems, Man, and Cybernetics [5]. The difference between the three algorithms was that in ant-quantity and ant-density the pheromone was updated right after an ant moved from one city to another, while in ant-cycle the pheromone was deposited only once all ants had built their tours, with tour quality determining the amount of pheromone deposited. The better performance of ant-cycle caused research on the other two algorithms to stop, and ant-cycle came to be presented as Ant System. The initial version of AS produced encouraging results, but they were not enough to compete with other well-established algorithms. However, these results were encouraging enough to stimulate further research in this field.
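
A minimal sketch of the ant-cycle deposit rule that set it apart: pheromone is laid only after all ants complete their tours, in an amount based on tour quality (delta = Q / tour length). The toy distance matrix, random "tours", and constants are assumptions.

```python
# Ant-cycle style pheromone update: evaporate, then deposit Q / tour_length
# on each edge of every completed tour, only after ALL tours are built.
import numpy as np

n = 5
rng = np.random.default_rng(7)
dist = rng.uniform(1.0, 10.0, (n, n))
np.fill_diagonal(dist, 0.0)
pheromone = np.ones((n, n))
rho, Q = 0.5, 100.0                               # evaporation rate, deposit constant

tours = [rng.permutation(n) for _ in range(10)]   # stand-in for built tours
pheromone *= (1.0 - rho)                          # evaporation
for tour in tours:                                # deposit after all tours built
    length = sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))
    for i in range(n):
        pheromone[tour[i], tour[(i + 1) % n]] += Q / length

print(pheromone.round(2))
```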

Ant Colony Optimization algorithm

• Virtual "trail" accumulated on path segments.
• Starting node selected at random.
• Path selected at random.


Ant colony optimization problem

• Each ant deposits an amount of pheromone proportional to its rank.
• Only the iteration-best or best-so-far ant deposits pheromone.


Types of Algorithms

…finding all solutions, or, if a value for the best solution is known, it may stop when any best solution is found. Example: finding the best path for a travelling salesman.


graph algorithm

• But Dijkstra's algorithm does not handle negative edges.
• Johnson's algorithm: reweight the edges to form an equivalent graph with non-negative weights (see the sketch below).
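
A minimal sketch of the reweighting step in Johnson's algorithm: Bellman-Ford potentials h from a virtual source make every edge weight w'(u, v) = w(u, v) + h(u) - h(v) non-negative while preserving shortest paths, after which Dijkstra can run from each vertex. The toy graph is an assumption.

```python
# Johnson's reweighting: compute potentials h with Bellman-Ford (virtual
# source connected to all vertices with weight 0, modeled by initializing
# every distance to 0), then shift each edge weight by h(u) - h(v).

edges = [(0, 1, 3), (0, 2, 8), (1, 3, 1), (2, 1, -2), (3, 2, 2)]
n = 4

h = [0.0] * n                       # distances from the virtual source
for _ in range(n - 1):              # Bellman-Ford relaxation passes
    for u, v, w in edges:
        if h[u] + w < h[v]:
            h[v] = h[u] + w

reweighted = [(u, v, w + h[u] - h[v]) for u, v, w in edges]
print(reweighted)                   # all weights now >= 0
assert all(w >= 0 for _, _, w in reweighted)
```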


Advanced Signal Processing Solutions for Brain-Computer Interfaces: From Theory to Practice

EEG hardware is significantly more affordable than most other techniques. Also, the immobility of modalities such as fMRI, PET, or MEG limits the flexibility of experiment design and requires more complex, and therefore costly, arrangements at the data collection venue, while EEG sensors can be placed anywhere on the scalp and require no specific ambient conditions. Moreover, EEG recordings have very high temporal resolution, on the order of milliseconds rather than seconds; for clinical and research settings, EEG is commonly recorded at sampling rates between 250 Hz and 2000 Hz, and modern EEG data collection systems can record at sampling rates above 20,000 Hz if desired. EEG, being completely silent while recording, enables researchers not only to study responses to auditory stimuli, but also to investigate and track brain changes during different phases of life; e.g., EEG sleep analysis can indicate significant aspects of the timing of brain development, including evaluation of adolescent brain maturation. Additionally, EEG is a powerful tool for detecting covert processing (i.e., processing that does not require a response); it is non-invasive and can be used in subjects who are incapable of making a motor response. In contrast to these advantages, EEG also has disadvantages that researchers must take into account before adopting it as the recording technique by which they aim to answer their study's question. The first drawback of EEG is its poor spatial resolution on the scalp compared to techniques such as fMRI; to compensate for this, intensive interpretation is required just to hypothesize which areas are activated by a particular response. The quality of EEG signals is also affected by the scalp, the skull, and many other layers, as well as by background noise. Noise is a key issue for EEG, insofar as it reduces the SNR and therefore the ability to extract meaningful information from the recorded signals.

Array signal processing algorithms for localization and equalization in complex acoustic channels

A number of algorithms that incorporate IID and spectral cues have been proposed for sound source localization using HRTF information. Typically, these methods extract the relevant acoustic features in the frequency domain of the received signal and identify the source locations through a pattern-matching [72, 111], statistical [73], or parametric modelling approach [84, 115]. Correlation-based approaches [62, 100] represent a popular subset of these methods, where the correlation coefficient is used as a similarity metric to identify the distinct source locations. However, each method has its own drawbacks, such as the training required by the system or high ambiguity in differentiating between the actual source location and adjacent locations. In Chapter 3, the possibility of exploiting the diversity in the frequency domain of the channel transfer function for high-resolution broadband direction-of-arrival estimation was explored. In this chapter, we explore the application of these concepts to the binaural sound source localization problem using a Knowles Electronics Manikin for Acoustic Research (KEMAR).
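
A minimal sketch of the correlation-based family mentioned above: compare the received spectral feature vector against a database of HRTF magnitude responses and pick the direction with the highest correlation coefficient. The random "HRTF" database is an assumption standing in for measured KEMAR responses.

```python
# Correlation-based direction matching: the correlation coefficient serves as
# the similarity metric between the observed spectrum and each stored HRTF.
import numpy as np

rng = np.random.default_rng(8)
n_dirs, n_bins = 36, 64                       # 10-degree grid, spectral bins
hrtf_db = rng.normal(size=(n_dirs, n_bins))   # stand-in for measured magnitudes

true_dir = 7
received = hrtf_db[true_dir] + 0.3 * rng.normal(size=n_bins)  # noisy observation

def corrcoef(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.corrcoef(a, b)[0, 1])     # similarity metric

scores = np.array([corrcoef(received, h) for h in hrtf_db])
print("estimated direction:", int(scores.argmax()), "(true:", true_dir, ")")
```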

MIA-Sig: multiplex chromatin interaction analysis by signal processing and statistical algorithms

Another crucial component in Hi-C data analysis is calling topologically associating domains (TADs), loosely defined as regions with more contacts inside than outside. In general, TADs appear as squares along the diagonal of the contact map, and the goal is to identify them and segment the genome accordingly. There are more than 20 TAD-calling algorithms (Zufferey et al. [32]), some of which convert the contact map into a 1D signal along the diagonal for subsequent segmentation, or into a graph to which community detection algorithms are applied. To run the existing tools, multiplex data must first be converted into a contact map. However, enumerating all possible pairs in a complex is computationally expensive and may introduce additional bias, since the number of pairwise interactions increases quadratically with n. In other words, a complex with 5 fragments yields 10 pairs, instead of the 1 pair for a complex with 2 fragments. This approach would also lose valuable multiplexity information.
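
A short illustration of the quadratic blow-up described above: converting a multiplex complex of n fragments into pairwise contacts yields n(n-1)/2 pairs; the fragment coordinates are placeholders.

```python
# Pairwise expansion of multiplex complexes: the number of pairs grows
# quadratically with the number of fragments, and multiplexity is lost.
from itertools import combinations

for fragments in ([101, 205], [101, 205, 330, 412, 555]):
    pairs = list(combinations(fragments, 2))
    print(f"{len(fragments)} fragments -> {len(pairs)} pairs")
# 2 fragments -> 1 pairs
# 5 fragments -> 10 pairs
```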

Objective Assessment of Machine Learning Algorithms for Speech Enhancement in Hearing Aids

The analog signal (sound waves) is converted to an electrical signal by the vibration of the microphone diaphragm, which is caused by changes in air pressure. This electrical signal is then sampled and digitized using an Analog-to-Digital Converter (ADC), which usually provides a usable audio bandwidth of 8 to 16 kHz. The resulting discrete-time signal may be analyzed either in the frequency domain or in the time domain, depending on the signal processing algorithm used in the device [10]. Either method involves trade-offs in time-frequency resolution. Hearing devices these days are expected to have a delay of no more than 10-12 ms to avoid incongruity between the speaker's lip movement, echo, and the audio signal [10]. The spectral resolution is frequency dependent, with bandwidth proportional to center frequency. Time-domain analysis of speech signals can be done using a bank of infinite impulse response (IIR) filters: signal from microphone → filter bank (a combination of low-pass, band-pass, and high-pass filters) → sound cleaning → loudness adjustment → recombination of signals into a single output. Each filter in the filter bank passes or stops a specific range of frequencies that forms a band. The channel width increases with center frequency, giving more frequency resolution at lower frequencies. IIR filters require far less computation than FIR filters to achieve the desired filter slopes; however, careful design is required to avoid the artifacts that may otherwise be produced. For computational efficiency, it is recommended to process a block of samples rather than each sample individually [10].
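
A minimal scipy sketch of the time-domain pipeline described above: a small bank of IIR (Butterworth) band-pass filters whose bandwidths grow with center frequency, each channel gain-adjusted and recombined into one output. The center frequencies, filter order, and gains are illustrative assumptions.

```python
# IIR filter-bank pipeline: split into bands whose width is proportional to
# center frequency, apply per-channel loudness gains, and recombine.
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000                                      # Hz
centers = [250, 500, 1000, 2000, 4000]          # Hz, roughly octave spacing
gains = [1.0, 1.2, 1.5, 2.0, 2.5]               # per-channel loudness adjustment

t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

out = np.zeros_like(x)
for fc, g in zip(centers, gains):
    bw = fc / 2                                 # bandwidth grows with center frequency
    lo, hi = fc - bw / 2, fc + bw / 2
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    out += g * lfilter(b, a, x)                 # filter, weight, recombine

print("input RMS:", np.sqrt(np.mean(x**2)).round(3),
      "output RMS:", np.sqrt(np.mean(out**2)).round(3))
```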
