A time-reversal mirror refocuses time-delayed arrivals at the intended depth, and this property is used to reduce ISI for communications (Dowling, 1994; Edelmann et al., 2002; Kuperman et al., 1998). Instead of two-way transmission involving two arrays, passive time-reversal communications realized by the passive-phase conjugation (PPC) approach reduce the instrumentation to a single receiving array and one-way transmission (Gomes et al., 2008; Rouseff et al., 2001; Song et al., 2006b; Yang, 2005). PPC processing can be treated as matched filtering of the received signal. In the passive time-reversal method, spatial diversity is exploited to suppress ISI through low-complexity multichannel combining (Song et al., 2006a). Because the channel impulse response cannot be converted into a Dirac function, ISI is mitigated rather than eliminated. In a time-varying underwater channel, the refocusing degrades with elapsed time. As a rule of thumb, a single adaptive channel equalizer with a reduced number of taps suffices to remove residual ISI after refocusing and to track channel variations. In terms of output signal-to-noise ratio (SNR), the theoretical performance of time-reversal communications has been discussed in Stojanovic (2005), but it is difficult to predict the performance precisely in a real ocean because of interchannel correlations and residual ISI. As discussed by Yang (2004), the multichannel DFE achieves superior performance over the PPC method, as the multichannel DFE exploits spatial diversity.
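As an illustration of PPC processing viewed as matched filtering, the sketch below (not taken from the cited papers; the channel responses and signals are synthetic) correlates each hydrophone's received signal with its measured probe response and sums across channels. The combined output refocuses the transmitted pulse, while the residual sidelobes show that ISI is mitigated, not eliminated.

```python
import numpy as np

def ppc_combine(received, probes):
    """Passive-phase conjugation sketch: matched-filter each channel with
    its measured probe (impulse) response, then sum across channels."""
    outs = [np.convolve(r, np.conj(h)[::-1]) for r, h in zip(received, probes)]
    n = max(len(o) for o in outs)
    return sum(np.pad(o, (0, n - len(o))) for o in outs)

# Toy two-channel example with synthetic, equal-length channel responses.
h1 = np.array([1.0, 0.0, 0.5, 0.0])
h2 = np.array([0.8, -0.4, 0.0, 0.3])
s = np.zeros(64)
s[10] = 1.0                                   # transmitted pulse at n = 10
rx = [np.convolve(s, h1), np.convolve(s, h2)]
q = ppc_combine(rx, [h1, h2])
# q peaks where the arrivals refocus (the pulse position plus the filter
# delay of len(h) - 1 = 3); the sidelobes are the residual ISI.
peak = int(np.argmax(np.abs(q)))
```

Summing the per-channel matched-filter outputs is the low-complexity multichannel combining referred to in the text: incoherent sidelobes partially cancel while the focus adds coherently.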
This work records and processes ambient-noise data from the western continental shelf off Goa at different locations, to extend our understanding of the variability of the noise and its causes. Ambient noise can be recorded under varying ocean environmental conditions through spatial and/or temporal signal processing using a single hydrophone or an array of hydrophones. Here, ambient-noise measurements were carried out using three calibrated omnidirectional hydrophones mounted in a vertical array at different depths. We present analyses of data recorded from the area where the system was deployed at 17 m depth off Goa, India, on 14 January 2013.
Acoustic waves in underwater environments have been considered the most robust and feasible carrier for underwater source localization [8, 10]. Many beamforming techniques [88–90], which rely on time-difference-of-arrival and direction-of-arrival estimation, have been developed for terrestrial acoustic source localization. However, most of these techniques assume plane-wave signals, which is usually not the case in an ocean waveguide. This is mainly due to the characteristics of underwater acoustic channels, such as the variable speed of sound and the unavoidable movement of the source and receiver [1–3]. Matched-field processing (MFP) [32, 91], which exploits the spatial complexity of acoustic fields in an ocean waveguide to locate sources, has attracted much research interest in the past few decades. It does not rely on plane-wave signals and provides superior performance over plane-wave methods for underwater source localization. Due to bandwidth limitations of underwater acoustic channels, receivers are required to process broadband communication signals. Therefore, in this chapter, we are interested in broadband MFP techniques [33, 37, 38, 92] for underwater acoustic source localization.
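A minimal sketch of an incoherent broadband Bartlett processor, the simplest member of the broadband MFP family: the ambiguity surface averages the narrowband Bartlett power over frequency bins. The replica vectors here are random stand-ins for fields that would come from a propagation model (e.g., a normal-mode code), and the function name is our own.

```python
import numpy as np

def bartlett_broadband(data_vecs, replicas):
    """Incoherent broadband Bartlett processor.
    data_vecs: (F, N) measured array snapshots, one per frequency bin.
    replicas:  (G, F, N) modeled fields for G candidate source positions
               (from any propagation model; plane waves are not assumed).
    Returns the ambiguity surface of length G."""
    B = np.zeros(replicas.shape[0])
    for g, rep in enumerate(replicas):
        w = rep / np.linalg.norm(rep, axis=1, keepdims=True)  # normalize per freq
        B[g] = np.mean(np.abs(np.sum(np.conj(w) * data_vecs, axis=1)) ** 2)
    return B

# Toy check: 3 candidate positions, 4 frequencies, 8 sensors; the data are
# generated from candidate 1, so the surface should peak there.
rng = np.random.default_rng(1)
replicas = rng.standard_normal((3, 4, 8)) + 1j * rng.standard_normal((3, 4, 8))
data = replicas[1].copy()
surface = bartlett_broadband(data, replicas)
```

Averaging the narrowband surfaces incoherently is the simplest way to exploit bandwidth; coherent broadband processors combine the frequency bins with phase information retained.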
Abstract - Coal is an extremely complex material and exhibits a wide range of physical and chemical properties. The rapidly expanding use of coal made it necessary to devise acceptable methods of coal analysis with the goal of correlating composition and properties with behavior. As part of the multifaceted program of coal evaluation, new methods are continually being developed, and already accepted methods may need regular modification to increase the accuracy of the technique as well as the precision of the results. The use of ultrasonic testing for material characterization not only plays an important role in quality assurance during in-manufacture inspection but also serves as a powerful tool for life-prediction technology during in-service inspection, residual life assessment, and plant life extension. The measurement of ultrasonic parameters has been used to determine material properties for many years, but with the advent of modern signal processing techniques it is possible to extract significantly more information from ultrasonic signals. In this paper, an attempt is made to characterize the moisture content of a coal sample by ultrasonic nondestructive techniques, measuring various NDT parameters of the sample such as ultrasonic velocity and attenuation.
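As a hedged sketch of the two NDT parameters named above, the formulas below compute pulse-echo velocity and attenuation from back-wall echo amplitudes. The sample thickness, transit time, and amplitudes are hypothetical illustration values, not measurements from the paper.

```python
import math

def pulse_echo_velocity(thickness_m, transit_time_s):
    """Pulse-echo velocity: the wave traverses the sample twice."""
    return 2.0 * thickness_m / transit_time_s

def attenuation_db_per_m(a1, a2, thickness_m):
    """Attenuation from the amplitude ratio of two successive back-wall
    echoes, which are separated by one round trip (2 * thickness)."""
    return 20.0 * math.log10(a1 / a2) / (2.0 * thickness_m)

# Hypothetical coal sample: 20 mm thick, 18.2 us round-trip transit time,
# second back-wall echo at half the amplitude of the first.
v = pulse_echo_velocity(0.020, 18.2e-6)        # ~2197.8 m/s
alpha = attenuation_db_per_m(1.0, 0.5, 0.020)  # ~150.5 dB/m
```

Because both quantities depend on the pore structure and water content of the sample, they are plausible proxies for moisture, which is the premise of the paper.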
furnace, explosions in the furnace or gas ducts, or ignition of combustible deposits on convective heating surfaces; vii) on an intolerable reduction of the gas or fuel-oil pressure at the control valve; viii) when there is no voltage at remote-control devices and measuring and control instrumentation; and ix) on switching off of the turbine (in monobloc units) or of some of the auxiliary equipment (draft fans, blowers, etc.). In all such cases, a delayed shutdown can aggravate the situation and cause serious damage, so the boiler operator is instructed to stop the boiler without waiting for permission from the management. An emergency shutdown of a boiler is effected by the protection system on receiving the signal corresponding to the particular emergency situation. Firing system for thermal power generation: Firing-system operation consists of preparatory procedures, firing, and raising the load to the specified rated capacity. The first preparatory stage includes assembling the water-steam, fuel, and gas-air paths, preparing all mechanisms and systems, creating vacuum in the turbine condenser, pre-starting de-aeration of feed water, etc. A drum-type boiler is filled with water as required. The water level in the drum should be somewhat below the normal mark so as to allow for swell. A once-through boiler is to be filled with water in all firing regimes, except for firing from the state of hot reserve. As water is fed into the boiler, it displaces air from the system (provided that the air pressure is not excessive). In a once-through boiler, the feed-water flow rate is adjusted to the starting-up level and the water pressure is raised to the working value by closing the throttle valve. In boiler firing from the hot state, a reduced flow rate of water is first established (10-15% of the nominal value), which makes it possible to slowly cool the boiler path up to the internal gate valve and internal separator.
The starting-up flow rate of water is set upon raising the water pressure ahead of the internal gate valve. Water from the internal separator is discharged into the start-up expander and further into the circulation conduit. The start-up and shutdown device is opened to create vacuum in the superheater. This procedure is also carried out in a drum-type boiler if there is no excess pressure in it; this ensures a slower rise in the saturation temperature in the drum during firing. In cases when the start-up and shutdown device is initially closed, it is opened only upon firing the furnace, so as to maintain a constant pressure of the live steam generated to that moment.
Listening to the heart sound with a stethoscope is probably the oldest and fastest method of checking heart functionality (Health essentials, 2014). Since many heart problems affect the way the heart beats, and since the heart sound is a product of the heartbeat, it should hold valuable information about the underlying problem. However, the human ear may not be able to distinguish small differences in the heart sound that indicate a disease, either because the relevant frequencies fall outside the range of human hearing or because of their small power. Therefore, heart diagnosis by conventional auscultation is not effective for most heart diseases (Health essentials, 2014). Traditionally, more sophisticated diagnostic and investigation methods, such as the electrocardiogram (ECG) and echocardiogram, are needed to fully report the heart condition. Nevertheless, most of these advanced methods require large devices and considerable power and cannot be implemented in small equipment such as wearable watches or mobile phones. Consequently, this thesis proposes the application of advanced machine learning technologies to extract useful diagnostic information from heart sound data. In this process, the entire sound spectrum should be used, including the bands outside the human hearing range, and the data collection and processing should be performed with the limited processing energy and capabilities that mobile devices can provide. The success of this concept would open the door to tremendous changes in heart monitoring and diagnosis using small wearable devices such as watches. The task is to process low-quality signals to obtain a health diagnosis within an acceptable error margin. The focus is on alarm-type diagnostics, and the goal is to save precious time in emergencies under less-than-perfect conditions (such as are usually experienced in catastrophes).
During the first couple of years of research, the following questions emerged:
A schematic diagram of the embedded system setup used to dynamically adapt and control the mechanical properties of the transducer can be accessed in . The acoustic transducer built with Kapton and PZT materials and the algorithm presented above were integrated in this setup. A laser vibrometer (head + controller: Polytec OFV 2700) providing an analog output voltage was used as the reference measurement of the displacements induced on the membrane by acoustic stimuli. A 32-bit MCU (microcontroller unit) from STMicroelectronics (STM32F4) running at 168 MHz is used to compute the algorithm. Pre-amplification and filtering are provided by a conditioning circuit for the output signals coming from the laser controller. Signal acquisition is done using an on-board 12-bit A/D converter with a sampling frequency of 50 kHz. Data are acquired and managed using interrupt-based routines. A threshold-based algorithm, with the threshold set according to the intensity of the input sound signals, is executed on the CPU (central processing unit), so that the feedback control system can dynamically adapt the structural mechanics of the front-end transducer in real time. The PZT stack is actuated through a 12-bit D/A converter with an additional analog driving circuit to amplify the output signals.
The work is focused on the possibilities of acoustic emission (AE) signal sensing from surfaces inaccessible to commonly used sensors. The aim of the work is to verify the possibility of assembling several measuring devices to maximize the possibilities of AE signal sensing in practice. For this purpose, samples of waveguides and sets of tools for clamping them to an AE sensor have been constructed. These samples have been tested in laboratory conditions, and the functionality of the whole system has been validated in practice. Several sets of normalized AE signal measurements have been performed on all waveguide samples. The results have been evaluated from several points of view concerning waveguide design. The results contributed to the knowledge of signal conduction through the waveguide body and of its changes and deformation. The evaluation also confirmed that differences in waveguide shape do not cause any critical failures. The possibility of further development of the device set has been confirmed.
The objective of this article was to use numerical modeling to analyze the impact of the structure of the test device on recorded physical AE signals in the case of RIA experiments. A resonant frequency of the test device observed in the physical signals has been identified with the help of numerical computations. Since this resonant frequency is connected only to the structure of the test device, it can be removed safely from recorded signals, making their interpretation more effective. The results we obtained showed a strong dependence of the transfer function on the relative source/receiver positions. It is thus impossible to define a global transfer function. We also conclude that it is not possible to use classical deconvolution methods when the source position is unknown. Additionally, the numerical simulations suggest that frequencies up to 80 kHz are to be preferred because of the flat response of the transfer functions in this band; no perturbation of the signal coming from the structure of the test device should be expected in this frequency band. A perspective of this work, a direct consequence of the feasibility of performing numerical computations, is the possibility of optimizing the receiver positions. From the previous results, we can assert that AE sensors should always be located before the Xenon layer and should be placed at the same distance from the z-axis. This would reduce the estimation bias of the time delay between the two AE sensors and thus improve the source mechanism localization.
Notice that the proposed model remains feasible as long as the difference between acoustical paths is distinguishable. Too small a microphone spacing (d < 1 cm) gives little path difference for the low-frequency components of sources. Higher accuracy (more bits in digital signal processing) during sampling helps, but is limited by the background noise level. The small path difference can accumulate over a number of reflections, so it reveals itself strongly in the reverberation part. However, since the signal power decays exponentially, the path difference becomes "invisible", especially in the presence of noise. Obviously, the separation fails when microphones are placed at the same point, since no difference can be detected regardless of the given accuracy. In the case of a large spacing (d > 10 cm), the time delay of arrival of the direct sound can help to "build" a path difference so as to avoid the spatial aliasing of high-frequency components.
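The spacing trade-off can be quantified with two back-of-the-envelope formulas (assuming a sound speed of c = 343 m/s in air; the 1 cm and 10 cm spacings are the ones named above):

```python
# Quantifying the spacing trade-off for a microphone pair
# (assumed sound speed c = 343 m/s in air).
c = 343.0

def max_time_delay_s(d_m):
    """Largest direct-sound delay between the microphones (endfire case)."""
    return d_m / c

def spatial_alias_freq_hz(d_m):
    """Frequency above which the spacing exceeds half a wavelength,
    the onset of spatial aliasing."""
    return c / (2.0 * d_m)

for d in (0.01, 0.10):
    print(f"d = {d * 100:.0f} cm: max delay {max_time_delay_s(d) * 1e6:.1f} us, "
          f"aliasing above {spatial_alias_freq_hz(d):.0f} Hz")
```

At 1 cm the maximum delay is only about 29 us and aliasing starts above roughly 17 kHz; at 10 cm the delay grows tenfold but the half-wavelength limit drops to about 1.7 kHz, which is the tension the paragraph describes.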
This article presents a suite of algorithms which together have proved effective for automatic clustering and separation of AE events based on multiple features extracted from the original test data. The procedure consists of two steps. First, the noise events are separated from the events of interest and subsequently removed, using a combination of covariance analysis, principal component analysis (PCA), and differential time-delay estimates. The original data is reduced by up to 70% after this step. The second step processes the remaining data using a neural network, which clusters AE signals and noise signals to separate neuron outputs. To improve the efficiency of classification, a short-time Fourier transform (STFT) is applied to retain the time-frequency characteristics of the remaining events and to reduce the dimension of the data. The performance of the algorithm has been validated on eight sets of experimental data involving significant levels of crack-like signals generated from the mechanisms that hold the sample. Two sets of the data were determined by inspection to have been obtained under the most reliable conditions. Hence, this paper concentrates on the presentation of the results for these two sets, which have yielded AE classification accuracies in excess of 97%. The remaining data sets resulted in accuracies in the range of 85%–95%. In an alternate approach, an AE signal subspace, that is, one spanned by an orthogonal basis that retains the features of AE signals, is computed from the separated AEs. When applied to data from new tests, signals of similar features, that is, AE events of the same origin, are selected automatically. The example in this study shows a correct selection ratio of 90%.
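A minimal sketch of the PCA portion of the first step, on synthetic feature vectors (the actual AE features, covariance analysis, and thresholds used in the article are not reproduced here):

```python
import numpy as np

def pca_scores(features, k=2):
    """Project per-event feature vectors onto the first k principal
    components (eigenvectors of the feature covariance matrix)."""
    X = features - features.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # keep the largest k
    return X @ vecs[:, order]

# Synthetic 6-D feature vectors: 50 "AE-like" events offset from
# 50 "noise" events along the first feature dimension.
rng = np.random.default_rng(2)
ae = rng.standard_normal((50, 6)) + np.array([4.0, 0, 0, 0, 0, 0])
noise = rng.standard_normal((50, 6))
scores = pca_scores(np.vstack([ae, noise]), k=1)
# The first principal component captures the AE/noise separation; a
# midpoint threshold on it recovers the two groups (up to a sign flip).
pred_ae = scores[:, 0] > scores[:, 0].mean()
```

In the article's pipeline this unsupervised separation feeds the neural-network stage; here the threshold stands in for that second step.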
A well-known complete multi-channel equalization technique aiming at acoustic system inversion is the multiple-input/output inverse theorem (MINT)-based technique, which, however, suffers from drawbacks in practice. Since the available RIRs typically differ from the true RIRs due to fluctuations (e.g., temperature or position variations) or due to the sensitivity of blind system identification (BSI) and supervised system identification (SSI) methods to near-common zeros or interfering noise [28–30], MINT generally fails to invert the true RIRs, possibly leading to severe distortions in the output signal [22–24, 26]. In order to increase the robustness against RIR perturbations, partial multi-channel equalization techniques, such as relaxed multi-channel least-squares (RMCLS) and partial multi-channel equalization based on MINT (PMINT), have been proposed. Since early reflections tend to improve speech intelligibility [1–3] and late reflections are the major cause of speech intelligibility degradation [4–6], the objective of partial equalization techniques is to shorten the overall impulse response by suppressing only the late reflections. While RMCLS imposes no constraints on the remaining early reflections, PMINT has been shown to be perceptually more advantageous since it also aims to control the remaining early reflections. Although partial equalization techniques can be significantly more robust than MINT, their performance still remains rather susceptible to RIR perturbations [23, 24, 26]. As a result, several methods have been proposed to further increase the robustness against RIR perturbations. In [22, 24], it has been proposed to incorporate regularization, such that the distortion energy due to RIR perturbations is decreased. In , it has been proposed to use a signal-dependent penalty function to promote sparsity in the output signal and reduce artifacts generated by non-robust techniques.
In [31, 32], it has been proposed to relax the constraints on the filter design by constructing approximate reshaping filters in the subband domain. In , it has been proposed to relax the constraints on the filter design by using a shorter reshaping filter length than conventionally used. The objective of this paper is to provide a mathematical analysis of the robustness increase when using a shorter reshaping filter length as well as to propose an automatic non-intrusive procedure for selecting an optimal shorter reshaping filter length.
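To make the filter-length trade-off concrete, here is a hedged least-squares sketch of MINT-style reshaping-filter design on a toy two-channel system. The "RIRs" are synthetic three-tap responses, far shorter than real room responses; at the exact MINT length the inversion is perfect, and below it the design is only approximate, which is the relaxation discussed above.

```python
import numpy as np

def conv_matrix(h, Lg):
    """Sylvester (convolution) matrix so that C @ g == np.convolve(h, g)."""
    Lh = len(h)
    C = np.zeros((Lh + Lg - 1, Lg))
    for j in range(Lg):
        C[j:j + Lh, j] = h
    return C

def ls_reshaping_filters(rirs, Lg, delay=0):
    """Least-squares multi-channel filters of length Lg mapping the stacked
    channel responses toward a delayed impulse (a MINT-style design)."""
    H = np.hstack([conv_matrix(h, Lg) for h in rirs])   # (Lh+Lg-1, M*Lg)
    d = np.zeros(H.shape[0])
    d[delay] = 1.0                                      # target response
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return np.split(g, len(rirs))

# Toy 2-channel system (hypothetical short "RIRs" with no common zeros).
h1 = np.array([1.0, 0.5, 0.25])
h2 = np.array([1.0, -0.4, 0.1])
g1, g2 = ls_reshaping_filters([h1, h2], Lg=2)           # exact MINT length
eq = np.convolve(h1, g1) + np.convolve(h2, g2)          # equalized response
g1s, g2s = ls_reshaping_filters([h1, h2], Lg=1)         # shorter than exact
eq_s = np.convolve(h1, g1s) + np.convolve(h2, g2s)      # only approximate
```

With `Lg=2` the stacked convolution matrix is square and invertible, so `eq` is a clean impulse; with `Lg=1` a residual remains, trading exactness for the robustness benefits analyzed in the paper.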
7.1. Summary. The fields of signal and image processing offer an unusually fertile playground for applied mathematicians, where the distance between an abstract mathematical idea and an application or even a product may be small. In this paper we have discussed the concept of sparse representations for signals and images. Although sparse representation is a poorly defined problem and a computationally impractical goal in general, we have pointed to mathematical results showing that under certain conditions one can obtain results of a positive nature, guaranteeing uniqueness, stability, or computational practicality. Inspired by such positive results, we have explored the potential applications of sparse representation in real signal processing settings and shown that in certain denoising and compression tasks content-adapted sparse representation provides state-of-the-art solutions.
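As a toy instance of the denoising application mentioned above (hard thresholding in a transform where the signal is sparse; the paper's setting uses richer, content-adapted dictionaries rather than the plain DFT used here):

```python
import numpy as np

def sparse_denoise(x, thresh):
    """Hard-threshold the signal's DFT coefficients and invert: a toy
    instance of denoising via sparsity in a transform domain."""
    X = np.fft.fft(x)
    X[np.abs(X) < thresh] = 0.0
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(3)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 5 * t / n)         # spectrally sparse: one tone
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = sparse_denoise(noisy, thresh=0.25 * n)
```

Because the clean signal occupies only two DFT bins while the noise spreads over all of them, thresholding keeps almost all of the signal energy and discards almost all of the noise energy, which is exactly the mechanism behind sparsity-based denoising.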
It can be observed that for signals with a high amount of spectral sparsity, such as the mono brown noise signal, the DEA scheme yields the best echo cancellation performance, while the 3DM and SPU schemes yield the poorest performance despite obtaining the highest values for the Closeness Measure. This is due to the highly spectrally selective nature of the 3DM and SPU schemes (discussed in Section 4.4.1), i.e., the sub-filters with the smallest-magnitude tap-inputs do not have taps updated in every frame, resulting in very slow convergence of these sub-filters and thus negatively affecting the overall echo cancellation performance. For the least sparse mono white noise signal, it can be observed that the 3DM, FEA, and DEA schemes yield similar echo cancellation performance, while the SPU scheme again yields the poorest performance. This may be because the SPU scheme is the only one that completely ignores entire subbands when updating the filters, while the other schemes may allocate a few taps to each subband when the reference signal has a low amount of sparsity. For the spatially sparse stereo white noise signal, the DEA scheme performs better than the FEA scheme, both in terms of the converged ERLE value and the t20 values. For all considered signals,
fundamental calculus, and every bit of that is thanks to you. Dr. Joel Webb, your lectures were never dull, to say the least. You have a way of capturing the students' interest and refining their talents. Dr. Gregory Seab and Dr. Kevin Stokes, you have both been excellent sources of support during my academic career since my very first day in PHYS 1061. Thank you for your counsel, and thank you for stealing me away from the engineering department. Dr. Linxiong Li, much of my research relies on large data sets, and your statistics course was vital in giving me the knowledge necessary to understand much of the higher-level published literature in signal analysis. I was a physics student lost in a sea of mathematicians, but you helped me stay right along with the rest of them. Thank you for your understanding and guidance.
EURASIP Journal on Applied Signal Processing 2001:1, 15–26. © 2001 Hindawi Publishing Corporation. Generalization of a 3-D Acoustic Resonator Model for the Simulation of Spherical Enclosures. Davide Rocc[.]
The paper is organized as follows. Section 2 discusses the batch, that is, nonadaptive, estimation of the complete acoustic impulse responses from the recorded microphone signals. It is shown that if the length of the impulse responses is either known or can be overestimated, the complete impulse responses can be identified from the EVD of the speech correlation matrix (noiseless case and spatiotemporally white noise case) or from the GEVD of the speech and the noise correlation matrices (colored noise case). These batch impulse response estimation procedures form the basis for deriving stochastic gradient algorithms that iteratively estimate the (generalized) eigenvector corresponding to the smallest (generalized) eigenvalue. These adaptive EVD and GEVD algorithms are discussed in Section 3. In , it has been shown that the adaptive EVD algorithm can be used for TDE, remarkably, even when underestimating the length of the acoustic impulse responses. We will show that this result also holds for the spatiotemporally colored noise case when using the adaptive GEVD algorithm (and the adaptive prewhitening algorithm) for TDE. In Section 4, it is shown that all considered batch and adaptive TDE algorithms can easily be extended to the case of more than two microphones. Section 5 describes the simulation results for different reverberation conditions (ideal and realistic), different SNRs, and different noise sources (localized and diffuse noise source). For all conditions, it is shown that the time delays can be estimated more accurately using the adaptive GEVD algorithm and the adaptive prewhitening algorithm than using the adaptive EVD algorithm.
Since the adaptive GEVD algorithm requires an estimate of the noise correlation matrix, we also analyze its sensitivity with respect to the accuracy of this estimate, showing that the performance of the adaptive GEVD algorithm may be quite sensitive to deviations, especially in low-SNR scenarios.
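A batch sketch of the EVD-based TDE idea underlying Sections 2 and 3, using the cross-relation structure on synthetic noiseless data (the adaptive stochastic-gradient and GEVD/prewhitening variants are not reproduced here): the eigenvector belonging to the smallest eigenvalue of the stacked data correlation matrix carries the two channel impulse responses, and the time delay is read off from their peak offset.

```python
import numpy as np

def tde_from_evd(x1, x2, L):
    """Cross-relation TDE: the smallest-eigenvalue eigenvector of the
    2L x 2L data correlation matrix has the structure [h2, -h1], so the
    inter-channel delay is the offset between the two response peaks."""
    n = len(x1)
    X = np.array([np.concatenate([x1[i:i - L:-1], x2[i:i - L:-1]])
                  for i in range(L, n)])
    R = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    u = vecs[:, 0]                       # smallest-eigenvalue eigenvector
    return np.argmax(np.abs(u[:L])) - np.argmax(np.abs(u[L:]))

# Synthetic noiseless example: microphone 2 hears the source 3 samples late.
rng = np.random.default_rng(4)
s = rng.standard_normal(400)
x1 = s
x2 = np.concatenate([np.zeros(3), s[:-3]])
delay = tde_from_evd(x1, x2, L=6)
```

Here L overestimates the true channel length; the delay estimate is unaffected because every vector in the resulting null space preserves the same peak offset, consistent with the robustness to length mismatch noted in the text.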
The data samples were scaled using their calibration factors (provided by CTBTO), and an inverse filter of the recording system's frequency response was applied to eliminate the effect of the acquisition chain on the frequency response of the recordings. The fast Fourier transform (FFT) of the signal was computed using rectangular windows of 15 000 samples (i.e. 1 min intervals at the 250 Hz sampling rate), and the broadband signal was then filtered in five frequency bands (5–115, 10–30, 40–60, 56–70, 85–105 Hz) by selecting the corresponding FFT bins within each band. The resulting sound pressure level (SPL) in dB re 1 µPa² was then calculated for each frequency band (Robinson et al., 2014; ISO 18405, 2017). Finally, outliers, i.e. levels more than 20 dB above the average of the entire time series of SPL values, were removed.
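The band-level pipeline described above can be sketched as follows; the tone amplitude and noise level are synthetic, and the calibration and inverse-filtering steps are assumed to have been applied already, so the input is pressure in µPa.

```python
import numpy as np

def band_spl(x, fs, bands, win_len):
    """Per-window band SPL (dB re 1 uPa^2): rectangular windows, FFT-bin
    selection per band, then 10*log10 of the band's mean-square pressure.
    x is assumed to be calibrated pressure in uPa."""
    n_win = len(x) // win_len
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    levels = np.zeros((n_win, len(bands)))
    for w in range(n_win):
        X = np.fft.rfft(x[w * win_len:(w + 1) * win_len])
        msq = np.abs(X) ** 2 / win_len ** 2   # mean-square pressure per bin
        msq[1:-1] *= 2.0                      # one-sided spectrum doubling
        for b, (lo, hi) in enumerate(bands):
            sel = (freqs >= lo) & (freqs <= hi)
            levels[w, b] = 10.0 * np.log10(msq[sel].sum())
    return levels

def drop_outliers(series, margin_db=20.0):
    """Discard SPL values more than margin_db above the series average."""
    return series[series <= series.mean() + margin_db]

# Two 1-min windows at fs = 250 Hz: a 50 Hz tone (rms 100 uPa -> 40 dB)
# in weak background noise.
fs, win = 250, 15000
rng = np.random.default_rng(5)
t = np.arange(2 * win) / fs
x = np.sqrt(2) * 100.0 * np.sin(2 * np.pi * 50.0 * t) \
    + rng.standard_normal(2 * win)
spl = band_spl(x, fs, [(5, 115), (40, 60), (85, 105)], win)
cleaned = drop_outliers(spl[:, 1])
```

The 50 Hz tone lands in the 5–115 and 40–60 Hz bands at about 40 dB re 1 µPa², while the 85–105 Hz band sees only the noise floor; the outlier rule then mirrors the 20 dB screening applied to the measured time series.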