As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
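As a rough illustration only (not the authors' algorithm), a soft-decision ML estimate of a common carrier phase rotation can be formed by correlating the received samples with the decoder's expected symbol values and taking the argument of the sum; all names and values below are hypothetical:

```python
import numpy as np

def soft_phase_estimate(r, soft_symbols):
    """Estimate a constant carrier phase offset from received samples r
    and soft (expected) symbol values E[a_k] supplied by a decoder.

    The ML estimate for a constant rotation maximizes
    Re{ e^{-j*theta} * sum_k r_k * conj(E[a_k]) }, so it is simply the
    argument of the correlation sum."""
    corr = np.sum(r * np.conj(soft_symbols))
    return np.angle(corr)

# Toy check: BPSK symbols rotated by 0.3 rad, with perfect soft info
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=1000)
noise = 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
r = a * np.exp(1j * 0.3) + noise
theta_hat = soft_phase_estimate(r, a)
```

In the iterative schemes described above, the soft symbol expectations would be refreshed by the MAP decoder on each pass rather than assumed perfect as in this toy check.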
When resolving phase ambiguities, special interest is paid to the likelihood of gross errors, i.e. the cases in which the phase ambiguity is determined incorrectly. Gross errors occur when the likelihood function (LF) has side-lobe values comparable with the main maximum corresponding to the correct solution. This situation is illustrated in Figure 2, which shows the likelihood function for one NS. The figure shows that resolving the phase ambiguities by measuring one base for each NS alone is impossible, because the likelihood function attains extreme values over entire regions and spurious solutions are indistinguishable from the true solution.
Integer GPS carrier phase ambiguity resolution is a prerequisite for obtaining very precise positioning results from short observation time spans. Ionospheric errors in relative GPS observations over baselines of medium length (longer than 10 km) hamper fast estimation of the integer carrier phase ambiguities. This problem is expected to grow in the coming maximum of the sunspot cycle (expected in the years 2000–2002). If a permanent GPS array is in the vicinity, it can offer a remedy, as precise ionospheric delays can be estimated from the network. These estimates can then be interpolated for an arbitrary location within the surroundings of the network, and the interpolated values can be provided to users to correct their GPS measurements. In the Netherlands such an ionosphere interpolation technique will be part of the so-called Virtual GPS Reference Station concept (as explained in van der Marel, 1998). This means that the observation data of the Dutch permanent stations are transformed to a location that is approximately the position of the user's antenna, and these data are corrected for the errors that may be expected at the user's location. Next, the user processes these virtual data together with the data of his receiver as an ordinary 'short baseline'.
The two-source AED system included in Figure 2, which was developed in previous work, employs a model-based approach with one microphone. All accepted sound combinations are modeled, i.e., the AED system has a model for each class, whether it is an isolated event or a combination of events. This approach does not require prior separation of the two overlapped signals, but it requires a number of models that may be too large. In our particular meeting-room scenario, however, the approach is feasible because 11 AEs are considered, which may be overlapped only with one class, speech, so only 22 models are required [14,15]. The ASL system, also developed in previous works, is based on the steered response power with phase transform (SRP-PHAT) algorithm, which uses the 24 microphones available in the room.
demonstrated for neurologically healthy individuals (NHI) in a self-paced reading study (Hare et al., 2003). Eleven people with mild or moderate aphasia and eleven neurologically healthy control participants read sentences while their eyes were tracked. Using adapted materials from the study by Hare et al., target sentences containing an SC structure (e.g. He acknowledged (that) his friends would probably help him a lot) were presented following a context prime that biased either a direct-object (DO-bias) or sentence-complement (SC-bias) reading of the verbs. Half of the stimulus sentences did not contain that, making the post-verbal noun phrase (his friends) structurally ambiguous. Both groups of participants were influenced by structural ambiguity as well as by the context bias, indicating that PWA can, like NHI, use their knowledge of a verb's sense-based argument structure frequency during online sentence reading. However, the individuals with aphasia showed delayed reading patterns and some individual differences in their sensitivity to context and ambiguity cues. These differences compared to the NHI may contribute to difficulties in sentence comprehension in aphasia.
We have shown that simple unsupervised algorithms that make use of bigrams, surface features and paraphrases extracted from a very large corpus are effective for several structural ambiguity resolution tasks, yielding results competitive with the best unsupervised results and close to supervised results. The method requires no labeled training data, lexicons or ontologies. We think this is a promising direction for a wide range of NLP tasks. In future work we intend to explore better-motivated evidence combination algorithms and to apply the approach to other NLP problems.
In navigation and positioning using carrier phase observations, the accuracy of the carrier phase affects the final result. This paper describes in detail two algorithms for integer ambiguity resolution. The whole ambiguity resolution process is simulated with the long-baseline method and the LAMBDA algorithm. The results show that the long-baseline method can improve computational efficiency to a certain extent. However, when the angle between the antenna and the satellite is within a certain range, it leads to large errors in the calculation results, which affects the correctness of the integer ambiguity resolution. This effect needs further research and simulation.
A Model of Word Covering Ambiguity Resolution in Chinese Word Segmentation Based on Contextual Information. Xiao LUO; Maosong SUN. National Lab of Intelligent Tech and Systems, Tsinghua University, Beijing.
TRANSLATION AMBIGUITY RESOLUTION BASED ON TEXT CORPORA OF SOURCE AND TARGET LANGUAGES
On February 11, 2019, four additional Galileo satellites were put into service, approaching the completion of the European global navigation satellite system constellation. For the first time, the performance of the Galileo system in terms of high-accuracy precise point positioning (PPP) can be evaluated. The results presented in this paper are based on one full week (February 11–17, 2019) of post-processed kinematic positioning for a set of fixed stations at a 30-s sampling. Thanks to the availability of precise Galileo orbit and "integer" clock products, delivered by the CNES/CLS Analysis Center of the International GNSS Service, the impact of Galileo ambiguity resolution on the positioning results is also quantified. The precision using Galileo-only measurements in the East, North and Up directions is 10 mm, 7 mm and 33 mm for PPP and 6 mm, 5 mm and 28 mm for PPP-AR (PPP with ambiguity resolution) (1 sigma), respectively. These results should represent the future performance of the Galileo system for kinematic post-processed positioning. They also indicate the important future contribution of Galileo to high-accuracy multi-GNSS applications.
In this work, an interferometry system for phase measurement will be developed to study changes in the pressure of acoustic waves produced by laser interaction. The system is designed to overcome the problem of phase ambiguity due to extra fringes associated with laser interactions. As phase measurement interferometry is a very sensitive and very precise technique, environmental effects should also be taken care of. Thus, the system will also be designed to eliminate the problems of air turbulence and vibrations. Error contamination is unavoidable in the production of the images, but these errors would not be such a nuisance if they were of the same nature and came from the same sources, as this would simplify the noise-filtering process. Phase calculations will surely benefit from this type of image.
We believe that an evaluation on these test corpora is a realistic simulation of the hard task of target-language disambiguation in real-world machine translation. The translation alternatives are selected from online dictionaries, correct translations are determined as the actual translations found in the bilingual corpus, no examples are omitted, the average ambiguity is high, and the translations are often very close to each other. In contrast, most other evaluations are based on frequent uses of only two clearly distant senses that were deemed interesting by the experimenters.
To solve Eq. 4, an integer vector set is first identified as the search space, and the final integer ambiguity solution is then searched out according to the principle of minimizing the objective function. In the application to GPS short baselines, the baseline length is used as the constraint condition in constructing the search space.
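The search step described above can be sketched as a brute-force integer least-squares enumeration: candidates around the rounded float solution are scored by a quadratic objective and the minimizer is kept. This is a generic illustration under assumed float ambiguities and covariance, not the paper's exact construction, and the baseline-length constraint used to prune candidates is omitted for brevity:

```python
import itertools
import numpy as np

def search_integer_ambiguities(a_float, Q_inv, radius=2):
    """Brute-force integer least-squares search: enumerate integer
    vectors within `radius` of the rounded float solution and return
    the one minimizing (a - z)^T Q^{-1} (a - z)."""
    center = np.rint(a_float).astype(int)
    offsets = itertools.product(range(-radius, radius + 1), repeat=len(a_float))
    best_z, best_obj = None, np.inf
    for off in offsets:
        z = center + np.array(off)
        d = a_float - z
        obj = d @ Q_inv @ d  # quadratic objective to minimize
        if obj < best_obj:
            best_z, best_obj = z, obj
    return best_z, best_obj

# Hypothetical 2-dimensional float ambiguity vector and covariance
a_float = np.array([3.2, -1.9])
Q_inv = np.linalg.inv(np.array([[0.04, 0.01], [0.01, 0.09]]))
z_hat, obj = search_integer_ambiguities(a_float, Q_inv)  # -> [3, -2]
```

Practical methods such as LAMBDA first decorrelate the ambiguities so the search space becomes small; the exhaustive enumeration above is only viable for very few ambiguities.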
In navigation and surveying systems using GPS carrier phase data, the performance of ambiguity resolution and computational efficiency are of great concern. These capabilities are often traded off in designing the system. One possible way to overcome the trade-off loss is to reduce the number of ambiguity candidates before or at the search-verification step. The search space transformation (Abidin, 1993; Teunissen, 1994; Martin-Neira et al., 1995) and ambiguity candidate filtering in multiple search levels (Chen and Lachapelle, 1995; Teunissen, 1997) are effective techniques for that purpose.
Phase mapping is a fast, efficient and accurate method for phase measurement of an interferogram. The technique relies on digitizing the intensity distribution over the whole area of the interferogram [1-2]. However, it is quite often plagued by phase ambiguity, especially when the measurement is made on a single interferogram. In order to find a suitable interferogram, we must capture the interferogram again and again until we produce one good enough to be assessed. Due to the sensitive nature of laser interferometry, this can lead to other in-situ problems. Phase mapping the interferogram involves noise filtering, phase wrapping and phase unwrapping. This enables the observer to have a direct 2D and 3D view of very small physical changes occurring in the event.
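The unwrapping step mentioned above can be illustrated with NumPy's one-dimensional `unwrap`, which removes the 2π jumps in a wrapped phase profile. This is a generic sketch with a synthetic phase ramp, not the paper's processing pipeline:

```python
import numpy as np

# A smooth true phase ramp exceeding 2*pi...
x = np.linspace(0, 4 * np.pi, 200)
true_phase = 1.5 * x

# ...is only observed modulo 2*pi (wrapped into (-pi, pi])
wrapped = np.angle(np.exp(1j * true_phase))

# np.unwrap restores continuity by correcting sample-to-sample
# jumps larger than pi with multiples of 2*pi
unwrapped = np.unwrap(wrapped)
```

Since the first sample here is zero, the unwrapped result matches the true ramp directly; in general it is recovered only up to a constant 2πk offset, and successful unwrapping requires the phase to change by less than π between adjacent samples.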
The OH 1720-MHz masers listed in Table 1 all occur in pairs, which are positionally coincident to within our errors of measurement, and which we argue are produced by Zeeman splitting (Section 6). In order to facilitate comparison between 1720-MHz masers and 4765-MHz masers, which do not show Zeeman splitting, we calculated for each 1720-MHz pair the unshifted or demagnetized velocity corresponding to no Zeeman splitting as the mid-point velocity between the LSR velocities of the LHC and RHC components at 1720 MHz in Table 1. The LSR and demagnetized velocities are based on flux-weighted averages over all the features contributing to each spot, the same scheme used for calculating the positions. Again, the difference introduced by using the velocities of the peak channel for each spot was small compared to the velocity resolution. There is no more accurate demagnetization method available for 1720 MHz unless the magnetic hyperfine quantum numbers of the inverted levels are known. Calculations for full Zeeman multiplets at 1720 MHz, and other frequencies, appear in Gray & Field (1995), and further discussion of the Zeeman effect at 1720 MHz appears in Section 6.
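The demagnetized velocity defined above is simply the mid-point of the flux-weighted LHC and RHC velocities. As a sketch with hypothetical feature velocities and fluxes (not values from Table 1):

```python
def flux_weighted_velocity(velocities, fluxes):
    """Flux-weighted mean LSR velocity over the features of one spot."""
    total = sum(fluxes)
    return sum(v * f for v, f in zip(velocities, fluxes)) / total

# Hypothetical LHC and RHC components of one 1720-MHz Zeeman pair
v_lhc = flux_weighted_velocity([4.1, 4.3], [2.0, 1.0])  # km/s
v_rhc = flux_weighted_velocity([5.0, 5.2], [1.0, 3.0])  # km/s

# Demagnetized velocity: mid-point between the two circular polarizations
v_demag = 0.5 * (v_lhc + v_rhc)
```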
Abstract— This paper illustrates how data pre-processing choices about author name disambiguation can affect research findings about scholarly networks and hypotheses about underlying social mechanisms. We analyzed three big scholarly datasets that were disambiguated algorithmically and via two common initial-based disambiguation methods, namely first-initial and all-initials disambiguation. A comparison of the resulting bibliometric and network properties revealed that initial-based disambiguation carries a substantial risk of incorrectly merging author identities, underestimating the number of unique authors, and inflating the average productivity and number of collaborators per author. Compared to algorithmic disambiguation, the gaps between the outcomes of the initial-based methods range from -4.23% to -87.36% per dataset for the number of unique authors, from 3.75% to 691.20% for average productivity, and from 5.06% to 285.28% for degree centrality. This calls for special attention to data pre-processing choices in scholarly big data research.
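The gaps quoted above are relative differences of the initial-based estimates with respect to the algorithmic baseline. As a sketch with hypothetical counts (not values from the datasets studied):

```python
def relative_gap(initials_value, algorithmic_value):
    """Percent gap of an initial-based estimate relative to the
    algorithmic-disambiguation baseline."""
    return 100.0 * (initials_value - algorithmic_value) / algorithmic_value

# Hypothetical unique-author counts under the two methods:
# merging identities via initials undercounts authors by 10%
gap = relative_gap(9_000, 10_000)  # -> -10.0
```

Negative gaps thus correspond to undercounting (as for unique authors), positive gaps to inflation (as for productivity and degree centrality).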