Top PDF scaling methods:

Estimating Central Bank Preferences Combining Topic and Scaling Methods

Comparing those members that we classify as relatively more “Inflation Dovish” to those members that we classify as relatively more “Inflation Hawkish,” we find supportive evidence. For example, both Yellen and Kohn suggested that, while they were willing to consider a numerical inflation target, the committee needed to very clearly articulate the dual mandate of price stability and maximum employment. In other words, these two members wanted to emphasize output/unemployment in addition to inflation. Yellen, for instance, stated that she would be willing to accept a 1.5% target for core inflation; however, she also thought “such a policy might be the first step along a slippery slope that ultimately undermines the Committee’s mandate for maximum employment, as well as broader financial stability.” Kohn, meanwhile, stated that he felt the inflation risks facing the U.S. economy were “on balance,” implying that he did not see inflation as a major risk. During the same meeting, Hoenig suggested that inflation risks were much higher, mentioning that he expected core inflation to increase and using the phrases “inflation risks” and “increasing risk of inflation” a number of times in his remarks. Similarly, Greenspan argued that people were “underestimating the potential inflation pickup,” which suggests that he was more inflation averse than the others. Such a ranking of individuals’ inflation preferences, from left to right, implies that Yellen and Kohn are Dovish while Greenspan and Hoenig are more Hawkish. While this is only illustrative evidence, it suggests not only that FOMC members spend significant time debating inflation during meetings, but also that committee members’ appetites for, or sensitivities to, “inflation risks” vary in accordance with the estimated preferences recovered using the topic and scaling methods presented here.

An evaluation of the validity of multidimensional scaling methods for the representation of cognitive processes

Kruskal’s procedure in particular tends to increase the proportion of tied data (i.e., several response scores may be mapped into one disparity to preserve the monotone relation). This tends to diminish the response information, promote degeneracy and increase dimensionality. Shepard (1974), in reviewing the problem of degeneracy (solutions with all equal distances), suggests imposing additional conditions on the monotonic regression (such as smoothness). Weeks and Bentler (1979), in a comparison of the effectiveness of linear and monotone scaling models, are even more doubtful about the utility of weak monotone models. They compared the performance of both models under conditions where the linear model is or is not satisfied. They found the linear model to perform substantially better when its assumptions were met, and when they were not…
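To make the monotone-regression step concrete, here is a minimal sketch of a Kruskal-style monotonic regression using scikit-learn's isotonic regression; the toy dissimilarities and distances are invented for illustration, and this is not the thesis's own implementation:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# One Kruskal-style monotone regression step in nonmetric MDS: fit disparities
# that approximate the current configuration distances while preserving only
# the rank order of the observed dissimilarities.
dissim = np.array([1.0, 2.0, 2.5, 4.0, 4.1])   # observed response scores (toy)
dists = np.array([0.8, 2.2, 2.0, 3.9, 4.5])    # current configuration distances (toy)
disparities = IsotonicRegression().fit_transform(dissim, dists)
# Pool-adjacent-violators ties the 2nd and 3rd disparities (both become 2.1):
# exactly the kind of tying that diminishes response information, as noted above.
print(disparities)  # [0.8  2.1  2.1  3.9  4.5]
```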

Transform Ranking: a New Method of Fitness Scaling in Genetic Algorithms

To overcome these problems, fitness scaling methods have been devised to transform the raw fitness, i.e. the objective function, into a scaled selective function used in selecting individuals for reproduction [1]. This paper presents the first systematic analysis and comparison of the performance of a range of six existing fitness scaling methods on two challenging benchmark optimization problems. A new scaling technique called transform ranking is also introduced and evaluated. These seven techniques are also compared with tournament selection, for which the application of fitness scaling would have no effect, since tournament selection is determined by the rank ordering of fitnesses rather than their absolute values.
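As a concrete example of the family of methods being compared, here is a minimal sketch of one classic rank-based fitness scaling scheme (Baker's linear ranking); it is illustrative only and is not the paper's new transform ranking method:

```python
import numpy as np

def linear_rank_scaling(raw_fitness, selection_pressure=1.5):
    """Map raw fitness to selection probabilities by rank, not magnitude,
    so a single extreme objective value cannot dominate reproduction."""
    n = len(raw_fitness)
    # ranks: 0 for the worst individual, n - 1 for the best
    order = np.argsort(raw_fitness)
    ranks = np.empty(n)
    ranks[order] = np.arange(n)
    # Baker's linear ranking: expected offspring runs from (2 - sp) for the
    # worst individual up to sp for the best, with 1 <= sp <= 2
    sp = selection_pressure
    expected = (2 - sp) + 2 * (sp - 1) * ranks / (n - 1)
    return expected / expected.sum()   # normalize to selection probabilities

probs = linear_rank_scaling(np.array([0.1, 3.2, 3.25, 9.7]))
```

Because only ranks enter the formula, this scheme (like tournament selection) is invariant to monotone transformations of the objective function.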

The Multiplication Method with Scaling the Result for High-Precision Residue Positional Interval Logarithmic Computations

of moduli in log n cycles, where a cycle denotes the access time of the high-speed memory. The technique uses the CRT and the redundant modulus to compute the magnitude index in the CRT formula. Ulman and Czyżak [10] proposed a scaling technique in non-redundant residue arithmetic that uses only small memories, of size at most ⌈log₂ m⌉ + 1 bits (where ⌈log₂ m⌉ is the binary size of the modulus), and arithmetic elements. In this technique, K can be a real number. A novel concept of scaling was recently presented by Meyer-Baese and Stouraitis [11]: they proposed effective scaling by 2 by transforming the scaling operation into division with remainder zero, checking the parity of the number and adding 1 when necessary to assure the existence of multiplicative inverses of 2 with respect to all moduli of the RNS base. This method can be extended by repeated scaling by 2, or by directly using a power of 2, but that approach requires larger look-up tables. The known scaling methods have certain drawbacks that make their application in high-speed DSP difficult. The first drawback is the special form of the moduli of the RNS base and their fixed and limited number, which may force an increase in their size to attain the necessary dynamic range; the increased size makes other operations, such as multiplication by a constant, not realizable by table look-up. The second is the use of large look-up tables, which practically excludes pipelining. The third is the limitation imposed on the form of the scaling factor, which is usually restricted to one or two moduli or their product. Moreover, the majority of the known methods do not provide scaling of signed numbers with an implicit sign. A possible remedy is the use of scaling techniques termed approximate CRT methods, which reduce scaler complexity and provide other desirable characteristics. Griffin et al. [12] presented a method termed (L+δ)-CRT. The scaling factor can be any number from [0, M). This technique allows the use of approximate scaled projections and also, instead of the modulo M operation, an operation modulo µ, where µ is a number more convenient with respect to modulo reduction. However, the use of approximate values may lead to large, unacceptable errors when scaling signed numbers.
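A minimal sketch of the scaling-by-2 idea attributed to Meyer-Baese and Stouraitis [11]: make the number even (so that dividing by 2 is a division with remainder zero), then multiply each residue by the inverse of 2 modulo its modulus. The moduli below are an invented toy base, and the parity check here is done by full CRT reconstruction purely for clarity; a hardware scaler avoids reconstruction (e.g., via the redundant modulus mentioned above):

```python
from math import prod

# Toy RNS base: odd, pairwise-coprime moduli, so 2 is invertible mod each
MODULI = (7, 9, 11, 13)
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def from_rns(r):
    # CRT reconstruction (illustration only; real scalers use tables instead)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

def scale_by_2(r):
    """If the number is odd, add 1 to force remainder zero, then multiply
    every residue by the multiplicative inverse of 2 modulo its modulus."""
    if from_rns(r) % 2 == 1:                 # parity check, here via CRT
        r = tuple((ri + 1) % mi for ri, mi in zip(r, MODULI))
    return tuple((ri * pow(2, -1, mi)) % mi for ri, mi in zip(r, MODULI))

assert from_rns(scale_by_2(to_rns(1000))) == 500
assert from_rns(scale_by_2(to_rns(1001))) == 501   # odd input rounds up
```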

Exploring precipitation pattern scaling methodologies and robustness among CMIP5 models

Figure 1 shows the baseline (preindustrial) annual mean precipitation pattern B(x, 0) and the scaling patterns P(x) for both of the pattern scaling methods generated from the group 1 (see Table 1) model average for the 1pctCO2 simulation. The regression and epoch-difference methods have very similar scaling patterns, with no differences greater in magnitude than 0.05 mm day⁻¹ K⁻¹, and no differences are statistically significant (not shown). Both patterns show similar broad features: an increase in tropical precipitation with global warming, particularly over the oceans; increases at high latitudes, again over the oceans; and decreases in the South Pacific, North Atlantic, and southern Indian oceans, as well as Central America and the Mediterranean basin.
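For readers unfamiliar with the two methods, here is a minimal sketch of both in their generic form: a per-grid-cell least-squares slope of local precipitation against global-mean temperature, and an epoch-difference normalized by the warming between epochs. Array names are hypothetical and this is not the paper's exact pipeline:

```python
import numpy as np

def regression_pattern(precip, t_global):
    """P(x): local precipitation change per kelvin of global-mean warming.

    precip: (time, ny, nx) local annual-mean precipitation, mm/day
    t_global: (time,) global annual-mean temperature, K
    Returns an (ny, nx) pattern in mm/day per K.
    """
    t = t_global - t_global.mean()
    p = precip - precip.mean(axis=0)
    # least-squares slope at every grid cell: cov(t, p) / var(t)
    return np.tensordot(t, p, axes=(0, 0)) / (t @ t)

def epoch_difference_pattern(precip, t_global, n=20):
    """Epoch-difference method: change between the first and last n-year
    epochs, normalized by the global-mean warming between them."""
    dp = precip[-n:].mean(axis=0) - precip[:n].mean(axis=0)
    dt = t_global[-n:].mean() - t_global[:n].mean()
    return dp / dt
```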

Terrestrial denitrification: challenges and opportunities

its high background concentration in the atmosphere. Spatial and temporal variation in denitrification is high because the process is controlled by multiple factors (oxygen, nitrate, carbon, pH, salinity, temperature, etc.) that each vary in time and space. A particular challenge is that small areas (hotspots) and brief periods (hot moments) frequently account for a high percentage of N gas flux activity. These phenomena are challenging to account for in measurement, modeling and scaling efforts. Scaling is needed because information on this microscale process is required at the ecosystem, landscape and regional scales, where there are concerns about nitrogen effects on soil fertility, water quality and air quality. In this review, I outline the key challenges involved with denitrification and then describe specific opportunities for making progress on these challenges, including advances in measurement methods, new conceptual approaches for addressing hotspot and hot moment dynamics, and new remote sensing and geographic information system-based scaling methods. Analysis of these opportunities suggests that we are poised to make great improvements in our understanding of terrestrial denitrification. These improvements will increase our basic science understanding of a complex biogeochemical process and our ability to manage widespread nitrogen pollution problems.

Negative Sampling Improves Hypernymy Extraction Based on Projection Learning

We present a new approach to extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction has not been studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach of Fu et al. (2014) on three datasets from different languages.
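A minimal sketch of the underlying idea, assuming a single projection matrix trained by SGD; Fu et al. (2014) actually learn several piecewise-linear projections, and the paper's loss will differ in detail:

```python
import numpy as np

def train_projection(pos_pairs, neg_pairs, dim, epochs=50, lr=0.05, lam=0.3):
    """Learn Phi so that Phi @ hyponym ~ hypernym for positive pairs, while
    pushing Phi @ hyponym away from sampled non-hypernyms: the negative
    examples act as the regularizer that is the paper's central idea."""
    rng = np.random.default_rng(0)
    Phi = rng.normal(scale=0.1, size=(dim, dim))
    for _ in range(epochs):
        for x, y in pos_pairs:            # gradient step on ||Phi x - y||^2
            Phi -= lr * np.outer(Phi @ x - y, x)
        for x, y in neg_pairs:            # gradient *ascent*: increase distance
            Phi += lr * lam * np.outer(Phi @ x - y, x)
    return Phi

# Toy usage with random stand-in embeddings (dim and data are invented)
rng = np.random.default_rng(1)
pos = [(rng.normal(size=50), rng.normal(size=50)) for _ in range(100)]
neg = [(rng.normal(size=50), rng.normal(size=50)) for _ in range(100)]
Phi = train_projection(pos, neg, dim=50)
```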

Mass-spectrometric profiling of cerebrospinal fluid reveals metabolite biomarkers for CNS involvement in varicella zoster virus reactivation

Methods: Metabolite profiles were determined by targeted liquid chromatography-mass spectrometry in CSF from patients with segmental zoster (shingles, n = 14), facial nerve zoster (n = 16), VZV meningitis/encephalitis (n = 15), enteroviral meningitis (n = 10), idiopathic Bell's palsy (n = 11), and normal pressure hydrocephalus (n = 15). Results: Concentrations of 88 metabolites passing quality assessment clearly separated the three VZV reactivation forms from each other and from the non-infected samples. Internal cross-validation identified four metabolites (SM C16:1, glycine, lysoPC a C26:1, PC ae C34:0) that were particularly associated with VZV meningoencephalitis. SM(OH) C14:1 accurately distinguished facial nerve zoster from Bell's palsy. Random forest construction revealed even more accurate classifiers (signatures comprising 2–4 metabolites) for most comparisons. Some of the most accurate biomarkers correlated only weakly with CSF leukocyte count, indicating that they do not merely reflect recruitment of inflammatory cells but, rather, specific pathophysiological mechanisms. Across all samples, only the sum of hexoses and the amino acids arginine, serine, and tryptophan correlated negatively with leukocyte count. Increased expression of the metabolites associated with VZV meningoencephalitis could be linked to processes relating to neuroinflammation/immune activation, neuronal signaling, and cell stress, turnover, and death (e.g., autophagy and apoptosis), suggesting that these metabolites might sense processes relating to end-organ damage.
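The random-forest signature search can be pictured with a short sketch along these lines; the data, labels, and panel size are synthetic stand-ins, not the study's dataset or exact protocol:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(45, 88))        # 45 CSF samples x 88 metabolite features
y = rng.integers(0, 2, size=45)      # e.g. VZV meningoencephalitis vs. the rest

# Rank metabolites by forest importance, keep a small signature, cross-validate.
# (In a real analysis the selection itself must sit inside the CV loop to
# avoid an optimistic bias; this flat version is for illustration only.)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
signature = np.argsort(rf.feature_importances_)[-4:]   # a 2-4 metabolite panel
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X[:, signature], y, cv=5)
print(signature, scores.mean())
```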

Estimates of Scaling Violations for Pure SU(2) LGT

This may be the main reason why Allton’s approach never became popular. In [17] this problem was avoided by combining the scales discussed there into a single fit, which is only possible if the relative scaling violations are so weak that they can be neglected within the statistical errors. In [5] we relaxed this to the requirement that the coefficients α_{i,1} have to agree for all scales, i.e. α_{i,1} ≡ α_1.

A new management strategy for the treatment of streptococcal gingivitis: A pilot study

For Group II patients, to avoid promoting antibiotic resistance, an alternative treatment based on the microorganism's growth characteristics was planned. For this purpose, during oral hygiene instruction and the subsequent initial preparation including scaling, we offered an antacid chewing tablet containing 680 mg calcium carbonate and 80 mg magnesium carbonate, three times a day (in the morning and before sleeping) for a week. When using this medication, the patients were asked to chew the antacid tablet without swallowing it for one minute, keep the saliva in the oral cavity for an extra two minutes, and then spit. With this treatment method we tried to change the acidic oral pH required for growth of the streptococcus species and to keep oral pH within normal values. To evaluate the efficacy of this method, mean saliva pH was measured on the first day and after the treatments. To measure the effects of the antacid treatment on the oral environment, saliva samples were collected without any stimulation and pH values were recorded with a pH meter (Inolab pH-meter level 2, Wissenschaftlich-Technische, Germany) (Table 1). The pH measurements were made at the following times: on the first day (T0), after a single oral antacid administration on the first day, to determine the pH change in saliva (Ta), and after all the treatments were completed (T7, at 7 days).

A HIRES/KECK SPECTROSCOPIC INVESTIGATION OF THE MEASUREMENT OF SODIUM IN THE ATMOSPHERE OF HD 209458b

A few more reduction steps were necessary before the data could be in a form usable for differential spectroscopy: continuum-normalizing the echelle blaze shape, removing as many of the terrestrial atmospheric lines as possible, cross-correlating the spectra at the subpixel level to remove horizontal Doppler shifts due to relative motion between the Earth and the star, and removing cosmic rays that survived the previous cosmic-ray correction. If all nonstellar contaminants in the data are removed, most of the remaining variation in the time series of an individual pixel is due to the planet’s atmosphere, but each of the steps taken to further reduce the data can introduce additional errors that must be understood. In addition, multiple different methods or parameters might be used for each reduction step, so within one reduction step there may be multiple sources of uncertainty depending on which method or parameters are used.
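The subpixel cross-correlation step, for instance, is commonly implemented by refining the integer-pixel correlation peak with a parabola fit through its three highest points; here is a generic sketch of that idea (not the authors' pipeline):

```python
import numpy as np

def subpixel_shift(spec, ref):
    """Estimate the sub-pixel shift of one spectrum against a reference by
    locating the cross-correlation maximum and refining it with a quadratic
    interpolation of the peak. Assumes the peak is not at an array edge."""
    n = len(ref)
    cc = np.correlate(spec - spec.mean(), ref - ref.mean(), mode="same")
    i = int(np.argmax(cc))
    y0, y1, y2 = cc[i - 1], cc[i], cc[i + 1]
    # parabolic interpolation of the peak position between pixels
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (i + frac) - n // 2    # shift in pixels relative to zero lag
```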

Centiloid scaling for quantification of brain amyloid with [18F]flutemetamol using multiple processing methods

Further, a recent study by Bourgeat et al. looked at the utility of an alternate image processing platform, CapAIBL [24]. CapAIBL utilises a PET-only approach, overcoming the requirement for a corresponding MR image, and is particularly useful where MR imaging is not possible. The authors reported similar results for the conversion of [18F]flutemetamol through this pipeline. The application of such PET-only quantification methods could lead to a readily adopted clinical quantification method, as images could be processed directly from the PET scanner. To that end, further work to investigate the utility of a PET-only method based on CortexID (AW Workstation, GE Healthcare), a dedicated platform for reviewing [18F]flutemetamol images, is ongoing.
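Whatever the processing platform, the Centiloid scale itself is a fixed linear rescaling of the pipeline's SUVR values (Klunk et al., 2015), anchored so that young controls average 0 CL and typical AD patients average 100 CL. The anchor SUVRs below are placeholders; each tracer/pipeline combination (e.g. [18F]flutemetamol through CapAIBL or CortexID) requires its own calibration:

```python
def centiloid(suvr, suvr_yc_mean, suvr_ad_mean):
    """Standard Centiloid transform: 0 CL at the young-control mean SUVR,
    100 CL at the typical-AD mean SUVR, linear in between and beyond."""
    return 100.0 * (suvr - suvr_yc_mean) / (suvr_ad_mean - suvr_yc_mean)

# Placeholder anchors for illustration only (not calibrated values)
print(centiloid(suvr=1.45, suvr_yc_mean=1.05, suvr_ad_mean=1.95))  # ~44 CL
```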

Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods

While much of what early researchers set out to achieve has been either forgotten or sidelined in favor of what can be done by exploiting large data sets and processing power, its potential value has not gone away: There is much to be gained from recognizing not just what was said, but why; from identifying conclusions naturally drawn from what has been said and what hasn’t; and from representing domains in a sufficiently rich way to reduce reliance on only what a text makes explicit. As such, we believe there can be a broad and positive impact of reviving early aspirations in the current context of large data sets and “deep” and probabilistic methods.

Evaluation of statistical methods for quantifying fractal scaling in water quality time series with irregular sampling

Each estimation method listed above was applied to the simulated data (Sect. 2.3) to estimate β, and the estimates were then compared with the prescribed (“true”) β to quantify the performance of each method. Plots of method evaluation for all simulations are provided as Figs. S3–S12 (Supplement S2). Close inspection of these plots reveals some general patterns in the methods’ performance. For brevity, these patterns are presented with a subset of the plots, which correspond to the cases where true β = 1 and shape parameter λ = 0.01, 0.1, 1, and 10 (Fig. 5). In general, β values estimated using the regular data (A1) are very close to 1.0, which indicates that the adopted fractional noise generation method and Whittle’s maximum likelihood estimator have small combined simulation and estimation bias. This is perhaps unsurprising, since the estimator is based on the Fourier transform and the noise generator is based on an inverse Fourier transform; thus, one method is essentially just the inverse of the other. One should also note that when fractional noises are not arbitrarily band limited at the Nyquist frequency (as they inherently are with the noise generator used here), spectral aliasing should lead to spectral slopes that are flatter than expected (Kirchner, 2005) and thus to underestimates of β.
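For orientation, the quantity being estimated is the exponent β of a power-law spectrum, S(f) ∝ f^(−β). A minimal periodogram-based estimator for regularly sampled data looks like the sketch below; it is a generic illustration, not one of the paper's specific estimators (which include Whittle's maximum-likelihood method and techniques that tolerate irregular sampling):

```python
import numpy as np

def spectral_beta(x, dt=1.0):
    """Estimate beta from S(f) ~ f^(-beta) by a least-squares line fit to the
    periodogram in log-log space (regularly sampled data only)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]      # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope                                  # beta = negative log-log slope
```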

Scaling up contraceptives use in the division with lowest contraceptives use in Bangladesh: sources, methods, and determinants

After applying weighted frequency, 47.8% of the women were using any contraceptives, and 40.9% were using any modern type of contraceptive such as the pill, condoms, and injections (Table 1). Modern CPR was higher in urban areas than in rural areas. The birth control pill was the most common method used by women in both urban and rural areas, with 30.8% and 19.3% use, respectively. Sources from which women can obtain modern family planning methods, according to place of residence, are presented in Fig. 1. Overall, the public sector (for instance, district hospital/medical college hospital, maternal and child welfare centers, or upazila health complexes) was the most common source for both…

Local and Global Scaling Reduce Hubs in Space

For each collection the table shows the absolute number of hubs, orphans, and all other objects in the original data space. We then compute their badness before (columns Orig.) and after applying MP and NICDM. It can be clearly seen that in each of the tested collections the badness of hubs indeed decreases noticeably. In fact, on average BN_{k=5} decreases by more than 10 percentage points, from 46.3% in the original space to 35.6% (NICDM) and 35.3% (MP). Another visible effect is that orphans re-appear in the nearest neighbor lists (see previous experiment, Figure 12) with an average badness of 36.5% (NICDM) and 35.1% (MP). The measured badness of orphan objects is comparable to the values reported for hubs, but is still notably higher than the numbers computed for the rest of the objects (‘Other’). The badness of all other objects tends to stay the same: in three cases the badness increases slightly, and in all other cases a slight decrease in badness can be observed. On average, their badness decreases from 29.3% to 28.4% for both methods (MP and NICDM).
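As a reminder of what one of the two rescalings does, here is a minimal sketch of NICDM, assuming a precomputed symmetric distance matrix with a zero diagonal (Mutual Proximity, the probabilistic alternative, is omitted for brevity):

```python
import numpy as np

def nicdm(D, k=5):
    """Non-Iterative Contextual Dissimilarity Measure: rescale each distance
    by the geometric mean of the two points' average k-NN distances, so that
    points in dense regions (would-be hubs) no longer look close to everyone.

    D: symmetric (n, n) distance matrix with zeros on the diagonal.
    """
    # mean distance of every point to its k nearest neighbors (self excluded)
    mu = np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)
    return D / np.sqrt(np.outer(mu, mu))
```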

CALMAR: A New Versatile Code Library for Adjustment from Measurements

A new spectrum adjustment library, CALMAR, has been developed to perform simultaneous adjustment and scaling. The library is versatile in its use, thanks to the C++ ROOT framework, which allows interactive sessions. After testing and validation, it will be made available to the community together with selected test cases. Future developments will include support for the GND format, avoiding the need for cross-section post-processing tools.

Scaling characteristics of ULF geomagnetic fields at the Guam seismoactive area and their dynamics in relation to the earthquake

An example of the experimental data used for the analysis is presented in Fig. 1. Here, a typical daily record of geomagnetic field variations (H, D, Z components) at Guam is shown. The original ULF record (with a one-second sampling rate) can be seen in the inset of Fig. 1. The signal is taken from the 02:00–03:00 UT interval, which corresponds to the local noon sector (12:00–13:00 LT). Before applying the main data analysis procedure, some selection of the data was made. First, we divided the raw data along local time intervals, thus forming for every day 24 sets of 1-hour time series, each having N = 3600 samples (H, D, Z components). Such selection allows us to separate local time effects from other effects that could be related to earthquake source dynamics. Scaling characteristics of the time series were calculated using three methods of data processing: spectral analysis (FFT), the Burlaga-Klein (BK) method and the Higuchi (H) method. First, we present the results obtained by the traditional FFT (Fast Fourier Transform) method.
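Of the three estimators, the Higuchi method is the least widely known; a generic sketch of it follows (not the authors' code). The mean curve length L(k) of subsampled series scales as k^(−D), and the slope of log L(k) versus log(1/k) gives the fractal dimension D:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D time series (e.g., one 1-hour ULF
    segment of N = 3600 samples). Returns the log-log slope D."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)          # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # normalized curve length of this subsampled series
            Lmk = np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k * k)
            Lk.append(Lmk)
        lengths.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lengths), 1)
    return slope   # fractal dimension D
```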

Estimating preferences for a dermatology consultation using Best-Worst Scaling: Comparison of various methods of analysis

Best-worst scaling [5], developed by Finn and Louviere [6] and introduced to health care research by McIntosh and Louviere [7], is a solution that utilizes a different design of choice experiment. A guide to using best-worst scaling is available [3], but briefly: unlike most traditional discrete choice experiments, best-worst scaling presents respondents with profiles (in this case appointments) one at a time. Respondents make choices within profiles (appointments) rather than between profiles. Thus, for a given profile, the set of alternatives on offer comprises the attribute levels that define that particular profile (appointment). By choosing the best and worst attribute levels on offer within that profile, respondents select the pair (best and worst) of attribute levels that lie furthest apart on the latent utility scale. Variations on best-worst scaling have appeared, sometimes called “maximum difference scaling” [8].
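One of the simplest analysis methods the title alludes to is a best-minus-worst count: tally how often each attribute level is chosen best and worst across profiles. A toy sketch with invented attribute levels follows; the study's actual levels, design, and statistical models differ:

```python
from collections import Counter

# (best, worst) attribute-level choices recorded for three toy profiles
choices = [("short wait", "unknown doctor"),
           ("expert doctor", "long wait"),
           ("short wait", "long wait")]

best = Counter(b for b, _ in choices)
worst = Counter(w for _, w in choices)
levels = set(best) | set(worst)
# best-minus-worst counts: a crude first-pass ordering on the latent scale
scores = {lvl: best[lvl] - worst[lvl] for lvl in levels}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```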

UP-SCALING FARMER FIELD SCHOOLS AND RAINFOREST ALLIANCE CERTIFICATION AMONG SMALLHOLDER TEA PRODUCERS IN KENYA: OPTIONS, OPPORTUNITIES AND EMERGING LESSONS

Up-scaling (also scalability or scaling-up) refers to the diffusion and dissemination of locally successful innovations to a wider stakeholder group (Gordijn, 2005) and, according to Sustainable Agriculture and Rural Development (2007), leads to “more quality benefits to more people over a wider geographic area more quickly, more equitably and more lastingly”. With respect to the FFS, up-scaling requires mobilization of adequate human and material resources to replicate the model, and also additional organization and finance to facilitate, channel and control the flow of information, goods and services efficiently and effectively (Davis, 2006). Campbell (2010) notes that FFS provides a scalable model for knowledge empowerment and can increase the potential scalability of sustainable technologies such as RA certification and any other program taught through FFS. Akinnagbe and Ajayi (2010) indicate that the farmer field school, popularly known as an “informal school” or “school without walls”, is a community-based, capacity-building, learning-by-doing extension model or system that uses adult education principles in farmers’ groups. This group-based experiential learning, Hartl (2009) and Mwangi, Oloo and Maina (2010) say, encourages farmers (normally in groups of 20-30) to learn improved technologies and farming practices through observation. Empowerment, an essential feature of the system according to Dzeco, Amilai and Cristóvão (2010), refers to the development of skills so that individuals can make informed choices in their lives. The FFS system was first introduced in Indonesia in 1989 to counter overuse of insecticides in irrigated rice fields during the Green Revolution (Campbell, 2010; Braun et al., 2006; Gallagher et al., 2006) and began in East Africa in 1995 (Davis et al., 2010).
