Representativeness-Based Sampling Network Design for the Arctic

Integration Across Scales. Geomorphological features—including thaw lakes, drained thaw lake basins, and ice-rich polygonal ground—provide the organizing framework for integrating process studies and observations from the pore or core scale (micron to tens of centimeters) to plot (meters to tens of meters) and landscape (kilometers) scales. Within these discrete geomorphological units, mechanistic studies in the field and laboratory are targeting four critical and interrelated components—water, nitrogen, carbon, and energy dynamics—that determine whether the Arctic is, or in the future will become, a negative or positive feedback to anthropogenically forced climate change. Multi-scale research activities organized around these components include hydrology and geomorphology, vegetation dynamics, biogeochemistry, and energy transfer processes.


Spatial network sampling in small area estimation

An intuitive way to produce samples that are well spread over the population, widely used by practitioners, is to stratify the units of the population on the basis of their location. The problems with this strategy are that it has no direct and substantial impact on the second-order inclusion probabilities, certainly not within a given stratum, and that it is frequently unclear how to obtain a good partition of the study area. These drawbacks are related and are therefore usually approached together by defining a maximal stratification, i.e. partitioning the study area into as many strata as possible and selecting one or two units per stratum. The idea behind the Generalized Random Tessellation Stratified (GRTS) design (Stevens and Olsen, 2004) is to select the units systematically by mapping the two-dimensional population into one dimension while trying to preserve some multi-dimensional order; Voronoi polygons are then used to define an index of spatial balance.
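As a rough illustration of the spatial balance idea (my own sketch, not code from Stevens and Olsen), the index can be computed by assigning every population unit to its nearest sampled unit (the Voronoi assignment) and taking the variance of the inclusion probabilities summed within each cell; the function and toy data below are hypothetical.

```python
# Illustrative sketch of a Voronoi-based spatial balance index (assumed form).
import numpy as np
from scipy.spatial import cKDTree

def spatial_balance_index(pop_xy, incl_prob, sample_idx):
    """Variance of the inclusion probabilities summed over the Voronoi cells
    generated by the sampled units (smaller = better spatial balance)."""
    tree = cKDTree(pop_xy[sample_idx])
    _, nearest = tree.query(pop_xy)              # Voronoi assignment of every unit
    cell_totals = np.bincount(nearest, weights=incl_prob,
                              minlength=len(sample_idx))
    return cell_totals.var()

# toy example: 1000 population units, equal-probability sample of 20
rng = np.random.default_rng(0)
pop_xy = rng.uniform(0, 1, size=(1000, 2))
n = 20
pi = np.full(1000, n / 1000)                     # inclusion probabilities sum to n
sample_idx = rng.choice(1000, size=n, replace=False)
print(spatial_balance_index(pop_xy, pi, sample_idx))
```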


Cost-effective sampling network design for contaminant plume monitoring under general hydrogeological conditions

transport model, the run time for a sampling design optimization problem depends on two primary factors: the speed of GA convergence and the choice of interpolation method. It is difficult to know when the optimal or near-optimal solution has been attained using a GA, and few guidelines have been suggested in the literature for determining the stopping criterion. Reed et al. (2000a) pointed out that two conditions must be met before the GA converges. The first is that a single subset of the potential sampling locations must be selected by about 90% of the individuals in the last generation. The second is that none of the remaining sampling locations may be selected by more than 10% of the individuals in the last generation. Reed et al. (2000b) presented a simple three-step method for determining the number of GA control parameters. They also proposed relationships among population size, convergence rate, and genetic drift, and demonstrated their methodology in a long-term groundwater monitoring application (Reed et al., 2000a,b). However, these suggested rules are not directly applicable to this study because a different GA is used. For our study, we have checked that for PM1 the OK-based sampling design shows no improvement in the objective function from generation 58 to generation 100, and the IDW-based design stops improving even sooner. Thus, to ensure mature convergence, the number of generations is set to 80 for all optimization runs in this study. Fig. 8 shows the evolution of the OK-based and IDW-based objective functions versus the number of generations for the PM1 run. Increasing the population size from 800 to 2000 does not affect the objective function, but is much more time-consuming.
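The two-part convergence rule above can be expressed as a short check over the final generation. The sketch below is an assumed illustration, not the authors' GA code; `population` is a hypothetical list of the well subsets carried by each individual.

```python
# Sketch of the Reed et al. (2000a) style convergence check described above.
from collections import Counter

def ga_converged(population, n_candidates, dominant_frac=0.9, minority_frac=0.1):
    """population: list of frozensets, each holding the wells one individual selected."""
    subset_counts = Counter(population)
    dominant, n_dominant = subset_counts.most_common(1)[0]
    # condition 1: one subset carried by ~90% of the individuals
    if n_dominant < dominant_frac * len(population):
        return False
    # condition 2: every other well appears in at most ~10% of the individuals
    well_counts = Counter()
    for individual in population:
        well_counts.update(individual)
    for well in range(n_candidates):
        if well not in dominant and well_counts[well] > minority_frac * len(population):
            return False
    return True

# example: 100 individuals, 95 of which picked wells {2, 5, 7}
pop = [frozenset({2, 5, 7})] * 95 + [frozenset({2, 5, 9})] * 5
print(ga_converged(pop, n_candidates=12))   # True
```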


A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design

The challenge is thus how to generate an approximation model that is as accurate as possible over the complete domain of interest while minimizing the simulation cost. Solving this challenge involves multiple sub-problems that must be addressed: how to interface with the simulation code; how to run simulations (locally, or on a cluster or cloud); which model type to approximate the data with and how to set the model complexity (e.g., the topology of a neural network); how to estimate the model quality and ensure the domain expert trusts the model; how to decide which simulations to run (data collection); etc. The data collection aspect is worth emphasizing. Since data is computationally expensive to obtain and the optimal data distribution is not known up front, data points should be selected iteratively, at the locations where the information gain will be the greatest. A sampling function is needed that minimizes the number of sample points selected in each iteration, yet maximizes the information gain of each iteration step. This process is called adaptive sampling, but is also known as active learning or sequential design.
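A minimal sketch of such an adaptive-sampling loop, assuming a Gaussian process surrogate and a simple maximum-predictive-uncertainty selection rule; the 1-D simulator `expensive_sim` and the domain grid are placeholders, not part of the toolbox described above.

```python
# Hedged sketch of an adaptive-sampling (active learning) loop with a GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_sim(x):                     # stand-in for the real simulation code
    return np.sin(3 * x) + 0.1 * x ** 2

domain = np.linspace(0, 5, 500).reshape(-1, 1)
X = np.array([[0.0], [2.5], [5.0]])       # small initial design
y = expensive_sim(X).ravel()

for it in range(10):                      # adaptive-sampling iterations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, y)
    _, std = gp.predict(domain, return_std=True)
    x_next = domain[np.argmax(std)]       # sample where the surrogate is least certain
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_sim(x_next))

print(f"{len(X)} simulations run; max predictive std at last iteration: {std.max():.3f}")
```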


Optimization of Soil Sampling Design Based on Road Networks – A Simulated Annealing/Neural Network Algorithm

The historical sample data were used to select highly representative sampling points and thereby optimize the spatial distribution of the sampling points. SA is a stochastic computing technique that employs a combinatorial optimization algorithm converging to a global optimum while effectively avoiding local optima [26]. The algorithm has no strict requirements for the initial state of the study object. Hence, the SA algorithm can be used to solve complex deterministic combinatorial optimization problems [27-28]. In a past study, SA optimization was used to predict the spatial distribution of soil properties with a minimum number of samples while maintaining a predictive precision no lower than that obtained using the original samples, thereby obtaining an optimal sample layout [27]. Another study used the average Kriging variance as the objective function in the SA algorithm [29]. In this case, the spatial layout of the samples was assessed and optimized mainly via two indicators, the mean Kriging variance and the weighted Kriging variance, to obtain an optimum sampling design targeting different soil attributes. In the present study, the SA algorithm was applied, and the minimum
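A minimal sketch of an SA loop of the kind described above, under assumed inputs: `mean_kriging_variance` stands for a user-supplied objective (for example, the mean Kriging variance of a candidate layout), and each move swaps one selected site for an unselected candidate.

```python
# Illustrative simulated-annealing sketch for selecting a sample layout.
import math
import random

def anneal_sample_layout(candidates, n_select, mean_kriging_variance,
                         t0=1.0, cooling=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    current = set(rng.sample(range(len(candidates)), n_select))
    obj = mean_kriging_variance(current)
    best, best_obj, temp = set(current), obj, t0
    for _ in range(iters):
        out = rng.choice(sorted(current))                  # drop one selected site
        pool = [i for i in range(len(candidates)) if i not in current]
        trial = (current - {out}) | {rng.choice(pool)}     # swap in a new candidate
        trial_obj = mean_kriging_variance(trial)
        # accept improvements always, worse moves with temperature-controlled probability
        if trial_obj < obj or rng.random() < math.exp((obj - trial_obj) / temp):
            current, obj = trial, trial_obj
            if obj < best_obj:
                best, best_obj = set(current), obj
        temp *= cooling
    return best, best_obj
```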


Quality-Based Sampling Test Design for Head Gimbal Assembly

This research proposed to use Discriminant Analysis (DA) as the prediction tool. The DA technique classifies objects into groups on the basis of a set of predictor variables, and many studies have used it for classification. For example, DA was used to classify the mental illness of Thai Muslim schizophrenic patients [1], predicting the results with 97.3% accuracy. DA has also been used in the medical field, for the classification of antibiotic resistance patterns of indicator bacteria [2], where it classified the objects with 63% accuracy. Moreover, some research has compared the performance of DA with other prediction tools such as case-based forecasting and neural networks. For example, a study on bankruptcy prediction [3] found that case-based forecasting and discriminant analysis had the same performance in classifying groups, while the neural network had lower accuracy than the two other tools. This research also found that the number of variables used in the prediction did not affect the accuracy significantly, with 9 and 16 predictor variables yielding higher accuracy than other numbers of variables.
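For illustration only (synthetic data, not the head gimbal assembly measurements used in the paper), a discriminant-analysis classifier over nine predictor variables could be fitted and scored like this:

```python
# Hedged sketch of classifying pass/fail groups with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 9))                        # 9 predictor variables (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
da = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"classification accuracy: {da.score(X_te, y_te):.1%}")
```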


Reducing representativeness and sampling errors in radio occultation–radiosonde comparisons

Abstract. Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary and vertical representativeness errors may also affect the comparison. In RO–RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e. within 3–6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO–RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles and applying a spatial–temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle (∼666 km radius), small circle (∼300 km radius), and ellipse parallel to the wind direction (∼666 km semi-major axis, ∼133 km semi-minor axis). We also apply a spatial–temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle and produces differences that are comparable
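A sketch of the wind-oriented ellipse test described above, under assumed geometry (flat-Earth offsets and a wind bearing measured clockwise from north); it is an illustration, not the authors' co-location code. Because the ellipse is symmetric about its centre, it does not matter whether the bearing is taken as the direction the wind blows from or towards.

```python
# Hedged sketch: is an RO point inside a wind-aligned ellipse centred on the RS station?
import numpy as np

EARTH_RADIUS_KM = 6371.0

def inside_wind_ellipse(lat_rs, lon_rs, lat_ro, lon_ro, wind_bearing_deg,
                        semi_major_km=666.0, semi_minor_km=133.0):
    # flat-Earth offsets in km; adequate at scales of a few hundred kilometres
    dy = np.radians(lat_ro - lat_rs) * EARTH_RADIUS_KM                               # northward
    dx = np.radians(lon_ro - lon_rs) * EARTH_RADIUS_KM * np.cos(np.radians(lat_rs))  # eastward
    # rotate the offset into a frame whose x-axis follows the wind bearing
    theta = np.radians(wind_bearing_deg)
    along = dx * np.sin(theta) + dy * np.cos(theta)
    across = dx * np.cos(theta) - dy * np.sin(theta)
    return (along / semi_major_km) ** 2 + (across / semi_minor_km) ** 2 <= 1.0

# occultation ~400 km due east of the station
print(inside_wind_ellipse(52.2, 14.1, 52.2, 20.0, wind_bearing_deg=90.0))  # True (wind east-west)
print(inside_wind_ellipse(52.2, 14.1, 52.2, 20.0, wind_bearing_deg=0.0))   # False (wind north-south)
```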


Unit 2 Representativeness, balance and sampling

Like corpus representativeness, balance is an important issue for corpus creators, corpus users and readers of corpus-based studies alike. Representativeness links to research questions: the research question one has in mind when building (or thinking of using) a corpus defines representativeness. If one wants a corpus which is representative of general English, a corpus representative of newspapers will not do. If one wants a corpus representative of newspapers, a corpus representative of The Times will not do. Representativeness is a fluid concept. Corpus creators should not only make their corpora as balanced as possible for the language variety in question by including a great variety of relevant representative language samples; they must also document the corpus design criteria explicitly and make the documentation available to corpus users, so that the latter may make appropriate claims on the basis of such corpora and decide whether or not a given corpus will allow them to pursue a specific research question. Readers of corpus-based research should likewise interpret the results of corpus-based studies with caution and consider whether the corpus data used in a study were appropriate. With that said, however, we entirely agree with Atkins et al (1992: 6), who comment that:


Exploring the utility of quantitative network design in evaluating Arctic sea ice thickness sampling strategies

Abstract. We present a quantitative network design (QND) study of the Arctic sea ice–ocean system using a software tool that can evaluate hypothetical observational networks in a variational data assimilation system. For a demonstration, we evaluate two idealised flight transects derived from NASA’s Operation IceBridge airborne ice surveys in terms of their potential to improve 10-day to 5-month sea ice forecasts. As target regions for the forecasts we select the Chukchi Sea, an area particularly relevant for maritime traffic and offshore resource exploration, as well as two areas related to the Barnett ice severity index (BSI), a standard measure of shipping conditions along the Alaskan coast that is routinely issued by ice services. Our analysis quantifies the benefits of sampling upstream of the target area and of reducing the sampling uncertainty. We demonstrate how observations of sea ice and snow thickness can constrain ice and snow variables in a target region and quantify the complementarity of combining two flight transects. We further quantify the benefit of improved atmospheric forecasts and a well-calibrated model.
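As a rough, hand-rolled illustration of the Gaussian arithmetic that typically underlies such a QND assessment (not the paper's software tool), one can compare the prior and posterior uncertainty of a scalar target given a candidate network's sensitivities `H`, observation error covariance `R`, and a target sensitivity vector `g`; all of the quantities below are made up for the example.

```python
# Simplified QND-style uncertainty-reduction arithmetic in a linear Gaussian setting.
import numpy as np

def target_uncertainty(B, H=None, R=None, g=None):
    """Std-dev of the scalar target g.x before (H=None) or after assimilating the network."""
    if H is None:
        C = B                                            # prior control error covariance
    else:
        C = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
    return float(np.sqrt(g @ C @ g))

rng = np.random.default_rng(0)
n_ctrl, n_obs = 6, 3
B = np.diag(rng.uniform(0.5, 2.0, n_ctrl))               # prior control uncertainty
H = rng.normal(size=(n_obs, n_ctrl))                     # sensitivity of observations to controls
R = 0.1 * np.eye(n_obs)                                  # observation error covariance
g = rng.normal(size=n_ctrl)                              # target (e.g. mean thickness in a region)

prior = target_uncertainty(B, g=g)
posterior = target_uncertainty(B, H, R, g)
print(f"prior {prior:.2f} -> posterior {posterior:.2f} "
      f"({100 * (1 - posterior / prior):.0f}% uncertainty reduction)")
```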


Design of a Sensor Network Based Security System

The architecture of the supervisor unit is shown in Fig. 6. It consists of two main components: the left-hand side of the figure shows the system configuration component, while the right-hand side shows the system management component. Both components use the WSN interface to communicate with the wireless sensor network. The discovery service of the system management component is responsible for detecting the nodes and building the connectivity matrix. Using the data provided by the discovery service, the optimization service computes the TDMA schedule, which is then downloaded to the nodes by the configuration service. While the system configuration component works mainly offline, the system management component is heavily online. The wireless sensor network is accessed through a system state service. The user can execute commands or handle (e.g. cancel or confirm) alarms via the command and alarm handling services. All user interactions are done through the graphical user interface.
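An illustrative sketch (not the paper's optimization service) of how a TDMA schedule could be derived from the connectivity matrix: greedily colour the conflict graph so that connected nodes never share a time slot.

```python
# Hedged sketch: TDMA slot assignment from a connectivity matrix via greedy colouring.
import numpy as np

def tdma_schedule(connectivity):
    """connectivity: symmetric 0/1 matrix; returns one slot index per node."""
    n = connectivity.shape[0]
    slots = [-1] * n
    # schedule highly connected nodes first
    for node in sorted(range(n), key=lambda i: -connectivity[i].sum()):
        taken = {slots[j] for j in range(n) if connectivity[node, j] and slots[j] >= 0}
        slot = 0
        while slot in taken:
            slot += 1
        slots[node] = slot
    return slots

conn = np.array([[0, 1, 1, 0],
                 [1, 0, 1, 0],
                 [1, 1, 0, 1],
                 [0, 0, 1, 0]])
print(tdma_schedule(conn))     # [1, 2, 0, 1]: connected nodes get distinct slots
```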


Robust Negative Sampling for Network Embedding

Many recent network embedding algorithms use negative sampling (NS) to approximate a variant of the computationally expensive Skip-Gram neural network architecture (SGA) objective. In this paper, we provide theoretical arguments that reveal how NS can fail to properly estimate the SGA objective, and why it is not a suitable candidate for the network embedding problem as a distinct objective. We show NS can learn undesirable embeddings, as the result of the “Popular Neighbor Problem.” We use the theory to develop a new method “R-NS” that alleviates the problems of NS by using a more intelligent negative sampling scheme and careful penalization of the embeddings. R-NS is scalable to large-scale networks, and we empirically demonstrate the superiority of R-NS over NS for multi-label classification on a variety of real-world networks including social networks and language networks.
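For reference, the generic skip-gram negative-sampling objective that the abstract refers to can be written down in a few lines; this is standard SGNS arithmetic, not the authors' R-NS method, and the embeddings below are random placeholders.

```python
# Sketch of the per-pair skip-gram negative-sampling (SGNS) loss.
import numpy as np

def sgns_loss(u_vec, v_pos, v_negs):
    """u_vec: centre-node embedding; v_pos: context embedding of the true
    neighbour; v_negs: (k, d) embeddings of k randomly sampled negative nodes."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos_term = np.log(sigmoid(u_vec @ v_pos))            # reward the observed pair
    neg_term = np.sum(np.log(sigmoid(-v_negs @ u_vec)))  # penalise the sampled negatives
    return -(pos_term + neg_term)                        # minimised during training

rng = np.random.default_rng(0)
d, k = 16, 5
print(sgns_loss(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(k, d))))
```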


WIRELESS MESH NETWORK BASED ON DESIGN AND IMPLEMENTATION

In spite of the massive efforts in researching and developing mobile ad hoc networks in the last decade, this type of network has not yet witnessed mass-market deployment. The low commercial penetration of products based on ad hoc networking technology can be explained by noting that the ongoing research is mainly focused on implementing military or specialized civilian applications. To turn mobile ad hoc networks into a commodity, we should move to more pragmatic "opportunistic ad hoc networking", in which multihop ad hoc networks are not isolated self-configured networks, but rather emerge as a flexible and low-cost extension of wired infrastructure networks, coexisting with them. Indeed, a new class of networks is emerging from this view: mesh networks. We provide a survey of the current state of the art in off-the-shelf and proprietary solutions to build wireless mesh networks. Finally, we address the challenges of designing a high-performance, scalable, and cost-effective wireless mesh network.


Adaptive Sampling Fuzzy Controlled Based Fault Tolerance in Wireless Sensor Networks

distributed wireless sensor nodes which sense environment variables and report to a base station. Wireless sensor networks are increasingly applied in industrial and defense applications. Faults happen in sensor networks for reasons such as environmental disturbance and battery energy depletion; sensors also fail transiently or permanently due to manufacturing defects. In the field, faults in sensor nodes mean that applications built on events from those sensors are bound to fail and may cause severe damage. In this work, we propose an efficient fault tolerance method for wireless sensor networks. Our mechanism is based on adapting the sampling rate of each sensor according to the importance of its data and on using spatial-temporal correlation to achieve fault tolerance.
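A hedged sketch of the adaptive-sampling idea, with a simple threshold policy standing in for the fuzzy controller of the paper: widen the sampling interval when readings are stable and tighten it when they change quickly.

```python
# Illustrative adaptive sampling-rate policy for one sensor node (assumed rule, not the paper's).
def next_sampling_interval(prev_value, new_value, interval_s,
                           min_s=1.0, max_s=60.0, threshold=0.05):
    """Return the next sampling interval in seconds."""
    change = abs(new_value - prev_value) / (abs(prev_value) + 1e-9)
    if change > threshold:                       # important (fast-changing) data: sample more often
        return max(min_s, interval_s / 2.0)
    return min(max_s, interval_s * 1.5)          # quiet period: back off and save energy

interval = 10.0
for prev, new in [(20.0, 20.1), (20.1, 24.0), (24.0, 24.1)]:
    interval = next_sampling_interval(prev, new, interval)
    print(f"reading {new}: next sample in {interval:.1f} s")
```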


Arctic Network. Project File Management Guidelines for keeping Arctic Network project files organized, secure and accessible. Summary.

/Project directory where draft and in-progress files may be stored. Store files unrelated to the Arctic Network mission elsewhere than the O:\ drive. Directories and files deemed personal will be moved to the \Lost and Found directory and the creator notified, if possible.


Detecting Network Intrusions Using Signal Processing with Query-Based Sampling Filter

Pulse code modulation (PCM) is a digital technique that involves sampling an analog signal at regular intervals and coding the measured amplitude into a series of binary values, which are transmitted by modulation of a pulsed, or intermittent, carrier. It essentially consists of three stages: sampling of the analog signal, quantization, and binary encoding. During sampling, the continuously varying amplitude of the analog signal is approximated by digital values; this introduces a quantization error, the difference between the actual amplitude and the digital approximation. The quantization error becomes apparent, when the signal is reconverted to analog form, as distortion and a loss in audio quality; it can be reduced by increasing the sample size, since allowing more bits per sample improves the accuracy of the approximation. The approximation introduced by quantization manifests itself as noise. Often, for the analysis of sound-processing circuits, this noise is assumed to be white and decorrelated with the signal, but in reality it is perceptually tied to the signal itself, to such an extent that quantization can be perceived as an effect. Gold and Ur [18] reported an efficient error feedback scheme for compensating the amplification of the noise generated in the comb part of complex frequency sampling FIR filters [19]. In this study, we apply the concept of signal processing to develop a training-sample filter for neural networks.
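A small sketch of the sampling and quantization steps described above, showing how the quantization error shrinks as more bits per sample are allowed; the test signal and bit depths are arbitrary examples.

```python
# Illustration of uniform quantization and the resulting quantization error/SNR.
import numpy as np

def quantize(signal, n_bits):
    levels = 2 ** n_bits
    # map [-1, 1] onto integer codes, then back to amplitudes
    codes = np.clip(np.round((signal + 1) / 2 * (levels - 1)), 0, levels - 1)
    return codes / (levels - 1) * 2 - 1

fs, f = 8000, 440                                    # sample rate and tone frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)
analog = np.sin(2 * np.pi * f * t)                   # "analog" signal, already sampled

for bits in (4, 8, 12):
    error = analog - quantize(analog, bits)          # quantization error
    snr_db = 10 * np.log10(np.mean(analog ** 2) / np.mean(error ** 2))
    print(f"{bits:2d} bits per sample -> quantization SNR ~ {snr_db:.1f} dB")
```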


Assessing the sensitivity and representativeness of the Belgian Sentinel Network of Laboratories using test reimbursement data

not included in the INAMI-RIZIV database. Second, our approach identified laboratories using the certification number used for reimbursement. As laboratory identification codes evolve over time and differ between sources, some tests could not be matched to the list of laboratory codes provided to the INAMI-RIZIV to obtain reimbursement data. In addition, multiple laboratories associated with or belonging to the same group may share the same laboratory code (e.g. after a merger) while not all laboratories from the group participate in the surveillance network. In this study we assumed that if at least one laboratory participated in the SNL, all laboratories from the same group and with the same identification code were also participating. This assumption does not always hold, which means that coverage values could be slightly biased in that respect. Third, data were only available for a short period of time and do not allow us to assess how changes in test coverage of the SNL since the 1980s might influence the meaning and interpretation of the reported number of cases over time. Data were furthermore only available for 12 pathogens, which are fairly well distributed over place and widely tested. It is unknown whether the results hold for pathogens which are less often tested (such as Hantavirus) or which are more clustered or subject to outbreaks (such as Hepatitis A, Legionella, Neisseria meningitidis, or Listeria). Testing behaviours are also likely to differ by pathogen. Some tests, such as those for syphilis, are systematically prescribed for some subgroups (e.g. pregnant women). These will tend to be well reported to the SNL, as they are mainly performed by hospital laboratories, which are overrepresented in the SNL. In certain circumstances, a high test coverage might therefore not translate into representativeness or sensitivity. Alternative surveillance systems are therefore available in order to adequately monitor infectious diseases which are not evenly distributed in the general population, such as sexually transmitted diseases.


Network reconstruction via density sampling

Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid even when the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large, or small, link densities. We then introduce in detail our core technique for reconstructing both the topology and the link weights of the unknown network. When tested on real economic and financial data sets, our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets, thus representing a reliable practical tool whenever the available topological information is restricted to small portions of nodes.
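A toy sketch of the density-sampling step described above (not the authors' full reconstruction procedure): the global link density is estimated by averaging the densities of induced subgraphs drawn from random node subsets.

```python
# Illustrative estimate of global link density from random node-subset sampling.
import numpy as np

def estimate_link_density(adj, subset_size, n_subsets=200, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    densities = []
    for _ in range(n_subsets):
        nodes = rng.choice(n, size=subset_size, replace=False)
        sub = adj[np.ix_(nodes, nodes)]
        m = sub.sum() / 2                                          # undirected links in the subset
        densities.append(m / (subset_size * (subset_size - 1) / 2))
    return float(np.mean(densities))

# toy Erdos-Renyi network with true density 0.05
rng = np.random.default_rng(1)
n = 400
upper = np.triu(rng.random((n, n)) < 0.05, k=1)
adj = (upper | upper.T).astype(int)
true_density = adj.sum() / (n * (n - 1))
print(f"true {true_density:.3f}  estimated {estimate_link_density(adj, 40):.3f}")
```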


The emergence of surface-based Arctic amplification

The concept of Arctic amplification, articulated by Manabe and Stouffer (1980) and a near universal feature of climate model simulations (Holland and Bitz, 2003), is that rises in surface air temperature (SAT) in response to increasing atmospheric greenhouse gas (GHG) concentrations will be larger in the Arctic compared to the Northern Hemisphere as a whole. Model-projected Arctic amplification is focused over the Arctic Ocean (Serreze and Francis, 2006). As the climate warms, the summer melt season lengthens and intensifies, leading to less sea ice at summer’s end. Summertime absorption of solar energy in open water areas increases the sensible heat content of the ocean. Ice formation in autumn and winter, important for insulating the warm ocean from the cooling atmosphere, is delayed. This promotes enhanced upward heat fluxes, seen as strong warming at the surface and in the lower troposphere. This vertical structure of temperature change is enhanced by strong low-level stability which inhibits vertical mixing. Arctic amplification is not prominent in summer itself, when energy is used to melt remaining sea ice and increase the sensible heat content of the upper ocean,


A Capital Market Test of Representativeness

I also examine the effects of time series variation in investors' bias. Cooper et al. (2004) show that underreaction captured by stock price momentum varies strongly with past market trends, suggesting that periods of optimism and pessimism impact investor behavior. Therefore, I test hypotheses that state that during upward (downward) market trends, investors are more prone to overpricing (underpricing) stocks due to a greater tendency to declare winners (losers) prematurely (i.e., make extreme base rate errors) in a manner consistent with representativeness. The mispricing effects discussed earlier become significantly larger when tests are conditioned upon the overall market trend measured as the past three-year return of the overall market. Firms in the High-Sales/Weak-Fundamentals group experience significant negative abnormal returns of -0.79% per month following periods of positive market trends and firms in the Low-Sales/Strong-Fundamentals group experience significant positive abnormal returns of 1.44% per month following periods of negative market trends. These results suggest that the investors' error in undervaluing (overvaluing) firms with low (high) earnings signals due to representativeness is more severe during up (down) markets. These results survive additional tests based on liquidity risk and changes in volatility.


Adaptive Sampling and the Autonomous Ocean Sampling Network: Bringing Data Together With Skill

implemented to get from the data to the application is the process of extracting a window of data. The process is the same for each asset's data. With this design, modularity and the robustness of the code are optimal, because each element of the flow diagram stands alone and does not depend on another's specifics. There is also much less code to be written and maintained, reducing the potential for error in the system. This is the design that we decided to integrate.
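A minimal sketch of a shared window-extraction step of this kind, assuming each asset's data arrives as a timestamp-indexed pandas frame; the function name and data are hypothetical.

```python
# Illustrative, asset-agnostic extraction of a time window of data.
import pandas as pd

def extract_window(asset_df, start, end):
    """Return the rows of one asset's time-indexed data inside [start, end]."""
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    return asset_df.loc[(asset_df.index >= start) & (asset_df.index <= end)]

# the same function is reused, unchanged, for every asset's data stream
glider = pd.DataFrame({"temp_c": [10.1, 10.3, 10.2]},
                      index=pd.to_datetime(["2024-06-01 00:00",
                                            "2024-06-01 01:00",
                                            "2024-06-01 02:00"]))
print(extract_window(glider, "2024-06-01 00:30", "2024-06-01 02:00"))
```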

