Code Search

Top PDF Code Search:

Acquisition Performance Analysis of BOC Signal Considering the Code Search Step Size

depicted, using infinite bandwidth. The best-case values for SinBOC(1,1), SinBOC(10,5), and CosBOC(15,2.5) are all 1, but the worst-case values differ. For the CosBOC(15,2.5) case, not only is the degradation steeper, but there are also local minimum points produced by the regularly spaced autocorrelation nulls between side peaks, as shown in Fig. 1. A typical code search step size of 0.5 experiences a loss of up to 27 dB compared to the best case, and a loss of up to 12 dB compared to the SinBOC(1,1) correlation waveform with the same search step.

SpotWeb: Characterizing framework API usages through a code search engine

SpotWeb accepts an input framework, say JUnit, and extracts ApplicationInfo from the framework. The ApplicationInfo includes all classes, all interfaces, the public or protected methods of each class and interface, and the inheritance hierarchy among the classes and interfaces of the framework. SpotWeb constructs different queries for each class or interface and interacts with a CSE such as Google Code Search [4] to gather relevant code samples from existing open source projects that use the APIs of the input framework. For example, SpotWeb constructs a query such as “lang:java junit.framework.TestSuite” to gather code samples related to the TestSuite class. The gathered code samples are referred to as a LocalRepository for the input framework. SpotWeb statically analyzes the gathered code samples and computes UsageMetrics for the classes, interfaces, and public or protected methods of all classes and interfaces. For example, the UsageMetrics computed for the TestSuite class show that the class is instantiated 165 times and extended 32 times. Similarly, the UsageMetrics computed for the addTest method of the TestSuite class show that the method is invoked 95 times. SpotWeb also gathers code examples for each class or method and stores them in a repository referred to as ExampleDB. SpotWeb then uses the algorithm shown in Figure 6 to detect hotspots from the computed UsageMetrics.
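As a rough sketch of the UsageMetrics idea described in this excerpt, one can statically count how often a class is instantiated or extended across gathered code samples. The regexes and sample snippets below are illustrative assumptions, not SpotWeb's actual implementation.

```python
# Toy usage-metric counter: count `new ClassName(...)` instantiations and
# `extends ClassName` subclassing across Java source strings.
# This is a simplification; a real tool would parse the code properly.
import re

def usage_metrics(class_name, sources):
    inst = re.compile(r"\bnew\s+" + re.escape(class_name) + r"\s*\(")
    ext = re.compile(r"\bextends\s+" + re.escape(class_name) + r"\b")
    return {
        "instantiated": sum(len(inst.findall(s)) for s in sources),
        "extended": sum(len(ext.findall(s)) for s in sources),
    }

samples = [
    "TestSuite suite = new TestSuite();",
    "public class AllTests extends TestSuite { }",
    "suite.addTest(new TestSuite(MyTest.class));",
]
print(usage_metrics("TestSuite", samples))
# → {'instantiated': 2, 'extended': 1}
```

Regex matching over raw text misses aliased imports and comments, which is why SpotWeb performs proper static analysis; the sketch only conveys the counting idea.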

High-rate systematic recursive convolutional encoders: minimal trellis and code search

In this article, we introduce the construction of the minimal trellis for a systematic recursive convolutional encoding matrix, the encoding required for the constituent codes of a turbo code. Our goal is to reduce the decoding complexity of a turbo decoder operating with high-rate constituent encoders. We also conduct a code search to show that a more finely grained decoding complexity-error performance trade-off is achieved with our approach. We tabulate several new encoding matrices with a larger variety of complexities than those in [1], as well as code rates other than k/(k+1). The proposed minimal trellis can be constructed for systematic recursive convolutional encoders of any rate. Thus, our approach is more general than that in [1], while allowing that a distance spectrum (d_i, N_i) better than that of the punctured …

Mining Sequences of Developer Interactions in Visual Studio for Usage Smells

Abstract—In this paper, we present a semi-automatic approach for mining a large-scale dataset of IDE interactions to extract usage smells, i.e., inefficient IDE usage patterns exhibited by developers in the field. The approach outlined in this paper first mines frequent IDE usage patterns, filtered via a set of thresholds and by the authors, that are subsequently supported (or disputed) using a developer survey, in order to form usage smells. In contrast with conventional mining of IDE usage data, our approach identifies time-ordered sequences of developer actions that are exhibited by many developers in the field. This pattern mining workflow is resilient to the ample noise present in IDE datasets due to the mix of actions and events that these datasets typically contain. We identify usage patterns and smells that contribute to the understanding of the usability of Visual Studio for debugging, code search, and active file navigation, and, more broadly, to the understanding of developer behavior during these software development activities. Among our findings is the discovery that developers are reluctant to use conditional breakpoints when debugging, due to perceived IDE performance problems as well as due to the lack of error checking in specifying the conditional.

Summarizing Source Code using a Neural Attention Model

High quality source code is often paired with high level summaries of the computation it performs, for example in code documentation or in descriptions posted in online forums. Such summaries are extremely useful for applications such as code search but are expensive to manually author, hence only done for a small fraction of all code that is produced. In this paper, we present the first completely data-driven approach for generating high level summaries of source code. Our model, CODE-NN, uses Long Short Term Memory (LSTM) networks with attention to produce sentences that describe C# code snippets and SQL queries. CODE-NN is trained on a new corpus that is automatically collected from StackOverflow, which we release. Experiments demonstrate strong performance on two tasks: (1) code summarization, where we establish the first end-to-end learning results and outperform strong baselines, and (2) code retrieval, where our learned model improves the state of the art on a recently introduced C# benchmark by a large margin.

Index support for regular expression search. Alexander Korotkov PGCon 2012, Ottawa

Expand continuous string fractions into trigrams. Google Code Search …
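The trigram idea mentioned in this excerpt can be sketched in a few lines: pad a string and slide a three-character window over it, as done when indexing strings for regular-expression search. The padding convention below follows PostgreSQL's pg_trgm extension (two leading blanks, one trailing blank); treat the details as assumptions, not Korotkov's exact implementation.

```python
# Minimal pg_trgm-style trigram extraction: pad the word, then take
# every 3-character window. An index over these trigrams lets a regex
# search be narrowed to candidate rows before exact matching.
def trigrams(word):
    padded = "  " + word + " "
    return sorted({padded[i:i + 3] for i in range(len(padded) - 2)})

print(trigrams("cat"))  # → ['  c', ' ca', 'at ', 'cat']
```

A regex such as `cat.*` can then be answered by intersecting the posting lists of its required trigrams, which is the core of the index support described in the talk.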

Volume 63: Software Clones 2014

A picture of the proposed search method is explained in Sec. 3. The implementation is built on top of the code search method/tool [Kam14]. Figure 1 shows a sample run of the code search tool, where two keywords, setForeground and getForeground, were searched. Here the former was called directly in a method body of initPainter and the latter was called indirectly, via a method setStyles. In this latter case, the tool found an example of code fragments from distinct source files connected by an execution path.

Clone code detector using Boyer–Moore string search algorithm integrated with ontology editor

In project development, code reuse and component reuse are important tasks; developers reuse code according to the functionality of the program. Whether a project is product-based or client-based, there is a chance of duplicated code. Several studies report that approximately 25%-35% of large projects contain cloned code (Krinke, 2001). For some large projects, detection of cloned code is feasible only with automatic techniques, and several such techniques have been proposed to detect clones automatically (Bellon et al., 2007).
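The Boyer–Moore string search named in this entry's title can be sketched in its simpler Horspool variant: precompute a bad-character shift table from the pattern and skip ahead on mismatches. This illustrates only the string-search building block, under the assumption that clone candidates are located by exact substring matching; it is not the paper's full detection pipeline.

```python
# Boyer-Moore-Horspool substring search: returns the index of the first
# occurrence of `pattern` in `text`, or -1 if absent.
def horspool_find(text, pattern):
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # For each pattern character (except the last), record how far it
    # sits from the pattern's end; unseen characters shift by m.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_find("int x = 0; int y = 0;", "int y"))  # → 11
```

The skip on mismatches is what makes the family sublinear on average, which matters when scanning large codebases for duplicated fragments.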

Optimizing Chien Search usage in the BCH Decoder

ABSTRACT: In the decoding of Bose–Chaudhuri–Hocquenghem (BCH) codes, the most complex block is the Chien search block. In the decoding process of BCH codes, error correction is performed bit by bit; hence a parallel implementation is required. The area needed to implement the Chien search in parallel is large, so a strength-reduced parallel architecture for the Chien search is presented. In this paper, the syndrome computation is done using the conventional method, and the inversion-less Berlekamp-Massey algorithm is used for solving the key equation.
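The Chien search itself is simple to state: evaluate the error-locator polynomial at every nonzero field element; the exponents where it vanishes mark the error positions. The toy below works over GF(2^4) with the primitive polynomial x^4 + x + 1; the field size and the example locator are illustrative assumptions, and the serial evaluation here is what the paper's parallel architecture accelerates.

```python
# Toy Chien search over GF(2^4): find roots of an error-locator
# polynomial by evaluating it at alpha^j for every j.
PRIM = 0b10011  # x^4 + x + 1, primitive polynomial for GF(16)

# exp/log tables for GF(16)
exp = [0] * 15
x = 1
for i in range(15):
    exp[i] = x
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
log = {exp[i]: i for i in range(15)}

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % 15]

def chien_search(locator):
    """locator: GF(16) coefficients, lowest degree first.
    Returns exponents j with locator(alpha^j) == 0."""
    roots = []
    for j in range(15):
        alpha_j = exp[j]
        acc, power = 0, 1
        for coeff in locator:            # direct polynomial evaluation
            acc ^= gf_mul(coeff, power)  # GF(2^m) addition is XOR
            power = gf_mul(power, alpha_j)
        if acc == 0:
            roots.append(j)
    return roots

# Locator built with known roots alpha^2 and alpha^5:
# L(x) = (x + alpha^2)(x + alpha^5), "+" being XOR in GF(2^m)
a2, a5 = exp[2], exp[5]
locator = [gf_mul(a2, a5), a2 ^ a5, 1]
print(chien_search(locator))  # → [2, 5]
```

A hardware Chien search avoids re-evaluating from scratch by multiplying each coefficient register by a fixed power of alpha per cycle; the strength-reduced architecture in the paper optimizes exactly those constant multipliers.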

How To Optimize Code With A Model And An Empirical Search

Abstract. Compilers employ system models, sometimes implicitly, to make code optimization decisions. These models are analytic; they reflect their implementor's understanding and beliefs about the system. While their decisions can be made almost instantaneously, unless the model is perfect their decisions may be flawed. To avoid exercising unique characteristics of a particular machine, such models are necessarily general and conservative. An alternative is to construct an empirical model. Building an empirical model involves extensive search of a parameter space to determine optimal settings. But this search is performed on the actual machine on which the compiler is to be deployed so that, once constructed, its decisions automatically reflect any eccentricities of the target system. Unfortunately, constructing accurate empirical models is expensive and, therefore, their applicability is limited to library generators such as ATLAS and FFTW, where the high up-front installation cost can be amortized over many future uses. In this paper we examine a hybrid approach. Active learning in an Explanation-Based paradigm allows the hybrid system to greatly increase the search range while drastically reducing the search time. Individual search points are analyzed for their information content using a known-imprecise qualitative analytic model. Next search points are chosen which have the highest expected information content with respect to refinement of the empirical model being constructed. To evaluate our approach we compare it with a leading analytic model and a leading empirical model. Our results show that the performance of the libraries generated using the hybrid approach is comparable to the performance of libraries generated via extensive search techniques, and much better than that of libraries generated by optimization based solely on an analytic model.

Choosing the Correct GL Code

You can search the category item list by keyword to locate a GL code that fits the item/service you are purchasing. If you cannot …

Evaluation of Web Sites for Quality and Contents of Asthma Patient Education

information are not regulated. Sponsored information is likely to be biased and business oriented, so the content of such information needs to be collectively and critically evaluated. The present study evaluated the compliance of asthma education websites with the HON code, the core education concept, and the HSWG criteria. It was observed that 20% of the sites were compliant with the HON code, while most sites adhered to the core education concept and all sites were compliant with the basic HSWG criteria. The HON code and HSWG criteria were able to assess the quality of the content of asthma information on the internet. Among the 12 shortlisted asthma sites, the counseling information on the quality criteria was found to be satisfactory. On the basis of this study, it is now possible to evaluate and recommend the inclusion of quality information specific to the needs of asthma patients in education websites.

Spectral code index (SPECOIND): A general infrared spectral database search method

too big, the index will have little meaning. The higher the distinction rate, the higher the multiplicity, and the fewer the number of hits. One can improve fetch performance by creating appropriate indexes and properly tuning the query syntax to leverage the spectral search. To investigate the balance point between hit number and score, we used the relative score (score divided by hit number). The result is shown in Fig. 6. Unfortunately, we cannot find a plateau for either hit number or relative score; they decrease or increase monotonically. From Table I, we find that the score is the highest, 0.51, when the first level is 14. At the same time, it has the biggest hit number. The average hit number is 1318.05, and the biggest one is about 2.5 times the average value. Compared with the hit number, the variation of the scores is small: the biggest is 0.51 and the smallest is 0.45. Both maximum and minimum values are close to the average. Because the score is above the average and the hit number is 278.70, which is enough for the general comparison of spectra, we chose 35 as a suitable number for the first level of the coding scheme.

A Common Law Crime Analysis of Pinkerton V. United States: Sixty Years of Impermissible Judicially-Created Criminal Liability

The Pinkerton theory is clearly a common law doctrine and not statutory in nature. A search of the United States Code will reveal no statute prescribing that a conspir…

How Do Software Developers Identify Design Problems?:A Qualitative Analysis

elements were rarely analyzed. Moreover, there were a few cases in which they had to explicitly determine a criterion for choosing such elements. For example, one subject chose a class based on the number and nature of the variables and methods located in the class. Another subject decided to limit the search to classes within specific subsystems. He picked a subsystem that was visibly large in terms of the number of classes. The same subject also suggested restricting the search to a generic subsystem: all classes that did not belong to any other specific subsystem were created in or moved to this subsystem. In the following quotation, we illustrate an example of how the element-based strategy was used by subjects in Scenario 1. At the beginning of the task, subject S1 was trying to determine which elements he should analyze, and then he decided to prioritize the analysis of classes in large subsystems. After that, he browsed a few classes until selecting one:

PSP Guide for Students

Under “Minor subject studies” and its explanatory text you will find the link “Add courses to minor subject studies”. Click on the link and you will be able to search for courses and study modules by their name or code, type, or unit, or by browsing the course catalogue.

Comparison of fractional splines with polynomial splines; An Application on under-five year’s child mortality data in Pakistan (1960-2012)

Searching for the location of spline knots by creating a potential adjustment variable for every possible time period (54 in this case) allows one to statistically search for significant adjustment points using an appropriate technique to locate one or more such points. The first step is to generate the spline adjustment variables. One can use the “+” functions or dummy variables to set up spline adjustment variables. The dummy variable, D, is needed to create a first-derivative break at the point where X = K. This creates a kink in the line at X = K rather than changing the overall slope of the line throughout its entire length; the dummy variable plays a critical role in accomplishing this.
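The dummy-variable construction described above can be sketched directly: the product of the dummy D with (X - K) is the "+"-function term, zero before the knot and linear after it. The knot location and data below are illustrative assumptions, not values from the mortality study.

```python
# Sketch: building a "+"-function spline adjustment variable with a
# dummy. K is a candidate knot; X is a simple time index.
X = [float(t) for t in range(1, 11)]          # time periods 1..10
K = 6.0                                       # assumed knot location

D = [1.0 if x > K else 0.0 for x in X]        # dummy: 1 after the knot
adjust = [d * (x - K) for d, x in zip(D, X)]  # (X - K)+ adjustment term

print(adjust)
# → [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

Regressing y on X and `adjust` then gives a slope b1 before the knot and b1 + b2 after it, which is exactly the first-derivative break (the kink) the text describes; testing the significance of b2 tests the knot.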

ONLINE ASSISTANCE AVAILABLE FOR DOING RESEARCH

The 21st century is a digital age in which it is very easy to search for any information with the help of the internet and its supporting information technology. The arrival of computers and the availability of the internet make it easy for research students to collect information related to their research area. But at the same time, due to information overload, it is very difficult to accurately retrieve data specific to a research problem. Another problem is the authenticity of information, which largely depends on the source from which the data are retrieved. Moreover, many research students also face the problem of managing their references and creating their bibliographies. With the advent of computers and the internet, it has become relatively easy to collect primary data from a large number of respondents who may be geographically scattered across the globe. This article is a humble effort to list various online tools and sources that a research student can use at different stages of his or her research.

Improving Performance via Mini-applications

In addition to IO parser support, a large fraction of the Xyce source is devoted to the library of device models. In circuit simulation, device models are used to enforce KCL equations by applying Ohmic relationships of discrete electrical components to branches of the circuit graph. Typical examples of such components include transistors, diodes, resistors, and capacitors. While some device models, such as the resistor, are quite simple, modern transistor models can be extremely complex. It is common for modern CMOS-based transistor models to consist of over 10,000 lines of C/C++ code.

A Distributed Content-Based Search Engine Based on Mobile Code and Web Service Technology

Services are published in a hierarchical name space (similar to a hierarchical file system), which simplifies the grouping of services and the definition of access control policies. Agents may publish services dynamically at runtime in an allowed subspace of the name space, and the server may publish services or launch daemons statically at boot time. For instance, an image broker provides a static index service that his agents (and only his agents) can access in order to merge collected feature vectors with previously collected ones. In our implementation, the index service is backed by a file system and provides concurrent read/write access to the stored information. The image broker also publishes a static finder service which, on input of a query, returns matching image entries. This service is backed by the index service (as illustrated by the horizontal arrow in Fig. 3.2) but restricts access to the index to a limited set of operations and can therefore be made accessible to search agents (by placing it in the public area of the name space). In the incubator model, an index agent publishes the finder service dynamically at the image provider, and it keeps a private (unpublished) index service as the back end of its finder service. In that case, the image entries are stored by whichever resource backs the storage of the index agent. Figure 3.3 gives a simplified view of the interface and class design in the UML [17] notation. Image providers publish the static pics and shop services. The pics service iterates image IDs (e.g., a locally unique image name) and thumbnails without restriction, but retrieves full quality images (based on the image ID) only if the …
