depth-first traversal

Top PDF results for "depth-first traversal":

Test Case Generation for UML Behavioral Diagram by Traversal Algorithm

Rajiv Mall et al. [4] presented a process to produce test cases by applying a combination of use case and sequence diagrams. They transform the diagrams into their respective graphs, which are then merged into a system graph. However, it was not clearly shown how the graphs are merged, and no optimization is performed on the generated test cases. Abinash Tripathy et al. [3] proposed an approach to produce test cases that amalgamates the UML activity diagram (AD) and sequence diagram (SD) for card validation. Ranjita Kumari Swain et al. [5] produced test cases using the AD: the activity diagram is first transformed into an activity diagram graph (ADG), depth-first traversal is used to traverse the paths, all activity paths are produced by their proposed algorithm, and finally the path coverage criterion is used for test case generation (TCG). Swagatika Dalai et al. [9] described a method to produce test cases for object-oriented systems using integrated UML models, with traversal performed to produce the test cases; however, optimization of the test cases is still not addressed.
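The DFS-based path enumeration described in [5] can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the graph shape, node names, and the recursive formulation are assumptions made here for the example.

```python
# Sketch: enumerate all simple paths of an activity diagram graph (ADG) by
# depth-first traversal; under the path coverage criterion, each path is one
# candidate test case. Graph and node names are illustrative.

def all_activity_paths(adg, start, end, path=None):
    """Enumerate every simple path from start to end by depth-first traversal."""
    if path is None:
        path = []
    path = path + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in adg.get(start, []):
        if nxt not in path:  # simple paths only: never revisit a node
            paths.extend(all_activity_paths(adg, nxt, end, path))
    return paths

# A small ADG with a decision node B branching to C or D, rejoining at E.
adg = {"A": ["B"], "B": ["C", "D"], "C": ["E"], "D": ["E"], "E": []}
print(all_activity_paths(adg, "A", "E"))
# [['A', 'B', 'C', 'E'], ['A', 'B', 'D', 'E']]
```

Each returned path covers one branch of the decision node, which is the essence of deriving path-coverage test cases from an activity diagram.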

Implementation and Analysis of Iterative MapReduce Based Heuristic Algorithm for Solving N-Puzzle

We provide here an efficient implementation of Parallel Breadth-First Heuristic Search (PBFHS) fused with stack-based depth-first traversal using the MapReduce programming model. The algorithm is proficient, fault-tolerant, and easy to implement, because Hadoop's MapReduce framework takes care of all domain-independent tasks. With its efficient utilization of distributed resources (CPU, main memory, and disks), MapReduce provides a platform for solving large search-space problems by efficaciously distributing the workload across the cluster.
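The iterative map/reduce structure behind such a search can be sketched in a single process. This is purely illustrative: the paper's implementation runs on Hadoop, and the `bfs_map`/`bfs_reduce` names and the toy state space below are invented here.

```python
# Single-process sketch of iterative MapReduce-style breadth-first search:
# each round, a map step expands the frontier and a reduce step deduplicates.

def bfs_map(state, neighbors):
    """Map phase: emit (successor, parent) pairs for one frontier state."""
    return [(nxt, state) for nxt in neighbors(state)]

def bfs_reduce(emitted, visited):
    """Reduce phase: deduplicate successors and drop already-visited states."""
    frontier = {}
    for state, parent in emitted:
        if state not in visited and state not in frontier:
            frontier[state] = parent
    return frontier

def iterative_bfs(start, goal, neighbors):
    """Expand the frontier one level per map/reduce round until goal is seen."""
    visited = {start: None}  # state -> parent, for path reconstruction
    frontier = {start: None}
    while frontier:
        if goal in frontier:
            return visited
        emitted = [kv for s in frontier for kv in bfs_map(s, neighbors)]
        frontier = bfs_reduce(emitted, visited)
        visited.update(frontier)
    return visited

# Toy state space: positions 0..7 on a line, moves of +/-1.
nbrs = lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 7]
parents = iterative_bfs(0, 3, nbrs)
print(parents[3], parents[2], parents[1])  # 2 1 0
```

In a real Hadoop job the map and reduce functions run on distributed workers and the frontier lives in HDFS between iterations; the control flow above is the same.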


Software Prefetching Using Jump Pointers in Linked Data Structures

effective here. It improves performance by 35%, whereas PA improves performance by 47% and 48%, respectively. The kernel Health has the following properties: it is not a small program, and LDS nodes are inserted and deleted frequently during execution. The second fact suggests that jump-pointer prefetching and prefetch arrays should be less effective, as they incur overhead for insert and delete operations. Nevertheless, jump-pointer prefetching improves performance by 27%, and PA by 38% and 39%. The kernel DB.tree is taken from a database server and performs a depth-first traversal of a binary index tree in search of a leaf node. The important aspect of this kernel is that the traversal path is not known a priori; the jump pointers are therefore set to point down the last path traversed, which explains the high instruction overhead of this technique compared to the others. Jump pointers are not a good option for programs where the traversal is not known a priori: they can even increase memory stall time due to cache misses on jump-pointer references that are, most of the time, useless. The software PA also suffers from high instruction overhead from issuing the prefetches and only manages to improve performance by 3%. The best prefetch approach for this kernel is hardware PA, which improves performance by 28%. Greedy prefetching improves performance by 15%, as the value of Work is close to the value of Latency. In the kernel Treeadd, the traversal path is known a priori, so jump pointers do provide a performance increase. The memory stall time can be removed almost entirely by increasing the prefetch distance, so jump-pointer prefetching can outperform all other techniques for Treeadd.
However, jump-pointer prefetching needs its prefetch distance adjusted for varying memory latencies, while prefetch arrays give a 40% execution-time reduction for this kernel without any tuning. The last kernel, Perimeter, traverses a quadtree in which the traversal path is known a priori. Neither prefetch-array approach can improve performance, as far too many prefetches must be launched when each new node is entered. The best approach should be jump-pointer prefetching after adjusting the prefetch distance to a higher value, which would reduce the memory stall time even more [1].

Comparative Study of Complexities of Breadth-First Search and Depth-First Search Algorithms using Software Complexity Measures

There are interesting points to observe about these graphs. Figure 2 shows that breadth-first search has the lowest and highest program volume when coded in the Pascal language and the Visual BASIC language, respectively. By implication, the graph shows that the breadth-first search algorithm is best implemented in Pascal, followed by C, C++, and Visual BASIC, in that order.


Reference Scan Algorithm for Path Traversal Patterns

As the popularity and vastness of the World Wide Web increase day by day, so does the importance of web mining. With the growing number of websites and web users, web data is collected and stored by the server as web server data comprising different fields. Analysis of this web server data can provide various kinds of information, such as user surfing behavior, which can help in user profiling, website design, and better business and marketing decisions, making a website more popular and user friendly [1]. For this task it is necessary to collect a good amount of data for analysis before coming to any productive conclusions. As web server data tends to be very large, an efficient algorithm is needed to first extract useful data and then mine it for patterns that are helpful to the website. This paper considers a new data mining capability: mining access patterns where objects are linked with each other, giving interactive access in a distributed information-providing environment such as the World Wide Web (WWW) [2], where users access websites by traveling from one page to another via the connecting facility provided, namely hyperlinks. Mining user traversal patterns will not only help improve a website's design (say, friendlier layouts for the most-used pages) but also help in making business decisions (say, placing advertisements on the appropriate pages). Mining traversal patterns yields the frequently used pages, obtained by analyzing user behavior from the web server log. Various algorithms have been proposed for mining user traversal patterns, but the algorithmic aspects proposed in this paper make traversal-pattern mining much better [3].

A Lexicon of Distributed Noun Representations Constructed by Taxonomic Traversal

Cable-Driven Constrained Traversal Mechanism for Planar Motion

In this paper, a simple model of a mobile traversal mechanism suspended by cables and actuated by motors is presented. A detailed description of the workspace over which the payload is traversed is given. The mechanism is actuated by cables driven by motors. The rate of change of the cable lengths and the angular velocities of the motors are determined such that the payload traverses the shortest path in the desired duration of time. The motors are programmed to operate separately as per the derived formulas. The mechanism thus designed is portable and can be applied in fields such as agriculture, farming, manufacturing, and surveillance.

In depth study of personality disorders in first admission patients with substance use disorders

The strengths of this study lie in the sample collection and assessment methods. The catchment-area-based services made it possible to identify all patients who met the study criteria. Other specialized addiction or psychiatric services, which received patients from the catchment area, cooperated by identifying eligible patients and referring them to the study. In contrast to earlier studies, the PDs in our sample were assessed at a relatively early stage. By selecting a sample of patients at their first admission, we avoided an overrepresentation of the chronically ill, and we reduced recall bias. Furthermore, we obtained reliable assessment of all common SUDs, Axis I, and Axis II disorders, by using reliability-tested diagnostic interviews performed by a psychiatrist. The SCID-II was chosen for its good criterion validity and reliability in diagnosing PDs. The PRISM is the best-documented diagnostic interview for diagnosing a wide range of Axis I disorders in heavy substance users. The use of different methods to assess some of the same symptom areas showed consistent results. This strengthens the findings. SUD patients are at high risk of noncompliance. Even so, we had a low dropout rate of four out of 78 (5%), which was achieved by the personal follow-up of each patient.

Traversal Free Word Vector Evaluation in Analogy Space

This effort can be simply applied to any existing word analogy task. Frankly speaking, we cannot claim that our method outperforms the original, except for the complexity part. But complexity does matter. Current analogy tasks generally contain tens of thousands of questions, so traditional traversal-based evaluation can still manage. However, we would definitely want to test a higher portion of words in the vocabulary, and with the efforts of the whole community we may someday have a "nearly optimized" test set with up to a million words involved. At that time, being traversal-free could be a highly desirable quality.

Web Navigation Path Pattern Prediction using First Order Markov Model and Depth first Evaluation

The method stores data extracted from web logs into a relational database using a click fact schema, so as to better support log querying aimed at frequent pattern mining. Several methods have been proposed to model web data. The tree-structure model is used to store the sequence of web pages and predict the traversal path from them. The Markov model is used to store the sequence of web pages; it also supports scalability and a large state space, and predicts the next page access. In this paper, we propose a first-order Markov model to store the session sequences.
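A first-order Markov model over session sequences can be sketched as transition counts between consecutive pages, with the next page predicted as the most probable successor of the current page. This is a generic illustration, not the paper's system; the session data and page names are made up.

```python
# Sketch: first-order Markov next-page prediction from web session sequences.
# Transition counts P(next | current) are estimated from consecutive page
# pairs; prediction picks the most frequent successor.

from collections import Counter, defaultdict

def train_markov(sessions):
    """Count transitions page -> next_page across all session sequences."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            transitions[cur][nxt] += 1
    return transitions

def predict_next(transitions, page):
    """Return the most frequent successor of `page`, or None if unseen."""
    if page not in transitions:
        return None
    return transitions[page].most_common(1)[0][0]

sessions = [
    ["home", "catalog", "item", "cart"],
    ["home", "catalog", "item", "item"],
    ["home", "search", "item", "cart"],
]
model = train_markov(sessions)
print(predict_next(model, "home"))  # catalog (2 of 3 sessions go there)
```

Because the model is first-order, only the current page matters for the prediction, which keeps the state space small at the cost of ignoring longer navigation history.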

Exploiting spatiotemporal locality for fast call stack traversal

Our heuristic is based on a very simple principle: the higher the number of call stack traversal events within a given period of time (high density), the closer these calls will be in the CCT. This in turn suggests a large overlap between successive calls, and therefore a large potential saving. The stack traversal functionality of WMTools is provided by libunwind through frame pointers, and as such this will be the traversal method of choice for our analysis. While we have explored the use of other traversal libraries and methods, the portability and speed of libunwind were found to be preferable. Additionally, experiments with function prologue and epilogue instrumentation showed poor performance and, in many cases, an inability to instrument external libraries such as the Message Passing Interface (MPI). Similarly, we focus on the x86_64 architecture and do not compare the technique on other architectures with different call stack structures.

Recent Trends in 2d to 3d Image Conversion: Algorithms at a glance

of 2D-to-3D image conversion. In this modern era, 3D content is dominated by its 2D counterpart, and today there exists an urgent need to convert the existing 2D content to 3D. These conversion methods are mainly categorized into automatic and semi-automatic methods: in an automatic method, no human intervention is involved, whereas in a semi-automatic method a human operator is involved. The main difference between 2D and 3D images is clearly the presence of depth in 3D images, which makes the calculation of depth the most important factor. Until now, many researchers have proposed different methods to close this gap. This paper describes and analyses an algorithm that uses monocular depth cues and learns depth from examples, establishing an overview and evaluating its relative position in the field of conversion algorithms. This may therefore contribute to the development of novel depth cues and help build better algorithms using combined depth cues.

Accelerated Entry Point Search Algorithm for Real Time Ray Tracing

Figure 3: A comparison of the visited nodes in a tree where the ray proxy frustum fully overlaps each leaf. (a) MLRTA: a traversal from the root node 0 to the first occupied leaf 3 adds the nodes [3,1,0] to the bifurcation stack. 3 is popped and marked as a potential entry point. Node 1 is then popped and investigated. As a traversal from 1 to 4 finds an occupied leaf, the candidate entry point is now 1. Node 0 is then popped. A traversal from node 0 passes through node 2 to an occupied leaf at 5. Node 0 is then marked as the candidate entry point. As the stack is now empty the current entry point 0 is returned. (b) AESPA: a traversal from the root node 0 to the first occupied overlapped leaf 3 adds the nodes [0,1,3] to the candidate queue. The first entry (and highest in the tree) 0 is investigated and a traversal from 0 to leaf 5 yields an occupied overlapped leaf. Node 0 is returned as the entry point.

Unibot, a Universal Agent Architecture for Robots

Having successfully concluded the first part of the experiments regarding learning, the second part tested the robot's application of the gained knowledge by navigating across the maze while simultaneously building a mental model of its environment. This was carried out in both simulated and physical environments. The robot's sensory input was interpreted using the previously described multi-layered ANN, and the recognized concepts were depicted in the simulation that represented the robot's mental model of the world. In this case, the agent had perfect memory and was operating in a static environment. However, the agent cannot be self-aware and possess meta-knowledge about its environment, so it is necessary, at least periodically, to update the mental model by processing sensory input through the object recognition module (trained ANN) and refresh the mental model accordingly. Aside from allowing the agent to operate in a dynamic environment, this can also be useful for correcting any prior potential errors in the agent's mental model. Figure 8 shows the robot in the real world and its mental representation of the fully explored maze. Experiments were then conducted in three additional maze configurations with the same type of elements (concepts). In all cases, the robot successfully managed to explore its environment and create an appropriate mental model that was later used in the third experimental phase. It should be emphasized that there is a difference between the real-world and the agent's mental models. This is best demonstrated by the different colors of boxes. In the physical world, there are two types of boxes, with different heights and colors on top. However, the robot's sensors are not capable of distinguishing these differences, causing both types of boxes to be identified as the same type of object. This is not regarded as an error in the proposed architecture, but as a limitation of this particular robot.
On the contrary, the proposed agent architecture is designed to overcome limitations of a single agent. By shar-

Starflake Schema Implementation Using Depth-First Algorithm in Automating Data Normalization

Abstract: The two most popular schemas used to implement a multi-dimensional model in a relational database are the star schema and the snowflake schema. A combination of a star schema and a snowflake schema, whose aim is to utilize the advantages of both schemas, is called a starflake schema. This study discusses the application of the starflake schema to automate data normalization. The researchers created a system that accepts as input a file of sample inventory data from a business inventory system and its corresponding Entity Relationship Diagram structure, and established a rule-based methodology using the depth-first algorithm. The system successfully implemented the starflake schema and achieved data normalization. The final output of the system was successfully implemented and comprehensively showed the execution time of each query, the entity relationships, the attributes of each entity, and overall space utilization.

Firewall Traversal Method by Inserting Pseudo TCP Header into QUIC

Tunneling encapsulates original packets in another protocol. Ordinary tunneling uses HTTP or HTTPS for encapsulation because these protocols are the most widely used, and they use TCP as the transport layer protocol. Even if the tunneled communication uses another transport layer protocol, TCP influences the communication much more strongly than the other protocol. For example, assume that QUIC is the target of HTTP tunneling. QUIC uses UDP and has its own TCP-like control mechanisms. When a packet loss occurs, the TCP layer of the HTTP tunnel retransmits it transparently, hidden from QUIC. In this way, TCP mechanisms take effect before those of QUIC; that is to say, QUIC over HTTP cannot exhibit its full performance. In this study, we propose an FW traversal method that inserts a pseudo TCP header, with the purpose of realizing communication without affecting it, on a communication path where delivery of packets between end nodes is not guaranteed. The proposed method makes it possible to use various communication protocols without being restricted by an FW. It achieves this by disguising packets that use protocols and port numbers usually restricted by an FW, making them look as if they are part of HTTPS communication. Impersonation of HTTPS communication is achieved by encapsulating, with a pseudo TCP header specifying port 443, the payload of the IP datagram of the target packet. By labeling a packet as part of HTTPS traffic, it becomes possible to exclude it from the FW's filtering targets. In addition, the proposed method does not affect the control of the protocols that the FW wishes to pass, since the method only rewrites the packet on the communication path, preventing the TCP control from taking effect. The proposed method inserts the pseudo TCP header, discards it after passing through the FW, and returns the packet to its original form to complete the communication. In this paper, we discuss the application of the proposed method to QUIC, which was developed by Google and is expected to be popular in the future.
QUIC is a transport layer protocol that operates in user space; it is designed and developed on the premise of being combined with HTTP/2 [1].

An Efficient Stack Based Graph Traversal Method for Network Configuration

Push is an operation to add or insert an element at the top of the stack, whereas pop is an operation to delete or remove an element from the top of the stack. The element inserted first will be at the rear end of the stack, and the element inserted last will be at the top. The size of the stack can be found as stack[top] - stack[rear]. The number of stacks at the end is nothing but the number of components in the graph. There are different types of graphs in graph theory; this methodology works well for graphs with bridges, self-loops, cyclic and acyclic graphs, and isomorphic graphs. The main constraint here is that the vertices in individual components should be named in such a way that all vertices are traversable without repeating edges. The size of each stack gives the total number of vertices in a component, and the number of resultant stacks gives the total number of components.
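The stack-based idea described above can be sketched as follows: each traversal started from an unvisited vertex fills one stack, and the number of finished stacks equals the number of connected components. This is a generic sketch, not the paper's algorithm; the example graph is invented.

```python
# Sketch: count connected components with an explicit stack. Each outer
# iteration that finds an unvisited vertex produces one stack (component).

def component_stacks(graph):
    """Return one list of vertices per connected component."""
    visited, stacks = set(), []
    for vertex in graph:
        if vertex in visited:
            continue
        stack, component = [vertex], []
        visited.add(vertex)
        while stack:
            v = stack.pop()           # pop from the top of the stack
            component.append(v)
            for w in graph[v]:
                if w not in visited:
                    visited.add(w)
                    stack.append(w)   # push onto the top of the stack
        stacks.append(component)      # one finished stack per component
    return stacks

# Two components: {1, 2, 3} (containing a cycle) and {4, 5}.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2], 4: [5], 5: [4]}
comps = component_stacks(g)
print(len(comps))               # 2 components
print([len(c) for c in comps])  # component sizes [3, 2]
```

The stack size plays exactly the role described in the abstract: each component's vertex count is the length of its stack, and the count of stacks is the component count.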

Secured Wi Fi based Indoor Path Traversal –IOT

can be pointed out using the accelerometer and orientation sensors in the smartphone itself. These sensors send location samples to the server, where they are plotted on the map to form the trajectory. Mobile sensors and Wi-Fi routers can be used for indoor localization because of their accuracy and low mobile battery consumption. DynoPath is the unique and efficient algorithm proposed here for quick localization of target devices. DynoPath can be combined with Wi-Fi to form a complete system for indoor tracking. The basic idea of DynoPath is to set the smartphone as a signal-emitting source that sends signals and gradually draws the user in its direction. It is mainly divided into two phases: the first is the user validation phase, and the second is the path traversal tracking phase.

Using a fixed-wing UAS to map snow depth distribution: an evaluation at peak accumulation

Abstract. We investigate snow depth distribution at peak accumulation over a small Alpine area (∼ 0.3 km²) using photogrammetry-based surveys with a fixed-wing unmanned aerial system (UAS). These devices are growing in popularity as inexpensive alternatives to existing techniques within the field of remote sensing, but the assessment of their performance in Alpine areas to map snow depth distribution is still an open issue. Moreover, several existing attempts to map snow depth using UASs have used multi-rotor systems, since they guarantee higher stability than fixed-wing systems. We designed two field campaigns: during the first survey, performed at the beginning of the accumulation season, the digital elevation model of the ground was obtained. A second survey, at peak accumulation, enabled us to estimate the snow depth distribution as a difference with respect to the previous aerial survey. Moreover, the spatial integration of UAS snow depth measurements enabled us to estimate the snow volume accumulated over the area. On the same day, we collected 12 probe measurements of snow depth at random positions within the case study to perform a preliminary evaluation of UAS-based snow depth. Results reveal that UAS estimations of point snow depth present an average difference with reference to manual measurements equal to −0.073 m and a RMSE equal to 0.14 m. We have also explored how some basic snow depth statistics (e.g., mean, standard deviation, minima and maxima) change with sampling resolution (from 5 cm up to ∼ 100 m): for this case study, snow depth standard deviation (hence coefficient of variation) increases with de-

Evaluating the depth dependence of atmospheric muons with the first string of the KM3NeT/ORCA detector

The results of figure 5.5 are summarized in table 5.1, in which we show the reliability of each fit. It should be noticed directly that the values of the χ² are very high. With only 17 degrees of freedom, the obtained χ² is too high to verify the exponential depth dependence of muons. This, however, is expected to be a direct consequence of the low top data points and the fact that our error margin was based solely on the influence of Poisson noise. The mean of the slopes is 0.0029 ± 0.00004; comparing this to all values, we find a χ² of 18.8 (degrees of freedom = 15), which makes the results consistent between runs and multiplicities, validating the consistency of our method and features. This gives us reason to believe the artifacts to be rather constant over each run, which allows us to generate an estimate of the muon survival length. With a slope of 0.0029 ± 0.00004 we find a halving length of 239 ± 3 meters. To obtain a better estimate, a model for DOM efficiency must be made.
