that the proposed method performed well in detecting people, but in some cases the results were inaccurate when the human body moved in a similarly colored environment, i.e., when the color of the body resembled the color of the background, or when the person was too far from the camera. The researchers in  suggested another method to detect and track a moving human by building a model of the human walking cycle called the hierarchical Gaussian process latent variable model (HGPLVM). They built multiple models of human movement in different positions and compared these models against the detected moving bodies to determine whether a body resembled a pedestrian. The experiment showed good performance, but it suffered from frequent false positives, sometimes picking up spurious objects. In , researchers presented a model with many training samples classified using a Support Vector Machine (SVM). The SVM classifies the offered samples and returns a number of candidate types to which the object might belong. The types that receive the highest number of votes are then further classified by a second, cascade-based classifier that selects the best match among the candidates. Test results showed a high success rate in identifying objects. Still, this approach suffered from relatively high execution time, a consequence of classifying many models, as well as the need for a large number of training samples that must be retrained regularly. In , researchers presented real-time models relying on the Average of Synthetic Exact Filters (ASEF), one of the fastest algorithms in execution time; it applies mathematical operations to the object so that it can be compared with previously stored models.
Based on such calculations, the system can determine whether the object matches one of the stored models. The problem is that ASEF is highly sensitive to lighting and noise; the filter is also not robust to changes in the camera viewing angle, and it cannot handle several moving objects of different sizes in the same image at the same time.
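The vote-then-cascade idea described above can be sketched as follows. The candidate types, vote tallies, and cascade confidences are invented stand-ins for illustration, not values from the cited work:

```python
from collections import Counter

def two_stage_classify(votes, cascade_score, top_k=3):
    """Stage 1: keep the top_k types by first-stage (SVM) vote count.
    Stage 2: a cascade-style scorer picks the best surviving type."""
    candidates = [label for label, _ in Counter(votes).most_common(top_k)]
    return max(candidates, key=cascade_score)

# Hypothetical vote tallies and cascade confidences.
votes = {"pedestrian": 17, "car": 12, "tree": 9, "sign": 2}
confidence = {"pedestrian": 0.6, "car": 0.8, "tree": 0.3, "sign": 0.1}
best = two_stage_classify(votes, lambda t: confidence[t])
```

Note how the second stage can overturn the first-stage vote order: "car" wins here despite fewer votes, which is exactly the refinement role attributed to the cascade.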
Dissolved gas analysis is a common method for diagnosing faults in electrical transformers and for determining the type of fault early on, depending on the specific standard used. Dissolved gas analysis methods can be applied both in diagnosis and in the evaluation process. There are many methods for diagnosing faults in power transformers, both traditional and intelligent. An intelligent expert system relies on dissolved gas analysis using artificial neural networks; it gives excellent results in diagnosing faults, assessing the quality of insulating oil during service, and applying the appropriate treatment.
This paper proposes a new approach that extends the limitation of a previous algorithm by enabling it to generate concurrent plans. The proposed algorithm, based on the Hierarchical Task Network (HTN) paradigm, enhances the SHOP2 planning system to detect and generate a concurrent plan from SHOP2's output (a sequential plan). To trigger concurrent planning, resource allocation based on web-service inputs is used: the inputs (resources) are compared with SHOP2 operators, and a concurrent plan is initialized if an input instance matches a SHOP2 operator. To evaluate our approach, we perform two experiments using pathway information retrieval and the logistics dataset from the SHOP2 benchmark problems. The pathway information retrieval result shows that this approach is able to find and generate a concurrent plan, but it takes longer computational time. Meanwhile, on the logistics dataset, the proposed algorithm handles concurrent tasks efficiently, reducing cost through some pruned operators. In future work, we intend to examine the approach on other complex bioinformatics and systems biology workflows that widely use web services as their analysis tools.
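One plausible reading of the input-comparison step is that consecutive operators of the sequential plan can be grouped into a concurrent block whenever their input resources do not overlap. The sketch below implements that reading; the operator names and resources are invented for illustration and are not taken from the SHOP2 benchmarks:

```python
def to_concurrent(plan):
    """Group a sequential plan into blocks of operators that can run
    concurrently: an operator joins the current block only if its input
    resources are disjoint from all inputs already used in that block."""
    blocks = []
    for op, inputs in plan:
        used = set().union(*(ins for _, ins in blocks[-1])) if blocks else set()
        if blocks and used.isdisjoint(inputs):
            blocks[-1].append((op, set(inputs)))  # no resource conflict
        else:
            blocks.append([(op, set(inputs))])    # start a new block
    return blocks

# Invented sequential output: (operator, required input resources).
seq = [("fetch-A", {"db1"}), ("fetch-B", {"db2"}), ("merge", {"db1", "db2"})]
blocks = to_concurrent(seq)
```

Here the two fetches touch disjoint resources and fall into one concurrent block, while the merge, which needs both, starts a new sequential step.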
Our approach can be illustrated by a domain-specific case study that presents great adaptation potential. Our choice fell on the tax domain. It should be noted that a tax information system is supposed to be updated for each finance-law publication; therefore, a tax information system must be as flexible, scalable, and customizable as possible. This business change (functional dynamics) represents, according to Kelly , one of the essential points justifying the adoption of the DSM approach. We focused only on the calculation and restitution of corporation tax. The main services of our validation scenario are “TaxCalculation” and “TaxRestitution”. The former is a generic service that is specialized by “CTCalculationService” (corporation tax calculation service), specific to calculating the value of corporation tax. The latter is a business process composed of several services; it performs the restitution of the corporation tax. Based on our tax meta-model (see section 6), we can generate models for each tax (corporation tax, income tax…).
Table 2 also shows that learner behaviours in the learning content design features (prerequisites, flowchart, references, objectives and details), help features (FAQ), support features (collaboration with teachers and experts), and evaluation features (open questions and pre-quiz) have a significant effect on identifying the learning preferences of the rational learning style. The findings also show that significant learner patterns include temporal behaviour with the prerequisites, flowchart, and FAQ features, and navigation behaviour with the references, objectives, open questions, and teacher/expert communication-channel learning objects. Thus, it can be inferred that rational learners tend to spend more time reading and reviewing prerequisite topics and skills before studying a new topic. More time is also spent on logical thinking when viewing flowcharts. These learners tend to read the official help feature, i.e. the most common questions and their official answers. Furthermore, rational learners tend to navigate and browse official references such as books and articles to look for an intended topic, terminology, or concept, and prefer to open more communication channels with teachers and experts when facing challenges. In essence, it is found that rational learners adopt different WBES design features, classified according to the main system components (Figure 6), as listed below:
The main usage we consider here is the management of a P2P-based, virtual-node extended supercomputer for parallel computations in the bulk synchronous parallel model . The bulk synchronous parallel model provides an abstract view of the physical arrangement and the connection capabilities of the network hardware (e.g., a cluster of workstations, a parallel computer, or a set of nodes connected by a wireless network). A bulk synchronous parallel computation comprises a set of processes and a series of supersteps with
Floods are among the most powerful forces on the planet; therefore, a system that predicts and warns about flood occurrence is required. Our concern was to combine several approaches into a single system so as to design and develop an intelligent system that meets the demands of our research. After many studies, we decided to work with multi-agent systems to benefit from their advantages in terms of distributed artificial intelligence, and with expert systems to benefit from the concepts of logic programming and of facts and rules. In this paper, we present an expert system for real-time flood forecasting and warning that consists of two levels of processing: a first level for short-term forecasting and warning, i.e. for a horizon that does not exceed two to three days, using the proposed model, which is based on coefficients computed to perform the flood forecasting and warning; and a second level for medium-term (up to 10 years) and long-term (up to 1000 years) forecasting and warning via the empirical Hazan-Lazarevic model.
are satisfying the developers' and end-users' needs. In this part, the types of analysis that have been pursued will be discussed. The Technical Survival Skill Test (TSST) was used to determine each student's computer skill level in the form of numerical scores; this test is based on Cronbach's alpha score analysis [11, 61]. iSELF is an Internet tool for self-evaluation and learner feedback, designed to stimulate self-directed learning in a ubiquitous learning environment, and our experiences so far confirm its usefulness . In order to focus on individual users' perception of innovation characteristics (PCI) of eLearning, two specific questions were asked: first, can the perception variables of innovation characteristics (PCI) predict an individual's intention to use an eLearning web site? Second, does the technology adoption model of learners experienced in eLearning differ from that of inexperienced learners [12, 28]? The tests frequently used for numerical analysis are eigenvalue analysis with Cattell's scree test [35, 16], regression analysis, correlation analysis, F-test and Z-test, and Likert-scale analysis along with mean, mode, and standard deviation. Delphi was designed as a structured communication technique by RAND in the 1950s to collect data through collective opinion polling . The usage of a web application can be measured with indexes and metrics; however, for eLearning platforms there are no appropriate indexes and metrics that would facilitate their qualitative and quantitative measurement. In such cases, data mining techniques such as clustering, classification, and association can be applied to analyze the log file of an eLearning platform and deduce useful conclusions .
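Since several of the instruments above are validated with Cronbach's alpha, a minimal stdlib-only computation of the coefficient may be useful; the item scores below are invented for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items scored by n respondents:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is a list of k lists, each holding the n scores for one item."""
    k, n = len(items), len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Each respondent's total score across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Invented scores: 3 items answered by 4 respondents.
scores = [[2, 4, 4, 3], [3, 5, 4, 3], [2, 4, 5, 2]]
alpha = cronbach_alpha(scores)
```

Values near 1 indicate high internal consistency; perfectly correlated items give exactly 1.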
called pheromone. They also deposit pheromone on their way back home. The ants that follow are more likely to take the path with the stronger pheromone trail rather than move in a random fashion; these ants also deposit pheromone on the path, making it more attractive for other ants to follow. Thus, the more ants follow a path, the more attractive that path becomes for other ants. Moreover, ants that take a shorter route deposit pheromone on it faster than ants that take longer routes, which increases the probability that other ants follow the shorter route. Hence, over a period of time, all the ants follow the shortest route to the food source, leading to an optimal solution [11, 12]. Additionally, the pheromone evaporates over time, reducing the probability of converging on low-quality solutions. Although its convergence is slow, this algorithm leads to an optimal solution. Many researchers have used ant colony optimization algorithms for different image processing tasks such as image segmentation and edge detection. This paper proposes a method for using the ant colony optimization technique for shadow segmentation.
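The pheromone dynamics described above can be illustrated with a deterministic, expected-value update over two parallel paths; the evaporation rate `rho` and deposit constant `q` are illustrative parameters, not values from the cited works:

```python
def simulate(lengths, iters=50, rho=0.1, q=1.0):
    """Expected-value pheromone dynamics over parallel paths:
    tau_i <- (1 - rho) * tau_i + p_i * q / length_i,
    where p_i is the probability of choosing path i (proportional
    to its current pheromone) and rho is the evaporation rate."""
    tau = [1.0] * len(lengths)
    for _ in range(iters):
        total = sum(tau)
        probs = [t / total for t in tau]
        tau = [(1 - rho) * t + p * q / L
               for t, p, L in zip(tau, probs, lengths)]
    return tau

tau = simulate([1.0, 2.0])   # path 0 is half as long as path 1
best = tau.index(max(tau))   # the path the colony converges on
```

Because the shorter path receives a larger deposit per visit, its pheromone grows faster each round, and evaporation steadily erodes the longer path's trail, reproducing the convergence argument in the text.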
Microstrip patch antennas play a major role in day-to-day life. In this paper, a compact microstrip patch antenna with circular polarization is designed for RFID applications. Different arbitrarily shaped slots (square, circle, and plus) are used, and parameters such as return loss, gain, and frequency are observed for each of the patches. All simulations are done using ANSOFT HFSS . The antenna is fabricated using Duroid as the dielectric substrate (relative permittivity = 2.2, loss tangent = 0.0004) with a coaxial feed. The designed antennas are fabricated and used in real-time applications.
A software bug is a common concern in software engineering and one that needs to be tackled in a systematic way. In software systems, defect management is an important aspect of ensuring the software system's reliability. Automated software defect management solves many issues related to software maintenance. Software repositories store the information about software bugs in the form of Extensible Markup Language (XML) or Hypertext Markup Language (HTML). A software bug or defect report mainly consists of a title, description, and comments in textual format. The Bug Tracking System (BTS) manages the bug-fixing process, from receiving the bug to its assignment. Software developers, testers, and end-users submit bug reports to the bug repository. Bugzilla, Perforce, and JIRA are bug trackers that accept bug reports from both developers and end-users. Bug triage is the process of assigning each bug report to the appropriate developer. An automatic bug tracking system tackles the labor-intensive, fault-prone, and time-consuming aspects of the software bug process.
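Automatic triage of the kind described is commonly approximated by text similarity over report contents. The following is a minimal bag-of-words sketch, assuming each report is assigned to the developer whose history of fixed bugs is most similar; the developers and report texts are invented:

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words term counts for a piece of report text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def triage(report, history):
    """Assign a new report to the developer whose fixed-bug history
    is most similar to it (history: {developer: past report text})."""
    return max(history, key=lambda dev: cosine(bow(report), bow(history[dev])))

# Invented developers and report texts, for illustration only.
history = {"alice": "null pointer crash in parser",
           "bob": "ui button color css layout"}
assignee = triage("crash when parser hits null input", history)
```

Real triage systems typically replace raw counts with TF-IDF weights and a trained classifier, but the assign-by-similarity structure is the same.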
financial accounting information systems. The internal system construct has four indicators: the internal control system, human resource competence, standard operating procedures, and top management support. The indicator with the lowest loading factor is top management support, and the one with the highest loading factor is human resource competence, at a high level of significance. This means that even when the human resources handling the information systems at private polytechnics in East Java are competent in their field of expertise, are trained whenever the software changes, and have experience in financial accounting information systems, performance is still not good. All indicators of the internal system obtain loading factors above the required threshold, so the internal system has a very strong but non-significant influence on the performance of financial accounting information systems at private polytechnics in East Java. This is because employee compensation and motivation at the private polytechnics in East Java are still low; thus, even though internal control, human resource competence, standard operating procedures, and top management support have been implemented properly, the resulting performance of the financial accounting information systems at private polytechnics in East Java is not good. These empirical findings do not support some earlier research results  on the factors that affect the performance of accounting information systems, which found that user involvement, top management support, formulation, training and education, and information system control commitment affect the performance of accounting information systems. In addition, this study does not support , whose finding was that internal organization has a positive relationship with AIS, reinforced by the adoption processes , .
Cloud computing is an innovative technology in the field of information technology. A federated cloud is an amalgamation of several cloud service providers. Since there are many cloud service providers in a federated cloud, users get confused when choosing the best provider for their requirements. To choose the best cloud service from the available and eligible cloud service providers, a ranking concept is proposed. A mathematical model based on the Poincaré Plot Method (PPM) is proposed to rank the cloud service providers in a federated cloud management system. The proposed ranking model reveals that the federated cloud model improves the performance of resource provisioning when compared to the existing ranking model using the Analytic Hierarchy Process (AHP).
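For reference, the AHP baseline mentioned above derives provider weights from a pairwise-comparison matrix. A minimal power-iteration sketch, with an invented comparison matrix for three hypothetical providers:

```python
def ahp_weights(matrix, iters=100):
    """Approximate the AHP priority vector (the principal eigenvector of a
    pairwise-comparison matrix) by power iteration, normalised to sum 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        # One multiplication by the comparison matrix, then renormalise.
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Invented pairwise comparisons of three providers on a single criterion:
# provider 0 is preferred 3:1 over provider 1 and 5:1 over provider 2.
m = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
weights = ahp_weights(m)
ranking = sorted(range(len(weights)), key=lambda i: -weights[i])
```

Providers are then ranked by descending weight; a full AHP study would also check the matrix's consistency ratio, which is omitted here.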
Humans are usually difficult to manage in the context of information security. In fact, humans are not very predictable because they do not operate like machines, which respond to the same situation in the same way, time after time. The human challenge lies in accepting that individuals in an organization have a personal and social identity (i.e., unique attitudes, beliefs, and perceptions) that they bring with them to work, as well as a work identity conferred by their role in the organization , . While information security management activities comprise processes and procedures, it
The high computational time incurred by conventional Hough voting, attributed to the trigonometric operations and multiplications in (1) applied to every pixel in the edge map, makes it unsuitable for direct use in lane detection, which demands real-time processing. Hierarchical pyramidal approaches have been proposed in - to speed up the HT computation through parallelism. The hierarchical approaches in  filter the candidates to be promoted to the higher levels of the hierarchy by thresholding the accumulator spaces. For each candidate that qualifies, they perform a complete HT computation again using (1). Hence, although the hierarchical approaches.
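The per-pixel cost the text attributes to (1) is the normal-form voting rho = x·cos(theta) + y·sin(theta). A minimal accumulator sketch makes that cost visible: every edge pixel performs one sine, one cosine, and two multiplications per theta bin. The image size and theta resolution are illustrative:

```python
from math import cos, sin, pi, hypot

def hough_lines(edge_points, n_theta=180, rho_res=1.0, size=100):
    """Classic Hough voting: each edge pixel (x, y) votes for every line
    through it via rho = x*cos(theta) + y*sin(theta) -- the per-pixel
    trigonometric operations and multiplications of (1)."""
    max_rho = hypot(size, size)
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = [[0] * n_theta for _ in range(n_rho)]
    for x, y in edge_points:
        for t in range(n_theta):
            theta = t * pi / n_theta
            rho = x * cos(theta) + y * sin(theta)
            acc[int((rho + max_rho) / rho_res)][t] += 1
    return acc

# Synthetic edge map: 30 collinear points on the horizontal line y = 10.
pts = [(x, 10) for x in range(30)]
acc = hough_lines(pts)
best_count = max(max(row) for row in acc)  # votes in the strongest cell
```

The strongest accumulator cell collects one vote per collinear pixel, and the total work is |edge pixels| x n_theta inner iterations, which is why hierarchical schemes try to prune candidates before repeating this computation.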
When it works in ANTI mode, the SCA is not required; the ADC performs peak detection on the signal and provides this maximum at its output as a digital value. The Lower Level Discriminator (LLD) and Upper Level Discriminator (ULD) potentiometers set the limits within which the input signal amplitude is accepted by the ADC for conversion. If an input pulse falls within these limits, the ADC starts the conversion process. When the conversion process has finished, the Data Ready (DR) signal is activated. When an error occurs in the conversion process, the Invalid (INV) line is activated, DR stays inactive, and the process is aborted (this is important for the Sect. 5 discussion). After reading the data, the external system (the SAS in our scenario) activates the Data Accepted (DA) line, which resets the ADC, leaving it ready for a new conversion. From the moment the ADC starts the conversion until the DA signal is activated, the signal input remains disabled and is therefore ignored (see Fig. 5).
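The handshake just described (LLD/ULD window, DR, INV, DA) can be modelled as a small state machine. This is a toy sketch of the protocol as summarized here, not vendor firmware; the window limits and pulse amplitudes are invented:

```python
class PeakADC:
    """Toy state machine for the handshake described above: a pulse within
    the [LLD, ULD] window starts a conversion; on success DR goes high and
    the input stays disabled until the reader asserts DA, which resets the
    ADC; on a conversion error INV goes high and DR stays low."""
    def __init__(self, lld, uld):
        self.lld, self.uld = lld, uld
        self.dr = self.inv = self.busy = False
        self.value = None

    def pulse(self, amplitude, fail=False):
        if self.busy or not (self.lld <= amplitude <= self.uld):
            return  # input disabled, or rejected by the discriminators
        self.busy = True
        if fail:                      # conversion error: INV raised, abort
            self.inv, self.busy = True, False
        else:                         # conversion done: latch value, raise DR
            self.value, self.dr = amplitude, True

    def data_accepted(self):          # external system asserts DA
        self.dr, self.busy, self.value = False, False, None

adc = PeakADC(lld=10, uld=100)
adc.pulse(50)          # in-window pulse: converted, DR raised
adc.pulse(60)          # ignored: input disabled until DA
first = adc.value
adc.data_accepted()    # DA resets the ADC
adc.pulse(5)           # below LLD: rejected
adc.pulse(60)
second = adc.value
```

The second pulse of amplitude 60 is only converted after DA resets the ADC, mirroring the input-disabled window described in the text.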
The eye is one of the sense organs that can give users better interaction, closer to their needs, by observing the change of the eyes (open or closed). It is considered a rich source of information about our daily life, so it is used in computer science, especially in human-computer interaction. This paper proposes a new system for detecting eye blinks accurately without any restriction on the background and without the user having to wear any sensors or markers. No manual initialization is required in our proposed system. The proposed system works in both online and offline environments. It automatically classifies the eye as either open or closed at each video frame. The proposed system was tested with users who wear glasses, and the experiments proved its applicability. The proposed system is very easy to configure and use; it is totally non-intrusive and requires only one low-cost web camera and a computer.
The goal of this paper is to design and fabricate a multicopter that achieves stable flight with live video recording, autonomous navigation, and video analysis/processing. This paper uses a completely 3D-printed frame made of Poly Lactic Acid (PLA), brushless DC motors, electronic speed controllers, a flight controller, a transmitter and receiver, a GPS module, and a Raspberry Pi for video processing. The drone also houses a wide-angle micro first-person-view (FPV) camera, which allows the user to control and navigate the drone beyond line of sight. It also makes use of technologies such as OpenCV (Open Computer Vision) and Python programming to implement autonomous navigation, face recognition, object detection and tracking, and obstacle avoidance. By analyzing the captured video and comparing it with trained datasets, it is possible to identify objects/people in real time for object detection. Keywords: Autonomous navigation, Datasets, Object recognition, Propellers, Flight controller
TensorFlow is used to construct, train, and deploy object detection models; it makes this easy and provides a collection of detection models pre-trained on the COCO dataset, the Kitti dataset, and the Open Images dataset . One among the numerous detection models is the combination of the Single Shot Detector (SSD) and MobileNet architectures, which is quick, efficient, and does not need huge computational capability to accomplish the object detection task; an example can be seen in the image below.
3. APPLICATION OF OBJECT DETECTION
We present a rotation-invariant and computationally efficient descriptor referred to as the Dominant Rotated Local Binary Pattern (DRLBP). Rotation invariance is achieved by computing the descriptor with respect to a reference in a local neighbourhood. The reference is fast to compute, keeping the method as efficient as the original Local Binary Pattern (LBP). The proposed method not only retains the complete structural information extracted by LBP but also captures the most discriminative information, thereby achieving greater discriminative power. For representation, we learn a dictionary of the most frequently occurring patterns from the training images and discard redundant and non-informative features.
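For reference, the plain LBP code that DRLBP builds on can be computed as follows on a toy 3x3 patch (the bit ordering and the >= comparison are one common convention; DRLBP additionally rotates the bits relative to a dominant reference direction, which is omitted here):

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c): each
    neighbour contributes one bit, set when it is >= the centre value."""
    centre = img[r][c]
    # Clockwise neighbour offsets, starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

# Toy 3x3 patch: the top row is brighter than the centre.
img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
code = lbp_code(img, 1, 1)  # only the three top-row bits are set
```

A histogram of such codes over an image region forms the texture descriptor; DRLBP then keeps only the dominant (most frequent) patterns from the training set.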