that the proposed method performs well in detecting people, but in some cases the results were inaccurate when a person moved against a similarly colored environment, i.e., when the color of the body is close to the color of the background, or when the person was too far from the camera. The researchers in  suggested another method to detect a moving human and to track him by building a model of the human walking cycle called HGPLVM (hierarchical Gaussian process latent variable model). They built multiple models of human movement in different poses and then compared these models with the detected moving bodies to decide whether each one resembled a pedestrian. The results of this experiment showed good performance, but it suffered from frequent false positives and sometimes picked up spurious objects. The researchers in  presented a model with many training samples, classified using a Support Vector Machine (SVM). The SVM classifies the offered samples and returns a set of candidate types to which the object might belong. The types that receive the most votes are then further classified by a cascade classifier that chooses the best match among the selected objects. Test results showed a high success rate in identifying objects. Still, this work suffered from relatively high execution time, a consequence of classifying many models, and from the need for a large number of training samples to be retrained regularly. In , the researchers presented a real-time model relying on the Average of Synthetic Exact Filters (ASEF). This is among the fastest algorithms to execute; it applies a few mathematical operations to the object and compares the result with previously stored models.
Based on these calculations, it can decide whether the object matches one of the stored models. The problem is that the method is highly sensitive to lighting and noise; moreover, the ASEF filter is not robust to changes in the camera's viewing angle, and it cannot handle several moving objects of different sizes simultaneously in the same image.
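The ASEF idea described above can be illustrated with a minimal 1-D sketch. This is not the paper's implementation: the real method works on 2-D images with FFTs and a Gaussian desired output, while here the signals, the impulse-like target, the sharp desired peak, and the regularization constant are all illustrative assumptions.

```python
# 1-D sketch of ASEF-style correlation filtering (illustrative assumptions only).
import cmath

N = 8

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

target = 3
# Two made-up training signals: the same impulse-like pattern with small fixed perturbations.
signals = [
    [0, 0, 0, 5.0, 0, 0, 0, 0],
    [0.1, 0, 0.1, 5.0, 0, 0.1, 0, 0.1],
]
# Desired output: a sharp peak at the target (the paper's ASEF uses a Gaussian).
G = [1.0 if n == target else 0.0 for n in range(N)]
Gf = dft(G)

EPS = 1e-3  # regularization to avoid division by near-zero spectrum values
# One "exact filter" per training signal in the frequency domain, then average.
filters = []
for s in signals:
    F = dft(s)
    filters.append([Gf[k] * F[k].conjugate() / (abs(F[k]) ** 2 + EPS) for k in range(N)])
Hf = [sum(h[k] for h in filters) / len(filters) for k in range(N)]

# Correlating a signal with the averaged filter should peak at the target position.
F = dft(signals[0])
response = idft([F[k] * Hf[k] for k in range(N)])
peak = max(range(N), key=lambda n: response[n])
```

Because each exact filter maps its training signal onto the desired output, the averaged filter produces a response whose maximum falls at the trained target location, which is what makes correlation-filter matching fast at run time.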
Floods are among the most powerful forces on the planet, so a system that predicts floods and warns of their occurrence is needed. Our goal was to combine several approaches into a single intelligent system that meets the demands of our research. After many studies, we decided to work with multi-agent systems, to benefit from their advantages in distributed artificial intelligence, and with expert systems, to benefit from logic programming and the concepts of facts and rules. In this paper, we present an expert system for real-time flood forecasting and warning that consists of two levels of processing. The first level handles short-term forecasting and warning, i.e., for a horizon that does not exceed two to three days, using the proposed model, which is based on coefficients computed to perform the flood forecasting and warning. The second level handles medium-term (up to 10 years) and long-term (up to 1000 years) forecasting and warning via the empirical model of Hazan-Lazarevic.
Figure 7 shows the results of the proposed algorithm on the logistics dataset. Three problems of different complexity were created: Problem 1 consists of two locations and two packages that must be transferred from one location to another, Problem 2 involves three locations and two packages, and Problem 3 has four locations and six packages. Each problem is set in the planner to find 100, 200, 300, and 400 plans. The results show that JSHOP2 outperforms the proposed algorithm in computational runtime (milliseconds) for finding a plan. The reason is that the proposed algorithm supports concurrent plans, which consumes slightly more computational time to find and generate the concurrent tasks, whereas JSHOP2 has no mechanism for handling concurrency. Thus, in the proposed algorithm, the mechanism for handling the concurrent process affects planner performance.
Table 2 also shows that learner behaviours in the learning content design features (prerequisites, flowchart, references, objectives and details), help features (FAQ), support features (collaboration with teachers and experts), and evaluation features (open questions and pre-quiz) have a significant effect on identifying the learning preferences of the rational learning style. The findings also show that significant learner patterns include temporal behaviour with the prerequisites, flowchart, and FAQ features, and navigation behaviour with the references, objectives, open questions, and teacher/expert communication-channel learning objects. Thus, it can be inferred that rational learners tend to spend more time reading and reviewing prerequisite topics and skills before studying a new topic, and more time on logical thinking when viewing flowcharts. These learners tend to consult the official help feature by reading the most common questions and their official answers. Furthermore, rational learners tend to navigate and browse official references such as books and articles to look up a topic, terminology, or concept, and prefer to open more communication channels with teachers and experts when facing challenges. In essence, rational learners adopt different WBES design features, classified according to the main system components (Figure 6), as listed below:
satisfy the needs of developers and end-users. In this part, the types of analysis that were pursued are discussed. The TSST (Technical Survival Skill Test) was used to determine students' computer skill level in the form of numerical scores; this test is based on Cronbach's alpha score analysis [11, 61]. iSELF is an Internet tool for self-evaluation and learner feedback; it is designed to stimulate self-directed learning in a ubiquitous learning environment, and our experience so far confirms its usefulness . To focus on individual users' Perception of Innovation Characteristics (PCI) of eLearning, two specific questions were asked: first, can the perception variables of innovation characteristics (PCI) predict an individual's intention to use an eLearning web site? Second, does the technology adoption model of learners experienced in eLearning differ from that of inexperienced learners [12, 28]? The tests frequently used for numerical analysis are eigenvalue analysis with Cattell's scree test [35, 16], regression analysis, correlation analysis, the F-test and Z-test, and Likert-scale tests along with the mean, mode, and standard deviation. Delphi was designed by RAND in the 1950s as a structured communication technique for collecting data through collective opinion polling . The usage of a web application can be measured with indexes and metrics; however, eLearning platforms lack appropriate indexes and metrics that would facilitate their qualitative and quantitative measurement. In such cases, data mining techniques such as clustering, classification, and association can be used to analyze the log file of an eLearning platform and deduce useful conclusions .
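The clustering of eLearning log data mentioned above can be sketched with a plain k-means loop. The session features (pages visited, minutes online) and the two-cluster split are invented for illustration; a real analysis would extract features from the actual platform logs.

```python
# Minimal k-means sketch over hypothetical eLearning session features:
# (pages_visited, minutes_online) per session. All values are invented.
sessions = [(3, 5), (4, 6), (2, 4), (40, 90), (38, 85), (42, 95)]

def kmeans(points, k, iters=10):
    centroids = list(points[:k])           # deterministic init for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean)
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        for j, cl in enumerate(clusters):  # recompute centroids as cluster means
            if cl:
                centroids[j] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return clusters

clusters = kmeans(sessions, 2)
```

On this toy data the algorithm separates the short casual sessions from the long intensive ones, the kind of grouping from which usage conclusions could be drawn.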
Adaptation in software engineering has received a lot of research attention in recent years, and several categorizations of adaptation exist in the literature. Raman et al.  proposed a goal-oriented categorization: their adaptation taxonomy distinguishes between corrective, adaptive, perfective, extending, and preventive adaptation, and further divides adaptive adaptation into context-aware, customization/personalization, and mediation adaptation. Bucchiarone et al.  distinguish between run-time adaptation and designed adaptation: the first covers on-the-fly adaptation, while the second requires analyzing all possible adaptation cases at design time. Khouloud  listed three types of adaptation: reflexive adaptation, adaptation controlled by policies, and adaptation by weaving aspects; the first is the ability of a system to observe and act on itself during its execution.
A novel method is proposed for identifying shadows in satellite images, based on Ant Colony Optimization (ACO). The existence of shadow regions in images has long been a hindrance to image analysis, and accurate shadow detection and removal remains a current research topic. The proposed work combines previous techniques with an object-based technique: it first identifies the edges of all the objects in the scene, and then each object is analyzed using ant colony optimization to determine whether it is a shadow or a foreground object. The shadow regions are detected in a finite number of steps, taking into account the various properties of shadow regions.
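To make the ACO mechanism concrete, here is a heavily simplified, self-contained sketch: ants walk over a toy grayscale "image", prefer darker (more shadow-like) pixels, and deposit pheromone proportional to darkness, so high pheromone ends up marking candidate shadow regions. The image, the darkness heuristic, and all parameter values are illustrative assumptions, not the paper's actual formulation.

```python
# Toy ACO sketch for marking shadow-like (dark) regions. All values invented.
import random

random.seed(42)

H = W = 8
# 8x8 "image": left half dark (candidate shadow), right half bright.
img = [[30 if x < 4 else 200 for x in range(W)] for y in range(H)]

pheromone = [[0.1] * W for _ in range(H)]
ALPHA, BETA, RHO = 1.0, 2.0, 0.1  # pheromone weight, heuristic weight, evaporation

def heuristic(y, x):
    # Hypothetical shadow heuristic: darker pixels are more "shadow-like".
    return 1.0 - img[y][x] / 255.0

def neighbours(y, x):
    return [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0) and 0 <= y + dy < H and 0 <= x + dx < W]

for _ in range(200):                      # one ant per iteration
    y, x = random.randrange(H), random.randrange(W)
    for _ in range(10):                   # steps per ant
        nbrs = neighbours(y, x)
        weights = [(pheromone[ny][nx] ** ALPHA) * (heuristic(ny, nx) ** BETA + 1e-6)
                   for ny, nx in nbrs]
        r, acc = random.random() * sum(weights), 0.0
        for (ny, nx), w in zip(nbrs, weights):   # roulette-wheel move
            acc += w
            if acc >= r:
                y, x = ny, nx
                break
        pheromone[y][x] += heuristic(y, x)       # deposit proportional to darkness
    pheromone = [[(1 - RHO) * p for p in row] for row in pheromone]  # evaporation

dark_mean = sum(pheromone[y][x] for y in range(H) for x in range(4)) / (H * 4)
bright_mean = sum(pheromone[y][x] for y in range(H) for x in range(4, 8)) / (H * 4)
```

After the walk, mean pheromone over the dark half exceeds that over the bright half, which is the signal an object-level decision ("shadow or foreground") could threshold on.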
Microstrip patch antennas play a major role in day-to-day life. In this paper, a compact microstrip patch antenna with circular polarization is designed for RFID applications. Slots of different arbitrary shapes, such as square, circle, and plus, are used, and parameters such as return loss, gain, and frequency are observed for each patch. All simulations are carried out using ANSOFT HFSS . The antenna is fabricated using Duroid as the dielectric substrate (relative permittivity = 2.2, loss tangent = 0.0004) with a coaxial feed. The designed antennas are fabricated and used in real-time applications.
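For a rectangular patch on the stated Duroid substrate, the standard transmission-line design equations give a starting point for the patch dimensions before HFSS tuning. The design frequency (2.45 GHz) and substrate height (1.6 mm) below are assumptions for illustration; only the relative permittivity 2.2 comes from the text.

```python
# Standard rectangular-patch design equations (transmission-line model).
import math

c = 299_792_458.0   # speed of light (m/s)
f0 = 2.45e9         # assumed design frequency (Hz); not given in the text
eps_r = 2.2         # Duroid relative permittivity (from the paper)
h = 1.6e-3          # assumed substrate height (m)

# Patch width for efficient radiation.
W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity accounting for fringing fields.
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Length extension due to fringing, and the resulting physical patch length.
dL = (0.412 * h * (eps_eff + 0.3) * (W / h + 0.264)
      / ((eps_eff - 0.258) * (W / h + 0.8)))
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W*1000:.1f} mm, L = {L*1000:.1f} mm, eps_eff = {eps_eff:.3f}")
```

Under these assumptions the patch comes out near 48 mm wide and 40 mm long; slot loading of the kind described in the paper then perturbs these dimensions to obtain circular polarization.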
A software bug is a common concern in software engineering, and one needs to tackle it in a systematic way. In software systems, defect management is an important aspect of ensuring the software system's reliability, and automated software defect management solves many issues related to software maintenance. Software repositories store information about software bugs in the form of Extensible Markup Language (XML) or Hypertext Markup Language (HTML). A software bug or defect report mainly consists of a title, a description, and comments in textual format. The Bug Tracking System (BTS) manages the bug-fixing process from receiving the bug to its assignment. Software developers, testers, and end-users submit bug reports to the bug repository; Bugzilla, Perforce, and JIRA are widely used bug trackers that accept bug reports from both developers and end-users. Bug triage is the process of assigning each bug report to the appropriate developer, and an automatic bug tracking system tackles the labor-intensive, fault-prone, and time-consuming aspects of the software bug process.
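A minimal sketch of automatic bug triage: build a word-frequency profile per developer from previously fixed reports, then assign a new report to the developer whose profile overlaps it most. The historical reports and developer names are made up; production triage systems use richer text models, but the assignment step has this shape.

```python
# Keyword-overlap bug triage sketch. History and names are invented examples.
from collections import Counter, defaultdict

history = [
    ("null pointer crash in parser", "alice"),
    ("parser fails on empty input", "alice"),
    ("ui button misaligned on resize", "bob"),
    ("dark theme colors wrong in ui", "bob"),
]

# Per-developer word-frequency profiles from past fixes.
profiles = defaultdict(Counter)
for text, dev in history:
    profiles[dev].update(text.split())

def triage(report):
    """Assign the report to the developer with the highest keyword overlap."""
    words = report.split()
    return max(profiles, key=lambda dev: sum(profiles[dev][w] for w in words))

print(triage("parser crash on input"))
```

Here "parser crash on input" lands on alice because her past fixes mention parser-related terms, while UI-themed reports land on bob.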
The quality of financial information has a significant positive effect on the performance of the financial accounting information system; the direct effect of financial-information quality on system performance is quite significant. The quality of financial information is measured by the indicators accurate, relevant, and timely. The accurate indicator obtains the lowest loading factor, while the timely indicator obtains the highest loading factor with a high significance level. This means that if the system generates information within the required reporting period, the performance of the financial accounting information system in private polytechnics in East Java will be good. All indicators of financial-information quality exceed the required loading factor, so the indicators accurate, relevant, and timely have a very strong and significant influence on the performance of financial accounting information systems in private polytechnics in East Java. This empirical evidence supports several studies, including one on the influence of the quality of financial accounting information on performance-improvement strategies . Its analysis showed that the quality of financial accounting information has a significant effect on performance-improvement strategies. These results are reinforced by research , .
Let us consider another example in which the computation is done using the Quality of Service (QoS) data of three real cloud providers: Amazon EC2, Windows Azure, and Rackspace. Assume that unavailable data, such as accountability and security, are randomly assigned. Table 5 shows the QoS parameter values of the three cloud service providers.
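The kind of computation this example refers to can be sketched as a weighted-sum ranking over QoS attributes. The scores and weights below are invented placeholders (the actual Table 5 values are not reproduced here), so the resulting ranking is purely illustrative.

```python
# Weighted-sum QoS ranking sketch. All scores/weights are hypothetical,
# NOT the values of Table 5.
providers = {
    "Amazon EC2":    {"availability": 0.95, "response_time": 0.80, "cost": 0.60},
    "Windows Azure": {"availability": 0.90, "response_time": 0.85, "cost": 0.70},
    "Rackspace":     {"availability": 0.85, "response_time": 0.75, "cost": 0.90},
}
weights = {"availability": 0.5, "response_time": 0.3, "cost": 0.2}  # sum to 1

def utility(qos):
    # Aggregate normalized attribute scores into one comparable number.
    return sum(weights[k] * qos[k] for k in weights)

ranking = sorted(providers, key=lambda p: utility(providers[p]), reverse=True)
print(ranking)
```

With normalized scores in [0, 1] and weights summing to 1, the utility is itself in [0, 1], which makes providers directly comparable once the missing attributes have been assigned.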
Quick, high-quality document clustering techniques play a vital role in text mining applications by grouping large text documents into meaningful clusters and enhancing clustering accuracy via dimensionality reduction or query expansion. Detecting meaningful clusters and summaries in a distributed P2P network relies on single-document summarization techniques and peer relationships. Traditional cluster-based summarization methods usually suffer in computation speed, compression, peer selection, and sentence clustering when generating high-quality summaries. Traditional document clustering and summarization methods also assume node adjacency and neighborhood information to build clusters and summaries; since multilevel overlay P2P networks suffer from node-adjacency and duplicate-information problems, it is difficult to generate optimal clusters and summaries within the peers. The proposed approach generates better document clusters using a probabilistic k-representative clustering algorithm and forms efficient summaries using phrase-rank-based summarization. Experimental results show better performance as far as execution time, entropy, and cluster quality are concerned.
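Extractive summarization of the kind the summaries above depend on can be sketched in a few lines: score each sentence by the average corpus frequency of its words and keep the top-scoring one. This is a frequency-based stand-in, not the paper's phrase-rank method, and the document text is invented.

```python
# Frequency-based extractive summarization sketch (illustrative stand-in
# for phrase-rank summarization; the text is made up).
import re
from collections import Counter

doc = ("Document clustering groups similar documents together. "
       "Each cluster gets a short summary. "
       "Document summaries help users browse documents quickly.")

sentences = [s.strip() for s in doc.split(".") if s.strip()]
freq = Counter(re.findall(r"\w+", doc.lower()))  # corpus word frequencies

def score(sentence):
    words = re.findall(r"\w+", sentence.lower())
    return sum(freq[w] for w in words) / len(words)  # average word frequency

summary = max(sentences, key=score)
print(summary)
```

The sentence containing the most recurrent topical words wins; in a P2P setting each peer would compute such scores locally and exchange only the selected sentences.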
Humans are usually difficult to manage in the context of information security. In fact, humans are not very predictable: they do not operate like machines, which, given the same situation, behave the same way time after time. The human challenge lies in accepting that individuals in an organization have a personal and social identity (i.e., unique attitudes, beliefs, and perceptions) that they bring with them to work, as well as a work identity conferred by their role in the organization , . While information security management activities comprise processes and procedures, it
The counting of people is the main and final step of the algorithm. From the feature-based multi-class tracking scheme, the actual status of the people pattern in the current frame is now known, and this information helps us develop a reliable counting scheme. A fixed virtual region in the middle of the camera's detection range is selected for counting. The middle portion is preferred because a full human body becomes entirely visible within this region and provides a better counting result than the multiple-lines method at the edge of the detection range. The counting process continues only while objects are inside this counting zone. In a conventional counting scheme, only the previous frame is taken into account for counting. But in the real-time case, where occlusion is an inherent problem that degrades counting performance, it is not sufficient to deal with only the previous frame. Here, a multilevel reverse tracking method is proposed for better counting accuracy: the status of an object in the k-th frame is checked in reverse against the (k-1)-th and (k-2)-th frames, or sometimes at a higher level, with the help of the similarity function defined in Equation-16, to get an accurate result. The algorithm also has the adaptability to sense the situation: if there is no occlusion in the video scene, only the previous frame is observed. This adaptability, together with restricting processing to the small counting zone, reduces computational time. Figure 3 shows a few nonlinear occlusion situations where multilevel reverse tracking is needed for accurate counting. From Figure 3, it is clearly visible that counting cannot be done accurately in the proper direction by simply dealing with the previous frames. Figure 3(a) shows that two objects A and B are merged in the k-th image frame. In
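The multilevel reverse-tracking step can be sketched as follows: try to match an object of frame k against frame k-1, and if no candidate clears the similarity threshold (e.g., the object was occluded there), fall back to frame k-2. The centroid-distance similarity below is a hedged stand-in for the paper's Equation-16, which is not reproduced here, and the frame data are invented.

```python
# Multilevel reverse-tracking sketch. similarity() is a stand-in for the
# paper's Equation-16; frames/objects are invented examples.
import math

def similarity(a, b):
    # Higher when centroids are closer (illustrative, not Equation-16).
    d = math.dist(a["centroid"], b["centroid"])
    return 1.0 / (1.0 + d)

def reverse_match(obj, frames, k, max_back=2, thresh=0.2):
    """Search frames k-1, k-2, ... for obj's predecessor."""
    for back in range(1, max_back + 1):
        if k - back < 0:
            break
        candidates = frames[k - back]
        if not candidates:
            continue  # e.g. the object was fully occluded in that frame
        best = max(candidates, key=lambda c: similarity(obj, c))
        if similarity(obj, best) >= thresh:
            return k - back, best["id"]
    return None

frames = [
    [{"id": "A", "centroid": (10, 50)}],  # frame 0
    [],                                   # frame 1: A occluded
    [{"id": "A", "centroid": (14, 50)}],  # frame 2: A reappears
]
print(reverse_match(frames[2][0], frames, k=2))
```

In this toy trace, frame k-1 yields no candidate, so the second reverse level recovers the match in frame k-2, which is exactly the occlusion case the single-previous-frame scheme would miscount.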
detecting and tracking objects. For example, it is difficult to distinguish between a field of orange flowers and a tiger, because color alone lacks information about how the color is distributed spatially; it is important to group color in localized regions and to fuse color with textural properties. Heisele in [HKR97] develops an algorithm to detect and track vehicles or pedestrians in real time using a color-cluster-based technique: each image is divided into a given number of clusters by grouping pixels of similar color and position. Pfinder ("Person finder") [WAD97] is a real-time system for tracking people. The system uses a multi-class statistical model of color and shape to obtain a 2D representation of the head and hands. Each pixel in the background is associated with a mean color value and a covariance matrix that describes that pixel's color distribution, so Pfinder can detect a differently colored region as a change in the scene. Each blob, corresponding to the person's hands, head, feet, shirt, and pants, has a spatial (x, y) and color (Y, U, V) component, as well as a detailed representation of its shape and appearance.
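The per-pixel background model just described can be sketched directly: keep a mean color and a (here diagonal) covariance for each background pixel, and flag a new observation as foreground when its squared Mahalanobis distance exceeds a threshold. The mean, variance, and threshold values below are invented for illustration.

```python
# Per-pixel background model in the spirit of Pfinder. All numbers invented.

def mahalanobis_sq(pixel, mean, var):
    # Squared Mahalanobis distance with a diagonal covariance.
    return sum((p - m) ** 2 / v for p, m, v in zip(pixel, mean, var))

bg_mean = (120.0, 80.0, 60.0)  # learned mean (Y, U, V) of one background pixel
bg_var = (25.0, 16.0, 16.0)    # learned per-channel variance
THRESH = 16.0                  # roughly a 4-sigma gate

def is_foreground(pixel):
    """True if the observed color is too far from this pixel's background model."""
    return mahalanobis_sq(pixel, bg_mean, bg_var) > THRESH
```

A pixel a couple of units from the mean stays background, while a strong luminance shift trips the gate; running this test at every pixel is what lets the system detect a differently colored region as a change in the scene.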
Hsiuao-Ying Chen et al.  proposed a hybrid-boost learning algorithm for multi-pose face detection and facial expression recognition. To speed up detection, the system first searches the entire frame for potential face regions using skin color detection and segmentation. It then scans the skin color segments of the image and applies the weak classifiers, combined into a strong classifier, for face detection and expression classification. Their system detected human faces at different scales, in different poses, with different expressions, and under partial occlusion and defocus. Their key contribution was the selection of weak hybrid classifiers based on Haar-like (local) features and Gabor (global) features. The multi-pose face detection algorithm can also be adapted for facial expression recognition. The experimental results showed that their face detection and facial expression recognition systems perform better than the other classifiers. D. Gobinathan et al.  presented a hybrid method for face detection in color images. The well-known Haar feature-based face detector developed by Viola and Jones (VJ), which was designed for gray-scale images, was combined with a skin-color filter that supplies complementary information in color images. The image was passed through the Haar feature-based face detector, which was adjusted to operate at a point on its ROC curve with a low number of missed faces but a high number of false detections; the proposed method then eliminated many of these false detections. They also used a color compensation algorithm to reduce the effects of illumination. Their experimental results on the Bao color face database showed that the proposed method outperformed the original VJ algorithm.
Furthermore, colour information is invariant to face orientation. However, even under fixed ambient lighting, people have different skin color appearances. To exploit skin color effectively for face detection, a feature space has to be found in which human skin colors cluster tightly together and lie far from background colors. A color model specifies colors in some standard; commonly used models include the RGB model for color monitors and the CMY and CMYK models for color printing. The HSV color model is a cylindrical representation of the RGB model: the angle around the central vertical axis corresponds to hue, the basic pure color; the distance from the axis corresponds to saturation (mixing white or black with a pure color yields a tint or a shade, respectively); and the distance along the axis corresponds to value, i.e., brightness. The HSV model describes color similarly to how the human eye tends to perceive it: RGB defines a color as a combination of primary colors, whereas HSV describes it using the more familiar notions of color, vibrancy, and brightness. The color camera on the robot uses the RGB model to capture color. Once the camera has read these values, they are converted to HSV. The HSV values are then used in the code to determine the location of the specific object or color for which the robot is searching: pixels are individually checked to determine whether they match predetermined color thresholds.
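The RGB-to-HSV conversion and per-pixel threshold check described above can be sketched with the standard library. The hue/saturation/value bounds below are illustrative skin-like thresholds, not the ones used on the robot; real thresholds depend on the camera and lighting.

```python
# Per-pixel skin test: convert RGB to HSV, then apply illustrative thresholds.
import colorsys

def is_skin(r, g, b):
    """Hedged example thresholds; actual values vary with camera and lighting."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)  # all in [0, 1]
    hue_ok = h <= 50 / 360 or h >= 340 / 360   # reddish-orange hues (wraps at 0)
    return hue_ok and 0.2 <= s <= 0.7 and v >= 0.35

print(is_skin(224, 172, 105), is_skin(0, 255, 0))
```

A warm skin-like tone passes, while saturated green fails on hue; scanning every pixel with such a predicate yields the skin mask the detector searches.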