The experiment is applied to 50 moving objects: 25 of them are actually human and 25 are non-human objects of various shapes. We examine the impact of the parameter Y, which is a threshold on the similarity ratio produced by NCC. In other words, the goal is not only to capture and classify the 50 objects, but to determine that 25 of them are human beings and the rest are not; otherwise we obtain false-positive cases. Suppose the number of training samples (templates) is 10, and let us study the impact of the parameter Y, which may take values between 0 and 100. When Y is approximately 0, almost all objects, human and non-human alike, are classified as human: 50% false positives and 50% correctly classified. We then gradually increase the parameter to Y = 30. At this value, 41 of the 50 objects are classified as human and 9 as non-human, and all 9 of those are indeed moving, non-human objects. The 41 objects classified as human split into two parts, 25 humans and 16 non-humans, so 16 objects are classified wrongly and 25 correctly. Adding the 9 correct non-human classifications, the number of correctly classified objects is 25 + 9 = 34, a correct rate of 68%. Continuing to increase Y until Y = 60, we notice that 31 objects are classified as human (among them the 25 real humans, the rest wrongly classified), while the remaining 19 are classified as non-human; since all 25 humans fall in the first group, these 19 are indeed non-human.
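The threshold sweep described above can be sketched as follows. The NCC scores here are hypothetical illustrations (not the experiment's actual data); an object is classified as human when its best template-match score exceeds Y.

```python
# Hedged sketch of the Y-threshold sweep (illustrative NCC scores in [0, 100]).

def classify(scores_human, scores_nonhuman, y):
    """Return (true_pos, false_pos, accuracy) for threshold y."""
    tp = sum(1 for s in scores_human if s > y)      # humans kept as human
    fp = sum(1 for s in scores_nonhuman if s > y)   # non-humans mislabeled human
    tn = len(scores_nonhuman) - fp                  # non-humans rejected correctly
    total = len(scores_human) + len(scores_nonhuman)
    return tp, fp, (tp + tn) / total

# Illustrative scores: humans tend to match the templates more strongly.
humans    = [72, 80, 65, 90, 77]
nonhumans = [20, 35, 55, 10, 40]

for y in (0, 30, 60):
    print(y, classify(humans, nonhumans, y))
```

As in the experiment, a near-zero Y accepts everything (accuracy 50%), while raising Y trades false positives for accuracy.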
The log file is normally used to determine a learner's demographic information, knowledge level and goals. It now has other uses as well; for instance, it is used to infer learners' learning preferences, interests and learning styles. However, Stathacopoulou et al. highlighted that the main limitation of inferring learner preferences by analysing the log file is that only scattered information may be acquired from the non-sequential behaviour across the different features of a WBES. The behavioural data obtained from empirical studies are too limited and inadequate to apply non-symbolic artificial intelligence techniques such as neural networks, machine learning and genetic algorithms; the non-symbolic techniques infer a learner's unknown knowledge from repetitive, complete navigational paths. The symbolic artificial intelligence techniques, such as rule-based, case-based and semantic-network methods, are more suitable and effective for an implicit approach to learner modelling according to a Learning Style model. With little interaction information, the system can deduce the learner preferences related to Learning Styles.
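A minimal sketch of the rule-based idea, assuming hypothetical event names and rules (the actual rules and Learning Style dimensions would come from the chosen model, not from this illustration):

```python
# Hedged sketch: symbolic, rule-based inference of a learner preference
# from a short event log. Event names and rules are assumptions.

RULES = [
    (lambda log: log.count("viewed_diagram") > log.count("read_text"), "visual"),
    (lambda log: True, "verbal"),   # default rule when no other fires
]

def infer_preference(log):
    """Fire the first rule whose condition holds for this event log."""
    for condition, style in RULES:
        if condition(log):
            return style

print(infer_preference(["viewed_diagram", "viewed_diagram", "read_text"]))
```

Even three log events are enough for a rule to fire, which is the point of the symbolic approach when interaction data are sparse.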
Automatic planning of web service composition is a challenging problem in both academia and real-world applications. Artificial Intelligence (AI) planning can automate web service composition by casting the composition problem as an AI planning problem. Web service composition combines multiple services whenever some requirement cannot be fulfilled by a single service. However, many planning algorithms that detect and generate composition plans focus only on sequential composition, neglecting concurrent composition. The aim of this paper is to develop an approach that generates a concurrent plan for web service composition based on semantic web services (OWL-S) and Hierarchical Task Network (HTN) planning. A bioinformatics case study on pathway data retrieval is used to validate the effectiveness of the proposed approach. The planning algorithm extends the HTN algorithm to solve the problem of automatic web service composition in the context of concurrent task planning. Experimental analysis showed that the proposed algorithms are capable of detecting and generating concurrent plans, compared with existing algorithms.
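The concurrency-detection idea can be sketched as follows. This is not the paper's algorithm: the task names, the method table and the input/output sets are invented for illustration; the sketch only shows how subtasks of an HTN method with no data dependency can be grouped into a concurrent stage.

```python
# Hedged sketch: decompose a task via an HTN-style method table, then group
# subtasks into stages; subtasks in the same stage share no data dependency
# and can run concurrently. All names and IO sets are assumptions.

METHODS = {
    "retrieve_pathway": ["get_gene_ids", "get_protein_ids", "merge_results"],
}

IO = {  # task -> (inputs it needs, outputs it produces)
    "get_gene_ids":    ({"query"}, {"genes"}),
    "get_protein_ids": ({"query"}, {"proteins"}),
    "merge_results":   ({"genes", "proteins"}, {"pathway"}),
}

def concurrent_plan(subtasks):
    """Group subtasks into stages of mutually independent tasks."""
    stages, stage, stage_out = [], [], set()
    for t in subtasks:
        needs, gives = IO[t]
        if needs & stage_out:        # depends on the current stage's outputs
            stages.append(stage)     # -> close it and open a new stage
            stage, stage_out = [], set()
        stage.append(t)
        stage_out |= gives
    stages.append(stage)
    return stages

print(concurrent_plan(METHODS["retrieve_pathway"]))
```

Here the two retrieval tasks land in one concurrent stage, and the merge, which consumes their outputs, forms a sequential second stage.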
Our approach can be illustrated by a domain-specific case study with great adaptation potential; our choice fell on the tax domain. A tax information system is expected to be updated for each finance-law publication, so it must be as flexible, scalable and customizable as possible. This business change (functional dynamics) represents, according to Kelly, one of the essential points justifying the adoption of the DSM approach. We focus only on the calculation and restitution of corporation tax. The main services of our validation scenario are “TaxCalculation” and “TaxRestitution”. The former is a generic service that is specialized by “CTCalculationService” (corporation tax calculation service), specific to calculating the value of corporation tax. The latter is a business process composed of several services; it provides restitution of the corporation tax. Based on our Tax meta-model (see section 6), we can generate models for each tax (corporation tax, income tax, etc.).
In this paper, we propose a robust multi-watermark embedding algorithm in the DWT domain based on dynamic binary location, selecting a low-frequency sub-band from the fifth-level decomposition and using two watermark logos to improve robustness. We tested the proposed scheme against ten types of attacks. Experimental results show high PSNR values, which measure the watermarked image quality, and good SNR values, which estimate the quality of the reconstructed image compared with the original. We demonstrate that our scheme is robust against a set of attacks, and that the extracted watermark logos retain good visual features and image quality; the scheme can thus provide copyright protection for legal ownership. Our experimental results also show that working with high-level decompositions confines the embedded watermark logo to a smaller part of the host image, which affects the robustness of the proposed scheme against the other set of attacks. Future work may focus on this area, adding an extra step such as the Arnold scrambling algorithm to improve security and obtain better results.
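For reference, the PSNR quality measure used above can be computed as a sketch like the following (the pixel values are invented for illustration; real use would compare full images, typically with 8-bit peak 255):

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
wm   = [52, 56, 61, 65, 70, 61, 65, 73]   # watermarked: small perturbations
print(round(psnr(orig, wm), 2))
```

Higher PSNR means the watermarked image is closer to the original; values above roughly 40 dB are usually considered imperceptible distortion.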
When a person is sick, he or she goes to the doctor, who takes a blood sample to analyse and determine the disease. Electrical transformers rely on the same idea to detect, at an early stage, the faults that may occur. By sampling the insulating oil of the transformer, dissolved gas analysis (DGA) determines the ratios of gases generated in the oil. Under high pressure and high temperature, electrical insulating oils generate varying amounts of flammable and non-flammable gases. From these, the types of faults are determined and the quality of the oil is assessed. Numerous dissolved-gas methods are in use, depending on both gas-to-gas ratio analysis and the values of individual gases. The most commonly used methods for fault diagnosis and oil quality evaluation have been applied and examined.
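The ratio-based family of methods can be sketched as follows. This is a simplified, illustrative screen in the spirit of the classic three-ratio (Rogers/IEC-style) methods; the thresholds and verdict strings here are assumptions, not the actual tables of any standard.

```python
# Hedged sketch of ratio-based DGA screening. Gas concentrations in ppm;
# thresholds are illustrative only.

def dga_ratios(gases):
    r1 = gases["C2H2"] / gases["C2H4"]   # acetylene / ethylene
    r2 = gases["CH4"] / gases["H2"]      # methane / hydrogen
    r3 = gases["C2H4"] / gases["C2H6"]   # ethylene / ethane
    return r1, r2, r3

def rough_diagnosis(gases):
    r1, r2, r3 = dga_ratios(gases)
    if r1 > 1:
        return "arcing suspected"
    if r3 > 1:
        return "thermal fault suspected"
    if r2 < 0.1:
        return "partial discharge suspected"
    return "normal ageing"

sample = {"H2": 100, "CH4": 120, "C2H2": 1, "C2H4": 50, "C2H6": 65}
print(rough_diagnosis(sample))
```

A real diagnosis would follow a published ratio table and also check absolute gas levels, as the paragraph notes.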
Since we are dealing with a dangerous disaster, we must design and propose an expert system that respects the temporal constraint. Consequently, we have used a real-time multi-agent system in the design and implementation of our proposed system, because of its advantages of distributed computing, cooperation and collaboration. We have used the SIMBA approach to respect the temporal constraint. Six agents are responsible for all the processing inside the proposed expert system.
Data mining techniques such as association rule mining [25, 1] have been applied to extract patterns, to evaluate the activities of online courses, and for classification. Many studies have also investigated the online learning environment. For example, one investigated the impact of learning styles on eLearning using statistics, and another used rule induction with rough sets to classify students' knowledge. Others have combined clustering techniques with social networking to classify students, using hierarchical agglomerative clustering to cluster students by computing a similarity matrix; the K-Means algorithm is used for clustering large data populations. Several specialized web usage mining tools are used in eLearning platforms. CourseVis is a visualization tool that tracks web log data from an eLearning system; by transforming these data, it generates graphical representations that keep instructors informed about what precisely is happening in distance-learning classes. GISMO is a tool similar to CourseVis, but it provides different information to instructors, such as student details regarding the use of course material. Sinergo/ColAT acts as an interpreter of student activity in an eLearning system. MATEP feeds log data into a data webhouse that provides static and dynamic reports. Analog is another system consisting of two main components: the first performs online and the second offline processing of web server activity. Past user activity is recorded in server log files, which are processed to form clusters of user sessions.
This method misinterpreted dark objects as shadows. Chung improved Tsai's method by performing global and local thresholding on the invariant colour models. Wu and Tang used a Bayesian network for shadow detection, where inputs are needed from the user and many cues are used. Tian et al. designed a Tricolour Attenuation Model for detecting shadows in outdoor scenes, in which the spectral power distributions of daylight and skylight are fixed; this method does not consider changes in the spectral power distributions at sunrise and sunset, and hence does not produce accurate results for images taken at either time. Aliaksei Makarau et al., in the paper "Adaptive Shadow Detection using the Blackbody Radiator Model", proposed an algorithm for automatic shadow detection that approximates the illumination spectra using a black-body radiator; many assumptions are made, and the method again fails when complex scenes are considered. The prevalence of clouds in satellite images has also been a major obstacle to image analysis, so the detection and removal of cloud shadows remains an ongoing research topic. Zhe Zhu et al., in the paper "Object-Based Cloud and Cloud Shadow Detection in Landsat Imagery", proposed a multistage approach to cloud shadow detection. Adrian Fisher, in "Cloud and Cloud Shadow Detection in SPOT 5 HRG Imagery with Automatic Morphological Feature Extraction", detects markers for cloud regions and grows them into cloud segments; manual interpretation is needed to eliminate false clouds and false shadows. The various shadow detection techniques have been surveyed in . Most of these methods produce accurate results for simple scenes, but accuracy drops as the scene becomes complex. Accurate shadow detection is therefore essential.
The cluster-centroid-based technique is the best-known summarization technique for finding inter- and intra-document relationships in a large corpus. MEAD is a clustering-based multi-document summarization system built on the cluster-centroid approach; it works by phrase or sentence extraction. For each phrase or sentence in the documents, the MEAD system computes three features and scores the sentence with a linear combination of them. The three features are the centroid score, the overlap with the first sentence or phrase, and the position. For a single document or a group of phrase clusters, it computes the centroid-based topic categories using tf-idf-type data.
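The centroid score can be sketched as follows. This is an illustration of the idea, not the MEAD implementation: build tf-idf weights over the cluster, form a centroid, then score each sentence by the centroid weights of the words it contains.

```python
import math
from collections import Counter

# Hedged sketch of a MEAD-style centroid score over a tiny toy cluster.
docs = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "dogs and cats are pets",
]

def tfidf_centroid(docs):
    """Centroid: tf-idf weight per word over the whole cluster."""
    n = len(docs)
    df, tf = Counter(), Counter()
    for d in docs:
        words = d.split()
        tf.update(words)
        df.update(set(words))
    # words appearing in every document get idf 0 and are dropped
    return {w: tf[w] * math.log(n / df[w]) for w in tf if df[w] < n}

centroid = tfidf_centroid(docs)

def centroid_score(sentence, centroid):
    """Sum of centroid weights of the sentence's words."""
    return sum(centroid.get(w, 0.0) for w in sentence.split())

best = max(docs, key=lambda d: centroid_score(d, centroid))
print(best)
```

The highest-scoring sentence is the one closest to the cluster centroid; MEAD would combine this score linearly with the first-sentence-overlap and position features.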
Microstrip patch antennas play a major role in day-to-day life. In this paper, a compact microstrip patch antenna with circular polarization is designed for RFID applications. Arbitrarily shaped slots, such as square, circle and plus, are used, and parameters such as return loss, gain and frequency are observed for each patch. All simulations are carried out using ANSOFT HFSS. The antenna is fabricated using Duroid as the dielectric substrate (relative permittivity = 2.2, loss tangent = 0.0004) with a coaxial feed. The designed antennas are fabricated and used in real-time applications.
A software bug is a common concern in software engineering and must be tackled in a systematic way. In software systems, defect management is an important aspect of ensuring reliability, and automated software defect management solves many issues related to software maintenance. Software repositories store information about software bugs in the form of Extensible Markup Language (XML) or Hypertext Markup Language (HTML). A software bug or defect report mainly consists of a title, description and comments in textual format. The Bug Tracking System (BTS) manages the bug-fixing process from receipt of the bug through to its assignment. Software developers, testers and end-users submit bug reports to the bug repository; Bugzilla, Perforce and JIRA are bug trackers that accept reports from both developers and end-users. Bug triage is the process of assigning each bug report to the appropriate developer. An automatic bug tracking system tackles the labor-intensive, fault-prone and time-consuming nature of the manual bug process.
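The triage step can be sketched as follows. The developer profiles and keyword-overlap rule are assumptions for illustration; real automatic triagers typically train a text classifier over the title and description fields described above.

```python
# Hedged sketch of keyword-based bug triage. Profiles are hypothetical.

PROFILES = {
    "alice": {"ui", "button", "layout"},
    "bob":   {"database", "query", "timeout"},
}

def triage(report_text):
    """Assign the report to the developer whose profile overlaps it most."""
    words = set(report_text.lower().split())
    return max(PROFILES, key=lambda dev: len(PROFILES[dev] & words))

print(triage("Timeout when the query hits the database"))
```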
Various phenomena have arisen in Indonesia ahead of the ASEAN Economic Community (AEC) in 2015, including a shortage of labor ready to fill the needs of industry. The business world still needs many work-ready employees, while Indonesia is not yet able to provide a ready-made workforce; each year education produces more graduates, yet the gap persists between the demand for a ready workforce and the growing supply of vocational education. As vocational education in Indonesia grows from year to year, polytechnics require financial autonomy, so the performance of their financial accounting information systems must improve. Higher education is among the sectors at very high risk of cybercrime and fraud. Colleges and universities report high levels of cyber-attacks, with millions of hacking attempts on their information systems every week. Social security and bank account numbers are always at risk, but institutions are also susceptible to the loss of valuable intellectual property, such as patents granted to faculty and students, as well as the personal information of students, faculty and staff. Because of the frequency of cyber-attacks and fraud in higher education institutions, the need to raise cyber-awareness has never been greater. Such fraud affects the performance of financial accounting information systems in educational organizations, especially polytechnics.
The proposed SLA-based inter-cloud operations do not use simulation to investigate and evaluate the performance and efficiency of different SLA-aware matchmaking algorithms supporting multiple SLA parameters. The SLA-oriented dynamic provisioning algorithm supports the integration of market-based provisioning policies and virtualization technologies for flexible allocation of resources to applications.
Humans are usually difficult to manage in the context of information security. Indeed, humans are not very predictable: they do not operate like machines, which behave in the same way in the same situation, time after time. The human challenge lies in accepting that individuals in an organization have a personal and social identity (i.e. unique attitudes, beliefs and perceptions) that they bring with them to work, as well as the work identity conferred by their role in that organization. While information security management activities comprise processes and procedures, it
Many highway deaths each year are attributed to unintended lane departure. Many automobile manufacturers are developing advanced driver assistance systems, many of which include subsystems that help prevent unintended lane departure. A consistent approach among these systems is to warn the driver when an unintended lane departure is predicted.
Face detection is an indispensable step and the actual first step of the system framework: it determines whether the later stages run at all. In the proposed system, the Viola-Jones algorithm is applied for face detection and tracking, as shown in Figure 2. The Viola-Jones algorithm is more efficient for tracking than the AdaBoost algorithm when working with multiple image frames. Viola-Jones can detect more than one face if the image contains multiple faces (it can detect the correct face in the presence of other people or objects), and it can track different types of facial views, not only the frontal view like AdaBoost, which needs a Lucas-Kanade-Tomasi (LKT) based method to support non-frontal faces. Viola-Jones is characterized by being extremely fast and achieving high detection rates. The basic idea of the algorithm is to slide a window across the image and evaluate a face model at every location; this window, or bounding box, restricts the region of the image that is searched for the eyes.
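The sliding-window idea can be sketched as follows. This is an illustration only: the "image" is a 1-D intensity list and the "face model" a dummy mean-intensity test, whereas the real Viola-Jones detector evaluates Haar features through a cascade of boosted classifiers over 2-D windows at multiple scales.

```python
# Hedged sketch of sliding-window detection. Image and model are toy stand-ins.

def slide(image, window, step, model):
    """Evaluate `model` at every window position; return positions that pass."""
    hits = []
    for x in range(0, len(image) - window + 1, step):
        patch = image[x:x + window]
        if model(patch):
            hits.append(x)
    return hits

# Dummy model: a "face" is any window whose mean intensity exceeds 100.
face_model = lambda patch: sum(patch) / len(patch) > 100

image = [10, 20, 200, 210, 220, 30, 10]
print(slide(image, window=3, step=1, model=face_model))
```

Overlapping hits around the same bright region illustrate why real detectors follow the scan with a non-maximum-suppression step.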
Once the part examples have been located within the input image, and perhaps labeled with a confidence for each detection, a component-based object detection system uses another classifier to judge whether or not the part detections truly belong to the target object. The face detection system uses a product of probabilities, indexed from histograms, to calculate the confidence that an image patch stems from the face class.
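The product-of-probabilities combination can be sketched as follows. The part names and probability values are invented for illustration; in the cited system the probabilities are read out of trained histograms rather than supplied by hand.

```python
import math

# Hedged sketch: combine per-part confidences into a whole-face score by a
# product of probabilities, computed in log space for numerical stability.

def face_confidence(part_probs):
    """Product of per-part probabilities for one image patch."""
    return math.exp(sum(math.log(p) for p in part_probs))

# Hypothetical eye/nose/mouth confidences for two patches.
patch_a = [0.9, 0.8, 0.85]   # all parts detected strongly
patch_b = [0.9, 0.1, 0.85]   # one weak part drags the whole score down
print(face_confidence(patch_a) > face_confidence(patch_b))
```

The product form makes the combined score sensitive to any single weak part, which is exactly the behavior a parts-based verifier wants.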
ABSTRACT: Traffic congestion and accidents are largely due to the poor condition of roads. For proper maintenance, the responsible departments need to assess road quality regularly. This paper introduces a system to detect potholes and inform the concerned authority about them. An ultrasonic sensor and an accelerometer are used to measure pothole depth and jerking, respectively. The system captures the geographical location of each pothole using a GPS module and stores it in a cloud database. This serves as a valuable source of information for vehicle drivers and government authorities. A web server provides public access, so that precautionary measures can be taken to avoid accidents. It is a low-cost, economical solution for continuous road monitoring.
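The depth measurement can be sketched as follows. The sensor mounting height and echo times are assumptions for illustration; the principle is simply time-of-flight ranging against the nominal road surface.

```python
# Hedged sketch: pothole depth from an ultrasonic sensor's echo time.
SPEED_OF_SOUND = 343.0  # m/s, ~20 degrees C

def distance_m(echo_time_s):
    """One-way distance: the pulse travels down and back, so halve the path."""
    return SPEED_OF_SOUND * echo_time_s / 2

def pothole_depth_cm(echo_time_s, mount_height_m=0.30):
    """Depth below the nominal road surface; 0 when the road is flat."""
    return max(0.0, (distance_m(echo_time_s) - mount_height_m) * 100)

# A longer echo than the flat-road baseline (~1.75 ms at 0.30 m mounting
# height) indicates a depression below the road surface.
print(round(pothole_depth_cm(0.00204), 1))
```

The accelerometer's jerk reading would then corroborate the ultrasonic depth before the GPS fix is uploaded.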