WPCR is a numerical value on the basis of which web pages are ordered. The algorithm uses both web structure mining and web content mining techniques: web structure mining is used to calculate the importance of a page, and web content mining is used to determine how relevant the page is. Importance here means the popularity of the page, i.e. how many pages point to, or are referred to by, this particular page; it can be computed from the number of inlinks and outlinks of the page. Relevancy means how well the page matches the fired query: the more closely a page matches the query, the more relevant it becomes. The whole algorithm can be summarized in the two steps below. Input for the algorithm: page P, inlink and outlink weights of all backlinks of P, query Q, and d (the damping factor). Output of the algorithm: the rank value of page P.
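A minimal sketch of this kind of combined computation is given below. It only illustrates the idea of mixing a link-based score with a query-relevance score; the exact weighting scheme, the relevance measure, and the damping constant are assumptions for illustration, not the authors' precise formulation.

```python
# Illustrative Weighted Page Content Rank style computation (assumed scheme).

def content_relevance(page_text, query):
    """Fraction of query terms that occur in the page (simple relevance proxy)."""
    terms = query.lower().split()
    text = page_text.lower()
    return sum(t in text for t in terms) / len(terms)

def wpcr(pages, links, query, d=0.85, iterations=20):
    """pages: {name: text}, links: {source: [outlink targets]}."""
    inlinks = {p: [] for p in pages}
    for src, targets in links.items():
        for t in targets:
            inlinks[t].append(src)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # structure-mining part: importance propagated through inlinks/outlinks
            link_score = sum(rank[b] / max(len(links.get(b, [])), 1) for b in inlinks[p])
            # content-mining part: how well the page matches the fired query
            new_rank[p] = (1 - d) * content_relevance(pages[p], query) + d * link_score
        rank = new_rank
    return sorted(rank.items(), key=lambda kv: kv[1], reverse=True)

pages = {"A": "web mining survey", "B": "page rank algorithm", "C": "cooking recipes"}
links = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
print(wpcr(pages, links, "page rank"))
```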
After surveying web structure mining and web usage mining, HITS emerges as the main algorithm to follow for the further development of web applications. This paper described several proposed web structure mining algorithms, such as the PageRank algorithm, the Weighted Page Content Rank algorithm (WPCR), and HITS; we analyzed their strengths and limitations and provided a comparison among them, so the paper may be used as a reference by researchers when deciding which algorithm is suitable. We also try to overcome the problems that particular algorithms have. The paper gives an insight into the possibility of merging data mining techniques with web application analysis to achieve a synergetic effect between web usage mining and its utilization in evaluating web applications. It first describes the data preprocessing and pattern discovery steps, ranking pages on the basis of visits using weighted page content ranking and HITS. User clustering tries to discover groups of users having similar browsing patterns. Such knowledge is especially useful in e-commerce applications for inferring user demographics in order to perform market segmentation, while in evaluating web site quality and developing web applications it is valuable for providing personalized web content to users. For further research on web applications, HITS appears to be the best choice.
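For reference, the core HITS iteration mentioned above can be sketched in a few lines; the graph representation and the normalization details below are illustrative assumptions rather than a particular paper's implementation.

```python
# Minimal HITS hub/authority iteration over a small link graph.
import math

def hits(links, iterations=20):
    """links: {page: [pages it points to]}; returns (hub, authority) scores."""
    pages = set(links) | {t for ts in links.values() for t in ts}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority of p = sum of hub scores of pages linking to p
        auth = {p: sum(hub[s] for s in pages if p in links.get(s, [])) for p in pages}
        # hub of p = sum of authority scores of pages p links to
        hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
        # normalize so the scores stay bounded across iterations
        an = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        hn = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        auth = {p: v / an for p, v in auth.items()}
        hub = {p: v / hn for p, v in hub.items()}
    return hub, auth

hub, auth = hits({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print(sorted(auth.items(), key=lambda kv: -kv[1]))
```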
On the other hand, despite recent developments in web search technology, there are still many circumstances in which search engine users are presented with inappropriate search results. One of the most important reasons for this is that web search engines have difficulty recognizing users' exact search interest from the initial query. This is partly because of the ambiguity that occurs naturally in language itself, and because no contextual structure is presented to the search engine. Moreover, inexperienced web search engine users are often unsure of the precise terms that best correspond to their specific information needs; in the worst case, users are simply unable to formulate exact queries representing the information they need. It is therefore necessary to learn users' search patterns and distinguish their search interests in order to provide exactly the information the user requires. Many approaches are available in the literature that give ideas and techniques for providing the required information collected from the World Wide Web.
Users seldom look at results beyond the first search result page, which means that results that are not among the top ten are nearly invisible to the general user. To provide better search results, most search engines therefore use page ranking mechanisms to put the important pages at the top of the result list and the less important pages at the bottom, so page ranking is central to web searching. Rankers are classified into two groups: content-based rankers and connectivity-based rankers. Content-based rankers work on the basis of the number of matched terms, the frequency of terms, and the location of terms. Connectivity-based rankers work on the basis of link analysis, where links are the edges that point to different web pages. Connectivity-based rankers are independent of the user's query and hence more dynamic than content-based rankers. In this paper, various page ranking algorithms based on the web link structure are described: Section II describes the working of the page ranking algorithms, Section III compares these algorithms, and Section IV presents the conclusion.
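As a concrete instance of a connectivity-based ranker, a plain PageRank iteration is sketched below; it orders pages purely from the link structure, independently of the query. The damping factor 0.85 and the toy graph are the usual illustrative assumptions.

```python
# Minimal PageRank over a dictionary-based link graph.

def pagerank(links, d=0.85, iterations=50):
    """links: {page: [outlink targets]}."""
    pages = set(links) | {t for ts in links.values() for t in ts}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # rank flowing in from every page q that links to p
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links.get(q, ()))
            new_rank[p] = (1 - d) / n + d * incoming
        rank = new_rank
    return sorted(rank.items(), key=lambda kv: kv[1], reverse=True)

print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}))
```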
The main problems in e-learning technology are lack of accuracy, time limitations, information overload, and cost. There should be a mechanism to structure the huge amount of e-learning resources and make them readily accessible and reusable in a personalized way. This access should be supported for the various types and levels of the e-learning community, whether they belong to professional training, academic programs, lifelong learners, or others.
transactions. Conceptually, this model ought to deal with the complexities of competition in an online environment while maximizing social welfare. 
In recent years, the proliferation of the World Wide Web has led to an increase in the number of public auctions on the Internet. One characteristic of online auctions is that a successful implementation requires a high volume of buyers and sellers at its website. Consequently, auction sites with a high volume of traffic have an advantage over those where the volume is limited, which results in an even greater polarization of buyers and sellers towards a particular site. This is often referred to as the network effect, which arises in a variety of web and telecommunication applications involving interactions among a large number of entities. While this effect has qualitatively been known to increase the value of the overall network, it has never been modelled or studied rigorously. In this paper, we construct a Markov model to analyse the network effect in the case of web auctions. We show that the network effect is very powerful for web auctions and can result in a situation in which one auction site quickly overwhelms its competitors, so that the natural stable equilibrium is a single online auction seller for a given product and geographical locality. While a strict single-player structure is unlikely because of some approximating assumptions in the model, the trend suggests the likely emergence of a single dominant player in the web auction space.
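The winner-take-all dynamic can be illustrated with a toy simulation; this is not the paper's Markov model, only a sketch in which each arriving trader joins one of two competing sites with probability proportional to the square of each site's current size (a superlinear preference standing in for the network effect). The initial sizes and arrival count are arbitrary assumptions.

```python
# Toy simulation of the network effect between two competing auction sites.
import random

def simulate(size_a=55, size_b=45, arrivals=5000, seed=1):
    random.seed(seed)
    for _ in range(arrivals):
        # superlinear preference: bigger sites attract disproportionately more traders
        pa = size_a ** 2 / (size_a ** 2 + size_b ** 2)
        if random.random() < pa:
            size_a += 1
        else:
            size_b += 1
    total = size_a + size_b
    return size_a / total, size_b / total

print(simulate())   # the initially larger site typically ends up with most traders
```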
Web content mining is an emerging field that applies knowledge discovery technology to Web data. It discovers knowledge from the content of Web documents and attempts to understand the semantics of Web data [22, 30]. Based on the various Web data types, Web content mining can be categorized into Web text mining, Web multimedia data mining (e.g. image, audio, video), and Web structure mining. In this paper, Web information refers in particular to the text documents existing on the Web; thus, the term "Web content mining" here refers to "Web text content mining", the knowledge discovery from the content of Web text documents. Kosala and Blockeel categorized Web content mining techniques into database views and information retrieval views. From the database view, the goal of Web content mining is to model the Web data so that Web information gathering may be performed based on concepts rather than on keywords. From the information retrieval view, the goal is to improve Web information gathering based on either inferred or solicited Web user profiles. With either view, Web content mining contributes significantly to Web information gathering.
The best hotel is recommended based on the calculated weights.
The García approach did not use structured storage, such as an ontological knowledge base, to store information about the domain. Wanner et al. proposed an approach that overcomes this drawback and uses an ontology-based knowledge base as the main data structure to store information about the environmental domain, used for designing an expert system offering personalized support to citizens in questions related to the environmental conditions in their habitat.
look for gaps. Keywords are used to identify candidate categories, and content-based retrieval is carried out within these categories using multiple image features. Relevance feedback is used to learn the user's intent, i.e. query specification and feature weighting, with a minimal user-interface concept. The technique is applied to a large collection of images gathered from a popular categorical structure on the World Wide Web. Results show that efficient and accurate performance is achievable by exploiting the semantic classification represented by the categories. The relevance feedback loop allows the content-descriptor weightings to be determined without exposing the calculations to the user. Indexing varied collections of multimedia data remains a challenging problem. Even though significant progress has been made toward developing effective content descriptors, evidenced by the forthcoming MPEG-7 standard, it is still difficult to bridge the gap between low-level image analysis and image understanding at the semantic level. This gap limits access solutions, since users usually interact at the semantic level. The set of images found on the World Wide Web (WWW) is a prime example of a multimedia collection that is difficult to index. Low-level features, such as color and texture, can be extracted and used for similarity searches; the results might be visually suitable, but it is unreasonable to expect them to be conceptually relevant. For this reason, content-based searches must be constrained to semantically relevant sets of images.
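A small sketch of feature-weighted retrieval with one relevance-feedback step is given below. The feature vectors and the weight-update rule (down-weighting features that vary widely among results marked relevant) are illustrative assumptions, not the cited system's actual method.

```python
# Feature-weighted content-based retrieval with a simple relevance-feedback step.
import numpy as np

def search(query_vec, database, weights, k=3):
    """Rank images by weighted Euclidean distance over low-level features."""
    dists = {name: np.sqrt(np.sum(weights * (vec - query_vec) ** 2))
             for name, vec in database.items()}
    return sorted(dists, key=dists.get)[:k]

def update_weights(relevant_vecs):
    """Give more weight to features that vary little among images marked relevant."""
    var = np.var(np.vstack(relevant_vecs), axis=0) + 1e-6
    new = 1.0 / var
    return new / new.sum()

# toy 4-dimensional feature vectors (e.g. color/texture statistics)
db = {"img%d" % i: np.random.rand(4) for i in range(10)}
q = np.random.rand(4)
w = np.ones(4) / 4
hits = search(q, db, w)
w = update_weights([db[h] for h in hits[:2]])   # user marks two results relevant
print(search(q, db, w))                         # re-ranked with learned weights
```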
Assistant Professor, Savitribai Phule Pune University, D.Y. Patil College of Engineering, Pune, Maharashtra, India
Abstract:— A large number of videos are uploaded and shared on social websites every single day. Many near-duplicate videos (NDVs) on the web are generated in different ways, ranging from simple reformatting to various transformations, editing, and mixtures of different effects. The Internet is overflowing with near-duplicate videos, i.e. duplicates connected by visual and temporal transformations and post-production. Two basic issues, copyright infringement and search result redundancy, arise from this. To overcome these issues, a spatiotemporal pattern-based approach under a hierarchical filter-and-refine structure can be used for efficient and effective near-duplicate video retrieval and localization. As we survey the work on near-duplicate video retrieval, we examine existing variants of the definitions of near-duplicate video retrieval and near-duplicate video detection, describe a generic framework, and summarize related work.
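The filter-and-refine structure can be sketched roughly as below. The per-frame "signatures" are just integers standing in for real visual fingerprints, and the overlap threshold and alignment scoring are assumptions made only to show the two-stage shape of such a pipeline.

```python
# Two-stage (filter, then refine) matching for near-duplicate video retrieval.
from difflib import SequenceMatcher

def coarse_filter(query_sig, candidates, min_overlap=0.3):
    """Keep only videos sharing enough frame signatures with the query (cheap stage)."""
    qset = set(query_sig)
    return [name for name, sig in candidates.items()
            if len(qset & set(sig)) / max(len(qset), 1) >= min_overlap]

def refine(query_sig, candidates, names):
    """Score surviving candidates by the longest temporally aligned frame run."""
    scores = {}
    for name in names:
        m = SequenceMatcher(None, query_sig, candidates[name])
        scores[name] = m.find_longest_match(0, len(query_sig),
                                            0, len(candidates[name])).size
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

videos = {"original": [1, 2, 3, 4, 5, 6], "reformatted": [1, 2, 3, 4, 5],
          "unrelated": [9, 8, 7]}
query = [0, 1, 2, 3, 4]
print(refine(query, videos, coarse_filter(query, videos)))
```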
The authors have presented a fuzzy rule-based expert system designed to alleviate asthma, a chronic lung disorder, by diagnosing it at an early stage. The knowledge representation is based on patient perception and is organized into a modular structure. The knowledge was expressed as production rules, and meta-rules were used to present the relevant questions to patients in the user interface. Consistent with this knowledge representation, the fuzzy inference engine was designed around the symptom modules. The final result of the system is defuzzified in order to provide an assessment of the possibility of asthma for the patient, with verification and validation criteria considered throughout the life cycle.
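To make the fuzzify/infer/defuzzify chain concrete, a toy sketch follows. The membership functions, symptom variables, and rules are invented purely for illustration and are not the cited system's medical knowledge.

```python
# Toy fuzzy rule evaluation with weighted-average (centroid-style) defuzzification.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(wheeze, cough):
    """Inputs on a 0-10 scale; output is a crisp 'possibility of asthma' score."""
    # fuzzify the inputs
    wheeze_high = tri(wheeze, 4, 10, 16)
    cough_high = tri(cough, 4, 10, 16)
    wheeze_low = tri(wheeze, -6, 0, 6)
    # production rules (min models AND); each rule fires toward an output level
    rules = [
        (min(wheeze_high, cough_high), 9.0),   # both severe  -> high possibility
        (wheeze_high, 6.0),                    # wheeze alone -> moderate
        (wheeze_low, 1.0),                     # little wheeze -> low
    ]
    # defuzzify: weighted average of output levels by rule firing strength
    num = sum(strength * level for strength, level in rules)
    den = sum(strength for strength, _ in rules) or 1.0
    return num / den

print(round(assess(wheeze=8, cough=7), 2))
```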
The goal is to develop an embedded web server system to keep records and manage the data of an educational institute. The Head of Department can also monitor teachers' daily and weekly performance.
II. RELATED WORK
In 2009, the embedded database SQLite was already widely applied in data management for embedded environments such as mobile devices, industrial control, and information appliances. Its advantages of stability and reliability, speed and high efficiency, portability, and so on give it a unique position among the main embedded databases. The cited work describes the definition, basic characteristics, structure, and key technologies of embedded databases, and analyses the features, architecture, and main interface functions of SQLite. With the popularity of intelligent appliances, the formation of the mobile computing environment, and the rise of mobile commerce, embedded databases have become a current focus of study. SQLite has a small core, is open source, and stores the entire database in a single file, so it is very easy to copy, move, and share database files across platforms. It can adapt to the needs of embedded systems and is very convenient for constructing an embedded database system.
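A small, self-contained example of the kind of record keeping this enables is shown below, using Python's built-in sqlite3 module; the table layout and sample rows are assumptions made only for illustration.

```python
# Minimal SQLite usage: the whole database lives in one file, easy to copy or move.
import sqlite3

conn = sqlite3.connect("institute.db")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS attendance (
                   teacher TEXT, day TEXT, lectures_taken INTEGER)""")
cur.executemany("INSERT INTO attendance VALUES (?, ?, ?)",
                [("A. Kumar", "2015-07-06", 3), ("A. Kumar", "2015-07-07", 2)])
conn.commit()

# weekly performance summary of the kind a Head of Department might review
for row in cur.execute("""SELECT teacher, SUM(lectures_taken)
                          FROM attendance GROUP BY teacher"""):
    print(row)
conn.close()
```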
Text clustering is the application of cluster analysis to textual documents. It has applications in automatic document organization, topic extraction, and fast information retrieval or filtering. It involves the use of descriptors and descriptor extraction; the descriptors are sets of words that describe the contents of a cluster. In general, there are two common families of clustering algorithms. The first is the hierarchical algorithms, which include single link, complete linkage, and group average: documents are clustered into a hierarchical structure by aggregating or dividing, which is suitable for browsing. These algorithms can be further classified into agglomerative (bottom-up) and divisive (top-down) approaches, as sketched below.
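The sketch below clusters a few toy documents hierarchically using TF-IDF descriptors and group-average linkage; the choice of libraries (scikit-learn, SciPy), the cosine distance, and the sample documents are illustrative assumptions.

```python
# Agglomerative (bottom-up) hierarchical clustering of text documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

docs = [
    "page rank and link analysis for web search",
    "hits hubs and authorities on the web graph",
    "fuzzy expert system for asthma diagnosis",
    "fuzzy rules and defuzzification in medicine",
]
tfidf = TfidfVectorizer().fit_transform(docs).toarray()
# group-average linkage over cosine distances between document vectors
tree = linkage(pdist(tfidf, metric="cosine"), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)   # the two web-search documents typically fall in one cluster
```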
1 M.Tech Scholar, 2 Assistant Professor
Department of Computer Science and Engineering, NSS College of Engineering, Palakkad, Kerala, India - 678008
Abstract— Web services have become an essential concern for developers. Users have to select, from a number of available services, the one that best fits their needs, and it is hard to discover the most appropriate web service in a large collection. Quality of service (QoS) depends on a number of parameters, and every QoS attribute has its own effect on the overall quality of service, which changes with the service and the user requirement. However, most research in this field has concentrated on studying each attribute of a web service independently, or has relied on a pre-defined priority among attributes. This article presents an overview of the challenges that arise in selecting an appropriate web service and gives a road map for future research.
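One common way to combine several QoS attributes into a single ranking is simple additive weighting, sketched below; the attribute names, example values, and weights are assumptions for illustration only, not values from this article.

```python
# Multi-attribute QoS-based service selection via simple additive weighting.

services = {
    # response_time (ms, lower is better), availability (%), cost ($, lower is better)
    "S1": {"response_time": 120, "availability": 99.5, "cost": 0.020},
    "S2": {"response_time": 300, "availability": 99.9, "cost": 0.005},
    "S3": {"response_time": 150, "availability": 98.0, "cost": 0.010},
}
weights = {"response_time": 0.4, "availability": 0.4, "cost": 0.2}
cost_like = {"response_time", "cost"}            # attributes where lower is better

def normalize(attr, value):
    """Min-max normalize an attribute to [0, 1], flipping cost-type attributes."""
    vals = [s[attr] for s in services.values()]
    lo, hi = min(vals), max(vals)
    if hi == lo:
        return 1.0
    x = (value - lo) / (hi - lo)
    return 1.0 - x if attr in cost_like else x

scores = {name: sum(weights[a] * normalize(a, attrs[a]) for a in weights)
          for name, attrs in services.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```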
The web services group is part of the Mental Health Informatics Section in the VHA Office of Mental Health Services (VACO-10P4M)
Our mission is to use the Internet and emerging technologies to support the delivery of evidence-based, recovery-oriented mental health services to Veterans and their families.
We addressed the limitations of the Web-based survey to the extent possible. Respondents who found the survey were assumed to be individuals with some interest in ABC who would be able to provide valid information. Responses that contained too many blank variables were excluded from the analysis. Further, we analyzed the data both for only those respondents who identified themselves as managers, owner-managers, or accountants and for the full sample, which also included respondents who identified themselves as holding some other position in the firm. The results from the smaller group were consistent with those found for the larger sample, and the latter results are reported in this paper. The results reported in this study should be interpreted with consideration for the limitations imposed by the Web-based survey methodology.
ABSTRACT: Voting is essential to modern democratic societies, and it is becoming very important to make the voting process easier and more efficient. An Android and web-based application for online voting should be technically implemented in a way that enforces authenticated user requirements. The proposed system is implemented to allow each and every voter to actively participate in the election process; this is done through an Android application that accepts the votes of different voters. Online voting through an Android and web-based application makes the voting process more reliable and efficient, and allows every voter to participate actively so that they can become familiar with the candidates and select the appropriate candidate. The aim is to provide a convenient, easy, and safe way to capture and count votes in an election. E-voting can be a cost-effective way to conduct a voting procedure, to attract voters to participate, and to provide a facility for interaction between voters and candidates. The main goal of the project is to define a voting process that enables voters to cast a secure and secret ballot over a network, since the traditional voting process is time consuming and prone to security breaches.
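The server-side vote-casting logic the abstract describes (authenticate the voter, reject duplicate votes, keep the ballot secret) can be sketched as follows; the credential check, data structures, and sample voters are placeholders for illustration and not a production authentication scheme.

```python
# Sketch of vote casting: authenticate, prevent double voting, store an anonymous tally.
from collections import Counter

registered = {"VOTER001": "s3cret", "VOTER002": "pa55"}   # hypothetical credentials
has_voted = set()
tally = Counter()

def cast_vote(voter_id, password, candidate):
    if registered.get(voter_id) != password:
        return "authentication failed"
    if voter_id in has_voted:
        return "duplicate vote rejected"
    has_voted.add(voter_id)
    tally[candidate] += 1          # ballot stored with no link back to the voter
    return "vote accepted"

print(cast_vote("VOTER001", "s3cret", "Candidate A"))
print(cast_vote("VOTER001", "s3cret", "Candidate B"))   # rejected: already voted
print(tally.most_common())
```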
We are going to develop a web portal for searching for a wedding hall or lawn, accessible from any computer. The portal is used to check the availability of a wedding hall or lawn, so there is no need to visit each place; because detailed information about the halls is available on the same portal, it saves the user's time and money. A simple list of wedding halls does not give detailed information about features and facilities, so the user would otherwise have to spend time and money searching for a hall or lawn. With the help of this web portal, the user can easily get an area-wise listing of wedding halls and lawns with detailed information as well as their availability for a particular date. Information about each individual hall or lawn is stored in a database. A new owner of a hall or lawn can insert his details into the web portal; as soon as he becomes a member, he can edit his information and update the booking dates. The main advantage of a web-based hall management system is that it saves the user's time and money and gives up-to-date information about each hall or lawn and its availability for a particular date. The major problem addressed is the search for a hall or lawn: because of a busy schedule, the user spends a lot of time and money on it. This web portal solves that problem and helps the user save time, money, and energy, because all the information is available on the same portal.
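The core area-wise availability check the portal performs could look roughly like the sketch below; the hall records, areas, and booking dates are invented sample data, and a real system would read them from the database mentioned above.

```python
# Sketch of an area-wise availability check for halls/lawns on a given date.

halls = [
    {"name": "Sunrise Lawn", "area": "Kothrud", "capacity": 500,
     "booked_dates": {"2016-12-10", "2016-12-11"}},
    {"name": "Royal Hall", "area": "Kothrud", "capacity": 300,
     "booked_dates": {"2016-12-12"}},
    {"name": "Green Garden", "area": "Hadapsar", "capacity": 800,
     "booked_dates": set()},
]

def available(area, date):
    """List halls in the requested area that are free on the requested date."""
    return [h["name"] for h in halls
            if h["area"] == area and date not in h["booked_dates"]]

print(available("Kothrud", "2016-12-10"))   # -> ['Royal Hall']
```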