Abstract. Internet search (or perhaps more accurately ‘web-search’) has grown exponentially over the last decade at an even more rapid rate than the Internet itself. Starting from nothing in the 1990s, today search is a multi-billion dollar business. Search engine providers such as Google and Yahoo! have become household names, and the use of a search engine, like use of the Web, is now a part of everyday life. The rapid growth of online search and its growing centrality to the ecology of the Internet raise a variety of questions for economists to answer. Why is the search engine market so concentrated and will it evolve towards monopoly? What are the implications of this concentration for different ‘participants’ (consumers, search engines, advertisers)? Does the fact that search engines act as ‘information gatekeepers’, determining, in effect, what can be found on the web, mean that search deserves particularly close attention from policy-makers? This paper supplies empirical and theoretical material with which to examine many of these questions. In particular, we (a) show that the already large levels of concentration are likely to continue, (b) identify the consequences, negative and positive, of this outcome, and (c) discuss the possible regulatory interventions that policy-makers could utilize to address these.
An attractive property of the above methods is that they can be implemented using commercial Internet search engines, which makes it possible to run the algorithms over huge corpora. At the same time, the capabilities of search engines are limited by their query languages, which are not powerful enough to solve complex natural language processing tasks. When whole documents are available for processing, additional information can be used. For example, in (Shinzato and Torisawa, 2004) the structure of HTML documents was utilized. The key assumption is that terms appearing in an HTML enumeration list may be hyponyms of some term, and that this hypernym term is likely to appear in the text preceding the list. Large text corpora, in combination with lexical patterns, have also been used to solve other problems, such as the automatic construction of attribute-word lists (Tokunaga and Torisawa, 2005).
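To make the list-based heuristic concrete, the following minimal Python sketch (standard library only) pairs the items of an HTML enumeration list with a crude hypernym candidate taken from the text that precedes the list. The single-last-word heuristic and the toy input are our own simplifications for illustration, not the actual method of Shinzato and Torisawa (2004).

```python
# Sketch: extract (hypernym candidate, hyponym candidates) pairs from HTML lists.
from html.parser import HTMLParser

class ListHyponymExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.preceding_text = []  # text seen since the last list ended
        self.in_item = False
        self.items = []           # <li> texts: hyponym candidates
        self.pairs = []           # (hypernym candidate, [hyponym candidates])

    def handle_starttag(self, tag, attrs):
        if tag in ("ul", "ol"):
            self.items = []
        elif tag == "li":
            self.in_item = True

    def handle_data(self, data):
        text = data.strip()
        if text:
            (self.items if self.in_item else self.preceding_text).append(text)

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False
        elif tag in ("ul", "ol") and self.items:
            words = " ".join(self.preceding_text).split()
            # Crude heuristic: take the last word before the list as the hypernym.
            hypernym = words[-1].rstrip(":,") if words else None
            self.pairs.append((hypernym, self.items))
            self.preceding_text = []

extractor = ListHyponymExtractor()
extractor.feed("<p>Popular search engines:</p><ul><li>Google</li><li>Yahoo</li></ul>")
print(extractor.pairs)  # [('engines', ['Google', 'Yahoo'])]
```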
Abstract - Search hits, i.e. the number of hits returned by an internet search engine for a particular term, refers to the estimated number of pages on the World Wide Web that contain the term. In this paper, we propose an extraction method to access the hit count and analyze the reliability of using it to determine the truth values of statements, or to differentiate correct terms from erroneous ones. For example, this can be used to determine the correct spelling of places, to detect grammatical errors, and even for simple fact-checking. Our findings suggest a positive correlation between the number of hits and the ‘correctness’ of the search phrase.
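The core decision rule this abstract describes is simple enough to sketch. In the Python sketch below, `hit_count` is a placeholder of our own: a real deployment would wire it to a search engine API, and providers differ in how (and how reliably) they report estimated hit counts.

```python
# Sketch of hit-count-based plausibility ranking, assuming some backend API.
def hit_count(query: str) -> int:
    """Hypothetical: return the engine's estimated page count for `query`."""
    raise NotImplementedError("wire this to a real search engine API")

def most_plausible(candidates: list[str]) -> str:
    # The paper's premise: the variant with the most hits is most likely correct.
    return max(candidates, key=hit_count)

# Usage: pick whichever spelling the engine reports more pages for, e.g.
#   most_plausible(["Massachusetts", "Massachusets"])
```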
The contribution of this work to the existing literature is two-fold. First, this is the first study to our knowledge that uses the Inflation Expectations Survey of Households data to validate the presence of Carroll-type epidemiological sources of inflation expectations formation in India. Second, we depart from all the epidemiology-based studies mentioned above as far as the common source of information is concerned. Instead of considering the forecasts made by professional forecasters, or those provided by the news media, as the common source of information, we hypothesize that the Internet is the common source through which the general public updates its future inflation expectations. This is not an implausible assumption in the Indian context, since the number of Internet users in India is the second highest in the world.
Background: The benefits of the internet for health information are enormous. These include meeting the health and academic needs of medical students. The use of the internet for health information had not been researched among the pioneer medical students of Abubakar Tafawa Balewa University (ATBU), Bauchi, Nigeria. The study therefore examined the pattern of internet use for health information among these students. Objectives: The study assessed: the frequency of use of the internet, and of the internet for health information; the nature of the health information accessed; the search engines and websites accessed for obtaining health information; the perceived helpfulness of the internet for health information; and the perceived barriers to the use of the internet for health information. Method: Self-designed, 25-item questionnaires were administered to 40 pioneer medical students of ATBU. Data were analysed using SPSS version 23 and Microsoft Excel 2013. Results: The response (n=40) was high for daily internet access and for daily internet use for health information. The health information sourced was mostly for academic research and personal health information. Most students perceived the internet as an easy and helpful tool, and mostly utilised the Google search engine and the PubMed website. Poor internet access, poor internet search skills, and costly phones were the perceived barriers to internet use for health information. Conclusion: The students made good use of the internet for health information. However, there are gaps in the optimal use of websites and search engines for health information. Strengthening e-health and improving internet search skills will optimize health information from the internet.
Two research questions were addressed. First, what are the differences between internet searchers and non-searchers for health-related information among current and former smokers? Second, does searching the internet for health-related information predict current smoking status in a multivariate model that controls for variations in sociodemographic and family characteristics? Data collected from 10,929 current and former smokers who participated in the 2009 National Health Interview Survey showed significant differences in sociodemographic and family characteristics between searchers and non-searchers. Importantly, searching the internet for health-related information made an independent contribution to the prediction of current smoking status in a multinomial logistic regression model. This study is significant in that it utilized a nationally representative sample to examine the correlation between internet use and smoking behavior, and it supports public health advocates' ongoing efforts to develop and deliver online smoking cessation programs.
1994; Yip, 1998). These facts create a situation in which students dislike learning science; they do not try to find out the reasons for their misunderstanding. Students disfavor science because they do not understand that science is the foundation of knowledge. ICT, the Internet and computers serve as tools and methods in this situation and stimulate students’ interest in science education. Computer-based visualization may help them understand difficult phenomena, since a dynamic visual representation is clearer than the static ones provided in textbooks (Burewicz, Miranowicz, 2002; Penn et al., 2007). Scientists argue that computer-based visualization activates learning motivation (Wu, Shah, 2004; Cook, 2006; Bilbokaitė, 2008; Nieswandt, Shanahan, 2008); it also helps to concentrate attention (Velázquez-Marcano et al., 2004) and to memorize things for a longer period of time (Cook, 2006). Given that students are interested in computers and the Internet, we may assume that students’ motivation and understanding could be enhanced if these tools are used in the science education process. The Internet may be used to give lessons, and student homework assignments may also include Internet searches. Students could search for various types of visualization on the Internet and use this additional information in the learning process. There is not enough information about this phenomenon, especially in the Lithuanian context; research is also lacking. Moreover, we do not know students’ opinions about their use of the Internet in their daily life and in doing homework assignments. Such data could describe the existing situation and help to formulate the next step of research in this area. All this allows us to formulate the following problem question: Do students search for additional visual information on the Internet with the aim of clarifying misunderstood science topics?
The first question was about strategies of online information search, which could be identified through the analysis of students’ performance of educational tasks. Modeling the educational situation with little-known tasks, we identified two strategies of educational online searching: “direct online searching” and “improving online searching”. The strategy of direct online searching was considerably more popular than the strategy of improving online searching (73.3% of students used it in the experimental situation). Direct online searching involved formulating a minimum number of queries to the search engine, quickly viewing the first few links, and selecting one or two information sources to perform the educational task. Students who used the direct online searching strategy did not tend to summarize information from several sources. The strategy of improving online searching involved consistently refining the search query, viewing a large amount of information, and a tendency to synthesize information from different sources.
Although people are enthusiastic about Internet use, physicians often express concern about individuals’ ability to evaluate the vast amount of information and place it in the proper context as it relates to their diagnosis. Moreover, previous studies have shown that some health-related websites contain incomplete, out-of-date, or inaccurate information. 11,13,14 Despite this potential for harm, the majority of participants in our study did not report greater anxiety from their Internet use, and for most people, the Internet served as a valuable resource. This was particularly true for participants with BCC and SCC, of whom only 13% reported any increase in anxiety after their Internet search regarding their diagnosis. In participants with melanoma, we confirmed, with a far more comprehensive survey, our previous findings that approximately one-third of individuals experience an increase in anxiety regarding their diagnosis.
ABSTRACT: Users need to keep their search criteria confidential, because a program residing on a public server may fall into an adversary's hands. Complex-query-based private searching on streaming data helps to achieve this goal. Users can efficiently search for documents that satisfy secret criteria (such as the presence or absence of a hidden combination of keywords) under various cryptographic assumptions. Private searching is a process of dispatching to a public server a program that searches streaming sources of data without revealing the search criteria. Here, documents are selected from the stream on the basis of keyword frequency, such that the frequency of a keyword is required to be higher or lower than a given threshold. This form of query helps in finding more relevant documents, and the search is performed privately with minimum communication complexity. The technique has many applications in intelligence gathering. For example, one can use it to find documents that contain some optional words and relevant words, and lack certain other words, in a single search, with minimum communication complexity, without revealing the keywords.
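To clarify what such a complex query expresses, here is a minimal Python sketch of the plaintext predicate only; this is our own illustration, and in an actual private-search scheme an equivalent predicate is evaluated under encryption, so the server never learns the criteria or which documents matched.

```python
# Sketch: the plaintext semantics of a complex private-search query.
from collections import Counter

def matches(document: str,
            required: set[str],                  # words that must be present
            forbidden: set[str],                 # words that must be absent
            min_freq: dict[str, int]) -> bool:   # word -> frequency threshold
    counts = Counter(document.lower().split())
    if any(counts[w] == 0 for w in required):
        return False
    if any(counts[w] > 0 for w in forbidden):
        return False
    return all(counts[w] >= k for w, k in min_freq.items())

# Usage: matches("the plan the plan meet tonight", {"meet"}, {"cancel"}, {"plan": 2})
# returns True; the same document with only one occurrence of "plan" would not match.
```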
Defect recognition based on picture analysis is one of the most important means of detecting key failure points or damage. However, the recognition rate is low because the number of pictures collected on site is limited, so the training set is small, which leads to insufficient learning and training. Meanwhile, the internet provides a large number of related pictures which can be used as an important data source for training picture-analysis engines. By using an internet spider under certain rules, one can freely collect information from the internet. The internet spider described in this paper can automatically collect related images from the internet, as well as search for similar images by leveraging a local seed picture. This spider also has a parallel version, which gives a significant performance boost.
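As an illustration of the basic mechanism, the Python sketch below implements a single-threaded image-collecting spider using only the standard library: it fetches pages, extracts img sources, and follows same-site links up to a page limit. The parallel crawling and seed-picture similarity search described in the paper are beyond this sketch.

```python
# Sketch: a minimal same-site image spider (single-threaded, standard library).
import urllib.request
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser

class LinkAndImageParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.images = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.images.append(attrs["src"])

def crawl_images(start_url: str, max_pages: int = 20) -> list[str]:
    site = urlparse(start_url).netloc
    queue, seen, found = [start_url], set(), []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable or non-text pages
        parser = LinkAndImageParser()
        parser.feed(html)
        found += [urljoin(url, src) for src in parser.images]
        queue += [urljoin(url, href) for href in parser.links
                  if urlparse(urljoin(url, href)).netloc == site]  # stay on-site
    return found

# Usage: image_urls = crawl_images("https://example.com/")
```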
used in our study allows us to delineate the importance of this research in understanding the elderly’s interactions with the Internet in Europe. Specifically, we explore the determinants of the time that the elderly in Spain spend on two online activities: search and communications. Thus, our evidence shows how older adults interact online, according to certain socio-demographic variables. This will be useful to policy-makers who wish to assess the recently-proven “beneficial” use of online activities, by devising policy instruments to increase the at-home demand for such online activities, in order to contribute to the central objective of the Digital Agenda for Europe, which sets out to ensure universal broadband coverage across the European Union.
The Google hacking procedure is based on certain keywords, which can be used effectively in combination with the internal commands (advanced search operators) of the Google search engine. These commands help hackers narrow down their searches to locate sensitive data or vulnerable devices.
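Some illustrative operator-based queries of this kind (“Google dorks”) are shown below; the operators (site:, filetype:, intitle:, inurl:) are genuine Google search syntax, while the targets are generic placeholders of our own.

```
site:example.com filetype:sql "INSERT INTO"    # exposed database dumps on one site
intitle:"index of" "parent directory" passwd   # open directory listings
inurl:admin intitle:login                      # admin login pages
```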
A visual or object-content descriptor can be either a global feature or a set of local features. A global descriptor uses the visual features of the whole image, while a local descriptor uses the features of regions or objects to describe the image content. These features can be divided into three levels: low, middle, and high [7, 29]. CBIR systems rely on color, texture, and shape, which are low-level image features. CBIR methods search for a specific image based on the visual content of the images. Many CBIR systems have been developed. The techniques, tools, and algorithms they use originate from fields such as statistics and pattern recognition. We discuss some important techniques and methods below.
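To ground the notion of a low-level global feature, the following Python sketch (our own illustration, not a specific system from the literature) computes a coarse RGB color histogram and compares two images with histogram intersection, a classic CBIR similarity measure.

```python
# Sketch: a coarse global colour histogram and histogram-intersection similarity.
def color_histogram(pixels, bins_per_channel=4):
    """pixels: list of (r, g, b) tuples, channel values in 0..255."""
    step = 256 // bins_per_channel
    hist = [0] * bins_per_channel ** 3
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]  # normalize so images of any size compare

def similarity(h1, h2):
    # Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones.
    return sum(min(a, b) for a, b in zip(h1, h2))

# Usage: rank a collection by similarity(color_histogram(query), color_histogram(img)).
```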
IV: I just put it in on Google you know and different ones came up. There was, y’know like, then about these tablets that you take for the disease like them [methotrexate] you know. Then it was saying you can—once you are stabilized everything can be alright, y’know what I mean? [continues] The problem here is one of interaction, both in design and in practice. The interviewer asks about locations (“where you looked”) and specifically mentions Google™ as a way to anchor subsequent talk (in the knowledge that this commonly represents the beginning of the looking process online). For the interviewee, the process is not remarkable; she used Google™ and noticed information relevant to the medication her friend had been prescribed (methotrexate) and a positive prognosis. In a second, individual interview with the same interviewee, she was also unable to specify details, though she did situate this within her broader approach to the Internet that moved between believing information, being unsettled by it, and rejecting the Internet as a source of information.
The indirect positive paths can be interpreted as indicating an improved capability in people with high eHealth literacy to distinguish serious illnesses from less consequential conditions. In a somewhat serious health situation, people with high eHealth literacy will understand that they need to act; for example, they will try to educate themselves on the Internet and consult their doctor. A heightened sense of empowerment, which is an integral part of the system of paths our results show, fits well with this kind of behavior. It may even be that, in contrast to the model presented here, a functional rather than a causal explanation of the correlation between Internet health information seeking and visits to the GP is at work: people with high eHealth literacy in a more serious condition already know they have to see their doctor, and they use the Internet’s potential for self-education as preparation, in order to optimize the consultation.
One of the inherent problems with personalized search is that users are often insecure about handing over otherwise private or personal information about themselves to a search provider. Intuitively, the more a search provider knows about a specific user, the more accurately its search results can be tailored to them; but how are users to trust that the information the search provider maintains about them will not be mishandled, lost, or maliciously used? If users can trust their chosen search providers with their personal information, then the providers can use it to deliver more accurate results with more specifically tailored advertisements. Thus, it should be in the keen interest of all providers of personalized search mechanisms to enhance the user’s privacy surrounding their personalized search services as much as realistically possible. To that end, this paper looks at philosophies and methods to optimize the privacy that users are given when using a typical personalized search service.
The first Internet “search engine”, a tool called “Archie” (shortened from “Archives”), was developed in 1990; it downloaded the directory listings of specified public anonymous FTP (File Transfer Protocol) sites into local files, around once a month. In 1991, “Gopher” was created, which indexed plain text documents. The “Jughead” and “Veronica” programs helped users explore these Gopher indexes. In 1993, the “World Wide Web Wanderer”, the first crawler, was created. Although this crawler was initially used to measure the size of the Web, it was later used to retrieve URLs, which were then stored in a database called “Wandex”, the first web search engine. Another early search engine, “Aliweb” (Archie-Like Indexing for the Web), allowed users to submit the URL of a manually constructed index of their site.
The results of a query submitted to a search engine would arguably be more relevant if the search engine took into account, or integrated, certain specificities related to the query, such as the thematic domain of the research subject, a tool to help users express their research needs, and dedicated algorithms. This article proposes conceptual methods that make it possible to optimize the relevance of results in a well-defined research context. The thematic domain analyzed is the environmental domain of the Congo Basin in Central Africa. The approach adopted is an aggregated analysis of tools and methods
Something I am very excited about is new work that UKCCIS has just started on Digital Resilience. It brings together relevant stakeholders that represent the education sector, parents, industry, expert civil society organisations and children themselves. What do I mean by ‘digital resilience’? Well, it’s all those things we can do to stay safe around people we meet on the internet. Many of you may do it without thinking -