11. Authorised knowledge versus distributed knowledge. Web 2.0 presents a very different way of producing and validating knowledge. The ease with which one can publish on the web, and the wide geographic and demographic reach that the web facilitates, enable a more democratic form of knowledge production and validation. Crook outlines three points, based on Keen's arguments about the "cult of the amateur" (2007, in Crook 2008, p. 46), against the effect that Web 2.0 has had on the promotion and publication of cultural knowledge. Firstly, that contributions on Web 2.0 are
It is generally, though erroneously, assumed that language planning is only under the jurisdiction of the big national and international organizations, disregarding what happens at the micro and meso levels (or disregarding the role of micro and meso factors). Indeed, the intra- and inter-generational processes of language transmission can already be analyzed at the family level, though this is an area which is not easily accessible from the outside. Within a monolingual family there may be norms which are more or less explicit, or more or less fixed, in relation to a set of linguistic aspects such as cultural and language identity, the type of language used in a given context, modes of communication, and phonetic, lexical and expressive choices and practices. In cases of mixed marriage a second language comes into play, and parents can decide either to teach it to their children or to abandon it if they think it is not useful in their community. In cases of migration, the communication between
Interestingly, two-way links were the goal of early hypertext systems like Xanadu. Hypertext purists have celebrated trackbacks as a step towards two-way links. But note that trackbacks are not properly two-way; rather, they are really (potentially) symmetrical one-way links that create the effect of two-way links. The difference may seem subtle, but in practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn, which require acknowledgment by the recipient in order to establish a connection, lack the same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo-sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set watch lists: any user can subscribe to any other user's photostream via RSS. The object of attention is notified, but does not have to approve the connection.)
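The structural difference between the two link models can be sketched as two toy data structures; the class and method names below are illustrative, not taken from any of the systems named above.

```python
# Toy contrast between the web's asymmetric link model (Flickr-style
# watch lists) and a mutual-acknowledgment model (Friendster/LinkedIn
# style). Names are illustrative assumptions, not real APIs.

class WatchList:
    """Asymmetric: any user may subscribe to any other; no approval."""
    def __init__(self):
        self.subs = set()

    def subscribe(self, follower, target):
        self.subs.add((follower, target))  # effective immediately

class MutualNetwork:
    """Symmetric: a connection exists only once the recipient accepts."""
    def __init__(self):
        self.pending = set()
        self.connections = set()

    def request(self, sender, recipient):
        self.pending.add((sender, recipient))

    def accept(self, sender, recipient):
        if (sender, recipient) in self.pending:
            self.pending.discard((sender, recipient))
            self.connections.add(frozenset((sender, recipient)))
```

In the first model a link costs a single write by one party, which is why it scales like the web; in the second, every edge requires cooperation from both endpoints, which is the scalability limit the paragraph describes.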
We see this emerging already. E-Science has tended to use Web Services while adopting a growing number of Web 2.0 goodies like blogs and wikis. Web 2.0 storage and computing services like Amazon S3 and EC2 are also growing in popularity. We follow myExperiment's view that one should embrace useful Web 2.0 features and technologies and integrate them with Web Service and OGSA Grids into operational e-Science systems. For example, user-interface Gadgets have some features lacking in portlets, while some find mashups an easier approach to service composition than Grid workflow. We are working on ways to go back and forth between Gadgets and Portlets so that, for example, we can use a Portlet interface to a service to generate a Google Sidebar Gadget. Naively, it would be good to build a "Programmable Broad Grid (.org)" site to add the missing (largely Web Service) Narrow Grid systems, services and APIs to those at ref. 5.
R Statistical Services and Community Models: A common task in cheminformatics is the development of predictive models. These may be simple linear regression models or more complex models such as support vector machines or random forests. The traditional approach has been to develop the model and report its statistics in a publication. Such a mode of dissemination precludes any form of machine (or even human) interaction with the model. An alternative approach is to provide a web page front end to the software that can evaluate the model. Such an approach is certainly an improvement, but still faces the restriction that one must manually navigate to the web page to make use of the model for predictive purposes. Another aspect of such models is that they may require calculation of features that are not publicly available. Thus even if one did have access to the underlying model, one could not calculate the features required as input to the model.
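The idea of exposing a model as a machine-accessible service, rather than as a static publication or a manually navigated web page, can be sketched as follows. The linear model, its coefficients, and the endpoint layout are purely illustrative assumptions, not any published cheminformatics model.

```python
# Sketch: a predictive model served over HTTP so that other programs
# (not just humans at a browser) can query it. The model here is a
# hypothetical fitted linear regression: y = 1.0 + 0.5*x1 + 2.0*x2
import json
from wsgiref.simple_server import make_server

COEFFS = {"intercept": 1.0, "x1": 0.5, "x2": 2.0}

def predict(features):
    """Evaluate the linear model on a dict of named feature values."""
    y = COEFFS["intercept"]
    for name, value in features.items():
        y += COEFFS.get(name, 0.0) * value
    return y

def app(environ, start_response):
    """WSGI endpoint: POST a JSON feature dict, receive a JSON prediction."""
    length = int(environ.get("CONTENT_LENGTH") or 0)
    features = json.loads(environ["wsgi.input"].read(length))
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps({"prediction": predict(features)}).encode()]

# To serve:  make_server("", 8000, app).serve_forever()
# Then e.g.: curl -d '{"x1": 2, "x2": 3}' http://localhost:8000/
```

Because the interface is plain HTTP and JSON, any script can embed the model in a larger workflow, which is exactly the machine interaction that a PDF table of coefficients precludes. The feature-availability problem remains: the service is only useful if clients can compute the inputs it expects.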
Other contributors to Warwick's Facebook presence include our politics and international studies librarian, who has started to experiment with Facebook as a way of fielding subject-specific enquiries; a librarian at Wolverhampton University learning centres who provided us with the basic code from which our library links box was created; and one of our teaching grid advisors with whom I spent a fun afternoon trying to get our blogs to feed to Facebook pages after the recent interface changes.
social aspect of these tools is very prominent. This requires reconciliation between social networks and web IDEs. Collaborative editing systems are real-time groupware that allows team members to simultaneously edit shared documents from different sites. With the advent of Web 2.0, several projects have started covering the different requirements of users. Google Docs is the most successful real-time collaborative editor for office documents. Adopting real-time editing techniques in software engineering systems, specifically IDEs, will provide great added value and will have a large impact on the performance of these systems.
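The real-time editing technique behind editors such as Google Docs is commonly operational transformation (OT), which rewrites concurrent edits so that all sites converge to the same document. A minimal sketch, handling only concurrent single-character inserts (a real system also transforms deletes, handles position ties deterministically, and tracks causality):

```python
# Minimal operational-transformation sketch for concurrent inserts.
# An operation is (position, character).

def transform(op_a, op_b):
    """Shift op_a's position to account for concurrent insert op_b."""
    pos_a, char_a = op_a
    pos_b, _ = op_b
    if pos_b <= pos_a:
        return (pos_a + 1, char_a)  # op_b landed at or before op_a
    return op_a

def apply_insert(doc, op):
    pos, char = op
    return doc[:pos] + char + doc[pos:]

# Two sites edit "abc" concurrently: A inserts "X" at 1, B inserts "Y" at 2.
doc = "abc"
op_a, op_b = (1, "X"), (2, "Y")
site_a = apply_insert(apply_insert(doc, op_a), transform(op_b, op_a))
site_b = apply_insert(apply_insert(doc, op_b), transform(op_a, op_b))
assert site_a == site_b == "aXbYc"  # both sites converge
```

Each site applies its own edit immediately, then transforms the remote edit against it before applying; convergence without locking is what makes the technique suitable for editing source files in a shared IDE.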
This section presents a brief history and critique of mobile learning research, indicating the research gaps that this study attempts to fill, and situates the research project within the context of current mobile learning activity. The twenty-first century has seen the consolidation and maturing of m-learning research (Traxler, 2008), while the increase in m-learning-focused conferences (e.g., MLearn, Handheld Learning, m-ICTE), research projects and briefing papers from organizations like JISC, and articles in educational journals like Educause, JCAL, and so forth, demonstrates a growing general interest in m-learning. Many early m-learning studies were relatively short-term pilot studies and lacked rigor in evaluation and epistemological underpinnings (Traxler & Kukulska-Hulme, 2005), and many studies focused on content delivery for small-screen devices and the personal digital assistant capabilities of mobile devices rather than leveraging the potential of mobile devices for collaborative learning, as recommended by Hoppe, Joiner, Milrad, and Sharples (2003). In recent years there has been a flurry of m-learning research and case studies, particularly from the UK. M-learning and Web 2.0 technologies have been identified as emerging tools to enhance teaching and learning (Anderson, 2007; Becta, 2007; Johnson, Levine, & Smith, 2009; McFarlane, Roche, & Triggs, 2007; McLoughlin & Lee, 2008; New Media Consortium, 2007, 2008; Sharples et al., 2007; Traxler, 2007; Trinder, Guiller, Margaryan, Littlejohn, & Nicol, 2008), but are not usually explicitly linked together. Many recent m-learning research projects have focused on the informal learning environment, and often presuppose "self-motivated learners" such as pre-service teachers (Cook, Pachler, & Bradley, 2008). Few studies have yet explicitly bridged both the formal and informal learning contexts within "mainstream" tertiary education.
One exception was the AMULETS (CeLeKT, 2009) project (Advanced Mobile and Ubiquitous Learning Environments for Teachers and Students), which explored "collaboration in context," bridging indoor and outdoor learning experiences using mobile and location-aware devices in both secondary and tertiary scenarios.
In addition, there’s now growing recognition, even in the most stolid of enterprises, of an important shift in customer demand that promises to change the very foundations of how we develop and deploy applications. Customers are now specifying Web 2.0 capabilities in new applications development and even in retrofitting current applications to this new model, and the Web itself is undergoing a shift from a collection of news articles, static forms, and bulletin boards to a virtual application-hosting platform in and of itself. This book is right at the front of this trend for Python developers. Together, we will explore the elements of this change in browser-based applications—from greater dynamism and responsiveness to faster development cycles to greater embrace of social networking. In this book, the objective is to prepare both the corporate and the independent developer to take advantage of this new emerging landscape. As the world of shrink-wrapped boxes of client-side software begins to wane, a new ecology of Rich Internet Applications (RIAs) is emerging to bring both developers and end users new benefits that take full advantage of today’s connected world. Python developers who can reap the benefits of this new ecology will prosper and stay at the front of their professional ranks.
As has been pointed out in [19], this has an important implication for Web interfaces in general and for science gateways in particular. Following the terminology of Cooper [20], traditional Web browser applications, even very sophisticated ones, are still only "transitory" applications rather than "sovereign" applications such as word processors and integrated development environment (IDE) tools. Transitory applications are intended for use only over short periods of time; Web mail and Web calendar applications (and all of electronic commerce) are examples, but they are not suitable for day-long, continuous usage. In contrast, one commonly uses a word processor, an IDE such as Eclipse, or other desktop tools for hours at a time. Rich Internet Applications for collaborative text editing, code development, spreadsheets, and so on are beginning to emerge and demonstrate that properly developed Web applications can indeed be sovereign.
In Figure 27 (Occurrence of Information Quality Criteria in Media Provision websites) we can see that the Process-Pragmatic Criteria are of relative importance. In this case, that is because most of these information quality criteria are expected factors in a Media Provision context. Latency and Response Time gain additional importance when the media provided in a Media Provision context become more data-intensive, as is the case with (HD) movies, songs and large pictures. With respect to Currency and Timeliness we see the opposite of what we saw in Collaborative Content Creation websites: in Media Provision we see a focus on Currency instead of a focus on Timeliness. This is to be expected, since Currency is an objectively measurable criterion, which makes it easy to implement methods that rank information objects according to Currency. Since information objects cannot be updated on an ongoing basis, Currency is a good indicator of the expected Timeliness. The last focus of Media Provision websites is Relevancy, which is present at all websites; this is reflected in the fact that all websites have implemented some kind of search engine, although in the case of hedonic websites Relevancy becomes of smaller importance. We observed that Understandability was of no concern to any of the Media Provision websites. This was expected for photo and video sharing websites, but a surprise for blogging websites. It turns out that the ranking of information objects according to
- This compound is referenced in 20 journal articles published in the last 5 years.
- Similar compounds are associated with the words "toxic" and "death" in 280 web pages.
- It appears to be covered under 3 patents.
- It has been shown to be active in 5 screens.
- Computer models predict it to show some activity against 8 protein targets.