The idea behind the integration was to enable students to engage with just one project in the final semester of their final year. All elements of the project could then be synoptically assessed from the one product that they produced. The degree they were studying for had its own specific module that provided additional content for the project, and all the other elements, from planning to product design to enterprise, could be assessed via a variety of methods, from reflection and evaluation to presentations at a final-year showcase. In the past, large double modules were used for the project; however, it is more prudent to divide projects into 15-point elements so that students have the opportunity to retake an element if they fail one part.
Organizations today operate in a complex, unpredictable, competitive and global business environment. This environment demands Internet-based tools that support more collaborative activities and allow the integration of business processes and the sharing of information. Large organizations often have more financial and technical resources than Small and Medium Enterprises (SMEs) to leverage freely available Web 2.0 collaborative tools. Web 2.0 tools provide an efficient and accessible means of encouraging and supporting team members working together on shared objectives. This study investigates twenty available Web 2.0 collaborative tools that illustrate different ways of collaborating and different sets of features. We then organize these features into four major function categories: communication, information sharing, electronic calendar and project management, in order to identify which of the collaborative tools would suit a particular organization. Specifically, this study will raise SMEs' awareness of what currently available Web 2.0 collaborative tools have to offer and help them select the right tools for their organizational needs.
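The category-based selection described above can be sketched in code. In this minimal sketch the tool names and their category assignments are invented for illustration; the four function categories are the ones named in the study:

```python
# The four major function categories used in the study.
CATEGORIES = {"communication", "information sharing",
              "electronic calendar", "project management"}

# Hypothetical catalogue: each Web 2.0 tool tagged with the
# categories its features fall under (names are placeholders).
catalogue = {
    "ToolA": {"communication", "information sharing"},
    "ToolB": {"electronic calendar", "project management"},
    "ToolC": {"communication", "project management"},
}

def shortlist(needs):
    """Return tools whose feature set covers every required category."""
    needs = set(needs)
    assert needs <= CATEGORIES, "unknown category requested"
    return sorted(tool for tool, feats in catalogue.items()
                  if needs <= feats)

print(shortlist({"communication", "project management"}))  # → ['ToolC']
```

An SME would populate the catalogue from the feature review and query it with its own organizational needs; the subset test (`needs <= feats`) keeps only tools covering every required category.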
Fast-moving digital technologies, unrestricted mobile access and vibrant social media have a profound impact on banks’ online strategy: many banks are developing interactive tools that help customers analyze their spending habits and strengthen their money management skills, while some are mobilizing the power of social networks to build their brands and entice consumers to share personal information. Web 2.0 technology is built around this theme of fulfilling the growing needs of a digitally savvy generation coming of age. Web 2.0 technology holds great potential to expand product variety and customization, accelerate service delivery, tap new pools of revenue and deepen customer relationships that boost retention and profitability.
The camera of the phone was used to take a photograph of the detection zone before and after the deposition of the specimen. Since then, many groups have started to develop and enhance the capabilities of phone-based low-cost diagnostic readers. Table 6 presents an overview of recent work in developing phone-based prototypes that can detect a variety of biomarkers for a wide range of diseases with clinically relevant performance. Devices are designed for a broad spectrum of applications, from genetic testing and cancer detection to personalized food allergen monitoring [136, 138–140]. A wide range of strategies has also been devised to enhance signal strength, for instance using quantum dots, Rayleigh/Mie scatter or gold nanoparticles [141–143]. At present, applications of smartphone-based diagnostics for malaria detection can be divided into two categories: phone-based RDT readers, which provide automatic interpretation of results, and phone-based brightfield microscopes, which provide a simple and portable means of visualizing parasites in blood samples [138–149].
Build worksheets. MES matrix worksheets provide the workspace to organize the sequence and relationships among the BBs as they are created. The time and actor coordinates provide for the positioning or repositioning of each BB relative to all others on the worksheet as it is created, streamlining and expediting the investigation and reconstruction tasks. See Figure 3 for an example of how a worksheet being developed during an investigation might look. Figure 3 was prepared with support software (“Investigation Catalyst” for Macintosh); worksheets can also be prepared manually. Color codes can indicate BB attributes. Sequences or durations can be changed as new data become available. The contents of each BB, including sources, can be viewed in detail when needed. Earlier building blocks for actions identifying motives or premeditation would be entered ahead of the actions shown.
The two data sources can now query each other using the same terms. The Oscar Winning Movies site can now query the actor names on the Actor Biographies data source on demand and gain more detail about a specific actor or actress who has starred in a movie. The Actor Biographies site can now query the film plots on the Oscar Winning Movies data source on demand and gain more detail about the films an actor has starred in. With the contextual relationships defined in a formal web ontology, further related information about the actors or films, e.g. film locations, other news events happening on the same day as filming, the birth date of the actor, or films made by the same director, may be found via the linked standard terminology without the user even imagining that the information existed. This happens without the need for transformation, mapping, or contracts being set up between the two sites. It all happens through semantics.
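A minimal sketch of this on-demand enrichment (the URIs, names and fields below are invented for illustration): because both sources key their records by the same shared identifiers, a record can be merged at query time with no prior mapping or contract between the sites:

```python
# Two independent data sources that happen to use the same URI
# (a hypothetical identifier) for the same real-world actor.
oscar_movies = {
    "http://example.org/actor/1": {"film": "Example Film",
                                   "plot": "A short plot."},
}
actor_bios = {
    "http://example.org/actor/1": {"name": "Jane Doe",
                                   "born": "1970-01-01"},
}

def enrich(uri):
    """Merge what both sources know about the same URI, on demand."""
    record = {}
    record.update(oscar_movies.get(uri, {}))
    record.update(actor_bios.get(uri, {}))
    return record

print(enrich("http://example.org/actor/1")["name"])  # → Jane Doe
```

In a real Semantic Web deployment the shared keys would be ontology terms and URIs resolved over HTTP (e.g. via SPARQL), but the principle is the same: agreement on identifiers replaces pairwise data contracts.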
the tools used for Web site capture. Now that the archive has been established, Maureen commented that the next task would be ensuring that the content could be preserved, which is seen as very much an ongoing activity involving people, processes and systems. In developing a preservation approach, the main focus so far has been on documenting system dependencies, considering container and metadata standards, developing preservation workflows and defining preservation strategies. On container standards and metadata, a review has led to WARC being the preferred file format for preservation, but there is also a proposal to use selected additional metadata features from METS (Metadata Encoding and Transmission Standard) and the PREMIS (Preservation Metadata: Implementation Strategies) Data Dictionary. In addition, Maureen explained that “technology watch” needed to be an embedded activity within a preservation archive, and noted that the UK Web Archive had a Technology Watch blog. The presentation ended with a look to the future,
The environmental demands of the rough terrain that therapists have to accommodate in order to provide therapy also pose a significant challenge. To do clinic visits, therapists have to travel long distances (often more than 2 hours of driving in one direction) over rough terrain. Even though most hospitals do have a fleet of cars available for staff, not enough 4 × 4 vehicles or “bakkies” are available to accommodate the various departments attempting to service all the rural clinics (Anonymous therapist, personal communication, June 11, 2012). This finding, relating to the difficulty of providing and accessing health care as a result of a lack of transport in rural areas, is also highlighted in the recent South African Health Review.23 Late admission to hospital reported
possible for the AFF team to focus on the same tasks from one testing cycle to the next. When they gave us the screen shots they had developed, we tweaked the tasks to fit what they gave us. In hindsight, we realize that we should have worked more closely with the AFF team to encourage them to load data that we could use in our tasks, so that the tasks would not change much from one iteration to the next. For example, the geography, year, and topics should have stayed constant so that the comparison across iterations could have been more reliable. In future testing, we plan to set this “consistency standard” with designers and developers before they create screens to test. For example, if they are to have only one data set loaded, it should be the same data set that was available in an earlier round of testing. However, it is unrealistic to expect that one set of tasks will remain relevant as more functionality is added and as the design changes in response to earlier iterations. Keeping a few common tasks as others are replaced is a realistic expectation. See Table 2 for the tasks (and accuracy, as detailed below) that were repeated across iterations.
As has been pointed out in 19, this has an important implication for Web interfaces in general and for science gateways in particular. Following the terminology of Cooper 20, traditional Web browser applications, even very sophisticated ones, are still only “transitory” applications and not “sovereign” applications such as word processors and integrated development environment (IDE) tools. Transitory applications are intended for use only for short periods of time. Web mail and Web calendar applications (and all of electronic commerce) are examples; they are not suitable for day-long, continuous usage. In contrast, one commonly uses a word processor, an IDE such as Eclipse, or other desktop tools for hours at a time. Rich Internet applications for collaborative text editing, code development, spreadsheets, and so on are beginning to emerge and demonstrate that properly developed Web applications can indeed be sovereign.
information quality criteria might be affected by the implementation of a pattern. This does mean that the pattern in question should be duly implemented and that the right format should be chosen. Also, not every pattern is applicable in every context. The specific contexts in which an implementation is feasible are documented in the ‘Context’ section of each pattern. The matrix suggests that the Declaration of Failure is the most powerful pattern, targeting the largest number of possible problems. This is true, but the pattern is mostly applicable in Collaborative Content Creation contexts. The Splitter pattern has many possible desirable side effects, but which of them are achieved depends on the implementation: splitting at the repository level may have positive effects on Response Time and Availability, whereas splitting at the information object level may have a positive effect on Response Time and Efficiency. The most notable effects, however, are on Conciseness and Completeness. It is also clear that the Process-pragmatic criteria are rarely the target of the patterns. This is caused by the scope of the research, which focuses on Web 2.0, whereas problems with Process-pragmatic information quality criteria occur in all websites. Patterns to solve those problems are therefore outside the scope.
that “although research has provided an abundance of data on key success factors in QI efforts, very little was previously known about how these combine and interact with each other in the improvement process over time.” They comment further that the context of a healthcare system is “a process; dynamic, fluid, and constantly moving, not lumpen, material, or static,” and that “it is the dynamic and ongoing interaction between [the domains of an environment] rather than any one of them individually or independently, that accounts for the effectiveness of a QI intervention,” as well as for “the striking variation between similar QI interventions in different places” (p. 11).
Diebold and Li (2006) employed factorization by modeling the factors as simple time series processes, while Favero, Niu and Sala (2007) applied an affine model for term structure forecasting in their factor processes. Liquidity, however, is not incorporated in this dynamic setting. As the implied yields of benchmark on-the-run issues often carry a liquidity premium caused by trading concentration, this leads to biases in the estimated term structure. Failing to account for it could distort implied spot rates, and factors driving the long-run term structure may be overlooked. Theoretical and empirical attention to the liquidity premium in bond markets is, however, well documented. Duffie, Garleanu and Pedersen (2005) proposed a general theoretical model for the liquidity premium in an OTC market. Vayanos and Wang (2007) argued that the liquidity premium would be more substantial in markets where trading concentrates. Empirically, Amihud and Mendelson (1991) analyzed how liquidity affects Treasury bill yields. Warga (1992) suggested that liquidity is priced such that on-the-run issues have lower returns than off-the-run ones. Elton and Green (1998) also considered trading volume as a proxy for liquidity. There is evidence from international markets as well. Eom, Subramanyam and Uno (2002) studied liquidity effects in the Japanese market, while Diaz, Merrick and Navarro (2006) did so in the Spanish government bond market. Their results indicate that it is crucial, especially in emerging markets, to control for the liquidity concentration effect when estimating the yield curve.
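For reference, the dynamic factorization of Diebold and Li (2006) is the dynamic Nelson-Siegel specification, in which the yield at maturity tau is driven by three time-varying factors (level, slope and curvature):

```latex
y_t(\tau) \;=\; \beta_{1t}
  \;+\; \beta_{2t}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau}\right)
  \;+\; \beta_{3t}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right),
```

with the factors \(\beta_{it}\) modeled as simple autoregressive processes, e.g. \(\beta_{it} = c_i + \phi_i \beta_{i,t-1} + \eta_{it}\). Note that the specification contains no liquidity term, which is precisely the gap the passage highlights: an on-the-run liquidity premium embedded in the observed yields is absorbed into the estimated factors.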
Several authors have attempted to categorise the various communication tools. One way to differentiate communication technology tools is to use the concepts of “Push” and “Pull” technologies. Arguably, Push applies to channels: the email arrives in your inbox not because you wanted to see it but because someone sent (pushed) it to you. Further examples of Push technology include radio, television and newspapers (Franklin and Zdonik, 1998), and some of the early research on push technologies dates back to the 1980s, such as the Boston Community Information Systems at the Massachusetts Institute of Technology (Gifford, 1990). Pull applies to the ‘traditional’ platform-centred tools such as web pages: you have to consciously navigate to a certain source and point your web browser at a certain web address so that the pages can be downloaded (or pulled) to your computer. Although the technical process of upload and download is similar (Franklin and Zdonik, 1998), there remains a conceptual difference between the two, namely the perceived activity of the information recipient. The growth of the Internet and the increasing use of Push technology were once forecast to bring about the demise of the web browser (Franklin and Zdonik, 1998). We highlight this development in this paper because the excitement surrounding the current passion for Web 2.0 technologies research bears some similarities to that of the 1990s discussions of Push and Pull technologies.
This compound is referenced in 20 journal articles published in the last 5 years.
Similar compounds are associated with the words “toxic” and “death” in 280 web pages.
It appears to be covered under 3 patents.
It has been shown to be active in 5 screens.
Computer models predict it to show some activity against 8 protein targets.
The lessons learned from social networking sites and the related research have produced a number of reasons for adopting Web 2.0 technologies in the corporate environment. The users of Web 2.0 technologies such as Facebook, YouTube and MySpace are at the same time clients and employees of organisations; therefore their expectations of the use of technology in the corporate world are changing. The evolution of technology in organisations is rapidly highlighting the potential benefits of Web 2.0 for corporate environments. Customers are empowered to voice their opinions and help organisations develop products and services. Communication between businesses could be improved by breaking down the barriers of formal interactions, again highlighting the need for Customer Relationship Networking 2.0 (CRN 2.0) technologies. This positioning paper suggests that the information systems research agenda should include CRN 2.0 and, in particular, discuss whether there is a place for social media in the workplace.
In the professional literature dealing with informal education it is hard to find a single definition that is widely accepted by scholars. There is a gap between the considerable amount of activity in this field and the paucity of related scholarship and conceptualization (Romi, 2009). Moreover, the field is both complex and multi-dimensional, and many scholars have concluded that the very attempt to define informal education defies its unique essence (Romi & Shmeida, 2007a). The names given to this field derive from how they relate to formal education: "informal education" and "non-formal education"; and, in a positive vein: "supplementary education", "alternative education", "extra-mural education", "extra-curricular activities" and "social-community education".
Facebook is true social software. It is a place online where individuals create profiles to give out information about themselves, and where many people interact with messages that are both public and private. Facebook enables people to play games and share trivial communication in a way that keeps them in touch with far more of their friends and contacts than they might otherwise have done. Which is pretty much what other sites like MySpace, Bebo, Ning and even LinkedIn do, to some extent or another. But Facebook seems to have the highest profile in the academic sector, perhaps because it grew out of networks based around universities.

Facebook is used by students: it’s their space, they use it how they want to, and if we try to get involved then we’re in danger of just looking a bit silly. However, there were more than nineteen thousand members of the University of Warwick network within Facebook as of December 2007, which is about four thousand more than when I first investigated in July. With so many students on Facebook, are we missing out if we don’t try anything? And are there ways that we can use Facebook professionally?

I registered for a Facebook account with my work e-mail address. That means that I can be part of the University of Warwick network, so I can see what our students are doing and I can be seen by them. I’ve made “friends” on Facebook with some of my colleagues, and with other library and information professionals I’ve networked with. A Facebook friend is someone who can see information about you. This information appears on your profile page (that is, the page with information about you), but also in their news feeds on their home page. My Facebook home page tells me news about my Facebook friends: when they change their profile pictures, when photos of them are uploaded, when they change their status lines, when they install new applications and so on.