The research reported in this paper presents an analysis of the cognitive processes of students in the senior technical program on food technology, who are asked to solve a contextualized event on systems of linear algebraic equations within the context of matter balance in situations of chemical mixtures. The cognitive analysis is founded on the theories of Conceptual Fields and Events. The analysis focuses on the representations carried out by students regarding the invariants in the schemes they build when they face an event of contextualized mathematics. During the students' problem-solving activity, different types of representation emerge that are appropriate to the context in which the research develops, from which a proposal for their classification is derived.
The ‘sapiential discourse’ (Loader 2014:89) in Proverbs 1:20–33 lends itself to an embodied cognitive analysis. The speaker in the source domain (wisdom personified as a living prophet) addresses a metaphorical group of people by using several conduit metaphorical phrases. The sensory-motor action of speech is used by wisdom, metaphorising her thoughts. Although the hearers are not quoted, their sensory action is depicted in metaphorical terms as well. They hear what is spoken, experience disaster, start acting at a certain stage and call upon wisdom. They form a second source domain in dialogue with the first source domain of wisdom speaking. These two source domains share the same target domain. The way the speaker is depicted in metaphorical terms stands in correlation with the way the hearers are portrayed. This is indicated by the chiastic structure of the strophes in the passage. What is more, the metaphors used for the speaker (wisdom) are all explicated in terms of the metaphors for the hearers, and vice versa – see the repetition of terms in the first and third strophe. Wisdom is what wisdom is in terms of who or what the addressees are. They are who they are in terms of the metaphors for wisdom. The mutual target domain is the contents of the communication passed on by the metaphorically expressed wisdom and the metaphorically pictured hearers. There is, therefore, a coherence of metaphors, a ‘conceptually integrated configuration’ (Jindo 2009:226) and an ‘experiential gestalt’ (Lakoff & Johnson 2003:118).
We used a method of goal-action coding with a ‘think-aloud’ protocol in order to analyze the strategies physicians used to create each summary [8]. This approach draws on the concept of a cognitive walkthrough, a task-based expert system evaluation based on Norman’s Theory of Action that simulates users’ experience with a system by identifying the goal–action sequences required for completion of a specific task. During a think-aloud protocol, individuals with domain knowledge are asked to verbalize or think out loud as they carry out tasks. Analysis: We first focused on a quantitative analysis of the time the physicians spent in the different sections of the EHR and the strategies they employed to navigate the patient record. This analysis was aimed at answering our first two research questions regarding summarization, namely the sources of information physicians rely on and the strategies they follow to prioritize the selection of information. We calculated the portion of time a physician spent in each section of WebCIS as a percentage of the total time he or she spent in WebCIS during that session. For this calculation, we excluded any time during the session that the physician spent writing or revising the summary document and focused on just the time the physicians were viewing the patient record. We also analyzed the order in which each section was visited and how frequently the physicians went back to sections already visited earlier in that session. Using Morae™, we delineated the start of a section visit when the physician clicked on that section’s link and the end of a section visit when he or she clicked on a different section’s link.
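As a minimal sketch of the time-share calculation described above (the section names, timestamps, and helper function are hypothetical illustrations, not part of the Morae™ analysis pipeline):

```python
from collections import defaultdict

def section_time_shares(events, session_end):
    """Compute the share of record-viewing time spent in each EHR section.

    `events` is a chronological list of (timestamp_seconds, section) pairs,
    one per navigation click; a section visit ends when the next click
    occurs (or at `session_end` for the last visit). Time spent writing or
    revising the summary is assumed to be excluded beforehand.
    """
    totals = defaultdict(float)
    for (start, section), (nxt, _) in zip(events, events[1:] + [(session_end, None)]):
        totals[section] += nxt - start
    viewing_total = sum(totals.values())
    return {s: t / viewing_total for s, t in totals.items()}

# Hypothetical session: clicks into three sections over 100 seconds.
clicks = [(0, "labs"), (40, "notes"), (70, "meds")]
shares = section_time_shares(clicks, session_end=100)
# labs: 40 s, notes: 30 s, meds: 30 s of a 100 s viewing total
```

The same event list also supports the visit-order and revisit-frequency analyses, since each click already records which section was entered and when.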
For this project, the objectives stated in the introduction were achieved, although with more time additional functionalities could have been added. The most important part of the project was the research into the tools to be used, as they were all new and had different capabilities. While researching the tools, some ideas changed as the capabilities of those tools were matched against what we wanted to build. Also, at the start, building this project at a very small scale seemed quite easy, but as more nodes were added to the conversation and more functionalities were desired, the complexity and the difficulty started growing. Building a simple bot that always gives the same answer within a fixed, small conversation is easy, but building an interactive chatbot that makes the user feel like they are talking to a human, and that makes the user feel comfortable, is quite a challenge. The interface for the chatbot is not as intuitive to build as it might seem, although the tool's UI in IBM Cloud is quite simple. Beyond the cognitive part, adding a layer of AI on top of what has already been built could give the bot more flexibility, as well as the ability to learn to interact better with users. The bot that has been built is a simple solution for building a CV; it can be improved and, with a bit of polishing, has many applications in the real world.
The data obtained was presented in Ekegusii orthography, and a translation to the nearest gloss in English was provided. The recorded data was transcribed first and thereafter translated. Field notes written during the sessions were used to supplement the recorded data, especially in cases where references to particular items were unclear. The transcribed data was edited in order to produce a clean and organized copy to facilitate recall of information. This was thereafter followed by translation of the copies from Ekegusii to English. The words and phrases that were collected were sorted and classified into different categories by looking at the values to which they related. A list of these categories was then compiled and patterns emerged. In addition, the mental imagery that the words and phrases evoked was explained, and the researcher proceeded to show how these words and phrases were viewed in Ekegusii society. The sociocultural values from the data on the interpretation processes were then mapped from the source domain to the target domain and analyzed using Cognitive Metaphor Theory.
In addition to the analysis of the selected paradigm words with respect to the different data domains, we compared the accuracies achieved with both the SE and Fixed-Set approaches when classifying each of the groups of tweets in the dataset STD. For the evaluation of the SE approach, we performed a leave-one-out cross validation to compute the accuracies achieved on the groups Location and Movies, as they have fewer than 20 tweets each (we do not consider the group Events, as it does not contain any negative tweet). For the other datasets, we performed a 10-fold cross validation, as in the previous experiments. Table VIII presents the obtained results. Note that the accuracies obtained with the SE approach were again higher than those achieved with the Fixed-Set approach (except on Company, with a small difference), which strengthens the idea that a more flexible way to select the paradigm words is beneficial. Also, the performance of the Fixed-Set approach seemed to be more sensitive to the data domain being analyzed.
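The fold-selection rule described above (leave-one-out for groups with fewer than 20 tweets, 10-fold otherwise) can be sketched as follows; the function name and the round-robin fold assignment are illustrative assumptions, not the paper's implementation.

```python
def cv_folds(n_samples, small_threshold=20, k=10):
    """Return (train_indices, test_indices) splits: leave-one-out for
    small groups (fewer than `small_threshold` examples, like the
    Location and Movies groups), k-fold otherwise."""
    k = n_samples if n_samples < small_threshold else k
    # Assign each sample index to one of k folds, round-robin.
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)
    return [(sorted(set(range(n_samples)) - set(f)), f) for f in folds]

# A 12-tweet group -> 12 leave-one-out splits, each holding out 1 tweet;
# a 200-tweet group -> 10 folds of 20 tweets each.
assert len(cv_folds(12)) == 12
assert len(cv_folds(200)) == 10
```

Each group's accuracy would then be averaged over its splits, so small groups still get an unbiased estimate despite their size.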
Another issue to consider is how to communicate the outcome of the analysis performed. Data scientists and analysts are enthusiastic about the insights drawn from the data, but when it comes to converting the analysis into business value they often fail to play their part. Hence the organization must take into account the services offered by people with the skills and expertise not only to analyze the data but also to present the information in a compelling manner [10].
The cognitive work analysis is formed by five interconnected phases, each describing different types of constraints on the task. The method does not require the use of all phases, but can be adapted to the subject of interest and the scope of the project (Jenkins et al., 2008, pp. 16-18). This paper focuses on the first three phases of cognitive work analysis, which are work domain analysis, control task analysis and strategies analysis, since these are thought to give a fundamental understanding of the cognitive work of the truck driver and how the work domain is used. The work domain analysis will in the context of this project describe how and why the objects in the driver’s cabin can be used to successfully perform a right-hand turn. The control task analysis will be used for evaluating what kind of information is needed by the truck drivers to do a right-hand turn, and how this information is used for decision-making. Lastly, the strategies analysis will be used to create a schema which can illustrate when specific objects are being used, for how long, and in what stages during different right-hand turns.
15. Smolensky (1988) remarks that “unlike symbolic tokens, these vectors lie in a topological space, in which some are close together and others are far apart” (p. 14). However, this seems to radically conflate claims about the Connectionist model and claims about its implementation (a conflation that is not unusual in the Connectionist literature, as we’ll see in Part 4). If the space at issue is physical, then Smolensky is committed to extremely strong claims about adjacency relations in the brain; claims which there is, in fact, no reason at all to believe. But if, as seems more plausible, the space at issue is semantical, then what Smolensky says isn’t true. Practically any cognitive theory will imply distance measures between mental representations. In Classical theories, for example, the distance between two representations is plausibly related to the number of computational steps it takes to derive one representation from the other. In Connectionist theories, it is plausibly related to the number of intervening nodes (or to the degree of overlap between vectors, depending on the version of Connectionism one has in mind). The interesting claim is not that an architecture offers a distance measure but that it offers the right distance measure – one that is empirically certifiable.
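As a small illustration of the two Connectionist distance notions just mentioned (distance in the vector space versus degree of overlap between vectors), using hypothetical three-unit distributed representations:

```python
import math

def euclidean(u, v):
    """Distance in the kind of activation-vector space Smolensky describes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def overlap(u, v):
    """Degree of overlap between two vectors (here, cosine similarity),
    an alternative Connectionist distance notion."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical distributed representations over three units.
cat  = [0.9, 0.8, 0.1]
dog  = [0.8, 0.9, 0.2]
book = [0.1, 0.2, 0.9]

# 'cat' and 'dog' come out close together, 'book' far apart - a claim
# about the semantic space, with no commitment about physical adjacency.
assert euclidean(cat, dog) < euclidean(cat, book)
assert overlap(cat, dog) > overlap(cat, book)
```

Either measure induces a topology; the dispute in the text is over which measure, if any, is the empirically right one.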
Background: Elderly individuals who have memory problems without significant limitations in activities of daily living are often diagnosed as having mild cognitive impairment (MCI). Some of these individuals progress to dementia. Several cognitive enhancers (for example donepezil, galantamine, rivastigmine, memantine) have been approved for use in people with Alzheimer’s dementia but their use in patients with MCI is unclear. We aimed to determine the comparative effectiveness, safety, and cost of cognitive enhancers for MCI through a systematic review and network (that is, indirect comparisons) meta-analysis.
Verghese et al. (2006) provide a definition of leisure activities as those which ‘individuals engage in for enjoyment or well-being that are independent of work or activities of daily living’. The impact of a range of leisure pursuits, including physical (Wang et al., 2012), mental (Wilson et al., 2010), and social (Saczynski et al., 2006) activities has been explored, generating suggestions for possible mechanisms of action. A popular theory for the observed advantages of leisure activities is that participation can improve cognitive reserve, a function that allows neurons to communicate with increased efficiency, flexibility and adaptability as well as increasing neuronal capacity (Katzman, 1993; Tucker-Drob et al., 2009).
contributing to trait variability. This requires both unbiased analysis of possible genetic contributions with regard to their genomic location and delivery of functionally interpretable solutions. Genome-wide association studies are such a proposed tool, in which millions of common genetic variants can be individually tested for association with a trait (Visscher, Brown, McCarthy, & Yang, 2012). The elucidation of the genetic underpinnings of brain-related phenotypes has already begun using single-marker analyses (Papassotiropoulos et al., 2011; Papassotiropoulos, Stephan, Huentelman, Hoerndli, Craig, et al., 2006). Yet, this approach does not fully account for the highly polygenic pattern and the inherent biological complexity underlying complex cognitive traits (Papassotiropoulos & de Quervain, 2011). The numerous variants of small effect, which together form the genetic substrate of many complex traits, are unlikely to pass the significance threshold that results from the necessary multiple testing correction procedures. A pragmatic response to this power issue consists in increasing the sample sizes of genome-wide association studies. This initiated the development of large-scale collaborative efforts aiming at gathering multi-centric GWAS data, allowing meta- and mega-analyses of various complex disorders and traits. Increasing the sample sizes successfully led to the identification of additional loci associated with common neuro-psychiatric disorders (Ripke et al., 2013, 2014) and neuro-anatomical traits (Hibar et al., 2015, 2017; Stein et al., 2012).
Multiple trade-off problems exist in the frame structure optimization. From the SUs’ perspective, the lower the probability that sensing errors occur, the more chances the channel can be reused when it is available, and thus the higher the throughput the SU can achieve. Therefore, a tradeoff exists between the sensing length and throughput, which was formulated using this frame structure of SUs [6, 7]. Following each sensing period, the secondary transmission starts when the licensed channel is considered idle by the SU. Otherwise, the SU has to wait until the next frame to sense the licensed channels again before any secondary usage. In , the optimization of the spectrum sensing length has been studied using the sensing-throughput tradeoff metric. Specifically, that paper studied the design of the sensing length to maximize the achievable throughput of a single-channel cognitive radio network, under a constraint on the probability of detection. To provide better service for SUs, it is advisable to aggregate the perceived spectrum opportunities obtained through simultaneous sensing over multiple channels. In , the design of the sensing time has been investigated so as to maximize the average achievable throughput of the multiple channels in a cognitive radio network without causing harmful interference to the PUs or exceeding the power limit of the secondary transmitter. The optimal sensing length is identified for the above problem under an average power constraint. As an extended work of ,  also studied the problem of designing the optimal sensing length that maximizes the throughput of a wideband sensing-based spectrum sharing cognitive radio network and a wideband opportunistic spectrum access cognitive radio network.
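The single-channel sensing-throughput tradeoff described above can be sketched numerically. The sketch uses the classical formulation R(τ) = ((T − τ)/T) · C₀ · (1 − Pfa(τ)) for an energy detector held to a target detection probability; all parameter values (frame length, sampling rate, PU SNR, capacity C₀) are illustrative assumptions, not figures from the cited works.

```python
from statistics import NormalDist
from math import sqrt

Q = lambda x: 1.0 - NormalDist().cdf(x)        # Gaussian tail probability
Qinv = lambda p: NormalDist().inv_cdf(1.0 - p)  # its inverse

def throughput(tau, T=0.1, fs=6e6, snr=0.01, pd_target=0.9, C0=6.6):
    """Average SU throughput for sensing time `tau` within a frame of
    length `T` seconds: R(tau) = ((T - tau)/T) * C0 * (1 - Pfa(tau)),
    where Pfa follows from an energy detector (sampling rate `fs`)
    constrained to detection probability `pd_target` at PU SNR `snr`.
    """
    pfa = Q(sqrt(2 * snr + 1) * Qinv(pd_target) + sqrt(tau * fs) * snr)
    return (T - tau) / T * C0 * (1 - pfa)

# Scan sensing times to locate the interior optimum of the tradeoff:
# sensing too briefly leaves Pfa high (channel rarely judged idle),
# sensing too long leaves little of the frame for transmission.
taus = [i * 1e-4 for i in range(1, 1000)]       # 0.1 ms .. ~100 ms
best = max(taus, key=throughput)
assert 0 < best < 0.1
```

Under a detection-probability constraint, Pfa(τ) falls monotonically with τ while the transmission fraction (T − τ)/T shrinks, so an interior optimal sensing length exists, as the cited works establish analytically.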
Abstract—Poor perception arising from human interpretation of a system interface design may distort critical human judgment about the state of a system. As a result, accidents may occur due to misinterpretation of the information displayed on the screen. In relation to that, this paper describes designing scenarios for system interface design that reflect the user’s working context. A system interface design that is familiar from the working context will help increase user satisfaction and the ease of use of a particular system. Moreover, the process of designing scenarios also leads to the identification of problems and of how experts deal with challenging tasks when using the system. Abstract human thinking, which cannot be gathered in a quantitative way, motivated the authors to employ the Cognitive Task Analysis method to collect system interface design requirements from experts in order to design task scenarios. In general, the experts involved in this study are from the manufacturing industries, where their daily scope of work is in system maintenance tasks. There are five phases involved in Cognitive Task Analysis: define tasks, select participants, task observation, task diagram and knowledge audit. Results from the interview and observation sessions give essential clues for designing scenarios for system interface design, because knowing the correct problem to solve, and providing cues at the needed point in time, will help users interpret information on the system interface.
The second layer of analysis occurs through a systematic examination of the interviews using a constant comparative method (Glaser and Strauss 1967). Specifically, for each question, interviews should be examined to identify patterns in the way respondents interpret and process the question. By making comparisons across all of the interviews, interpretive patterns can be identified and examined for consistency. If a question asks respondents to evaluate ‘‘the system of public benefits and services in their country,’’ for example, it would be important to understand the degree to which respondents are considering the same types of benefits and services. From this layer of analysis, patterns of calculation across respondents can be identified. This is particularly useful in understanding how qualifying clauses such as ‘‘in the past 2 weeks’’ or ‘‘on average’’ impact the way respondents form an answer, whether respondents consistently use the clause in their calculation, and how inconsistencies might impact reliability of the resulting survey data. At this point, it is possible to identify a theoretical framework for understanding the interpretive meaning behind respondents’ answers and the type of information that is ultimately transported through the survey statistic.
Beyond all this beneficial assistance of the CRN in IoT, other issues prevail in the network: resolving the dynamic mobility of CR nodes in the face of heterogeneous radio access combining both cellular and broadband technologies, and preserving energy when a failure of the sink node occurs. The multi-hop transmission of sensed data from the sensed environment to the predefined destination through CR nodes incurs a large end-to-end delay. Owing to the presence of mobile CR nodes in the presented cognitive network, the assistance they deliver is limited to a great extent.
Most efforts have concentrated on single-hop scenarios in cognitive networks, tackling physical layer and/or Medium Access Control (MAC) layer issues . Only very recently has the research community started realizing the potential of multi-hop CRNs, which can open up new and unexplored service possibilities enabling a wide range of pervasive communication applications . The routing problem in cognitive radio networks has similarities with routing in multi-band, multi-hop ad-hoc networks, but with the additional challenge of having to deal with the dynamic behavior of the nodes. In fact, spectrum occupancy is location-dependent, and therefore in a multi-hop path scenario the available spectrum bands may differ at each relay node, as shown in Figure 1. Hence, in multi-hop cognitive radio networks, controlling the interaction between the routing and the spectrum management functionalities is of fundamental importance .
The variety of results with respect to the effects of CSCL scripts on the acquisition of domain-specific knowledge and collaboration skills leads to an important question: What is it that makes certain CSCL scripts effective and other CSCL scripts less effective or even ineffective? A closer look at some of the CSCL scripts that are described in the literature indicates that their designs differ substantially from one another with respect to at least three dimensions. First, the CSCL scripts vary extensively with respect to the collaborative activities they prompt. Here, we focus on transactivity because transactive activities are assumed to be most beneficial for collaborative learning (Chi 2009; Chi and Wylie 2014; Teasley 1997). Second, CSCL scripts differ in terms of how much structure they induce. While some CSCL scripts provide less structuring by only being focused on the play level, others provide rather deeply structured support on the scene level or, even more deeply, on the scriptlet level (Fischer et al. 2013). Third, CSCL scripts vary in the extent to which they are accompanied by additional content-related support. While some CSCL scripts only structure the collaboration, others are coupled with domain-specific support such as worked examples or concept maps. The present meta-analysis covers these three different factors as potential moderators of the effectiveness of CSCL scripts.