lexical decision task

Can a lexical decision task predict efficiency in the judgment of ambiguous sentences?

The lexical decision task revealed an adequate index of reliability. In addition, this task has been extensively used and validated in previous research (see Araújo et al., 2015; Haro et al., 2017; Hicks et al., 2017; or Oliveira, 2014; Oliveira & Justi, 2017 for studies using lexical decision tasks in Portuguese). In the lexical decision task, the correct percentages for regular words and quasi-words were higher than those found by Oliveira (2014), who reported correct percentages of 89.16% (S.D. = 5.78) for regular words and 84.93% (S.D. = 8.53) for quasi-words. No such difference was found for pseudo-words: Oliveira (2014) found a correct percentage of 97.44% (S.D. = 2.34), which was expected, since these words do not exist and university students should therefore have had no major difficulty rejecting them. In contrast, the average trial time was much faster than in Oliveira’s study. The slowest category was quasi-words, with a reaction time of 808.57 ms (S.D. = 156.55); the fastest category was pseudo-words. The difference in correct percentages between these two variables can be understood as a speed-accuracy tradeoff: people who read quickly lose accuracy, and vice versa (Heitz, 2014).
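The speed-accuracy tradeoff described here is often summarised with a single index. As an illustrative sketch (the trial data below are invented, not taken from the study), the inverse efficiency score of Townsend and Ashby (mean correct reaction time divided by proportion correct) combines speed and accuracy into one number:

```python
# Toy illustration of a speed-accuracy tradeoff summary: the inverse
# efficiency score (IES = mean correct RT / proportion correct).
# All trial data below are fabricated for illustration.

def inverse_efficiency(rts_ms, correct):
    """Mean RT over correct trials divided by proportion correct."""
    correct_rts = [rt for rt, ok in zip(rts_ms, correct) if ok]
    mean_rt = sum(correct_rts) / len(correct_rts)
    accuracy = sum(correct) / len(correct)
    return mean_rt / accuracy

# A fast-but-sloppy responder and a slow-but-accurate one end up closer
# on IES than their raw RTs suggest, which is the point of the index.
fast = inverse_efficiency([500, 520, 480, 510], [True, True, False, True])
slow = inverse_efficiency([800, 820, 790, 810], [True, True, True, True])
```

A fast responder at 75% accuracy is penalised (510 ms / 0.75 = 680), so comparisons across participants are less distorted by individual speed-accuracy strategies.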

Comparing Character level Neural Language Models Using a Lexical Decision Task

What is the information captured by neural network models of language? We address this question in the case of character-level recurrent neural language models. These models do not have explicit word representations; do they acquire implicit ones? We assess the lexical capacity of a network using the lexical decision task common in psycholinguistics: the system is required to decide whether or not a string of characters forms a word. We explore how accuracy on this task is affected by the architecture of the network, focusing on cell type (LSTM vs. SRN), depth and width. We also compare these architectural properties to a simple count of the parameters of the network. The overall number of parameters in the network turns out to be the most important predictor of accuracy; in particular, there is little evidence that deeper networks are beneficial for this task.
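The decision rule described here can be sketched without the neural machinery. Below, a character bigram model stands in for the paper's LSTM/SRN language models (a deliberate simplification): score a string by its average per-character log-probability and threshold it to decide wordness. The lexicon, smoothing constants, and threshold are all illustrative assumptions.

```python
import math
from collections import Counter

# Character-level lexical decision sketch: train a smoothed character
# bigram model on a tiny lexicon, then call a string a "word" if its
# average per-character log-probability clears a threshold.

def train_bigram(words):
    counts, context = Counter(), Counter()
    for w in words:
        padded = "^" + w + "$"          # ^ = start marker, $ = end marker
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def avg_logprob(word, counts, context, alpha=1.0, vocab=28):
    # add-alpha smoothing so unseen bigrams get nonzero probability;
    # vocab=28 assumes 26 letters plus the two markers
    padded = "^" + word + "$"
    total = 0.0
    for a, b in zip(padded, padded[1:]):
        p = (counts[(a, b)] + alpha) / (context[a] + alpha * vocab)
        total += math.log(p)
    return total / (len(padded) - 1)

lexicon = ["cat", "car", "care", "core", "bore", "bare", "bar"]
counts, context = train_bigram(lexicon)

def is_word(candidate, threshold=-2.5):
    return avg_logprob(candidate, counts, context) >= threshold
```

The paper's question is then whether such lexical knowledge emerges implicitly inside a network trained only to predict the next character, rather than from explicit counts as here.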

The Effect of Physical Weight and Stimulus Spatial Location on Lexical Decision: Implications for Embodied Cognition

The lexical decision task was used because it allows measures such as response times and ratings to be gathered on the words seen. Past research into embodied cognition has found that differences in word position can affect judgements of the words presented: words higher up received a more positive response than words presented lower down. The lexical decision task with words at different heights allowed the researcher to measure this possible effect. Weight has also been used in past research and has likewise been found to affect judgements: heavier weight produces judgements of a more serious nature than lighter weight. By combining the lexical decision task with these two stimuli, it was possible to uncover any dominance effect of either the linguistic or bodily state systems.

Sustained meaning activation for polysemous but not homonymous words: Evidence from EEG

Theoretical linguistic accounts of lexical ambiguity distinguish between homonymy, where words that share a lexical form have unrelated meanings, and polysemy, where the meanings are related. The present study explored the psychological reality of this theoretical assumption by asking whether there is evidence that homonyms and polysemes are represented and processed differently in the brain. We investigated the time-course of meaning activation of different types of ambiguous words using EEG. Homonyms and polysemes were each further subdivided into two: unbalanced homonyms (e.g., “coach”) and balanced homonyms (e.g., “match”); metaphorical polysemes (e.g., “mouth”) and metonymic polysemes (e.g., “rabbit”). These four types of ambiguous words were presented as primes in a visual single-word priming delayed lexical decision task employing a long ISI (750 ms). Targets were related to one of the meanings of the primes, or were unrelated. ERPs formed relative to the target onset indicated that the theoretical distinction between homonymy and polysemy was reflected in the N400 brain response. For targets following homonymous primes (both unbalanced and balanced), no effects survived at this long ISI, indicating that both meanings of the prime had already decayed. On the other hand, for polysemous primes (both metaphorical and metonymic), activation was observed for both dominant and subordinate senses. The observed processing differences between homonymy and polysemy provide evidence in support of differential neuro-cognitive representations for the two types of ambiguity. We argue that the polysemous senses act collaboratively to strengthen the representation, facilitating maintenance, while the competitive nature of homonymous meanings leads to decay.

Visual, Auditory and Cross Modal Lexical Decision: A Comparison between Dyslexic and Typical Readers

No significant differences were found between the two groups on the auditory lexical decision task for accuracy or reaction time. These results are not surprising, given that the research tasks at this stage were based only on pseudowords. Thus, for both groups of readers, listening to the stimuli was akin to hearing speech structures. If we accept that the impairment characteristic of dyslexics relates to reading written language, then it would not be plausible to assume that they also have difficulty in relation to spoken language. It is true that reading involves some of the same processes and sources of information that are associated with speech; however, reading is also an independent activity. Processing speech is carried out in a biological perceptual system, while reading depends on a biological mechanism that initially served a different function in humans (such as spoken language) (Catts, 1986).

Pancani_unc_0153M_16616.pdf

previous findings from both the word-in-isolation and sentence reading literature (Grainger et al., 2006; Ledoux, Gordon, Camblin, & Swaab, 2007b; Nagy & Rugg, 1989; Rugg & Nagy, 1989). Earlier, several properties of sentence reading were identified as possible causes for the absence of an early neural marker of preview feature integration: the presence of intervening items, the long lags between prime and target, and the reduced attentional focus on the form-level features of previews in sentence reading. Here, we showed that removing lags and intervening items does indeed restore integration of lower-level preview information early during target processing in sentence reading, but that this effect is still reduced compared to the isolated-word task. No difference between tasks was found for the later effect on the N400 component. The task difference on the early measure may reflect enhanced featural processing in the isolated-word lexical decision task.

A cognitive analysis of reading, spelling and memory impairments in children with literacy disorders

Experiment 7: Case Studies of 2 Children. Task 1: Naming Task, with regular and irregular words as above. Task 2: Lexical Decision Task, with pseudohomophonic non-words and non-words as above.

LEXICAL ACCESS THROUGH PICTURE NAMING TASK: EVIDENCE FROM SEMANTIC PRIMING

Priming refers to a participant’s change or improvement in performance on a cognitive task (e.g., a lexical naming task) as a result of exposure to a stimulus or prior experience (McNamara & Holbrook, 2003). Priming may also reflect a meaning-integration process that occurs after access of the target and affects the decision stage of the task. In psycholinguistic studies, it is well established that the processing of a target word (nurse) is faster and more accurate when it follows a semantically related prime word (doctor) than a semantically unrelated one (bread) (Meyer & Schvaneveldt, 1971). Since then, the semantic priming effect has been studied extensively, especially for associatively related pairs (Lupker, 1984; Neely, 1990). It was later demonstrated between spoken words, using the lexical decision task (Radeau, 1983) and single-word shadowing (Slowiaczek, 1994), and was shown to occur across sensory modalities, for example, between an auditory prime and a visual target (Swinney et al., 1979).

Syllabic Length Effect in Visual Word Recognition

One area of current importance, whose results serve as a basis for many experiments, is the effect of word length on word recognition. Word length comprises several aspects: the number of syllables, letters, morphemes and phonemes. Among these, the number of syllables has received the greatest attention (Ferrand, 2000; Klapp et al., 1973) and appears to be the most important factor in such studies. The majority of word recognition studies have dealt mainly with monosyllabic words, and only a few have investigated the processing of polysyllabic words (Jared & Seidenberg, 1990). The effect of word length on the recognition of words and pseudo-words in the lexical decision task has been studied by different scholars from different perspectives. For example, Ziegler et al. (2001) reported that orthographic consistency determined not only the relative contribution of orthographic versus phonological codes within a given orthography but also the preferred grain size of units that are likely to be functional during reading. In another study, Spieler and Balota (1997) suggested that word frequency alone plays a less significant role than the combined predictive power of frequency, neighborhood density, and orthographic length. Forster and Chambers (1973) examined lexical decision time for samples of words, non-words, and unfamiliar words and reported that naming time was shorter for words than for non-words, and shorter for high-frequency words than for low-frequency words. In yet another study, Content and Peereman (1992) and Ferrand (2000) showed that the number of syllables affects the recognition of low-frequency words but has little bearing on the processing of high-frequency words.

2017_Jadi.pdf

Different types of translation priming are compared in order to determine the levels of connectivity between languages. Cognate priming results in greater facilitation, as measured by shorter reaction times on the target word in the lexical decision task (Davis et al., 2010; Dunabeitia et al., 2009). Non-cognate pairs are similar only semantically and provide a point of comparison. Since cognate priming occurs to a greater degree than non-cognate priming, it suggests that orthographic and

Breaking-down the Ontology Alignment Task with a Lexical Index and Neural Embeddings

Partitioning and modularization techniques have been used extensively within the Semantic Web to improve efficiency when solving the task at hand (e.g., ontology visualization [20,21], ontology reuse [22], ontology debugging [23], ontology classification [24]). Partitioning has also been widely used to reduce the complexity of the ontology alignment task. In the literature there are two major categories of partitioning techniques: independent and dependent. Independent techniques typically use only the structure of the ontologies and are not concerned with the ontology alignment task when performing the partitioning, whereas dependent partitioning methods rely on both the structure of the ontology and the ontology alignment task at hand. Our approach, although we do not compute (non-overlapping) partitions of the ontologies, can be considered a type of dependent technique.

Modeling lexical decision: the form of frequency and diversity effects

Murray and Forster (2004) claimed that rank frequency provided a better account of lexical decision times than either log frequency or power law frequency, the latter being dismissed on the grounds of over-flexibility. We (Adelman & Brown, 2008) argued that (i) Murray and Forster’s use of the relatively small Kučera and Francis (1967) word frequency counts biased the estimates of rank; (ii) the superiority in fit of the power law (and of some other functions) could not all be attributed to over-flexibility in the manner Murray and Forster claimed; and (iii) bootstrapping analyses designed to take flexibility into account gave evidence of systematic deviations from several theoretically-motivated functional forms, including rank and power, but not from some generalizations of the power function. We concluded that the data could not be taken as support for serial search models.
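The core of this debate is comparing how well different functional forms of frequency fit observed decision times. A minimal sketch of that comparison logic (with fabricated frequencies and RTs generated from log frequency, so the outcome only illustrates the mechanics, not the empirical result) fits RT against log frequency and against rank frequency by ordinary least squares and compares residual error:

```python
import math

# Compare two functional forms of the frequency effect by residual error:
# RT as a linear function of log frequency vs. RT as a linear function of
# rank frequency. All data below are toy values.

def ols_sse(xs, ys):
    """Closed-form simple OLS; returns the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

freqs = [2000, 900, 400, 150, 60, 25, 10, 4]             # per million (toy)
rts = [round(700 - 25 * math.log(f), 1) for f in freqs]  # ms, toy values

sse_log = ols_sse([math.log(f) for f in freqs], rts)
sse_rank = ols_sse(list(range(1, len(freqs) + 1)), rts)
```

Because the toy RTs were generated from log frequency, the log form fits almost perfectly while rank leaves systematic residuals; the paper's point is that with real data this comparison additionally requires controlling for the flexibility of each functional form, e.g., via bootstrapping.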

Decision Trees for Lexical Smoothing in Statistical Machine Translation

phrase as well as source phrase context, such as bordering words, or part-of-speech of bordering words. They built a decision tree for each source phrase extracted from the training data. The branching of the tree nodes was based on the different context features, branching on the most class-discriminative features first. Each node is associated with the set of aligned target phrases and corresponding context-conditioned probabilities. The decision tree thus smoothes the phrase probabilities based on the different features, allowing the model to back off to less context, or no context at all, depending on the presence of that context-dependent source phrase in the training data. The model, however, did not provide for a back-off mechanism if the phrase pair was not found in the extracted phrase table. The method presented in this paper differs in various aspects. We use context-dependent information at the source word level, rather than the phrase level, thus making it readily applicable to any translation model and not just phrase-based translation. By incorporating context at the word level, we can decode directly with attribute-augmented source data (see section 3.2).
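The back-off behaviour described above can be sketched in a few lines. The paper learns the back-off structure with decision trees over context features; here a hand-written fallback chain over invented features (previous source word and its part of speech) stands in for that learned tree, and the table entries are made up for illustration.

```python
# Context-conditioned translation lookup with back-off: try the most
# specific context first, then progressively drop features. Keys are
# (source_phrase, prev_word, prev_pos); None means "feature dropped".
phrase_table = {
    ("bank", "river", "NN"): {"Ufer": 0.9, "Bank": 0.1},
    ("bank", None, None): {"Bank": 0.7, "Ufer": 0.3},
}

def translations(phrase, prev_word=None, prev_pos=None):
    # back off step by step: full context -> word only -> no context
    for key in [(phrase, prev_word, prev_pos),
                (phrase, prev_word, None),
                (phrase, None, None)]:
        if key in phrase_table:
            return phrase_table[key]
    return None  # no back-off entry: phrase unseen in training data
```

The limitation the excerpt notes corresponds to the final `return None`: the earlier model had no behaviour for phrases absent from the extracted table, whatever the context.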

Lexical Event Ordering with an Edge Factored Model

event types. Most work has been unsupervised, often using pattern-based approaches relying on manually crafted (Chklovski and Pantel, 2004) or induced patterns (Davidov et al., 2007), that correlate with temporal relations (e.g., temporal discourse connectives). Talukdar et al. (2012) uses the textual order of events in Wikipedia biographical articles to induce lexical information. We use both textual order and discourse connectives to define our feature set, and explore a setting which allows for the straightforward incorporation of additional features. Chambers and Jurafsky (2008b; 2009) addressed the unsupervised induction of partially ordered event chains (or schema) in the news domain, centered around a common protagonist. One of their evaluation scenarios tackles a binary classification related to event ordering, and seeks to distinguish ordered sets of events from randomly permuted ones, yielding an accuracy of 75%. Manshadi et al. (2008) used language models to learn event sequences and conducted a similar evaluation on weblogs with about 65% accuracy. The classification task we explore here is considerably more complex (see §8).

Exploiting Image Generality for Lexical Entailment Detection

We exploit the visual properties of concepts for lexical entailment detection by examining a concept’s generality. We introduce three unsupervised methods for determining a concept’s generality, based on its related images, and obtain state-of-the-art performance on two standard semantic evaluation datasets. We also introduce a novel task that combines hypernym detection and directionality, significantly outperforming a competitive frequency-based baseline.

An analysis of content free dialogue representation, supervised classification methods and evaluation metrics for meeting topic segmentation

The first row of Table 8.3 shows that the accuracy values for Horizon levels 1, 2 and 3 are quite close to each other. The paired t-test shows p = 0.0805 between Horizon = 0 and Horizon = 3; for the other Horizon levels, the p values are similar. Although statistical significance (at the p < 0.05 level) for the effect of VOC Horizon features on accuracy could not be demonstrated with the Bayesian Network classifier, some influence is still observable. For decision trees (Table 8.3, row 2), I see that the highest mean classification accuracy emerges when Horizon = 1. I find that accuracy does not improve when I add more vocalization features. The effect of Horizon varies with different data sets: Figure 8.3 shows that in a single meeting the best accuracy may emerge for Horizon = 2. Therefore I do not recommend adopting a fixed optimal Horizon. On the other hand, Figure 8.3 shows that for most feature sets the classification accuracy is higher for Horizon = 1 than for Horizon = 0. The paired t-test shows p = 0.023 between Horizon = 0 and Horizon = 1: the effect of Vocalization Horizon is statistically significant.
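The paired t-test used for these comparisons is straightforward to compute by hand. The sketch below matches the shape of the Horizon = 0 vs Horizon = 1 comparison (paired per-meeting accuracies), but the accuracy values are invented; only the t statistic is computed here, and the p-value would come from the t distribution with n - 1 degrees of freedom.

```python
import math

# Paired t-test statistic: t = mean(d) / (s_d / sqrt(n)), where d are the
# per-pair differences and s_d is their sample standard deviation.

def paired_t(xs, ys):
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

acc_h0 = [0.71, 0.68, 0.74, 0.70, 0.69]  # Horizon = 0, per meeting (toy)
acc_h1 = [0.74, 0.72, 0.75, 0.73, 0.71]  # Horizon = 1, same meetings (toy)
t_stat = paired_t(acc_h0, acc_h1)
```

Pairing by meeting matters here: it removes the between-meeting variation in baseline accuracy, which is why the paired test can detect a small but consistent Horizon effect.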

Combining Verbal and Nonverbal Features to Overcome the “Information Gap” in Task Oriented Dialogue

For web-based one-to-one dialogue systems, it is important to achieve efficient runtime performance. To maximize real-world feasibility of the learned dialogue act classifiers, this work only considers the features that can be automatically extracted at runtime. In addition, the use of linguistic analysis software, such as a part-of-speech tagger and a syntactic parser, is intentionally restrained. One might argue that rich linguistic analysis may provide additional information to dialogue act classifiers, potentially improving the performance of learned models. However, there is a trade-off between additional information obtained by rich linguistic analysis and processing time. In addition, previous work (Boyer et al., 2010a) found part-of-speech and syntax features did not provide obvious benefit for dialogue act classification in a domain similar to the one considered in this work. The dialogue act classifiers described in this paper integrate four classes of features automatically extracted from three sources of information: the textual dialogue utterances, task-related runtime information logged into the database, and the images of the students recorded by depth cameras. Each feature class is explained in the following subsections.

The Assessment of Expert Evidence on DNA in Malaysia

On the basis of the foregoing section, the court can only obtain the assistance of experts in the fields described in the section. The fields are: foreign law; science or art; handwriting; and finger impression (Raja Muhammad Zuha & Ramalinggam 2017; Ramalinggam et al. 2012). In contrast, experts from other fields who are not listed in this section cannot be called to give evidence. Thus, in order to allow DNA experts to testify in any case, they must come from the fields provided for in section 45 of the Evidence Act 1950. It appears at a glance that DNA is not in the category because the word “DNA” is not explicitly specified in the section. The provision, however, refers to “science” as one of the categories. Hence, the question may arise as to whether the expression “science” in the section is wide enough to cover DNA. In determining the answer, reference can be made to the case of Chandrasekaran & Ors v PP [1971] 1 MLJ 153 p.159, where the court held that the expression ‘science or art’ is elastic enough to be given a liberal interpretation and the fact that the section does not specify particular fields of knowledge does not mean that they are not included. This approach was affirmed in the Federal Court case of Pathmanabhan Nalliannen v Public Prosecutor & Other Appeals [2017] 4 CLJ 137 p.178 where the court emphasized the following: “With respect, such argument ignores the point that DNA evidence is admissible under s.45, s.46 and s.51 of the EA. It is basically opinion evidence of an expert.” This decision clearly indicates that the Federal Court held that DNA evidence falls under section 45 of the Evidence Act and is therefore admissible in court (Ramalinggam 2017).

SemEval 2007 Task 11: English Lexical Sample Task via English Chinese Parallel Text

To gather examples from parallel corpora, we followed the approach in (Ng et al., 2003). Briefly, after ensuring the corpora were sentence-aligned, we tokenized the English texts and performed word segmentation on the Chinese texts (Low et al., 2005). We then made use of the GIZA++ software (Och and Ney, 2000) to perform word alignment on the parallel corpora. Then, we assigned some possible Chinese translations to each sense of an English word w. From the word alignment output of GIZA++, we selected those occurrences of w which were aligned to one of the Chinese translations chosen. The English side of these occurrences served as training data for w, as they were considered to have been disambiguated and “sense-tagged” by the appropriate Chinese translations. The English half of the parallel texts (each ambiguous English word and its 3-sentence context) were used as the training and test material to set up our English lexical sample task.
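The selection step above can be sketched as a filter over alignment output. Here the GIZA++ output is faked as (english_tokens, chinese_tokens, alignment) triples, and the target word, its senses, and the chosen translations are invented for illustration: occurrences of the word whose aligned Chinese token matches a chosen translation are kept as sense-tagged training examples.

```python
# Harvest sense-tagged examples from (faked) word-aligned parallel text:
# an occurrence is kept when its aligned Chinese token is one of the
# Chinese translations chosen for some sense of the English word.

sense_translations = {           # hypothetical senses of "bank"
    "bank%finance": {"银行"},
    "bank%river": {"河岸"},
}

def harvest(word, aligned_corpus):
    examples = []
    for en, zh, alignment in aligned_corpus:  # alignment: en index -> zh index
        for i, token in enumerate(en):
            if token != word or i not in alignment:
                continue
            translation = zh[alignment[i]]
            for sense, chosen in sense_translations.items():
                if translation in chosen:
                    # the English side, "sense-tagged" by its translation
                    examples.append((sense, en))
    return examples

corpus = [
    (["the", "bank", "raised", "rates"], ["银行", "提高", "利率"], {1: 0}),
    (["they", "sat", "on", "the", "bank"], ["他们", "坐在", "河岸", "上"], {4: 2}),
]
tagged = harvest("bank", corpus)
```

The key design choice, as in the excerpt, is that no manual sense annotation is needed: the Chinese translation chosen by the aligner acts as the sense label.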

The effects of exposure to appetitive cues on inhibitory control: A meta-analytic investigation

The indices of inhibition used for each task are stated in Table 1. The most common tasks were the Stop Signal, Go/No-Go and Go/No-Go shifting tasks. The Stop Signal and Go/No-Go tasks require motor inhibition of a pre-potent response following a visual or auditory ‘stop signal’ or ‘No-Go cue’. In the Stop Signal task this cue is presented following a variable delay after initial stimulus onset, and therefore motor behaviour has to be cancelled, whereas in the No-Go task the No-Go cue is presented concurrently with the target stimulus, and therefore behaviour must be restrained rather than cancelled (Eagle et al., 2008). In the shifting version of the task the cues for ‘Go’ and ‘No-Go’ are switched on a block-by-block basis (Meule, 2017). In the anti-saccade task participants have to inhibit an involuntary oculomotor response (saccade) to a visual stimulus that appears in the periphery of a visual display (Hallett, 1978). In the Stroop task (Stroop, 1935) participants have to name the colour of target words whilst ignoring the semantic content of the word (e.g., the word ‘red’ printed in blue ink). Finally, in the flanker task participants have to categorise a target stimulus whilst ignoring distractor stimuli that appear alongside it (Eriksen & Eriksen, 1974). Stop Signal Reaction Time and commission errors were the most common outcomes from these tasks. We also extracted and coded a number of variables for our main and supplementary analyses, including: type of task used, modality of cue exposure, drinking and weight status, and any correlations with BMI, typical alcohol use or AUDIT scores (see Table 1). We selected these variables as they were the most commonly measured across all studies.
