landscape model of reading, priming, co-occurrence graphs

This model captures both on-line comprehension processes during reading and the off-line memory representation after reading is completed, incorporating both memory-based and coherence-based mechanisms of comprehension. http://www.brainandeducationlab.nl/downloads

A central component of successful reading comprehension is the construction of a coherent memory representation of the text. https://www.questia.com/library/journal/1P3-440581011/a-landscape-model-of-reading-comprehension-inferential

The model is based on the premise that, during reading, the ideas and concepts associated with the text fluctuate in their activation. The result is a dynamically shifting landscape of activations. Two factors contribute to the shape of this landscape: readers' limited attentional resources and their attempts to maintain standards for coherence. https://www.questia.com/library/journal/1P3-440581011/a-landscape-model-of-reading-comprehension-inferential
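
A minimal sketch of how the fluctuating-activation idea can be made concrete, assuming a simple update rule: concepts mentioned in the current reading cycle receive full activation, residual activation from earlier cycles decays, and connection strengths in the memory representation grow with co-activation. The concepts, parameter values, and learning rule below are illustrative choices, not the published model's equations.

```python
import numpy as np

# Toy landscape-style simulation: activation fluctuates cycle by cycle,
# and the off-line memory representation accumulates as a connection matrix.
concepts = ["knight", "dragon", "princess", "castle", "rescue"]
cycles = [                      # concepts each sentence (reading cycle) mentions
    ["knight", "castle"],
    ["dragon", "princess"],
    ["knight", "dragon", "rescue"],
]

idx = {c: i for i, c in enumerate(concepts)}
activation = np.zeros(len(concepts))            # current activation landscape
connections = np.zeros((len(concepts),) * 2)    # off-line memory representation

carryover = 0.4        # residual activation surviving a cycle (assumed value)
learning_rate = 0.1    # how fast co-activation strengthens connections (assumed)

for cycle in cycles:
    activation *= carryover                     # residual activation decays
    for c in cycle:
        activation[idx[c]] = 1.0                # text-driven activation
    connections += learning_rate * np.outer(activation, activation)
    print(dict(zip(concepts, activation.round(2))))

print("final connection matrix:")
print(connections.round(2))
```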

Priming is an implicit memory effect in which exposure to one stimulus influences a response to another stimulus. http://en.wikipedia.org/wiki/Priming_%28psychology%29

Priming can occur following perceptual, semantic, or conceptual stimulus repetition. For example, if a person reads a list of words including the word table and is later asked to complete a word starting with tab, the probability that they will answer table is greater than if they had not been primed. http://en.wikipedia.org/wiki/Priming_%28psychology%29

Semantic priming is theorized to work through spreading activation within semantic networks.[9] When a person thinks of one item in a category, similar items are stimulated by the brain. Morphemes, even though they are not words, can also prime complete words that include them;[16] for example, the morpheme 'psych' can prime the word 'psychology'. http://en.wikipedia.org/wiki/Priming_%28psychology%29
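
The spreading-activation account lends itself to a small computational illustration. The sketch below, assuming a hand-built toy semantic network and an arbitrary decay factor, shows how activating one node (a category member, or a morpheme such as 'psych') pre-activates its neighbours, which is the mechanism the priming effect appeals to. Nodes, weights, and parameters are made up for illustration.

```python
from collections import defaultdict

# Toy semantic network: edges carry association strengths (invented values).
edges = {
    ("table", "chair"): 0.8,
    ("table", "furniture"): 0.7,
    ("chair", "furniture"): 0.7,
    ("psych", "psychology"): 0.9,   # a morpheme priming a word containing it
}

graph = defaultdict(list)
for (a, b), w in edges.items():
    graph[a].append((b, w))
    graph[b].append((a, w))

def spread(source, decay=0.5, steps=2):
    """Spread activation outward from `source`, decaying at each hop."""
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier = [source]
    for _ in range(steps):
        nxt = []
        for node in frontier:
            for neighbour, weight in graph[node]:
                gain = activation[node] * weight * decay
                if gain > activation[neighbour]:
                    activation[neighbour] = gain
                    nxt.append(neighbour)
        frontier = nxt
    return dict(activation)

print(spread("table"))   # 'chair' and 'furniture' end up pre-activated (primed)
print(spread("psych"))   # 'psychology' is pre-activated by its morpheme
```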

Context priming works by using a context to speed up processing for stimuli that are likely to occur in that context. A useful application of this effect is reading written text.[18] The grammar and vocabulary of a sentence provide contextual clues for words that will occur later in the sentence. These later words are processed more quickly than if they had been read alone, and the effect is greater for more difficult or uncommon words. http://en.wikipedia.org/wiki/Priming_%28psychology%29
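
One way to picture context priming computationally, by analogy only, is a predictive language model: words that the preceding context makes likely receive low surprisal (negative log probability), mirroring faster processing, while unexpected words receive high surprisal. The toy corpus, the bigram model, and the smoothing below are all illustrative assumptions.

```python
import math
from collections import Counter

# Tiny bigram model over a toy corpus (invented data).
corpus = "the doctor examined the patient the nurse examined the chart".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word):
    """Bigram surprisal of `word` given the previous word (add-one smoothing)."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(unigrams))
    return -math.log2(p)

# A contextually supported word is less surprising than an unrelated one,
# analogous to context priming speeding up its processing.
print(surprisal("the", "patient"))   # lower surprisal
print(surprisal("the", "walrus"))    # higher surprisal
```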

Priming is a nonconscious form of human memory concerned with the perceptual identification of words and objects. It refers to activating particular representations or associations in memory just before carrying out an action or task. For example, a person who sees the word "yellow" will be slightly faster to recognize the word "banana," because yellow and banana are closely associated in memory. Priming can also refer to a technique in psychology used to train a person's memory, in both positive and negative ways. https://www.psychologytoday.com/basics/priming

In computational linguistics, word-sense induction (WSI) or discrimination is an open problem of natural language processing, which concerns the automatic identification of the senses of a word (i.e. meanings). http://en.wikipedia.org/wiki/Word-sense_induction

The main hypothesis behind co-occurrence graph approaches is that the semantics of a word can be represented by a co-occurrence graph, whose vertices are co-occurring words and whose edges are co-occurrence relations. These approaches are related to word clustering methods, where co-occurrences between words can be obtained on the basis of grammatical[8] or collocational relations.[9] HyperLex is a successful graph-based algorithm that identifies hubs in co-occurrence graphs, although it has to cope with the need to tune a large number of parameters. http://en.wikipedia.org/wiki/Word-sense_induction
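
A minimal sketch of the co-occurrence-graph idea, not HyperLex itself: build a graph whose vertices are words and whose edges link words that co-occur in the same context, then treat highly connected vertices as hubs around which senses may cluster. This assumes the Python package networkx is available; the contexts and the hub criterion (plain degree) are toy choices.

```python
from itertools import combinations
import networkx as nx

# Toy contexts containing the ambiguous word "bank".
contexts = [
    ["bank", "river", "water", "shore"],
    ["bank", "money", "loan", "interest"],
    ["bank", "account", "money", "deposit"],
    ["river", "water", "fish"],
]

G = nx.Graph()
for context in contexts:
    for a, b in combinations(set(context), 2):
        # accumulate co-occurrence counts as edge weights
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Hubs: the most connected words, candidate anchors for distinct senses.
hubs = sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True)[:3]
print(hubs)
```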

Visualize co-occurrence graph from document occurrence input using R package 'igraph' http://planspace.org/2013/01/30/visualize-co_occurrence/

java - large-scale document co-occurrence analysis - Stack Overflow http://stackoverflow.com/questions/21090020/large-scale-document-co-occurrence-analysis

This study used graph analysis to investigate how age differences modify the structure of semantic word-association networks of children and adults, and whether the networks present the small-world structure and scale-free degree distribution typical of natural languages. All networks presented a small-world structure, but they did not show entirely scale-free distributions. These results suggest that from childhood to adulthood there is an increase not only in the number of words semantically linked to a target but also in the connectivity of the network. http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0102-79722014000100011
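
The metrics behind such claims can be sketched as follows, assuming networkx and a generated placeholder graph rather than the study's word-association data: small-world structure is usually diagnosed by a clustering coefficient well above that of a size-matched random graph combined with a comparably short average path length, and scale-free structure by a heavy-tailed degree distribution.

```python
import networkx as nx

# Placeholder network standing in for a word-association graph (not real data).
G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)

# Size-matched random graph for comparison; keep its largest component so the
# average path length is defined.
R = nx.gnm_random_graph(n=200, m=G.number_of_edges(), seed=42)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

print("clustering (network):", nx.average_clustering(G))
print("clustering (random): ", nx.average_clustering(R))
print("avg path (network):  ", nx.average_shortest_path_length(G))
print("avg path (random):   ", nx.average_shortest_path_length(R))

# Degree distribution: a scale-free network would show a power-law tail.
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("max degree:", degrees[0], "median degree:", degrees[len(degrees) // 2])
```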
