1997 Psychonomic Society Symposium


Developing Models of High-dimensional Semantic Space


Chaired by Curt Burgess and Tom Landauer

Psychonomic Society Meeting, Philadelphia, November 21-23, 1997.
Symposium: Saturday afternoon, November 22, 1:30-3:50


Introduction:
Curt Burgess
University of California, Riverside

Attempting to derive models of semantic memory using psychometric techniques has a long history in cognitive psychology, dating back at least to Osgood (1957). Others have used multidimensional scaling on (many thousands of) human judgements of similarity (Shepard, 1962; Rips, Shoben, & Smith, 1973). Recently, investigators have been using large corpora (1 to 300 million words) to develop cognitively plausible high-dimensional semantic models without the need for human judgements on stimuli. These models have become increasingly better at explaining cognitive phenomena as they move beyond simple co-occurrence statistics. One reason for their success is their ability to capitalize on the context in which words and sentences appear by allowing meaning (semantic or grammatical) representations to emerge naturally as a product of the system's experience with its environment (a large stream of language in the case of LSA and HAL). These models offer an explicit account of how environmental input is transduced into representational information.


Latent Semantic Analysis: Introduction and new evidence of veridicality
Tom Landauer
University of Colorado

Latent Semantic Analysis (LSA) is a high-dimensional associative matrix model that acquires human-like semantic knowledge from large text corpora. LSA associates words with the meaningful passages in which they occur, then uses matrix decomposition and dimension reduction to induce similarity. New simulations include category judgments, the conjunction fallacy in probability estimation, passing introductory psychology tests, predicting speech errors, and grading the conceptual content of essays as reliably as human graders. These successes suggest that LSA's mechanisms may be human-like.
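
To make the pipeline concrete, here is a minimal sketch in Python of the decomposition-and-reduction step (an illustration only, not LSA's actual implementation: the toy matrix, the choice of k = 2, and the omission of LSA's usual log-entropy weighting are all simplifications):

    # Minimal LSA sketch: term-by-passage counts -> SVD -> reduced word vectors.
    import numpy as np

    # Toy term-by-passage count matrix (rows = words, columns = passages).
    # A real analysis uses tens of thousands of words and passages.
    words = ["rose", "tulip", "engine", "wheel"]
    X = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0],
                  [0.0, 1.0, 2.0]])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2                          # retained dimensions (LSA typically keeps ~300)
    word_vecs = U[:, :k] * s[:k]   # word coordinates in the reduced space

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(word_vecs[0], word_vecs[1]))  # rose vs. tulip: high
    print(cosine(word_vecs[0], word_vecs[2]))  # rose vs. engine: low

The key point is that similarity is computed in the reduced space, so two words can be judged similar without ever having co-occurred in the same passage.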


Modeling text comprehension with LSA
Peter Foltz
New Mexico State University

Latent Semantic Analysis (LSA) serves as both a theory and a method for representing the meaning of words based on a statistical analysis of their contextual usage. Several investigations demonstrate that LSA can model effects of text comprehension, including readers' knowledge structures and the effects of text coherence on comprehension. The discussion will describe the relationship between LSA and theories of text comprehension.
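
One way such a coherence analysis can be carried out is sketched below (a minimal sketch, assuming sentence vectors have already been derived from an LSA space, e.g., by summing the vectors of each sentence's words; the sentence_vecs input is a hypothetical placeholder):

    import numpy as np

    def coherence(sentence_vecs):
        """Mean cosine between adjacent sentence vectors in the LSA space.
        Higher values indicate a more coherent text in the LSA sense."""
        cosines = [a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
                   for a, b in zip(sentence_vecs, sentence_vecs[1:])]
        return float(np.mean(cosines))

Texts whose adjacent sentences sit close together in the space read as more coherent, which is the intuition the comprehension modeling exploits.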

Latent Semantic Analysis as a Theory of Human Knowledge
Walter Kintsch
University of Colorado

The relationship between vector representations of meaning and network representations is explored. To what extent do LSA vector representations provide a basis for a psychological theory of semantic processing? What is missing, and how can the LSA framework be expanded? The constraint-satisfaction mechanism of the construction-integration model of discourse comprehension can be combined with the LSA representation of meaning to yield a more powerful model of comprehension processes.
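
A rough sketch of how the two pieces could fit together (an assumption-laden illustration, not Kintsch's implementation): let LSA cosines supply the link weights among propositions, then settle activations with the kind of iterative spreading-activation step the construction-integration model uses.

    import numpy as np

    def integrate(W, iterations=100, tol=1e-6):
        """Settle a constraint network by spreading activation.
        W: symmetric link matrix among propositions; here the weights
        would come from LSA cosines rather than hand-coded links."""
        act = np.ones(W.shape[0]) / W.shape[0]
        for _ in range(iterations):
            new = W @ act
            new = new / np.max(np.abs(new))   # keep activations bounded
            if np.max(np.abs(new - act)) < tol:
                break
            act = new
        return act  # settled activations: well-supported propositions dominate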


Through Context to Global Co-occurrence and Meaning: The HAL Odyssey
Kevin Lund
University of California, Riverside

The basic methodology behind HAL (a contextual model of meaning) will be presented, along with a selection of results illustrating the range of such models' abilities. These include semantic similarity, coherent representations of both abstract and concrete words, and acquisition of concepts from small corpora. An equivalence to some recurrent neural network models will also be demonstrated.
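
The core of the methodology can be sketched briefly in Python (a minimal sketch of the published procedure, in which a window, 10 words in Lund and Burgess, 1996, slides across the corpus and each co-occurrence is weighted inversely with distance; the toy corpus below is a placeholder):

    from collections import defaultdict

    def hal_counts(tokens, window=10):
        """Weighted co-occurrence counts: a neighbor 1 word back adds
        `window`, a neighbor `window` words back adds 1."""
        counts = defaultdict(float)   # (word, preceding word) -> weight
        for i, word in enumerate(tokens):
            for d in range(1, window + 1):
                if i - d < 0:
                    break
                counts[(word, tokens[i - d])] += window - d + 1
        return counts

    # A word's HAL vector concatenates its row (what precedes it) and its
    # column (what follows it) in the resulting word-by-word matrix.
    counts = hal_counts("the horse raced past the barn fell".split(), window=5)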


The Application of Semantic Models in Neuropsychology: Modeling Semantic Errors
Lori Buchanan, University of Alberta
Susan Rickard Liow, National University of Singapore

Deep dyslexia is an acquired reading disorder characterized by the production of semantic paralexias (e.g., the word ROSE is read as TULIP). Standard word association norms predict the reading performance of deep dyslexics poorly. However, a model of semantics that includes both word association and global co-occurrence information (HAL) is a much better predictor. Naming data from eight deep dyslexic patients are assessed using data generated from such a model.
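
In outline, the prediction amounts to a nearest-neighbor query in the model's space (a hypothetical sketch; the vectors mapping and the use of Euclidean distance are assumptions for illustration, not the authors' exact procedure):

    import numpy as np

    def likely_paralexias(target, vectors, n=5):
        """Rank candidate error responses by distance to the target word:
        closer words in the space are more plausible substitutions."""
        t = vectors[target]
        dists = {w: np.linalg.norm(v - t)
                 for w, v in vectors.items() if w != target}
        return sorted(dists, key=dists.get)[:n]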


HAL, Global Co-occurrence, and Representational Theory: The Key to How We Know?
Curt Burgess
University of California, Riverside

The Hyperspace Analogue to Language (HAL) model has simulated the dissociation of semantic and associative priming, cerebral asymmetries, emotional and mediated priming, grammatical and semantic categorization, and semantic constraints on syntactic processing. HAL's meaning-acquisition procedure and experiments with its representations shed light on important issues regarding semantic features, modeling the transition from simple associations to categorical knowledge, the relationship between experience and representations in the symbol-grounding problem, and representational modularity. The range of effects HAL can model and the nature of the problems it can address suggest that global co-occurrence serves as the underpinning of knowledge representation.