Reinhard Blutner
P.O.Box 94242,
1090 GE AMSTERDAM,
The Netherlands.
http://www.blutner.de
blutner@contact.uva.nl
Peter beim Graben
Bernstein Center for Computational Neuroscience Berlin, Germany.
https://www.researchgate.net/profile/Peter-Beim-Graben
Geometric models of meaning have become increasingly popular in natural language semantics and cognitive science. In contrast to standard symbolic models of meaning (e.g. Montague semantics), which treat differences in meaning qualitatively, geometric models can also account for quantitative differences, expressing degrees of similarity between meanings and giving an account of typicality and vagueness for words and phrases. In this course we present new developments in this exciting research field. We do not assume that every student has the necessary background in linear algebra; the first two days are devoted to introducing students to this important field of applied mathematics. The course then discusses (i) distributional syntax and semantics, and the problem of compositionality; (ii) a new theory of questions and answers using the very same algebra that underlies distributional semantics; and (iii) several puzzles of conceptual combination and their solutions in terms of geometric models.
1. Introduction to linear algebra. We do not assume any prior knowledge of linear algebra and provide a careful but concise introduction to this field of mathematics. (Part I: vector spaces and complex numbers, vector space homomorphisms and matrices, inner products, projection operators, eigenvalues and eigenvectors, spectral decomposition, latent semantic analysis. Part II: pure and mixed states, density operators, tensor products and entanglement, Pauli matrices, quantum probability measures and Bell's inequalities, representation theory.)
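The core Part I notions hang together in one identity: a Hermitian matrix decomposes into eigenvalues times projection operators, and those projectors induce a probability measure over outcomes. The following sketch (a toy 2x2 example of our own, not course material) illustrates this with numpy:

```python
# Minimal numpy sketch of the Part I material: eigenvalues, projection
# operators, and the spectral decomposition of a Hermitian matrix.
# The matrix and state below are illustrative toy choices.
import numpy as np

# A 2x2 Hermitian matrix (real symmetric, for simplicity).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # eigh is for Hermitian matrices

# One projection operator P_i = |v_i><v_i| per eigenvector (columns of eigvecs).
projectors = [np.outer(v, v) for v in eigvecs.T]

# Projectors are idempotent: P @ P == P.
for P in projectors:
    assert np.allclose(P @ P, P)

# Spectral decomposition: A = sum_i lambda_i * P_i.
A_rebuilt = sum(lam * P for lam, P in zip(eigvals, projectors))
assert np.allclose(A, A_rebuilt)

# Quantum probability of outcome i in state psi: ||P_i psi||^2.
psi = np.array([1.0, 0.0])
probs = [float(psi @ P @ psi) for P in projectors]
assert np.isclose(sum(probs), 1.0)     # the probabilities sum to one
```

Because the projectors of distinct eigenvalues are orthogonal and sum to the identity, the outcome probabilities always sum to one — this is the quantum probability measure used throughout the course.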
2. Distributed syntax. In neural network research, geometric models of mental representations are defined by the activation values of connectionist units. These representations are distributed patterns of activity (activation vectors). For core aspects of higher cognitive domains, these vectors realize symbolic structures. We illustrate how vector space models for syntactic representations, such as context-free grammars and phrase structure trees, are constructed in connectionist modeling, deploying term algebra homomorphisms, filler/role bindings, tensor product representations, and compression operations such as circular convolution or tensor contraction, as in the work of Paul Smolensky.
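The filler/role binding idea can be sketched in a few lines: each symbol (filler) is bound to a structural position (role) by a tensor (outer) product, and the bindings are superposed. The example below is our own toy illustration of the general technique, not Smolensky's specific model; the filler and role vectors are invented for the demo.

```python
# Toy tensor product representation: fillers bound to roles via outer
# products, and unbinding by contraction with the role vectors.
import numpy as np

# Orthonormal role vectors for the two positions of an ordered pair.
r_left  = np.array([1.0, 0.0])
r_right = np.array([0.0, 1.0])

# Filler vectors standing in for the symbols A and B.
f_A = np.array([1.0, 0.0, 0.0])
f_B = np.array([0.0, 1.0, 0.0])

# Bind each filler to its role and superpose: this matrix is the
# distributed representation of the structure (A, B).
T = np.outer(f_A, r_left) + np.outer(f_B, r_right)

# Unbinding: contracting the tensor with a role vector recovers the
# filler occupying that role (exactly, because the roles are orthonormal).
assert np.allclose(T @ r_left,  f_A)
assert np.allclose(T @ r_right, f_B)
```

For trees, roles are themselves built recursively as tensor products of position vectors, which is where the compression operations (circular convolution, tensor contraction) become necessary to keep dimensionality bounded.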
3. Semantic spaces. Geometric models of meaning were first introduced in the context of words. For instance, the infomap vector space model (WORDSPACE) pioneered by Hinrich Schütze works by mapping words to points in a high-dimensional space, recording the frequency of co-occurrence between words in a text. The appeal of this and related models lies in their ability to represent meaning using distributional information alone. Later on in the project, work on the logical properties of WORDSPACE by D. Widdows and S. Peters demonstrated that WORDSPACE can naturally be navigated using the same logic as quantum mechanics, with powerful and exciting consequences for modeling word meanings.

4. An ortho-algebraic approach to questions. Using the same algebra underlying WORDSPACE, a general theory of questions and answers can be developed. In this theory, the meaning of a question is given by a decorated partition. We compare the ortho-algebraic approach with traditional approaches to the semantics of questions and apply the new theory to attitude questions in survey research and personality psychology. Characteristic of these fields are question ordering effects (the non-commutativity of questions).
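The non-commutativity behind question ordering effects is easy to demonstrate: when the "yes" subspaces of two questions are represented by non-commuting projectors, the probability of answering yes to A and then yes to B differs from the reverse order. The two-dimensional sketch below uses angles we invented purely for illustration:

```python
# Toy 2-dimensional demo of question order effects via non-commuting
# projectors. The angles of the "yes" subspaces are arbitrary choices.
import numpy as np

def projector(theta):
    """Rank-1 projector onto the direction at angle theta (radians)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])          # initial belief state
P_A = projector(np.pi / 4)          # "yes" subspace of question A (45 deg)
P_B = projector(np.pi / 2)          # "yes" subspace of question B (90 deg)

# Sequential answer probabilities: ||P_B P_A psi||^2 vs. ||P_A P_B psi||^2.
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2

print(round(p_A_then_B, 3))   # 0.25
print(round(p_B_then_A, 3))   # 0.0 -- asking B first changes the result
```

Classically the conjunction "yes to A and yes to B" would have one probability regardless of order; the projector model reproduces the order sensitivity observed in attitude surveys.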
5. Conceptual combination. A big problem for vague concepts and prototype-based systems is the proper treatment of conceptual combination. This relates to the issue of bounded rationality. Tversky, Kahneman, and colleagues have argued that the cognitive system is sensitive to environmental statistics, but that it is also routinely influenced by heuristics and biases that can violate the prescriptions of classical probability theory (e.g. Gigerenzer & Selten, 2001). This position has been very influential, not only in psychology but also in economics, culminating in a Nobel prize for Kahneman. From an explanatory point of view, however, a heuristic approach (the adaptive toolbox) is not really encouraging, and a more systematic account would be very welcome. We propose that human cognition can, and in fact should, be modelled within a probabilistic framework. Quantum probabilities (based on ortho-algebras) provide a proper generalization of classical probabilities and make it possible to solve some difficult problems of conceptual combination and bounded rationality in a systematic way.
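One way quantum probabilities generalize the classical calculus can be shown in two dimensions: a sequentially evaluated conjunction can be judged more probable than one of its conjuncts alone, which classical probability forbids (Pr(A and B) <= Pr(B)). This is a minimal sketch in the spirit of conjunction-fallacy models; the state and angles are invented for the demonstration, not taken from any cited work.

```python
# Illustrative 2-dimensional sketch: with non-commuting projectors, the
# sequential judgment of a conjunction can exceed the direct probability
# of one conjunct, violating a classical probability law.
import numpy as np

def projector(theta):
    """Rank-1 projector onto the direction at angle theta (radians)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])                 # belief state
P_A = projector(np.deg2rad(40))            # a representative property A
P_B = projector(np.deg2rad(80))            # an unlikely property B

p_B        = np.linalg.norm(P_B @ psi) ** 2        # direct judgment of B
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2  # conjunction, judged via A

# Classically Pr(A and B) <= Pr(B); here the sequential estimate exceeds it.
assert p_A_then_B > p_B
```

Intuitively, projecting through the nearby subspace A first moves the state closer to B, so the two-step path carries more probability than the direct one — an interference-style effect with no classical analogue.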
Course schedule (days 1-5, two sessions per day; sessions marked R and P):

            day 1   day 2   day 3   day 4   day 5
Session 1     R       P       P       R       R
Session 2     P       P       R       R       P

Day 1 begins with the General Introduction and Linear algebra I; the course closes with an Outlook.
Amy N. Langville and Carl D. Meyer (2004): The Use of the Linear Algebra by Web Search Engines.
Alessandro Lenci (2008): Distributional semantics in linguistic and cognitive research.
J. Acacio de Barros and Patrick Suppes (2009): Quantum mechanics, interference, and the brain.
Paul Smolensky (2006): Harmony in linguistic cognition. Cognitive Science 30, 779-801.
Peter D. Bruza, D. Widdows and John Woods (2006): A Quantum Logic of Down Below (especially Sections 3 and 4).
Reinhard Blutner (2011): Questions and Answers in an Orthoalgebraic Approach.
Jerome R. Busemeyer and Peter D. Bruza (forthcoming): What can quantum theory predict? Predicting question order effects on attitudes. Chapter 3 of Quantum Cognition and Decision (Cambridge University Press).
Emmanuel M. Pothos and Jerome R. Busemeyer (2009): A quantum probability explanation for violations of 'rational' decision theory.