Associations, chunks, hierarchies, attention, and analogy: What do we need?
Over sixty years after the cognitive revolution, learning theory remains fragmented and in dire need of integration. The target article presents a framework for thinking about learning from an associationist perspective, in which all knowledge consists of learned associations between chunks. This commentary agrees that associationism provides a promising approach to learning theory and is uniquely suited to integrating behavioral studies of learning with neuroscience and computational cognitive science. However, some of the specific postulates in the target article are questionable and (if taken to be characteristic of the framework) would unduly narrow the tent of associationism. In particular, I argue that associationism is compatible with deep learning models, whose learning rules mirror those proposed in the animal learning literature. Learned selective attention theory is also entirely compatible with associationism, and it provides associative learning models with both a specific definition of attention and testable hypotheses about attention's effects on learning.
- associative learning
- chunking
- attention
- backpropagation
- reinforcement learning
- hierarchical structure
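
To make the claimed parallel between deep learning rules and animal learning theory concrete, the sketch below (an illustration added here, not material from the target article) implements the Rescorla-Wagner rule, which is formally the delta rule: each update is a gradient-descent step on squared prediction error for a linear unit, the same error-correction principle that backpropagation generalizes to deep networks. Function names and parameter values are hypothetical.

```python
import numpy as np

def rescorla_wagner_update(V, x, lam, alpha_beta=0.1):
    """One Rescorla-Wagner trial for cue-presence vector x and outcome lam.

    Delta V_i = alpha_beta * x_i * (lam - x . V), which is exactly the
    delta rule: one gradient-descent step on 0.5 * (lam - x . V)**2.
    """
    prediction_error = lam - x @ V   # shared error term for all present cues
    return V + alpha_beta * prediction_error * x

# Toy Kamin-blocking demonstration: pretrain cue A, then train compound AB.
V = np.zeros(2)                                   # associative strengths [A, B]
A, AB = np.array([1.0, 0.0]), np.array([1.0, 1.0])
for _ in range(50):                               # phase 1: A -> outcome
    V = rescorla_wagner_update(V, A, lam=1.0)
for _ in range(50):                               # phase 2: AB -> outcome
    V = rescorla_wagner_update(V, AB, lam=1.0)
print(V)  # cue A near 1.0; cue B near 0.0 -- learning about B is "blocked"
```

Running the two toy phases reproduces Kamin blocking: because the compound's outcome is already predicted by the pretrained cue, the shared prediction error is near zero, so the added cue acquires almost no associative strength.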