Humans learn to classify and categorize things all the time. It’s a fundamental aspect of cognition. Sometimes, categories can be learned in more than one way and there may be some things which seem to be exceptions to the classification rule. How does the brain react to individual stimuli during the learning process and in particular, how does the brain deal with learning these exceptions?
A new paper from our lab, published in Cognitive, Affective, & Behavioral Neuroscience, tries to answer this question. The work was carried out by a former lab member and PhD student, Rachel Rabi (now a Banting postdoctoral fellow working with Nicole Anderson and Lyne Hasher at the Rotman Research Institute in Toronto), along with Marc Joanisse and lab members Toka Zhu and Paul Minda.
Learning a Conjunctive Rule
In the study, participants learned to classify shapes known as Gabor patches (seen to the right), which are fairly common in visual perception research. Two features varied across these stimuli: the orientation of the alternating light and dark bands, and their spatial frequency. We created a whole set of these, and the figure below shows how the stimuli were arranged along the Orientation and Frequency dimensions. The optimal categorization strategy for this set was a conjunctive rule that combined frequency and orientation. However, it was also possible to learn the category less than perfectly with one of two sub-optimal, single-dimensional rules that were easier to acquire. In the example below, a person who used only frequency could correctly classify all the B items, along with the items in the A1 and A3 regions, but would misclassify the items in the A2 region, because frequency alone does not distinguish those items from the B items. Participants would keep making errors on those stimuli until they transitioned to the optimal rule.
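To make the distinction concrete, here is a minimal Python sketch of the two kinds of rules. The thresholds, feature values, and the "A2-like" region are invented for illustration; they are not the actual stimulus parameters from the study.

```python
# Hypothetical thresholds -- invented for illustration only.
FREQ_CUT = 3.0      # spatial frequency cutoff (made up)
ORIENT_CUT = 45.0   # orientation cutoff in degrees (made up)

def frequency_rule(freq, orient):
    """Sub-optimal single-dimensional rule: uses frequency alone
    (orientation is ignored entirely)."""
    return "B" if freq > FREQ_CUT else "A"

def conjunctive_rule(freq, orient):
    """Optimal conjunctive rule: an item is B only when BOTH
    features exceed their cutoffs; everything else is A."""
    return "B" if (freq > FREQ_CUT and orient > ORIENT_CUT) else "A"

# An "easy" A item: low frequency, so both rules agree on A.
easy_item = (2.0, 30.0)

# A "difficult" A item (A2-like region): high frequency but low
# orientation -- the frequency-only rule misclassifies it as B.
difficult_item = (4.0, 30.0)
```

Under this toy structure, the frequency-only rule gets the easy item right but errs on the difficult one, which is exactly the pattern of errors that persists until a learner switches to the conjunctive rule.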
Our goal was to understand the cognitive processes behind the transition from a simple, suboptimal rule to a more complex, optimal rule. We reasoned that this mirrors many real-world scenarios in which people first learn a quick and easy rule that is usually correct and, with time and practice, shift to a more complicated rule that is almost always correct.
Participants learned to classify the items in the set by viewing a single visual stimulus on a computer screen, deciding whether it belonged in one category or the other (A or B), and then receiving feedback on whether they had made the right decision. This feedback-driven learning went on for several hundred trials. During the learning phase, we also measured electrical activity in the brain using EEG. The EEG waveforms let us see what was going on in the brain at the moment a participant saw the stimulus, made a decision, and received feedback.
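As a rough sketch of that trial loop, the learning phase could be simulated like this. The category structure, thresholds, and stimulus grid are all made up for the example; only the loop's shape (stimulus, decision, feedback) mirrors the task.

```python
import random

# Invented category structure for illustration: an item is "B" only
# when BOTH its frequency and orientation are high; otherwise "A".
def true_category(freq, orient):
    return "B" if (freq > 3.0 and orient > 45.0) else "A"

def run_trials(stimuli, classify, n_trials=400, seed=0):
    """Toy version of the feedback-driven phase: on each trial, show
    one stimulus, record an A/B decision, then score it against the
    feedback. Returns the proportion of correct decisions."""
    rng = random.Random(seed)
    n_correct = 0
    for _ in range(n_trials):
        freq, orient = rng.choice(stimuli)
        response = classify(freq, orient)
        if response == true_category(freq, orient):  # the feedback signal
            n_correct += 1
    return n_correct / n_trials

# A small made-up stimulus grid spanning both features.
grid = [(f, o) for f in (2.0, 4.0) for o in (30.0, 60.0)]

freq_only = lambda f, o: "B" if f > 3.0 else "A"
conjunctive = lambda f, o: "B" if (f > 3.0 and o > 45.0) else "A"
```

In this toy setup, a frequency-only strategy plateaus below perfect accuracy because of the high-frequency, low-orientation items, while the conjunctive strategy classifies everything correctly.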
We tested several hypotheses. One was that even after people had learned the conjunctive rule and could classify every item correctly, we should still be able to detect a difference between the stimuli that could be classified with the simple rule alone (“easy” items) and those that required the full conjunctive rule (“difficult” items). We predicted a difference in the EEG waveforms between these items, even though the behavioural response was the same.
Processing of “Exceptions”
We used a series of computational models to determine which strategy each person was using (details in the paper) and, as predicted, we found that people transitioned from a single-feature rule to the conjunctive rule. We then looked at the EEG waveforms for participants who were using the conjunctive rule and making correct decisions. What we found was that even though they classified all of the stimuli correctly, there was a difference in brain activity between the “easy” stimuli and the “hard” stimuli. The difference can be seen below: higher amplitude for the difficult stimuli, roughly 400–600 ms after an image is seen. We interpreted this difference as evidence of the additional memory processing that occurred when participants saw and classified the difficult stimuli.
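The strategy-identification step can be sketched roughly like this. The real paper fits formal computational models; this simplified version just scores each candidate rule against a participant’s responses and picks the best match, with invented rule names, thresholds, and data.

```python
def identify_strategy(stimuli, responses, candidate_rules):
    """Score each candidate rule by how many of the participant's
    responses it reproduces; return the name of the best-fitting rule."""
    scores = {
        name: sum(rule(*stim) == resp
                  for stim, resp in zip(stimuli, responses))
        for name, rule in candidate_rules.items()
    }
    return max(scores, key=scores.get)

# Invented candidate rules and thresholds for illustration.
rules = {
    "frequency_only": lambda f, o: "B" if f > 3.0 else "A",
    "orientation_only": lambda f, o: "B" if o > 45.0 else "A",
    "conjunctive": lambda f, o: "B" if (f > 3.0 and o > 45.0) else "A",
}

stims = [(2.0, 30.0), (2.0, 60.0), (4.0, 30.0), (4.0, 60.0)]
# A hypothetical participant who responds exactly like the conjunctive rule:
resps = ["A", "A", "A", "B"]
```

Each single-dimensional rule reproduces only three of the four responses here, so the conjunctive rule wins, which is the same logic (in miniature) as classifying a learner by the strategy that best accounts for their choices.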
It seems like even though you can make a correct decision on all of these items, your brain reacts differently to the difficult stimuli. Your brain seems to know that these items were, for a time, exceptions to a simple rule.
We’re working on some new projects to understand more about how the brain reacts when learning categories that contain exceptional items. This was also our first paper to combine modelling work (which we’ve done a lot of) with EEG (which we’ve done only a little of), and we plan to keep working with this technique. The modelling tells us what strategy participants most likely used, and the EEG tells us how the brain processed these stimuli. Together, they give us a more complete picture of something most of us take for granted: making quick classification decisions.
Find out More
You can read more about this work in the paper at the journal or in the preprint on the preprint server. If you really want to dig into what we’ve done, you can find all of the stimuli we used, the individual data from our study, and the code for our analyses and computational models at my Open Science Framework page.