The Cognitive and Brain Sciences Program is heavily research-oriented. Students work closely with a faculty mentor to gain research skills and experience through laboratory work on current topics, and are encouraged to publish their work. The following is a sample of recent papers authored by graduate students in our program, published in high-impact journals:
*Although we were required to transfer the copyright of some of the articles to the publishers, we are allowed to distribute copies to individuals for personal and/or research use. Your click on any of the links above constitutes your request for a personal copy of the linked articles. A detailed copyright notice appears in the articles.
- Visual working memory deficits in undergraduates with a history of mild traumatic brain injury
- Underwater Virtual Reality System for Neutral Buoyancy Training: Development and Evaluation
- The Wandering Circles: A Flicker Rate and Contour-Dependent Motion Illusion
- The motion-induced contour revisited: Observations on 3-D structure and illusory contour formation in moving stimuli
- Real-world size coding of solid objects, but not 2-D or 3-D images, in visual agnosia patients with bilateral ventral lesions
- Methods for Presenting Real-World Objects Under Controlled Laboratory Conditions
- Interactions of flicker and motion
- Individual differences reveal limited mixed-category effects during a visual working memory task
- Individual differences and their implications for color perception
- Human Scene-Selective Areas Represent 3D Configurations of Surfaces
- Hemispheric Asymmetries in Deaf and Hearing During Sustained Peripheral Selective Attention
- Functionally Separable Font-invariant and Font-sensitive Neural Populations in Occipitotemporal Cortex
- Embedded word priming elicits enhanced fMRI responses in the visual word form area
- Dynamics of contrast adaptation in central and peripheral vision
- Does right hemisphere superiority sufficiently explain the left visual field advantage in face recognition?
- Distinct visuo-motor brain dynamics for real-world objects versus planar images
- Directional Visual Motion Is Represented in the Auditory and Association Cortices of Early Deaf Individuals
- Dataset of 24-subject EEG recordings during viewing of real-world objects and planar images of the same items
- Color and culture: Innovations and insights since Basic Color Terms—Their Universality and Evolution (1969)
- All-or-none visual categorization in the human brain
- Aging Impairs Temporal Sensitivity, but not Perceptual Synchrony, Across Modalities
- Age-Related Effects on Cross-Modal Duration Perception
- Adaptation and visual discomfort from flicker
- A neural basis of the serial bottleneck in visual word recognition
- Visual recognition of mirrored letters and the right hemisphere advantage for mirror-invariant object recognition
- Visual adaptation and the amplitude spectra of radiological images
- Towards a unified perspective of object shape and motion processing in human dorsal cortex
- The real deal: willingness-to-pay and satiety expectations are greater for real foods versus their images
- Preserved object weight processing after bilateral LOC lesions
- Frontoparietal tDCS Benefits Visual Working Memory in Older Adults With Low Working Memory Capacity
- Electrophysiological correlates of encoding processes in a full-report visual working memory paradigm
- Dissociable effects of inter-stimulus interval and presentation duration on rapid face categorization
- Cognitive Effects of Transcranial Direct Current Stimulation in Healthy and Clinical Populations
- Asymmetric neural responses for facial expressions and anti-expressions
- Action–effect contingency modulates the readiness potential
- Variations in normal color vision: Factors underlying individual differences in hue scaling and their implications for models of color appearance
- Action Properties of Object Images Facilitate Visual Search
- Decoding information about dynamically occluded objects in visual cortex
- Spatial modulation of motor-sensory recalibration in early deaf individuals
- An fMRI study of visual hemifield integration and cerebral lateralization
- Attentional capture for tool images is driven by the head end of the tool, not the handle
- The strategy and motivational influences on the beneficial effect of neurostimulation: A tDCS and fNIRS study
- Intraparietal regions play a material general role in working memory: Evidence supporting an internal attentional role
- Contralateral delay activity tracks the influence of Gestalt grouping principles on active visual working memory representations
We investigated whether a history of mild traumatic brain injury (mTBI), or concussion, has any effect on visual working memory (WM) performance. In most cases, cognitive performance is thought to return to premorbid levels soon after injury, without further medical intervention. We tested this assumption in undergraduates, among whom a history of mTBI is prevalent. Notably, participants with a history of mTBI performed worse than their colleagues with no such history. Experiment 1 was based on a change detection paradigm in which we manipulated visual WM set size from one to three items, which revealed a significant deficit at set size 3. In Experiment 2 we investigated whether feedback could rescue WM performance in the mTBI group, and found that it failed. In Experiment 3 we manipulated WM maintenance duration (set size 3, 500–1,500 ms) to investigate a maintenance-related deficit. Across all durations, the mTBI group was impaired. In Experiment 4 we tested whether retrieval demands contributed to WM deficits and showed a consistent deficit across recognition and recall probes. In short, even years after an mTBI, undergraduates perform differently on visual WM tasks than their peers with no such history. Given the prevalence of mTBI, these data may benefit other researchers who see high variability in their data. Clearly, further studies will be needed to determine the breadth of the cognitive deficits in those with a history of mTBI and to identify relevant factors that contribute to positive cognitive outcomes.
During terrestrial activities, sensation of pressure on the skin and tension in muscles and joints provides information about how the body is oriented relative to gravity and how the body is moving relative to the surrounding environment. In contrast, in aquatic environments when suspended in a state of neutral buoyancy, the weight of the body and limbs is offloaded, rendering these cues uninformative. It is not yet known how this altered sensory environment impacts virtual reality experiences. To investigate this question, we converted a full-face SCUBA mask into an underwater head-mounted display and developed software to simulate jetpack locomotion outside the International Space Station. Our goal was to emulate conditions experienced by astronauts during training at NASA's Neutral Buoyancy Lab. A user study was conducted to evaluate both sickness and presence when using virtual reality in this altered sensory environment. We observed an increase in nausea-related symptoms underwater, but we cannot conclude that this is due to VR use. Other measures of sickness and presence underwater were comparable to measures taken above water. We conclude with suggestions for improved underwater VR systems and improved methods for evaluating these systems based on our experience.
Understanding of the visual system can be informed by examining errors in perception. We present a novel illusion—Wandering Circles—in which stationary circles undergoing contrast-polarity reversals (i.e., flicker), when viewed peripherally, appear to move about in a random fashion. In two psychophysical experiments, participants rated the strength of perceived illusory motion under varying stimulus conditions. The illusory motion percept was strongest when the circle’s edge was defined by a light/dark alternation and when the edge faded smoothly to the background gray (i.e., a circular arrangement of the Craik-O’Brien-Cornsweet illusion). In addition, the percept of illusory motion is flicker rate dependent, appearing strongest when the circles reversed polarity 9.44 times per second and weakest at 1.98 times per second. The Wandering Circles differ from many other classic motion illusions as the light/dark alternation is perfectly balanced in time and position around the edges of the circle, and thus, there is no net directional local or global motion energy in the stimulus. The perceived motion may instead rely on factors internal to the viewer such as top-down influences, asymmetries in luminance and motion perception across the retina, adaptation combined with positional uncertainty due to peripheral viewing, eye movements, or low contrast edges.
The motion-induced contour (MIC) was first described by Victor Klymenko and Naomi Weisstein in a series of papers in the 1980s. The effect is created by rotating the outline of a tilted cube in depth. When one of the vertical edges is removed, an illusory contour can be seen in its place. In four experiments, we explored which stimulus features influence perceived illusory contour strength. Participants provided subjective ratings of illusory contour strength as a function of orientation of the stimulus, separation between inducing edges, and the length of inducing edges. We found that the angle of tilt of the object in depth had the largest impact on perceived illusory contour strength, with tilt angles of 20° and 30° producing the strongest percepts. Tilt angle is an unexplored feature of structure-from-motion displays. In addition, we found that once the depth structure of the object was extracted, other features of the display, such as the distance spanned by the illusory contour, could also influence its strength, similar to the notion of support ratio for 2-D illusory contours. Illusory contour strength was better predicted by the length of the contour in 3-D rather than in 2-D, suggesting that MICs are constructed by a 3-D process that takes as input initially recovered contour orientation and position information in depth and only then forms interpolations between them.
Patients with visual agnosia show severe deficits in recognizing two-dimensional (2-D) images of objects, despite the fact that early visual processes such as figure-ground segmentation and stereopsis are largely intact. Strikingly, however, these patients can nevertheless show a preservation in their ability to recognize real-world objects, a phenomenon known as the 'real-object advantage' (ROA) in agnosia. To uncover the mechanisms that support the ROA, patients were asked to identify objects whose size was congruent or incongruent with typical real-world size, presented in different display formats (real objects, 2-D and 3-D images). While recognition of images was extremely poor, real object recognition was surprisingly preserved, but only when physical size matched real-world size. Analogous display format and size manipulations did not influence the recognition of common geometric shapes that lacked real-world size associations. These neuropsychological data provide evidence for a surprising preservation of size-coding of real-world-sized tangible objects in patients for whom ventral contributions to image processing are severely disrupted. We propose that object size information is largely mediated by dorsal visual cortex and that this information, together with a detailed representation of object shape that is also subserved by dorsal cortex, serves as the basis of the ROA.
Our knowledge of human object vision is based almost exclusively on studies in which the stimuli are presented in the form of computerized two-dimensional (2-D) images. In everyday life, however, humans interact predominantly with real-world solid objects, not images. Currently, we know very little about whether images of objects trigger similar behavioral or neural processes as do real-world exemplars. Here, we present methods for bringing the real world into the laboratory. We detail methods for presenting rich, ecologically valid real-world stimuli under tightly controlled viewing conditions. We describe how to match closely the visual appearance of real objects and their images, as well as novel apparatus and protocols that can be used to present real objects and computerized images on successively interleaved trials. We use a decision-making paradigm as a case example in which we compare willingness-to-pay (WTP) for real snack foods versus 2-D images of the same items. We show that WTP increases by 6.6% for food items displayed as real objects versus high-resolution 2-D color images of the same foods, suggesting that real foods are perceived as being more valuable than their images. Although presenting real object stimuli under controlled conditions presents several practical challenges for the experimenter, this approach will fundamentally expand our understanding of the cognitive and neural processes that underlie naturalistic vision.
We present a series of novel observations about interactions between flicker and motion that lead to three distinct perceptual effects. We use the term flicker to describe alternating changes in a stimulus' luminance or color (i.e., a circle that flickers from black to white and vice versa). When objects flicker, three distinct phenomena can be observed: (1) Flicker Induced Motion (FLIM), in which a single stationary object appears to move when it flickers at certain rates; (2) Flicker Induced Motion Suppression (FLIMS), in which a moving object appears to be stationary when it flickers at certain rates; and (3) Flicker-Induced Induced-Motion (FLIIM), in which moving objects that are flickering induce another flickering stationary object to appear to move. Across four psychophysical experiments, we characterize key stimulus parameters underlying these flicker-motion interactions. Interactions were strongest in the periphery and at flicker frequencies above 10 Hz. Induced motion occurred not just for luminance flicker, but for isoluminant color changes as well. We also found that the more physically moving objects there were, the more motion induction to stationary objects occurred. We present demonstrations that the effects reported here cannot be fully accounted for by eye movements: we show that multiple stationary objects that are induced to move via flicker can appear to move independently and in random directions, whereas eye movements would have caused all of the objects to appear to move coherently. These effects highlight the fundamental role of spatiotemporal dynamics in the representation of motion and the intimate relationship between flicker and motion.
Using stimuli from different categories may expand the capacity limits of working memory (WM) by spreading item representations across distinct neural populations. We explored this mixed-category benefit by correlating individuals’ behavioral performance with fMRI measures of category information during uniform- and mixed-category trials. Behaviorally, we found weak evidence for a mixed-category benefit at the group-level, although there was a high degree of individual variability. To test whether distinct neural patterns elicited superior performance in some individuals, we correlated a multivariate measure of neural category information with multiple behavioral metrics. This revealed a widespread positive relationship, intuitive for hit rate and working memory capacity, but counterintuitive for false alarm rate. Overall, these data suggest that mixed-category effects may support working memory performance, but unexpectedly, not all participants show this benefit. Only some people may be able to take advantage of representing mixed-category information in a differentiable way.
Individual differences are a conspicuous feature of color vision and arise from many sources, in both the observer and the world. These differences have important practical implications for comparing and correcting perception and performance, and important theoretical implications for understanding the design principles underlying color coding. Color percepts within and between individuals often vary less than the variations in spectral sensitivity might predict. This stability is achieved by a variety of processes that compensate perception for the sensitivity limits of the eye and brain. Yet judgments of color between individuals can also vary widely, and in ways that are not readily explained by differences in sensitivity or the environment. These differences are uncorrelated across different color categories, and could reflect how these categories are learned or represented.
It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.
Previous studies have shown that compared to hearing individuals, early deaf individuals allocate relatively more attention to the periphery than the central visual field. However, it is not clear whether these two groups also differ in their ability to selectively attend to specific peripheral locations. We examined deaf and hearing participants' selective attention using electroencephalography (EEG) and a frequency tagging paradigm, in which participants attended to one of two peripheral displays of moving dots that changed directions at different rates. Both participant groups showed similar amplifications and reductions in the EEG signal at the attended and unattended frequencies, indicating similar control over their peripheral attention for motion stimuli. However, for deaf participants these effects were larger in a right hemispheric region of interest (ROI), while for hearing participants these effects were larger in a left ROI. These results contribute to a growing body of evidence for a right hemispheric processing advantage in deaf populations when attending to motion.
Reading relies on the rapid visual recognition of words viewed in a wide variety of fonts. We used fMRI to identify neural populations showing reduced fMRI responses to repeated words displayed in different fonts ("font-invariant" repetition suppression). We also identified neural populations showing greater fMRI responses to words repeated in a changing font as compared with words repeated in the same font ("font-sensitive" release from repetition suppression). We observed font-invariant repetition suppression in two anatomically distinct regions of the left occipitotemporal cortex (OT), a "visual word form area" in mid-fusiform cortex, and a more posterior region in the middle occipital gyrus. In contrast, bilateral shape-selective lateral occipital cortex and posterior fusiform showed considerable sensitivity to font changes during the viewing of repeated words. Although the visual word form area and the left middle occipital gyrus showed some evidence of font sensitivity, both regions showed a relatively greater degree of font invariance than font sensitivity. Our results show that the neural mechanisms in the left OT involved in font-invariant word recognition are anatomically distinct from those sensitive to font-related shape changes. We conclude that font-invariant representation of visual word form is instantiated at multiple levels by anatomically distinct neural mechanisms within the left OT.
Lexical embedding is common in all languages and elicits mutual orthographic interference between an embedded word and its carrier. The neural basis of such interference remains unknown. We employed a novel fMRI prime-target embedded word paradigm to test for involvement of a visual word form area (VWFA) in left ventral occipitotemporal cortex in co-activation of embedded words and their carriers. Based on the results of related fMRI studies we predicted either enhancement or suppression of fMRI responses to embedded words initially viewed as primes, and repeated in the context of target carrier words. Our results clearly showed enhancement of fMRI responses in the VWFA to embedded-carrier word pairs as compared to unrelated prime-target pairs. In contrast to non-visual language-related areas (e.g., left inferior frontal gyrus), enhanced fMRI responses did not occur in the VWFA when embedded carrier word pairs were restricted to the left visual hemifield. Our finding of fMRI enhancement in the VWFA is novel evidence of its involvement in representational rivalry between orthographically similar words, and the co-activation of embedded words and their carriers.
Adaptation aftereffects are generally stronger for peripheral than for foveal viewing. We examined whether there are also differences in the dynamics of visual adaptation in central and peripheral vision. We tracked the time course of contrast adaptation to binocularly presented Gabor patterns in both the central visual field (within 5°) and in the periphery (beyond 10° eccentricity) using a yes/no detection task to monitor contrast thresholds. Consistent with previous studies, sensitivity losses were stronger in the periphery than in the center when adapting to equivalent high contrast (90% contrast) patterns. The time course of the threshold changes was fitted with separate exponential functions to estimate the time constants during the adapt and post-adapt phases. When adapting to equivalent high contrast, adaptation effects built up and decayed more slowly in the periphery compared with central adaptation. Surprisingly, the aftereffect in the periphery did not decay completely to the baseline within the monitored post-adapt period (400 s), and instead asymptoted to a higher level than for central adaptation. Even when contrast was reduced to one-third (30% contrast) of the central contrast, peripheral adaptation remained stronger and decayed more slowly. This slower dynamic was also confirmed at suprathreshold test contrasts by tracking tilt-aftereffects with a 2AFC orientation discrimination task. Our results indicate that the dynamics of contrast adaptation differ between central and peripheral vision, with the periphery adapting not only more strongly but also more slowly, and provide another example of potential qualitative processing differences between central and peripheral vision.
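The time-constant estimation described in this abstract can be illustrated with a minimal sketch (not the authors' code; the data, parameter values, and function names here are hypothetical): thresholds from the post-adapt phase are fit with an exponential decaying toward a baseline, and the fitted tau summarizes how quickly the aftereffect dissipates.

```python
# Minimal illustrative sketch: estimating a decay time constant (tau) by
# fitting an exponential to a simulated post-adaptation threshold time course.
# All values are made up for demonstration; this is not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, a, tau, baseline):
    """Threshold elevation that decays exponentially toward a baseline."""
    return a * np.exp(-t / tau) + baseline

# Simulated post-adapt thresholds over a 400 s recovery period
# (true tau = 60 s, baseline threshold = 1.0, plus measurement noise).
t = np.linspace(0, 400, 50)
rng = np.random.default_rng(0)
y = exp_decay(t, 0.8, 60.0, 1.0) + rng.normal(0, 0.02, t.size)

# Fit the three parameters; tau characterizes the decay dynamics.
(a, tau, baseline), _ = curve_fit(exp_decay, t, y, p0=(1.0, 30.0, 1.0))
print(f"estimated tau = {tau:.1f} s, baseline = {baseline:.2f}")
```

Comparing tau fitted separately to central and peripheral data is one simple way to quantify the "slower decay in the periphery" result the abstract reports.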
The tendency to perceive the identity of the left half of a centrally viewed face more strongly than that of the right half is associated with visual processing of faces in the right hemisphere (RH). Here we investigate conditions under which this well-known left visual field (LVF) half-face advantage fails to occur. Our findings challenge the sufficiency of its explanation as a function of RH specialization for face processing coupled with LVF-RH correspondence. In two experiments we show that the LVF half-face advantage occurs for normal faces and chimeric faces composed of different half-face identities. In a third experiment, we show that face inversion disrupts the LVF half-face advantage. In two additional experiments we show that half-faces viewed in isolation or paired with inverted half-faces fail to show the LVF advantage. Consistent with previous explanations of the LVF half-face advantage, our findings suggest that the LVF half-face advantage reflects RH superiority for processing faces and direct transfer of LVF face information to visual cortex in the RH. Critically, however, our findings also suggest the operation of a third factor, which involves the prioritization of face-processing resources to the LVF, but only when two upright face-halves compete for these resources. We therefore conclude that RH superiority alone does not suffice to explain the LVF advantage in face recognition. We also discuss the implications of our findings for specialized visual processing of faces by the right hemisphere, and we distinguish LVF advantages for faces viewed centrally and peripherally in divided field studies.
Ultimately, we aim to generalize and translate scientific knowledge to the real world, yet current understanding of human visual perception is based predominantly on studies of two-dimensional (2-D) images. Recent cognitive-behavioral evidence shows that real objects are processed differently to images, although the neural processes that underlie these differences are unknown. Because real objects (unlike images) afford actions, they may trigger stronger or more prolonged activation in neural populations for visuo-motor action planning. Here, we recorded electroencephalography (EEG) when human observers viewed real-world three-dimensional (3-D) objects or closely matched 2-D images of the same items. Although responses to real objects and images were similar overall, there were critical differences. Compared to images, viewing real objects triggered stronger and more sustained event-related desynchronization (ERD) in the μ frequency band (8–13 Hz), a neural signature of automatic motor preparation. Event-related potentials (ERPs) revealed a transient, early occipital negativity for real objects (versus images), likely reflecting 3-D stereoscopic differences, and a late sustained parietal amplitude modulation consistent with an 'old-new' memory advantage for real objects over images. Together, these findings demonstrate that real-world objects trigger stronger and more sustained action-related brain responses than images do. These results highlight important similarities and differences between brain responses to images and richer, more ecologically relevant, real-world objects.
Individuals who are deaf since early life may show enhanced performance at some visual tasks, including discrimination of directional motion. The neural substrates of such behavioral enhancements remain difficult to identify in humans, although neural plasticity has been shown for early deaf people in the auditory and association cortices, including the primary auditory cortex (PAC) and STS region, respectively. Here, we investigated whether neural responses in auditory and association cortices of early deaf individuals are reorganized to be sensitive to directional visual motion. To capture direction-selective responses, we recorded fMRI responses frequency-tagged to the 0.1-Hz presentation of central directional (100% coherent random dot) motion persisting for 2 sec contrasted with nondirectional (0% coherent) motion for 8 sec. We found direction-selective responses in the STS region in both deaf and hearing participants, but the extent of activation in the right STS region was 5.5 times larger for deaf participants. Minimal but significant direction-selective responses were also found in the PAC of deaf participants, both at the group level and in five of six individuals. In response to stimuli presented separately in the right and left visual fields, the relative activation across the right and left hemispheres was similar in both the PAC and STS region of deaf participants. Notably, the enhanced right-hemisphere activation could support the right visual field advantage reported previously in behavioral studies. Taken together, these results show that the reorganized auditory cortices of early deaf individuals are sensitive to directional motion. Speculatively, these results suggest that auditory and association regions can be remapped to support enhanced visual performance.
Here we present a collection of electroencephalographic (EEG) data recorded from 24 observers (14 females, 10 males, mean age: 25.4) while observing individually-presented stimuli comprised of 96 real-world objects, and 96 images of the same items printed in high resolution. EEG was recorded from 128 scalp channels. Six additional external electrodes were used to record vertical and horizontal electrooculogram, as well as the signal from the left and right mastoid. EEG has been pre-processed, segmented in non-overlapping epochs, and independent component analysis (ICA) has been conducted to reject artifacts. Moreover, supplemental pre-processing steps have been completed to facilitate the analysis of event-related potentials (ERP). These data are linked to the article "Distinct visuo-motor brain dynamics for real-world objects versus planar images". Alongside these data we provide the custom-written Matlab® code that can be used to fully reproduce all analyses and figures presented in the linked research article.
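The epoching step mentioned in this dataset description, cutting continuous multichannel EEG into non-overlapping segments around event markers, can be sketched in a few lines. This is an illustrative example only (the dataset's own processing uses the authors' Matlab code); the function name, sampling rate, and event times below are invented for demonstration.

```python
# Illustrative sketch of EEG epoching: slicing a continuous
# (channels x samples) recording into fixed windows around event markers.
# Names and parameters are hypothetical, not from the linked dataset.
import numpy as np

def epoch(data, events, pre, post, sfreq):
    """Return an (n_events, n_channels, n_samples) array of epochs,
    each spanning `pre` seconds before to `post` seconds after an event."""
    n_pre, n_post = int(pre * sfreq), int(post * sfreq)
    return np.stack([data[:, e - n_pre : e + n_post] for e in events])

sfreq = 500                                   # Hz (hypothetical)
data = np.random.randn(128, 10 * sfreq)       # 128 channels, 10 s of signal
events = np.array([1000, 2500, 4000])         # stimulus-onset samples
epochs = epoch(data, events, pre=0.2, post=0.8, sfreq=sfreq)
print(epochs.shape)  # (3, 128, 500): 3 epochs of 1 s each at 500 Hz
```

Keeping the windows non-overlapping, as described above, simply requires that consecutive events be spaced farther apart than `pre + post` seconds.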
Fifty years ago, in 1969, Berlin and Kay published Basic Color Terms—Their Universality and Evolution and set in motion a large-scale systematic research program for studying color naming and categorization across first-language speakers from different ethno-linguistic societies. While it is difficult to gauge the impact a research program can make over 50 years, many linguists, anthropologists, cognitive scientists, and perceptual psychologists consider the Berlin and Kay book one of the most influential works in cross-cultural studies not only of color linguistics, but of cognition and language more generally. Today, reverberations from the Berlin and Kay (1969) research program continue to resonate through recently available data sets that are being examined with new quantitative analysis methods and modeling approaches. Here we review the origins of the Basic Color Terms phenomenon, and note a few of the numerous directions from which ongoing related work continues to bring forth interesting results in the color categorization arena.
Whether visual categorization, i.e., specific responses to a certain class of visual events across a wide range of exemplars, is graded or all-or-none in the human brain is largely unknown. We address this issue with an original frequency-sweep paradigm probing the evolution of responses between the minimum and optimal presentation times required to elicit both neural and behavioral face categorization responses. In a first experiment, widely variable natural images of nonface objects are progressively swept from 120 to 3 Hz (8.33 to 333 ms duration) in rapid serial visual presentation sequences; variable face exemplars appear every 1 s, enabling an implicit frequency-tagged face-categorization electroencephalographic (EEG) response at 1 Hz. In a second experiment, faces appear non-periodically throughout such sequences at fixed presentation rates, while participants explicitly categorize faces. Face-categorization activity emerges with stimulus durations as brief as 17 ms for both neural and behavioral measures (17 – 83 ms across individual participants neurally; 33 ms at the group level). The face-categorization response amplitude increases until 83 ms stimulus duration (12 Hz), implying graded categorization responses. However, a strong correlation with behavioral accuracy suggests instead that dilution from missed categorizations, rather than a decreased response to each face stimulus, may be responsible. This is supported in the second experiment by the absence of neural responses to behaviorally uncategorized faces, and equivalent amplitudes of isolated neural responses to only behaviorally categorized faces across presentation rates, consistent with the otherwise stable spatio-temporal signatures of face-categorization responses in both experiments. Overall, these observations provide original evidence that visual categorization of faces, while being widely variable across human observers, occurs in an all-or-none fashion in the human brain.
Encoding the temporal properties of external signals that comprise multimodal events is a major factor guiding everyday experience. However, during the natural aging process, impairments to sensory processing can profoundly affect multimodal temporal perception. Various mechanisms can contribute to temporal perception, and thus it is imperative to understand how each can be affected by age. In the current study, using three different temporal order judgement tasks (unisensory, multisensory, and sensorimotor), we investigated the effects of age on two separate temporal processes: synchronization and integration of multiple signals. These two processes rely on different aspects of temporal information: the temporal alignment of processed signals and the integration/segregation of signals arising from different modalities, respectively. Results showed that the ability to integrate/segregate multiple signals decreased with age regardless of the task, and that the magnitude of such impairment correlated across tasks, suggesting a widespread mechanism affected by age. In contrast, perceptual synchrony remained stable with age, revealing a distinct intact mechanism. Overall, results from this study suggest that aging has differential effects on temporal processing, and general impairments with aging may impact global temporal sensitivity while context-dependent processes remain unaffected.
Reliable duration perception of external events is necessary to coordinate perception with action, precisely discriminate speech, and for other daily functions. Visual duration perception can be heavily influenced by concurrent auditory signals; however, age-related effects on this process have received minimal attention. In the present study, we examined the effect of aging on duration perception by quantifying (1) duration discrimination thresholds, (2) auditory temporal dominance, and (3) visual duration expansion/compression percepts induced by an accompanying auditory stimulus of longer/shorter duration. Duration discrimination thresholds were significantly greater for visual than auditory tasks in both age groups; however, there was no effect of age. While the auditory modality retained dominance in duration perception with age, older adults still performed worse than young adults when comparing durations of two target stimuli (e.g., visual) in the presence of distractors from the other modality (e.g., auditory). Finally, both age groups perceived similar visual duration compression, whereas older adults exhibited visual duration expansion over a wider range of auditory durations compared to their younger counterparts. Results are discussed in terms of multisensory integration and possible decision strategies that change with age.
Spatial images with unnatural amplitude spectra tend to appear uncomfortable. Analogous effects are found in the temporal domain, yet discomfort in flickering patterns is also strongly dependent on the phase spectrum. Here we examined how discomfort in temporal flicker is affected by adaptation to different amplitude and phase spectra. Adapting and test flicker were square wave or random phase transitions in a uniform field filtered by increasing (blurred) or decreasing (sharpened) the slope of the amplitude spectrum. Participants rated the level of discomfort or sharpness/blur for the test flicker. Before adaptation, square wave transitions were rated as most comfortable when they had "focused" edges, defined by 1/f amplitude spectra, while random phase transitions instead appeared more comfortable the more blurred they were. After adapting to blurred or sharpened transitions, both square wave and random phase flicker appeared more sharpened or blurred, respectively, and these effects were consistent with renormalization of perceived temporal focus. In comparison, adaptation affected discomfort in the two waveforms in qualitatively different ways, and exposure to the adapting stimulus tended to increase rather than decrease its perceived discomfort. These results point to a dissociation between the perceived amplitude spectrum and perceived discomfort, suggesting they in part depend on distinct processes. The results further illustrate the importance of the phase spectrum in determining visual discomfort from flickering patterns.
Written language is a hallmark of cultural and technological development. The ability to read written language is a testament to the effects of learning on human behavior and brain function. However, even highly practiced readers exhibit fundamental neural constraints. The fact that you are unable to read the collection of words comprising this text all at once, as desirable as that may be, draws attention to a defining property of the human brain: its limited information-processing capacity. A study by White et al. (1) published in PNAS highlights an extreme case of capacity-limited visual information processing—our inability to read more than one word at a time—and reveals the neural basis of this limitation using functional magnetic resonance imaging (fMRI).
Unlike the recognition of most objects, letter recognition is closely tied to orientation and mirroring, which in some cases (e.g., b and d) defines letter identity altogether. We combined a divided field paradigm with a negative priming procedure to examine the relationship between mirror generalization, its suppression during letter recognition, and language-related visual processing in the left hemisphere. In our main experiment, observers performed a centrally viewed letter-recognition task, followed by an object-recognition task performed in either the right or the left visual hemifield. The results show clear evidence of inhibition of mirror generalization for objects viewed in either hemifield but a right hemisphere advantage for visual recognition of mirrored and repeated objects. Our findings are consistent with an opponent relationship between symmetry-related visual processing in the right hemisphere and neurally recycled mechanisms in the left hemisphere used for visual processing of written language stimuli.
We examined how visual sensitivity and perception are affected by adaptation to the characteristic amplitude spectra of X-ray mammography images. Because of the transmissive nature of X-ray photons, these images have relatively more low-frequency variability than natural images, a difference that is captured by a steeper slope of the amplitude spectrum (~ − 1.5) compared to the ~ 1/f (slope of − 1) spectra common to natural scenes. Radiologists inspecting these images are therefore exposed to a different balance of spectral components, and we measured how this exposure might alter spatial vision. Observers (who were not radiologists) were adapted to images of normal mammograms or the same images sharpened by filtering the amplitude spectra to shallower slopes. Prior adaptation to the original mammograms significantly biased judgments of image focus relative to the sharpened images, demonstrating that the images are sufficient to induce substantial after-effects. The adaptation also induced strong losses in threshold contrast sensitivity that were selective for lower spatial frequencies, though these losses were very similar to the threshold changes induced by the sharpened images. Visual search for targets (Gaussian blobs) added to the images was also not differentially affected by adaptation to the original or sharper images. These results complement our previous studies examining how observers adapt to the textural properties or phase spectra of mammograms. Like the phase spectrum, adaptation to the amplitude spectrum of mammograms alters spatial sensitivity and visual judgments about the images. However, unlike the phase spectrum, adaptation to the amplitude spectra did not confer a selective performance advantage relative to more natural spectra.
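The spectral manipulation described in this abstract, sharpening an image by filtering its amplitude spectrum to a shallower slope while leaving the phase spectrum untouched, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual stimulus-generation code; the function name and parameterization are hypothetical:

```python
import numpy as np

def reslope(image, delta):
    """Adjust an image's amplitude-spectrum slope by delta.

    Multiplies each Fourier amplitude by f**delta: delta > 0 boosts
    high spatial frequencies (shallower slope, sharpened image);
    delta < 0 attenuates them (steeper slope, blurred image).
    The phase spectrum is untouched.
    """
    h, w = image.shape
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0            # avoid 0**negative at the DC term
    filt = f ** delta        # radial amplitude filter
    filt[0, 0] = 1.0         # keep the mean luminance unchanged
    return np.real(np.fft.ifft2(F * filt))
```

Because the filter is radially symmetric and real-valued, the output stays real and only the slope of the amplitude spectrum changes, which is the property the adaptation conditions above manipulate.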
Although object-related areas were discovered in human parietal cortex a decade ago, surprisingly little is known about the nature and purpose of these representations, and how they differ from those in the ventral processing stream. In this article, we review evidence for the unique contribution of object areas of dorsal cortex to three-dimensional (3-D) shape representation, the localization of objects in space, and in guiding reaching and grasping actions. We also highlight the role of dorsal cortex in form-motion interaction and spatiotemporal integration, possible functional relationships between 3-D shape and motion processing, and how these processes operate together in the service of supporting goal-directed actions with objects. Fundamental differences between the nature of object representations in the dorsal versus ventral processing streams are considered, with an emphasis on how and why dorsal cortex supports veridical (rather than invariant) representations of objects to guide goal-directed hand actions in dynamic visual environments.
Laboratory studies of human dietary choice have relied on computerized two-dimensional (2D) images as stimuli, whereas in everyday life, consumers make decisions in the context of real foods that have actual caloric content and afford grasping and consumption. Surprisingly, few studies have compared whether real foods are valued more than 2D images of foods, and in the studies that have, differences in the stimuli and testing conditions could have resulted in inflated bids for the real foods. Moreover, although the caloric content of food images has been shown to influence valuation, no studies to date have investigated whether ‘real food exposure effects’ on valuation reflect greater sensitivity to the caloric content of real foods versus images. Here, we compared willingness-to-pay (WTP) for, and expectations about satiety after consuming, everyday snack foods that were displayed as real foods versus 2D images. Critically, our 2D images were matched closely to the real foods for size, background, illumination, and apparent distance, and trial presentation and stimulus timing were identical across conditions. We used linear mixed effects modeling to determine whether effects of display format were modulated by food preference and the caloric content of the foods. Compared to food images, observers were willing to pay 6.62% more for (Experiment 1) and believed that they would feel more satiated after consuming (Experiment 2), foods displayed as real objects. Moreover, these effects appeared to be consistent across food preference, caloric content, as well as observers’ estimates of caloric content. Together, our results confirm that consumers’ perception and valuation of everyday foods is influenced by the format in which they are displayed. 
Our findings raise questions about whether images are suitable proxies for real objects in understanding human decision-making in real-world contexts and highlight avenues for improving public health approaches to diet and obesity.
Object interaction requires knowledge of the weight of an object, as well as its shape. The lateral occipital complex (LOC), an area within the ventral visual pathway, is well-known to be critically involved in processing visual shape information. Recently, however, LOC has also been implicated in coding object weight prior to grasping – a result that is surprising because weight is a nonvisual object property that is more relevant for motor interaction than visual perception. Here, we examined the causal role of LOC in perceiving heaviness and in determining appropriate fingertip forces during object lifting. We studied perceptions of heaviness and lifting behavior in a neuropsychological patient (M.C.) who has large bilateral occipito-temporal lesions that include LOC. We compared the patient's performance to a group of 18 neurologically healthy age-matched controls. Participants were asked to lift and report the perceived heaviness of a set of equally-weighted spherical objects of various sizes – stimuli which typically induce the size-weight illusion, in which the smaller objects feel heavier than the larger objects despite having identical mass. Despite her ventral-stream lesions, M.C. experienced a robust size-weight illusion induced by visual cues to object volume, and the magnitude of the illusion in M.C. was comparable to age-matched controls. Similarly, M.C. evinced predictive fingertip force scaling to visual size cues during her initial lifts of the objects that was well within the normal range. These single-case neuropsychological findings suggest that LOC is unlikely to play a causal role in computing object weight.
Working memory (WM) permits maintenance of information over brief delays and is an essential executive function. Unfortunately, WM is subject to age-related decline. Some evidence supports the use of transcranial direct current stimulation (tDCS) to improve visual WM. A gap in knowledge is an understanding of the mechanism characterizing these tDCS-linked effects. To address this gap, we compared the effects of two differently designed tDCS montages on visual working memory (VWM) performance. The bifrontal montage was designed to stimulate the heightened bilateral frontal activity observed in aging adults. The unilateral frontoparietal montage was designed to stimulate activation patterns observed in young adults. Participants completed three sessions (bilateral frontal, right frontoparietal, sham) of anodal tDCS (20 min, 2 mA). During stimulation, participants performed a visual long-term memory (LTM) control task and a visual WM task. There was no effect of tDCS on the LTM task. Participants receiving right unilateral tDCS showed a WM benefit. This pattern was most robust in older adults with low WM capacity. To address the concern that the key difference between the two tDCS montages could be tDCS over the posterior parietal cortex (PPC), we included new analyses from a previous study applying tDCS targeting the PPC paired with a recognition VWM task. No significant main effects were found. A subsequent experiment in young adults found no significant effect of either tDCS montage on either task. These data indicate that tDCS montage, age, and WM capacity should be considered when designing tDCS protocols. We interpret these findings as suggestive that protocols designed to restore more youthful patterns of brain activity are superior to those that compensate for age-related changes.
Why are some visual stimuli remembered, whereas others are forgotten? A limitation of recognition paradigms is that they measure aggregate behavioral performance and/or neural responses to all stimuli presented in a visual working memory (VWM) array. To address this limitation, we paired an electroencephalography (EEG) frequency-tagging technique with two full-report VWM paradigms. This permitted the tracking of individual stimuli as well as the aggregate response. We recorded high-density EEG (256 channel) while participants viewed four shape stimuli, each flickering at a different frequency. At retrieval, participants either recalled the location of all stimuli in any order (simultaneous full report) or were cued to report the item in a particular location over multiple screen displays (sequential full report). The individual frequency tag amplitudes evoked for correctly recalled items were significantly larger than the amplitudes of subsequently forgotten stimuli, regardless of retrieval task. An induced-power analysis examined the aggregate neural correlates of VWM encoding as a function of items correctly recalled. We found increased induced power across a large number of electrodes in the theta, alpha, and beta frequency bands when more items were successfully recalled. This effect was more robust for sequential full report, suggesting that retrieval demands can influence encoding processes. These data are consistent with a model in which encoding-related resources are directed to a subset of items, rather than a model in which resources are allocated evenly across the array. These data extend previous work using recognition paradigms and stress the importance of encoding in determining later VWM retrieval success.
Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three square-wave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization.
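The frequency-tagged responses described in this and related abstracts above are typically quantified by taking the spectrum of the EEG signal and comparing the amplitude at the tagging frequency against the surrounding noise bins. A minimal sketch of this standard baseline-subtraction measure follows; it is illustrative only, and the bin counts and scaling conventions are assumptions that vary across studies:

```python
import numpy as np

def tagged_amplitude(signal, fs, f_tag, n_neighbors=5):
    """Baseline-corrected amplitude at a frequency-tagged bin.

    Computes the amplitude spectrum, then subtracts the mean amplitude
    of n_neighbors bins on each side of the tagged bin (skipping the
    two immediately adjacent bins) from the amplitude at the tag.
    """
    n = len(signal)
    # scale |FFT| so a sine of amplitude A yields A at its bin
    amp = np.abs(np.fft.rfft(signal)) / n * 2
    k = int(round(f_tag * n / fs))          # index of the tagged bin
    neigh = list(range(k - n_neighbors - 1, k - 1)) + \
            list(range(k + 2, k + n_neighbors + 2))
    return amp[k] - amp[neigh].mean()
```

With a 1 Hz face-tagging frequency and epochs containing an integer number of stimulation cycles, the tagged response falls in a single frequency bin and the neighboring bins estimate the noise floor, which is what gives FPVS-EEG its high signal-to-noise ratio.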
Transcranial direct current stimulation (tDCS) is a neuromodulatory approach that is affordable, safe, and well tolerated. This review article summarizes the research and clinically relevant findings from meta-analyses and studies investigating the cognitive effects of tDCS in healthy and clinical populations. We recapitulate findings from recent studies where cognitive performance paired with tDCS was compared with performance under placebo (sham stimulation) in single sessions and longitudinal designs where cognitive effects were evaluated following repeated sessions. In summary, the tDCS literature currently indicates that the effects of tDCS on cognitive measures are less robust and less predictable compared with the more consistent effects on motor outcomes. There is also a notable difference in the consistency of single-session and longitudinal designs. In single-session tDCS designs, there are small effects amid high variability confounded by individual differences and potential sham stimulation effects. In contrast, longitudinal studies provide more consistent benefits in healthy and clinical populations, particularly when tDCS is paired with a concurrent task. Yet, these studies are few in number, thereby impeding design optimization. While there is good evidence that tDCS can modulate cognitive functioning and potentially produce longer-term benefits, a major challenge to widespread translation of tDCS is the absence of a complete mechanistic account for observed effects. Significant future work is needed to identify a priori responders from nonresponders for every cognitive task and tDCS protocol.
Face recognition requires identifying both the invariant characteristics that distinguish one individual from another and the variations within the individual that correspond to emotional expressions. Both have been postulated to be represented via a norm-based code, in which identity or expression are represented as deviations from an average or neutral prototype. We used Fast Periodic Visual Stimulation (FPVS) with electroencephalography (EEG) to compare neural responses for neutral faces, expressions, and anti-expressions. Anti-expressions are created by projecting an expression (e.g. a happy face) through the neutral face to form the opposite facial shape (anti-happy). Expressions and anti-expressions thus differ from the norm by the same "configural" amount and have equivalent but opposite status with regard to their shape, but differ in their ecological validity. We examined whether neural responses to these complementary stimulus pairs were equivalent or asymmetric, and also tested for norm-based coding by comparing whether stronger responses are elicited by expressions and anti-expressions than neutral faces. Observers viewed 20 s sequences of 6 Hz alternations of neutral faces and expressions, neutral faces and anti-expressions, and expressions and anti-expressions. Responses were analyzed in the frequency domain. Significant responses at half the frequency of the presentation rate (3 Hz), indicating asymmetries in responses, were observed for all conditions. Inversion of the images reduced the size of this signal, indicating these asymmetries are not solely due to differences in the low-level properties of the images. While our results do not preclude a norm-based code for expressions, similar to identity, this representation (as measured by the FPVS EEG responses) may also include components sensitive to which configural distortions form meaningful expressions.
The ability to constantly anticipate events in the world is critical to human survival. It has been suggested that predictive processing originates from the motor system and that incoming sensory inputs can be altered to facilitate sensorimotor integration. In the current study, we investigated the role of the readiness potentials, i.e. the premotor brain activity registered within the fronto-parietal areas, in sensorimotor integration. We recorded EEG data during three conditions: a motor condition in which a simple action was required, a visual condition in which a visual stimulus was presented on the screen, and a visuomotor condition wherein the visual stimulus appeared in response to a button press. We measured evoked potentials before the motor action and/or after the appearance of the visual stimulus. Anticipating visual feedback in response to a voluntary action modulated the amplitude of the readiness potentials. We also found an enhancement in the amplitude of the visual N1 and a reduction in the amplitude of the visual P2 when the visual stimulus was induced by the action rather than externally generated. Our results suggest that premotor brain activity might reflect predictive processes in sensory-motor binding and that the readiness potentials may represent a neural marker of these predictive mechanisms.
Observers with normal color vision vary widely in their judgments of color appearance, such as the specific spectral stimuli they perceive as pure or unique hues. We examined the basis of these individual differences by using factor analysis to examine the variations in hue-scaling functions from both new and previously published data. Observers reported the perceived proportion of red, green, blue or yellow in chromatic stimuli sampling angles at fixed intervals within the LM and S cone-opponent plane. These proportions were converted to hue angles in a perceptual-opponent space defined by red vs. green and blue vs. yellow axes. Factors were then extracted from the correlation matrix using PCA and Varimax rotation.
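The analysis pipeline outlined in this abstract, converting scaled hue proportions into opponent-space hue angles and then extracting factors from the correlation matrix, might look like the following simplified sketch. The function names are our own, and the Varimax rotation step used in the study is omitted for brevity:

```python
import numpy as np

def hue_angle(red, green, blue, yellow):
    """Convert scaled hue proportions into an angle (degrees) in a
    perceptual opponent space with red-vs-green and blue-vs-yellow
    axes: 0 = red, 90 = blue, 180 = green, 270 = yellow."""
    return np.degrees(np.arctan2(blue - yellow, red - green)) % 360

def pca_factors(data, n_factors):
    """Unrotated factor loadings from the correlation matrix of an
    observers-by-stimuli matrix of hue angles; a rotation such as
    Varimax would normally be applied to these loadings afterwards."""
    corr = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]
    # loadings scaled by the square root of each eigenvalue,
    # the usual factor-analytic convention
    return vecs[:, order] * np.sqrt(vals[order])
```

Running the factor extraction on the stimulus-by-stimulus correlations across observers is what lets the variations in individual hue-scaling functions be summarized by a small number of underlying factors.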
There is mounting evidence that constraints from action can influence the early stages of object selection, even in the absence of any explicit preparation for action. We examined whether action properties of images can influence visual search, and whether such effects were modulated by hand preference. Observers searched for an oddball target among 3 distractors. The search arrays consisted either of images of graspable “handles” (“action-related” stimuli), or images that were otherwise identical to the handles but in which the semicircular fulcrum element was reoriented so that the stimuli no longer looked like graspable objects (“non-action-related” stimuli). Our results suggest that action properties in images, and constraints for action imposed by preferences for manual interaction with objects, can influence attentional selection in the context of visual search.
During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. We applied functional magnetic resonance imaging in human subjects to examine representations within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3).
Audition dominates other senses in temporal processing, and in the absence of auditory cues, temporal perception can be compromised. Moreover, after auditory deprivation, visual attention is selectively enhanced for peripheral visual stimuli. We assessed whether early hearing loss affects motor-sensory recalibration, the ability to adjust the timing of an action and its sensory effect based on recent experience. Early deaf participants and hearing controls were asked to discriminate the temporal order between a motor action (a keypress) and a visual stimulus (a white circle) before and after adaptation to a delay between the two events. Adaptation to a motor-sensory delay induced distinctive effects in the two groups, with hearing controls showing a recalibration effect for central stimuli only and deaf individuals for peripheral visual stimuli only.
The human brain integrates hemifield-split visual information via interhemispheric transfer. The degree to which neural circuits involved in this process behave differently during word recognition as compared to object recognition is not known. Evidence from neuroimaging (fMRI) suggests that interhemispheric transfer during word viewing converges in the left hemisphere, in two distinct brain areas, an “occipital word form area” (OWFA) and a more anterior occipitotemporal “visual word form area” (VWFA). We used a novel fMRI half-field repetition technique to test whether or not these areas also integrate nonverbal hemifield-split string stimuli of similar visual complexity.
Tools afford specialized actions that are tied closely to object identity. Although there is mounting evidence that functional objects, such as tools, capture visuospatial attention relative to non-tool competitors, this leaves open the question of which part of a tool drives attentional capture. We used a modified version of the Posner cueing task to determine whether attention is oriented towards the head versus the handle of realistic images of common elongated tools. We compared cueing effects for tools with control stimuli that consisted of images of fruit and vegetables of comparable elongation to the tools. Critically, our displays controlled for lower-level influences on attention that can arise from global shape asymmetries in the image cues. Observers were faster to detect low-contrast targets positioned near the head end versus the handle of tools.
Working memory (WM) capacity falls along a spectrum, with some people demonstrating higher and others lower WM capacity. Efforts to improve WM include applying transcranial direct current stimulation (tDCS), in which small amounts of current modulate the activity of underlying neurons and enhance cognitive function. However, not everyone benefits equally from a given tDCS protocol. Recent findings revealed tDCS-related WM benefits for individuals with higher WM capacity.
Determining the role of intraparietal sulcus (IPS) regions in working memory (WM) remains a topic of considerable interest that still lacks clarity. Adjudication between competing theoretical perspectives is complicated by divergent findings from different methodologies. For example, fMRI studies typically use full-field stimulus presentations and report bilateral IPS activation, whereas EEG studies direct attention to a single hemifield and report a contralateral bias in both hemispheres. We addressed this issue by applying a regions-of-interest fMRI approach to elucidate IPS contributions to WM. We manipulated stimulus type and the cued hemifield to assess the degree to which IPS activations reflect stimulus-specific or stimulus-general processing, consistent with the pure storage or internal attention hypotheses.
Recent studies have demonstrated that factors influencing perception, such as Gestalt grouping cues, can influence the storage of information in visual working memory (VWM). In some cases, stationary cues, such as stimulus similarity, lead to superior VWM performance. However, the neural correlates underlying these benefits to VWM performance remain unclear. One neural index, the contralateral delay activity (CDA), is an event-related potential that shows increased amplitude according to the number of items held in VWM and asymptotes at an individual's VWM capacity limit.