Program details


APCV 2024 schedule

Keynote speakers

Wolfe received an AB in Psychology in 1977 from Princeton and his PhD in Psychology in 1981 from MIT under the supervision of Richard Held. His research focuses on visual search and visual attention, with a particular interest in socially important search tasks in areas such as medical image perception (e.g. cancer screening), security (e.g. baggage screening), and intelligence. He taught Introductory Psychology and other courses for 25 years, mostly at MIT. Wolfe is an elected member of the American Academy of Arts and Sciences and a Fellow of AAAS, APA (Divisions 1, 3, & 6), and APS. He has been President of the Federation of Associations in Behavioral and Brain Sciences (FABBS), President of the Eastern Psychological Association, Chair of the Board of the Psychonomic Society, and a member of the Board of the Vision Sciences Society. He was the founding Editor-in-Chief of Cognitive Research: Principles and Implications (CRPI) and was Editor of Attention, Perception, and Psychophysics. Wolfe also serves on the North American Board of the Union for Reform Judaism. He is married to Julie Sandell, Provost of Suffolk University in Boston (information accurate as of March 2022). They have three sons: Ben, Philip, and Simon. Wolfe's office contains more stuffed animals than one would expect in the office of a grown man.

The Shimojo Psychophysics Laboratory is one of the few laboratories on the Caltech campus that concentrates exclusively on the study of perception, cognition, and action in humans. The lab employs psychophysical paradigms and a variety of recording techniques, such as eye-tracking, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG), as well as brain stimulation techniques such as transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), and, more recently, ultrasound neuromodulation (UNM). The lab aims to bridge the gap between cognitive science and neuroscience. They would like to understand how the brain adapts to real-world constraints to resolve perceptual ambiguity and to reach ecologically valid, unique solutions. In addition to their continuing interest in surface representation, motion perception, attention, and action, they also focus on crossmodal integration (including VR environments), visual preference/attractiveness decisions, the social brain, flow and choking in game-playing brains, and individual differences related to the "neural, dynamic fingerprint" of the brain.

Dr. Li received his medical degree in 1997 from Zhejiang University School of Medicine in China and his Ph.D. in Neuroscience in 2003 from the University of Texas at Houston, where he studied the organization of reciprocal feedback synapses at the axon terminal of the retinal bipolar cell in Dr. Stephen Massey's laboratory. From 2003 to 2007, as a postdoctoral fellow, he worked with Dr. Steven DeVries at Northwestern University, where he investigated synaptic connections between photoreceptors and bipolar neurons in the mammalian retina. Dr. Li joined NEI as the principal investigator of the Unit on Retinal Neurophysiology in 2007. His unit uses a variety of physiological and anatomical techniques to explore retinal synapses and circuits and their functions in vision.

The long-term goal of our research is to study the mammalian retina as a model for the central nervous system (CNS) -- to understand how it functions in physiological conditions, how it is formed, how it breaks down in pathological conditions, and how it can be repaired. We have focused on two research themes: 1) photoreceptor structure, synapses, circuits, and development, and 2) hibernation and metabolic adaptations in the retina and beyond. As the first neurons of the visual system, photoreceptors are vital for photoreception and the transmission of visual signals. We are particularly interested in cone photoreceptors, as they mediate our daylight vision with high-resolution color information. Diseases affecting cone photoreceptors compromise visual function in the central macular area of the human retina and are thus most detrimental to our vision. However, because cones are much less abundant than rods in most mammals, they are less well studied. We have used the ground squirrel (GS) as a model system to study cone vision, taking advantage of its unique cone-dominant retina. In particular, we have focused on short-wavelength-sensitive cones (S-cones), which are not only essential for color vision, but are also an important origin of signals for biological rhythms, mood and cognitive functions, and the growth of the eye during development. We are studying critical cone synaptic structures -- synaptic ribbons -- along with the synaptic connections of S-cones and the development of S-cones with regard to their specific connections. This work will provide knowledge of normal retinal development and function, which can also be extended to the rest of the CNS. In addition, such knowledge will benefit the development of optimal therapeutic strategies for regeneration and repair in cases of retinal degenerative disease. Many neurodegenerative diseases, including retinal diseases, are rooted in metabolic stress in neurons and/or glial cells.
Using the same GS model, we aim to learn from this hibernating mammal, which possesses a remarkable capability to adapt to the extreme metabolic conditions of hibernation. By exploring the mechanisms of such adaptation, we hope to discover novel therapeutic tactics for neurodegenerative diseases.

The research in our laboratory focuses on computational and psychophysical studies of visual perception. Unlike machine vision approaches, we emphasize the physiological plausibility of our models, because such models have more explanatory and predictive power for understanding biological vision. We have been constructing binocular vision models by analyzing known spatiotemporal receptive-field properties of binocular cells in the visual cortex, and have been applying our models to explain depth perception from horizontal disparity (stereovision), vertical disparity (the induced effect), interocular time delay (the Pulfrich effects), motion fields (structure-from-motion), and monocular occlusion (da Vinci stereopsis). We also test new predictions from our models via visual psychophysical experiments. A recent emphasis of our research is the psychophysical investigation of faces. Face perception is essential for social interactions. While traditional face studies have primarily focused on high-level properties of face perception, we take a complementary approach by investigating the contributions of low-level processing along multiple, interactive streams to face perception. We have been studying hierarchical face processing from low to high levels by measuring multi-level adaptation aftereffects. We also plan to conduct computational studies of faces. Finally, we are interested in computational models of motor planning and sensorimotor integration. In particular, we would like to understand synergistic interactions between visual perception and motor control.

Dr. Li obtained her B.S. in Physics in 1984 from Fudan University, Shanghai, and her Ph.D. in Physics in 1989 from the California Institute of Technology. She was a postdoctoral researcher at Fermi National Laboratory in Batavia, Illinois, USA; the Institute for Advanced Study in Princeton, New Jersey, USA; and Rockefeller University in New York, USA. She has been a faculty member in Computer Science at the Hong Kong University of Science and Technology, and was a visiting scientist at various academic institutions. In 1998, she and her colleagues co-founded the Gatsby Computational Neuroscience Unit at University College London. Since October 2018, she has been a professor at the University of Tuebingen and the head of the Department of Sensory and Sensorimotor Systems at the Max Planck Institute for Biological Cybernetics in Tuebingen, Germany. Her research experience over the years ranges from high energy physics to neurophysiology and marine biology, with most experience in understanding brain functions in vision, olfaction, and nonlinear neural dynamics. In the late 1990s and early 2000s, she proposed a theory (which is being extensively tested) that the primary visual cortex in the primate brain creates a saliency map to automatically attract visual attention to salient visual locations. This theory, and the supporting experimental evidence, have led her to propose a new framework for understanding vision. She is also the author of Understanding Vision: Theory, Models, and Data, Oxford University Press, 2014.

Dr. Goodhew is currently an Associate Professor at The Australian National University (ANU). She completed her PhD at the University of Queensland and then a postdoctoral fellowship at the University of Toronto. She then moved to ANU, where she was previously an Australian Research Council (ARC) Future Fellow, Senior Lecturer, and ARC Discovery Early Career Researcher.

To err is human: we all make mistakes in everyday life. Sometimes such everyday cognitive slips and lapses have relatively trivial consequences, such as the inconvenience of missing a forgotten appointment. But other times, such cognitive failures can have profound consequences, such as failing to notice a safety-critical sign by the side of the road, resulting in a car crash. While everyone succumbs to cognitive failures, there are clear and meaningful individual differences in the frequency with which they are experienced. One measure with a long and illustrious history of capturing these differences is the Cognitive Failures Questionnaire (CFQ). CFQ scores are related to a host of important real-world outcomes, such as a person's risk of being responsible for a car crash or workplace accident. Dr. Goodhew has an ongoing program of research investigating the mechanisms underlying cognitive failures, and assessing the convergences and divergences between people's subjective experiences of cognitive failures and their objective performance on important cognitive tasks.


Mechanisms of face perception

Colin Palmer
Qian Wang
Jessica Taubert
Dongwon Oh
Yong Zhi Foo
Colin Palmer and Gwenisha Liaw

The human face has special significance as a visual cue, helping us to track the emotional reactions and attentional focus of others, shaping social trait impressions (e.g., attractiveness and trustworthiness), and helping us to identify people familiar to us. While face processing has received much attention in vision science, the mechanisms that shape the everyday experience of faces are still only partially understood. What are the core dimensions of facial information represented in the visual system? How is this information extracted from the visual signals relayed to the brain from the retina? How do implicit processes, such as physiological responses or evolutionary pressures, align with our perceptual experience of faces? This symposium showcases recent discoveries and novel approaches to understanding the visual processing of faces in the human brain. Talks cover the use of intracranial neural recordings to uncover cortical and subcortical responses underlying face perception, data-driven approaches to defining the social dimensions observers perceive in faces, characterisation of the link between face features, perception, and physiology using psychophysics and computational models, and analysis of the biological and evolutionary factors that shape face impressions. Together this provides a snapshot of exciting developments occurring at a key interface between vision science and social behaviour.

The impact of recent technologies on studies of multisensory integration

Hiroaki Kiyokawa
Juno Kim
Hideki Tamura
Michiteru Kitazaki
Stephen Palmisano

Multisensory integration is one of the key functions for obtaining stable visual and non-visual perception in our daily lives. However, it remains a challenging problem to comprehensively understand how our brain integrates different types of modal information. How does our visual system extract meaningful visual information from retinal images and integrate it with information from other sensory modalities? Recent technologies, such as virtual reality (VR) and augmented reality (AR), can provide scalable, interactive, and immersive environments in which to test the effects of external stimulation on our subjective experiences. What do these technologies bring to our research? We invite world-leading scientists in human perception and performance to discuss the psychological, physiological, and computational foundations of multisensory integration, and methodologies that provide insight into how non-visual sensory information enhances our visual experiences of the world.

Regularity and (un)certainty: extracting implicit sensory information in perception and action

Juno Kim
Hideki Tamura
Shao-Min Hung
Hsin-I Iris Liao
Philip Tseng
Nobuhiro Hagura
David Alais

How do we track the relations among sensory items in our surroundings? With our sensory systems bombarded by immeasurable external information, it is hard to envision a willful, deliberate, moment-by-moment sensory tracking mechanism. Instead, here we seek to illustrate how our behavior is affected by implicitly tracked regularity and the accompanying (un)certainty. We will provide evidence from a wide spectrum of studies, encompassing interactions among the visual, auditory, and motor systems. Shao-Min (Sean) Hung first establishes implicit regularity tracking in a cue-target paradigm. His findings suggest that regularity tracking between sensory items relies very little on explicit knowledge or visual awareness. However, deriving meaningful results requires careful work. Philip Tseng's work expands on this point and demonstrates how visual statistical learning can be influenced by task demands. These results underscore the importance of experimental design in the search for implicit extraction of sensory information. Similar tracking of perceptual statistics extends to the auditory domain, as evidenced by Hsin-I (Iris) Liao's work. Her research shows how pupillary responses reflect perceptual alternations and unexpected uncertainty in response to auditory stimulation. Next, we ask how our behavior responds to such regularities. Using a motor learning paradigm, Nobuhiro Hagura reveals that different levels of visual uncertainty can tag different motor memories, showing that uncertainty provides contextual information to guide our movement. Finally, David Alais uses continuous measurement of perception during walking to reveal a modulation occurring at the step rate, with perceptual sensitivity optimal in the swing phase between steps. Together, our symposium aims to paint a multifaceted picture of perceptual regularity tracking and the (un)certainty it generates. These findings reveal the ubiquitous nature of implicit sensory processing across multiple sensory domains, integrating perception and action.