This chapter explores the importance of visual experience for cognitive processes, with particular focus on how blind individuals perceive their environment and, subsequently, how they “see” the world, in the context of research in psychology and neuroscience. This review of the current state of the art examines a number of crucial areas of research, including: evidence for compensatory plasticity, whereby the remaining senses develop supranormally to overcome the missing sense; the extent to which there is evidence for the general-loss hypothesis that missing one sense has detrimental effects on the other senses; the impact of lower-level perceptual changes arising from sensory loss on higher-level cognitive processing; and the role of critical or sensitive periods in brain development and the age of onset of sensory impairment. These areas of research are then applied to our understanding of the structure and function of the brain, with accumulating evidence for the metamodal hypothesis that the brain is organised by task or computation, not by the primary senses, and that this organisation builds on existing multisensory networks throughout the brain. This research has implications for long-standing philosophical questions about the nature of perception and for developing assistive technology for the visually impaired, such as sensory substitution devices.
Despite living in a highly visual environment, our perceptual experience of the world is richly multisensory (Ghazanfar and Schroeder, 2006; Spence, 2011). We are able to extract information derived from one sensory modality to inform another; we may know a shape by touch and identify it by sight, or know a violin by touch and identify it by sound. This chapter focuses on how blind people perceive the world using different sensory modalities. Comparing congenitally blind, late blind and sighted individuals enables the role of visual experience for cognition (at psychological and neural levels) to be explored. This sheds light on how the “blind brain” may differ from the sighted brain, which is important to establish for developing interventions, such as assistive technology and sensory substitution devices (Proulx et al., 2016; Scheller, Petrini and Proulx, 2018), and could benefit employment and educational practice (Metatla et al., 2015; Metatla et al., 2016). Furthermore, if congenitally blind and late blind individuals differ significantly on cognitive tasks, this suggests that visual input during the first few years of life plays a role in cognitive development, reflecting a critical period in neural and psychological terms. How “critical” critical periods are for visual development is also important to establish, because blind individuals who could be treated (for example, if their blindness is due to cataracts) sometimes remain untreated on the assumption that, having missed a critical period for visual development, they would not develop sight later in life; this is the case for 40% of blind children in India (Kalia et al., 2017). This chapter explores the importance of visual experience for cognitive processes, with particular focus on how blind individuals perceive their environment and, subsequently, how they “see” the world.
There is a substantial body of physiological and behavioural research that suggests that blind individuals have enhanced sensitivity of their remaining sensory modalities since they rely on these to a greater extent than sighted individuals. This reflects a phenomenon known as “compensatory plasticity” in which individuals cope with the loss of vision by developing supranormal skills when using one of the remaining senses (Kupers and Ptito, 2011). It has been hypothesised that these enhancements depend on the high plasticity of the visual cortex (Merabet and Pascual-Leone, 2010; Merabet et al., 2005). For example, it has been found that the visual cortex in blind individuals is recruited to carry out non-visual tasks such as braille reading (Burton, Sinclair and Agato, 2012; Cohen et al., 1997) and sound localisation (Gougoux et al., 2005; Leclerc et al., 2000).
Audition is arguably the most studied sensory modality in blind individuals (Kupers and Ptito, 2014). Gougoux et al. (2004) tested early blind and sighted participants on a pitch discrimination task, in which participants had to decide whether the pitch between two tones was rising or falling. They found that early blind individuals performed significantly better than sighted controls, even when the speed of pitch change was ten times faster than that perceived by controls. This supports the compensatory hypothesis, since it provides evidence of increased auditory perceptual abilities in blind individuals. However, one could argue that enhanced auditory capacities in blind individuals may reflect their typically greater musical experience rather than vision loss per se (Cattaneo et al., 2011). Rokem and Ahissar (2009) accounted for this by comparing congenitally blind individuals with sighted controls matched on musical training, as well as on age and education. They found superior auditory frequency discrimination and superior speech perception, measured by resilience to noise, among the congenitally blind individuals. Wan et al. (2010) also considered musical experience by controlling for musical training and absolute pitch (the ability to recreate or recognise a musical note without a reference tone) when comparing auditory perception skills in blind and sighted individuals. They found that congenitally (but not late) blind subjects outperformed the sighted controls in pitch discrimination and pitch-timbre categorisation (making judgements on both pitch and timbre) but not in pitch working memory (listening to a series of tones and determining whether the first and last tones were the same).
This suggests that visual deprivation with onset early in life does not lead to superior performance in all auditory tasks, and that the enhancement of auditory acuity for pitch stimuli in the blind is restricted to basic perceptual skills rather than extending to higher-level processes such as working memory.
It is also important to consider auditory perception in more naturalistic settings, such as processing speech sounds, yet only a few studies do so (Kupers and Ptito, 2014). Hugdahl et al. (2004) investigated consonant-vowel syllable discrimination via headphones in a dichotic listening task in which participants were instructed to attend to the right-ear stimulus or the left-ear stimulus, or were given no specific instruction. Fourteen congenitally or early blind individuals were compared with 129 sighted individuals. The blind individuals outperformed sighted individuals in correctly identifying syllables. Furthermore, when instructed to attend to the left-ear stimulus and report only from the attended channel, they were significantly better than sighted controls. The typical finding in this paradigm is a right-ear advantage, which indicates better processing of the consonant-vowel syllable stimuli in the left hemisphere. The results from Hugdahl et al. (2004) therefore suggest that there is hemispheric reorganisation in blind individuals in the auditory modality that may enable enhanced speech processing. In contrast, however, Gougoux et al. (2009) found no behavioural differences in voice recognition between congenitally blind, late blind and sighted participants.
Contrasting results on how visual experience impacts auditory processing are being revealed by an increasing number of studies in the domain of spatial cognition; for a review see Scheller et al. (2018). Auditory localisation is key for many behaviours and is normally a cross-modal or multisensory task incorporating vision. Blind individuals can show normal or even supranormal auditory localisation performance in far space as well as near space (Fieger et al., 2006; Lessard et al., 1998; Voss et al., 2004). However, other studies have reported that early or congenitally blind individuals have a compromised representation of auditory space in the vertical (sagittal) plane compared to sighted individuals (Finocchietti, Cappagli and Gori, 2015). This might be due to a disruption of audio-visual multisensory calibration (Gori et al., 2014a), where visual perception early in life serves as a multisensory “glue” that calibrates the other senses (Pasqualotto and Proulx, 2012). Auditory localisation in the horizontal plane in the visually impaired yields accurate or even superior results because the cues the brain uses to decode sound source location, the interaural loudness difference (ILD) and interaural time difference (ITD), remain available without vision. Sound location in the vertical plane can only be mapped from pinna-related spectral shape cues, which are less accurate than interaural time or loudness differences, unless one has specially adapted pinnae, as the barn owl does (Konishi, 2000). Interestingly, the superior auditory localisation performance of blind individuals is observed mainly in the lateral perceptual field and not in the centre, perhaps suggesting a monaural mechanism for this superior performance rather than the standard binaural localisation cues (Roder et al., 1999).
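The horizontal/vertical asymmetry in the available cues can be illustrated with a simple numerical sketch. The code below uses Woodworth's spherical-head approximation for the ITD, a textbook model not taken from the studies cited here, with assumed values for head radius and the speed of sound: ITD varies lawfully with horizontal azimuth but is zero for every elevation on the vertical midline, which is why elevation must instead be decoded from the less precise pinna-related spectral cues.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound_ms=343.0):
    """Interaural time difference (seconds) for a distant source at a given
    horizontal azimuth, using Woodworth's spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta)). Head radius and speed of sound
    are assumed illustrative values."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound_ms * (theta + math.sin(theta))

# ITD is zero straight ahead and grows towards the side of the head
# (~656 microseconds at 90 degrees), giving a reliable horizontal cue
# that requires no visual input at all.
for azimuth in (0, 30, 60, 90):
    print(azimuth, round(woodworth_itd(azimuth) * 1e6), "microseconds")
```

Note that any source on the vertical midline, whatever its elevation, produces an azimuth of zero and hence identical (zero) ITD and ILD, consistent with the chapter's point that vertical localisation depends on spectral shape cues alone.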
The age of onset of blindness seems to play a critical role as well. In the study by Finocchietti et al. (2015), early blind individuals showed impaired audio localisation in the lower sagittal plane, whereas late blind individuals did not; the latter group’s responses were similar to those of sighted participants. This might indicate that multisensory calibration builds the foundations for understanding physical properties of the environment at an early age, when plasticity is high (Pasqualotto and Proulx, 2012; Proulx, Brown et al., 2014; Scheller et al., 2018).
The sense of touch provides a great deal of information about the environment, for example when recognising everyday objects. For blind individuals especially, touch enables reading and writing in braille, thus facilitating vital communication. It is therefore natural to propose that the tactile perceptual abilities blind individuals use to “see” the world may be enhanced, given their increased reliance on tactile information. Indeed, Van Boven et al. (2000) compared 15 early blind braille readers with 15 sighted subjects on a grating orientation discrimination task used to measure the psychophysical limits of spatial acuity. They found that the blind individuals had a significantly lower mean grating orientation discrimination threshold than the sighted group, and that their index (braille reading) finger had a significantly lower threshold than the other fingers. This suggests that blind individuals may have enhanced tactile spatial acuity due to adaptive changes. Higher tactile acuity in blind individuals has also been supported by subsequent research (Goldreich and Kanics, 2003; Legge et al., 2008).
However, not all touch is improved in the absence of visual experience. Other studies find no significant differences between blind and sighted subjects in tactile pressure sensitivity (Pascual-Leone and Torres, 1993; Sterr, Green and Elbert, 2003; Sterr et al., 1998) or in spatial-acuity-dependent grating orientation discrimination (Grant, Thiagarajah and Sathian, 2000). Wong, Gnanakumaran and Goldreich (2011) tested whether improved tactile acuity was due to blind individuals’ greater reliance on tactile information (the tactile experience hypothesis) or to the loss of vision itself driving increased tactile acuity (the visual deprivation hypothesis). They tested spatial acuity on the index, middle and ring fingers of each hand as well as on the lips, comparing 28 blind individuals with 55 sighted individuals. The blind individuals significantly outperformed the sighted group on all fingers, but not on the lips, consistent with the tactile experience hypothesis. Furthermore, within the blind participants, proficient braille readers outperformed non-readers on their preferred reading index finger; within the proficient braille readers, the preferred index finger outperformed the opposite index finger, and this advantage correlated with weekly reading time. These results suggest that enhanced tactile acuity is due to increased tactile experience rather than to the loss of vision itself. This is further supported by studies using a visual-tactile sensory substitution device known as the tongue display unit (TDU), which translates visual information into electro-tactile stimulation applied to the tongue. When congenitally blind and matched sighted control subjects were trained with the TDU, both groups performed equally well in discriminating orientation (Ptito et al., 2005), motion (Matteau et al., 2010) and shapes (Ptito et al., 2012).
This therefore suggests that blind individuals have enhanced tactile perception due to their increased experience and reliance on tactile information to see the world, such that practice makes perfect (Sathian, 2000).
The sense of smell is important for almost all species, from finding the right food to choosing a mate (Kupers et al., 2011), and can evoke strong emotions and vivid memories (Gottfried, 2006). While humans are believed to have a less refined sense of smell than other animals, they are nonetheless able to distinguish between thousands of different odours. As in auditory and tactile processing, it has been proposed that blind individuals, given their loss of vision, have enhanced olfactory performance. For example, Cuevas et al. (2009) found that blind individuals were better than sighted controls in the free identification of odours and in odour discrimination. Furthermore, Beaulieu-Lefebvre et al. (2011) found that blind subjects have a lower odour detection threshold and report being more aware of their olfactory environment. As with enhanced auditory and tactile perception, this is proposed to be due to cross-modal plasticity, since the olfactory bulb in the forebrain is highly plastic (Li et al., 2006). Indeed, Rombaux et al. (2010) found that superior olfactory performance in blind subjects was related to increased volume of the olfactory bulb. Moreover, Kupers et al. (2011) found that congenitally blind subjects activated higher-order olfactory areas more strongly than blindfolded sighted subjects.
However, there is also evidence of comparable performance between blind and sighted individuals in olfactory thresholds, discrimination abilities and cued smell identification (Cuevas et al., 2009; Kupers et al., 2011; Rombaux et al., 2010; Rosenbluth, Grossman and Kaitz, 2000). The discrepancy in results possibly reflects the limited sample sizes of congenitally blind participants in studies such as Cuevas et al. (2009) and Beaulieu-Lefebvre et al. (2011), or the combining of congenital, early and late blind individuals into one group, a persistent problem in the literature (Pasqualotto and Proulx, 2012). Addressing these limitations, Sorokowska (2016) studied congenitally blind (n = 43), late blind (n = 41) and sighted (n = 84) participants matched for age and gender, with sample sizes that are rarely achieved in neuroscience and psychology research on visual impairment. Sorokowska found no significant difference between groups on olfactory threshold, odour discrimination, cued identification or free identification scores using a standardised smell test (the Sniffin’ Sticks test). This suggests that sensory compensation in blind individuals is not pronounced in olfactory abilities when measured by standardised smell tests. Using the same test, another recent study by Majchrzak et al. (2017) compared a heterogeneous group of visually impaired participants (n = 99) with sighted controls (n = 100). They also found no significant difference in odour identification or discrimination tasks, further suggesting that blind individuals do not have an enhanced sense of smell.
The consequences of blindness for the remaining sensory modalities, however, are still under debate (Gurtubay-Antolin and Rodríguez-Fornells, 2017). In contrast to the literature discussed above, which argues that blind individuals have enhanced sensitivity in their remaining sensory modalities (the compensatory hypothesis), it has also been suggested that visual deprivation leads to maladjustments in the remaining modalities (the general-loss hypothesis). This is particularly the case for modalities that carry spatial information (e.g. audition, touch), since localisation is hypothesised to benefit from visual calibration (Gori et al., 2014a; Gori et al., 2014b). For example, research suggests that blind individuals are impaired in auditory spatial tasks such as bisection (Gori et al., 2014a), vertical localisation (Lewald, 2002; Zwiers, Van Opstal and Cruysberg, 2001) and absolute distance discrimination (Kolarik et al., 2013). Evidence also suggests blind individuals are impaired in tactile spatial tasks, including haptic orientation discrimination (Postma et al., 2008), visual spatial imagination (Noordzij, Zuidhoek and Postma, 2007) and rotation of object arrays (Ungar, Blades and Spencer, 1995). Cappagli, Cocchi and Gori (2017) investigated both tactile and auditory spatial perception in the same study and found substantial spatial impairment in congenitally blind children and blind adults for auditory distance perception and proprioceptive reproduction (hand-pointing in the haptic domain) when compared to blindfolded sighted participants.
This supports the notion that visual calibration is necessary for spatial perception in the auditory and haptic domains. Indeed, auditory distance accuracy and variability are improved in the presence of additional congruent visual cues in sighted individuals (Anderson and Zahorik, 2014; Calcagno et al., 2012; Finnegan, O’Neill and Proulx, 2015, 2017). Moreover, Cappagli et al. (2017) found that late blind individuals performed significantly better than sighted subjects in both auditory and tactile space perception tasks. This superior performance of late blind individuals may be due to practice effects: after vision loss, their spatial judgements must rely solely on non-visual cues, such as auditory cues, that were calibrated early in life (Cappagli et al., 2017). However, this assumes that during early development vision calibrates hearing in encoding spatial information (Gori et al., 2014a; Pasqualotto and Proulx, 2012). While this study included only three late blind individuals and requires replication, these preliminary results, with congenitally blind individuals severely impaired while late blind individuals showed superior abilities compared to sighted individuals, suggest that space perception may be drastically weakened by the lack of visual calibration of the auditory and haptic modalities during a critical period of development.
Thus far we have reviewed the perceptual changes that occur with visual experience or the lack thereof. Given that most higher-level cognitive processing is based on inferences about the outside world that are validated with perceptual information, any initial changes in sensation or perception will have higher-order consequences (Proulx, 2013). One example is the set of behaviours classified as spatial cognition. Spatial cognition has been tested in blind individuals across a suite of behaviours, including memory for arrays of objects lying within manipulatory space (arm’s length), environmental knowledge, wayfinding and navigation. On the one hand, some researchers have reported results suggesting that congenitally blind people do not fully develop spatial cognition, and thus perform more poorly than sighted and late blind participants (Pasqualotto and Newell, 2007). On the other hand, other researchers have reported that visual experience is not necessary for the development of spatial cognition and that blind individuals can perform spatial tasks without visual experience (Landau, Gleitman and Spelke, 1981).
We hypothesised that these conflicting findings could be resolved if another factor were considered that would classify these studies by the frame of reference required for the task (Pasqualotto and Proulx, 2012). Shelton and McNamara (2001) noted that objects can be mentally represented with respect to the position of the observer (an egocentric frame of reference) or with respect to the position of the object in the environment (an allocentric frame of reference).
Earlier research with sighted participants found that people have a preference for the allocentric representation (Mou and McNamara, 2002). If the objects are lined up in rows and columns, for example, then we remember that format rather than how the array looked from our own angled viewpoint. This is also true if participants are blindfolded and walked from one “home” location to each object in turn: sighted participants will still prefer the allocentric reference frame over the egocentric one. In our study we asked whether visual experience during development is key to creating the brain structures that support such an other-centred reference frame. We therefore tested people who were congenitally blind, participants who were sighted for some period of time and then became blind later in life, and sighted participants. We blindfolded everyone and walked them to the locations of objects in a large room. Later we tested them on a computer with a virtual pointing task: “Imagine you are at the cup facing the book, point to the pan.”
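As a concrete illustration of what such a virtual pointing task asks of the participant, the sketch below computes the correct egocentric pointing angle for the example instruction. The object names match the example, but the layout coordinates are hypothetical, invented purely for illustration; they are not the arrangement used in the actual study.

```python
import math

# Hypothetical room coordinates (metres) for the objects in the example.
layout = {"cup": (0.0, 0.0), "book": (2.0, 0.0), "pan": (2.0, 2.0)}

def pointing_angle(stand_at, face_towards, point_to, objects):
    """Signed angle (degrees) a participant should point when imagining
    they stand at `stand_at` facing `face_towards`. Positive = to the
    left, negative = to the right, 0 = straight ahead."""
    ax, ay = objects[stand_at]
    hx, hy = objects[face_towards][0] - ax, objects[face_towards][1] - ay
    tx, ty = objects[point_to][0] - ax, objects[point_to][1] - ay
    # Signed angle between the imagined heading and the target direction.
    return math.degrees(math.atan2(hx * ty - hy * tx, hx * tx + hy * ty))

# "Imagine you are at the cup facing the book, point to the pan."
print(round(pointing_angle("cup", "book", "pan", layout), 1))  # 45.0 (left)
```

In the actual experiments, accuracy is compared across imagined headings aligned with the learning viewpoint (favouring an egocentric strategy) versus headings aligned with the room or array axes (favouring an allocentric strategy), which is how the preferred reference frame is inferred.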
First we found an interesting difference between the congenitally blind and sighted people: although the sighted people preferred the other-centred, or allocentric, reference frame, the congenitally blind participants preferred the self-centred or egocentric reference frame. The important piece of the puzzle, however, was whether the late blind people would perform like the congenitally blind, showing that current visual experience matters, or like the sighted, showing the role of early visual experience. The results were clear: the late blind performed the same as the sighted participants. Therefore having the experience of vision early in life lays the groundwork in the brain for the representation of locations in a different reference frame than that found in people who never had visual experience.
The crucial finding from our work on spatial cognition that resolves the prior conflicting findings is that the difference between the blind and sighted participants was a matter of kind, rather than degree. That is, all three groups performed at a similar level of accuracy, but with different techniques or strategies revealed by the reliance on different spatial frames of reference. Moreover, the reliance on egocentric versus allocentric frames of reference makes sense given the strengths of visual versus auditory and tactile perception: sensing distant (out of reach), silent objects in parallel (Proulx, Brown et al., 2014). The distal and parallel features of visual processing give rise to the utility of vision as a calibrator for multisensory spatial information (Pasqualotto and Proulx, 2012), and thus provides the neural means for mapping the relative locations of objects to one another (allocentric) rather than just in relation to the location of the perceiver (egocentric). Interestingly we found similar differences between the congenitally blind on the one hand and the sighted and late blind on the other hand in a mental number line task (Pasqualotto, Taya and Proulx, 2014) that likely draws upon similar representations of space and magnitude in the posterior parietal cortex.
In contrast, work in another area of cognition, memory, has revealed consistent advantages in working, semantic and episodic memory in those without visual experience (congenital visual impairment). Initial work examining semantic memory (remembering words) found better performance in the congenitally blind than in other participants (Amedi et al., 2003). Moreover, activity in the “visual” cortex of the occipital lobe correlated with the number of words correctly remembered in the congenitally blind but not in the other participants. This neural plasticity (recruiting the normally visual cortex for another task) allowed a greater number of items to be remembered; but was the quality of the memory representation better as well? We later assessed this with a memory task that is sensitive to the fidelity of memory by distinguishing real and false memories. The prior finding by Amedi et al. (2003) was replicated (the congenitally blind remembered more words than the other participants), but we additionally found that those without visual experience had far lower rates of false memory than other participants (Pasqualotto, Lam and Proulx, 2013). This and other work examining short-term memory suggests that the advantage in congenitally blind participants arises from improved stimulus encoding, perhaps resulting from cross-modal recruitment of normally visual areas of the brain (Rokem and Ahissar, 2009).
A critical period for vision is a window of time in which there must be visual input to develop neural connections and the ability to see (Daw, 1998). This was first proposed by Hubel and Wiesel (Hubel and Wiesel, 1964; Wiesel and Hubel, 1965a, 1965b), and their findings relating to binocular deprivation will be discussed given their relevance to human blindness. They sutured the eyelids of kittens after birth for three months. Upon sight restoration, the kittens still appeared blind, bumping into objects and losing sight of passing objects. This was also supported by neurological evidence: the lateral geniculate nucleus (LGN), the main connection between the optic nerve and the primary visual cortex (V1), was reduced in size by 40%, and many cells in V1 did not respond to visual stimulation (Daw et al., 1992; Hubel, Wiesel and LeVay, 1977). It was therefore concluded that there is a critical period for vision in cats between 3 weeks and 3 months of age, and that a lack of visual stimulation during this time means that sight cannot develop. A critical period for vision has also been supported by evidence in macaque monkeys and rodents (Fagiolini et al., 1994; Hubel et al., 1977), suggesting that different species have their own critical periods for visual development. For ethical reasons, it is of course difficult to test critical periods in humans. Despite this, it has been proposed that there is a critical period between birth and 2 years, when the LGN reaches its adult size (Hickey, 1977), or up to 6 years of age, when the cortical architecture becomes refined (Lewis and Maurer, 2005). Nevertheless, it is vital to establish how necessary critical periods are for visual development, since many congenitally blind individuals are not treated due to the belief that, having missed a critical period, they would not develop sight upon sight restoration (Kalia et al., 2017).
A unique opportunity to study critical periods in humans has emerged in the last decade from Project Prakash in India. India has the world’s largest population of blind children, with 90% unable to obtain an education and fewer than 50% surviving to adulthood, yet nearly 40% of these children can be treated, for example because their blindness is from cataracts (Kalia et al., 2017). Project Prakash provides free screening and treatment for children in India; as of 2014, 42,000 children had been screened and approximately 2,000 treated (www.projectprakash.org). Alongside providing sight and improving the quality of life of thousands of children, testing these children after sight restoration provides information about visual development in humans.
Kalia et al. (2014) tested children after sight restoration following early-onset (blind before 1 year of age) and extended (8–17 years) blindness. Children were tested on contrast sensitivity, the ability to perceive changes in luminance between regions that are not separated by a definitive border. Contrast sensitivity improved within the first few weeks after surgery and continued to improve for two years. This development was independent of the age at sight onset, showing that the duration of blindness did not influence contrast sensitivity development. Interestingly, contrast sensitivity no longer develops in typically developing individuals of the same age, and in two individuals the rate of development even exceeded that of infants (Atkinson, Braddick and Moar, 1977; Banks and Salapatek, 1978). This shows, first, that the visual system can retain considerable plasticity even when blindness extends beyond the critical period. Furthermore, it provides evidence that a critical period may not be as critical for visual development as previously thought, because sight restoration after early-onset and extended blindness still enabled sight to develop. This challenges the initial work by Hubel and Wiesel (1964) and suggests we should remain optimistic about sight restoration after childhood.
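Contrast sensitivity in such psychophysical work is conventionally defined over Michelson contrast; the sketch below shows this standard textbook definition (not a detail reported by Kalia et al.) and how a detection threshold converts into a sensitivity score, using illustrative luminance values.

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a grating: (Lmax - Lmin) / (Lmax + Lmin),
    ranging from 0 (uniform field) to 1 (maximal contrast)."""
    return (l_max - l_min) / (l_max + l_min)

# Illustrative luminances in cd/m^2: a faint grating versus a vivid one.
faint = michelson_contrast(105.0, 95.0)
vivid = michelson_contrast(190.0, 10.0)
print(faint, vivid)  # 0.05 0.9

# Contrast sensitivity is the reciprocal of the lowest detectable
# (threshold) contrast: detecting fainter gratings means a higher score.
print(round(1 / faint, 1))  # 20.0
```

A participant whose threshold falls from 0.9 to 0.05 over the months after surgery thus improves in sensitivity from about 1.1 to 20, which is the kind of change tracked longitudinally in the sight restoration studies.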
Further evidence that the critical period is not so critical is provided by findings on object recognition and face categorisation with sight-restored children. Held et al. (2011) presented children with a Lego-like object, which the children studied by touch alone (while blindfolded). The object was then presented alongside a distractor object, and children were asked to identify the original object by sight. Immediately after surgery, children were 58% accurate, but this rose to 80% after one week (Held et al., 2011). In a face categorisation task (Gandhi et al., 2017), children were shown images ranging from faces to randomly selected non-face images, with three intermediate stages of increasingly face-like patterns in between, and were asked whether each image was a face or not. Immediately after surgery, children performed poorly (less than 50% accuracy), but this rose to approximately 90% after six months. Alongside this improvement, children’s false identification of non-faces dropped, too. This further illustrates the rapid improvement in visual development after early-onset and extended blindness, challenging the notion of an essential critical period early in life.
While evidence from sight restoration studies challenges the idea that a critical period early in life is crucial for vision to develop later, it is possible that this critical period still occurred but was delayed until sight restoration. For example, it has been found in rats and kittens that the critical period is mediated by the neurotransmitter GABA. Dark-rearing reduces GABA function, which delays the onset of the critical period; later exposure to light (i.e. sight restoration) could therefore initiate a critical period later in life, so the children may not have missed a critical period so much as had it delayed (Hensch, 2005a, 2005b; Mower, 1991). However, research suggests that the critical period is only delayed if there is no exposure to light at all. Since light can still enter the eye even with dense cataracts, the sight-restored children in the studies discussed were not raised in total darkness, and thus a delayed critical period may not apply. For example, exposure to light for just six hours prevented the delayed onset of the critical period in kittens, as the GABA pathway was stimulated by even this brief exposure (Mower, Christen and Caplan, 1983). However, that exposure was extremely intense (from a flashlight), whereas Project Prakash children would only see dim light through their cataracts, which may not be intense enough to trigger the critical period. Since it is unknown how much light the children could detect, it remains an open question whether a critical period for vision was delayed or missed. Nevertheless, the rapid improvements made by the sight-restored children, despite severe visual impairment for the first 8–17 years of life, suggest we should be optimistic about vision restoration later in life.
In their face categorisation task with sight restoration children, Gandhi et al. (2017) found that the improvement in face categorisation was independent of visual acuity development, because face categorisation improved faster than visual acuity. They also tested age-matched controls wearing blur goggles that reduced their acuity to the level of the post-operative children (Gandhi et al., 2017). These controls were still highly accurate at the face categorisation task, suggesting that face categorisation does not depend on visual acuity. These results suggest there may be multiple critical periods for different visual functions (here, visual acuity and face categorisation), challenging the idea of the critical period as a unitary construct. This is supported by animal studies: Harwerth et al. (1986) found different critical periods for rods and cones, monocular spatial vision and binocular vision in macaque monkeys, and in ferrets the critical period for orientation selectivity ends earlier than the critical period for ocular dominance (Chapman and Stryker, 1993).
In addition to providing evidence for two separate visual functions with different critical periods, the data from sight restoration studies suggest that face categorisation is more resilient to visual deprivation than visual acuity, given its faster development (Gandhi et al., 2017). It remains an open question, however, what makes a visual function susceptible or resilient to early deprivation. It has been suggested that earlier-manifesting visual functions are more vulnerable to visual deprivation (Sengpiel, 2007). The evidence from Gandhi et al. (2017) challenges this notion: face categorisation is evident in neonates (Fantz, 1965; Mondloch et al., 1999) while neonatal visual acuity is still poor (Kellman and Arterberry, 2000), suggesting face categorisation is an earlier-manifesting visual function than acuity. Future research should therefore investigate what makes a visual function susceptible or resilient to early deprivation, in order to better understand visual development following sight restoration. Sight restoration studies thus provide evidence of multiple critical periods within visual development, with varying degrees of resilience to visual deprivation. Moreover, the blind brain retains considerable plasticity, which may explain the rapid improvement in contrast sensitivity, face categorisation and object recognition after sight restoration.
The rapid improvement in object recognition and face categorisation in studies of sight restoration may be explained at a neural level by the metamodal hypothesis. This suggests that the organisation of specialised cortical regions in the human brain is independent of the sensory input (Pascual-Leone and Hamilton, 2001). For example, the “visual” cortex can also receive auditory and tactile stimuli, but is referred to as “visual” because it prefers visual input: the spatial functions inherent to the striate cortex are best accomplished using visual information (Proulx, Brown et al., 2014). Crucially, visual input is not the only sensory input the “visual” cortex can process (Liang et al., 2013; Liang, van Leeuwen and Proulx, 2008): studies have demonstrated that other sensory information, such as sound, touch and even pain, evokes activity in the occipital cortex.
In this view, a congenitally blind individual may already have representations of objects or faces in the visual cortex, built up from tactile or auditory input (Pietrini et al., 2004; Ricciardi et al., 2009). Upon sight restoration, the new visual input may be re-routed to these existing cortical regions for representing objects and faces, as discussed in more detail in the following section. This enables rapid improvement in tasks such as object recognition, since a new cortical representation for objects need not be created. Evidence supporting the metamodal hypothesis comes from visually impaired as well as sighted individuals.
First, functional brain imaging studies show activation of the visual cortex in early blind subjects during braille reading (Burton, Sinclair and Agato, 2012; Burton et al., 2002; Sadato et al., 1996) and other forms of tactile stimulation, such as a one-back vibrotactile matching task (Burton, Sinclair and McLaren, 2004). To assess a causal link between visual cortex activation and task performance, Cohen et al. (1997) used repetitive transcranial magnetic stimulation (rTMS) to disrupt the primary visual cortex while participants were reading braille. Braille reading errors increased when rTMS bursts were applied to the visual cortex, whereas stimulation of a control area (somatosensory cortex) produced no such effects. In contrast, when sighted participants read embossed Roman letters, TMS to the visual cortex had no effect on tactile performance, although similar stimulation is known to disrupt their visual performance. This suggests that the visual cortex is necessary for braille reading even in the absence of visual input. A limitation of this study, however, is that the TMS was applied during task performance, so other effects, such as changes in attention, were not controlled for. Addressing this limitation, Kupers et al. (2007) tested task performance 15 minutes after rTMS, when its effects could still be assessed outside the stimulation period. Participants read the same braille word list three consecutive times in a repetition priming task. Due to repetition priming, subjects made significantly fewer mistakes and read faster in the second and third presentations of the same word list. After rTMS to the visual cortex, however, this repetition priming effect was diminished and participants made significantly more reading errors.
These effects were not seen after rTMS to a control area (the somatosensory cortex), in line with Cohen et al. (1997). These studies suggest that despite visual deprivation, the visual cortex retains a functional role in blind individuals.
The visual system is classically divided into a dorsal and a ventral stream, involved in motion processing and object recognition respectively (Mishkin and Ungerleider, 1982). In further support of the metamodal hypothesis, there is evidence that these streams are preserved in the absence of vision, suggesting that the so-called “visual” cortex also processes information carried by non-visual sensory modalities (Kupers and Ptito, 2014; Proulx, Ptito and Amedi, 2014). For example, the lateral occipital tactile vision area (LOtv) is an object-selective area in the ventral visual pathway (Lacey et al., 2009), yet evidence suggests it is not specific to visual input. In both blind and sighted individuals, LOtv responds to soundscapes of specific objects created by “the vOICe”, a visual-to-auditory sensory substitution device that translates visual shape information into an auditory stream (Amedi et al., 2007), showing that auditory stimuli activate the ventral stream. Evidence suggests the same holds when objects are perceived from tactile stimuli. Ptito et al. (2012) used a visual-to-tactile sensory substitution device that translates visual images into electro-tactile stimulation transmitted to the tongue via a 12 × 12 electrode array. In a shape recognition study, fMRI data showed that both blind subjects and blindfolded sighted controls activated LOtv. This supports the notion that LOtv, as part of the ventral stream, subserves an abstract or supramodal representation of shape that is preserved in congenitally blind individuals.
Moreover, in sighted individuals, LOtv responds selectively to objects presented not only visually but also by touch (Amedi et al., 2001; Amedi et al., 2007; Pietrini et al., 2004). This further suggests that the ventral stream is genuinely modality independent, and that its supramodal nature is not simply the result of brain reorganisation in congenital blindness.
In specific relation to a modality-independent representation of faces, the fusiform face area (FFA) is an area of the ventral stream, receiving its visual input via V1 (which is itself fed by the lateral geniculate nucleus, LGN), and is associated with face processing in sighted individuals (Haxby et al., 1999). There is evidence that congenitally blind individuals show more activation in the FFA when hearing voices, compared to late blind and sighted individuals (Gougoux et al., 2009). This suggests that the FFA, as part of the “visual” cortex, is also responsive to relevant auditory stimuli associated with a face (voices), and that it subserves an abstract representation of faces regardless of the sensory modality (Pietrini et al., 2004).
Given the evidence that the ventral stream, in particular LOtv for object recognition and the FFA for face recognition, is preserved in congenitally blind individuals, the visually impaired may have representations of faces and objects built from their auditory and haptic experience of the world. How, then, do these metamodal representations enable such rapid improvement in object recognition and face categorisation after sight restoration?
The literature reviewed above suggests that the visual cortex can be activated by other sensory modalities and is functionally active in congenitally blind individuals who lack visual experience. Two competing hypotheses have been put forward to explain this cross-modal plasticity in the blind brain. According to the cortical reorganisation hypothesis, cross-modal brain responses are mediated by the formation of new pathways in the sensory-deprived brain; animal studies suggest that when the brain is deprived of visual input at an early age, tactile and other non-visual information is re-routed to the visual cortex (Chabot et al., 2008; Karlen, Kahn and Krubitzer, 2006, 2009; Piché et al., 2007). According to the unmasking hypothesis, by contrast, loss of sensory input induces the unmasking and strengthening of existing neuronal connections. A key piece of evidence for distinguishing between these hypotheses is whether cross-modal plasticity occurs slowly, consistent with cortical reorganisation through novel neuronal pathways, or quickly, consistent with the unmasking of existing connections. The rapid improvement in visual tasks seen in the sight restoration studies favours the unmasking hypothesis, since this time frame is too short for new neuronal connections to form. Is this sort of plasticity restricted to those with sensory impairments? It is not: other research has found that short-term sensory deprivation and training can also reveal cross-modal plasticity in sighted adults. Merabet et al. (2008) blindfolded sighted individuals for five days while they learnt braille. After this sudden restriction of vision, the visual cortex became activated by tactile information within two days, an effect that reversed as soon as the blindfold was removed.
This provides evidence for profound and rapidly reversible neuroplastic changes, which the authors attribute to the sudden unmasking of existing connections and a shift in connectivity. Perhaps sudden visual experience rapidly unmasks connections, enabling visual input to access existing object or face representations and so producing the rapid improvement in face categorisation and object recognition seen in the sight restoration children. This suggestion is further supported by evidence that the maturation of a new neuron takes four to six weeks (Brady et al., 2011). The children’s improvement within a week therefore cannot result from new neural connections between visual input and the visual cortex; it is more likely to rely on the existing neural architecture representing faces (e.g. the FFA) and objects (e.g. LOtv), to which visual input can be directed via the unmasking of existing connections.
This suggests how visual development may proceed rapidly despite a critical period having been missed through extended blindness. However, the evidence for the unmasking of existing connections (Merabet et al., 2008) comes from sighted individuals, so its generalisability to sight restoration children may be limited, given potential differences in brain structure between sighted and blind individuals: research has found structural differences in visual areas, as well as in the connections between these and tactile areas, in blind compared to sighted individuals (Büchel, 1998; Ptito et al., 2008). However, research in visually impaired adults who learned to interpret auditory displays of images via sensory substitution showed the same cross-modal plasticity over an even briefer period of only two hours of training (Striem-Amit et al., 2012). This finding thus generalises across both sighted and visually impaired people, and suggests that the cortex is highly plastic and able to re-route information presented to the other senses, or via sight restoration, in a matter of hours through existing multisensory neural connections (Proulx, Ptito and Amedi, 2014).
This potential mapping of new sensory input on to existing representations recalls Molyneux’s question, a nearly 400-year-old philosophical puzzle asking whether someone born blind could distinguish between a sphere and a cube by sight alone upon sight restoration (Morgan, 1977). John Locke and other empiricists answered “no”: for them there is no innate conception of an object independent of sensory experience, so the link between a shape known by touch and the same shape seen would have to be learned. Sight restoration projects such as research with sensory substitution and Project Prakash offer unique opportunities to address this question empirically. Given the initial failure to accurately identify objects (Held et al., 2011), the answer might indeed be “no”. However, since there are no data charting performance within the first week, success at this task may have emerged much earlier than one week, possibly within the first hours or days. Given that vision is a completely new experience for these individuals, it is perhaps unrealistic to expect their neural circuits to process visual information immediately upon sight restoration; this ability may nevertheless emerge within a short period of time, especially given that unmasking of existing connections has been observed in just two days (Merabet et al., 2008), and some sensory substitution training can produce successful performance and neural plasticity in only two hours (Striem-Amit et al., 2012). Indeed, naïve participants can even perform well on an optician’s visual acuity test without any training at all (Haigh et al., 2013). Given the possibility of rapid re-routing of information between visual input and existing representations, Molyneux’s question might more realistically be answered with a “yes” if a few hours were allowed before testing.
“We see with the brain, not the eyes” (Bach-y-Rita, Tyler and Kaczmarek, 2003). The most crucial message of this review is that “seeing” the world may be different for those with visual impairments than for those with vision, but that their perception is just as rich and varied, thanks to the compensatory effects of practice in experiencing the world through the other senses. This has important consequences for approaches to accessibility. The visually impaired are, of course, still faced with the challenges of a world largely designed by the sighted for the sighted, and so some way of accessing visual information is often necessary. Most famously, technologies such as the white cane and braille render information normally seen as something that can be felt. New approaches take this further by transforming images into a format that other senses can process, known as sensory substitution (Bach-y-Rita et al., 2003), a topic discussed at greater length in another contribution to this volume. The key to sensory substitution is that all perception ultimately takes place in the brain; thus turning images into sounds (Brown and Proulx, 2013, 2016; Haigh et al., 2013; Proulx et al., 2016) or into something that can be touched (Bach-y-Rita et al., 1969; Chebat et al., 2007; Chebat et al., 2011) has demonstrated the fascinating potential of the brain to take such sensory input and process it within a metamodal framework (Proulx et al., 2014). There is still much to be learned about the “blind brain”, with great potential both for a basic understanding of how all brains work and for developing new assistive technology that best taps into that potential.