
Chapter I: The space around the body: Peripersonal space

1.3 Peripersonal space in the human brain

1.3.2 Peripersonal space in healthy individuals

Highly convergent evidence in favour of distinct multisensory representations of space comes from studies of the interference that visual or auditory information can exert over touch perception in neurologically unimpaired people.

One of the best-known paradigms used to investigate this issue is the cross-modal congruency task (Spence et al., 1998; Spence et al., 2004a; Spence and Driver, 2004; Shore et al., 2006; Macaluso and Maravita, 2010; Costantini et al., 2017). In a typical cross-modal congruency study, participants hold two foam cubes, one in each hand, and receive a vibrotactile stimulus at either the thumb or the index finger of one hand (see Figure 1.7 for a schematic illustration).

The target tactile stimulation is presented together with a visual stimulus (distractor) that appears, independently and on an unpredictable trial-by-trial basis, at one of the four possible stimulus locations. Thus, on each trial the visual distractor can be close either to the tactually stimulated hand or to the other hand. The visual distractor can be “congruent” (same) or “incongruent” (different) in elevation with respect to the tactile target. Participants are required to make speeded elevation discrimination (up/down) responses by pressing a foot pedal, reporting whether the tactile target is presented at the index finger (up) or thumb (down) of either hand, while trying to ignore the visual distractors presented at approximately the same time.

Figure 1.7 The visuo-tactile crossmodal congruency task. Participants hold a ‘stimulus cube’ containing vibrotactile stimulators (open triangles) and visual distractor LEDs (open circles). Participants look at a central fixation point (filled circle) situated midway between the two hands. White noise presented over headphones masks the sound of the vibrotactile stimulators. The inset shows a magnified view of the participant’s left hand holding one of the stimulus cubes. Adapted from Holmes and Spence (2004).

The overall effect is that participants are normally significantly worse (both slower and less accurate) at discriminating touches when visual distractors are presented at an incongruent elevation (i.e., at a different elevation from the vibrotactile stimulus) than at a congruent elevation (i.e., at the same elevation as the vibrotactile stimulus), regardless of whether distractor and target are presented at the same hand or at different hands. The difference in performance between incongruent and congruent trials, known as the Cross-modal Congruency Effect (CCE), is thus a measure of the amount of cross-modal visuo-tactile interaction (Spence et al., 1998; Spence et al., 2004a). The effect is more pronounced when the vibrotactile target and the visual distractor are presented from approximately the same location (e.g., from the same foam block) than when they come from different locations (i.e., from two different foam blocks, one held in each hand).

The modulation of touch by visual distractors in this paradigm is resistant to practice and does not seem to reflect response competition. Rather, the CCE appears to be a phenomenon reflecting the integration of visual and tactile information. More relevant here is the different influence exerted by visual information depending on whether the distractor is presented near to, as compared to far from, the tactile target. In analogy with the cross-modal extinction studies and the neurophysiological properties of visuo-tactile neurons, Spence and colleagues (2004b) showed that the CCE was markedly stronger when the visual information occurred close to the tactually stimulated body part rather than in far space.
To demonstrate this specifically, the authors measured the cross-modal congruency effect while displacing one hand to a position far from the visual distractors, thus systematically varying the spatial proximity between target and distractor. They found that the interference exerted by the visual distractors over the tactile modality depended upon the spatial distance between them (Spence et al., 2004b). Additionally, when the posture of the participants was manipulated, this cross-modal effect changed accordingly: for instance, in a crossed-hands condition, with the right hand placed in the left hemispace, a visual distractor in the left space affected tactile discrimination at the right hand. Such a modulation confirms that the CCE arises in a reference system that is anchored to the stimulated body part (e.g., Macaluso et al., 2005; Spence et al., 2004b; Spence et al., 2001). Critically, similar results were obtained when the visual stimulus was presented close to a fake, realistic hand placed on the table in front of the participants, but only when the fake hand was placed in an anatomically plausible posture (Pavani et al., 2000; see also Farnè et al., 2000 for a similar result in extinction patients).
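As a concrete illustration, the CCE is a simple difference score: mean performance (e.g., reaction time) on incongruent trials minus mean performance on congruent trials. The sketch below, using made-up reaction times rather than data from any of the cited studies, shows how the measure would be computed separately for distractors near versus far from the stimulated hand:

```python
# Illustrative sketch only: the reaction times below are hypothetical,
# not taken from Spence et al. or any other cited study.

def mean(values):
    return sum(values) / len(values)

def cce(rt_incongruent, rt_congruent):
    """Cross-modal congruency effect in ms: mean incongruent RT minus
    mean congruent RT. Larger values indicate stronger visuo-tactile
    interference from the distractor."""
    return mean(rt_incongruent) - mean(rt_congruent)

# Hypothetical per-trial RTs (ms), distractor near vs. far from the hand
near_incongruent = [620, 650, 640, 660]
near_congruent   = [540, 550, 545, 555]
far_incongruent  = [580, 590, 585, 595]
far_congruent    = [555, 560, 550, 565]

cce_near = cce(near_incongruent, near_congruent)  # 95.0 ms
cce_far  = cce(far_incongruent, far_congruent)    # 30.0 ms
print(cce_near, cce_far)
```

In this toy example the CCE is larger when the distractor appears near the stimulated hand, which is the pattern reported by Spence and colleagues (2004b); the specific magnitudes here are arbitrary.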

Using the same approach, the effect of tool use in near and far space has been investigated thoroughly in healthy individuals through the cross-modal congruency task (Maravita et al., 2002; Holmes et al., 2004, 2007; however, see also Holmes, 2012). Without going into too much detail, such studies have shown that active tool use increases the salience or effectiveness of visual stimuli presented at the tip of the hand-held tool.

Tool-use-dependent changes in the representation of the space around the tool have also been documented using the auditory modality (Canzoneri et al., 2013a). Serino and co-workers (Serino et al., 2007) provided behavioural support that audio–tactile integrative mechanisms are also dynamically modulated in PPS and, more importantly, that such plastic changes are associated with expertise for specific tools, first showing that in healthy subjects the auditory PPS can be modulated by tool use. The specificity of tool-induced plasticity has been corroborated by analogous findings reported for use of the computer mouse (Bassolino et al., 2010): merely holding a mouse in the hand habitually used to control it (the right one) extended audio–tactile interactions to the space near the screen.

In the hand not habitually used to control the mouse (the left one), such effects were found only when the mouse was actively used, not when it was merely held passively.

Again suggesting that tool-induced plasticity exhibits a relatively high level of specificity, it has been demonstrated that a tool has to provide a functional benefit to the arm in order to shape PPS. In a recent work, Bourgeois and colleagues (2014) assessed the effect of two functionally different tools on the perception of reachability (i.e., whether an object is within reach), taken as an index of PPS extent. Participants performed a reachability judgement task before and after using either a long tool increasing the arm length by 60 cm (70-cm-long tool + 10-cm handle) or a short tool leaving the arm length unchanged (10 cm long, 10-cm handle). As expected, reachability judgments were selectively modified by long tool use: subjects considered farther locations to be reachable only after using the long tool. Conversely, after using the short tool, which did not substantially alter the reaching space, this effect failed to appear, highlighting the gain in reachability provided by a tool as an essential component of PPS modulation.

Furthermore, mere observation of tool use, while passively holding the same tool, appears to be sufficient to remap PPS (Costantini et al., 2011). Collectively, these recent works seem to challenge the initial idea, originally advocated on the basis of electrophysiological and neuropsychological evidence, that plastic modifications of PPS require the tool to be actively utilized. These apparently contrasting results can be reconciled in light of the hypothesis that functional experience with the tool, rather than its active or passive use per se, ultimately plays the prominent role (Martel et al., 2016; Farnè et al., 2005b, 2005c; see also Serino et al., 2015a for a computational model).