Multi-task performance in processing four-choice spatial stimulus-response (S-R) mappings: implications for multimodal human-machine interface design

The effect of four different spatial compatibility relationships between stimuli and responses on multi-task processing: implications for multimodal human-machine interface design

Student thesis: Doctoral Thesis


Author(s)

  • Ngai Hung TSANG

Detail(s)

Award date: 3 Oct 2014

Abstract

The spatial stimulus-response compatibility (SRC) effect refers to the robust finding that human performance is better for some spatial arrangements of controls and displays than for others. The effect is usually most marked when components of the response panel physically correspond in some obvious way with those of the stimulus panel. SRC effects have far-reaching implications for optimizing human-machine interface design, yet previous studies of display-control compatibility and response performance have mostly been limited to a single-task paradigm. As human-machine interfaces grow more complex, operators in a control room must handle an increasing number and variety of stimulus modalities and control devices concurrently. The study of resource allocation and capacity limitations in multimodal information processing in the context of spatial compatibility has therefore become important for enhancing both human performance and overall system performance.

The first experiment examined the effect of spatial compatibility on dual-task performance for various display-control configurations using a tracking task and a discrete four-choice response task. Different levels of compatibility between the stimuli and responses of the discrete response task led to different degrees of interference with the tracking task: the more incompatible the stimulus-response mapping, the more severe the interference. However, performance degradation was observed for both tasks, probably because of competition for the visual and spatial resources required for simultaneous task operation within the visual modality and for bimanual responses. The dual tasks were not closely spaced and both required focal vision for processing, forcing participants to scan back and forth between the two visual tasks; this may explain part of the delay found in dual-task processing. No right-left prevalence effect was observed for the spatial compatibility task, implying that unimanual two-finger responses may not provide the right conditions for a significant effect in the horizontal right-left dimension, as may be found when both hands are used for responses.

The second experiment explored the feasibility of superimposing the two independent visual tasks of Experiment 1 on a single display. Here, the task stimuli were placed in close proximity so that focal and ambient vision could be used concurrently, minimizing the resource competition caused by excessive demand on the same (focal) visual channel. Although performance on both the tracking and spatial response tasks was impaired, the magnitude of impairment was not as great as expected from Experiment 1. This implies that the focal and ambient vision required for the tracking and spatial tasks, respectively, may draw, at least in part, on separate resources. Participants appeared to successfully use focal vision for tracking and ambient vision for identifying signal lights concurrently, reducing the expected competition for visual resources.

The third experiment investigated the interaction between, and the performance of, a tracking task and an auditory spatial compatibility task involving concurrent processing of visual and auditory inputs. Predictions concerning the performance of cross-modality (auditory-visual) versus intra-modality (visual-visual) configurations are not always clear-cut. Here, the cross-modality configuration was superior to the intra-modality configuration only when visual scanning was necessary between the intra-modal dual tasks (Experiment 1). When the dual tasks were spaced closely enough for focal and ambient vision to be used simultaneously (Experiment 2), the intra-modality configuration yielded slightly better dual-task performance than the cross-modality configuration.

The last experiment used a multi-task paradigm involving dual hand and foot tracking and a discrete choice response task to study the effect of spatial compatibility for various display-control configurations on human performance. Delays in multi-task processing were observed when more than one task demanded the same pool of processing resources. Cross-modal time-sharing was found to be superior to intra-modal time-sharing under most dual-task and multi-task circumstances because different perceptual channels are used for task processing. However, such cross-modal benefit has seldom been studied within a task, that is, when more than one modality is used for stimulus presentation within a single task. Here, compared with visual-visual (intra-modality) signal presentation, auditory-visual (cross-modality) signal presentation resulted in significantly higher hand tracking error, response time, and response error. This implies that mixed-modality stimulus presentation within a task is likely to impair multi-task time-sharing, probably because of response conflict and shifting between the visual and auditory modalities across trials.

The deliverables of this work will help industrial designers and ergonomists develop effective and intuitive multimodal interfaces that improve multi-task performance in control rooms. They are helpful for improving efficiency and overall system performance in human-machine systems – particularly in emergency situations.

Research areas

  • Human-machine systems, Stimulus generalization, Design