Motor and Cognitive Performance Modification Using Visual-Haptic Interfaces


The development of virtual reality (VR) technology has spawned new concepts of patient interaction and behavioral modification. The extension of techniques developed for virtual surgery training and pilot training provides the basis for retraining patients with neurological deficits resulting from multiple sclerosis, spinal cord injury, and stroke. [1, 2, 3] Moreover, the application of VR can be of substantial benefit in compensating for sensory deficits, particularly in vision and hearing.

VR approaches can be directed toward assisting the performance of motor and sensory tasks; VR also can be used to develop novel modalities of physical therapy to improve unassisted performance. New modalities of diagnosis and treatment of sensorimotor processing deficits and cognitive dysfunction are emerging from the confluence of clinical neurology, basic science advances, and computer science. [4, 5, 6] In this article, the design considerations of these assistive, diagnostic, and therapeutic systems are reviewed. [7, 8]

Central to the ability to modify motor performance in patients with neurologic disorders is the means to apply corrective or cueing forces to the body parts involved in the activity. In patients with cerebellar tremor, for example, as occurs in multiple sclerosis, a movement such as reaching toward and grasping an object becomes extremely difficult, as demonstrated in the image below (panels A-F are stages of the movement in time).

The entire epoch, which lasts approximately 3 seconds, is shown fully graphed in the image below.

As the patient attempts to reach for the target object (ie, the glass), his hand oscillates rather than following a smooth and accurate trajectory. Interestingly, the terminal regions (thumb and fingers) are relatively stable, allowing for reasonably accurate grasping, but the wrist oscillations result in overturning rather than grasping the target object.

The successful trajectory for the patient’s hand can be mapped out in advance once the target is selected. As long as the patient’s wrist and hand remain within limits established by the position of the target, he or she will be able to reach it with stability. The spatial domain of these limits may be termed the force corridor. A device can be envisioned that applies force to counter the patient’s wrist movement should the wrist deviate outside the corridor.
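The force-corridor test described above can be sketched in code. The straight-line cylindrical corridor, its radius, and the variable names below are illustrative assumptions for the sake of the sketch, not the implementation described by the author:

```python
import numpy as np

def corridor_deviation(wrist, start, target, radius=0.05):
    """Distance (m) by which the wrist lies outside the force corridor.

    The corridor is modeled -- an assumption for illustration -- as a
    cylinder of the given radius around the straight start-to-target line.
    Returns 0.0 while the wrist stays inside the corridor.
    """
    axis = target - start
    t = np.clip(np.dot(wrist - start, axis) / np.dot(axis, axis), 0.0, 1.0)
    nearest = start + t * axis            # closest point on the planned path
    dist = np.linalg.norm(wrist - nearest)
    return max(0.0, dist - radius)

start = np.array([0.0, 0.0, 0.0])
target = np.array([0.3, 0.0, 0.1])        # e.g., the position of the glass
wrist = np.array([0.15, 0.08, 0.05])      # tremulous position mid-reach
print(corridor_deviation(wrist, start, target))   # positive: outside corridor
```

A counterforce device would apply force only while this deviation is positive, leaving movement inside the corridor unopposed.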

Thus, the 2 salient functions of the visual-haptic interface are as follows:

Establishing the force corridor on the basis of the position of the patient’s body part and the target

Providing the counterforce (ie, haptic interaction) to constrain the body part to the force corridor

The spatial domain (ie, the region of body part positioning needed to achieve the movement) is computed from the initial position of the patient’s body part (in this case, the wrist) and the position of the target. Position data are available from the videospace of the patient and the target.

A rough corridor is delineated below.

The 3 spatial regions of interest (ROIs), which are overlaid in blue, are the lateral boundaries of the corridor. Encroachment by the wrist and fingers into the ROIs represents deviation from the desired trajectory of the wrist and hand. Degrees of encroachment for each of the 3 ROIs are plotted in graphs below each panel. The corresponding fast Fourier transforms of the encroachment functions are plotted to the left of the panel; the lowest graph shows the coherence of the upper 3 (for quantitative methods, see Steffin, 1997 [9] and Steffin, 1999 [10] ). These encroachment levels can be used to control a haptic device that provides counterforce for correction of aberrant wrist movements. For simplicity, only 3 ROIs are shown as limit points on the force corridor; in practice, at least 20 ROIs would be necessary for accuracy.
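How encroachment levels might be computed from the videospace is sketched below. The binary-mask representation and the fractional-overlap measure are assumptions made for illustration, not the quantitative methods of the cited papers:

```python
import numpy as np

def encroachment_levels(hand_mask, roi_masks):
    """Fraction of each ROI's pixels occupied by the hand silhouette.

    hand_mask : 2-D boolean array (hand segmented from the videospace)
    roi_masks : list of 2-D boolean arrays, one per corridor-boundary ROI
    Returns one scalar in [0, 1] per ROI -- the encroachment level.
    """
    return [float((hand_mask & roi).sum()) / roi.sum() for roi in roi_masks]

# Toy 8x8 frame: the hand occupies the left half; one ROI straddles the midline.
hand = np.zeros((8, 8), dtype=bool); hand[:, :4] = True
roi = np.zeros((8, 8), dtype=bool); roi[:, 3:5] = True
print(encroachment_levels(hand, [roi]))   # hand covers half the ROI: [0.5]
```

Each such scalar, sampled frame by frame, yields the encroachment functions whose spectra and coherence are plotted in the figure.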

The counterforce presented to a body part (in this case, the wrist) at any instant can be represented by a vector whose characteristics must be determined by the constraints of the spatial domain and the conditions for movement stability. The computational system provides a value for each ROI in the force corridor region proportional to the level of encroachment by a body part (eg, wrist, fingers) into the corridor limit zone delineated by that ROI.

The generated values for each ROI can be incorporated into a transfer matrix to determine the counterforce vector components. The encroachment matrix values must be processed to generate the specific force components. To continue the example of the reaching arm, application of force by a transducer at a single point on the upper extremity, such as the wrist, is assumed for simplicity.

Consider a haptic device with 3 degrees of freedom of output; that is, the force takes the form of a vector, F = F[x(D,t),y(D,t),z(D,t)], in which x, y, and z are functions of the spatial domain matrix, D, and time, t. By formulating the force transfer characteristic in this way, the haptic generator can produce a stabilizing, rather than destabilizing, corrective output to the patient. Bioengineering concepts and principles involved in the construction of such a force vector from spatial data have been described. Implementation of the computational subroutines is proceeding in the author’s laboratory. [9, 10]
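One way such a transfer characteristic could be realized is sketched below. The specific matrix values and the added velocity-damping term are illustrative assumptions, included only to show how a stabilizing rather than destabilizing output can be formulated:

```python
import numpy as np

def counterforce(encroach, K, wrist_vel, damping=0.5):
    """Illustrative counterforce: F = -K*e - b*v.

    encroach  : per-ROI encroachment levels (the spatial-domain data D)
    K         : 3 x n_ROI transfer matrix mapping encroachment to force axes
    wrist_vel : wrist velocity; the damping term (an assumption here) opposes
                rapid oscillation so the output stabilizes the movement.
    """
    return -K @ np.asarray(encroach) - damping * wrist_vel

K = np.array([[1.0, 0.0, -1.0],    # left/right boundary ROIs push along x
              [0.0, 1.0,  0.0],    # upper boundary ROI pushes along y
              [0.0, 0.0,  0.0]])
F = counterforce([0.2, 0.0, 0.05], K, wrist_vel=np.zeros(3))
print(F)   # net force pushes the wrist back toward the corridor along -x
```

With only opposing boundary ROIs active, the two x-axis contributions partially cancel, so the corrective push is proportional to the net deviation.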

The application of appropriate counterforce can appreciably decrease tremor and inaccuracy of movement in a patient with cerebellar deficit, as indicated in the images below, the latter showing the complete epoch.

In this case, a stabilizing force was applied as a preliminary test of the idea. Note the markedly decreased perturbation in trajectory, demonstrated by encroachment curves much flatter than those of the unassisted movement shown earlier.

Application of such a counterforce can be achieved by tethering a haptic device with 3 degrees of freedom directly to the wrist. [9, 10] This general approach also appears to be effective in improving movement accuracy in certain cases of spasticity.

This visual-to-haptic transfer approach has several advantages. Because the functional spatial domain is constructed from the patient’s videospace, the acquisition technology for the spatial domain data is primarily a function of software engineering. This reduces overall hardware complexity, avoiding the need to integrate electromagnetic or multiple infrared detectors into the patient’s environment to achieve this result. Likewise, the transduction to force output, at least for the paradigmatic case outlined here, involves relatively simple interaction between the computer and the force generator. The goal of such an approach is construction of a practical instrument that would be available in a typical patient environment. By extension, finer movements (eg, of the fingers) ultimately may be incorporated into the approach using this and other stimulation modalities.

For severely motor-impaired patients (eg, quadriplegics), the extremity videospace monitor approach will fail because the patient is incapable of the extremity volitional movement necessary to create a haptic input signal. As an alternative, video processing of the patient’s facial expression can be used to perform this task. [11] This method is potentially simpler and more reliable to implement than other current approaches, such as EEG driving input, especially because no electrodes need be applied to the patient’s head, and voice recognition may require excessive processing time. The only requirements for facial control are a video camera mounted to view the patient’s face and a self-contained, freestanding single-board video digital signal processor running algorithms under development in this laboratory.

Such techniques have been applied to detection of behavioral states, particularly drowsiness [12] and loss of consciousness (in addition to seizure detection [9, 10] ). For example, such a paradigm can detect sudden loss of consciousness, as in pilots undergoing high acceleration. [13] By using these techniques, scalar processing of converted video facial input can be used to develop robotic assistance regimens. Work is proceeding in the author’s laboratory to develop algorithms for realization of this goal.

The basic approach to facial monitoring is demonstrated below.

The eye region is analyzed in real time, including the supraorbital region and the palpebral fissure. The graphs represent scalar values corresponding to the positions of the structures in the corresponding videospace. Spatial and time resolution are good, as is evident in the image above.
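A minimal sketch of this video-to-scalar conversion follows. Mean pixel intensity within a fixed facial ROI is one plausible scalar, assumed here purely for illustration:

```python
import numpy as np

def roi_scalars(frames, roi):
    """Convert video of a facial region to a scalar time series.

    frames : (T, H, W) grayscale video array
    roi    : (row_slice, col_slice) covering eg the palpebral fissure
    The scalar used -- mean pixel intensity in the ROI -- is an assumed,
    illustrative choice; darkening of the fissure ROI tracks eye closure.
    """
    r, c = roi
    return frames[:, r, c].reshape(len(frames), -1).mean(axis=1)

# Synthetic 3-frame clip in which the 'eye' region darkens over time.
frames = np.full((3, 10, 10), 200.0)
frames[1, 2:4, 3:7] = 100.0
frames[2, 2:4, 3:7] = 50.0
trace = roi_scalars(frames, (slice(2, 4), slice(3, 7)))
print(trace)   # monotonically falling trace: 200, 100, 50
```

One scalar channel per ROI, sampled every frame, yields the graphs described in the text while keeping the computational load low enough for a single-board processor.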

The same approach is demonstrated in the image below for the mouth region.

Oral and chin movements are displayed in separate channels. With mouth opening and closing, spatial and time resolution of the movements are similar to those for the eye region. In this case, the mouth movements occurred on command and are therefore more rapid (square wave) than would occur with physiologic yawning; differentiation between volitional and subcortical processes such as yawning is clear with this method, as is shown below. [12]

With the physiologic yawn, the graphs show much more gradual configurational changes of the mouth, almost sinusoidal rather than rectangular. Preservation of high-frequency response is thus necessary for rapid system discrimination of and response to volitional facial driving responses.

Increased spatial resolution can be achieved by multiple channel sampling of overlapping regions, as shown below.

Here, periods of active oral movement contrast with a period of cessation of mouth movements. Reliability of the data is increased by interchannel correlation, as can be seen by inspection of these traces during the cessation phase. Again, the waveforms demonstrate the feasibility of scalar analysis. To resolve behavioral changes in the patient, the video-to-scalar approach presented here is much more computationally efficient than, for example, convolutional video transform analysis would be.
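Interchannel correlation as a reliability check might be sketched as follows; the Pearson measure and the 0.8 cutoff are assumptions made for the example:

```python
import numpy as np

def channels_agree(traces, threshold=0.8):
    """Mean pairwise Pearson correlation across overlapping ROI channels.

    traces : (n_channels, T) scalar signals from overlapping facial ROIs
    A high mean correlation (the threshold is an assumed cutoff) marks the
    epoch as reliable movement data; independent noise drives it toward 0.
    """
    r = np.corrcoef(traces)
    iu = np.triu_indices_from(r, k=1)      # off-diagonal channel pairs
    return r[iu].mean() >= threshold

# Four overlapping channels viewing the same oral movement, plus sensor noise.
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(3 * t)
rng = np.random.default_rng(0)
traces = np.vstack([clean + 0.05 * rng.standard_normal(200) for _ in range(4)])
print(channels_agree(traces))   # channels track the same movement
```

Because only channel pairs that genuinely view the same movement correlate strongly, this check rejects single-channel artifacts without any image-level processing.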

An example of conscious but quiescent facies, as opposed to volitional activity, involving both mouth and eye movements is demonstrated below.

Eye and mouth movements (2 channels each) are monitored simultaneously. Eye movements are characterized by lower-amplitude, higher-frequency components than mouth movements. As seen here and in images above, mouth movements also show more baseline drift and other low-frequency noise, making interpretation more difficult, although the uncertainty caused by such drift is considerably reduced by the multichannel sampling shown above. However, further improvement in reliability is achieved by high-pass digital filtering, as demonstrated below. In this case, the baseline during movement cessation is nearly flat, leading to less ambiguity and greater reliability in behavioral assessment.
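The high-pass step might look like the minimal first-order filter below; the filter form and the coefficient are illustrative assumptions, not those used in the author's laboratory:

```python
import numpy as np

def highpass(x, alpha=0.9):
    """First-order high-pass filter (simple RC form) to remove baseline drift.

    y[n] = alpha * (y[n-1] + x[n] - x[n-1]).  alpha sets the cutoff;
    0.9 is an assumed, illustrative value.
    """
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

t = np.arange(400)
drift = 0.01 * t                                        # slow baseline wander
burst = np.where((t > 150) & (t < 250), np.sin(0.5 * t), 0.0)  # mouth movement
filtered = highpass(drift + burst)
print(abs(filtered[:100]).max() < 0.1)   # drift-only segment is near-flat
```

The slow ramp is suppressed to a near-flat baseline while the rapid movement burst passes through largely intact, which is exactly the property needed for unambiguous cessation detection.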

By adding an asymmetrical exponential decay to the output of the high-pass filter, as shown below, a time delay can be introduced to assess consistency of the signal change as it may reflect a behaviorally significant event.

When activity ceases, the signal level decays exponentially until it reaches a level that can trigger a response from the system. As soon as activity resumes, the trigger is reset. In this case, correlation among 4 mouth channels determines response triggering.
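The decay-and-trigger logic just described can be sketched as follows; the time constant and threshold are assumed values chosen only to make the example concrete:

```python
import numpy as np

def decay_trigger(activity, tau=20.0, threshold=0.2):
    """Asymmetric envelope: follows activity up instantly, decays slowly.

    When facial activity ceases, the envelope decays exponentially with
    time constant tau (in frames); crossing below threshold fires the
    trigger. Resumed activity resets the envelope at once. Both tau and
    threshold are illustrative assumptions.
    """
    env = np.zeros(len(activity))
    level = 0.0
    fired = []
    for n, a in enumerate(activity):
        level = max(a, level * np.exp(-1.0 / tau))   # instant rise, slow fall
        env[n] = level
        if level < threshold:
            fired.append(n)        # behaviorally significant pause detected
    return env, fired

activity = np.concatenate([np.ones(50), np.zeros(100)])  # movement, then pause
env, fired = decay_trigger(activity)
print(fired[0])   # frame at which the pause first triggers a response
```

The exponential tail is what supplies the time delay: brief dropouts in the signal never reach threshold, so only a sustained cessation fires the trigger.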

Another correlation method involves a similar approach, but with monitoring of 2 mouth and 2 eye channels, as shown below. In the middle of the sweep, both mouth and eye activity cease long enough to produce a combined trigger effect, while at the end of the sweep only the mouth activity ceases long enough for the triggering effect.

These combinations of approaches allow for a wide variety of machine responses to behaviorally significant facial activity. Because the algorithms are efficient and can run on a stand-alone system, preferably a video digital signal processor board, major computer resources are still left free for artificial intelligence routines to effect interpretation of and response to the patient activity indicated by these scalar signals.

Development is continuing to enhance interpretation of these video-derived scalar responses to integrate patient facial activity in machine response paradigms. The potential exists for faster, more efficient response with this technique compared with voice recognition or EEG control of robotics. A combination of all of these signal modalities (eg, video, electrical, verbal) will likely ultimately be used to generate assistive responses for severely disabled patients. Initial indications suggest that machine-level video facial interpretation will play a prominent role in the design of assistive robotics for patients with severe motor impairments. Such a result would indeed represent a cooperative robot, attentive to nonverbal and verbal cues.

Beyond the potential for facial control lies a domain for patients with severe neuromuscular disorders that may impair facial as well as body movement. For these patients, a more fundamental means to achieve a machine haptic interface is direct control by electroencephalography (EEG).

Several computational approaches to EEG analysis and control have been developed. [14, 15] A patient with amyotrophic lateral sclerosis (ALS) was able to use slow cortical potentials to steer a cursor among several choices. [16] However, fine motor control is not as well managed as more limited goal selection. [17] Models of EEG generation, such as thalamocortical generators, have been employed to simulate EEG activity produced by actual subjects in an attempt to improve performance. [18] Vibrational rather than full haptic feedback has been demonstrated to alter mu rhythms to allow enhanced EEG modulation of cursor movement. [19]

Generally, however, these methods are at an early phase of development. Limitations in both accuracy and reliability restrict them to directing activity in highly controlled environments.

Movement disorders resulting in disabling inaccuracies and aberrations involve deficits in one or more of the following systems. (For a more detailed review, as applicable to haptic feedback, see Steffin, 1997 [9] and Steffin, 1999. [10] )

The primary, or direct, system includes predominantly excitatory output from large pyramidal cells projecting directly to the spinal motor neurons. However, corticocortical inhibition plays a significant role in modulating motor behavior at this level, and the projections of excitatory pyramidal cells are plastic and are modulated by function. This is somewhat contrary to what had been suggested by previous conceptions of homuncular anatomy. Plastic effects also, of course, involve connections from supplementary motor and other cortical regions. Impairment in these regions also produces paresis.

Projections, via the corticospinal tract and supplementary cortical areas (probably projecting onto spinal interneurons), and cortical inhibition of reticulospinal and rubrospinal systems, also influence spinal motor neuron set. Gamma efferent projections influence muscle spindle activity and therefore potentiate reflexes and spasticity.

Basal ganglia play an important role in sequencing motor behavior and modulating muscular tone. External stimuli can produce improvement in sequencing and performance and probably account for kinesia paradoxica (ie, temporary return of mobility in a patient with parkinsonism under the influence of an appropriate external periodic keying stimulus) and gait amelioration. [20, 21]

Evidence for neuroplasticity of the motor system suggests that visual-haptic assistance is beneficial in 2 respects. First, such interactive systems can provide assistance in performing tasks otherwise precluded by neurological deficits. These can range from force application to an impaired extremity to electrical stimulation of intact musculature or can involve outright robotic assistance. At present, the first of these alternatives is probably most practical from a resource standpoint. Second, the visual-haptic approach provides for the development of novel modes of physical therapy.

The extent to which repetition of motor tasks with external cueing can enhance performance beyond immediate assistance is unclear, but the evidence regarding neuroplastic enhancement of activity suggests that such approaches may be effective. With the development of practical visual-haptic systems, as has been outlined conceptually, [9, 10] significant advances in neurorehabilitation of motor deficits are likely to evolve from this intervention. A corollary to this approach is the potential application of videospace-force interfacing technique to the realm of functional electrical stimulation.

Such interfacing in effect entails a fusion of robotic principles with a bionic interaction between patient and machine. The visual-haptic systems described here are likely to provide a useful test-bed for the continuing dynamic development of both external (force application) and internal (functional electrical stimulation) methods of improving motor control in patients with neurological deficits.

Theoretically, neuroplasticity can extend into sensorimotor performance and into cognitive realms. Application of virtual reality (VR) techniques can be useful in providing standardization for neuropsychological testing and in developing more encompassing environments for retraining.

Moreover, the immersive environments that can be generated with VR allow development of neuropsychological test tasks that emulate necessary behavioral and cognitive performance requirements in the real world with greater fidelity than currently provided by available instruments. Such approaches should allow a high degree of interexamination standardization.

As a result of these unique capabilities, VR is finding a therapeutic role in several cognitive disorders. At present, the long-term effect of visuomotor interventions on cognitive systems remains, to a great extent, unexplored territory. Some attempts have been made to influence task-related performance, for example, in patients with traumatic brain injury; results, however, remain uncertain.

The exact extent to which the motor component, as distinct from the sensory component, of the VR milieu can alter behavior is in the early stages of investigation. Some interactions will be determined by closing the VR-patient loop. Independent, objective measurements of patient attention are needed to assess the cognitive effects of VR intervention and to provide feedback for modification of stimulation characteristics. Increasing the richness and versatility of stimulation modes and measurement responses will involve interaction of haptic and sensory modalities, hopefully with enhanced patient motivation.

Evaluations of cognitive performance based on overt performance and measurements such as event-related potentials (ERPs) are likely to form the basis for training feedback systems. Assessment of attention and motivation, aided by such measures, will determine at least some of the parameters of the haptic interaction of VR training systems with patients. Following is a survey of some of these cognitive measures, including ERPs and functional MRI (fMRI), and some likely directions their evolution will take in the context of VR interventions for the treatment of cognitive disorders.

VR offers a major advantage in presenting cognitive material in this setting: high levels of immersion are attainable. Although fostering initial acceptance of the head-mounted display and the VR environment may be difficult, in most cases this can be achieved fairly rapidly.

Because environmental features within the VR setting are vivid and entirely controllable by the therapist, and because nonverbal feedback from the patient can be made a central feature of the desired response, VR appears to be capable of eliciting demonstrable improvement in reaction patterns to external stimuli in patients with autism. ERPs show some promise for both autism and learning disabilities as an objective measure of cognitive processing in response to VR stimulus patterns.

The attention-deficit disorders can be difficult to diagnose, and diagnostic modalities may not correspond well to clinical situations. VR appears to have the capability to link well-controlled multimodality stimuli to more objective physiological measurements of attention and discrimination. Electrophysiological and imaging abnormalities have increased the understanding of physiological mechanisms in these disorders. Characteristics of ERPs have, in some studies, shown good correlation with behavioral responses to appropriate medication.

Medication may produce basic differences in brain physiology that are demonstrable with ERP monitoring; with refinement, this should carry over to the detection of such physiological perturbations in more complex, immersive environments. The study of ERPs allows dissection of the attention process, for example, into novel but nonmeaningful stimuli versus novel and meaningful stimuli.

ERPs have been shown to distinguish electrophysiologically between attention-deficit/hyperactivity disorder and combinations of attention-deficit/hyperactivity disorder with learning disabilities. The level of significance of stimuli, particularly if such significance is established by prior events, can be assessed using ERPs. ERPs have been shown to be a valid measure of the ability to discriminate phonemes. Visual-auditory cross-over tasks can produce alterations in ERPs indicative of cross-modality processing.

Mapping of cortical asymmetries involved in tonal versus phonetic processing can be achieved by ERP analysis. These approaches can be correlated with fMRI. Perception of phonemes as native or nonnative to the subject’s language markedly influences ERPs, as does phonologic-semantic inconsistency. Early ERP components (N100) have been shown to display less lateralization in dyslexic children than in nondyslexic children. Subtle ERP differences also arise in autistic patients.

VR simulation of daily activities can be used in the development of teaching environments for cognitive disabilities. Here, too, ERPs appear to be a valid indicator of cognitive deficit. Haptic interventions can be useful in the alleviation of motor dysfunction in some cases. Much work remains to increase the clinical reliability and utility of such approaches in ameliorating cognitive dysfunction. However, VR almost certainly will play a major role in the development of future therapeutic interventions, as indicated by correlating fMRI activation patterns to stimuli presented in a VR environment.

Particularly with cross-correlation among electrophysiological, haptic, fMRI, and novel psychometric measures, the capacity to diagnose and intervene rationally in cognitive disorders is expected to be enhanced greatly. New “virtual world” approaches to therapy and daily living assistance for neurological and cognitive disorders will begin, more routinely, to reach patients on an affordable and manageable basis.

VR as a motor, sensory, cognitive, and measurement link to patients with neurological and cognitive deficits has opened a new vista in potential levels of patient interaction. The groundwork is now in place to integrate the immersive characteristics of VR, including haptic and special sensory modalities, in the construction of novel stimulating environments. Electrophysiological and new psychometric instruments, some based on haptics, are likely to be derived from such approaches as more standardized and accurate evaluation tools are applied for the diagnosis and treatment of neurological and cognitive deficits. Creation of tailored environments for these patients should allow substantial enhancement of functionality and experience in many of these conditions.

Overview

What is motor and cognitive performance modification using visual-haptic interfaces?

How is motor and cognitive performance modified using a visual-haptic interface?

What are the salient functions of the visual-haptic interface?

What is the basis for establishing the spatial domain of the force corridor for a visual-haptic interface?

What is the role of the haptic generator in motor and cognitive performance modification using visual-haptic interfaces?

What is the role of facial expression control input in motor and cognitive performance modification using a visual-haptic interface?

How is facial monitoring performed in motor and cognitive performance modification using a visual-haptic interface?

What is the role of electroencephalographic control input in motor and cognitive performance modification using a visual-haptic interface?

Which deficits in the primary (corticospinal) efferent system result in movement disorders?

Which deficits in efferent projections result in movement disorders?

Which sequencing deficits result in movement disorders?

What is the role of visual-haptic intervention for motor system deficits?

What is the role of visual-haptic intervention in cognitive assessment, modification, and retraining?

What is the role of visual-haptic interventions in the diagnosis of autism?

What is the role of visual-haptic interventions in the diagnosis of attention-deficit disorders and learning disabilities?

What is the role of visual-haptic interventions in the assessment of traumatic brain injury (TBI)?

What is the future of visual-haptic interfaces in the treatment of neurological and cognitive deficits?

Nam HS, Park E, Heo JH. Facilitating Stroke Management using Modern Information Technology. J Stroke. 2013 Sep. 15(3):135-43. [Medline]. [Full Text].

Levac D, Glegg SM, Sveistrup H, Colquhoun H, Miller PA, Finestone H, et al. A knowledge translation intervention to enhance clinical application of a virtual reality system in stroke rehabilitation. BMC Health Serv Res. 2016 Oct 6. 16 (1):557. [Medline]. [Full Text].

Glegg SMN, Holsti L, Stanton S, Hanna S, Velikonja D, Ansley B, et al. Evaluating change in virtual reality adoption for brain injury rehabilitation following knowledge translation. Disabil Rehabil Assist Technol. 2017 Apr. 12 (3):217-226. [Medline].

Koritnik T, Koenig A, Bajd T, Riener R, Munih M. Comparison of visual and haptic feedback during training of lower extremities. Gait Posture. 2010 Oct. 32(4):540-6. [Medline].

Lécuyer A, George L, Marchal M. Toward adaptive VR simulators combining visual, haptic, and brain-computer interfaces. IEEE Comput Graph Appl. 2013 Sep-Oct. 33(5):18-23. [Medline].

Steffin M. Computer-assisted physical therapy of multiple sclerosis and spinal cord injury patients: an application of virtual reality. Morgan KS, Hoffman HM, Stredney D, Weghorst SJ, et al, eds. Medicine Meets Virtual Reality 5. Amsterdam: IOS Press; 1997. 64-72.

Merians AS, Fluet GG, Qiu Q, Lafond I, Adamovich SV. Learning in a virtual environment using haptic systems for movement re-education: can this medium be used for remodeling other behaviors and actions?. J Diabetes Sci Technol. 2011 Mar 1. 5(2):301-8. [Medline]. [Full Text].

Wang RH, Mihailidis A, Dutta T, Fernie GR. Usability testing of multimodal feedback interface and simulated collision-avoidance power wheelchair for long-term-care home residents with cognitive impairments. J Rehabil Res Dev. 2011. 48(7):801-22. [Medline].

Steffin M. Virtual reality therapy of multiple sclerosis and spinal cord injury: design consideration for a haptic-visual interface. Stud Health Technol Inform. 1997. 44:185-208. [Medline].

Steffin M. Quantitative video analysis of complex epileptic seizures during videotelemetry: increasing the reliability of EEG correlations and behavioral autocorrelation. Cyberpsychol Behav. 1999. 2(1):25-33.

Bian ZP, Hou J, Chau LP, Magnenat-Thalmann N. Facial Position and Expression-Based Human-Computer Interface for Persons With Tetraplegia. IEEE J Biomed Health Inform. 2016 May. 20 (3):915-924. [Medline].

Steffin M, Wahl K. Occam’s approach to video critical behavior detection: a practical real time video in-vehicle alertness monitor. Westwood JM, Haluck RS, et al, eds. Medicine Meets Virtual Reality 12 Proceedings. Amsterdam: IOS Press; 2004. 370-5.

Steffin M. Avionics-compatible video facial cognizer for detection of pilot incapacitation. Westwood JM, Haluck RS, et al, eds. Medicine Meets Virtual Reality 14 Proceedings. Amsterdam: IOS Press; 2006. 529-31.

Kim SP, Simeral JD, Hochberg LR, Donoghue JP, Black MJ. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. J Neural Eng. 2008 Dec. 5(4):455-76. [Medline].

Kayagil TA, Bai O, Henriquez CS, Lin P, Furlani SJ, Vorbach S, et al. A binary method for simple and accurate two-dimensional cursor control from EEG with minimal subject training. J Neuroeng Rehabil. 2009 May 6. 6:14. [Medline]. [Full Text].

Iversen I, Ghanayim N, Kubler A, Neumann N, Birbaumer N, Kaiser J. Conditional associative learning examined in a paralyzed patient with amyotrophic lateral sclerosis using brain-computer interface technology. Behav Brain Funct. 2008 Nov 24. 4:53. [Medline]. [Full Text].

Royer AS, He B. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms. J Neural Eng. 2009 Feb. 6(1):016005. [Medline]. [Full Text].

Chen CW, Ju MS, Sun YN, Lin CC. Model analyses of visual biofeedback training for EEG-based brain-computer interface. J Comput Neurosci. 2009 Apr 9. [Medline].

Chatterjee A, Aggarwal V, Ramos A, Acharya S, Thakor NV. A brain-computer interface with vibrotactile biofeedback for haptic information. J Neuroeng Rehabil. 2007 Oct 17. 4:40. [Medline].

Badarny S, Aharon-Peretz J, Susel Z, Habib G, Baram Y. Virtual reality feedback cues for improvement of gait in patients with Parkinson’s disease. Tremor Other Hyperkinet Mov (N Y). 2014. 4:225. [Medline]. [Full Text].

Dockx K, Bekkers EM, Van den Bergh V, Ginis P, Rochester L, Hausdorff JM, et al. Virtual reality for rehabilitation in Parkinson’s disease. Cochrane Database Syst Rev. 2016 Dec 21. 12:CD010760. [Medline]. [Full Text].

Aaltonen O, Niemi P, Nyrke T, Tuhkanen M. Event-related brain potentials and the perception of a phonetic continuum. Biol Psychol. 1987 Jun. 24(3):197-207. [Medline].

Abbas JJ, Triolo RJ. Experimental evaluation of an adaptive feedforward controller for use in functional neuromuscular stimulation systems. IEEE Trans Rehabil Eng. 1997 Mar. 5(1):12-22. [Medline].

Aguirre GK, D’Esposito M. Environmental knowledge is subserved by separable dorsal/ventral neural areas. J Neurosci. 1997 Apr 1. 17(7):2512-8. [Medline].

Berry I, Demonet JF, Warach S, et al. Activation of association auditory cortex demonstrated with functional MRI. Neuroimage. 1995 Sep. 2(3):215-9. [Medline].

Bobet J. Can muscle models improve FES-assisted walking after spinal cord injury?. J Electromyogr Kinesiol. 1998 Apr. 8(2):125-32. [Medline].

Brumaghim JT, Klorman R. Methylphenidate’s effects on paired-associate learning and event-related potentials of young adults. Psychophysiology. 1998 Jan. 35(1):73-85. [Medline].

Brunswick N, Rippon G. Auditory event-related potentials, dichotic listening performance and handedness as indices of lateralisation in dyslexic and normal readers. Int J Psychophysiol. 1994 Dec. 18(3):265-75. [Medline].

Christiansen C, Abreu B, Ottenbacher K, et al. Task performance in virtual environments used for cognitive rehabilitation after traumatic brain injury. Arch Phys Med Rehabil. 1998 Aug. 79(8):888-92. [Medline].

Connolly JF, Phillips NA, Forbes KA. The effects of phonological and semantic features of sentence-ending words on visual event-related brain potentials. Electroencephalogr Clin Neurophysiol. 1995 Apr. 94(4):276-87. [Medline].

Dariush B, Parnianpour M, Hemami H. Stability and a control strategy of a multilink musculoskeletal model with applications in FES. IEEE Trans Biomed Eng. 1998 Jan. 45(1):3-14. [Medline].

Dehaene-Lambertz G. Electrophysiological correlates of categorical phoneme perception in adults. Neuroreport. 1997 Mar 3. 8(4):919-24. [Medline].

Erwin R, Van Lancker D, Guthrie D, et al. P3 responses to prosodic stimuli in adult autistic subjects. Electroencephalogr Clin Neurophysiol. 1991 Nov-Dec. 80(6):561-71. [Medline].

Frank Y, Seiden JA, Napolitano B. Electrophysiological changes in children with learning and attentional abnormalities as a function of age: event-related potentials to an “oddball” paradigm. Clin Electroencephalogr. 1998 Oct. 29(4):188-93. [Medline].

Georgiou N, Iansek R, Bradshaw JL, et al. An evaluation of the role of internal cues in the pathogenesis of parkinsonian hypokinesia. Brain. 1993 Dec. 116 (Pt 6):1575-87. [Medline].

Granovsky Y, Sprecher E, Hemli J, Yarnitsky D. P300 and stress in mild head injury patients. Electroencephalogr Clin Neurophysiol. 1998 Nov. 108(6):554-9. [Medline].

Kagaya H, Shimada Y, Ebata K, et al. Restoration and analysis of standing-up in complete paraplegia utilizing functional electrical stimulation. Arch Phys Med Rehabil. 1995 Sep. 76(9):876-81. [Medline].

Kayser J, Tenke CE, Bruder GE. Dissociation of brain ERP topographies for tonal and phonetic oddball tasks. Psychophysiology. 1998 Sep. 35(5):576-90. [Medline].

Kilgore KL, Peckham PH. Grasp synthesis for upper-extremity FNS. Part 1. Automated method for synthesising the stimulus map. Med Biol Eng Comput. 1993 Nov. 31(6):607-14. [Medline].

Kritikos A, Leahy C, Bradshaw JL, et al. Contingent and non-contingent auditory cueing in Parkinson’s disease. Neuropsychologia. 1995 Oct. 33(10):1193-203. [Medline].

Lovrich D, Simson R, Vaughan HG, Ritter W. Topography of visual event-related potentials during geometric and phonetic discriminations. Electroencephalogr Clin Neurophysiol. 1986 Jan. 65(1):1-12. [Medline].

Mecklinger A, Opitz B, Friederici AD. Semantic aspects of novelty detection in humans. Neurosci Lett. 1997 Oct 10. 235(1-2):65-8. [Medline].

Moline J. Virtual reality for healthcare: a survey. G Riva, ed. Virtual Reality in Neuro-Psycho-Physiology. Amsterdam: IOS Press; 1997. 3-34.

Naatanen R, Lehtokoski A, Lennes M, et al. Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature. 1997 Jan 30. 385(6615):432-4. [Medline].

Onofrj M, Curatola L, Malatesta G, et al. Reduction of P3 latency during outcome from post-traumatic amnesia. Acta Neurol Scand. 1991 May. 83(5):273-9. [Medline].

Pasqualotto A, Finucane CM, Newell FN. Visual and haptic representations of scenes are updated with observer movement. Exp Brain Res. 2005 Oct. 166(3-4):481-8. [Medline].

Popovic DB. Finite state model of locomotion for functional electrical stimulation systems. Prog Brain Res. 1993. 97:397-407. [Medline].

Rappaport M, Clifford JO, Winterfield KM. P300 response under active and passive attentional states and uni- and bimodality stimulus presentation conditions. J Neuropsychiatry Clin Neurosci. 1990. 2(4):399-407. [Medline].

Rizzo AA, Buckwalter JG. Virtual reality and cognitive assessment: the state of the art. G Riva ed. Virtual Reality in Neuro-Psycho-Physiology. Amsterdam: IOS Press; 1997. 123-145.

Rossi F, Triggs WJ, Eisenschenk S. Topographic differences in task-dependent facilitation of magnetic motor evoked potentials. Neurology. 1999 Feb. 52(3):537-40. [Medline].

Sharma A, Kraus N, McGee T, et al. Acoustic versus phonetic representation of speech as reflected by the mismatch negativity event-related potential. Electroencephalogr Clin Neurophysiol. 1993 Jan-Feb. 88(1):64-71. [Medline].

Siebner HR, Tormos JM, Ceballos-Baumann AO, et al. Low-frequency repetitive transcranial magnetic stimulation of the motor cortex in writer’s cramp. Neurology. 1999 Feb. 52(3):529-37. [Medline].

Smith B, Tang Z, Johnson MW, et al. An externally powered, multichannel, implantable stimulator-telemeter for control of paralyzed muscle. IEEE Trans Biomed Eng. 1998 Apr. 45(4):463-75. [Medline].

Steffin M. Interfacing a haptic generator to patient videospace to enhance motor performance. In Preparation.

Steffin M. Video preprocessing of patient environmental data for use in virtual reality assisted physical therapy systems. Cyberpsychol Behav. 1998. 1(2):177-85.

Steffin M. Virtual Reality Biofeedback in Chronic Pain and Psychiatry. Available at: http://emedicine.medscape.com/article/1136874-overview. Medscape Reference Journal [serial online]. 2006. [Full Text].

Steffin M. Bridging the gap between “real” reality and virtual reality: intelligent human-machine therapeutic interaction in patient videospace. Cyberpsychol Behav. 2000. 3(3):447-63.

Strickland D. Virtual reality for the treatment of autism. Stud Health Technol Inform. 1997. 44:81-6. [Medline].

Weiss P, Stelmach GE, Hefter H. Programming of a movement sequence in Parkinson’s disease. Brain. 1997 Jan. 120 (Pt 1):91-102. [Medline].

Winsberg BG, Javitt DC, Silipo GS. Electrophysiological indices of information processing in methylphenidate responders. Biol Psychiatry. 1997 Sep 15. 42(6):434-45. [Medline].

Morris Steffin, MD Chief Science Officer, Virtual Reality Neurotech Lab

Disclosure: Nothing to disclose.

Mary L Windle, PharmD Adjunct Associate Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference

Disclosure: Nothing to disclose.

Jonathan P Miller, MD Director, Functional and Restorative Neurosurgery, Director of Epilepsy Surgery, Attending Neurosurgeon, University Hospitals Cleveland Medical Center; Director, Functional and Restorative Neurosurgery Center, UH Cleveland Medical Center Neurological Institute; Associate Professor of Neurosurgery, Fellowship Director, Functional and Stereotactic Neurosurgery, Associate Residency Program Director, Department of Neurosurgery, Surgical Director, Neuromodulation Center, Case Western Reserve University School of Medicine

Jonathan P Miller, MD is a member of the following medical societies: Alpha Omega Alpha, American Association of Neurological Surgeons, American College of Surgeons, American Epilepsy Society, American Society for Stereotactic and Functional Neurosurgery, Congress of Neurological Surgeons, International Neuromodulation Society, North American Neuromodulation Society, Ohio State Neurosurgical Society, Society of Neurological Surgeons

Disclosure: Nothing to disclose.

Anthony M Murro, MD Professor, Laboratory Director, Department of Neurology, Medical College of Georgia, Georgia Regents University

Anthony M Murro, MD is a member of the following medical societies: American Academy of Neurology, American Epilepsy Society

Disclosure: Nothing to disclose.
