
Annals of Biomedical Engineering, Vol. 34, No. 5, May 2006 (© 2006) pp. 859-878
DOI: 10.1007/s10439-005-9055-7

Analyzing Trends in Brain Interface Technology: A Method to Compare Studies

M. M. MOORE JACKSON,1 S. G. MASON,2 and G. E. BIRCH2,3

1 College of Computing, Georgia Institute of Technology, Atlanta, Georgia; 2 Neil Squire Foundation, 220-2250 Boundary Road, Burnaby, BC, Canada V5M 4L9; and 3 Department of Electrical and Computer Engineering, The University of British Columbia, 2356 Main Mall, Vancouver, Canada V6T 1Z4

(Received 6 May 2005; accepted 13 October 2005; published online: 20 April 2006)

Abstract: Continued progress in the field of Brain Interface (BI) research has encouraged the rapid expansion of the BI community over the last two decades. As the number of BI researchers and organizations steadily increases, newer and more advanced technologies are constantly produced, evaluated, and reported. Though the BI community is committed to accurate and objective evaluation of methods, systems, and technology, the diversity of the field has hindered the development of objective methods of comparison. This paper introduces a new method for directly comparing studies of BI technology based on the theoretical models and taxonomy proposed by Mason, Moore, and Birch. The effectiveness of the proposed method was demonstrated by interpreting and comparing a representative set of 21 BI studies. The method allowed us to 1) identify the salient aspects of a specific BI study, 2) identify what has been reported and what has been omitted, 3) facilitate a complete and objective comparison with other studies, and 4) characterize overall trends, areas of inactivity, and reporting practices.

Keywords: Brain interface, Brain-computer interface, Brain-machine interface, Direct brain interface, BI, BCI, BMI, DBI, Brain interface comparison, Taxonomy, Models, Framework.

INTRODUCTION

The field of Brain Interface (BI) technology1 is truly interdisciplinary, incorporating researchers from neuroscience, psychology, computer science, engineering, medicine, and other technical and health care fields. This pronounced diversity strengthens the BI field with a myriad of perspectives and approaches, which encourages innovation and invention. However, the lack of a common perspective and terminology has hindered the development of objective methods of comparison. This issue, which is a recognized deficiency in the field,15,16,26 is exacerbated by the rapid growth in the ranks of BI researchers and the field's dependence on collaborations due to limited funding and small, specialized subject populations. Consequently, objective methods of comparison based on common models and language are critical to the evolution of appropriate and effective BI technology.

This paper presents and illustrates a method to objectively classify and compare the salient features of BI technology studies. We demonstrate that the method is effective and that, if a sufficient number of studies are compared, it can be used to detect research trends and areas of inactivity in the BI field. The method employs a classification template that is based on the theoretical taxonomy proposed by Mason, Moore, and Birch,15 which is summarized in Section 2. Section 3 describes the application of the method to a representative set of BI studies. The paper concludes with the results of this comparison and comments on observed research trends and reporting practices.

Address correspondence to M. M. Moore Jackson, College of Computing, Georgia Institute of Technology, Atlanta, Georgia. Electronic mail: mooremelody@bellsouth.net
1 In this work, the terms Brain-Computer Interface (BCI) research, Brain-Machine Interface (BMI) research, and Direct Brain Interface (DBI) research are included under the umbrella term Brain Interface (BI) research.

0090-6964/06/0500-0859/0 © 2006 Biomedical Engineering Society

CLASSIFICATION TEMPLATE

The proposed method uses a classification template, presented in Table 1, to assist people in characterizing individual studies. The elements of Mason, Moore, and Birch's BI study taxonomy, which compose the left side of the template, are grouped into three main categories: Synopsis, Data Collection, and Analysis. The study attributes, shown in bold on the left side of Table 1, represent the basic items required to describe a study of BI technology. Attributes defined under Synopsis identify key aspects of an evaluation that are used to characterize and compare studies. Attributes listed under Data Collection relate to detailed methods for collecting data from subjects, and Analysis attributes describe the analysis methods and outcomes. For a complete description and discussion of the categories and attributes, refer to Mason, Moore, and Birch.15

The right side of the template provides attribute values, which are the specific details of a BI study that is being classified. Attribute values may consist of either attribute

TABLE 1. BI study classification template.

SYNOPSIS

Study Class: one of:
  - safety and tolerance study (of a new biorecording technology and application method)
  - exploratory study (of unknown properties of a signal measured from a biorecording technology, e.g., exploring the time-frequency properties of a neurological phenomenon in EEG)
  - [controlled] technology design study (of technology design options)
  - [controlled] usability study (of technology designs)
  - [controlled] efficacy study (of impact of technology on people)
  - pre-commercialization studies (of product designs)

Objective(s): Statement of study objective(s). Examples are "explore the signal characteristics of imagined movement versus actual movement", "demonstrate that a recording technique is safe and tolerable", "study the usability of a new technology under certain conditions".

Online/Offline: Biosignal source class, one of:
  - online (evaluation of technology with live biosignals)
  - offline (evaluation of technology with pre-recorded biosignals)

Subject class: one of:
  - human
  - animal

Experimental variable(s):
  Independent variable(s): Name of variable 1: specific values; Name of variable 2: specific values; ...
  Dependent variable(s): Name of variable 1: short description; Name of variable 2: short description; ...

Study description:
  BI technology design model: Design model class, one of:
    - (2-component) Full AT: Transducer and Assistive Device
    - (3-component) Full AT: Transducer, Control Interface, and Assistive Device
    - (2-component) Demonstration System: Transducer and Demo Device
    - (3-component) Demonstration System: Transducer, Control Interface, and Demo Device
    - Transducer: Transducer only, no other components
    - Transducer Subcomponents: a) Sensor to Feature Extraction (front-end components of a Transducer, used for exploring the signal properties related to a Neurological Phenomenon); b) Sensor Design (used to test characteristics of sensor designs)
  Assistive Technology: High-level description of the assistive technology components used in the study, e.g., P300 menu-based AAC system.

DATA COLLECTION

Apparatus:
  Physical environment:
    Location: Location class, one of: laboratory or field study; followed by a description of specific details of the test environment, e.g., in a soundproof room, or a subject's hospital room.
    Relation to target environment: Description of how closely the actual location for the study relates to the target environment.
  Target application:
    Target population: Target population class(es), one (or more) of: fully-paralyzed (locked in) individuals; partially-paralyzed individuals (e.g., people with Spinal Cord Injury, stroke, MS, ALS); individuals with other disabilities: description; able-bodied individuals; general population.
    Target activity: Target activity class(es), one (or more) of: control of appliances/devices (e.g., TV, phone, computer, bed); control of objects/avatar in a virtual environment (e.g., virtual reality, games); communication with people (e.g., synthesized speech, text or drawing); control of paralyzed limbs (for body positioning or object manipulation); control of body function (e.g., bladder control, posture adjustment); personal mobility (e.g., operating a wheelchair).
    Target environment: Target environment class(es), one (or more) of: restricted living environments (e.g., bedroom, hospital); general living environments/home; office/workplace; outdoors; anywhere (indoors or outdoors).

BI Transducer:
  Type: Transducer type class, one of:
    - endogenous (responds to internally generated signal)
    - exogenous (responds to an automatic response to an evoked stimulus)
    - modulated response (responds to an internal response modulation to an external stimulus)
    - incomplete (study of subcomponent function)
  Biorecording technology: Biorecording technology class, one of: EEG (electroencephalography); ECoG (electrocorticography); implanted electrodes (in cortex); NIR (near infrared recordings); fMRI (functional MRI).
  Inputs: Transducer input class, one of: in/over motor cortex; in/over somatosensory cortex; in/over frontal cortex; in/over parietal cortex; in/over temporal cortex; in/over occipital (visual) cortex; in/over multiple cortical areas; followed by specific sensor location information and possibly a description of amplification, filtering and digitization details (if available).
  Neurological phenomenon: Neurological phenomenon class, one of, or a combination of, the following in field potentials (EEG or ECoG):
    - mu, alpha, beta or other rhythm power
    - slow-cortical potentials
    - event-related desynchronization (ERD) or synchronization (ERS)
    - movement-related potentials: a) related to attempted movement; b) related to imagined movements
    - response to basic cognitive tasks (mental rotation, math operations)
    - P300 (or other) response to oddball stimuli
    - VEP response to visual stimulation
    - SSVEP response to repetitive visual stimulation
    - aural evoked potentials (AEP) response to aural stimulation
    - ability to consciously modulate: a) response to visual stimulation; b) response to aural stimulation; c) response to tactile stimulation
    - other phenomenon (specify)
    or, in signals recorded from (implanted) cortical electrode arrays:
    - changes in single cell neural activity (e.g., firing rate): a) related to attempted movement; b) related to imagined movement; c) in response to sensory stimulation
    or:
    - optical changes: a) related to attempted movement; b) related to imagined movement; c) in response to sensory stimulation
  Artifact processing: A description of the artifact processing (removal) mechanism, if one exists in the transducer.
  Stimulator: Description of the stimulus mechanism used to evoke the response.
  Feature extraction/translation algorithms: Description of the feature extraction and feature translation algorithms used in the transducer.
  Output: Transducer output class, one of:
    - (single) discrete N-state output: a) all states are intentional control (IC) states (N >= 2); b) one state is a no control (NC) state, the others are IC (N >= 2); c) one state is an unknown state, the others are IC (N >= 3); d) one state is no control (NC), one is an unknown state, the others are IC (N >= 3)
    - (single) 1-, 2- or 3-D continuous control, fixed reference (like a volume control on a stereo, 1-D; a joystick, 2-D; or a 3-D joystick, 3-D)
    - (single) 1-, 2- or 3-D relative continuous control, no reference (like the wheel on a wheel mouse, 1-D; mouse position control, 2-D; or 3-D mouse position control, 3-D)
    - (single) spatial reference points (or regions) in a 2- or 3-D space (like a pointing device, 2- or 3-D)
    - multiple discrete control outputs
    - multiple continuous control outputs
    - multiple composite of discrete and continuous control outputs
  Idle Support: Idle support class, one of: idling supported; idling not supported.

Control Interface:
  Interface paradigm: Interface paradigm class, one of:
    - virtual keyboard (M symbols): a) single direct selection; b) single indirect (pointer-based) selection; c) multiple direct selections (N-ary keyboard); d) multiple indirect (pointer-based) selections (N-ary keyboard)
    - menu system: a) single direct selection; b) single indirect (pointer-based) selection; c) multiple direct selections (N-ary keyboard); d) multiple indirect (pointer-based) selections (N-ary keyboard)
    - icon-based GUI
    - object positioning through virtual manipulation/direction
    - other
  Control type: Control type class, one of:
    - object positioning: a) move object from fixed starting point(s) to 1 target per trial; b) move object from fixed starting point(s) to 1 of N possible targets; c) continuous positioning (path not important); d) continuous path following/navigation/drawing (where path is important)
    - continuous (parameter) value adjustment
    - object selection: a) direct item/(parameter) value/action selection; b) indirect (pointer-based) item/(parameter) value/action selection
  Dimensionality: one of: in 1-D; in 2-D; or in 3-D (for object positioning or continuous value adjustment); or 1 of M items (or values or actions), define M (for discrete selection).
  Temporal control paradigm: Control paradigm class, one of:
    - synchronized (periodically available; idling not supported)
    - system-paced (periodically available; idling supported)
    - constantly-engaged (continuously available; idling not supported)
    - self-paced (continuously available; idling supported)
  Semantic translation: Description of the method used to translate logical (non-semantic) inputs into semantic outputs that are meaningful to the Assistive Device and devices/people in the environment.
  Output: Same classes as listed under Transducer: Output.

Demo Device:
  Type of device: Demo Device type class, one of:
    - virtual devices (computer monitor with): cursor; menu system; bar or level indicator; discrete indicator; object(s) in virtual world (video game or VR); virtual hand/arm; virtual vehicle; other virtual device used to demonstrate direct brain control
    - real/physical devices: robotic arm/hand; model vehicle; vehicle simulator; other device used to demonstrate direct brain control
  Input: Same classes as listed under Transducer: Output.
  Temporal control paradigm: Same classes as listed under Control Interface: Temporal control paradigm.
  Feedback: Feedback class, one of: visual; auditory; tactile/haptic; multi-modal sensory; direct cortical stimulation; other.

Assistive device:
  Type: AD type class, one of:
    - limb control: a) neuroprosthetic (FES system); b) exoskeleton control
    - body function control FES system
    - appliance interface/remote control (wireless, IR, X10 to external devices)
    - speech synthesizer
    - visual display (for text or drawings)
    - cursor or pointer for referencing
    - wheelchair
    - other assistive device
    followed by specific details of the device.
  Temporal control paradigm: Same classes as listed under Control Interface: Temporal control paradigm.
  Feedback: Same classes as those listed under Demo Device: Feedback.

External controllers: Description of all external (non-integrated) State Controllers used to dynamically configure or adjust the real-world BI system during an evaluation. (Many controllers are integrated into the components as external control interfaces and are difficult to define. Those listed here are external devices that manipulate the subject or objects during the experiment.)

Monitors: Description of the State Monitors used, and what data they report, e.g., eye-blink sensors, EMG sensors, video cameras, or microphones.

Cuing mechanisms: Description of the Cuing Mechanisms used to guide user responses. One of: written instructions; verbal instructions; displayed instructions; audio cues; displayed objects or icons.

Subject pool:
  Selection:
    Chosen subjects: Description of the subjects used in the study.
    Selection criteria: Description of the criteria used to select the subject pool.
    Method of recruitment: Description of how subjects were attracted to the study, including methods of compensation.
    Screening methods: Description of how the selection criteria were measured and applied.
  Preparation:
    Relation to target population: Description of how the subject pool represents the target population.
    Usage history: Description of the subjects' usage history (using terminology defined in Section . . . Usage History).
    Training methods: Description of the training and/or practice methods.
    Special conditioning/preparation: Description of special conditioning (includes medical procedures or other preparatory conditioning, rather than training).
    Equipment customization: Description of methods used to customize the BI technology to the user.

Experimental protocol:
  Structure:
    Subject grouping: Grouping method class, one of: single-group design; within-group (parallel) design; between-group (crossover) design; case-study (N-of-1 design); other: specify; followed by a description of grouping methods and controls.
    Recording periods: Definition of the number and timing of sessions and trials in the study.
  Introductory protocol: Description of the methods used to orient the subjects to the equipment, tasks and environment, completing consent forms and questionnaires.
  Test protocol:
    Subject task(s): Subject task class, one of:
      For non-interactive tasks (where feedback is not required), one of: visual attention; attempted movement; language tasks (spelling); mathematics tasks (counting, subtraction); mental rotation; followed by a description of the mental task(s) performed (e.g., visualizations, mental activities such as arithmetic, or motor imagery).
      For basic interaction tasks, one of:
        guided: single item selection (one selectable target per trial); item selection (multiple selectable targets per trial); positioning (i.e., move a pointer/object to a point or orientation in space specified by a fixed target or target trajectory); path following/tracing (i.e., move a pointer/object to follow a moving target or a visible path in space)
        self-guided: item selection (multiple selectable items per trial); positioning (i.e., 1) move or orient a pointer/object in space, or 2) move a pointer/object along a subject-selected path in space, also known as navigation or drawing)
      followed by a description of the actual task the subjects performed.
      For complex interaction tasks: a description of the actual task the subjects performed and whether they were guided (e.g., copy spelling is guided) or self-guided (e.g., free spelling).
    Subject task pacing: one of: paced (the experimental system determines the timing of user actions); self-paced (the user determines the timing of his/her own actions); or window-of-opportunity self-paced (the user may interact with the system during a prescribed time window).
    Relation to target activity: one of: representative of target activity; close approximation; distant approximation; abstraction of target activity. For complex interaction tasks, a description of the relationship.
    Temporal sequencing: Description of the temporal sequencing of the experimental protocol and the behavior of the apparatus during the experiment.
    Performance feedback: Description of the type of sensory feedback available to the subject during testing.
    Information recorded: Detailed description of what data is recorded.
  Debriefing protocol: The activities performed after the testing phase, such as exit interviews.

ANALYSIS

Quantitative:
  Quantitative methods: Description of the quantitative methods used to analyze data, such as metrics and statistical methods.
  Results and interpretations: Describes the reported results and the interpreted usability.
  Theoretical modeling: Outlines application or creation of quantitative theoretical models.
Qualitative:
  Qualitative methods: Description of the qualitative methods used to collect and analyze data, such as surveys or interviews.
  Results and interpretations: The reported results and the subjects' perceived usability.
  Theoretical modeling: Outlines application or creation of qualitative theoretical models.

Note. Proposed attribute set elements are marked in bold type on the right-hand side.


set elements or attribute descriptions. Attribute set elements (shown in bold type in Table 1) are chosen from predetermined attribute sets of possible classifications, such as choosing "P300" for neurological mechanism. Attribute descriptions are unrestricted prose characterizations of an attribute in a specified study, such as Study Objective under Synopsis. The attribute sets listed in Table 1 are an initial set of possible values derived from a review of the literature and the authors' collective experience with BI and human-machine interaction research. As defined, attribute sets represent general classifications of attribute descriptions and were only proposed for an attribute if the attribute descriptions could be readily generalized or if they fit an existing taxonomy from another field. In selecting attribute sets, our approach was to use classical terminology (i.e., terminology found in experimental psychology and clinical trials design) and, where that was not appropriate, to propose a synthesis of terms and concepts drawn from the BI, Assistive Technology (AT), and Human-Computer Interaction (HCI) literature.
METHODS
As a demonstration of the proposed BI study comparison method, we identified, characterized, and compared a
representative test set of BI studies.
Test Set Selection
Papers from attendees of the 2nd International Brain-Computer Interface Workshop (Rensselaerville, NY, USA, June 2002)24 were selected as a representative test set of studies for this work. More than 20 research groups were represented at this conference, with a research group defined by a unique combination of laboratory name, location, laboratory director(s), and collaborative publishing history. The test set was limited to one paper for each research group in attendance to minimize group bias and reduce the volume of studies reviewed. For each group, the most recent journal article dated prior to March 2004, describing a study or multiple studies (or experiments), with the keyword BI, BCI, BMI, or DBI (or the corresponding descriptions) in its title, abstract, or keyword list, was selected. Since the goal of this work was to characterize BI studies, research overview papers, tutorials, descriptions of neuroanatomy, and related neurophysiology discussions were not included.
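The selection rule above (one eligible paper per group, most recent first, overviews excluded) can be sketched as a small filter. This is a hypothetical illustration, not code from the study; the record fields (`group`, `title`, `keywords`, `date`, `overview`) are assumed names.

```python
from datetime import date

KEYWORDS = {"BI", "BCI", "BMI", "DBI"}

def eligible(paper: dict) -> bool:
    """Journal article dated before March 2004 whose title or keyword
    list mentions a BI-related term; overview papers are excluded."""
    terms = set(paper["keywords"]) | set(paper["title"].split())
    return (paper["date"] < date(2004, 3, 1)
            and not paper.get("overview", False)
            and bool(KEYWORDS & terms))

def select_test_set(papers: list[dict]) -> dict:
    """Keep, per research group, the single most recent eligible paper."""
    chosen: dict = {}
    for p in sorted(papers, key=lambda p: p["date"]):
        if eligible(p):
            chosen[p["group"]] = p  # later (more recent) papers overwrite
    return chosen

papers = [
    {"group": "A", "title": "A BCI speller", "keywords": ["BCI"],
     "date": date(2001, 5, 1)},
    {"group": "A", "title": "Improved BCI speller", "keywords": ["BCI"],
     "date": date(2003, 11, 1)},
    {"group": "B", "title": "BCI review", "keywords": ["BCI"],
     "date": date(2003, 1, 1), "overview": True},
]
print(sorted(select_test_set(papers)))  # -> ['A']
```

Sorting by date before overwriting ensures the per-group survivor is always the most recent eligible paper.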

Application

The following method was employed to interpret and classify the reported studies. Each journal paper in the test set was divided into studies, where a study was defined to be an evaluation with a unique objective which, if not clearly stated, was characterized by a unique set of dependent and independent variables, subject pool, test protocols, and analysis methods. For example, a paper describing a mu-based BI technology that tested the same subject pool with two different test protocols, such as controlling a robot arm and communicating through a virtual keyboard and text display, would be considered two separate studies. Because some papers contained multiple studies under our definition, only the first study in each paper was included in this demonstration in order to remove bias towards papers with more than one study. Each study was characterized separately using the classification template described in Section 2. Each study was reviewed by Mason and Moore, and the attribute values were interpreted using the context of the functional models provided elsewhere.15 Final attribute values and descriptions were decided on by consensus. The result was a formal review per study, termed a study review. Table 2 shows an example of a study review.

For each study, attributes requiring descriptions were summarized in a sentence or short paragraph. For attributes with proposed attribute sets, the closest attribute set element (or, for some attributes, multiple elements) was used to describe an attribute. If a description did not match one of the defined attribute set elements, the attribute value was set to "other" followed by a brief description. In cases where attributes were not described in a paper but were provided in a referenced work, that referenced paper was examined to determine appropriate attribute values. In cases where an attribute was not clearly defined but only implied, the tag [implied] was added to the attribute value.

As defined by the Mason, Moore, and Birch study taxonomy, attributes can be required or optional. Missing required values were recorded as "not reported", while missing optional values were recorded as "none reported". This practice was followed even for values that were obvious to the reviewers because of familiarity with a specific work.

Once all study reviews were complete (i.e., the studies had been cast into a common taxonomic framework), the individual study reviews were summarized in a matrix and analyzed for similarities and differences between studies. To identify common trends and areas of inactivity, the number of times each attribute value was observed was illustrated in a histogram or as a percentage in a pie chart. Also, the percentage of required attributes reported was calculated as a measure of study completeness.

RESULTS AND DISCUSSION

Selected Test Set and Study Reviews

Following the selection method defined in Section 3.1, 21 articles describing BI technology evaluations were selected across the prime research groups. These articles are listed in the Appendix. The selection criteria resulted in a significant representation of the activity in the BI field. Although not comprehensive, the set does capture most of the principal research groups referenced in the literature. In this set, there is a bias towards the June 2003 issue of IEEE TNSRE (a special issue publishing the results of the

TABLE 2. Example of study review.

Study 1 (First of two studies)

Synopsis
  Study Class: (non-controlled) Usability study
  Objective(s): Study the effect of number of targets on selection capability in a mu-based BI
  Online/Offline: Online (evaluation of technology with live biosignals)
  Subject class: Human
  Independent variable(s): Number of targets
  Dependent variable(s): Information rate of control interface output
  Study description:
    BI technology design model: (2-component) Demonstration System: Transducer and Demo Device
    Assistive Technology: mu-based menu system
  Target application:
    Target population: Fully-paralyzed (locked in) individuals; Partially-paralyzed individuals; Individuals with other severe motor disabilities
    Target activity: Communication with people
    Target environment: Not reported

Data collection
  Apparatus:
    Location/Physical environment: Laboratory: subjects sat in a reclined chair facing a 51 cm video screen 3 m away
    Relation to target environment: Unknown. Target not reported
  BI Transducer:
    Type: Endogenous (responds to internally generated signal)
    Biorecording technology: EEG
    Inputs: Over somatosensory cortex: 1-3 EEG channels (relative to average reference taken from 64 channels or Laplacian transform)
    Neurological phenomenon: Movement-related potentials (related to attempted movement): slow pre-movement ERPs associated with finger movement
    Artifact processing: None reported
    Stimulator: N/A (endogenous transducer)
    Feature extraction/translation algorithms: mu power is calculated from selected input channels and used as the feature vector. The features are scaled and offset using a linear transformation that is adapted continually
    Output: 1-D relative continuous control, no reference (like the wheel on a wheel mouse)
    Idle Support: Not reported
  Control interface:
    All attributes: N/A (no Control Interface used in study)
  Demo Device:
    Type of device: Computer monitor with a menu system
    Control type: Indirect (pointer-based) item selection: a cursor moves horizontally across the display from left to right towards target cells that line the right edge of the screen. All cells are equal size. The vertical position of the cursor is modified over time by the BI Transducer output
    Dimensionality: 1 of 2 to 5 items
    Temporal control paradigm: Synchronized
    Feedback: Visual
  Assistive Device:
    All attributes: N/A (no Assistive Device used in study)
  External controllers: None reported
  Monitors: None reported
  Cuing mechanisms: Displayed objects: 1 of M visual targets (rectangles) displayed on the right side of the screen (M = 2, 3, 4 or 5)
  Subject pool:
    Chosen subjects: Eight subjects (age 20-44, 3 female). Six able-bodied, one with C6 spinal cord injury and one with cerebral palsy (the latter two in wheelchairs)
    Selection criteria: Not reported
    Method of recruitment: Not reported
    Screening methods: Not reported
    Relation to target population: Able-bodied subjects are an approximation to humans with disabilities. The other two subjects represent the target population
    Usage history: All subjects had extensive experience with the test environment: two to three sessions a week over several months
    Training methods: Training over multiple sessions (method and number of sessions not reported). Number of targets increased gradually from 2 to 5
    Special conditioning/preparation: None reported
    Equipment customization: All subjects underwent an initial evaluation to determine the frequencies and scalp locations for mu and beta activity
  Experimental protocol:
    Structure:
      Subject grouping: Single-group design
      Recording periods: Multiple training sessions (number not reported) followed by four test sessions. Time period (number of days/weeks) was not reported. Each test and training session consisted of eight 3-min runs (20-30 trials) separated by 1-min breaks
    Introductory protocol: Informed consent provided by all subjects
    Test protocol:
      Subject task(s): Guided, single item selection: subjects instructed to adjust the height of the cursor as it moved across the display to intercept the target, and to remain motionless during trials
      Subject task pacing: Paced
      Relation to target activity: Distant approximation
      Temporal sequencing: 1 s between target appearance and cursor movement, 2 s for the cursor to move across the screen, 1.5 s for feedback and 1 s inter-trial interval
      Performance feedback: Target flashed if attained, otherwise blank if missed
      Information recorded: Eight runs (two each of 2, 3, 4, and 5 targets) were recorded during each of the test sessions. Order of target presentation counterbalanced across runs
    Debriefing protocol: Not reported

Analysis
  Qualitative:
    Qualitative methods: None reported
    Results and interpretations: None reported
    Theoretical modeling: None reported
  Quantitative:
    Quantitative methods: Target attainment accuracy/error rate and information transfer rate were calculated for each combination of subject/number of targets
    Results and interpretations: The amount of information transmitted per unit time depended on the number of targets. The number of targets for maximum information rate varied across subjects. Average performance indicated four targets provides the maximum information rate
    Theoretical modeling: None reported

Note. Attribute values depicted in bold type on the right represent attribute set elements defined in the study review template.17


2nd International BCI Workshop). The test set also reveals a bias towards technical journals, with only four papers in non-engineering journals. This may reflect that BI research is predominantly in a technology development phase. Table 2 shows an example of a study review, illustrating the format and content of a multiple-study paper. Space prohibits including all the study reviews; however, the remainder of the study reviews are available on-line at www.braininterface.org\published links\BI Evaluation Framework\study reviews set0804.htm.

Instead, we provide an alternate perspective on the survey


data based on the analysis of attribute similarity across all
studies.
Comparing BI Studies
Characterizing the representative set of BI studies using a common taxonomic framework resulted in several interesting findings. First, the results of many of the reviewed studies could not be directly compared, because factors in the experimental methods (whether subject related or protocol related) were significantly different between studies. This observation emphasizes the important point that even though two studies use a common outcome measure, such as information rate, that alone is not sufficient to draw conclusions; common methods are also required for meaningful comparison. Generally, there was little evidence of common methods or controls used across groups, which suggests a lack of synergy and communication between research groups. Many studies reported performance using the standard measure of information rate,27 but did not fully report all of the other factors (such as selection space size) that affect objective comparison. Studies often report error rates, but again objective comparison can only be performed in the absence of bias inherent in control interface designs. Another interesting finding was that none of the studies tested the same technology, although many of them used technology based on similar neurological phenomena or produced the same type of output (such as a Transducer with 2-state discrete output). Given the diversity of methods and metrics, it is difficult to make specific comparisons.

Trends and Areas of Inactivity in BI Research
This section summarizes the range of attribute values seen across all reviewed studies and presents some detailed comments and observations on the research activities in the field. The findings are grouped by three categories: Synopsis, Data Collection, and Analysis.
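To make the comparability point concrete, the standard Wolpaw information-rate measure27 can be computed from the selection accuracy and the selection-space size. The sketch below is our illustration (the `bits_per_selection` helper is not code from any reviewed study); it shows how the same 90% accuracy yields very different information per selection depending on the number of targets.

```python
import math

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw et al. information transferred per selection, in bits:
    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)),
    where N is the number of targets and P the selection accuracy."""
    if n_targets < 2:
        return 0.0
    bits = math.log2(n_targets)
    if accuracy > 0.0:
        bits += accuracy * math.log2(accuracy)
    if accuracy < 1.0:
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_targets - 1))
    return max(bits, 0.0)

# The same 90% accuracy carries very different information depending on
# the (often unreported) selection-space size:
for n in (2, 4, 8):
    print(n, round(bits_per_selection(n, 0.90), 3))
```

Multiplying bits per selection by the selection pace gives the information rate; without both the selection-space size and the pacing, a reported rate cannot be reproduced or compared.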

Synopsis Attributes
The Synopsis attributes represent the salient characteristics of a BI study, and consequently analysis of these attributes can reveal high-level trends in the field. In the following paragraphs, we report results for the significant Synopsis attributes.
Study Class: As shown in Fig. 1, all but three studies were technical development studies (28.6%) and usability studies (47.6%). Most of these studies (89.5%) were not controlled; that is, the results were not compared to an established technique. The other three studies were a safety and tolerance study, an exploratory study, and an efficacy study. The focus on technology-development evaluation without established norms for comparison is an indicator of the youth of the BI field.
Online/Offline: Our results showed that 28.6% of the test set studies employed offline (pre-recorded) data, versus 71.4% that employed online methods for data recording. Of the studies that focused solely on transducer design, the great majority (six out of seven) performed offline testing. Only one Demonstration System was tested offline; the majority were tested online. All full AT systems were tested online.

FIGURE 1. Number of studies in each study class.
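Distributions such as the one in Fig. 1 were tallied across the study reviews. A minimal sketch of that bookkeeping follows; the attribute names and values here are illustrative placeholders, not the actual review template.

```python
from collections import Counter

# Each study review maps attribute names to reported values
# (hypothetical entries; the real template defines many more attributes).
reviews = [
    {"study_class": "usability", "online": True},
    {"study_class": "technical development", "online": False},
    {"study_class": "usability", "online": True},
]

def distribution(reviews, attribute):
    """Percentage of studies reporting each value of a given attribute;
    studies missing the attribute are counted as 'not reported'."""
    counts = Counter(r.get(attribute, "not reported") for r in reviews)
    total = len(reviews)
    return {value: 100.0 * n / total for value, n in counts.items()}

print(distribution(reviews, "study_class"))
```

Because a study may carry several values for some attributes (e.g., multiple target populations), percentages tallied this way need not sum to 100%.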


FIGURE 2. Design models tested: Demo System (2-component) 42%, Transducer 33%, Transducer subcomponents 10%, Full AT (3-component) 10%, Full AT (2-component) 5%.

FIGURE 3. Distribution of target population: fully-paralyzed 52.4%, not reported 38.1%, partially-paralyzed 33.3%, other severe motor disabilities 4.8%.

Subject Class: Eighty-one percent of our test set studied human subjects, with the remaining 19% reporting animal subjects. This bias towards human subjects may reflect that current BI approaches utilize mostly non-implanted electrodes and non-invasive recording techniques, or it may be an artifact of the modest test set size used in this demonstration. As recorded in the Special Conditioning/Preparation attribute (under Data Collection, Subject Pool, Preparation), two of the studies reported invasive (requiring surgery) recording techniques in humans. All of the animal studies reported invasive recording techniques.
BI AT Design Model: Two-component Demonstration Systems were most commonly tested (42.0%). Transducers (i.e., the components that translate measured brain activity into logical control signals) were the second most commonly tested components (33.3%), followed by full AT systems (two-component 5% and three-component 10%). Two studies focused on Transducer subcomponents (electrode designs and elemental signal processing techniques) (Fig. 2). This result gives us insight into the real-world contribution of our field: only one-sixth of these studies tested full AT systems; the majority tested only sub-system components.
Target Population: Figure 3 shows the distribution of attribute values for the Target Population attribute (noting that each study could have multiple targets). Slightly more than half (52.4%) of the studies evaluated technology designed for fully-paralyzed individuals. One-third (33.3%) worked with technology aimed at partially-paralyzed individuals, and one study (4.8%) focused on other severe motor disabilities. Nearly two-fifths (38.1%) of the studies did not report their target population, although most of these implied a disabled target population by describing assistive devices or applications. None of the studies explicitly reported a target of able-bodied users.
Target Activity: Figure 4 shows the occurrences of the reported Target Activity values in our test set. Communication with people (33.3%) was the target of the most studies, which is not surprising because restoring communication is the highest priority for individuals with severe physical disabilities. Control of appliances was the second most prevalent (28.6%), with control of limbs and of virtual objects enjoying a similar (19.0%) level of attention, and personal mobility devices addressed in one study (4.8%). Over one-quarter (28.6%) of the studies did not state their target activity.

FIGURE 4. Target activities distribution: communication with people 33.3%, control of appliances 28.6%, control of virtual objects 19.0%, control of paralyzed limbs 19.0%, control of body functions 0.0%, personal mobility 4.8%, not reported 28.6%.

Target Environment: The great majority (87.5%) of the studies did not report their target environment. Of those that did, 9.5% reported a restricted living environment (such as a hospital room) and 4.8% reported a general living environment (such as a home). No studies addressed general target environments, such as the outdoors.
Data Collection Attributes
The Data Collection category consists of attributes that describe the details of the BI technology being tested and the experimental methods and protocols used to collect the data for analysis. This section presents the results of attribute value distribution in this category.
BI Transducer, Neural Phenomenon: Movement-related potentials (MRP), both movement attempts (MA) and imagined movements (IM), represented the largest number of studies (combined 33.3%), with neural firing rate (23.8%) second. Mu-rhythm power and P300 were other popular approaches (14% each). Only single studies (5%) focused on SSVEP, SCPs, cognitive task differences, and other phenomena. The focus on MRPs may indicate that motor control is better understood in neurology than other brain signals. Studies examining neural firing rates currently require implanted electrodes, and thus correlate with the invasive-technique studies (Fig. 5).
BI Transducer, Output: Figure 6 illustrates that a wide variety of transducers have been tested in BI studies. The most ubiquitous are discrete transducers, with two-state and N-state outputs together making up 45% of the transducers tested. Continuous transducers make up 31% of the total, and spatial-reference transducers form the remaining 21%. Because discrete values present a simpler form of control than continuous values, this shows that current BI technology may provide rudimentary or coarse-grain control more often than it provides fine-grain control.
Control Interface (all attributes): Only two (9.5%) of the studies included a Control Interface. One of these provided a multiple-selection virtual keyboard and the other a single-selection menu. Both of these systems produced a discrete output in a synchronized manner.
Demo Device, Type & Feedback: Nine of the studies (43%) utilized a Demo Device in order to test the ability to control the proposed transducer designs. As seen in Fig. 7, which displays the distribution of Demo Device types, the vast majority of surveyed studies used virtual devices on a computer screen; only one study used a physical device. All Demo Devices provided only visual feedback.
Demo Device, Control Type & Dimensionality: The reported Demo Devices tested the following types of control: discrete selection (of 1 item (10%), 1 of 3-10 items (10%), and 1 of more than 10 items (10%)), indirect (pointer-based) selection (one of two items using a 1-D pointer, 20%), 1-D continuous control (30%), and movement from a starting point to one of eight 3-D targets (10%). No devices tested path following or 2-D continuous control.
Demo Device, Temporal Control Paradigm: Of the Demo Devices used, 60% operated in a synchronized manner, 30% in a constantly-engaged manner, and 10% were user- or self-paced.
Assistive Devices (all attributes): Only three (14.3%) of the studies utilized an Assistive Device. These were a text display, a speech synthesizer, and an appliance interface. The speech synthesizer provided aural feedback, while the other two supplied visual feedback.

FIGURE 5. Distribution of neural mechanism: in field potentials (EEG or ECoG), MRP-MA 19.0%, MRP-IM 14.3%, mu/Beta 14.3%, P300 14.3%, and SCPs, cognitive tasks, SSVEP, and other 4.8% each; in implanted cortical electrodes, firing rate 23.8%. MRP-MA: movement-related potentials, movement attempt; MRP-IM: movement-related potentials, imagined movement; SCPs: slow cortical potentials; SSVEP: steady-state visual evoked potentials.


FIGURE 6. Distribution of types of transducer output: discrete 2-state (all IC 21%, 1 NC state 11%), discrete N-state (all IC 11%, 1 NC state 0%, 1 unknown state 5%), continuous (1-D 21%, relative 2-D 5%, 3-D 5%), and 2-D spatial reference 21%.

FIGURE 8. Chosen subjects' relation to target population: uncertain (target undefined) 32%, approximation 31%, not reported 18%, representative 14%, distant approximation 5%.

Controllers, Monitors, and Cuing Mechanisms: Most of the studies did not use external controllers (87.5%). Monitors were used more frequently (52.4%). Neither of these is required for BI studies. The most popular cuing mechanism was displayed objects, employed in 31.8% of the studies. Verbal instructions were used in 18.2% of the studies, and displayed instructions in a smaller percentage (13.6%). No studies in our test set explicitly reported audio cues or written instructions.
Subject Pool, Subjects Selected: The majority of studies utilized able-bodied subjects as an approximation (31%) of the target population, while only 14% of the studies used subjects representative of their target population, and one used a distant approximation (animal). The subject pool size was generally small, with 70.8% of the test set studies having 14 or fewer subjects. There were no studies with 20 or more subjects. The relation to the target population was often not reported (18%), and another 32% were uncertain because the target population was not reported. The fact that only 14% of the studies were performed on the target population has negative effects on the external validity of BI studies, and is an issue that the BI community will need to address (Fig. 8).

FIGURE 7. Distribution of Demo Device types: cursor 30%, menu system 30%, virtual world 20%, virtual vehicle 10%, robotic arm 10%.

Subject Preparation, Training: In many instances, it was difficult to determine the type and amount of subject training provided. Often, training was not described at all; only 42.9% of the test set studies described training methods. Other studies described sessions for familiarization, but not specifically training on the desired task. Some studies reported that the subjects were continuously trained and only their final performance was reported. Screening methods were often reported as part of training, but only 19.0% of the studies detailed their screening criteria. Subject training was sometimes confused with system training (or classifier training). This topic, subject training, needs future thought and discussion by the BI community.
Experimental Protocol, Structure: There is a strong bias in BI research towards single-group studies, with 85.7% of the studies in our test set that documented grouping reporting this approach. Only one study employed a within-group comparison, and the others did not report subject grouping or had none (for example, case studies utilizing one subject). Again, this finding reflects the time-consuming nature of BI data collection and the fact that separate control groups are often not feasible when testing AT.15
Subject Task: Basic cognitive tasks (visual attention (17%), attempted movement (14%), imagined movement (14%), auditory imagery (5%), and other mental tasks (9%)) were predominant (59%). Of the more interactive tasks, single-item selection (17%) and positioning (14%) were the most common, with one study using (full) item selection and another performing a more complex task. Every paper in the test set reported the subject task for its experiment. Only one of the studies tested the actual target activity; approximately one-quarter tested a close approximation and another quarter tested an abstraction. One-third (33.3%) did not report the target activity, and so the relationship could not be determined (Fig. 9).
Only 3 of 21 studies used self-paced tasks. The remaining 18 studies used paced tasks, which are designed for laboratory evaluation but are not highly suitable for real-world usage.
FIGURE 9. Subject task distribution. Basic cognitive tasks: visual attention 17%, attempted movement 14%, imagined movement 14%, mental tasks 9%, auditory imagery 5%. Interaction tasks: single item selection 17%, positioning 14%, item selection 5%, complex task 5%, path following 0%.

Relation to Target Activity: The relation to the target activity is often uncertain (32.3%) because the target population was not reported. Only 6.45% of the studies used the stated target activity in their experiments, and 25.8% used a close approximation. Another 25.8% used an abstraction of the target environment, and 9.7% used a distant approximation. The large number of studies that do not test their target activity has negative effects on the external validity of BI studies, and is an issue that the BI community will need to address.

Reporting Practices
The review process also provided a sample of the reporting practices in the field. Figure 10 summarizes a study-completeness metric for all reviewed studies, taking into consideration only required attributes, as described in Section 3.2. Study completeness ranged from 72 to 93%. The fields that were reported less than 50% of the time often related to stating target populations, target activities, and target environments. The lack of target reporting precludes reviewers from determining how well results generalize. The least-reported apparatus attributes were On Mechanism (0%) and Idle Support (10.5%), but these were understandable because most researchers assume a manual on mechanism, given the level of technology, and Idle Support is a relatively new concept14 derived from the concept of User Control.16 Other omissions centered on subject reporting: selection criteria, recruitment and screening methods, usage history, and debriefing were all reported less than 50% of the time, with Method of Recruitment the lowest (5.9%). Other attributes, such as Objective(s) (under Synopsis, Study Description), were not omitted, but neither were they explicitly stated; they had to be discerned or implied from the text. Improving consistency in reporting these areas will aid interpretation, useful comparisons, and study repeatability.

FIGURE 10. Study completeness based only on required attributes (per-study completeness for all 21 reviewed studies, ranging from 73.5% to 93.8%).

Analysis Elements
Every one of the 21 studies in our test set included a quantified methods analysis. Only four of the papers (12.9%) included a qualitative analysis as well. Of all the studies, only one included any theoretical modeling. These observations further support the claim that BI research currently exists in a technological development phase, where the focus is on performance and accuracy. As the technology matures, the focus on subjective usability and other qualitative measures may become more important.
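The study-completeness metric can be sketched as the fraction of required template attributes for which a study reports a value. The attribute names below are illustrative placeholders, not the actual required-attribute list from the review template.

```python
# Illustrative subset of required attributes (the real template has many more).
REQUIRED = {"study_class", "target_population", "target_activity",
            "subject_task", "analysis_methods"}

def completeness(review: dict) -> float:
    """Percent of required attributes with a reported (non-empty) value."""
    reported = {name for name in REQUIRED
                if review.get(name) not in (None, "", "not reported")}
    return 100.0 * len(reported) / len(REQUIRED)

example = {"study_class": "usability", "subject_task": "positioning",
           "target_population": "not reported"}
print(round(completeness(example), 1))  # 40.0: 2 of 5 required attributes reported
```

Treating "not reported" the same as a missing entry matches the intent of the metric: an attribute counts only if the study actually states a value for it.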

CONCLUSIONS
We have introduced a new method for comparing studies of BI technology based on the theoretical models and taxonomy proposed by Mason, Moore, and Birch.15 The proposed method was shown to be an effective approach for interpreting and comparing the 21 BI studies in our test set. It allowed us to (1) identify the salient characteristics of each study, (2) identify what was reported and what was omitted, (3) facilitate a complete and objective comparison with other studies, and (4) identify trends, areas of inactivity, and reporting practices. Future studies will be required to confirm these findings on larger test sets.
Our demonstration presented samples of the types of analyses that might be performed using this method. Even though the test set was an approximation to the set of all BI studies, some interesting comments can be made about observed trends, areas of inactivity, and reporting practices. For example, the analysis revealed a profound lack of common experimental methods and measures. This situation precludes direct comparison of research findings. The diversity of methods and measures also inhibits study replication and external validation of results. Many of the results confirm that BI technology is nascent and in an early technological development stage. Target populations, environments, and activities are most often approximated rather than tested as stated. This is attributed partly to the youth of the field, but also to the difficulty of working with subjects with severe disabilities. Research teams often must transport large amounts of sensitive equipment and incur the time and expense of traveling to subjects' homes or hospital rooms. This issue is inherent in the BI field, but may be eased as BI recording devices become smaller, more robust, and more portable.
As a precursor to this work, we reviewed the literature and proposed attribute values for specific attributes. These attribute values themselves represent a major contribution to the field, providing labels with which to assign and compare study attributes. As general classifications of attribute descriptions, these values effectively extend Mason, Moore, and Birch's BI Study taxonomy with more detail. Of course, the proposed values are only an initial set that will hopefully evolve with use and community input.
As the first application of Mason, Moore, and Birch's formalisms, the successful application of the method is a further validation of the general applicability and usefulness of their theoretical framework. Future work may use this tool, or others like it, to undertake more comprehensive and detailed comparisons of BI technology. Our hope is that this work will illuminate the need for common methods and metrics, spawn more comprehensive analysis, and encourage others to contribute to the theoretical base of the BI field.

APPENDIX: TEST SET
The representative test set used for method demonstration is summarized in Table A1. For space efficiency, we use the following abbreviations: TNSRE, IEEE Transactions on Neural Systems and Rehabilitation Engineering, and TRE, IEEE Transactions on Rehabilitation Engineering.

TABLE A1. Representative test set used for method demonstration.

Identifier (first author)    Paper
Allison                      Ref. (1)
Bayliss                      Ref. (2)
Birch                        Ref. (3)
Blankertz                    Ref. (4)
Cincotti                     Ref. (5)
Curran                       Ref. (6)
DeLorme                      Ref. (7)
Donchin                      Ref. (8)
Garrett                      Ref. (10)
Gao, X                       Ref. (9)
Kennedy                      Ref. (11)
Kipke                        Ref. (12)
Levine                       Ref. (13)
McFarland                    Ref. (17)
Millan                       Ref. (18)
Neumann                      Ref. (19)
Obermaier                    Ref. (20)
Serruya                      Ref. (21)
Taylor                       Ref. (22)
Trejo                        Ref. (23)
Wessberg                     Ref. (25)

Note. Papers marked with an asterisk report results from multiple studies, although our results only consider the first study in each paper to reduce bias.

ACKNOWLEDGMENTS
This work was supported by the National Science Foundation's Universal Access program under NSF Project number 0118917, the Canadian Institutes of Health Research grant MOP-62711, and the Natural Sciences and Engineering Research Council of Canada grant 90278-02. We would like to thank Gordon Handford, Adriane Davis, Brendan Allison, Jaimie Borisoff, and Regi Bohringer for their insightful feedback during the preparation of the manuscript.


REFERENCES
1. Allison, B. Z., and J. A. Pineda. ERPs evoked by different matrix sizes: Implications for a brain-computer interface (BCI) system. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):110-113, 2003.
2. Bayliss, J. D. Use of the evoked potential P3 component for control in a virtual apartment. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):113-116, 2003.
3. Birch, G. E., Z. Bozorgzadeh, and S. G. Mason. Initial on-line evaluation of the LF-ASD brain-computer interface with able-bodied and spinal-cord subjects using imagined voluntary motor potentials. IEEE Trans. Neural Syst. Rehabil. Eng. 11(1), 2002.
4. Blankertz, B., G. Dornhege, C. Schafer, R. Krepki, J. Kolmorgen, K. R. Muller, V. Kunzmann, F. Losch, and G. Curio. Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2), 2003.
5. Cincotti, F., D. Mattia, C. Babiloni, F. Carducci, S. Salinari, L. Bianchi, M. G. Marciani, and F. Babiloni. The use of EEG modifications due to motor imagery for brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):131-133, 2003.
6. Curran, E., P. Sykacek, M. Stokes, S. J. Roberts, W. Penny, I. Johnsrude, and A. M. Owen. Cognitive tasks for driving a brain-computer interface system: A pilot study. IEEE Trans. Neural Syst. Rehabil. Eng. 12(1):48-54, 2004.
7. Delorme, A., and S. Makeig. EEG changes accompanying learned regulation of 12-Hz EEG activity. IEEE Trans. Rehabil. Eng., 2003, in press.
8. Donchin, E., K. M. Spencer, and R. Wijesinghe. The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Trans. Rehabil. Eng. 8(2):174-179, 2000.
9. Gao, X., D. Xu, M. Cheng, and S. Gao. A BCI-based environmental controller for the motion-disabled. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):137-140, 2003.
10. Garrett, D., D. A. Peterson, C. W. Anderson, and M. H. Thaut. Comparison of linear, nonlinear, and feature selection methods for EEG signal classification. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):141-144, 2003.
11. Kennedy, P. R., R. A. Bakay, M. M. Moore, K. Adams, and J. Goldwaithe. Direct control of a computer from the human central nervous system. IEEE Trans. Rehabil. Eng. 8(2):198-202, 2000.
12. Kipke, D. R., R. J. Vetter, J. C. Williams, and J. F. Hetke. Silicon-substrate intracortical microelectrode arrays for long-term recording of neuronal spike activity in cerebral cortex. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):151-155, 2003.
13. Levine, S. P., J. E. Huggins, S. L. Bement, R. K. Kushwaha, L. A. Schuh, M. M. Rohde, E. A. Passaro, D. A. Ross, K. V. Elisevich, and B. J. Smith. A direct brain interface based on event-related potentials. IEEE Trans. Rehabil. Eng. 8(2):180-185, 2000.
14. Mason, S. G., A. Bashashati, M. Fatourechi, and G. E. Birch. Conceptual models for brain-interface technology design. http://www.braininterface.org/published links/BI Design Framework/Brain Interface Design Framework-models-frame.htm, 2005.
15. Mason, S. G., M. M. Moore, and G. E. Birch. A general framework for brain interface evaluation. IEEE Trans. Biomed. Eng., 2004, submitted for publication.
16. Mason, S. G., and G. E. Birch. A general framework for brain-computer interface design. IEEE Trans. Neural Syst. Rehabil. Eng. 11(1):70-85, 2003.
17. McFarland, D. J., W. A. Sarnacki, and J. R. Wolpaw. Brain-computer interface (BCI) operation: Optimizing information transfer rates. Biol. Psychol. 63(3):237-251, 2003.
18. Millan, J. R., J. Mourino, M. Franze, F. Cincotti, M. Varsta, J. Heikkonen, and F. Babiloni. A local neural classifier for the recognition of EEG patterns associated to mental tasks. IEEE Trans. Neural Networks 13(3):678-686, 2003.
19. Neumann, N., T. Hinterberger, J. Kaiser, U. Leins, N. Birbaumer, and A. Kubler. Automatic processing of self-regulation of slow cortical potentials: Evidence from brain-computer communication in paralysed patients. Clin. Neurophysiol. 115(3):628-635, 2004.
20. Obermaier, B., G. R. Muller, and G. Pfurtscheller. Virtual keyboard controlled by spontaneous EEG activity. IEEE Trans. Neural Syst. Rehabil. Eng. 11(4):422-426, 2003.
21. Serruya, M., N. Hatsopoulos, M. Fellows, L. Paninski, and J. P. Donoghue. Robustness of neuroprosthetic decoding algorithms. Biol. Cybernet. 88(3):219-228, 2003.
22. Taylor, D. M., S. I. H. Tillery, and A. B. Schwartz. Information conveyed through brain-control: Cursor versus robot. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):195-199, 2003.
23. Trejo, L. J., K. R. Wheeler, C. C. Jorgensen, R. Rosipal, S. T. Clanton, B. Matthews, A. D. Hibbs, R. Matthews, and M. Krupka. Multimodal neuroelectric interface development. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):199-204, 2003.
24. Vaughan, T., W. J. Heetderks, L. J. Trejo, W. Z. Rymer, M. Wienrich, M. M. Moore, A. Kubler, B. H. Dobkin, N. Birbaumer, E. Donchin, E. W. Wolpaw, and J. R. Wolpaw. Brain-computer interface technology: A review of the second international meeting. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2):94-109, 2003.
25. Wessberg, J., C. R. Stambaugh, J. D. Kralik, P. D. Beck, M. Laubach, J. K. Chapin, J. Kim, S. J. Biggs, M. A. Srinivasan, and M. A. Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature 408(6810):361-365, 2000.
26. Wolpaw, J. R., N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan. Brain-computer interface technology: A review of the first international meeting. IEEE Trans. Rehabil. Eng. 8(2):164-173, 2000.
27. Wolpaw, J. R., D. McFarland, and G. Pfurtscheller. EEG-based communication: Improved accuracy by response verification. IEEE Trans. Rehabil. Eng. 6(3):326-333, 1998.