A variety of neuroimaging techniques are used to identify and diagnose neurological disorders such as epilepsy, sleep disorders, and stroke. Devices vary in cost, resolution, and accessibility: magnetic resonance imaging (MRI) machines are the most effective but also the most expensive and the most difficult to maintain, while electroencephalography (EEG) machines sit at the other end of the range as the most affordable and portable option, though usually at a lower resolution.
Typical EEG machines often lack the resolution for highly detailed brain scans because the number of sensing electrodes attached to a patient's scalp, usually 10 to 40, is too low. For this reason, EEG is often passed over in favor of more invasive, risky, and expensive imaging methods. But Pulkit Grover, an assistant professor of electrical and computer engineering (ECE) at CMU, still has faith in EEG. His group's research suggests that current theories severely underestimate EEG's capacity for spatial resolution. They postulate that high-density EEG, with 64 to 256 electrodes, produces clearer images by capturing more of the signal that passes through the skull. Even more effective is ultra-high-density (UHD) EEG, with up to 1,000 electrodes.
Of course, with more electrodes come new problems, such as figuring out how to attach all of them and how to get the most out of each one. Grover's lab includes a group of instrumentation engineers who explore these problems, led by post-doctoral researcher Ashwati Krishnan and doctoral candidate Ritesh Kumar in collaboration with Shawn Kelly, senior systems scientist in the Engineering Research Accelerator. The instrumentation team consists of academics from all levels of study. In their efforts to prove the usefulness of EEG for recording brain activity patterns, students are providing crucial insight into how to make EEG as accessible as possible for a variety of people.
EEG for virtual reality
Shi Johnson-Bey is a master's student in biomedical engineering whose research focuses on EEG sensing as an input method for virtual reality (VR) systems. This means that a user can interact with a virtual world without using hand-held controllers, an ideal arrangement for individuals who have limited mobility but intact cognitive function, such as patients with amyotrophic lateral sclerosis (ALS).
Johnson-Bey was a Journeyman Fellow for the U.S. Army Research Laboratory both in the summer of 2017 and again for his final year of study. In his summer experience, Johnson-Bey learned about various brain-computer interfaces and created a P300 speller application that allowed users to type text by using brain activity. He continued this research to develop a system where users can interact with objects in VR settings by using audio cues.
Within such a system, a user sees a virtual environment or layout, such as a menu of options. In addition to wearing a VR headset, which provides immersion through sight and sound, the user is fitted with EEG electrodes that connect to the VR system. As sounds play or icons flash next to menu items, the electrodes pick up the distinctive electrical response the brain produces when the user is paying attention to the option they want. By reading that signal, the VR system can select the item, essentially reading the user's mind and carrying out their intent.
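The selection loop described above can be sketched in a few lines. This is a simplified illustration, not the lab's actual system: each menu option is cued repeatedly, the EEG epoch following each cue is recorded, and the option whose averaged epochs show the strongest post-stimulus deflection (a crude stand-in for real P300 classification) is selected. All function names and numbers here are hypothetical.

```python
def average_epoch(epochs):
    """Average several single-trial epochs sample-by-sample to reduce noise."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def select_option(epochs_by_option):
    """Pick the option whose averaged epoch has the largest peak amplitude,
    a rough proxy for detecting an attention-related response like the P300."""
    best_option, best_peak = None, float("-inf")
    for option, epochs in epochs_by_option.items():
        peak = max(average_epoch(epochs))
        if peak > best_peak:
            best_option, best_peak = option, peak
    return best_option

# Toy data: the "lights" option shows a consistent positive deflection
# after its cue (the attended target); the others do not.
recordings = {
    "lights": [[0.1, 0.9, 0.4], [0.0, 1.1, 0.3], [0.2, 1.0, 0.5]],
    "tv":     [[0.1, 0.2, 0.1], [0.0, 0.3, 0.2], [0.1, 0.1, 0.0]],
    "door":   [[0.2, 0.1, 0.3], [0.1, 0.2, 0.1], [0.0, 0.2, 0.2]],
}
print(select_option(recordings))  # prints "lights"
```

Real systems replace the peak-amplitude rule with a trained classifier over filtered, baseline-corrected epochs, but the averaging-then-deciding structure is the same.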
This technology could open up the world of video games to those who previously did not have access, provide simulations for medical and emotional treatment, and lead to a better understanding of how neurological patients with limited communication abilities function. Johnson-Bey also gave the example of using this kind of technology as something of a universal remote.
“The brain is basically just a system that gives off electrical signals,” Johnson-Bey explained. “Sending those signals to a device could allow someone to turn on the television or change the lights just by thinking a certain way.” He added that this sort of technology could also have security advantages over some IoT devices, since the one-way communication path from brain to device would give hackers nothing to break into.
Braiding natural hair for better EEG recording
Another student in the instrumentation group is Arnelle Etienne, who received her bachelor's degree in a self-defined engineering major titled Technology and Humanistic Studies. She was surprised and excited to be involved in research at the undergraduate level.
“I was originally looking into research as a way to get experience to get an internship. I asked Professor Grover if I could join his research efforts, and he said yes!” Etienne shared. “I ended up finding the problem that I was working on because I wondered how someone with my hair type would experience getting an EEG recording done.”
Etienne, who has thick, natural hair styled in an afro, realized that if she needed EEG sensing in a pinch, medics would likely have to shave away her hair to attach enough electrodes to her scalp. Patients with thinner, straight hair can usually have it pulled aside with clips, but those with thicker, textured hair could lose sections of it in an emergency. To avoid sacrificing hair for safety, Etienne has been developing and documenting braiding styles that leave room for EEG electrodes on thicker heads of hair.
Both students graduated this May and hope to move into industry next. Johnson-Bey is interested in researching and developing intelligent agents for video games. Etienne is working with Grover and Kelly at their startup, Precision Neuroscopics, and is also considering founding a startup of her own to create proprietary materials that help practitioners use EEG on all styles of hair.
The work on EEG for coarse and curly hair was supported by the Chuck Noll Foundation for Brain Injury Research. Support from the Phil and Martha Dowd Fellowship, PITA, and a fellowship from the Center for Machine Learning and Health has also enabled this research direction.