Meet Sinem, MiSynth’s lead researcher and a superstar in the field of neurotech, as she explains her background, the research process, and what she thinks the impact of BCIs will be fifty years from now!
Can you share some of your background in neuroscience and technology?
I (soon will) have a PhD in Neuroscience and a Master’s in Biomedical Engineering. I’ve been interested in neuroscience and technology since I was a teenager.
What sparked your interest in working in the field of neurotech?
I have to admit I am very new to the field of neurotech. The technology is advancing so fast now and becoming much cheaper and less invasive, which makes it far more accessible. People have even developed noninvasive BCI prosthetics and BCI VR games. It is a very exciting time in the field, I think.
What drew you to joining the MiSynth team?
I think the project is extremely interesting and has the potential to become a widely adopted application of BCI.
Pretend you are speaking to a second-grader. How would you describe the way MiSynth works?
Think of a sound. MiSynth tries to guess that sound by measuring activity from your brain. It has seen activity from brains thinking of sounds before, and it uses that to guess what sound you’re thinking of.
Can you explain a little bit about how the research process works?
We’re in the very early stages of our research right now. Let’s take timbre as an example. We have several hypotheses about how we could extract imagined-timbre information from EEG responses. We conduct experiments to test these hypotheses so we can identify the most relevant features of the signal. We will then train machine learning models to predict timbre from those features. We’re not sure yet what will work, so we’ll have to iterate on our models and collect more data.
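(For the technically curious reader: below is a minimal sketch of the kind of feature-to-label modeling Sinem describes, assuming EEG features have already been extracted per trial and paired with imagined-timbre labels. It is illustrative only, not MiSynth’s actual pipeline; the placeholder data, feature count, and choice of classifier are all assumptions.)

```python
# Illustrative sketch (not MiSynth's actual code): train a simple classifier
# to predict an imagined-timbre label from per-trial EEG features.
# Assumes features (e.g. band power per channel) were already extracted.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 200 trials x 64 EEG-derived features, 4 timbre classes
# (e.g. piano, violin, flute, guitar). Real data would come from experiments.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

# Standardize the features, then fit a linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy gives a first sense of whether the features carry
# any timbre information; on random data it should hover near chance (0.25).
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```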
What is the biggest challenge you are currently facing in the research, and how do you think you will be able to solve it?
Machine learning architectures are taking off, but it’s going to be a challenge to find the right architecture for our specific problem, and that is something we can only tackle through trial and error. Another challenge is the amount of data. Not much research has been done on imagined music, although a few datasets are publicly available. We are considering different options for solving this problem. One option is using platforms like Brains@Play, where anyone with a BCI can participate in our experiments.
What do you think is the biggest misconception that most people have about BCIs?
Well, it’s still a very niche market, so I’m not sure how many misconceptions are floating around. I know BCIs can bring up philosophical issues around the separation of mind and brain, but I think that’s a much longer and more nuanced discussion.
Where do you see the future of BCI technology five years from now? Fifty years?
I think five years from now, the trend will continue, with BCIs becoming more popular as more mobile and consumer devices become available. In fifty years, I think they may be an extension of our bodies that we use every day, much like smartphones.
Is there anything else you’d like to share with our readers?
MiSynth has the potential to democratize music production and to be an invaluable tool for musicians of all experience levels.