It has been a while since my last blog entry here. A lot has happened since then: our project has been mentored by great programs such as the Erasmus Center for Entrepreneurship and the ENDuRE Project at the University of Pisa. The project of integrating music into clinical use has now moved beyond having a prototype ready for testing: we are also preparing for a possible mass market, with a far more accessible and usable device than ever before.

As a neuro nerd, this journey has been mesmerizing. It started with my true nerd nature: wanting to understand the mechanics of piano playing, and how it could be facilitated by comprehending brain function, through free online courses in my spare time. Fast forward: after several online courses and self-study through numerous (200+) published research articles on cognition, visual recognition, learning, and basal ganglia/cerebellar function, I have created a hypothetical neural model of our product, a hypothesis for its efficacy (validation testing pending), and our prototype. With the increasingly visible integration of machine learning/data science with health applications, the use of reliable, quantifiable biofeedback/biometric data to detect certain illnesses is imminent, and I believe it will change how we view medicine drastically. Having recently lost a close relative to Parkinson's disease, it is also my personal endeavor to witness some measure of prevention or cure for illnesses that are currently largely not understood. My motto is: "if no one is doing it (or no one is achieving the desired results), at least examine it and see if you can do it yourself." At this point, we seek researchers who would like to collaborate with us in testing our hypothesis and the prototype. We are also seeking funds to make this testing happen.

As for the use of musical notation as visual input, it makes great sense from a neurological point of view. A quarter note in particular (a filled, roughly circular notehead sitting either on a line or in a space of the musical staff) creates a clear black-and-white contrast and induces recognition of its "appearance" (where the note is located on the staff, which translates to a location on the keyboard, i.e. its pitch) more strongly than the letters or other signs often used in cognitive testing. Phototransduction and visual recognition are the basis for this. In addition, music notation has a lot to do with location in piano playing; in other words, it also helps monitor memory function associated with spatial understanding. Memory depends heavily on the hippocampus, which also processes location. In music, you need to comprehend the visual input (the score) in order to produce the actual action, which is enabled by mapping your movements onto specific locations on the piano keyboard. This makes it well suited to monitoring cognitive processing capability: encoding visual input into motor output. For this reason, we feature pitch notation in our product.
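The staff-position-to-pitch mapping described above can be sketched in a few lines of code. This is only an illustration under simplifying assumptions (treble clef, no accidentals, no ledger-line limits); the function and constants are mine for this blog post, not part of our product.

```python
# Sketch: a quarter note's vertical position on the treble staff maps
# directly to a pitch, and hence to a key location on the piano.
# Positions are counted in diatonic steps from the bottom line (E4 = 0).

DIATONIC = ["C", "D", "E", "F", "G", "A", "B"]
DIATONIC_TO_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def treble_position_to_pitch(position: int) -> tuple:
    """Map a staff position (0 = bottom line E4, 1 = first space F4, ...)
    to a note name with octave and its MIDI note number."""
    # E4 sits 2 diatonic steps above C4 (C, D, E).
    diatonic_index = 2 + position
    octave = 4 + diatonic_index // 7
    letter = DIATONIC[diatonic_index % 7]
    # MIDI convention: C4 (middle C) = 60, so C of octave n = 12 * (n + 1).
    midi = 12 * (octave + 1) + DIATONIC_TO_SEMITONE[letter]
    return (f"{letter}{octave}", midi)

print(treble_position_to_pitch(0))  # ('E4', 64) - bottom line of the staff
print(treble_position_to_pitch(8))  # ('F5', 77) - top line of the staff
```

The point of the sketch is only that the mapping is purely spatial: one vertical step on the staff is one step across the keys, which is why the notehead's location alone carries the information.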

Among my past and current students, most errors happen when there is a glitch in relaying the understood information (the music; spatial information) rather than in movement control itself. I believe automatic movement sits higher in the hierarchy, due to our ingrained instinct to avoid danger, than visual comprehension and the motor movements led by it. Therefore students, between beginner and intermediate level, often fight within their minds to let visual comprehension win over automatic motor movements. Amazingly, the solution can be a sense of tempo and pulse: the rhythmic side of music processing. There are certain "magic numbers", ideal tempi for visual processing and motor control, with slight differences among personality types. I have tested this on students with attention issues, including severe ADHD, and it helped them engage in their tasks significantly more than without such a tempo (a metronome was used to create the environment). This part of our knowledge and experience has not yet been examined for possible clinical use (for educational purposes, I estimate the metronome tempo for each student who wants one). My team and I would like to see if we can soon build an interesting yet reliable and usable product around this aspect.
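The specific tempo values are not something I am publishing here, but the mechanics of a metronome-paced environment are simple: the inter-beat interval follows directly from the tempo, and a software metronome just has to hold that interval without drifting. A minimal sketch (function names are mine, purely illustrative):

```python
import time

def beat_interval_seconds(bpm: float) -> float:
    """Inter-beat interval for a metronome tempo given in beats per minute."""
    return 60.0 / bpm

def metronome(bpm: float, beats: int, tick=lambda i: print(f"tick {i}")):
    """Emit `beats` ticks at the given tempo; `tick` is a placeholder
    for whatever cue (sound, flash) the environment uses."""
    interval = beat_interval_seconds(bpm)
    start = time.perf_counter()
    for i in range(beats):
        tick(i)
        # Schedule each beat against the start time, so small sleep
        # inaccuracies do not accumulate into drift.
        next_beat = start + (i + 1) * interval
        time.sleep(max(0.0, next_beat - time.perf_counter()))

print(beat_interval_seconds(60))   # 1.0 second per beat
print(beat_interval_seconds(120))  # 0.5
```

For example, `metronome(72, 8)` would give eight cues at 72 bpm. The interesting, untested question is which `bpm` values best stabilize a given student's visual-motor loop, not the metronome itself.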

At this point, my interest lies in how interchangeable or relatable our neural model can be with its computational equivalent. My hypothesis integrates visual recognition (with the building of a memory pool as part of it), decision making, and subsequent motor output as the outcome of comprehension. In piano sight-reading training (reading unfamiliar music and playing it on the spot), we need to recognize patterns and encode them in much faster succession, to enable reading and playing in a quasi-simultaneous manner. By contrast, ordinary visual recognition takes much longer; it is not fast enough to relay the encoded information so that action can be taken almost at the same time. If my hypothesis proves feasible, in theory we could drastically accelerate the visual recognition capability that leads to decision making. I wonder how much of a difference that would make in reality. That is a question to be answered once we can collect user data from validation testing.
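To make the computational framing concrete, here is a toy sketch of the hypothesized stages: recognition backed by a growing memory pool, a decision, then a motor action. Every name in it is illustrative; this is a thought experiment about the pipeline's shape, not our actual model.

```python
from dataclasses import dataclass, field

@dataclass
class SightReadingLoop:
    """Toy pipeline: visual recognition (with a memory pool), then
    decision making, then motor output. Illustrative only."""
    memory_pool: set = field(default_factory=set)

    def recognize(self, pattern: str) -> bool:
        """Recognition is 'fast' when the pattern is already in the memory
        pool; an unfamiliar pattern must first be encoded (the slow path)."""
        if pattern in self.memory_pool:
            return True
        self.memory_pool.add(pattern)  # encode for future exposures
        return False

    def decide(self, pattern: str, familiar: bool) -> str:
        # A familiar pattern maps straight to an action, approximating the
        # quasi-simultaneous reading-and-playing of a trained sight reader.
        return f"play:{pattern}" if familiar else f"decode-then-play:{pattern}"

    def step(self, pattern: str) -> str:
        return self.decide(pattern, self.recognize(pattern))

loop = SightReadingLoop()
print(loop.step("C-E-G"))  # first exposure: slow path
print(loop.step("C-E-G"))  # second exposure: fast path via the memory pool
```

The hypothesis, in these terms, is about shrinking the slow path: how quickly unfamiliar patterns can be pushed into the fast, memory-backed route so that recognition keeps pace with motor output.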


Author: Yayoi Sakaki

Pianist, organist, music director, and educator. Founder of Project Ipsilon, which uses simplified music notation as spatial-motor instruction to offer quantitative monitoring of cognitive control through biofeedback responses.