Meeting 2: 2019-March-20
- Ricardo

- Mar 23, 2019
- 2 min read
We had a great second meeting with stimulating presentations and discussion! You can find the entire video that was streamed here: https://youtu.be/LoNLRDuNaJg . The audio during the presentation portions was decent. The Q&A sections did not come out well, as the questions cannot be heard and the presenters did not repeat them. Something to improve!
The turnout (25-30) was slightly smaller than at the first meeting but was geographically more widespread. At the MNI we had around 20 participants, at the McGill University Research Centre for Studies in Aging (MCSA) 7 participants watched the live stream, and at the Jewish General another couple of participants watched the live stream. We were unsuccessful (classic!) in getting the video conference between the three sites to work, but the live stream was very useful for those unable to be physically present at the MNI. We may try the video conference again, try Skype/Zoom on a laptop, or simply run the live stream again.

We started the meeting by looking at a project that could potentially benefit from deep learning applications. The project was presented by Drs. Mathilde Chaineau and Rhalena Thomas. Their lab is working to develop treatments for patients suffering from amyotrophic lateral sclerosis (ALS). One ongoing project they presented aims to better understand the behavior of motor neurons. They showed a video of how motor neurons tend to form clusters, and they are looking to implement deep learning to track the neurons and characterize their size, location, shape, dendrites, etc.
"You will see motor neurons growing live, basically. ... They extend their axons, communicate with each other, retract, all the cell bodies come together in clusters." -Dr. Chaineau
The discussion was rich, with input from many participants, but it seems this project would need much more data to successfully train deep learning models. However, as suggested during the discussion, the lab could start with simpler, more traditional image processing or machine learning techniques that require less data.
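To give a concrete picture of what such a lighter-weight approach could look like (this is a minimal sketch, not anything the lab presented), the snippet below segments bright cell bodies in a single grayscale microscopy frame with scikit-image and measures basic per-cluster properties. The file name and the noise-size cutoff are hypothetical.

```python
# Minimal classical (non-deep-learning) sketch: threshold one grayscale
# microscopy frame, label connected components, and measure cluster shape.
from skimage import io, filters, measure

# Hypothetical file name; any single grayscale frame from the video works.
frame = io.imread("motor_neurons_frame.png", as_gray=True)

# Separate bright cell bodies from background with Otsu's threshold.
mask = frame > filters.threshold_otsu(frame)

# Label connected components and report size, location, and shape.
labels = measure.label(mask)
for region in measure.regionprops(labels):
    if region.area < 20:  # arbitrary cutoff to skip small noise blobs
        continue
    print(f"cluster at {region.centroid}: area={region.area}, "
          f"eccentricity={region.eccentricity:.2f}")
```

Tracking across frames could then be done by linking centroids between consecutive frames (e.g., nearest-neighbour matching) before bringing in any deep learning at all.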

Next we delved into the paper discussion, critically presenting the paper and looking for its strengths and weaknesses. The authors showed images to human subjects while recording MEG signals from the brain. In parallel, they "showed" the same images to a previously trained VGG-S (a convolutional neural network). They then correlated features extracted from the VGG-S with the dipole signals from different areas of the brain, and found that for some participants the signals correlated along the visual stream. After a long discussion we concluded that the authors made a lot of assumptions and that their conclusions overreached what the fairly thin results could support.
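For readers who want a concrete picture of the general idea (not the paper's exact pipeline), here is a minimal sketch: it summarizes each layer of a pretrained CNN with one scalar per image and Pearson-correlates those summaries with a per-image dipole response. torchvision's vgg16 stands in for the paper's VGG-S, and the images and MEG values are random placeholders.

```python
import numpy as np
import torch
from torchvision import models

# Pretrained vgg16 as a stand-in for the paper's VGG-S.
vgg = models.vgg16(weights="IMAGENET1K_V1").eval()

def layer_summaries(img):
    """Mean activation of each stage in vgg.features for one image tensor."""
    x = img.unsqueeze(0)  # add a batch dimension
    sums = []
    with torch.no_grad():
        for layer in vgg.features:
            x = layer(x)
            sums.append(x.mean().item())
    return np.array(sums)

# Hypothetical stand-ins for the real stimuli and recordings: random
# 224x224 RGB "images" and one random dipole response per image.
rng = np.random.default_rng(0)
images = [torch.rand(3, 224, 224) for _ in range(8)]
meg_dipole = rng.standard_normal(len(images))

# Correlate each layer's summary with the dipole response across images.
summaries = np.stack([layer_summaries(img) for img in images])
for i in range(summaries.shape[1]):
    r = np.corrcoef(summaries[:, i], meg_dipole)[0, 1]
    print(f"layer {i}: r = {r:+.2f}")
```

The paper's analysis was of course much richer (full feature vectors, many dipoles, per-subject fits), but the correlation step at its core reduces to something like this.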
Upcoming meetings
At the following meeting on April 25, Alexia Jolicoeur-Martineau, a PhD candidate at MILA, will present her exciting work on GANs: a meow generator https://ajolicoeur.wordpress.com/cats/ . I am excited to learn more.
Following our meeting in April, we will return to a journal article discussion at our May meeting. Please send us suggestions for papers you wish to discuss! We will compile a list and send it out for voting. Only the strong survive!