Interview with Professor Tolias

Interview with Professor Andreas Tolias of the Baylor College of Medicine by PhD student Marco Fantini.

Following the professor's neuroscience seminar titled "Structure, function and computations of neocortical microcircuits", held on the 19th of October 2016 in the Sala Stemmi of the Scuola Normale, Marco asked:


Regarding the anatomical wiring of the cortical circuit and the description of the several cell types you proposed and published in your article in Science (November 2015), there was recently a commentary by Barth et al. questioning the novelty and completeness of the classification. A few weeks ago you gave a rebuttal in which you address the critique and discuss the heterogeneity of the groups and some rare occurrences, such as particular kinds of cells that are found very rarely and cannot be classified. Given these facts, is a minimal set of circuit elements an achievable goal for brain modeling?

Given that it is incomplete? We're following a reductionist approach: we are trying to capture the most [important features]. We think the right approach is to start modeling in parallel with collecting more detailed data. Part of our approach, especially with the electron microscopy data, is to get a more complete circuit diagram, not just more of these rare cell types. In our opinion, and we said this in the rebuttal too, many of the rules we found are all-or-none: either you have a lot of connectivity or no connectivity. We think these rules are going to hold even as we map a more complete circuitry, and these are the rules we're trying to model right now. We're interested, from an engineering point of view, in building these circuits into machine learning algorithms that perform tasks, so we can test specific nonlinearities, how much they improve [the outcome of a task], and what kinds of tasks they can implement in machine learning.
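To make the all-or-none idea concrete, here is a minimal sketch in Python of how such rules could be imposed as a wiring mask on a model network. The cell-type names, counts, and rule table below are illustrative assumptions, not the published data:

```python
import numpy as np

# Hypothetical cell types and an all-or-none rule table:
# entry (i, j) = 1 means type i connects to type j with high probability,
# 0 means essentially no connectivity (the "all-or-none" pattern).
cell_types = ["pyramidal", "basket", "martinotti", "neurogliaform"]
rules = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])

def build_weights(neurons_per_type=50, seed=0):
    """Random synaptic weights masked by the cell-type rule table."""
    rng = np.random.default_rng(seed)
    n = neurons_per_type * len(cell_types)
    w = rng.normal(scale=0.1, size=(n, n))
    # Expand the 4x4 type-level rules to a neuron-level mask.
    mask = np.kron(rules, np.ones((neurons_per_type, neurons_per_type)))
    return w * mask  # connections exist only where the rule allows them

W = build_weights()
print(W.shape, f"{(W != 0).mean():.0%} of entries nonzero")
```

A mask of this kind could then constrain the weights of a trainable network, which is one way such circuit-level nonlinearities might be tested on machine learning tasks.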

Do you believe computer-brain research could have applications outside academia?

Yes. In the last two years the field of machine learning has almost been taken over by companies. In fact, many students decide after their PhD to go work for Google or Facebook instead of doing a postdoc at an academic institution. They get the chance to do the same type of basic research as at a university, but they also have the opportunity [to develop some] applications. The applications, as I was saying earlier [during the talk], are already happening. For example, Facebook and Google are using them in image search. To give an example: a few years ago, if you wanted to find a tiger in Google Images, the engine had to go through all the jpeg or tiff files people had uploaded and look for "tiger" in the name of the image. After 2012, working also with student interns, they implemented deep neural networks and convolutional neural networks that let you go somewhere, take a picture of a tiger, and find it [in the search engine without the need for an annotation]. There are a lot of these kinds of applications. The other thing is medical research. People and companies are interested in using these types of neural networks to explore and analyze complex data. Right now, one of the limitations is that you need labeled data: essentially you have a human labeling a lot of the data, [which you can then] use as training examples for a neural network to learn what to do with the data you want to annotate later (the testing data). So one of the things I didn't talk about, but that we're very interested in, is exploring and studying, in particular with electrophysiology, the plasticity rules, in order to see how we can train these networks with very little labeled data.
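As a rough illustration of the convolutional networks behind this kind of image search, here is a minimal sketch assuming PyTorch; the architecture, layer sizes, and class count are invented for the example and do not describe any particular production system:

```python
import torch
from torch import nn

# A toy convolutional classifier: the convolution layers learn visual
# features, so an image of a tiger can be recognized by its content
# rather than by an annotation in the filename.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one feature vector per image
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet()
fake_images = torch.randn(4, 3, 64, 64)  # a batch of unlabeled images
print(model(fake_images).argmax(dim=1))  # predicted class per image
```

In practice such a network is trained on large sets of human-labeled images, which is exactly the labeled-data bottleneck the professor mentions.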

Recently, systems and computational biologists have become very popular professional figures in academia, and often, for a project to progress or at reviewers' request, we have to take on the role of a computational scientist in some part of the work. Do you believe a biologist can acquire these skills and become a hybrid computational-life-science researcher, or do we need a dedicated laboratory figure who can specifically address these issues?

I think the right thing to do is to encourage people to take on these skills while they are at the undergraduate level. If one wants to become a biologist, one should also take courses in mathematics, statistics, probability, and computer science. Only then will a biologist have the skills to understand and analyze their own data. Of course there is always room for collaboration, and there will always be people who are more expert [in these subjects], like the people developing, rather than merely using, the statistical tools to analyze complex high-dimensional data, or people who use graphs or graph theory for very nonlinear problems. But I think it is important for the new generation of biologists to be trained to be very comfortable [with these skills], and not just with programming but also with math, statistics, linear algebra, and all the fundamental tools needed in the analysis of these complex data. The reason is that [the times are changing.] Traditionally, the data a biologist, a student, or a postdoc would collect in the lab [were basically] an experimental condition and controls: you plot them, look at some figures, and analyze the results. But now we get transcriptomics and the like. Given that the complexity of the data is increasing enormously, we need the technologies available to make it easier for a small lab to collect complex data. And it is not that there is some magic box you hand to statisticians to analyze them. You need both. You need the biological intuition to ask the right question and collect the right data, but you also need the skills to analyze them. That does not mean you need to be able to do everything by yourself, and you will always want to collaborate with people who are developing advanced tools, because there is a whole field of research devoted to developing mathematical tools for analyzing data. I think the educational system should change in that direction, and unfortunately I am not sure it is.

Nowadays computer science and biology are getting closer and closer. In your opinion, as of today, what are the limitations of computer simulations and what are the limitations of the traditional wet lab for brain research?

I think the limitation of computer simulations right now is that we don't have enough data to really explore a lot of these models. We want models that explain how the brain works, but we just don't have enough experimental data available to do a truly exploratory simulation. So we definitely need more data. That doesn't mean we need to wait for the data before we start building models; I think it is the other way around. These things must move in parallel. For example, and the first question was a good example, if we had to wait until every single cell type were known, the [achievement of a] full wiring diagram might not happen in our lifetime. Moreover, [just focusing on getting experimental data means] we would just be collecting data without guidance about what the underlying computations are. So the way I see it, it is like a loop: we start with some data and extract some rules, even if the picture is not complete, is missing some rare cell classes, and needs more data to reach very high significance. It is a very complex problem. Moreover, because the experiments are open-ended, you can spend hundreds of millions of dollars collecting data [without getting a true advancement of the model]. You need some kind of theoretical guidance and some understanding [before doing that]. I do think it is very dangerous to hear that these things should be one or the other. I think they are a tight loop, at least for the foreseeable future.


Do you think a whole-brain computer model will ever be achievable in the near future? Say, in 20 years?

In 20 years… it depends on what you mean. I think there is a difference between a simulation and a model. A simulation tries to faithfully replicate something, while a model can operate at different levels; it can model at the algorithmic level. It is possible that in 20 years we may not have the full brain, but the visual system could have fairly sophisticated models. They already are [sophisticated] in some tasks but, maybe, more generally, they will rival some of the perceptual capabilities of humans. It won't be a simulation, in the sense that we are not copying [how it works]; it is going to be a model. [The visual system model] may actually be part of these [full brain] models, but may not work in the way the brain implements them.