Tiago H. Falk

Tiago H. Falk is the recipient of the 2016 Young Investigator Award. He is an associate professor at the Institut national de la recherche scientifique (INRS-EMT) in Montreal, Quebec, where he directs the Multimedia/Multimodal Signal Analysis and Enhancement (MuSAE) Lab. His interdisciplinary work lies at the crossroads of biomedical engineering and multimedia communications and explores innovative ways of using one domain to advance the other.

Representative examples of the developed technologies include: human–machine interfaces that are aware of their user's mental/affective states; hearing aids that adapt to the surrounding noisy environment; video streaming services that are aware of the user's perceived quality and thus adapt to maximize enjoyment; assistive devices that allow severely impaired individuals to communicate with loved ones; smartphone applications that can remotely monitor a patient's depression state, help diagnose the risk of developing Alzheimer's disease, or monitor voice therapy outcomes for stroke survivors; and brain-controlled robots that help treat attention deficit hyperactivity disorder or post-traumatic stress disorder.

Falk's work has been documented in over 170 papers in top-tier journals, peer-reviewed conferences, and international book chapters, and has earned numerous awards, most recently the Early Career Achievement Award (2015) from the Canadian Medical and Biological Engineering Society and the Bell Outstanding Achievement Award (2015) for mental health technologies.

He has supervised over 50 graduate and undergraduate students, the majority of whom have received prestigious scholarships, Best Thesis Awards, and Best Student Paper Awards at renowned conferences.

Interview with Tiago Falk

Transcript of Interview

Heather Thorstensen: Hello and welcome to this interview with the winner of Sigma Xi, The Scientific Research Society's 2016 Young Investigator Award. My name is Heather Thorstensen and I am the manager of communications for Sigma Xi. The Young Investigator Award winner this year is Dr. Tiago Falk. He is an associate professor at the National Institute of Scientific Research in Montreal, Quebec. He is the director of the Multimedia/Multimodal Signal Analysis & Enhancement Lab, known as the MuSAE Lab. The Young Investigator Award recognizes excellent research from a Sigma Xi member who is within ten years of receiving his or her highest earned degree. Congratulations, Dr. Falk. He will give a lecture at the Sigma Xi Annual Meeting this November in Atlanta. More information about that meeting is available online on Sigma Xi's website at www.sigmaxi.org.

Since you are the Young Investigator Award winner, it's worth noting that you earned your Ph.D. in 2008 in electrical engineering from Queen's University in Canada. Why were you pursuing electrical engineering?

Tiago Falk: I've always been an engineer at heart, I guess, even from an early age. I did my undergraduate degree in Brazil in electrical engineering. I always wanted to go into academia. I was involved in research at an early age, at the Bachelor's level. As soon as I finished my undergraduate degree, I knew that going abroad for graduate studies was something that was needed to get a nice faculty position. That's what I ventured out into doing, and I couldn't see myself doing anything else other than engineering at the time.

Thorstensen: The lab that you're at today has a main purpose to investigate human and machine interactions. Could you talk about some of those human sides of what you're working on?

Falk: Sure. The idea of the lab is to bring together the human side and the machine side in human-machine interfaces. As I mentioned, my graduate studies were more on the multimedia aspect, and then I ventured off for a couple of years into a postdoc that was more on the biomedical side, more on the human side. After those two years, I had these two seemingly unrelated domains. Human-machine interfaces bring them back together. People typically just focus on the machine side when you talk about human-machine interfaces. The goal of the lab was to see, well, can we focus on the human aspect and bring the human back into human-machine interfaces?

An example of what we try to do is suppose you have a machine that is aware of its user's affective state. Is the person sad? Is the person happy? It can adjust itself to the person's mood. You might have an interface that is a hearing aid, and that hearing aid can adapt automatically to the hearing profile of the user. If the user is in a noisy setting, it can detect the environment and, based on that person's hearing profile, adjust its settings automatically so that it maximizes the intelligibility for that particular user.

It's basically trying to extract as much information as possible from the human, the user of the interface, so that the interface now has that knowledge. The interface now becomes adaptive; it becomes intelligent and it becomes optimal for that particular user. You can have an interface that comes out of a box off the shelf, and you put it on and it calibrates to that particular user. It's going to be optimal for you. If you give it to somebody else, it adapts to that particular user, so it's always going to be optimal in that sense.

Thorstensen: Now what about the machine side of that relationship?

Falk: On the machine side now, today you have speech recognition, for example. You have computer vision. There's a lot of interest now in brain–computer interfaces and human-machine interfaces where the human's body is actually controlling the technology. You can recognize gestures. You have the Kinect, which detects your gestures and makes decisions based on your gestures. We're trying to see, with the human now in the equation and with a machine that's adaptable, how can we make the machine as adaptable as possible without the human actually detecting that the machine is updating itself, so that it becomes transparent to the user.

A big limitation of existing machines today is when you put them within ... They work well if you're in a constrained environment in a laboratory setting where it's quiet, the lights are always on, the person is sitting still. They work perfectly. Once you go outside, the lights change, the noise starts playing a factor. Then the machines break down. Speech recognition doesn't work as well. Computer vision starts not working as well if there is a lot of movement. Gesture recognition becomes a problem. A big part of the lab also, which is in the title of the lab, is the signal analysis and enhancement ... A big part of our research within the machines is how do you enhance the signals that are being input to the machines so that the machine can now perform well in a noisy environment, in any everyday environment, and the person is not constrained to a laboratory setting.

Thorstensen: Some of that, with enhancing the signal, is also taking away some of those noisy signals. Is that right?

Falk: Yes, and one thing that we also do a lot here is, typically, people filter the noise out. They say, okay, we're going to remove the noise from the speech signal so that speech recognition becomes nicer. In our case, we try to separate the source from the noise. Instead of just throwing away the noise, we actually use the noise and see if there's useful information there. That's where some of this signal analysis aspect comes in. What can you infer from the noise? In speech, for example, if you can separate speech from the noise, from the noise we're able to infer what type of noise environment you're in. Are you in a large room versus a small room? How many people are around you? Is it lots of people talking? Is it just one or two? What, perhaps, is the age of the people around you? Instead of throwing away the noise, you can also look for patterns within the noise that could be useful.

If now you're a machine and your smartphone is aware that you have now left your office and you're in the cafeteria and it's noisy, it can adapt itself so that the enhancement part kicks in or it boosts the volume up so that you can hear better. If it detects that you have gone from an indoor to an outdoor setting, it will detect the light changes. We don't just throw away the noise. We try to mine as much information from the noise as possible, as well.

Thorstensen: When you're talking about signals that you're receiving, it sounds like you're talking about signals from the person's environment but it could also be some bio signals from the person's body. Is that correct?

Falk: Yes. Typically, the modalities that we work with could be vision, so these would be videos captured from cameras in the smartphone or computer; voice or audio captured by microphones; or physiological signals measured by bio-monitoring devices. These are typically heart rate, respiration, and brain signals, so typically the electroencephalogram (EEG). Those are the most common modalities that we investigate.

Thorstensen: How do the human and the machine sides come together in these interfaces and lead to applications? What kinds of applications are you working on?

Falk: Applications can be very wide. They can be applications specifically in the multimedia space. They can be applications in health care spaces. Here in the lab, we have a number of different applications we've investigated. One example is how can you take advantage of speech coding techniques and use that to develop technologies for individuals who have severe disabilities and can't control their voices. They could be paralyzed. They typically have very severe disabilities. They're not able to type; they can't speak. They're basically locked within their bodies.

Can you use these devices? Can you use cameras to try to infer what the person is trying to say? Maybe a mouth opening becomes their control signal, if you can detect from a camera that they are opening their mouths. If they're able to make sounds, not understandable sounds, but they can make some grunts or they could hum, could you transform that into a control signal? We've used speech coding technologies to actually measure hums and different properties of the hums.

In the video, we can now drive a wheelchair based on humming. If the pitch of your hum goes up, the wheelchair goes forward. If it goes down, the wheelchair goes backward. If you keep a low pitch for a persistent amount of time, the wheelchair turns left. If you keep it high, the wheelchair turns right. Now you're using multimedia technologies together with a biosignal sensor that's measuring vibrations in the vocal cords, and you put it in an application, which is driving the wheelchair. It gives a person who was in bed, completely dependent on their caregivers, some sort of freedom to move about. We've used this technology to turn lights on and off and to turn the volume on the TV up and down, so it gives them an independence. That's one application.
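
As a rough illustration of the control scheme Falk describes, the sketch below maps frame-by-frame hum pitch estimates to wheelchair commands. The pitch threshold, frame count, and class name are assumptions for illustration only; this is not his actual implementation.

```python
# Minimal sketch of the hum-to-wheelchair mapping described above.
# All thresholds and names are illustrative assumptions, not Falk's system.
from collections import deque

HIGH_PITCH_HZ = 220.0   # assumed boundary between "low" and "high" hums
HOLD_FRAMES = 30        # assumed number of frames that counts as "persistent"

class HumController:
    def __init__(self):
        self.history = deque(maxlen=HOLD_FRAMES)

    def update(self, pitch_hz, prev_pitch_hz):
        """Map one frame of pitch-tracker output to a wheelchair command."""
        self.history.append(pitch_hz)
        full = len(self.history) == self.history.maxlen
        if full and all(p >= HIGH_PITCH_HZ for p in self.history):
            return "TURN_RIGHT"      # sustained high pitch
        if full and all(p < HIGH_PITCH_HZ for p in self.history):
            return "TURN_LEFT"       # sustained low pitch
        if pitch_hz > prev_pitch_hz:
            return "FORWARD"         # rising pitch
        if pitch_hz < prev_pitch_hz:
            return "BACKWARD"        # falling pitch
        return "STOP"

# Example: a rising hum followed by a sustained high hum.
ctrl = HumController()
pitches = [180, 190, 205] + [240] * HOLD_FRAMES
prev = pitches[0]
for p in pitches:
    command = ctrl.update(p, prev)
    prev = p
print(command)  # -> TURN_RIGHT
```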

Another thing that we're working on now is, can you bring video games and virtual reality into the equation, and develop video games that specifically target parts of the brain that we know are affected by Alzheimer's disease? We're looking at developing virtual reality games combined with brain monitoring and seeing if we can use that to help diagnose Alzheimer's disease. Today, a lot of the tools that are used are very expensive neuroimaging technologies, such as MRI, PET scans, and things like that. We're seeing, can we bring that down to a device that costs maybe a couple of hundred dollars, together with a virtual reality headset, and maybe bring the cost down to something less than a thousand dollars that could be used in low- and middle-income countries and things like that? Those are two of the more health care applications.

If you were to talk more about the multimedia side, suppose you're a video streaming service and you're streaming videos to users. If you know the user's preferences, if you know how engaged the user is, if you know how emotionally attached they are to a specific movie or game that's going on, you can maximize the way you transfer your bits to that person so that you maximize their experience. One example: suppose you're on a fixed bit rate plan with your mobile service provider. You can only download so much per day or, let's say, per month. Typically, you try to reduce that bit rate if you're watching a movie. Now if you know that person loves action movies, and during the action scenes is when he becomes very engaged, and that's what drives that person to watch that kind of movie, you can lower the bit rate at other points in time and then you can maximize that experience. You can give them full-blown HD with surround sound whenever you are in the action scenes. Then you can cut back on your bit rate at other points during the movie. At the end, you still have the same overall bit rate, but you maximize that person's experience because, at the moments when he or she was really engaged in the movie, you gave them the maximum that you could give. You're somehow adapting your multimedia streaming service so that it's maximizing the person's experience or their perception of experience. That's another application where you can bring the two worlds, the human and the machine aspect, together to take forward the multimedia side.
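
As a simple numerical illustration of the fixed-budget idea, the sketch below splits a data budget across scenes in proportion to an engagement score. The scene list, engagement values, and budget are made-up numbers, not the actual streaming algorithm.

```python
# Minimal sketch of engagement-aware bit allocation under a fixed data budget.
# Scene durations, engagement scores, and the budget are illustrative numbers.

def allocate_bitrate(scenes, total_budget_mb):
    """Split a fixed data budget across scenes in proportion to engagement."""
    weight_sum = sum(s["duration_s"] * s["engagement"] for s in scenes)
    plan = []
    for s in scenes:
        share = s["duration_s"] * s["engagement"] / weight_sum
        mbit_per_s = share * total_budget_mb * 8 / s["duration_s"]  # MB -> Mbit/s
        plan.append((s["name"], round(mbit_per_s, 2)))
    return plan

scenes = [
    {"name": "opening credits", "duration_s": 120, "engagement": 0.2},
    {"name": "dialogue",        "duration_s": 600, "engagement": 0.5},
    {"name": "action sequence", "duration_s": 300, "engagement": 1.0},
]
# Same overall budget, but the action sequence gets the highest bit rate.
print(allocate_bitrate(scenes, total_budget_mb=900))
```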

Thorstensen: I want to go back a little bit and talk about what you mentioned about the tool that you developed to help people who couldn't talk because of their disabilities, so that they were able to communicate through the hums they were making. I saw that that was allowing people to communicate, possibly for the first time in their lives, and that you had accomplished that during your postdoc. At such an early point in your career, how was that, to make such a direct impact on the lives of people?

Falk: It was very rewarding and invigorating. My postdoc was at the University of Toronto at the Holland Bloorview Kids Rehabilitation Hospital. It was in a department aimed at developing technologies for kids who had disabilities for whom nothing on the market would work. Of all the assistive devices that were available on the market, they could not use any, for whatever number of reasons. These had to be customized to a particular child or to a particular youth. A lot of the kids that had come through had very limited control of their arms or their limbs, a lot of spastic movement in the head, but they still had their vocalizations somewhat intact. It wasn't that they could actually say words, but they could make sounds. What was unique about this is that the sounds and the hums cause a periodic vibration of the vocal cords. This is the premise of speech coding. My PhD and my early years had been on speech coding.

Once I arrived, people were struggling with this. Just by bringing this multidisciplinary view, this view of speech that nobody had in this biomedical department, we quickly jumped into that idea. Today, it's one of the technologies that ... I just spoke to Professor Tom Chau yesterday, who is the director of the lab. He said there are roughly about three dozen people now who are actively using this device. The first child that we tried it on apparently is so proficient today that he has programmed a device to play video games, so he can actually play Mortal Kombat video games just using hums. I haven't seen it, but I was told he plays just like any other kid plays with two hands on a controller. He does it with hums on the wheelchair. I'm looking forward to maybe seeing a video of that soon.

Thorstensen: That sounds really interesting. That would be interesting to see.

Falk: Bringing that capability gives somebody independence, and a lot of the kids have gone on to write their autobiographies. If you attach that to what's called an on-screen flashing keyboard ... Basically, it flashes the rows of the keyboard versus the columns, and once they perform a hum, it stops at that particular letter. That gives them also the ability to write, so they can write; they can have pre-programmed messages stored on their device. If they want to go to the bathroom, they can just play that message. They can pick what voice, so you can have a synthetic voice that reads the messages out loud, so they can select what voices they want. They can have character voices. It was very fun. It was a very interesting period in my career. I miss it a lot, so hopefully we'll get back to working with kids soon.
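
A minimal sketch of the row/column scanning idea Falk mentions: rows flash in turn, a hum freezes the row, then columns flash and a second hum picks the letter. The keyboard layout and the way hum events are represented here are assumptions for illustration.

```python
# Minimal sketch of a row/column scanning keyboard driven by hum events.
# The layout and event representation are illustrative assumptions.

LAYOUT = [
    list("ABCDEF"),
    list("GHIJKL"),
    list("MNOPQR"),
    list("STUVWX"),
    list("YZ_.,!"),
]

def select_letter(hum_events, layout=LAYOUT):
    """hum_events holds the scan indices at which a hum was detected:
    the first hum selects a row, the second selects a column."""
    events = iter(hum_events)
    row = next(events) % len(layout)          # freeze the flashing row
    col = next(events) % len(layout[row])     # then freeze the flashing column
    return layout[row][col]

# Example: the user hums on the third row flash, then on the fifth column flash.
print(select_letter([2, 4]))  # -> Q
```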

Thorstensen: I also saw that you have used this research for some mental health applications, too. I saw the one about remotely monitoring someone's state to see if they are seriously depressed. Could you explain how that one works?

Falk: That started as part of a challenge that's run every year by a multimedia organization. What they do is release audiovisual content from individuals who have been rated by psychologists as having different depression levels. The challenge is for participants to come and develop technologies that bring together the audio aspect and the vision aspect and try to predict what that person's particular mental state or depression level is.

We participated a couple of years ago in this challenge and we got some very interesting results. We've been trying to take this technology forward. We've been finding that the audio aspect contributed more than the vision part. We've been looking at what can we infer from the speech signal. What is the unique pattern that discriminates a depressed versus a non-depressed person? What is so unique about their vocal patterns and can we extract that information? Can we analyze their speech signal and extract that information?

That's particularly useful if you're, for example, running a suicide hotline where you have somebody calling in. You don't want somebody who's at high risk of committing suicide to be greeted with a "Please hold. We are receiving a high number of calls right now." If you're able, just from that initial call, that initial five seconds of the person saying something, to say, "Oh, this person is very high risk. Let's give him or her extreme priority," versus "Oh, this person is okay. He can hang on for a few seconds," it can allow for that kind of adaptation also within these hotlines.
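
As an illustration of that triage idea, the sketch below queues callers by a risk score so the highest-risk caller is answered first. The scoring values are hard-coded placeholders standing in for a speech-analysis model; this is not the actual system.

```python
# Minimal sketch of risk-based call triage. The risk scores here are assumed
# to come from a speech-analysis model; they are hard-coded for illustration.
import heapq

def triage(callers):
    """Yield caller IDs from highest to lowest estimated risk."""
    queue = []
    for order, caller in enumerate(callers):
        # heapq is a min-heap, so negate the risk to pop the highest risk first
        heapq.heappush(queue, (-caller["risk"], order, caller["id"]))
    while queue:
        neg_risk, _, caller_id = heapq.heappop(queue)
        yield caller_id, -neg_risk

callers = [
    {"id": "caller-1", "risk": 0.35},
    {"id": "caller-2", "risk": 0.92},  # flagged as very high risk
    {"id": "caller-3", "risk": 0.10},
]
print(list(triage(callers)))  # caller-2 is answered first
```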

Thorstensen: Also, could you talk more about your Alzheimer's disease diagnosis and how that works?

Falk: We started with looking at, again, different ways of analyzing signals. Our expertise here is in signal processing and by signal processing, I mean can you clean up the signals? Can you extract information from the signals? What are new ways of looking at the signals? Can you take it into a different dimension where, in that dimension, you can easily see what is signal and what is noise? Can you infer specific details about the signals? That's where a lot of our research comes in as to how do you play with the signals; how do you extract this information from the signals.

We had to come up with a new way of analyzing EEG signals. We showed that those patterns were very discriminative of Alzheimer's patients versus typical healthy elderly subjects. We got to a point where we could discriminate, I would say, mild to moderate levels of Alzheimer's disease from healthy subjects with more than 90% accuracy. Once you're at that level of the disease, you don't need a machine to tell you this person has Alzheimer's disease. Any doctor can look and say, "Yes, you have Alzheimer's disease." What we're interested in is developing machines that can help clinicians detect risks of the disease or detect signs of the disease much earlier in the process. Even before you have symptoms, are you able to detect these things?

Once we started looking at individuals who were at high risk of developing Alzheimer's due to risk factors, perhaps family history or genetics, the classifiers weren't so great anymore. The reason was that we were just looking at the individuals at rest. Basically, we were measuring their signals when they were sitting down with their eyes closed, at rest, for a couple of minutes. We came to the conclusion that just resting wasn't enough. We needed to get them to do some sort of activity that would elicit neural patterns that would help us discriminate whether an individual had a high risk of developing Alzheimer's or not.

This is something that we still have under study. We just did a quick pilot that showed some interesting results. One of the first things that dies in Alzheimer's disease is the hippocampus, which is partially related to how you find yourself in space. As you move around in space, you have to go from Point A to Point B. How do you go from Point A to Point B? Typically, that's one of the first symptoms of Alzheimer's disease, when you start losing that way-finding skill. That's why you see a lot of Alzheimer's patients getting lost easily: they lose that way-finding skill. Now we're investigating, if we have individuals within the virtual environment trying to go from Point A to Point B and we monitor their neural signatures as they do that, are we able to bring this performance higher? Previously, with only the resting state, we were getting about 72% accuracy in discriminating that an individual was going to eventually develop Alzheimer's. We're hoping to bring that up closer to the 90% that we had originally envisioned.

This is still work in progress, but the idea there is to try to bring in some of the knowledge that we have about Alzheimer's disease: what the disorder actually is, what the disease causes in the brain and what part of the brain is affected, what tasks we can do to elicit that part of the brain to become active, and how we can measure some of that information.

Thorstensen: You've been noted by the nominators who put your name up for this award as successfully leading interdisciplinary research teams. Do you have any advice about what it takes to successfully lead an interdisciplinary research team?

Falk: You have to be very open-minded; that, I think, is the main thing. Different people will come with very different ideas, and if you're not open to those new ideas, I think you'll never get anywhere. I think I learned a lot of that from my postdoc. Most of my life had been on multimedia, speech coding, speech processing, and I arrived at my postdoc to do brain-computer interfaces, having never seen a brain signal in my life. The first thing I realized when I arrived was that biomedical signal processing eight years ago was using tools that, in speech processing, were being used fifty years ago. I was able to look at the fifty years of speech processing and how it developed over those fifty years. The newer tools that were being used, I started playing around with, seeing if they could be applied to biomedical signals. In many, many cases they gave results much better than what people were using back then.

If you're able to bring that knowledge from a different domain and somehow apply it to this new domain, you might be surprised as to what might come out. Being open to receiving these different people with different backgrounds and different ideas is, I think, a big part. I make an effort to do that. I typically don't hire students or postdocs or researchers who have the same skill sets that I do. That would be somewhat redundant. I try to bring in people with very different skill sets, very different backgrounds. That makes it complementary. They will be bringing new ideas that I won't be able to come up with, or that students in the lab won't be able to come up with. I think that's key, to be open to that. It's a risk. The person might come and no ideas might fly. Then again, if you don't take the risk, you can't reap the reward.

Thorstensen: What type of backgrounds do people in your lab have?

Falk: We have physicists; we have computer scientists; we have engineers: biomedical engineers, electrical engineers. Every now and then we get somebody from the arts. They will come in to help with user design and to develop some of the interfaces for the games. Again, from computer science to engineering to physics. My lab technician is in astrophysics; he's doing his PhD in astrophysics.

Thorstensen: Out of all the areas where you work with human-machine interfaces, do you have a favorite application that you've worked on?

Falk: That's a good question. I don't want to pick any favorites, or students will get upset.

Thorstensen: Okay.

Falk: We try to work very closely with industry as well. I find it very disappointing when you have a technology that has been developed and has great potential, but it just sits on a shelf somewhere and doesn't get used. A lot of our tools are developed in combination with industry. They fill a gap that's there. A lot of the stuff we build has a practical application. It starts off already with the mentality that something will come out of it. Individuals with disabilities will benefit; individuals with mental health conditions will benefit. There's always that end goal that we have. There's always that challenge that we have to meet, or those expectations or outcomes that have to be achieved.

I would say, personally, for me, the Hummer would be something that would be very high up there, because I've seen so many kids who were touched by that. It brought new dynamics to the family. Now parents can talk to their kids in many cases. There was the case of an individual who was eighteen. He had never spoken. The first time we tried it out and asked him to type something, the first thing he typed was, "Mom, I love you." That was the first time in eighteen years that that child's mom could hear him say, "Mom, I love you." Something like that is priceless. I think that would be a great example of something that really touches you.

Thorstensen: That is another thing that one of your nominators brought up, that you are able to find solutions to problems rather than coming up with solutions and then looking for a problem to apply them to. Do you think that is because of the way that you work with industry, or is that just the way that you've always thought about your research?

Falk: I think the way that I work with industry is more of a consequence. I think it's the way I approach research more in being open and thinking outside the box. Why throw this information away if we can use it for something? Instead of just filtering out the noise, let's gather as much information from that as we can so let's explore, let's analyze, let's see what's there, let's play around.

Thorstensen: Where do you think your lab is headed in the future?

Falk: To great places, I hope. We're at the point now where a lot of the students are graduating, so we're bringing a bunch of new students in. I would say that, of the maybe fifteen people, just a couple will remain this year. Most of them will be graduating. Most have already found jobs and have moved on. This year will be a year where a lot of new blood will come in. A lot of new ideas will come in, and new backgrounds. It's hard to say where we're going to be in the next couple of years, given that we're at this critical point now where there's so much new blood coming in that it's hard to predict where we're going.

We have several projects in the pipeline. We have several big projects starting this year. Monitoring, for example, the stress level of police officers and first responders when they are out in the field running around and noise is a big factor. How can you monitor mental states within these realistic environments? That's one project that we have up and running, and several others, looking at the interfaces and mining the noise so that you get information about the environment, and how to make the machines intelligent. We're going to be putting a lot of effort into those arenas once the new students arrive.

Thorstensen: Great. Okay. Well, for anybody who would like to see Dr. Falk speak, he will be at the Sigma Xi Annual Meeting in November. Thank you very much, Dr. Falk, for speaking with me today.

Falk: Thank you.
