For many years, Intel has been developing AI technologies that can empower people with disabilities and improve accessibility in our modern world. Disabilities come in all forms, and as we develop technology and tools, the taboo associated with such disabilities lessens over time. Muteness, for example, gave rise to the rich sign languages used all over the world, and with the adoption of AI, those who struggle to speak have even more options for communicating. Discover how AI is breaking down barriers, enhancing mobility, and promoting inclusivity. Join the conversation with XIMERA LLC co-founder Jagadish Mahendran and AI Evangelist Lama Nachman as they dive into the many ways AI is making a meaningful difference in the lives of those with disabilities.
Learn more about how Intel is leading the charge in the AI Revolution at Intel.com/AIperformance
***The voice cloning feature was developed by Klassic Studios.
If you are hearing the sound of my voice, then you are not actually hearing my voice at all. What I mean is that the voice you are hearing is actually an assistive text-to-speech voice cloning tool that my company created, and it is completely powered by AI. There are many different types of AI tools to help people who are differently abled. For me, it has restored my voice, but there is so much more it can do. I'm excited to see how it grows.

Hey there, I'm Graeme Klass, and this is Technically Speaking, an Intel podcast. The show is dedicated to highlighting the way technology is revolutionizing the way we live, work, and move. In every episode, we'll connect with innovators in areas like artificial intelligence to better understand the human-centered technology they've developed.

As early as the discovery of fire and the invention of the wheel, technology has always been an innovation to improve people's lives. However, sometimes leaders in technology unintentionally exclude those who may deal with uncommon issues such as physical immobility, neurodivergence, visual impairments, or even old age. While governments usually put systems in place to acknowledge and care for these communities, it has been the role of technology to create the advancements necessary for those dealing with disabilities to thrive just as much as their abled counterparts. With the revolution in artificial intelligence, there are many new advancements providing accessibility to these communities in ways we never thought possible until now. And I have two experts with me who are leading the charge to a more accessible future.

Lama Nachman is a visionary leader at the intersection of technology and human experience. With a distinguished career spanning academia and industry, Lama has consistently pushed the boundaries of technology to enhance our daily lives and redefine the way we interact with computers. Her innovative work has not only advanced the field of AI but has also paved the way for more intuitive human-machine interfaces. As an Intel Fellow and director of its Human and AI Systems Research Lab, she leads the team defining and executing the research for contextually aware and personalized computing, developing the sensing systems, algorithms, and applications to make it all possible.
Welcome, Lama. Thank you, it's very nice to be here.
We're also joined by Jagadish Mahendran, a visionary entrepreneur and tech innovator who has made significant contributions to the fields of artificial intelligence, renewable energy, and sustainable development. With a relentless passion for using cutting-edge technology to address global challenges, Jagadish has emerged as a driving force in shaping a more sustainable and interconnected world. Most recently, he co-founded XIMERA LLC with his founding partners and a team of visually impaired volunteers. He uses AI to develop solutions and assistance for those dealing with sight loss and low vision. Welcome, Jagadish. Thank you very much, it's such an honor to be here. Okay, I'll just start with Lama. Lama, how did you get your start in tech and AI?
And I would say that I've been in love with tech probably since I was like two years old, you know. I've always been into kind of like the latest and the greatest technology growing up. But then after I graduated from UW Madison, I actually joined Intel out of college and then I worked there for a while. I went out and did a few startups and then came back to Intel specifically focused on that intersection of sensing and understanding the world through that to create really compelling technology. So that's been kind of like almost like a very long career progression that brought me back to what I was excited about.
Excellent, and then in terms of the AI component, how did you start to get involved in that?
Yeah, so early on when I went back to Intel, actually in two thousand and three, I started to look at, you know, how do we make sense out of the world around us? To be able to understand a lot of that sensor data that we were processing, whether it's vision or audio or text or whatever, right, that really required work in AI to actually make sense out of that data. So that's where it kind of started, around two thousand and four, and since then it's been looking at different ways of intersecting AI and HCI to actually bring about really compelling experiences for users and help them perform all sorts of things in their lives.
Excellent. And Jagadish, how did you get your start in technology?
I have a very different story here. I was not interested in technology at all. I actually wanted to become a doctor, but, you know, it's very competitive in India, so I didn't really get a good ranking and I couldn't join the college that I wanted to join. The second option was engineering, and I chose to do computer science. I think it turned out well for me; I enjoy artificial intelligence, and I think I'm a better engineer than I would have been a doctor.
Oh, that kind of makes sense with your, I guess, love of medicine and the type of projects that you've come up with. We'll talk about that in a little bit. But one thing I'll go back to Lama on, in terms of AI improving the human experience, is that you've mentioned HCI. First of all, can you define for the audience what HCI is? And also, how do you address solutions, and maybe you could educate me on what the difference is between accessibility and accommodation when you're designing a system.
Yeah, so, first of all, HCI is human-computer interaction. It's really trying to understand how people directly interact with technology. Sometimes that technology is something physical, like clicking something on a computer or whatever. But a lot of times, some of the work that we really focus on is embedding it into the environment so that it almost becomes invisible. And that's one of the most interesting things: to really architect for interactions with things that are invisible. Honestly, if you think about any technology that you're developing, you have to think about how you're making it accessible, how the interfaces are accessible, how different people with different types of disabilities and abilities can actually interact with your technology throughout the development cycle. In some sense, part of what I've really been focused on is creating technologies for people who are severely disabled, where you really need very different ways of interacting with the technology to enable that to happen. Really, that focus, specifically the work with ACAT and the work for Stephen Hawking, has really been about how do you get around these constraints to enable people to access the technologies just like all of us.
Lama mentions Intel's ACAT, which stands for Assistive Context-Aware Toolkit. This technology was key in enabling Stephen Hawking to continue to communicate and inspire people around the world. Listening to Lama speak about the human-computer interaction process, she sounds less like a tech person and more like an anthropologist. We often think of data and algorithms as being this cold and impersonal assessment of people, but Lama has such a passion for her programs, it makes me wonder just how impactful that passion is to the way AI tools interpret how to assist us. While she has spent so much time learning how to program and manage computers, it seems her real passion is in trying to understand humanity.
My passion has really been focused on how we bring more equity with technology. The work towards extreme disability specifically really came about from my interaction with Professor Hawking. Before that, a lot of the work that I had focused on in terms of accessibility was really bridging where people's needs were as they're doing different aspects of their lives. If you're driving, for example, how can the technology be contextually aware so it can help support you without assuming that you have all of your abilities there? But once I started working with Professor Stephen Hawking, it became very obvious to me that to bridge that extreme disability you really have to think very differently about how technology comes in. And that's what really got me excited about that work.
Okay. And in terms of the involvement you had with Professor Hawking's technology to help him interact with the world, what were the areas that you looked at?
The lab I lead is actually a multidisciplinary lab, right, so we bring social science, design, and AI together. So the first place you start is that we needed to understand how Stephen interacts with the world, what he is trying to accomplish, and where his bottlenecks are in terms of being able to do that with the existing technology that he was using. So there was a lot of observation to try to understand how we define the problem. And from there, for people who are not aware of this, Professor Hawking really didn't have the ability to speak, and he didn't really have the ability to move, so he couldn't really utilize many of the technologies that are available. He couldn't use, for example, ASR, where he could speak and the computer could be controlled by speech, nor could he type, because he had no control over his hands. So then we started to basically look at: if we really had a very, very tiny signal, and in this specific case for Professor Hawking it was actually his ability to move his cheek, can we get access to that one signal and then turn that into complete access to his whole machine? And then we went onto that path of essentially building a software platform and a sensing subsystem that allowed for that to happen. All he can do is confirm something with the movement of a cheek, and now he can type, he can email, he can surf the web, he can give lectures. He could do all of that.
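What Lama describes here, one reliable confirm signal driving an entire machine, is the general technique assistive-technology engineers call single-switch scanning: the interface cycles through options, and a single trigger selects whichever option is highlighted. The short Python sketch below illustrates only that general idea; it is not Intel's open-source ACAT code, and the console prompt standing in for the cheek sensor, along with the letter layout, is invented for illustration.

```python
# A toy sketch of single-switch row/column scanning, assuming the only input
# available is one confirm signal (faked here with a console prompt).
ROWS = ["abcde", "fghij", "klmno", "pqrst", "uvwxy", "z_<"]  # '_' = space, '<' = backspace

def confirm(prompt: str) -> bool:
    """Stand-in for the single switch (e.g., a cheek-movement sensor):
    typing 'y' means the user triggered it; anything else means keep scanning."""
    return input(f"{prompt} [y = select] ").strip().lower() == "y"

def scan_once() -> str | None:
    # First pass: highlight rows one at a time; a confirm picks the row.
    for row in ROWS:
        if confirm(f"row {row!r}?"):
            # Second pass: highlight characters in that row; a confirm picks one.
            for ch in row:
                if confirm(f"  char {ch!r}?"):
                    return ch
            return None  # row chosen but no character confirmed this pass
    return None

if __name__ == "__main__":
    text = ""
    print("Single-switch scanning demo (Ctrl-C to quit).")
    while True:
        ch = scan_once()
        if ch == "<":
            text = text[:-1]
        elif ch == "_":
            text += " "
        elif ch:
            text += ch
        print(f"typed so far: {text!r}")
```

In a real system the confirm signal would come from a sensor, the scanning would be paced automatically, and word prediction would be layered on top to cut down the number of selections needed.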
And what year was it that you were working on this?
We started our interaction with Stephen in twenty eleven, and it lasted throughout his life, until he passed away in twenty eighteen. After a couple of years we were able to put together a system that he could use, that he could switch to, and then over the years we just continued to enhance it and add more capabilities. We open-sourced it so that we could take it into the world.
And yeah, that was my next question. In terms of the technology that was developed, have you seen it applied more broadly to others?
Yeah. Initially we were hoping that we could find some technology out there that we could take and modify slightly so that he could use it. After being proven wrong, we went down this path to develop something from scratch. But from the get-go our goal was to develop it so that it could support a lot of different users and be a platform for developers to build on top of, because we realized that there was that gap in what existed out there in the open world. And Stephen was a huge contributor to this project, right? He was a designer, he was a validator, he gave a lot of his insights. Throughout all of this he was really focused on ensuring that it actually went to open source, because people reached out to him all the time; he was, you know, a known figure with that extreme disability, and everybody was asking him what technology was available for them to actually use. So he was really focused quite a bit on making that available to the world.
You can really sense how dedicated Lama is to helping those with disabilities communicate with others. However, talking is just one way we communicate, and moreover, there is a combination of ways that we engage and interact with our environments. Here to help people who may struggle with another sense is our other guest, Jagadish. Originally designing a backpack that uses AI to help guide the blind, his project expanded into really dissecting what it means to be visually impaired. Lama and Jagadish have different approaches to their missions, but their work complements each other so well. Jagadish, I'd like to bring you into the conversation now, in particular around the AI-powered backpack that has been developed by yourself and others. Can you tell me a little bit about, I guess, the genesis of that idea?
I've always wanted to do something using technology that can help society in one way or the other. And when I came to do my Masters in twenty thirteen, one of the first things that occurred to me was, you know, we should use AI and a bunch of sensors to help the visually impaired see the world, like how sighted people see. One of the primary visions that used to occur to me was of somebody standing in a public place like a bus stop; there should be a solution such that the person who is blind could go totally unnoticed. Around that time the technology was not as good as it is now. The real inspiration occurred to me when I met my friend. The day I met her, she had a black mark on her face, and I was like, you know, what happened to your face? She's visually impaired, and she said that as she was walking outside on the sidewalk, she ran into a tree branch, and that left a mark on her face. That was so ironic for me, because by then I was already a perception engineer, teaching robots to see things and, you know, do complex tasks. But at the same time, there are so many people who cannot see, right? That sort of sparked my desire to work on this project sooner rather than later. Around the same time, the OpenCV Spatial AI competition, sponsored by Intel, was going on, and I submitted this idea, and the project ended up winning the first prize. And this friend has been helping me throughout, on how to develop a system that is more user-friendly and actually solves important use cases. Through this competition we received a lot of attention, and this is when we started to think, you know, we should probably get incorporated and try to create a full-fledged open-source system, so that anybody in the world can use it and help in improving the lives of the visually impaired. Currently, we are supported by Intel's IRTI program, the Intel RISE Technology Initiative, and we are working in collaboration with Accenture. Through this partnership, we're able to gain a lot of support on both the technical and non-technical side. And soon, in a few months, we will be releasing our improved version of the system, which we call Phoenix.
Okay, excellent, looking forward to it. I've seen a little bit of a video of it, where you've got a backpack. Maybe you could just describe some of the main system elements.
Yeah. The physical system mainly consists of a backpack that has an Intel NUC with a couple of Neural Compute Sticks, and this is the compute resource. At the front we have a camera, an OAK-D camera, that is put in the front and connected to the system behind. Whatever data this sensor collects, we run AI processing on behind, using deep learning techniques, and the system infers useful information about the environment and updates the user: where the obstacles are, what the common objects seen in the scene are, what the moving objects are, what the traffic conditions are, and more similar features. For communicating, there is an audio interface through Bluetooth headphones, and we're also working on a haptic band to communicate the same sort of information in the form of vibrations, through tactile information.
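The pipeline Jagadish describes has a simple overall shape: camera frames go into an AI perception step, and the inferred information comes back out to the user as spoken or haptic cues. The Python sketch below shows only that shape; the detector stub, the example detection, and the speak and vibrate helpers are placeholders invented here, not the project's actual open-source code, which runs deep-learning models on the Intel hardware he mentions.

```python
# Illustrative sketch of the described pipeline: frames -> perception -> cues.
import cv2  # pip install opencv-python

def detect_scene(frame):
    """Placeholder perception step. A real system would run deep-learning
    models here (obstacles, common objects, moving objects, traffic state)."""
    return [{"label": "obstacle", "direction": "ahead", "distance_m": 1.5}]  # dummy result

def speak(message: str) -> None:
    print(f"[audio] {message}")            # stand-in for Bluetooth text-to-speech output

def vibrate(pattern: str) -> None:
    print(f"[haptic] pattern={pattern}")   # stand-in for the wrist-band motors

def run(prefer_haptics: bool = False) -> None:
    cap = cv2.VideoCapture(0)              # front-facing camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            for det in detect_scene(frame):
                if prefer_haptics:
                    vibrate(pattern=f"{det['direction']}-short")
                else:
                    speak(f"{det['label']} {det['direction']}, {det['distance_m']:.1f} meters")
    finally:
        cap.release()

if __name__ == "__main__":
    run(prefer_haptics=False)
```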
Lama, I was just wondering if you had any comments or thoughts on this AI backpack.
I mean, it's a fantastic idea, and, you know, if you think about what is actually now possible with perception and AI, it's just kind of the most natural thing to do, to empower users who are vision impaired with such a capability. I was actually also quite intrigued by the haptic aspect of what you mentioned. I think it's something that tends to be underutilized, but it's really kind of a natural thing for this type of application, especially if you're trying to guide somebody in a direction. So I was wondering, maybe you can say a few words about that? I was really intrigued by that.
Yeah. So the first prototype contained mainly the audio interface; all the information is actually shared via the wireless headphones, and not all users prefer that. The main reason is that visually impaired people rely on audio cues, and when they're wearing earphones, we are sort of blocking a lot of environmental cues. That is why we wanted to introduce another modality for the user interface, which is haptic bands. Basically, using a combination of motors and vibration patterns, we can communicate tons of information using just a few motors, even fewer than ten motors. The current prototype that we're working on is a pretty simple version. It can be put on the wrist, and it can communicate potentially hundreds of combinations of vibrations, and at some point we're really aiming for a setup where we can communicate pretty much everything the system sees through the haptic vibrations. If a user prefers to go completely, one hundred percent, with the haptic bands, that is something we are targeting. At the same time, some users might prefer, you know, "I want this sort of information to be communicated via audio and this sort of information with the haptics," so we're also working on a combination of the systems as well. But having haptic bands in a solution like this opens a different dimension for the users here, especially the visually impaired.
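To make the "hundreds of combinations from fewer than ten motors" point concrete: if each motor can either stay off or play one of a few pulse patterns, the number of distinct cues grows exponentially with the motor count. The sketch below works through that arithmetic and a tiny example vocabulary; the pattern names and cue table are invented for illustration and are not the team's actual encoding.

```python
# Why a handful of motors can carry a lot of information: each cue is a
# combination of which motors fire plus a pulse pattern per active motor.
NUM_MOTORS = 4
PATTERNS = ("short", "long", "double")  # pulse styles each motor could play

# Each motor is independently either off or playing one of the patterns.
possible_cues = (len(PATTERNS) + 1) ** NUM_MOTORS - 1   # exclude "all off"
print(f"{NUM_MOTORS} motors x {len(PATTERNS)} patterns -> {possible_cues} distinct cues")

# A tiny, invented cue vocabulary: cue name -> {motor index: pattern}.
CUES = {
    "obstacle ahead": {0: "double"},
    "turn left":      {1: "long"},
    "turn right":     {2: "long"},
    "curb or step":   {0: "short", 3: "short"},
}

def play(cue: str) -> None:
    """Stand-in for driving the wrist-band motors; here we just describe it."""
    for motor, pattern in CUES[cue].items():
        print(f"  motor {motor}: {pattern} pulse")

if __name__ == "__main__":
    for name in CUES:
        print(name)
        play(name)
```

With four motors and three patterns, that already gives 255 distinguishable cues, which is why a small wrist band can in principle cover a rich vocabulary.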
What Jagadish hints at with his explanation of haptic bands versus an audio interface is a very fascinating, multi-pronged approach to the solution. In technology terms, haptics is all about how your device interacts with you through touch. Think of the times when your phone vibrates in your pocket, or when you play a video game and the controller shakes in response to you taking damage from the end-game boss. Oftentimes, in developing solutions for disabilities, there is a one-size-fits-all approach that seeks to do an adequate job for the greatest number of people. This strategy fails to take into account the nuances of the human experience, in the same way that some people are audio learners while others are visual. When it comes to aiding someone with a disability, it is important to consider what methods complement their strengths and experiences. The beauty of how Jagadish seeks to develop this AI tool is that it is constantly studying and creating more specialized options for the users, from audio to haptics. It has the potential to grow in a number of ways to accommodate the visually impaired, in ways we never thought to supplement them, and maybe these developments will even have an impact on those with perfect sight. It leans into the human-computer interaction component that Lama mentioned earlier: the constant study and assessment of how people will actually use the tools.

You're listening to Technically Speaking, an Intel podcast. We'll be right back.

Welcome back to Technically Speaking, an Intel podcast. I'd just like to get more broadly into Intel and its AI efforts now. Jagadish, you have a partnership with Intel. You've mentioned before how you're working with Intel and how they've come to the party, so to speak. What's it like working with their team, in terms of the support and assistance they've given you?

It's fantastic. The amount of exposure and the support that we've received from Intel is really amazing. What we admire about Intel is how open they are in developing solutions for accessibility, and they have a dedicated team who is purely working on solutions like this. We also got the opportunity to look into the projects that Lama's team has been working on; they're simply superb. I think solutions like this are transformative, and they're going to change lives for people. In terms of the support that we've received, they have been helping us on many aspects, all the way from helping with setting up a process for training the models, you know, creating a platform for training models, to sharing connections, and also with funding. So a lot of the features that are going to come as part of Phoenix are coming out of the IRTI project, and the sort of feedback that goes into improving the solution is something that we don't get outside easily.
Yeah, and it's one of the things that I believe we're really all about at Intel, right? If you look at our mission, it's really to enrich the life of every person on the planet, and every person, right, not just abled people. So it's really wonderful that you're seeing that support and the diversity of the type of platforms and solutions that we have. I'm really just very heartened by what you said, Jagadish. One of the things that is maybe top of mind is a project called OmniBridge. OmniBridge is essentially software that is meant to bridge, again, the communication gap, but for people who are hearing impaired, so that, you know, essentially you're translating in and out of sign language. People can sign into their PC, and then the PC can actually translate that into language on the other end, and then vice versa, right? So what you're really enabling, again, is for people in their everyday life to actually be able to do that. And to be able to do that, you need a lot of the AI support and AI compute on these platforms. So, one of the reasons, again, what I was saying: when it's really available on these platforms, and at the lowest cost that you can actually bring it, you start to really democratize AI in ways that really improve people's lives.
Yeah, for me, one of the key things you've just said is democratizing technology, and I think that's the real power of it. Yes, we can have those really fancy solutions that Professor Hawking had, but for me, it's about trying to get that cost down so that it makes it so much easier for people to use.
So actually, just as a correction: Professor Hawking didn't have a fancy system. It was actually a PC with a very lightweight sensor. And in fact, a big part of what we've been really trying to do with BCI is also democratize that, because the problem with BCI is, if you want something with really high fidelity, you're paying on the order of fifteen to twenty thousand dollars for a headset. What we're really trying to do instead is use OpenBCI, so it's, you know, really low cost, but compensate for the fidelity constraints with a lot of machine learning.
Okay, great. And Jagadish, you did say it was relatively low cost. Is that one of the primary motivating factors for you? And how do you go about designing systems to try and get that cost down?
Absolutely, it's a major restricting factor. Just a bit of context: the unemployment rate in the visually impaired community is extremely high, I think it's more than sixty percent, so it's hard for them to afford any product that is expensive. And this is something that we want to change. One is by making it completely open source, so that anybody in the world, if they have the technical skills, can just assemble the system and get it; if not, we can help them assemble it. The complete solution is going to be open source. Two is building the product using hardware systems that are cheap and, at the same time, efficient, and that's where products like the Intel NUC stand apart, because it has very good capability for running a lot of models in parallel, and we also use accelerators like the Neural Compute Stick. Things like that help us in shrinking the form factor and also the cost quite a bit. At the same time, at the software design level, if you put in a modular design, then if somebody wants to use a cheaper sensor, they can plug in a different sensor and the rest of the robotics software stack will remain intact, as long as they take care of the sensor abstraction layer. The same thing goes for the haptic interface, probably the audio interface, and potentially the compute interface. So we want to modularize it as much as possible and shrink the cost as much as possible.
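Here is a minimal sketch of the kind of sensor abstraction layer Jagadish describes: concrete cameras, cheap or expensive, plug in behind one shared interface, so the downstream perception stack never changes when the sensor is swapped. The class and method names are invented for illustration and are not taken from the project's codebase.

```python
# Minimal sketch of a sensor abstraction layer: swap sensors, keep the stack.
from abc import ABC, abstractmethod
from typing import Optional
import numpy as np  # only used here to fabricate dummy frames

class DepthCameraInterface(ABC):
    @abstractmethod
    def read(self) -> Optional[np.ndarray]:
        """Return the next frame, or None when the sensor is done."""

class CheapWebcam(DepthCameraInterface):
    def read(self) -> Optional[np.ndarray]:
        return np.zeros((480, 640, 3), dtype=np.uint8)    # stand-in RGB frame

class DepthAICamera(DepthCameraInterface):
    def read(self) -> Optional[np.ndarray]:
        return np.zeros((800, 1280, 4), dtype=np.uint8)   # stand-in RGB-D frame

def perception_loop(sensor: DepthCameraInterface, max_frames: int = 3) -> None:
    """The downstream stack only sees the interface, never the concrete sensor."""
    for _ in range(max_frames):
        frame = sensor.read()
        if frame is None:
            break
        print(f"processing frame of shape {frame.shape}")

if __name__ == "__main__":
    perception_loop(CheapWebcam())    # swap in a cheaper sensor...
    perception_loop(DepthAICamera())  # ...without touching the loop
```

The same pattern would apply to the haptic and audio interfaces he mentions: each output device sits behind its own small interface so modules can be swapped independently.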
Jagadish mentions something I had never really considered, which is the difficulty in finding gainful employment for those with visual impairments. In the US and other developed nations, there are protocols to provide reasonable accommodations to workers with disabilities, but globally that has yet to become a common practice. With an AI tool such as Jagadish's being open source, it really helps move the needle in terms of what those with visual impairments can do for themselves. Lama also mentions BCI, or brain-computer interfaces. Most brain-computer interfaces use the electrical activity of the brain to directly interface with computers or machines. The best way to imagine BCI is the character Cyborg from the Teen Titans series, who developed superpowers from interfacing computing technology with his biological self. I'm wondering what accommodations are considered when the user has ADHD or some other form of cognitive processing disorder.
So, we've been looking specifically at utilizing BCI for communication for locked-in patients, right? And really, you don't want to use BCI for communication unless you have to, because, unless you actually have something implanted in your brain, if you're going outside of the skull, you have a very, very noisy signal. So in some sense you can think of it as a last resort. However, what you just mentioned is something very different, which is utilizing BCI as another sensing modality for all sorts of other inferences, not to communicate your intention, but to actually understand your state. And that is something that, yes, can be totally utilized for understanding, for example, things like emotional state and concentration and focus and all sorts of things like that. That can help in cases where you have people with autism, for example, and they're having a hard time expressing an emotional state as it's actually getting worse.
Right.
There has actually been quite a lot of interesting research out of Georgia Tech, for example, specifically looking at that as an interesting modality for these types of settings.
In terms of other individuals and organizations contributing, both of you mentioned the open source initiatives that Intel's pushing. If individuals and organizations want to be involved, what's the best way for them to get in and start contributing?
So basically, with ACAT, it's essentially an open source project, right, and it's open to developers. We have different people contributing all sorts of different things. For example, we've seen a lot of interest in having ACAT be available in different languages for people around the world, so we have a way for people to easily contribute to extend it to other languages, as an example, or to extend it to other sensing modalities and so on. So you can go through that project, submit what you want, and communicate with us as the people who are still overseeing the project. There are also specific groups that we work with, because we're trying to get access to users that we can test the technology with, so, for example, the MND or the ALS groups and things like that. Usually some of these groups have access to a lot of the different solutions and open source systems that exist out there. So that also is a way in, not necessarily just for ACAT, but more broadly.
We are seeing a very strong trend of a lot of projects being open sourced, and because of this trend, we're seeing a lot of powerful projects being democratized and reaching people much more easily than before. In fact, a lot of companies are actually following this model, starting to switch from a different model to an open-sourcing model, which is fantastic for the community and fantastic for the world. However, there are certain things that need to be considered when developing an open source solution. One of the most important is how an open source project is defined and how it can evolve by itself at some point. Initially there are going to be primary contributors, but at some point there are going to be a lot of people; you're going to get contributions from all over the world, and this can be both good and bad. If the response is very high, the initial contributors cannot handle it, right? It might end up pretty damaging. But at the same time, you need those responses. So it's important to strike that balance and also come up with how we address this as a process in general.
Right?
How can somebody contribute? How can somebody create a PR? It's going to be completely democratized, and there will be more reviewers distributed throughout the world.
One of the things that I'm really happy to see is the amount of contribution in open source on all sorts of AI capabilities and language models, which I think is enabling a lot of democratization of AI, specifically for all of these different usages. Because if you think about assistive computing, in many cases you're trying to compensate for some sort of a sense impairment, right? So if you're able to actually use AI to help extract that sense automatically from the world, having access to that democratization in AI models and algorithms is something that is really transformational for this space. I remember, for example, in the past, even getting access to something like ASR was really hard to do in the open source, at least at the level of quality that you would expect. But now, lately, because of that quick movement, you're seeing a lot of capability in the open source that actually rivals that of the really big companies, which I think is absolutely transformational.
Yeah, that's great. Now, a question for both of you; I'll start with Jagadish. We're seeing AI being used for accessibility efforts. Looking forward ten years, what's the number one area in which you would want AI to help in this industry?
I'd be really pleased to see a system that is really small, that somebody can put in, say, a pair of glasses or any form that goes totally unnoticed, and that provides all the capabilities of the human eye. I think that'll be fantastic, and the same goes for other forms of disabilities. That would be fantastic to see, and in a ten-year timeline, I think it might be possible.
My number one area that I want to see solved is not necessarily in assistive computing, but actually climate change. That's where I think we all need this, otherwise I'm not sure we're going to have a world in which to do anything else. And in the area of assistive computing, it's really what I was saying earlier: I envision being able to compensate for every single sense that a human is missing. And that, to Jagadish's point, is only going to be possible if it meets people where they are in the world, which means these systems have to be sustainable, they have to be extremely power efficient, and they need to be robust to everything they haven't seen in the world, which is really not necessarily where things are today. But, you know, given the rapid improvement, I would really hope that that's where we would be ten years from now.
Excellent. Okay, thank you very much. Thank you, thank you. I would like to thank my guests Jagadish Mahendran and Lama Nachman for joining me on this episode of Technically Speaking, an Intel podcast. I really enjoyed this conversation with Jagadish and Lama. I love being able to delve into the motivations, the why, but also the how. You heard from Jagadish the story of his visually impaired friend being struck by a tree branch, and how that was the seed for his idea for an AI assistant backpack. For me, this is true technological empowerment: the ability for individuals to use their skills and talent to make a difference, taking action rather than just talking about it. These are the true innovators. It was great to hear of Lama's work with Professor Stephen Hawking and the context-aware system her team developed. What was so pleasing to me was that it wasn't a Rolls-Royce design, but rather an elegant yet simple system of sensors connected to a PC to allow Professor Hawking to interact and communicate with others. Because of this relatively inexpensive solution, it can be used by a wider range of people. This is what democratization of technology does for the world. I hope that Lama and Jagadish's stories inspire you to take the leap and contribute to improving the lives of people, regardless of their background. Please join us on Tuesday, November fourteenth for the next episode of Technically Speaking, an Intel podcast.

Technically Speaking was produced by Ruby Studios from iHeartRadio in partnership with Intel, and hosted by me, Graeme Klass. Our executive producer is Molly Sosher, our EP of post-production is James Foster, and our supervising producer is Nikia Swinton. This episode was edited by Sierra Spreen and written and produced by Tyree Rush.