Dr. Cal Roberts, President and CEO of Lighthouse Guild, the leading provider of exceptional services that inspire people who are visually impaired to attain their goals, interviews inventors, developers, and entrepreneurs who have innovative tech ideas and solutions to help improve the lives of people with vision loss.
This podcast is about big ideas on how technology is making life better for people with vision loss. When it comes to navigation technology for people who are blind or visually impaired, many apps use voice commands, loud tones or beeps, or haptic feedback. In an effort to create a more natural, seamless experience, the team at BenVision has created a different type of system that allows users to navigate using musical cues instead! For this episode, Dr. Cal spoke with BenVision's CEO and co-founder, Patrick Burton, along with its Technology Lead, Aaditya Vaze. They shared the inspiration behind BenVision, how the team creates immersive soundscapes that double as navigation aids, and the exciting future applications this technology could offer. The episode also features BenVision's other co-founder and Audio Director, Soobin Ha. Soobin described her creative process for designing BenVision's soundscapes, how she harnesses the power of AI, and her bold vision of what's to come. Lighthouse Guild volunteer Shanell Matos tested BenVision herself and shared her thoughts on the experience. As you'll hear, this technology is transformative!

The Big Takeaways:
Why Music? Navigation technology that uses voice, tone, or haptics can create an added distraction for some users. But the brain processes music differently. Instead of overloading the senses, for some users music works alongside them, allowing them to single out separate sound cues or take in the entire environment as a whole. Much as different instruments correspond to the various characters in "Peter and the Wolf," BenVision assigns unique musical cues to individual objects.
User Experience: Shanell Matos appreciated how BenVision blends in more subconsciously, allowing her to navigate a space without having to be as actively engaged with the process.
Additional Applications: BenVision began as an augmented reality program, and its creators see potential for it to grow beyond a navigational tool into something used by people who are visually impaired and fully sighted alike. For example, it could be used to create unique soundscapes for museums, theme parks, and more, augmenting the experience in exciting new ways.
The Role of AI: Artificial intelligence already plays a big role in how BenVision works, and its creators see it becoming even more important in the future. BenVision already harnesses AI for object detection, and its companion app uses AI to provide instant voice support about the immediate surroundings if needed. Moving forward, AI could be used to instantaneously generate new sound cues or to help users customize their experience at the press of a button.

Tweetables:
"We thought that if the human brain can learn echolocation and we have this amazing technology that's available to us in the modern day, then why can't we make echolocation a little bit more intuitive and perhaps a little bit more pleasant." — Patrick Burton, BenVision CEO & Co-Founder
"You can think of it like a bunch of virtual speakers placed at different locations around the user. So like a speaker on a door or a couch or a chair. And then there are sounds coming from all these virtual speakers at the same time." — Aaditya Vaze, BenVision Technology Lead (A toy sketch of this "virtual speakers" idea appears after these notes.)
"I want to gamify this idea so that the user can actually find some interest and joy by using it, rather than just find it only helpful, but also [to create] some pleasant feeling." — Soobin Ha, BenVision Audio Director & Co-Founder
"So if there's a lot of people, there's a lot of conversations happening, a lot of sounds happening, a lot of movement happening. It's really difficult to keep up with what everything is doing. Whereas with music, it's not as difficult to pick out layers." — Shanell Matos, Lighthouse Guild Volunteer

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
BenVision
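To make the "virtual speakers" idea concrete, here is a minimal sketch of how a spatialized audio cue can be approximated: the bearing of a tagged object sets the left/right balance, and its distance sets the loudness. The function name, pan law, and falloff are illustrative assumptions, not BenVision's actual rendering, which would involve full 3D audio.

```python
import math

def virtual_speaker_gains(listener_xy, heading, source_xy):
    """Approximate a 'virtual speaker' at source_xy as stereo gains:
    bearing sets left/right balance, distance sets loudness.
    (Hypothetical helper for illustration only.)"""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    # Bearing relative to where the listener faces; positive = to the left.
    bearing = math.atan2(dy, dx) - heading
    pan = math.sin(bearing)
    # Constant-power panning keeps overall loudness steady across directions.
    left = math.sqrt((1 + pan) / 2)
    right = math.sqrt((1 - pan) / 2)
    attenuation = 1.0 / max(distance, 1.0)  # simple inverse-distance falloff
    return left * attenuation, right * attenuation

# A door two meters ahead and slightly to the listener's left:
# the left channel comes out louder than the right.
print(virtual_speaker_gains((0.0, 0.0), math.pi / 2, (-0.5, 2.0)))
```

In a real system, each detected object would drive one such source continuously, with all sources mixed together, which is what makes the result feel like a soundscape rather than a sequence of alerts.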
9/17/24 • 31:34
We appreciate your support for our show — and now, we need your help nominating the On Tech & Vision podcast for the People's Choice Podcast Awards! We are participating in these awards so we can showcase On Tech & Vision to a broader audience, gain recognition within the industry, and, most importantly, help spread the message about Lighthouse Guild and the role that technology is playing in tearing down barriers for people who are blind or visually impaired. To help us nominate On Tech & Vision, please go online to www.podcastawards.com, where you can register to vote for On Tech & Vision in both the Technology and People's Choice categories. Voting is open until July 31st. Once again, your support is greatly appreciated!
7/12/24 • 01:11
This podcast is about big ideas on how technology is making life better for people with vision loss. For hundreds of years, health professionals have dreamed of restoring vision for people who are blind or visually impaired. However, doing so, either through transplanting a functioning eye or using technological aids, is an incredibly complex challenge. In fact, many considered it impossible. But thanks to cutting-edge research and programs, the ability to restore vision is getting closer than ever. As a first for this podcast, this episode features an interview with Dr. Cal Roberts himself! Adapting audio from an interview on The Doctors Podcast, Dr. Cal describes his work as a program manager for a project on eye transplantation called Transplantation of Human Eye Allografts (THEA). Funded by a government initiative called ARPA-H, THEA is bringing some of the country's finest minds together to tackle the complexities of connecting a person's brain to an eye from a human donor. This episode also features an interview with Dr. Daniel Palanker of Stanford University. Dr. Palanker is working on technology that can artificially restore sight through prosthetic replacement of photoreceptors. The approach has proved successful in animals, and Dr. Palanker and his team are working hard to translate it to humans. If that happens, then something once considered impossible could finally be accomplished!

The Big Takeaways:
The Challenges of Eye Transplants: Although eyeball transplants have been done, they've only been cosmetic. So far, nobody has been able to successfully connect a donor eyeball to a recipient's brain. Dr. Roberts's work with THEA is bringing together multiple teams to tackle the challenges associated with a whole-eyeball transplant, from connecting nerves and muscles to ensuring the organ isn't rejected, and much more.
"Artificial" Vision Restoration: Dr. Palanker is working to replace the functions of photoreceptors through technological means. His photovoltaic array is placed underneath the retina and can convert light into an electrical current that activates the cells that send visual information to the brain. While it doesn't completely restore sight for people with Age-Related Macular Degeneration, this technology shows incredible promise.
Decoding "Brain Language": For both Dr. Roberts and Dr. Palanker, one of the biggest challenges with vision restoration is understanding how the eye and brain communicate. Dr. Roberts likens it to Morse code — the eye speaks to the brain in "dots and dashes," which the brain then converts into vision. Right now, the language is still foreign to us, but we're closer than ever to decoding it.
The Evolution of the Brain-Machine Interface: Dr. Palanker imagines incredible possibilities in the interaction between the brain and technology. If we can find a way to truly translate the brain's signals into information, Dr. Palanker envisions the possibility of direct brain-to-brain communication without verbalization. In a way, this could make people telepathic, able to understand and digest vast amounts of information in an instant.

Tweetables:
"So ideally in medicine, at least, the ideal therapy is the restoration of full functionality. If we can grow back photoreceptors and make them reconnect to bipolar cells, undo all the rewiring that underwent during degeneration, and restore the full extent of vision, that would be the ideal outcome." — Dr. Daniel Palanker, Professor of Ophthalmology, Stanford University
"We can think about other aspects of brain-machine interface, which takes you maybe into the realm of capabilities that humans never had. If you enable artificial senses or enable brain-to-brain connectivity so you can communicate without verbalization, that would open completely new capabilities that humanity never had." — Dr. Palanker
"Forty-two years after the implantation of the first mechanical heart, there's not a single person in the world walking around with a mechanical heart. All that work, all that research, and all that effort to come up with mechanical hearts — and heart transplants are still state-of-the-art. And so, while I believe that there is a role for a bionic eye or mechanical eye, what I really believe is that everything that we learn from doing an eye transplant will just make it better and easier when we do eventually come up with a bionic or a mechanical eye." — Dr. Calvin Roberts, ARPA-H Health Science Futures Program Manager (President and CEO of Lighthouse Guild and Host of On Tech & Vision!)

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
THEA Program
Prima System
People's Choice Podcast Awards
7/12/24 • 44:10
This podcast is about big ideas on how technology is making life better for people with vision loss. This episode is about how biosensor technology is revolutionizing the field of diagnostic and preventive medicine. Biosensors can take many forms — wearable, implantable, and even ingestible. And they can serve many different functions as well, most notably when it comes to detecting the various pressure levels in our bodies. This episode features interviews with several luminaries working with biosensors. One of them is Doug Adams, a revolutionary entrepreneur who was inspired to create a biosensor that can assist in the treatment of glaucoma patients, initially focusing on a sensor for intraocular pressure. More recently, Doug founded a company called QURA, whose current efforts are focused on a biosensor that detects blood pressure. To elaborate on QURA's initiatives, this episode also includes insights from its Chief Business Officer, David Hendren. He and Dr. Cal discuss the current state of biosensor technology, the benefits of implantable biosensors, and how they work. Finally, this episode includes a conversation with Max Ostermeier, co-founder and General Manager of Implandata Ophthalmic Products. Max was previously interviewed by Dr. Cal for the episode "Innovations in Intraocular Pressure and Closed Loop Drug Delivery Systems." This time, Max joins Dr. Cal to discuss the possibilities of biosensor technology and his company's Eyemate system, which includes biosensor technology for glaucoma patients. All three guests also offer their thoughts on the future of biosensors and their endless possibilities. While it may seem like science fiction, it truly is science reality!

The Big Takeaways:
What Biosensors Do: Currently, biosensors primarily sense the various pressures in the human body. QURA's current sensor detects blood pressure and assists with hypertension. Meanwhile, Implandata's Eyemate technology serves glaucoma patients by gathering data on intraocular pressure.
The Rapid Shrinking of Biosensors: When Doug Adams first started working on biosensors, the model he saw was the size of a microwave. Now, it has shrunk to the size of a grain of rice! Smaller biosensors are easier to implant and place in different spots within the body — and, as a result, they can gather more and more data.
The Benefits of AI: One drawback of gathering so much data is that it can be hard to analyze. However, improvements in AI technology are making it easier to sort through all that data, giving doctors and patients valuable information for medical diagnostics and treatments.
The Future of Biosensors: As implantable biosensors become smaller and more sophisticated, all our guests see them becoming a crucial part of healthcare. In addition to gathering data on all sorts of functions within the body, biosensors could provide therapies and treatments with minimal human intervention.

Tweetables:
"So, we are measuring the absolute pressure inside the eye with this kind of technology. It originates from the automotive industry — tire pressure sensors, where you also have to measure the pressure inside the tire. And so basically we took [that] technology and advanced it and made it so small that you can also implant this kind of sensor in an eye." — Max Ostermeier, co-founder and General Manager of Implandata Ophthalmic Products
"So I had a physical a month ago, and along with the physical, they draw blood, and they send that blood off to a lab. I have a feeling in the next decade, that goes away. Why do you have to send a vial of blood to the lab? Because if I had a sensor, not even in an artery, but on top of an artery, I could do a complete analysis of everything in that blood that you're doing from the lab." — Doug Adams, entrepreneur and founder of QURA
"The important thing is that you are automatically getting data to the care group that is taking care of these patients, where they are able to see what's happening. They're able to see not just a snapshot once in a while, as you'd have from an external pressure cuff, but [get] continuous data longitudinally." — David Hendren, Chief Business Officer of QURA (A toy sketch of this continuous-monitoring idea appears after these notes.)

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
QURA
Implandata Ophthalmic Products
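As a small illustration of the "snapshot versus continuous data" point David Hendren makes above, the sketch below simulates an implant streaming pressure readings and flags a sustained rise that a single office measurement could miss. The threshold, window size, and readings are invented for illustration and are not based on QURA's actual devices or clinical criteria.

```python
from collections import deque

WINDOW = 5               # rolling window of recent samples
ALERT_SYSTOLIC = 140.0   # illustrative threshold, not clinical guidance

# Simulated hourly systolic readings (mmHg) from an implanted sensor.
readings = [128, 131, 150, 152, 149, 151, 148, 133]

window = deque(maxlen=WINDOW)
for hour, systolic in enumerate(readings):
    window.append(systolic)
    rolling = sum(window) / len(window)
    # A sustained elevation, not one noisy sample, triggers the notification.
    if len(window) == WINDOW and rolling > ALERT_SYSTOLIC:
        print(f"hour {hour}: rolling average {rolling:.0f} mmHg - notify care team")
```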
4/26/24 • 31:51
When it comes to emerging technology, there's no hotter topic than artificial intelligence. Programs like ChatGPT and Midjourney are becoming more popular and are inspiring people to explore the possibilities of what AI can achieve — including when it comes to accessible technology for people who are blind or visually impaired. One of those people is Saqib Shaikh, an engineering manager at Microsoft. Saqib leads the team that developed an app called Seeing AI, which utilizes the latest generation of artificial intelligence, known as generative AI. Dr. Cal spoke with Saqib about how generative AI works, his firsthand experience using an app like Seeing AI, and how it has helped improve his daily life. This episode also features Alice Massa, an occupational therapist at Lighthouse Guild. Alice described the many benefits of generative AI and how it helps her clients better engage with their world. Saqib and Alice both agreed that the current state of AI is only the beginning of its potential. They shared their visions of what it could achieve in the future — and it doesn't seem that far off.

The Big Takeaways:
The Power of Generative AI: Saqib discussed the present condition of artificial intelligence and why generative AI is a massive leap from what came before it. With a deep data pool to draw from, generative AI can do much more than identify items or come up with an essay prompt. It can understand and interpret the world with startling depth and expediency.
Seeing AI: This app can truly put the world in the palm of your hand. It can perform essential tasks like reading a prescription or the sign at a bus stop — and even more than that! It can describe all the colorful details of sea life in a fish tank at the aquarium or help you order dinner off a menu. The app doesn't just provide people who are blind or visually impaired greater access to the world — it expands it.
Embrace Change: There's understandably a lot of uncertainty about what role AI should play in society. However, Saqib Shaikh and Alice Massa insist that there's nothing to fear from AI, that the benefits far outweigh any potential drawbacks, and that as long as it's handled responsibly, there's a lot AI can do to help improve our lives.

Tweetables:
"I had a client at the Lighthouse who really was very disinterested in doing anything. The only thing he did on his phone was answer a call from his pastor and call his pastor. And I was able to put Seeing AI on his phone. And his wife said the first time in two years, she saw a smile on his face because now he could read his Bible by himself." — Alice Massa, Occupational Therapist at Lighthouse Guild
"What if AI could understand you as a human? What are your capabilities? What are your limitations at any moment in time? Whether that's due to a disability or your preferences or something else, and understand the environment, the world you're in, or the task you're doing on the computer or whatever. And then we can use the AI to close that gap and enable everyone to do more and realize their full potential." — Saqib Shaikh, Engineering Manager at Microsoft
"I call my phone my sister because my phone is the person I go to when I'm on the street if I'm walking in Manhattan. The other day I was meeting someone on 47th Street. I wasn't sure which block I was on. All I did was open Seeing AI short text, hold it up to the street sign, and it told me I was on West 46th Street." — Alice Massa
"Some of the interesting things powered by generative AI is going from taking a photo, say from your photo gallery if you're reliving memories from your vacation, or even just what's in front of you right now. It can go from saying it's a man sitting on a chair in a room to actually giving you maybe a whole paragraph describing what's in the room, what's on the shelf, what's in the background, what's through the window, even. And it's just remarkable. I work on this every day. I understand the technology, yet as an end user, I still am surprised and delighted by what this generation of AI is capable of telling me." — Saqib Shaikh

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
2/16/24 • 26:47
This podcast is about big ideas on how technology is making life better for people with vision loss. When it comes to art, a common phrase is "look, don't touch." Many think of art as a purely visual medium, and that can make it difficult for people who are blind or visually impaired to engage with it. But in recent years, people have begun to reimagine what it means to experience and express art. For this episode, Dr. Cal spoke to El-Deane Naude from Sony Electronics. El-Deane discussed the Retissa NeoViewer, a project developed with QD Laser that projects images taken on a camera directly onto the photographer's retina. This technology allows photographers who are visually impaired to see their work much more clearly and with greater ease. Dr. Cal also spoke with Bonnie Collura, a sculptor and professor at Penn State University, about her project "Together, Tacit." Bonnie and her team developed a haptic glove that allows artists who are blind or visually impaired to sculpt with virtual clay. They work in conjunction with a sighted partner wearing a VR headset, allowing both to engage with each other and gain a new understanding of the artistic process. This episode also includes an interview with Greta Sturm, who works for the State Tactile Omero Museum in Italy. Greta described how the museum's founders created an experience solely centered around interacting with art through touch. Not only is it accessible for people who are blind or visually impaired, but it allows everyone to engage with the museum's collection in a fascinating new way. Finally, a painter and makeup artist named Emily Metauten described how useful accessible technology has been for her career. But she also discussed the challenges artists who are blind or visually impaired face when it comes to gaining access to this valuable technology.

The Big Takeaways:
The Value of Versatility: Many photographers who are visually impaired require large, unwieldy accessories in order to properly capture their work. Sony and QD Laser are determined to solve this problem with the Retissa NeoViewer, which can replace cumbersome accessories like screen magnifiers and optical scopes.
Sculpting Virtual Clay: The aim of "Together, Tacit" is to "foster creative collaboration between blind, low-vision, and sighted individuals." A major way this is accomplished is by using the haptic glove to sculpt virtual, rather than physical, clay. Working in VR makes it harder for the sighted partner to unintentionally influence the work of the artist who is blind or visually impaired. As a result, the experience for both users is more authentic and enriching.
Reimagining the Museum Experience: The Tactile Omero Museum is much more than an opportunity for people who are blind or visually impaired to interact with art — it's reimagining how that art is fundamentally experienced. By giving visitors a chance to engage with pieces on a tactile level, the museum allows everyone a chance to reconnect with a vital sense that many take for granted.
Expanding Access to Technology: For artists like Emily Metauten who are visually impaired, accessible technology makes it much easier to do their jobs. However, many governmental organizations don't have the infrastructure to provide this technology to them. Emily wants to raise awareness of how valuable this technology can be, and why providing it to people is so important.

Tweetables:
"When we're little kids, we want to touch everything … and then soon after that, we're told, no, no, no, you shouldn't touch. You should look and not touch. And so, it becomes the reality and it becomes what you're supposed to do." — Greta Sturm, Operator at State Tactile Omero Museum
"I carry a monocular, a little optical scope. But it becomes extremely difficult when you're out and about and you're trying to take a photograph, trying to change your settings. With this method, the laser projection, I can actually read the tiniest little settings." — El-Deane Naude, Senior Project Manager at Sony Electronics Imaging Division
"The VR glasses definitely unlock an ability to see more details more easily for me. Because peripheral vision isn't designed to see fine details. That's what the central vision is responsible for. So that's what I have trouble with. But it made what I was already doing easier, and also did give me inspiration. Because we're trying to unlock the greater things in life, that aren't just beyond the basics for people with vision loss." — Emily Metauten, professional painter and makeup artist
"I've learned through teaching that if a visually impaired or blind person was to use real clay … a sighted person would inevitably start to signify it in terms of what it can be called … And already, immediately, that begins to change the power dynamic on how something is created." — Bonnie Collura, Professor of Art, Penn State University

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
Together, Tacit
Retissa NeoViewer
State Tactile Omero Museum
Emily Metauten Artist Page (Herminia Blue)
12/8/23 • 37:06
This podcast is about big ideas on how technology is making life better for people with vision loss. When we buy a product off the shelf, we rarely think about how much work went into getting it there. Between initial conception and going to market, life-changing technology requires a rigorous testing and development process. That is especially true when it comes to accessible technology for people who are blind or visually impaired. For this episode, Dr. Cal spoke to Jay Cormier, the President and CEO of Eyedaptic, a company that specializes in vision-enhancement technology. Its flagship product, the EYE5, provides immense benefits to people with Age-Related Macular Degeneration, Diabetic Retinopathy, and other low-vision diseases. But this product didn't arrive by magic. It took years of planning, testing, and internal development to bring this technology to market. This episode also features JR Rizzo, a professor and researcher of medicine and engineering at NYU — and a medical doctor. JR and his research team are developing a wearable "backpack" navigation system that uses sophisticated camera, computer, and sensor technology. JR discussed both the practical and technological challenges of creating such a sophisticated product, along with the importance of beta testing and feedback.

The Big Takeaways:
The Importance of Testing: There's no straight line between the initial idea and the final product. It's more of a wheel that rolls along with the power of testing and feedback. It's extremely important to have a wide range of beta testers engage with the product. Their experience with it can highlight unexpected blind spots and create opportunities to make something even greater than originally anticipated.
Anticipating Needs: When it comes to products like the EYE5, developers need to anticipate that users will have evolving needs as their visual acuity deteriorates. So part of the development process involves anticipating what those needs will be and finding a way to deliver new features as users need them.
Changing on the Fly: Sometimes, we receive feedback we were never expecting. When JR Rizzo received some surprise reactions to his backpack device, he had to reconsider his approach and re-examine his fundamental design.
Future-Casting: When Jay Cormier and his team at Eyedaptic first started designing the EYE5 device, they were already considering what the product would look like in the future and how it would evolve. To that end, they submitted certain patents many years ahead of when they thought they'd need them — and now, they're finally being put to use.

Tweetables:
"I'm no Steve Jobs and I don't know better than our users. So the best thing to do is give them a choice and see what happens." — Jay Cormier, President & CEO of Eyedaptic
"I started to think a little bit more about … assistive technologies. … And, I thought about trying to build in and integrate other sensory inputs that we may not have natively … to augment our existing capabilities." — JR Rizzo, NYU Professor of Medicine and Engineering
"I think the way we've always looked at it is the right way, which is you put the user, the end user, front and center, and they're really your guide, if you will. And we've always done that even in the beginning when we start development of a project." — Jay Cormier
"When we put a 10-pound backpack on some colleagues, they offered some fairly critical feedback that it was way too heavy and they would never wear it. … They were like … it's a non-starter." — JR Rizzo

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
Eyedaptic
Rizzo Lab
10/10/23 • 37:34
This podcast is about big ideas on how technology is making life better for people with vision loss. The white cane and the guide dog are long-established, foundational tools used by people with vision impairment to navigate. Although it would be difficult to replace the 35,000 years of bonding between humans and dogs, researchers are working on robotic technologies that can replicate many of the same functions of a guide dog. One such project, called LYSA, is being developed by Vix Systems in Brazil. LYSA sits on two wheels and is pushed by the user. It's capable of identifying obstacles and guiding users to saved destinations. And while hurdles such as outdoor navigation remain, LYSA could someday be a promising alternative for people who either don't have access to guide dogs or aren't interested in having one. In a similar vein, Dr. Cang Ye and his team at Virginia Commonwealth University are developing a robotic white cane that augments the familiar white cane experience for people with vision loss. Like LYSA, the robotic white cane has a sophisticated computer learning system that allows it to identify obstacles and help the user navigate around them, using a roller tip at its base. Although it faces obstacles as well, the robotic guide cane is another incredible example of how robotics can help improve the lives of people who are blind or visually impaired. It may be a while until these technologies are widely available, and guide dogs and traditional canes will always be extremely useful for people who are blind or visually impaired. But with how fast innovations in robotics are happening, it may not be long until viable robotic alternatives are available.

The Big Takeaways:
Reliability of Biological Guide Dogs: Although guide dogs have only been around for a little over a century, humans and dogs have a relationship dating back over 35,000 years. Thomas Panek, the President and CEO of Guiding Eyes for the Blind, points out that there will never be a true replacement for this timeless bond. That being said, he thinks there is a role for robotics to coexist alongside biological guide dogs, and even to help augment their abilities.
LYSA the Robotic Guide Dog: LYSA may look more like a rolling suitcase than a dog, but its developers at Brazil's Vix Systems are working on giving it many of the same functions as its biological counterpart. LYSA can identify obstacles and guide its user around them. And for indoor environments that are fully mapped out, it can bring the user to pre-selected destinations as well.
The Robotic White Cane: Dr. Cang Ye and his team at Virginia Commonwealth University are developing a robotic white cane that can provide more specific guidance than the traditional version. With a sophisticated camera combined with LiDAR technology, it can help its user navigate the world with increased confidence.
Challenges of Outdoor Navigation: Both LYSA and the robotic white cane are currently better suited for indoor navigation. A major reason is the unpredictability of an outdoor environment, along with more fast-moving objects, such as cars on the road. Researchers are working hard on overcoming this hurdle, but it still poses a major challenge.
The Speed of Innovation: When Dr. Ye began developing the robotic white cane a decade ago, the camera his team used cost $500,000 and had image issues. Now, their technology can run on a smartphone — making it much more affordable and, hopefully one day, more accessible if it becomes available to the public.

Tweetables:
"We've had a relationship with dogs for 35,000 years. And a relationship with robots for maybe, you know, 50 years. So the ability of a robot to take over that task is a way off. But technology is moving quickly." — Thomas Panek, President and CEO of Guiding Eyes for the Blind
"Outdoor navigation is a whole new world because if you go on the streets, it could be dangerous. You have to be very careful because you are driving a person, driving a human being." — Kaio Ribeiro, Researcher at Vix Systems
"The first … camera we used, it's close to 500 grand. … But now … the iPhone's LiDAR … works outdoors. … And it just took … a … bit more than 10 years." — Dr. Cang Ye, Professor of Computer Science at Virginia Commonwealth University and Program Director, National Science Foundation
"It's not the traditional … robot … that's stiff. … We have to move into soft robotics … to accomplish the … activity … a dog can accomplish. … It's a way off. … If … an engineering student … wants to get into soft robotics, … that's where it will be." — Thomas Panek

Pertinent Links:
Lighthouse Guild
Guiding Eyes for the Blind
LYSA Robot Guide
Robotic White Cane
7/28/23 • 33:13
This podcast is about big ideas on how technology is making life better for people with vision loss. Navigating the world can be difficult for anyone, whether or not they have vision loss. Tasks like driving safely through a city, navigating a busy airport, or finding the right bus stop all pose unique challenges. Thankfully, advances in technology are giving people more freedom of movement than ever before, allowing them to get where they want, when they want, safely. Smart Cities are putting data collection to work in a healthy way by providing information to make busy intersections more secure, sidewalks more accessible, and navigation more accurate. They're providing assistance for all aspects of travel, from the front door to the so-called "last hundred feet," while using automated technology to make life easier every step of the way. And although fully autonomous vehicles are still on the horizon, the technology being used to develop them is being applied to improve other aspects of life in incredible ways. These applications are making the world more accessible, safer, and better for everyone, including people who are blind or visually impaired. One example of this is Dan Parker, the "World's Fastest Blind Man," who has developed sophisticated guidance systems for his racing vehicles, as well as a semi-autonomous bicycle that could give people with vision loss a new way to navigate the world safely and independently.

The Big Takeaways:
Smart Cities: Greg McGuire and his team at MCity in Ann Arbor, Michigan are working on the concept of Smart Cities, which focus on using data to improve the everyday lives of their citizens. That means improving traffic intersection safety, offering greater accessibility options, providing detailed "last hundred feet" guidance, and much more.
Autonomous Driving: In a perfect world, self-driving cars will provide ease of transportation for everyone and create safer, less congested roads. That technology isn't there yet — but it's being worked on by talented researchers like John Dolan, the Principal Systems Scientist at Carnegie Mellon's Autonomous Driving Vehicle Research Center. Sophisticated sensors and advanced robot-human interfaces are being developed to make self-driving cars possible.
Application of Technology: Even though the technologies behind Smart Cities and autonomous vehicles are still being developed, they can already be applied to everyday life in exciting ways. Miniature robots that deliver goods, AI-powered suitcases that help you navigate busy airports, and semi-autonomous bicycles are already here — and there's more on the way.
The World's Fastest Blind Man: When professional race car driver Dan Parker lost his vision in an accident, he felt lost. But a moment of inspiration led him and his business partner Patrick Johnson to develop a sophisticated guidance system that let him continue racing without human assistance. Thanks to this revolutionary technology, Dan became the "World's Fastest Blind Man" when he set a land-speed record of 211.043 miles an hour in his customized Corvette.

Tweetables:
"One of the key pillars of MCity is accessibility. The four areas we think about are safety, efficiency, equity, and accessibility. … Accessibility is that we can make transportation systems available to as many of us as possible." — Greg McGuire, Managing Director of MCity
"I became the first blind man to race Bonneville, with an average speed of 55.331 mph. And I returned in 2014 and set my official FIM class record … at 62.05 mph. … I'm the only blind land speed racer … with no human assistance." — Dan Parker, the "World's Fastest Blind Man"
"There are chairs, there are tables. ... We know we don't want to run into them, but we do want to walk in the walkable space. … A car wants to drive in the drivable space." — John Dolan, Principal Systems Scientist at Carnegie Mellon's Autonomous Driving Vehicle Research Center
"Because we know autonomous technology is increasing every day and it's coming, you know, a hundred percent it's coming. You know, transportation is freedom and that's exactly what that would bring us. Freedom." — Dan Parker

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
MCity
Carnegie Mellon Autonomous Driving Vehicle Research Center
Dan Parker
6/2/23 • 39:13
This podcast is about big ideas on how technology is making life better for people with vision loss. For decades, people with vision loss had limited options when it came to accessing video games. Aside from screen magnification and text-to-voice tools, gamers who are blind or visually impaired didn't have many ways to play their favorite titles. But in recent years, the same cutting-edge technology used to create games has also been used to make them more accessible for people with vision impairment. These advances include more visibility options, the implementation of 3D audio, haptic feedback, and customizable controllers. Furthermore, 3D audio technologies being developed in live sports may soon make their way to online multiplayer video games. The implementation and improvement of these technologies mean that everyone will be able to play together, regardless of their visual acuity.

The Big Takeaways:
Leap in Accessible Gaming Options: In the past, the lack of accessibility features made it much harder for gamers like Elaine Abdalla to access their favorite titles. But as gaming technology and accessibility improve, gamers who are visually impaired are now able to participate more actively. For example, The Last of Us: Part 2 boasts over 60 accessibility options, including sophisticated visibility adjustments and inclusive difficulty settings.
Participating in the Process: Robin Spinks is the Head of Inclusive Design at the UK's Royal National Institute of Blind People (RNIB). He helps gaming companies implement accessibility options as part of the design process from the very beginning. By ensuring people with vision loss are part of the process from the jump, Robin is helping make games more immersive than ever — for everyone.
Xbox Accessibility Team: Kaitlyn Jones first became passionate about gaming when she helped her father launch a nonprofit foundation that built customized setups for gamers with disabilities. She's now the program manager of Xbox's Gaming Accessibility Team, which develops guidelines for all kinds of accessible gaming options, including for people with vision loss. Thanks to her team's efforts, new titles include options like spatial audio cues, which make it easier for gamers who are blind to navigate complex dungeons and unlock achievements.
Action Audio: At the 2022 Australian Open, a new technology called Action Audio became available for tennis fans with vision loss. Through cutting-edge camera-tracking technology, Action Audio creates an information-rich 3D audio experience that allows tennis fans with vision loss to experience every thrilling rally as it happens and to share in a communal experience. (A toy sketch of this kind of data-to-sound mapping appears after these notes.)
Spatial Audio in Gaming: Action Audio's developers hope to make it a universal technology that can also be used in gaming. Whether it's working with a team in the latest first-person shooter or dribbling around opponents on a virtual basketball court, 3D (or spatial) audio technology is positioned to help people with vision loss participate more equitably in the gaming community.

Tweetables:
"The motivation here is to remove barriers, to form partnerships, and to collaborate so that going forward it becomes a much more possible area of life for people. 'Cause gaming's fantastic, right? And why should you be stopped from enjoying it just because you're blind or partially sighted?" — Robin Spinks, Head of Inclusive Design at RNIB
"When something happens on the court, it's captured quite fast with Hawk-Eye within hundreds of milliseconds, and we're able to take that data relatively quickly and generate the audio and then send it to the broadcast before it's received in the real world." — Tim Devine, AKQA Executive Director of Innovation
"It's like that Japanese principle of continuous improvement, kaizen, where you're kind of constantly looking to do things better. That's what we want to see in the gaming world." — Robin Spinks, Head of Inclusive Design at RNIB
"We've always really prioritized accessibility along the way. But I think in terms of the journey of, even where we started when I first joined the team a few years ago versus now, I think the bar honestly just keeps getting higher and higher." — Kaitlyn Jones, Program Manager, Xbox Accessibility Team
"I'm just happy to see the disabled community is finally getting the assistance they need. It's taken a while and it's still gonna take a while, but we're going in the right direction." — Elaine Abdalla, creator of the BlindGamerChick YouTube Channel

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
RNIB Accessible Video Games Page
Xbox Accessibility Guidelines
BlindGamerChick YouTube Channel
On Tech & Vision: Training the Brain: Sensory Substitution
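As a rough illustration of how tracking data can become sound, the sketch below maps a ball's court position to stereo pan and its speed to pitch. The specific mapping is an invented example for intuition only; it is not Action Audio's actual design, which layers richer 3D cues on top of live Hawk-Eye data.

```python
def sonify(ball_x, court_width, speed_mps):
    """Map tracked ball data to two simple sound parameters.
    (Invented mapping for illustration, not Action Audio's.)"""
    pan = (ball_x / court_width) * 2 - 1   # -1 = left sideline, +1 = right
    pitch_hz = 220 + 40 * speed_mps        # faster ball -> higher tone
    return pan, pitch_hz

# A ball a quarter of the way across the court, traveling 30 m/s,
# would sound from the left at a high pitch.
print(sonify(ball_x=2.5, court_width=10.0, speed_mps=30.0))  # (-0.5, 1420.0)
```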
4/14/23 • 31:06
This podcast is about big ideas on how technology is making life better for people with vision loss. Marcus Roberts, Stevie Wonder, Ray Charles, and even Louis Braille (who invented the Braille music notation system still used today) prove that musicians who are blind or visually impaired have made profound impacts on our musical landscape. However, to get their work to us, musicians who are blind have had to structure complex workarounds: relying on sighted musicians to demonstrate complex scores, memorizing long pieces, or only performing when they can have a Braille score in front of them — shutting them out from opportunities that fall to those who can sight-read, since Braille scores have often been time-consuming and expensive to produce. However, new technologies in music composition and production are making composition, nuanced scoring, and Braille printing easier than ever, bringing musicians and composers who are blind to center stage to share their sound and song.

The Big Takeaways:
"Lullay and Lament" by James Risdon: The recorder — pivotal in music from the Renaissance and Baroque periods — has only lately emerged from a long period of obsolescence. James Risdon, a passionate player who lost his vision to Leber congenital amaurosis as a child, wrote the original song for recorder "Lullay and Lament" for his album Echoes of Arcadia, which marries the early recorder with contemporary recorder music. To make this album, he relied on new musical technologies like Dancing Dots.
Dancing Dots with Bill McCann: Bill McCann is the founder and president of Dancing Dots, a suite of Braille music notation software — plus educational resources and training — that helps people who are blind to read, write, and record their music. McCann founded Dancing Dots in 1992.
Chris Cooke and PlayHymns.com: When Chris Cooke got the Dancing Dots software in 2016, her creativity exploded. She was able to arrange a song, bring it to church, and play a duet with a member of the congregation — something she hadn't been able to do before, given the former time and cost of translating music into Braille notation.
What is Braille Music? Louis Braille, a noted musician, created the Braille musical notation system. Being able to translate music easily between Braille musical notation and Western musical notation, and to easily print either of these, helps musicians who are blind share music with other musicians, both sighted and blind, and play music together with ease.
Musical Instrument Digital Interface (MIDI) and MusicXML: MIDI has made it possible for musicians to play music into their instruments and have it automatically translated into digital musical notation. MusicXML has standardized the file format for scores, allowing musicians to share them across popular music notation applications like Finale or Sibelius. (A minimal example of the format follows these notes.)
The Question of Parity: James and Chris agree that while Dancing Dots technology has enabled them to take advantage of new musical opportunities, no technology exists that offers them complete parity with sighted musicians, because musicians who are blind need additional lead time to get the music scanned correctly or to memorize pieces. Chris adds that being able to prepare music in a timely fashion and on a budget would help.
The MIDI-to-Brain Connection: Bill McCann has explored using the BrainPort, a technology from WICAB (which we profiled in a September 2021 episode, "Training the Brain: Sensory Substitution"), to allow musicians who are blind to read music on their tongues. This matters when someone needs to read music live in a performance while playing an instrument that also requires their hands. Early trials showed signs of success. He posits that maybe someday, maybe soon, people could think new music into notation.

Tweetables:
"I said, I will never put myself in this position again. If I write something and I am asking other people to play it, and they ask me questions or there's something, I am going to know before we meet exactly what I want and what I have." — Bill McCann, founder and president of Dancing Dots
"So the duet that we played was hot off my printer and went with me and we played it. And it was great to be able to share music in that way because of the technology in the Dancing Dots program." — Chris Cooke, musician and music arranger, creator of PlayHymns.com
"As a blind person, I can say this for myself, often we end up following sighted people, or following somebody. Braille music gives blind musicians the chance to become leaders." — Bill McCann, founder and president of Dancing Dots
"I've set aside 2023 as a year. I'd really like to kind of develop some more expertise in the area and also come to grips with some of the technology that would help the process." — James Risdon, musician and recorder player
"Someone sitting there and getting inspired is what we call the MIDI-to-brain connection. We're not there yet, but you could … think the music in your head … at a computer and … music materializes in the form of a score." — Bill McCann, founder and president of Dancing Dots

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
Dancing Dots
James Risdon
Chris Cooke and PlayHymns.com
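Since MusicXML comes up above as the interchange format that lets scores move between applications, here is a minimal sketch of what one note looks like in it, built with Python's standard library. A real file needs the surrounding score, part, and measure elements with a divisions setting; this fragment shows only the note itself.

```python
import xml.etree.ElementTree as ET

# A single middle-C quarter note as it appears inside a MusicXML measure:
# pitch (step + octave), duration in divisions, and the printed note type.
note = ET.Element("note")
pitch = ET.SubElement(note, "pitch")
ET.SubElement(pitch, "step").text = "C"
ET.SubElement(pitch, "octave").text = "4"
ET.SubElement(note, "duration").text = "1"  # 1 division = a quarter note here
ET.SubElement(note, "type").text = "quarter"

print(ET.tostring(note, encoding="unicode"))
# <note><pitch><step>C</step><octave>4</octave></pitch>...</note>
```

Because every major notation program can read and write this same structure, a score produced in one tool by a musician who is blind can be opened, printed, or translated to Braille notation in another.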
3/6/23 • 34:40
This podcast is about big ideas on how technology is making life better for people with vision loss. Lots of people have voice-controlled smart home assistants like Siri, Google, or Alexa in their homes to listen to the news or to set timers. But they can do so much more! David Frerichs, Principal Engineer for the Alexa Experience at Amazon on the aging and accessibility team, shares his design philosophy for making voice assistants more inclusive and the preferred mode of engagement for every user. He also shares that the next stage of smart home assistants will be ambient computing, where your devices will intuit your needs without you speaking them. We talk with Lighthouse Guild client Aaron Vasquez, who has outfitted his home with smart home technology, and with Matthew Cho, a client who traveled to the Johnson Space Center in Houston to speak to the unmanned Orion spacecraft via the Amazon Alexa on board — demonstrating that voice assistant technology can bring inclusivity and accessibility to many jobs and industries and is not just for the home anymore.

The Big Takeaways:
Alexa Onboard the Orion Spacecraft: NASA partnered with Amazon to use the Alexa voice-controlled smart assistant onboard the unmanned Orion spacecraft so that engineers could guide the spacecraft from Mission Control. This project tested the possibility of Alexa for space travel while demonstrating that voice-controlled smart assistants have uses beyond the home. Matthew Cho, an 18-year-old student and client of Lighthouse Guild, had the opportunity to travel to Houston to be one of the volunteers giving voice commands to the spacecraft via an Amazon Alexa device while it hurtled through space.
Accessibility and Preferences: David Frerichs has spent his career developing technology that adapts to the ever-changing needs of the user, and he has cultivated a design philosophy holding that design choices (like voice control) that enable inclusivity for people who are blind can also become the preferred way that most users engage with a device or a tool. Curb cuts are a historical example. David often thinks in terms of "hot tub safe computing": what can a person do to engage with a device from their hot tub?
Ambient Computing: David describes ambient computing, the next phase of smart home technology, in which a network of devices in the built environment intuits and responds to a user's needs without the user even needing to speak a command.
Smart Homes Today, Smart Industries Tomorrow: Aaron Vasquez is a smart home user. He uses a Google smart speaker to power two smart lamps, operate his smart TV, and control a pet camera to oversee his rambunctious kitties when he's not at home. As a person who is visually impaired, Aaron prefers voice commands for running his home in this way. This episode asks how the smart home's tools can be integrated into offices and industries to make them more accessible and inclusive spaces.

Tweetables:
"To be able to have a rocket be dependent on an AI without anybody having to control the spacecraft is, it is really amazing, and I feel that later on that they'll be able to use it for much more things aside from space." — Matthew Cho, student and client of Lighthouse Guild
"So their goal was to eventually get different people to be able to go into space … They were trying to see if Alexa would work properly with all sorts of people, normal people. Not just astronauts, like regular, ordinary, everyday people." — Matthew Cho, student and client of Lighthouse Guild
"You have … permanent need, temporary need, situational need, and preferential need that really can inform us on … how we can address [a] barrier for the particular core use case. But if we do it well, it will serve a much broader community." — David Frerichs, Principal Engineer, Alexa Experience at Amazon
"We're moving toward … ambient computing. That is … that the system should be able to respond to the needs of the customer, even if the customer doesn't say anything. ... That's … where the boundaries are and where it's gonna continue to be pushed." — David Frerichs, Principal Engineer, Alexa Experience at Amazon
"We had seen this pack of smart bulbs and they were relatively cheap, and I was like, huh, that's, that's kind of cool. So we were like, you know what? Let's get it. So we got 'em, we hooked them up and that kind of is what started everything." — Aaron Vasquez, smart home user and client of Lighthouse Guild
"Honestly, it's so much easier if I can ask Google a question and she can come up with the answer, then I'm better off that way instead of actually trying to look it up myself." — Aaron Vasquez, smart home user and client of Lighthouse Guild

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
Alexa in Space
David Frerichs
1/24/23 • 34:53
This podcast is about big ideas on how technology is making life better for people with vision loss. Artifacts from Blackbeard's sunken pirate ship are on display in the North Carolina Maritime Museum in Beaufort, North Carolina. But now they are also accessible to visitors who are blind, thanks to the efforts of Peter Crumley, who spearheads the Beaufort Blind Project. In this episode, we ask: How can new technology help make sites like these as accessible to people who are blind as they are to sighted people? We profile three companies applying new technologies, paired with smartphone capabilities, to make strides in indoor navigation, orientation, and information transfer. Idan Meir is co-founder of RightHear, which uses Apple's iBeacon technology to make visual signage dynamic and accessible via audio descriptions. We check in with Javier Pita, CEO of NaviLens, whose QR code technology we profiled in our first season, to see what the company has been developing over the last two years. Rather than iBeacons or QR codes, GoodMaps uses LiDAR and geocoding to map the interior of a space; we speak with its Chief Evangelist, Mike May. Thanks to Peter Crumley, the North Carolina Maritime Museum is fully outfitted with GoodMaps and will soon have NaviLens as well. As the prices of these tools come down, the key will be getting them into all the buildings, organizations, and sites of information transfer that people who are blind need to access — which is all of them.

The Big Takeaways:
Beaufort Blind Project: Peter Crumley, a blind resident of Beaufort, North Carolina, has advocated for bringing accessibility tools to various parts of his hometown. Along the way, he helped the North Carolina Maritime Museum outfit itself with GoodMaps technology for indoor navigation and with NaviLens QR codes for information transfer. Thanks to these new technologies, the museum's artifacts are now accessible to everyone.
RightHear: Idan Meir co-founded RightHear, which pairs iBeacon technology with users' smartphones to guide visitors who are blind through an indoor space. iBeacons broadcast unique identifiers over Bluetooth Low Energy to phones within range. When these iBeacons are matched to areas of interest in a space (e.g., the front door, the counters, or the bathrooms), users can orient themselves, identify where they want to go, and choose how to navigate to each location. RightHear translates the information embedded in each beacon into audio feedback for users. On the subject of feedback, Idan Meir is looking for beta testers to try out RightHear and provide him with constructive feedback.
NaviLens: We profiled NaviLens QR code technology in an episode from our first season. In this episode, we follow up with Javier Pita to see what has been in development over the last two years. Since we last spoke, NaviLens has launched NaviLens 360, which uses magnets to help guide users who are blind to the NaviLens codes, even if their camera is having trouble picking up the code, making the app even more user-friendly. In addition, NaviLens has launched a partnership with Kellogg's in Europe and North America to test the effectiveness of the NaviLens code on consumer product packaging.
GoodMaps: GoodMaps uses LiDAR technology to map a space. Laser pulses are sent out from the LiDAR sensor, and when they bounce back, the sensor measures distances from the point of origin (a one-line worked example follows these notes). Institutions pay GoodMaps for the mapping service, and then users can access the maps for free. The app uses audio to communicate navigational directions to users.
Technological Advancement: Each of these tools relies on component technologies that have gotten less expensive in recent years (iBeacons, QR codes, and LiDAR). They are also viable because their target markets carry smartphones in their pockets, enabling potential users to access the tools quickly and easily, without much additional hassle or investment.
Distribution: In this episode, we profile three different approaches to broadening access to indoor navigation, orientation, and information transfer, proving there are many ways to solve these problems. It is good that some of these tools can be paired, as has been done at the North Carolina Maritime Museum, and that users may be able to choose which tools work best for them. The key will be getting them into all the buildings, organizations, and sites of information transfer that people who are blind need to access — which is all of them.

Tweetables:
"The advocacy is so important; when you're actually interfacing with the app to make the app better and make it work in a way that a blind person really needs it to work." — Peter Crumley, Beaufort Blind Project
"Well, it's gonna be built from blind perspective philosophy. So not only will it work for me — it will work for anyone, totally blind and fully sighted, to give an interactive experience." — Peter Crumley, Beaufort Blind Project
"Imagine, if this technology will be in all the products, we will solve the problem of accessible packaging for all users." — Javier Pita, NaviLens
"The point is we have solved the last few yards of the wayfinding problem that is super important for a blind user. And this was born in New York City with the collaboration with the MTA and the Department of Transportation of New York City." — Javier Pita, NaviLens
"That camera picks up the environment and it compares it with that point cloud and says, 'I see based on this particular image … that you are near the Starbucks,' or 'You're near Gate 27.'" — Mike May, GoodMaps
"It was important and kind of obvious for us from very early on, you know, that nothing about us without us. It was clear to us that we have to involve users in the process." — Idan Meir, RightHear

Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

Pertinent Links:
Lighthouse Guild
RightHear
NaviLens
GoodMaps
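The LiDAR description above reduces to a simple time-of-flight calculation: a laser pulse travels to a surface and back, so the measured round-trip time gives the distance. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds):
    """Time-of-flight ranging: the pulse goes out and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after 20 nanoseconds came from a surface about 3 m away.
print(f"{lidar_distance(20e-9):.2f} m")  # 3.00 m
```

Sweeping many such pulses across a room yields the "point cloud" Mike May describes, which the GoodMaps app later compares against the live camera view to locate a user.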
11/11/22 • 39:13
This podcast is about big ideas on how technology is making life better for people with vision loss. In 1997, Garry Kasparov lost an epic chess rematch to IBM’s supercomputer Deep Blue, but since then, artificial intelligence has become humanity’s life-saving collaborator. This episode explores how AI will revolutionize vision technology and, beyond that, all of medicine. Karthik Kannan, co-founder of AI vision-tech company Envision, explains the difference between natural intelligence and artificial intelligence by imagining a restaurant recognizer. He describes how he would design the model and train it with positive or negative feedback through multiple “epochs” — the same process he used to build Envision. Envision uses AI to identify the world for users who are blind or visually impaired, using only smartphones and smart glasses. Beyond vision tech, AI enables faster and more effective ophthalmic diagnosis and treatment. Dr. Ranya Habash, CEO of Lifelong Vision and a world-renowned eye surgeon, and her former colleagues at Bascom Palmer, together with Microsoft, built the Multi-Disease Retinal Algorithm, which uses AI to diagnose glaucoma and diabetic retinopathy from just a photograph. She also acquired for Bascom Palmer a prototype of the new Kernel device, a wearable headset that records brain wave activity. Doctors use the device to apply algorithms to brainwave activity, in order to stage glaucoma, for example, or identify the most effective treatments for pain. Finally, AI is revolutionizing drug discovery. Christina Cheddar Berk of CNBC reports that thanks to AI, Pfizer developed its COVID-19 treatment, Paxlovid, in just four months. Precision medicine, targeted to a patient’s genetic information, is one more way AI will make drugs more effective. These AI-reliant innovations may well lower drug costs, but the value to patients of having additional, targeted, and effective therapies will be priceless. The Big Takeaways: Natural vs. artificial intelligence, and the “restaurant recognizer.” Karthik Kannan, CEO and co-founder of Envision, explains the difference between natural and artificial intelligence by describing how humans recognize restaurants in a foreign city and comparing that to how he’d train a “restaurant recognizing algorithm.” Here’s a hint: the algorithm needs a lot more data. (A toy training loop in this spirit appears after this episode’s notes.) Sensor fusion AI. AI developers are interested in using different types of sensors together to give algorithms a sense of the world closer to human intelligence. One example is the use of LiDAR in the Envision app, in addition to the phone camera. Transhumanism. Humans don’t have LiDAR. Does that mean AI will surpass human capability? Karthik offers that some radiology AIs have higher accuracy than human radiologists, but he thinks it will be much more of a partnership between the human and the machine. Multi-Disease Retinal Algorithm. Dr. Ranya Habash and her colleagues at Bascom Palmer Eye Institute worked with Microsoft on an AI diagnostic tool. They fed the algorithm 86,000 images of eyes, labeled with relevant diseases, and taught the machine to diagnose eye disease from just a photograph, making remote diagnosis not just possible but inexpensive. The Brain-Machine Interface. Dr. Habash wrote a grant that earned Bascom Palmer a prototype of the Kernel device, a helmet-like device that measures brainwave activity. Doctors used this device to create a “brain-machine interface” which advances brain research on glaucoma, diabetic retinopathy, Alzheimer’s, and pain management. Bias in AI.
Karthik Kannan reminds us that the biggest threat that humanity faces from AI is bias encoded in the algorithms. This is a real harm that humans have already experienced, and AI engineers need to be extremely sensitive to ensure they are not encoding their own biases. AI for Drug Discovery. Christina Cheddar Berk, a reporter for CNBC, shares how the pace of drug discovery is set to speed up, thanks to AI algorithms and supercomputing power that can cycle through millions of possible chemical compounds per second to identify effective options. Pfizer used a similar process to develop Paxlovid in only four months. Tweetables: “The secret sauce is always in the data.” — Karthik Kannan, CEO and Co-Founder of Envision “Human intelligence is so holistic. We have so many sensors on our bodies. […] Whereas an AI is taught only images.” — Karthik Kannan, CEO and Co-Founder of Envision “I know what’s going to work and what’s not going to work within thirty seconds of seeing it. […] They need to show up with a smartphone. Then I’ll take them seriously.” — Dr. Ranya Habash, CEO, Lifelong Vision “I don’t think there’s anything more powerful in medicine than to be able to treat a patient and get rid of a problem that is plaguing them so much.” — Dr. Ranya Habash, CEO, Lifelong Vision “If you can measure it you can control it.” — Dr. Ranya Habash, CEO, Lifelong Vision “It strongly takes over the bias of whoever is actually feeding the data […] and I think that has much, much more potential for harm than an AI taking over humanity.” — Karthik Kannan, CEO and Co-Founder of Envision “The value for patients of having those additional therapies available; it’s hard to put a price on.” — Christina Cheddar Berk, reporter, CNBC Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Karthik Kannan Dr. Ranya Habash Zephin Livingston Christina Cheddar Berk
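To make Karthik Kannan’s “restaurant recognizer” thought experiment concrete, here is a toy training loop that learns from positive and negative examples over repeated epochs. It is a generic logistic-regression sketch, not Envision’s model; the three image “features” are invented stand-ins for whatever a real network would extract.

```python
# Toy "restaurant recognizer": learn weights from labeled examples over many epochs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                      # 200 images reduced to 3 made-up features
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # label: 1 = restaurant, 0 = not

w, b, lr = np.zeros(3), 0.0, 0.5
for epoch in range(100):                      # each epoch is one pass over all examples
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probability of "restaurant"
    grad_w = X.T @ (p - y) / len(y)           # gradient of the log-loss
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                          # feedback on errors nudges the weights
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # final predictions after training
accuracy = float(np.mean((p > 0.5) == y))
print(f"training accuracy after 100 epochs: {accuracy:.2f}")
```

The point of the sketch is the one Karthik makes: the “secret sauce” is the labeled data and the repeated passes over it, not any one line of code.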
10/7/22 • 35:51
This podcast is about big ideas on how technology is making life better for people with vision loss. Innovations in implant technology are advancing at lightning speed, profoundly impacting the lives of people who are blind or visually impaired. In On Tech And Vision, we’ve profiled some amazing new implant technologies that have the potential to restore people’s sight. But in this episode, we pump the brakes — because we need to address a critical part of the innovation process: the ethical frameworks that protect participants in early clinical trials, and the need for an updated framework that ensures patient protections without stifling innovation and development. Discussions between doctors and participants in clinical trials almost always focus on the new technology, very rarely on the manufacturer who sponsors the clinical trial, and almost never on the long-term commitment and financial viability of the company sponsoring the technology. And while clinical trial informed consent specifies who is responsible for removing the implants should they fail during the trial, that responsibility usually ends once the trial is over. At that stage, who will maintain or remove the implants that are still housed in patients’ bodies? In this episode, we talk about innovative implants such as the Argus II, which we featured in the first season of On Tech And Vision. The Argus II is a microchip implanted under the retina that, in combination with a special headset, provided some vision to people who otherwise had none. And while the technology was exciting, the company discontinued the retinal implant three years ago, and the Argus II was eventually sold to another pharmaceutical company. Dr. Joseph Fins, Professor of Medical Ethics and Professor of Medicine at Weill Cornell Medical Center in New York, joins us to share his thoughts on today’s big idea: How do we balance the life-changing potential of electroceutical implant technology with the ethics of caring for early participants — particularly after clinical trials are over? The Big Takeaways: Examples of electroceutical implants. Cochlear implants, retinal implants, and deep brain stimulators are examples of scientific advances that rely on in-dwelling devices. Regulatory framework today. The relationships between researchers and clinical trial participants are regulated by institutional review boards, which came out of the National Research Act of 1974. However, while this framework works well for drug trials, new issues specific to implants need to be addressed by new regulations. For example, who is responsible for people left with in-dwelling devices once trials are over? If the sponsoring company no longer supports their devices, are they victims of abandonment? Are the timelines for drug trial success too short to be relevant for implant device trials, since it may take the body longer to adapt to a new technology than to respond to a new drug? Ancillary care obligations. Henry Richardson, in his book Moral Entanglements: The Ancillary Care Obligations of Medical Researchers, writes that historically, researchers — to avoid conflicts of interest — did not assume a clinical care role. However, that is changing, as researchers realize they have an obligation to share actionable results with patients. The result is that there is even less of a “bright line distinction,” as Dr. Fins says, “between research and therapy.” Collective responsibility. Who is responsible for the long-term well-being of participants in electroceutical trials?
Dr. Fins suggests that the sponsoring company, the medical school where the research is taking place, and the government should share responsibility. It’s a collective problem, he says. Some solutions. One potential solution Dr. Fins imagines is requiring researchers, sponsoring companies, and researching universities to build into the costs of development insurance that covers long-term care for participants. He also offers that researchers and sponsoring companies that develop successful and adopted medical products could subsidize the field. Or, he suggests, a tax on gaming devices (a technology adjacent to electroceutical implants) could sustain people who are given in-dwelling devices in clinical trials. The law. The law needs to evolve to address the specific vulnerabilities of participants in electroceutical implant trials. Dr. Fins notes that there are provisions within the Americans with Disabilities Act that account for the assistive technologies that were relevant when the act was written in the nineties. According to Dr. Fins, these provisions could be read with a more contemporary lens to include the assistive technologies of today (which would encompass electroceutical implants). There is room for lawyers and legal scholars to shape the legal frameworks in place now, expanding the ADA’s coverage to protect participants in clinical trials for electroceutical implants. “Victims of Our Own Success.” Electroceutical implants are a miracle, says Dr. Fins. They are human ingenuity at its best. The science is harder to solve than the bureaucracy, but the bureaucracy to sustain medical advancements like these must catch up, or, ultimately, the vulnerability of trial participants threatens to impede scientific progress. Danger to the field. Clinical trials rely on willing participants, and when participants are not supported after trials end, it erodes participants’ trust across the field. Without a clear set of protections in place for participants in clinical trials, scientific and medical advancement in the area of electroceutical implants may be impeded. Tweetables: “I think this is a perfect rationale for insurance.” — Dr. Joseph Fins, Weill Cornell Medical Center “This is a huge problem. … We’re victims of our own success.” — Dr. Joseph Fins, Weill Cornell Medical Center “It's human ingenuity at its very best. And the fact that we can’t figure out the bureaucracy to sustain this? … The science is harder than the politics and the bureaucracy, but we’re being overmatched by the politics and the bureaucracy.” — Dr. Joseph Fins, Weill Cornell Medical Center “When stories like this come out it makes recruitment very hard.” — Dr. Joseph Fins, Weill Cornell Medical Center “These retinal implants, these deep brain stimulators, … they're gonna be looked upon as primitive halfway technologies 50 and 100 years from now. But we're only gonna get there if we're able to do this research.” — Dr. Joseph Fins, Weill Cornell Medical Center “Once you understand these facts, the ethics are pristine. They’re clear.” — Dr. Joseph Fins, Weill Cornell Medical Center Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Dr. Joseph Fins
8/12/22 • 31:01
This podcast is about big ideas on how technology is making life better for people with vision loss. People who are blind or visually impaired know all too well the challenges of living in a sighted world. But today, the capabilities of computer vision and other tech are converging with the needs of people who are blind and low-vision and may help level the playing field for young people with all different sensory abilities. These tools can pave the way for children’s active participation and collaboration in school, in social situations, and eventually, in the workplace, facilitating the important contributions they will make to our world in their adult lives. Access to educational materials is a consistent challenge for students and adults who are blind, but Greg Stilson, the head of Global Innovation at American Printing House for the Blind (APH), is trying to change that. Together with partner organizations Dot Inc. and Humanware, APH is on the verge of being able to deliver the “Holy Braille” of braille readers, a dynamic tactile device that delivers both Braille and tactile graphics in an instant, poised to fill a long-standing gap in the Braille textbook market. Extensive user testing is helping ensure the device is as useful as possible for people who are blind. Greg sees a future in which more inclusively designed and accessible video games, augmented reality (AR), and virtual reality (VR) will help children who are blind learn with greater ease, and better engage with their sighted peers. Enter Dr. Cecily Morrison, principal researcher at Microsoft Research in Cambridge, UK. Based on extensive research and co-designing with people who are blind, she and her team developed PeopleLens, smart glasses worn on the forehead that can identify the person whom the user is facing, giving the user a mental, spatial map of where classmates (as one example) are in the room. PeopleLens helps children who are blind overcome social inhibitions and engage with classmates and peers, a skill that will be crucial to their development, and in their lives, as they move into the cooperative workspaces of the future. The Big Takeaways: Robin Akselrud, an occupational therapist and assistant professor at Long Island University in Brooklyn, author of MY OT Journey Planner and The My OT Journey Podcast, explains how a baby who is born blind can be inhibited from reaching their first developmental milestones. She explains the stressors that these children might face upon attending school and describes the kinds of interventions that occupational therapy offers. Bryce Weiler, disability consultant, sports enthusiast, and co-founder of the Beautiful Lives Project, emphasizes how important it is for children who are blind or low-vision to have rich sensory experiences — and life experiences — which give them a chance to flourish and socialize with peers. The Beautiful Lives Project offers opportunities to do that. Greg Stilson, Director of Global Innovation at American Printing House for the Blind, and his team are developing a dynamic tactile device (DTD) that can switch seamlessly between Braille and tactile graphics — the “Holy Braille” of braille devices. The DTD is made possible by developments in pin technology by Dot Inc. and APH. Humanware developed the software for the device. Moving away from the piezoelectric actuation traditionally used to drive Braille pins has reduced the cost of the device significantly, and APH can funnel federal funds to reduce the price further, making the DTD a viable option for institutions.
Cecily Morrison, principal researcher at Microsoft Research in Cambridge, UK, and her team developed PeopleLens, a head-worn pair of smart glasses that lets the wearer know who is in their immediate vicinity. Dr. Morrison and her team tested it in classrooms for school-age children who are blind or visually impaired and found that PeopleLens reduces students’ cognitive load and helps young people overcome social anxiety and inhibitions that Robin Akselrud described at the top of the show. Wearers of PeopleLens learn to develop mental models of where people are in a room, and gain the confidence to engage others, or not, as they choose. Once social skills are built, students no longer have to wear the device but are set up for more successful social interactions at school and in their lives to come. Tweetables: “If they have a visual impairment, it really impacts them from early on, from that first development milestone.” — Robin Akselrud, occupational therapist and assistant professor at Long Island University in Brooklyn, author of MY OT Journey Planner and The My OT Journey Podcast “For children, just giving them that foundation of making friendships as they’re growing up, and the opportunity to be a part of something, sport can allow them to do that, and it also gives them the chance to do things that their peers are taking part in.” — Bryce Weiler, disability consultant, sports enthusiast, and co-founder of the Beautiful Lives Project “This was what the field regards as the ‘Holy Braille’ right? Having both [Braille and tactile graphics] on the same surface.” — Greg Stilson, Director of Global Innovation at American Printing House for the Blind “With the advancements of virtual reality and augmented reality, … along with the idea of making experiences and video games and things like that more inclusive, it’s going to create a more inclusive way for blind kids to engage with their sighted peers.” — Greg Stilson, Director of Global Innovation at American Printing House for the Blind “We found that ‘people’ was the thing that was most interesting to people. And that doesn’t surprise us. We are people, and we like other people.” — Dr. Cecily Morrison, principal researcher at Microsoft Research in Cambridge, UK “They can go out and find someone that they want to play with. They can choose not to talk to somebody by turning their head away from them, and the moment that they understand the agency they have in those situations is when we see a significant change in their ability to place people and to engage with them.” — Dr. Cecily Morrison, principal researcher at Microsoft Research in Cambridge, UK “When we look at the workplaces of today, they’re often very collaborative places. So you can be the best mathematician in the world, but if you struggle to collaborate, you’re not building the AI technologies of tomorrow. By helping kids ensure that they have a strong foundation in these attentional skills, we’re giving them a significant lift.” — Dr. Cecily Morrison, principal researcher at Microsoft Research in Cambridge, UK Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Robin Akselrud Bryce Weiler Greg Stilson Dr. Cecily Morrison
6/3/22 • 32:25
This podcast is about big ideas on how technology is making life better for people with vision loss. In 2012, Christine Ha won the third season of MasterChef, after having lost her vision in her twenties. Since her win, she has opened two restaurants in Houston, adapting to the challenges the pandemic still poses to restaurateurs in order to meet the needs of her community. In a similarly innovative way, Max Ostermeier, CEO and Founder of Implandata Ophthalmic Products out of Hannover, Germany, has reimagined the remote management and care of patients with glaucoma. Max and his team developed the EyeMate system, a microscopic implantable device and microsensor that measures intraocular pressure throughout the day. The EyeMate sends eye pressure data to an external device and uploads it to the patient’s eye doctor's office for analysis. This game-changing technology allows people with glaucoma to bypass regular trips to the ophthalmologist’s office to measure their eye pressure, key data in maintaining their eye health. We revisit a conversation with Sherrill Jones, who lost her sight due to glaucoma, in which she shares how difficult it was to adhere to compliance protocols. Max believes the EyeMate will evolve to be part of a closed-loop drug delivery system; that is, when the EyeMate registers a high pressure, medications could automatically be released into the patient’s eye, which could improve outcomes significantly. We dig into issues of compliance and closed-loop systems by considering diabetes. We talk to occupational therapist Christina Senechal, who has managed her diabetes for 27 years, and Dr. Carmen Pal, who specializes in internal medicine, endocrinology, diabetes, and metabolism in Lighthouse Guild’s Maxine and John M. Bendheim Center for Diabetes Care. The Big Takeaways: Innovation and adaptability have been key as Christine Ha, who won MasterChef in 2012 despite being blind, keeps the doors open on two Houston restaurants during the pandemic. In order to find opportunities through the pandemic, Christine has had to discover new needs that have to be met. The pandemic has made it harder for patients with glaucoma, many of whom are older, to get to the eye doctor. Dr. Max Ostermeier and his team have invented a microscopic implantable device, the EyeMate, that measures intraocular pressure throughout the day and then sends the data to a patient’s handheld device and their eye doctor's office for analysis, saving patients a trip. Because it takes many readings throughout the day, this system gives doctors a far more complete picture of pressure fluctuations than a single measurement at an office visit. The abundance of data can help patients use their medication more reliably, and in the future, algorithms could analyze that data to optimize treatment. In the future, Max envisions that the EyeMate could be paired with a drug delivery implant to close the loop between monitoring and drug delivery, as with diabetes. This would make it easier for patients to stay in compliance with glaucoma protocols. (A schematic sketch of such a closed loop follows these show notes.) We speak with Sherrill Jones, a patient with glaucoma, on why compliance for glaucoma was a challenge for her. We hear from Christina Senechal about her journey with diabetes, how hard compliance can be, and how new technologies can help, and from Dr. Carmen Pal about how closed-loop drug delivery systems that pair glucose monitoring with insulin delivery help patients stay in compliance. Tweetables: “There is opportunity and you just have to kind of think outside of the box and figure out what new ways … to think or do things, and what new needs are that need to be met.
And that's how you survive.” — Christine Ha, chef and restaurateur, Xin Chao and The Blind Goat, Houston “Glaucoma patients … are among the most vulnerable. … They have been asked to stay out of the doctor’s office. … But … they have an eye disease, which, if the pressure is too high, can damage the optic nerve.” — Max Ostermeier, CEO and Founder of Implandata Ophthalmic Products “Right now, ... a patient sees his eye doctor, maybe every few weeks or every few months and in between the office visits, the doctor has no understanding at all, what's going on with the patient.” — Max Ostermeier, CEO and Founder of Implandata Ophthalmic Products “I knew I had glaucoma, but the rapidness of it attacked me really fast. … and … I didn't faithfully use the drops in the beginning because I thought other remedies could help.” — Sherrill Jones, patient with glaucoma “I'm pretty sure in 10 years from now, we will see something like that, you know. A device, which … is releasing active ingredients … every time the pressure’s too high.” — Max Ostermeier, CEO and Founder of Implandata Ophthalmic Products “I'm a mom, … and … it’s … 10 o'clock. … I'm exhausted. I've had a full day … and my insulin pump starts beeping. … I just wanna go to bed! I'm so tired, but … I don't have that luxury of saying no.” — Christina Senechal, occupational therapist and patient with diabetes “It is quite beneficial … that … the patient does not … have to … fill the role of the pancreas, which is a little bit of an … unfair … task to ask of them.” — Dr. Carmen Pal, diabetes specialist in Lighthouse Guild’s Maxine and John M. Bendheim Center for Diabetes Care. Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Max Ostermeier Christine Ha Dr. Carmen Pal
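The closed-loop idea Max Ostermeier describes can be sketched in a few lines: read the pressure, compare it to a threshold, and release medication only when the pressure is high and enough time has passed since the last dose. This is entirely hypothetical; the EyeMate today only monitors, and the threshold, refractory period, and class names below are invented for illustration.

```python
# Hypothetical closed-loop controller pairing an IOP sensor with a drug-release implant.
from dataclasses import dataclass

@dataclass
class ClosedLoopController:
    threshold_mmhg: float = 21.0         # pressures above ~21 mmHg are commonly flagged
    min_hours_between_doses: float = 6.0 # invented refractory period to avoid overdosing
    last_dose_hour: float = -1e9

    def on_reading(self, hour: float, iop_mmhg: float) -> str:
        elapsed = hour - self.last_dose_hour
        if iop_mmhg > self.threshold_mmhg and elapsed >= self.min_hours_between_doses:
            self.last_dose_hour = hour
            return f"hour {hour:4.1f}: IOP {iop_mmhg} mmHg -> release dose"
        return f"hour {hour:4.1f}: IOP {iop_mmhg} mmHg -> log only"

# One simulated day of readings: only the first high reading triggers a dose.
controller = ClosedLoopController()
for hour, iop in [(0, 18), (4, 24), (6, 23), (12, 19)]:
    print(controller.on_reading(hour, iop))
```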
4/15/22 • 35:50
This podcast is about big ideas on how technology is making life better for people with vision loss. The Enigma machines that Germany used to encode messages during World War II were notorious for their complexity. Two Enigma experts — Dr. Tom Perera, a retired neuroscientist and founder of EnigmaMuseum.com, and Dr. Mark Baldwin, an expert on the story of Enigma machines — tell us how the Allies were able to crack the code by using input-output mapping. The human brain is similarly complex. Until recently, no one knew the code the retina used to communicate with the brain to create sight. Our guest, Dr. Sheila Nirenberg, a neuroscientist at Weill Cornell and Principal and Founder of Bionic Sight, has — using input-output mapping — cracked the retina’s neural code, enabling her to recreate the electric signals to the brain that could restore sight in people with retinal degeneration. She has created a set of goggles that convert a camera’s images into the code via pulses of light. And she relies on optogenetics, a relatively new technique in neuroscience that makes neurons responsive to light. In her clinical trial, Dr. Nirenberg injects the optogenetic vector into the eye, and trial participants who are completely blind, like Barry Honig, whom we speak with on this program, report being able to see light. In early studies, coupling the effects of the optogenetics with the code-enabled goggles has an even more impressive effect on patients’ vision. Dr. Nirenberg is also using her knowledge of the visual neural code to inform machine learning applications that could further support people who are blind or visually impaired. Clinical trial participants are important partners in the journey of discovery, Dr. Nirenberg says. Barry Honig agrees. He was happy to participate to help ease the burden on future children diagnosed with eye diseases that would otherwise result in blindness but, thanks to these advancements, someday may not. The Big Takeaways: Dr. Tom Perera and Dr. Mark Baldwin describe the history and workings of the Enigma machine, the complex encoding device that allowed Germany to take the upper hand at the beginning of World War II, a war in which communication was sent wirelessly, elevating the need for encryption. They then describe the Polish and British efforts to break Enigma, including standard decryption and Alan Turing’s Bombe machine. Similar to the Enigma, the human brain is incredibly complex, and much of the code that makes it run had yet to be deciphered, until now. Our guest, Dr. Sheila Nirenberg, conducted extensive input-output mapping on live human retinas. She was able to keep them alive in a dish outside the body for a few hours, during which time she’d show them videos. As the retina perceived the films, Dr. Nirenberg mapped the electrical pulses traveling through the retinal ganglion cells toward the brain. In this way, she was able to learn how the human eye sees and to decipher the code that allows our brains to perceive images. This code has been honed by evolution over millennia. Having cracked the retinal neural code, Dr. Nirenberg held the key to restoring vision in people who are blind from retinal degeneration. She developed goggles embedded with a camera to convert the visual world into the retina’s neural code using pulses of light, but she still had to get these pulses of light into an unseeing eye. Optogenetics is the key to creating light perception.
Optogenetics is a relatively new technique in neuroscience in which researchers use a genetically modified virus to deliver a gene derived from light-responsive algae; when the virus is injected into living cells, the gene recombines with the host cells’ DNA, making those cells responsive to light. In Dr. Nirenberg’s case, she injects the optogenetic vector into the patient’s retina. Most patients report the restoration of light perception to varying degrees with the optogenetics alone. Coupled with the goggles and Dr. Nirenberg’s visual neural code, this system could restore some vision in people who are blind. The trial is still in its early stages. Dr. Nirenberg also uses the retinal neural code to improve how computers see. Computer vision experts are teaching computers to “see” more like humans, and if they can equip algorithms with humanity’s neural code, the goal of human-like vision becomes more easily achievable. (A toy version of this retinal input-output encoding appears after these show notes.) Barry Honig is a participant in the clinical trial. After his optogenetic injection, he was able to perceive the menorah at Hanukkah. From different perspectives, Barry and Dr. Nirenberg agree on the importance of trial participants. Tweetables: “The analogy between the Enigma and the human brain is rather interesting. The brain has … 10 billion brain cells, … the Enigma machine has 10 to the 114th power possible internal connections. And that is actually a lot more.” — Tom Perera, neuroscientist and founder of EnigmaMuseum.com “Millions of years of evolution honed the retina into pulling out … the right features. And then it converts those features into electrical pulses and sends them up to the brain. That’s what a normal retina does.” — Dr. Sheila Nirenberg, neuroscientist at Weill Cornell, and Principal and Founder of Bionic Sight “And so … you’re watching the retina watch a movie. And … you’re seeing what it would be sending … to your brain.” — Dr. Sheila Nirenberg, neuroscientist at Weill Cornell, and Principal and Founder of Bionic Sight “So I went from … sort of bare light perception to … this recent past Hanukkah, I … said to my … wife, the menorah — I think it was the eighth day and … the whole menorah was lit!” — Barry Honig, clinical trial participant and founder of Honig International “Whether or not that’s perfect, you know, time will tell, but … to be able to spare kids that, and equally as importantly, to spare their parents that, you know, if I could … just take an injection and be a part of it, then, … that’s a lot of reward.” — Barry Honig, clinical trial participant and founder of Honig International Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Dr. Sheila Nirenberg Dr. Tom Perera Dr. Mark Baldwin Barry Honig
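For readers who want a feel for what “input-output mapping” produces, here is a toy linear-nonlinear (LN) encoder, the textbook first approximation of how a retinal ganglion cell turns an image into a firing rate. It is a generic illustration with an invented center-surround filter, not Bionic Sight’s proprietary code.

```python
# Toy linear-nonlinear (LN) retinal encoder: image in, predicted firing rate out.
import numpy as np

rng = np.random.default_rng(1)
stimulus = rng.random((8, 8))   # one small grayscale frame, values in 0..1

# Center-surround spatial filter: excited by the center, inhibited by the surround,
# a classic idealization of a retinal ganglion cell's receptive field.
yy, xx = np.mgrid[0:8, 0:8]
dist2 = (yy - 3.5) ** 2 + (xx - 3.5) ** 2
spatial_filter = np.exp(-dist2 / 2.0) - 0.5 * np.exp(-dist2 / 8.0)

drive = float(np.sum(spatial_filter * stimulus))      # linear stage: filter the image
firing_rate = np.maximum(0.0, np.tanh(drive)) * 100   # nonlinear stage: rate in spikes/sec
print(f"predicted firing rate: {firing_rate:.1f} spikes/sec")
```

Input-output mapping, in this picture, is the process of watching real retinas respond to movies and fitting the filter and nonlinearity until the model’s output matches the measured spike trains.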
3/4/22 • 31:05
This podcast is about big ideas on how technology is making life better for people with vision loss. Every day, people who are blind or visually impaired use their hearing to compensate for vision loss. But when we lose our vision, can we access our visual cortex via other senses? We call this ability of the brain to change its activity “plasticity,” and brain plasticity is an area of active research. In this episode, we’ll explore how, through sensory substitution, audio feedback can, in some cases, stimulate a user’s visual cortex, allowing a user to — without sight — achieve something close to visual perception. Erik Weihenmayer — world-class mountain climber, kayaker, and founder of No Barriers, who lost his vision as a teenager due to retinoschisis — brings us to the summit of Everest by describing what it sounds like. He explains how his hearing helps him navigate his amazing outdoor adventures safely. We also speak with Peter Meijer, the creator of The vOICe, an experimental technology that converts visual information into sound and has been shown to activate users’ visual cortices, especially as users train on the technology and master how to interpret the audio feedback. We hear an example of what users of The vOICe hear when it translates a visual image of scissors into audio. (A simplified sketch of this image-to-sound mapping follows these show notes.) Erik Weihenmayer shares his experience with Brainport, a similar sensory substitution technology featured in our episode “Training the Brain: Sensory Substitution.” While research is ongoing in the areas of sensory substitution and brain plasticity, it’s encouraging that some users of The vOICe report that the experience is like seeing. In the spirit of Erik Weihenmayer, one user even uses it to surf. The Big Takeaways: Erik Weihenmayer, despite having lost his vision as a teenager, has become a world-class adventurer. He summited Everest in 2001 and then summited the highest peaks on each continent. He has also kayaked 277 miles of whitewater rapids in the Colorado River through the Grand Canyon. He explains how his sense of hearing, in addition to his other senses, and technologies, teams, and systems, helps him achieve his goal to live a life with no barriers. Dutch inventor Peter Meijer developed a technology called The vOICe, which converts a two-dimensional image from a camera into audio feedback. Dr. Roberts interviews Dr. Meijer about this technology and gives listeners a chance to hear what The vOICe sounds like. Users who train on this system interpret the sounds to make sense of the original visual image. Research on The vOICe shows that this happens in the brain’s visual cortex. While some users say the experience is more auditory than visual, others report the experience as akin to sight. The vOICe relies on the principles of sensory substitution established by Paul Bach-y-Rita, the founder of the field. We discussed sensory substitution in our episode “Training the Brain: Sensory Substitution,” which featured the Brainport device by WICAB. Erik has used Brainport, and in this episode, he describes how the Brainport allowed him to catch a ball rolling across a table, an exciting feat for someone who is blind. He adds that sensory substitution takes serious practice to master. The vOICe is still in the experimental stage, and more research has to be done on sensory substitution. However, neuroscientists studying The vOICe have shown that it stimulates the visual cortex, and some users report visual results. One user of The vOICe recently reported using the technology to surf.
Tweetables: “When there’s a lack of things that the sound bounces off of, like on a summit, the sound vibrations just move out through space infinitely and that’s a really beautiful awe-inspiring sound.” — Erik Weihenmayer, No Barriers. “She rolled this white tennis ball across. It lit up perfectly. [...] I’m like, ‘Holy cow, that is a tennis ball rolling towards me.’ And I just naturally reached out and I grabbed this tennis ball.” — Erik Weihenmayer, No Barriers (on using the Brainport device by WICAB) “They applied transcranial magnetic stimulation to ... temporarily disrupt the processing in the visual cortex of a user of The vOICe. ... So this showed that apparently, the visual cortex was doing visual things again.” — Dr. Peter Meijer, Seeing with Sound, The vOICe. “Some ... insist that the sensation of working with the soundscape of The vOICe is truly visual. ... But ... most ... users of The vOICe ... say, ‘It’s ... auditory but I can use it to visually interpret things.’” — Dr. Peter Meijer, Seeing with Sound, The vOICe. “Yeah, sure, if you want a really really safe life, you can hang out on the couch and you can watch Netflix. But I think most people want to be out there in the thick of things. They want to be in the food fight.” — Erik Weihenmayer, No Barriers. Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Peter Meijer Erik Weihenmayer No Barriers
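As a rough illustration of the image-to-sound principle behind The vOICe, the sketch below scans an image left to right, maps each pixel row to a sine tone (higher rows give higher pitch), and maps brightness to loudness. The frequencies, timing, and scan rate here are arbitrary choices, not Dr. Meijer’s actual algorithm or parameters.

```python
# Minimal image-to-sound "soundscape": one burst of tones per image column.
import numpy as np

def sonify(image, sample_rate=22050, column_ms=50):
    rows, cols = image.shape
    freqs = np.geomspace(3000, 500, rows)        # top row maps to the highest pitch
    n = int(sample_rate * column_ms / 1000)      # samples per column
    t = np.arange(n) / sample_rate
    audio = []
    for c in range(cols):                        # left-to-right scan across the image
        column = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                     for r in range(rows))       # brightness controls loudness
        audio.append(column / rows)              # keep the amplitude bounded
    return np.concatenate(audio)

# A toy 8x8 "image" with a bright diagonal becomes a descending pitch sweep.
img = np.eye(8)
samples = sonify(img)
print(f"{samples.size} samples, {samples.size / 22050:.2f} seconds of audio")
```

Trained users learn to run this mapping in reverse in their heads, recovering the shape from the sweep, which is where the visual cortex appears to get involved.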
1/26/22 • 30:02
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is about exciting and emerging technologies that will someday allow people who are blind or visually impaired to navigate fully autonomously. In this episode, you will meet Jason Eichenholz, the Co-Founder and CTO of Luminar, and his manufacturing engineer, Nico Gentry. Luminar’s LiDAR technology is instrumental to the development of self-driving cars, but this same technology could be useful for people who are blind or visually impaired, who also have to navigate autonomously. You’ll hear from Thomas Panek, the President and CEO of Guiding Eyes for the Blind, an avid runner who dreamed of running on his own. He took this unmet need to a Google hackathon, and Ryan Burke, a Creative Producer at Google Creative Lab, put together a team to develop a solution that turned into Project Guideline. Kevin Yoo, Co-Founder of WearWorks Technology, is using inclusive design to develop Wayband, a navigation wristband that communicates directions to users via haptics. The Big Takeaways: Since LiDAR uses a much shorter wavelength than radar or sonar, it creates a far more detailed image; unlike a camera, it also measures the distance to each element in the landscape, making it perfect for self-driving cars. And the fact that LiDAR sensors have gotten better and cheaper for self-driving cars has made them available as well for technologies that help people who are blind and visually impaired. Luminar’s Jason Eichenholz and his engineer, Nico Gentry, who is visually impaired, dive deep into the broad benefits of LiDAR for self-driving cars and for autonomously navigating people. As an avid runner who is visually impaired, Thomas Panek, President and CEO of Guiding Eyes for the Blind, decided to take matters into his own hands and enlist Google to help build him a tool that would allow him to run without a guide — human or canine. Ryan Burke weighs in on how his prototype, Project Guideline, helps people like Thomas run safely. We can’t talk about running safely without talking about GPS. Kevin Yoo of WearWorks Technology has developed a wearable band called Wayband to help pedestrians navigate different paths and terrain more accurately by connecting to GPS maps. And he’s developing a haptic language that will allow users to understand nuanced directions without the need for visual or audio feedback. (A toy bearing-to-vibration sketch follows these show notes.) Tweetables: “The big difference of LiDAR technology over sonar or radar is the wavelength of light. So because the wavelength of light is so much shorter, you're able to get much higher spatial resolution. [...] So what you're able to do is to have [...] camera-like spatial resolution with radar-like range; you're getting the best of both worlds.” — Jason Eichenholz, on LiDAR technology “The learning curve to be able to run as fast as my legs could carry was being able to train to those beeping sounds and fine-tuning those sounds with the Google engineering team.” — Thomas Panek “It's a compass; it's a vibration compass. And literally, as you rotate, [...] we can literally guide you down the line of a curvy road by creating this Pac-Man-like effect. So what we call these dew points. So as soon as you collect the dew point, it will guide you to the next one.” — Kevin Yoo Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.
Pertinent Links: Lighthouse Guild Jason Eichenholz Thomas Panek Ryan Burke Kevin Yoo
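A haptic compass of the kind Kevin Yoo describes reduces to two calculations: the GPS bearing from the user to the next waypoint, and the signed difference between that bearing and the user’s current heading. The sketch below illustrates the idea; the dead zone, the buzz-left/buzz-right outputs, and the coordinates are invented and are not WearWorks’ actual haptic language.

```python
# Toy haptic compass: vibrate on the side the user should turn toward.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def haptic_cue(heading_deg, target_bearing_deg, dead_zone=10):
    # Normalize the heading error to the range -180..180 degrees.
    error = (target_bearing_deg - heading_deg + 540) % 360 - 180
    if abs(error) <= dead_zone:
        return "no vibration: on course"
    return f"buzz {'right' if error > 0 else 'left'} ({abs(error):.0f} degrees off)"

# Example waypoint leg: Empire State Building toward Grand Central Terminal.
b = bearing_deg(40.7484, -73.9857, 40.7527, -73.9772)
print(haptic_cue(heading_deg=20.0, target_bearing_deg=b))
```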
12/10/21 • 25:23
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is sonar, and a somewhat similar technology called LiDAR! Can we use the latest sonar technology for obstacle detection the way bats and other nocturnal creatures do? There have been many exciting advances in sonar sensors that now make this possible for people who are blind. However, unlike bats, we won’t need to receive feedback signals through our ears. Advances in haptic technologies and languages make communication through touch possible. Dr. Cal Roberts talks with Dr. Matina Kalcounis-Rueppell from the College of Natural and Applied Science at the University of Alberta, Ben Eynon and Diego Roel from Strap Technologies, Marco Trujillo of Sunu, and Sam Seavey of The Blind Life YouTube Channel to find out more. The Big Takeaways: How does a bat see what it sees? Dr. Kalcounis-Rueppell studies bats and how they use sound to thrive in their nighttime world. Bats use a series of echoes to see a 3D view of their environment, but their world isn’t always so simple. There’s rain, there are leaves, and there are other flying creatures that bats need to detect with their sonar. Similarly, people with vision impairment have to use their hearing to navigate complex auditory environments. Strap Technologies uses sonar and LiDAR sensors, strapped across the chest, that help people who are blind detect obstacles. These kinds of sensors have been used to park spacecraft, but with recent developments, they’re finally small enough that a human can wear them in a compact way. Ben and Diego share how it works. (The basic echo-ranging arithmetic appears after these show notes.) Unlike sonar, LiDAR technology uses pulsed laser light instead of sound waves. Though bats have been honing their echolocation skills for millennia, interpreting information haptically, rather than sonically, is an adaptation that humans, using technologies like Strap, can make. Haptic information can help us navigate without sight through the use of vibrations, which is great news because it means we can leave our ears open to process our active world. More specifically, Ben and Diego suggest that people may no longer need to use a cane to detect obstacles. Ben and Diego are excited about the future. With their technology, they hope to create quick-reacting haptic technology so people who are blind can one day ride a bike or run a race. Infrared or radiation sensors could be added in the future to detect other hazards in the environment. The more user feedback they receive, the easier it will be to add on these product enhancements. Another way we can approximate sight is through echolocation. However, how easy is it for us to hear echoes, really? For Marco at Sunu, it’s actually a natural skill we can learn to develop. As with Strap Technologies’ device, the process of learning echolocation can be easier if you’re wearing a Sunu Band. Sam Seavey was diagnosed at age 11 with Stargardt’s disease. He decided to use his voice and video skills to create a YouTube review channel for those who need to use assistive tech. The positive feedback from the community keeps him going. Sam has personally reviewed the Sunu Band, and you can check out the link to his review in the show notes!
Tweetables: “They parked spacecraft with these same sensors, and recent developments have really pushed the miniaturization of the components, such that a human being can now wear them in a very compact form factor.” — Ben Eynon “He said, ‘I’m walking faster than I have in a long, long time,’ because he started to trust that the haptic vibrations were telling him every obstacle in the way.” — Ben Eynon shares the reaction from a user who is visually impaired testing Strap “We're changing our environment around us in ways that also change the acoustic environment.” — Dr. Matina Kalcounis-Rueppell “How is it that we have self-driving cars, we have rockets that land themselves like, we have a better iPhone every year, but we don’t have something better than a stick? How can this happen? We still have people moving around and having issues every day.” — Marco Trujillo Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Dr. Matina Kalcounis-Rueppell Ben Eynon & Diego Roel Marco Trujillo Sam Seavey — Sunu Band Review
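The ranging principle shared by bats and the ultrasonic sensors in this episode is one line of arithmetic: an emitted pulse travels out and back, so the obstacle distance is half the speed of sound times the round-trip time. A minimal sketch, with illustrative numbers:

```python
# Echo ranging: distance = speed * round-trip time / 2 (the pulse travels out and back).

SPEED_OF_SOUND_M_PER_S = 343.0   # in dry air at about 20 degrees Celsius

def echo_distance_m(round_trip_s: float) -> float:
    return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2.0

# A 12-millisecond round trip puts the obstacle about 2 meters away.
print(f"{echo_distance_m(0.012):.2f} m")
```

LiDAR works the same way, just with light instead of sound, which is why the timing electronics must be so much faster.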
10/29/21 • 23:21
This podcast is about big ideas on how technology is making life better for people with vision loss. Close your eyes. Raise your hands. Reach out and touch the nearest surface. What are you touching? A desktop, a leather steering wheel cover, a porcelain cup, a plastic keyboard? Our sense of touch and the way in which we interpret the materials in our environment are fundamental to our experience of the world. This episode’s big idea is new developments in tactile technologies. You’re probably familiar with one of the oldest technologies, Braille, which was invented in 1824 by Louis Braille, a Frenchman who was blind by the age of three. Braille, which has undergone numerous refinements since its invention, has led the way in helping people who are blind read, write, and interact with the world around them. But as useful as Braille is, it has its limits: Braille is used for text; it can’t convey images. Two individuals who are working to develop technologies that will one day help people with vision impairment to experience images and graphics are materials scientist Dr. Julia R. Greer from Caltech and physicist Dr. John Gardner from Oregon State University. The Big Takeaways: Did you know most people who are blind still don’t have access to good graphic descriptions? When Dr. John Gardner suddenly and unexpectedly lost his eyesight, he realized he could no longer see the wave graphs in his research. He was unable to teach the concepts in his lectures because his students were too inexperienced to interpret the graphs for him. He had to fax his research to a select number of experts to help him interpret the graphical data accurately. Eventually, Dr. Gardner came to develop a product called Tiger Software that, working with an embosser, enabled him to read with his hands — and ultimately carry on his work. Braille books are expensive and they take up large amounts of space. A normal algebra book could be 50 volumes in Braille. Dr. Gardner’s software has been life-changing for many students who are blind and eager to learn. It produces tactile graphics that can be perceived by touch. So instead of relying only on Braille text or audio descriptions of images, students can use this technology as a complement to those tools, using special printers that create graphics, charts, and images that can be “read” by touch. (A sketch of the core quantization step, reducing an image to a grid of raised pins, follows these show notes.) There are a few tactile tools people who are blind can use: Braille books (which take up large amounts of space), Braille embossers (which offer dynamic printing of Braille but not a complete experience), Braille-and-tactile embossers (like Dr. Gardner’s system), and refreshable Braille displays (which make downloadable content like ebooks available in braille, but are expensive). Promising research is coming out of the lab of Dr. Gardner’s colleague, Dr. Julia Greer. Her team discovered and is now developing a special electroactive polymer, a substance that exhibits a change in size or shape when stimulated by an electric field. They hope to use it to create raised tactile images on a plane. It has potential beyond Braille displays and could one day be used by students, architects, artists, scientists, engineers, or anyone who needs tactile representations of visual data. Tweetables: “We went back to Braille embossing and we had to develop a much higher resolution embossing format, and one of my students and I invented a new technology.” — Dr. John Gardner “I need to touch it. Touching is like seeing it.
Hearing about a complicated diagram is not the same as seeing it with my fingers.” — Audrey Schading “It seems to be a natural connection to have this polyelectrolyte gel that swells in response to an applied field. Now that we have it, we can shape it into a 3D shape of some kind.” — Dr. Julia R. Greer Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Dr. Julia R. Greer Dr. John Gardner Tiger Software Suite Audrey Schading
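One way to picture what a tactile graphics device must do is to reduce an image to a coarse grid of raised or lowered pins. The sketch below does only that quantization step; real systems like Dr. Gardner’s embossers involve much more (Braille zones, resolution choices, pin or dot physics), and the grid size and threshold here are arbitrary choices for illustration.

```python
# Quantize a grayscale image into a coarse boolean grid of "raised pins."
import numpy as np

def to_pin_grid(image, grid=(10, 10), threshold=0.5):
    """Downsample a grayscale image (values 0..1) to a boolean pin grid."""
    rows, cols = image.shape
    gr, gc = grid
    pins = np.zeros(grid, dtype=bool)
    for i in range(gr):
        for j in range(gc):
            cell = image[i * rows // gr:(i + 1) * rows // gr,
                         j * cols // gc:(j + 1) * cols // gc]
            pins[i, j] = cell.mean() > threshold   # raise the pin if the cell is bright
    return pins

# A bright square on a dark background becomes a raised square of pins.
img = np.zeros((100, 100)); img[30:70, 30:70] = 1.0
for row in to_pin_grid(img):
    print("".join("o" if p else "." for p in row))
```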
9/17/21 • 26:02
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea highlights how innovations don’t happen in a vacuum, but are rather the product of a long chain of science, research, and developments that build on each other. Dr. Shelley Fried’s work exemplifies this process. It took him a career’s worth of experiments and adjustments to enable his cortical brain implants to bypass the eye and restore the patient’s ability to perceive light. He had a lot of obstacles to overcome, everything from circumventing the brain’s natural inflammatory response to getting the research published. One thing is clear: breakthroughs take time, and you cannot give up in the process. Your work often becomes an iteration of an iteration. Dr. Fried took inspiration from the artificial retina, which was prototyped from a cochlear implant. Dr. Fried’s revolutionary technology is another step towards a world in which no person is limited by their visual capacity. The Big Takeaways: A cochlear implant is a neuroprosthetic device surgically implanted in the cochlea, the inner part of the ear that is responsible for the transmission of nerve impulses into the auditory cortex of the brain. First developed in the 1950s, the modern form was honed in the 1970s with help from a NASA engineer. Dr. Mark Humayun took design cues from the cochlear implant when he was developing the Argus II retinal implant. What is a retinal prosthesis and how does it work? The simplest way to explain it is that it’s an array of electrodes that stimulates the retina to help restore lost vision. Retinal prostheses work for some causes of blindness but not all. For example, this treatment is not recommended for people with advanced glaucoma. Dr. Fried took inspiration from retinal prostheses to develop the cortical brain implant. These implants are revolutionary because they go directly to the source: the brain. The cortical brain implant gathers information externally and converts that data into stimulation the brain can perceive. However, vision science doesn’t end there! It keeps building on itself. In this case, the cortical implant technology was inspired by artificial retinas, which took their inspiration from the cochlear implant. How do you target a single neuron? Dr. Fried’s innovative solution was the use of coils, which are smaller than a human hair, to help specify which neurons need activation. When you go directly to the brain, some complications occur. The brain sees the implant as a threat and creates an inflammatory response that walls off the electrodes and blocks them from communicating with the surrounding neurons. These coils bypass the body’s natural inflammatory response and keep the lines of communication open. This innovation in technology did not happen overnight. It took over a year and a half just to get the coil experiments to work, and that doesn’t include all the other methods Dr. Fried experimented with that didn’t succeed. Science is about building upon prior research, and it takes time and a lot of experimentation before a solution will work. Tweetables: “Cochlear implants had taught us that if you even put some of a rudimentary signal in the ear, that the brain can start to use it… So we sort of reconfigured a cochlear implant and used it to stimulate the retina.” — Dr. Mark Humayun “In its simplest form, a retina prosthesis is an array of electrodes.
The common one is 6x10 electrodes and each electrode is designed to stimulate a small portion of the retina.” — Dr. Shelley Fried “We run into additional problems when we go into the brain that don’t exist in the retina. One of them is the brain has a huge inflammatory response to the implant.” — Dr. Shelley Fried “Coils are not only more stable over time, but they’re more selective. They’re able to create a smaller region of activation. And so we think we can get much higher acuity with coils than we can with conventional electrodes.” – Dr. Shelley Fried “Our advance was that we showed that we could really shrink down coils to the sub-millimeter size and that they would still be effective, that they can still induce neural activation.” – Dr. Shelley Fried “I was fortunate that I certainly was not one of the pioneers in terms of being one of the first people to be implanted. [B]eing able to rely on other people’s experiences and being able to trust the process was really helpful.” – Rebecca Alexander, cochlear implant recipient Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Rebalexander.com Dr. Shelley Fried Guest Bios: Dr. Shelley Fried Shelley I. Fried, PhD, is an Associate Professor of Neurosurgery at Harvard Medical School and an Associate Professor in the Department of Neurosurgery at Massachusetts General Hospital. He is the developer of cortical brain implants. Dr. Fried was inspired to do this work after reading a New York Times article on the in-depth efforts to restore vision to blind Vietnam veterans returning home. Dr. Mark Humayun Mark S. Humayun, MD, PhD, is Director, USC Ginsburg Institute for Biomedical Therapeutics and Co-Director, USC Roski Eye Institute. Dr. Humayun has devoted much of his career to clinical and scientific research in ophthalmology and bioengineering, becoming both a biomedical engineer and professor of ophthalmology. You can hear more about him and his work in Episode 4 — The Development of Artificial Vision. Rebecca Alexander Rebecca Alexander is an author, psychotherapist, group fitness instructor, advocate, and extreme athlete who is almost completely blind and deaf. Born and raised in the San Francisco Bay Area, she currently lives in New York City. Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, MD, is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr.
Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
9/8/21 • 29:49
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is: How will remote diagnostic tests change ophthalmology and vision care? It might be a foreign concept for some, but the specialists in today’s episode, Dr. Peter Pham and Dr. Sean Ianchulev, founders of Keep Your Sight, a nonprofit focused on remote diagnostic vision tests, share how they can conduct more reliable perimetry tests that help detect macular degeneration, glaucoma, and other conditions that lead to vision loss and eventually blindness — remotely, while patients stay home. Developments like these in remote diagnostics are a stepping stone for the ways machine learning will impact the field of ophthalmology in the future. This episode also features Dr. Einar Stefansson and Dr. Arna Gudmundsdottir, developers of the app Retina Risk, which helps with remote risk assessment of diabetic eye disease for people with diabetes, as well as Sherrill Jones, who lost her vision due to glaucoma. The Big Takeaways: Retina Risk was created to help people with diabetes assess, in real time, their individualized risk for sight-threatening diabetic retinopathy. The app was created back in 2009, when the concept of using technology and algorithms to calculate risk was still quite foreign to most people. What goes into taking a regular perimetry test today? Patients have to come into the office, wait, register, wait some more, and get taken to a dark room to be positioned correctly; after 20-30 minutes, they get a result. Now, there’s an easier way: patients can take these tests at home. (The simple blind-spot trigonometry that makes home positioning possible is sketched at the end of these show notes.) Why is telescreening so important? Dr. Pham and Dr. Ianchulev noticed it could take months for patients to be scheduled for routine visual field tests. By that time, the glaucoma may have advanced, in some cases rapidly. There was an unmet need here, and a better way to serve people more quickly and efficiently, especially people from rural communities who did not have ready access to healthcare. Medicare did not reimburse doctors for these services unless they were conducted within the physician’s office. This created a lot of roadblocks for telemedicine, despite the technology having been available for the last 15-plus years. Thankfully, in December of 2020, policies were changed so that doctors would be reimbursed for remote patient monitoring. Tweetables: “We know that our blind spot is 15 degrees away from fixation and, with simple trigonometry, you can now use that blind spot to help position patients correctly in front of the computer monitor. We can now use online technology to perform visual field tests.” — Dr. Peter Pham “It was our goal to do a hardware-free digital/virtual device. We felt in ophthalmology, we’re kind of lucky. We are looking at a visual function. So perimetry lends itself to a fully virtual software-as-a-service device.” — Dr. Sean Ianchulev “I think technology will help us get to the next level. Technology has been around for this, but it hasn’t been applied for this.” — Dr. Sean Ianchulev Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Retina Risk Keep Your Sight.org Guest Bios: Dr. Peter Pham & Dr. Sean Ianchulev are both Co-Founders of Keep Your Sight. Dr. Pham is a board-certified ophthalmologist who has devoted his professional life to restoring sight and helping patients keep their vision. As a surgeon and clinician, Dr.
Pham treats conditions such as glaucoma, cataract, and macular degeneration, all of which can cause blindness. As a researcher, he worked on the development of a novel delivery system for introducing large-sized molecular compounds into thousands of living cells simultaneously. Realizing the importance of technology and innovation for screening and prevention, Dr. Pham teamed up with Dr. Ianchulev to develop the KYS telemedicine system for vision health. Dr. Ianchulev has been on the cutting edge of innovation, making an impact in the treatment of major eye diseases such as macular degeneration and glaucoma. He was instrumental in the development of many new therapies and advances, such as Lucentis for AMD and Diabetic Retinopathy, intraoperative aberrometry for high-precision cataract surgery, micro-stent technology for glaucoma, the miLOOP interventional technology for cataract surgery, and others. Dr. Einar Stefansson & Dr. Arna Gudmundsdottir are both the Co-Founders of Retina Risk. Dr. Stefansson is a leader in the field of diabetic eye disease and diabetic screening and head supervisor for product development and clinical science. Dr. Stefansson graduated from the University of Iceland Medical School in 1978 with honors. He received a PhD degree in physiology from Duke University in 1981 followed by a residency at Duke. Dr. Gudmundsdottir takes an active role in all product development and clinical testing. Her expertise gives valuable insight into practical usage of products and medical approaches. Dr. Gudmundsdottir graduated from the University of Iceland Medical School in ’92. She undertook a fellowship program in endocrinology at the University of Iowa Hospital and Clinics. Sherrill Jones lives in New York City and volunteers administrative services in Lighthouse Guild's Volunteer Services department. Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, MD, is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. 
He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
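The positioning trick Dr. Pham describes reduces to one line of trigonometry: the physiological blind spot sits roughly 15 degrees temporal to fixation, so the on-screen distance at which a stimulus vanishes reveals how far the patient is sitting from the monitor. Here is a minimal Python sketch of that idea; it is illustrative only, not Keep Your Sight's code, and the function name and numbers are hypothetical.

```python
import math

BLIND_SPOT_DEG = 15.0  # the blind spot lies roughly 15 degrees temporal to fixation

def viewing_distance_cm(offset_cm: float) -> float:
    """Estimate eye-to-screen distance from the on-screen offset between the
    fixation mark and the point where a moving stimulus vanishes into the
    blind spot: distance = offset / tan(15 degrees)."""
    return offset_cm / math.tan(math.radians(BLIND_SPOT_DEG))

# If the stimulus disappears 8 cm from the fixation mark, the eye is about
# 30 cm from the monitor.
print(round(viewing_distance_cm(8.0), 1))  # ~29.9
```

With the distance known, the test software can draw each subsequent stimulus at a true visual angle, which is what makes a hardware-free perimetry test plausible on an ordinary monitor.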
9/8/21 • 30:38
This podcast is about big ideas on how technology is making life better for people with vision loss. This episode’s big idea is navigation: how to implement a navigation solution that enables people with vision impairment to travel cities broadly, how and when they want to, independently. Dr. Roberts talks with Javier Pita, creator of one such technology, NaviLens, which marries location finding with information. Dr. Roberts also talks with representatives of New York City’s Metropolitan Transportation Authority, which operates one of the biggest transit systems in the world. They discuss the importance of accessible public transportation for people who are visually impaired and how NaviLens technology can help make independent navigation a reality. The Big Takeaways: The NaviLens system uses improved QR technology, a new type of code made up of four colors that enables it to store more information than a black-and-white QR code. Using a smartphone, the NaviLens app scans the area. Once it picks up a unique NaviLens code, the app speaks the embedded information to the user, along with their distance and direction from the code (a toy version of that distance-and-bearing estimate follows this episode’s notes). As long as the code appears anywhere in the field of view of the smartphone camera, the code is detected and the information is delivered. NaviLens is more accurate than GPS technology because it takes into account the smaller distances that are crucial to navigation for people who are visually impaired. NaviLens codes can be read up to 12 times farther away than QR or bar codes, as well as at up to a 160-degree angle. Future advances to the NaviLens technology include a 360-degree technology that will register and retain the user’s location, so the system can still tell where they are and guide them to their destination even if they lose contact with the code. In addition, the NaviLens GO app uses advanced technology to help users navigate indoor spaces such as stores and to locate items in the store. This technology is elegant, inexpensive, flexible, easy to use, and fits seamlessly into a user’s life. It is already part of public transportation in Barcelona, and cities like New York are testing it and hope to make it a more integral part of their public transportation systems. Tweetables: “Public transportation is the answer to so much inequity across all urban areas, and nonurban areas. If we can work to make the system as safe as possible for any range of abilities, that would be an enormous win, and a huge piece of making public transit truly public transit.” — Mira Philipson, Systemwide Accessibility Analyst, Metropolitan Transportation Authority New York City Transit “I could walk down the hallway and it’s telling me when I’ve arrived at this department and the door is right in front of me — it really gives me that autonomy that I really crave.” — Ed Plumacher, Adaptive Technology Specialist, Lighthouse Guild “We began in public transportation because for us and the users on our team, it is super important to make public transportation more accessible.” — Javier Pita, Founder and CEO, NaviLens “Accessibility needs to be built into products, websites, software, whatever it is, from the ground up, because it will just lead to a better product overall.” — Gian Carlo Pedulla, Supervisor, NYC Department of Education and Member, Advisory Committee for Transit Accessibility, Metropolitan Transportation Authority New York City Transit Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.
Pertinent Links: Lighthouse Guild NaviLens NaviLens GO Guest Bios: Javier Pita Lozano, Founder and CEO, NaviLens Javier is the CEO of NaviLens, a solution whose objective is to increase the autonomy, social inclusion, and quality of life of people who are visually impaired. Any place can easily adopt the NaviLens technology to improve the space’s accessibility through the use of new, patented, cutting-edge artificial markers called ddTags. An entrepreneur with more than 15 years of experience launching disruptive technology companies, Javier, together with his team, is working hard to make this world more accessible for people who are visually impaired. Mira Philipson, Analyst, Systemwide Accessibility, Office of the President, Metropolitan Transportation Authority New York City Transit Gian Carlo Pedulla, Supervisor, NYC Department of Education and Member, Advisory Committee for Transit Accessibility, Metropolitan Transportation Authority New York City Transit Gian Carlo Pedulla was born and raised in Bensonhurst, Brooklyn. Legally blind due to Leber’s Congenital Amaurosis, he has persevered to overcome his blindness as well as all related obstacles to meet both personal and professional goals. Raised in an Italian American home, he learned the importance of a good meal, being fastidious, having a strong work ethic, and being as independent as possible despite his blindness. After 15 years of teaching, Mr. Pedulla is now an administrator for Educational Vision Services within the New York City Department of Education. Besides his passion for Mathematics, Physics, and being a Teacher of the Visually Impaired, Mr. Pedulla enjoys music and has been successful as a professional disc jockey, performing at numerous private and corporate functions throughout the tri-state area over the last 25 years. Mr. Pedulla has been able to adapt and integrate himself into different school environments and to utilize his strong interpersonal skills to interact with a variety of individuals and personalities, disabled and non-disabled alike. Assistive technology has been an integral part of his ability to access an array of materials and complete a variety of assignments to achieve goals, both in academia and the workplace. Edward Plumacher, Adaptive Technology Specialist, Lighthouse Guild Adaptive Technology Specialist for Lighthouse Guild since 2016. Founder of a tech company that created products and services for domestic and international professional sports leagues and their television broadcast rights holders, providing advanced optical imaging systems for quantifying and measuring live action, recreated in real-time 3-D computer-generated video replays. Also produced scoring and measurement systems for teams, coaches, managers, and league governing bodies. His world, including his career, changed when he lost his vision, though it still involved technology. Purchased his first iPhone after his first orientation and mobility training. Taught himself VoiceOver over a weekend, and went from struggling to email on his computer with a magnifying glass and mouse to texting for the first time and easily accessing email, calendars, and the internet. Was very active with the Foundation Fighting Blindness (FFB) and became President of its Long Island chapter. Began making presentations on smartphones and smart tablets for FFB just after he lost his sight. Created audio tutorials, and ran workshops and networking groups on adaptive technology.
Puts together curricula for teaching people with vision loss to use technology. Worked with the New York State Commission for the Blind (NYSCB) to develop a curriculum for providing services on iOS devices and became one of the first people in New York State authorized to conduct iPhone and iPad training. Experienced in podcasting and media, facilitates a peer-to-peer support group at the New York Public Library’s Andrew Heiskell Library, and is also very active in sports such as running, skiing, beep baseball, tandem cycling, and outrigger canoeing. Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, MD, is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
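For a sense of how an app can announce "the code is 2.5 meters away, slightly to your right," here is a generic pinhole-camera estimate based on a detected marker's apparent size and position in the frame. This illustrates the general technique only, not NaviLens's patented algorithm; the focal length and physical code size below are assumed values.

```python
import math

def estimate_range_and_bearing(marker_px: float, center_x_px: float,
                               img_width_px: int, focal_px: float,
                               marker_size_m: float = 0.20):
    """Pinhole-camera estimate of how far away a detected code is and how far
    it sits off the camera axis (positive bearing = to the user's right)."""
    distance_m = marker_size_m * focal_px / marker_px        # similar triangles
    bearing_rad = math.atan2(center_x_px - img_width_px / 2, focal_px)
    return distance_m, math.degrees(bearing_rad)

# A 20 cm code imaged 80 px wide by a camera with a 1,000 px focal length is
# about 2.5 m away and ~15 degrees to the right of straight ahead.
print(estimate_range_and_bearing(80, 900, 1280, 1000.0))
```

Because the estimate only needs the marker's pixel footprint, it keeps working as long as the code lands anywhere in the camera's field of view, which matches the hands-free scanning experience described above.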
9/8/21 • 24:46
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea centers on the place where big ideas get born: the human brain. In today’s episode, Dr. Roberts and his guests explore theories of brain plasticity, sensory substitution, and sensory augmentation. Dr. Patricia Grant discusses the BrainPort, which uses sensory substitution (in this case, via the nerve fibers in the tongue) to send information to the brain instead of through the optic nerve. Dr. John-Ross Rizzo is developing a device, to be called the Sensory Halo, that is based on sensory augmentation. Both guests share what is being learned about sensory substitution and augmentation through these technologies, and how this understanding will help perfect future devices that enable people with vision impairment to see better. The Big Takeaways: The BrainPort is a headset device with a camera that picks up visual input as the eyes would. It applies the theory of sensory substitution by sending stimulation to the nerve fibers on the tongue. The device renders visual information as grayscale imagery: lighter areas of the image produce strong stimulation on the tongue, while dark areas produce none (a toy version of this mapping follows this episode’s notes). This contrast allows users to identify objects in their environment. The BrainPort device is meant for people who are blind, so it does not crowd out a person’s residual vision. Surprisingly, users who are congenitally blind and users who once had sight and retain a visual memory have performed the same in clinical trials. This shows that users are not experiencing a memory of sight; they are learning to interpret the camera’s image through the stimulation of the nerve fibers on their tongue. In the future, there are opportunities for collaboration between BrainPort and other technologies to further enhance the user experience and create more autonomy. Another device in development, the Sensory Halo, draws on aspects of sensory augmentation. A device built on sensory augmentation is more intuitive to use than one built on sensory substitution. The Sensory Halo is designed to empower the wearer by delivering the key pieces of information needed to safely and independently navigate their environment. Tweetables: “We put the BrainPort on him and started training him, and we were doing some mobility tasks... And I was walking around the room and he would just scan the room. Then all of a sudden, I could feel when he perceived me.” — Dr. Patricia Grant “The great thing about the BrainPort is that it gives a person their own sense. It’s something that they can experience on their own, and that is of great value to a person who is blind.” — Dr. Patricia Grant “Simply put, I just want to amplify your existing senses and augment what I can give to you right now so that you can have a richer experience.” — Dr. John-Ross Rizzo Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild BrainPort Assistive Technology & Advanced Wearables by John-Ross Rizzo, MD, MSCI Guest Bios: Patricia Grant, PhD, Director of Clinical Research, Wicab, Inc. Dr. Grant joined Wicab, Inc. as Director of Clinical Research in February 2014.
She previously served as Co-Investigator for Wicab’s FDA clinical trial and currently serves as the Principal Investigator of a clinical trial, funded by the US Department of Defense, investigating the safety and efficacy of the BrainPort for people who have been blinded by traumatic injury. Her future research goals include demonstrating the value of the BrainPort in the workplace, in addition to teaching spatial concepts to children. Prior to joining Wicab, Dr. Grant was the Director of Research at the Chicago Lighthouse for People Who Are Blind and Visually Impaired and a Research Specialist in the Low Vision Research and Applied Physics laboratories in the Department of Ophthalmology & Visual Sciences at the University of Illinois at Chicago. In addition to her work at Wicab, Dr. Grant has contributed to research in the areas of methods for assessing loss of vision due to retinal disease, treatments to optimize remaining vision, the psychological effects of vision loss, and the measurement of retinal image quality and ocular aberration. She earned a BA in Psychology, an MS in Public Health Sciences, and a PhD from the School of Public Health at the University of Illinois at Chicago, with a concentration in behavioral science and eye health promotion. John-Ross (JR) Rizzo, MD, MSCI, Director of Innovation and Technology, Assistant Professor, Department of Rehabilitation Medicine and Department of Neurology, NYU Langone Medical Center John-Ross (JR) Rizzo, MD, MSCI, is a physician-scientist at NYU Langone Medical Center. He serves as the Director of Innovation and Technology for the Department of Physical Medicine and Rehabilitation at the Rusk Institute of Rehabilitation Medicine, with cross-appointments in the Department of Neurology and the Departments of Biomedical and Mechanical and Aerospace Engineering at the New York University Tandon School of Engineering. He is also the Associate Director of Healthcare for the NYU Wireless Laboratory in the Department of Electrical and Computer Engineering at the New York University Tandon School of Engineering. He leads the Visuomotor Integration Laboratory (VMIL), where his team focuses on eye-hand coordination as it relates to acquired brain injury, and the REACTIV Laboratory (Rehabilitation Engineering Alliance and Center Transforming Low Vision), where his team focuses on advanced wearables for the sensory deprived and benefits from his own personal experience with vision loss. He is also the Founder and Chief Medical Advisor of Tactile Navigation Tools, LLC, where he and his team work to disrupt the assistive technology space for those with visual impairments of all kinds, enhancing human capabilities. He partners with a number of industrial sponsors and laboratories throughout the country to help break through new barriers in disability research and/or motor control.
As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
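The camera-to-tongue mapping described above is easy to sketch: pool a grayscale frame down to the resolution of the electrode grid, then turn brightness into stimulation strength, with dark pixels producing no stimulation. The grid size and number of levels below are assumptions for illustration, not Wicab's specifications.

```python
import numpy as np

def frame_to_tongue_array(gray: np.ndarray, rows: int = 20, cols: int = 20,
                          levels: int = 8) -> np.ndarray:
    """Average-pool a grayscale frame onto a rows x cols electrode grid and
    quantize brightness into stimulation levels (0 = dark = no stimulation)."""
    h, w = gray.shape
    pooled = gray[:h // rows * rows, :w // cols * cols] \
        .reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    return np.round(pooled / 255.0 * (levels - 1)).astype(int)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in camera frame
print(frame_to_tongue_array(frame).shape)  # (20, 20) grid of stimulation levels
```

The point of the pooling step is that the brain receives a low-resolution but stable contrast map, which is exactly the signal users learn to interpret, whether or not they have ever had sight.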
9/8/21 • 24:04
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is about using augmented reality, machine learning, and soon, fifth-generation (5G) connectivity to improve vision. The Eyedaptic Eye2 is a set of software-driven smart glasses for people with vision impairment. Dr. Roberts speaks with Dr. Mitul Mehta and Jay Cormier about how Eyedaptic uses machine learning to develop algorithms that guide augmentations adapted to a person’s vision deficits, habits, and environments to help them see better. They also discuss how 5G connectivity is going to continue to enhance the experience for Eyedaptic users. The Big Takeaways: Eyedaptic identifies a wearer’s visual defect and adapts to that specific problem, using software to address the gaps in vision and applying that technology to commercial augmented reality headsets. Algorithms analyze what the camera in the user’s Eyedaptic glasses is looking at, as well as what the user is doing at the time, and the combination informs what the user sees (a toy enhancement pipeline in this spirit follows this episode’s notes). As augmented reality continues to develop, the next big breakthrough is going to be connectivity using 5G. This will enable the Eyedaptic glasses to transfer large amounts of data very quickly, improving future machine learning algorithms; it will also allow the device to become more mobile. Tweetables: “That brought me to the whole concept of being able to use technology to fix problems with the body. One of the things I found ... was people trying to solve the problem of vision loss not medically, but with technology.” — Dr. Mitul Mehta “When we were able to put our technology on one of our users, this fellow couldn’t read anymore and we were able to get him to read again. Certainly, that was our first indication that this technology can really do what we hoped it could do.” — Jay Cormier “In essence, what these algorithms are doing is to become adaptive not only to the person’s vision deficits but also their habits and environments.” — Jay Cormier “The goal of any sort of vision technology company, in the end, should also be trying to help people who have ‘normal’ vision, be able to see things that they cannot currently see.” — Dr. Mitul Mehta “Ophthalmology is the most exciting field of medicine because ophthalmologists in general are very pro-technology and they’re always trying to get better.” — Dr. Mitul Mehta Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Eyedaptic Guest Bios: Jay Cormier, President and CEO, Eyedaptic As an experienced technology executive and entrepreneur, Jay has a strong track record of founding, growing, and turning around businesses. He has completed several successful exits totaling over $750M across embedded software, SaaS, and hardware solutions. Leveraging his background at Analog Devices, Jay has led marketing, sales, engineering, operations, strategic partnerships, business development, and new product strategy and execution. As Vice President & General Manager, Jay achieved exits at Teridian, Sierra Monolithics, and Mindspeed using his expertise in building high-performance, execution-oriented, multi-disciplinary teams. Jay earned his BS in Electrical Engineering from Worcester Polytechnic Institute and an MBA from Northeastern University. Dr.
Mitul Mehta, Medical Advisor, Eyedaptic Mitul is a board-certified ophthalmologist with fellowship training in medical and surgical diseases of the retina at UCI’s Gavin Herbert Eye Institute. He earned his medical degree from the Keck School of Medicine of USC, and also holds an M.S. in Biophysics from Georgetown and a B.S. from MIT, where he first ventured into software startups. He completed fellowship training in vitreoretinal surgery at the New York Eye & Ear Infirmary, and conducts research on surgical devices and techniques, as well as on vitreoretinal diseases such as diabetic retinopathy and macular degeneration. Mitul is the cofounder of the online America Retina Forum and the Young Retina Forum, as well as the editor of the retina section for the surgical education website CSurgeries.com. Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, MD, is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
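The episode describes software that decides, frame by frame, how to enhance what the camera sees. As a loose illustration only (not Eyedaptic's algorithm), here is a toy pipeline in which a hypothetical per-user profile sets a contrast gain and an edge boost:

```python
import numpy as np

def enhance(gray: np.ndarray, contrast_gain: float, edge_gain: float) -> np.ndarray:
    """Toy enhancement pipeline: stretch contrast around the mean, then add a
    crude gradient-magnitude edge boost to make outlines pop."""
    g = gray.astype(float)
    g = np.clip((g - g.mean()) * contrast_gain + g.mean(), 0, 255)  # contrast stretch
    gy, gx = np.gradient(g)                                         # simple edge map
    edges = np.hypot(gx, gy)
    return np.clip(g + edge_gain * edges, 0, 255).astype(np.uint8)

# Hypothetical settings for a user who benefits from strong edges:
profile = {"contrast_gain": 1.4, "edge_gain": 2.0}
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in camera frame
out = enhance(frame, **profile)
```

In a learning-driven system, the profile values would not be hand-set; they would be tuned over time from the user's deficits, habits, and environments, which is the adaptation the guests describe.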
7/1/21 • 29:21
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is how technology originally used in instruments that extend human vision into space is now being relied on by vision technology developers in devices that help people with vision loss with everyday tasks here on Earth. Using substitute senses has allowed scientists across many fields to continue their work without the use of sight (a minimal data-to-sound sketch follows this episode’s notes). eSight is one such device: it stimulates the user’s remaining functional vision to improve quality of life. Dr. Roberts speaks with Charles Lim about the development of the device, the principles behind how it works, and the motivation for future improvements. The Big Takeaways: Astronomers and other scientists who are blind can continue to make meaningful contributions to their field by using substitute senses, even discovering things unseen by the human eye, especially in fields where instruments do most of the heavy lifting. eSight is designed to help people with low vision; its developers have found that, with the right stimuli, they can leverage the dormant portions of the eye that still have some function. It is a wearable, mobile device that maximizes the visual information provided to the brain to naturally compensate for gaps in the user’s vision. As they continue to develop the device, some of the most important factors are making sure it is comfortable, accessible for a wide range of wearers, has a long battery life, and is future-proof. The ability to change individual lives, and to create a more accessible world, is one of the most motivating reasons behind this technology and continues to drive the developments on the horizon for eSight. Tweetables: “What it all means is how do we leverage the technology advances in cameras, image sensors, and processing to allow... our users to enhance their vision through more information.” — Charles Lim, Chief Technology Officer, eSight “What we did is that we converted data from a gamma-ray burst into sound. We were able to listen to small variations in the data that were not visible to the human eye.” — Dr. Wanda Diaz Merced, Astronomer “Astronomers have realized that you can learn a lot about the Universe by developing instruments that can be extensions of our own senses.” — Dr. Bernard Beck-Winchatz, Astrophysicist “I dream of a future where eSight can really become a natural extension of our users’ vision.” — Charles Lim Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Touch the Universe by Noreen Grice eSight Guest Bios: Charles Lim, Chief Technology Officer, eSight Charles Lim is a global technology expert with 20 years of experience and a proven record of scaling businesses. Previously, Charles worked in progressively senior global leadership positions at IMAX, where he led strategy, operations, and business development during a key moment in the company’s rapid growth. He has also acted as a consultant with MaRS Discovery District, where he worked closely with technology startups to ensure their success and was a key player in building the MaRS technology innovation ecosystem. Charles has successfully led engineering teams developing leading-edge fiber optic broadcast systems, consumer electronics, and aerospace technologies, earning multiple awards including the NASA Goddard Space Flight Center Award of Excellence.
Charles holds a Bachelor of Electrical Engineering and a Master of Electrical and Computer Engineering from Ryerson University, and an MBA from the Rotman School of Management at the University of Toronto. He has also completed executive-level courses at Harvard Business School. Dr. Bernard Beck-Winchatz, Professor, DePaul University: Interim Director of the STEM Center; Professor of Physics & Astrophysics; Graduate Program Director of Physics & Astrophysics; Campus Director of the Illinois Space Grant Consortium. Wanda Díaz-Merced Wanda Díaz-Merced is an astronomer best known for using sonification to turn large data sets into audible sound. She currently works at the South African Astronomical Observatory’s Office of Astronomy for Development (OAD), leading the AstroSense project. Having lost her own eyesight, she is a leader in increasing equality of access to astronomy and in using audible sound to study astrophysical data. Wanda has been included in the BBC’s list of the 7 most trailblazing women in science. Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, MD, is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
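Sonification of the kind Dr. Díaz-Merced describes can be reduced to a small recipe: normalize a data series, map each value to a pitch, and render a short tone per sample, so rises and dips in the data become audible. This is a generic sketch of the technique, not her actual pipeline; the frequency range and note length are arbitrary choices.

```python
import numpy as np

def sonify(values, sample_rate=44100, note_s=0.12, f_lo=220.0, f_hi=880.0):
    """Map each data point to a pitch between f_lo and f_hi (log-spaced, so
    equal data steps sound like equal musical intervals) and render a tone."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)      # normalize to 0..1
    freqs = f_lo * (f_hi / f_lo) ** v                    # value -> pitch
    t = np.linspace(0, note_s, int(sample_rate * note_s), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# A burst-like light curve: the flare is heard as a jump in pitch that
# glides back down as the signal decays.
lightcurve = np.concatenate([np.ones(20), 5 * np.exp(-np.arange(30) / 8) + 1])
audio = sonify(lightcurve)  # write out with e.g. scipy.io.wavfile.write(...)
```

The ear is very good at picking out small, fast variations in pitch, which is why features invisible in a plotted curve can become obvious once the same numbers are played as sound.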
7/1/21 • 32:52
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is all about cutting-edge advancements in ocular bionic prosthetics. The Argus II is a device that uses a camera and a chip to stimulate the retina and send signals to the brain. Our guest, Dr. Mark Humayun, developer of the Argus II, speaks with Dr. Roberts about the development of this device and the importance of collaboration between developers and early adopters. He talks about the engineering, neurophysiology, and surgical challenges they have overcome to get where they are, as well as what kinds of advancements might be possible in the future. The Big Takeaways: The Argus II has two components: a wearable component, consisting of glasses with a camera and a video processing unit, and an implanted device, which includes an antenna and an electronic chip whose electrode array stimulates the remaining cells of the retina. The visual system is similar to a computer in that it requires hardware (our eyes, retina, optic nerve, and visual cortex) and software (what converts signals into what we describe as sight). When developing artificial vision, Dr. Humayun had to pinpoint how much of the retina needed to be replaced, as well as how much of the retina needed to still exist for the device to work. The electronic system stimulates groups of neurons to produce visual perceptions. Users of the Argus II can currently perceive up to 10 shades of gray (a crude preview of such a percept follows this episode’s notes). Dr. Humayun and his team are working on getting the device to generate color vision by stimulating the retina at different frequencies, which the wearer learns to associate with a named color. The cochlear implant was a big influence on the initial development of the Argus II: the team reconfigured a cochlear implant and used it to stimulate the retina rather than the cochlea. As they continue to develop the device, collaboration between actual users and developers is crucial. Now that they have the hardware and technology, they can focus on future developments like an implant that bypasses the optic nerve and delivers stimulation directly to the visual cortex. Tweetables: “I’ve been so lucky my whole adult life to have that collaborative experience with everyone who’s ever built legs for me.” — Aimee Mullins, actor, athlete, public speaker, and double amputee “The most emotional thing for me was being able to see letters again. That was such an emotional experience, I don’t know how to put it into words.” — Barbara Campbell, Argus II implant recipient “You can think of it like this, that it wirelessly connects the blind person to a camera, and jumpstarts the otherwise blind eye and sends the information to the brain.” — Dr. Mark Humayun “There are some features that are different than our human eye, there are some advantages, but clearly our human eye is incredibly, exquisitely engineered to give you a very pristine, refined, and high-resolution image.” — Dr. Mark Humayun Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Argus II Guest Bio: Dr. Mark Humayun Mark S. Humayun, MD, PhD, is Director of the USC Ginsburg Institute for Biomedical Therapeutics and Co-Director of the USC Roski Eye Institute. Dr. Humayun received his Bachelor of Science degree from Georgetown University in 1984, his Doctor of Medicine from Duke University in 1989, and his PhD from the University of North Carolina at Chapel Hill in 1994.
He completed his ophthalmology residency at Duke Eye Center and fellowships in both vitreoretinal and retinovascular surgery at Johns Hopkins Hospital. He stayed on as faculty at Johns Hopkins, where he rose to the rank of associate professor before moving to USC in 2001. Dr. Humayun has devoted much of his career to clinical and scientific research in ophthalmology and bioengineering, becoming both a biomedical engineer and a professor of ophthalmology. Dr. Humayun led a talented and diverse team of interdisciplinary researchers with the ultimate goal of creating the world’s first artificial retina. He assembled a team of world experts with a wide range of proficiencies, including biomedical engineering, computer science, medicine, chemistry, biology, and business. In clinical trials since 2007 and approved by the FDA in 2013, the Argus II retinal implant represents the culmination of a visual restoration strategy that offers an unprecedented degree of sight to those with complete retinal blindness. He was elected to the prestigious National Academy of Medicine (NAM) and National Academy of Engineering (NAE) for his pioneering work to restore sight. With over 200 publications and more than 100 patents and patent applications, Dr. Humayun has received several research awards, including the 2005 Innovator of the Year award. He was also featured as one of the top 10 inventors in Time Magazine in 2013, voted one of the Best Doctors in America for three years, and named among the top 1% of doctors by U.S. News & World Report. In 2016, Dr. Humayun received the National Medal of Technology and Innovation from President Barack Obama for his innovative work and development of the Argus II. Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, MD, is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
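To get a feel for what "up to 10 shades of gray" on a low-resolution implant means, the sketch below pools a camera frame onto a small grid and quantizes it. The published Argus II array has 60 electrodes in roughly a 6 x 10 layout, which is what the grid size assumes; the simulation itself is only a crude stand-in for the actual percept, not Second Sight's processing.

```python
import numpy as np

ROWS, COLS, LEVELS = 6, 10, 10  # ~60-electrode grid, ~10 gray levels (per the episode)

def percept_preview(gray: np.ndarray) -> np.ndarray:
    """Crudely simulate the percept: pool the frame onto the electrode grid,
    quantize to LEVELS gray shades, then blow it back up for side-by-side viewing."""
    h, w = gray.shape
    pooled = gray[:h - h % ROWS, :w - w % COLS].reshape(
        ROWS, h // ROWS, COLS, w // COLS).mean(axis=(1, 3))
    levels = np.round(pooled / 255 * (LEVELS - 1)) / (LEVELS - 1) * 255
    return np.kron(levels, np.ones((h // ROWS, w // COLS))).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in camera frame
preview = percept_preview(frame)  # blocky 6 x 10 rendering of the scene
```

Even this toy version makes clear why training and user-developer collaboration matter: the wearer is learning to read meaning out of a very coarse contrast map, not a photograph.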
7/1/21 • 30:53
This podcast is about big ideas on how technology is making life better for people with vision loss. Today’s big idea is the power of virtual reality: how people are using VR to remap sight and help people with vision loss in their daily lives. Dr. Roberts visits with Grace Andruszkiewicz and Dr. Frank Werblin about how emerging technologies help people with low vision access the areas of vision they still have by using a virtual reality system. They also talk about how sight works biologically, and how one such device, IrisVision, works to connect people socially. The Big Takeaways: Virtual reality changes the sensory inputs our brains receive, altering the visual field our eyes scan and the sounds we hear; these new inputs trick the brain into thinking we are in a different reality. IrisVision is a combination of a Samsung phone and a virtual reality headset that modulates the visual field of a person with low vision so they can see better. The technology re-maps the visual world they are seeing so it becomes resonant with their individual pathology (a toy field-remapping sketch follows this episode’s notes). IrisVision started as a device for macular degeneration, but it has evolved to address many types of visual pathology (retinal as well as cortical), including magnification or “minification” (reducing size) to modify the visual field depending on the user’s visual function. IrisVision’s re-mapping gives the brain access to the raw material; after a few months of use, users’ native vision is often improved, ultimately leading to renewed social connection. People who use the device need to have some remaining vision. Tweetables: “What’s really meaningful too is helping people go back to places that are really emotionally meaningful from their past. When they feel like they’re back in that place or doing that thing, they come alive again.” — Grace Andruszkiewicz, describing seniors’ experiences with virtual reality “Because the screen is half an inch from your eyes, it’s not uncommon for people to see something clearly for the first time in a VR headset.” — Grace Andruszkiewicz “It occurred to me that what was needed was a low-cost, non-invasive device that could recode the visual message in a way that would resonate with those islands of sight that people still have remaining.” — Dr. Frank Werblin, on the development of IrisVision “Assisting patients with visual loss has a much broader function: it reconnects people with each other. IrisVision assists people in seeing the visual world, but what it’s really doing is reconnecting them socially.” — Dr. Frank Werblin Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss. Pertinent Links: Lighthouse Guild Rendever IrisVision Guest Bios: Dr. Frank Werblin, Co-Founder and Chief Scientist, IrisVision IrisVision was founded by Dr. Frank Werblin, an MIT graduate, Guggenheim Fellow, and professor at UC Berkeley. Dr. Werblin is renowned for his scientific contributions to our understanding of retinal functioning. He has dedicated his life to innovating, developing, and delivering an affordable, non-invasive solution for millions of people with low vision. With the help of research partners at Stanford, Johns Hopkins, UC Berkeley, UPMC, The Chicago Lighthouse for the Blind, and other institutions contributing to low vision science, IrisVision is the realization of that lifetime of work.
He is dedicated to pushing the boundaries of low vision solutions by continuing to expand the relationship with top vision scientists and technology powerhouses like Samsung. Grace Andruszkiewicz, Director of Marketing & Partnerships, Rendever Host Bio: Dr. Calvin W. Roberts Calvin W. Roberts, M.D., is President and Chief Executive Officer of Lighthouse Guild, the leading organization dedicated to addressing and preventing vision loss. Dr. Roberts has a unique blend of academic, clinical, business, and hands-on product development experience. Dr. Roberts is a Clinical Professor of Ophthalmology at Weill Cornell Medical College. He was formerly Senior Vice President and Chief Medical Officer, Eye Care, at Bausch Health Companies where he coordinated global development and research efforts across their vision care, pharmaceutical, and surgical business units. As a practicing ophthalmologist from 1982 to 2008, he performed more than 10,000 cataract surgeries as well as 5,000 refractive and other corneal surgeries. He is credited with developing surgical therapies, over-the-counter products for vision care, prescription ocular therapeutics, and innovative treatment regimens. He also holds patents on the wide-field specular microscope and has done extensive research on ophthalmic non-steroidals and postoperative cystoid macular edema. Dr. Roberts has co-founded a specialty pharmaceutical company and is a frequent industry lecturer and author. He currently serves as an Independent Director on multiple corporate boards and has served as a consultant to Allergan, Johnson & Johnson, and Novartis. A graduate of Princeton University and the College of Physicians and Surgeons of Columbia University, Dr. Roberts completed his internship and ophthalmology residency at New York-Presbyterian Hospital/Columbia University Medical Center in New York. He also completed cornea fellowships at Massachusetts Eye and Ear Infirmary and the Schepens Eye Research Institute in Boston.
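One simple version of the re-mapping idea Dr. Werblin describes is to compress the whole camera image into the ring of surviving peripheral vision around a central scotoma. The sketch below does this with a radial warp; it is an illustration of the concept only, not IrisVision's algorithm, and the scotoma size is an assumed parameter.

```python
import numpy as np

def remap_around_scotoma(img: np.ndarray, scotoma_frac: float = 0.25) -> np.ndarray:
    """Radially compress the full image into the annulus outside a central
    scotoma, so content the damaged macula would swallow lands in surviving
    peripheral vision (nearest-neighbor sampling)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - cy, x - cx)
    R = r.max()
    r0 = scotoma_frac * R                       # radius of the unseen central region
    src_r = np.where(r >= r0, (r - r0) * R / (R - r0), 0.0)  # annulus <- full field
    scale = np.divide(src_r, r, out=np.zeros_like(r), where=r > 0)
    sy = np.clip(cy + (y - cy) * scale, 0, h - 1).astype(int)
    sx = np.clip(cx + (x - cx) * scale, 0, w - 1).astype(int)
    return img[sy, sx]

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in camera frame
out = remap_around_scotoma(frame)  # entire scene now rendered in the outer annulus
```

Magnification and "minification" are variations on the same warp: the mapping between output radius and source radius stretches or shrinks depending on which parts of the visual field the individual user can still use.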
7/1/21 • 30:59