From ‘door detection’ to scanning braille, phone accessibility keeps expanding

As smartphones become our default computing platform, the devices are having to adapt to make connectivity more accessible for people with disabilities.

From making sure everyone can read and understand the information being displayed on screens to solutions for specific problems that might keep a person from using live map directions or voice assistants, accessibility has become a major push for all big tech companies.

Apple has previewed door detection and Apple Watch mirroring as upcoming accessibility features.

Apple’s upcoming accessibility features include the ability for iPhones to detect and give information about the location and design of doors, something the company had heard was an issue for some users.

“When you think about a [taxi], it might drop you off anywhere in the block, where do you go? The hardest part is how do you get those last 10 feet to the actual destination,” said Apple’s senior director of accessibility, Sarah Herrlinger.

“For the blind community, it’s a unique and incredibly powerful way to utilise something that’s already built into the device.”

The feature, which can determine the distance to a door, whether it pushes or pulls, what kind of handle it has and any writing on it, uses the LiDAR sensor on recent iPhones and iPads. The same sensor powered an earlier feature that tells users how close nearby people are standing.

“Day to day that might be used when I’m standing in the queue at the grocery store, and I need to know if it’s my turn to move up to the teller. But in this pandemic world it also became an amazing way to understand social distancing, and to make sure that you were able to keep yourself safe and healthy,” Herrlinger said.

“We want to make sure that we’re constantly building out new features to support every user’s need.”

Apple also previewed the ability to mirror Apple Watch displays on iPhones, and Live Captions, which will add automatic subtitles to any audio content from video calls to streaming video, although only in the US and Canada. Google already offers live captions globally on its Android phones, and both companies have added accessibility features over the years that can read out notifications or describe on-screen elements, let users control their phones by talking, or allow users to plug in a switch to navigate with external equipment.

Google previously rolled out Android features that transcribe real-world audio into captions, and describe objects and text seen by the phone’s camera, and is currently testing a system called Project Relate that can adapt its voice recognition to people with conditions that can make their speech difficult to understand.

“Speech recognition technology can potentially change people’s lives around the world, but if you have difficulty speaking, then those voice-activated technologies may not always work well,” said Google Australia’s Disability Alliance lead Lucinda Longcroft.

“We’ve been able to engage with the community, with the help of partner organisations, which enabled us to gather over 1000 hours of speech samples to make Project Relate possible. We want to continue researching and building helpful products for all people, including those of us who have speech impairments, and as a community we can accomplish that goal.”

But accessibility isn’t all about software. A team within Microsoft established a hardware lab in 2017 which created the Adaptive Controller, a gadget that connects to Xbox consoles and allows all standard buttons and sticks to be replaced with any preferred input devices. Last year the company produced the Surface Adaptive Kit, an inexpensive collection of labels, stickers and tools that makes it easier to open portable PCs and identify their various ports and buttons.

Microsoft is producing a range of adaptive PC accessories.

Earlier this month Microsoft announced a new line of Adaptive Accessories for PC including buttons, hubs and a customisable mouse, and said its Inclusive Tech Lab was being expanded to become an incubator of new designs in collaboration with disability communities.

And development tools mean solutions can come not only from big tech in collaboration with disability communities, but independent app developers as well.

Aaron Stephenson, a software engineer for an Australian fintech company who also makes apps in his spare time, has developed a program that can scan Braille with an iPhone’s camera and transcribe it into English text. At first, it was just a test using Apple’s machine learning tools to see if he could check his own attempts to learn Braille, but when he posted his findings on Twitter he realised there was a demand.
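The transcription side of such an app is well defined even before any machine learning is involved: once the camera pipeline has detected which of a cell’s six dots are raised, uncontracted (Grade 1) Braille maps each dot pattern to a letter. The sketch below shows that final mapping step in Python; it assumes dot detection has already happened, and is an illustration of the standard Braille alphabet, not Stephenson’s actual implementation.

```python
# Grade 1 (uncontracted) braille: each cell has six dot positions,
# numbered 1-3 down the left column and 4-6 down the right.
# A detected cell is represented here as a set of raised dot numbers.

_BASE = {  # letters a-j use only the top four dot positions
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5},
}

CELL_TO_CHAR = {}
for i, (letter, dots) in enumerate(_BASE.items()):
    CELL_TO_CHAR[frozenset(dots)] = letter                       # a-j
    CELL_TO_CHAR[frozenset(dots | {3})] = chr(ord(letter) + 10)  # k-t add dot 3
    if i < 5:                                                    # u,v,x,y,z add dots 3 and 6
        CELL_TO_CHAR[frozenset(dots | {3, 6})] = "uvxyz"[i]
CELL_TO_CHAR[frozenset({2, 4, 5, 6})] = "w"  # w joined the alphabet later
CELL_TO_CHAR[frozenset()] = " "              # blank cell = word space

def transcribe(cells):
    """Map a sequence of detected dot-sets to text; '?' marks unknown cells."""
    return "".join(CELL_TO_CHAR.get(frozenset(c), "?") for c in cells)

print(transcribe([{1, 2, 5}, {1, 5}, {1, 2, 3}, {1, 2, 3}, {1, 3, 5}]))  # → hello
```

The hard part Stephenson describes is everything upstream of this lookup: finding the dots reliably in photos of curved, shiny or poorly lit surfaces.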

App developer Aaron Stephenson.

“I got like 13,000 views, I got 100 retweets. I just got so much feedback, and good feedback. I had people coming to me saying ‘this would be amazing’,” he said.

“I guess the expectation when people saw the video was that it was an app and it worked, but it was purely a proof of concept.”

Two years on, Braille Scanner officially launched on the App Store last month. Stephenson said he’s been contacted by many people with a huge range of use cases, including someone who wanted to help their nephew with their homework, someone from a theme park who wanted to check their Braille signage was the right way around and readable, an American library which wants to reproduce its collection of deteriorating Braille documents, and an aged care worker who said Braille documents and letters are often thrown out when a person passes away if their family can’t afford to have them transcribed.

The biggest challenge he faces is making the app work reliably for vision impaired people in all situations, as it was initially designed only for flat paper in a well-lit room.

“I’ve had people from Germany send me photos of medication with Braille on it, and I didn’t even think about there being Braille medication,” he said, adding that the app is built for privacy, so he relies on users sending him problem images rather than letting the app collect them automatically.

“The more data I get from the users, obviously the more accurate I can make it.”
