With its AI-based automatic alternative text feature, Facebook hopes to give the millions of users with blindness or visual impairment a sense of the images actually shared on its platform.
Facebook has announced it is launching a new feature that gives visually impaired or blind users of the platform an idea of what an image depicts. Termed ‘automatic alternative text’, the feature draws on artificial intelligence to identify what the image represents.
With the new feature, Facebook intends to give the millions of blind or visually impaired users of the social media platform access to what is arguably one of Facebook’s most popular activities: posting and sharing photos. Users can also do much more with photos on Facebook, such as tagging friends and family. Facebook added that over two billion photos are shared across Facebook and its other services, such as Instagram, WhatsApp and Messenger.
However, those with varying degrees of visual impairment, or who are blind, have had access to none of this, though things finally seem to be changing.
Developed by Facebook’s accessibility team, headed by Jeff Wieland, automatic alt text will first roll out on iOS before reaching Android and the web. The feature relies on object recognition technology to interpret the objects a photo contains, which are then read aloud using the iPhone’s VoiceOver feature.
However, the feature is still very much a work in progress, though it should come as a relief to those who have had to sit out the fun so far. Until now, users of screen readers could only hear who posted a photo, followed by the word ‘photo’ in place of the image itself. With automatic alt text, users will hear more detail, such as whether the people in a photo are outdoors or smiling.
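To make the idea concrete, here is a minimal sketch of how an object-recognition model’s output might be turned into an alt-text sentence for a screen reader. The concept tags, confidence scores, threshold, and function name are illustrative assumptions for this example, not Facebook’s actual system or API.

```python
# Illustrative sketch only: the tags, scores, and threshold below are
# hypothetical, not Facebook's actual object-recognition output.

CONFIDENCE_THRESHOLD = 0.8  # only report concepts the model is fairly sure about

def build_alt_text(detections):
    """Turn (concept, confidence) pairs into a readable description."""
    confident = [concept for concept, score in detections
                 if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        # Fall back to the generic label screen readers announced before
        return "Photo"
    return "Image may contain: " + ", ".join(confident)

# Hypothetical output of an object-recognition model for one photo
detections = [("two people", 0.95), ("smiling", 0.88),
              ("outdoor", 0.91), ("dog", 0.40)]
print(build_alt_text(detections))
# -> Image may contain: two people, smiling, outdoor
```

The confidence threshold reflects the trade-off described above: a screen-reader description that guesses wrongly is worse than a vague one, so low-confidence detections (like the "dog" tag here) are dropped rather than spoken.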
Another current restriction is that automatic alt text only supports English, though more languages will be added at a later stage. The scope of the feature itself will also grow as the accuracy of its predictions improves.