- WIRED (2023)

Assistive technology services are integrating OpenAI's GPT-4, using artificial intelligence to help describe objects and people. One example is Ask Envision, an AI assistant built on GPT-4, a multimodal model that can take in images and text and output conversational responses. It is one of several assistance products for visually impaired people beginning to integrate language models, promising to give users far more visual detail about the world around them, and much more independence...

saved by: FoundryBase
updated about 2 months ago


ChatGPT notes on this Article

Summary:

This article is about using artificial intelligence to assist people who are visually impaired. It focuses on OpenAI's GPT-4 and how it can be used in assistive technology services to give users greater visual detail and independence. The article discusses the model's multimodal capability: it takes images and text as input and produces conversational responses as output.
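The multimodal interaction the notes describe (an image plus a text question in, a conversational description out) can be sketched as a request payload in the style of OpenAI's chat-completions API. This is only an illustration, not Envision's actual integration; the model name and URLs are assumptions, and actually sending the request would require the `openai` client and an API key, both omitted here.

```python
# Sketch of a multimodal "describe this image" request body, pairing an
# image with a text prompt as the article describes. This only builds the
# payload; no network call is made. Model name is an assumption.

def build_describe_request(image_url: str, question: str) -> dict:
    """Assemble a chat-completions-style payload with text and image parts."""
    return {
        "model": "gpt-4o",  # illustrative multimodal model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_describe_request(
    "https://example.com/photo.jpg",  # hypothetical image URL
    "Describe the objects and people in this photo.",
)
print(payload["model"])  # prints "gpt-4o"
```

An assistant like the one described would send such a payload to the model and read the conversational description back to the user.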

Keywords: AI, Blind People, OpenAI's GPT-4, Assistive Technology Services, Multimodal Model, Visual Details, Independence


Related Chunks


This Article can be found in 3 chunks:
- Blindness, eyes, vision and technology, innovation
- A collection of resources looking at trends in technology behind AR and VR
- A collection of articles and videos looking at the eye and human vision