An Artificial Intelligence (AI) tool developed by Google failed during real-world testing. It was supposed to detect early signs of a disease that leads to blindness.
Many in the medical field have touted the help brought by AI tools. These tools are usually used in the screening process for many ailments. When properly trained, they can render highly accurate diagnoses.
Google Health developed a deep learning AI that scans images of an eye. The AI tool then looks for evidence of diabetic retinopathy in these images. Diabetic retinopathy is one of the leading causes of blindness around the world.
Google claims that the AI tool was properly trained. The tech giant added that when it was tested in controlled environments, it returned accurate results. However, it would appear that this accuracy did not carry over to real-world tests.
What went wrong
Google researchers and medical experts tested the tool in clinics located in Thailand. The research and tests were conducted over a span of eight months. The team gathered data from patients at a total of 11 clinics.
Despite exhibiting high theoretical accuracy, the tool failed when tested in the real world. The negative result ended up frustrating both patients and researchers. It also raised questions about the effectiveness of AI tools in real-world applications.
Our new user-centered research examines how nurses in Thailand are using our diabetic eye disease AI. This is one of the first published studies examining how a deep learning system is used in patient care https://t.co/KvLBTjPVjt
— Google Health (@GoogleHealth) April 25, 2020
According to Google, one of the reasons the tool failed was various environmental factors. The tech giant said that factors like room lighting can have a direct impact on the quality of images.
Experienced and trained clinical technicians can properly respond and adjust to these environmental factors. AI tools, on the other hand, need to be specifically trained to handle such situations.
Google added that lighting had a significant impact on the images it had gathered. In some instances, captured images tended to have dark areas and blurs. The tool then interpreted these areas as “ungradable,” thus affecting its accuracy.
What does this mean for AI?
Google’s findings could help experts learn how to train better AI tools. Despite the failure in real-world tests, Google maintains that the lessons learned throughout the test are invaluable to the future of AI.
The tech giant added that the problem was not with the artificial intelligence tool itself. One of the reasons it failed was that its developers had not trained it for varied real-world conditions. Though the AI tool did produce some valuable outputs, Google said that it has to be highly accurate before it can be adopted further.
Image courtesy of Hitesh Choudhary/Unsplash