Artificial intelligence is evolving rapidly, ushering in a new age of technology. Though we are still a long way from creating super-intelligent robots, AI seems to become more human-like every few months. And while the debate over what role AI should play in society remains heated, its usefulness is hard to deny: it can tackle tasks that would be impossible for humans.
Computer scientists from the University of Texas at Austin have recently taught an artificial intelligence agent to see like a human. The "seeing" AI agent could be used to improve a host of technologies, ranging from agriculture to medicine.
I See You
Led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan, and former Ph.D. candidate Dinesh Jayaraman, the University of Texas at Austin team taught the AI agent to take a few glimpses of its surroundings and infer the rest of its environment, much as humans do.
Most AI agents in use today are trained for a single, narrow task, such as recognizing an object or estimating its volume. This new agent, by contrast, was developed for general purposes: it gathers visual information that can be applied across a wide range of tasks.
To do this, the research team used deep learning, a form of machine learning inspired by the brain's own neural networks, and trained the agent on thousands of 360-degree images of different environments.
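The core idea, taking a few narrow glimpses of a scene and inferring the unobserved parts, can be illustrated with a toy sketch. This is a hypothetical simplification, not the team's actual system: the scene here is a one-dimensional brightness panorama, the glimpses are small windows of it, and the "inference" step is plain interpolation, standing in for the deep network the researchers trained.

```python
import numpy as np

def glimpse(scene, center, width):
    """Return the indices and values visible in one narrow glimpse."""
    idx = np.arange(center - width // 2, center + width // 2 + 1) % len(scene)
    return idx, scene[idx]

def complete_scene(scene, glimpse_centers, width=5):
    """Observe a few glimpses, then estimate the unobserved regions."""
    n = len(scene)
    observed = np.full(n, np.nan)
    for c in glimpse_centers:
        idx, vals = glimpse(scene, c, width)
        observed[idx] = vals
    known = np.flatnonzero(~np.isnan(observed))
    # np.interp's `period` argument handles the wrap-around of a
    # 360-degree view; a real agent would use a learned model here.
    estimate = np.interp(np.arange(n), known, observed[known], period=n)
    return estimate

# A smooth synthetic panorama: brightness varying around the circle.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
scene = np.sin(angles) + 0.5 * np.cos(3 * angles)

# Three glimpses cover under 5% of the scene; the rest is inferred.
estimate = complete_scene(scene, glimpse_centers=[0, 120, 240])
error = np.mean(np.abs(estimate - scene))
print(f"mean reconstruction error: {error:.3f}")
```

The point of the sketch is the structure of the problem, not the quality of the reconstruction: the agent's skill lies in choosing *where* to glimpse and in replacing the crude interpolation with patterns learned from thousands of real panoramas.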
"We want an agent that's generally equipped to enter environments and be ready for new perception tasks as they arise. It behaves in a way that's versatile and able to succeed at different tasks because it has learned useful patterns about the visual world," says Grauman.
However, the system is not perfect yet. Because the agent has great potential as a tool for search and rescue, the researchers are working hard to get it to operate under tight time constraints, no easy feat: in a disaster, it would need to identify and locate people within minutes, and potentially far less.
The next step is to deploy the agent on a fully mobile robot, paired with a second AI agent as a sidekick. The additional information the robot gathers as it moves would help the agent learn faster and make better inferences about its environment.