Artificial intelligence doesn’t compare favorably to humans when it comes to problem solving. Ask any eight-year-old child to place a few blocks on a grid in Minecraft and they’ll find the task so easy it’s boring. A computer, on the other hand, doesn’t grasp such concepts nearly as readily.
Stephan Alaniz, a researcher with the Department of Electrical Engineering and Computer Science at Technische Universität Berlin, yesterday published a paper titled “Deep Reinforcement Learning with Model Learning and Monte Carlo Tree Search in Minecraft.” In it, the scientist explains his efforts to create a better method for training an AI to perform simple tasks based on visual input.
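To give a flavor of the second ingredient in the paper's title, here is a minimal sketch of Monte Carlo Tree Search with the standard UCT selection rule. This is not the author's implementation: the toy "counting game" environment (add 1 or 2 to a counter, reward only for landing exactly on a goal), the constants, and all function names are invented for illustration.

```python
import math
import random

# --- Toy environment (hypothetical, stands in for a game like Minecraft) ---
ACTIONS = (1, 2)   # add 1 or add 2 to the counter
GOAL = 10          # reward 1.0 only for landing exactly on GOAL

def step(state, action):
    return state + action

def is_terminal(state):
    return state >= GOAL

def reward(state):
    return 1.0 if state == GOAL else 0.0

# --- MCTS machinery ---
class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # sum of rollout returns backed up through here

def rollout(state):
    # Simulation phase: play randomly until the game ends.
    while not is_terminal(state):
        state = step(state, random.choice(ACTIONS))
    return reward(state)

def select_child(node, c=1.4):
    # UCT: exploit high average value, explore rarely-visited children.
    return max(
        node.children.values(),
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not is_terminal(node.state) and len(node.children) == len(ACTIONS):
            node = select_child(node)
        # 2. Expansion: try one untried action.
        if not is_terminal(node.state):
            action = random.choice(
                [a for a in ACTIONS if a not in node.children]
            )
            child = Node(step(node.state, action), parent=node)
            node.children[action] = child
            node = child
        # 3. Simulation: estimate the leaf's value with a random playout.
        value = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    # Return the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

From state 8, adding 2 lands exactly on the goal, so the search should strongly prefer that action; Alaniz's contribution, roughly, is pairing this kind of search with a learned model of the environment instead of a hand-coded one.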
If we’re ever going to have robots that can live and work among humans seamlessly, without damaging us or our property, they’re going to have to understand how to interact with their environment using visual context. One of the most popular ways to train AI for this is with video games that have simple controls.
We can judge an AI’s effectiveness at completing specific tasks in a structured environment, like Minecraft, by comparing it to human efforts.
As the above video makes apparent, AI – even an agent shown to be more effective than others trained on similar tasks – isn’t very good at doing simple things yet. But developing cutting-edge technology takes time, even as advances in machine learning happen at a terrifying pace.
Future research will drive training times down, effectiveness up, and generate new ideas for algorithms that further blur the lines between artificial and human intelligence.
But for now, it’s interesting enough to watch an AI process hundreds of different moves as it tries to figure out a simple block-placing challenge in Minecraft. It might be worth remembering, in the future, how simple these things were when they began learning.