Algorithms capable of solving a Rubik's Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch, without human help. As if that were not enough, it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any Rubik's Cube it comes across. Most strikingly, it can find the most efficient path to the goal - that is, the solution requiring the fewest moves - about 60% of the time. On average, DeepCubeA needed only 28 moves to solve the puzzle, taking 1.2 seconds to calculate the solution.

That sounds fast, but other systems have solved the puzzle in less time, including a robot that can solve the cube in just 0.38 seconds. Those systems, however, were designed specifically for the task, using algorithms written by humans to solve the puzzle as efficiently as possible. DeepCubeA, on the other hand, learned to solve the Rubik's Cube on its own, using an artificial intelligence approach known as reinforcement learning.

"Artificial intelligence can defeat the world's best chess and Go players, but some of the more difficult puzzles, such as the Rubik's Cube, had not been solved by computers, so we thought they were open to AI approaches," said Pierre Baldi, senior author of the new paper, in a press release. "The solution to the Rubik's Cube involves more symbolic, mathematical, and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan, and make decisions."

A specialist system designed for a single task - such as solving a Rubik's Cube - will always be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural network, could be harnessed for other tasks, such as solving scientific, mathematical, and engineering problems.

"It's a small step toward creating agents that can learn to think and plan on their own in new environments," Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works much as its name implies. Systems are motivated to reach a designated goal, earning rewards for successful actions or strategies and incurring penalties for moves that stray off course. This allows the algorithms to improve over time without human intervention.
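The reward-and-penalty idea can be illustrated with a toy episode loop on a number line. This is a generic sketch of the reinforcement-learning signal described above, not DeepCubeA's actual training algorithm; the environment, policy, and reward values are made up for illustration.

```python
# Toy reinforcement-learning episode: the agent earns a reward for
# reaching the goal and pays a small cost per move, so behaviour that
# reaches the goal in fewer moves scores higher.
def run_episode(start, goal, policy, max_steps=100):
    state, total_reward = start, 0.0
    for _ in range(max_steps):
        state += policy(state, goal)     # apply the chosen move (+1 or -1)
        total_reward -= 0.01             # small penalty for every move taken
        if state == goal:
            total_reward += 1.0          # large reward for reaching the goal
            break
    return state, total_reward

# A "greedy" policy that always steps toward the goal.
greedy = lambda s, g: 1 if g > s else -1

final_state, reward = run_episode(start=0, goal=5, policy=greedy)
print(final_state, round(reward, 2))   # 5 0.95
```

Because each move costs a little reward, a policy that wanders loses points, which is the pressure that pushes a learning system toward short solutions.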

Reinforcement learning makes sense for the Rubik's Cube given the absurd number of possible configurations of the 3x3x3 puzzle - about 43 quintillion. Simply choosing random moves in the hope of solving the cube will not work, whether for humans or for the world's most powerful supercomputers.
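That 43 quintillion figure follows from a standard counting argument: the cube's 8 corner pieces can be permuted and twisted, its 12 edge pieces permuted and flipped, and parity constraints rule out some combinations. A quick sketch of the arithmetic:

```python
from math import factorial

# Count the reachable states of a 3x3x3 Rubik's Cube.
# 8 corners: 8! arrangements, and 3^7 twists (the last corner's twist
# is forced). 12 edges: 12! arrangements and 2^11 flips (last flip
# forced). A final parity constraint ties corner and edge permutations,
# removing another factor of 2.
corners = factorial(8) * 3**7
edges = factorial(12) * 2**11
states = corners * edges // 2
print(states)   # 43252003274489856000, i.e. about 4.3 x 10^19
```

At roughly 4.3 x 10^19 states, exhaustively enumerating positions is hopeless, which is why a learned search heuristic is attractive.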

DeepCubeA is not the first attempt by these University of California, Irvine researchers. Their previous system, called DeepCube, used a conventional tree search strategy and a reinforcement learning scheme similar to the one used by DeepMind's AlphaZero.

But while that approach works well for two-player board games such as chess and Go, it proved clumsy for the Rubik's Cube. In tests, the DeepCube system required a lot of time for its calculations, and its solutions were far from ideal.

The UCI team took a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. It learned to master the Rubik's Cube by working backward.

At first the scrambles involved only a few moves, but they became increasingly complicated as training progressed. In all, DeepCubeA played through 10 billion different combinations over two days while working to solve the cube in fewer than 30 moves.
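The backward-training idea above can be sketched as a generator of progressively harder scrambles starting from the solved state. The move set and function names here are illustrative placeholders, not DeepCubeA's actual state representation or training pipeline.

```python
import random

# Generate training scrambles of increasing depth: apply k random face
# turns to a solved cube, with k growing as training progresses.
MOVES = ["U", "U'", "R", "R'", "F", "F'"]   # illustrative subset of face turns

def scramble(depth, rng):
    """Return a random move sequence of the given length."""
    return [rng.choice(MOVES) for _ in range(depth)]

def training_scrambles(max_depth, per_depth, seed=0):
    """Yield progressively harder scrambles: depth 1, then 2, ... max_depth."""
    rng = random.Random(seed)
    for depth in range(1, max_depth + 1):
        for _ in range(per_depth):
            yield scramble(depth, rng)

batch = list(training_scrambles(max_depth=3, per_depth=2))
print([len(s) for s in batch])   # [1, 1, 2, 2, 3, 3]
```

Because every scramble starts from the solved state, the system always knows a valid path back to the goal, which is what makes the backward curriculum workable.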

"DeepCubeA tries to solve the cube using as few moves as possible," explained McAleer. "Consequently, its moves tend to look very different from how a human would solve the cube."

After training, the system was tasked with solving 1,000 randomly scrambled Rubik's Cubes. In the tests, DeepCubeA found a solution for 100% of the cubes and found the shortest path to the goal 60.3% of the time. The system required, on average, 28 moves to solve the cube, in about 1.2 seconds. By comparison, the world's fastest human solvers need around 50 moves.

"Since DeepCubeA solves the cube in the fewest moves 60 percent of the time, it's clear that the strategy it's using is close to the optimal one, known as God's algorithm," said Forest Agostinelli, a co-author of the study. "While human strategies are easily understandable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. While characterizing this strategy mathematically is beyond the scope of this article, we can see that the approach DeepCubeA employs is not immediately apparent to humans."

To show the system's flexibility, DeepCubeA was also taught to solve other puzzles, including sliding-tile games, Lights Out, and Sokoban, which it did with similar proficiency.

"We applied our algorithm to a total of seven puzzles and found that it solved all of them. This is evidence that the method can be applied more generally," Agostinelli said. "We believe that, given only a goal and a way to work backward from that goal, artificial intelligence algorithms can not only learn to find a path to the goal but also learn to do so as efficiently as possible."

Going forward, the UCI researchers would like to adapt the DeepCubeA algorithm to other tasks, such as predicting protein structures, which could be useful for developing new drugs. They would also like to apply the system's path-finding skills to help robots navigate complex environments more efficiently.
