Deep Learning + AI: How machines are becoming master problem solvers
It’s been two decades since IBM’s Deep Blue won its first game against world chess champion Garry Kasparov, marking the first time an artificial intelligence machine defeated a reigning champion under standard tournament conditions. Deep Blue went on to lose that 1996 match 2-4, but won the May 1997 rematch 3.5-2.5.
Fourteen years later, AI made its television debut in grand style, when IBM’s Watson took down a pair of former “Jeopardy!” champions in a televised match. In a matter of seconds, the machine culled the most probable response to each clue from more than 200 million pages of content, including the full text of Wikipedia. (Watson was not connected to the Internet during the match.)
Now, Google’s AI system, AlphaGo, is making cognitive computing history. Earlier this month, the system outdueled Go Grandmaster Lee Sedol in a five-game match (4-1). Go is an East Asian “chess on steroids” strategy game that uses a larger board and many more pieces than chess, creating a scenario with more possible board positions (10^170) than atoms in the known universe (10^80).
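To get a back-of-the-envelope feel for that scale (the arithmetic below is illustrative, not from the article), note that a 19-by-19 Go board has 361 intersections, each of which can be empty, black, or white. That alone gives 3^361 raw configurations as an upper bound; the commonly cited count of legal positions is closer to 2 x 10^170, which is where the 10^170 figure comes from.

```python
# Back-of-the-envelope scale check (illustrative; figures are approximations).
import math

raw_configurations = 3 ** (19 * 19)   # every intersection empty, black, or white
print(f"3^361 is roughly 10^{math.log10(raw_configurations):.0f}")          # ~10^172

ATOMS_EXPONENT = 80                   # atoms in the known universe ~ 10^80
surplus = math.log10(raw_configurations) - ATOMS_EXPONENT
print(f"about 10^{surplus:.0f} times the number of atoms in the universe")  # ~10^92
```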
The buzz surrounding AlphaGo’s decisive victory has less to do with the win itself than with how the machine outsmarted Sedol. Given the staggering complexity of the game, with a near-infinite number of possible moves, the machine could not rely on memorizing every possible move to decide its next play. Instead, it had to take a more human approach to the game, building intuition through observation and practice (AlphaGo studied millions of positions from games between expert human players and then played millions of games against itself) to gain a sense of what feels like the best move, wrote tech blogger Scott Santens (http://bit.ly/1Rkx9OW). More to the point, the machine had to think.
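To make the “played itself millions of times” idea concrete, here is a minimal sketch of learning from self-play. It is deliberately not AlphaGo’s actual machinery, which pairs deep policy and value networks with Monte Carlo tree search; instead it uses tic-tac-toe and a simple tabular value estimate, and every constant and helper name in it is illustrative.

```python
# A minimal illustration of learning from self-play (NOT AlphaGo's method):
# tic-tac-toe with a tabular value function, improved purely by playing itself.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}                    # state -> learned value for the player who just moved
ALPHA, EPSILON = 0.2, 0.1      # learning rate and exploration rate (arbitrary choices)

def choose_move(board, player):
    """Epsilon-greedy: usually take the move leading to the highest-valued state."""
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)
    def score(m):
        nxt = board[:]
        nxt[m] = player
        return 1.0 if winner(nxt) else values.get(''.join(nxt), 0.0)
    return max(moves, key=score)

def self_play_game():
    """Play one game against itself and back up the result into the value table."""
    board, player, history = [' '] * 9, 'X', []
    while True:
        board[choose_move(board, player)] = player
        history.append((''.join(board), player))
        w = winner(board)
        if w or ' ' not in board:
            # Nudge the value of every visited state toward the final outcome.
            for state, mover in history:
                target = 0.0 if w is None else (1.0 if mover == w else -1.0)
                old = values.get(state, 0.0)
                values[state] = old + ALPHA * (target - old)
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20_000):        # "practice" by playing itself many times
    self_play_game()
print(f"value estimates learned for {len(values)} board positions")
```

The loop is the same family of idea, though: play a game against yourself, score the outcome, and nudge the evaluations of the positions you visited toward that outcome, so that future move choices reflect what has tended to work.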
In their debrief following the match (http://bit.ly/25mcund), Google’s engineers pointed out two game-changing aspects of AlphaGo’s performance. First, the machine demonstrated the “ability to look ‘globally’ across a board—and find solutions that humans either have been trained not to play or would not consider.” It made moves that, according to Google, had a one-in-10,000 chance of being played by a human. The way the engineers see it, AlphaGo-like technology, when applied in almost any industry, has the potential to find solutions that humans don’t necessarily see.
Second is the human achievement behind the machine’s performance. “Lee Sedol and the AlphaGo team both pushed each other toward new ideas, opportunities, and solutions—and in the long run that’s something we all stand to benefit from,” Google wrote.
Besides revolutionary changes to the world’s workforce (as we wrote about in the February issue, http://bit.ly/1Qq1OvD), AI could have a profound impact on the built environment and the AEC industry. For example, the Urban Land Institute last year addressed the potential impact of autonomous vehicles on cities (parking garages, street configurations, etc.). And we know of multiple AEC firms that are doubling down on computational design, predictive analytics, and other advanced technologies to vastly improve building designs and streamline the design and construction processes.
Only time will tell what’s in store for us. Or perhaps we should ask Watson.