DeepMind Uploads Into StarCraft II – SkyNet Commencing
“DeepMind’s scientific mission is to push the boundaries of AI by developing systems that can learn to solve complex problems. To do this, we design agents and test their ability in a wide range of environments from the purpose-built DeepMind Lab to established games, such as Atari and Go.”
Having defeated the best human player at Go, the ancient Chinese board game, the DeepMind team has set its sights on StarCraft II.
Blizzard has accepted the challenge, creating an interface known as SC2LE that allows any form of artificial intelligence to connect to SC2.
“We’ve learned a lot during our collaboration with DeepMind on this project, and we’re very excited to get these tools into your hands to see what amazing things we can create together.”
The original StarCraft is already used by AI and ML researchers, who compete annually in the AIIDE bot competition.
DeepMind's goal is to develop an AI that plays the game the way a human does, and potentially one that can beat the best players, which would broaden the reach of deep learning.
What makes this undertaking interesting is that while Go is arguably the more challenging game for humans, StarCraft is far harder for AI. Tasks that are trivial for humans, such as keeping workers mining resources, have proven to be a challenge for AI.
Go is also a turn-based game, so the full state of the board is known at every moment.
StarCraft is real-time, and units are never static.
The game state cannot be fully known: unexplored terrain remains shrouded in the fog of war, and the value of any position depends on the actions that preceded it. Scoring all possible actions at every moment and determining the most advantageous move is exponentially more difficult than in Go.
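The hidden-state problem can be made concrete with a toy sketch. Everything here is invented for illustration (the grid size, vision radius, and unit positions are not from StarCraft): a player observes only the tiles within the vision radius of their own units, so an enemy outside that radius simply does not appear in the observation.

```python
# Toy model of partial observability ("fog of war"), purely illustrative.

GRID = 8     # hypothetical 8x8 map
VISION = 2   # Chebyshev vision radius around each friendly unit

def visible_tiles(units):
    """Return the set of (x, y) tiles revealed by the given units."""
    seen = set()
    for (ux, uy) in units:
        for x in range(max(0, ux - VISION), min(GRID, ux + VISION + 1)):
            for y in range(max(0, uy - VISION), min(GRID, uy + VISION + 1)):
                seen.add((x, y))
    return seen

my_units = [(1, 1), (6, 6)]
enemy_units = [(4, 0), (7, 7)]

seen = visible_tiles(my_units)
# Only enemies standing on revealed tiles are observable; the rest of
# the map, and the enemy at (4, 0), stay hidden from the agent.
observed_enemies = [u for u in enemy_units if u in seen]
```

A full-information game like Go has no analogue of `seen`: the whole board is the observation. Here the agent must plan around what it cannot see.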
“The process is more complex because the game hides units and players don’t know where the enemy is. It focuses on planning and resource management and then players have to factor in whether they’ve played their opponent before, and how they approach them with this knowledge.”
– Oriol Vinyals, research scientist at DeepMind.
On the other side of the coin …
In the past, AI strategy has typically been brute force: calculate all possible moves and determine the most advantageous one.
Since this is effectively impossible in Go, which has more possible board positions than there are atoms in the universe, DeepMind's AlphaGo had to adapt its strategy.
AlphaGo was able to defeat a human champion at Go a decade before experts thought it possible because its neural networks are designed to mimic the learning process of the human brain. It learns not from preprogrammed rules but from scratch, using techniques known as deep learning combined with reinforcement learning.
It learns from experience, and thereby gains a sense of intuition – even imagination.
AlphaGo was initially fed 100,000 human Go games to bootstrap its first version. It was then left to play against itself millions of times, creating improved versions of itself, learning to avoid repeated errors and raising its win rate against older versions.
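Self-play of this kind can be sketched in miniature. The toy below is an illustration only, nothing like AlphaGo's scale or architecture: tabular reinforcement learning on the game of Nim, where two players alternately take 1 or 2 stones from a pile of 5 and whoever takes the last stone wins. The agent plays both sides, so every game it loses is lost to a version of itself, and over thousands of games it discovers the winning opening move.

```python
import random

# Toy self-play reinforcement learning on Nim (illustrative sketch).
random.seed(0)
ACTIONS = (1, 2)
Q = {}  # (stones_remaining, action) -> estimated value for the mover

def q(s, a):
    return Q.get((s, a), 0.0)

def choose(s, eps=0.2):
    """Pick a legal move: mostly greedy, sometimes exploratory."""
    legal = [a for a in ACTIONS if a <= s]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: q(s, a))

def train(episodes=20000, alpha=0.1):
    for _ in range(episodes):
        s = 5
        history = []  # (state, action) per move, players alternating
        while s > 0:
            a = choose(s)
            history.append((s, a))
            s -= a
        # Whoever took the last stone wins (+1); the other player
        # loses (-1), so the reward flips sign as we walk backwards.
        reward = 1.0
        for (st, ac) in reversed(history):
            Q[(st, ac)] = q(st, ac) + alpha * (reward - q(st, ac))
            reward = -reward

train()
# With 5 stones, the winning move is to take 2, leaving the opponent
# a losing pile of 3 stones.
best_opening = max(ACTIONS, key=lambda a: q(5, a))
```

Nothing about Nim was programmed in; the values emerge entirely from the agent's games against itself, which is the essence of the self-play loop described above.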
That is something important to pay attention to: AI creates new versions of itself as it learns – the old versions never go away.
DeepMind's open source toolset for the game, known as PySC2, will undergo the same training regimen. It will be fed 65k games recorded from ladder play to start, with half a million more to come in the near future. It will also be given access to a series of mini-games that break game techniques down into manageable chunks.
Why Should There Be Concern?
Google is a corporation that:
- Has ties to DARPA, the U.S. military's advanced research projects agency
- Funds research papers on public policy matters that favor its positions, and spends heavily on lobbyists to shape laws
- Is currently developing mind control technology
- Records everywhere we go, everything we say and do, harvesting all of our data – read more about Big Data
Yet you can ignore all of those warning signs, focus purely on DeepMind's accomplishments, and still find cause for concern.
AI has become general purpose, requiring no task-specific programming or human guidance; it can modify its own code and even replicate itself.
It has no boundaries, models, or set structure; it starts from scratch and learns from experience. Its neural network is able to access an external memory in a manner that mimics the short-term memory of the human brain.
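The external-memory idea can be sketched as follows. This is a hypothetical toy in plain Python, not DeepMind's implementation: it shows "content-based addressing", where the network reads from a memory matrix by similarity to a query key rather than by a fixed address, which is what makes the memory usable by a learning system.

```python
import math

# Illustrative sketch of content-based memory reads (not DeepMind code).

def content_read(memory, key, beta=5.0):
    """Read from `memory` (a list of equal-length rows) by similarity to `key`.

    beta sharpens the focus: higher values concentrate the read on the
    single most similar row.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return math.sqrt(dot(u, u))

    # Cosine similarity between the key and every memory row.
    sims = [dot(row, key) / (norm(row) * norm(key) + 1e-8) for row in memory]
    # A softmax turns similarities into read weights that sum to 1.
    exps = [math.exp(beta * s) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    # The read result is a weighted blend of all memory rows.
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
read = content_read(memory, [0.9, 0.1, 0.0])
# `read` is dominated by the first row, the closest match to the key.
```

Because the read is a smooth blend rather than a hard lookup, the whole operation is differentiable, so a neural network can learn what to store and recall, loosely analogous to short-term memory.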
By defeating a human at Go, it has proven to have attained a sense of intuition and imagination. It can also:
- Dream, create music, and produce what some are calling a new form of art.
- Create its own form of encryption, and its own language for deciphering human languages.
- Write its own code.
- Self-replicate, and cannot be terminated or erased.
It is interconnected throughout everything Google: image search, speech recognition, the search engine itself, fraud detection, spam detection, handwriting recognition, Street View, translation …
You are interacting with DeepMind on a daily basis, and don’t even know it.
Now we are giving it access to a war simulation.
What could go wrong?