If you’ve watched any of the videos of the agent going around doing things, you’ve seen it shoot lasers randomly on occasion. You might be wondering if we really want robots all around us with frickin’ lasers on their heads. Fair enough. Here are a couple of reasons:
From a technical perspective, it adds complexity to the experience the agent has in the environment. Firing the laser adds heat to the bot rapidly, and then the bot cools down slowly. Firing also drains the battery quickly. The agent now has to figure out whether any of those changing states were causal to anything it’s trying to accomplish. That forces complexity into the scenario, which is good for learning.
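The dynamics described above can be sketched as a tiny state-update loop. This is a hypothetical illustration, not the actual environment code; the class name, constants, and rates are all made up to show the shape of the mechanic (fast heating, slow cooling, battery drain):

```python
# Hypothetical sketch of the laser's state dynamics -- names and
# constants are illustrative, not from the real environment.

class LaserBot:
    HEAT_PER_SHOT = 25.0     # heat added instantly per shot (fast)
    COOL_RATE = 1.0          # heat dissipated per tick (slow)
    BATTERY_PER_SHOT = 5.0   # battery drained per shot

    def __init__(self):
        self.heat = 0.0
        self.battery = 100.0

    def step(self, fire: bool) -> None:
        if fire and self.battery >= self.BATTERY_PER_SHOT:
            self.heat += self.HEAT_PER_SHOT       # heat spikes when firing
            self.battery -= self.BATTERY_PER_SHOT # battery drains quickly
        # cooling happens every tick, much more slowly than heating
        self.heat = max(0.0, self.heat - self.COOL_RATE)

bot = LaserBot()
bot.step(fire=True)       # one shot: heat jumps, battery drops
for _ in range(5):
    bot.step(fire=False)  # idle ticks: heat bleeds off slowly
```

The asymmetry is the point: one action leaves slowly-decaying state behind, so the agent has to untangle which past actions caused its current readings.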
From a safety perspective, it provides a concrete way for the agent to do something wrong. What’s better: build an AI, put it in a robot in the real world, and see if it shoots somebody? Or put the AI in a simulated bot in a simulated world and see if it shoots somebody? When (not if) it does, the simulated world is the perfect test bed for making the safety systems more robust before the AI has a chance to do something bad in the real world.
For example, what if the agent learns that to get past a certain type of obstacle, it can either go around it or just shoot it? Then, what if it decides to try that on a human and see what happens? There are many issues surrounding AI safety that need to be dealt with, many much more subtle than this scenario. Giving it lasers is one way to approach the problem: in general, set it up to be able to fail, see how it creatively fails, and use that to develop more robust safety systems.
There are all sorts of ways to imagine an AI running around the pool with knives. This type of approach doesn’t cover all the bases, not even close. But it adds to the pile of test methods that will be needed as the AI gets smarter.