Researchers at the Alphabet subsidiary DeepMind have spelled out how they will ensure that AI is developed safely.

The guidelines aim to make certain that powerful systems capable of learning and figuring out their own solutions to problems don’t start to behave in unexpected and unwanted ways.

The big issues: The researchers say the key challenges are specifying the intended behavior of a system in a way that avoids unwanted consequences; making it robust even in the face of unpredictability; and providing assurances, or ways to override behavior if necessary.

Erratic behavior: This is a growing area of academic research. There are plenty of often amusing examples of machine-learning systems that have started behaving oddly. Take, for example, the AI agent that taught itself a rather bizarre way to rack up points in the game CoastRunners. The AI learned it could accumulate more points not by finishing a race, as was intended, but by hitting certain obstacles around the course instead (as in the gif above). DeepMind’s AI Safety team has also shown ways to have an AI agent shut itself off if it starts behaving in ways that might prove risky.

Far out: We shouldn’t worry unduly about AI systems becoming dangerously autonomous. In any case, there are far greater issues to worry about right now, including the bias that may lurk in AI algorithms or the fact that many machine-learning systems are difficult to understand.