Regulators, rein us in: Tesla and SpaceX CEO Elon Musk has said that development of advanced artificial intelligence, including AI created by his own companies, should be regulated. He tweeted the remark in response to an article published this week by MIT Technology Review about OpenAI (which Musk cofounded but has since left), which described how the organization has drifted from its initial purpose of developing AI safely and fairly to become secretive and preoccupied with raising money. Asked whether he meant AI should be regulated by individual governments or on a global scale, for example by the UN, Musk replied: “Both.”

Timely: The European Union unveiled a plan today to regulate “high-risk” AI systems, with new draft laws expected to follow at the end of 2020. Last year, 42 countries signed a pledge to take steps to regulate AI. However, the US and China currently appear to be prioritizing innovation and establishing supremacy in AI over regulation and safety concerns.

Long-standing worries: This is far from the first time Musk has expressed concern about the potential negative consequences of AI development. He has previously described AI as “our biggest existential threat” and “potentially more dangerous than nukes.” In 2018 he told Recode that a government committee should spend a year or two “gaining insight about AI” and then draw up regulations to ensure it is developed and used safely.
