Ethical Tech

The UK’s House of Lords select committee on AI has published a report based on evidence from over 200 industry experts, and the committee hopes its conclusions will guide the use of AI as a force for good. Here are the highlights.

Keep calm and make AI ethical: The report outlines five core principles to guide the ethical use of AI. They are:

1. Artificial intelligence should be developed for the common good and benefit of humanity.
2. Artificial intelligence should operate on principles of intelligibility and fairness.
3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families, or communities.
4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside artificial intelligence.
5. The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence.

Regulators must step in: The committee stops short of calling for specialist AI watchdogs to police its guidelines. But it does hope that existing regulators will use them as a framework for governing organizations that make use of machine learning.

Stop data monopolies: “Large companies which have control over vast quantities of data must be prevented from becoming overly powerful,” says the report, adding that users should have greater control of their data than they do at the moment. (Did someone mention Facebook?)

Plus: The report offers a series of conclusions that will be familiar to anyone following the future of AI. Among other things, it suggests that the UK should invest in machine-learning skills, workers whose jobs may be automated should be retrained, algorithms should be designed to avoid bias, and anyone deploying AI should do so transparently.