If you shop on Amazon, an algorithm rather than a human probably set the price of the item or service you bought. Pricing algorithms have become ubiquitous in online retail as automated systems have grown increasingly affordable and easy to implement. And while industries like airlines and hotels have long used machines to set their prices, those systems have evolved: they have moved from rule-based programs to reinforcement-learning ones, in which the logic that decides a product’s price is no longer under direct human control.

If you recall, reinforcement learning is a subset of machine learning that uses penalties and rewards to incentivize an AI agent toward a specific goal. AlphaGo famously used it to beat the best human players at the ancient board game Go. In a pricing context, such a system is given a goal, such as maximizing overall profit, and then experiments with different pricing strategies in a simulated environment to find the one that best achieves it. A new paper now suggests that these systems could pose a huge problem: they quickly learn to collude.
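For intuition, here is a minimal, stateless sketch of what a reinforcement-learning pricer does. Everything in it is invented for illustration: the price grid, the linear demand curve, and the learning parameters are assumptions, not details from the paper.

```python
import random

# A toy reinforcement-learning pricer: the agent's only feedback is the
# profit (reward) it observes after trying a price. The demand curve
# demand = 100 - 30 * price is a made-up simulated environment.

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # discrete menu of candidate prices
ALPHA, EPSILON = 0.1, 0.1            # learning rate, exploration rate

q = {p: 0.0 for p in PRICES}         # running estimate of profit per price

def profit(price):
    """Reward signal: profit under the assumed linear demand curve."""
    demand = max(0.0, 100 - 30 * price)
    return price * demand

for step in range(10_000):
    # epsilon-greedy: usually exploit the best-known price, sometimes explore
    if random.random() < EPSILON:
        price = random.choice(PRICES)
    else:
        price = max(q, key=q.get)
    # the observed profit is the reward that nudges the estimate
    q[price] += ALPHA * (profit(price) - q[price])

print(max(q, key=q.get))  # the price the agent converges on (1.5 here)
```

Run alone, the agent simply finds the profit-maximizing price for its demand curve; the interesting question is what happens when two such agents face each other.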

Researchers at the University of Bologna in Italy created two simple reinforcement-learning-based pricing algorithms and set them loose in a controlled environment. They discovered that the two completely autonomous algorithms learned to respond to one another’s behavior and quickly pulled the price of goods above the competitive level, higher than it would have been had the two genuinely competed.
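The paper does not spell out a single canonical implementation here, but the shape of the experiment can be sketched as two independent tabular Q-learners that each observe the pair of last-period prices and each receive only their own profit as a reward. The linear-demand duopoly below is an assumption chosen so that the competitive (Nash) price is about 3.33 and the joint-profit price is 5, giving the agents room to land above the competitive level:

```python
import random
from collections import defaultdict

# Two self-interested Q-learners price a differentiated good with assumed
# demand d_i = 10 - 2*p_i + p_j (zero cost). Neither agent sees the
# other's code or rewards; each only observes past prices and its profit.

PRICES = [round(2.0 + 0.25 * k, 2) for k in range(14)]  # 2.00 .. 5.25
ALPHA, GAMMA = 0.15, 0.95  # learning rate, discount factor

def profit(p_own, p_rival):
    return p_own * max(0.0, 10 - 2 * p_own + p_rival)

# State = the pair of last-period prices (agent 0's, agent 1's);
# each agent keeps its own Q-table over that shared state.
Q = [defaultdict(lambda: {p: 0.0 for p in PRICES}) for _ in range(2)]

state = (random.choice(PRICES), random.choice(PRICES))
for t in range(500_000):
    eps = max(0.01, 0.99999 ** t)  # slowly decaying exploration
    acts = []
    for i in range(2):
        if random.random() < eps:
            acts.append(random.choice(PRICES))
        else:
            acts.append(max(Q[i][state], key=Q[i][state].get))
    nxt = (acts[0], acts[1])
    for i in range(2):
        reward = profit(acts[i], acts[1 - i])
        target = reward + GAMMA * max(Q[i][nxt].values())
        Q[i][state][acts[i]] += ALPHA * (target - Q[i][state][acts[i]])
    state = nxt

print(state)  # the price pair the two agents settle on
```

Whether a toy run like this ends up above the 3.33 competitive price depends on parameters and luck; the point is the structure of the setup, in which each agent’s profit depends on the other’s price even though neither is told anything about the other.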

“What is most worrying is that the algorithms leave no trace of concerted action,” the researchers wrote. “They learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude.” This risks driving up the price of goods and ultimately harming consumers.
