Rapid advances in AI have spawned a number of recent initiatives that aim to convince engineers, programmers, and others to prioritize ethical considerations in their work—but almost all of them have originated in rich Western countries. Now the huge engineering association IEEE is trying to change that with an AI ethics proposal that it says will be a global, multilingual collaboration.

In the past two years alone, a raft of new efforts to explore ethics in AI has launched, including the Elon Musk–backed nonprofit OpenAI, the corporate alliance Partnership on AI, Carnegie Mellon University’s AI ethics research center, and the Ethics & Society research unit at Google’s AI subsidiary DeepMind.

But most of these projects are based in the U.S. or U.K., are led by a small group of researchers, and issue updates only in English, which could limit their ability to foster AI that benefits all of humanity, not just those in developed countries.

Since 2016, a group called the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems has been writing a document called “Ethically Aligned Design” that recommends societal and policy guidelines for technologies such as chatbots and home robots. This week, the group unveiled an updated version of the document that integrates feedback from people in East Asia, Latin America, the Middle East, and other regions.

Many of those comments came from members the initiative recruited this year from Brazil, China, Iran, Israel, Japan, Korea, Mexico, the Russian Federation, and Thailand. The group now numbers about 250 people worldwide and continues to grow, according to executive committee chair Raja Chatila.

These international members translated parts of the document into their native languages so it could be circulated widely within their countries, and they submitted reports about the state of AI ethics in their regions to the initiative’s executive committee.

To further diversify the viewpoints, the initiative created a “classical ethics” committee to identify non-Western value systems, such as Buddhism or Confucianism, that could be incorporated into the document’s ethical guidelines. The group also solicited feedback from outreach organizations, like AI4All, that teach women and people of color about AI.

It’s not yet clear how the initiative will meld these different traditions and viewpoints—the final version won’t be published until 2019—but the group has some preliminary ideas. For example, citing the Buddhist belief that nothing exists in isolation could remind AI designers that they bear responsibility for the systems they create. Similarly, teaching developers about the sub-Saharan philosophical tradition of Ubuntu, which emphasizes the value of community, could prompt them to “work closely both with ethicists and the target audience to ascertain whether their needs are identified and met,” according to the latest version of the document.

In spite of its earnestness, there’s no guarantee the venture will produce results. Like other AI ethics guidelines in the works, “Ethically Aligned Design” makes only recommendations; it has no way to enforce them. But growing awareness of the ways AI can discriminate against users if designers don’t take diversity into account—and increasing consumer demands for accountability—point to the value of thinking globally when formulating ethical principles for AI.