The news: Google has signed a deal with Ascension, the second-largest hospital system in the US, to collect and analyze millions of Americans’ personal health data, according to the Wall Street Journal.

“Project Nightingale”: Eventually, data from all of the company’s patients (birth dates, lab results, diagnoses, and hospitalization records, for example) could be uploaded to Google’s cloud computing systems, with a view to using artificial intelligence to scan electronic records and to diagnose or identify medical conditions. The project, code-named Project Nightingale, began in secret last year, the WSJ reports. Neither patients nor doctors have been notified.

A touchy topic: Inevitably, there are worries. The company took control of the health division of its AI unit, DeepMind, back in November 2018, and people at the time warned it could pave the way for Google to access people’s private, identifiable health data. Ascension employees have raised concerns about how the data will be collected and shared, both technologically and ethically, the WSJ reports.

A competitive field: Amazon, Uber, and Apple are all pitching themselves as players in the lucrative health-care world too. However, Ascension is Google’s biggest cloud computing customer in health care so far, and this deal will put the company ahead of the pack.


The news: Twitter has drafted a deepfake policy that would warn users about synthetic or manipulated media, but not remove it. Specifically, it says it would place a notice next to tweets that contain such media.

The context: It’s become relatively easy to make convincing doctored videos thanks to advances in artificial intelligence. That’s led to a huge panic over the potential for deepfakes to subvert democracy, as they can be used to make politicians seem to say or do whatever the creator wants.

A real threat?: The most notorious political deepfakes so far either have not been deepfakes (see the Nancy Pelosi video released in May) or have been created by people warning about deepfakes, rather than any bad actors themselves. For example, in the UK today two new deepfakes have been released of the prime minister, Boris Johnson, and leader of the opposition, Jeremy Corbyn, endorsing each other for an upcoming election on December 12. But they were created by a social enterprise trying to raise awareness of the issue.

The real problem: There is no denying that deepfakes pose a significant new threat. But so far, they’re mostly a threat to women, particularly famous actors and musicians. A recent report found that 96% of deepfakes are porn, virtually always created without the consent of the person depicted. These would already break Twitter’s existing rules and be removed.

An issue for the whole industry: That said, it is refreshing to see a social-media company wrangling with its content-moderation responsibilities so openly. The varying responses to the Pelosi video (YouTube removed it, Facebook flagged it as false, and Twitter let it stand) show what a complex, thorny problem manipulated videos can pose. And unfortunately, we can’t expect deepfake detection technology to fix it, either. We’ll need social and legal solutions, too.


