Autonomous vehicles and robots could share sensor data to help them better navigate the world around them.
The news: Ford is going to test a legged robot that unfolds from the back of an autonomous car to bring parcels to people’s doors. It has teamed up with Agility Robotics, using the firm’s “Digit” robot to try out the idea. One day, a driverless taxi trip could double as a delivery service, dropping packages off between rides, Ford’s CTO suggested in a blog post.
Digit: Ford says the robot can carry packages up to 40 pounds, walk up and down stairs, work around obstacles, and regain its balance if it’s bumped. Bipedal robots have some advantages over wheeled ones: they can deal with obstacles and stairs more easily. However, they’re slower and less stable. Could Digit get up again if it were pushed over, for example?
A crowded market: The boom in home deliveries means companies are scrambling to find quicker, cheaper ways to get goods to your front door. Amazon and FedEx are working on their own pilots, and several smaller players have launched deliveries on college campuses.
A compelling combination: The combination of driverless car and robot is compelling, especially because the two could share camera and lidar sensor data to help each understand their surroundings. The robot could also charge in the car, helping to reduce the need for lots of bulky batteries.
However, we’re still many, many years away from this concept becoming a reality. Ford’s own CEO recently admitted that driverless cars are still years off, and there are still plenty of technical barriers to overcome before we ever see the robot-car duo launched in the wild.
The Trump administration might be building walls between America and some countries, but it is eager to forge alliances when it comes to shaping the course of artificial intelligence.
The Organization for Economic Co-operation and Development (OECD), a coalition of countries dedicated to promoting democracy and economic development, has announced a set of five principles for the development and deployment of artificial intelligence. The announcement came at a meeting of the OECD Forum in Paris.
The OECD does not include China, and the principles outlined by the group seem to contrast with the way AI is being deployed there, especially for face recognition and surveillance of ethnic groups associated with political dissent.
Speaking at the event, America’s recently appointed CTO, Michael Kratsios, said, “We are so pleased that the OECD AI recommendations address so many of the issues which are being tackled by the American AI Initiative.”
The OECD Principles on AI read as follows:
1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
5. Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Many of the biggest political organizations in the US still have awful cyber hygiene ahead of next year’s election.
The news: Researchers at cybersecurity firm SecurityScorecard spent the first quarter of 2019 analyzing the anti-hacking defenses of US political parties, including both the Republican National Committee (RNC) and the Democratic National Committee (DNC). They found that both have some serious holes to address.
The dirty truth: The flaws include exposed personal data about employees that could be used to create fake identities; older versions of software that could let hackers steal usernames and passwords fairly easily; and malicious software, or malware, that could be used to spy on party activities and compromise user accounts.
Why this matters: Ahead of the 2016 US presidential election, hackers penetrated the DNC’s systems and stole e-mails and other data to cause chaos. With European Union parliamentary elections looming and the US about to enter another presidential election year, more attacks on political organizations are inevitable.
Bigger is (somewhat) better: The researchers acknowledge that the RNC and DNC have put significant effort into bolstering their cyber defenses since 2016 but say they still found some (undisclosed) weaknesses. Another, smaller party was using a tool that leaked voter names, dates of birth, and addresses. This flaw was fixed after the party was told what SecurityScorecard had found.