AI-powered software is getting better and could soon be weaponized for online disinformation.
The way its algorithm determines credit lines makes the risk of bias more acute.
Microsoft is experimenting with inserting small Word-document tasks into people’s Facebook feeds, letting them get work done while they procrastinate.
Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
The company’s new facial-recognition service comes with limitations meant to prevent abuse, a restraint that sometimes lets competitors take the lead.
An innovative chip from Graphcore could push artificial intelligence applications to greater heights.
Tech giants can access all of your personal medical details under existing health privacy laws. The question is how else that data might get used.
Microsoft’s $10 billion Pentagon contract puts the independent artificial-intelligence lab OpenAI in an awkward position.
For the second year in a row, researchers from the developing world have been denied visas to a major AI conference in Canada.
At WIRED25, the startup founder talks about her “new privacy paradigm” and how it could protect personal data and keep it anonymous.
Opinion: While well intentioned, the law has too many loopholes for malicious actors and puts too little responsibility on platforms.
It also warns that AI-enhanced national-security technologies, such as autonomous weapons and surveillance systems, will raise ethical questions.
The Defense Innovation Board, with members from Google, Microsoft, and Facebook, praises the power of military AI but warns of unintended harms or conflict.
Don’t fear the robots, according to a report from MIT and IBM. Worry about algorithms replacing any task that can be automated.
Alexa, Siri, and Google Assistant now all give you ways to opt out of human transcription of your voice snippets. Do it.
Amazon had long been considered the favorite for JEDI, a project to use cloud computing to modernize warfare.
The company is incorporating new software that better understands subtleties of language, with the biggest changes for queries outside the US.
A study shows the risks of making decisions using data that reflects inequities in American society.
Google researchers are training neural networks with a new technique to predict how a molecule smells based on its chemical structure.
The software can help developers constrain their creations so they don’t make bad decisions.
Fried onion meets 1984.