Re: [nexa] AI therapy is a surveillance machine in a police state
Chatbots used for counseling and psychological therapy as yet another brick in a possible surveillance state?

"The past few months have seen a sharp escalation in the risks and scope of this. The Trump administration's surveillance crusade is vast and almost unbelievably petty. It's aimed at a much broader range of targets than even the typical US national security and policing apparatus. And it has seemingly little interest in keeping that surveillance secret or even low-profile. Chatbots, likewise, escalate the risks of typical online secret-sharing. Their conversational design can draw out private information in a format that can be more vivid and revealing — and, if exposed, embarrassing — than even something like a Google search. There's no simple equivalent to a private iMessage or WhatsApp chat with a friend, which can be encrypted to make snooping harder. (Chatbot logs can use encryption, but especially on major platforms, this typically doesn't hide what you're doing from the company itself.) They're built, for safety purposes, to sense when a user is discussing sensitive topics like suicide and sex."

"During the Bush and Obama administrations, the NSA demanded unfettered access to American telephone providers' call records. The Trump administration is singularly fascinated by AI, and it's easy to imagine one of its agencies demanding a system for easily grabbing chat logs without a warrant or having certain topics of discussion flagged. They could get access by invoking the government's broad national security powers or by simply threatening the CEO. For users whose chats veer toward the wrong topics, this surveillance could lead to any number of things: a visit from child protective services or immigration agents, a lengthy investigation into their company's "illegal DEI" rules or their nonprofit's tax-exempt status, or embarrassing conversations leaked to a right-wing activist for public shaming."

"Like the NSA's anti-terrorism programs, the data-sharing could be framed in wholesome, prosocial ways. A 14-year-old wonders if they might be transgender, or a woman seeks support for an abortion? Of course OpenAI would help flag that — they're just protecting children. A foreign student who's emotionally overwhelmed by the war in Gaza — what kind of monster would shield a supporter of Hamas? An Instagram user asking for advice about their autism — doesn't Meta want to help find a cure? There are special risks for people who already have a target on their backs — not just those who have sought the political spotlight, but medical professionals who work with reproductive health and gender-affirming care, employees of universities, or anyone who could be associated with something "woke." The government is already scouring publicly available information for ways to discredit enemies, and a therapy chatbot with minimal privacy protections would be an almost irresistible target."

https://www.theverge.com/policy/665685/ai-therapy-meta-chatbot-surveillance-...

Enjoy the read,
Federico

http://www.forbes.com/sites/federicoguerrini/
https://reutersinstitute.politics.ox.ac.uk/our-research/newsroom-curators-an...
My latest book: Content Curation (Italian): http://www.amazon.it/Content-Curation-Federico-Guerrini/dp/8820366126