Checklist 360: A.I. and 23andMe Follow Ups
AI Experiment Reveals People Struggle to Distinguish Humans from Bots in Online Conversations
In a surprising experiment conducted by AI21 Labs, a competitor to OpenAI, participants engaged in two-minute conversations with online strangers and tried to discern whether they were interacting with a human or an AI bot. The game, titled “Human or Not,” pitted players against bots built on leading large language models (LLMs) such as OpenAI’s GPT-4 and AI21 Labs’ Jurassic-2.
VentureBeat reports that the results were eye-opening. Participants correctly identified fellow humans 73% of the time, but when interacting with bots, their accuracy dropped to just 60%. The game, which has been live for over seven months, continues to attract players, and over a million conversations have been analyzed.
Interestingly, the experiment revealed that the cues players expected to rely on, such as flawless spelling and sentence structure, were not reliable indicators of whether the interlocutor was human or artificial. Participants often assumed that bots wouldn’t make typos, commit grammar mistakes, or use slang, but the LLMs had been trained to incorporate exactly those human-like quirks.
After a month of the experiment and millions of conversations, researchers found that 32% of participants couldn’t tell whether they were talking to a human or an AI. Amos Meron, the game’s creator and creative product lead at AI21 Labs, said the experiment’s primary purpose is to educate people about the capabilities of AI in conversational settings.
Meron noted that the real danger lies in people being unaware of the technology, especially as AI-powered bots become increasingly sophisticated, and he stressed that awareness is the best defense against misuse by bad actors, influencers, scam artists, or anyone with ill intent.
As concerns grow about the potential misuse of AI, especially in the context of upcoming elections, Meron suggests that discussions about its impact and implications should become more common. With projections that many products would gain advanced AI capabilities by the end of 2023, the experiment aims to prompt necessary conversations about the evolving landscape of online interactions and the need for public awareness.
McAfee Unveils Project Mockingbird: AI Tool to Combat Deepfake Audio Threats
In response to the rising threat of AI-generated voice synthesizers and deepfake audio, McAfee has introduced Project Mockingbird, an AI-powered Deepfake Audio Detection technology. Citing concerns about cybercriminals using manipulated audio to perpetrate scams and distort public perception, McAfee aims to empower users to identify and shield themselves from maliciously altered audio in videos.
Project Mockingbird, introduced at the Consumer Electronics Show, employs a blend of AI-powered contextual, behavioral, and categorical detection models. According to VentureBeat, the technology claims an accuracy rate of over 90% in identifying and guarding against deepfake audio. The tool analyzes the words being spoken to determine whether they actually came from the purported human speaker.
McAfee’s stated goal is to put “the power of knowing what is real or fake” directly into consumers’ hands. However, the article raises important questions about the practical implementation of Project Mockingbird: it does not specify who the end consumers are or how they would access and use the technology. The piece highlights the tool’s potential application on platforms like Facebook, Instagram, Threads, and Messenger, where deepfake videos have been detected.
As the 2024 election season looms, McAfee expresses concern about getting detection tools deployed in time. The article presses for answers on the accessibility, cost, and installation of Project Mockingbird, and it urges consumers to maintain a healthy skepticism in the absence of such tools, since McAfee itself stresses that awareness and skepticism are key to combating AI-generated deepfakes in the digital landscape.
23andMe Faces Backlash Over Data Breach Response: Weak-Sauce Play of the Week
In a recent podcast segment highlighting the “Weak-Sauce Play of the Week,” attention turned to the aftermath of the 23andMe data breach, in which attackers directly compromised approximately 14,000 accounts and ultimately exposed the personal information of nearly seven million users. The breach occurred a couple of weeks ago and has raised concerns about the company’s response and accountability.
The initial breach was attributed to credential stuffing: attackers took username and password pairs leaked in breaches of other services and tried them against 23andMe logins, succeeding wherever users had reused passwords. 23andMe’s recommendation to users was to avoid password reuse and to activate two-factor or multi-factor authentication. The situation took a more troubling turn, however, when a piece from TechCrunch revealed that the breach’s reach extended to 6.9 million users whose accounts were never directly compromised.
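That advice puts the burden on users, but credential stuffing is also something a service can screen for on its end. One common mitigation, offered here purely as an illustration and not as anything 23andMe is known to have deployed, is to check passwords against known breach corpora at signup or login. Below is a minimal Python sketch using the public Have I Been Pwned “Pwned Passwords” range API; its k-anonymity design means only the first five characters of the password’s SHA-1 hash ever leave the machine. The function name is our own.

```python
# Minimal sketch: reject passwords already known to be compromised.
# Uses the public Have I Been Pwned range API (api.pwnedpasswords.com);
# only the first five hex characters of the SHA-1 hash are sent out.
import hashlib
import urllib.request

def password_is_breached(password: str) -> bool:
    """Return True if the password appears in known breach data."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "checklist-demo"},  # HIBP asks clients to identify themselves
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each line of the response is "HASH_SUFFIX:COUNT".
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if __name__ == "__main__":
    print(password_is_breached("password123"))  # True — widely breached
```

A service screening for breached or reused passwords this way would blunt credential stuffing even when users ignore advice about password hygiene.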
The podcast highlighted 23andMe’s controversial stance, which placed blame on the victims rather than taking responsibility for the breach. The company argued that users who opted into the DNA Relatives feature, which automatically shares data with relatives on the platform, were at fault for the exposure of their information.
In response to 23andMe’s attempt to shift blame, a lawyer representing some of the victims criticized the company’s failure to implement safeguards against credential stuffing (the sort of server-side defense sketched below). The lawyer emphasized that millions of users had their data compromised through the DNA Relatives feature, irrespective of their own password practices. The podcast labeled this response “weak sauce,” underscoring the company’s attempt to absolve itself of responsibility.
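To make the lawyer’s point concrete: the standard safeguards against credential stuffing live on the server, such as throttling failed logins per account and per source IP, because the attack depends on machine-speed guessing with thousands of leaked credential pairs. The sketch below is illustrative only; the names and thresholds are our own, and nothing here describes 23andMe’s actual systems.

```python
# Illustrative sliding-window throttle for failed logins (names/thresholds ours).
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300  # look at the last five minutes
MAX_FAILURES = 10     # failed attempts allowed per key in that window

_failures: dict[str, deque] = defaultdict(deque)

def record_failure(key: str) -> None:
    """Note a failed login for a key such as 'acct:<user>' or 'ip:<addr>'."""
    _failures[key].append(time.monotonic())

def is_throttled(key: str) -> bool:
    """True if this account or IP has too many recent failed logins."""
    attempts = _failures[key]
    cutoff = time.monotonic() - WINDOW_SECONDS
    while attempts and attempts[0] < cutoff:  # drop attempts outside the window
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES
```

In production this state would live in shared storage such as Redis, and tripping the limit would typically trigger a CAPTCHA or a step-up MFA challenge rather than a hard lockout.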
Although 23andMe downplayed the severity of the breach, stating that sensitive information such as Social Security numbers and financial details was not compromised, Wired’s earlier report outlined a trove of exposed data, including ancestry reports, chromosome-matching details, self-reported locations, family names, and profile pictures for millions of users.
The podcast criticized 23andMe’s dismissal of the risks posed by the exposed information, summing up the company’s response as “Weak. Sauce.” The segment emphasized the need for 23andMe to acknowledge its responsibility and take meaningful steps to address the concerns of affected users.