
Checklist 416: Malware as A.I. and A.I. Be Trippin’
DeepSeek AI Sparks Privacy and Security Concerns, Malware Threats Identified
Just a month after raising alarms about DeepSeek AI, security experts continue to warn against using the China-based artificial intelligence tool due to severe privacy risks and emerging cybersecurity threats.
DeepSeek: A Security Nightmare?
DeepSeek, a low-cost AI model developed in China, has been flagged for serious privacy concerns. According to ZDNet and Ars Technica, the app was found forwarding sensitive user data to China-based companies, including the state-owned carrier China Mobile and ByteDance, the parent company of TikTok. Additionally, Ars Technica reported that DeepSeek transmits user data over unencrypted channels, making it vulnerable to interception.
Beyond these concerns, The Future Society, an AI policy nonprofit, has warned about the potential for numerous third-party applications built on DeepSeek’s AI engine, each carrying its own set of security risks.
New Threats: Fake DeepSeek Installers and Malware
Cybersecurity firm McAfee has identified three major DeepSeek-related threats:
- Fake “DeepSeek” Installers – Cybercriminals are distributing seemingly legitimate files like “DeepSeek-R1.Leaked.Version.exe” that actually connect to hostile servers. These files install keyloggers, password stealers, and cryptominers, compromising victims’ devices.
- Unrelated Third-Party Software Installs – Some downloads disguise themselves as DeepSeek AI tools but turn out to be unrelated software like audio editors or system utilities, potentially hiding malware that can be activated later.
- Fake Captcha Pages – Fraudulent websites trick users into copying malicious commands and pasting them into Windows’ Run dialog, disabling antivirus programs and installing data-stealing malware such as Vidar Infostealer, which targets browser data and digital wallets.
How to Stay Safe
Experts strongly advise against using DeepSeek due to its privacy and security risks. However, if users insist on using it, McAfee and SecureMac recommend the following precautions:
- Verify Downloads – Only use official websites and trusted developer forums, and compare checksums when the developer publishes them (see the sketch after this list).
- Check URLs – Watch for subtle typos or altered domains that mimic legitimate sites.
- Monitor Performance – Unexpected slowdowns or overheating may indicate hidden malware.
- Keep Software Updated – Ensure all applications, security software, and operating systems are up to date to protect against vulnerabilities.
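For the “Verify Downloads” step, one concrete check is to compare a file’s cryptographic hash against the checksum some developers publish alongside their installers. Below is a minimal Python sketch of that comparison, not a McAfee or SecureMac tool; the file path and the placeholder checksum are hypothetical and would need to be replaced with real values.

```python
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical values: replace with the real download path and the
    # checksum published on the developer's official site.
    download = Path("~/Downloads/installer.dmg").expanduser()
    published = "paste-the-official-checksum-here"

    actual = sha256_of(download)
    if actual == published.lower():
        print("Checksum matches the published value.")
    else:
        print("WARNING: checksum mismatch; do not open this file.")
        sys.exit(1)
```

On a Mac, the built-in `shasum -a 256 <file>` command produces the same digest without any scripting; if the result doesn’t match what the developer published, don’t open the file.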
Final Verdict: Just Don’t Use DeepSeek
Security professionals emphasize that DeepSeek is untrustworthy, poses a major privacy risk, and now serves as a vector for malware attacks. Their recommendation? Avoid DeepSeek entirely.
AI Hallucination Accuses Innocent Man of Murder, Privacy Advocates Sound Alarm
Artificial intelligence continues to struggle with accuracy, and now it’s facing legal scrutiny over the consequences. A Norwegian man who asked ChatGPT about himself was shocked to discover that the AI falsely claimed he had murdered two of his children and attempted to kill a third. According to Engadget, the chatbot not only fabricated the crime but also sprinkled in real personal details, including the number of children he has, their genders, and his hometown.
Privacy Watchdog NOYB Takes Action
The incident caught the attention of NOYB (None of Your Business), a European privacy rights group known for its work enforcing GDPR regulations. The organization argues that ChatGPT’s hallucination highlights a serious violation of data protection laws.
A NOYB data protection lawyer told Engadget:
“The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth… Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
AI’s History of False Accusations
This is not the first time ChatGPT has falsely accused people of serious crimes. Engadget reports that past AI-generated hallucinations have included:
- Accusing a man of fraud and embezzlement.
- Claiming a court reporter was guilty of child abuse.
- Alleging a law professor committed sexual harassment.
While such errors have raised ethical and legal questions before, NOYB’s latest complaint marks only its second formal action against OpenAI. The first, filed last year, focused on an incorrect birthdate for a public figure. OpenAI at the time stated that it could not alter stored information but could block its use in responses.
Will OpenAI Fix This?
With AI-generated misinformation now escalating to accusations of violent crime, Engadget questions whether OpenAI will develop a way to correct inaccuracies in its dataset. Such a fix may be technically possible, but it remains unclear whether one is on the horizon.