Checklist 390: Ask a Philosopher
University of California Santa Cruz Under Fire for Controversial Phishing Test
In a recent incident at the University of California Santa Cruz (UCSC), a phishing preparedness test sparked significant backlash after it caused widespread panic among students and staff. The simulated phishing email, sent on August 18, 2024, claimed that a staff member who had recently returned from South Africa had tested positive for the Ebola virus. The message urged recipients to log in to a fake information page, a tactic commonly used in real phishing attacks to harvest login credentials.
According to The Register, the test was designed to mimic real phishing attempts and to educate the university community on how to recognize and avoid such scams. However, the choice of Ebola as the hook was criticized as overly alarming and inappropriate. The next day, UCSC’s Chief Information Security Officer (CISO) issued an apology to the university community, acknowledging that the email content was “not real and inappropriate as it caused unnecessary panic, potentially undermining trust in public health messaging.”
The apology further emphasized that simulated phishing emails are intended to enhance security by training individuals to recognize and avoid real threats. However, the CISO admitted that the choice of topic for this simulation was problematic, stating, “We realize that the topic chosen for this simulation caused concern and inadvertently perpetuated harmful information about South Africa.”
Cybersecurity experts have weighed in on the incident, highlighting the delicate balance required in phishing simulations. Cybersecurity researcher Marcus Hutchins has previously warned that phishing simulations can create distrust between employees and security teams if not handled carefully. Google security engineer Matt Linton has likewise argued for a shift in training approaches, with less emphasis on surprise tests and more focus on accurate, clear training on how to identify and report phishing attempts.
This incident underscores the challenges organizations face in training individuals to be vigilant against phishing threats while avoiding unnecessary distress or mistrust. The UCSC case serves as a reminder of the potential repercussions of poorly executed simulations and the importance of considering the psychological impact on recipients.
Source: The Register
Cybersecurity Concerns: From Laser-Based Keystroke Spying to Bicycle Hacking
In a world where cybersecurity threats are becoming increasingly sophisticated, a city council candidate at a recent political event advocated for a community where residents could feel safe leaving their doors unlocked at night. That idealistic vision seems out of touch with current technological realities, in which even the most mundane activities can be targets for cyberattacks.
As highlighted by 9to5Mac, a Wired article detailed how white-hat hacker Samy Kamkar demonstrated that an infrared laser can be used to spy on keystrokes from outside a room. By pointing the laser at the Apple logo on the back of a MacBook, Kamkar was able to decode the keystrokes being typed on the keyboard. While the results weren’t perfect, they were accurate enough to make the technique a viable spying method. The Apple logo’s reflective surface made it an ideal target, raising concerns about the security of even the most basic digital activities.
In another Wired report, researchers presenting at the USENIX Workshop on Offensive Technologies revealed a proof-of-concept attack that could disrupt the performance of high-end bicycles used in professional cycling. The attack, demonstrated by teams from UC San Diego and Northeastern University, showed how Shimano’s wireless gear-shifting systems can be hacked using relatively simple hardware. The method could cause a bike to shift gears unexpectedly or even lock it into the wrong gear, potentially giving cheaters an unfair advantage in races such as the Olympics and the Tour de France.
These incidents highlight the growing need for vigilance in cybersecurity, even in areas that might seem unlikely targets. The Wired report advises that for any electronic device, no matter how niche, users should have a plan to keep its software up to date and should question whether it really needs to be wireless and connected.
As technology continues to evolve, so do the methods used by hackers and cybercriminals. Whether it’s protecting your MacBook from laser-based keystroke spying or securing your bicycle’s gear-shifting system, the message is clear: always be prepared, and never assume your devices are safe from potential threats.
Stephen Wolfram Calls for Philosophical Input in AI Development
In the fast-paced world of artificial intelligence, mathematician and scientist Stephen Wolfram is urging a more measured approach. In a recent discussion highlighted by TechCrunch, Wolfram suggested that philosophers should play a significant role in addressing the ethical and societal questions surrounding AI.
As AI continues to advance, particularly in areas like voice cloning and content generation, Wolfram argues that these developments are not just technical challenges but also deeply philosophical ones. He believes that the intersection of humans and computers requires complex thinking akin to classical philosophy, which is often overlooked in the tech industry.
Wolfram pointed out that many in the tech sector advocate for letting AI make decisions based on what is considered “the right thing.” However, he cautioned that this raises a fundamental question: What is the right thing? Without thoughtful consideration, AI could be deployed in ways that have unforeseen and potentially harmful consequences.
One practical example of this caution might be seen in Apple’s upcoming “Image Playground” feature, expected later this year. Unlike other AI tools that generate photorealistic images, Image Playground is said to deliberately avoid creating images that could be mistaken for real photos. This move could be a reflection of concerns over deepfakes and other ethical dilemmas associated with AI-generated content. Instead, the feature will offer creative but clearly non-realistic styles, focusing on avoiding misuse rather than chasing technological capabilities.
Wolfram’s concerns are not theoretical; he recounts “horrifying discussions” with companies deploying AI without fully considering the ethical implications. He stresses the importance of having Socratic discussions about these issues, noting that many in the tech industry are not thinking clearly about the potential consequences. While Wolfram acknowledges that there is no easy solution, he believes that incorporating philosophical perspectives is crucial in shaping the future of AI.
This call for philosophical engagement in AI development highlights the need for a broader, more interdisciplinary approach to technology, especially as it becomes increasingly integrated into every aspect of society.
Source: TechCrunch