Checklist 96: The Latest Bugs, Flaws, and Privacy Worries
The summer keeps on heating up, so why not stay inside where it’s cool, and relax with the newest edition of The Checklist? As the temperatures outside climb, the headlines in the world of Mac security and beyond keep piling up too. This week, we’re touching on a few security hiccups in Apple products, talking about some welcome patches, and covering the latest creepy information about Facebook’s activities to splash into the news. On our agenda for today’s discussion are these topics:
- Quick Look may lead to long looks
- Apple code signing flaw lets anyone in
- Security updates squish the Black Dot bug & more
- Some freaky Facebook patents
- La Liga is listening
We’re kicking off this week’s episode with a look at a small window that could let the wrong people get big insights into the files you’ve got on your Mac — even if they’re hiding out in an encrypted folder. As it turns out, this window has been open for a long time.
Quick Look may lead to long looks
Convenience is a big part of why we love technology so much. Just look at how much we rely on our computers and smartphones to get more done every day. When personal computers first revolutionized the world, the amount of convenience they would bring to daily life was one of their biggest selling points. Unfortunately, as much as we may love this convenience, it often creates problems for us regarding privacy and security. All we need to do to see that in action is to look at the many issues we’ve covered on The Checklist such as those swirling around Facebook. There are countless instances of online services that, intentionally or not, ask us to give up our privacy for the sake of making things easier.
Even macOS seems to have one of these tradeoffs, although not one deliberately created by Apple. It all has to do with a system process that handles generating file previews. Because of the way it handles the data, macOS could inadvertently expose the names of secret files you thought you’d hidden away from prying eyes. Someone who knew how to access this data, like law enforcement or a malicious attacker, could exploit it. A security researcher named Wojciech Regula brought this issue up for discussion at the beginning of June after investigating the “Quick Look” feature in macOS.
What is Quick Look? Simply put, it’s the service that creates thumbnail snapshots for the files on a given volume of data — in other words, the thing that lets you see “at a glance” whether a file is the one you want to open. Helpful, right? So, what’s the problem? After generating the thumbnails, Quick Look stores them in a cache on the system’s boot drive, and that cache may not be encrypted. Even if you’ve protected the files in question, Quick Look still dumps a plaintext copy of each file name into this cache. If you know where to look, it’s easy to find — and that means you can get a window into a world otherwise locked away.
Using what he described as a “simple command,” Regula was able to recover the file paths and cached thumbnails for both of his test files: one stored on an encrypted volume created with macOS’s own tools, and one on a volume encrypted with third-party software. All in all, the attack was trivial to execute.
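For the curious, here is a minimal Python sketch of the kind of thing Regula demonstrated. It assumes the cache location and the “files” table layout described in public write-ups of the issue (both may vary between macOS versions), so treat it as an illustration rather than a forensic tool.

```python
# Minimal sketch: peek at the Quick Look thumbnail cache on macOS.
# Assumes the cache path and database schema described in public
# write-ups; both may differ across macOS versions.
import sqlite3
import subprocess
from pathlib import Path

# Ask macOS for the current user's Darwin cache directory.
cache_root = subprocess.check_output(
    ["getconf", "DARWIN_USER_CACHE_DIR"], text=True
).strip()

index_db = Path(cache_root) / "com.apple.QuickLook.thumbnailcache" / "index.sqlite"

if index_db.exists():
    # Open read-only; the 'files' table reportedly stores the folder and
    # file name of every item Quick Look has thumbnailed -- in plain text.
    conn = sqlite3.connect(f"file:{index_db}?mode=ro", uri=True)
    for folder, name in conn.execute("SELECT folder, file_name FROM files"):
        print(f"{folder}/{name}")
    conn.close()
else:
    print("No Quick Look cache found at the expected location.")
```

Even file names alone (think “divorce_settlement_draft.docx”) can reveal plenty about what lives inside an encrypted volume.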
The news attracted the attention of prominent Apple security researcher Patrick Wardle, a name you might remember from his frequent contributions on issues we discuss. Wardle confirmed not only that the issue was legitimate, but that it had existed for nearly a decade — at least eight years that we know of. He also pointed out that the privacy concerns run deeper than those originally raised, because the same caching process takes place even on removable volumes.
What does that mean? Even if the data is no longer accessible to the system, a trace of it remains behind. Someone snooping for this data could still find file names and thumbnail previews even if they can’t access the actual files. For example, if you were to plug a USB drive into your Mac, Quick Look would generate thumbnails for every file on the drive and store them locally. Even after removing the USB stick, those thumbnail files don’t go away.
For most users, this will never become a day-to-day problem — but the fact that it exposes anything at all about the contents of encrypted drives is a genuine privacy concern. What’s the point of encrypting files in the first place if someone could find their names anyway? Wardle pointed out in his analysis that Apple could likely patch this issue easily, but they have yet to do so. All it would take is to stop generating previews of files on encrypted volumes, or to dump the cache file regularly — especially for removable volumes. Will a fix come down the pipeline now that more attention is being drawn to the issue? We’ll have to wait and see.
Apple code signing flaw lets anyone in
Now let’s turn our attention to another issue that’s been ongoing in the world of Apple security in the past month. This is one area where the “fault” is hard to establish — some say it’s all on developers, others say it’s all on Apple. Wherever the truth lies on that question, what matters is that there were flawed implementations of critical security checks in several open-source projects that relied on APIs provided by Apple. The feature in question: code signing.
We’ve talked about code signing on The Checklist a few times in the past because of its importance in keeping us safe from malware. Generally, that’s been in the context of developer signatures as a way to verify that a file hasn’t been tampered with and comes from where it claims. In this case, we’re not talking about a developer’s digital signature, but Apple’s own. Keep in mind that a digital signature is often the first line of defense. If malware could masquerade behind Apple’s mask, the implications for personal security would be grave.
The issue in question concerns the way Apple’s signature checking APIs were documented and implemented by third-party developers. Because of confusion in the wording used, some developers missed certain steps that would have provided more robust verification. As a result, certain specially made files could fool these programs into believing they were executing a legitimate file signed by Apple. In reality, there could be a malicious payload lurking just beneath the surface.
The actual mechanism for how this all works is complex and steeped in the kind of jargon developers love to use. It involves “FAT/universal binaries,” “Mach-O” files, and a whole slew of processor-related language. To put it simply, though, a file could erroneously pass verification if it met certain conditions. First, developers had to implement the API improperly. Next, the malicious binary had to contain several sets of processor instructions for different types of CPUs, including the old PowerPC chips no longer used in Macs. Finally, a particular flag in the binary had to be set a certain way to trigger the exploit.
If all those conditions are met, the code verifies that the first set of instructions in the binary is properly signed by Apple but, because of the order of operations the software uses, never fully verifies that the other instruction sets are trustworthy. Think of it this way: it would be like allowing a folder full of different malware files to land on your Mac just because the one file listed first alphabetically wasn’t malware.
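To make that analogy concrete, here is a short, hedged Python sketch that parses the header of a FAT/universal binary and lists every architecture slice inside it. This is not the exploit, and it is not Apple’s verification code; it simply illustrates why a correct signature check has to walk every slice rather than trusting the first one.

```python
# Sketch: enumerate the architecture slices inside a FAT/universal binary.
# Illustrates why a signature check must cover every slice, not just the
# first one. Handles the classic 32-bit fat header only.
import struct
import sys

FAT_MAGIC = 0xCAFEBABE  # big-endian magic for 32-bit fat headers

# A few well-known Mach-O CPU type constants.
CPU_TYPES = {
    7: "i386",
    0x01000007: "x86_64",
    18: "PowerPC",  # long gone from Macs, yet still legal in a fat header
}

def list_slices(path):
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            print("Not a 32-bit fat binary.")
            return
        for i in range(nfat_arch):
            cputype, _subtype, offset, size, _align = struct.unpack(
                ">IIIII", f.read(20)
            )
            name = CPU_TYPES.get(cputype, hex(cputype))
            print(f"slice {i}: cpu={name} offset={offset} size={size}")
            # A naive checker that validates only slice 0 and trusts the
            # rest is exactly the mistake that enabled this spoofing trick.

if __name__ == "__main__":
    list_slices(sys.argv[1])
```

Per the public write-ups, the remedy on the developer side was to request strict validation across all architectures when calling Apple’s signature-checking APIs, rather than relying on the default behavior.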
When notified, Apple claimed they didn’t see this as a security issue that was their responsibility. If the APIs were properly implemented, they said, the problem wouldn’t exist at all — the system would work as intended and catch the improper code, so it would not run at all.
On the other side of the equation, developers contend that Apple’s documentation was confusing and unclear on the proper rules to follow. They had no idea they were introducing a vulnerability; as far as they knew, they were adhering to procedure. Since this discussion began, Apple has clarified its documentation to make the correct signing implementation clearer, and the affected third-party apps and programs have released patches to fix the problem. Whoever’s “fault” it was, the good news is that no malware authors appear to have learned of this problem before it was fixed, and no successful exploits are known to have occurred.
Security updates squish the Black Dot bug & more
Okay, so what about some things Apple is doing right these days? If you’ve looked at your devices lately, chances are you’ve seen the alerts popping up telling you there are updates available for macOS and iOS. That’s because at the start of June, Apple blasted out a ton of new updates, fixing a wide range of issues from annoying exploits to big security vulnerabilities. Of course, they added some fun features in iOS 11.4 as well — but let’s drill down into the important changes you should know about.
macOS got plenty of attention this month, with 32 fixes in total spanning three versions, from High Sierra back down to El Capitan. Unfortunately, Apple is keeping the details of most of these vulnerabilities close to the vest, so the information we have about them is limited.
One of the main patches in this update fixes an issue that Apple says could have exposed encrypted emails to third parties — however, we don’t know whether it’s related to the “eFail” issue we discussed recently, since Apple used a different vulnerability identifier than the one assigned to eFail. This update also delivered a fix for a problem where “malicious websites” might have abused bogus email certificates to track users without the need for cookies.
The Messages app was patched across both iOS and macOS to fix two issues. One is serious: it could have allowed an attacker to impersonate another user. Apple doesn’t say precisely how this could have occurred, though it seems to have required local network access; in any case, the loophole has been closed. The second patch for Messages fixed yet another text bug, this time the so-called “black dot bug.”
As with many of the other text message-based bugs we’ve discussed recently, such as the “flag” emoji bug, the black dot bug involves a specially crafted sequence of characters. Hidden behind the “black dot” emoji are thousands of invisible Unicode characters. When the phone tries to display the message, the CPU becomes saturated trying to render all of those characters at once, which crashes Messages and can cause the iPhone to hang. Thankfully, this annoying issue is now fixed with this patch.
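If you are wondering how a single dot can hide so much, here is a rough Python illustration of the general trick. The invisible character used below (a zero-width joiner) is a stand-in; reports on the real bug describe different invisible control characters, but the principle is the same: a message that renders as one symbol can carry thousands of codepoints.

```python
# Rough illustration: a message that renders as one dot but hides
# thousands of codepoints. U+200D (zero-width joiner) is a stand-in for
# the invisible control characters the real bug reportedly used.
dot = "\u26ab"  # MEDIUM BLACK CIRCLE
payload = dot + "\u200d" * 5000

print(len(payload), "codepoints")  # 5001, though it displays as one dot
```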
iOS’s updates also fix a range of issues, 35 in all, but one of the most hair-raising security vulnerabilities was in Apple’s Books app. An attacker in control of a malicious Wi-Fi network — such as an unsecured public network — could have forced Books to display spoofed password prompts. What better way to phish for personal info than to masquerade as a system prompt?
Altogether, Apple released updates that touched almost every part of its ecosystem, from the Mac and the iPhone to Apple TV and even the Apple Watch. With all these new fixes out there, now is the perfect time to jump aboard the update train if you haven’t punched your ticket yet. With these big security updates only coming every so often, it’s essential to bring your devices up to speed so that you can stay protected from all the latest threats.
Some freaky Facebook patents
Should we start dedicating a portion of the show every week just to covering the latest weird, creepy, and bothersome news about Facebook? It sure feels like that sometimes — but this week, at least, we don’t have some new data breach or major fumble to discuss. Instead, we’re taking a more specific view of some of the ways Facebook has thought about using our data. In the wake of the Cambridge Analytica scandal, more people have begun to pay attention to the implications of “big data.” If what we share on social media can already be used to create highly persuasive micro-targeted ads, what else could it do?
The answer to that question lies in the thousands of filings Facebook has submitted over the years to the US Patent Office. Each of these patents outlines one system or another, amounting to a huge number of ideas, all focused on creating tools to harness user-generated data for some bigger purpose. These ideas can give us insight into the kinds of things Facebook envisions for its future.
After all, it’s no secret that Zuckerberg believes that his company is all about making connections — whether or not people actually want those connections in the first place. Despite what Facebook has said about how it needs to “be better” about user data and privacy, it’s hard to take that at face value — especially considering what’s in some of these filings.
Before we dive in, it’s important to remember that a patent is not always an indication of a functional product that already exists, or even of something that will ever come to fruition. In fact, many of these patent applications contain only vague components that describe the general approach Facebook would take — not a real-world implementation. Many of those components, typically machine learning algorithms, would still need programmers to create and train them.
One of the most fleshed-out ideas, one that comes back in several forms throughout Facebook’s patents, is the “life change prediction engine.” By using machine learning to analyze everything from the pages you visit to the searches you make to your credit purchases, and much more, this engine would be able to predict things happening in your life before they come to pass. The patent filing details one example: an upcoming graduation.
The idea is simple: using all that data, detect when a user is about to graduate from school and provide helpful suggestions for what to do afterward. It could also show relevant ads or rely on other, as-yet-undeveloped features to inject content into a person’s Facebook feed. A graduation, however, is just the most benign example. The filing also details the ability to predict the beginning or end of a relationship, events such as moving homes — or even a death in the family.
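To give a sense of how mundane the machinery behind such an engine could be, here is a toy sketch using entirely hypothetical signals and made-up data; it stands in for the general approach the patent gestures at, not Facebook’s actual system.

```python
# Toy sketch of a "life change prediction" classifier. The features, data,
# and labels are entirely made up; this is not Facebook's system.
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioral signals per user:
# [searches for "cap and gown", visits to university pages,
#  purchases tagged "graduation"]
X = [
    [12, 30, 2],  # heavy graduation-related activity
    [0, 1, 0],    # essentially none
    [8, 22, 1],
    [1, 0, 0],
]
y = [1, 0, 1, 0]  # 1 = graduated shortly afterward (invented labels)

model = LogisticRegression().fit(X, y)

# Estimated probability that a new user is about to graduate.
print(model.predict_proba([[10, 25, 1]])[0][1])
```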
Another filing details a more in-depth version of the type of psychological profiling Cambridge Analytica performed. This would use all your data to develop a personality profile of you — for example, it might judge you to be an introvert who likes video games, and target ads and even news accordingly. We know that versions of this technology already exist in some form, but these systems would use volumes of data hitherto unheard of — it almost sounds like a digital version of the “panopticon,” a prison designed so that guards can watch every inmate at any time.
The company also believes it knows how to fingerprint your phone’s camera, so it can identify photos taken by your device even when someone else uploads them. That patent describes several ways to draw connections between users and photos based on this fingerprinting technique, presumably for enhancing sharing and tagging. One interesting use could be detecting “revenge porn” and other images uploaded without consent, though the broader privacy implications of creating yet another unique identifier must be considered, too.
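The patent does not spell out its method, but one well-studied academic technique for camera fingerprinting is sensor pattern noise (often called PRNU): every camera sensor leaves a faint, consistent noise pattern in its photos. The sketch below shows the textbook idea in a few lines of numpy; read it as that textbook technique, not necessarily what Facebook describes.

```python
# Minimal sketch of PRNU-style camera fingerprinting (a textbook academic
# technique, not necessarily the method in Facebook's patent).
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image):
    """Subtract a denoised copy to isolate the sensor's noise pattern."""
    return image - gaussian_filter(image, sigma=2)

def fingerprint(images):
    """Average residuals from many photos taken by the same camera."""
    return np.mean([noise_residual(img) for img in images], axis=0)

def correlation(a, b):
    """Normalized correlation: higher values suggest the same sensor."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Demo with synthetic data: a fixed per-camera noise pattern baked into
# otherwise random "photos."
rng = np.random.default_rng(0)
camera_noise = rng.normal(0, 0.5, (64, 64))
photos = [rng.normal(0, 1, (64, 64)) + camera_noise for _ in range(20)]
fp = fingerprint(photos)

new_photo = rng.normal(0, 1, (64, 64)) + camera_noise
print(correlation(fp, noise_residual(new_photo)))  # noticeably above zero
```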
For those out there wondering if Facebook could listen in on your conversation, the answer is yes. The company owns a patent on using your phone’s microphone to listen in to TV shows to figure out what you watch and how many of the ads you pay attention to, though it’s not currently a known feature of its apps. A related patent would use GPS data to see who you hang out with most often — and to combine all this data into a profile that defines your weekly habits and routines. Ultimately, these patents create a vision of a world where Facebook seeks to know literally everything about you that it can find.
Facebook might mean it when they say they don’t have any intent to use these patents. After all, they could simply want to hold on to the ideas to keep them away from the competition. Then again, this is Facebook — so should we give them the benefit of the doubt? A version of at least one of these systems, using the microphone to passively identify audio playing in a room, recently made a splash in Europe, but it wasn’t Facebook’s doing.
La Liga is listening
Are you a big fan of sports bars? Hanging out with your friends, catching the latest game on TV, and sharing drinks with one another is a popular pastime for many. That’s true in Spain, too, where legions of soccer fans crowd establishments every week to watch regional teams compete against one another. However, where fans see opportunities for fun, the Spanish national soccer organization, La Liga, sees a problem.
They’re happy to have fans who want to watch games, of course, but their concerns lie with the bars these fans visit. Like many organizations, La Liga makes money in part by licensing the rights to broadcast their games in public venues. Just as the NFL and the NBA charge businesses for the right to show their games to patrons, La Liga does the same. Naturally, not every bar in Spain wants to shell out for the right to do so when they could simply put on the TV or use an illegal online stream to attract business. It’s these unlicensed broadcasts La Liga wants to combat.
As with many businesses and organizations these days, La Liga has its own official app. Used to share news, scores, commentary, and more with fans across Spain, the app has been downloaded by nearly 10 million people. Many of those users, the league reasoned, visit establishments outside the home to watch games. So why not enlist their help in cracking down on unlicensed establishments? At the beginning of June, the La Liga app gained a feature that allows it to turn on the microphone in a user’s phone to listen for evidence of those broadcasts.
Worse still, the app combined this audio data with GPS location information, too. This way, if La Liga detected the audio of one of its games playing in a location known to be home to a bar or restaurant that did not have a license, they could initiate legal action. Once users became aware that this was going on, the Internet erupted in outrage — seemingly to the league’s confusion. It claims its actions are only intended to protect players and teams from negative financial impacts.
In an effort to manage the growing PR disaster, La Liga issued a statement detailing the steps it takes to avoid invading user privacy any more than necessary. For one thing, they said, the listening functionality only activates while league matches are being broadcast. They also say the audio undergoes a conversion similar to password hashing, performed locally before transmission to La Liga’s servers for analysis, and that users can revoke their permission for recordings at any time. The league also pointed out that users now have to explicitly opt in during app installation, though it does not explain why the app needs microphone access in the first place.
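La Liga has not published its algorithm, but the “conversion similar to password hashing” it describes sounds a lot like standard acoustic fingerprinting of the kind popularized by Shazam: reduce the audio to a sparse set of spectrogram peaks, then hash those peaks so the original sound cannot be reconstructed. Here is a hedged Python sketch of that general technique, not La Liga’s actual code:

```python
# Sketch of Shazam-style acoustic fingerprinting -- a standard technique,
# not La Liga's actual (unpublished) algorithm.
import hashlib
import numpy as np
from scipy.signal import chirp, stft

def fingerprint(samples, rate=16000):
    """Reduce raw audio to irreversible hashes of spectrogram peaks."""
    _, _, spec = stft(samples, fs=rate, nperseg=1024)
    peaks = np.argmax(np.abs(spec), axis=0)  # strongest frequency per frame
    hashes = []
    for i in range(len(peaks) - 1):
        # Hash adjacent peak frequencies; the raw audio itself is never kept.
        token = f"{peaks[i]}|{peaks[i + 1]}".encode()
        hashes.append(hashlib.sha256(token).hexdigest()[:16])
    return hashes

# Matching then works by counting hash collisions between the phone's
# fingerprint and a fingerprint of the league's own broadcast feed.
rate = 16000
t = np.linspace(0, 2, 2 * rate, endpoint=False)
demo = chirp(t, f0=200, t1=2, f1=2000)  # stand-in for microphone audio
print(fingerprint(demo, rate)[:3])
```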
Though they say they can never hear the original audio, is that any consolation when so much about where you are and what you’re doing is flying off to a server for analysis? In the end, La Liga may wish it had looked for a different method of licensing enforcement: Spain’s data protection watchdog, the Spanish Data Protection Agency, has since opened a preliminary investigation into the app and its recording practices.
With so many concerns out there already about apps using and misusing cameras and microphones, actions like this only make smartphones look more like surveillance tools and less like helpful and fun personal devices. This is a good reminder to always take care in reviewing the permissions you’re granting to apps. You can check on what apps are requesting access to which services on your iOS device by going into Settings and tapping on Privacy for a quick look at all the parts of your phone various apps use.
There is some good news to consider in relation to this week’s topics besides Apple’s latest bug fixes: almost none of these issues is likely to impact the average user anytime soon. Unlike some weeks, when we discuss major, far-reaching security problems (think Equifax or Cambridge Analytica), this week the average user isn’t under siege. Though the code signing issue may persist if developers fail to follow Apple’s instructions, the chances of running into it in the wild are low. Of course, it always helps to maintain a strong defense against malware and potential privacy problems, too!