Apple tells its customers that what happens on the iPhone stays on the iPhone. But as a hacking incident involving Amazon CEO Jeff Bezos could illustrate, that may not always be true.
A security report last week alleged that Bezos, who also owns The Washington Post, received a WhatsApp message laden with code that secretly snatched reams of personal data from his iPhone X. The message allegedly came from Mohammed bin Salman, the crown prince of Saudi Arabia. Security researchers say Bezos probably fell victim to the iPhone’s Achilles’ heel: Its defenses are so difficult to penetrate that once sophisticated attackers are in, they can go largely undetected.
That is in part because Apple employs a secretive approach to finding and fixing security flaws, researchers say, something that has generated debate in the security community.
The alleged incident shows that even as technology companies have encrypted more of the data flowing through their networks to help protect consumers from mass surveillance, the mobile phones in everyone’s pockets and purses remain vulnerable to attacks that can siphon away secret and embarrassing personal information in seconds and without leaving a trace. A hacker often can still gain access to a phone even if much of the data stored on it is encrypted. Security experts are divided on what, if anything, can be done about it.
“A lot of Apple security is amazing and really benefits the average user, but once you’re a target of an advanced adversary or three-letter agency, the advanced security of these devices can be used against you,” says Patrick Wardle, who worked for the National Security Agency and is now principal security researcher for Minneapolis-based software maker Jamf.
Apple spokesman Todd Wilder declined to comment on the company’s security strategy. Amazon spokesman Jay Carney declined to comment on the hacking of Bezos’s phone.
As phone users have continued to migrate their digital lives onto their mobile devices, those devices have become attractive to hackers, particularly those pursuing high-value targets.
Apple has turned security into a part of its advertising and marketing, and the iPhone’s reputation as a safe device has helped it earn the business of VIP customers, who pay nearly $1,500 for Apple’s most expensive phone. In a glitzy 2018 advertising campaign, Apple took aim at competitors like Google, urging customers to trade their smartphones for iPhones. (Google’s Android operating system is the only major competitor to Apple’s iOS.) In one of the ads, a burglar wearing a black stocking cap drops from the ceiling and walks around freely on the Android side of the screen. But when the burglar tries to venture to the iPhone side, he cannot penetrate it and walks away, dejected.
Google spokesman Scott Westover declined to comment on Android’s security.
Security researchers say iPhones and Android phones take different approaches to security. They generally believe there are more bugs and vulnerabilities in Android, in part because there are so many different versions, or “forks,” of the operating system. Google allows its myriad handset makers and others to customize Android, and even different handset models made by the same company are likely to run different versions. With so many versions, there is no central place where all the bugs can be fixed at once, and carriers and device makers often are slow to distribute patches to individual smartphones, leaving many running outdated software. Apple’s control of the iPhone and its operating system often makes patching faster and easier.
Google offers an “Advanced Protection Program” that it says protects the accounts of users, such as journalists and politicians, who are at risk of targeted attacks. There is some evidence that Android bugs are selling for as much as or more than iPhone exploits, a sign that they are becoming more difficult to come by.
Because almost every iPhone is running a recent version of the iOS operating system, when Apple fixes a software bug or vulnerability, it instantly goes out to most users. But the fact that everyone is using the same software can benefit hackers, too, because the bugs are likely to work on the majority of iPhones. If a “high value target” is using an iPhone, the chance that a sophisticated attacker knows of a vulnerability that will work on the phone is high.
Researchers like to compare iPhones to a home with a massive security system covering the perimeter of the house. Nearly every part of the system, like the wiring and the connection to the Internet, is inaccessible even to the people living there. Most burglars would not even bother trying to break in. But the select few who know how to bypass those protections can essentially go undetected once inside.
Apple tightly guards the code that would let people inside the off-limits area of the iPhone, and it puts considerable resources into making sure it is as difficult as possible to break in, researchers say.
In the same home-security analogy, Android essentially lets people inside to test the security features, researchers say. That is because Android is more open than iOS. Outside companies are also free to create customized versions of the operating system, including versions with enhanced security features.
That results in two security philosophies. In Android’s case, researchers say, the more people who look for bugs, the more secure the system becomes. Apple’s strategy follows the idea that less visibility into the software means fewer bugs will be discovered in the first place, making the operating system more secure overall. It takes skill, resources or both to find those bugs, which means hackers typically deploy them sparingly to keep them from being discovered.
Security experts say that for amateur hackers, breaking into an Android phone is generally easier. Westover, the Google spokesman, declined to comment on that assessment. But the deep-pocketed entities that stockpile and weaponize smartphone vulnerabilities probably have more options in their arsenals for hacking iPhones than for Android phones. The most valuable hacks do not even require the victim to click on a link or download a file. Simply knowing the victim’s phone number is enough.
FTI Consulting, a business advisory firm hired by Bezos’s personal security consultant Gavin de Becker, released a report last week that relied on circumstantial evidence to conclude with “medium to high confidence” that Mohammed bin Salman was behind the breach of the Amazon CEO’s phone. That evidence included the fact that after the Saudi prince sent Bezos a suspicious WhatsApp message, the phone began sending out more data. A United Nations report echoed those assertions. There is no definitive proof.
Bezos first theorized Saudi Arabia might be involved in hacking his phone when he took to Medium last year to accuse the National Enquirer of extortion and blackmail for threatening to publish embarrassing photos and text messages. People close to the National Enquirer’s parent company AMI who were not authorized to speak publicly told The Post the only source for their report on the Bezos affair was his girlfriend’s brother. The brother has denied providing explicit photos. Saudi Arabia’s foreign minister, Prince Faisal bin Farhan Al Saud, last week called the U.N. report “absurd.”
In this case, the hackers likely exploited a series of bugs unknown to Apple to hack through all the layers of the phone’s considerable defenses.
Davy Douhine, CEO of RandoriSec, a French company that tests mobile apps for security bugs, said iPhones are protected from many hacks. But “for VIP and special people, now I recommend they use a custom Android phone,” he said.
In the past, Apple has fought attempts by security researchers to bypass iOS security restrictions to look deeper into the operating system. Google has been more supportive of such efforts on Android. Both companies offer “bug bounty” programs, in which outside hackers can receive financial rewards for reporting vulnerabilities directly to Apple and Google. But until recently, Apple’s bug bounty program was invitation-only, while Google’s was open to the public.
Apple recently announced changes to its bug bounty program, upping the maximum reward to $1.5 million. It also announced it would distribute special phones to security researchers that allow deeper access to the operating system. To receive a phone, researchers must submit an application. None have been handed out yet, and security researchers have expressed skepticism that Apple’s moves will significantly boost the number of bounty hunters helping Apple sniff out bugs and vulnerabilities.
Many security researchers are looking for new ways to determine whether iPhones have been hacked, an endeavor that still includes some guesswork. Zec Ops, a two-year-old cybersecurity firm, is aimed in part at helping companies and high-profile individuals deal with the iPhone security conundrum. Customers of Zec Ops instruct their employees to connect their iPhones to a computer or kiosk that uploads data logs to a central server, where they are analyzed for suspicious activity.
To the untrained eye, the logs appear to be meaningless strings of computer code. Zuk Avraham, co-founder and CEO of Zec Ops, says they also offer clues left behind by hackers. After analyzing tens of thousands of phones, Avraham says he estimates 2 to 3 percent of them showed possible indicators of attacks. Exactly how they were compromised is still unknown.
“Apple is doing a relatively great job at securing those devices,” Avraham said. But breaking into one remotely “is still within the capabilities of a talented individual.”
Avraham is leading an initiative called Free the Sandbox.org that advocates that Google and Apple allow more access to their operating systems so researchers can investigate suspicious activity. The “sandbox” is the term for the limited areas of the Android and iOS operating systems that device owners and mobile apps are allowed to access.
Wilder, the Apple spokesman, and Westover of Google declined to comment.
One tactic used by researchers has been so-called “jailbreaking,” which allows the installation of unapproved outside software on iPhones. Apple has argued that those who do so violate a federal law known as the Digital Millennium Copyright Act. That argument met with opposition from advocacy groups such as the Electronic Frontier Foundation and prompted the Library of Congress to create an exemption to copyright law allowing the practice of jailbreaking.
Last year, Apple sued a company called Corellium, which makes software that allows security researchers to study “virtual” iPhones on desktop computers. Corellium says its software speeds up the process of bug research. After first attempting to acquire and shut down Corellium, according to court documents and interviews with people familiar with the case, Apple sued, arguing the company violates Apple’s copyright and the Digital Millennium Copyright Act. Corellium denies the allegations. The lawsuit is scheduled for trial in October.
Google’s Android, by contrast, has long allowed researchers to use virtual Android devices similar to what Corellium offers.
Apple’s efforts to make its operating system more difficult to penetrate may have helped reduce the number of bugs found in iOS, but they have also helped push some of that research underground. Researchers say they often keep their iPhone bugs and vulnerabilities secret because if they become public, Apple could “patch” them, rendering them useless and preventing researchers from using them for further research.
Meanwhile, a black market for iPhone bugs has flourished, researchers say, with companies springing up to offer hacking services to the government or, in some cases, anyone willing to pay.
Bezos’s own investigation into the alleged hack by the Saudi royal cites the Israeli surveillance firm NSO Group as a possible source of the malware. NSO said in a statement this week “unequivocally that our technology was not used” in hacking Bezos’s phone. Security researchers are skeptical of those claims.
It is not the first time NSO Group has been accused of hacking mobile phones. Last fall, the Facebook-owned messaging service WhatsApp filed suit in federal court against NSO, claiming it illegally helped governments hack into the mobile devices of more than 100 people worldwide, including journalists, human rights workers and women who had been the subject of online attacks.
NSO denied those allegations at the time. A hearing in the suit is set for next month.
According to those in the cybersecurity industry, NSO Group is one of many companies offering similar services. Most of these firms operate in the shadows, but NSO Group became widely known after a 2016 incident in which human rights activist Ahmed Mansoor allegedly caught the firm in the midst of an attempted hack.
Mansoor sent the suspicious link to Citizen Lab, a University of Toronto-based research group focused on technology and human rights, which tied the malware to NSO Group. Had Mansoor not noticed the link, the software would have deleted itself, disappearing forever. The incident was widely publicized, and Pegasus, the once-secret hacking product, became a household name in cybersecurity circles. At the time, NSO told the New York Times that it sells only to authorized government agencies and requires its customers to use the service lawfully.
“We know NSO is constantly updating their Pegasus malware, and that they’re not too scrupulous about who that malware gets used by, or against,” said Thomas Reed, director of Mac & Mobile at Malwarebytes, a security software firm.
Taken from https://www.washingtonpost.com/