The FBI says the channels it uses to monitor terrorists and criminals are increasingly “going dark”—that the agency can’t crack the encrypted channels that its targets are using to communicate with one another. Probably the most popular of those encrypted platforms is something that millions of Americans use every day: iMessage, Apple’s alternative to SMS messaging.

Texts, videos, and photos sent through iMessage—anything that appears as a blue bubble on an iPhone—are encrypted in such a way that only the users texting one another can decipher them. Not even Apple can peek at messages as they pass through its servers. (It can, however, read messages that are backed up to iCloud.) But even that system, built by a company whose strong encryption has stymied the FBI, isn’t impenetrable: A team of researchers at Johns Hopkins University recently discovered a way to access and decrypt photos and videos sent via iMessage.

First, they intercepted an encrypted message sent from a phone running outdated software by creating software that posed as an Apple server. Then they were able to repeatedly guess at a 64-character decryption key that corresponded to an encrypted photo on Apple’s iCloud servers. Once they found the correct key, they could download the photo from Apple’s server and view it. (Details of the attack were reported in The Washington Post over the weekend.)
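The full exploit is spelled out in the researchers’ paper; as a loose illustration only, the toy Python sketch below shows why this kind of repeated guessing is feasible at all. The `oracle` function here is entirely hypothetical—it stands in for any server behavior that confirms whether a partial guess is on the right track—but it makes the arithmetic vivid: per-character feedback turns an impossible search over 16^64 keys into at most 16 × 64 = 1,024 guesses.

```python
import secrets
import string

HEX = string.hexdigits[:16]  # '0123456789abcdef'

# Hypothetical stand-in for the 64-character key the researchers guessed at.
SECRET_KEY = "".join(secrets.choice(HEX) for _ in range(64))

def oracle(prefix: str) -> bool:
    """Toy stand-in for a server response that leaks whether a guessed
    key prefix is correct -- the kind of feedback that makes guessing
    tractable. (This is an illustration, not the actual Apple flaw.)"""
    return SECRET_KEY.startswith(prefix)

def recover_key() -> str:
    """Recover the key one character at a time using the oracle."""
    recovered = ""
    while len(recovered) < 64:
        for c in HEX:
            if oracle(recovered + c):
                recovered += c
                break
    return recovered

key = recover_key()
assert key == SECRET_KEY
# At most 16 guesses per position, versus 16**64 blind guesses for the whole key.
```

The point of the sketch is the gap between the two numbers: any channel that lets an attacker test partial guesses collapses the security of even a very long key.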

The researchers, led by Matthew Green of Johns Hopkins’ Information Security Institute, published a paper detailing the exploit on Monday afternoon. Its publication was timed to coincide with Apple’s release of a new version of its operating system that same day. The latest version addresses the vulnerability—but the attack is a reminder that digital security is a constant, uphill battle, and that even the best encryption is a far cry from an unbreakable safe.

“Apple has great cryptographic engineers and yet they still got this wrong,” said Christina Garman, one of the Johns Hopkins researchers. “Encryption is hard enough to get right when only the intended recipients should be reading things, let alone when you’re trying to add in back doors, front doors, etc.” Garman was referring to the FBI’s ongoing efforts to force Apple to unlock the iPhone that was in the possession of Syed Farook, one of the San Bernardino shooters. Apple has so far refused to play along with the government’s request.

The encryption that Green’s team attacked is unrelated to the security systems that protect the San Bernardino iPhone, but the vulnerability they found underscores the fact that there are a variety of ways of circumventing encryption. It also reinforces an argument made by a group of experts in a paper published by Harvard’s Berkman Center for Internet and Society last month: that the proliferation of Internet-connected devices has provided law enforcement with ample avenues for monitoring and surveilling its targets. (The FBI says hacking individual devices is too resource-intensive; it prefers relying on subpoenas and court orders to access the information it needs.)

The Johns Hopkins paper is also an encouraging sign for Apple’s slow move toward opening up the workings of its products to outsiders. Garman says the researchers first discovered the bug when they read a high-level description of the iMessage encryption system that Apple published in a security paper.

The authors reached out to Apple in November, once they had successfully crafted their attack on iMessage encryption. “They were very responsive and took our disclosure quite seriously,” Garman said.

But the flaw took a while to fix, because it wasn’t limited to iMessage. “Apple had to fix other apps, but won’t say what,” said Ian Miers, another co-author, on Twitter.

The actual code that protects iMessages remains private. That runs against the recommendations of cryptographers and security researchers, who prefer open-source encryption protocols that anyone can test and attack. For example, the encryption behind Signal, an application that allows users to text and call one another privately, has been vetted by the developer community, earning a stamp of approval from Green. (“After reading the code, I literally discovered a line of drool running down my face,” reads a testimonial from Green on Signal’s homepage. “It’s really nice.”)

But Apple has increasingly experimented with open-source applications, including one that it announced just today: CareKit, a software framework that developers can use to build apps that help patients monitor their medical conditions and medications.

That sort of openness promotes cooperation between a product’s developers and the wider world of security researchers. In a statement, Apple thanked the Johns Hopkins researchers for their help identifying the bug in iMessage, which it confirmed it patched in iOS 9.3. “Security requires constant dedication and we’re grateful to have a community of developers and researchers who help us stay ahead,” an Apple spokesperson said.

In their paper, the researchers called on Apple to go back to the drawing board with iMessage, and to allow outside auditors to vet whatever software it comes up with to replace it. “Our main recommendation is that Apple should replace the entirety of iMessage with a messaging system that has been properly designed and formally verified,” the authors concluded. But, recognizing the enormity of that proposition, they went on to recommend specific fixes to close the security holes they found.

It’s telling that even Apple, a top-tier technology company, needs all the help it can get. In a recent episode of Last Week Tonight, John Oliver developed a new tagline for Apple: “Join us as we dance madly on the lip of the volcano,” a phrase meant to reflect the frantic reality of keeping up with the latest security holes and vulnerabilities.

The title that Green and his colleagues gave their paper? “Dancing on the Lip of the Volcano.”