The most mysterious technological object on the planet should have been destroyed at least three times.
First, the device made it through a violent shipwreck in the Mediterranean Sea. Then, it sat submerged in salt water on a sandy cliff 200 feet below the surface of the ocean for more than two millennia. After it was hauled back to dry land in the year 1901, the object was forgotten for nearly a year. A lump of corroded bronze and shredded wood, it was left to rot in an ordinary crate in the open courtyard of the National Archaeological Museum in Athens.
It should have disintegrated. It almost did.
At the time, museum workers were focused on other things. The bizarre events that led to the object’s discovery began in the autumn of 1900, when fishermen diving for sea sponges off the coast of Antikythera, Greece, came face to face with a ghastly sight. The seabed they searched wasn’t dotted with sponges. It was strewn with bodies.
The first sponge diver to resurface was panicked by what he’d seen. There were too many men and horses for him to count; presumably they’d been doomed in a shipwreck. Except they weren’t corpses, after all. The bodies were statues, part of an astounding collection of ancient works, a blockbuster archaeological find.
Over the course of the next 10 months, divers recovered scores of marble and bronze artifacts from the Antikythera shipwreck, which today remains the largest ancient ship ever found. Nearly all of the ship's equipment was massively oversized—including tremendous hull planks more than 4 inches thick. (They left behind even more treasures than they collected, opting to scrap the recovery after one man died of the bends and two others were paralyzed.) The shipwreck made headlines around the world—in part because it yielded several rare bronze statues, which scholars believed might be the work of Lysippos or Praxiteles, two of the most important Classical Greek sculptors of the fourth century B.C.E., according to newspaper reports at the time.
But the divers had dredged up something even more precious. They wouldn’t realize it until nearly a year later, when museum curators peered into a forgotten crate in an Athens courtyard and began to examine the hunk of oxidized metal inside.
The corroded device still bore faded inscriptions, and it appeared to have the guts of a clock: mechanics that didn’t make any sense. After all, the lump had been found among the wreckage of a ship that sailed the Mediterranean more than 1,000 years before timekeeping gearwork first appeared in medieval Europe. When the ship went down, no one on the planet was supposed to have had complex scientific instruments—what was this thing?
It came to be known as the Antikythera Mechanism. In the decades that followed, with ever more sophisticated technology to guide them, researchers would begin to understand how the peculiar device once worked. Today, the mechanism is often described as the world’s oldest computer—more precisely, it was an analog machine for modeling and predicting astronomical and calendrical patterns. Even before it was lost, the device must have been a treasure. When it was new, the mechanism was a turn-crank marvel housed in a rectangular wooden case, like a mantel clock, with two dials on the back. Instead of having two hands to tell the time on the front, the mechanism had seven hands for displaying the movement of celestial bodies—the sun, the moon, Mercury, Venus, Mars, Jupiter, and Saturn. The planets were represented by tiny spheres that could themselves rotate, with the moon painted black and silvery white to depict its phases.
Yet the mystery of the mechanism is only partly solved. No one knows who made it, how many others like it were made, or where it was going when the ship carrying it sank. More than a century since it was discovered, the Antikythera Mechanism remains one of the strangest objects that has survived from the ancient world.
“We know what it did, but we don’t know exactly why they wanted it to do that, what it was used for, and the context in which it was used,” said Jo Marchant, the author of Decoding the Heavens: A 2,000-Year-Old Computer and the Century-Long Search to Discover Its Secrets. “We don’t know whether it was a teaching instrument in a school, or if a rich person would have had this on their dining table, whether it had religious importance, whether it had an astrological meaning—just what it meant to people.”
The prevailing theory today is that the mechanism was manufactured in Rhodes, perhaps for a buyer in Greece. Marine archaeologists and other researchers who have studied the Antikythera shipwreck believe the vessel was a gargantuan grain transporter, packed with valuable works of art, technology, and other luxury goods likely intended for trade, that set sail around 70 B.C.E. (Scholars suspect that grain would have been a natural, useful packing material.) It’s possible that the ship carried many strange and wonderful automata. One of the statues recovered from the site appears to have once stood on an automated pedestal.
Those who have studied the shipwreck believe the vessel could have carried several twins of the Antikythera Mechanism. The mechanism as it was recovered is split into three pieces and represents only a portion of the device as it was built. Scholars believe the rest of it was either destroyed, or is still on the seafloor, covered in sand. “Clearly, this mechanism wasn’t a one-off,” Marchant told me. “It was too sophisticated. It must be part of a whole tradition of these mechanisms.”
“What I believe is that it cannot be just one mechanism and there must be more of them somewhere else,” said Theotokis Theodoulou, an archaeologist and the head of Underwater Antiquities for Greece’s Ministry of Culture. “The Antikythera shipwreck could be such a site.”
Another possibility is more startling: What if other objects like the Antikythera Mechanism have already been discovered and forgotten? There may well be documented evidence of such finds somewhere in the world, in the vast archives of human research, scholarly and otherwise, but simply no way to search for them. Until now.
Scholars have long wrestled with “undiscovered public knowledge,” a problem that occurs when researchers arrive at conclusions independently from one another, creating fragments of understanding that are “logically related but never retrieved, brought together, [or] interpreted,” as Don Swanson wrote in an influential 1986 essay introducing the concept. “That is,” he wrote, “not only do we seek what we do not understand, we often do not even know at what level an understanding might be achieved.” In other words, on top of everything we don’t know, there’s everything we don’t know that we already know.
Solving this problem, Swanson argued, would require efforts “no less profound than trying to formalize human language, creativity, or inventiveness.” Thirty years after he published his essay, we no longer have to rely on human contrivances alone. Now, with the ubiquity of the internet and the rise of machine learning, a new kind of solution is beginning to take shape. The infrastructure of the web, built to link one resource to the next, was the beginning. The next wave of information systems promises to more deeply establish links between people, ideas, and artifacts that have, so far, remained out of reach—by drawing connections between information and objects that have come unmoored from context and history.
A simple Google search for “Antikythera Mechanism” turns up about 351,000 results, the first several pages of which are news articles, a Wikipedia page, and a few academic papers. These results offer decent context for what the device is, and the mystery surrounding it, but none of them go very deep. It would take quite a bit of additional reading and searching, for instance, to get to the 10th-century Arabic manuscript, discovered in the 1970s, that some researchers believe is proof that the Antikythera Mechanism directly influenced the development of modern clockwork, more than a millennium after the shipwreck at Antikythera.
Discovery in the online realm is powered by a mix of human curiosity and algorithmic inquiry, a dynamic that is reflected in the earliest language of the internet. The web was built to be explored not just by people, but by machines. As humans surf the web, they’re aided by algorithms doing the work beneath the surface, sequenced to monitor and rank an ever-swelling current of information for pluckable treasures. The search engine’s cultural status has evolved with the dramatic expansion of the web. Once a mere organizer of information, Google is now treated as an oracle.
The tipping point for this perception came sometime between 1993 and 1995, as the total number of websites online grew from about 130 to nearly 24,000. In 1994, for instance, a web search for the word “culinary” turned up nothing, according to a New York Times story published the following year. Within months, the same search yielded 800 websites. Search “culinary” today and you get 97 million results. There are, as of this writing, billions upon billions of webpages across more than 1 billion websites online, according to Internet Live Stats, and the galactic growth of the web over the course of the past two decades has required search engines to become smarter and faster as a result.
Google won the first battle of the search engines because of its obsession with relevancy, using a variety of weighted factors, such as a site’s quality or popularity, to influence the order of search results as they appear on a person’s screen. It wasn’t so long ago that this was a groundbreaking approach to search filtering. Algorithmic sorting was, in the year 2000, “‘the new nuclear bomb’ of the search-engine world,” Danny Sullivan, the technologist and founder of the website Search Engine Land, told The New York Times that year. But Google had already been thinking this way since its inception. Google’s “I’m Feeling Lucky” button was introduced when the search giant was still in beta, in 1998, as a way of communicating that it knew, down to a single search result, how to deliver what people wanted to find. (The button was designed to take people directly to whichever website Google determined was most relevant to their search, instead of showing them a list of 10 possible options.)
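The weighted-factor approach described above can be sketched in a few lines. This is a toy illustration: the signal names, the weights, and the example pages are all invented here, and they bear no resemblance to Google’s actual signals or formula.

```python
# A toy illustration of weighted relevance ranking. Each page carries a few
# relevance signals; the engine combines them into one score and sorts.
# Signal names and weights are invented for illustration only.

def rank(pages, weights):
    """Order pages by a weighted sum of their relevance signals."""
    def score(page):
        return sum(weights[factor] * page[factor] for factor in weights)
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "a.example", "keyword_match": 0.9, "popularity": 0.2, "quality": 0.5},
    {"url": "b.example", "keyword_match": 0.6, "popularity": 0.9, "quality": 0.8},
    {"url": "c.example", "keyword_match": 0.4, "popularity": 0.3, "quality": 0.2},
]
weights = {"keyword_match": 0.5, "popularity": 0.3, "quality": 0.2}

ranked = rank(pages, weights)
# The top entry of `ranked` is what an "I'm Feeling Lucky" button would
# jump to directly, instead of showing the full ordered list.
```

Shift a single weight and the ordering can flip, which is one reason search engines guard and endlessly tune these values.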
In its success, Google became the embodiment of a decades-long dream among information scientists to reorder the world’s data in ways that would make all of human knowledge more accessible. The search giant is still constantly tweaking its methods to meet the demands of a data-flooded digital world. Google now uses machine learning—as part of its RankBrain search system—in every single query it processes, a Google engineer told the tech site Backchannel earlier this year.
Using machines to find meaning in vast sets of data has been one of the great promises of the computing age since long before the internet was built. In his prescient essay, “As We May Think,” published by The Atlantic in 1945, the influential engineer and inventor Vannevar Bush imagined a future in which machines could handle tasks of logic by consulting large troves of connected data. His essay would prove instrumental in influencing early hypertext—which in turn helped shape the linked infrastructure of the web as we know it.
Bush envisioned sophisticated “selection devices” that would be able to comb through dense information and yield the relevant bits quickly and accurately. At the center of all this was what Bush called the Memex, his idea for a deep indexing system that could consolidate and search mammoth collections of information in various formats—including text, photocells, microfilm, and audio. The Memex, he argued, would be a technological solution to an almost existential problem: The totality of recorded human knowledge was constantly growing, but the tools for consulting this ever-swelling record remained “totally inadequate.” Instead, he looked to the intricate pathways of the human mind to inspire the architecture of a fantastical new system.
The Memex remains among Bush’s best-known contributions to modern computing, along with the computers he himself built in the 1920s and 1930s. Those machines, called differential analyzers, involved wheel-and-disc mechanisms designed to solve equations—a new kind of computational complexity in the 20th century, but based on much older inventions. “This idea is far from original,” he wrote in 1931, “...utilizing complex mechanical interrelationships as substitutes for intricate processes of reasoning owes its inception to an inventor of calculus itself.” Bush was referring to Gottfried Wilhelm Leibniz, the 17th-century philosopher and mathematician.
What Bush did not realize was that the predecessor for his machine was far, far older than Leibniz. The oldest known analog computer is the device found at Antikythera.
The island of Antikythera often appears as just a fleck on the map, if it’s pictured at all, in the cool waters between Cape Malea and Crete, where the Aegean Sea meets the Mediterranean.
In 1953, the ocean explorer Jacques Cousteau and his crew, voyaging on the research vessel Calypso, found themselves in this region. Windy seas had forced them to take shelter at Kythera, an island about 22 miles northwest of Antikythera. It was there that a little boy named John told Cousteau and his colleagues about what was hidden in the choppy waters nearby. “John introduced us to two fishermen who claimed to have knowledge of a sunken city, which is something every diver dreams about,” the legendary diver Frédéric Dumas wrote in his 1972 book, 30 Centuries Under the Sea. “So we were quickly back in the sea again.”
The next morning, locals agreed to lead the divers to the wreck site, where Dumas was the first to go down. “The water was so transparent that I felt as if it might let me fall right down the cliff, which extended vertically to a group of fallen boulders a hundred sixty feet below,” he wrote in his book. “Although I saw no trace of the wreck, I was sure it was there.”
Dumas’s certainty came in part from his appreciation for the local network of knowledge he’d stumbled upon—the kind of information that would have been difficult if not impossible to get from any other source at the time. (Today, Google can take the casual web explorer to a virtual pushpin on a map, showing where the Antikythera shipwreck is located.) “The excavation in 1901 was still the most important event in the history of the island, and it was unlikely that the fishermen, who lived by tradition, could have forgotten the location, especially when they had the cliff to go by, and not just some remote landmarks or a certain distance out to sea,” he wrote.
“For some inexplicable reason,” he added, “I felt that the terrain was not in its natural, unspoiled state.”
In subsequent dives, he and his colleagues found bits of pottery, amphoras, decanters, a fragment of an ancient anchor, and other scattered debris. At one point, they used a makeshift vacuum-like device, made from a sheet-metal pipe, to suck up artifacts from the wreck more efficiently—a destructive practice that makes today’s archaeologists cringe. Dumas remembered the wreck site as both lovely and unnerving. Even at dusk, when the waters seemed “black and uninviting,” soft light filtered down to the boulders below. “The rocks had taken on a disturbingly somber appearance and the sand had become more luminous,” he wrote.
“After the tomb of Tutankhamen was opened, some superstitious individuals remarked that all the scientists who had worked on the project died from unnatural causes,” Dumas wrote. “I wouldn’t go so far as to say the same about ancient wrecks, but it is true that such ships, with their air of mystery and promise of lost treasures, fascinate the average diver and cause him to lose the sangfroid that is so necessary in underwater operations.” Dumas remained convinced that vast treasures from the ship remained at the site—including, he thought, the other half of a strange mechanism, almost like an “astronomical clock” which he and Cousteau had gone to see in Athens. The rest of the device, he surmised, was still in the sand amid the rest of the 2,000-year-old wreckage.
After a few weeks in the region, the crew moved on to Sicilian waters, leaving the mystery of the mechanism behind. From there, it would be more than two decades before Dumas and Cousteau returned to Antikythera, this time to conduct a full excavation of the wreck. In 1976, using the most sophisticated diving technology available at the time, the team discovered hundreds of artifacts—a cache of pottery, bronze ship nails, ornate glassware, gold jewelry, ancient coins, gemstones, an oil lamp, a marble hand, even a human skull. They sifted the sand in search of gearwork, hoping to find more mechanisms or even pieces of the original. There was nothing.
If the Antikythera Mechanism has a twin somewhere in the world—a device that’s been discovered and forgotten, or perhaps never fully appreciated for what it is—how can researchers even begin to look for it?
“Before the Antikythera Mechanism, not one single gearwheel had ever been found from antiquity, nor indeed any example of an accurate pointer or scale,” Marchant wrote in her book. “Apart from the Antikythera Mechanism, they still haven’t.”
That might be about to change. The search engine as we know it now is undergoing a period of radical reinvention, in processing power and in structure, and is likely to be transformed even more dramatically in the years to come. “[Today’s] search engines were a fantastic instrument to get you to where the information is,” said Ruggero Gramatica, the founder and CEO of the search app Yewno, “but often it’s not about searching, but also discovering something that you don’t know you’re looking for.”
Yewno resembles a search engine—you use it to search for information, after all—but its structure is network-like rather than list-based the way Google’s is. The idea is to return search results that illustrate relationships between different relevant resources—mapping out connections between people, events, and concepts affiliated with the search. (You can choose how many related concepts you want to see when you search, anywhere from fewer than 20 to more than 100.)
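The network-like structure of such results can be illustrated with a toy knowledge graph. Everything below—the graph, the concept names, and the breadth-first walk that gathers a neighborhood of related concepts—is a hand-made sketch, not Yewno’s actual data or algorithm.

```python
from collections import deque

# A tiny, hand-made knowledge graph: each concept points to its neighbors.
# Invented for illustration; not Yewno's actual data or algorithm.
graph = {
    "Antikythera Mechanism": ["analog computer", "ancient Greece", "astronomy"],
    "analog computer": ["Antikythera Mechanism", "differential analyzer"],
    "differential analyzer": ["analog computer", "Vannevar Bush"],
    "ancient Greece": ["Antikythera Mechanism", "astronomy"],
    "astronomy": ["Antikythera Mechanism", "ancient Greece"],
    "Vannevar Bush": ["differential analyzer", "Memex"],
    "Memex": ["Vannevar Bush"],
}

def related_concepts(query, limit=5):
    """Walk outward from the query breadth-first, collecting up to
    `limit` related concepts — a neighborhood, not a ranked list."""
    seen, results = {query}, []
    frontier = deque(graph.get(query, []))
    while frontier and len(results) < limit:
        concept = frontier.popleft()
        if concept in seen:
            continue
        seen.add(concept)
        results.append(concept)
        frontier.extend(graph.get(concept, []))
    return results

related = related_concepts("Antikythera Mechanism")
```

Two hops out, the query already touches Vannevar Bush—the kind of cross-disciplinary link a flat list of ten blue links would never surface directly.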
Yewno, which was built primarily for academic researchers, is populated by tens of millions of books and journal articles from nearly two dozen well-known publishers like Springer Nature, MIT Press, and JSTOR. Gramatica says Yewno’s database will swell to 78 million papers and documents by the end of the year, and will keep growing from there.
“What algorithms can help us do is process the whole information and delve into the knowledge to create something that is very similar to an inference,” he told me. “So when you are looking for something … thinking laterally—not just sequentially, but in a cross-disciplinary way—so you can connect things that are apparently unrelated. That is basically where we see the whole area of information processing going from now on.”
If there is any hope of finding new information about the Antikythera Mechanism—or, for that matter, any additional devices like it—it is likely that machines, working alongside human researchers, will play a pivotal role.
Just as Vannevar Bush envisioned, engineers are building computer models of neural networks, machines that mimic the elegance and complexity of human thought. But there are still many challenges ahead. Sourcing is a big one. Even a database built from tens of millions of well-vetted books and articles isn’t comprehensive. And there’s still the question of how the results from these new search engines ought to appear to the person searching. A simple graph that shows a connect-the-dots web of related resources and ideas is one way. A more sophisticated map-like interface is another—“like Google Maps,” Gramatica offered—but you’d still lose scale and context as you zoom in and out.
“In terms of how to visualize it, that is one of the biggest challenges. We need to move away from the list-of-links approach, like the traditional search engine, because otherwise you’re back to the same situation where you need to click, and read, and click, and another window opens, and another window, and another window—and you don’t let your brain see the whole connection,” Gramatica said.
“In 10 years, I think we’re going to be offering an instrument where the ability to unearth information and correlate information is done for you,” he added. “And basically you will ask a machine to generate an inference.” In this way, a search for “Antikythera Mechanism” might not only lead you to surprisingly relevant, long-lost manuscripts—but actually pose a theory that explains how the device is connected to such documents.
People who are thinking deeply about the future of search tend to agree that this sort of machine inference will be possible, yet there’s still no straightforward path to such a system. For all the promise and sophistication of machine learning systems, inference computing is only in its infancy. Computers can carry out massive contextualization tasks like facial recognition, but there are still many limitations to even the most impressive systems. Nevertheless, once machines can help process and catalogue huge troves of text—a not-too-distant inevitability in machine learning, many computer scientists say—it seems likely that a flood of previously forgotten artifacts will emerge from the depths of various archives.
Consider a discovery that occurred in 2012, for example, when a crucial document from American history surfaced after having been lost for nearly 150 years. It was a medical report on President Abraham Lincoln’s condition, written by the first doctor to arrive at Ford’s Theatre after Lincoln was shot. The document had been sent to the surgeon general shortly after Lincoln’s death. It had the potential to change the way scholars understood one of the darkest moments in American history.
It wasn’t actually lost, though. “No, it was in a box of other incoming correspondence to the Surgeon General, filed alphabetically under ‘L’ for Leale, [the name of the doctor who wrote it],” Suzanne Fischer, a historian of technology and science, wrote for The Atlantic in 2012. “In short, this document that had been excavated from the depths of the earth with great physical effort was right where it was supposed to be.”
The trouble was with how the document had been catalogued. “This is because archivists catalogue not at ‘item level,’ a description of every piece of paper, which would take millennia, but at ‘collection level,’ a description of the shape of the collection, who owned it, and what kinds of things it contains. With the volume of materials, some collections may be undescribed or even described wrongly.”
But the bigger problem was this: “No one knew it existed, so how to locate it was beside the point,” Helena Iles Papaioannou, the researcher who found the document, wrote in a response to Fischer.
In the case of the Lincoln report, a human researcher happened upon the document. In the future, such serendipity may not be necessary. A machine that scrapes vast catalogues of text for context would be able to comb archived collections at the item level. (Of course, this would require digitization of the physical document, but that’s another issue.) “I don’t think machines are going to completely supplant us, but they’re certainly going to augment our ability to discover things,” said Sam Arbesman, a scientist who studies complexity and the future of knowledge. “There are going to be more and more of these human-machine partnerships, especially in the realm of innovation and discovery.”
The structural underpinnings for these sorts of partnerships are already being built at the institutional level. For several years, the Library of Congress has been working with several universities—including Stanford, Cornell, Harvard, Princeton, and Columbia—on a project it calls BIBFRAME, a next-generation cataloguing system that will ultimately replace the current electronic system that most libraries use. The outgoing system, built on MARC records—short for MAchine-Readable Cataloging record—was what replaced physical card catalogues in the 1970s. Today’s electronic records are designed such that you can trace any descriptive element from one record—an author’s name, for example—to other records stored in the same format. But BIBFRAME will go much deeper, producing links that reveal connections about any number of other elements related to a book or resource, including items from the web. The new system is built for the Internet Age, and meant to meet expectations about how people search for information online. “[The existing system] is self contained and library-oriented, and we need to get something that is conversant with the larger information community,” said Beacher Wiggins, the library’s director for acquisitions and bibliographic access. With BIBFRAME, the idea is to use “the same language that the browser community and the internet community uses,” so that the library stays linked to outside resources even as browser technology changes.
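The difference between tracing a repeated text string and following a shared link can be sketched in miniature. The field names, the identifier, and the second book title below are invented for illustration; real BIBFRAME records are RDF-based and far richer than these dictionaries.

```python
# Flat, MARC-style records: the author is just a repeated text string,
# so connections between records exist only by string coincidence.
flat_records = [
    {"title": "Decoding the Heavens", "author": "Marchant, Jo"},
    {"title": "A Hypothetical Second Book", "author": "Marchant, Jo"},
]

# Linked records: the author is a shared entity that records point to,
# so a search can traverse from any record to everything connected to it.
# "person/1" is an invented identifier, not a real BIBFRAME URI.
entities = {"person/1": {"name": "Marchant, Jo"}}
linked_records = [
    {"title": "Decoding the Heavens", "author": "person/1"},
    {"title": "A Hypothetical Second Book", "author": "person/1"},
]

def works_by(entity_id):
    """Follow links from an entity to every record that references it."""
    return [r["title"] for r in linked_records if r["author"] == entity_id]

titles = works_by("person/1")
```

The payoff is exactly what Wiggins describes: start at one record, follow the shared entity, and surface items another institution catalogued that you never knew existed.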
It’s easy to see how such a system could accelerate major discoveries. In the 1950s, it took years for the Yale University historian Derek de Solla Price to work his way back through various manuscripts and scientific documents and eventually, having stumbled upon the Antikythera Mechanism, rewrite the history of modern clockwork. Price’s research spanned thousands of years of technological history. The Antikythera Mechanism, Price concluded, was not just a miracle of early gearwork but represented the very origin of modern machinery. While much of what he discovered came from his own direct observations of the mechanism and his ability to contextualize other findings related to it, consider how much more he might have learned if a computer had helped him comb through millions of documents in the first place. The right algorithm could find the thread between the wheel, the astrolabe, the sundial, and the Antikythera Mechanism—then produce a web of resources illustrating those connections.
It won’t be long before the public can begin ferreting out information this way themselves. In March, the Library of Congress completed its first pilot program for the new BIBFRAME system—which included transferring some 10,000 records to the new format. Now, it’s preparing for another test run, set to begin in early 2017. As part of the next pilot, the library is also developing specifications for other institutions that want to convert their data into the new format. The more data involved, the more powerful BIBFRAME becomes. The Library of Congress alone plans to convert around 20 million records to the new format within the next five years. “But keep in mind,” Wiggins says, “the library itself has 162 million items and all of those are not covered by MARC records even. Then you start thinking about the entire collection of MARC records in the world and you get into the hundreds of millions. How do you manage that? How do you have everyone who has a repository of MARC data come on aboard? I suppose in an ideal world, the goal is to convert all of them, but we know that won’t happen.”
“The value that I see going forward is the linking part of the data environment,” Wiggins added. “You start searching at one point, but you may be linked to things you didn’t know existed because of how another institution has listed it. This new system will show the relationship there. That’s going to be the piece that makes this transformative. It is the linking that is going to be the transformative.”
The idea for linking information this way can be traced back more than 70 years, all the way to Bush’s Memex. But none of it would be possible without new technology. Machine learning and artificial intelligence will change the way people search, but the search environments themselves will evolve, too. Already, computer scientists are building search functionalities into virtual reality. In other words, the future of human knowledge—how we discover and contextualize what we know—depends almost entirely on tools and digital spaces that are rapidly changing and will continue to change.
The field of marine archaeology, still in its infancy, began at Antikythera. Though the sponge divers in 1901 were able to recover great treasures without modern SCUBA gear, they really only ever glimpsed the environment of the wreck. More than a century later, divers have exhaustively searched this undersea world, with robot crawlers, 3-D mapping, closed circuit rebreathers, and an astronaut-like exosuit, among other technologies. All of the divers who have searched the site over the years have themselves become a crucial and “very symbolic” part of the wreck’s significance, says Theodoulou, of Greece’s Ministry of Culture.
Today, the story of the wreck and those who have sought to understand it is told in a scatter of objects lost and found on the sandy sea shelf below the cliffs of Antikythera—and in the knowledge of the local folks who have led explorers to the site. “And it’s all embedded in this framework of technology,” Theodoulou told me. “The technology used over time to approach the site and the technological knowledge that the cargo itself provides to us.” That includes the heavy bronze helmets divers used in 1901, the early SCUBA equipment Cousteau used in 1953, a new kind of dredging tool that slurped up artifacts in 1976, all the way up to the advanced mapping software and high-tech diving suits of the past decade.
“We’ve got this feeling that we’re walking in the footsteps of giants, and that’s really cool,” said Brendan Foley, a marine archaeologist from Woods Hole Oceanographic Institution who has dived the wreck site multiple times. On one dive at Antikythera, for instance, Foley and his colleagues recovered a remarkably well-preserved dinner plate—not an ancient artifact, they later realized, but likely a remnant from the dive mission in 1901. “We feel a direct connection to those sponge divers, and some of the things we’ve found that are most evocative are not the ancient artifacts but have to do with the 1901 and 1953 expeditions.”
“I’m absolutely convinced that knowledge is a big chain starting from the long past, from the neolithic times, even earlier, and reaching our times,” Theodoulou told me. “Rings of this chain have been broken in some places, but the chain is the same. You just have to find the pieces and bind them together. And the mechanism is the absolute, tangible example. It is so sophisticated that it could not be just a chance example, a chance find.”
Searching for lost information about the device is, in its own way, as much of a challenge as searching the seabed for fragments of the mechanism itself. But while many researchers are holding out hope that another mechanism might be found in the ocean at Antikythera, it’s more likely that a similar device from the same era might be found elsewhere—or that other ancient artifacts or records might help fill in gaps of understanding about the existing device.
Researchers have long explored a possible link between the mechanism’s design and ancient Babylonian astronomical data. There are hints, too, in the writings of Cicero about the existence of a device that could reproduce the motions of the sun, moon, and planets. Later, around 400 C.E., the poet Claudian wrote of a “bold invention” of “human wit” that used a “toy moon” and other spheres to mimic nature. Researchers now believe that the use of gearwork to model celestial bodies was common among Islamic engineers in later centuries—and perhaps as part of a tradition inherited from the ancient Greeks. Several researchers believe that Archimedes’ treatise on sphere-making, a long-lost manuscript that’s referenced in existing works, could shed light on the origin of the Antikythera Mechanism. But it may never be found. The ancient documents that survive today aren’t always the best quality, in large part because the people who choose what to save over the course of many generations have different goals and value systems than the historians who come after them.
Surviving artifacts, especially anything made from bronze like the mechanism, are even harder to come by. Many such objects were melted down to make weapons and ammunition. We know from historic records that there were thousands—maybe even millions—of large bronze statues in ancient Greece. “Pliny wrote that there were 3,000 in the streets of Rhodes city alone, and this was in the first century A.D.,” Marchant wrote. Today, in the National Archaeological Museum in Athens, which houses one of the best collections of statues from this era on the planet, there are only 10.
“All but one,” Marchant wrote, “are from shipwrecks.”
Time erases most everything and everyone, eventually. Any effort to understand the past is based entirely on incomplete records. And because it is impossible to standardize the language used to catalogue what’s left, or to fully index what is found, humans are unable to search through our own vast repositories of knowledge.
To discover hidden gems in existing stores of human knowledge, Swanson wrote in his 1986 essay, we would need a massive thesaurus—one that describes “all relationships that people know about and then determine, for each search, which among those relationships” are actually relevant. “To build such a universal thesaurus entails no less than modeling all of human knowledge,” he wrote. It would be an impossible task—not least of all because, “to use such a thesaurus, one would have to retrieve relevant information from it, so a second universal thesaurus would be needed as a retrieval aid to the first, and so on ad infinitum. The builder of a thesaurus is, in principle, lost in an infinite regress.”
There’s some hope yet. Artificially intelligent systems are already creating and distilling robust models of human knowledge, but they’ll still be constrained by the datasets that feed into them. So there will be some degree of luck involved if, for instance, a machine happens upon an ancient document that reveals the whereabouts of more machines like the Antikythera Mechanism, or determines who built the one found on the Mediterranean seafloor so many decades ago. At the same time, the evolution of information systems makes remarkable discoveries seem more possible now than ever before. “All I can say is there are an awful lot of manuscripts that have never been read, let alone translated,” Marchant told me. “I think it really is a reminder of how much we don’t know.”
“Think how many other types of technology there must have been that we don’t know about,” she added. “What I find fascinating is this: We see this ancient technology and initially it seems it was lost, and we’re like, ‘Where did it go?’ But then you look and you see the threads connecting it through history—to a sundial or the 13th century astrolabe. So it survived and played a key role in stimulating the tech we take for granted. The way different cultures use things in different ways, technology can become almost unrecognizable, but the kernel of that technology lives on.”
The richest source for new information about the mechanism may, for example, be waiting for researchers in old Islamic manuscripts—thousands of documents that have never been catalogued or translated by anyone with the technical expertise to appreciate what they might contain.
Or, perhaps the mystery of the mechanism will never be solved.
No amount of technology or depth of curiosity can bring back what’s forever lost. This is why searching is, and will always be, a “necessarily uncertain” endeavor, as Swanson put it. Searching for lost knowledge is its own kind of science, but ultimately an incomplete one. “In that sense,” Swanson wrote, “there are no limits to either science or information retrieval. But then, too, there are no final answers.”
And yet people keep searching, sifting through the sands of time for traces of the past. They continue looking, in dank archives and distant oceans, against all odds of discovery. We search because we must, because in every direction, stretching back to the beginning of human history, is the irresistible possibility that we might yet find a strange new sliver of who we were, and better understand what we have become.