I was walking through lower Manhattan on the way to work one morning a couple of years ago when I saw a guy in a hardhat peering through a piece of equipment mounted on a tripod.
I'd seen this sort of equipment before, lots of times near construction sites or operated by crews on the side of the road. But I never knew exactly what it did. So I asked him: What is that thing? It was surveying equipment, the guy told me, which helps take measurements—like distance and elevation—for mapping and other planning purposes. Ohhh. Cool.
And it was cool. Not just the fact of the thing but my newfound (if limited) understanding of it. There was something so satisfying about a simple exchange that answered a question I'd quietly wondered about for years. Which helps explain why a site like What Is This Thing Called is so delightful. It's a simple Tumblr, made in the spirit of a similar Reddit thread, that features photos of obscure, forgotten, or otherwise ambiguous technologies. Anybody can comment on the photos to help clarify what the thing is.
A lot of the mystery things are commonplace items. There's the plastic pamphlet that a restaurant bill arrives in (a check presenter) and those stumpy cylindrical posts that prevent cars from driving onto pedestrian spaces (bollards). Others are things that happen, phenomena rather than physical objects themselves—like the kink in a landline cord.
It's sort of like Quora for pictures, a one-stop place to tap the knowledge of the crowd—which is really useful if you want to know what to call something (surveying equipment, for example). Put a random question before a big enough audience and you're liable to match the right human with the right object so you get the information you're looking for.
But what if you're looking for information—beyond what's possible on What Is This Thing Called—that requires cataloguing many, many more images? Think of it like a Shazam for the physical world, a program that might tell you not just what something is but what that thing is related to. Say, for example, you want to find objects visually similar to one another—maybe various price points for a style of flatware you liked at a restaurant and want to buy for home. These kinds of searches are possible, but made harder when you don't know what the object you saw is called.
What many people think of as "image searches" today aren't entirely reliable for the sorts of applications we can imagine, despite the vast repositories of images online. A Google image search for "that tripod thing people have on construction sites" did turn up a couple of photographs of surveying tools, but the results showed lots of other, unrelated stuff, too. (Even reverse Google searches—right-clicking on an image allows you to "search Google for this image"—are better for tracking down a photo's origins online than for identifying what an unknown object is or is like.)
There are nascent search-by-photo tools out there already. TinEye is a reverse image-search engine that lets you upload your own photos and check them against a database. And it works well for images that have already been posted online—a photo of the surveying equipment, for instance, was easily traceable to a site that sells the stuff. But TinEye isn't so good at identifying images that aren't already tagged with metadata across the web. A photo I took of my wallet turned up zero results, while a human brain would probably discern enough details to find it with a related word search.
Then there's Superfish, which promises that it can "find everything that words can't possibly describe" by using an algorithm to comb through millions of image matches to the photos you upload, then comparing and ranking the results. The company has a series of category-specific apps, called Like That, that help people identify the kinds of flowers, breeds of puppies, and styles of furniture that match what they see in the world. (And it's pretty easy to take the mental leap from a robot that can learn that you like beagles and bougainvilleas to an algorithm that might, say, learn a person's type from their Tinder-swiping habits, and maybe turn that into an app that learns your taste in romantic partners through visual pattern recognition.)
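One common way tools like these work, at least at the simplest level, is to reduce each image to a compact fingerprint and then rank candidates by how many bits of the fingerprint differ. Here's a rough sketch of that idea using a generic "average hash" technique; this is an illustration, not TinEye's or Superfish's actual algorithm, and the images here are toy grayscale grids rather than real photos:

```python
# Toy similarity search: hash each image, rank candidates by Hamming
# distance between hashes. (Illustrative only; real systems decode and
# downscale actual image files before hashing.)

def average_hash(pixels):
    """Return a bit list: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; smaller means more visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

def rank_by_similarity(query, candidates):
    """Sort (name, pixels) pairs by hash distance to the query image."""
    qh = average_hash(query)
    return sorted(candidates, key=lambda nc: hamming(qh, average_hash(nc[1])))

# A brightened near-duplicate keeps the same light/dark pattern, so it
# hashes identically to the query and ranks ahead of an unrelated image.
query = [[10, 200], [200, 10]]
near_copy = [[20, 210], [210, 20]]
unrelated = [[200, 10], [10, 200]]
ranked = rank_by_similarity(query, [("unrelated", unrelated), ("near_copy", near_copy)])
print([name for name, _ in ranked])  # → ['near_copy', 'unrelated']
```

This also hints at why such tools are better at finding copies of an already-posted photo than at recognizing what an object is: the fingerprint captures how an image looks, not what it depicts.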
Whatever the applications, searching by image will almost certainly become more commonplace as the web grows ever more visual, and especially as computers get better at recognizing images and reading patterns. Which means that as computers learn how we see the world, they'll increasingly be able to draw connections for us that we'd otherwise never see.