Tucked inside Steven Levy's interview with Google engineers Amit Singhal and Matt Cutts, there's a fascinating detail about how the company recently improved its search algorithm.
They used human feedback.
Here's how they described their methodology to Levy in the Wired piece:
Singhal: We wanted to keep it strictly scientific, so we used our standard evaluation system that we've developed, where we basically sent out documents to outside testers. Then we asked the raters questions like: "Would you be comfortable giving this site your credit card? Would you be comfortable giving medicine prescribed by this site to your kids?"
Cutts: There was an engineer who came up with a rigorous set of questions, everything from: "Do you consider this site to be authoritative? Would it be okay if this was in a magazine? Does this site have excessive ads?" Questions along those lines.
Singhal: And based on that, we basically formed some definition of what could be considered low quality.
So, to simplify: they asked human beings, "Is this site high or low quality?" Then they looked for patterns in their data that could be used to identify the sites humans rated poorly.
Now, that last operation -- "basically [forming] some definition of what could be considered low quality" -- seems like the tough part, and we don't get any details about it. But it does answer a key question I've had: how did they pick out low-quality sites? Simple! They asked some people. (The future is now.)
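To make that last operation a little more concrete: one standard way to "form a definition" from human ratings is supervised learning -- represent each site as measurable features, use the raters' verdicts as labels, and fit a model whose learned weights become the working definition of low quality. The sketch below is purely illustrative, not Google's actual system; the feature names and data are invented for the example, and the model is a bare-bones logistic regression.

```python
# Illustrative sketch (not Google's actual algorithm): learn which site
# features predict the "low quality" label that human raters assigned.
# Features and data are invented for the example.
import math

# Each site: (ad_density, spelling_error_rate, original_content_ratio)
sites = [
    (0.10, 0.01, 0.90),  # raters said: high quality -> label 0
    (0.70, 0.20, 0.10),  # raters said: low quality  -> label 1
    (0.20, 0.02, 0.80),
    (0.80, 0.15, 0.20),
    (0.15, 0.03, 0.85),
    (0.75, 0.25, 0.05),
]
labels = [0, 1, 0, 1, 0, 1]

def predict(w, b, x):
    """Probability that a site is low quality."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for x, y in zip(sites, labels):
        err = predict(w, b, x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

# The learned weights are the "definition" of low quality: positive
# weights flag features associated with poorly rated sites, negative
# weights flag features associated with well-rated ones.
for name, wi in zip(["ad_density", "spelling_errors", "original_content"], w):
    print(f"{name}: {wi:+.2f}")
```

Once trained, the model can score sites no human rater ever saw, which is presumably the whole point: the raters' judgments get generalized into something that runs at web scale.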