I'm late in congratulating Alexis Madrigal and the entire Atlantic online staff on the debut this week of the Atlantic Tech Channel, which Alexis is running. This is the sort of all-fronts, informed coverage of the tech world -- its how-to's, its personalities, its emerging trends, its scientific and engineering breakthroughs and challenges, its effects on our politics and culture and ways of interacting with the world -- that I've always enjoyed reading and am glad to have concentrated in one place here. The items on the home page just now give a sense of the range of coverage. Just as one sample, be sure to check out the "Tech Canon," a great idea. The Atlantic's Food Channel, under Corby Kummer and Daniel Fromson, has been our first fully realized standalone, thematic online magazine-within-a-magazine. It is nice to see this complement.
In further in-house news: If you're in DC this evening and looking for a good time, the much-discussed Politics & Prose bookstore has an evening session with Deborah Fallows, author of Dreaming in Chinese, at 7pm, details here. As a preview, here is a 5-minute presentation, on language-learning for adults, that she made at a Google "Ignite" session this week. These are sessions in which speakers have exactly 300 seconds to make a no-notes-allowed memorized presentation, accompanied by exactly 20 PowerPoint slides that automatically advance every 15 seconds. I think Politics & Prose doesn't apply the same rules.
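As a quick check on the format's arithmetic, the two constraints above pin the talk length exactly (the numbers are from the post itself; the variable names are mine):

```python
# Ignite-format constraints as described: 20 auto-advancing slides, 15 seconds each.
SLIDES = 20
SECONDS_PER_SLIDE = 15

total_seconds = SLIDES * SECONDS_PER_SLIDE
print(total_seconds)  # 300 -- exactly the "300 seconds" the speaker gets, no more, no less
```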
Interesting (to me) tech detail about this talk: the subtitle captions you see during this presentation, some of which block the captions on the slides, were produced automatically in real time by the same Google voice-recognition system that powers voice search on its mobile devices. I've been fascinated by voice-recognition software for decades and have never quite believed it would prove workable in "speaker independent" situations -- that is, when it's applied to any random person who happens to speak up, rather than trained by one patient user who refines the system's recognition of his or her own voice over a long period. But I've come to expect voice search on my mobile phone (Android Nexus One, fyi) to work, and this demonstration is a further step. It's possible that these captions were slightly tweaked before going up on YouTube, but I watched them happen in real time, with perhaps a one-second lag behind each word's utterance, and was surprised by their accuracy.
For later exploration -- no doubt on The Tech Channel -- the ways in which the "big data" era has finally made this feat possible.