The January 11, 2013 edition of ACM TechNews:
The Future of Search
Wired News (01/04/13) Tom Vanderbilt
Google fellow Amit Singhal and colleagues are working on improving Web search so that Google will understand terms as things with an Internet life and a history of their own, keying them to individual searchers through increasingly refined techniques such as speech, gesture, or gaze recognition. Google's Knowledge Graph has helped make Google smarter by supporting a paradigm in which the system does not seek Web pages containing a string of letters, but specific entities, Singhal says. "Search now understands that the Taj Mahal is a building, but also a music band, a casino and a bunch of restaurants," he notes. The focus of the Knowledge Graph is working out what the searcher wants to know, handling disambiguation and screening out noise. Google beta-tested the graph with many users in its User Experience Lab, which used two-way mirrors and eye-tracking devices for the tests. Early studies focused on whether users even saw the Knowledge Graph, and they frequently did not. Another technology that may play a role in advancing search is a neural network for unsupervised learning developed by Jeff Dean in Google's Systems Infrastructure Group. This form of learning could be termed unsupervised search, as the machines would not only locate but also interpret what they find, yielding a search engine that, in effect, produces its own algorithms.
Full article: http://www.wired.co.uk/magazine/archive/2013/01/features/the-future-of-search?page=1
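The entity-disambiguation idea Singhal describes — that "Taj Mahal" names a building, a musician, and a casino, and that the rest of the query picks among them — can be sketched as a toy scorer. This is not Google's implementation; the candidate entities and their context vocabularies below are invented for illustration, and the scoring is a simple word-overlap count standing in for whatever signals a real system would use.

```python
# Toy illustration (not Google's method) of entity-based search:
# a query string maps to several candidate entities, and the remaining
# query words select the best-matching one by context-word overlap.
# All entity names and context sets here are invented for the example.

CANDIDATES = {
    "taj mahal": [
        {"entity": "Taj Mahal (building)",
         "context": {"india", "agra", "monument", "marble", "tickets"}},
        {"entity": "Taj Mahal (musician)",
         "context": {"blues", "album", "tour", "songs", "band"}},
        {"entity": "Taj Mahal (casino)",
         "context": {"atlantic", "city", "poker", "slots", "hotel"}},
    ],
}

def disambiguate(name, query_words):
    """Return the candidate entity whose context overlaps the query most."""
    candidates = CANDIDATES.get(name.lower(), [])
    if not candidates:
        return None
    # Score each candidate by how many of its context words appear
    # in the rest of the query; ties fall back to the first candidate.
    words = {w.lower() for w in query_words}
    best = max(candidates, key=lambda c: len(c["context"] & words))
    return best["entity"]

print(disambiguate("Taj Mahal", ["tour", "album", "dates"]))
```

A real system would score candidates with far richer signals (search history, location, entity popularity), but the shape of the problem — string in, ranked entities out — is the same.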