Saturday, May 8, 2004
"More people have written about this than I have." It's interesting, as Geoff Pullum observes, that such sentences go down so easy, since they're completely incoherent. [...] All these stimuli involve familiar and coherent local cues whose global integration is contradictory or impossible. [...] these sentences are telling us something about the nature of perception. Whether we're seeing a scene, hearing a sound or assimilating a sentence, there are automatic processes that happen effortlessly whenever we come across the right kind of stuff, and then there are kinds of analysis that involve more effort and more explicit scrutiny. [Language Log]

Over the last ten years, work on graph models of (mostly probabilistic) inference has focused on techniques for approximate inference. Many interpretation and inference tasks, for instance image segmentation, have graph formulations that are computationally intractable. Some approximation methods enforce local constraints of bounded size while relaxing other constraints. Others approximate the original graph with tractable subgraphs (typically trees). Others still relax a discrete assignment problem into a continuous optimization problem that can be solved efficiently. In all cases, the results of inference may not be globally coherent. The rough parallels between these ways of approximately solving inference tasks and "Escher" perceptual phenomena are intriguing.
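To make the "locally coherent, globally impossible" point concrete, here is a minimal sketch on a toy problem of my own choosing (not taken from any particular paper): MAP inference on a frustrated three-node cycle with binary variables and "disagree" potentials. Every edge can be satisfied on its own, but no global assignment satisfies all three at once, and a continuous LP-style relaxation over locally consistent pseudo-marginals happily reports a score that no coherent global assignment attains.

```python
from itertools import product

EDGES = [(0, 1), (1, 2), (2, 0)]            # a triangle: an odd (frustrated) cycle

def edge_score(xi, xj):
    """Potential that rewards disagreement on an edge."""
    return 1.0 if xi != xj else 0.0

# Exact MAP by brute force over the 2^3 global assignments.
# On an odd cycle at least one edge must agree, so the best value is 2.
exact = max(sum(edge_score(x[i], x[j]) for i, j in EDGES)
            for x in product([0, 1], repeat=3))
print("exact MAP value:", exact)             # -> 2.0

# The LP relaxation optimizes over *locally consistent* edge pseudo-marginals.
# Put all mass on disagreement for every edge: mu(0,1) = mu(1,0) = 0.5.
mu = {(a, b): (0.5 if a != b else 0.0) for a, b in product([0, 1], repeat=2)}

# These edge beliefs marginalize to uniform node beliefs at both endpoints,
# so every pairwise consistency constraint is satisfied...
node_belief = {a: sum(mu[(a, b)] for b in (0, 1)) for a in (0, 1)}
assert all(abs(node_belief[a] - 0.5) < 1e-12 for a in (0, 1))

# ...yet the relaxed objective scores 3.0, a value no global assignment
# can achieve: local coherence without global coherence.
relaxed = sum(mu[(a, b)] * edge_score(a, b)
              for _ in EDGES
              for a, b in product([0, 1], repeat=2))
print("LP relaxation value:", relaxed)       # -> 3.0
```

Each edge, examined by itself, is perfectly satisfiable, just as each piece of an Escher sentence parses fine on its own; the trouble only shows up when you try to assemble one globally consistent interpretation.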