"I don't know of any way to build agents which can perform useful inferences without their being based on logic. There is a good, basic, simple reason for this: if agent1 concludes B from A, and then agent2 uses B to infer C, there may be no way for agent2 to find out how agent1 came to its conclusion. So the reliability of agent2's conclusions must be hostage to the validity of agent1's reasoning. If the inference chains are always rather short then this might not matter, but in general it does matter. Having monotonic entailment as a basic convention for SW transactions provides at least the possibility of having reliable entailments which can be passed between agents without their needing to go back to first principles at every stage. "
... Pat Hayes
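To make Pat's point concrete, here is a toy sketch in Python ... the facts, the rules, and the forward_chain helper are all hypothetical, not any real SW reasoner ... of what monotonic entailment buys you: adding facts can only add conclusions, never retract one, so agent2 can safely build on what agent1 derived without re-checking agent1's work.

```python
# A minimal sketch of monotonic forward chaining over Horn-style
# rules (premises -> conclusion). Hypothetical toy example.

def forward_chain(facts, rules):
    """Apply rules until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    (frozenset({"A"}), "B"),   # agent1's step: A entails B
    (frozenset({"B"}), "C"),   # agent2's step: B entails C
]

kb1 = forward_chain({"A"}, rules)
print("C" in kb1)              # True: A |- B |- C across agents

# Monotonicity: enlarging the fact base preserves every prior
# conclusion, so nothing agent1 handed over is ever invalidated.
kb2 = forward_chain({"A", "D", "E"}, rules)
print(kb1 <= kb2)              # True: nothing is lost
```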
But do we really want the lockstep entailed by that reliable reasoning? Aren't we more interested in just finding what we're looking for? Don't we really want to think outside the bun and glean solutions to problems where conventional reasoning is in a rut?
To this Pat might respond something like ... Seth, Seth, Seth ... that only applies to automated computer agents, not to human beings, who can do anything they like.
To which I would say, "But if we do get automated reasoning working on the semantic web, and people start to rely upon it, then can you imagine how hard it will be to stand up and dispute its conclusions?"
Also I want to record the fact that nonmonotonic reasoning does not imply the Closed World Assumption (CWA). The CWA is just one nonmonotonic policy ... treat anything not provable as false ... whereas a default rule can withdraw a conclusion when new facts arrive while staying agnostic about everything unstated. Methinks it is not wise to conflate those two concepts.
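To see the difference, here's a toy sketch ... hypothetical bird/abnormal predicates, not a real default-logic engine ... of a defeasible rule that is nonmonotonic (learning more withdraws a conclusion) yet makes no closed-world assumption (unstated facts stay unknown rather than being presumed false).

```python
# A minimal sketch of a default rule without CWA. Three-valued
# answers: True / None (unknown) ... nothing unstated is forced
# to be false. Hypothetical toy example.

def flies(individual, facts):
    """Default: bird(x), not known abnormal(x) => flies(x)."""
    if ("bird", individual) not in facts:
        return None   # open world: birdhood unknown, NOT false
    if ("abnormal", individual) in facts:
        return None   # default blocked; conclusion withdrawn
    return True       # defeasible default conclusion

facts = {("bird", "tweety")}
print(flies("tweety", facts))   # True, by default

# Nonmonotonic: a new fact retracts the earlier conclusion.
facts.add(("abnormal", "tweety"))
print(flies("tweety", facts))   # None: conclusion withdrawn

# No CWA: nothing is presumed about sam just because it's unstated.
print(flies("sam", facts))      # None: unknown, not false
```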
And I'm saying all this to say that I do love logic and the computer too ... I love the game of it ... I love its wise usage and what it can do for us ... but as a man, I want to remain its master ... not become its subject. That this is a bit personal ... even a bit paranoid ... definitely way, way in the future if ever ... is OK ... hey, it's my blog!
8:31:20 AM