/*
 * NARS-Examples-MultiSteps.txt
 * Pei Wang [All Rights Reserved]
 * last modified: August 15, 2006
 *
 * Examples showing conclusions of multiple-step inference in the Java applet.
 *
 * Each example consists of
 * (1) input tasks and inference steps,
 * (2) the communication log displayed in the Main Window,
 * (3) a brief explanation,
 * separated by "====================".
 *
 * To run an example, copy the input tasks and inference steps and paste them
 * into the Input Window of NARS, then click "OK" in the Input Window.
 *
 * To reset the memory between examples, select the menu item "Memory/Initialize".
 */

// Context Sensitivity

Paris {-- city %1%
100
Boston {-- city %1%
1
?x {-- city
100
Paris {-- city
1
?x {-- city
100

====================

0 >>> <{Paris} --> city> %1.00;0.90%
100 >>> <{Boston} --> city> %1.00;0.90%
1 >>> <{?1} --> city> %1.00;0.00%
4 <<< <{Boston} --> city> %1.00;0.90%
96 >>> <{Paris} --> city> %1.00;0.00%
1 <<< <{Paris} --> city> %1.00;0.90%
0 >>> <{?1} --> city> %1.00;0.00%
19 <<< <{Paris} --> city> %1.00;0.90%

====================

When a question ("?x {-- city") has equally good answers ("<{Boston} --> city> %1.00;0.90%" and "<{Paris} --> city> %1.00;0.90%"), the reported answer is usually the one provided by the concept that is more active in the current context.

// Contradiction

coffee --> beverage %1%
Java --> coffee %1%
Java --> coffee %0%
10
Java --> coffee
coffee --> beverage
10

====================

0 >>> <coffee --> beverage> %1.00;0.90%
0 >>> <Java --> coffee> %1.00;0.90%
0 >>> <Java --> coffee> %0.00;0.90%
10 >>> <Java --> coffee> %1.00;0.00%
0 >>> <coffee --> beverage> %1.00;0.00%
1 <<< <Java --> coffee> %0.50;0.94%
0 <<< <coffee --> beverage> %1.00;0.90%

====================

A contradiction makes the system unsure about directly related questions, but it does not lead the system to derive arbitrary conclusions, as it would in propositional logic.

// Deduction Chain

Tweety {-- robin %1%
robin --> bird %1%
bird --> animal %1%
10
Tweety {-- bird
Tweety {-- animal
10

====================

0 >>> <{Tweety} --> robin> %1.00;0.90%
0 >>> <robin --> bird> %1.00;0.90%
0 >>> <bird --> animal> %1.00;0.90%
10 >>> <{Tweety} --> bird> %1.00;0.00%
0 >>> <{Tweety} --> animal> %1.00;0.00%
1 <<< <{Tweety} --> bird> %1.00;0.81%
1 <<< <{Tweety} --> animal> %1.00;0.72%

====================

Though the frequency of the conclusions remains 1, the confidence gets lower as the deduction chain gets longer.

// Similarity Chain

dog <-> cat %0.9%
cat <-> tiger %0.9%
tiger <-> lion %0.9%
dog <-> lion
25

====================

0 >>> <cat <-> dog> %0.90;0.90%
0 >>> <cat <-> tiger> %0.90;0.90%
0 >>> <lion <-> tiger> %0.90;0.90%
0 >>> <dog <-> lion> %1.00;0.00%
23 <<< <dog <-> lion> %0.72;0.70%

====================

For incomplete similarity, both the frequency and the confidence decrease along an inference chain.

// Induction and Revision

bird --> swimmer
swimmer --> bird
1
swan --> bird %1%
swan --> swimmer %1%
20
gull --> bird %1%
gull --> swimmer %1%
20
crow --> bird %1%
crow --> swimmer %0%
20

====================

0 >>> <bird --> swimmer> %1.00;0.00%
0 >>> <swimmer --> bird> %1.00;0.00%
1 >>> <swan --> bird> %1.00;0.90%
0 >>> <swan --> swimmer> %1.00;0.90%
11 <<< <swimmer --> bird> %1.00;0.44%
0 <<< <bird --> swimmer> %1.00;0.44%
9 >>> <gull --> bird> %1.00;0.90%
0 >>> <gull --> swimmer> %1.00;0.90%
10 <<< <swimmer --> bird> %1.00;0.61%
0 <<< <bird --> swimmer> %1.00;0.61%
10 >>> <crow --> bird> %1.00;0.90%
0 >>> <crow --> swimmer> %0.00;0.90%
9 <<< <bird --> swimmer> %0.66;0.70%

====================

(1) A question may be remembered before relevant knowledge arrives, or after answers have been reported; (2) the system can change its mind when new evidence is taken into consideration; (3) positive evidence has the same effect on the two symmetric inductive conclusions, but negative evidence does not (here the non-swimming crow counts against "<bird --> swimmer>" only).
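The truth values reported above can be reproduced by hand. Below is a minimal Python sketch of the NAL truth functions for deduction, resemblance, induction, and revision, assuming an evidential horizon of k = 1; that parameter setting is an assumption, but it matches all the logged numbers, and the applet apparently truncates displayed values to two decimal places (so 0.947 appears as 0.94).

K = 1.0                            # evidential horizon (assumed setting)

def w_from_c(c):                   # confidence -> amount of evidence
    return K * c / (1.0 - c)

def c_from_w(w):                   # amount of evidence -> confidence
    return w / (w + K)

def deduction(f1, c1, f2, c2):     # {<M --> P>, <S --> M>} |- <S --> P>
    return f1 * f2, f1 * f2 * c1 * c2

def resemblance(f1, c1, f2, c2):   # {<M <-> P>, <M <-> S>} |- <S <-> P>
    return f1 * f2, c1 * c2 * (f1 + f2 - f1 * f2)

def induction(f1, c1, f2, c2):     # {<M --> P>, <M --> S>} |- <S --> P>
    return f1, c_from_w(f2 * c1 * c2)

def revision(f1, c1, f2, c2):      # merge two judgments on one statement
    w1, w2 = w_from_c(c1), w_from_c(c2)
    return (f1 * w1 + f2 * w2) / (w1 + w2), c_from_w(w1 + w2)

print(deduction(1.0, 0.90, 1.0, 0.90))      # (1.0, 0.81)    <{Tweety} --> bird>
print(deduction(1.0, 0.81, 1.0, 0.90))      # (1.0, 0.729)   <{Tweety} --> animal>
print(revision(1.0, 0.90, 0.0, 0.90))       # (0.5, 0.947)   <Java --> coffee>
f, c = resemblance(0.90, 0.90, 0.90, 0.90)  # (0.81, 0.802)  <cat <-> dog> etc.
print(resemblance(f, c, 0.90, 0.90))        # (0.729, 0.708) <dog <-> lion>
print(induction(1.0, 0.90, 1.0, 0.90))      # (1.0, 0.447)   <swimmer --> bird>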
// Mixed Inference

swan --> bird %1%
swan --> swimmer %1%
10
gull --> bird %1%
gull --> swimmer %1%
20
crow --> bird %1%
crow --> swimmer %0%
40
robin --] feathered %1%
bird --] feathered %1%
80
robin --> swimmer
100

====================

0 >>> <swan --> bird> %1.00;0.90%
0 >>> <swan --> swimmer> %1.00;0.90%
10 >>> <gull --> bird> %1.00;0.90%
0 >>> <gull --> swimmer> %1.00;0.90%
20 >>> <crow --> bird> %1.00;0.90%
0 >>> <crow --> swimmer> %0.00;0.90%
40 >>> <robin --> [feathered]> %1.00;0.90%
0 >>> <bird --> [feathered]> %1.00;0.90%
80 >>> <robin --> swimmer> %1.00;0.00%
83 <<< <robin --> swimmer> %0.66;0.24%

====================

The final conclusion is produced using induction, abduction, deduction, and revision. The selection of inference rules is knowledge-driven, not explicitly specified in the input.

// Confidence and Revision

Willy {-- whale %1%
whale --] black %1%
10
Willy {-- swimmer %1%
fish --> swimmer %1%
10
Willy {-] black
Willy {-- fish
10
Willy {-] black %0%
Willy {-- fish %0%
10

====================

0 >>> <{Willy} --> whale> %1.00;0.90%
0 >>> <whale --> [black]> %1.00;0.90%
10 >>> <{Willy} --> swimmer> %1.00;0.90%
0 >>> <fish --> swimmer> %1.00;0.90%
10 >>> <{Willy} --> [black]> %1.00;0.00%
0 >>> <{Willy} --> fish> %1.00;0.00%
1 <<< <{Willy} --> [black]> %1.00;0.81%
1 <<< <{Willy} --> fish> %1.00;0.44%
8 >>> <{Willy} --> [black]> %0.00;0.90%
0 >>> <{Willy} --> fish> %0.00;0.90%
1 <<< <{Willy} --> [black]> %0.00;0.90%
0 <<< <{Willy} --> fish> %0.00;0.90%
1 <<< <{Willy} --> [black]> %0.32;0.92%
0 <<< <{Willy} --> fish> %0.08;0.90%

====================

Even when all the input judgments use the default confidence value, different rules produce conclusions with different confidence, and those conclusions show different sensitivity to the same amount of new evidence.

// Compositionality

light --> traffic_signal %0.2%
[red] --> traffic_signal %0.2%
10
(&, [red], light) --> traffic_signal
10
light_1 {-- (&, [red], light) %1%
light_1 {-- traffic_signal %1%
10
light_2 {-- (&, [red], light) %1%
light_2 {-- traffic_signal %1%
20

====================

0 >>> <light --> traffic_signal> %0.20;0.90%
0 >>> <[red] --> traffic_signal> %0.20;0.90%
10 >>> <(&,[red],light) --> traffic_signal> %1.00;0.00%
1 <<< <(&,[red],light) --> traffic_signal> %0.36;0.84%
9 >>> <{light_1} --> (&,[red],light)> %1.00;0.90%
0 >>> <{light_1} --> traffic_signal> %1.00;0.90%
10 >>> <{light_2} --> (&,[red],light)> %1.00;0.90%
0 >>> <{light_2} --> traffic_signal> %1.00;0.90%
9 <<< <(&,[red],light) --> traffic_signal> %0.44;0.86%
10 <<< <(&,[red],light) --> traffic_signal> %0.50;0.87%

====================

Initially, the meaning of the compound term "(&,[red],light)" is determined by the meanings of its components "[red]" and "light", but this is no longer the case once the system acquires experience about the compound that cannot be reduced to experience about its components.
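The different sensitivities in the "Confidence and Revision" example can be checked with the truth functions sketched after the "Induction and Revision" example, extended with abduction. This is again a reconstruction under the k = 1 assumption, not the applet's actual code; it reuses c_from_w and revision from that sketch.

def abduction(f1, c1, f2, c2):     # {<P --> M>, <S --> M>} |- <S --> P>
    return f2, c_from_w(f1 * c1 * c2)

# <{Willy} --> fish> from <fish --> swimmer> and <{Willy} --> swimmer>:
f, c = abduction(1.0, 0.90, 1.0, 0.90)   # (1.0, 0.447) -> %1.00;0.44%

# Revising both conclusions with the later %0.00;0.90% inputs:
print(revision(1.0, 0.81, 0.0, 0.90))    # (0.321, 0.930) -> %0.32;0.92%
print(revision(f, c, 0.0, 0.90))         # (0.083, 0.908) -> %0.08;0.90%

The deductive conclusion about "[black]" starts with the higher confidence (0.81 vs. 0.44), so the same negative evidence moves its frequency much less (0.32 vs. 0.08).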
// Relational Induction

(*, vinegar, baking_soda) --> neutralization %1%
vinegar --> acid %1%
baking_soda --> base %1%
50
(*, acid, base) --> neutralization
100

====================

0 >>> <(*,vinegar,baking_soda) --> neutralization> %1.00;0.90%
0 >>> <vinegar --> acid> %1.00;0.90%
0 >>> <baking_soda --> base> %1.00;0.90%
50 >>> <(*,acid,base) --> neutralization> %1.00;0.00%
73 <<< <(*,acid,base) --> neutralization> %1.00;0.28%

====================

The conclusion comes from double induction, one step for each of the two arguments of the "neutralization" relation.

// Fuzzy Concept

John {-- boy %1%
John {-- (/, taller_than, {Tom}, _) %1%
10
David {-- boy %1%
David {-- (/, taller_than, {Tom}, _) %1%
20
Karl {-- boy %1%
Karl {-- (/, taller_than, {Tom}, _) %0%
40
Tom {-- (/, taller_than, _, boy)
100

====================

0 >>> <{John} --> boy> %1.00;0.90%
0 >>> <{John} --> (/,taller_than,{Tom},_)> %1.00;0.90%
10 >>> <{David} --> boy> %1.00;0.90%
0 >>> <{David} --> (/,taller_than,{Tom},_)> %1.00;0.90%
20 >>> <{Karl} --> boy> %1.00;0.90%
0 >>> <{Karl} --> (/,taller_than,{Tom},_)> %0.00;0.90%
40 >>> <{Tom} --> (/,taller_than,_,boy)> %1.00;0.00%
91 <<< <{Tom} --> (/,taller_than,_,boy)> %0.66;0.70%

====================

Tom's degree of membership in the fuzzy concept "taller than boys" depends on the extent to which he is taller than the boys in the system's experience (here John and David, but not Karl).
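Both final values can likewise be reconstructed with the sketched truth functions (induction and revision from the earlier sketch, k = 1). One further assumption here is that image transformations, such as the one relating "<(*,acid,base) --> neutralization>" to "<base --> (/,neutralization,acid,_)>", leave truth values unchanged.

# Relational induction: one induction step per argument of the relation.
f, c = induction(1.0, 0.90, 1.0, 0.90)   # generalize vinegar to acid: (1.0, 0.447)
print(induction(f, c, 1.0, 0.90))        # generalize baking_soda to base:
                                         # (1.0, 0.287) -> %1.00;0.28%

# Fuzzy concept: induction plus revision over the three boys.
f, c = induction(1.0, 0.90, 1.0, 0.90)                    # John (positive):  (1.0, 0.447)
f, c = revision(f, c, *induction(1.0, 0.90, 1.0, 0.90))   # David (positive): (1.0, 0.618)
f, c = revision(f, c, *induction(0.0, 0.90, 1.0, 0.90))   # Karl (negative):  (0.667, 0.708)
print(f, c)                                               # -> %0.66;0.70%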