
Bounded Delusionality

Rationality is a central principle in artificial intelligence, where a rational agent is specifically defined as an agent which always chooses the action which maximises its expected performance, given all of the knowledge it currently possesses.

In philosophy, the word rationality has been used to describe numerous religious and philosophical theories, especially those concerned with truth, reason, and knowledge.

A logical argument is sometimes described as rational if it is logically valid. However, rationality is a much broader term than logic, as it includes "uncertain but sensible" arguments based on probability, expectation, personal experience and the like, whereas logic deals principally with provable facts and demonstrably valid relations between them.

In economics, sociology, and political science, a decision or situation is often called rational if it is in some sense optimal, and individuals or organizations are often called rational if they tend to act somehow optimally in pursuit of their goals.

Debates arise in these three fields about whether or not people or organizations are "really" rational, as well as whether it makes sense to model them as such in formal models.

    -Wikipedia, "Rationality"

Banadad and I were discussing this definition, but the "AI definition" of bounded rationality as "maximizing expected utility given beliefs" seems to include agents with delusional beliefs about their capabilities. For instance, someone who genuinely believes he can fly off the roof of a skyscraper might be rational under this definition.
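To make the worry concrete, here is a minimal sketch (entirely hypothetical numbers and names, nothing from the thread): the decision rule below is textbook expected-utility maximization, and the only defect is in the belief table, yet the agent "rationally" jumps.

```python
# Hypothetical illustration: a textbook expected-utility maximizer whose
# *beliefs* are delusional. The decision rule itself is unobjectionable.

def expected_utility(action, beliefs, utility):
    """Average utility over outcomes, weighted by the agent's believed probabilities."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def choose(actions, beliefs, utility):
    """Pick the action that maximizes expected utility under current beliefs."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

utility = {"fly_away": 10, "fall": -1000, "stay_put": 0}

# Delusional belief: certainty that jumping leads to flight.
beliefs = {
    "jump_off_roof": {"fly_away": 1.0, "fall": 0.0},
    "take_stairs":   {"stay_put": 1.0},
}

print(choose(["jump_off_roof", "take_stairs"], beliefs, utility))  # jump_off_roof
```

Nothing in the maximization step can flag the problem; it lives entirely in the belief distribution, which is what the question below is getting at.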

Opinions? Is this a useful definition? Operationalizable? Why or why not?

--
Banazir

Comments

mapjunkie
Nov. 4th, 2005 05:23 pm (UTC)
Suppose:
- L* is some unknown but true loss function particular to a player,
- Y is a player's knowledge space,
- Y.L is the perceived loss function,
- X is the state space and x is the true current state,
- Y.X is the perceived state space,
- Beta is a blocking function X -> Y.X,
- O is the set of possible observations and Y.{O} the stored observations,
- g is a discriminant function Y.{O} -> Y.X used by the player,
- f* is some transition function X × U -> X, and Y.f is the player's approximation of it.

Also, let h be some inference procedure that adds more information to the player state from the player state.

We can say a player p has bounded rationality if
a) they act according to the optimal plan under Y.L, Y.f, g, and Y.{O}, and
b) they use the best available learning procedure in approximating all of the true states, as well as the best available inference procedure.

So, the problem with your definition is that it doesn't include (b), but if we recast the active learning problem as part of the reinforcement learning problem, then I think we're OK.
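A rough sketch of how (a) and (b) might fit together, using my own hypothetical names (perceived_loss for Y.L, perceived_transition for Y.f, observations for Y.{O}) rather than anything from the thesis: (a) is the planning step against the perceived model, and (b) is the requirement that the perceived model keep being re-fit from observations.

```python
# Hypothetical sketch of the (a)/(b) split above.
# (a): act optimally with respect to the *perceived* loss and transition model.
# (b): keep improving that perceived model from stored observations.

class BoundedlyRationalPlayer:
    def __init__(self, actions, perceived_loss, perceived_transition):
        self.actions = actions
        self.perceived_loss = perceived_loss              # stands in for Y.L
        self.perceived_transition = perceived_transition  # stands in for Y.f
        self.observations = []                            # stands in for Y.{O}

    def perceived_state(self):
        # Discriminant g: map stored observations to a perceived state (Y.X).
        # Here, trivially, the most recent observation.
        return self.observations[-1] if self.observations else None

    def act(self):
        # Condition (a): choose the action minimizing the perceived loss
        # of the perceived next state.
        s = self.perceived_state()
        return min(
            self.actions,
            key=lambda a: self.perceived_loss(self.perceived_transition(s, a)),
        )

    def observe(self, observation):
        # Condition (b): incorporate the new observation and re-fit the
        # perceived model (a real agent would use the best available
        # learning/inference procedure here, e.g. Bayesian updating).
        self.observations.append(observation)
        self.perceived_transition = self.refit(self.observations)

    def refit(self, observations):
        # Placeholder for the learning step; returning the old model
        # unchanged is exactly the failure mode the post asks about.
        return self.perceived_transition
```

On this reading, the rooftop jumper satisfies (a) but fails (b): his perceived transition function never gets re-fit against the evidence.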
mapjunkie
Nov. 4th, 2005 05:31 pm (UTC)
and yes, this is part of my long ongoing thesis.
zaimoni
Nov. 5th, 2005 07:15 am (UTC)
"Banadad and I were discussing this definition, but the "AI definition" of bounded rationality as "maximizing expected utility given beliefs" seems to include agents with delusional beliefs about their capabilities."

Does.

However, that's not a real problem. First, it's not like natural rational agents never have delusional beliefs about their capabilities. But they usually pay attention to whether their actions are having the intended results.

Second, my gut reaction is that any operationalizable definition of rationality reduces to this one. The more pleasant definitions are metaphysical.
mapjunkie
Nov. 5th, 2005 06:59 pm (UTC)
I think your first and your second points only work together under some observations. The first point, to my reading, says that bounded rationality is "maximizing expected utility given beliefs that are updated according to observations," while the second point takes no account of what I see as the necessity of observational update (learning) for an agent to be considered rational.

The problem here is whether belief update should be considered an action or part of the framework that chooses actions. Given this ambiguity, I think belief update is an important element to specify.
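Purely as an illustration of the ambiguity (my own construction, not anything proposed in the thread): the same Bayes-style update can either be hard-wired into the agent loop, or exposed as one more action the planner may or may not choose.

```python
# Illustrative only: two placements for the same belief update.

def bayes_update(prior, likelihood, evidence):
    """Bayes' rule over a finite set of hypotheses."""
    posterior = {h: prior[h] * likelihood[h][evidence] for h in prior}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# (i) Update as part of the framework: it happens every step, unconditionally,
#     before an action is chosen.
def framework_step(beliefs, likelihood, evidence, choose_action):
    beliefs = bayes_update(beliefs, likelihood, evidence)
    return beliefs, choose_action(beliefs)

# (ii) Update as an action: the planner must explicitly select it, so an agent
#      that "rationally" never looks can keep its delusions intact.
def action_step(beliefs, likelihood, evidence, choose_action):
    action = choose_action(beliefs)
    if action == "update_beliefs":
        beliefs = bayes_update(beliefs, likelihood, evidence)
    return beliefs, action
```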
mapjunkie
Nov. 5th, 2005 07:00 pm (UTC)
s/under some observations/under some formulations/
zaimoni
Nov. 5th, 2005 07:40 pm (UTC)
[First point]
I specified "usually" because I wanted to allow, in politics and science, for diametrically opposed sides to both be rational. This is mediated by selective ignoring of grossly contradicting (or grossly incomprehensible) facts.

[Second point]
It's not an either-or. Both actual belief updates, and the metaknowledge about how to conduct belief updates, need specification. Technically, philosophical type theory won't work here (how to conduct belief updates is itself a belief).
mapjunkie
Nov. 6th, 2005 01:36 pm (UTC)
Right, I think we are on the same page. The pointy end of the stick here is that one might say

"Look here, you've tried this before, and many times, and it hasn't worked. It's just not rational to try this way again."

and not be misusing the word.
mapjunkie
Nov. 7th, 2005 04:49 pm (UTC)
Following up
Was this useful? Did this resolve anything about your question?