Rationality is a central principle in artificial intelligence, where a rational agent is defined as an agent that always chooses the action that maximises its expected performance, given the knowledge it currently possesses.
In philosophy, the word rationality has been used to describe numerous religious and philosophical theories, especially those concerned with truth, reason, and knowledge.
A logical argument is sometimes described as rational if it is logically valid. However, rationality is a much broader term than logic, as it includes "uncertain but sensible" arguments based on probability, expectation, personal experience and the like, whereas logic deals principally with provable facts and demonstrably valid relations between them.
In economics, sociology, and political science, a decision or situation is often called rational if it is in some sense optimal, and individuals or organizations are often called rational if they tend to act optimally in pursuit of their goals.
Debates arise in these three fields about whether people or organizations are "really" rational, as well as whether it makes sense to model them as such in formal models.
Banadad and I were discussing this definition, but the "AI definition" of bounded rationality as "maximizing expected utility given beliefs" seems to include agents with delusional beliefs about their capabilities. For instance, someone who genuinely believes he can fly off the roof of a skyscraper might be rational under this definition.
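To make the worry concrete, here is a minimal sketch (all names and numbers are hypothetical, not from any particular AI textbook or library) of an agent that maximises expected utility under its own beliefs. The same choice rule picks "stay" under accurate beliefs and "jump" under delusional ones, so by this definition both agents count as rational:

```python
# Hypothetical illustration: expected-utility maximisation given the
# agent's own subjective beliefs, accurate or delusional.

def expected_utility(action, beliefs, utility):
    """Utility of each outcome weighted by the agent's subjective probability."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def rational_choice(actions, beliefs, utility):
    """The 'AI definition': pick the action maximising expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Made-up utilities for the skyscraper example.
utility = {"fly away": 10, "splat": -100, "stay put": 0}

# Accurate beliefs: jumping almost certainly ends badly.
accurate = {
    "jump": {"fly away": 0.0001, "splat": 0.9999},
    "stay": {"stay put": 1.0},
}

# Delusional beliefs: the agent is nearly certain it can fly.
delusional = {
    "jump": {"fly away": 0.99, "splat": 0.01},
    "stay": {"stay put": 1.0},
}

print(rational_choice(["jump", "stay"], accurate, utility))    # stay
print(rational_choice(["jump", "stay"], delusional, utility))  # jump
```

The choice procedure is identical in both cases; only the belief distribution differs, which is exactly why the definition seems to certify the deluded agent as rational.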
Opinions? Is this a useful definition? Operationalizable? Why or why not?