Jessica Taylor. CS undergrad and Master's at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Think of it as a predicate on policies. The predicate (local optimality) is true when, for each action the policy assigns non-zero probability to, that action maximizes expected utility relative to the policy.
I am interpreting that formula as “compute this quantity in the sum and find the a' from the set of all possible actions \mathcal{A} that maximizes it, then do that”. Am I wrong?
Yes. It's a predicate on policies. If two different actions (given an observation) maximize expected utility, then either action can be taken. Your description doesn't allow that, because it assumes there is a single a' that maximizes expected utility. Whereas, with a predicate on policies, we could potentially allow multiple actions.
Or is your point that the only self-consistent policy is the one where both have equal expected utility, and thus I can in fact choose either? Though then I have to choose according to the probabilities specified in the policy.
Yes, exactly. Look up Nash equilibrium in matching pennies. It's pretty similar. (Except that your expected utilities as a function of your action depend on the opponent's policy in matching pennies, and on your own policy in the absent-minded driver.)
Given a policy you can evaluate the expected utility of any action. This depends on the policy.
In the absent-minded driver problem, if the policy is to exit 10% of the time, then the 'exit' action has higher expected utility than the 'advance' action, whereas if the policy is to exit 90% of the time, then the 'advance' action has higher expected utility.
This is because the policy affects the SIA probabilities and Q values. The higher your exit probability, the less likely you are to reach node Y, so the more likely you are to be at node X (and therefore should advance).
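Here's a quick numeric check of that (a sketch; the 0/4/1 payoffs are the ones from the example later in this thread, and `action_values` is my own name):

```python
# Absent-minded driver: expected utility of each action as a function of the
# policy's exit probability p. Payoffs as in the example later in the thread:
# exiting at X = 0, exiting at Y = 4, advancing at Y = 1.

def action_values(p):
    # Un-normalized SIA weights: you are at X once, and reach Y with probability 1 - p.
    sia_x, sia_y = 1.0, 1.0 - p
    # Q values relative to the policy: advancing at X leads to Y, whose value is p*4 + (1-p)*1.
    value_y = p * 4 + (1 - p) * 1
    eu_exit = sia_x * 0 + sia_y * 4
    eu_advance = sia_x * value_y + sia_y * 1
    return eu_exit, eu_advance

for p in (0.1, 0.9, 1 / 3):
    eu_exit, eu_advance = action_values(p)
    print(f"p = {p:.2f}: EU(exit) = {eu_exit:.2f}, EU(advance) = {eu_advance:.2f}")
# Exit is better at p = 0.1, advance is better at p = 0.9, and they tie at p = 1/3.
```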
The local optimality condition for a policy is that each action the policy assigns non-zero probability to must have optimal expected utility relative to the policy. It's reflective like Nash equilibrium.
This is clear from the formula:

$$\pi(a \mid o) > 0 \implies a \in \operatorname*{argmax}_{a' \in \mathcal{A}} \sum_s \mathrm{SIA}_\pi(s \mid o)\, Q_\pi(s, a')$$

Note that SIA and Q depend on $\pi$. This is the condition for local optimality of $\pi$. It is about each action that $\pi$ assigns non-zero probability to being optimal relative to $\pi$.
(That's the local optimality condition; there's also global optimality, where expected utility is directly maximized as a function of the policy, which is fairly obvious. The main theorem of the post is: global optimality implies local optimality.)
No, check the post more carefully... $\pi(a \mid o)$ in the above condition is a probability. The local optimality condition says that any action taken with nonzero probability must have maximal expected value. Which is standard for Nash equilibrium.
Critically, the agent can mix over strategies when the expected utility from each is equal and maximal. That's very standard for Nash equilibrium e.g. in the matching pennies game.
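For comparison, a minimal check of the analogous condition in matching pennies (a sketch; standard ±1 payoffs, and the function name is mine):

```python
# Matching pennies: the row player gets +1 on a match and -1 on a mismatch.
# Given the column player's probability q of playing Heads, compute the row
# player's expected utility for each pure action.

def row_action_values(q):
    eu_heads = q * 1 + (1 - q) * -1   # match on Heads, mismatch on Tails
    eu_tails = q * -1 + (1 - q) * 1   # mismatch on Heads, match on Tails
    return eu_heads, eu_tails

for q in (0.4, 0.5, 0.6):
    eu_heads, eu_tails = row_action_values(q)
    print(f"q = {q}: EU(Heads) = {eu_heads:+.2f}, EU(Tails) = {eu_tails:+.2f}")
# Only at q = 0.5 do both actions have maximal expected utility, so only there
# can the row player mix -- the same "every action taken with nonzero
# probability must be optimal" condition as above.
```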
I still think it shouldn't matter...
We should have f(p) / g(p) = 0 in exactly the cases where f(p) = 0, as long as g(p) is always positive.
In the formula

$$\operatorname*{argmax}_{a' \in \mathcal{A}} \sum_s \mathrm{SIA}_\pi(s \mid o)\, Q_\pi(s, a')$$

we could have multiplied $\mathrm{SIA}_\pi(s \mid o)$ for all $s$ by the same (always-positive) function of $o$ and gotten the same result. If we look at a formula like the one above, it shouldn't make a difference when the weights are scaled by a positive constant.
The nice thing about un-normalized probabilities is that if they are 0 you don't get a division by zero; the condition just becomes trivial when the frequencies are zero. This helps specifically in the local optimality condition, handling the case of an observation that is encountered with probability 0 given the policy.
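A small numeric illustration of both points (a sketch; the weights and Q values are made up, loosely following the absent-minded driver numbers):

```python
# Scaling all of an observation's weights by the same positive constant never
# changes which actions maximize the weighted sum of Q values.

def best_actions(weights, q_values):
    """Actions maximizing sum_s weights[s] * q_values[s][a]."""
    actions = next(iter(q_values.values())).keys()
    scores = {a: sum(weights[s] * q_values[s][a] for s in weights) for a in actions}
    best = max(scores.values())
    return {a for a, score in scores.items() if score == best}

q_values = {'X': {'exit': 0.0, 'advance': 1.3}, 'Y': {'exit': 4.0, 'advance': 1.0}}
weights = {'X': 1.0, 'Y': 0.9}
scaled = {s: 7.0 * w for s, w in weights.items()}                         # same positive factor
print(best_actions(weights, q_values) == best_actions(scaled, q_values))  # True

# With normalized probabilities, an observation of frequency 0 gives 0/0.
# With un-normalized weights, every action's score is 0 there, so the local
# optimality condition holds trivially for that observation.
zero_weights = {'X': 0.0, 'Y': 0.0}
print(best_actions(zero_weights, q_values))   # both actions tie at 0
```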
THIS IS A SIMULATION. YOU ARE INSTANCE #4859. YOU ARE IN EITHER NODE X OR Y. YOU CAN CHOOSE A PROBABILITY TO [Advance] OR [Exit]. THE UTILITY IS AS FOLLOWS:
* EXITING AT X IS WORTH 0 AND TERMINATES THE INSTANCE
* ADVANCING AT X REINITIALISES THE SIMULATION AT Y
* EXITING AT Y IS WORTH 4 AND TERMINATES THE INSTANCE
* ADVANCING AT Y IS WORTH 1 AND TERMINATES THE INSTANCE
PLEASE USE THE KEYPAD TO INPUT THE DESIRED PROBABILITY TO MAXIMISE YOUR UTILITY. GOOD LUCK.
Set of states: {X, Y, XE, YE, YA}
Set of actions: {Advance, Exit}
Set of observations: {""}
Initial state: X
Transition function for non-terminal states:
t(X, Advance) = Y
t(X, Exit) = XE
t(Y, Advance) = YA
t(Y, Exit) = YE
Terminal states: {XE, YE, YA}
Utility function:
u(XE) = 0
u(YE) = 4
u(YA) = 1
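The same specification, transcribed as plain Python data (a sketch; the variable names are mine):

```python
# The absent-minded driver as a memoryless environment, matching the spec above.
states = {'X', 'Y', 'XE', 'YE', 'YA'}
actions = {'Advance', 'Exit'}
observations = {''}                      # a single, uninformative observation
initial_state = 'X'

transitions = {                          # transition function for non-terminal states
    ('X', 'Advance'): 'Y',
    ('X', 'Exit'): 'XE',
    ('Y', 'Advance'): 'YA',
    ('Y', 'Exit'): 'YE',
}
terminal_states = {'XE', 'YE', 'YA'}
utility = {'XE': 0, 'YE': 4, 'YA': 1}
```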
Policy can be parameterized by p = exit probability.
Frequencies for non-terminal states (un-normalized): F(X) = 1, F(Y) = 1 - p
SIA un-normalized probabilities: SIA(X) = 1, SIA(Y) = 1 - p
Note we have only one possible observation, so the SIA un-normalized probabilities match the frequencies.
State values for non-terminals: V(Y) = p·4 + (1 - p)·1 = 1 + 3p; V(X) = p·0 + (1 - p)·V(Y) = (1 - p)(1 + 3p)
Q values for non-terminals: Q(X, Exit) = 0; Q(X, Advance) = V(Y) = 1 + 3p; Q(Y, Exit) = 4; Q(Y, Advance) = 1
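In code, those quantities come out as (a sketch; the closed forms follow from the transition structure above, and the function name is mine):

```python
# Frequencies, SIA weights, state values, and Q values as functions of the
# exit probability p.

def quantities(p):
    freq = {'X': 1.0, 'Y': 1.0 - p}       # un-normalized visit frequencies
    sia = dict(freq)                      # one observation, so SIA weights = frequencies
    value_y = p * 4 + (1 - p) * 1         # V(Y) = 1 + 3p
    value_x = p * 0 + (1 - p) * value_y   # V(X) = (1 - p)(1 + 3p)
    q = {
        ('X', 'Exit'): 0.0,
        ('X', 'Advance'): value_y,
        ('Y', 'Exit'): 4.0,
        ('Y', 'Advance'): 1.0,
    }
    return freq, sia, {'X': value_x, 'Y': value_y}, q

freq, sia, values, q = quantities(1 / 3)
print(values['X'])   # 4/3, the expected utility of the whole episode at p = 1/3
```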
For local optimality we compute partial derivatives: d/dp V(X) = d/dp [(1 - p)(1 + 3p)] = 2 - 6p
By the post, this can be equivalently written: Σ_s SIA(s)·Q(s, Exit) - Σ_s SIA(s)·Q(s, Advance) = (0 + 4(1 - p)) - ((1 + 3p) + (1 - p)) = 2 - 6p
To optimize we set 2 - 6p = 0, i.e. p = 1/3. Remember p is the probability of exit (sorry, it's reversed from your usage!). This matches what you computed as globally optimal.
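And a quick symbolic check of both the global and the local condition (a sketch using sympy):

```python
import sympy as sp

p = sp.symbols('p', nonnegative=True)

# Global optimality: maximize the episode's expected utility V(X) = (1 - p)(1 + 3p).
expected_utility = (1 - p) * (p * 4 + (1 - p) * 1)
print(sp.solve(sp.diff(expected_utility, p), p))      # [1/3]

# Local optimality: the SIA-weighted action values, 4 - 4p for Exit and 2 + 2p
# for Advance, must be equal wherever the policy mixes.
eu_exit = 1 * 0 + (1 - p) * 4
eu_advance = 1 * (p * 4 + (1 - p) * 1) + (1 - p) * 1
print(sp.expand(eu_exit - eu_advance))                # 2 - 6*p, the same derivative
print(sp.solve(sp.Eq(eu_exit, eu_advance), p))        # [1/3]
```

Both conditions pick out p = 1/3, illustrating the post's theorem that global optimality implies local optimality.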
I'm not going to do this for all of the examples... Is there a specific example where you think the theorem fails?
I analyzed a general class of these problems here. Upshot: every optimal UDT solution is also an optimal CDT+SIA solution, but not vice versa.
If some function g is computable in O(f(n)) time for primitive recursive f, then g is primitive recursive, by simulating a Turing machine. I am pretty sure a logical inductor would satisfy this: while it runs in superexponential time, its running time isn't so fast-growing as to fail to be primitive recursive (unlike, say, the Ackermann function).
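A sketch of that simulation argument, with my own names for the assumed primitive recursive encodings ($\mathrm{init}$ builds the initial configuration, $\mathrm{step}$ runs one machine step, $\mathrm{out}$ decodes the output, and $c$ bounds the constant in the O(f(n)) runtime):

$$\mathrm{run}(0, x) = \mathrm{init}(x), \qquad \mathrm{run}(k+1, x) = \mathrm{step}(\mathrm{run}(k, x)),$$
$$g(x) = \mathrm{out}\bigl(\mathrm{run}(c \cdot f(|x|) + c,\ x)\bigr).$$

Since $f$ is primitive recursive and $\mathrm{run}$ is defined by primitive recursion from primitive recursive functions, $g$ is a composition of primitive recursive functions.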
Oh, to be clear, I do think that AI safety automation is a well-targeted x-risk effort conditional on the AI timelines you are presenting. (Related to Paul Christiano's alignment ideas, which are important conditional on prosaic AI.)
So are we going to have AGI by 2029? It depends how you define it, of course, but I really doubt that by then AI will be able to automate >99% of human labor.