====== Why does every choice come with an entropy tax? ======

> I present a very general derivation that shows how every choice carries an unavoidable "entropy tax".

//Cite as: Ortega, P.A. “Why does every choice come with a tax?”, Tech Note 3, DAIOS, 2024.//
Imagine every choice you make —whether trivial or life-changing— comes with a hidden "entropy tax".

This concept lies at the heart of [[https://
\[
\frac{1}{\beta} \sum_x P(x|d) \log \frac{P(x|d)}{P(x|c)}.
\]
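To make the tax concrete before the derivation, here is a minimal numerical sketch in Python. Following the notation used later in the note, $P(x|c)$ and $P(x|d)$ are the choice probabilities before and after deliberation; the particular numbers and $\beta = 1$ are made-up illustrations, not from the note.

<code python>
import numpy as np

def entropy_tax(posterior, prior, beta=1.0):
    """(1/beta) * KL(posterior || prior): the 'entropy tax' of an update."""
    posterior, prior = np.asarray(posterior), np.asarray(prior)
    return np.sum(posterior * np.log(posterior / prior)) / beta

prior = np.array([0.5, 0.3, 0.2])      # choice probabilities before deliberating
posterior = np.array([0.1, 0.1, 0.8])  # choice probabilities after deliberating

print(entropy_tax(posterior, prior))   # ~0.84 nats: a decisive update is taxed
print(entropy_tax(prior, prior))       # 0.0: leaving your mind unchanged is free
</code>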
===== Assumption 1: Temporal progress as conditioning =====
First we need to model temporal progress of any kind. We'll go with a "conditioning" picture: advancing in time means conditioning the probability space on everything that has happened so far.
Now, any event –be it a choice, an observation, or any other occurrence– is incorporated the same way: we condition the probability space on it.
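As an illustrative sketch of this assumption (the toy sample space and event below are my own, not from the note), temporal progress as conditioning looks like this on a finite probability space:

<code python>
from itertools import product

# Toy sample space: (choice, observation) pairs with a joint probability.
space = {s: p for s, p in zip(product("AB", "xy"), [0.4, 0.1, 0.2, 0.3])}

def condition(measure, event):
    """Temporal progress: restrict the measure to `event` and renormalize."""
    z = sum(p for s, p in measure.items() if event(s))
    return {s: p / z for s, p in measure.items() if event(s)}

# The event "observation is x" has occurred; the space shrinks accordingly.
after_obs = condition(space, lambda s: s[1] == "x")
print(after_obs)  # {('A', 'x'): 0.666..., ('B', 'x'): 0.333...}
</code>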
===== Assumption 2: Restrictions on the cost function =====
Next, we'll impose constraints on the cost function. We want our cost function to capture efforts that are structurally consistent with the underlying probability space.
//(Figure: sketch of the requirements on the cost function.)//
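As a hedged sketch of where such requirements lead: if they include additivity along chains of events and dependence only on probability ratios, they single out a log-ratio cost of the form $C(A|B) = \frac{1}{\beta}\log\frac{P(B)}{P(A)}$. The Python snippet below assumes exactly that form (it is my illustration, not the note's derivation) and checks chain additivity numerically:

<code python>
import numpy as np

def cost(p_after, p_before, beta=1.0):
    """Assumed log-ratio cost of moving an event's probability."""
    return np.log(p_before / p_after) / beta

# Chain additivity: transforming B -> A directly costs the same as B -> C -> A.
pA, pB, pC = 0.1, 0.4, 0.25
assert np.isclose(cost(pA, pB), cost(pC, pB) + cost(pA, pC))

# Making an event more probable has negative cost; making it rarer is costly.
print(cost(0.8, 0.2), cost(0.2, 0.8))  # ~ -1.39, +1.39
</code>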
===== Cost of deliberation =====
Now, based on our sketch above, let's calculate the cost of transforming the prior choice probabilities into posterior choice probabilities:
\[
\begin{align}
C(d|c) &= \frac{1}{\beta} \log \frac{P(c)}{P(d)} \\
&= \frac{1}{\beta} \sum_x P(x|d) \log \left[ \frac{P(x \cap c)}{P(x \cap d)} \cdot \frac{P(x|d)}{P(x|c)} \right] \\
&= \sum_x P(x|d)\, C(x \cap d | x \cap c) + \frac{1}{\beta} \sum_x P(x|d) \log \frac{P(x|d)}{P(x|c)}.
\end{align}
\]
We've obtained two expectation terms. The second is proportional to the Kullback-Leibler divergence of the posterior from the prior choice probabilities. What is the first expectation?
The first expectation represents the expected cost of each individual choice: each term $C(x \cap d|x \cap c)$ measures the cost of transforming the relative probability of a specific choice $x$.
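Here is a quick numerical sanity check of this decomposition. It assumes the log-ratio cost form $C(A|B) = \frac{1}{\beta}\log\frac{P(B)}{P(A)}$ sketched earlier, with $\beta = 1$ and made-up probabilities:

<code python>
import numpy as np

beta = 1.0
prior = np.array([0.5, 0.3, 0.2])      # P(x|c)
posterior = np.array([0.1, 0.1, 0.8])  # P(x|d)
pc, pd = 0.9, 0.6                      # P(c), P(d): d occurs within c

total = np.log(pc / pd) / beta                               # C(d|c)
per_choice = np.log((prior * pc) / (posterior * pd)) / beta  # C(x∩d | x∩c)
kl = np.sum(posterior * np.log(posterior / prior))

# Expected per-choice cost plus the entropy tax recovers the total cost.
assert np.isclose(total, posterior @ per_choice + kl / beta)
print(total, posterior @ per_choice, kl / beta)
</code>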
===== Connecting to the free energy objective =====
We can transform the above equality into a variational principle by replacing the individual choice costs $C(x \cap d|x \cap c)$ with arbitrary numbers. The resulting expression is convex in the posterior choice probabilities $P(x|d)$, so we get a nice and clean objective function with a unique minimum.
We can even go a step further and replace the costs with negative utilities, $C(x \cap d|x \cap c) \to -U(x)$; minimizing the total cost then amounts to maximizing the familiar free energy objective
\[
\sum_x P(x|d) U(x) - \frac{1}{\beta} \sum_x P(x|d) \log \frac{ P(x|d) }{ P(x|c) }.
\]
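This objective is concave in $P(x|d)$ (the expected utility is linear and the KL penalty is convex), so it has a unique maximizer: the prior tilted exponentially by the utilities, $P(x|d) \propto P(x|c)\, e^{\beta U(x)}$. A small sketch with made-up numbers shows how $\beta$ trades off utility against the entropy tax:

<code python>
import numpy as np

def optimal_posterior(prior, utilities, beta):
    """Unique maximizer of sum_x P(x|d) U(x) - (1/beta) KL(P(.|d) || P(.|c)):
    the prior tilted exponentially by the utilities (a softmax)."""
    w = prior * np.exp(beta * utilities)
    return w / w.sum()

prior = np.array([0.5, 0.3, 0.2])  # P(x|c), made up
U = np.array([1.0, 0.0, 2.0])      # made-up utilities

for beta in (0.1, 1.0, 10.0):
    print(beta, optimal_posterior(prior, U, beta))
# Small beta (steep tax): stay close to the prior.
# Large beta (cheap tax): concentrate on the highest-utility choice.
</code>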