  - **Repeat**: Set $t \leftarrow t + 1$ and repeat from step 2.
The resulting distribution $P_t(\tau)$ is our bounded-rational policy. You will have to experiment with the choices of $\alpha$ (which controls the step size) and $N$ (which controls the representation quality of the target distribution) to obtain a satisfactory training time.
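As a rough illustration of how $\alpha$ and $N$ enter the loop, here is a minimal Python sketch. It assumes a toy discrete set of strings, a bounded-rational target of the form $P_0(\tau)\,e^{\beta U(\tau)}$ estimated by self-normalized importance sampling, and an interpolation step of size $\alpha$; the candidate strings, the utility, $\beta$, and the particular update rule are hypothetical stand-ins rather than the exact steps defined above.

<code python>
import numpy as np

# Toy setup (all hypothetical): a small discrete set of strings tau, a prior
# P_0(tau), a utility U(tau), and an inverse temperature beta. The real
# algorithm works with strings sampled from a language model; this sketch
# only illustrates how alpha and N enter the outer loop.
rng = np.random.default_rng(0)
candidates = ["aa", "ab", "ba", "bb"]        # stand-ins for strings tau
prior = np.full(len(candidates), 0.25)       # P_0(tau)
utility = np.array([0.0, 1.0, 1.0, 2.0])     # U(tau)
beta = 1.0                                   # inverse temperature (assumed)
alpha = 0.5                                  # step size, as in the text
N = 1000                                     # samples representing the target
T = 20                                       # number of outer iterations

P = prior.copy()                             # P_t(tau), the evolving policy
for t in range(T):
    # Draw N samples from the current policy P_t (the sampling step of the loop).
    idx = rng.choice(len(candidates), size=N, p=P)
    # Self-normalized importance weights for the assumed target
    # P_0(tau) exp(beta * U(tau)), using P_t as the proposal.
    w = prior[idx] * np.exp(beta * utility[idx]) / P[idx]
    w /= w.sum()
    # Empirical estimate of the target distribution from the weighted samples.
    target_est = np.bincount(idx, weights=w, minlength=len(candidates))
    # Move a fraction alpha of the way toward the estimate, then renormalize.
    P = (1 - alpha) * P + alpha * target_est
    P /= P.sum()

print(dict(zip(candidates, np.round(P, 3))))  # final bounded-rational policy
</code>

In this sketch a larger $N$ reduces the variance of the per-step estimate of the target, while a larger $\alpha$ takes bigger but noisier steps, which is the trade-off behind the training-time tuning mentioned above.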
===== Adding the context =====
The above algorithm generates a new prior $P(\tau)$ which places more weight on desirable strings. However, we often want policies to respond to a user-provided context $c$, i.e. we want to sample strings from the conditional distribution $P(\tau|c)$.
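Concretely, assuming the prior extends to a joint distribution $P(c, \tau)$ over contexts and strings, the conditional we are after is just the renormalized joint:

\[
P(\tau \mid c) = \frac{P(c, \tau)}{\sum_{\tau'} P(c, \tau')}.
\]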
==== Enter memory-constrained agents ====
\[