Where do goals come from? A Generic Approach to Autonomous Goal-System Development
Goals express agents' intentions and allow them to organize their behavior based on low-dimensional abstractions of high-dimensional world states. How can agents develop such goals autonomously? This paper proposes a detailed conceptual and computational account of this longstanding problem. We argue that goals should be considered high-level abstractions of lower-level intention mechanisms such as rewards and values, and point out that goals need to be considered alongside the detection of the agent's own actions' effects. We propose Latent Goal Analysis as a computational learning formulation of this view, and show constructively that any reward or value function can be explained by goals and such self-detection as latent mechanisms. We first show that learned goals provide a highly effective dimensionality reduction in a practical reinforcement learning problem. Then, we investigate a developmental scenario in which entirely task-unspecific rewards, induced by visual saliency, lead to self and goal representations that constitute goal-directed reaching.
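The constructive claim, that any reward function admits an explanation through a latent goal and a detected action effect, can be illustrated with a minimal sketch. The sketch below assumes a quadratic relation r(s, a) ≈ b − ||g(s) − x(s, a)||², fitted to an arbitrary reward table by gradient descent; the variable names, the quadratic form, and the fitting procedure are illustrative assumptions, not the paper's exact Latent Goal Analysis formulation.

```python
import numpy as np

# Minimal sketch (assumed form, not the paper's exact formulation):
# explain a reward table r(s, a) through a latent goal g(s) and a
# detected action effect x(s, a), assuming the quadratic relation
#   r(s, a) ~ b - ||g(s) - x(s, a)||^2,
# fitted by plain gradient descent on the squared prediction error.

rng = np.random.default_rng(0)
S, A, D = 8, 4, 2                         # states, actions, latent dimension
r = rng.standard_normal((S, A))           # arbitrary reward table to explain

g = 0.1 * rng.standard_normal((S, D))     # latent goal per state
x = 0.1 * rng.standard_normal((S, A, D))  # latent effect per (state, action)
b = float(r.max()) + 1.0                  # offset so every reward is reachable
lr = 0.02

for _ in range(5000):
    diff = g[:, None, :] - x              # goal-effect mismatch, shape (S, A, D)
    pred = b - (diff ** 2).sum(-1)        # predicted reward, shape (S, A)
    err = pred - r                        # prediction error
    # gradient steps for 0.5 * err**2; pred depends on diff via -||diff||^2
    g -= lr * (-2.0 * err[..., None] * diff).sum(axis=1)
    x -= lr * (2.0 * err[..., None] * diff)

print("max fit error:", np.abs(pred - r).max())  # should shrink toward zero
```

Because x(s, a) is a free parameter per state-action pair here, the decomposition is always expressive enough to reproduce the table, which echoes the abstract's constructive claim; the dimensionality-reduction benefit reported in the paper presumably arises when the goal and effect mappings are constrained, learned functions rather than freely fitted entries.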