Uncertainty for model-based reinforcement learning
Title: Uncertainty for model-based reinforcement learning
DNr: NAISS 2024/22-3
Project Type: NAISS Small Compute
Principal Investigator: Emilio Jorge <emilio.jorge@chalmers.se>
Affiliation: Chalmers tekniska högskola
Duration: 2024-01-26 – 2025-02-01
Classification: 10207
Keywords:

Abstract

We aim to develop novel approaches for reinforcement learning that appropriately reflect the underlying uncertainty. We are investigating approximate posterior sampling methods based on Langevin/Hamiltonian dynamics, for both neural networks and other representations, to guide agents in their actions. The suitability of GPU versus CPU resources depends on the environments used. For more advanced environments and larger neural networks, GPUs provide a significant speedup and will be used; for smaller environments, CPU resources are more suitable, as GPUs offer little benefit there. Precise calculations of our usage would be wildly inaccurate, so we have instead requested amounts we consider reasonable.
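To illustrate the kind of method under consideration, below is a minimal sketch of a stochastic gradient Langevin dynamics (SGLD) parameter update, assuming PyTorch. The function names (sgld_step, neg_log_posterior), the linear-regression example, and the choice of loss proportional to the negative log posterior are illustrative assumptions for this sketch, not the project's actual implementation.

```python
# Minimal SGLD sketch (assumes PyTorch). Illustrative only.
import torch

def sgld_step(params, loss_fn, step_size=1e-5):
    """One stochastic gradient Langevin dynamics update on `params`, in place.

    params:   list of tensors with requires_grad=True (e.g. network weights)
    loss_fn:  callable returning a scalar proportional to the negative log posterior
    """
    loss = loss_fn(params)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            # Gradient step plus Gaussian noise with variance 2*step_size, so the
            # iterates approximately sample the posterior for small step sizes.
            p.add_(-step_size * g + torch.randn_like(p) * (2.0 * step_size) ** 0.5)
    return loss.item()

if __name__ == "__main__":
    # Toy example: approximate posterior samples for a linear model's weights.
    torch.manual_seed(0)
    X = torch.randn(200, 3)
    true_w = torch.tensor([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * torch.randn(200)
    w = torch.zeros(3, requires_grad=True)

    def neg_log_posterior(params):
        (w,) = params
        # Gaussian likelihood (noise variance 0.01) + standard normal prior, up to constants.
        return 0.5 * ((X @ w - y) ** 2).sum() / 0.01 + 0.5 * (w ** 2).sum()

    samples = []
    for t in range(2000):
        sgld_step([w], neg_log_posterior, step_size=1e-5)
        if t >= 1000:  # discard burn-in
            samples.append(w.detach().clone())
    print("posterior mean:", torch.stack(samples).mean(dim=0))
```

The spread of the retained samples gives the kind of uncertainty estimate that would guide an agent's action selection; for larger networks and environments the same update would be run on GPU.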