Description

Real-world reinforcement learning problems often exhibit nonlinear, continuous-valued, noisy, partially observable state spaces that are prohibitively expensive to explore. The formal reinforcement learning framework, unfortunately, has not been successfully demonstrated in a real-world domain having all of these constraints. We approach this domain with a two-part solution. First, we overcome continuous-valued, partially observable state spaces by constructing manifold embeddings of the system's underlying dynamics, which substitute as a complete state-space representation. We then define a generative model over this manifold to learn a policy off-line. The model-based approach is preferred because it allows the learning problem to be simplified by domain knowledge. In this work we formally integrate manifold embeddings into the reinforcement learning framework, summarize a spectral method for estimating embedding parameters, and demonstrate the model-based approach in a complex domain: adaptive seizure suppression of an epileptic neural system.
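The manifold-embedding idea in the abstract can be illustrated with a minimal time-delay (Takens-style) reconstruction, in which a scalar observation series is lifted into a higher-dimensional state space. This is only a sketch: the function name, the synthetic signal, and the embedding parameters (`dim`, `tau`) below are illustrative assumptions, not the authors' implementation or their spectral parameter-selection method.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate embedding of a scalar time series.

    Row t of the result is [x[t], x[t+tau], ..., x[t+(dim-1)*tau]],
    a point on the reconstructed manifold of the underlying dynamics.
    """
    n = len(x) - (dim - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Illustrative signal: a noisy sinusoid standing in for a partially
# observed, noisy recording of the system.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Embed with hypothetical parameters; in the paper these would be
# chosen by the spectral estimation method the abstract mentions.
E = delay_embed(x, dim=3, tau=25)
print(E.shape)  # (1950, 3): 1950 reconstructed state vectors in R^3
```

Each row of `E` then serves as a (reconstructed) state for downstream model learning, in place of the unobservable true state.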