Applying Temporal Difference Methods to Machine Learning


Here I detail my project for the Reinforcement Learning course (COMP767) taken at McGill, applying Temporal Difference (TD) methods in a machine learning setting.

Temporal Difference learning methods were introduced in 1988, and they have been the foundation of Reinforcement Learning algorithms ever since. When introduced, these methods were not specifically aimed at RL problems; rather, they were proposed as a general learning framework for any prediction problem. This post serves as a report on my case study applying TD methods to multi-step prediction problems usually handled by supervised machine learning.
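To give a flavor of what TD prediction looks like, here is a minimal sketch of tabular TD(0) on the classic five-state random walk used in Sutton's 1988 paper. This is purely illustrative; the function name and parameters are my own and are not taken from the project code.

```python
import numpy as np

def td_zero_random_walk(episodes=500, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value prediction on a 5-state random walk.

    States 1..5 are non-terminal; 0 and 6 are terminal. Each step moves
    left or right with equal probability, and a reward of 1 is given only
    on reaching the right terminal. True values are 1/6, 2/6, ..., 5/6.
    """
    rng = np.random.default_rng(seed)
    V = np.zeros(7)  # terminal states 0 and 6 keep value 0
    for _ in range(episodes):
        s = 3  # every episode starts in the middle state
        while s not in (0, 6):
            s_next = s + rng.choice((-1, 1))
            r = 1.0 if s_next == 6 else 0.0
            # TD(0) update: nudge V[s] toward the bootstrapped target
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V[1:6]  # estimated values of the non-terminal states
```

After a few hundred episodes the estimates approach the true values, with states nearer the right terminal valued higher; this incremental bootstrapping is what distinguishes TD from supervised regression on full returns.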

The blog posts related to this project can be found below.

All source code can be found in this GitHub repository. The final report submitted can be found here.
