Using Reinforcement Learning to mimic human perception
||Volker Krueger <firstname.lastname@example.org>|
||2022-03-10 – 2022-10-01|
In contrast to technical video cameras, human visual perception has an uneven distribution of receptor cells. Whereas cameras have a Cartesian (uniform grid) distribution, the receptor distribution in the human eye is of a log-polar nature, with a high resolution in the center (fovea) and a linearly decreasing resolution toward the periphery.
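To make the foveated sampling idea concrete, the sketch below resamples an image on a log-polar grid: ring radii grow exponentially outward from a gaze center, so sampling is dense in the "fovea" and sparse in the periphery. This is a minimal illustration, not the project's implementation; the function and parameter names (`n_rings`, `n_wedges`, `r_min`) are assumptions.

```python
import numpy as np

def log_polar_sample(img, center, n_rings=32, n_wedges=64, r_min=1.0):
    """Sample a grayscale image on a log-polar grid around `center`.

    Illustrative sketch: exponentially spaced ring radii mimic the
    fovea's high central resolution and the coarser periphery.
    """
    h, w = img.shape
    cy, cx = center
    r_max = min(h, w) / 2.0
    # Exponentially spaced radii: dense near the center, sparse outside.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # Round sample coordinates to pixel positions, clipped to the image.
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_rings, n_wedges)
```

The resulting (n_rings, n_wedges) array is much smaller than the full frame, which is one reason a foveated input might affect learning speed.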
As a result, the human visual system (HVS) controls the gaze of the eyes (saccadic movements) to ensure that the relevant visual information is always perceived with the fovea.
The goal of this project is to a) mimic the foveated approach of the HVS and use reinforcement learning to learn to control the gaze based on the visual input, and b) investigate to what extent learning rate and performance are impacted. For this, we will use the classic Atari games Pong, Breakout and Space Invaders. Classic deep RL approaches will be used as a baseline.
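One way the gaze-control setup described above could be framed is as an environment wrapper: the agent chooses a discrete gaze shift alongside its game action, and its observation is a high-resolution crop around the current gaze point. The toy sketch below illustrates this framing with plain numpy; a real experiment would wrap an ALE/Gym Atari environment, and the class, action set, and parameter names here are assumptions, not the project's method.

```python
import numpy as np

class GazeWrapper:
    """Toy sketch of gaze control as extra discrete actions.

    Assumption (not from the project text): the agent picks a gaze
    shift each step, and observes only a foveal crop of the frame.
    """
    # Gaze shifts in pixels: stay, up, down, left, right.
    GAZE_SHIFTS = [(0, 0), (-8, 0), (8, 0), (0, -8), (0, 8)]

    def __init__(self, frame_shape=(210, 160), fovea=32):
        self.h, self.w = frame_shape
        self.fovea = fovea
        # Start with the gaze at the frame center.
        self.gaze = [self.h // 2, self.w // 2]

    def observe(self, frame, gaze_action):
        """Shift the gaze, then return the foveal crop of `frame`."""
        dy, dx = self.GAZE_SHIFTS[gaze_action]
        r = self.fovea // 2
        # Keep the fovea fully inside the frame.
        self.gaze[0] = int(np.clip(self.gaze[0] + dy, r, self.h - r))
        self.gaze[1] = int(np.clip(self.gaze[1] + dx, r, self.w - r))
        y, x = self.gaze
        return frame[y - r:y + r, x - r:x + r]  # (fovea, fovea) crop
```

Comparing an agent that learns with such foveated observations against a standard full-frame deep RL baseline would directly address question b) above.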