bestcourses is supported by learners. When you buy through links on our website, we may earn an affiliate commission.
Modern Reinforcement Learning: Deep Q Learning in PyTorch
How to Turn Deep Reinforcement Learning Research Papers Into Agents That Beat Classic Atari Games
Created by Phil Tabor, offered on Udemy
To make sure that we score courses properly, we pay close attention to the reviews students leave on courses and to how many students are taking a course in the first place. This course has a total of 3,875 students who have left 770 reviews at an average rating of 4.58, which is about average.
We analyze course length to see if courses cover all important aspects of a topic, taking into account how long the course is compared to the category average. This course has a length of 5 hours 42 minutes, which is pretty short. That's not necessarily a bad thing, but we've found that longer courses are often more detailed and comprehensive. The average course length for this entire category is 7 hours 54 minutes.
This course currently has a bestcourses score of 6.6/10, which makes it an average course. Overall, there are probably better courses available for this topic on our platform.
In this complete deep reinforcement learning course you will learn a repeatable framework for reading and implementing deep reinforcement learning research papers. You will read the original papers that introduced the Deep Q Learning, Double Deep Q Learning, and Dueling Deep Q Learning algorithms. You will then learn how to implement these in Pythonic, concise PyTorch code that can be extended to include any future deep Q learning algorithms. These algorithms will be used to solve a variety of environments from the OpenAI Gym's Atari library, including Pong, Breakout, and BankHeist.
You will learn the key to making these Deep Q Learning algorithms work: how to modify the OpenAI Gym's Atari environments to meet the specifications of the original Deep Q Learning papers. You will learn how to:
Repeat actions to reduce computational overhead
Rescale the Atari screen images to increase efficiency
Stack frames to give the Deep Q agent a sense of motion
Evaluate the Deep Q agent's performance with random no-ops to deal with overtraining
Clip rewards to enable the Deep Q learning agent to generalize across Atari games with different score scales
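To give a rough feel for two of these preprocessing steps, here is a minimal pure-Python sketch of reward clipping and frame stacking. This is an illustration under assumed interfaces, not the course's code: the course uses OpenAI Gym wrappers, and the `FrameStack` class and `clip_reward` function names here are hypothetical.

```python
from collections import deque

def clip_reward(reward):
    """Clip rewards to [-1, 1] so score scales are comparable across games."""
    return max(-1.0, min(1.0, reward))

class FrameStack:
    """Keep the last `n` observations so the agent can infer motion.

    A stand-in for Gym's frame-stacking wrapper; `frame` can be any
    observation (for Atari, a preprocessed screen image).
    """
    def __init__(self, n=4):
        self.n = n
        self.frames = deque(maxlen=n)

    def reset(self, first_frame):
        # Fill the stack with copies of the first frame of the episode.
        self.frames.clear()
        for _ in range(self.n):
            self.frames.append(first_frame)
        return list(self.frames)

    def step(self, frame):
        # Append the newest frame; the oldest one falls off automatically.
        self.frames.append(frame)
        return list(self.frames)
```

Stacking four frames is what lets a feedforward Q network distinguish, say, a ball moving left from one moving right, since a single frame carries no velocity information.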
If you do not have prior experience in reinforcement learning or deep reinforcement learning, that's no problem. Included in the course is a complete and concise introduction to the fundamentals of reinforcement learning, taught in the context of solving the Frozen Lake environment from the OpenAI Gym.
We will cover:
Markov decision processes
Temporal difference learning
The original Q learning algorithm
How to solve the Bellman equation
Value functions and action value functions
Model free vs. model based reinforcement learning
Solutions to the explore-exploit dilemma, including optimistic initial values and epsilon-greedy action selection
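To give a flavor of the tabular side of the syllabus, here is a minimal sketch of Q learning with epsilon-greedy action selection. The environment is a made-up 5-state chain, not the course's FrozenLake code, and all hyperparameter values below are illustrative assumptions.

```python
import random

def train_q_learning(n_states=5, episodes=500, alpha=0.1,
                     gamma=0.99, epsilon=0.1, seed=0):
    """Tabular Q learning on a chain: the agent starts at state 0 and
    earns a reward of 1 only on reaching the rightmost state."""
    rng = random.Random(seed)
    # q[state][action], action 0 = left, action 1 = right
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        while state < n_states - 1:
            # Epsilon-greedy: explore with probability epsilon, else greedy.
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state = state + 1 if action == 1 else max(0, state - 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Temporal difference update toward the Bellman target
            # r + gamma * max_a' Q(s', a').
            target = reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q
```

After training, the greedy policy reads off `argmax` over each row of the Q table, and the learned values decay by roughly a factor of gamma per step of distance from the goal, which is exactly the structure the Bellman equation predicts.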
Also included is a mini course in deep learning using the PyTorch framework. This is geared for students who are familiar with the basic concepts of deep learning but not the specifics, or those who are comfortable with deep learning in another framework, such as TensorFlow or Keras. You will learn how to code a deep neural network in PyTorch as well as how convolutional neural networks function. This will be put to use in implementing a naive Deep Q learning agent to solve the CartPole problem from the OpenAI Gym.
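As a rough sense of what such a naive agent's network looks like, here is a small PyTorch sketch mapping CartPole's 4-dimensional state to Q values for its 2 actions. The class name and layer sizes are illustrative assumptions, not the course's exact architecture.

```python
import torch
import torch.nn as nn

class NaiveDeepQNetwork(nn.Module):
    """Maps a CartPole state (4 floats) to a Q value per action (2 actions)."""
    def __init__(self, input_dims=4, n_actions=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dims, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        # state: tensor of shape (batch, input_dims) -> (batch, n_actions)
        return self.net(state)
```

The greedy action for a state is then `network(state).argmax(dim=1)`; no convolutional layers are needed here because CartPole's observation is a low-dimensional vector rather than an image.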
What you will learn
- How to read and implement deep reinforcement learning papers
- How to code Deep Q Learning agents
- How to code Double Deep Q Learning agents
- How to code Dueling Deep Q and Dueling Double Deep Q Learning agents
- How to write modular and extensible deep reinforcement learning software
- How to automate hyperparameter tuning with command line arguments
Requirements
- Some college calculus
- Exposure to deep learning
- Comfort with Python