
A library of ready-made reinforcement learning agents and reusable components for rapid prototyping
RLcycle is a reinforcement learning (RL) framework that aims to simplify building and running RL agents. It provides ready-made agents and reusable components for easy prototyping, implementing a range of algorithms, including DQN and its distributional variants, A2C, A3C, DDPG, and Soft Actor-Critic, along with features such as prioritized experience replay and n-step updates. RLcycle uses PyTorch for computation and model building, Hydra for configuration and agent composition, Ray for parallelizing learning, and WandB for logging training and testing. Planned updates include TRPO, PPO, and compatibility with a distributed RL framework.
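As background, the n-step updates mentioned above bootstrap a value target from n consecutive rewards plus a discounted value estimate of the state reached after n steps. The sketch below is illustrative only, not RLcycle's own API; the function name and signature are assumptions:

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Compute the n-step return:
        G = r_0 + gamma * r_1 + ... + gamma^(n-1) * r_{n-1} + gamma^n * V(s_n)

    rewards: list of the n rewards observed along the trajectory segment.
    bootstrap_value: value estimate V(s_n) for the state after the last reward.
    """
    g = bootstrap_value
    # Fold rewards in from the end so each step applies one discount factor.
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, with three rewards of 1.0, a bootstrap value of 0.0, and gamma = 0.5, the return is 1 + 0.5 + 0.25 = 1.75.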
