Code repository

The material and code on this website are published under the Apache 2.0 license, which gives anyone permission to use and modify the content under the terms specified in the license. If you use Projective Simulation in your work (in particular in academic publications), the appropriate way to acknowledge the authors who created it is to cite the pertinent publication.

If you have any trouble, suggestions, or other comments, please let us know.

You can also find us on GitHub!

Getting started

If you do not already have Python installed on your computer, you can consult the Python Wiki for installation instructions. Our code was written for Python 3.6.3 and uses the future package to ensure compatibility with Python 2. The package numpy is used throughout. You will also need an editor to view and change the code, for example Spyder or Atom. The code itself contains extensive comments, which should help you understand how it works and how to modify it for your own purposes.

For your first steps with Projective Simulation, you need an agent, an environment, and a way for them to interact. As simple examples, download the agent ps_agent_basic.py, the environment env_invasion_game.py, and the interaction script simple_interaction.py. Executing simple_interaction.py returns a learning curve for a basic_ps_agent in the invasion_game environment. You can use the file as a template to start your own exploration.
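To give a flavour of what happens under the hood, here is a self-contained toy version of such an interaction loop. It re-implements the basic PS update rule from scratch for an invasion-game-like task; it does not call the repository files, whose actual class names and signatures you can find in their docstrings.

    # Toy re-implementation of the basic PS update rule, written from
    # scratch for illustration; it does NOT call the repository files.
    # The task mimics the invasion game: the attacker shows one of two
    # symbols, and the defender is rewarded for the matching move.
    import numpy as np

    num_percepts, num_actions = 2, 2
    gamma = 0.0                                  # damping (forgetting) parameter
    h = np.ones((num_percepts, num_actions))     # h-values of the clip network
    rng = np.random.default_rng()

    num_trials = 500
    learning_curve = np.zeros(num_trials)
    for trial in range(num_trials):
        percept = rng.integers(num_percepts)          # attacker shows a symbol
        probs = h[percept] / h[percept].sum()         # PS policy: normalized h-values
        action = rng.choice(num_actions, p=probs)
        reward = 1.0 if action == percept else 0.0    # matching move is rewarded
        h -= gamma * (h - 1.0)                        # damp all h-values toward 1
        h[percept, action] += reward                  # reinforce the used edge
        learning_curve[trial] = reward

    print("average reward over the last 100 trials:", learning_curve[-100:].mean())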

For example, you can call different types of agents and environments, or change the learning parameters to test what Projective Simulation can do. The most up-to-date versions of the relevant files can be found in our GitHub repository. For easier access, you can also download some files directly from here. (You can either place all files in a single folder or use subfolders called ‘agents’ and ‘environments’.)
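If you opt for the subfolder layout, your interaction script has to be able to find the modules. One simple way to do this (one possibility among several; check simple_interaction.py for the layout it expects) is to extend the import path at the top of the script:

    import sys
    # Only needed if the files live in subfolders rather than one folder.
    sys.path.insert(0, "agents")
    sys.path.insert(0, "environments")

    import ps_agent_basic       # now found in agents/
    import env_invasion_game    # now found in environments/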

Different agent types offer different features with regard to learning, as detailed in their docstrings and in the relevant publications: ps_agent_basic.py, ps_agent_flexible.py, ps_agent_sparse.py, and ps_agent_generalization.py.

Various environments offer different challenges to the agents: env_neverending_color.py, env_invasion_game.py, env_grid_world.py, and env_driver_game.py.

In order to explore the collective behaviour of PS agents, you need environments and interaction structures that can handle several agents at once. Here is an example, which models the behaviour of marching locusts: env_locust.py and simple_interaction_multiple.py.
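The following toy sketch again avoids the repository files and only illustrates the multi-agent interaction structure: several basic PS agents sit on a ring, each perceiving the direction of its neighbours and being rewarded for aligning with them. The actual locust model is the one implemented in env_locust.py and simple_interaction_multiple.py.

    # Toy multi-agent sketch (not the repository's locust model): PS agents
    # on a ring are rewarded for marching in the same direction as their
    # neighbours, so a common direction emerges over time.
    import numpy as np

    num_agents, num_trials = 10, 300
    rng = np.random.default_rng()
    h = np.ones((num_agents, 2, 2))                  # one h-matrix per agent
    directions = rng.integers(2, size=num_agents)    # 0 = left, 1 = right

    for trial in range(num_trials):
        new_directions = directions.copy()
        for i in range(num_agents):
            neighbours = directions[i - 1] + directions[(i + 1) % num_agents]
            percept = int(neighbours >= 1)            # crude neighbour majority
            probs = h[i, percept] / h[i, percept].sum()
            action = rng.choice(2, p=probs)
            reward = 1.0 if action == percept else 0.0
            h[i, percept, action] += reward           # basic PS reinforcement
            new_directions[i] = action
        directions = new_directions

    print("fraction of locusts marching right:", directions.mean())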

In order to gather statistics about the learning of many agents in an automated manner, you can use the higher-level functions in run.py, which require rl_framework.py. When you execute run.py, it trains a number of basic_ps_agents in the env_invasion_game environment and saves the learning data as text files in the subfolder results, which is created automatically.
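As a sketch of what such a statistics run involves, the following self-contained example trains several toy agents (reusing the update rule from the first sketch above) and stores their learning curves as text files in a results subfolder; run.py and rl_framework.py provide the full-featured version of this workflow.

    # Sketch of the statistics workflow: train many toy agents and save
    # their learning curves to the 'results' subfolder as text files.
    import os
    import numpy as np

    def train_single_agent(num_trials, rng):
        """Toy stand-in for one agent's learning life (see the first sketch)."""
        h = np.ones((2, 2))
        curve = np.zeros(num_trials)
        for t in range(num_trials):
            percept = rng.integers(2)
            action = rng.choice(2, p=h[percept] / h[percept].sum())
            reward = 1.0 if action == percept else 0.0
            h[percept, action] += reward
            curve[t] = reward
        return curve

    rng = np.random.default_rng()
    num_agents, num_trials = 50, 500
    os.makedirs("results", exist_ok=True)            # created automatically

    curves = np.array([train_single_agent(num_trials, rng) for _ in range(num_agents)])
    np.savetxt(os.path.join("results", "learning_curves.txt"), curves)
    np.savetxt(os.path.join("results", "average_curve.txt"), curves.mean(axis=0))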