
SUMO-RL

SUMO-RL provides a simple interface to instantiate Reinforcement Learning (RL) environments with SUMO for Traffic Signal Control.

Goals of this repository:

  • Provide a simple interface to work with Reinforcement Learning for Traffic Signal Control using SUMO
  • Support Multiagent RL
  • Compatibility with gymnasium.Env and popular RL libraries such as stable-baselines3 and RLlib
  • Easy customisation: state and reward definitions are easily modifiable

The main class is SumoEnvironment. If instantiated with the parameter single_agent=True, it behaves like a regular Gymnasium Env. For multiagent environments, use env or parallel_env to instantiate a PettingZoo environment with the AEC or Parallel API, respectively. TrafficSignal is responsible for retrieving information from, and actuating, traffic lights via the TraCI API.
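
For illustration, a minimal single-agent instantiation might look like the sketch below. It assumes the constructor accepts the same parameters used in the examples later in this README (net_file, route_file, use_gui, num_seconds); the file paths are placeholders.

from sumo_rl import SumoEnvironment

# single_agent=True makes the environment behave like a regular gymnasium.Env
env = SumoEnvironment(net_file='path_to_your_network.net.xml',
                      route_file='path_to_your_routefile.rou.xml',
                      single_agent=True,
                      use_gui=False,
                      num_seconds=3600)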

For more details, check the documentation online.

Install

Install the latest version of SUMO:

sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc

Don't forget to set the SUMO_HOME variable (the default SUMO installation path is /usr/share/sumo):

echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc

Important: for a huge performance boost (~8x) with Libsumo, you can set the environment variable:

export LIBSUMO_AS_TRACI=1

Notice that you will not be able to run with sumo-gui or with multiple simulations in parallel if this is active (see the Libsumo documentation for more details).
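
The variable can also be set from Python, as in the sketch below. This assumes sumo-rl checks LIBSUMO_AS_TRACI at import time to decide between libsumo and the socket-based TraCI, so it must be set before the import:

import os

# must be set before sumo_rl (and hence traci/libsumo) is imported
os.environ['LIBSUMO_AS_TRACI'] = '1'

import sumo_rl  # simulations created from here on run through libsumo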

Install SUMO-RL

The stable release version is available through pip:

pip install sumo-rl

Alternatively, you can install the latest (unreleased) version from source:

git clone https://github.com/LucasAlegre/sumo-rl
cd sumo-rl
pip install -e .

MDP - Observations, Actions and Rewards

Observation

The default observation for each traffic signal agent is a vector:

    obs = [phase_one_hot, min_green, lane_1_density,...,lane_n_density, lane_1_queue,...,lane_n_queue]
  • phase_one_hot is a one-hot encoded vector indicating the current active green phase
  • min_green is a binary variable indicating whether min_green seconds have already passed in the current phase
  • lane_i_density is the number of vehicles in incoming lane i divided by the total capacity of the lane
  • lane_i_queue is the number of queued (speed below 0.1 m/s) vehicles in incoming lane i divided by the total capacity of the lane
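
For example, at an intersection with four green phases and eight incoming lanes, the observation has 4 + 1 + 8 + 8 = 21 entries.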

You can define your own observation by implementing a class that inherits from ObservationFunction and passing it to the environment constructor.
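
For example, a minimal custom observation might look like the sketch below. It assumes the interface of recent versions, where the subclass receives the TrafficSignal instance as self.ts, implements __call__ and observation_space, and is passed to the constructor via the observation_class parameter; QueueObservationFunction is a hypothetical name.

import numpy as np
from gymnasium import spaces

from sumo_rl.environment.observations import ObservationFunction

class QueueObservationFunction(ObservationFunction):
    """Observe only the per-lane queue occupancy (hypothetical example)."""

    def __call__(self):
        # self.ts is the TrafficSignal this function observes
        return np.array(self.ts.get_lanes_queue(), dtype=np.float32)

    def observation_space(self):
        # one entry per incoming lane, each normalized to [0, 1]
        return spaces.Box(low=0.0, high=1.0, shape=(len(self.ts.lanes),), dtype=np.float32)

env = SumoEnvironment(..., observation_class=QueueObservationFunction)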

Action

The action space is discrete. Every delta_time seconds, each traffic signal agent chooses the next green phase configuration.

E.g., in the 2-way single intersection there are |A| = 4 discrete actions, each corresponding to one of its four green phase configurations.

Important: every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds.
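
Concretely, in a single-agent environment an action is just the index of the desired next green phase; the yellow transition is inserted by the environment, not chosen by the agent:

# request green phase 2; if this differs from the current phase,
# a yellow phase of yellow_time seconds is inserted automatically
obs, reward, terminated, truncated, info = env.step(2)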

Rewards

The default reward function is the change in cumulative vehicle delay:

    r_t = D_{t-1} - D_t

That is, the reward is how much the total delay D_t (sum of the waiting times of all approaching vehicles) changed in relation to the previous time-step.

You can choose a different reward function (see the ones implemented in TrafficSignal) with the parameter reward_fn in the SumoEnvironment constructor.
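
For instance, assuming the reward names registered in TrafficSignal in recent versions (such as "queue", "pressure" or "average-speed"), a built-in reward can also be selected by name:

env = SumoEnvironment(..., reward_fn='queue')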

It is also possible to implement your own reward function:

from sumo_rl import SumoEnvironment

def my_reward_fn(traffic_signal):
    # any quantity computed from the TrafficSignal can serve as reward
    return traffic_signal.get_average_speed()

env = SumoEnvironment(..., reward_fn=my_reward_fn)

APIs (Gymnasium and PettingZoo)

Gymnasium Single-Agent API

If your network only has ONE traffic light, then you can instantiate a standard Gymnasium env (see Gymnasium API):

import gymnasium as gym
import sumo_rl
env = gym.make('sumo-rl-v0',
                net_file='path_to_your_network.net.xml',
                route_file='path_to_your_routefile.rou.xml',
                out_csv_name='path_to_output.csv',
                use_gui=True,
                num_seconds=100000)
obs, info = env.reset()
done = False
while not done:
    # random action; this is where you would insert your policy
    next_obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    done = terminated or truncated
env.close()

PettingZoo Multi-Agent API

For multi-agent environments, you can use the PettingZoo API (see the PettingZoo API):

import sumo_rl
env = sumo_rl.parallel_env(net_file='nets/RESCO/grid4x4/grid4x4.net.xml',
                           route_file='nets/RESCO/grid4x4/grid4x4_1.rou.xml',
                           use_gui=True,
                           num_seconds=3600)
observations, infos = env.reset()  # recent PettingZoo versions return (observations, infos)
while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}  # this is where you would insert your policy
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()

RESCO Benchmarks

In the folder nets/RESCO you can find the network and route files from RESCO (Reinforcement Learning Benchmarks for Traffic Signal Control), which was built on top of SUMO-RL. See their paper for results.

Experiments

Check experiments for examples on how to instantiate an environment and train your RL agent.

Q-learning in a one-way single intersection:

python experiments/ql_single-intersection.py

RLlib PPO multiagent in a 4x4 grid:

python experiments/ppo_4x4grid.py

stable-baselines3 DQN in a 2-way single intersection:

Note: you need to install stable-baselines3 with pip install "stable_baselines3[extra]>=2.0.0a9" for Gymnasium compatibility.

python experiments/dqn_2way-single-intersection.py

Plotting results:

python outputs/plot.py -f outputs/4x4grid/ppo_conn0_ep2

Citing

If you use this repository in your research, please cite:

@misc{sumorl,
    author = {Lucas N. Alegre},
    title = {{SUMO-RL}},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}

List of publications that use SUMO-RL (please open a pull request to add missing entries).
