Ray Component
RLlib
What happened + What you expected to happen
I ran black rllib/env/multi_agent_env.py but got the following message:

error: cannot format rllib/env/multi_agent_env.py: INTERNAL ERROR: Black produced code that is not equivalent to the source on pass 1. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /var/folders/gx/t32twm6x54dftn9b2wxcbl100000gn/T/blk_8yrnynba.log
Oh no! 💥 💔 💥
1 file failed to reformat.
I'm not sure why the error occurs, but it has something to do with the yapf: disable pragma; a reduction sketch to test this follows the snippet below.
ray/rllib/env/multi_agent_env.py, lines 91 to 183 at commit 6996eaa:
    # yapf: disable
    # __grouping_doc_begin__
    @ExperimentalAPI
    def with_agent_groups(
            self,
            groups: Dict[str, List[AgentID]],
            obs_space: gym.Space = None,
            act_space: gym.Space = None) -> "MultiAgentEnv":
        """Convenience method for grouping together agents in this env.

        An agent group is a list of agent IDs that are mapped to a single
        logical agent. All agents of the group must act at the same time in the
        environment. The grouped agent exposes Tuple action and observation
        spaces that are the concatenated action and obs spaces of the
        individual agents.

        The rewards of all the agents in a group are summed. The individual
        agent rewards are available under the "individual_rewards" key of the
        group info return.

        Agent grouping is required to leverage algorithms such as Q-Mix.

        This API is experimental.

        Args:
            groups: Mapping from group id to a list of the agent ids
                of group members. If an agent id is not present in any group
                value, it will be left ungrouped.
            obs_space: Optional observation space for the grouped
                env. Must be a tuple space.
            act_space: Optional action space for the grouped env.
                Must be a tuple space.

        Examples:
            >>> env = YourMultiAgentEnv(...)
            >>> grouped_env = env.with_agent_groups(env, {
            ...   "group1": ["agent1", "agent2", "agent3"],
            ...   "group2": ["agent4", "agent5"],
            ... })
        """
        from ray.rllib.env.wrappers.group_agents_wrapper import \
            GroupAgentsWrapper
        return GroupAgentsWrapper(self, groups, obs_space, act_space)

    @PublicAPI
    def to_base_env(self,
                    make_env: Callable[[int], EnvType] = None,
                    num_envs: int = 1,
                    remote_envs: bool = False,
                    remote_env_batch_wait_ms: int = 0,
                    ) -> "BaseEnv":
        """Converts an RLlib MultiAgentEnv into a BaseEnv object.

        The resulting BaseEnv is always vectorized (contains n
        sub-environments) to support batched forward passes, where n may
        also be 1. BaseEnv also supports async execution via the `poll` and
        `send_actions` methods and thus supports external simulators.

        Args:
            make_env: A callable taking an int as input (which indicates
                the number of individual sub-environments within the final
                vectorized BaseEnv) and returning one individual
                sub-environment.
            num_envs: The number of sub-environments to create in the
                resulting (vectorized) BaseEnv. The already existing `env`
                will be one of the `num_envs`.
            remote_envs: Whether each sub-env should be a @ray.remote
                actor. You can set this behavior in your config via the
                `remote_worker_envs=True` option.
            remote_env_batch_wait_ms: The wait time (in ms) to poll remote
                sub-environments for, if applicable. Only used if
                `remote_envs` is True.

        Returns:
            The resulting BaseEnv object.
        """
        from ray.rllib.env.remote_vector_env import RemoteBaseEnv
        if remote_envs:
            env = RemoteBaseEnv(
                make_env,
                num_envs,
                multiagent=True,
                remote_env_batch_wait_ms=remote_env_batch_wait_ms)
        # Sub-environments are not ray.remote actors.
        else:
            env = MultiAgentEnvWrapper(
                make_env=make_env, existing_envs=[self], num_envs=num_envs)
        return env
    # __grouping_doc_end__
    # yapf: enable
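To check that suspicion, here is a minimal reduction sketch (not from the original report; the class/method skeleton is a hypothetical trimmed stand-in for the real file). It feeds just the pragma-wrapped pattern to Black's Python API with fast=False, which runs the same AST equivalence check that the CLI reports as an INTERNAL ERROR when it fails:

import black  # pip install black==21.12b0

# Hypothetical trimmed stand-in for the pragma-wrapped region above: yapf
# pragmas, a method with a doctest-style docstring, and a backslash line
# continuation. Black only parses this text; nothing is imported or run.
SNIPPET = '''\
# yapf: disable
# __grouping_doc_begin__
class MultiAgentEnv:
    def with_agent_groups(self, groups):
        """Convenience method for grouping together agents in this env.

        Examples:
            >>> env = YourMultiAgentEnv(...)
            >>> grouped_env = env.with_agent_groups(env, {
            ...   "group1": ["agent1", "agent2"],
            ... })
        """
        from ray.rllib.env.wrappers.group_agents_wrapper import \\
            GroupAgentsWrapper
        return GroupAgentsWrapper(self, groups)
# __grouping_doc_end__
# yapf: enable
'''

try:
    # fast=False enables the AST equivalence check that fails in the report.
    black.format_file_contents(SNIPPET, fast=False, mode=black.Mode())
    print("formatted without error")
except black.NothingChanged:
    print("no changes needed")
except AssertionError as exc:
    print(f"equivalence check failed: {exc}")

If this snippet alone fails the equivalence check, the pragma block is a self-contained repro for the upstream psf/black report; if it formats cleanly, the trigger lies elsewhere in the file.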
Versions / Dependencies
- Python: 3.8.12
- OS: macOS 11.5.2
- Black: 21.12b0
- Ray commit: 14ed7cfaaa8f890204619c149d4d50b5857c3cd6
Reproduction script
black rllib/env/multi_agent_env.py
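Equivalently, a minimal sketch of the same repro through Black's Python API rather than the CLI (assumes black==21.12b0 and a Ray checkout at the commit above; fast=False matches the CLI's default safe mode):

from pathlib import Path

import black  # pip install black==21.12b0

# Format the real file and run Black's AST equivalence check on the result,
# mirroring `black rllib/env/multi_agent_env.py`.
src = Path("rllib/env/multi_agent_env.py").read_text()
try:
    black.format_file_contents(src, fast=False, mode=black.Mode())
    print("reformatted cleanly")
except black.NothingChanged:
    print("already formatted")
except AssertionError as exc:
    # Surfaced by the CLI as "INTERNAL ERROR: Black produced code that
    # is not equivalent to the source".
    print(f"equivalence check failed: {exc}")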