Add ContinuousStates and -Observables #2851
mesa/examples/advanced/wolf_sheep/model.py

```diff
@@ -2,6 +2,8 @@
 Wolf-Sheep Predation Model
 ================================
 
+Enhanced version with continuous energy depletion and reactive behaviors.
+
 Replication of the model found in NetLogo:
 Wilensky, U. (1997). NetLogo Wolf Sheep Predation model.
 http://ccl.northwestern.edu/netlogo/models/WolfSheepPredation.
@@ -21,11 +23,16 @@
 class WolfSheep(Model):
     """Wolf-Sheep Predation Model.
 
-    A model for simulating wolf and sheep (predator-prey) ecosystem modelling.
+    A model for simulating wolf and sheep (predator-prey) ecosystem with:
+    - Continuous energy depletion over time
+    - Reactive behaviors based on hunger levels
+    - Threshold-triggered events (death, starvation mode)
+    - Computed properties for decision making
     """
 
     description = (
-        "A model for simulating wolf and sheep (predator-prey) ecosystem modelling."
+        "A model for simulating wolf and sheep (predator-prey) ecosystem modelling "
+        "with continuous energy dynamics and reactive behaviors."
     )
 
     def __init__(
@@ -55,12 +62,13 @@ def __init__(
             wolf_gain_from_food: Energy a wolf gains from eating a sheep
             grass: Whether to have the sheep eat grass for energy
             grass_regrowth_time: How long it takes for a grass patch to regrow
                 once it is eaten
             sheep_gain_from_food: Energy sheep gain from grass, if enabled
             seed: Random seed
             simulator: ABMSimulator instance for event scheduling
         """
         super().__init__(seed=seed)
 
+        # Initialize time-based simulator for continuous energy dynamics
         self.simulator = simulator
         self.simulator.setup(self)
 
@@ -77,19 +85,32 @@ def __init__(
             random=self.random,
         )
 
-        # Set up data collection
+        # Set up data collection (tracks observable changes automatically)
         model_reporters = {
             "Wolves": lambda m: len(m.agents_by_type[Wolf]),
             "Sheep": lambda m: len(m.agents_by_type[Sheep]),
+            "Avg Wolf Energy": lambda m: (
+                sum(w.energy for w in m.agents_by_type[Wolf])
+                / len(m.agents_by_type[Wolf])
+                if len(m.agents_by_type[Wolf]) > 0
+                else 0
+            ),
+            "Avg Sheep Energy": lambda m: (
+                sum(s.energy for s in m.agents_by_type[Sheep])
+                / len(m.agents_by_type[Sheep])
+                if len(m.agents_by_type[Sheep]) > 0
+                else 0
+            ),
         }
```
Comment on lines +88 to 104

**Investigate how the new continuous energy and threshold system interacts with agent activation and data collection**

Both of those paths iterate `agents_by_type` while accessing `.energy`, so a threshold callback that removes an agent can mutate the container mid-iteration and raise the reported `RuntimeError`. Mitigations to consider (likely in concert with changes in `agents.py`) include deferring removals out of the threshold callback or snapshotting the collections before iterating them.

Please verify the exact call stack of the runtime error and align the removal strategy accordingly; it probably requires updating the removal logic in `agents.py` as well.

Also applies to: 147-159
rg -n "RuntimeError.*dictionary" --type=pyLength of output: 0 Fix model reporters to safely handle agent removal during iteration The review comment is correct. The model reporters iterate directly over live The issue occurs at lines 88-104 (and 147-159) where lambdas like Solution: Wrap all agent iterations in the model reporters with model_reporters = {
"Wolves": lambda m: len(m.agents_by_type[Wolf]),
"Sheep": lambda m: len(m.agents_by_type[Sheep]),
"Avg Wolf Energy": lambda m: (
sum(w.energy for w in list(m.agents_by_type[Wolf]))
/ len(list(m.agents_by_type[Wolf]))
if len(m.agents_by_type[Wolf]) > 0
else 0
),
"Avg Sheep Energy": lambda m: (
sum(s.energy for s in list(m.agents_by_type[Sheep]))
/ len(list(m.agents_by_type[Sheep]))
if len(m.agents_by_type[Sheep]) > 0
else 0
),
}Apply the same fix to lines 147–159 and any other direct iterations over 🤖 Prompt for AI Agents |
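For intuition, here is a small, framework-free sketch of why the `list(...)` snapshot helps: mutating a dict while a generator is still iterating over it raises `RuntimeError`, whereas iterating a materialized list stays safe even if the underlying dict shrinks as a side effect of attribute access. The `registry` dict and `DepletableAgent` class below are stand-ins invented for this illustration, not Mesa classes.

```python
class DepletableAgent:
    """Stand-in for an agent whose energy read can trigger its own removal."""

    def __init__(self, registry, energy):
        self.registry = registry
        self._energy = energy

    @property
    def energy(self):
        # Simulate a threshold callback firing on access: depleted agents
        # remove themselves from the shared registry.
        if self._energy <= 0:
            self.registry.pop(id(self), None)
        return self._energy


def populate(registry):
    registry.clear()
    for e in (5, 0, 3):
        agent = DepletableAgent(registry, e)
        registry[id(agent)] = agent


registry = {}
populate(registry)
try:
    # Iterating the live dict while .energy access mutates it fails.
    print(sum(a.energy for a in registry.values()))
except RuntimeError as err:
    print("live iteration:", err)  # dictionary changed size during iteration

populate(registry)
# Snapshotting first makes the sum safe: the removal still happens on the
# dict, but we are iterating an independent list.
print("snapshot total:", sum(a.energy for a in list(registry.values())))  # 8
```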
```diff
 
         if grass:
             model_reporters["Grass"] = lambda m: len(
                 m.agents_by_type[GrassPatch].select(lambda a: a.fully_grown)
             )
 
         self.datacollector = DataCollector(model_reporters)
 
-        # Create sheep:
+
+        # Create sheep with random initial energy
         Sheep.create_agents(
             self,
             initial_sheep,
@@ -98,7 +119,8 @@ def __init__(
             energy_from_food=sheep_gain_from_food,
             cell=self.random.choices(self.grid.all_cells.cells, k=initial_sheep),
         )
-        # Create Wolves:
+
+        # Create wolves with random initial energy
         Wolf.create_agents(
             self,
             initial_wolves,
@@ -123,10 +145,15 @@ def __init__(
         self.datacollector.collect(self)
 
     def step(self):
-        """Execute one step of the model."""
-        # First activate all sheep, then all wolves, both in random order
+        """Execute one step of the model.
+
+        Energy continuously depletes between steps via ContinuousObservable.
+        This step method only triggers agent decisions and actions.
+        """
+        # Activate all sheep, then all wolves, both in random order
+        # Their energy has been continuously depleting since last step
        self.agents_by_type[Sheep].shuffle_do("step")
         self.agents_by_type[Wolf].shuffle_do("step")
 
-        # Collect data
+        # Collect data (automatically captures current energy levels)
         self.datacollector.collect(self)
```
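The new docstrings say that energy depletes continuously between steps via `ContinuousObservable` and that crossing a threshold triggers events such as death. As a rough, library-free sketch of that mechanism (this is not Mesa's actual `ContinuousObservable` API; the descriptor, the `metabolic_rate` coupling, and the `on_cross` hook below are illustrative assumptions), a data descriptor can interpolate the value from elapsed time on every read and invoke a callback when it crosses a threshold:

```python
import time


class ContinuousValue:
    """Descriptor sketch: the value decays linearly with elapsed time and
    calls on_cross(name, direction) when it passes a threshold. Illustrative
    only; not the API added by this PR."""

    def __init__(self, threshold=0.0):
        self.threshold = threshold

    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, obj, value):
        # Store the base value together with the time it was set.
        obj.__dict__[self.name] = (value, time.monotonic())

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        base, t0 = obj.__dict__[self.name]
        current = base - obj.metabolic_rate * (time.monotonic() - t0)
        if base > self.threshold >= current:
            # A fuller version would latch the crossing so this fires once.
            obj.on_cross(self.name, direction="down")
        return current


class Critter:
    energy = ContinuousValue(threshold=0.0)

    def __init__(self, energy, metabolic_rate):
        self.metabolic_rate = metabolic_rate
        self.energy = energy
        self.alive = True

    def on_cross(self, name, direction):
        # Threshold callback: only mark state, never mutate shared containers.
        if name == "energy" and direction == "down":
            self.alive = False


c = Critter(energy=1.0, metabolic_rate=10.0)
time.sleep(0.2)           # roughly 2.0 units of energy burned while "idle"
print(c.energy, c.alive)  # negative energy, and alive has flipped to False
```

This toy version re-fires the callback on every read after the crossing; the PR's observables presumably track the crossing direction (compare the `signal.direction == "down"` check in the review below) so that threshold events fire only when the value actually passes the boundary.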
**Avoid removing agents directly from threshold callback to prevent container mutation during iteration**

`Animal._on_energy_depleted` currently calls `self.remove()` whenever energy crosses zero downward. Because this callback is fired from inside `ContinuousObservable.__get__`/`__set__`, it can run while:

- the model is iterating agents via `shuffle_do("step")`, and/or
- the data collector is iterating `agents_by_type[...]` and accessing `.energy`.

That matches the `RuntimeError: dictionary changed size during iteration` reported for the Wolf-Sheep example.

A safer pattern is to mark the agent for removal and let `step()` perform the actual `remove()` at a controlled point:

```diff
 class Animal(CellAgent, HasObservables):
@@
     def __init__(
         self, model, energy=8, p_reproduce=0.04, energy_from_food=4, cell=None
     ):
@@
-        super().__init__(model)
-
-        # Set base metabolic rate (energy loss per time unit when idle)
-        self.metabolic_rate = 0.5
+        super().__init__(model)
+
+        # Track whether this agent should be removed due to energy depletion
+        self._needs_removal = False
+
+        # Set base metabolic rate (energy loss per time unit when idle)
+        self.metabolic_rate = 0.5
@@
-    def _on_energy_depleted(self, signal):
-        """Called when energy crosses zero - animal dies."""
-        if signal.direction == "down":  # Only trigger on downward crossing
-            self.remove()
+    def _on_energy_depleted(self, signal):
+        """Called when energy crosses zero - mark for removal."""
+        if signal.direction == "down":  # Only trigger on downward crossing
+            self._needs_removal = True
@@
     def step(self):
         """Execute one step of the animal's behavior."""
-        # Move to neighboring cell (uses more energy than standing still)
+        # Remove animals that died since last step
+        if getattr(self, "_needs_removal", False):
+            self.remove()
+            return
+
+        # Move to neighboring cell (uses more energy than standing still)
         self.metabolic_rate = 1.0  # Movement costs more energy
@@
-        # Reproduce if conditions are met (using computed property)
-        if self.can_reproduce and self.random.random() < self.p_reproduce:
+        # Reproduce if conditions are met (using computed property)
+        if self.can_reproduce and self.random.random() < self.p_reproduce:
             self.spawn_offspring()
```

This preserves the threshold-driven semantics (death when crossing zero) while avoiding structural mutations of the agent containers during iteration.

Also applies to: 84-99
**Deferred removal pattern required to prevent container mutation during threshold callback execution**

The review's concern is valid and critical.

Threshold callbacks execute synchronously during property access: ContinuousObservable's `__get__` and `__set__` methods both invoke threshold callbacks immediately. When `DataCollector.collect(self)` iterates agents and accesses their `.energy` property to compute statistics (e.g., `sum(w.energy for w in m.agents_by_type[Wolf])`), this triggers `__get__`, which fires the threshold callback. If `_on_energy_depleted` calls `self.remove()` while the dictionary is being iterated, this causes `RuntimeError: dictionary changed size during iteration`.

The proposed deferred removal pattern is the correct fix: the callback only marks the agent, and the actual `remove()` happens in `step()` at a controlled point before returning.

Locations requiring fixes:

- Add the `_needs_removal = False` initialization in `Animal.__init__`
- Change `_on_energy_depleted` to set the flag instead of calling `remove()`
- Check the flag in `Animal.step()` before other logic
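As a compact, framework-free illustration of that deferral pattern (the `World`, `mark_dead`, and `sweep` names are made up for this sketch and are not part of Mesa), callbacks only record which agents died, and one controlled sweep applies the removals after iteration has finished:

```python
class World:
    """Toy agent container: callbacks mark agents dead, a sweep removes them."""

    def __init__(self):
        self.agents = {}                # id -> agent; must not shrink mid-iteration
        self._pending_removal = set()   # ids queued for removal

    def add(self, agent):
        self.agents[id(agent)] = agent

    def mark_dead(self, agent):
        # Safe to call from any callback, even while self.agents is iterated.
        self._pending_removal.add(id(agent))

    def sweep(self):
        # Controlled point: apply structural changes outside any iteration.
        for key in self._pending_removal:
            self.agents.pop(key, None)
        self._pending_removal.clear()


class Critter:
    def __init__(self, world, energy):
        self.world, self.energy = world, energy

    def step(self):
        self.energy -= 3
        if self.energy <= 0:            # stand-in for a threshold callback
            self.world.mark_dead(self)  # mark only, never remove here


world = World()
for e in (2, 5, 9):
    world.add(Critter(world, e))

# Iterating the live dict is fine because step() never mutates it directly.
for critter in world.agents.values():
    critter.step()
world.sweep()                           # deferred removals happen here

print(len(world.agents))                # 2: the critter that started at 2 died
```

The per-agent `_needs_removal` flag in the review's suggested diff achieves the same effect, with the removal check folded into each agent's own `step()`.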