
World and Problem Configuration

We have seen that problems describe concrete instances of worlds and agents, filling in the specific details that the world description leaves blank (such as the types or number of agents, initial states, etc.). It is sometimes useful to declare which additional parameters or configuration variables about the world a problem must provide.

For example, we could create a variation of the vacuum cleaner world where the status sensor fails (returning “Clean” regardless of the location's actual status) with some small random chance. We could define that sensor as

role ...
    sensor = SensorReading(
        bernoulli(0.03, state[agent.location], Clean),
        agent.location)

Here we have hard-coded the sensor failure probability at 3% using the bernoulli builtin, which chooses between two values with a given probability. It would be better to define a failure probability p and let the problem definer set it to a relevant value. One possible workaround is to add a state variable that never changes; but this forces us to use a more complex type for the state and to apply many minor changes. The solution Walnut provides for this problem is configuration variables. As the world definer, you only decide the variable names and their types, in a section after the role definitions.

role ...
role ...

config
    p: Number
    other_config: String

You can use as many configuration variables as you want, of any type. You can then access their values through the config variable

role ...
    sensor = SensorReading(
        bernoulli(config.p, state[agent.location], Clean),
        agent.location)

In the problem definition, you will see a configuration section where you can enter a value for p (for example: 0.03).

The problem configuration must set values for all the configuration variables defined in the world. It can also define extra variables, which can be useful for defining settings specific to the problem. For example, assume a world that describes many agents moving on a map, and a problem where they must race to be the first to reach a given goal position; you can then add a configuration variable "goal" (setting it to a point value) and define the performance function as:

if agent.location == config.goal then 1 else 0

The above is useful and valid even if the world does not define the goal configuration variable. Writing it this way makes the value of the variable easier to edit and update.
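For comparison, without the configuration variable the goal position would be hard-coded into the performance function itself, so changing it would mean editing the function body. The Point(3, 4) literal below is only an assumed notation for a point value, not confirmed Walnut syntax:

if agent.location == Point(3, 4) then 1 else 0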

The configuration information is recorded in the trace, so it is available in the visualization definition. In the example above, that allows marking the goal position with a flag:

sprite {
    x = config.goal.x
    y = config.goal.y
    image = ":triangular_flag_on_post: "
}

If you want to add configuration options per agent, there are two alternatives. You can add a configuration variable of dictionary type, with agent ids as keys and the values to configure. Alternatively, it is often easy to add the variables to the agent state without causing many problems, and this makes them easier to refer to in the rest of the definition. However, keeping them in the state means that you should make sure no actuator accidentally changes them.
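As a sketch of the dictionary alternative, a world could declare a per-agent speed limit and look it up by agent id. The Dictionary type name, the speeds variable, and the agent.id field are assumptions for illustration, not confirmed Walnut syntax:

config
    speeds: Dictionary

role ...
    max_speed = config.speeds[agent.id]

The state alternative avoids the lookup entirely, at the cost of having to protect the value from actuators.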