Random float netlogo (5/11/2023)

There are many situations in life where we have the option to make a personal effort that will benefit others beyond the personal cost incurred. This type of behavior is often termed "to cooperate", and it can take a myriad of forms: from paying your taxes to inviting your friends over for a home-made dinner. All these situations, where cooperating involves a personal cost but creates net social value, exhibit the somewhat paradoxical feature that individuals would prefer not to pay the cost of cooperation, but everyone prefers the situation where everybody cooperates to the situation where no one does. This counterintuitive characteristic is the defining feature of social dilemmas, and life is full of them (Dawes, 1980).

The essence of many social dilemmas can be captured by a simple 2-person game called the Prisoner's Dilemma. In this game, the payoffs for the players are: if both cooperate, R (Reward); if both defect, P (Punishment); if one cooperates and the other defects, the cooperator obtains S (Sucker) and the defector obtains T (Temptation). The payoffs satisfy the condition T > R > P > S. Thus, in a Prisoner's Dilemma, both players prefer mutual cooperation to mutual defection (R > P), but two motivations may drive players to behave uncooperatively: the temptation to exploit (T > R) and the fear of being exploited (P > S).

Let us see a concrete example of a Prisoner's Dilemma. Imagine that you have $1000, which you may keep for yourself or transfer to another person's account. This other person faces the same decision: she can transfer her $1000 to you, or else keep it. Crucially, whenever money is transferred, it doubles, i.e. the recipient receives $2000. Try to formalize this situation as a game, assuming you and the other person only care about money. The game can be summarized using the payoff matrix in Figure 1. To see that this game is indeed a Prisoner's Dilemma, note that transferring the money would be what is often called "to cooperate", and keeping the money would be "to defect".
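Working through the formalization exercise with the text's own numbers gives the following payoff table (our own reconstruction of the payoff matrix, since the Figure 1 image is not reproduced on this page). Transferring costs you your $1000 but delivers $2000 to the other person:

```
                     she transfers            she keeps
you transfer    $2000, $2000  (R, R)     $0, $3000     (S, T)
you keep        $3000, $0     (T, S)     $1000, $1000  (P, P)
```

Indeed T = $3000 > R = $2000 > P = $1000 > S = $0, so the condition T > R > P > S holds and the game is a Prisoner's Dilemma.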
The goal of this section is to create our first agent-based evolutionary model in NetLogo. Being our first model, we will keep it simple; nonetheless, the model will already contain the four building blocks that define most models in agent-based evolutionary game theory, namely:

- a population of agents,
- a game that is recurrently played by the agents,
- an assignment rule, which determines how revision opportunities are assigned to agents, and
- a revision protocol, which specifies how individual agents update their (pure) strategies when they are given the opportunity to revise.

In particular, in our model the number of (individually-represented) agents in the population will be chosen by the user. These agents will repeatedly play a symmetric 2-player 2-strategy game, each time with a randomly chosen counterpart. The payoffs of the game will be determined by the user. Agents will revise their strategy with a certain probability, also to be chosen by the user. The revision protocol these agents will use is called "imitate-the-better-realization", which dictates that a revising agent imitates the strategy of a randomly chosen player if that player obtained a payoff greater than the revising agent's. This fairly general model will allow us to explore a variety of specific questions, like the one outlined above.
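The "imitate-the-better-realization" protocol described above can be sketched in NetLogo roughly as follows. This is only an illustrative sketch: the variable names strategy and payoff are assumptions of ours, not necessarily those used in the final model.

```netlogo
turtles-own [strategy payoff]

; Sketch of the "imitate-the-better-realization" revision protocol:
; look at one randomly chosen other player and copy its strategy
; only if that player's payoff was strictly greater than ours.
to update-strategy
  let observed-agent one-of other turtles
  if [payoff] of observed-agent > payoff
    [ set strategy [strategy] of observed-agent ]
end
```

Note that the comparison uses the observed player's realized payoff in the last interaction, not its expected payoff, which is what makes this protocol stochastic even in large populations.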
NetLogo: How to make sure a variable stays in a defined range?

I have a few variables which can be inherited by child agents with a variation of +0.1 or -0.1, with no change at all, or drawn at random again. What I have done is like this (the code is just an example): to reproduce ask turtle 1 [ ... ] Currently I have to check that the X of a child turtle is always within range with something like this: if X > 1 [ ... ] OUTPUT is: 67 times out of 100000000. 67 is the biggest one I got; I also got 58 and 51.

As you've discovered, random-normal can be problematic because the result you get back can be literally any number. One possible solution is to clamp the output of random-normal within the boundaries, as in Matt's answer. Note that this approach creates spikes at the boundaries of the range:

observer> clear-plot set-plot-pen-interval 0.01 set-plot-x-range -0.1 1.1

Another possible solution, as Marzy describes in the question itself, is to discard any out-of-bounds results random-normal gives you and just keep trying again until you get an in-bounds result. This avoids the spikes at the boundaries:

to-report random-normal-in-bounds [mid dev mmin mmax]
  let result random-normal mid dev
  if result < mmin or result > mmax
    [ report random-normal-in-bounds mid dev mmin mmax ]
  report result
end

observer> histogram n-values 1000000 [ ... ]

Another solution is to ask yourself whether you really need a bell curve, or whether a triangle-shaped distribution would be just fine. You can get a triangle-shaped distribution of results very simply, just by summing two calls to random-float:

observer> clear-plot set-plot-pen-interval 0.1
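For completeness, the clamping and triangle-distribution approaches mentioned above can also be written as reporters. These are our own sketches (the clamped version only approximates Matt's answer, which is not reproduced on this page):

```netlogo
; Clamp the draw into [mmin, mmax]. Fast, but piles up probability
; mass ("spikes") exactly at the two boundaries of the range.
to-report random-normal-clamped [mid dev mmin mmax]
  report max (list mmin (min (list mmax (random-normal mid dev))))
end

; Triangle-shaped distribution on [0, 1): the sum of two
; independent uniform draws on [0, 0.5), peaking at 0.5.
to-report random-triangle
  report (random-float 0.5) + (random-float 0.5)
end
```

The clamped reporter trades correctness of the distribution's shape for speed, while the triangle reporter avoids both the spikes and the retry loop at the cost of a different (non-Gaussian) shape.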