Thermal treatment of the minority game

We study a cost function for the aggregate behavior of all the agents involved in the minority game (MG) or the bar attendance model (BAM). The cost function allows us to define a deterministic, synchronous dynamics that yields results having the same main relevant features as those of the probabilistic, sequential dynamics used for the MG or the BAM. We define a temperature through a Langevin approach in terms of the fluctuations of the average attendance. We prove that the cost function is an extensive quantity that can play the role of an internal energy of the many-agent system, while the temperature so defined is an intensive parameter. We compare the results of the thermal perturbation to the deterministic dynamics and prove that they agree with those obtained with the MG or BAM in the limit of very low temperature.


I. INTRODUCTION
The bar attendance model (BAM) [1] and the minority game (MG) (see Refs. [2-7]) have recently become regular testing grounds to investigate how the individual actions of a system of independent agents give rise to some kind of macroscopic ordering. In the MG, the agents have to make a binary decision that, for the sake of concreteness, is usually taken to be associated with going or not going to a bar. The winning option is that of the minority. The MG is a particular case of the BAM, which was in turn introduced to show how an ensemble of agents that perform inductive reasoning can self-organize to match some condition that is generally accepted to be the most adequate. In the case of the BAM this corresponds to the largest acceptable attendance that does not incur discomfort.
Both models have been compared with each other in Refs. [8] and [9] by working out a generalized version of the MG (the GMG) in order to consider situations in which the minority is replaced by an arbitrary fraction λ of the ensemble of players. This fraction is fixed externally as a control parameter. In all these models the players update their attendance probabilities with a random correction, depending upon the past record of successes and failures. Asymptotic stable configurations are always reached. These are, however, of quite a different nature depending upon the values of the control parameters, the initial conditions, and the updating rules involved in each model.
In the present paper we are interested in the cases in which the asymptotic stable distribution can be assimilated to a kind of thermodynamic equilibrium. In these situations the agents continue to update their attendance probabilities but the corresponding probability density distribution remains stationary. The stochastic dynamics developed for the BAM in Ref. [9] always leads the system to these types of configurations, while in the cases studied for the GMG, when λ is significantly larger (or smaller) than 1/2, the system gets stuck in quenched configurations that strongly depend upon the initial conditions. Updating stops because agents have accumulated a great number of successes. These ''glassy'' states can nevertheless be ''melted'' into equilibrium if the memory of past successes is repeatedly eliminated in an iterative process that can be assimilated to an annealing procedure.
A remarkable result obtained in all numerical simulations is that the equilibrium configuration entails a diversity in the individual actions. The population is drastically partitioned into two subsets, one that always goes to the bar and the other that never goes. It therefore seems that, in spite of the fact that the agents do not exchange information, they manage to coordinate their actions to proceed in two opposite ways. The numbers of agents in the two subsets are in a ratio equal to λ/(1−λ). Such polarization is not an intuitive result. A naïve guess would be that all agents should choose the same attendance probability, equal to λ. However, this turns out not to be a stable distribution because parties that are larger or smaller than the accepted crowding would occur with a great chance.
The fact that all agents adjust their attendance probabilities in order to minimize their failures (i.e., to go when the bar is crowded or not to go when the bar is empty) leads to an aggregate behavior that minimizes a global cost associated with inadequate attendances. We propose to express such a cost by the second moment of the attendance with respect to the acceptable level λ.
The purpose of the present paper is to investigate the effects of introducing that cost function in the relaxation dynamics of the system. We show that this is a Lyapunov function for the many-agent system, i.e., it is possible to derive a deterministic dynamics as the descent along its gradient that monotonically reduces its value. This corresponds to a highly coordinated, synchronous evolution.
We prove that the cost function meets the requirements of an internal energy of the many-agent system. We also introduce a temperature parameter through a Langevin-like approach that can be defined in terms of the fluctuations of the attendance strategies. Except for finite-size effects, this can be proven to be an intensive parameter. We also superimpose thermal fluctuations on the deterministic dynamics mentioned above. Depending upon the amplitude of these fluctuations, the polarization is gradually smeared until a point at which it completely disappears.
The thermally modified relaxation process that we define here is completely different from those involved in the GMG or BAM approaches, which rely on the independent and uncoordinated actions of all the agents. The latter involve a random updating of the individual attendance strategies governed by a (small) uncertainty amplitude δp that is interpreted as the precision of such updating. We prove that in the limit of low temperature and small uncertainty amplitude both dynamics lead to entirely equivalent asymptotic equilibrium configurations. The thermal interpretation of the uncertainty amplitude also allows us to cast the annealing process presented in Refs. [8] and [9] into a thermal framework, as the well-known case of simulated annealing [10].
In Sec. II, we derive the cost function, and in Sec. III, we investigate the dynamics that corresponds to the descent along its gradient. In Sec. IV, we present a Langevin approach to define the temperature in terms of the fluctuations that are present in the asymptotic equilibrium configuration. In Sec. V, we compare this with more traditional approaches for the relaxation process. In Sec. VI, we draw the conclusions.

II. COST FUNCTION
Consider a set of N agents that have a probability p_i (i = 1,2,…,N) to go to the bar. The distribution of the p_i's is given by the probability density function P(p). As we shall shortly explain, the p_i are updated in time according to some dynamics, and therefore the function P(p) also changes in time.
In the ordinary rules of the GMG, when a player goes to the bar and finds it crowded, or when she does not go and the bar is empty, she loses a point. If the opposite happens she gains a point. The level of crowding is specified by the value of the control parameter λ. When her account of points falls below zero she updates her attendance probability, choosing at random a different value within the interval (p_i − δp/2, p_i + δp/2). When equilibrium is reached, the resulting distribution P(p) concentrates the population in the immediate neighborhood of p ≈ 0 and p ≈ 1, plus an almost vanishing contribution from intermediate values. The ratio of the areas below these two peaks is close to λ/(1−λ).
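As an illustration, the updating rule just described can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code; in particular, resetting the score to zero after an update and clipping the redraw window at the borders of [0,1] are our assumptions.

```python
import random

def gmg_step(p, scores, lam, dp, rng):
    """One round of GMG-style play: each agent attends with probability p[i],
    agents whose choice matched the acceptable-crowding outcome gain a point,
    the others lose one, and any agent whose account fell below zero redraws
    her strategy in a window of width dp."""
    N = len(p)
    attend = [rng.random() < p[i] for i in range(N)]
    comfortable = sum(attend) <= lam * N   # attendance within the tolerated level
    for i in range(N):
        scores[i] += 1 if attend[i] == comfortable else -1
        if scores[i] < 0:
            lo = max(0.0, p[i] - dp / 2)   # assumption: clip window to [0, 1]
            hi = min(1.0, p[i] + dp / 2)
            p[i] = rng.uniform(lo, hi)
            scores[i] = 0                  # assumption: reset after updating

rng = random.Random(0)
p = [rng.random() for _ in range(100)]
scores = [0] * 100
for _ in range(500):
    gmg_step(p, scores, lam=0.6, dp=0.2, rng=rng)
```

Running a few hundred such rounds already concentrates most of the population near p = 0 and p = 1, in line with the behavior described above.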
The aggregate behavior is associated with the density distribution P(A) that gives the probability of occurrence of a party of A customers attending the bar. The function P(A) is of course completely determined by P(p). In order to calculate it, let us assume without loss of generality that all the agents distribute themselves into D + 1 different bins of n_d (d = 0,1,…,D) agents each, with strategies p_d = d/D. The density distribution P(p) can then be written as

  P(p) = (1/N) Σ_{d=0}^{D} n_d δ(p − p_d).   (1)
With this assumption, the distribution P(A) can be written as

  P(A) = Σ_{l_0,…,l_D} [ Π_{d=0}^{D} (n_d choose l_d) p_d^{l_d} (1 − p_d)^{n_d − l_d} ] δ(A − Σ_d l_d).   (2)
We define the cost function for the whole ensemble of agents as in Ref. [9], namely, as the second moment [11] with respect to the tolerated crowding level λ,

  C = Σ_A P(A) (λN − A)².   (3)

In order to calculate it, we introduce Eq. (2) into the definition of Eq. (3) and first perform the summation over A, taking advantage of the δ(A − Σ_d l_d). Once this is done, one can perform the summations involved in each of the terms into which (λN − Σ_d l_d)² splits. The summations over the different l's decouple from each other and result in simple binomial moments (a one, n_d p_d, or n_d p_d(1 − p_d) + n_d² p_d²). These terms can be gathered again to yield

  C = N²(λ − ⟨p⟩)² + N(⟨p⟩ − ⟨p²⟩),   (4)

where ⟨p^m⟩ stands for Σ_p p^m P(p) = Σ_d p_d^m n_d / N for m = 1, 2.
The expression of C given in Eq. (4) contains no assumption about the system being in equilibrium. This is the reason why C is proportional to N² instead of being proportional to the size N of the system, as befits an extensive magnitude. The numerical simulations however indicate that in equilibrium ⟨p⟩ = λ, and therefore this term cancels except for possible fluctuations. Actually, the O(N²) term is eliminated by any distribution P(p) whose mean has the required value λ. For an initial condition with uniformly distributed p_i's and P_o(p) = 1, as is used for most simulations, the cost is C = N²(λ − 1/2)² + N/6. Such an initial condition is a good guess for the final distribution when λ ≈ 1/2 (as for the most traditional settings of the MG), but it is indeed very poor for the GMG when λ ≠ 1/2. In the next sections we discuss in greater detail the value of C in equilibrium.
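The identity in Eq. (4) is easy to verify numerically: since A is a sum of independent Bernoulli variables, the exact second moment of (λN − A) can be computed by brute-force enumeration for a small system and compared with the closed form. A short Python check (our own, illustrative):

```python
from itertools import product

def exact_cost(ps, lam):
    """Brute-force C = sum_A P(A) (lam*N - A)^2 over all 2^N attendance outcomes."""
    N = len(ps)
    c = 0.0
    for outcome in product((0, 1), repeat=N):
        prob = 1.0
        for pi, x in zip(ps, outcome):
            prob *= pi if x else 1.0 - pi
        c += prob * (lam * N - sum(outcome)) ** 2
    return c

def closed_form_cost(ps, lam):
    """Eq. (4): C = N^2 (lam - <p>)^2 + N (<p> - <p^2>)."""
    N = len(ps)
    m = sum(ps) / N
    m2 = sum(x * x for x in ps) / N
    return N ** 2 * (lam - m) ** 2 + N * (m - m2)

ps = [0.1, 0.3, 0.5, 0.55, 0.7, 0.9, 0.2, 0.8]  # arbitrary small example
diff = abs(exact_cost(ps, 0.6) - closed_form_cost(ps, 0.6))
```

For uniformly distributed p_i's the closed form reproduces the value C = N²(λ − 1/2)² + N/6 quoted above, since then ⟨p⟩ = 1/2 and ⟨p²⟩ = 1/3.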
The naïve guess P(p) = δ(p − λ) is also seen to cancel the O(N²) terms in C. However, such a distribution causes parties with A close to, but different from, λN to occur with a sizable probability. The O(N) terms in C are minimized precisely when the probability of occurrence of such parties tends to zero, by polarizing the population into two subsets with opposite attendance strategies. To see this we approximate the two-peaked equilibrium distribution that is usually obtained in numerical simulations by

  P(p) = (1/N)[n₁ δ(p − p₁) + n₂ δ(p − p₂)].   (5)
One readily sees that the O(N²) terms are eliminated when n₁p₁ + n₂p₂ = λN, and the O(N) terms are also eliminated if the two peaks are p₁ = 0, n₁ = N(1 − λ), and p₂ = 1, n₂ = λN. The relaxation dynamics that tends to minimize individual losses is therefore seen to also optimize the global cost function defined in Eqs. (3) and (4).
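The two limiting distributions discussed above can be compared directly with Eq. (4). In this short check (illustrative; λN is chosen to be an integer) the naïve guess P(p) = δ(p − λ) leaves the O(N) cost Nλ(1 − λ), while the polarized population makes C vanish:

```python
def cost(ps, lam):
    """Eq. (4): C = N^2 (lam - <p>)^2 + N (<p> - <p^2>)."""
    N = len(ps)
    m = sum(ps) / N
    m2 = sum(x * x for x in ps) / N
    return N ** 2 * (lam - m) ** 2 + N * (m - m2)

N, lam = 1000, 0.6
naive = [lam] * N                                      # everybody plays p = lam
polarized = [1.0] * int(lam * N) + [0.0] * (N - int(lam * N))

c_naive = cost(naive, lam)          # = N * lam * (1 - lam) = 240
c_polarized = cost(polarized, lam)  # = 0
```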

III. DETERMINISTIC DYNAMICS FOR GMG
All the agents of the system, through uncoordinated actions, minimize the total cost C, which is an aggregate function defined for the whole system. This fact suggests an alternative representation of the actions of the agents as a synchronous, deterministic dynamics associated with the descent along the gradient of C. This is described by the following set of coupled differential equations for the p_i's:

  dp_i/dt = −η ∂C/∂p_i = η[2N(λ − ⟨p⟩) + 2p_i − 1].   (6)

In Eq. (6), η stands for a positive free parameter that, as we shall shortly see, provides the scale for the time evolution of the system. The O(N²) and O(N) terms in Eq. (4) translate into a fast and a slow dynamics that involve corrections of the p_i that are, respectively, O(N) and O(1). To see this we first derive the dynamics followed by ⟨p⟩ by averaging both sides of Eq. (6) over i. To leading order in N we thus obtain

  dW/dt = −2ηN W + η(2λ − 1),   (7)

where we have set W(t) = ⟨p⟩ − λ. This can be integrated explicitly. The solution is

  W(t) = W_o e^{−2ηNt} + [(2λ − 1)/(2N)](1 − e^{−2ηNt}),   (8)

with W_o standing for the initial value of W(t). This expression allows us in turn to find an approximate solution of the equations of motion for the individual p_i's. To this end we write an asymptotic approximation of Eq. (6) in which we assume that a long enough time has elapsed, so that ⟨p⟩ − λ can be approximated by the constant term of O(1/N) in Eq. (8). By keeping only the leading order in N we obtain

  dp_i/dt = 2η(p_i − λ).   (9)

Note that the time dependence of p_i(t) involves a positive exponential. However, this equation is not valid for t → ∞ because the fact that the p_i's are probabilities, and are therefore bounded between 0 and 1, is not included in the equations but rather in the boundary conditions of Eqs. (6).
Equations (7) and (9) correspond, respectively, to the fast and slow dynamics mentioned above. In the first place we see that, except for terms that are O(1/N), ⟨p⟩ approaches λ exponentially with the very short time constant τ = 1/(2ηN), which tends to zero as the system involves a larger number of individuals. On the other hand, the differences p_i(t) − λ instead grow exponentially for all i, indicating that the p_i's depart exponentially from the average and eventually saturate at their largest or smallest possible values, 1 or 0, thus polarizing the population of agents. This process however takes place with a time constant 1/(2η) that is O(N) longer than the one involved in the evolution of ⟨p⟩ and is independent of the size of the system. While the average ⟨p⟩ approaches the value λ very fast, the individual p_i's depart slowly from that same value.
Equations (6) can be tested numerically by approximating them by finite differences. The individual attendance probabilities p_i are thus taken to be updated as

  p_i(t + 1) = p_i(t) + Δ(p_i),  Δ(p_i) = η[2N(λ − ⟨p⟩) + 2p_i − 1],   (10)

with the result constrained to the interval [0,1]. The resulting density distributions P(p) obtained with this dynamics are shown in Fig. 1. The value of η, and therefore that of the time constant τ, is in principle arbitrary. However, if τ ≫ 1 the only noticeable effects are those of the fast dynamics, while if τ ≪ 1 the descent towards the minimum keeps bouncing at opposite sides of the well and never reaches its bottom. When 1/2 ≲ τ ≲ 2 the descent is gradual enough so that the interplay of both terms in Δ(p_i) leads the system to a minimum of C. The intermediate stages in the gradient descent are also shown in Fig. 1. In the first few steps the (fast) uniform correction of O(N) is seen to shift the initial distribution rigidly to one side, adjusting the value of ⟨p⟩ to that of λ. As a consequence, agents pile up at one end while the other is completely cleared. Once the leading term in C is nearly canceled, the slow dynamics gradually gathers agents at both ends of the distribution, producing minor fluctuations in the value of ⟨p⟩. The density distribution P(p) that is finally obtained corresponds to a strongly polarized population, thus reproducing the main feature of the equilibrium distributions obtained with the rules traditionally used in the GMG or the BAM.
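A minimal implementation of the finite-difference dynamics of Eq. (10) (our sketch, with η = 1/(2N) so that τ = 1, and with each p_i clipped to [0,1]) indeed drives an initially uniform population to a polarized state with ⟨p⟩ ≈ λ:

```python
import random

def relax(N=200, lam=0.6, steps=5000, seed=0):
    """Iterate Eq. (10) with eta = 1/(2N), clipping each p_i to [0, 1]."""
    rng = random.Random(seed)
    eta = 1.0 / (2 * N)
    p = [rng.random() for _ in range(N)]
    for _ in range(steps):
        m = sum(p) / N
        p = [min(1.0, max(0.0, pi + eta * (2 * N * (lam - m) + 2 * pi - 1)))
             for pi in p]
    return p

p = relax()
mean_p = sum(p) / len(p)
polarized = sum(1 for x in p if x < 0.01 or x > 0.99) / len(p)
```

With these parameters the population ends up almost entirely at p = 0 or p = 1, with the fraction at p = 1 close to λ.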
The present approach yields a density distribution that displays the same polarization found in the GMG or in the BAM. It is remarkable that such a general qualitative agreement is found, although those frameworks differ deeply from the deterministic formulation. The conceptual difference between the two approaches lies in the special role played by the record of successes and failures that is kept in the BAM or GMG and that is completely absent in the present treatment. The usual rules of the GMG can thus be considered to correspond to a dynamics constrained by the (positive) balance of points that have been accumulated in the past instances of the game. There are other differences that deserve further discussion. These are related to the stochastic elements of the dynamics used in that framework, which are absent from the present one. Within the present approach these can be assimilated to the effects of a finite temperature. We turn to this point in the next section.

IV. THERMAL FLUCTUATIONS
The usual rules of the BAM or the GMG involve a stochastic updating of the attendance probabilities of each customer. When the account of points of the ith player falls below zero, a new value of p_i is chosen at random from the interval (p_i − δp/2, p_i + δp/2). This can be interpreted as a kind of thermal fluctuation in which δp can be related to the temperature.
A few qualitative features support this. In equilibrium, the population is drastically polarized into those that consistently go to the bar (and therefore have p_i = 1) and those that do not go (p_i = 0). A small fraction having p_i's with intermediate values continuously migrates between both extreme strategies.
This migration causes the value of ⟨p⟩ to fluctuate around λ. These random values of ⟨p⟩ have a distribution that is sharply peaked at that value and has a width that is regulated by δp. As regards the density distribution P(p), a small value of δp produces sharp peaks at p = 0 and p = 1 and P(p) ∼ 0 for intermediate values.
For larger values of δp there is a larger fraction of players that migrate between p = 0 and p = 1, thus producing a rise in the ''bottom'' of the distribution P(p).
The above qualitative arguments provide hints about how to introduce thermal fluctuations in the deterministic dynamics presented in the preceding section, and also about their relationship with δp in the case of the GMG. However, a singular situation occurs for δp → 0, which is associated with an infinitely long relaxation process, or for δp > 1, in which case this parameter loses its meaning as the uncertainty of a probability.
Thermal-like fluctuations can be introduced formally following the same steps as in the Langevin approach used to describe a Brownian particle. In the present situation we start with Eq. (7) for the motion of the average value ⟨p⟩, and we add a stochastic term L(t) that accounts for the random fluctuations:

  dW_s/dt = −2ηN W_s + η(2λ − 1) + L(t).   (11)
We have added an index s to W(t) in Eq. (7) to stress the fact that this is the value of W(t) in the presence of stochastic external fluctuations. The source of noise L(t) can be taken to be the average of N uncorrelated sources of random fluctuations affecting all the independent agents. One still has to specify a parameter related to the statistical properties of the distribution of the stochastic function L(t). We will shortly prove that this is closely related to the temperature. As usual we assume

  \overline{L(t)} = 0,   (12)

  \overline{L(t) L(t′)} = Γ δ(t − t′).   (13)

In Eqs. (12) and (13) and in all that follows, the overline denotes an average over a suitable ensemble of replicas of the N-agent system. The parameter Γ is a constant that represents the mean-square amplitude of instantaneous, uncorrelated perturbations. The stochastic differential equation (11) can be integrated explicitly. The result is

  W_s(t) = W(t) + ∫_0^t e^{−2ηN(t−t′)} L(t′) dt′,   (14)

where W(t) is the solution given in Eq. (8), in which no fluctuations are present. If an average is taken on both sides of Eq. (14) over a subensemble of systems having the same initial condition W_o appearing in Eq. (8), one immediately sees that Eq. (12) implies \overline{W_s(t)} = W(t), and therefore the convergence of ⟨p⟩ to λ [up to terms O(1/N)] is also ensured within the stochastic dynamics. If the mean-square fluctuations of W_s(t) are calculated with the aid of Eq. (13), we get

  \overline{W_s²}(t) = W²(t) + [Γ/(4ηN)](1 − e^{−4ηNt}).   (15)

The stochastic term thus produces a nonvanishing value \overline{W_s²}(∞). In ordinary statistical mechanics, the mean-square fluctuation of the stationary velocity of a Brownian particle is directly related to its average kinetic energy and can be set equal to kT. By analogy, we formally define a temperature parameter T, independent of the size of the system, as the mean-square fluctuation of ⟨p⟩ in an equilibrium configuration, scaled by the number of agents of the system:

  κT = N \overline{W_s²}(∞).   (16)
Neglecting terms O(1/N²) we obtain κT = Γ/(4η). The parameter κ is a factor relating T to the amplitude of the random fluctuations and plays a role similar to that of the Boltzmann constant. Equation (16) allows us to write the ensemble average of the cost C for an equilibrium configuration at finite temperature. Up to the leading order in N we obtain

  \overline{C} = N² \overline{(λ − ⟨p⟩)²} + N(⟨p⟩ − ⟨p²⟩) = N[κT + λ − ⟨p²⟩].   (17)
C is a positive, extensive magnitude which, in equilibrium, grows linearly with the size of the system and can therefore be taken to play the role of an internal energy.
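The relation κT = Γ/(4η) behind the definition of T can be checked with a direct numerical integration of the Langevin equation (11). The discretization below (our sketch; the time step and parameter values are arbitrary choices) measures the stationary variance of W_s and compares it with Γ/(4ηN):

```python
import random
import statistics

def stationary_variance(N=1000, lam=0.6, eta=0.001, Gamma=0.01, dt=0.01,
                        steps=200_000, burn=20_000, seed=1):
    """Euler integration of dW_s/dt = -2*eta*N*W_s + eta*(2*lam - 1) + L(t),
    with white noise of strength Gamma (variance Gamma*dt per step)."""
    rng = random.Random(seed)
    a = 1.0 - 2.0 * eta * N * dt          # decay factor per step
    drift = eta * (2 * lam - 1) * dt
    amp = (Gamma * dt) ** 0.5
    W, samples = 0.0, []
    for t in range(steps):
        W = a * W + drift + rng.gauss(0.0, amp)
        if t >= burn:                     # discard the transient
            samples.append(W)
    return statistics.pvariance(samples)

var = stationary_variance()
predicted = 0.01 / (4 * 0.001 * 1000)     # Gamma / (4*eta*N) = 0.0025
```

Up to discretization and statistical errors, the measured variance agrees with Γ/(4ηN), so that N times it gives the size-independent quantity κT = Γ/(4η).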
The linear dependence of C on the size of the system can be checked for the GMG. To do so we have calculated the cost using the definition of Eq. (3) with different numbers of agents. We first allowed the system to relax to the asymptotic equilibrium configuration and then performed a suitable ensemble average over several replicas of the system. The linear dependence is shown in Fig. 2. The last iteration steps are used to estimate the dispersion of the numerical result, which is shown with a pair of dotted lines. The slope of these lines changes slightly with the parameter δp of the GMG. This is due to the relation between T and δp that we discuss later.

V. THERMAL RELAXATION
To include thermal fluctuations in a numerical treatment of the deterministic dynamics it suffices to introduce a random additive term in Eq. (10), namely,

  p_i(t + 1) = p_i(t) + Δ(p_i) + σ L^(i)(t),   (18)

where L^(i)(t) = (1/2 − r) and r is a random number uniformly distributed in the interval [0,1]. This function represents the fluctuations produced on the ith agent by a thermal bath. The temperature is defined through the second moment Γ of the probability density of the σ L^(i)(t). The limit in which σ L^(i)(t) has zero width (and therefore σ = 0) corresponds to the deterministic dynamics discussed in Sec. III. Larger values of σ are associated with fluctuations that may eventually override the updating amplitude Δ(p_i) and tend to smear the two sharp δ functions of the distribution, increasing the fraction of the population with strategies p_i ≠ 0 or 1 [see Fig. 3(a)]. If σ is further increased the polarization is progressively destroyed, because the drift of the p_i's towards 0 or 1 has to equilibrate against random shocks that prevent them from reaching those limiting values.
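The effect of the noise amplitude σ can be seen in a direct simulation of Eq. (18) (again our sketch, with the same conventions as the deterministic code above): at σ = 0 the population polarizes sharply, while a large σ keeps kicking agents away from the borders p = 0 and p = 1.

```python
import random

def thermal_relax(N=200, lam=0.6, sigma=0.0, steps=4000, seed=3):
    """Eq. (18): gradient step plus a thermal kick sigma*(1/2 - r) per agent."""
    rng = random.Random(seed)
    eta = 1.0 / (2 * N)
    p = [rng.random() for _ in range(N)]
    for _ in range(steps):
        m = sum(p) / N
        p = [min(1.0, max(0.0, pi + eta * (2 * N * (lam - m) + 2 * pi - 1)
                                + sigma * (0.5 - rng.random())))
             for pi in p]
    return p

def polarized_fraction(p, eps=0.01):
    """Fraction of agents within eps of the extreme strategies 0 and 1."""
    return sum(1 for x in p if x < eps or x > 1.0 - eps) / len(p)

cold = polarized_fraction(thermal_relax(sigma=0.0))
hot = polarized_fraction(thermal_relax(sigma=0.5))
```

The comparison cold > hot reproduces, at the level of this toy run, the smearing of the polarization with increasing temperature described above.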
Given the stochastic dynamics of Eq. (18) together with the definition in Eq. (16), it is possible to calculate the value of T in an equilibrium configuration and relate T to σ. The parameter η has to be chosen such that the relaxation of the deterministic dynamics is guaranteed, i.e., such that the time constant τ = 1/(2ηN) introduced in Sec. III is ∼1. In Fig. 4 we show that, as expected, T ∼ σ².
Equation (16) also allows us to calculate T in any configuration reached through the stochastic dynamics of the GMG or the BAM. With this we can check two important features. The first is an estimation of the finite-size corrections in the definition of T given in Eq. (16), i.e., of the regime in which T is independent of the size of the system. The second is to establish a quantitative relationship between T and the δp that goes into the relaxation dynamics of the GMG.
We have calculated \overline{W_s²}(t) for the GMG using several values of δp and N. We have allowed t to be large enough to reach equilibrium. We have then performed an ensemble average over several replicas of the system. The last steps have been used to gauge the dispersion of the numerical values. The results are shown in Fig. 4(b), where we plot N\overline{W_s²}(∞) as a function of δp. All the above-mentioned features can be extracted from Fig. 4. First, finite-size effects are clearly seen to affect only the smallest systems, up to N ∼ 500. Second, the independence of N\overline{W_s²}(∞) from the size of the system, as assumed in the definition of Eq. (16), follows from the fact that the curves for N ≥ 500 lump tightly together. In the third place, a linear regression of all the curves establishes that δp and T have the same physical interpretation and, within the interval considered, are nearly proportional to each other, namely, T = K δp, with K = (320 ± 20) × 10⁻⁴.
The fact that T and δp are conceptually equivalent leads us to extend the GMG simulations to higher values of δp.
These values have seldom been explored in the literature [13] because this parameter measures the minor adjustments performed by the agents as they try to find the ''best'' attendance probability. Large values of δp could, for instance, correspond to irresolute or hesitant agents.
There are however important points that have to be considered. In the first place, the value of δp cannot be taken arbitrarily large. This is so because it measures the uncertainty of the value of a probability. Values of δp ≳ 1 have therefore little physical meaning. In addition, if δp is nevertheless extended to values higher than one by any plausible analytical extension (for instance, using periodic or reflective boundary conditions), the fluctuations \overline{W_s²} for δp > 1 are seen to saturate at an approximately constant value [see the inset in Fig. 4(b)]. These facts are the reason why the correspondence between δp and T necessarily breaks down.
A comparison of the probability density distributions P(p) obtained with both approaches further supports this departure. In Fig. 3(b) we show the equilibrium density distributions obtained with the stochastic, asynchronous updating rules of the GMG for two values of δp (and λ = 0.6). It is seen that these diverge from those of Fig. 3(a), which are obtained with the dynamics given in Eq. (18). Note however that there are noticeable resemblances for small-amplitude fluctuations. See, for instance, the distribution plotted with a full line in Fig. 3(b) and the one for σ = 0.003 in Fig. 3(a).
As mentioned before, the origin of the departure between both dynamics can be found in the scoring of successes and failures that is used in the GMG and that is absent in the present approach. Some customers can be considered to be excluded from the updating dynamics as a consequence of their great accumulation of points. This, for instance, produces the large value of P(p = 1): many players that have accumulated a large positive account by attending the bar do not change strategy. The scoring of each player works as a kind of ''Maxwell demon'' that classifies agents into different groups, endowing each one with a different updating rate.
The equilibrium configuration that is reached in the GMG therefore entails a distribution of updating rates in which some players are essentially frozen while others modify their attendance strategies frequently. This situation is completely different from the one obtained with the dynamics of Eq. (18), in which all agents undergo stochastic perturbations in every time step.
In order to show this we present in Fig. 5 some results of the GMG in which we have used a large value of δp (δp = 0.8) and have arbitrarily partitioned the ensemble of 1001 players into two sets. One of the sets gathers all players having at most 10 points; the other contains all the rest. We have plotted their respective density distributions P(p). The agents having fewer than 11 points are the ones that participate more strongly in the dynamics because they undergo more frequent updatings. The above comparison indicates that the GMG and the thermal relaxation dynamics of Eq. (18) strictly coincide only in the limit T → 0. However, the strong qualitative resemblance of the results for δp ≤ 0.6 allows us to interpret δp, with these limitations, as equivalent to a thermal fluctuation.
The thermal interpretation of δp has one interesting consequence. The most remarkable feature of the relaxation processes of the GMG performed with large δp is that the high fluctuations prevent quenching (see Fig. 6). This allows us to provide a framework for the annealing procedure presented in Refs. [8] and [9] that resembles more closely the traditional protocol of Ref. [10]. The method presented in Ref. [8] requires an iterative procedure that involves a short evolution of the N-agent system and the subsequent elimination of all the points accumulated in the system. This is repeated until the distribution P(p) remains stationary. With the present interpretation of δp, a thermal annealing relaxation for the GMG can be performed for the cases in which λ is significantly different from 1/2. This protocol can be assumed to take place in episodes. In the first episode, relaxation is allowed using a value of δp that is large enough to ensure that equilibrium is reached and quenching is prevented. Each following episode starts from the equilibrium reached in the preceding one, and a relaxation process is allowed with a smaller value of δp that is still large enough to avoid the appearance of quenching. The process continues until a lower bound of δp is reached. Following this ''cooling'' protocol quenching never occurs, an absolute minimum of C is obtained, and the population remains strongly polarized.
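In code, the cooling protocol can be sketched on top of the thermal dynamics of Eq. (18) (our illustration; the episode lengths and the σ ladder are arbitrary choices): relaxation episodes are run with a decreasing noise amplitude, and the final, cold episode leaves a polarized population even for λ far from 1/2.

```python
import random

def annealed_relax(N=200, lam=0.7, sigmas=(0.4, 0.2, 0.1, 0.05, 0.0),
                   steps_per_episode=2000, seed=11):
    """Episodes of the thermal dynamics of Eq. (18) with decreasing sigma."""
    rng = random.Random(seed)
    eta = 1.0 / (2 * N)
    p = [rng.random() for _ in range(N)]
    for sigma in sigmas:                    # the ''cooling'' schedule
        for _ in range(steps_per_episode):
            m = sum(p) / N
            p = [min(1.0, max(0.0, pi + eta * (2 * N * (lam - m) + 2 * pi - 1)
                                    + sigma * (0.5 - rng.random())))
                 for pi in p]
    return p

p = annealed_relax()
mean_p = sum(p) / len(p)
final_polarization = sum(1 for x in p if x < 0.01 or x > 0.99) / len(p)
```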

VI. CONCLUSIONS
In the present paper we provide an alternative description of the dynamics of a system composed of many agents that play the GMG. This is given in terms of the optimization of a single global magnitude, instead of in terms of the independent actions of the N agents. We do this by studying the effect of introducing a cost function C associated with the second moment of the probability distribution of the size of the attending parties.
We have proven that C has the relevant properties of an internal energy. In equilibrium, it is a positive extensive quantity that scales linearly with the number of agents N and its minima correspond to equilibrium configurations with a highly polarized population, as found in the BAM or the GMG without quenching.
In addition, the deterministic dynamics derived from the descent along the gradient of C leads the system to configurations with a polarization equivalent to that found with the traditional stochastic updating of the BAM or the GMG. This is a nontrivial equivalence between two completely different organization schemes of the N-agent system. On the one hand, the gradient descent gives rise to a set of coupled differential equations that represents a coordinated evolution of all the agents, as would result from the action of a ''central planner'' of the whole system. On the other hand, within the GMG all the agents act independently from each other, adjusting their attendance strategies with the purpose of optimizing their individual utilities. Even though the two relaxation mechanisms are very different, the final configurations of the system turn out to have equivalent features. In other words, from the point of view of game theory the aggregate result of the actions of many independent players tends to minimize the total loss. The introduction of the cost function reverses this picture, in the sense that its minimization gives rise to a coordinated action of the N agents.
The definition of C in terms of the second moment of the probability distribution of attending parties is reminiscent of the many-body Hamiltonian introduced in Ref. [12] to cast a version of the MG into the spin glass formalism. In the present case C can also be considered as a many-body Hamiltonian with one- and two-body interactions in which the N dynamic variables are the attendance probabilities p_i's, with i = 1,2,…,N.

FIG. 5. Partial probability density distributions of individual attendance strategies for the GMG for different subsets of players, obtained for 1001 players, a crowding level of 600/1001, and averages made over 2000 histories. (a) Asymptotic distributions. Subset of players with more than 10 accumulated points (full line) and with fewer than 11 points (dashed line). The total probability density distribution is shown with empty boxes. (b) Density distributions at the end of the first ten steps of the simulation. Players with zero points (open boxes) have the greatest mobility; players with five and ten points (full and dashed lines, respectively) have lower mobility. The total density distribution is shown with full triangles.
The introduction of C and the associated relaxation process allows us to define a temperature parameter through a Langevin-like approach. The value of T remains associated with the ensemble average of the square of the fluctuations of the attendance, scaled by the number of agents. Its introduction in C provides the proof that this quantity, in thermal equilibrium, scales linearly with the size N of the system and therefore qualifies as an extensive parameter.
On the other hand, in order to be an intensive parameter, T should be independent of the size of the system. This has been checked numerically for the case of the GMG. However, finite-size effects in the definition of T become negligible only for systems that are significantly larger than the minimal ones that already display the self-organization features and that have spurred the popularity of the minority game.
Thermal fluctuations can be included in the dynamics that corresponds to the descent along the gradient of C. The corresponding distributions P(p) can readily be found, and a comparison can be made between T and the parameter δp involved in the relaxation of the GMG or the BAM. A direct relationship can be established between both parameters, but only in the limit δp → 0. We have also considered the dynamics of the GMG with moderately large values of δp, for which the divergence between the GMG and the thermal dynamics is still not important. A stochastic updating that involves large values of δp can be thought of as being associated with irresolute or badly informed agents that correct their attendance probabilities by performing significant changes in each correction.
The GMG relaxation for large values of δp avoids quenching even for λ significantly different from 1/2. This fact, together with the thermal interpretation of δp, allows us to cast the annealing procedure presented in Ref. [8] into the more traditional framework in which T is progressively reduced in successive epochs. This ''cooling'' protocol could well be assimilated to a succession of learning episodes of the many-agent system. In the first episodes, in which agents have little ''experience'' and the information about the past is scarce, all agents perform large-amplitude, even random, corrections. In the last episodes of the relaxation process, as there is richer information about the past history of the system, the agents perform finer corrections; the fluctuations are smaller and the cost paid for a wrong attendance is also smaller.
The fact that, on the one hand, an extensive magnitude can be defined playing the role of an internal energy and that, on the other, a microscopic definition of the temperature can be made opens the way to a full thermodynamic description of an N-agent system performing a GMG. This amounts to introducing a Gibbs distribution defined as Φ(C) = e^{−C/κT}/Z, where Z stands for the partition function. All thermodynamic functions should follow from this.

ACKNOWLEDGMENTS

E.B. has been partially supported by CONICET of Argentina, PICT-PMT0051; H.C. and R.P. were partially supported by EC Grant No. ARG/b7-3011/94/27, Contract No. 931005 AR.