Under some common circumstances it is possible to optimize the extended
Wolff algorithms by technical means. This depends fundamentally on the
characteristics of the external source or of the boundary conditions
which are present. Especially in the case of external sources, the main
cause of performance degradation is the high cluster-rejection rate
associated with large values of the sources. Let us recall that, after
the cluster is built, one has to calculate the variation $\Delta S$ of
the external-source term of the action and then accept the cluster with
the probability $\min\left\{1,e^{-\Delta S}\right\}$. In order to do
this, one extracts a random number $r\in[0,1)$ and only accepts the
cluster if $r\leq e^{-\Delta S}$ or, equivalently, if $\Delta
S\leq-\ln(r)$, where one should note that $-\ln(r)$ is always a positive
number. The best way to calculate $\Delta S$ is to do so along the
construction of the cluster, incrementing it as the sites are added. The
optimizations we are talking about here usually consist of the following
two things: if the variation $\Delta S$ of the action is guaranteed to
be negative, then the cluster is sure to be accepted and it is not
necessary to calculate $\Delta S$ or to perform the final Metropolis
test, which saves some computational effort; if the variation of the
action is known to always increase as sites are added to the cluster,
then at some point one may know that the cluster will be rejected even
before completing it, and then one may drop the cluster, thus saving the
significant computational effort required to finish building it.
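As an illustration, here is a minimal sketch in C of this final
acceptance test; the names uniform01 and accept_cluster are ours, not
part of any established code, and the random-number generator is just a
placeholder for whatever generator the simulation actually uses.

\begin{verbatim}
#include <math.h>
#include <stdlib.h>

/* Flat random number in [0, 1); a placeholder for the
   generator actually used in the simulation. */
static double uniform01(void)
{
    return rand() / ((double) RAND_MAX + 1.0);
}

/* Final Metropolis test for a completed cluster; dS is the
   variation of the external-source term of the action,
   accumulated while the cluster was built.  The acceptance
   probability min{1, exp(-dS)} is implemented through the
   equivalent test dS <= -ln(r), avoiding the exponential. */
static int accept_cluster(double dS)
{
    double r = uniform01();
    if (r == 0.0)         /* -ln(0) is infinite: sure acceptance */
        return 1;
    return dS <= -log(r);
}
\end{verbatim}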
The main idea behind this is that, so long as the random number used
for the final Metropolis test has a flat probability distribution in
$[0,1)$, it does not matter whether it is extracted from the generator
after the cluster is built or before one starts to build it. Therefore
one may extract it beforehand and keep the value $-\ln(r)$ in storage,
for tests to be performed along the building of the cluster. In order to
better exemplify the idea, let us consider the specific case in which
the external source has a constant and non-vanishing value $j$ over the
whole lattice. As we saw in the discussion of the building algorithm for
the clusters, it is always true that all the sites of a cluster have the
field oriented in the same direction. Therefore, if $j$ is a constant,
the variations of the action associated with the flipping of the field
at each site of the cluster will all have the same sign. Therefore,
given $j$ and the initial site $s_{0}$ of the cluster, the following
statements are true: if the single-site variation of the action at
$s_{0}$ is negative, then $\Delta S$ will be negative for the whole
cluster, which is therefore sure to be accepted; if that variation is
positive, then $\Delta S$ can only increase as sites are added, so that
the cluster may be dropped as soon as $\Delta S$ exceeds $-\ln(r)$.
Note that, so long as the external source is not zero, it never
happens that $\Delta S=0$. Note also that, if the random number
extracted beforehand happens to be $r=0$, then one cannot calculate and
store $-\ln(r)$, but in this case this is also not necessary, because
the condition $\Delta S\leq-\ln(r)$ will be satisfied regardless of the
value of $\Delta S$, and hence the cluster is sure to be accepted. A
simple and well-structured way to implement these ideas is to imagine
that the code for building the cluster can run in one of two modes: a
``check'' mode, in which $\Delta S$ is calculated along the building of
the cluster, and a ``no-check'' mode, in which the variations of the
action are simply ignored and the cluster is built and flipped just as
would happen in the absence of any external sources. Here is the
algorithm for the optimized code in this case: first, extract the random
number $r$ beforehand; if $r=0$, or if the single-site variation of the
action at the initial site $s_{0}$ is negative, build the whole cluster
in no-check mode and flip it; otherwise, store $-\ln(r)$ and build the
cluster in check mode, incrementing $\Delta S$ as each site is added and
dropping the cluster as soon as $\Delta S>-\ln(r)$; finally, if the
construction is completed with $\Delta S\leq-\ln(r)$, flip the cluster.
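A possible realization of this algorithm in C is sketched below. It is
only a schematic version under stated assumptions, not a definitive
implementation: the lattice-dependent routines (neighbor, bond_active,
the stack and marking routines) are assumed to exist elsewhere in the
code, and the single-site variation of the source term is taken to be
$2\,j\,\varphi(s)$, which corresponds to assuming a source term of the
form $-j\varphi$ in the action.

\begin{verbatim}
#include <math.h>

/* Assumed to be provided elsewhere in the simulation code: */
extern double phi[];                  /* field configuration         */
extern double j;                      /* constant external source    */
extern int    n_neighbors;            /* 2*d for a d-dim. lattice    */
extern int    neighbor(int s, int k); /* k-th neighbor of site s     */
extern int    bond_active(int s, int t); /* Wolff activation test    */
extern void   push(int s), mark(int s);
extern int    pop(void), stack_empty(void), marked(int s);
extern void   flip_cluster(void), clear_cluster(void);
extern double uniform01(void);        /* flat in [0, 1)              */

enum mode { CHECK, NO_CHECK };

/* Grows the cluster from the initial site s0; returns 1 if the
   cluster was flipped, 0 if it was dropped.  The single-site
   variation of the source term is taken to be 2*j*phi[s]. */
int grow_cluster(int s0)
{
    double dS = 0.0, mlr = 0.0;
    enum mode m;

    double r = uniform01();           /* extracted beforehand */
    if (r == 0.0 || 2.0 * j * phi[s0] < 0.0)
        m = NO_CHECK;                 /* sure acceptance      */
    else {
        m = CHECK;
        mlr = -log(r);                /* rejection threshold  */
    }

    push(s0);
    mark(s0);
    while (!stack_empty()) {
        int s = pop();
        if (m == CHECK) {
            dS += 2.0 * j * phi[s];
            if (dS > mlr) {           /* certain rejection:   */
                clear_cluster();      /* drop the cluster     */
                return 0;
            }
        }
        for (int k = 0; k < n_neighbors; k++) {
            int t = neighbor(s, k);
            if (!marked(t) && bond_active(s, t)) {
                push(t);
                mark(t);
            }
        }
    }
    flip_cluster();                   /* flip all marked sites */
    return 1;
}
\end{verbatim}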
In this way one economizes as much computational effort as possible. The
idea can be used in essentially the same way for other forms of constant
external sources such as, for example, one existing only over the world
line of a particle, which is a one-dimensional line of sites within the
$d$-dimensional lattice. It can also easily be adapted to fixed boundary
conditions with a constant value of the field at the external boundary.
In fact, for this idea to work as explained above, the external source
or the value at the boundary do not really have to be constant, so long
as they do not change sign. In these other cases additional indexing
structures may be necessary, in order to single out the sites where the
external source exists, or the interior sites which are next to the
external boundary, connected to it by the boundary links.
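As a simple illustration of such an indexing structure, one may
precompute, before the simulation starts, the list of sites where the
source is non-vanishing, so that the source-related bookkeeping can skip
the rest of the lattice. The sketch below uses illustrative names of our
own and assumes a site-dependent source accessible through a function
j_of.

\begin{verbatim}
#include <stdlib.h>

extern int    n_sites;          /* total number of lattice sites  */
extern double j_of(int s);      /* external source at site s      */

static int *source_sites;       /* sites where the source exists  */
static int  n_source_sites;

/* To be called once, before the main simulation loop. */
void index_source_sites(void)
{
    source_sites = malloc(n_sites * sizeof *source_sites);
    n_source_sites = 0;
    for (int s = 0; s < n_sites; s++)
        if (j_of(s) != 0.0)
            source_sites[n_source_sites++] = s;
}
\end{verbatim}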
It is much more difficult to optimize the case of a general external
source or set of boundary conditions, which can change sign freely. One
can use ideas similar to the ones described above, but it is much more
difficult to obtain a significant amount of optimization. The basic idea
goes as follows. Let us consider a typical intermediate situation during
the construction of the cluster. Assume that one keeps updated, along
the construction, two positive variables $V_{-}$ and $V_{+}$, which
hold, respectively, the maximum value that can still be subtracted from
the current value of $\Delta S$ and the maximum value that can still be
added to the current value of $\Delta S$ during the rest of the
construction of the cluster.
Under these conditions, after the inclusion of a certain site in the
cluster, if one verifies that $\Delta S+V_{+}$ is less than or equal to
$-\ln(r)$, then there is no chance that the action will ever increase
enough to cause a rejection, and therefore one can go to no-check mode
in order to build the rest of the cluster and flip it. This is so
because the sum $\Delta S+V_{+}$ is larger than or equal to the maximum
possible value that $\Delta S$ can assume from this point on. Similarly,
if one verifies that $\Delta S-V_{-}$ is greater than $-\ln(r)$, then
there is no chance that the action will ever decrease enough to avoid a
rejection, and therefore one can quit the current cluster before
actually completing its construction. In this case, the difference
$\Delta S-V_{-}$ is smaller than or equal to the minimum possible value
that $\Delta S$ can assume from this point on.
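These two tests can be written compactly as in the following sketch,
where the names are ours: dS is the variation accumulated so far, mlr
stands for the stored value of $-\ln(r)$, and v_plus and v_minus are the
variables $V_{+}$ and $V_{-}$.

\begin{verbatim}
enum decision { UNDECIDED, SURE_ACCEPT, SURE_REJECT };

/* Early-decision test, to be run after each site is included:
   dS is the variation accumulated so far, mlr the stored value
   of -ln(r), and v_plus / v_minus the positive bounds on how
   much dS can still increase / decrease. */
enum decision early_decision(double dS, double mlr,
                             double v_plus, double v_minus)
{
    if (dS + v_plus <= mlr)   /* even the worst case is accepted */
        return SURE_ACCEPT;   /* go to no-check mode             */
    if (dS - v_minus > mlr)   /* even the best case is rejected  */
        return SURE_REJECT;   /* quit the cluster now            */
    return UNDECIDED;         /* keep building in check mode     */
}
\end{verbatim}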
The initial values of the variables $V_{-}$ and $V_{+}$ can be obtained,
respectively, as the absolute value of the sum of all negative
single-site variations and as the sum of all positive single-site
variations of the action. Given the orientation of a certain cluster,
these sums can be limited to the sites of the lattice where the field
has that orientation, or even to only the sites that can actually
participate in that particular cluster, because they are connected to
the initial site by strings of links. Along the construction, the values
of $V_{-}$ and $V_{+}$ have to be modified by the addition or
subtraction of the single-site variation of the action at each site
which is added to the cluster, in order to keep the variables up to
date. There is, of course, an additional overhead of computational
effort, needed in order to initialize and update these variables. Due to
this, and to the added complexity of the resulting code, sometimes it
may not be worthwhile to implement this kind of optimization.
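For concreteness, here is a sketch of this bookkeeping in C, under the
same illustrative conventions as before; dS_site stands for the
single-site variation of the source term at a given site, and can_join
selects the sites that could actually participate in the current
cluster.

\begin{verbatim}
extern int    n_sites;
extern double dS_site(int s);   /* single-site variation at s     */
extern int    can_join(int s);  /* site may join this cluster?    */

static double v_plus, v_minus;  /* positive bounds on the future  */
                                /* increase / decrease of dS      */

/* Initialization, before the cluster is grown: sum the positive
   variations into v_plus and the negative ones, in absolute
   value, into v_minus. */
void init_bounds(void)
{
    v_plus = v_minus = 0.0;
    for (int s = 0; s < n_sites; s++) {
        if (!can_join(s))
            continue;
        double d = dS_site(s);
        if (d > 0.0)
            v_plus += d;
        else
            v_minus -= d;
    }
}

/* Update when site s is actually added to the cluster: its
   variation becomes part of dS and is no longer a future
   possibility, so it is removed from the corresponding bound. */
void update_bounds(int s)
{
    double d = dS_site(s);
    if (d > 0.0)
        v_plus -= d;
    else
        v_minus += d;
}
\end{verbatim}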