The functional integral is a mathematical object whose complete analytical calculation is usually extremely difficult. There is a single case in which we can calculate the necessary integrals analytically on lattices of arbitrary size and dimension, and in fact take the continuum limit explicitly. This is the case in which the function to be integrated is the exponential of a quadratic form on the fields, that is, the case in which the action is quadratic in the fields. Such an exponential of a quadratic form is called a Gaussian function. Let us start by recalling how to calculate the integral of a Gaussian function in a single dimension, given by

\[ g(x) = e^{-\alpha x^{2}}, \]

where $\alpha$ is some positive real number. We want to calculate the integral

\[ I(\alpha) = \int_{-\infty}^{\infty} dx\, e^{-\alpha x^{2}} \]

as a function of the parameter $\alpha$. Curiously, it is easier to calculate the square of $I$ than $I$ directly! We can write $I^{2}$ as

\[ I^{2} = \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy\, e^{-\alpha \left( x^{2} + y^{2} \right)}. \]

This integral extends over the two-dimensional plane $(x, y)$. Next, we make a change of variables in this plane, from the Cartesian coordinates $(x, y)$ to polar coordinates $(r, \theta)$, where $x = r\cos\theta$ and $y = r\sin\theta$, obtaining for the square of the integral

\[ I^{2} = \int_{0}^{2\pi} d\theta \int_{0}^{\infty} dr\, r\, e^{-\alpha r^{2}}. \]

This integral can be done immediately, due to the factor of $r$ that appears from the integration element. Doing the integration we obtain

\[ I^{2} = 2\pi \left[ -\frac{e^{-\alpha r^{2}}}{2\alpha} \right]_{0}^{\infty} = \frac{\pi}{\alpha}, \]

so that the original integral is

\[ I(\alpha) = \int_{-\infty}^{\infty} dx\, e^{-\alpha x^{2}} = \sqrt{\frac{\pi}{\alpha}}. \]
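Since the closed form $\sqrt{\pi/\alpha}$ is used repeatedly below, it is easy to confirm it numerically. The following sketch compares a midpoint-rule approximation with the exact value; the value of $\alpha$ and the integration cutoff are arbitrary illustrative choices, not quantities from the text.

```python
import math

def gauss_integral(alpha, cutoff=10.0, n=200000):
    # Midpoint rule for the integral of exp(-alpha x^2) over [-cutoff, cutoff];
    # the tails beyond |x| = cutoff are negligible for alpha of order one.
    h = 2 * cutoff / n
    return sum(math.exp(-alpha * (-cutoff + (i + 0.5) * h) ** 2)
               for i in range(n)) * h

alpha = 0.7
print(gauss_integral(alpha))         # numerical approximation
print(math.sqrt(math.pi / alpha))    # closed form sqrt(pi/alpha)
```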
This result can be generalized to integrals of products of polynomials with the Gaussian exponential. First of all, one can see that integrals involving odd powers of $x$ are zero by means of symmetry arguments, because in this case the integrand is an odd function and the domain of integration is symmetrical. If $n$ is a non-negative integer,

\[ \int_{-\infty}^{\infty} dx\, x^{2n+1}\, e^{-\alpha x^{2}} = 0. \]

If we have even powers of $x$, we can derive the integrals successively, starting from our result for $I(\alpha)$. For example, differentiating the original expression for $I(\alpha)$ with respect to $\alpha$, we have

\[ \frac{dI}{d\alpha} = -\int_{-\infty}^{\infty} dx\, x^{2}\, e^{-\alpha x^{2}}, \]

while, differentiating in the same way the explicit result that we obtained, we have

\[ \frac{dI}{d\alpha} = -\frac{1}{2\alpha} \sqrt{\frac{\pi}{\alpha}}. \]

Comparing these two expressions we obtain the result for $n = 1$,

\[ \int_{-\infty}^{\infty} dx\, x^{2}\, e^{-\alpha x^{2}} = \frac{1}{2\alpha} \sqrt{\frac{\pi}{\alpha}}. \]

This procedure can now be iterated in order to obtain the general result for arbitrary $n$,

\[ \int_{-\infty}^{\infty} dx\, x^{2n}\, e^{-\alpha x^{2}} = \frac{(2n-1)!!}{(2\alpha)^{n}} \sqrt{\frac{\pi}{\alpha}}. \]
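The iterated-differentiation formula for the even moments can also be confirmed numerically; this sketch checks the general result $(2n-1)!!\,(2\alpha)^{-n}\sqrt{\pi/\alpha}$ against direct integration, again with arbitrary illustrative values of $\alpha$ and the cutoff.

```python
import math

def gauss_moment(n, alpha, cutoff=10.0, steps=200000):
    # Midpoint rule for the integral of x^(2n) exp(-alpha x^2) on [-cutoff, cutoff].
    h = 2 * cutoff / steps
    total = 0.0
    for i in range(steps):
        x = -cutoff + (i + 0.5) * h
        total += x ** (2 * n) * math.exp(-alpha * x * x)
    return total * h

def closed_form(n, alpha):
    # (2n-1)!! / (2 alpha)^n * sqrt(pi/alpha)
    dfact = 1
    for k in range(1, 2 * n, 2):
        dfact *= k
    return dfact / (2 * alpha) ** n * math.sqrt(math.pi / alpha)

for n in (1, 2, 3):
    print(gauss_moment(n, 1.3), closed_form(n, 1.3))
```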
Another kind of generalization in which we are interested is the one for integrals in larger dimensions. Assuming that we have $N$ real variables $x_{i}$, with $i = 1, \ldots, N$, in this case the form of the argument of the exponential will be that of a quadratic form

\[ Q(x) = \sum_{i,j=1}^{N} x_{i}\, M_{ij}\, x_{j} \]

with coefficients $M_{ij}$, and the integral will be written as

\[ I[M] = \int \prod_{i=1}^{N} dx_{i}\; e^{-Q(x)}. \]

Observe that this integral is a functional of $M$. It is necessary to assume that the quadratic form is positive-definite, that is, that it does not have any zero or negative eigenvalues, because otherwise the integral does not exist: there is then a direction in the space of the variables $x_{i}$ in which the exponential does not decay, causing the integral to diverge. If all the eigenvalues are positive, it follows that the quadratic form can be diagonalized by an orthogonal transformation of the variables $x_{i}$ into another set of coordinates $y_{i}$,

\[ y_{i} = \sum_{j=1}^{N} O_{ij}\, x_{j}, \]

or, in matrix language, $y = O\,x$. This transformation involves the introduction of a Jacobian determinant for the transformation of the integration volume but, since the transformation is linear, the Jacobian is constant, independent of the coordinates, so that it will always cancel out in the ratios of integrals that we are interested in. Hence, what matters is that in all cases of interest it is possible to reduce the integral, up to a normalization factor that it is not necessary to calculate, to the form

\[ I[M] \propto \int \prod_{i=1}^{N} dy_{i}\; \exp\!\left( -\sum_{i=1}^{N} \lambda_{i}\, y_{i}^{2} \right), \]

where $\lambda_{i}$ are the eigenvalues of $M$, and we see here why none of them can be negative or zero, since in that case one or more of the integrals would not exist. This multiple integral may now be written as the product of $N$ one-dimensional integrals, one for each variable, so that the application of our previous result takes us immediately to the answer

\[ I[M] \propto \prod_{i=1}^{N} \sqrt{\frac{\pi}{\lambda_{i}}}. \]

We see here, once more, why we cannot have zero eigenvalues. Since the determinant of a matrix is equal to the product of its eigenvalues, we may write this result in terms of the determinant of the quadratic form $M$, as

\[ I[M] \propto \frac{\pi^{N/2}}{\sqrt{\det(M)}}. \]
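As a sanity check of the determinant formula, here is a sketch for $N = 2$, where the proportionality constant is exactly one. The matrix below is an arbitrary positive-definite example; the sketch compares direct numerical integration with $\pi^{N/2}/\sqrt{\det M}$.

```python
import math

def gauss_2d(M, cutoff=6.0, steps=600):
    # Midpoint rule for the integral of exp(-x^T M x) over [-cutoff, cutoff]^2.
    h = 2 * cutoff / steps
    total = 0.0
    for i in range(steps):
        x = -cutoff + (i + 0.5) * h
        for j in range(steps):
            y = -cutoff + (j + 0.5) * h
            q = M[0][0]*x*x + (M[0][1] + M[1][0])*x*y + M[1][1]*y*y
            total += math.exp(-q)
    return total * h * h

M = [[2.0, 0.5], [0.5, 1.0]]   # arbitrary positive-definite example
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
print(gauss_2d(M))                    # numerical integral
print(math.pi / math.sqrt(det))       # pi^(N/2) / sqrt(det M) with N = 2
```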
We see, therefore, that we can calculate the integral of the product of any finite-order polynomial with the Gaussian exponential, for any dimension of the space over which we are integrating. We will now use these results in the quantum theory of the free field. The first thing to do is to write the action in terms of the Fourier transform $\tilde\varphi(\vec k)$ of the field. We will see that in this way we will succeed in decoupling the degrees of freedom of the field, because in momentum space they consist of normal modes of oscillation that do not interact with each other. We start with the action in its usual form

\[ S_{0}[\varphi] = \sum_{\vec n} \left\{ \frac{1}{2} \sum_{\mu=1}^{d} \left[ \Delta_{\mu}\varphi(\vec n) \right]^{2} + \frac{\alpha_{0}}{2}\, \varphi^{2}(\vec n) \right\}, \]

where $\Delta_{\mu}\varphi(\vec n) = \varphi(\vec n + \hat\mu) - \varphi(\vec n)$ is the finite difference in the direction $\mu$, the sum runs over the $N^{d}$ sites $\vec n$ of the periodic lattice and $\alpha_{0}$ is the dimensionless mass parameter. Writing the field in terms of its Fourier transform, using two different momenta $\vec k$ and $\vec k'$ because all the terms are quadratic in the field, we obtain

\[ S_{0} = \frac{1}{2} \sum_{\vec n} \sum_{\vec k, \vec k'} \left\{ \sum_{\mu} \left[ \Delta_{\mu}\, e^{\imath 2\pi \vec k \cdot \vec n / N} \right] \left[ \Delta_{\mu}\, e^{\imath 2\pi \vec k' \cdot \vec n / N} \right] + \alpha_{0}\, e^{\imath 2\pi \left( \vec k + \vec k' \right) \cdot \vec n / N} \right\} \tilde\varphi(\vec k)\, \tilde\varphi(\vec k'). \]

Since the complex exponentials are eigenvectors of the finite-difference operator, we obtain

\[ S_{0} = \frac{1}{2} \sum_{\vec n} \sum_{\vec k, \vec k'} e^{\imath 2\pi \left( \vec k + \vec k' \right) \cdot \vec n / N} \left[ \sum_{\mu} \left( e^{\imath 2\pi k_{\mu}/N} - 1 \right) \left( e^{\imath 2\pi k'_{\mu}/N} - 1 \right) + \alpha_{0} \right] \tilde\varphi(\vec k)\, \tilde\varphi(\vec k'). \]

As one can see, we may now execute the sum over the positions $\vec n$ using the orthogonality and completeness relations, obtaining

\[ S_{0} = \frac{N^{d}}{2} \sum_{\vec k, \vec k'} \delta_{\vec k', -\vec k} \left[ \sum_{\mu} \left( e^{\imath 2\pi k_{\mu}/N} - 1 \right) \left( e^{\imath 2\pi k'_{\mu}/N} - 1 \right) + \alpha_{0} \right] \tilde\varphi(\vec k)\, \tilde\varphi(\vec k'). \]

Using now the delta function to execute the sum over the momenta $\vec k'$, the expression simplifies considerably and we obtain

\[ S_{0} = \frac{N^{d}}{2} \sum_{\vec k} \left[ \rho^{2}(\vec k) + \alpha_{0} \right] \tilde\varphi(\vec k)\, \tilde\varphi(-\vec k), \qquad \rho^{2}(\vec k) = \sum_{\mu=1}^{d} 4 \sin^{2}\!\left( \frac{\pi k_{\mu}}{N} \right). \]

Finally, we have that $\rho^{2}(-\vec k) = \rho^{2}(\vec k)$ and, since the field is real, that $\tilde\varphi(-\vec k) = \tilde\varphi^{*}(\vec k)$, so that we obtain the final result

\[ S_{0} = \frac{N^{d}}{2} \sum_{\vec k} \left[ \rho^{2}(\vec k) + \alpha_{0} \right] \left| \tilde\varphi(\vec k) \right|^{2}. \]
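The key step above, that the complex exponentials are eigenvectors of the finite-difference operator, with eigenvalue $4\sin^{2}(\pi k/N)$ per direction for minus the lattice Laplacian, can be checked directly. This is a one-dimensional sketch; the lattice size $N = 8$ is an arbitrary illustrative choice.

```python
import cmath
import math

N = 8  # arbitrary small one-dimensional periodic lattice

def minus_laplacian(v):
    # (-Delta v)(n) = 2 v(n) - v(n+1) - v(n-1), periodic boundary conditions
    return [2*v[n] - v[(n+1) % N] - v[(n-1) % N] for n in range(N)]

for k in range(N):
    mode = [cmath.exp(2j * math.pi * k * n / N) for n in range(N)]
    out = minus_laplacian(mode)
    rho2 = 4 * math.sin(math.pi * k / N) ** 2   # expected eigenvalue
    # every component of out should equal rho2 times the mode component
    err = max(abs(out[n] - rho2 * mode[n]) for n in range(N))
    print(k, rho2, err)
```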
In this expression the degrees of freedom are indexed by the coordinates $\vec k$ in momentum space. As one can see, there are no terms that contain products of fields related to two independent momentum vectors, thus characterizing the fact that the normal modes are decoupled from each other. We might say that the action is diagonalized in this system of coordinates of the configuration space, but this would not really be a correct statement. In fact, each field $\tilde\varphi(\vec k)$ is multiplied by its complex conjugate $\tilde\varphi^{*}(\vec k) = \tilde\varphi(-\vec k)$, that is, the momenta are paired in the form $(\vec k, -\vec k)$. We should say instead that the action has been anti-diagonalized by the transformation of coordinates. If we represent the fields by vectors in configuration space as we did before, using this time the basis formed by the Fourier modes (problem 3.3.3), the quadratic form of the action would be represented by a matrix that, rather than containing only diagonal terms, which relate each $\tilde\varphi(\vec k)$ with itself, would contain only anti-diagonal terms, which relate $\tilde\varphi(\vec k)$ with $\tilde\varphi(-\vec k)$. The diagonal and the anti-diagonal cross at the position of the zero mode $\vec k = \vec 0$.
This is not really a problem, because the integral of a multi-dimensional Gaussian is related to the determinant of the operator that appears in the quadratic form, as we saw above. Up to a sign, this determinant may be written either as the product of the diagonal elements or as the product of the anti-diagonal elements of the matrix, as one can easily verify using the Laplace expansion of the determinant. The sign that remains undetermined depends only on the dimension of the matrix and is not important, since it always cancels out in the ratios of two integrals that define the expectation values of the observables of the theory. However, since this is a very basic and important result, in what follows we will make a direct verification of this fact, calculating explicitly an integral of this type. We want to learn here how to deal with a functional integral written in momentum space, for example the following one,

\[ I(\vec k_{1}, \vec k_{2}) = \int \left[ \mathbf{d}\tilde\varphi \right] \tilde\varphi(\vec k_{1})\, \tilde\varphi(\vec k_{2})\, e^{-S_{0}}. \]
Let us recall that there are always $N^{d}$ independent field values, either with the field expressed in terms of $\varphi(\vec n)$ or in terms of $\tilde\varphi(\vec k)$, since there are always exactly $N^{d}$ possible values for either $\vec n$ or $\vec k$. However, the $\tilde\varphi(\vec k)$ are complex, unlike the $\varphi(\vec n)$, which are real, and therefore there are twice as many real parameters in the set of the $\tilde\varphi(\vec k)$, since for each one of them we have

\[ \tilde\varphi(\vec k) = \Re\tilde\varphi(\vec k) + \imath\, \Im\tilde\varphi(\vec k). \]

On the other hand, these parameters are not all independent because, since $\varphi(\vec n)$ is real, there are among them the constraints $\tilde\varphi(-\vec k) = \tilde\varphi^{*}(\vec k)$, that is,

\[ \Re\tilde\varphi(-\vec k) = \Re\tilde\varphi(\vec k), \qquad \Im\tilde\varphi(-\vec k) = -\Im\tilde\varphi(\vec k). \]
While the domain of integration is clear in the space of the $\varphi(\vec n)$, where each $\varphi(\vec n)$ goes from $-\infty$ to $\infty$, the same is not true in the space of the $\tilde\varphi(\vec k)$, because it is necessary to find a path in the complex plane of each $\tilde\varphi(\vec k)$ such that these constraints are satisfied. The integral that we want to calculate may be understood as an integral over an $N^{d}$-dimensional surface embedded in the $2N^{d}$-dimensional space generated by the set of all the $\Re\tilde\varphi(\vec k)$ and $\Im\tilde\varphi(\vec k)$ variables. We may calculate the integral over the surface by integrating over the whole space an expression involving Dirac delta functions that have support over the surface. In this way we will be explicitly implementing the constraints by means of the Dirac delta functions, and we may then extend the integration to all the variables $\Re\tilde\varphi(\vec k)$ and $\Im\tilde\varphi(\vec k)$, each one of them going from $-\infty$ to $\infty$, which makes much clearer the treatment of the limits of integration. A simple example of this kind of operation can be found in problem 3.3.4. Using these ideas and performing a careful counting of the modes in momentum space, in order to build a consistent pairing of those that have their real and imaginary parts related (problem 3.3.5), we may write the integral in the form

\[
I(\vec k_{1}, \vec k_{2}) = \int \prod_{\vec k}^{(r)} \mathrm{d}\Re\tilde\varphi(\vec k) \prod_{\vec k}^{(c)} \left\{ \mathrm{d}\Re\tilde\varphi(\vec k)\, \mathrm{d}\Im\tilde\varphi(\vec k)\, \mathrm{d}\Re\tilde\varphi(P\vec k)\, \mathrm{d}\Im\tilde\varphi(P\vec k)\; \delta\!\left[ \Re\tilde\varphi(P\vec k) - \Re\tilde\varphi(\vec k) \right] \delta\!\left[ \Im\tilde\varphi(P\vec k) + \Im\tilde\varphi(\vec k) \right] \right\} \tilde\varphi(\vec k_{1})\, \tilde\varphi(\vec k_{2})\, e^{-S_{0}},
\]
where $\prod^{(r)}$ is a product that runs over the real modes, $\prod^{(c)}$ is a product that runs over one half of the complex modes existing in momentum space, that is, over pairs of complex modes which are paired up by the pairing operator $P$, and this pairing operator is such that $P\vec k = -\vec k$ unless one or more of the components of $-\vec k$ are outside their standard range of variation, in which case one must add $N$ to them in order to bring them back into the correct range. When this happens $P\vec k$ is not equal to $-\vec k$, since only some of its components change sign. For odd $N$ the only mode for which $P\vec k = \vec k$ is the zero mode $\vec k = \vec 0$, but for even $N$ there are $2^{d}$ such modes. In addition to this, $S_{0}$ may be written in a simple form in terms of the $\Re\tilde\varphi$'s and $\Im\tilde\varphi$'s,

\[ S_{0} = \frac{N^{d}}{2} \sum_{\vec k} \left[ \rho^{2}(\vec k) + \alpha_{0} \right] \left\{ \left[ \Re\tilde\varphi(\vec k) \right]^{2} + \left[ \Im\tilde\varphi(\vec k) \right]^{2} \right\}, \]

where, naturally, $\Im\tilde\varphi(\vec k) = 0$ for the real modes. In this way it becomes much simpler to deal with these integrals, because now we may treat all the variables as independent.
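The counting of self-paired modes quoted above, one for odd $N$ and $2^{d}$ for even $N$, can be checked by implementing the pairing operator directly. This sketch assumes momenta enumerated as integers $0, \ldots, N-1$ in each direction, so that $P$ acts as negation modulo $N$; the specific lattice sizes are illustrative choices.

```python
import itertools

def pair(k, N):
    # Pairing operator: componentwise negation, with N added to any
    # component of -k that falls outside the range 0..N-1.
    return tuple((-km) % N for km in k)

def self_paired(N, d):
    # Modes with P(k) == k, i.e. modes whose Fourier coefficient is real.
    return [k for k in itertools.product(range(N), repeat=d)
            if pair(k, N) == k]

print(len(self_paired(5, 2)))   # odd N: only the zero mode -> 1
print(len(self_paired(4, 2)))   # even N: 2^d such modes -> 4
```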
In order to verify in which cases our integral is equal to zero or not, let us start with the case in which we have $\vec k_{2} \neq \pm\vec k_{1}$, since in this case we can factor out the terms involving the four modes $\vec k_{1}$, $P\vec k_{1}$, $\vec k_{2}$ and $P\vec k_{2}$. Since we now have two independent variables per mode, we end up writing eight integrals in explicit form,

\[
I(\vec k_{1}, \vec k_{2}) = K \int \mathrm{d}\Re\tilde\varphi(\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(\vec k_{1})\, \mathrm{d}\Re\tilde\varphi(P\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(P\vec k_{1})\, \mathrm{d}\Re\tilde\varphi(\vec k_{2})\, \mathrm{d}\Im\tilde\varphi(\vec k_{2})\, \mathrm{d}\Re\tilde\varphi(P\vec k_{2})\, \mathrm{d}\Im\tilde\varphi(P\vec k_{2})\; \delta\!\left[ \Re\tilde\varphi(P\vec k_{1}) - \Re\tilde\varphi(\vec k_{1}) \right] \delta\!\left[ \Im\tilde\varphi(P\vec k_{1}) + \Im\tilde\varphi(\vec k_{1}) \right] \delta\!\left[ \Re\tilde\varphi(P\vec k_{2}) - \Re\tilde\varphi(\vec k_{2}) \right] \delta\!\left[ \Im\tilde\varphi(P\vec k_{2}) + \Im\tilde\varphi(\vec k_{2}) \right] \tilde\varphi(\vec k_{1})\, \tilde\varphi(\vec k_{2})\, e^{-S_{0}},
\]

where $K$ contains the integrals over all the other modes. We are assuming here that the modes indexed by $\vec k_{1}$ and $\vec k_{2}$ are complex and not real. We invite the reader to complete the deduction, taking into account explicitly the real modes, and thus verifying that the results are correct in all cases. We may now use the four delta functions to do the integrals over the four variables $\Re\tilde\varphi(P\vec k_{1})$, $\Im\tilde\varphi(P\vec k_{1})$, $\Re\tilde\varphi(P\vec k_{2})$ and $\Im\tilde\varphi(P\vec k_{2})$, which appear only in the corresponding exponentials, obtaining

\[
I(\vec k_{1}, \vec k_{2}) = K \int \mathrm{d}\Re\tilde\varphi(\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(\vec k_{1})\, \mathrm{d}\Re\tilde\varphi(\vec k_{2})\, \mathrm{d}\Im\tilde\varphi(\vec k_{2}) \left[ \Re\tilde\varphi(\vec k_{1}) + \imath\, \Im\tilde\varphi(\vec k_{1}) \right] \left[ \Re\tilde\varphi(\vec k_{2}) + \imath\, \Im\tilde\varphi(\vec k_{2}) \right] e^{-S_{0}}.
\]

We observe now that the remaining integrals may be decomposed in terms of factors that are integrals of odd functions over symmetrical domains of integration, being therefore zero. It follows that, for $\vec k_{2} \neq \vec k_{1}$ and $\vec k_{2} \neq P\vec k_{1}$, we have $I(\vec k_{1}, \vec k_{2}) = 0$.
Let us examine now the case in which $\vec k_{2} = \vec k_{1}$. In this case, collecting the appropriate factors in a fashion analogous to the previous case, we have

\[
I(\vec k_{1}, \vec k_{1}) = K \int \mathrm{d}\Re\tilde\varphi(\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(\vec k_{1})\, \mathrm{d}\Re\tilde\varphi(P\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(P\vec k_{1})\; \delta\!\left[ \Re\tilde\varphi(P\vec k_{1}) - \Re\tilde\varphi(\vec k_{1}) \right] \delta\!\left[ \Im\tilde\varphi(P\vec k_{1}) + \Im\tilde\varphi(\vec k_{1}) \right] \tilde\varphi^{2}(\vec k_{1})\, e^{-S_{0}}.
\]

Once more, we may use the delta functions to obtain

\[
I(\vec k_{1}, \vec k_{1}) = K \int \mathrm{d}\Re\tilde\varphi(\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(\vec k_{1}) \left\{ \left[ \Re\tilde\varphi(\vec k_{1}) \right]^{2} - \left[ \Im\tilde\varphi(\vec k_{1}) \right]^{2} + 2\imath\, \Re\tilde\varphi(\vec k_{1})\, \Im\tilde\varphi(\vec k_{1}) \right\} e^{-S_{0}}.
\]

For the third term of the bracket the same parity argument used in the previous case is valid, and therefore it is zero. As for the first two terms, we may change the names of the integration variables in one of them, thus verifying that they cancel out. Therefore, in the case $\vec k_{2} = \vec k_{1}$ we also have $I(\vec k_{1}, \vec k_{1}) = 0$.
There remains to be examined the case $\vec k_{2} = P\vec k_{1}$. Once more we collect the appropriate factors, obtaining

\[
I(\vec k_{1}, P\vec k_{1}) = K \int \mathrm{d}\Re\tilde\varphi(\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(\vec k_{1})\, \mathrm{d}\Re\tilde\varphi(P\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(P\vec k_{1})\; \delta\!\left[ \Re\tilde\varphi(P\vec k_{1}) - \Re\tilde\varphi(\vec k_{1}) \right] \delta\!\left[ \Im\tilde\varphi(P\vec k_{1}) + \Im\tilde\varphi(\vec k_{1}) \right] \tilde\varphi(\vec k_{1})\, \tilde\varphi(P\vec k_{1})\, e^{-S_{0}}.
\]

Using the delta functions to do the integrals over $\Re\tilde\varphi(P\vec k_{1})$ and $\Im\tilde\varphi(P\vec k_{1})$ we obtain

\[
I(\vec k_{1}, P\vec k_{1}) = K \int \mathrm{d}\Re\tilde\varphi(\vec k_{1})\, \mathrm{d}\Im\tilde\varphi(\vec k_{1}) \left\{ \left[ \Re\tilde\varphi(\vec k_{1}) \right]^{2} + \left[ \Im\tilde\varphi(\vec k_{1}) \right]^{2} \right\} e^{-S_{0}},
\]

where the facts that $\Re\tilde\varphi(P\vec k_{1}) = \Re\tilde\varphi(\vec k_{1})$ and that $\Im\tilde\varphi(P\vec k_{1}) = -\Im\tilde\varphi(\vec k_{1})$ imply the cancelling of the two imaginary terms in the bracket. The two other terms no longer cancel out as happened in the previous case, so that it becomes now clear that in this case the integral $I(\vec k_{1}, P\vec k_{1})$ is not zero. Denoting $u = \Re\tilde\varphi(\vec k_{1})$, $v = \Im\tilde\varphi(\vec k_{1})$ and $a = N^{d}\left[ \rho^{2}(\vec k_{1}) + \alpha_{0} \right]$, we have then the result

\[
I(\vec k_{1}, P\vec k_{1}) = K \int \mathrm{d}u\, \mathrm{d}v \left( u^{2} + v^{2} \right) e^{-a \left( u^{2} + v^{2} \right)}.
\]

In analogous fashion, we also have the integral, with the same $K$,

\[
I_{0} = \int \left[ \mathbf{d}\tilde\varphi \right] e^{-S_{0}} = K \int \mathrm{d}u\, \mathrm{d}v\; e^{-a \left( u^{2} + v^{2} \right)},
\]

so that we may now use these results to calculate the ratio of integrals that appears in the expectation value that defines the propagator in momentum space,

\[
\left\langle \tilde\varphi(\vec k_{1})\, \tilde\varphi(P\vec k_{1}) \right\rangle = \frac{I(\vec k_{1}, P\vec k_{1})}{I_{0}} = \frac{\int \mathrm{d}u\, \mathrm{d}v\; u^{2}\, e^{-a \left( u^{2} + v^{2} \right)}}{\int \mathrm{d}u\, \mathrm{d}v\; e^{-a \left( u^{2} + v^{2} \right)}} + \frac{\int \mathrm{d}u\, \mathrm{d}v\; v^{2}\, e^{-a \left( u^{2} + v^{2} \right)}}{\int \mathrm{d}u\, \mathrm{d}v\; e^{-a \left( u^{2} + v^{2} \right)}}.
\]

These two terms containing ratios of integrals are identical, as one can check with a simple change of the integration variables in one of them. We have then, using the trick seen before of differentiating with respect to the parameter in order to relate the integrals in the numerators with those in the denominators,

\[
\left\langle \tilde\varphi(\vec k_{1})\, \tilde\varphi(P\vec k_{1}) \right\rangle = 2 \times \frac{1}{2a} = \frac{1}{N^{d} \left[ \rho^{2}(\vec k_{1}) + \alpha_{0} \right]}.
\]

Using the fact that $\tilde\varphi(P\vec k_{1}) = \tilde\varphi^{*}(\vec k_{1})$ in order to write the left-hand side as a square modulus, we have therefore the final result for the propagator of the free theory in momentum space,

\[
\left\langle \left| \tilde\varphi(\vec k) \right|^{2} \right\rangle = \frac{1}{N^{d} \left[ \rho^{2}(\vec k) + \alpha_{0} \right]}.
\]
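The key ratio in this computation, that the mean of $u^{2} + v^{2}$ under the weight $e^{-a(u^{2}+v^{2})}$ equals $1/a$, can be confirmed numerically. The value of $a$ below is an arbitrary stand-in for the coefficient multiplying the square modulus of the mode in the action.

```python
import math

def mean_square_radius(a, cutoff=5.0, steps=400):
    # 2D midpoint rule for <u^2 + v^2> under the weight exp(-a (u^2 + v^2)).
    h = 2 * cutoff / steps
    num = den = 0.0
    for i in range(steps):
        u = -cutoff + (i + 0.5) * h
        for j in range(steps):
            v = -cutoff + (j + 0.5) * h
            w = math.exp(-a * (u*u + v*v))
            num += (u*u + v*v) * w
            den += w
    return num / den

a = 2.5   # arbitrary positive coefficient
print(mean_square_radius(a))   # should approach 1/a
print(1.0 / a)
```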
Note that this result for the propagator in momentum space is exactly equal to the Green function of the classical theory written in momentum space. The same is true for other boundary conditions, but this relation between the classical and quantum theories is a specific property of the free theory, not a general property of quantum field theory. It is interesting to mention that we may systematize this kind of calculation by writing only the result for the integral

\[
I_{0} = \int \left[ \mathbf{d}\tilde\varphi \right] e^{-S_{0}} \propto \prod_{\vec k} \frac{1}{\sqrt{\rho^{2}(\vec k) + \alpha_{0}}},
\]

because all the other relevant integrals, with even powers of the fields, can be obtained from this one by means of differentiation with respect to the quantity $\rho^{2}(\vec k) + \alpha_{0}$.
The fact that expectation values of the type $\left\langle \tilde\varphi(\vec k_{1})\, \tilde\varphi(\vec k_{2}) \right\rangle$ are zero when $\vec k_{2} \neq P\vec k_{1}$ is related directly to the conservation of momentum during the propagation of field waves and, indirectly, also during the propagation of particles. It means that if a wave or particle enters (we adopt the convention that “enters” means sign “$+$” for the momentum) into a propagation process, which is a kind of interaction of the object with itself, then it must exit (in this convention “exit” means sign “$-$” for the momentum) with the same vector $\vec k$, that is, it propagates with a constant momentum, in a given mode of the lattice in momentum space. This is, of course, a specific characteristic of periodic boundary conditions, for which we have discrete translation invariance.
For fixed boundary conditions nothing essential changes regarding the calculation of the Gaussian integrals. In that case the eigenmodes of the Laplacian are associated with stationary waves within the box, not with travelling plane waves. Due to this, the components in momentum space are all real rather than complex, which implies that the transformation to momentum space really diagonalizes the quadratic form in the action, instead of anti-diagonalizing it as happened here. This actually makes the calculation of the integrals more straightforward than in the periodic case, because there is no additional complication of having complex components as integration variables. In this case a modified version of equation (3.3.4) holds (problem 3.3.6), starting from which we can calculate other integrals using the same procedures of differentiation with respect to a parameter, in this case the coefficient multiplying the square of each real mode, as we already discussed above for the periodic case.
In order to finish the development of this section it is still necessary that we relate these integrals over the Fourier components of the field to the original integrals over the fields in position space, by means of which we defined the theory. We have already seen how to transform the action from one field coordinate system to the other, but the same must be done with the integration element in configuration space. As we saw before in section 2.9, with our usual normalization the determinant of the transformation matrix of the finite Fourier transform is a constant, independent of the fields, and the same is true for the determinant of the inverse transformation. The first determinant is simply the Jacobian of the transformation from the basis $\varphi(\vec n)$ to the basis $\tilde\varphi(\vec k)$ in the functional integral. Since it is independent of the fields and therefore cancels out in the ratio of integrals that defines the measure for the functional integration, we may write a generic functional integral, such as

\[
\left\langle \mathcal{O} \right\rangle = \frac{\int \left[ \mathbf{d}\varphi \right] \mathcal{O}[\varphi]\, e^{-S[\varphi]}}{\int \left[ \mathbf{d}\varphi \right] e^{-S[\varphi]}},
\]

in terms of the basis of Fourier components, as

\[
\left\langle \mathcal{O} \right\rangle = \frac{\int \left[ \mathbf{d}\tilde\varphi \right] \mathcal{O}[\tilde\varphi]\, e^{-S[\tilde\varphi]}}{\int \left[ \mathbf{d}\tilde\varphi \right] e^{-S[\tilde\varphi]}}.
\]

Of course, a similar result is valid for the expression of $\left\langle \mathcal{O} \right\rangle$ in terms of integrations involving the dimensionful field $\phi$.
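Since the argument only requires that the Jacobian of the finite Fourier transform be field-independent, one can also check numerically that the normalized transform matrix is unitary, so that its determinant has modulus one. The normalization $1/\sqrt{N}$ and the lattice size below are illustrative choices and need not match the book's convention of section 2.9.

```python
import cmath
import math

N = 4  # arbitrary small one-dimensional example lattice

# Normalized finite Fourier transform matrix F_{kn} = exp(2 pi i k n / N) / sqrt(N).
F = [[cmath.exp(2j * math.pi * k * n / N) / math.sqrt(N) for n in range(N)]
     for k in range(N)]

def det(M):
    # Determinant of a complex matrix by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    n = len(M)
    d = 1 + 0j
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) == 0:
            return 0j
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d  # each row swap flips the sign of the determinant
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for cc in range(c, n):
                M[r][cc] -= f * M[c][cc]
    return d

print(abs(det(F)))   # modulus 1: the Jacobian is a field-independent constant
```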
and calculate the corresponding matrix. For simplicity, use lattices with odd $N$, enumerating the corresponding momenta from $-(N-1)/2$ to $(N-1)/2$, in order to verify that the matrix is anti-diagonal, and write explicitly the elements of the anti-diagonal. Using this fact and the result in equation (3.3.1), obtain the result in equation (3.3.4) up to a multiplicative constant.
where the Dirac delta function appears. Hint: make, at each point along the curve, a transformation of variables from $(x, y)$ to $(x_{\parallel}, x_{\perp})$, where $x_{\parallel}$ varies along the tangent to the curve and $x_{\perp}$ varies along the line perpendicular to it; remember that $x_{\perp} = 0$ along the curve; for simplicity, assume that the curve is the graph of a function $y = f(x)$.

where the notation was defined in equation (2.8.4).