Gaussian Integration

The functional integral is a mathematical object whose complete analytical calculation is usually extremely difficult. There is a single case in which we can calculate the necessary integrals analytically on lattices of arbitrary size and dimension, and in fact take the continuum limit explicitly. This is the case in which the function to be integrated is the exponential of a quadratic form in the fields, that is, the case in which the action is quadratic in the fields. Such an exponential of a quadratic form is called a Gaussian function. Let us start by recalling how to calculate the integral of a Gaussian function in one dimension, given by


\begin{displaymath}
g(x)=e^{-\alpha x^{2}},
\end{displaymath}

where $\alpha$ is some positive real number. We want to calculate the integral


\begin{displaymath}
I_{0}(\alpha)=\int_{-\infty}^{\infty}{\rm d}x\;g(x),
\end{displaymath}

as a function of the parameter $\alpha$. Curiously, it is easier to calculate the square of $I_{0}$ than $I_{0}$ directly! We can write $I_{0}^{2}$ as


\begin{displaymath}
I_{0}^{2}(\alpha)=\left[\int_{-\infty}^{\infty}{\rm d}x\;g(x)\right]
\left[\int_{-\infty}^{\infty}{\rm d}y\;g(y)\right].
\end{displaymath}

This product of integrals extends over the two-dimensional plane $\mathbb{R}^{2}$. Next, we make a change of variables in this plane, from the Cartesian coordinates $(x,y)$ to polar coordinates $(r,\theta)$, where


\begin{displaymath}
x=r\cos(\theta),\;y=r\sin(\theta),\;g(x)g(y)=e^{-\alpha r^{2}},
\end{displaymath}

obtaining for the square of the integral


\begin{displaymath}
I_{0}^{2}(\alpha)=\int_{0}^{\infty}{\rm d}r\int_{0}^{2\pi}{\rm d}\theta
\;r\;e^{-\alpha r^{2}}.
\end{displaymath}

This integral can be done immediately, due to the factor of $r$ that appears from the integration element. Doing the integration we obtain

\begin{eqnarray*}
I_{0}^{2}(\alpha) & = & 2\pi\int_{0}^{\infty}{\rm d}r\;r\;e^{-\alpha r^{2}} \\
& = & 2\pi\left[-\frac{1}{2\alpha}\,e^{-\alpha
r^{2}}\right]_{0}^{\infty} \\ & = & \pi\frac{1}{\alpha},
\end{eqnarray*}


so that the original integral is


\begin{displaymath}
I_{0}(\alpha)=\sqrt{\frac{\pi}{\alpha}}.
\end{displaymath}
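
Since all the results of this section are closed-form expressions, they lend themselves to quick numerical checks. As a sketch (the truncation of the domain and the grid resolution are arbitrary choices), one can verify $I_{0}(\alpha)=\sqrt{\pi/\alpha}$ with a simple trapezoid rule:

```python
import numpy as np

# Trapezoid-rule check of I0(alpha) = sqrt(pi/alpha); the domain is
# truncated at |x| = 20, where the integrand is utterly negligible.
def gaussian_integral(alpha, cutoff=20.0, npts=200001):
    x = np.linspace(-cutoff, cutoff, npts)
    return np.trapz(np.exp(-alpha * x**2), x)

for alpha in (0.5, 1.0, 3.0):
    assert np.isclose(gaussian_integral(alpha), np.sqrt(np.pi / alpha), rtol=1e-9)
```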

This result can be generalized to integrals of products of polynomials with the Gaussian exponential. First of all, one can see that integrals involving odd powers of $x$ are zero by means of symmetry arguments, because in this case the integrand is an odd function and the domain of integration is symmetrical. If $i$ is a non-negative integer,


\begin{displaymath}
I_{2i+1}(\alpha)=\int_{-\infty}^{\infty}{\rm d}x\;x^{2i+1}g(x)\equiv 0.
\end{displaymath}

If we have even powers of $x$, we can derive the integrals $I_{2i}$ successively starting from our result for $I_{0}$. For example we have, differentiating the original expression for $I_{0}$ with respect to $\alpha$,


\begin{displaymath}
\partial_{\alpha}I_{0}(\alpha) =\int_{-\infty}^{\infty}{\rm
d}x\;\partial_{\alpha}\,e^{-\alpha x^{2}}
=-\int_{-\infty}^{\infty}{\rm
d}x\;x^{2}\;e^{-\alpha x^{2}} =-I_{2}(\alpha),
\end{displaymath}

while, differentiating in the same way the explicit result that we obtained, we have


\begin{displaymath}
\partial_{\alpha}I_{0}(\alpha)
=\sqrt{\pi}\left(-\frac{1}{2}\right)\sqrt{\frac{1}{\alpha}}^{3}.
\end{displaymath}

Comparing these two expressions we obtain the result for $I_{2}$,


\begin{displaymath}
I_{2}(\alpha)=\frac{\sqrt{\pi}}{2}\sqrt{\frac{1}{\alpha}}^{3}.
\end{displaymath}

This procedure can now be iterated in order to obtain the general result for $I_{2i}$,


\begin{displaymath}
I_{2i}=\sqrt{\pi}\frac{(2i-1)!!}{2^{i}}\sqrt{\frac{1}{\alpha}}^{(2i+1)}.
\end{displaymath}
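
This general formula can also be checked directly against numerical quadrature; the sketch below does so for the first few values of $i$, with an arbitrary choice of $\alpha$ and of the integration grid:

```python
import numpy as np

# Compare I_{2i} = sqrt(pi) (2i-1)!! / 2^i * alpha^{-(2i+1)/2}
# with direct numerical quadrature of x^{2i} exp(-alpha x^2).
def I_even(i, alpha, cutoff=20.0, npts=200001):
    x = np.linspace(-cutoff, cutoff, npts)
    return np.trapz(x**(2 * i) * np.exp(-alpha * x**2), x)

def double_factorial(m):
    # (-1)!! = 1 by convention, so i = 0 reproduces I_0
    return 1 if m <= 0 else m * double_factorial(m - 2)

alpha = 1.3
for i in range(4):
    closed = np.sqrt(np.pi) * double_factorial(2*i - 1) / 2**i \
             * alpha**(-(2*i + 1) / 2)
    assert np.isclose(I_even(i, alpha), closed, rtol=1e-9)
```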

Another kind of generalization in which we are interested is the one to integrals in higher dimensions. Assuming that we have $n$ real variables $x_{i}$ with $i=1,\ldots,n$, the argument of the exponential will now be a quadratic form $\mathbb{Q}$ with coefficients $Q_{ij}$, and the integral will be written as


\begin{displaymath}
I[\mathbb{Q}]=\int_{-\infty}^{\infty}{\rm
d}x_{1}\ldots\int_{-\infty}^{\infty}{\rm d}x_{n}
\;e^{-\sum_{i}\sum_{j}x_{i}Q_{ij}x_{j}}.
\end{displaymath}

Observe that this integral is a functional of $\mathbb{Q}$. It is necessary to assume that the quadratic form is positive definite, that is, that it does not have any zero or negative eigenvalues, because otherwise the integral does not exist: there is then a direction in the space of the variables $x_{i}$ along which the exponential does not decay, causing the integral to diverge. If all the eigenvalues are positive, the quadratic form can be diagonalized by an orthogonal transformation of the variables $x_{i}$ into another set of coordinates $y_{i}=T_{ij}x_{j}$ or, in matrix language, $\vec{y}=\mathbb{T}\vec{x}$. This transformation involves the introduction of a Jacobian determinant for the transformation of the integration volume but, since the transformation is linear, the Jacobian is constant, independent of the coordinates, so that it will always cancel out in the ratios of integrals that we are interested in. Hence, in all cases of interest it is possible to reduce the integral, up to a normalization factor that need not be calculated, to the form


\begin{displaymath}
I[\mathbb{Q}]=\frac{1}{\det(\mathbb{T})}\int_{-\infty}^{\infty}{\rm d}y_{1}
\ldots\int_{-\infty}^{\infty}{\rm d}y_{n}\;e^{-\sum_{i}q_{i}y^{2}_{i}},
\end{displaymath}

where $q_{i}$ are the eigenvalues of $\mathbb{Q}$ and we see here why none of them can be negative or zero, since in that case one or more integrals would not exist. This multiple integral may now be written as the product of $n$ one-dimensional integrals, one for each variable, so that the application of our previous result takes us immediately to the answer


\begin{displaymath}
I[\mathbb{Q}]=\frac{1}{\det(\mathbb{T})}\prod_{i=1}^{n}\sqrt{\frac{\pi}{q_{i}}}.
\end{displaymath}

We see here, once more, why we cannot have zero eigenvalues. Since the determinant of a matrix is equal to the product of its eigenvalues, we may write this result in terms of the determinant of the quadratic form $\mathbb{Q}$, as

  $\displaystyle
I[\mathbb{Q}]=\frac{\pi^{n/2}}{\det(\mathbb{T})}\sqrt{\frac{1}{\det(\mathbb{Q})}}.
$ (3.3.1)
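
For a concrete sanity check of equation (3.3.1), take $n=2$ and a symmetric positive-definite $\mathbb{Q}$; the diagonalizing transformation is then orthogonal, so $|\det(\mathbb{T})|=1$ and the integral should equal $\pi/\sqrt{\det(\mathbb{Q})}$. A sketch of this check, in which the matrix and the quadrature grid are arbitrary choices:

```python
import numpy as np

# Two-dimensional quadrature of exp(-x^T Q x) for a symmetric
# positive-definite Q, compared with pi^{n/2}/sqrt(det Q) for n = 2.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
x = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
quad_form = Q[0, 0]*X**2 + 2*Q[0, 1]*X*Y + Q[1, 1]*Y**2
numeric = np.trapz(np.trapz(np.exp(-quad_form), x, axis=1), x)
closed = np.pi / np.sqrt(np.linalg.det(Q))
assert np.isclose(numeric, closed, rtol=1e-6)
```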

We see, therefore, that we can calculate the integral of the product of any finite-order polynomial with the Gaussian exponential, for any dimension of the space over which we are integrating. We will now use these results in the quantum theory of the free field. The first thing to do is to write the action $S_{0}$ in terms of the Fourier transform of the field. We will see that in this way we will succeed in decoupling the degrees of freedom of the field, because in momentum space they consist of normal modes of oscillation that do not interact with each other. We start with the action in its usual form


\begin{displaymath}
S_{0}[\varphi]=\frac{1}{2}\sum_{\vec{n}}\left\{
\sum_{\mu}[\Delta_{\mu}\varphi(\vec{n})]^{2}
+\alpha_{0}\varphi^{2}(\vec{n})\right\}.
\end{displaymath}

Writing the field in terms of its Fourier transform, using two different momenta $\vec{k}$ and $\vec{k}'$, because all the terms are quadratic in the field, we obtain

\begin{eqnarray*}
S_{0}[\widetilde\varphi ] & = & \frac{1}{2}\sum_{\vec{n}} \left[
\sum_{\mu}\left[\Delta_{\mu}\sum_{\vec{k}}\widetilde\varphi (\vec{k})
e^{\imath\frac{2\pi}{N}\vec{k}\cdot\vec{n}}\right]
\left[\Delta_{\mu}\sum_{\vec{k}'}\widetilde\varphi (\vec{k}')
e^{\imath\frac{2\pi}{N}\vec{k}'\cdot\vec{n}}\right]\right. \\
& & \left.+\,\alpha_{0}\sum_{\vec{k}}\sum_{\vec{k}'}
\widetilde\varphi (\vec{k})\widetilde\varphi (\vec{k}')
\,e^{\imath\frac{2\pi}{N}\vec{k}\cdot\vec{n}}
e^{\imath\frac{2\pi}{N}\vec{k}'\cdot\vec{n}} \right].
\end{eqnarray*}


Since the complex exponentials are eigenvectors of the finite-difference operator, we obtain

\begin{eqnarray*}
S_{0}[\widetilde\varphi ] & = & \frac{1}{2}\sum_{\vec{k}}\sum_{\vec{k}'}
\widetilde\varphi (\vec{k})\widetilde\varphi (\vec{k}')
\sum_{\vec{n}}e^{\imath\frac{2\pi}{N}(\vec{k}+\vec{k}')\cdot\vec{n}} \\
& & \times\left[-\sum_{\mu}\rho_{\mu}(\vec{k})
e^{\imath\frac{\pi}{N}k_{\mu}}\,\rho_{\mu}(\vec{k}')
e^{\imath\frac{\pi}{N}k'_{\mu}}+\alpha_{0}\right].
\end{eqnarray*}


As one can see, we may now execute the sum over the positions $\vec{n}$ using the orthogonality and completeness relations, obtaining


\begin{displaymath}
S_{0}[\widetilde\varphi ]=\frac{1}{2}\sum_{\vec{k}}\sum_{\vec{k}'}
N^{d}\,\delta(\vec{k}+\vec{k}')\,\widetilde\varphi (\vec{k})\widetilde\varphi (\vec{k}')
\left[-\sum_{\mu}\rho_{\mu}(\vec{k})e^{\imath\frac{\pi}{N}k_{\mu}}
\,\rho_{\mu}(\vec{k}')e^{\imath\frac{\pi}{N}k'_{\mu}}+\alpha_{0}\right].
\end{displaymath}

Using now the delta function to execute the sum over the momenta $\vec{k}'$, the expression simplifies considerably and we obtain


\begin{displaymath}
S_{0}[\widetilde\varphi ]=\frac{N^{d}}{2}\sum_{\vec{k}}
\left[-\sum_{\mu}\rho_{\mu}(\vec{k})\rho_{\mu}(-\vec{k})
+\alpha_{0}\right]\widetilde\varphi (\vec{k})\widetilde\varphi (-\vec{k}).
\end{displaymath}

Finally, we have that $\rho_{\mu}(-\vec{k})=-\rho_{\mu}(\vec{k})$ and, since the field is real, that $\widetilde\varphi (-\vec{k})=\widetilde\varphi ^{*}(\vec{k})$, so that we obtain the final result

  $\displaystyle
S_{0}[\widetilde\varphi ]=\frac{N^{d}}{2}\sum_{\vec{k}}
\left[\rho^{2}(\vec{k})+\alpha_{0}\right]\vert\widetilde\varphi (\vec{k})\vert^{2}.
$ (3.3.2)
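
Equation (3.3.2) can be checked numerically on a small one-dimensional lattice. The sketch below assumes the forward finite difference $\Delta\varphi(n)=\varphi(n+1)-\varphi(n)$ with periodic boundaries and $\rho^{2}(k)=4\sin^{2}(\pi k/N)$, conventions consistent with the derivation above; the lattice size, the value of $\alpha_{0}$ and the random field configuration are arbitrary:

```python
import numpy as np

# Check that the position-space action equals its momentum-space form
# (N/2) sum_k [rho^2(k) + alpha0] |phi~(k)|^2 in d = 1, with the
# convention phi~(k) = (1/N) sum_n phi(n) e^{-2 pi i k n / N}.
rng = np.random.default_rng(0)
N, alpha0 = 16, 0.5
phi = rng.normal(size=N)

# S0 = (1/2) sum_n { [phi(n+1) - phi(n)]^2 + alpha0 phi(n)^2 }
S_pos = 0.5 * np.sum((np.roll(phi, -1) - phi)**2 + alpha0 * phi**2)

phit = np.fft.fft(phi) / N           # Fourier components phi~(k)
rho2 = 4 * np.sin(np.pi * np.arange(N) / N)**2
S_mom = 0.5 * N * np.sum((rho2 + alpha0) * np.abs(phit)**2)

assert np.isclose(S_pos, S_mom)
```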

In this expression the degrees of freedom are indexed by the coordinates $\vec{k}$ in momentum space. As one can see, there are no terms that contain products of fields related to two independent vectors, thus characterizing the fact that the normal modes are decoupled from each other. We might say that the action is diagonalized in this system of coordinates of the configuration space, but this would not really be a correct statement. In fact each field $\widetilde\varphi (\vec{k})$ is multiplied by its complex conjugate $\widetilde\varphi ^{*}(\vec{k})=\widetilde\varphi (-\vec{k})$, that is, the momenta are paired in the form $(\vec{k},-\vec{k})$. We should say instead that the action has been anti-diagonalized by the transformation of coordinates. If we represent the fields by vectors in configuration space as we did before, using this time the basis formed by the Fourier modes (problem 3.3.3), the quadratic form of the action would be represented by a matrix that, rather than containing only diagonal terms, which relate each $\vec{k}$ with itself, would contain only anti-diagonal terms, which relate $\vec{k}$ with $-\vec{k}$. The diagonal and the anti-diagonal cross at the position of the zero mode $\vec{k}=\vec{0}$.
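
Since the quadratic form is represented by an anti-diagonal matrix, it is useful to recall that the determinant of such a matrix is, up to a sign that depends only on the dimension, the product of its anti-diagonal elements. A quick numerical confirmation of this fact, with random entries and arbitrary sizes:

```python
import numpy as np

# det of an anti-diagonal matrix = sign * product of anti-diagonal entries,
# where the sign is the parity of the order-reversing permutation.
rng = np.random.default_rng(1)
for n in range(2, 8):
    a = rng.normal(size=n)
    M = np.fliplr(np.diag(a))          # put a on the anti-diagonal
    sign = (-1)**(n * (n - 1) // 2)    # n(n-1)/2 inversions in the reversal
    assert np.isclose(np.linalg.det(M), sign * np.prod(a))
```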

This is not really a problem, because the integral of a multi-dimensional Gaussian is related to the determinant of the operator that appears in the quadratic form, as we saw above. Up to a sign, this determinant may be written either as the product of the diagonal elements or as the product of the anti-diagonal elements of the matrix, as one can easily verify using the Laplace expansion for the determinant. The sign that remains undetermined depends only on the dimension of the matrix and is not important, since it always cancels out in the ratios of two integrals that define the expectation values of the observables of the theory. However, since this is a very basic and important result, in what follows we will verify it directly, by calculating explicitly an integral of this type. We want to learn here how to deal with a functional integral written in momentum space, for example the following one,


\begin{displaymath}
I=\int\prod_{\vec{k}}{\rm d}\widetilde\varphi (\vec{k})
\;\widetilde\varphi (\vec{k}')\,\widetilde\varphi (\vec{k}'')\;e^{-S_{0}[\widetilde\varphi ]}.
\end{displaymath}

Let us recall that there are always $N^{d}$ independent field values, whether the field is expressed in terms of $\varphi(\vec{n})$ or in terms of $\widetilde\varphi (\vec{k})$, since there are always exactly $N^{d}$ possible values for either $\vec{n}$ or $\vec{k}$. However, the $\widetilde\varphi $ are complex, unlike the $\varphi$, which are real; therefore there are twice as many real parameters in the set of the $\widetilde\varphi $, since for each one of them we have


\begin{displaymath}
\widetilde\varphi (\vec{k})=\mathfrak{R}(\vec{k})+\imath\mathfrak{I}(\vec{k}).
\end{displaymath}

On the other hand, these parameters $\mathfrak{R}$ and $\mathfrak{I}$ are not all independent because, since $\varphi$ is real, there are among them the constraints


\begin{displaymath}
\widetilde\varphi (-\vec{k})=\widetilde\varphi ^{*}(\vec{k}),
\end{displaymath}

that is,


\begin{displaymath}
\mathfrak{R}(-\vec{k})=\mathfrak{R}(\vec{k})\mbox{~~and~~}\mathfrak{I}(-\vec{k})=-\mathfrak{I}(\vec{k}).
\end{displaymath}

While the domain of integration is clear in the space of the $\varphi$, where each $\varphi(\vec{n})$ goes from $-\infty$ to $\infty$, the same is not true in the space of the $\widetilde\varphi $, because it is necessary to find a path in the complex plane of each $\widetilde\varphi $ such that these constraints are satisfied. The integral that we want to calculate may be understood as an integral over an $N^{d}$-dimensional surface embedded in the $2N^{d}$-dimensional space generated by the set of all the $\mathfrak{R}$ and $\mathfrak{I}$ variables. We may calculate the integral over the surface by integrating over the whole space an expression involving Dirac delta functions that have support on the surface. In this way we implement the constraints explicitly by means of the Dirac delta functions and may then extend the integration to all the variables $\mathfrak{R}$ and $\mathfrak{I}$, each one of them going from $-\infty$ to $\infty$, which makes the treatment of the limits of integration much clearer. A simple example of this kind of operation can be found in problem 3.3.4. Using these ideas and performing a careful counting of the modes in momentum space, in order to build a consistent pairing of those that have their real and imaginary parts related (problem 3.3.5), we may write the integral in the form

\begin{eqnarray*}
I & = & \int\left[\prod_{\vec{k}}{\rm d}\mathfrak{R}(\vec{k}){\rm d}\mathfrak{I}(\vec{k})\right]
\left[\prod_{\vec{k}={\cal P}(\vec{k})}\delta[\mathfrak{I}(\vec{k})]\right] \\
& & \times\left[\prod_{(\vec{k},{\cal P}(\vec{k})),\vec{k}\neq{\cal P}(\vec{k})}
\delta\!\left[\frac{\mathfrak{R}(\vec{k})-\mathfrak{R}({\cal P}(\vec{k}))}{\sqrt{2}}\right]
\delta\!\left[\frac{\mathfrak{I}(\vec{k})+\mathfrak{I}({\cal P}(\vec{k}))}{\sqrt{2}}\right]\right] \\
& & \times\,[\mathfrak{R}(\vec{k}')+\imath\mathfrak{I}(\vec{k}')]
[\mathfrak{R}(\vec{k}'')+\imath\mathfrak{I}(\vec{k}'')]
\,e^{-S_{0}[\mathfrak{R},\mathfrak{I}]},
\end{eqnarray*}


where $\prod_{\vec{k}={\cal P}(\vec{k})}$ is a product that runs over the real modes, and $\prod_{(\vec{k},{\cal P}(\vec{k})),\vec{k}\neq{\cal
P}(\vec{k})}$ is a product that runs over one half of the complex modes existing in momentum space, that is, over pairs of complex modes which are paired up by the pairing operator ${\cal P}$. This pairing operator is such that ${\cal P}(\vec{k})=-\vec{k}$ unless one or more of the components of $-\vec{k}$ are outside their standard range of variation, in which case one must add $N$ to them in order to bring them back into the correct range. When this happens ${\cal P}(\vec{k})$ is not equal to $-\vec{k}$, since only some of its components change sign. For odd $N$ the only mode for which $\vec{k}={\cal P}(\vec{k})$ is the zero mode $\vec{k}=\vec{0}$, but for even $N$ there are $2^{d}$ such modes. In addition to this, $S_{0}$ may be written in a simple form in terms of the $\mathfrak{R}$'s and $\mathfrak{I}$'s,


\begin{displaymath}
S_{0}[\mathfrak{R},\mathfrak{I}]=\frac{N^{d}}{2}\sum_{\vec{k}}
[\rho^{2}(\vec{k})+\alpha_{0}][\mathfrak{R}^{2}(\vec{k})+\mathfrak{I}^{2}(\vec{k})],
\end{displaymath}

where, naturally, $\mathfrak{I}=0$ for the real modes. In this way it becomes much simpler to deal with these integrals, because now we may treat all the variables as independent.
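
The mode counting described above is easy to reproduce by brute force. The sketch below implements the pairing operator ${\cal P}$ as componentwise negation modulo $N$ (a hypothetical helper consistent with the definition given in the text) and counts the real modes, those with $\vec{k}={\cal P}(\vec{k})$:

```python
import itertools

def P(k, N):
    # pairing operator: componentwise -k, brought back into the range 0..N-1
    return tuple((-ki) % N for ki in k)

def count_real_modes(N, d):
    return sum(1 for k in itertools.product(range(N), repeat=d)
               if k == P(k, N))

# odd N: only the zero mode is real; even N: 2^d real modes
assert count_real_modes(5, 2) == 1
assert count_real_modes(7, 3) == 1
assert count_real_modes(4, 2) == 2**2
assert count_real_modes(4, 3) == 2**3
```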

In order to verify in which cases our integral is equal to zero or not, let us start with the case in which we have


\begin{displaymath}
\vec{k}'\neq\vec{k}''\mbox{~~and~~}\vec{k}'\neq-\vec{k}'',
\end{displaymath}

since in this case we can factor out the terms involving the four modes $-\vec{k}'$, $\vec{k}'$, $-\vec{k}''$ and $\vec{k}''$. Since we now have two independent variables per mode, we end up writing eight integrals in explicit form,

\begin{eqnarray*}
I & = & I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(-\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(-\vec{k}') \\
& & \times\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}'')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}'')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(-\vec{k}'')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(-\vec{k}'') \\
& & \times\,\delta\!\left[\frac{\mathfrak{R}(\vec{k}')-\mathfrak{R}(-\vec{k}')}{\sqrt{2}}\right]
\delta\!\left[\frac{\mathfrak{I}(\vec{k}')+\mathfrak{I}(-\vec{k}')}{\sqrt{2}}\right]
\delta\!\left[\frac{\mathfrak{R}(\vec{k}'')-\mathfrak{R}(-\vec{k}'')}{\sqrt{2}}\right]
\delta\!\left[\frac{\mathfrak{I}(\vec{k}'')+\mathfrak{I}(-\vec{k}'')}{\sqrt{2}}\right] \\
& & \times\,[\mathfrak{R}(\vec{k}')+\imath\mathfrak{I}(\vec{k}')]
[\mathfrak{R}(\vec{k}'')+\imath\mathfrak{I}(\vec{k}'')] \\
& & \times\,e^{-\frac{N^{d}}{2}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]}
\,e^{-\frac{N^{d}}{2}[\rho^{2}(-\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(-\vec{k}')+\mathfrak{I}^{2}(-\vec{k}')\right]} \\
& & \times\,e^{-\frac{N^{d}}{2}[\rho^{2}(\vec{k}'')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}'')+\mathfrak{I}^{2}(\vec{k}'')\right]}
\,e^{-\frac{N^{d}}{2}[\rho^{2}(-\vec{k}'')+\alpha_{0}]
\left[\mathfrak{R}^{2}(-\vec{k}'')+\mathfrak{I}^{2}(-\vec{k}'')\right]},
\end{eqnarray*}


where $I'$ contains the integrals over all the other modes. We are assuming here that the modes indexed by $\vec{k}'$ and $\vec{k}''$ are complex and not real. We invite the reader to complete the deduction, taking into account explicitly the real modes and thus verifying that the results are correct in all cases. We may now use the four delta functions to do the integrals over the four variables $\mathfrak{R}(-\vec{k}')$, $\mathfrak{I}(-\vec{k}')$, $\mathfrak{R}(-\vec{k}'')$ and $\mathfrak{I}(-\vec{k}'')$, which appear only in the corresponding exponentials, obtaining

\begin{eqnarray*}
I & = & 4I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}'')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}'') \\
& & \times\,[\mathfrak{R}(\vec{k}')+\imath\mathfrak{I}(\vec{k}')]
[\mathfrak{R}(\vec{k}'')+\imath\mathfrak{I}(\vec{k}'')] \\
& & \times\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]}
\,e^{-N^{d}[\rho^{2}(\vec{k}'')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}'')+\mathfrak{I}^{2}(\vec{k}'')\right]}.
\end{eqnarray*}


We observe now that the remaining integrals may be decomposed in terms of factors that are integrals of odd functions over symmetrical domains of integration, being therefore zero. It follows that, for $\vec{k}'\neq\vec{k}''$ and $\vec{k}'\neq-\vec{k}''$, we have $I=0$.

Let us examine now the case in which $\vec{k}'=\vec{k}''$. In this case, collecting the appropriate factors in a fashion analogous to the previous case, we have

\begin{eqnarray*}
I & = & I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(-\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(-\vec{k}') \\
& & \times\,\delta\!\left[\frac{\mathfrak{R}(\vec{k}')-\mathfrak{R}(-\vec{k}')}{\sqrt{2}}\right]
\delta\!\left[\frac{\mathfrak{I}(\vec{k}')+\mathfrak{I}(-\vec{k}')}{\sqrt{2}}\right]
[\mathfrak{R}(\vec{k}')+\imath\mathfrak{I}(\vec{k}')]^{2} \\
& & \times\,e^{-\frac{N^{d}}{2}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]}
\,e^{-\frac{N^{d}}{2}[\rho^{2}(-\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(-\vec{k}')+\mathfrak{I}^{2}(-\vec{k}')\right]}.
\end{eqnarray*}


Once more, we may use the delta functions to obtain

\begin{eqnarray*}
I & = & 2I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\left[\mathfrak{R}^{2}(\vec{k}')-\mathfrak{I}^{2}(\vec{k}')
+2\imath\mathfrak{R}(\vec{k}')\mathfrak{I}(\vec{k}')\right] \\
& & \times\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]}.
\end{eqnarray*}


For the third term of the bracket the same parity argument used in the previous case is valid, and therefore it is zero. As for the first two terms, we may change the names of the integration variables in one of them, thus verifying that they cancel out. Therefore, in the case $\vec{k}'=\vec{k}''$ we also have $I=0$.

It remains to examine the case $\vec{k}'=-\vec{k}''$. Once more we collect the appropriate factors, obtaining

\begin{eqnarray*}
I & = & I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(-\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(-\vec{k}') \\
& & \times\,\delta\!\left[\frac{\mathfrak{R}(\vec{k}')-\mathfrak{R}(-\vec{k}')}{\sqrt{2}}\right]
\delta\!\left[\frac{\mathfrak{I}(\vec{k}')+\mathfrak{I}(-\vec{k}')}{\sqrt{2}}\right]
[\mathfrak{R}(\vec{k}')+\imath\mathfrak{I}(\vec{k}')]
[\mathfrak{R}(-\vec{k}')+\imath\mathfrak{I}(-\vec{k}')] \\
& & \times\,e^{-\frac{N^{d}}{2}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]}
\,e^{-\frac{N^{d}}{2}[\rho^{2}(-\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(-\vec{k}')+\mathfrak{I}^{2}(-\vec{k}')\right]}.
\end{eqnarray*}


Using the delta functions to do the integrals over $\mathfrak{R}(-\vec{k}')$ and $\mathfrak{I}(-\vec{k}')$ we obtain


\begin{displaymath}
I=2I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]
e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]},
\end{displaymath}

where the facts that $\mathfrak{R}(\vec{k}')=\mathfrak{R}(-\vec{k}')$ and that $\mathfrak{I}(\vec{k}')=-\mathfrak{I}(-\vec{k}')$ imply the cancellation of the two imaginary terms in the bracket. The two other terms no longer cancel out as they did in the previous case, so it now becomes clear that in this case the integral $I$ is not zero. We have then the result

\begin{eqnarray*}
\lefteqn{\int\prod_{\vec{k}}{\rm d}\widetilde\varphi (\vec{k})
\;\widetilde\varphi (\vec{k}')\widetilde\varphi (-\vec{k}')
\;e^{-S_{0}[\widetilde\varphi ]}} \\
& = & 2I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]
e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]}.
\end{eqnarray*}


In analogous fashion, we also have the integral, with the same $I'$,


\begin{displaymath}
\int\prod_{\vec{k}}{\rm d}\widetilde\varphi (\vec{k})e^{-S_{0}[\widetilde\varphi ]}
=2I'\times \int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\;e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]
\left[\mathfrak{R}^{2}(\vec{k}')+\mathfrak{I}^{2}(\vec{k}')\right]},
\end{displaymath}

so that we may now use these results to calculate the ratio of integrals that appears in the expectation value that defines the propagator in momentum space,

\begin{eqnarray*}
\lefteqn{ \langle\widetilde\varphi (\vec{k}')\widetilde\varphi (-\vec{k}')\rangle
=\frac{\displaystyle\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\;\mathfrak{R}^{2}(\vec{k}')
\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]\mathfrak{R}^{2}(\vec{k}')}}
{\displaystyle\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]\mathfrak{R}^{2}(\vec{k}')}}} \\
& & +\frac{\displaystyle\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\;\mathfrak{I}^{2}(\vec{k}')
\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]\mathfrak{I}^{2}(\vec{k}')}}
{\displaystyle\int_{-\infty}^{\infty}{\rm d}\mathfrak{I}(\vec{k}')
\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]\mathfrak{I}^{2}(\vec{k}')}}.
\end{eqnarray*}


These two terms containing ratios of integrals are identical, as one can check with a simple change of the integration variable in one of them. We have then, using the trick seen before of differentiating with respect to the parameter in order to relate the integrals in the numerators with those in the denominators,

\begin{eqnarray*}
\langle\widetilde\varphi (\vec{k}')\widetilde\varphi (-\vec{k}')\rangle
& = & 2\,\frac{\displaystyle\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\;\mathfrak{R}^{2}(\vec{k}')
\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]\mathfrak{R}^{2}(\vec{k}')}}
{\displaystyle\int_{-\infty}^{\infty}{\rm d}\mathfrak{R}(\vec{k}')
\,e^{-N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]\mathfrak{R}^{2}(\vec{k}')}} \\ & = &
\frac{1}{\sqrt{N^{d}[\rho^{2}(\vec{k}')+\alpha_{0}]}^{2}}.
\end{eqnarray*}


Using the fact that $\widetilde\varphi (-\vec{k}')=\widetilde\varphi ^{*}(\vec{k}')$ in order to write the left-hand side as a square modulus, we have therefore the final result for the propagator of the free theory in momentum space,

  $\displaystyle
\langle\vert\widetilde\varphi (\vec{k})\vert^{2}\rangle
=\frac{1}{N^{d}[\rho^{2}(\vec{k})+\alpha_{0}]}.
$ (3.3.3)
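
Equation (3.3.3) can be verified without any Monte Carlo sampling: for a Gaussian weight $e^{-\frac{1}{2}\varphi^{T}\mathbb{M}\varphi}$ the two-point function in position space is $\mathbb{M}^{-1}$, and its Fourier transform can be compared directly with $1/\{N^{d}[\rho^{2}(\vec{k})+\alpha_{0}]\}$. The sketch below does this in $d=1$, again assuming the forward-difference convention with $\rho^{2}(k)=4\sin^{2}(\pi k/N)$:

```python
import numpy as np

N, alpha0 = 6, 0.7
# (D phi)(n) = phi(n+1) - phi(n), periodic boundaries
D = np.roll(np.eye(N), 1, axis=1) - np.eye(N)
M = D.T @ D + alpha0 * np.eye(N)      # S0 = (1/2) phi^T M phi
C = np.linalg.inv(M)                  # <phi(n) phi(m)> = (M^{-1})_{nm}

for k in range(N):
    e = np.exp(-2j * np.pi * k * np.arange(N) / N)   # phi~(k) = (1/N) e.phi
    lhs = (e @ C @ e.conj()).real / N**2             # <|phi~(k)|^2>
    rho2 = 4 * np.sin(np.pi * k / N)**2
    assert np.isclose(lhs, 1 / (N * (rho2 + alpha0)))
```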

Note that this result for the propagator in momentum space is exactly equal to the Green function of the classical theory written in momentum space. The same is true for other boundary conditions, but this relation between the classical and quantum theories is a specific property of the free theory, not a general property of quantum field theory. It is interesting to mention that we may systematize this kind of calculation by writing only the result for the integral

  $\displaystyle
\int[{\bf d}\widetilde\varphi ]e^{-S_{0}[\widetilde\varphi ]}
=\int[{\bf d}\widetilde\varphi ]
\;e^{-\frac{N^{d}}{2}\sum_{\vec{k}}[\rho^{2}(\vec{k})+\alpha_{0}]
\vert\widetilde\varphi (\vec{k})\vert^{2}}=\prod_{\vec{k}}
\sqrt{\frac{2\pi}{N^{d}[\rho^{2}(\vec{k})+\alpha_{0}]}},
$ (3.3.4)

because all the other relevant integrals, with even powers of the fields, can be obtained from this one by means of differentiation with respect to the quantity $-N^{d}[\rho^{2}(\vec{k})+\alpha_{0}]/2$.
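
The differentiation trick mentioned here is easy to test on a single mode. Writing a one-mode integral as $Z(\beta)=\int{\rm d}x\,e^{-\beta x^{2}}$, with $\beta$ standing for the coefficient of the squared field variable (an arbitrary value in this sketch), differentiating with respect to $-\beta$ brings down exactly one factor of $x^{2}$:

```python
import numpy as np

# One-mode check: integral of x^2 e^{-beta x^2} equals -dZ/dbeta,
# where Z(beta) is the Gaussian integral, here evaluated numerically.
x = np.linspace(-30.0, 30.0, 300001)

def Z(beta):
    return np.trapz(np.exp(-beta * x**2), x)

beta, h = 0.8, 1e-4
moment = np.trapz(x**2 * np.exp(-beta * x**2), x)
deriv = -(Z(beta + h) - Z(beta - h)) / (2 * h)   # central difference
assert np.isclose(moment, deriv, rtol=1e-6)
```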

The fact that expectation values of the type $\langle\widetilde\varphi (\vec{k})\widetilde\varphi (\vec{k'})\rangle$ are zero when $\vec{k'}\neq-\vec{k}$ is related directly to the conservation of momentum during the propagation of field waves and, indirectly, also during the propagation of particles. It means that if a wave or particle enters (we adopt the convention that “enters” means sign “$+$” for the momentum) into a propagation process, which is a kind of interaction of the object with itself, then it must exit (in this convention “exit” means sign “$-$” for the momentum) with the same vector $\vec{k}$, that is, it propagates with a constant momentum, in a given mode of the lattice in momentum space. This is, of course, a specific characteristic of periodic boundary conditions, for which we have discrete translation invariance.

For fixed boundary conditions nothing essential changes regarding the calculation of the Gaussian integrals. In that case the eigenmodes of the Laplacian are associated with stationary waves within the box, not with travelling plane waves. Due to this the components in momentum space are all real rather than complex, which implies that the transformation to momentum space really diagonalizes the quadratic form in the action, instead of anti-diagonalizing it as happened here. This actually makes the calculation of the integrals more straightforward than in the periodic case, because there is no additional complication of having complex components as integration variables. In this case a modified version of equation (3.3.4) holds (problem 3.3.6),


\begin{displaymath}
\int[{\bf d}\widetilde\varphi ]e^{-S_{0}[\widetilde\varphi ]}=\prod_{\vec{k}}
\sqrt{\frac{2\pi}{(N+1)^{d}[\rho_{f}^{2}(\vec{k})+\alpha_{0}]}},
\end{displaymath}

starting from which we can calculate other integrals using the same procedures of differentiation with respect to a parameter, in this case the quantity


\begin{displaymath}
-(N+1)^{d}[\rho_{f}^{2}(\vec{k})+\alpha_{0}]/2,
\end{displaymath}

as we already discussed above for the periodic case.

In order to finish the development of this section we must still relate these integrals over the Fourier components of the field to the original integrals over the fields in position space, by means of which we defined the theory. We have already seen how to transform the action from one field coordinate system to the other, but the same must be done with the integration element in configuration space. As we saw before in section 2.9, with our usual normalization we have for the determinant of the transformation matrix of the finite Fourier transform $\det(\mathbb{F})=n^{-n/2}$, while $\det(\mathbb{F}^{-1})=n^{n/2}$. The first determinant is simply the Jacobian of the transformation from the basis $[{\bf d}\varphi]$ to the basis $[{\bf d}\widetilde\varphi ]$ in the functional integral. Since it is independent of the fields and therefore cancels out in the ratio of integrals that defines the measure for the functional integration, we may write a generic functional integral, such as


\begin{displaymath}
\left\langle{\cal O}\right\rangle= \frac{\displaystyle \int\left[{\bf
d}\varphi\right]\;{\cal O}[\varphi]\;e^{-S\left[\varphi\right]}}{\displaystyle \int\left[{\bf
d}\varphi\right]\;e^{-S\left[\varphi\right]}},
\end{displaymath}

in terms of the basis of Fourier components, as


\begin{displaymath}
\left\langle{\cal O}\right\rangle= \frac{\displaystyle \int\left[{\bf
d}\widetilde\varphi \right]\;{\cal O}[\widetilde\varphi ]\;e^{-S\left[\widetilde\varphi \right]}}{\displaystyle \int\left[{\bf
d}\widetilde\varphi \right]\;e^{-S\left[\widetilde\varphi \right]}}.
\end{displaymath}

Of course a similar result is valid for the expression of $\left\langle{\cal O}\right\rangle$ in terms of integrations involving the dimensionful field $\phi$.

Problems

  1. Calculate, on a lattice with $N^{d}$ sites in $d$ dimensions, the multiple integral


    \begin{displaymath}
I_{0}=\int\prod_{\vec{n}}{\rm d}\varphi(\vec{n})
\;e^{-\frac{\alpha_{0}}{2}\sum_{\vec{n}}\varphi^{2}(\vec{n})}.
\end{displaymath}

  2. Show that $I_{1}(\vec{n}_{1},\vec{n}_{2})=C\delta(\vec{n}_{1},\vec{n}_{2})$ where $\delta(\vec{n}_{1},\vec{n}_{2})$ is the $d$-dimensional Kronecker delta function and


    \begin{displaymath}
I_{1}(\vec{n}_{1},\vec{n}_{2})= \int\prod_{\vec{n}}{\rm
d}\varphi(\vec{n})\;\varphi(\vec{n}_{1})\varphi(\vec{n}_{2})
\;e^{-\frac{\alpha_{0}}{2}\sum_{\vec{n}}\varphi^{2}(\vec{n})},
\end{displaymath}

    and calculate $C$.

  3. Write, in the one-dimensional case, the matrix that represents the operator $\widetilde K(k,k')$ in configuration space, using as a basis the Fourier modes $\widetilde\varphi (k)$ of the field, in terms of which the action of the free theory in momentum space, as given in equation (3.3.2), is written as


    \begin{displaymath}
S_{0}[\widetilde\varphi ]=\frac{N}{2}\sum_{k}\sum_{k'}\widetilde\varphi (k)\widetilde K(k,k')\widetilde\varphi (k').
\end{displaymath}

    For simplicity, use lattices with odd $N$, enumerating the corresponding momenta from $-(N-1)/2$ to $(N-1)/2$, in order to verify that the matrix is anti-diagonal, and write explicitly the elements of the anti-diagonal. Using this fact and the result in equation (3.3.1), obtain the result in equation (3.3.4) up to a multiplicative constant.

    1. Consider the square of vertices $(0,0)$, $(0,1)$, $(1,0)$, and $(1,1)$ in the plane $(x,y)$, a function $f(x,y)$ on this plane and the curve in the plane defined by $f(x,y)=0$. Let us denote by $C$ the part of the curve that is inside the square. Show that the line integral over the curve $C$ that gives its arc length can be written in terms of an integral over the plane in the following way,


      \begin{displaymath}
\int_{C}{\rm d}\ell
=
\int_{0}^{1}{\rm d}x
\int_{0}^{1}{\rm d}y
\;\sqrt{(\partial_{x}f)^{2}+(\partial_{y}f)^{2}}\;
\delta[f(x,y)],
\end{displaymath}

      where the Dirac delta function appears. Hint: make, at each point along the curve, a transformation of variables from $(x,y)$ to $(u,v)$, where $u$ varies along the tangent to the curve and $v$ varies along the line perpendicular to it, that is, ${\rm d}u\propto{\rm d}\ell$; remember that ${\rm d}f\equiv 0$ along the curve; for simplicity, assume that $C$ is the graph of a function $y(x)$.

    2. Verify the result above for the particular cases $f(x,y)=x^{2}+y^{2}-1$ and $f(x,y)=x-y$. Show that this last case, in which $f$ is linear in $x$ and $y$, is the one used in the text, in which the expression of the integral over the plane reduces to


      \begin{displaymath}
\int_{C}{\rm d}\ell= \int_{0}^{1}{\rm d}x\int_{0}^{1}{\rm d}y
\;\delta\left(\frac{x-y}{\sqrt{2}}\right).
\end{displaymath}

  4. Consider the definition of the pairing operator ${\cal P}(\vec{k})$, which is that ${\cal P}(\vec{k})=-\vec{k}$ unless one or more of the components of $-\vec{k}$ falls outside the allowed range of values, when they must be brought back into the range by the addition of $N$.

    1. Show that a real mode, for which the imaginary part of the Fourier transform of the field is zero, is one for which $\vec{k}={\cal P}(\vec{k})$. Examine then the situation with these real modes and define the product $\prod_{\vec{k}={\cal P}(\vec{k})}$ used in the text. Show that for odd $N$ the product has just one factor, the zero mode $\vec{k}=\vec{0}$, while for even $N$ it contains $2^{d}$ factors.

    2. Show that, excluding the real modes, it is possible to pair up all the remaining Fourier modes in momentum space so that each pair has equal real parts $\mathfrak{R}$ and imaginary parts $\mathfrak{I}$ that differ only by the sign. Use the pairing operator ${\cal P}$ in order to do the pairing and define, in this way, the product $\prod_{(\vec{k},{\cal P}(\vec{k})),\vec{k}\neq{\cal
P}(\vec{k})}$ used in the text. Write a detailed definition of this product, and show that for odd $N$ it consists of $N^{d}-1$ factors, while for even $N$ it consists of $N^{d}-2^{d}$ factors.

  5. Calculate the basic Gaussian integral of the free theory in the case of fixed boundary conditions, that is, show that


    \begin{displaymath}
\int[{\bf d}\widetilde\varphi ]e^{-S_{0}[\widetilde\varphi ]}=\prod_{\vec{k}}
\sqrt{\frac{2\pi}{(N+1)^{d}[\rho_{f}^{2}(\vec{k})+\alpha_{0}]}},
\end{displaymath}

    where $\rho_{f}^{2}(\vec{k})$ was defined in equation (2.8.4).