Low-Pass Filters on the Real Line

Low-pass filters are operations that map real functions to other, related real functions. Let us define the simplest such filter, namely the first-order linear low-pass filter. Given a real function $f(x)$ on the real line of the coordinate $x$, of which we require no more than that it be integrable, we define from it a filtered function $f_{\varepsilon}(x)$ by


\begin{equation}
f_{\varepsilon}(x)
=
\frac{1}{2\varepsilon}
\int_{x-\varepsilon}^{x+\varepsilon}dx'\,
f\!\left(x'\right),
\end{equation}

where $\varepsilon $ is a strictly positive real constant, usually meant to be small by comparison to some physical scale, which we will refer to as the range of the filter. One can also define $f_{0}(x)$ by continuity, as the $\varepsilon\to 0$ limit of this expression, which is identical to $f(x)$ almost everywhere, but not necessarily at every point. The transition from $f(x)$ to $f_{\varepsilon}(x)$ constitutes an operation within the space of real functions. A discrete version of this operation is known in numerical and graphical settings as the taking of running averages. A similar operation is known in quantum field theory as block renormalization. What we do here is to map the value of $f(x)$ at $x$ to its average value over a symmetric interval around $x$. This results in a new real function $f_{\varepsilon}(x)$ that is smoother than the original one, since the filter damps out the high-frequency components of the Fourier spectrum of $f(x)$, as will be shown explicitly in what follows.
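As a concrete numerical illustration of this definition (the helper names below are ours, not part of the text), the defining integral can be approximated by a simple quadrature. For $f(x)=x^{2}$ the filter can be carried out in closed form, giving $f_{\varepsilon}(x)=x^{2}+\varepsilon^{2}/3$, which the sketch reproduces:

```python
# Sketch of the first-order low-pass filter (hypothetical helper name):
# f_eps(x) = (1/(2*eps)) * integral from x-eps to x+eps of f(x') dx',
# approximated here by the midpoint rule.

def filter_eps(f, x, eps, n=2000):
    """Approximate the filtered value f_eps(x) by the midpoint rule."""
    h = 2.0 * eps / n
    total = sum(f(x - eps + (i + 0.5) * h) for i in range(n))
    return total * h / (2.0 * eps)

# For f(x) = x**2 the filter is known in closed form:
# f_eps(x) = x**2 + eps**2 / 3.
eps = 0.1
approx = filter_eps(lambda t: t * t, 2.0, eps)
exact = 2.0**2 + eps**2 / 3.0
```

Note that the filtered value of $x^{2}$ differs from the original one by a term proportional to $\varepsilon^{2}$, in accordance with the polynomial property discussed below.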

The filter can be understood as a linear integral operator acting in the space of integrable real functions. It may be written as an integral over the whole real line involving a kernel $K_{\varepsilon}\!\left(x-x'\right)$ with compact support,


\begin{displaymath}
f_{\varepsilon}(x)
=
\int_{-\infty}^{\infty}dx'\,
K_{\varepsilon}\!\left(x-x'\right)
f\!\left(x'\right),
\end{displaymath}

where the kernel is defined as $K_{\varepsilon}\!\left(x-x'\right)=1/(2\varepsilon)$ for $\vert x-x'\vert<\varepsilon$, and as $K_{\varepsilon}\!\left(x-x'\right)=0$ for $\vert x-x'\vert>\varepsilon$. This kernel is a discontinuous even function of $\left (x-x'\right )$ that has unit integral. If the functions one is dealing with are defined in a periodic interval such as $[-\pi ,\pi ]$, then the integral above has to be restricted to that interval, and the kernel can be easily expressed in terms of a convergent Fourier series,


\begin{displaymath}
K_{\varepsilon}\!\left(x-x'\right)
=
\frac{1}{2\pi}
+
\frac{1}{\pi}
\sum_{k=1}^{\infty}
\left[
\frac{\sin(k\varepsilon)}{(k\varepsilon)}
\right]
\cos\!\left[k\left(x-x'\right)\right],
\end{displaymath}

where we assume that $\varepsilon\leq\pi$. The calculation of the coefficients of this series is completely straightforward. The series can be shown to be convergent by the Dirichlet test, or alternatively by the monotonicity criterion discussed in [3]. The quantity within square brackets is known as the sinc function of the variable $(k\varepsilon)$. In spite of appearances, it is an analytic function, assuming the value $1$ at zero.
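The convergence of this series can also be checked numerically; the sketch below (a construction of our own, with a truncation point chosen by us) sums the series up to a large $k$ and compares the result with the kernel values, $1/(2\varepsilon)$ inside the support and $0$ outside:

```python
import math

# Partial sums of the kernel's Fourier series on the periodic interval:
# K_eps(u) = 1/(2*pi) + (1/pi) * sum_{k>=1} [sin(k*eps)/(k*eps)] * cos(k*u).
# (Hypothetical helper name; the truncation kmax is ours.)

def kernel_series(u, eps, kmax):
    s = 1.0 / (2.0 * math.pi)
    for k in range(1, kmax + 1):
        s += (math.sin(k * eps) / (k * eps)) * math.cos(k * u) / math.pi
    return s

eps = 0.5
inside = kernel_series(0.0, eps, 20000)   # kernel value is 1/(2*eps) = 1.0
outside = kernel_series(2.0, eps, 20000)  # kernel value is 0.0
```

The convergence is slow, as expected of a Fourier series of a discontinuous function, but the partial sums do approach the stated kernel values at interior points of the two regions.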

Although it is possible to define the filter of range $\varepsilon $ inside a periodic interval even if the range is larger than the length of the interval, that is, when $\varepsilon>\pi$ in our case here, there is little point in doing so. The central idea of the filter is that the range be small compared to the relevant scales of a given problem, and once a periodic interval is introduced it immediately establishes such a scale with its length. Therefore we should have at least $\varepsilon\leq\pi$, and more often $\varepsilon\ll\pi$. We will therefore adopt as a basic hypothesis, from now on, the condition that the range be smaller than the length of the periodic interval whenever we work with periodic functions within such an interval.

The filter defined above has several interesting properties, which are the reasons for its usefulness. Some of the most important and basic ones follow. In every case it is clear that $f(x)$ must be an integrable function, otherwise it is not even possible to define the corresponding filtered function.

  1. If $f(x)$ is a linear function on the real line, then $f_{\varepsilon}(x)=f(x)$.

  2. If $f(x)=x^{n}$ on the real line, then $f_{\varepsilon}(x)$ is a polynomial of order $n$, with the coefficient $1$ for the term $x^{n}$.

    Only lower powers of $x$ with the same parity as $n$ appear in this polynomial. All the other coefficients contain strictly positive powers of $\varepsilon^{2}$, and thus tend to zero when $\varepsilon\to 0$. This means that in the $\varepsilon\to 0$ limit the filter becomes the identity, in so far as polynomials are concerned.

  3. If $f(x)$ is a continuous function, then $f_{\varepsilon}(x)$ is a differentiable function.

  4. If $f(x)$ is a discontinuous function, then $f_{\varepsilon}(x)$ is a continuous function.

  5. If $f(x)$ is an integrable singular object such as Dirac's delta ``function'', then $f_{\varepsilon}(x)$ is a discontinuous function. In fact, the kernel defined above can itself be obtained by the application of the filter to a delta ``function''.

  6. In the $\varepsilon\to 0$ limit the filter becomes an almost-identity operation, in the sense that it reproduces in the output function $f_{\varepsilon}(x)$ the input function $f(x)$ almost everywhere.

  7. At isolated points where $f(x)$ is discontinuous the $\varepsilon\to 0$ limit of the function $f_{\varepsilon}(x)$ converges to the average of the two lateral limits of $f(x)$ at that point.

  8. At isolated points where $f(x)$ is non-differentiable the $\varepsilon\to 0$ limit of the derivative of the function $f_{\varepsilon}(x)$ converges to the average of the two lateral limits of the derivative of $f(x)$ at that point.

  9. The filter does not change the definite integral of a function that has compact support on the real line.
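Properties 4 and 7 can be seen explicitly in the simple case of the sign function, for which the defining integral can be evaluated in closed form; the sketch below (our own construction, not from the text) encodes that closed form:

```python
# Filter of f(x) = sign(x): integrating the definition piecewise gives
# f_eps(x) = -1 for x <= -eps, x/eps for |x| < eps, and +1 for x >= eps.
# The result is continuous, and at the jump x = 0 it equals the average
# of the two lateral limits of sign(x), namely zero.

def filtered_sign(x, eps):
    if x <= -eps:
        return -1.0
    if x >= eps:
        return 1.0
    return x / eps  # linear ramp across the jump

eps = 0.25
at_jump = filtered_sign(0.0, eps)   # average of -1 and +1
outside = filtered_sign(0.5, eps)   # unchanged away from the jump
```

The jump of $f(x)$ has been replaced by a linear ramp of width $2\varepsilon$, so the filtered function is everywhere continuous and coincides with the original one at distances greater than $\varepsilon $ from the discontinuity.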

Up to this point we have assumed that $f(x)$ is defined on the whole real line. If instead of this it is defined within a periodic interval, then we have a few more properties.

  1. If $f(x)$ is periodic, then so is $f_{\varepsilon}(x)$, with the same period.

  2. The filter does not change the average value of a periodic function. This means that it does not change the integral of the function over its period, and hence that it does not change the Fourier coefficient $\alpha_{0}$ of the function.

  3. For periodic functions the effect of the filter on the asymptotic behavior of the Fourier coefficients $\alpha_{k}$ and $\beta_{k}$ of the function, for $k>0$, is to add an extra factor of $k$ to the denominator. This is so because the filtered coefficients may be written as

\begin{eqnarray*}
\alpha_{\varepsilon,k}
& = &
\left[
\frac{\sin(k\varepsilon)}{(k\varepsilon)}
\right]
\alpha_{k},
\\
\beta_{\varepsilon,k}
& = &
\left[
\frac{\sin(k\varepsilon)}{(k\varepsilon)}
\right]
\beta_{k}.
\end{eqnarray*}


    Once more we see here the presence of the sinc function of the variable $(k\varepsilon)$.

All these properties can be demonstrated directly on the real line, and some such demonstrations can be found in Appendix A. For our purposes here one of the most important properties is the last one, since it implies that the action of the filter, when represented in the Fourier series of the real function, is very simple: it renders the filtered series more rapidly convergent than the original one, since the filtered coefficients contain an extra factor of $1/k$ and hence approach zero faster than the original ones as $k\to\infty$.
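The relation between the original and the filtered Fourier coefficients can be verified numerically. In the sketch below (our own construction, not from the text) we take the unit-amplitude square wave, whose sine coefficients are $\beta_{k}=4/(\pi k)$ for odd $k$, filter it in closed form, and check that its coefficients acquire the sinc factor:

```python
import math

# Exact filter of the square wave f(x) = sign(sin(x)), valid for
# 0 < eps < pi/2: each jump at a multiple of pi becomes a linear ramp.
def filtered_square(x, eps):
    x = math.atan2(math.sin(x), math.cos(x))  # reduce x to (-pi, pi]
    sign = 1.0 if x >= 0.0 else -1.0
    ramp = min(abs(x), math.pi - abs(x)) / eps
    return sign * min(1.0, ramp)

# Fourier sine coefficient beta_k = (1/pi) * integral over one period,
# approximated by the midpoint rule.
def beta(g, k, n=20000):
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        x = -math.pi + (i + 0.5) * h
        total += g(x) * math.sin(k * x)
    return total * h / math.pi

eps, k = 0.3, 3
measured = beta(lambda x: filtered_square(x, eps), k)
predicted = (math.sin(k * eps) / (k * eps)) * (4.0 / (math.pi * k))
```

The measured coefficient of the filtered square wave agrees with the prediction $\beta_{\varepsilon,k}=[\sin(k\varepsilon)/(k\varepsilon)]\,\beta_{k}$, exhibiting the extra power of $1/k$ in the asymptotic decay.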

The usefulness of the filter in physics applications, and the very possibility of using it to regularize divergent Fourier series in such circumstances, stem from two facts related to the mathematical representation of nature in physics. First, such a representation is always an approximate one. All physical measurements, as well as all theoretical calculations, of quantities which are represented by continuous variables, can only be performed with a finite amount of precision, that is, within finite and non-zero errors. In fact, not only is this true in practice, but with the advent of relativistic quantum mechanics and quantum field theory it became a limitation in principle as well. Second, all physical laws are valid within a certain range of length, time or energy scales. Given any physical measurements or theoretical calculations, there is always a length or time scale below which, or an energy scale above which, the measurements and calculations, as well as the hypotheses behind them, cease to have any meaning.

The application of a filter with range parameter $\varepsilon $ appreciably changes the function, and therefore the representation of nature that it implements, only at scales of the order of $\varepsilon $ or smaller, while at the same time resulting in series with better convergence characteristics for all non-zero values of $\varepsilon $, no matter how small. It is therefore always possible to choose $\varepsilon $ small enough so that no appreciable change in the physics is entailed within the relevant scales. We conclude that it is always possible to filter the real functions involved in physics applications, in order to have a representation of the physics in terms of convergent series, without the introduction of any physically relevant changes in the description of nature and its laws. In fact, it often turns out that the introduction of the low-pass filter actually improves the approximate representation of nature used in the applications, rather than harming it in any way, as shown in the examples discussed in Appendix B.
