$% Math \newcommand{\diff}{\operatorname{d}\!} \newcommand{\setR}{\mathbb{R}} \newcommand{\setN}{\mathbb{N}} \newcommand{\esp}[1]{\mathbb{E}\left[#1\right]} \newcommand{\Space}{\phantom{grrr}} \newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}} \; } \newcommand{\Argmin}[1]{\underset{#1}{\operatorname{Argmin}} \; } \newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}} \; } \let\oldvec\vec \renewcommand{\vec}{\boldsymbol} %boldsymbol or mathbf ? \newcommand{\tv}[1]{\operatorname{TV}\left( #1 \right)} \newcommand{\prox}[2]{\operatorname{prox}_{#1}{\left(#2\right)}} \newcommand{\proj}[2]{\operatorname{Proj}_{#1}{\left(#2\right)}} \newcommand{\sign}[1]{\operatorname{sign}\left( #1 \right)} \newcommand{\braket}[2]{\left\langle #1 \, , \, #2 \right\rangle} \renewcommand{\div}{\operatorname{div}} \newcommand{\id}{\operatorname{Id}} \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\lag}[1]{\mathcal{L}\left( #1 \right)} \newcommand{\mmax}[2]{\max_{#1}\left\{ #2 \right\}} \newcommand{\mmin}[2]{\min_{#1}\left\{ #2 \right\}} \newcommand{\conv}{\ast} \newcommand{\ft}[1]{\mathcal{F}\left( #1 \right)} \newcommand{\ift}[1]{\mathcal{F}^{-1}\left( #1 \right)} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\E}[1]{\cdot 10^{#1}} %notation scientifique. Utiliser "e", "E", ou "10^" ? \newcommand{\amin}[2]{\underset{#1}{\operatorname{argmin}} \; \left\{ #2 \right\} }$

# Tuning the performance

Several parameters can be used to tune the performance of the iterative methods.

## Preconditioning the iterative algorithms

A fundamental operation in iterative techniques is the projection-backprojection. Consider, for example, a simple Landweber iteration (SIRT) for minimizing the least-squares cost $$\frac{1}{2}\norm{P x - d}_2^2$$:

$$\begin{aligned} x_{k+1} &= x_k - \gamma P^T (P x_k - d) \\ & = x_k - \gamma P^T P x_k + \gamma P^T d \end{aligned}$$
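As a minimal sketch of the iteration above, using a plain NumPy matrix as a stand-in for the projector $$P$$ (the names and the toy problem here are illustrative, not part of the actual reconstruction pipeline):

```python
import numpy as np

def landweber(P, d, gamma, n_iter=100):
    """Plain Landweber (SIRT-like) iteration for min 0.5 * ||P x - d||^2.

    P is any linear operator given as a matrix; in tomography it would be
    the projector and P.T the backprojector.
    """
    x = np.zeros(P.shape[1])
    for _ in range(n_iter):
        # gradient step on the least-squares cost
        x = x - gamma * P.T @ (P @ x - d)
    return x

# Toy example: a consistent, overdetermined system
rng = np.random.default_rng(0)
P = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
d = P @ x_true
gamma = 1.0 / np.linalg.norm(P, 2) ** 2  # gamma < 2 / ||P^T P|| ensures convergence
x_rec = landweber(P, d, gamma, n_iter=5000)
```

Since the toy system is consistent and $$P$$ has full column rank, the iteration converges to the exact solution.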

The operator $$P^T P$$ is involved at each iteration. The convergence rate of any gradient-like algorithm (including FISTA) primarily depends on the norm of the operator $$P^T P$$ (its largest eigenvalue).
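Since the step size must satisfy $$\gamma < 2 / \norm{P^T P}$$, this norm is worth estimating. A standard way is power iteration; the sketch below uses a diagonal toy operator (the function name is illustrative, not a pipeline API):

```python
import numpy as np

def estimate_opnorm(P, n_iter=50):
    """Estimate ||P^T P||_2 (its largest eigenvalue) by power iteration."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(P.shape[1])
    for _ in range(n_iter):
        x = P.T @ (P @ x)
        x /= np.linalg.norm(x)
    # after convergence x is the top eigenvector, so ||P^T P x|| is the eigenvalue
    return np.linalg.norm(P.T @ (P @ x))

P = np.diag([3.0, 2.0, 1.0])  # toy operator: eigenvalues of P^T P are 9, 4, 1
L = estimate_opnorm(P)
gamma = 1.0 / L               # a safe step size, well below 2 / ||P^T P||
```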

It is well known that, in the projection operation, the components near the center of the Fourier polar grid (low frequencies) are over-represented with respect to those far from the center (high frequencies). A "ramp" filter re-weights the frequencies to compensate, which is the basis of the Filtered Back-Projection method.

Using this ramp filter in the iterative process, i.e. performing $$C P^T y$$ instead of $$P^T y$$, where $$C$$ is the ramp-filtering operation, amounts to using a preconditioner. The corresponding parameter (enabled by default) is

DO_PRECONDITION = 1 # Default value is 1


This dramatically increases the convergence rate: a few hundred iterations are required instead of thousands.

Note that if your projections are already filtered (for example, as part of the phase retrieval), this option has to be disabled.
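For intuition, here is a sketch of the ramp-filtering operation $$C$$ applied to one sinogram row in Fourier space. This is a plain NumPy version with the simple $$|f|$$ weighting; the filter actually used by the pipeline may include padding or apodization:

```python
import numpy as np

def ramp_filter(sino_row):
    """Re-weight a detector row by |f| in Fourier space (the C operation).

    Low frequencies, over-represented on the polar grid, are damped;
    high frequencies are amplified.
    """
    n = sino_row.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))  # the |f| weights along the detector axis
    return np.real(np.fft.ifft(np.fft.fft(sino_row) * ramp))

row = np.ones(64)            # only the zero-frequency component is present
filtered = ramp_filter(row)  # |f| vanishes at f = 0, so the row is annihilated
```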

## Speeding up the iterative rings correction

Recall that to enable the rings correction when using an iterative technique, the parameters are:

ITER_RING_HEIGHT = 50000 # Put a huge value
ITER_RING_SIZE = 1  # Only relevant for DL
ITER_RING_BETA = 0.5 # Weight of the "sparsity" of the rings
NUMBER_OF_RINGS = 1 # When using partial rings correction (only for DL)


The rings correction slows down the reconstruction process. The following parameter can be tuned to speed up convergence:

RING_ALPHA = 1.0 # Less than or equal to 1


The default value is 1. This is a preconditioner for the rings variables: the further RING_ALPHA is from 1.0, the stronger the preconditioning.