
# Dictionary-based reconstruction

This tutorial explains in detail how to use Dictionary Learning (DL) reconstruction in PyHST2. This kind of reconstruction is especially useful when you have a high-quality reconstruction of a sample and want to reconstruct another, similar sample acquired with a lower dose. The idea is to “learn” a dictionary (a set of image patches) on the high-quality volume, then use this learned dictionary to reconstruct other, similar volumes.

The learning process is described here: Building a dictionary. To get started, you can use the following dictionary: /tmp_14_days/paleo/patcheslena.h5

## Using DL reconstruction

To use the Dictionary-based technique, set the parameter

DENOISING_TYPE = 4


The principle is similar to Total Variation regularization: a regularization parameter promotes sparsity (here, in the dictionary basis). This parameter is also named BETA_TV. However, DL has additional parameters:

PATCHES_FILE = patches.h5
STEPFORPATCHES = 4
WEIGHT_OVERLAP = 2.0


The parameter PATCHES_FILE is the location of the dictionary (in HDF5 format). To get started, you can use the following: /tmp_14_days/paleo/patcheslena.h5

The parameter STEPFORPATCHES tunes the overlap between patches. For an $$8 \times 8$$ dictionary, STEPFORPATCHES = 4 leads to a $$\frac{1}{2}$$ overlap. Unless you have a custom dictionary, this value is seldom changed.
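To see what this overlap means concretely, here is a small standalone sketch (the helper below is ours, not a PyHST2 function): with a patch width of 8 and a step of 4, every interior pixel is covered by 2 patches along each axis, i.e. a one-half overlap.

```python
def patch_coverage(length, patch_size, step):
    """Count, for each pixel of a 1D axis, how many patches cover it."""
    coverage = [0] * length
    for start in range(0, length - patch_size + 1, step):
        for p in range(start, start + patch_size):
            coverage[p] += 1
    return coverage

# Patch width 8, step 4: interior pixels are each covered by 2 patches.
cov = patch_coverage(32, 8, 4)
print(cov[8:24])
```

With STEPFORPATCHES equal to the patch width (step 8), each pixel would belong to exactly one patch, i.e. no overlap at all.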

The parameter WEIGHT_OVERLAP tunes the penalty applied when non-similar patches are mapped to the same area of the image. A high WEIGHT_OVERLAP leads to a smoother result.
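Schematically (in our notation, which is not taken from the PyHST2 manual), DL reconstruction can be viewed as minimizing a functional of the form

$$\min_{x,\, \{w_i\}} \; \frac{1}{2} \left\| P x - d \right\|_2^2 \;+\; \frac{\rho}{2} \sum_i \left\| R_i x - D w_i \right\|_2^2 \;+\; \beta \sum_i \left\| w_i \right\|_1$$

where $$P$$ is the projection operator, $$d$$ the measured data, $$R_i$$ extracts the $$i$$-th patch from the image $$x$$, $$D$$ is the dictionary, and $$w_i$$ are the sparse patch coefficients. In this reading, $$\beta$$ corresponds to BETA_TV and $$\rho$$ plays the role of WEIGHT_OVERLAP (a higher penalty forces overlapping patch estimates to agree, hence a smoother image); this mapping is our interpretation of the parameters described above.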

The typical parameter-tuning process is the following:
• Build your dictionary, or use an available dictionary (preferably learned on a similar volume), and provide its path via PATCHES_FILE

• Tune ITERATIVE_CORRECTIONS to determine the number of iterations. If this value is too small, the algorithm will not reach the “converged” solution. On the other hand, too high a value wastes iterations. The convergence indicator is the energy displayed on stdout. Typical values, when the preconditioner is enabled, are 1000 iterations for FISTA and 500 for CSG.

• Tune BETA_TV to have a less “noisy” reconstruction while preserving the features

• Tune WEIGHT_OVERLAP to have a smoother reconstruction
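Putting the steps above together, a parameter-file fragment for DL reconstruction might look like this (the BETA_TV and ITERATIVE_CORRECTIONS values are illustrative starting points, not recommendations):

```
DENOISING_TYPE = 4
PATCHES_FILE = /tmp_14_days/paleo/patcheslena.h5
STEPFORPATCHES = 4
WEIGHT_OVERLAP = 2.0
BETA_TV = 0.1                  # illustrative; tune for your data
ITERATIVE_CORRECTIONS = 1000   # typical for FISTA with preconditioner
```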

## Choice of the optimization algorithm

DL reconstruction comes with two available optimization algorithms: FISTA and Conjugate Subgradient (CSG). The latter has a much faster convergence rate but higher memory consumption. The algorithm is specified with the keyword OPTIM_ALGORITHM:

OPTIM_ALGORITHM = 1 # FISTA


will use the FISTA solver, while

OPTIM_ALGORITHM = 5 # CSG


will invoke the Conjugate Subgradient algorithm.
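To give an idea of what the FISTA solver does, here is a generic, self-contained sketch of FISTA on a toy $$\ell_1$$-regularized least-squares problem. This is our illustration of the algorithm family, not PyHST2's implementation, which operates on projection data and dictionary patches rather than on a dense matrix.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, beta, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + beta*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, beta / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum step
        t, x = t_new, x_new
    return x

# Toy problem: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = fista(A, b, beta=0.1)
print(np.round(x_hat[:5], 2))
```

The momentum step on `y` is what distinguishes FISTA from plain proximal gradient descent (ISTA) and gives its faster convergence rate.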