pynx-cdi-id01#

Documentation for the pynx-cdi-id01 command-line script to reconstruct objects from a CDI dataset, with default parameters for the Bragg CDI case.

Script to perform a CDI analysis on data from ID01@ESRF

See examples at the end.

usage: pynx-cdi-id01 [-h] --data DATA [--detector_distance DETECTOR_DISTANCE] [--pixel_size_detector PIXEL_SIZE_DETECTOR] [--wavelength WAVELENGTH] [--auto_center_resize] [--roi ROI [ROI ...]] [--flatfield FLATFIELD] [--mask MASK [MASK ...]] [--free_pixel_mask FREE_PIXEL_MASK]
                     [--iobs_saturation IOBS_SATURATION] [--mask_interp MASK_INTERP MASK_INTERP] [--confidence_interval_factor_mask_min CONFIDENCE_INTERVAL_FACTOR_MASK_MIN] [--confidence_interval_factor_mask_max CONFIDENCE_INTERVAL_FACTOR_MASK_MAX] [--rebin REBIN [REBIN ...]] [--max_size MAX_SIZE]
                     [--verbose VERBOSE] [--live_plot] [--plot_axis {0,1,2}] [--fig_num {0,1,2}] [--gpu GPU] [--mpi {scan,run}] [--nb_run NB_RUN] [--nb_run_keep NB_RUN_KEEP] [--nb_run_keep_max_obj2_out NB_RUN_KEEP_MAX_OBJ2_OUT] [--save {all,final,none}] [--save_plot] [--data2cxi [{crop}]]
                     [--output_format {cxi,npz,none}] [--note NOTE] [--instrument INSTRUMENT] [--sample_name SAMPLE_NAME] [--crop_output CROP_OUTPUT] [--object OBJECT] [--support SUPPORT] [--support_size SUPPORT_SIZE [SUPPORT_SIZE ...]] [--support_formula SUPPORT_FORMULA]
                     [--support_autocorrelation_threshold SUPPORT_AUTOCORRELATION_THRESHOLD [SUPPORT_AUTOCORRELATION_THRESHOLD ...]] [--support_threshold SUPPORT_THRESHOLD [SUPPORT_THRESHOLD ...]] [--support_threshold_method {max,rms,average}] [--support_only_shrink]
                     [--support_smooth_width_begin SUPPORT_SMOOTH_WIDTH_BEGIN] [--support_smooth_width_end SUPPORT_SMOOTH_WIDTH_END] [--support_smooth_width_relax_n SUPPORT_SMOOTH_WIDTH_RELAX_N] [--support_post_expand SUPPORT_POST_EXPAND [SUPPORT_POST_EXPAND ...]]
                     [--support_update_border_n SUPPORT_UPDATE_BORDER_N] [--support_update_period SUPPORT_UPDATE_PERIOD] [--support_fraction_min SUPPORT_FRACTION_MIN] [--support_fraction_max SUPPORT_FRACTION_MAX] [--support_threshold_auto_tune_factor SUPPORT_THRESHOLD_AUTO_TUNE_FACTOR] [--zero_mask ZERO_MASK]
                     [--positivity] [--beta BETA] [--gps_inertia GPS_INERTIA] [--gps_t GPS_T] [--gps_s GPS_S] [--gps_sigma_f GPS_SIGMA_F] [--gps_sigma_o GPS_SIGMA_O] [--nb_raar NB_RAAR] [--nb_hio NB_HIO] [--nb_er NB_ER] [--nb_ml NB_ML] [--detwin [DETWIN]] [--psf PSF] [--psf_filter {hann,tukey,none}]
                     [--algorithm ALGORITHM] [--detector DETECTOR] [--scan SCAN] [--specfile SPECFILE] [--imgcounter {mpx4inr,ei2mint}] [--imgname IMGNAME]

Input parameters#

--data
path to the data file, e.g. /some/dir/to/data. Recognized formats include .npy, .npz (if several arrays are included, iobs should be named 'data' or 'iobs'), .tif or .tiff (assumed to be a multi-frame tiff image), or .cxi (hdf5). An hdf5 file can be supplied with a path, e.g.:

'--data path/to/data.h5:entry/path/data'

Multiple files can be processed by combining with scan, e.g.:

  • --data my_data%04d.cxi scan=23,24,25

  • --data my_data%s.cxi scan=room_temperature

--detector_distance, --detectordistance, --distance

Detector distance in meters.

--pixel_size_detector

Detector pixel size (m)

--wavelength

Experiment wavelength in meters.

--auto_center_resize, --auto-crop, --autocrop

Automatically center and crop the input data

Default: True

--roi

Region-of-interest to be used for the reconstruction. This requires 4 (2D) or 6 (3D) values: xmin xmax ymin ymax [zmin zmax]. The area is taken with Python conventions, i.e. pixels with indices xmin <= x < xmax and ymin <= y < ymax. The final shape must also be suitable for the GPU FFT, i.e. each dimension must be a multiple of 2 and the largest number in its prime factor decomposition must be less than or equal to the largest value acceptable by vkFFT for a radix transform (<=13). If a size n does not fulfill these constraints, it will be reduced using the largest possible valid integer smaller than n. This option supersedes "--max_size" unless roi="auto". If this option is not used, the entire data array is used. Other possible value: "auto", which automatically selects the roi from the center of mass and the maximum possible size [default].
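For illustration only, here is a minimal Python sketch (not part of the script) of how an axis length n could be reduced to the nearest smaller even size whose largest prime factor is <= 13; the actual selection logic used by PyNX/vkFFT may differ:

```python
def largest_prime_factor(n):
    """Largest prime factor of n (n >= 2)."""
    p, largest = 2, 1
    while n > 1:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return largest

def reduce_fft_size(n):
    """Largest even m <= n whose prime factors are all <= 13 (assumed radix limit)."""
    m = n
    while m > 2 and (m % 2 or largest_prime_factor(m) > 13):
        m -= 1
    return m

print(reduce_fft_size(515))  # -> 512
```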

--flatfield, --flat_field

filename with a flatfield array by which the observed intensity will be multiplied. In order to preserve Poisson statistics, the average correction should be close to 1. Accepted file formats: .npy, .npz, .edf, .mat, .tif, .h5, .cxi. The first available array will be used if multiple are present. For an hdf5 file (.h5 or .cxi), the hdf5 path can be supplied, e.g. using '--flatfield path/to/flatfield.h5:entry/path/flat'; if the hdf5 path is not given, the first array with a correct shape and with 'flat' in the path is used. If the flatfield is 2D and the observed intensity 3D, the flatfield is repeated along the depth (first/slowest axis). Note that if no flatfield is given and '--mask maxipix1' or 'maxipix6' is used, a correction is automatically applied to the gap pixels (see the --mask parameter). [default=None, no flatfield correction]
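As a hedged example (file names are hypothetical), a flatfield could be prepared and normalised so that the average correction stays close to 1:

```python
import numpy as np

flat = np.load("raw_flatfield.npy")           # hypothetical detector flatfield
flat = flat / flat[np.isfinite(flat)].mean()  # keep the average correction ~1
np.savez("flatfield.npz", flat=flat)          # 'flat' in the name eases auto-detection

# then e.g.: pynx-cdi-id01 --data data.cxi --flatfield flatfield.npz ...
```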

--mask

Optional mask for the diffraction data. Choices:

  • --mask zero: all pixels with iobs <= 0 will be masked

  • --mask negative: all pixels with iobs < 0 will be masked

  • --mask maxipix: the maxipix gaps will be masked

  • --mask maxipix1: the maxipix gaps will be masked, except for the large pixels on each side of the gap, for which the intensity will be divided by 3

  • --mask maxipix6: the maxipix gaps will be linearly interpolated between the large pixels bordering the gap.

NaN-valued pixels in the observed intensity are always masked.

A filename can also be given (accepted formats: .npy, .npz, .edf, .mat, .tif, .h5, .cxi; the first available array will be used if multiple are present). For an hdf5 file (.h5 or .cxi), the hdf5 path can be supplied, e.g. using '--mask path/to/mask.h5:entry/path/mask'; if the hdf5 path is not given, the first array with a correct shape and with 'mask' in the path is used. When an imported array defines the mask, it follows the CXI convention: pixels = 0 are valid, > 0 are masked. If the mask is 2D and the data 3D, the mask is repeated for all frames along the first dimension (depth). Several masks can be combined, separated by a comma, e.g. '--mask maxipix dead.npz'
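For example, a custom mask following the CXI convention could be built as in this sketch (file names are hypothetical; the script performs the 2D-to-3D repetition itself, this only illustrates the convention):

```python
import numpy as np

iobs = np.load("iobs.npy")                 # hypothetical 3D intensity (nz, ny, nx)
dead = np.load("dead_pixels.npy")          # hypothetical 2D detector map

mask2d = (dead > 0).astype(np.int8)        # CXI convention: 0 = valid, > 0 = masked
mask3d = np.broadcast_to(mask2d, iobs.shape).copy()  # 2D mask repeated along the frames
mask3d[~np.isfinite(iobs)] = 1             # NaN-valued observations are always masked

np.savez("mask.npz", mask=mask3d)          # 'mask' in the array name helps auto-detection
```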

--free_pixel_mask

pixel mask for the free log-likelihood calculation. By default the free pixel mask is randomly initialised, but this option can be used to load a mask from a file for consistency between runs. It can be loaded from a .npy, .edf, .tif, .npz or .mat file; if .npz or .mat is used, the first array matching the iobs shape is used. A .cxi filename can also be used as input, in which case it is assumed that this is the result of a previous optimisation, and the mask will be loaded from '/entry_last/image_1/process_1/configuration/free_pixel_mask'

--iobs_saturation

saturation value for the observed intensity. Any pixel above this intensity will be masked

--mask_interp, --maskinterp

interpolate masked pixels from surrounding pixels, using an inverse distance weighting. The first number N indicates that the pixels used for interpolation range from i-N to i+N around pixel i along all dimensions. The second number n indicates that the weight is equal to 1/d**n for a pixel at a distance d. The interpolated values iobs_m are stored in memory as -1e19*(iobs_m+1) so that the algorithm knows these are not true observations, and they are applied with a large confidence interval. See --confidence_interval_factor_mask_min and --confidence_interval_factor_mask_max.
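A toy 1D sketch of the inverse-distance weighting idea (illustrative only, not the actual implementation): for a masked pixel i, neighbours from i-N to i+N contribute with weight 1/d**n, where d is their distance to i.

```python
import numpy as np

def idw_interp_1d(iobs, mask, N=16, n=2):
    """Toy inverse-distance-weighted interpolation of masked pixels (1D)."""
    out = iobs.astype(float).copy()
    for i in np.where(mask > 0)[0]:
        lo, hi = max(0, i - N), min(len(iobs), i + N + 1)
        idx = np.arange(lo, hi)
        valid = idx[mask[lo:hi] == 0]          # unmasked neighbours only
        if valid.size:
            w = 1.0 / np.abs(valid - i) ** n   # weight 1/d**n
            out[i] = np.sum(w * iobs[valid]) / np.sum(w)
    return out

# In the actual script, the interpolated values iobs_m are then stored
# as -1e19*(iobs_m+1) to flag them as non-observed data.
```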

--confidence_interval_factor_mask_min

When masked pixels are interpolated using mask_interp, the calculated values iobs_m are not true observations, and the amplitude projection is applied to them with a confidence interval equal to iobs_m*confidence_interval_factor_mask. Two values (min/max) must be given, normally around 1; this option sets the minimum factor.

Default: 0.5

--confidence_interval_factor_mask_max

When masked pixels are interpolated using mask_interp, the calculated values iobs_m are not true observations, and the amplitude projection is applied to them with a confidence interval equal to iobs_m*confidence_interval_factor_mask. Two values (min/max) must be given, normally around 1; this option sets the maximum factor.

Default: 1.2
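Conceptually (this is an illustration of the confidence interval, not the actual projection code), the calculated intensity at an interpolated pixel is only constrained to remain within [0.5*iobs_m, 1.2*iobs_m] with the default factors:

```python
import numpy as np

def clamp_interpolated_intensity(icalc, iobs_m, fmin=0.5, fmax=1.2):
    """Illustration only: keep the calculated intensity if it already lies inside
    [fmin*iobs_m, fmax*iobs_m], otherwise clamp it to the nearest bound."""
    return np.clip(icalc, fmin * iobs_m, fmax * iobs_m)
```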

--rebin, --binning

the experimental data can be rebinned (i.e. a group of n x n (x n) pixels is replaced by a single one whose intensity is equal to the sum of all the pixels). Both iobs and mask (if any) will be binned, but the support (if any) should correspond to the new size. The supplied pixel_size_detector should correspond to the original size. The rebin factor can also be supplied as one value per dimension, e.g. '--rebin 4 1 2'. Finally, instead of summing pixels with the given rebin size, it is possible to select a single sub-pixel by skipping instead of binning: e.g. using '--rebin 4 1 2 0 0 1', the extracted pixel will use the array slicing data[0::4,0::1,1::2].
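The two rebin modes can be written in NumPy terms as follows (a sketch with a toy array, matching the slicing shown above):

```python
import numpy as np

data = np.arange(4 * 6 * 8, dtype=float).reshape(4, 6, 8)   # toy 3D dataset

# '--rebin 2': replace each 2x2x2 block by the sum of its pixels
binned = data.reshape(2, 2, 3, 2, 4, 2).sum(axis=(1, 3, 5))

# '--rebin 4 1 2 0 0 1': skip instead of sum, keeping a single sub-pixel
skipped = data[0::4, 0::1, 1::2]
```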

--max_size, --maxsize

maximum size for the array used for analysis, along all dimensions. The data will be cropped to this value after binning and centering

Display parameters#

--verbose

print the evolution of the llk (and display the plot if 'live_plot' is set) every N cycles

Default: 50

--live_plot, --liveplot

display a live plot during the optimisation

Default: False

--plot_axis, --plotaxis

Possible choices: 0, 1, 2

for 3D data, this option allows choosing the axis of the 2D cut used for live_plot

Default: 0

--fig_num, --fignum

Possible choices: 0, 1, 2

Figure number for live plotting

Default: 1

Compute parameters#

--gpu

string matching GPU name (or part of it, case-insensitive)

--mpi

Possible choices: scan, run

when launching the script using mpiexec, this tells the script to either distribute the list of scans to different processes (mpi=scan), or split the runs among the different processes (mpi=run). If nb_run_keep is used, the results are merged before selecting the best ones.

Default: “splitscan”

Run parameters#

--nb_run, --nbrun

number of times to run the optimization

Default: 1

--nb_run_keep, --nbrunkeep

number of best run results to keep, according to likelihood statistics. This is only useful in combination with --nb_run [default: keep all run results]

--nb_run_keep_max_obj2_out

when performing multiple runs with nb_run_keep, if the fraction of the object's squared modulus outside the object support is larger than this value, the solution will be rejected regardless of the free log-likelihood

Default: 0.1

Output parameters#

--save

Possible choices: all, final, none

Option to save either only the final result (the default), the result after each algorithm step (comma-separated), or nothing. Note that the name of the saved file, including the LLK and LLK_free values, will be decided by the first step and not the final one when using '--save all'.

Default: “final”

--save_plot, --saveplot

Use this option to save an image of the object; for 3D objects this will show three perpendicular cuts.

Default: False

--data2cxi

Possible choices: crop

Option to save the raw data in CXI format (http://cxidb.org/cxi.html), with all the required information for a CDI experiment. If '--data2cxi crop' is used, the data will be saved after centering and cropping (the default is to save the raw data).

Default: False

--output_format

Possible choices: cxi, npz, none

output format for the final object and support. ‘none’ can be selected for testing and simulation

Default: “cxi”

--note

Optional text which will be saved as a note in the output CXI file, e.g.:

--note 'This dataset was measured… Citation: Journal of coherent imaging (2018), 729…'

--instrument

Name of the beamline/instrument used for data collection

Default: “ESRF id01”

--sample_name, --sample

Sample name

--crop_output

if > 0 (default: 4), the output data will be cropped around the final support plus 'crop_output' pixels. If 0, no cropping is performed.

Default: 4

Initial object and support parameters#

--object

starting object. Import object from .npy, .npz, .mat (the first available array will be used if multiple are present), CXI or hdf5 modes file. It is also possible to supply the random range for both amplitude and phase, using:

  • --object support,0.9,1,0.5: this will use random values over the initial support, with random amplitudes between 0.9 and 1 and phases with a 0.5 radians range

  • --object obj.npy,0.9,1,0.5: same but the random values will be multiplied by the loaded object.

By default the object will be defined as random values inside the support area.

--support

starting support. Different options are available:

  • --support sup.cxi: import the support from a .npy, .npz, .edf, .mat (the first available array will be used if multiple are present) or CXI/hdf5 file. Pixels > 0 are in the support, 0 outside. If the support shape is different from the Iobs array shape, it will be cropped or padded accordingly. If a CXI filename is given, the support will be searched in entry_last/image_1/mask or entry_1/image_1/mask, e.g. to load a support from a previous result file. For CXI and h5 files, the hdf5 path can also be given: --support path/to/data.h5:entry_1/image_1/data

  • --support auto: support will be estimated using auto-correlation

  • --support circle or --support square: the support will be initialised to a circle (sphere in 3D) or a square (cube).

  • --support object: the support will be initialised from the object using the given support threshold and smoothing parameters - this requires that the object itself is loaded from a file. The applied threshold is always relative to the max in this case.

  • --support object,0.1,1.0: same as above, but 0.1 will be used as the relative threshold and 1.0 the smoothing width just for this initialisation.

Default: “auto”

--support_size, --supportsize

size (radius or half-size) for the initial support, to be used in combination with the support type (circle, square...). The size is given in pixel units. Alternatively, one value can be given for each dimension, i.e. '--support_size 20 40' for 2D data, and '--support_size 20 40 60' for 3D data. This will result in an initial support which is a rectangle/parallelepiped or an ellipse/ellipsoid.

--support_formula, --supportformula

formula to compute the support. This should be an expression using the ix, iy, iz and ir (radius) pixel coordinates (all centred on the array) which can be evaluated to produce an initial support (1/True inside, 0/False outside). Mathematical functions should use the np. prefix (np.sqrt, ...). Examples (see also the sketch after this list):

  • --support_formula 'ir<100': sphere or circle of radius 100 pixels

  • --support_formula '(np.sqrt(ix**2+iy**2)<50)*(np.abs(iz)<100)': cylinder of radius 50 and height 200.
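A sketch of how such a formula maps onto centred pixel coordinates (2D case; the exact grid conventions used by the script may differ slightly):

```python
import numpy as np

ny, nx = 256, 256
iy, ix = np.meshgrid(np.arange(ny) - ny // 2,
                     np.arange(nx) - nx // 2, indexing='ij')
ir = np.sqrt(ix ** 2 + iy ** 2)

# "--support_formula 'ir<100'" would then evaluate to:
support = ir < 100          # True/1 inside the support, False/0 outside
```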

--support_autocorrelation_threshold

if no support is given, it will be estimated from the intensity auto-correlation, with this relative threshold. A range can also be given, e.g. ‘–support_autocorrelation_threshold 0.09 0.11’ and the actual threshold will be randomly chosen between the min and max.

Default: (0.09, 0.11)

--support_threshold

Threshold for the support update. Alternatively, two values can be given, and the threshold will be randomly chosen in the interval defined by those two values, e.g. '--support_threshold 0.20 0.28'. This is mostly useful in combination with --nb_run.

Default: 0.25

--support_threshold_method

Possible choices: max, rms, average

method used to determine the absolute threshold for the support update: either the 'max', 'average' or 'rms' value, taken over the support area/volume, after smoothing. Note that 'rms' and 'average' use the modulus or root-mean-square value normalised over the support area, so when the support shrinks, the threshold tends to increase. In contrast, the 'max' value tends to diminish as the optimisation progresses. 'rms' varies more slowly than 'average' and is thus more stable. In practice, using 'rms' or 'average' with too low a threshold can lead to divergence of the support (it becomes too large), especially if the initial support is too large.

Default: “rms”
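As an illustration of the three methods (assuming the smoothed object modulus and a boolean support array; the actual normalisation details in the script may differ):

```python
import numpy as np

def absolute_threshold(obj_abs, support, rel_threshold=0.25, method="rms"):
    """Illustration only: derive the absolute support threshold from the relative one."""
    v = obj_abs[support > 0]               # values over the current support
    if method == "max":
        ref = v.max()
    elif method == "average":
        ref = v.mean()
    else:                                  # "rms"
        ref = np.sqrt((v ** 2).mean())
    return rel_threshold * ref
```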

--support_only_shrink

Force the support to only shrink when updated.

Default: False

--support_smooth_width_begin

during support update, the object amplitude is convolved with a Gaussian whose size (sigma) decreases exponentially from support_smooth_width_begin to support_smooth_width_end from the first to the last RAAR or HIO cycle.

Default: 2

--support_smooth_width_end

during support update, the object amplitude is convolved with a Gaussian whose size (sigma) decreases exponentially from support_smooth_width_begin to support_smooth_width_end from the first to the last RAAR or HIO cycle.

Default: 0.5

--support_smooth_width_relax_n

Number of cycles over which the support smooth width will exponentially decrease from support_smooth_width_begin to support_smooth_width_end, and then stay constant. This is ignored if nb_hio, nb_raar, nb_er are used, and the number of cycles used is then the total number of HIO+RAAR cycles

Default: 500
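One plausible form of this exponential relaxation is sketched below (the exact functional form used by the script may differ):

```python
def smooth_width(cycle, begin=2.0, end=0.5, relax_n=500):
    """Gaussian sigma used for support smoothing at a given cycle: exponential
    (geometric) interpolation from `begin` to `end`, constant after relax_n cycles."""
    t = min(cycle, relax_n) / relax_n
    return begin * (end / begin) ** t
```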

--support_post_expand

after the support has been updated using a threshold, it can be shrunk or expanded by a few pixels, either once or multiple times, e.g. in order to 'clean' the support (see also the sketch after this list):

  • --support_post_expand=1 will expand the support by 1 pixel

  • --support_post_expand=-1 will shrink the support by 1 pixel

  • --support_post_expand=-1,1 will shrink and then expand the support by 1 pixel

  • --support_post_expand=-2,3 will shrink and then expand the support by 2 and 3 pixels

  • --support_post_expand=2,-4,2 will expand/shrink/expand the support by 2, 4 and 2 pixels

Note that while it is possible to supply positive values separated by space, e.g. --support_post_expand 2, for negative values it is necessary to use --support_post_expand=-1 or --support_post_expand=-2,1 otherwise the negative number will be mis-interpreted as a command-line option.
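In image-processing terms, a post-expand sequence amounts to successive binary erosions (negative values) and dilations (positive values) of the support, e.g. as in this sketch using scipy:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def post_expand(support, steps=(-1, 1)):
    """Sketch of '--support_post_expand=-1,1': shrink by 1 pixel, then expand by 1."""
    s = support.astype(bool)
    for n in steps:
        if n < 0:
            s = binary_erosion(s, iterations=-n)
        elif n > 0:
            s = binary_dilation(s, iterations=n)
    return s
```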

--support_update_border_n

Value to restrict the support update around the previous one: the only pixels affected by the support update must lie within +/- N pixels of the outer border of the previous support.

Default: 0

--support_update_period

Update the support every N cycles. If 0, the support is never updated.

Default: 50

--support_fraction_min

if the fraction of points inside the support falls below this value, the run is aborted, and a few attempts will be made to restart the run by dividing the support_threshold by support_threshold_auto_tune_factor.

Default: 0.0001

--support_fraction_max

If the fraction of points inside the support becomes larger than this value, the run is aborted, and a few tries will be made to restart the run by multiplying the support_threshold by support_threshold_auto_tune_factor.

Default: 0.7

--support_threshold_auto_tune_factor

the factor by which the support threshold will be changed if the support diverges.

Default: 1.1

--zero_mask, --zeromask

by default, masked pixels are free and keep the calculated intensities during HIO, RAAR, ER and CF cycles. Setting this flag will force all masked pixels to zero intensity. While not recommended, this may be more stable with a large number of masked pixels and low-intensity diffraction data. If a value is supplied, the following options can be used:

  • --zero_mask 0: masked pixels are free and keep the calculated complex amplitudes

  • --zero_mask 1: masked pixels are set to zero

  • --zero_mask auto: this is only meaningful when using a 'standard' algorithm (see below). The masked pixels will be set to zero during the first 60% of the HIO/RAAR cycles, and will be free during the last 40% and the ER cycles.

Default: “auto”

Algorithm: general parameters#

--positivity

Use this option to bias the algorithms towards a real, positive object. The object is still complex-valued, but a random start will begin with real values.

Default: False

--beta

Beta value for the HIO/RAAR algorithm

Default: 0.9

--gps_inertia

Inertia parameter for the GPS algorithm

Default: 0.05

--gps_t

T parameter for the GPS algorithm

Default: 1.0

--gps_s

S parameter for the GPS algorithm

Default: 0.9

--gps_sigma_f

Fourier sigma parameter for the GPS algorithm

Default: 0

--gps_sigma_o

Object sigma parameter for the GPS algorithm

Default: 0

Algorithm: standard run parameters - either use this or --algorithm#

--nb_raar

number of relaxed averaged alternating reflections cycles, which the algorithm will use first. During RAAR and HIO, the support is updated regularly

Default: 600

--nb_hio

number of hybrid input/output cycles, which the algorithm will use after RAAR. During RAAR and HIO, the support is updated regularly

Default: 0

--nb_er

number of error reduction cycles, performed after HIO/RAAR, without support update

Default: 200

--nb_ml

number of maximum-likelihood conjugate gradient cycles to perform after ER

Default: 0

--detwin

Using this option, 10 cycles will be performed at 25% of the total number of RAAR or HIO cycles, with a support cut in half to bias towards one twin image

Default: True

--psf

this will trigger the activation of a point-spread function kernel to model partial coherence (and the detector PSF), after 66% of the total number of HIO and RAAR cycles have been reached. The following options are possible for the psf:

  • --psf pseudo-voigt,1,0.05,10: use a pseudo-Voigt profile with FWHM 1 pixel and eta=0.05, i.e. 5% Lorentzian and 95% Gaussian, then update every 10 cycles (using 0 for the last number prevents the PSF update)

  • --psf gaussian,1.5,10: start with a Gaussian of FWHM 1.5 pixel, and update it every 10 cycles

  • --psf lorentzian,0.6,10: start with a Lorentzian of FWHM 0.6 pixel, and update it every 10 cycles

  • --psf path/to/file.npz: load the PSF array from an npz file. The loaded array should be named 'psf' or 'PSF', otherwise the first array found will be used. The PSF must be centred in the array, and will be resized if necessary (so it can be cropped). The PSF will be updated every 5 cycles

  • --psf /path/to/file.cxi: load the PSF from a previous result CXI file. The PSF will be updated every 5 cycles

  • --psf /path/to/file.cxi,10: same as before, update the PSF every 10 cycles

  • --psf /path/to/file.cxi,0: same as before, do not update the PSF

Recommended values:

  • for highly coherent datasets: no PSF

  • with some partial coherence: --psf pseudo-voigt,1,0.05,20

You can try different widths to start (from 0.25 to 2), and change the eta mixing parameter.

Default: False
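For reference, a 1D sketch of the pseudo-Voigt profile referred to above, i.e. a Lorentzian/Gaussian mixture of common FWHM with mixing parameter eta (the actual PSF kernel used by the script is 2D or 3D and centred in the array):

```python
import numpy as np

def pseudo_voigt_1d(x, fwhm=1.0, eta=0.05):
    """eta * Lorentzian + (1 - eta) * Gaussian, both with the same FWHM
    (illustration of '--psf pseudo-voigt,1,0.05,10')."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-x ** 2 / (2 * sigma ** 2))
    lorentz = (fwhm / 2) ** 2 / (x ** 2 + (fwhm / 2) ** 2)
    return eta * lorentz + (1 - eta) * gauss

x = np.linspace(-5, 5, 101)
kernel = pseudo_voigt_1d(x)
kernel /= kernel.sum()      # normalise if used as a convolution kernel
```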

--psf_filter

Possible choices: hann, tukey, none

the PSF update filter to avoid divergence. [EXPERIMENTAL, don’t use]

Algorithm: custom operator chain#

--algorithm, --algo

Custom algorithm, e.g. --algorithm 'ER**50,(Sup*ER**5*HIO**50)**10': give a specific sequence of algorithms and/or parameters to be used for the optimisation (note: this string is case-insensitive). Important: 1) when supplied from the command line, there should be NO SPACE in the expression, and if there are parentheses in the expression, quotes are required around the algorithm string; 2) the string and operators are applied from right to left.

Valid changes of individual parameters include (see detailed description above):

  • positivity=0 or 1

  • support_only_shrink=0 or 1

  • beta=0.7

  • live_plot=0 (no display) or an integer number N to trigger plotting every N cycle

  • support_update_period=0 (no update) or a positive integer number

  • support_smooth_width_begin=2.0

  • support_smooth_width_end=0.5

  • support_smooth_width_relax_n=500

  • support_threshold=0.25

  • support_threshold_mult=0.7 (multiply the threshold by 0.7)

  • support_threshold_method=max or average or rms

  • support_post_expand=-1#2 (in this case the commas are replaced by ‘#’ for parsing)

  • zero_mask=0 or 1

  • psf=5: will update the PSF every 5 cycles (and init PSF if necessary with the default pseudo-Voigt of FWHM 1 pixels and eta=0.1)

  • psf_init=gaussian@1.5: initialise the PSF with a Gaussian of FWHM 1.5 pixels

  • psf_init=lorentzian@0.5: initialise the PSF with a Lorentzian of FWHM 0.5 pixels

  • psf_init=pseudo-voigt@1@0.1: initialise the PSF with a pseudo-Voigt of FWHM 1 pixels and eta=0.1 Note that using psf_init automatically triggers updating the PSF every 5 cycles, unless it has already been set using ‘psf=…’

  • verbose=20

  • fig_num=1: change the figure number for plotting

Valid basic operators include:

  • ER: Error Reduction

  • HIO: Hybrid Input/Output

  • RAAR: Relaxed Averaged Alternating Reflections

  • DetwinHIO: HIO with a half-support (along first dimension)

  • DetwinHIO1: HIO with a half-support (along second dimension)

  • DetwinHIO2: HIO with a half-support (along third dimension)

  • DetwinRAAR: RAAR with a half-support (along first dimension)

  • DetwinRAAR1: RAAR with a half-support (along second dimension)

  • DetwinRAAR2: RAAR with a half-support (along third dimension)

  • CF: Charge Flipping

  • ML: Maximum Likelihood conjugate gradient (incompatible with partial coherence PSF)

  • FAP: FourierApplyAmplitude - Fourier transform to detector space, apply the observed amplitudes, and transform back to object space.

  • Sup or SupportUpdate: update the support according to the support_* parameters

  • ShowCDI: display the object and the calculated/observed intensity. This can be used to trigger the plot at specific steps, instead of plotting regularly with live_plot=N. It is thus best used with live_plot=0

Examples of algorithm strings, where steps are separated by commas (and NO SPACE!) and are applied from right to left. Operations within a given step are also applied mathematically from right to left, and **N means repeating the operation to the left of the exponent N times (N cycles):

  • --algorithm HIO : single HIO cycle

  • --algorithm ER**100 : 100 cycles of ER

  • --algorithm ER**50*HIO**100 : 100 cycles of HIO, followed by 50 cycles of ER

  • --algorithm ER**50,HIO**100 : 100 cycles of HIO, followed by 50 cycles of ER. The result is the same as the previous example, the difference between using ‘*’ and ‘,’ when switching from HIO to ER is mostly cosmetic as the process will separate the two algorithmic steps explicitly when using a ‘,’ , which can be slightly slower. Moreover, when using ‘–save all’, the different steps will be saved as different entries in the CXI file.

  • --algorithm 'ER**50,(Sup*ER**5*HIO**50)**10': 10 times [50 HIO + 5 ER + Support update], followed by 50 ER

  • --algorithm 'ER**50,verbose=1,(Sup*ER**5*HIO**50)**10,verbose=100,HIO**100': change the periodicity of verbose output

  • --algorithm 'ER**50,(Sup*ER**5*HIO**50)**10,support_post_expand=1,(Sup*ER**5*HIO**50)**10,support_post_expand=-1#2,HIO**100': same but change the post-expand (wrap) method

  • --algorithm 'ER**50,(Sup*ER**5*HIO**50)**5,psf=5,(Sup*ER**5*HIO**50)**10,HIO**100': activate the partial coherence PSF after a first series of algorithm steps

  • --algorithm 'ER**50,(Sup*HIO**50)**4,psf=5,(Sup*HIO**50)**8': typical algorithm steps with partial coherence

  • --algorithm 'ER**50,(Sup*HIO**50)**4,(Sup*HIO**50)**4,positivity=0,(Sup*HIO**50)**8,positivity=1': same as previous but starting with positivity constraint, removed at the end.

ID01 parameters#

--detector

detector used, e.g. either 'mpx1x4' or 'eiger2M'. This is normally auto-detected if data from only one detector is present.

--scan

scan number in the bliss (or spec) file. Alternatively, a list or range of scans can be given. Examples:

  • --scan 11

  • --scan 12,23,45

  • --scan 'range(12,25)' (note the quotes)

OBSOLETE ID01 parameters#

--specfile

/some/dir/to/specfile.spec: path to specfile [obsolete]

--imgcounter

Possible choices: mpx4inr, ei2mint

spec counter name for the image number [only for SPEC data - obsolete]

--imgname

/dir/to/images/prefix%05d.edf.gz: images location with prefix (for SPEC data) [default: will be extracted from the ULIMA_mpx4 entry in the spec scan header]

Examples:
  • pynx-cdi-id01 --data alignment_S2280.cxi --liveplot: just using default parameters

  • pynx-cdi-id01 --data alignment_S2280.cxi --liveplot --support_threshold 0.2: changing the support threshold

  • pynx-cdi-id01 --data alignment_S2280.cxi --liveplot --support_threshold 0.2 0.3 --nb_run 10 --nb_run_keep 5: Generate 10 solutions with random support threshold between 0.2 and 0.3, and keep only the 5 best.