How to run a basic single-slice reconstruction#

raw data for this example#

The raw data for this example can be found in /scisoft/tomo_training/part1_basic_reco. You must copy them, and you can also remove the .nx files contained there (they will be recreated during step 1).

step 0: launch tomwer canvas#

Once you are on the cluster node and have sourced the python virtual environment, launch the canvas:

tomwer canvas
[1]:
from IPython.display import Video

Video("video/start_tomwer_canvas.mp4", embed=True, height=500)
[1]:

There are two use cases for now:

  • from a bliss acquisition. In this case we will convert from the bliss acquisition to NXtomo in step 1.

  • from edf. In this case you can use the native EDF folder directly as an NXtomo, or first convert it to NXtomo (looking at XXX for example) and then resume at step 2.

step1: from bliss acquisition to NXtomo#

For this we need to use the bliss(HDF5) - nxtomomill widget (alias h52nx-nxtomomill). h52nx icon

  • Add the widget from the left panel with a click on the left panel item or by a copy / paste.

  • Open the widget dialog with a double click on the freshly created canvas widget

  • Then add the bliss file from the select button or with a copy / paste into the dedicated dialog

  • you should see one line added to the dialog per NXtomo that will be created.

  • (Optional: connect it to another widget like the data list or data selector to see the output and make sure the conversion went well)

  • Press send all / send selected to trigger the conversion of the bliss file (other dialogs might appear if the NXtomo already exists)

note 1 : keep in mind that this can also be done manually using the command line interface (CLI) of nxtomomill. For more details please see the h52nx nxtomomill tutorial
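As a rough sketch of the CLI route (assuming nxtomomill is available in your environment; the file names below are placeholders for your own acquisition):

```shell
# Convert a bliss HDF5 acquisition into one or several NXtomo (.nx) files.
# Input and output names are placeholders.
nxtomomill h52nx my_bliss_acquisition.h5 my_scan.nx
```

Run `nxtomomill h52nx --help` for the full list of options.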

note 2 : for converting from edf to NXtomo you can do the same operation but using the ``edf2nx-nxtomomill`` widget. Have a look at https://tomotools.gitlab-pages.esrf.fr/nxtomomill/tutorials/edf2nx.html for CLI or advanced usage.
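The equivalent CLI sketch for EDF (again, the folder and output names are placeholders, not actual training files):

```shell
# Convert an EDF scan folder into a NXtomo (.nx) file.
nxtomomill edf2nx my_edf_scan_folder/ my_scan.nx
```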

note 3 : if no configuration file is provided then the default parameters will be used. Otherwise you can provide a configuration file: see the edf2nx widget video tutorial for details, which explains how to provide a file (the GUI mechanism is the same for edf and hdf5)

[2]:
from IPython.display import Video

Video("video/h52nx_example.mp4", embed=True, height=500)
[2]:

hand on - exercise A#

  • copy the raw data to a local workspace (like `/tmp_14_days/{your_name}`), or reuse them if they already exist

  • launch tomwer canvas

  • convert the bliss .h5 file to an NXtomo (.nx) using the appropriate widget
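The exercise steps above can be sketched as a shell session; the workspace path reuses the `/tmp_14_days/{your_name}` pattern from the first bullet, and `{your_name}` is a placeholder to replace:

```shell
# Copy the training data to a local workspace ({your_name} is a placeholder)
mkdir -p /tmp_14_days/{your_name}
cp -r /scisoft/tomo_training/part1_basic_reco /tmp_14_days/{your_name}/

# Launch the canvas, then convert the bliss .h5 file to a NXtomo (.nx)
# with the h52nx-nxtomomill widget as described in step 1
tomwer canvas
```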

step 2: define a basic workflow#

From raw data we will need:

  • reduced dark / flat widget reduced dark flat icon

    The NXtomo is expected to contain dark and flat frames. In order to apply the flat-field correction we first need to compute reduced darks and flats.

  • center of rotation (see ‘cor_search’ notebook for more information). reduced dark flat icon

    For the training we will use the ‘sino-coarse-to-fine’ algorithm, which provides a good estimation of the CoR. In the video we will validate the value manually, but we could lock the algorithm to avoid having to validate the value found.

  • nabu ‘slice’ nabu slice

    to reconstruct one slice. In the example we will also request Paganin phase retrieval to obtain a better reconstruction

  • data viewer data viewer

    to display the reconstructed slice (and browse the dataset)

note: a widget can be created from the left panel (with a mouse left click on the widget) or by creating a link from a node output (left click on the output and release it downstream)

[3]:
from IPython.display import Video

Video("video/create_dummy_workflow.mp4", embed=True, height=500)
[3]:

Step 3: run the workflow#

Now that you have an input and a basic workflow we can process it.

For this, simply ‘select’ the NXtomo created during step 1 from the scan selector. Processing should start; you have to wait until all processes are finished.

[4]:
from IPython.display import Video

Video("video/execute_dummy_workflow.mp4", embed=True, height=500)
[4]:

Note: you can monitor progress from the ‘object supervisor’ at the bottom of the window. If it is not visible you can display / hide it from the view / object supervisor option, as shown in the video

[5]:
from IPython.display import Video

Video("video/object_supervisor_display.mp4", embed=True, height=500)
[5]:

hand on - exercise B#

Reconstruct a slice of the NXtomo created during exercise A

save and share the workflow#

Once you are happy with your workflow you can save it (ctrl+s) and you will be able to load it next time.

Note: the dataset used will also be saved, so this can be a good way to share data processing with colleagues or to report a bug that is easy to reproduce