workflow#
- class pydidas.workflow.workflow_results.WorkflowResults(diffraction_exp_context: None | DiffractionExperiment = None, scan_context: None | Scan = None, workflow_tree: None | ProcessingTree = None)#
A class for handling composite data from multiple plugins.
This class handles Datasets from each plugin in the WorkflowTree. Results are referenced by the node ID of the data’s producer.
Warning: Users should generally only use the WorkflowResults singleton, and never use the _WorkflowResults directly unless explicitly required.
- Parameters:
scan_context (Union[Scan, None], optional) – The scan context. If None, the generic context will be used. Only specify this, if you explicitly require a different context. The default is None.
diffraction_exp_context (Union[DiffractionExp, None], optional) – The diffraction experiment context. If None, the generic context will be used. Only specify this, if you explicitly require a different context. The default is None.
workflow_tree (Union[WorkflowTree, None], optional) – The WorkflowTree. If None, the generic WorkflowTree will be used. Only specify this, if you explicitly require a different context. The default is None.
- clear_all_results()#
Clear all internally stored results and reset the instance attributes.
- property data_labels: dict#
Return the data labels of the different plugins in the form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: label>
- Return type:
dict
- property data_units: dict#
Return the data units of the different plugins in the form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: unit>
- Return type:
dict
- property frozen_exp: DiffractionExperiment#
Get the frozen instance of the DiffractionExperiment context.
- Returns:
The DiffractionExperiment at the time of processing.
- Return type:
DiffractionExperiment
- property frozen_scan: Scan#
Get the frozen instance of the Scan context.
- Returns:
The Scan at the time of processing.
- Return type:
Scan
- property frozen_tree: ProcessingTree#
Get the frozen instance of the WorkflowTree context.
- Returns:
The WorkflowTree at the time of processing.
- Return type:
WorkflowTree
- get_node_result_metadata_string(node_id: int, use_scan_timeline: bool = False, squeeze_results: bool = True) str #
Get the edited metadata from WorkflowResults as a formatted string.
- Parameters:
node_id (int) – The node ID of the active node.
use_scan_timeline (bool, optional) – Flag whether to reduce the scan dimensions to a single timeline. The default is False.
squeeze_results (bool, optional) – Flag whether to squeeze the results (i.e. remove all dimensions of length 1) from the data. The default is True.
- Returns:
The formatted string with a representation of all the metadata.
- Return type:
str
- get_result_metadata(node_id: int, use_scan_timeline: bool = False) dict #
Get the stored metadata for the results of the specified node.
- Parameters:
node_id (int) – The node ID identifier.
use_scan_timeline (bool, optional) – Flag to collapse all scan dimensions into a single timeline.
- Returns:
A dictionary with the metadata stored using the “axis_labels”, “axis_ranges”, “axis_units” and “metadata” keys.
- Return type:
dict
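As a quick orientation, the sketch below shows the layout of the returned dictionary. The axis labels, units, and ranges are invented for illustration only; they are not taken from an actual workflow.

```python
# Hypothetical example of the layout returned by get_result_metadata.
# All concrete values below are made up for illustration.
metadata = {
    "axis_labels": {0: "scan point", 1: "2theta"},
    "axis_units": {0: "", 1: "deg"},
    "axis_ranges": {0: [0, 1, 2], 1: [5.0, 5.1, 5.2]},
    "metadata": {},
}

# The axis entries are keyed by dimension index, one entry per data dimension.
expected_keys = {"axis_labels", "axis_ranges", "axis_units", "metadata"}
```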
- get_result_ranges(node_id: int) dict #
Get the data ranges for the requested node id.
- Parameters:
node_id (int) – The node ID for which the result ranges should be returned.
- Returns:
The dictionary with the ranges with dimension keys and ranges values.
- Return type:
dict
- get_result_subset(node_id: int, slices: tuple, flattened_scan_dim: bool = False, squeeze: bool = False) Dataset #
Get a sliced subset of the results for a node_id.
- Parameters:
node_id (int) – The node ID for which results should be returned.
slices (tuple) – The tuple used for slicing/indexing the np.ndarray.
flattened_scan_dim (bool, optional) – Keyword to process flattened Scan dimensions. If True, the Scan is assumed to be 1-d only and the first slice item will be used for the Scan whereas the remaining slice items will be used for the resulting data. The default is False.
squeeze (bool) – Keyword to squeeze dimensions of length 0 or 1. The default is False.
- Returns:
The subset of the results.
- Return type:
Dataset
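The flattened_scan_dim keyword changes how the slices tuple is interpreted: the first item addresses the (flattened) scan timeline and the remaining items address the data dimensions. A minimal sketch of this split (split_slices is a hypothetical helper, not part of the pydidas API):

```python
def split_slices(slices, flattened_scan_dim=False):
    """Illustrative only: with flattened_scan_dim, the first slice item
    selects the scan timeline and the rest select the data dimensions."""
    if flattened_scan_dim:
        return slices[0], slices[1:]
    # without a flattened scan, all items index the full result array
    return None, slices

# scan point #4, all of the first data dimension, index 7 of the second:
scan_slice, data_slices = split_slices((4, slice(None), 7), flattened_scan_dim=True)
```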
- get_results(node_id: int) Dataset #
Get the combined results for the requested node_id.
- Parameters:
node_id (int) – The node ID for which results should be returned.
- Returns:
The combined results of all frames for a specific node.
- Return type:
Dataset
- get_results_for_flattened_scan(node_id: int, squeeze: bool = False) Dataset #
Get the combined results for the requested node_id with all scan dimensions flattened into a timeline.
- Parameters:
node_id (int) – The node ID for which results should be returned.
squeeze (bool, optional) – Keyword to toggle squeezing of data dimensions of the final dataset. If True, all dimensions with a length of 1 will be removed. The default is False.
- Returns:
The combined results of all frames for a specific node.
- Return type:
Dataset
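Flattening the scan dimensions into a single timeline amounts to row-major index arithmetic: a multi-dimensional scan position maps to one timeline index. A sketch of this mapping (timeline_index is a hypothetical helper for illustration, not the pydidas implementation):

```python
def timeline_index(scan_indices, scan_shape):
    """Illustrative only: map multi-dimensional scan indices to the single
    flattened timeline index, assuming row-major (C-style) ordering."""
    index = 0
    for i, n in zip(scan_indices, scan_shape):
        index = index * n + i
    return index

# In a 3 x 5 scan, point (1, 2) sits at timeline position 1 * 5 + 2 = 7.
pos = timeline_index((1, 2), (3, 5))
```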
- import_data_from_directory(directory: str | Path)#
Import data from a directory.
- Parameters:
directory (Union[pathlib.Path, str]) – The input directory with the exported pydidas results.
- property ndims: dict#
Return the number of dimensions of the results in form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: n_dim>
- Return type:
dict
- property node_labels: dict#
Return the labels of the results in form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: label>
- Return type:
dict
- prepare_files_for_saving(save_dir: str | Path, save_formats: str, overwrite: bool = False, single_node: None | int = None)#
Prepare the required files and directories for saving.
Note that the directory needs to be empty (or non-existing) if the overwrite keyword is not set.
- Parameters:
save_dir (Union[str, pathlib.Path]) – The basepath for all saved data.
save_formats (str) – A string of all formats to be written. Individual formats can be separated by comma (“,”), ampersand (”&”) or slash (“/”) characters.
overwrite (bool, optional) – Flag to enable overwriting of existing files. The default is False.
single_node (Union[None, int], optional) – Keyword to select a single node. If None, all nodes will be selected. The default is None.
- Raises:
FileExistsError – If the directory exists and is not empty and overwrite is not enabled.
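The save_formats string accepts the three documented separator characters. A sketch of splitting such a string (split_formats is a hypothetical helper, not the pydidas implementation; the format names are made up):

```python
import re

def split_formats(save_formats):
    """Illustrative only: split a format string on the documented comma,
    ampersand and slash separators and strip surrounding whitespace."""
    return [fmt.strip() for fmt in re.split(r"[,&/]", save_formats) if fmt.strip()]

# all three separators may be mixed freely:
formats = split_formats("HDF5, TIFF & CSV/PNG")
```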
- property result_titles: dict#
Return the result titles for all node IDs in form of a dictionary.
- Returns:
The result titles in the form of a dictionary with <node_id: result_title> entries.
- Return type:
dict
- save_results_to_disk(save_dir: str | Path, *save_formats: tuple[str], overwrite: bool = False, node_id: None | int = None)#
Save results to disk.
By default, this method saves all results to disk using the specified formats and directory. Note that the directory needs to be empty (or non-existing) if the overwrite keyword is not set.
Results from a single node can be saved by passing a value for the node_id keyword.
- Parameters:
save_dir (Union[str, pathlib.Path]) – The basepath for all saved data.
save_formats (tuple[str]) – Strings of all formats to be written. Individual formats can also be given in a single string if they are separated by comma (“,”), ampersand (”&”) or slash (“/”) characters.
overwrite (bool, optional) – Flag to enable overwriting of existing files. The default is False.
node_id (Union[None, int], optional) – The node ID for which data shall be saved. If None, this defaults to all nodes. The default is None.
- property shapes: dict#
Return the shapes of the results in form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: results_shape>
- Return type:
dict
- property source_hash: int#
Get the source hash from the input WorkflowTree and ScanContext.
- Returns:
The hash value of the combined input data.
- Return type:
int
- store_results(index: int, results: dict)#
Store results from one scan point in the WorkflowResults.
Note: If write_to_disk is enabled, please be advised that this may slow down the WorkflowResults.
- Parameters:
index (int) – The index of the scan point.
results (dict) – The results as dictionary with entries of the type <node_id: array>.
- update_frame_metadata(metadata: dict)#
Manually supply metadata for the non-scan dimensions of the results and update the stored metadata.
- Parameters:
metadata (dict) – The metadata in form of a dictionary with node ID keys and dict items containing the axis_units, axis_labels, and axis_scales keys with the associated data.
- update_param_choices_from_labels(param: Parameter, add_no_selection_entry: bool = True)#
Store the current WorkflowResults node labels in the specified Parameter’s choices.
A neutral entry of “No selection” can be added with the optional flag.
- Parameters:
param (pydidas.core.Parameter) – The Parameter to be updated.
add_no_selection_entry (bool, optional) – Flag to add an entry of no selection in addition to the entries from the nodes. The default is True.
- update_shapes_from_scan_and_workflow()#
Update the shape of the results from the metadata of the Scan and WorkflowTree singletons.
The workflow package defines classes to create and manage the workflow and to import / export it.
- class pydidas.workflow.GenericNode(**kwargs: dict)#
The GenericNode class is used by trees to manage connections between items.
- add_child(child: Self)#
Add a child to the node.
This method will add the reference to a child to the current node.
- Parameters:
child (object) – The child object to be registered.
- change_node_parent(new_parent: Self)#
Change the parent of the selected node.
- Parameters:
new_parent (Union[pydidas.workflow.GenericNode, None]) – The new parent of the node.
- property children: list[Self]#
Get the list of children.
- Returns:
The list of children.
- Return type:
list
- property children_ids: list[int]#
Get the list of children IDs.
- Returns:
The list of children IDs.
- Return type:
list[int]
- connect_parent_to_children()#
Connect the node’s parent to the node’s children.
- Raises:
UserConfigError – If the node has no parent but multiple children.
- copy() Self #
Get a copy of the Node.
- Returns:
The nodes’s copy.
- Return type:
- deepcopy() Self #
Get a deep copy of the node.
- Returns:
The node's copy.
- Return type:
GenericNode
- delete_node_references(recursive: bool = True)#
Delete all references to the node from its parent and children.
If the node has a parent, the reference to itself is removed from the parent. If the node has children, references to these children are removed as well. Using the recursive keyword, this will be done for the whole branch of nodes starting with itself.
- Parameters:
recursive (bool, optional) – Keyword to toggle recursive delete of the node’s children as well. The default is True.
- Raises:
RecursionError – If the node has children but recursive is False, a recursion error will be raised. This prevents the children from becoming separated from the tree structure.
- get_children() list #
Get the child objects.
This method will return the child objects themselves.
- Returns:
A list with the children.
- Return type:
list
- get_recursive_connections() List[int] #
Get recursive connections between the node and all children.
This method returns the recursive connection between a node and its children (and all further descendants) in the form of a list of entries with node_ids of parent and child.
- Returns:
conns – A list with entries in the form of [parent.node_id, child.node_id] for all descendants from the current node.
- Return type:
list
- get_recursive_ids() List[int] #
Get the node ids of the node and all children in its branch.
This method will return a list of all node_ids for the current node and all its children (recursively) to be able to select the complete branch for an operation.
- Returns:
res – A list of all node_ids for the node and all children on its branch.
- Return type:
list
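The recursive-ID traversal described above can be sketched with a minimal stand-in node class (illustration only; this is not the pydidas GenericNode, just a toy model of the documented behavior):

```python
class Node:
    """Toy stand-in for GenericNode, showing depth-first collection of
    the node's own id followed by the ids of all descendants."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.children = []

    def add_child(self, child):
        self.children.append(child)

    def get_recursive_ids(self):
        # the node's own id first, then each child branch in order
        ids = [self.node_id]
        for child in self.children:
            ids.extend(child.get_recursive_ids())
        return ids

root = Node(0)
child = Node(1)
root.add_child(child)
child.add_child(Node(2))
root.add_child(Node(3))
```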
- property is_leaf: bool#
Check if node has children.
This method will check if the node has children and return the result.
- Returns:
True if the node has no children, else False.
- Return type:
bool
- property n_children: int#
Get the number of children.
This property will return the number of children registered in the node.
- Returns:
The number of children.
- Return type:
int
- property node_id: None | int#
Get the node_id.
- Returns:
node_id – The node_id.
- Return type:
Union[None, int]
- property parent: None | Self#
Get the node’s parent.
- Returns:
parent – The parent node.
- Return type:
Union[GenericNode, None]
- property parent_id: None | int#
Get the parent’s node ID.
- Returns:
The parent’s nodeID or None if parent is None
- Return type:
Union[None, int]
- remove_child_reference(child: Self)#
Remove reference to an object from the node.
This method will remove the reference to the child but not delete the child itself. Note: This method’s main use is to allow children to un-register themselves from their parents before deletion and should not be called by the user.
- Parameters:
child (GenericNode) – The child instance.
- Raises:
ValueError – If the referenced child is not included in the node’s children.
- class pydidas.workflow.GenericTree(**kwargs: dict)#
A generic tree used for organising items.
- property active_node: GenericNode | None#
Get the active node.
If no node has been selected or the tree is empty, None will be returned.
- Returns:
The active node.
- Return type:
Union[pydidas.workflow.GenericNode, None]
- property active_node_id: int#
Get the active node ID.
- Returns:
The id of the active node.
- Return type:
Union[int, None]
- change_node_parent(node_id: int, new_parent_id: int)#
Change the parent of the selected node.
- Parameters:
node_id (int) – The id of the selected node.
new_parent_id (int) – The id of the selected node’s new parent.
- clear()#
Clear all items from the tree.
- copy() Self #
Get a copy of the WorkflowTree.
While this method is of limited use in the main application (because the WorkflowTree is a singleton), it is required to pass working copies of the tree to other processes in multiprocessing.
- Returns:
A new instance of the WorkflowTree
- Return type:
pydidas.workflow.WorkflowTree
- deepcopy() Self #
Get a copy of the WorkflowTree.
While this method is of limited use in the main application (because the WorkflowTree is a singleton), it is required to pass working copies of the tree to other processes in multiprocessing.
- Returns:
A new instance of the WorkflowTree
- Return type:
pydidas.workflow.WorkflowTree
- delete_node_by_id(node_id: int, recursive: bool = True, keep_children: bool = False)#
Remove a node from the tree and delete its object.
This method deletes a node from the tree. With the optional recursive keyword, node children will be deleted as well. With the keep_children keyword, children will be connected to the node’s parent. Note that ‘recursive’ and ‘keep_children’ are mutually exclusive.
Note: If you deselect the recursive option but the node has children, a RecursionError will be raised by the node itself upon the deletion request.
- Parameters:
node_id (int) – The id of the node to be deleted.
recursive (bool, optional) – Keyword to toggle deletion of the node’s children as well. The default is True.
keep_children (bool, optional) – Keyword to keep the nodes children (and connect them to the node’s parent). The default is False.
- get_all_leaves() list #
Get all tree nodes which are leaves.
- Returns:
A list of all leaf nodes.
- Return type:
list
- get_new_nodeid() int #
Get a new integer node id.
This method returns the next unused integer node id. Note that node ids will not be re-used, i.e. the number of nodes is ultimately limited by the integer namespace.
- Returns:
The new node id.
- Return type:
int
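Because node ids are never re-used, the next id is one past the highest id ever assigned, not the smallest free one. A sketch of this allocation rule (new_node_id is a hypothetical helper, not the pydidas implementation):

```python
def new_node_id(used_ids):
    """Illustrative only: ids are never re-used, so 'gaps' left by deleted
    nodes are skipped and the next id follows the highest one assigned."""
    return max(used_ids) + 1 if used_ids else 0

# a gap at id 1 (e.g. after deleting that node) is not filled:
next_id = new_node_id({0, 2})
```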
- get_node_by_id(node_id: int) GenericNode #
Get the node from the node_id.
- Parameters:
node_id (int) – The node_id of the registered node.
- Returns:
The node object registered as node_id.
- Return type:
GenericNode
- order_node_ids()#
Order the node ids of all of the tree’s nodes.
- register_node(node: GenericNode, node_id: None | int = None, check_ids: bool = True)#
Register a node with the tree.
This method will register a node with the tree. It will add the node to the managed nodes and it will check any supplied node_ids for consistency with the node_id namespace. If no node_id is supplied, a new one will be generated. Note: Creation of new node_ids should be left to the tree. While it is not possible to create duplicates, it is possible to create unused “gaps” in the node_ids. This is not an issue by itself but not good practice.
- Parameters:
node (GenericNode) – The node object to be registered.
node_id (Union[None, int], optional) – A supplied node_id. If None, the tree will select the next suitable node_id automatically. The default is None.
check_ids (bool, optional) – Keyword to enable/disable node_id checking. By default, this should always be on if called by the user. If node trees are added to the GenericTree, the check will only be performed once for the newly added node and not again during registering of its children. The default is True.
- reset_tree_changed_flag()#
Reset the “has changed” flag for this Tree.
- set_root(node: GenericNode)#
Set the tree root node.
Note that this method will remove any references to the old parent in the node!
- Parameters:
node (GenericNode) – The node to become the new root node
- property tree_has_changed: bool#
Get a flag which tells whether the Tree has changed since the last flag reset.
- Returns:
The has changed flag.
- Return type:
bool
- static verify_node_type(node)#
Check that the node is a GenericNode.
- Parameters:
node (object) – The object to be checked.
- Raises:
TypeError – If the node is not a GenericNode.
- class pydidas.workflow.PluginPositionNode(**kwargs: dict)#
The PluginPositionNode class manages the sizes and positions of items in a tree.
This class only manages the position data without any reference to actual widgets.
- get_relative_positions(accuracy: int = 3) dict #
Get the relative positions of the node and all children.
This method will generate a dictionary with keys corresponding to the node_ids and the relative positions of children with respect to the parent node.
- Parameters:
accuracy (int, optional) – The accuracy of the position results.
- Returns:
pos – A dictionary with entries of the type “node_id: [xpos, ypos]”.
- Return type:
dict
- property height: float#
Get the height of the current branch.
This property will return the height of the current tree branch (this node and all children).
- Returns:
The height of the tree branch.
- Return type:
float
- property width: float#
Get the width of the current branch.
This property will return the width of the current tree branch (this node and all children).
- Returns:
The width of the tree branch.
- Return type:
float
- class pydidas.workflow.ProcessingTree(**kwargs: dict)#
ProcessingTree is a subclassed GenericTree with support for running a plugin chain.
Access to ProcessingTrees within pydidas should not normally be through direct class instances but through the WorkflowTree singleton instance.
- property active_plugin_header: str#
Get the header description of the active plugin.
- Returns:
The description. If no active plugin has been selected, an empty string will be returned.
- Return type:
str
- create_and_add_node(plugin: BasePlugin, parent: None | int | BasePlugin = None, node_id: None | int = None) int #
Create a new node and add it to the tree.
If the tree is empty, the new node is set as root node. If no parent is given, the node will be created as child of the latest node in the tree.
- Parameters:
plugin (pydidas.Plugin) – The plugin to be added to the tree.
parent (Union[WorkflowNode, int, None], optional) – The parent node of the newly created node. If an integer, this will be interpreted as the node_id of the parent and the respective parent will be selected. If None, this will select the latest node in the tree. The default is None.
node_id (Union[int, None], optional) – The node ID of the newly created node, used for referencing the node in the WorkflowTree. If not specified (i.e. None), the WorkflowTree will create a new node ID. The default is None.
- Returns:
node_id – The node ID of the added node.
- Return type:
int
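The parent-selection rule described above (no explicit parent means "child of the latest node") can be sketched with a toy tree model (illustration only; this is not the pydidas ProcessingTree, and it omits plugins entirely):

```python
class MiniTree:
    """Toy model of create_and_add_node's parent selection: if no parent
    is given, the new node becomes a child of the latest node; the first
    node added to an empty tree becomes the root (parent None)."""

    def __init__(self):
        self.nodes = {}      # node_id -> parent_id (None for the root)
        self.latest = None

    def create_and_add_node(self, parent=None):
        node_id = len(self.nodes)
        self.nodes[node_id] = parent if parent is not None else self.latest
        self.latest = node_id
        return node_id

tree = MiniTree()
a = tree.create_and_add_node()           # empty tree -> becomes root
b = tree.create_and_add_node()           # no parent -> child of latest (a)
c = tree.create_and_add_node(parent=a)   # explicit parent -> child of a
```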
- execute_process(arg: object, **kwargs: dict)#
Execute the process defined in the WorkflowTree for data analysis.
- Parameters:
arg (object) – Any argument that needs to be passed to the plugin chain.
**kwargs (dict) – Any keyword arguments which need to be passed to the plugin chain.
- execute_process_and_get_results(arg: object, **kwargs: dict) dict #
Execute the WorkflowTree process and get the results.
- Parameters:
arg (object) – Any argument that needs to be passed to the plugin chain.
**kwargs (dict) – Any keyword arguments which need to be passed to the plugin chain.
- Returns:
results – A dictionary with results in the form of entries with node_id keys and results items.
- Return type:
dict
- execute_single_plugin(node_id: int, arg: object, **kwargs: dict) tuple[object, dict] #
Execute a single node Plugin and get the return.
- Parameters:
node_id (int) – The ID of the node in the tree.
arg (object) – The input argument for the Plugin.
**kwargs (dict) – Any keyword arguments for the Plugin execution.
- Raises:
KeyError – If the node ID is not registered.
- Returns:
res (object) – The return value of the Plugin. Depending on the plugin, it can be a single value or an array.
kwargs (dict) – The (updated) kwargs dictionary.
- export_to_file(filename: str | Path, **kwargs: dict)#
Export the WorkflowTree to a file using any of the registered exporters.
- Parameters:
filename (Union[str, pathlib.Path]) – The filename of the file with the export.
- export_to_list_of_nodes() list[dict] #
Export the Tree to a representation of all nodes in form of dictionaries.
- Returns:
The list with a dictionary entry for each node.
- Return type:
list[dict]
- export_to_string() str #
Export the Tree to a simplified string representation.
- Returns:
The string representation.
- Return type:
str
- get_all_nodes_with_results() list #
Get all tree nodes which have results associated with them.
These are all leaf nodes in addition to all nodes which have been flagged with the “keep data” flag.
- Returns:
A list of all nodes with results.
- Return type:
list
- get_all_result_shapes(force_update: bool = False) dict #
Get the shapes of all leaves in form of a dictionary.
- Parameters:
force_update (bool, optional) – Keyword to enforce a new calculation of the result shapes. The default is False.
- Raises:
UserConfigError – If the ProcessingTree has no nodes.
- Returns:
shapes – A dict with entries of type {node_id: shape} with node_ids of type int and shapes of type tuple.
- Return type:
dict
- get_complete_plugin_metadata(force_update: bool = False) dict #
Get the metadata (e.g. shapes, labels, names) for all of the tree’s plugins.
- Parameters:
force_update (bool, optional) – Keyword to enforce a new calculation of the result shapes. The default is False.
- Returns:
The dictionary with the metadata.
- Return type:
dict
- get_consistent_and_inconsistent_nodes() tuple[list, list] #
Get the consistency flags for all plugins in the WorkflowTree.
- Returns:
list – List with the IDs of consistent nodes
list – List with the IDs of nodes with inconsistent data
- import_from_file(filename: str | Path)#
Import the ProcessingTree from a configuration file.
- Parameters:
filename (Union[str, pathlib.Path]) – The filename which holds the ProcessingTree configuration.
- prepare_execution(forced: bool = False)#
Prepare the execution of the ProcessingTree.
This method calls all the nodes’ prepare_execution methods. If the tree has not changed, it will skip this method unless the forced keyword is set to True.
- Parameters:
forced (bool, optional) – Flag to force running the prepare_execution method. The default is False.
- replace_node_plugin(node_id: int, new_plugin: BasePlugin)#
Replace the plugin of the selected node with the new Plugin.
- Parameters:
node_id (int) – The node ID of the node to be replaced.
new_plugin (pydidas.plugins.BasePlugin) – The instance of the new Plugin.
- restore_from_list_of_nodes(list_of_nodes: list | tuple)#
Restore the ProcessingTree from a list of Nodes with the required information.
- Parameters:
list_of_nodes (list) – A list of nodes with a dictionary entry for each node holding all the required information (plugin_class, node_id and plugin Parameters).
- restore_from_string(string: str)#
Restore the ProcessingTree from a string representation.
This method will accept string representations written with the “export_to_string” method.
- Parameters:
string (str) – The representation.
- set_root(node: WorkflowNode)#
Set the tree root node.
Note that this method will remove any references to the old parent in the node!
- Parameters:
node (WorkflowNode) – The node to become the new root node
- update_from_tree(tree: Self)#
Update this tree from another ProcessingTree instance.
The main use of this method is to keep the referenced ProcessingTree object alive while updating it.
- Parameters:
tree (ProcessingTree) – A different ProcessingTree.
- class pydidas.workflow.WorkflowNode(**kwargs: dict)#
A subclassed GenericNode with an added plugin attribute.
The WorkflowNode allows executing plugins individually or in a full workflow chain through the WorkflowTree.
- consistency_check() bool #
Check whether the data is consistent.
- Returns:
Flag whether the parent’s output is consistent with this node’s input dimensionality.
- Return type:
bool
- dump() dict #
Dump the node to a savable format.
The dump includes information about the parent and children nodes but not the node objects themselves. References to the node IDs are stored to allow reconstruction of the tree. Note: This dump is not recursive and will only save references to the child layer of the node.
- Returns:
The dict with all required information about the node.
- Return type:
dict
- execute_plugin(arg: Dataset | int, **kwargs: dict)#
Execute the plugin associated with the node.
- Parameters:
arg (Union[Dataset, int]) – The argument which needs to be passed to the plugin.
**kwargs (dict) – Any keyword arguments which need to be passed to the plugin.
- Returns:
results (tuple) – The result of the plugin.execute method.
kws (dict) – Any keywords required for calling the next plugin.
- execute_plugin_chain(arg: Dataset | int, **kwargs: dict)#
Execute the full plugin chain recursively.
This method will call the plugin.execute method and pass the results to the node’s children and call their execute_plugin_chain methods. Note: No result callback is intended. It is assumed that plugin chains are responsible for saving their own data at the end of the processing.
- Parameters:
arg (Union[Dataset, int]) – The argument which needs to be passed to the plugin.
**kwargs (dict) – Any keyword arguments which need to be passed to the plugin.
- property node_id: int | None#
Get the node_id.
Note: This property needs to be reimplemented to allow a subclassed node_id.setter in the WorkflowNode.
- Returns:
node_id – The node_id.
- Return type:
Union[None, int]
- prepare_execution()#
Prepare the execution of the plugin chain.
This method recursively calls the pre_execute methods of all (child) plugins.
- propagate_shapes_and_global_config()#
Calculate the Plugin’s result shape results and push it to the node’s children.
- propagate_to_children()#
Propagate the global binning and ROI to the children.
- property result_shape#
Get the result shape of the plugin, if it has been calculated yet.
- Returns:
Returns the shape of the Plugin’s results, if it has been calculated. Else, returns None.
- Return type:
Union[tuple, None]
- update_plugin_result_data_shape()#
Update the result shape from the Plugin’s input shape and legacy operations.
- class pydidas.workflow.WorkflowResults(diffraction_exp_context: None | DiffractionExperiment = None, scan_context: None | Scan = None, workflow_tree: None | ProcessingTree = None)#
A class for handling composite data from multiple plugins.
This class handles Datasets from each plugin in the WorkflowTree. Results are referenced by the node ID of the data’s producer.
Warning: Users should generally only use the WorkflowResults singleton, and never use the _WorkflowResults directly unless explicitly required.
- Parameters:
scan_context (Union[Scan, None], optional) – The scan context. If None, the generic context will be used. Only specify this, if you explicitly require a different context. The default is None.
diffraction_exp_context (Union[DiffractionExp, None], optional) – The diffraction experiment context. If None, the generic context will be used. Only specify this, if you explicitly require a different context. The default is None.
workflow_tree (Union[WorkflowTree, None], optional) – The WorkflowTree. If None, the generic WorkflowTree will be used. Only specify this, if you explicitly require a different context. The default is None.
- clear_all_results()#
Clear all internally stored results and reset the instance attributes.
- property data_labels: dict#
Return the data labels of the different Plugins to in form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: label>
- Return type:
dict
- property data_units: dict#
Return the data units of the different Plugins to in form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: label>
- Return type:
dict
- property frozen_exp: DiffractionExperiment#
Get the frozen instance of the DiffractionExperiment context.
- Returns:
The DiffractionExperiment at the time of processing.
- Return type:
- property frozen_scan: Scan#
Get the frozen instance of the Scan context.
- Returns:
The Scan at the time of processing.
- Return type:
- property frozen_tree: <pydidas.core.singleton_factory.SingletonFactory object at 0x7ff8cdc1aa50>#
Get the frozen instance of the WorkflowTree context.
- Returns:
The WorkflowTree at the time of processing.
- Return type:
WorkflowTree
- get_node_result_metadata_string(node_id: int, use_scan_timeline: bool = False, squeeze_results: bool = True) str #
Get the edited metadata from WorkflowResults as a formatted string.
- Parameters:
node_id (int) – The node ID of the active node.
use_scan_timeline (bool, optional) – The flag whether to reduce the scan dimensions to a single timeline. The default is False.
squeeze_results (bool, optional) – Flag whether to squeeze the results (i.e. remove all dimensions of length 1) from the data. The default is True.
- Returns:
The formatted string with a representation of all the metadata.
- Return type:
str
- get_result_metadata(node_id: int, use_scan_timeline: bool = False) dict #
Get the stored metadata for the results of the specified node.
- Parameters:
node_id (int) – The node ID identifier.
use_scan_timeline (bool, optional) – Flag to collapse all scan dimensions into a single timeline.
- Returns:
A dictionary with the metadata stored using the “axis_labels”, “axis_ranges”, “axis_units” and “metadata” keys.
- Return type:
dict
- get_result_ranges(node_id: int) dict #
Get the data ranges for the requested node id.
- Parameters:
node_id (int) – The node ID for which the result ranges should be returned.
- Returns:
The dictionary with the ranges with dimension keys and ranges values.
- Return type:
dict
- get_result_subset(node_id: int, slices: tuple, flattened_scan_dim: bool = False, squeeze: bool = False) Dataset #
Get a slices subset of a node_id result.
- Parameters:
node_id (int) – The node ID for which results should be returned.
slices (tuple) – The tuple used for slicing/indexing the np.ndarray.
flattened_scan_dim (bool, optional) – Keyword to process flattened Scan dimensions. If True, the Scan is assumed to be 1-d only and the first slice item will be used for the Scan whereas the remaining slice items will be used for the resulting data. The default is False.
squeeze (bool) – Keyword to squeeze dimensions of length 0 or 1. The default is False.
- Returns:
The subset of the results.
- Return type:
Dataset
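The flattened_scan_dim behavior can be illustrated with plain NumPy (the array shape and slices below are hypothetical, chosen only for this sketch):

```python
import numpy as np

# Hypothetical results: a scan of 12 points, each producing a 100 x 80 frame.
results = np.zeros((12, 100, 80))

# With flattened_scan_dim=True, the Scan is treated as 1-d: the first slice
# item addresses the scan timeline and the remaining items the data axes.
slices = (slice(0, 5), slice(None), slice(10, 20))
subset = results[slices]
assert subset.shape == (5, 100, 10)
```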
- get_results(node_id: int) Dataset #
Get the combined results for the requested node_id.
- Parameters:
node_id (int) – The node ID for which results should be returned.
- Returns:
The combined results of all frames for a specific node.
- Return type:
Dataset
- get_results_for_flattened_scan(node_id: int, squeeze: bool = False) Dataset #
Get the combined results for the requested node_id with all scan dimensions flattened into a single timeline.
- Parameters:
node_id (int) – The node ID for which results should be returned.
squeeze (bool, optional) – Keyword to toggle squeezing of data dimensions of the final dataset. If True, all dimensions with a length of 1 will be removed. The default is False.
- Returns:
The combined results of all frames for a specific node.
- Return type:
Dataset
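Conceptually, flattening the scan dimensions into a timeline corresponds to reshaping the leading (scan) axes into one axis. A minimal NumPy sketch with a hypothetical 2-d scan:

```python
import numpy as np

# Hypothetical 4 x 3 scan grid, each point yielding a 50-channel result:
results = np.zeros((4, 3, 50))

# Collapsing both scan dimensions into a single timeline of 4 * 3 = 12 points:
timeline = results.reshape(-1, 50)
assert timeline.shape == (12, 50)
```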
- import_data_from_directory(directory: str | Path)#
Import data from a directory.
- Parameters:
directory (Union[pathlib.Path, str]) – The input directory with the exported pydidas results.
- property ndims: dict#
Return the number of dimensions of the results in the form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: n_dim>
- Return type:
dict
- property node_labels: dict#
Return the labels of the results in the form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: label>
- Return type:
dict
- prepare_files_for_saving(save_dir: str | Path, save_formats: str, overwrite: bool = False, single_node: None | int = None)#
Prepare the required files and directories for saving.
Note that the directory needs to be empty (or non-existing) if the overwrite keyword is not set.
- Parameters:
save_dir (Union[str, pathlib.Path]) – The basepath for all saved data.
save_formats (str) – A string of all formats to be written. Individual formats can be separated by comma (“,”), ampersand (”&”) or slash (“/”) characters.
overwrite (bool, optional) – Flag to enable overwriting of existing files. The default is False.
single_node (Union[None, int], optional) – Keyword to select a single node. If None, all nodes will be selected. The default is None.
- Raises:
FileExistsError – If the directory exists and is not empty and overwrite is not enabled.
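The format-string convention and the empty-directory requirement can be sketched as follows. Note that split_formats is a hypothetical helper written for this illustration (not a pydidas function), and the format names are made up:

```python
import os
import re
import tempfile

def split_formats(save_formats):
    """Split a format string on comma, ampersand or slash (hypothetical helper)."""
    return [item.strip() for item in re.split(r"[,&/]", save_formats) if item.strip()]

assert split_formats("HDF5 & ASCII, npy") == ["HDF5", "ASCII", "npy"]

# Unless overwrite is enabled, the target directory must be empty (or absent):
with tempfile.TemporaryDirectory() as save_dir:
    if os.listdir(save_dir):
        raise FileExistsError(f"{save_dir} exists and is not empty.")
```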
- property result_titles: dict#
Return the result titles for all node IDs in the form of a dictionary.
- Returns:
The result titles in the form of a dictionary with <node_id: result_title> entries.
- Return type:
dict
- save_results_to_disk(save_dir: str | Path, *save_formats: tuple[str], overwrite: bool = False, node_id: None | int = None)#
Save results to disk.
By default, this method saves all results to disk using the specified formats and directory. Note that the directory needs to be empty (or non-existing) if the overwrite keyword is not set.
Results from a single node can be saved by passing a value for the node_id keyword.
- Parameters:
save_dir (Union[str, pathlib.Path]) – The basepath for all saved data.
save_formats (tuple[str]) – Strings of all formats to be written. Individual formats can also be given in a single string if they are separated by comma (“,”), ampersand (“&”) or slash (“/”) characters.
overwrite (bool, optional) – Flag to enable overwriting of existing files. The default is False.
node_id (Union[None, int], optional) – The node ID for which data shall be saved. If None, this defaults to all nodes. The default is None.
- property shapes: dict#
Return the shapes of the results in the form of a dictionary.
- Returns:
A dictionary with entries of the form <node_id: results_shape>
- Return type:
dict
- property source_hash: int#
Get the source hash from the input WorkflowTree and ScanContext.
- Returns:
The hash value of the combined input data.
- Return type:
int
- store_results(index: int, results: dict)#
Store results from one scan point in the WorkflowResults.
Note: If write_to_disk is enabled, be advised that this may slow down the WorkflowResults.
- Parameters:
index (int) – The index of the scan point.
results (dict) – The results as dictionary with entries of the type <node_id: array>.
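A minimal sketch of what storing per-scan-point results could look like; the preallocated composite dictionary, node ID, and array shapes are assumptions for illustration, not the actual pydidas internals:

```python
import numpy as np

# Assumed storage: one preallocated array per node ID, with the scan
# timeline as the leading axis (12 scan points, 50-channel results).
composite = {1: np.zeros((12, 50))}

def store_results(index, results):
    """Insert one scan point's results at the given timeline index (sketch)."""
    for node_id, array in results.items():
        composite[node_id][index] = array

store_results(3, {1: np.ones(50)})
assert composite[1][3].sum() == 50
```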
- update_frame_metadata(metadata: dict)#
Manually supply metadata for the non-scan dimensions of the results and update the stored metadata.
- Parameters:
metadata (dict) – The metadata in the form of a dictionary with node ID keys and dict values containing the axis_units, axis_labels, and axis_scales keys with the associated data.
- update_param_choices_from_labels(param: Parameter, add_no_selection_entry: bool = True)#
Store the current WorkflowResults node labels in the specified Parameter’s choices.
A neutral entry of “No selection” can be added with the optional flag.
- Parameters:
param (pydidas.core.Parameter) – The Parameter to be updated.
add_no_selection_entry (bool, optional) – Flag to add an entry of no selection in addition to the entries from the nodes. The default is True.
- update_shapes_from_scan_and_workflow()#
Update the shape of the results from the singleton classes’ metadata.
- class pydidas.workflow.WorkflowResultsSelector(*args: tuple, **kwargs: dict)#
The WorkflowResultsSelector class allows selecting a subset of results from a full WorkflowResults node.
- Parameters:
parent (QtWidgets.QWidget) – The parent widget.
select_results_param (pydidas.core.Parameter) – The select_results Parameter instance. This instance should be shared between the WorkflowResultsSelector and the parent.
**kwargs (dict) –
Optional keyword arguments. Supported kwargs are:
- workflow_results (WorkflowResults, optional) –
The WorkflowResults instance to use. If not specified, this will default to the WorkflowResultsContext.
- property active_dims: list[int, ...]#
Get the active dimensions (i.e. dimensions with more than one entry).
- Returns:
The active dimensions.
- Return type:
list
- get_best_index_for_value(value: float, valrange: ndarray) int #
Get the index which is the closest match to the selected value from a range.
- Parameters:
value (float) – The target value.
valrange (np.ndarray) – The array with all values.
- Returns:
index – The index with the best match.
- Return type:
int
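A plausible one-line implementation of this nearest-match lookup, shown as a sketch (not necessarily the actual pydidas code):

```python
import numpy as np

def get_best_index_for_value(value, valrange):
    """Return the index of the entry in valrange closest to value (sketch)."""
    return int(np.argmin(np.abs(valrange - value)))

assert get_best_index_for_value(0.26, np.array([0.0, 0.1, 0.2, 0.3])) == 3
```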
- reset()#
Reset the instance to its default selection, for example when a new processing has been started and the old information is no longer valid.
- select_active_node(index: int)#
Select the active node.
- Parameters:
index (int) – The new node index.
- property selection: tuple[slice, ...]#
Get the current selection object.
- Returns:
The selection for slicing the WorkflowResults array.
- Return type:
tuple
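The returned tuple of slices can be applied directly to a results array. A hypothetical 3-d example (shapes and slice bounds are illustrative):

```python
import numpy as np

# Hypothetical results array and a selection as returned by the property:
results = np.arange(24).reshape(2, 3, 4)
selection = (slice(0, 1), slice(None), slice(1, 3))
assert results[selection].shape == (1, 3, 2)
```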