trepr.processing module¶
Data processing functionality.
Key to reproducible science is automatic documentation of each processing step applied to the data of a dataset. Each processing step is self-contained, meaning it contains all information necessary to perform the processing task on a given dataset.
Processing steps, in contrast to analysis steps (see trepr.analysis for details), not only operate on data of a trepr.dataset.Dataset, but change its data. The information necessary to reproduce each processing step gets added to the trepr.dataset.Dataset.history attribute of a dataset.
Due to the inheritance from the aspecd.processing module, all processing steps provided are fully self-documenting, i.e. they add all information necessary to reproduce each processing step to the trepr.dataset.Dataset.history attribute of the dataset.
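The pattern described above can be illustrated with a minimal, self-contained sketch. Note that this is not the actual trepr/ASpecD API, merely an illustration of how a processing step carries its parameters and how a dataset records each step in its history:

```python
import datetime


class Dataset:
    """Toy stand-in for a dataset with a history of processing steps."""

    def __init__(self, data):
        self.data = data
        self.history = []  # records of every processing step applied

    def process(self, step):
        step.process(self)
        # Store everything needed to reproduce the step later on
        self.history.append({
            "step": type(step).__name__,
            "parameters": dict(step.parameters),
            "date": datetime.datetime.now().isoformat(),
        })


class ScalarAdd:
    """Toy processing step: add a scalar value to all data points."""

    def __init__(self, value):
        self.parameters = {"value": value}

    def process(self, dataset):
        dataset.data = [x + self.parameters["value"] for x in dataset.data]


dataset = Dataset([1, 2, 3])
dataset.process(ScalarAdd(42))
# dataset.history now documents the step applied and its parameters
```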
Concrete processing steps¶
This module provides a series of processing steps that can be divided into those specific for TREPR data and those generally applicable to spectroscopic data and directly inherited from the ASpecD framework.
What follows is a list as a first overview. For details, see the detailed documentation of each of the classes, readily accessible via the links.
Processing steps specific for TREPR data¶
A number of processing steps are rather specific for TREPR data, namely correcting DC offsets, background, and microwave frequency:

Correct for DC offsets of TREPR data

Subtract background, mainly laser-induced field-independent background

Correct for the same microwave frequency, necessary to compare measurements
General processing steps inherited from the ASpecD framework¶
Besides the processing steps specific for TREPR data, a number of further processing steps that are generally applicable to spectroscopic data have been inherited from the underlying ASpecD framework:

Normalise data.
There are different kinds of normalising data: maximum, minimum, amplitude, area

Perform scalar algebraic operation on one dataset.
Operations available: add, subtract, multiply, divide (by given scalar)

Perform scalar algebraic operation on axis values of a dataset.
Operations available: add, subtract, multiply, divide, power (by given scalar)

Perform scalar algebraic operation on two datasets.
Operations available: add, subtract

Project data, i.e. reduce dimensions along one axis.

Extract slice along one or more dimensions from dataset.

Correct baseline of dataset.

Average data over given range along given axis.

Filter data.
Further processing steps implemented in the ASpecD framework can be used as well, by importing the respective modules. In case of recipe-driven data analysis, simply prefix the kind with aspecd:

- kind: aspecd.processing
  type: <ClassNameOfProcessingStep>

Implementing own processing steps is rather straightforward. For details, see the documentation of the aspecd.processing module.
Module documentation¶

class trepr.processing.PretriggerOffsetCompensation¶
Bases: aspecd.processing.SingleProcessingStep
Correct for DC offsets of TREPR data.
Usually the first processing step after recording TREPR data is to compensate for DC offsets due to experimental instabilities. This is done by setting the average of the pretrigger part of the time trace to zero (pretrigger offset compensation). At the same time, this will remove any background signals of stable paramagnetic species, as they would appear as DC offset as well.

parameters¶
All parameters necessary for this step.

zeropoint_index : int
    Index of the time axis corresponding to t = 0.
    Will be automatically detected during processing.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In the simplest case, just invoke the pretrigger offset compensation with default values:

- kind: processing
  type: PretriggerOffsetCompensation
This will correct your data accordingly and should always be the first step when processing and analysing TREPR data.
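The underlying operation can be sketched in a few lines of NumPy. The data array and the zeropoint index below are hypothetical; as noted above, the actual class detects the zeropoint automatically:

```python
import numpy as np

# Hypothetical 2D TREPR data: rows are field positions, columns time points,
# recorded with a constant DC offset of 0.3 plus some noise.
rng = np.random.default_rng(0)
data = 0.3 + rng.normal(scale=0.01, size=(5, 100))
zeropoint_index = 20  # index on the time axis corresponding to t = 0

# Set the average of the pretrigger part (t < 0) of each transient to zero
pretrigger_mean = data[:, :zeropoint_index].mean(axis=1, keepdims=True)
corrected = data - pretrigger_mean
```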


class trepr.processing.BackgroundCorrection¶
Bases: aspecd.processing.SingleProcessingStep
Subtract background, mainly laser-induced field-independent background.
When the laser hits the EPR cavity, this usually introduces a field-independent absorptive background signal that needs to be subtracted from the data.
Depending on the spectrometer control and measurement software used, this background signal can get automatically subtracted already during the measurement. More often, it needs to be done afterwards, and therefore, it is crucial to record the TREPR data with sufficient baseline at both ends of the magnetic field range to allow for reliable background correction.

parameters¶
All parameters necessary for this step.

num_profiles : list
    Number of time profiles (transients) to use from the lower and upper end of the magnetic field axis.
    If two values are provided, a linear regression will be performed between lower and upper end and the background subtracted accordingly. If only a scalar (or a list with one element) is provided, the background traces from the lower magnetic field position are used.
    Default: [5, 5]
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In the simplest case, just invoke the background correction with default values:

- kind: processing
  type: BackgroundCorrection
This will correct your data accordingly.
If you would like to control more carefully the transients (time profiles) used to obtain the background signal, you can set the respective parameters. Suppose you would want to use only the first 10 transients from the lower end of the magnetic field:
- kind: processing
  type: BackgroundCorrection
  properties:
    parameters:
      num_profiles: 10
Similarly, if you would want to use only the last 10 transients from the upper end of the magnetic field:

- kind: processing
  type: BackgroundCorrection
  properties:
    parameters:
      num_profiles: -10
And finally, if you would like to use the first 5 and the last 10 transients, you would write:
- kind: processing
  type: BackgroundCorrection
  properties:
    parameters:
      num_profiles: [5, 10]
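With two values for num_profiles, the background is obtained by linear regression between the averaged edge profiles. The sketch below illustrates the idea on synthetic data; the exact interpolation scheme is an assumption for illustration, not necessarily the implementation used in trepr:

```python
import numpy as np

n_field, n_time = 50, 40
# Synthetic 2D data containing only a background drifting linearly with field
data = 0.1 + 0.002 * np.arange(n_field)[:, None] + np.zeros((1, n_time))

num_lower, num_upper = 5, 5
lower = data[:num_lower].mean(axis=0)    # averaged profiles, lower field edge
upper = data[-num_upper:].mean(axis=0)   # averaged profiles, upper field edge

# Linear interpolation between the two edges, evaluated at every field position
x_lower = (num_lower - 1) / 2
x_upper = n_field - 1 - (num_upper - 1) / 2
slope = (upper - lower) / (x_upper - x_lower)
background = lower + slope * (np.arange(n_field)[:, None] - x_lower)
corrected = data - background
```

Since the synthetic data here consist of background only, the correction removes essentially everything.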

static applicable(dataset)¶
Check whether processing step is applicable to the given dataset.
Background correction is only applicable to 2D datasets.

Parameters
    dataset (aspecd.dataset.Dataset) – dataset to check
Returns
    applicable – True if successful, False otherwise.
Return type
    bool

class trepr.processing.FrequencyCorrection¶
Bases: aspecd.processing.SingleProcessingStep
Convert data to a given microwave frequency.
To compare EPR spectra, it is necessary to first correct them for the same microwave frequency, i.e. to adjust the magnetic field axis accordingly. Note that each individual measurement will have its own microwave frequency. Particularly for TREPR data with their usually quite large steps of the magnetic field axis, one could first check whether the difference in microwave frequency is reasonably large compared to the magnetic field steps, and only in this case correct for the same frequency.

parameters¶
All parameters necessary for this step.

frequency : float
    Microwave frequency to correct for, in GHz.
    Default: 9.5
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In the simplest case, just invoke the frequency correction with default values:

- kind: processing
  type: FrequencyCorrection
This will correct your data accordingly.
If you would like to set the target microwave frequency explicitly, this can be done as well:
- kind: processing
  type: FrequencyCorrection
  properties:
    parameters:
      frequency: 9.8
In this case, the data would be corrected for a microwave frequency of 9.8 GHz.
Code author: Mirjam Schröder
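The correction amounts to rescaling the magnetic field axis. A common approach, sketched below with hypothetical values, scales the field linearly with the frequency ratio, following the resonance condition h·ν = g·μB·B. Whether trepr uses exactly this linear rescaling is an assumption here; consult the class documentation for the implemented scheme:

```python
import numpy as np

nu_measured = 9.68  # GHz, microwave frequency of the actual measurement
nu_target = 9.5     # GHz, frequency to correct the data to
field = np.linspace(340.0, 350.0, 101)  # hypothetical field axis in mT

# B is proportional to nu for a fixed g value, hence rescale linearly
field_corrected = field * nu_target / nu_measured
```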


class trepr.processing.Normalisation¶
Bases: aspecd.processing.Normalisation
Normalise data.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.Normalisation class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. Of course, all parameters settable for the superclasses can be set as well. The examples each focus on a single aspect.
In the simplest case, just invoke the normalisation with default values:

- kind: processing
  type: Normalisation
This will normalise your data to their maximum.
Sometimes, normalising to maximum is not what you need, hence you can control in more detail the criterion using the appropriate parameter:
- kind: processing
  type: Normalisation
  properties:
    parameters:
      kind: amplitude
In this case, you would normalise to the amplitude, meaning setting the difference between minimum and maximum to one. For other kinds, see above.
If you want to normalise not over the entire range of the dataset, but only over a dedicated range, simply provide the necessary parameters:
- kind: processing
  type: Normalisation
  properties:
    parameters:
      range: [50, 150]
In this case, we assume a 1D dataset and use indices, requiring the data to span at least over 150 points. Of course, it is often more convenient to provide axis units. Here you go:
- kind: processing
  type: Normalisation
  properties:
    parameters:
      range: [340, 350]
      range_unit: axis
And in case of ND datasets with N>1, make sure to provide as many ranges as your dataset has dimensions; in case of a 2D dataset:

- kind: processing
  type: Normalisation
  properties:
    parameters:
      range:
        - [50, 150]
        - [30, 40]
Here as well, the range can be given in indices or axis units, but defaults to indices if no unit is explicitly given.
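The four kinds of normalisation mentioned above can be sketched in plain NumPy. The exact conventions (in particular how "minimum" and "area" are computed) are assumptions here; see the aspecd.processing.Normalisation documentation for the authoritative definitions:

```python
import numpy as np

data = np.array([-2.0, 0.0, 1.0, 4.0])

to_maximum = data / data.max()                   # maximum becomes 1
to_minimum = data / abs(data.min())              # |minimum| becomes 1
to_amplitude = data / (data.max() - data.min())  # max - min becomes 1
to_area = data / np.abs(data).sum()              # summed |values| become 1
```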

class trepr.processing.ScalarAlgebra¶
Bases: aspecd.processing.ScalarAlgebra
Perform scalar algebraic operation on one dataset.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.ScalarAlgebra class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In case you would like to add a fixed value of 42 to your dataset:

- kind: processing
  type: ScalarAlgebra
  properties:
    parameters:
      kind: add
      value: 42
Similarly, you could use “minus”, “times”, “by”, “add”, “subtract”, “multiply”, or “divide” as kind, resulting in the respective algebraic operation.

class trepr.processing.ScalarAxisAlgebra¶
Bases: aspecd.processing.ScalarAxisAlgebra
Perform scalar algebraic operation on the axis of a dataset.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.ScalarAxisAlgebra class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In case you would like to add a fixed value of 42 to the first axis (index 0) of your dataset:

- kind: processing
  type: ScalarAxisAlgebra
  properties:
    parameters:
      kind: plus
      axis: 0
      value: 42
Similarly, you could use “minus”, “times”, “by”, “add”, “subtract”, “multiply”, “divide”, or “power” as kind, resulting in the respective algebraic operation.

class trepr.processing.DatasetAlgebra¶
Bases: aspecd.processing.DatasetAlgebra
Perform scalar algebraic operation on two datasets.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.DatasetAlgebra class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In case you would like to add the data of the dataset referred to by its label label_to_other_dataset to your dataset:

- kind: processing
  type: DatasetAlgebra
  properties:
    parameters:
      kind: plus
      dataset: label_to_other_dataset
Similarly, you could use “minus”, “add”, or “subtract” as kind, resulting in the respective algebraic operation.
As mentioned already, the data of both datasets need to have identical shape, and comparison is only meaningful if the axes are compatible as well. Hence, you will usually want to perform a CommonRangeExtraction processing step before doing algebra with two datasets:
- kind: multiprocessing
  type: CommonRangeExtraction
  results:
    - label_to_dataset
    - label_to_other_dataset
- kind: processing
  type: DatasetAlgebra
  properties:
    parameters:
      kind: plus
      dataset: label_to_other_dataset
  apply_to:
    - label_to_dataset

class trepr.processing.Projection¶
Bases: aspecd.processing.Projection
Project data, i.e. reduce dimensions along one axis.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.Projection class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In the simplest case, just invoke the projection with default values:

- kind: processing
  type: Projection
This will project the data along the first axis (index 0), yielding a 1D dataset.
If you would like to project along the second axis (index 1), simply set the appropriate parameter:
- kind: processing
  type: Projection
  properties:
    parameters:
      axis: 1
This will project the data along the second axis (index 1), yielding a 1D dataset.
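In plain NumPy terms, projecting along an axis reduces the dimensionality by one. The sketch below averages, which is one common convention; whether trepr averages or sums along the axis is not specified here, so treat this as an illustration only:

```python
import numpy as np

data = np.arange(12.0).reshape(3, 4)  # hypothetical 2D dataset, 3 x 4 points

# Project along the first axis (index 0); the result is 1D with 4 points
projected = data.mean(axis=0)
```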

class trepr.processing.SliceExtraction¶
Bases: aspecd.processing.SliceExtraction
Extract slice along one or more dimensions from dataset.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.SliceExtraction class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In the simplest case, just invoke the slice extraction with an index only:

- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: 5
This will extract the sixth slice (index five) along the first axis (index zero).
If you would like to extract a slice along the second axis (with index one), simply provide both parameters, index and axis:
- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: 5
      axis: 1
This will extract the sixth slice along the second axis.
And as it is sometimes more convenient to give ranges in axis values rather than indices, even this is possible. Suppose the axis you would like to extract a slice from runs from 340 to 350 and you would like to extract the slice corresponding to 343:
- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: 343
      unit: axis
In case you provide the position in axis units rather than indices, the index whose axis value is closest to the requested value will be chosen automatically.
For ND datasets with N>2, you can either extract a 1D or ND slice, with N always at least one dimension less than the original data. To extract a 2D slice from a 3D dataset, simply proceed as above, providing one value each for position and axis. If, however, you want to extract a 1D slice from a 3D dataset, you need to provide two values each for position and axis:
- kind: processing
  type: SliceExtraction
  properties:
    parameters:
      position: [21, 42]
      axis: [0, 2]
This particular case would be equivalent to data[21, :, 42], assuming data to contain the numeric data, besides, of course, that the processing step takes care of removing the axes as well.
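Picking a position in axis units boils down to choosing the index with the closest axis value, which can be sketched as follows (the axis and requested position are hypothetical):

```python
import numpy as np

axis = np.linspace(340.0, 350.0, 101)  # hypothetical field axis, step 0.1
position = 343.27                      # requested position in axis units

# Index whose axis value is closest to the requested position
index = int(np.argmin(np.abs(axis - position)))
```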

class trepr.processing.BaselineCorrection¶
Bases: aspecd.processing.BaselineCorrection
Subtract baseline from dataset.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.BaselineCorrection class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
In the simplest case, just invoke the baseline correction with default values:

- kind: processing
  type: BaselineCorrection
In this case, a zeroth-order polynomial baseline will be subtracted from your dataset using ten percent to the left and right, and in case of a 2D dataset, the baseline correction will be performed along the first axis (index zero) for all indices of the second axis (index one).
Of course, often you want to control a little bit more how the baseline will be corrected. This can be done by explicitly setting some parameters.
Suppose you want to perform a baseline correction with a polynomial of first order:
- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      order: 1
If you want to change the percentage of the data range used for fitting the baseline, and even specify different ranges left and right:

- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      fit_area: [5, 20]

Here, 5 percent from the left and 20 percent from the right are used.
Finally, suppose you have a 2D dataset and want to average along the second axis (index one):
- kind: processing
  type: BaselineCorrection
  properties:
    parameters:
      axis: 1
Of course, you can combine the different options.
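The combination of a first-order polynomial and asymmetric fit areas can be sketched as a polynomial fit to the outer parts of the data; the synthetic signal below is for illustration only:

```python
import numpy as np

n = 200
x = np.arange(n, dtype=float)
signal = np.exp(-0.5 * ((x - 100.0) / 5.0) ** 2)  # peak in the middle
data = signal + 0.5 + 0.01 * x                    # plus a first-order baseline

# Use 5 % from the left and 20 % from the right for fitting the baseline
left, right = int(0.05 * n), int(0.20 * n)
fit_x = np.concatenate([x[:left], x[-right:]])
fit_y = np.concatenate([data[:left], data[-right:]])

coefficients = np.polyfit(fit_x, fit_y, deg=1)    # first-order polynomial
corrected = data - np.polyval(coefficients, x)
```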

class trepr.processing.Filtering¶
Bases: aspecd.processing.Filtering
Filter data.
As the class is fully inherited from ASpecD for simple usage, see the ASpecD documentation of the aspecd.processing.Filtering class for details.
Examples
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples each focus on a single aspect.
Generally, filtering requires providing both a type of filter and a window length. Therefore, for uniform and Gaussian filters, this would be:

- kind: processing
  type: Filtering
  properties:
    parameters:
      type: uniform
      window_length: 10
Of course, at least uniform filtering (also known as boxcar or moving average) is strongly discouraged due to the artifacts introduced. Probably the best bet for applying a filter to smooth your data is the Savitzky-Golay filter:

- kind: processing
  type: Filtering
  properties:
    parameters:
      type: savitzky-golay
      window_length: 10
      order: 3
Note that for this filter, you need to provide the polynomial order as well. To get best results, you will need to experiment with the parameters a bit.
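For illustration, a uniform (boxcar) filter of a given window length is simply a moving average. The sketch below uses plain NumPy on synthetic noisy data rather than the actual Filtering class:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
data = clean + rng.normal(scale=0.2, size=200)     # noisy synthetic signal

window_length = 10
kernel = np.ones(window_length) / window_length    # uniform (boxcar) window
smoothed = np.convolve(data, kernel, mode="same")  # moving average
```

The noise is reduced at the price of the artifacts mentioned above, which is why a Savitzky-Golay filter is usually preferable.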

class trepr.processing.Averaging(dimension=0, avg_range=None, unit='axis')¶
Bases: aspecd.processing.SingleProcessingStep
Averaging of two-dimensional data along a given axis.
When measuring TREPR data, the resulting spectrum is always two-dimensional. To analyse the data it’s often necessary to extract a one-dimensional spectrum. The one-dimensional spectrum can either be a cut along the field or the time axis. To get a representative spectrum, the average over several points along the respective axis is calculated.
All parameters, implicit and explicit, necessary to perform the averaging processing step will be stored in the attribute trepr.processing.Averaging.parameters.
An example for using the averaging processing step may look like this:

avg = Averaging(dimension=0, avg_range=[4.e-7, 6.e-7], unit='axis')
dataset.process(avg)
 Parameters
dimension ({0,1}, optional) – Dimension along which the averaging is done. 0 is along the field axis and 1 is along the time axis. Default is 0.
avg_range (list) – Range in which the averaging will take place.
unit ({'axis', 'index'}, optional) – Unit in which the average range is given. Either ‘axis’ for axis values or ‘index’ for indices. Default is ‘axis’.
 Raises
trepr.exceptions.DimensionError – Raised if dimension is not in [0, 1].
trepr.exceptions.UnitError – Raised if unit is not in [‘axis’, ‘index’].
trepr.exceptions.RangeError – Raised if range is not within axis.
Deprecated since version 0.1: Use aspecd.processing.Averaging instead.

class trepr.processing.Filter¶
Bases: aspecd.processing.SingleProcessingStep
Apply a filter to smooth 1D data.
Be careful when showing filtered spectra.
You can choose between boxcar, Savitzky-Golay, and binomial filters.
Savitzky-Golay: takes a certain number of points and fits a polynomial through them.
Reference for the Savitzky-Golay filter:
A. Savitzky, M. J. E. Golay, Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Analytical Chemistry, 1964, 36 (8), pp 1627-1639.

parameters¶
All parameters necessary for this step.

type : str
    Type of the applied filter. Valid inputs: savitzky-golay, binomial, boxcar, and some abbreviations and variations.
    Default: savitzky-golay

window_width : int
    Full filter window width. Must be an odd number.
    Default: 1/5 of the data length.
Deprecated since version 0.1: Use Filtering instead.
static applicable(dataset)¶
Check whether processing step is applicable to the given dataset.
Filtering is only applicable to 1D datasets.

Parameters
    dataset (aspecd.dataset.Dataset) – dataset to check
Returns
    applicable – True if successful, False otherwise.
Return type
    bool