workflows.dmri.fsl.dti
bedpostx_parallel()
Does the same as create_bedpostx_pipeline(), but splits the input dMRI into small ROIs that are better suited for parallel processing.
Example
>>> from nipype.workflows.dmri.fsl.dti import bedpostx_parallel
>>> params = dict(n_fibres=2, fudge=1, burn_in=1000,
...               n_jumps=1250, sample_every=25)
>>> bpwf = bedpostx_parallel('nipype_bedpostx_parallel', params=params)
>>> bpwf.inputs.inputnode.dwi = 'diffusion.nii'
>>> bpwf.inputs.inputnode.mask = 'mask.nii'
>>> bpwf.inputs.inputnode.bvecs = 'bvecs'
>>> bpwf.inputs.inputnode.bvals = 'bvals'
>>> bpwf.run(plugin='CondorDAGMan')
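The run call above assumes a Condor installation managed through DAGMan. On a single multi-core machine the same workflow can be executed with nipype's MultiProc plugin instead; a minimal sketch, with the number of processes chosen arbitrarily:
>>> # sketch: run locally across 4 processes instead of submitting to Condor
>>> bpwf.run(plugin='MultiProc', plugin_args={'n_procs': 4})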
Inputs:
    inputnode.dwi
    inputnode.mask
    inputnode.bvecs
    inputnode.bvals
Outputs:
    outputnode wraps all XFibres outputs
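The exact fields exposed by outputnode are not listed here; the names used below (mean_fsamples, dyads) are assumptions based on the usual XFibres outputs, and the output directory is a placeholder. Rather than calling bpwf.run() directly, the workflow can be embedded in a parent Workflow that forwards selected results to a DataSink, sketched as:
>>> import nipype.pipeline.engine as pe
>>> from nipype.interfaces.io import DataSink
>>> # hypothetical output directory; adjust to your filesystem
>>> sink = pe.Node(DataSink(base_directory='/output/bedpostx'), name='sink')
>>> outer = pe.Workflow(name='bedpostx_parallel_with_sink')
>>> # field names on outputnode are assumed, not taken from this page
>>> outer.connect([(bpwf, sink, [('outputnode.mean_fsamples', 'mean_fsamples'),
...                              ('outputnode.dyads', 'dyads')])])
>>> outer.run()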
Graph
create_bedpostx_pipeline()
Creates a pipeline that does the same as the bedpostx script from FSL: it estimates diffusion model parameters (distributions rather than maximum-likelihood estimates) voxelwise for the whole volume, splitting the volume slicewise for processing.
Example
>>> from nipype.workflows.dmri.fsl.dti import create_bedpostx_pipeline
>>> params = dict(n_fibres=2, fudge=1, burn_in=1000,
...               n_jumps=1250, sample_every=25)
>>> bpwf = create_bedpostx_pipeline('nipype_bedpostx', params)
>>> bpwf.inputs.inputnode.dwi = 'diffusion.nii'
>>> bpwf.inputs.inputnode.mask = 'mask.nii'
>>> bpwf.inputs.inputnode.bvecs = 'bvecs'
>>> bpwf.inputs.inputnode.bvals = 'bvals'
>>> bpwf.run()
Inputs:
    inputnode.dwi
    inputnode.mask
    inputnode.bvecs
    inputnode.bvals
Outputs:
    outputnode wraps all XFibres outputs
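The distributions estimated here are typically consumed by FSL's probabilistic tractography. The following sketch wires the workflow's outputs into a ProbTrackX node instead of calling bpwf.run() directly; the outputnode field names (thsamples, phsamples, fsamples) are assumptions based on the standard XFibres/bedpostx outputs, and the seed and mask files are placeholders:
>>> import nipype.pipeline.engine as pe
>>> from nipype.interfaces import fsl
>>> # seed/mask filenames are placeholders for this sketch
>>> pbx = pe.Node(fsl.ProbTrackX(seed='seed_mask.nii', mask='mask.nii',
...                              n_samples=5000, opd=True), name='probtrackx')
>>> tract = pe.Workflow(name='bedpostx_to_tractography')
>>> # outputnode field names below are assumed, not taken from this page
>>> tract.connect([(bpwf, pbx, [('outputnode.thsamples', 'thsamples'),
...                             ('outputnode.phsamples', 'phsamples'),
...                             ('outputnode.fsamples', 'fsamples')])])
>>> tract.run()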