ants.segmentation package

Submodules

ants.segmentation.anti_alias module

Apply anti-alias filter on a binary ANTsImage

ants.segmentation.anti_alias.anti_alias(image)[source]

Apply Anti-Alias filter to a binary image

ANTsR function: N/A

Parameters:

image (ANTsImage) – binary image to which anti-aliasing will be applied

Return type:

ANTsImage

Example

>>> import ants
>>> img = ants.image_read(ants.get_data('r16'))
>>> mask = ants.get_mask(img)
>>> mask_aa = ants.anti_alias(mask)
>>> ants.plot(mask)
>>> ants.plot(mask_aa)

ants.segmentation.atropos module

Atropos segmentation

ants.segmentation.atropos.atropos(a, x, i='Kmeans[3]', m='[0.2,1x1]', c='[5,0]', priorweight=0.25, **kwargs)[source]

A finite mixture modeling (FMM) segmentation approach with possibilities for specifying prior constraints. These prior constraints include the specification of a prior label image, prior probability images (one for each class), and/or an MRF prior to enforce spatial smoothing of the labels. Similar algorithms include FAST and SPM. atropos can also perform multivariate segmentation if you pass a list of images in: e.g. a=(img1,img2).

ANTsR function: atropos

Parameters:
  • a (ANTsImage or list/tuple of ANTsImage types) – One or more scalar images to segment. If priors are not used, the intensities of the first image are used to order the classes in the segmentation output, from lowest to highest intensity. Otherwise the order of the classes is dictated by the order of the prior images.

  • x (ANTsImage) – mask image.

  • i (string) – initialization usually KMeans[N] for N classes or a list of N prior probability images. See Atropos in ANTs for full set of options.

  • m (string) – MRF parameters as a string, usually “[smoothingFactor,radius]”, where smoothingFactor determines the amount of smoothing and radius determines the MRF neighborhood as an ANTs-style neighborhood vector, e.g. “1x1x1” for a 3D image. The radius must match the dimensionality of the image, e.g. 1x1 for 2D. The default in ANTs is smoothingFactor=0.3 and radius=1. See Atropos for more options.

  • c (string) – convergence parameters, “[numberOfIterations,convergenceThreshold]”. A threshold of 0 runs the full numberOfIterations, otherwise Atropos tests convergence by comparing the mean maximum posterior probability over the whole region of interest defined by the mask x.

  • priorweight (scalar) – usually 0 (priors used for initialization only), 0.25 or 0.5.

  • kwargs (keyword arguments) – more parameters, see Atropos help in ANTs

Returns:

segmentation : ANTsImage

the segmented image

probabilityimages : list of ANTsImage types

one image for each segmentation class

Return type:

dictionary with the following key/value pairs

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img = ants.resample_image(img, (64,64), 1, 0)
>>> mask = ants.get_mask(img)
>>> seg = ants.atropos( a = img, m = '[0.2,1x1]', c = '[2,0]',  i = 'kmeans[3]', x = mask )
>>> seg2 = ants.atropos( a = img, m = '[0.2,1x1]', c = '[2,0]', i = seg['probabilityimages'], x = mask, priorweight=0.25 )
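
A hedged sketch of the multivariate call mentioned in the description above; the second channel here is just a smoothed copy of the same image, purely for illustration:

>>> img2 = ants.smooth_image(img, 2)  # illustrative second channel in the same space as img
>>> seg_mv = ants.atropos(a=(img, img2), m='[0.2,1x1]', c='[2,0]', i='kmeans[3]', x=mask)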

ants.segmentation.joint_label_fusion module

Joint Label Fusion algorithm

ants.segmentation.joint_label_fusion.joint_label_fusion(target_image, target_image_mask, atlas_list, beta=4, rad=2, label_list=None, rho=0.01, usecor=False, r_search=3, nonnegative=False, no_zeroes=False, max_lab_plus_one=False, output_prefix=None, verbose=False)[source]

A multiple atlas voting scheme to customize labels for a new subject. This function will also perform intensity fusion. It almost directly calls the C++ in the ANTs executable so is much faster than other variants in ANTsR.

One may want to normalize image intensities for each input image before passing them to this function. If no labels are passed, intensity fusion is performed. Note on computation time: the underlying C++ is multithreaded. You can control the number of threads by setting the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS, e.g. to use all or some of your CPUs; this will improve performance substantially. For instance, on a 2015 MacBook Pro, using 8 cores improves speed by about 4x.
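
A minimal sketch of setting the thread count before the call; the variable name comes from the note above, and the value 8 is just an example:

>>> import os
>>> os.environ['ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS'] = '8'  # must be set before the ITK-backed call
>>> # then call ants.joint_label_fusion(...) as in the example below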

ANTsR function: jointLabelFusion

Parameters:
  • target_image (ANTsImage) – image to be approximated

  • target_image_mask (ANTsImage) – mask with value 1

  • atlas_list (list of ANTsImage types) – list containing intensity images

  • beta (scalar) – weight sharpness; default is 4

  • rad (scalar) – neighborhood radius; default is 2

  • label_list (list of ANTsImage types (optional)) – list containing images with segmentation labels

  • rho (scalar) – ridge penalty; increases robustness to outliers but also makes the image converge to the average

  • usecor (boolean) – employ correlation as local similarity

  • r_search (scalar) – radius of search, default is 3

  • nonnegative (boolean) – constrain weights to be non-negative

  • no_zeroes (boolean) – this will constrain the solution only to voxels that are always non-zero in the label list

  • max_lab_plus_one (boolean) – this will add the maximum label plus one to the non-zero parts of each label image where the target mask is greater than one. NOTE: this has the side effect of modifying the original label images passed to the program. It also guarantees that every position in the labels has some label rather than none, i.e. it explicitly parcellates the input data.

  • output_prefix (string) – file prefix for storing output probabilityimages to disk

  • verbose (boolean) – whether to show status updates

Returns:

segmentation : ANTsImage

segmentation image

intensity : ANTsImage

intensity image

probabilityimages : list of ANTsImage types

probability map image for each label

segmentation_numbers : list of numbers

segmentation label (number, int) for each probability map

Return type:

dictionary with the following key/value pairs

Example

>>> import ants
>>> ref = ants.image_read( ants.get_ants_data('r16'))
>>> ref = ants.resample_image(ref, (50,50),1,0)
>>> ref = ants.iMath(ref,'Normalize')
>>> mi = ants.image_read( ants.get_ants_data('r27'))
>>> mi2 = ants.image_read( ants.get_ants_data('r30'))
>>> mi3 = ants.image_read( ants.get_ants_data('r62'))
>>> mi4 = ants.image_read( ants.get_ants_data('r64'))
>>> mi5 = ants.image_read( ants.get_ants_data('r85'))
>>> refmask = ants.get_mask(ref)
>>> refmask = ants.iMath(refmask,'ME',2) # just to speed things up
>>> ilist = [mi,mi2,mi3,mi4,mi5]
>>> seglist = [None]*len(ilist)
>>> for i in range(len(ilist)):
...     ilist[i] = ants.iMath(ilist[i], 'Normalize')
...     mytx = ants.registration(fixed=ref, moving=ilist[i],
...                              type_of_transform='Affine')
...     mywarpedimage = ants.apply_transforms(fixed=ref, moving=ilist[i],
...                                           transformlist=mytx['fwdtransforms'])
...     ilist[i] = mywarpedimage
...     seg = ants.threshold_image(ilist[i], 'Otsu', 3)
...     seglist[i] = seg + ants.threshold_image(seg, 1, 3).morphology(operation='dilate', radius=3)
>>> r = 2
>>> pp = ants.joint_label_fusion(ref, refmask, ilist, r_search=2,
...                              label_list=seglist, rad=[r]*ref.dimension)
>>> pp = ants.joint_label_fusion(ref, refmask, ilist, r_search=2, rad=[r]*ref.dimension)
ants.segmentation.joint_label_fusion.local_joint_label_fusion(target_image, which_labels, target_mask, initial_label, atlas_list, label_list, submask_dilation=10, type_of_transform='SyN', aff_metric='meansquares', syn_metric='mattes', syn_sampling=32, reg_iterations=(40, 20, 0), aff_iterations=(500, 50, 0), grad_step=0.2, flow_sigma=3, total_sigma=0, beta=4, rad=2, rho=0.1, usecor=False, r_search=3, nonnegative=False, no_zeroes=False, max_lab_plus_one=False, local_mask_transform='Similarity', output_prefix=None, verbose=False)[source]

A local version of joint label fusion that focuses on a subset of labels. This is primarily different from standard JLF because it performs registration on the label subset and focuses JLF on those labels alone.

ANTsR function: localJointLabelFusion

Parameters:
  • target_image (ANTsImage) – image to be labeled

  • which_labels (numeric vector) – label number(s) that exist(s) in both the template and library

  • target_mask (ANTsImage) – a mask for the target image (optional), passed to joint fusion

  • initial_label (ANTsImage) – initial label set; may use the same labels as the library or be binary. Typically labels would be produced by a single deformable registration or by manual labeling.

  • atlas_list (list of ANTsImage types) – list containing intensity images

  • label_list (list of ANTsImage types (optional)) – list containing images with segmentation labels

  • submask_dilation (integer) – amount to dilate initial mask to define region on which we perform focused registration

  • type_of_transform (string) – A linear or non-linear registration type. Mutual information metric by default. See Notes below for more.

  • aff_metric (string) – the metric for the affine part (GC, mattes, meansquares)

  • syn_metric (string) – the metric for the syn part (CC, mattes, meansquares, demons)

  • syn_sampling (scalar) – the nbins or radius parameter for the syn metric

  • reg_iterations (list/tuple of integers) – vector of iterations for syn; the smoothing and multi-resolution parameters are set based on the length of this vector.

  • aff_iterations (list/tuple of integers) – vector of iterations for low-dimensional registration.

  • grad_step (scalar) – gradient step size (not for all tx)

  • flow_sigma (scalar) – smoothing for update field

  • total_sigma (scalar) – smoothing for total field

  • beta (scalar) – weight sharpness; default is 4

  • rad (scalar) – neighborhood radius; default is 2

  • rho (scalar) – ridge penalty; increases robustness to outliers but also makes the image converge to the average

  • usecor (boolean) – employ correlation as local similarity

  • r_search (scalar) – radius of search, default is 3

  • nonnegative (boolean) – constrain weights to be non-negative

  • no_zeroes (boolean) – this will constrain the solution only to voxels that are always non-zero in the label list

  • max_lab_plus_one (boolean) – this will add the maximum label plus one to the non-zero parts of each label image where the target mask is greater than one. NOTE: this has the side effect of modifying the original label images passed to the program. It also guarantees that every position in the labels has some label rather than none, i.e. it explicitly parcellates the input data.

  • local_mask_transform (string) – the type of transform for the local mask alignment, usually translation, rigid, similarity or affine.

  • output_prefix (string) – file prefix for storing output probabilityimages to disk

  • verbose (boolean) – whether to show status updates

Returns:

segmentation : ANTsImage

segmentation image

intensity : ANTsImage

intensity image

probabilityimages : list of ANTsImage types

probability map image for each label

Return type:

dictionary with the following key/value pairs
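
Example

A minimal sketch, assuming the ref, refmask, ilist and seglist objects built in the joint_label_fusion example above; which_labels=[1] and the use of seglist[0] as initial_label are illustrative choices, not prescribed values:

>>> init = seglist[0]  # hypothetical initial label estimate in the space of ref
>>> ljlf = ants.local_joint_label_fusion(target_image=ref, which_labels=[1],
...                                      target_mask=refmask, initial_label=init,
...                                      atlas_list=ilist, label_list=seglist,
...                                      submask_dilation=5, rad=2, r_search=2)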

ants.segmentation.kelly_kapowski module

Kelly Kapowski algorithm for computing cortical thickness

ants.segmentation.kelly_kapowski.kelly_kapowski(s, g, w, its=45, r=0.025, m=1.5, gm_label=2, wm_label=3, **kwargs)[source]

Compute cortical thickness using the DiReCT algorithm.

Diffeomorphic registration-based cortical thickness based on probabilistic segmentation of an image. This is an optimization algorithm.

Parameters:
  • s (ANTsImage) – segmentation image

  • g (ANTsImage) – gray matter probability image

  • w (ANTsImage) – white matter probability image

  • its (integer) – convergence parameter; controls iterations

  • r (scalar) – gradient descent update parameter

  • m (scalar) – gradient field smoothing parameter

  • gm_label (integer) – label for gray matter in the segmentation image

  • wm_label (integer) – label for white matter in the segmentation image

  • kwargs (keyword arguments) – anything else, see KellyKapowski help in ANTs

Return type:

ANTsImage

Example

>>> import ants
>>> img = ants.image_read( ants.get_ants_data('r16') ,2)
>>> img = ants.resample_image(img, (64,64),1,0)
>>> mask = ants.get_mask( img )
>>> segs = ants.kmeans_segmentation( img, k=3, kmask = mask)
>>> thick = ants.kelly_kapowski(s=segs['segmentation'], g=segs['probabilityimages'][1],
...                             w=segs['probabilityimages'][2], its=45,
...                             r=0.5, m=1)

ants.segmentation.kmeans module

ants.segmentation.kmeans.kmeans_segmentation(image, k, kmask=None, mrf=0.1)[source]

K-means image segmentation that is a wrapper around ants.atropos

ANTsR function: kmeansSegmentation

Parameters:
  • image (ANTsImage) – input image

  • k (integer) – number of classes

  • kmask (ANTsImage (optional)) – segment inside this mask

  • mrf (scalar) – smoothness, higher is smoother

Return type:

dictionary with the following key/value pairs: segmentation (ANTsImage) and probabilityimages (list of ANTsImage types)

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> fi = ants.n3_bias_field_correction(fi, 2)
>>> seg = ants.kmeans_segmentation(fi, 3)
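
As in the atropos example above, the returned dictionary can be indexed for the segmentation and the per-class probability images; the lines below are illustrative:

>>> ants.plot(seg['segmentation'])
>>> gm_prob = seg['probabilityimages'][1]  # illustrative: probability map of one class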

ants.segmentation.label_geometry_measures module

ants.segmentation.label_geometry_measures.label_geometry_measures(label_image, intensity_image=None)[source]

Wrapper for the ANTs function labelGeometryMeasures

ANTsR function: labelGeometryMeasures

Parameters:
  • label_image (ANTsImage) – image on which to compute geometry

  • intensity_image (ANTsImage (optional)) – image with intensity values

Return type:

pandas.DataFrame

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') )
>>> seg = ants.kmeans_segmentation( fi, 3 )['segmentation']
>>> geom = ants.label_geometry_measures(seg,fi)

ants.segmentation.otsu module

ants.segmentation.otsu.otsu_segmentation(image, k, mask=None)[source]

Otsu image segmentation

This is a very fast segmentation algorithm, good for quick exploration, but it does not return probability maps.

ANTsR function: thresholdImage(image, ‘Otsu’, k)

Parameters:
  • image (ANTsImage) – input image

  • k (integer) – number of classes. Note that a background class will be added to this, so the resulting segmentation will have k+1 unique values.

  • mask (ANTsImage) – segment inside this mask

Return type:

ANTsImage

Example

>>> import ants
>>> mni = ants.image_read(ants.get_data('mni'))
>>> seg = mni.otsu_segmentation(k=3) #0=bg,1=csf,2=gm,3=wm
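
A small follow-on sketch restricting the segmentation to a mask via the optional mask parameter; ants.get_mask is used elsewhere on this page, and the variable names are illustrative:

>>> bm = ants.get_mask(mni)  # illustrative brain mask
>>> seg_masked = mni.otsu_segmentation(k=3, mask=bm)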

ants.segmentation.prior_based_segmentation module

ants.segmentation.prior_based_segmentation.prior_based_segmentation(image, priors, mask, priorweight=0.25, mrf=0.1, iterations=25)[source]

Spatial prior-based image segmentation.

Markov random field regularized, prior-based image segmentation that is a wrapper around atropos (see ANTs and related publications).

ANTsR function: priorBasedSegmentation

Parameters:
  • image (ANTsImage or list/tuple of ANTsImage types) – input image or image list for multivariate segmentation

  • priors (list/tuple of ANTsImage types) – list of priors that cover the number of classes

  • mask (ANTsImage) – segment inside this mask

  • priorweight (scalar) – usually 0 (priors used for initialization only), 0.25 or 0.5.

  • mrf (scalar) – regularization, higher is smoother, a numerical value in range 0.0 to 0.2

  • iterations (integer) – maximum number of iterations; could be a large value, e.g. 25.

Returns:

segmentation : ANTsImage

the segmented image

probabilityimages : list of ANTsImage types

one image for each segmentation class

Return type:

dictionary with the following key/value pairs

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16'))
>>> seg = ants.kmeans_segmentation(fi,3)
>>> mask = ants.threshold_image(seg['segmentation'], 1, 1e15)
>>> priorseg = ants.prior_based_segmentation(fi, seg['probabilityimages'], mask, 0.25, 0.1, 3)

Module contents