Utilities

ants.n3_bias_field_correction(image, downsample_factor=3)[source]

N3 Bias Field Correction

ANTsR function: n3BiasFieldCorrection

Parameters:
  • image (ANTsImage) – image to be bias corrected

  • downsample_factor (scalar) – how much to downsample image before performing bias correction

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> image_n3 = ants.n3_bias_field_correction(image)
ants.n4_bias_field_correction(image, mask=None, rescale_intensities=False, shrink_factor=4, convergence={'iters': [50, 50, 50, 50], 'tol': 1e-07}, spline_param=None, return_bias_field=False, verbose=False, weight_mask=None)[source]

N4 Bias Field Correction

ANTsR function: n4BiasFieldCorrection

Parameters:
  • image (ANTsImage) – image to bias correct

  • mask (ANTsImage) – Input mask. If not specified, the entire image is used.

  • rescale_intensities (boolean) – At each iteration, a new intensity mapping is calculated and applied but there is nothing which constrains the new intensity range to be within certain values. The result is that the range can “drift” from the original at each iteration. This option rescales to the [min,max] range of the original image intensities within the user-specified mask. A mask is required to perform rescaling. Default is False in ANTsR/ANTsPy but True in ANTs.

  • shrink_factor (scalar) – Shrink factor for multi-resolution correction, typically integer less than 4

  • convergence (dict w/ keys iters and tol) – iters: vector of maximum number of iterations for each level; tol: the convergence tolerance. Default tolerance is 1e-7 in ANTsR/ANTsPy but 0.0 in ANTs.

  • spline_param (float or vector) – Parameter controlling number of control points in spline. Either single value, indicating the spacing in each direction, or vector with one entry per dimension of image, indicating the mesh size. If None, defaults to mesh size of 1 in all dimensions.

  • return_bias_field (boolean) – Return bias field instead of bias corrected image.

  • verbose (boolean) – enables verbose output.

  • weight_mask (ANTsImage (optional)) – antsImage of weight mask

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> image_n4 = ants.n4_bias_field_correction(image)
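
A further hedged sketch of the optional arguments documented above: masked correction with a custom convergence schedule, and retrieval of the estimated bias field via return_bias_field. The mask and convergence values here are illustrative, not defaults.

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> mask = ants.get_mask(image)
>>> # correct only within the mask, with a shorter convergence schedule
>>> image_n4m = ants.n4_bias_field_correction(image, mask=mask, shrink_factor=4,
...     convergence={'iters': [50, 50, 30, 20], 'tol': 1e-07})
>>> # return the estimated bias field instead of the corrected image
>>> bias_field = ants.n4_bias_field_correction(image, mask=mask, return_bias_field=True)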
ants.abp_n4(image, intensity_truncation=(0.025, 0.975, 256), mask=None, usen3=False)[source]

Truncate outlier intensities and bias correct with the N4 algorithm.

ANTsR function: abpN4

Parameters:
  • image (ANTsImage) – image to correct and truncate

  • intensity_truncation (3-tuple) – quantiles for intensity truncation

  • mask (ANTsImage (optional)) – mask for bias correction

  • usen3 (boolean) – if True, use N3 bias correction instead of N4

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> image2 = ants.abp_n4(image)
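
A brief sketch of the remaining documented options; the truncation quantiles below are illustrative values rather than defaults.

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> # tighter intensity truncation before bias correction
>>> image_trunc = ants.abp_n4(image, intensity_truncation=(0.01, 0.99, 256))
>>> # use N3 instead of N4 for the bias-correction step
>>> image_usen3 = ants.abp_n4(image, usen3=True)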
ants.merge_channels(image_list)[source]

Merge channels of multiple scalar ANTsImage types into one multi-channel ANTsImage

ANTsR function: mergeChannels

Parameters:

image_list (list/tuple of ANTsImage python:types) – scalar images to merge

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> image2 = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> image3 = ants.merge_channels([image,image2])
>>> image3.components == 2
ants.split_channels(image)[source]

Split channels of a multi-channel ANTsImage into a collection of scalar ANTsImage types

Parameters:

image (ANTsImage) – multi-channel image to split

Return type:

list of ANTsImage types

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> image2 = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> imagemerge = ants.merge_channels([image,image2])
>>> imagemerge.components == 2
>>> images_unmerged = ants.split_channels(imagemerge)
>>> len(images_unmerged) == 2
>>> images_unmerged[0].components == 1
ants.crop_image(image, label_image=None, label=1)[source]

Use a label image to crop a smaller ANTsImage from within a larger ANTsImage

ANTsR function: cropImage

Parameters:
  • image (ANTsImage) – image to crop

  • label_image (ANTsImage) – image with label values. If not supplied, estimated from data.

  • label (python:integer) – the label value to use

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') )
>>> cropped = ants.crop_image(fi)
>>> cropped = ants.crop_image(fi, fi, 100 )
ants.crop_indices(image, lowerind, upperind)[source]

Create a proper ANTsImage sub-image by indexing the image with indices. This is similar to but different from array sub-setting in that the resulting sub-image can be decropped back into its place without having to store its original index locations explicitly.

ANTsR function: cropIndices

Parameters:
  • image (ANTsImage) – image to crop

  • lowerind (list/tuple of python:integers) – vector of lower index, should be length image dimensionality

  • upperind (list/tuple of python:integers) – vector of upper index, should be length image dimensionality

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data("r16"))
>>> cropped = ants.crop_indices( fi, (10,10), (100,100) )
>>> cropped = ants.smooth_image( cropped, 5 )
>>> decropped = ants.decrop_image( cropped, fi )
ants.decrop_image(cropped_image, full_image)[source]

The inverse function for ants.crop_image

ANTsR function: decropImage

Parameters:
  • cropped_image (ANTsImage) – cropped image

  • full_image (ANTsImage) – image in which the cropped image will be put back

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16'))
>>> mask = ants.get_mask(fi)
>>> cropped = ants.crop_image(fi, mask, 1)
>>> cropped = ants.smooth_image(cropped, 1)
>>> decropped = ants.decrop_image(cropped, fi)
ants.denoise_image(image, mask=None, shrink_factor=1, p=1, r=3, noise_model='Rician', v=0)[source]

Denoise an image using a spatially adaptive filter originally described in J. V. Manjon, P. Coupe, Luis Marti-Bonmati, D. L. Collins, and M. Robles. Adaptive Non-Local Means Denoising of MR Images With Spatially Varying Noise Levels, Journal of Magnetic Resonance Imaging, 31:192-203, June 2010.

ANTsR function: denoiseImage

Parameters:
  • image (ANTsImage) – scalar image to denoise.

  • mask (ANTsImage) – to limit the denoise region.

  • shrink_factor (scalar) – downsampling level performed within the algorithm.

  • p (python:integer or string of format '2x2' where the x separates vector entries) – patch radius for local sample.

  • r (python:integer or string of format '2x2' where the x separates vector entries) – search radius from which to choose extra local samples.

  • noise_model (string) – ‘Rician’ or ‘Gaussian’

Return type:

ANTsImage

Example

>>> import ants
>>> import numpy as np
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> # add fairly large Gaussian noise
>>> imagenoise = image + np.random.randn(*image.shape).astype('float32')*5
>>> imagedenoise = ants.denoise_image(imagenoise, ants.get_mask(image))
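
A further sketch of the patch/search radii and noise model documented above; the values are illustrative.

>>> import ants
>>> import numpy as np
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> noisy = image + np.random.randn(*image.shape).astype('float32') * 5
>>> # larger search radius and a Gaussian noise model
>>> denoised = ants.denoise_image(noisy, mask=ants.get_mask(image), p=1, r=5,
...     noise_model='Gaussian')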
ants.fit_bspline_object_to_scattered_data(scattered_data, parametric_data, parametric_domain_origin, parametric_domain_spacing, parametric_domain_size, is_parametric_dimension_closed=None, data_weights=None, number_of_fitting_levels=4, mesh_size=1, spline_order=3)[source]

Fit a b-spline object to scattered data. This is basically a wrapper for the ITK filter

https://itk.org/Doxygen/html/classitk_1_1BSplineScatteredDataPointSetToImageFilter.html

This filter is flexible in the possible objects that can be approximated. Possibilities include:

  • 1/2/3/4-D curve

  • 2-D surface in 3-D space (not available/templated)

  • 2/3/4-D scalar field

  • 2/3-D displacement field

  • 2/3-D time-varying velocity field

In order to understand the input parameters, it is important to understand the difference between the parametric and data dimensions. A curve has one parametric dimension, but its data dimension can be 1-D, 2-D, 3-D, or 4-D. In contrast, a 3-D displacement field has a parametric and data dimension of 3. The scattered data is what's approximated by the B-spline object, and the parametric point is the location of the scattered data within the domain of the B-spline object.

ANTsR function: fitBsplineObjectToScatteredData

Parameters:
  • scattered_data (2-D numpy array) – Defines the scattered data input to be approximated. Data is organized by row –> data value, column –> data dimension.

  • parametric_data (2-D numpy array) – Defines the parametric location of the scattered data. Data is organized by row –> parametric point, column –> parametric dimension. Note that each row corresponds to the same row in the scatteredData.

  • data_weights (1-D numpy array) – Defines the individual weighting of the corresponding scattered data value. Default = None meaning all values are weighted the same.

  • parametric_domain_origin (n-D tuple) – Defines the parametric origin of the B-spline object.

  • parametric_domain_spacing (n-D tuple) – Defines the parametric spacing of the B-spline object. Defines the sampling rate in the parametric domain.

  • parametric_domain_size (n-D tuple) – Defines the size (length) of the B-spline object. Note that the length of the B-spline object in dimension d is defined as parametric_domain_spacing[d] * (parametric_domain_size[d] - 1).

  • is_parametric_dimension_closed (n-D tuple) – Booleans defining whether or not the corresponding parametric dimension is closed (e.g., closed loop). Default = None.

  • number_of_fitting_levels (python:integer) – Specifies the number of fitting levels.

  • mesh_size (n-D tuple) – Defines the mesh size at the initial fitting level.

  • spline_order (python:integer) – Spline order of the B-spline object. Default = 3.

Returns:

numpy array for a B-spline curve (parametric dimension = 1); otherwise, an ANTsImage

Example

>>> # Perform 2-D curve example
>>>
>>> import ants, numpy
>>> import matplotlib.pyplot as plt
>>>
>>> x = numpy.linspace(-4, 4, num=100)
>>> y = numpy.exp(-numpy.multiply(x, x)) + numpy.random.uniform(-0.1, 0.1, len(x))
>>> u = numpy.linspace(0, 1.0, num=len(x))
>>> scattered_data = numpy.column_stack((x, y))
>>> parametric_data = numpy.expand_dims(u, axis=-1)
>>> spacing = 1/(len(x)-1) * 1.0;
>>>
>>> bspline_curve = ants.fit_bspline_object_to_scattered_data(scattered_data, parametric_data,
...     parametric_domain_origin=[0.0], parametric_domain_spacing=[spacing],
...     parametric_domain_size=[len(x)], is_parametric_dimension_closed=None,
...     number_of_fitting_levels=5, mesh_size=1)
>>>
>>> plt.plot(x, y, label='Noisy points')
>>> plt.plot(bspline_curve[:,0], bspline_curve[:,1], label='B-spline curve')
>>> plt.grid(True)
>>> plt.axis('tight')
>>> plt.legend(loc='upper left')
>>> plt.show()
>>>
>>> ###########################################################################
>>>
>>> # Perform 2-D scalar field (i.e., image) example
>>>
>>> import ants, numpy
>>>
>>> number_of_random_points = 10000
>>>
>>> img = ants.image_read( ants.get_ants_data("r16"))
>>> img_array = img.numpy()
>>> row_indices = numpy.random.choice(range(2, img_array.shape[0]), number_of_random_points)
>>> col_indices = numpy.random.choice(range(2, img_array.shape[1]), number_of_random_points)
>>>
>>> scattered_data = numpy.zeros((number_of_random_points, 1))
>>> parametric_data = numpy.zeros((number_of_random_points, 2))
>>>
>>> for i in range(number_of_random_points):
...     scattered_data[i,0] = img_array[row_indices[i], col_indices[i]]
...     parametric_data[i,0] = row_indices[i]
...     parametric_data[i,1] = col_indices[i]
>>>
>>> bspline_img = ants.fit_bspline_object_to_scattered_data(
...     scattered_data, parametric_data,
...     parametric_domain_origin=[0.0, 0.0],
...     parametric_domain_spacing=[1.0, 1.0],
...     parametric_domain_size=img.shape,
...     number_of_fitting_levels=7, mesh_size=1)
>>>
>>> ants.plot(img, title="Original")
>>> ants.plot(bspline_img, title="B-spline approximation")
ants.fit_bspline_displacement_field(displacement_field=None, displacement_weight_image=None, displacement_origins=None, displacements=None, displacement_weights=None, origin=None, spacing=None, size=None, direction=None, number_of_fitting_levels=4, mesh_size=1, spline_order=3, enforce_stationary_boundary=True, estimate_inverse=False, rasterize_points=False)[source]

Fit a B-spline object to a dense displacement field image and/or a set of points with associated displacements, and smooth them using B-splines. The inverse can also be estimated. This is basically a wrapper for the ITK filter

https://itk.org/Doxygen/html/classitk_1_1DisplacementFieldToBSplineImageFilter.html

which, in turn, wraps the same ITK filter used by the function fit_bspline_object_to_scattered_data.

ANTsR function: fitBsplineToDisplacementField

Parameters:
  • displacement_field (ANTs image) – Input displacement field. Either this and/or the points must be specified.

  • displacement_weight_image (ANTs image) – Input image defining the weighting of the voxelwise displacements in the displacement_field. If None, defaults to identity weighting for each displacement. Default = None.

  • displacement_origins (2-D numpy array) – Matrix (number_of_points x dimension) defining the origins of the input displacement points. Default = None.

  • displacements (2-D numpy array) – Matrix (number_of_points x dimension) defining the displacements of the input displacement points. Default = None.

  • displacement_weights (1-D numpy array) – Array defining the individual weighting of the corresponding scattered data value. Default = None meaning all values are weighted the same.

  • origin (n-D tuple) – Defines the physical origin of the B-spline object.

  • spacing (n-D tuple) – Defines the physical spacing of the B-spline object.

  • size (n-D tuple) – Defines the size (length) of the B-spline object. Note that the length of the B-spline object in dimension d is defined as spacing[d] * (size[d] - 1).

  • direction (2-D numpy array) – Matrix defining the physical direction (orientation) of the B-spline object. Default = None.

  • number_of_fitting_levels (python:integer) – Specifies the number of fitting levels.

  • mesh_size (n-D tuple) – Defines the mesh size at the initial fitting level.

  • spline_order (python:integer) – Spline order of the B-spline object. Default = 3.

  • enforce_stationary_boundary (boolean) – Ensure no displacements on the image boundary. Default = True.

  • estimate_inverse (boolean) – Estimate the inverse displacement field. Default = False.

  • rasterize_points (boolean) – Use nearest neighbor rasterization of points for estimating the field (potential speed-up). Default = False.

Return type:

Returns an ANTsImage.

Example

>>> # Perform 2-D fitting
>>>
>>> import ants, numpy
>>>
>>> points = numpy.array([[-50, -50]])
>>> deltas = numpy.array([[10, 10]])
>>>
>>> bspline_field = ants.fit_bspline_displacement_field(
...     displacement_origins=points, displacements=deltas,
...     origin=[0.0, 0.0], spacing=[1.0, 1.0], size=[100, 100],
...     direction=numpy.array([[-1, 0], [0, -1]]),
...     number_of_fitting_levels=4, mesh_size=(1, 1))
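
As a second sketch, a dense field can be smoothed directly through the displacement_field argument. The field below comes from ants.simulate_displacement_field (documented later in this section), and the output domain is taken from the image's origin, spacing, shape, and direction attributes; treating those attributes as the B-spline domain is an assumption made for illustration.

>>> # Smooth a dense simulated displacement field (2-D)
>>> import ants
>>> domain = ants.image_read(ants.get_ants_data('r16'))
>>> field = ants.simulate_displacement_field(domain, field_type="bspline")
>>> smoothed_field = ants.fit_bspline_displacement_field(
...     displacement_field=field,
...     origin=domain.origin, spacing=domain.spacing,
...     size=domain.shape, direction=domain.direction,
...     number_of_fitting_levels=4, mesh_size=1)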
ants.functional_lung_segmentation(image, mask=None, number_of_iterations=2, number_of_atropos_iterations=5, mrf_parameters='[0.7,2x2x2]', number_of_clusters=6, cluster_centers=None, bias_correction='n4', verbose=True)[source]

Ventilation-based segmentation of hyperpolarized gas lung MRI.

Lung segmentation into classes based on ventilation, as described in the associated publication.

Parameters:
  • image (ANTs image) – Input proton-weighted MRI.

  • mask (ANTs image) – Mask image designating the region to segment. 0/1 = background/foreground.

  • number_of_iterations (python:integer) – Number of Atropos <–> bias correction iterations (outer loop).

  • number_of_atropos_iterations (python:integer) – Number of Atropos iterations (inner loop). If number_of_atropos_iterations = 0, this is equivalent to K-means with no MRF priors.

  • mrf_parameters (string) – Parameters for MRF in Atropos.

  • number_of_clusters (python:integer) – Number of tissue classes.

  • cluster_centers (array or tuple) – Initialization centers for k-means.

  • bias_correction (string) – Apply “n3”, “n4”, or no bias correction (default = “n4”).

  • verbose (boolean) – Print progress to the screen.

Returns:

Dictionary with the segmentation image, probability images, and processed image.

Example

>>> import ants
>>> image = ants.image_read("lung_image.nii.gz")
>>> mask = ants.image_read("lung_mask.nii.gz")
>>> seg = ants.functional_lung_segmentation(image, mask, verbose=True)
ants.get_ants_data(file_id=None, target_file_name=None, antsx_cache_directory=None)

Get ANTsPy test data file

ANTsR function: getANTsRData

Parameters:

file_id (string) –

name of the test image tag to retrieve. Options:

  • ’r16’

  • ’r27’

  • ’r30’

  • ’r62’

  • ’r64’

  • ’r85’

  • ’ch2’

  • ’mni’

  • ’surf’

  • ’pcasl’

Returns:

filepath of test image

Return type:

string

Example

>>> import ants
>>> mnipath = ants.get_ants_data('mni')
ants.get_centroids(image, clustparam=0)[source]

Reduces a variate/statistical/network image to a set of centroids describing the center of each stand-alone non-zero component in the image

ANTsR function: getCentroids

Parameters:
  • image (ANTsImage) – image from which centroids will be calculated

  • clustparam (python:integer) – look at regions greater than or equal to this size

Return type:

ndarray

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data( "r16" ) )
>>> image = ants.threshold_image( image, 90, 120 )
>>> image = ants.label_clusters( image, 10 )
>>> cents = ants.get_centroids( image )
ants.get_mask(image, low_thresh=None, high_thresh=None, cleanup=2)[source]

Get a binary mask image from the given image after thresholding

ANTsR function: getMask

Parameters:
  • image (ANTsImage) – image from which mask will be computed. Can be an antsImage of 2, 3 or 4 dimensions.

  • low_thresh (scalar (optional)) – An inclusive lower threshold for voxels to be included in the mask. If not given, defaults to image mean.

  • high_thresh (scalar (optional)) – An inclusive upper threshold for voxels to be included in the mask. If not given, defaults to image max

  • cleanup (python:integer) –

    If > 0, morphological operations are applied to clean up the mask by eroding away small or weakly connected areas and closing holes. The following steps are applied:

    1. Erosion with radius 2 voxels

    2. Retain largest component

    3. Dilation with radius 1 voxel

    4. Morphological closing

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> mask = ants.get_mask(image)
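
A short sketch of the optional thresholds and the cleanup switch; the explicit thresholds below are illustrative (the defaults are the image mean and max, per the parameter notes above).

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> # explicit thresholds, no morphological cleanup
>>> raw_mask = ants.get_mask(image, low_thresh=image.mean(), high_thresh=image.max(), cleanup=0)
>>> # default thresholds with morphological cleanup
>>> clean_mask = ants.get_mask(image, cleanup=2)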
ants.image_similarity(fixed_image, moving_image, metric_type='MeanSquares', fixed_mask=None, moving_mask=None, sampling_strategy='regular', sampling_percentage=1.0)[source]

Measure similarity between two images. NOTE: similarity is actually returned as a distance (i.e., dissimilarity) per ITK/ANTs convention. For example, using the Correlation metric, the similarity of an image with itself is returned as -1.

ANTsR function: imageSimilarity

Parameters:
  • fixed_image (ANTsImage) – the fixed image

  • moving_image (ANTsImage) – the moving image

  • metric_type (string) –

    image metric to calculate. Options:

      • MeanSquares
      • Correlation
      • ANTSNeighborhoodCorrelation
      • MattesMutualInformation
      • JointHistogramMutualInformation
      • Demons

  • fixed_mask (ANTsImage (optional)) – mask for the fixed image

  • moving_mask (ANTsImage (optional)) – mask for the moving image

  • sampling_strategy (string (optional)) –

    sampling strategy, default is full sampling. Options:

      • None (full sampling)
      • regular
      • random

  • sampling_percentage (scalar) – percentage of data to sample when calculating metric Must be between 0 and 1

Return type:

scalar

Example

>>> import ants
>>> x = ants.image_read(ants.get_ants_data('r16'))
>>> y = ants.image_read(ants.get_ants_data('r30'))
>>> metric = ants.image_similarity(x,y,metric_type='MeanSquares')
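
Continuing the example, a sketch of the distance convention noted above and of the sampling options; the sampling settings are illustrative.

>>> # self-similarity under the Correlation metric is returned as a distance (about -1)
>>> self_sim = ants.image_similarity(x, x, metric_type='Correlation')
>>> # subsampled metric evaluation
>>> sampled = ants.image_similarity(x, y, metric_type='MattesMutualInformation',
...     sampling_strategy='random', sampling_percentage=0.5)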
ants.image_to_cluster_images(image, min_cluster_size=50, min_thresh=1e-06, max_thresh=1)[source]

Converts an image to several independent images.

Produces a unique image for each connected component 1 through N of size > min_cluster_size

ANTsR function: image2ClusterImages

Parameters:
  • image (ANTsImage) – input image

  • min_cluster_size (python:integer) – throw away clusters smaller than this value

  • min_thresh (scalar) – threshold to a statistical map

  • max_thresh (scalar) – threshold to a statistical map

Return type:

list of ANTsImage types

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> image = ants.threshold_image(image, 1, 1e15)
>>> image_cluster_list = ants.image_to_cluster_images(image)
ants.iMath(image, operation, *args)[source]

Perform various (often mathematical) operations on the input image(s). Additional parameters should be specific for each operation. See the full iMath in ANTs, on which this function is based.

ANTsR function: iMath

Parameters:
  • image (ANTsImage) – input object, usually antsImage

  • operation (string) – the operation to apply, e.g. “GetLargestComponent”. The special cases “GetOperations” and “GetOperationsFull” return a list of available operations with brief descriptions. Some operations may not be valid (WIP), but most are.

  • *args (non-keyword arguments) – additional parameters specific to the operation

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = ants.iMath(img, 'Canny', 1, 5, 12)
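
A brief sketch of the discovery operations and another named operation from the parameter notes above; the returned operation listing summarizes what iMath supports.

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> # list the available operations
>>> ops = ants.iMath(img, 'GetOperations')
>>> # keep only the largest connected component of a binary mask
>>> mask = ants.get_mask(img)
>>> largest = ants.iMath(mask, 'GetLargestComponent')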
ants.impute(data, method='mean', value=None, nan_value=nan)[source]

Impute missing values on a numpy ndarray in a column-wise manner.

ANTsR function: antsrimpute

Parameters:
  • data (numpy.ndarray) – data to impute

  • method (string or float) –

    type of imputation method to use. Options:

      • mean
      • median
      • constant
      • KNN
      • BiScaler
      • NuclearNormMinimization
      • SoftImpute
      • IterativeSVD

  • value (scalar (optional)) –

    optional argument for specific methods:

      • if method == 'constant': the constant value to use
      • if method == 'KNN': the number of nearest neighbors to use

  • nan_value (scalar) – value which is interpreted as a missing value

Returns:

ndarray if an ndarray was given, or pd.DataFrame if a pd.DataFrame was given

Example

>>> import ants
>>> import numpy as np
>>> data = np.random.randn(4,10)
>>> data[2,3] = np.nan
>>> data[3,5] = np.nan
>>> data_imputed = ants.impute(data, 'mean')
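
Continuing the example, a sketch of the method-specific value argument documented above; the constant and neighbor count are illustrative, and some methods (e.g. KNN) may require an optional imputation dependency to be installed.

>>> # constant-value imputation
>>> data_const = ants.impute(data, method='constant', value=0.0)
>>> # KNN imputation using 3 nearest neighbors
>>> data_knn = ants.impute(data, method='KNN', value=3)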

Details

KNN: Nearest neighbor imputation that weights samples using the mean squared difference on features for which two rows both have observed data.

SoftImpute: Matrix completion by iterative soft thresholding of SVD decompositions. Inspired by the softImpute package for R, which is based on Spectral Regularization Algorithms for Learning Large Incomplete Matrices by Mazumder et al.

IterativeSVD: Matrix completion by iterative low-rank SVD decomposition. Should be similar to SVDimpute from Missing value estimation methods for DNA microarrays by Troyanskaya et al.

MICE: Reimplementation of Multiple Imputation by Chained Equations.

MatrixFactorization: Direct factorization of the incomplete matrix into low-rank U and V, with an L1 sparsity penalty on the elements of U and an L2 penalty on the elements of V. Solved by gradient descent.

NuclearNormMinimization: Simple implementation of Exact Matrix Completion via Convex Optimization by Emmanuel Candes and Benjamin Recht using cvxpy. Too slow for large matrices.

BiScaler: Iterative estimation of row/column means and standard deviations to get a doubly normalized matrix. Not guaranteed to converge but works well in practice. Taken from Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares.

ants.invariant_image_similarity(image1, image2, local_search_iterations=0, metric='MI', thetas=array([0., 90., 180., 270., 360.]), thetas2=array([0., 90., 180., 270., 360.]), thetas3=array([0., 90., 180., 270., 360.]), scale_image=1, do_reflection=False, txfn=None, transform='Affine')[source]

Similarity metrics between two images as a function of geometry

Compute similarity metric between two images as image is rotated about its center w/ or w/o optimization

ANTsR function: invariantImageSimilarity

Parameters:
  • image1 (ANTsImage) – reference image

  • image2 (ANTsImage) – moving image

  • local_search_iterations (python:integer) – integer controlling local search in multistart

  • metric (string) –

    which metric to use. Options:

      • MI
      • GC

  • thetas (1D-ndarray/list/tuple) – numeric vector of search angles in degrees

  • thetas2 (1D-ndarray/list/tuple) – numeric vector of search angles in degrees around principal axis 2 (3D)

  • thetas3 (1D-ndarray/list/tuple) – numeric vector of search angles in degrees around principal axis 3 (3D)

  • scale_image (scalar) – global scale

  • do_reflection (boolean) – whether to reflect image about principal axis

  • txfn (string (optional)) – if present, write optimal tx to .mat file

  • transform (string) –

    type of transform to use. Options:

      • Rigid
      • Similarity
      • Affine

Returns:

dataframe with metric values and transformation parameters

Return type:

pd.DataFrame

Example

>>> import ants
>>> img1 = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = ants.image_read(ants.get_ants_data('r64'))
>>> metric = ants.invariant_image_similarity(img1,img2)
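
A further sketch using the documented search and transform options; the angle grid, iteration count, and transform choice are illustrative.

>>> # coarser rotation search with local refinement and a rigid transform
>>> metric2 = ants.invariant_image_similarity(img1, img2, metric='GC',
...     thetas=[0, 45, 90, 135, 180], local_search_iterations=10,
...     transform='Rigid')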
ants.convolve_image(image, kernel_image, crop=True)[source]

Convolve one image with another

ANTsR function: convolveImage

Parameters:
  • image (ANTsImage) – image to convolve

  • kernel_image (ANTsImage) – image acting as kernel

  • crop (boolean) – whether to automatically crop kernel_image

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16'))
>>> convimg = ants.make_image( (3,3), (1,0,1,0,-4,0,1,0,1) )
>>> convout = ants.convolve_image( fi, convimg )
>>> convimg2 = ants.make_image( (3,3), (0,1,0,1,0,-1,0,-1,0) )
>>> convout2 = ants.convolve_image( fi, convimg2 )
ants.label_clusters(image, min_cluster_size=50, min_thresh=1e-06, max_thresh=1, fully_connected=False)[source]

This will give a unique ID to each connected component 1 through N of size > min_cluster_size

ANTsR function: labelClusters

Parameters:
  • image (ANTsImage) – input image e.g. a statistical map

  • min_cluster_size (python:integer) – throw away clusters smaller than this value

  • min_thresh (scalar) – threshold to a statistical map

  • max_thresh (scalar) – threshold to a statistical map

  • fully_connected (boolean) – sets the neighborhood connectivity pattern

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> timageFully = ants.label_clusters( image, 10, 128, 150, True )
>>> timageFace = ants.label_clusters( image, 10, 128, 150, False )
ants.label_image_centroids(image, physical=False, convex=True, verbose=False)[source]

Converts a label image to coordinates summarizing their positions

ANTsR function: labelImageCentroids

Parameters:
  • image (ANTsImage) – image of integer labels

  • physical (boolean) – whether you want physical space coordinates or not

  • convex (boolean) – if True, return the centroid; if False, return the point with minimum average distance to other points with the same label

Returns:

  • labels (1D-ndarray) – array of label values

  • vertices (pd.DataFrame) – coordinates of label centroids

Return type:

dictionary w/ the above key-value pairs

Example

>>> import ants
>>> import numpy as np
>>> image = ants.from_numpy(np.asarray([[[0,2],[1,3]],[[4,6],[5,7]]]).astype('float32'))
>>> labels = ants.label_image_centroids(image)
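
Continuing the example, a sketch of reading the result, assuming the 'labels' and 'vertices' keys listed in the Returns section above.

>>> labels['labels']      # array of label values
>>> labels['vertices']    # centroid coordinates, one row per label
>>> # centroids in physical space rather than voxel indices
>>> labels_phys = ants.label_image_centroids(image, physical=True)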
ants.label_overlap_measures(source_image, target_image)[source]

Get overlap measures from two label images (e.g., Dice)

ANTsR function: labelOverlapMeasures

Parameters:
  • source_image (ANTsImage) – Source image

  • target_image (ANTsImage) – Target image

Return type:

data frame with measures for each label and all labels combined

Example

>>> import ants
>>> r16 = ants.image_read( ants.get_ants_data('r16') )
>>> r64 = ants.image_read( ants.get_ants_data('r64') )
>>> s16 = ants.kmeans_segmentation( r16, 3 )['segmentation']
>>> s64 = ants.kmeans_segmentation( r64, 3 )['segmentation']
>>> stats = ants.label_overlap_measures(s16, s64)
ants.mask_image(image, mask, level=1, binarize=False)[source]

Mask an input image by a mask image. If the mask image has multiple labels, it is possible to specify which label(s) to mask at.

ANTsR function: maskImage

Parameters:
  • image (ANTsImage) – Input image.

  • mask (ANTsImage) – Mask or label image.

  • level (scalar or tuple of scalars) – Level(s) at which to mask image. If vector or list of values, output image is non-zero at all locations where label image matches any of the levels specified.

  • binarize (boolean) – whether to binarize the output image

Return type:

ANTsImage

Example

>>> import ants
>>> myimage = ants.image_read(ants.get_ants_data('r16'))
>>> mask = ants.get_mask(myimage)
>>> myimage_mask = ants.mask_image(myimage, mask, 3)
>>> seg = ants.kmeans_segmentation(myimage, 3)
>>> myimage_mask = ants.mask_image(myimage, seg['segmentation'], (1,3))
ants.mni2tal(xin)[source]

mni2tal for converting from ch2/mni space to tal - very approximate.

This is a standard approach but it’s not very accurate.

ANTsR function: mni2tal

Parameters:

xin (tuple) – point in mni152 space.

Return type:

tuple

Example

>>> import ants
>>> ants.mni2tal( (10,12,14) )

References

http://bioimagesuite.yale.edu/mni2tal/501_95733_More%20Accurate%20Talairach%20Coordinates%20SLIDES.pdf

http://imaging.mrc-cbu.cam.ac.uk/imaging/MniTalairach

ants.morphology(image, operation, radius, mtype='binary', value=1, shape='ball', radius_is_parametric=False, thickness=1, lines=3, include_center=False)[source]

Apply morphological operations to an image

ANTsR function: morphology

Parameters:
  • image (ANTsImage) – input image

  • operation (string) –

    operation to apply. Options:

      • "close" – morphological closing
      • "dilate" – morphological dilation
      • "erode" – morphological erosion
      • "open" – morphological opening

  • radius (scalar) – radius of structuring element

  • mtype (string) –

    type of morphology. Options:

      • "binary" – binary operation on a single value
      • "grayscale" – grayscale operations

  • value (scalar) – value to operate on (mtype='binary' only)

  • shape (string) –

    shape of the structuring element (mtype='binary' only). Options:

      • "ball" – spherical structuring element
      • "box" – box-shaped structuring element
      • "cross" – cross-shaped structuring element
      • "annulus" – annulus-shaped structuring element
      • "polygon" – polygon structuring element

  • radius_is_parametric (boolean) – use parametric radius (shape='ball' and shape='annulus' only)

  • thickness (scalar) – thickness (shape=’annulus’ only)

  • lines (python:integer) – number of lines in polygon (shape=’polygon’ only)

  • include_center (boolean) – include the center of the annulus (shape='annulus' only)

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') , 2 )
>>> mask = ants.get_mask( fi )
>>> dilated_ball = ants.morphology( mask, operation='dilate', radius=3, mtype='binary', shape='ball')
>>> eroded_box = ants.morphology( mask, operation='erode', radius=3, mtype='binary', shape='box')
>>> opened_annulus = ants.morphology( mask, operation='open', radius=5, mtype='binary', shape='annulus', thickness=2)
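
A short sketch of the grayscale mode and the closing operation documented above; the radii are illustrative.

>>> # grayscale dilation operates on intensities directly (no value/shape arguments)
>>> gray_dilated = ants.morphology( fi, operation='dilate', radius=3, mtype='grayscale' )
>>> # binary morphological closing with a ball structuring element
>>> closed = ants.morphology( mask, operation='close', radius=2, mtype='binary', shape='ball' )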
ants.get_pointer_string(image)[source]
ants.simulate_displacement_field(domain_image, field_type='bspline', number_of_random_points=1000, sd_noise=10.0, enforce_stationary_boundary=True, number_of_fitting_levels=4, mesh_size=1, sd_smoothing=4.0)[source]

Simulate a displacement field using either a b-spline or exponential transform.

ANTsR function: simulateDisplacementField

Parameters:
  • domain_image (ANTsImage) – Domain image

  • field_type (string) – Either “bspline” or “exponential”.

  • number_of_random_points (python:integer) – Number of displacement points.

  • sd_noise (float) – Standard deviation of the displacement field noise.

  • enforce_stationary_boundary (boolean) – Determines fixed boundary conditions.

  • number_of_fitting_levels (python:integer) – Number of fitting levels (b-spline only).

  • mesh_size (python:integer or n-D tuple) – Determines fitting resolution at base level (b-spline only).

  • sd_smoothing (float) – Standard deviation of the Gaussian smoothing in mm (exponential only).

Return type:

ANTs vector image.

Example

>>> import ants
>>> domain = ants.image_read( ants.get_ants_data('r16'))
>>> exp_field = ants.simulate_displacement_field(domain, field_type="exponential")
>>> bsp_field = ants.simulate_displacement_field(domain, field_type="bspline")
>>> bsp_xfrm = ants.transform_from_displacement_field(bsp_field * 3)
>>> domain_warped = ants.apply_ants_transform_to_image(bsp_xfrm, domain, domain)
ants.smooth_image(image, sigma, sigma_in_physical_coordinates=True, FWHM=False, max_kernel_width=32)[source]

Smooth an image

ANTsR function: smoothImage

Parameters:
  • image – Image to smooth

  • sigma – Smoothing factor. Can be a scalar, in which case the same sigma is applied to each dimension, or a vector with one entry per image dimension to specify a unique smoothness for each dimension.

  • sigma_in_physical_coordinates (boolean) – If true, the smoothing factor is in millimeters; if false, it is in pixels.

  • FWHM (boolean) – If true, sigma is interpreted as the full-width-half-max (FWHM) of the filter, not the sigma of a Gaussian kernel.

  • max_kernel_width (scalar) – Maximum kernel width

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16'))
>>> simage = ants.smooth_image(image, (1.2,1.5))
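
A brief sketch of the FWHM and physical-coordinate switches documented above; the sigma values are illustrative.

>>> # interpret the smoothing parameter as FWHM rather than Gaussian sigma
>>> simage_fwhm = ants.smooth_image(image, 3.0, FWHM=True)
>>> # smooth in voxel units instead of millimeters
>>> simage_vox = ants.smooth_image(image, 2.0, sigma_in_physical_coordinates=False)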
ants.threshold_image(image, low_thresh=None, high_thresh=None, inval=1, outval=0, binary=True)[source]

Converts a scalar image into a binary image by thresholding operations

ANTsR function: thresholdImage

Parameters:
  • image (ANTsImage) – Input image to operate on

  • low_thresh (scalar (optional)) – Lower edge of threshold window

  • high_thresh (scalar (optional)) – Higher edge of threshold window

  • inval (scalar) – Output value for image voxels between low_thresh and high_thresh

  • outval (scalar) – Output value for image voxels lower than low_thresh or higher than high_thresh

  • binary (boolean) – if True, returns a binary thresholded image; if False, returns the binary thresholded image multiplied by the original image

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> timage = ants.threshold_image(image, 0.5, 1e15)
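
Continuing the example, a sketch of the binary switch and output values described above; the window [90, 120] is illustrative.

>>> # keep original intensities inside the threshold window
>>> masked_intensities = ants.threshold_image(image, 90, 120, binary=False)
>>> # mark the background instead by swapping inval/outval
>>> background = ants.threshold_image(image, 90, 120, inval=0, outval=1)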
ants.weingarten_image_curvature(image, sigma=1.0, opt='mean')[source]

Uses the Weingarten map to estimate image mean or Gaussian curvature

ANTsR function: weingartenImageCurvature

Parameters:
  • image (ANTsImage) – image from which curvature is calculated

  • sigma (scalar) – smoothing parameter

  • opt (string) – 'mean' by default; otherwise 'gaussian' or 'characterize'

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('mni')).resample_image((3,3,3))
>>> imagecurv = ants.weingarten_image_curvature(image)
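
A short sketch of the alternative opt values documented above; the sigma is illustrative.

>>> # Gaussian curvature instead of mean curvature
>>> imagecurv_gauss = ants.weingarten_image_curvature(image, sigma=1.5, opt='gaussian')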