Core

Images

ANTsImage

class ants.core.ants_image.ANTsImage(pointer)[source]
abp_n4(intensity_truncation=(0.025, 0.975, 256), mask=None, usen3=False)

Truncate outlier intensities and bias correct with the N4 algorithm.

ANTsR function: abpN4

Parameters:
  • image (ANTsImage) – image to correct and truncate

  • intensity_truncation (3-tuple) – quantiles for intensity truncation

  • mask (ANTsImage (optional)) – mask for bias correction

  • usen3 (boolean) – if True, use N3 bias correction instead of N4

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> image2 = ants.abp_n4(image)
abs(axis=None)[source]

Return absolute value of image

add_noise_to_image(noise_model, noise_parameters)

Add noise to an image using additive Gaussian, salt-and-pepper, shot, or speckle noise.

Parameters:
  • image (ANTsImage) – scalar image.

  • noise_model (string) – ‘additivegaussian’, ‘saltandpepper’, ‘shot’, or ‘speckle’.

  • noise_parameters (tuple or array or float) – ‘additivegaussian’: (mean, standardDeviation); ‘saltandpepper’: (probability, saltValue, pepperValue); ‘shot’: scale; ‘speckle’: standardDeviation

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> noise_image = ants.add_noise_to_image(image, 'additivegaussian', (0.0, 1.0))
>>> noise_image = ants.add_noise_to_image(image, 'saltandpepper', (0.1, 0.0, 100.0))
>>> noise_image = ants.add_noise_to_image(image, 'shot', 1.0)
>>> noise_image = ants.add_noise_to_image(image, 'speckle', 1.0)
allclose(image2)

Check if two images have the same array values
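
Example

A minimal usage sketch (not part of the original docstring); an identical copy should compare equal:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img.allclose(img.clone())  # expected True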

anti_alias()

Apply Anti-Alias filter to a binary image

ANTsR function: N/A

Parameters:

image (ANTsImage) – binary image to which anti-aliasing will be applied

Return type:

ANTsImage

Example

>>> import ants
>>> img = ants.image_read(ants.get_data('r16'))
>>> mask = ants.get_mask(img)
>>> mask_aa = ants.anti_alias(mask)
>>> ants.plot(mask)
>>> ants.plot(mask_aa)
apply(fn)[source]

Apply an arbitrary function to ANTsImage.

Parameters:

fn (python function or lambda) – function to apply to ENTIRE image at once

Returns:

image with function applied to it

Return type:

ANTsImage
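
Example

A minimal usage sketch (not part of the original docstring); it assumes the supplied function operates on the image's whole data array at once, as described above:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img_rescaled = img.apply(lambda x: x / 255.0)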

argmax(axis=None)[source]

Return argmax along specified axis

argmin(axis=None)[source]

Return argmin along specified axis

argrange(axis=None)[source]

Return argrange along specified axis

astype(dtype)[source]

Cast & clone an ANTsImage to a given numpy datatype.

Map:

uint8 : unsigned char
uint32 : unsigned int
float32 : float
float64 : double
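
Example

A minimal usage sketch (not part of the original docstring) using one of the supported numpy datatypes listed above:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img_u8 = img.astype('uint8')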

clone(pixeltype=None)

Create a copy of the given ANTsImage with the same data and info, possibly with a different data type for the image data. Only supports casting to uint8 (unsigned char), uint32 (unsigned int), float32 (float), and float64 (double)

Parameters:

dtype (string (optional)) –

if None, the dtype will be the same as the cloned ANTsImage. Otherwise, the data will be cast to this type. This can be a numpy type or an ITK type. Options:

’unsigned char’ or ‘uint8’, ‘unsigned int’ or ‘uint32’, ‘float’ or ‘float32’, ‘double’ or ‘float64’

Return type:

ANTsImage
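
Example

A minimal usage sketch (not part of the original docstring) using one of the supported pixel types listed above:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img_double = img.clone('double')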

crop_image(label_image=None, label=1)

Use a label image to crop a smaller ANTsImage from within a larger ANTsImage

ANTsR function: cropImage

Parameters:
  • image (ANTsImage) – image to crop

  • label_image (ANTsImage) – image with label values. If not supplied, estimated from data.

  • label (python:integer) – the label value to use

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') )
>>> cropped = ants.crop_image(fi)
>>> fi2 = ants.merge_channels([fi,fi])
>>> cropped2 = ants.crop_image(fi2)
>>> cropped = ants.crop_image(fi, fi, 100 )
crop_indices(lowerind, upperind)

Create a proper ANTsImage sub-image by indexing the image with indices. This is similar to but different from array sub-setting in that the resulting sub-image can be decropped back into its place without having to store its original index locations explicitly.

ANTsR function: cropIndices

Parameters:
  • image (ANTsImage) – image to crop

  • lowerind (list/tuple of python:integers) – vector of lower index, should be length image dimensionality

  • upperind (list/tuple of python:integers) – vector of upper index, should be length image dimensionality

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data("r16"))
>>> cropped = ants.crop_indices( fi, (10,10), (100,100) )
>>> cropped = ants.smooth_image( cropped, 5 )
>>> decropped = ants.decrop_image( cropped, fi )
decrop_image(full_image)

The inverse function for ants.crop_image

ANTsR function: decropImage

Parameters:
  • cropped_image (ANTsImage) – cropped image

  • full_image (ANTsImage) – image in which the cropped image will be put back

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16'))
>>> mask = ants.get_mask(fi)
>>> cropped = ants.crop_image(fi, mask, 1)
>>> cropped = ants.smooth_image(cropped, 1)
>>> decropped = ants.decrop_image(cropped, fi)
denoise_image(mask=None, shrink_factor=1, p=1, r=2, noise_model='Rician', v=0)

Denoise an image using a spatially adaptive filter originally described in J. V. Manjon, P. Coupe, Luis Marti-Bonmati, D. L. Collins, and M. Robles. Adaptive Non-Local Means Denoising of MR Images With Spatially Varying Noise Levels, Journal of Magnetic Resonance Imaging, 31:192-203, June 2010.

ANTsR function: denoiseImage

Parameters:
  • image (ANTsImage) – scalar image to denoise.

  • mask (ANTsImage) – to limit the denoise region.

  • shrink_factor (scalar) – downsampling level performed within the algorithm.

  • p (python:integer or character of format '2x2' where the x separates vector entries) – patch radius for local sample.

  • r (python:integer or character of format '2x2' where the x separates vector entries) – search radius from which to choose extra local samples.

  • noise_model (string) – ‘Rician’ or ‘Gaussian’

Return type:

ANTsImage

Example

>>> import ants
>>> import numpy as np
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> # add fairly large salt and pepper noise
>>> imagenoise = image + np.random.randn(*image.shape).astype('float32')*5
>>> imagedenoise = ants.denoise_image(imagenoise, ants.get_mask(image))
property direction

Get image direction

Return type:

tuple

flatten()[source]

Flatten image data

get_center_of_mass()

Compute an image center of mass in physical space which is defined as the mean of the intensity weighted voxel coordinate system.

ANTsR function: getCenterOfMass

Parameters:

image (ANTsImage) – image from which center of mass will be computed

Return type:

scalar

Example

>>> fi = ants.image_read( ants.get_ants_data("r16"))
>>> com1 = ants.get_center_of_mass( fi )
>>> fi = ants.image_read( ants.get_ants_data("r64"))
>>> com2 = ants.get_center_of_mass( fi )
get_centroids(clustparam=0)

Reduces a variate/statistical/network image to a set of centroids describing the center of each stand-alone non-zero component in the image

ANTsR function: getCentroids

Parameters:
  • image (ANTsImage) – image from which centroids will be calculated

  • clustparam (python:integer) – look at regions greater than or equal to this size

Return type:

ndarray

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data( "r16" ) )
>>> image = ants.threshold_image( image, 90, 120 )
>>> image = ants.label_clusters( image, 10 )
>>> cents = ants.get_centroids( image )
get_mask(low_thresh=None, high_thresh=None, cleanup=2)

Get a binary mask image from the given image after thresholding

ANTsR function: getMask

Parameters:
  • image (ANTsImage) – image from which mask will be computed. Can be an antsImage of 2, 3 or 4 dimensions.

  • low_thresh (scalar (optional)) – An inclusive lower threshold for voxels to be included in the mask. If not given, defaults to image mean.

  • high_thresh (scalar (optional)) – An inclusive upper threshold for voxels to be included in the mask. If not given, defaults to image max

  • cleanup (python:integer) –

    If > 0, morphological operations will be applied to clean up the mask by eroding away small or weakly-connected areas, and closing holes. If cleanup is >0, the following steps are applied

    1. Erosion with radius 2 voxels

    2. Retain largest component

    3. Dilation with radius 1 voxel

    4. Morphological closing

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> mask = ants.get_mask(image)
get_neighborhood_at_voxel(center, kernel, physical_coordinates=False)

Get a hypercube neighborhood at a voxel. Get the values in a local neighborhood of an image.

ANTsR function: getNeighborhoodAtVoxel

Parameters:
  • image (ANTsImage) – image to get values from.

  • center (tuple/list) – indices for neighborhood center

  • kernel (tuple/list) – either a collection of values for neighborhood radius (in voxels) or a binary collection of the same dimension as the image, specifying the shape of the neighborhood to extract

  • physical_coordinates (boolean) – whether voxel indices and offsets should be in voxel or physical coordinates

Returns:

values : ndarray

array of neighborhood values at the voxel

indices : ndarray

matrix providing the coordinates for each value

Return type:

dictionary w/ following key-value pairs

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> center = (2,2)
>>> radius = (3,3)
>>> retval = ants.get_neighborhood_at_voxel(img, center, radius)
get_neighborhood_in_mask(mask, radius, physical_coordinates=False, boundary_condition=None, spatial_info=False, get_gradient=False)

Get neighborhoods for voxels within mask.

This converts a scalar image to a matrix with rows that contain neighbors around a center voxel

ANTsR function: getNeighborhoodInMask

Parameters:
  • image (ANTsImage) – image to get values from

  • mask (ANTsImage) – image indicating which voxels to examine. Each voxel > 0 will be used as the center of a neighborhood

  • radius (tuple/list) – array of values for neighborhood radius (in voxels)

  • physical_coordinates (boolean) – whether voxel indices and offsets should be in voxel or physical coordinates

  • boundary_condition (string (optional)) –

    how to handle voxels in a neighborhood, but not in the mask.

    None : fill values with NaN
    image : use image value, even if not in mask
    mean : use mean of all non-NaN values for that neighborhood

  • spatial_info (boolean) – whether voxel locations and neighborhood offsets should be returned along with pixel values.

  • get_gradient (boolean) – whether a matrix of gradients (at the center voxel) should be returned in addition to the value matrix (WIP)

Returns:

  • if spatial_info is False

    if get_gradient is False:
    ndarray

    an array of pixel values where the number of rows is the size of the neighborhood and there is a column for each voxel

    else if get_gradient is True:
    dictionary w/ following key-value pairs:
    values : ndarray

    array of pixel values where the number of rows is the size of the neighborhood and there is a column for each voxel.

    gradients : ndarray

    array providing the gradients at the center voxel of each neighborhood

  • else if spatial_info is True

    dictionary w/ following key-value pairs:
    values : ndarray

    array of pixel values where the number of rows is the size of the neighborhood and there is a column for each voxel.

    indices : ndarray

    array providing the center coordinates for each neighborhood

    offsets : ndarray

    array providing the offsets from center for each voxel in a neighborhood

Example

>>> import ants
>>> r16 = ants.image_read(ants.get_ants_data('r16'))
>>> mask = ants.get_mask(r16)
>>> mat = ants.get_neighborhood_in_mask(r16, mask, radius=(2,2))
hausdorff_distance(image2)

Get Hausdorff distance between non-zero pixels in two images

ANTsR function: hausdorffDistance

Parameters:
  • source_image (ANTsImage) – Source image

  • target_image (ANTsImage) – Target image

Return type:

data frame with “Distance” and “AverageDistance”

Example

>>> import ants
>>> r16 = ants.image_read( ants.get_ants_data('r16') )
>>> r64 = ants.image_read( ants.get_ants_data('r64') )
>>> s16 = ants.kmeans_segmentation( r16, 3 )['segmentation']
>>> s64 = ants.kmeans_segmentation( r64, 3 )['segmentation']
>>> stats = ants.hausdorff_distance(s16, s64)
hessian_objectness(object_dimension=1, is_bright_object=True, sigma_min=0.1, sigma_max=10, number_of_sigma_steps=10, use_sigma_logarithmic_spacing=True, alpha=0.5, beta=0.5, gamma=5.0, set_scale_objectness_measure=True)

Interface to ITK filter. Based on the paper by Westin et al., “Geometrical Diffusion Measures for MRI from Tensor Basis Analysis” and Luca Antiga’s Insight Journal paper http://hdl.handle.net/1926/576.

Parameters:
  • image (ANTsImage) – scalar image.

  • object_dimension (unsigned python:int) – 0: ‘sphere’, 1: ‘line’, or 2: ‘plane’.

  • is_bright_object (boolean) – Set ‘true’ for enhancing bright objects and ‘false’ for dark objects.

  • sigma_min (float) – Define scale domain for feature extraction.

  • sigma_max (float) – Define scale domain for feature extraction.

  • number_of_sigma_steps (unsigned python:int) – Define number of samples for scale space.

  • use_sigma_logarithmic_spacing (boolean) – Define sample spacing for the scale space.

  • alpha (float) – Hessian filter parameter.

  • beta (float) – Hessian filter parameter.

  • gamma (float) – Hessian filter parameter.

  • set_scale_objectness_measure (boolean) –

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> hessian_object_image = ants.hessian_objectness(image)
histogram_equalize_image(number_of_histogram_bins=256)

Histogram equalize image

# from http://www.janeriksolem.net/histogram-equalization-with-python-and.html

Parameters:
  • image (ANTsImage) – source image

  • number_of_histogram_bins (python:integer) – number of bins for cumulative histogram

Example

>>> import ants
>>> src_img = ants.image_read(ants.get_data('r16'))
>>> src_img_eq = ants.histogram_equalize_image(src_img)
histogram_match_image(reference_image, number_of_histogram_bins=255, number_of_match_points=64, use_threshold_at_mean_intensity=False)

Histogram match source image to reference image.

Parameters:
  • source_image (ANTsImage) – source image

  • reference_image (ANTsImage) – reference image

  • number_of_histogram_bins (python:integer) – number of bins for source and reference histograms

  • number_of_match_points (python:integer) – number of points for histogram matching

  • use_threshold_at_mean_intensity (boolean) – see ITK description.

Example

>>> import ants
>>> src_img = ants.image_read(ants.get_data('r16'))
>>> ref_img = ants.image_read(ants.get_data('r64'))
>>> src_ref = ants.histogram_match_image(src_img, ref_img)
histogram_match_image2(reference_image, source_mask=None, reference_mask=None, match_points=64, transform_domain_size=255)

Transform image intensities based on histogram mapping.

Apply B-spline 1-D maps to an input image for intensity warping.

Parameters:
  • source_image (ANTsImage) – source image

  • reference_image (ANTsImage) – reference image

  • source_mask (ANTsImage) – source mask

  • reference_mask (ANTsImage) – reference mask

  • match_points (python:integer or tuple) – Parametric points at which the intensity transform displacements are specified between [0, 1], i.e. quantiles. Alternatively, a single number can be given and the sequence is linearly spaced in [0, 1].

  • transform_domain_size (python:integer) – Defines the sampling resolution of the B-spline warping.

Return type:

ANTsImage

Example

>>> import ants
>>> src_img = ants.image_read(ants.get_data('r16'))
>>> ref_img = ants.image_read(ants.get_data('r64'))
>>> src_ref = ants.histogram_match_image2(src_img, ref_img)
iMath(operation, *args)

Perform various (often mathematical) operations on the input image/s. Additional parameters should be specific for each operation. See the full iMath in ANTs, on which this function is based.

ANTsR function: iMath

Parameters:
  • image (ANTsImage) – input object, usually antsImage

  • operation – a string e.g. “GetLargestComponent” … the special case of “GetOperations” or “GetOperationsFull” will return a list of operations and brief description. Some operations may not be valid (WIP), but most are.

  • *args (non-keyword arguments) – additional parameters specific to the operation

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = ants.iMath(img, 'Canny', 1, 5, 12)
iMath_propagate_labels_through_mask(labels, stopping_value=100, propagation_method=0)
>>> import ants
>>> wms = ants.image_read('~/desktop/wms.nii.gz')
>>> thal = ants.image_read('~/desktop/thal.nii.gz')
>>> img2 = ants.iMath_propagate_labels_through_mask(wms, thal, 500, 0)
iMath_truncate_intensity(lower_q, upper_q, n_bins=64)
>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> ants.iMath_truncate_intensity( img, 0.2, 0.8 )
image_mutual_information(image2)

Compute mutual information between two ANTsImage types

ANTsR function: antsImageMutualInformation

Parameters:
  • image1 (ANTsImage) – first image to compare

  • image2 (ANTsImage) – second image to compare

Return type:

scalar

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') ).clone('float')
>>> mi = ants.image_read( ants.get_ants_data('r64') ).clone('float')
>>> mival = ants.image_mutual_information(fi, mi) # -0.1796141
image_physical_space_consistency(image2, tolerance=0.01, datatype=False)

Check if two or more ANTsImage objects occupy the same physical space

ANTsR function: antsImagePhysicalSpaceConsistency

Parameters:
  • *images (ANTsImages) – images to compare

  • tolerance (float) – tolerance when checking origin and spacing

  • datatype (boolean) – If true, also check that the image data types are the same

Returns:

true if images share same physical space, false otherwise

Return type:

boolean
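
Example

A minimal usage sketch (not part of the original docstring); a clone necessarily shares the physical space of its source:

>>> import ants
>>> img1 = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = img1.clone()
>>> ants.image_physical_space_consistency(img1, img2)  # expected True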

image_similarity(moving_image, metric_type='MeanSquares', fixed_mask=None, moving_mask=None, sampling_strategy='regular', sampling_percentage=1.0)

Measure similarity between two images. NOTE: Similarity is actually returned as distance (i.e. dissimilarity) per ITK/ANTs convention. E.g. using Correlation metric, the similarity of an image with itself returns -1.

ANTsR function: imageSimilarity

Parameters:
  • fixed (ANTsImage) – the fixed image

  • moving (ANTsImage) – the moving image

  • metric_type (string) –

    image metric to calculate

    MeanSquares
    Correlation
    ANTSNeighborhoodCorrelation
    MattesMutualInformation
    JointHistogramMutualInformation
    Demons

  • fixed_mask (ANTsImage (optional)) – mask for the fixed image

  • moving_mask (ANTsImage (optional)) – mask for the moving image

  • sampling_strategy (string (optional)) –

    sampling strategy, default is full sampling

    None (Full sampling)
    random
    regular

  • sampling_percentage (scalar) – percentage of data to sample when calculating metric Must be between 0 and 1

Return type:

scalar

Example

>>> import ants
>>> x = ants.image_read(ants.get_ants_data('r16'))
>>> y = ants.image_read(ants.get_ants_data('r30'))
>>> metric = ants.image_similarity(x,y,metric_type='MeanSquares')
image_to_cluster_images(min_cluster_size=50, min_thresh=1e-06, max_thresh=1)

Converts an image to several independent images.

Produces a unique image for each connected component 1 through N of size > min_cluster_size

ANTsR function: image2ClusterImages

Parameters:
  • image (ANTsImage) – input image

  • min_cluster_size (python:integer) – throw away clusters smaller than this value

  • min_thresh (scalar) – threshold to a statistical map

  • max_thresh (scalar) – threshold to a statistical map

Return type:

list of ANTsImage types

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> image = ants.threshold_image(image, 1, 1e15)
>>> image_cluster_list = ants.image_to_cluster_images(image)
image_write(filename, ri=False)

Write an ANTsImage to file

ANTsR function: antsImageWrite

Parameters:
  • image (ANTsImage) – image to save to file

  • filename (string) – name of file to which image will be saved

  • ri (boolean) –

    if True, return image. This allows for using this function in a pipeline:
    >>> img2 = img.smooth_image(2.).image_write(file1, ri=True).threshold_image(0,20).image_write(file2, ri=True)
    

    if False, do not return image

label_clusters(min_cluster_size=50, min_thresh=1e-06, max_thresh=1, fully_connected=False)

This will give a unique ID to each connected component 1 through N of size > min_cluster_size

ANTsR function: labelClusters

Parameters:
  • image (ANTsImage) – input image e.g. a statistical map

  • min_cluster_size (python:integer) – throw away clusters smaller than this value

  • min_thresh (scalar) – threshold to a statistical map

  • max_thresh (scalar) – threshold to a statistical map

  • fully_connected (boolean) – boolean sets neighborhood connectivity pattern

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> timageFully = ants.label_clusters( image, 10, 128, 150, True )
>>> timageFace = ants.label_clusters( image, 10, 128, 150, False )
label_geometry_measures(intensity_image=None)

Wrapper for the ANTs function labelGeometryMeasures

ANTsR function: labelGeometryMeasures

Parameters:
  • label_image (ANTsImage) – image on which to compute geometry

  • intensity_image (ANTsImage (optional)) – image with intensity values

Return type:

pandas.DataFrame

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') )
>>> seg = ants.kmeans_segmentation( fi, 3 )['segmentation']
>>> geom = ants.label_geometry_measures(seg,fi)
label_image_centroids(physical=False, convex=True, verbose=False)

Converts a label image to coordinates summarizing their positions

ANTsR function: labelImageCentroids

Parameters:
  • image (ANTsImage) – image of integer labels

  • physical (boolean) – whether you want physical space coordinates or not

  • convex (boolean) – if True, return centroid if False return point with min average distance to other points with same label

Returns:

labels : 1D-ndarray

array of label values

vertices : pd.DataFrame

coordinates of label centroids

Return type:

dictionary w/ following key-value pairs

Example

>>> import ants
>>> import numpy as np
>>> image = ants.from_numpy(np.asarray([[[0,2],[1,3]],[[4,6],[5,7]]]).astype('float32'))
>>> labels = ants.label_image_centroids(image)
label_overlap_measures(target_image)

Get overlap measures from two label images (e.g., Dice)

ANTsR function: labelOverlapMeasures

Parameters:
  • source_image (ANTsImage) – Source image

  • target_image (ANTsImage) – Target image

Return type:

data frame with measures for each label and all labels combined

Example

>>> import ants
>>> r16 = ants.image_read( ants.get_ants_data('r16') )
>>> r64 = ants.image_read( ants.get_ants_data('r64') )
>>> s16 = ants.kmeans_segmentation( r16, 3 )['segmentation']
>>> s64 = ants.kmeans_segmentation( r64, 3 )['segmentation']
>>> stats = ants.label_overlap_measures(s16, s64)
label_stats(label_image)

Get label statistics from image

ANTsR function: labelStats

Parameters:
  • image (ANTsImage) – Image from which statistics will be calculated

  • label_image (ANTsImage) – Label image

Return type:

ndarray ?

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') , 2 )
>>> image = ants.resample_image( image, (64,64), 1, 0 )
>>> mask = ants.get_mask(image)
>>> segs1 = ants.kmeans_segmentation( image, 3 )
>>> stats = ants.label_stats(image, segs1['segmentation'])
labels_to_matrix(mask, target_labels=None, missing_val=nan)

Convert a labeled image to an n x m binary matrix where n = number of voxels and m = number of labels. Only includes values inside the provided mask while including background ( image == 0 ) for consistency with timeseries2matrix and other image to matrix operations.

ANTsR function: labels2matrix

Parameters:
  • image (ANTsImage) – input label image

  • mask (ANTsImage) – defines domain of interest

  • target_labels (list/tuple) – defines target regions to be returned. if the target label does not exist in the input label image, then the matrix will contain a constant value of missing_val (default None) in that row.

  • missing_val (scalar) – value to use for missing label values

Return type:

ndarray

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16')).resample_image((60,60),1,0)
>>> mask = ants.get_mask(fi)
>>> labs = ants.kmeans_segmentation(fi,3)['segmentation']
>>> labmat = ants.labels_to_matrix(labs, mask)
list_to_ndimage(image_list)

Merge list of multiple scalar ANTsImage types of dimension into one ANTsImage of dimension plus one

ANTsR function: mergeListToNDImage

Parameters:
  • image (target image space) –

  • image_list (list/tuple of ANTsImage python:types) – scalar images to merge into target image space

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> image2 = ants.image_read(ants.get_ants_data('r16'))
>>> imageTar = ants.make_image( ( *image2.shape, 2 ) )
>>> image3 = ants.list_to_ndimage( imageTar, [image,image2])
>>> image3.dimension == 3
mask_image(mask, level=1, binarize=False)

Mask an input image by a mask image. If the mask image has multiple labels, it is possible to specify which label(s) to mask at.

ANTsR function: maskImage

Parameters:
  • image (ANTsImage) – Input image.

  • mask (ANTsImage) – Mask or label image.

  • level (scalar or tuple of scalars) – Level(s) at which to mask image. If vector or list of values, output image is non-zero at all locations where label image matches any of the levels specified.

  • binarize (boolean) – whether to binarize the output image

Return type:

ANTsImage

Example

>>> import ants
>>> myimage = ants.image_read(ants.get_ants_data('r16'))
>>> mask = ants.get_mask(myimage)
>>> myimage_mask = ants.mask_image(myimage, mask, 3)
>>> seg = ants.kmeans_segmentation(myimage, 3)
>>> myimage_mask = ants.mask_image(myimage, seg['segmentation'], (1,3))
matrix_to_timeseries(matrix, mask=None)

Converts a matrix to an ND image.

ANTsR function: matrix2timeseries

Parameters:
  • image (reference ND image) –

  • matrix (matrix to convert to image) –

  • mask (mask image defining voxels of interest) –

Return type:

ANTsImage

Example

>>> import ants
>>> img = ants.make_image( (10,10,10,5 ) )
>>> mask = ants.ndimage_to_list( img )[0] * 0
>>> mask[ 4:8, 4:8, 4:8 ] = 1
>>> mat = ants.timeseries_to_matrix( img, mask = mask )
>>> img2 = ants.matrix_to_timeseries( img,  mat, mask)
max(axis=None)[source]

Return max along specified axis

mean(axis=None)[source]

Return mean along specified axis

median(axis=None)[source]

Return median along specified axis

min(axis=None)[source]

Return min along specified axis
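
Example

A minimal usage sketch (not part of the original docstrings) of the reduction helpers max, mean, median, and min:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img.max(), img.mean(), img.median(), img.min()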

morphology(operation, radius, mtype='binary', value=1, shape='ball', radius_is_parametric=False, thickness=1, lines=3, include_center=False)

Apply morphological operations to an image

ANTsR function: morphology

Parameters:
  • input (ANTsImage) – input image

  • operation (string) –

    operation to apply

    ”close” Morphological closing
    “dilate” Morphological dilation
    “erode” Morphological erosion
    “open” Morphological opening

  • radius (scalar) – radius of structuring element

  • mtype (string) –

    type of morphology

    ”binary” Binary operation on a single value
    “grayscale” Grayscale operations

  • value (scalar) – value to operate on (type=’binary’ only)

  • shape (string) –

    shape of the structuring element ( type=’binary’ only )

    ”ball” spherical structuring element
    “box” box shaped structuring element
    “cross” cross shaped structuring element
    “annulus” annulus shaped structuring element
    “polygon” polygon structuring element

  • radius_is_parametric (boolean) – use parametric radius (shape=’ball’ and shape=’annulus’ only)

  • thickness (scalar) – thickness (shape=’annulus’ only)

  • lines (python:integer) – number of lines in polygon (shape=’polygon’ only)

  • include_center (boolean) – include center of annulus boolean (shape=’annulus’ only)

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') , 2 )
>>> mask = ants.get_mask( fi )
>>> dilated_ball = ants.morphology( mask, operation='dilate', radius=3, mtype='binary', shape='ball')
>>> eroded_box = ants.morphology( mask, operation='erode', radius=3, mtype='binary', shape='box')
>>> opened_annulus = ants.morphology( mask, operation='open', radius=5, mtype='binary', shape='annulus', thickness=2)
movie(filename=None, writer=None, fps=30)

Create and save a movie - mp4, gif, etc - of the various 2D slices of a 3D ants image

Try this:

conda install -c conda-forge ffmpeg

Example

>>> import ants
>>> mni = ants.image_read(ants.get_data('mni'))
>>> ants.movie(mni, filename='~/desktop/movie.mp4')
multi_label_morphology(operation, radius, dilation_mask=None, label_list=None, force=False)

Morphology on multi label images.

Wraps calls to iMath binary morphology. Additionally, dilation and closing operations preserve pre-existing labels. The choices of operation are:

Dilation: dilates all labels sequentially, but does not overwrite original labels. This reduces dependence on the intensity ordering of adjoining labels. Ordering dependence can still arise if two or more labels dilate into the same space - in this case, the label with the lowest intensity is retained. With a mask, dilated labels are multiplied by the mask and then added to the original label, thus restricting dilation to the mask region.

Erosion: Erodes labels independently, equivalent to calling iMath iteratively.

Closing: Close holes in each label sequentially, but does not overwrite original labels.

Opening: Opens each label independently, equivalent to calling iMath iteratively.

Parameters:
  • image (ANTsImage) – Input image should contain only 0 for background and positive integers for labels.

  • operation (string) – One of MD, ME, MC, MO, passed to iMath.

  • radius (python:integer) – radius of the morphological operation.

  • dilation_mask (ANTsImage) – Optional binary mask to constrain dilation only (eg dilate cortical label into WM).

  • label_list (list or tuple or numpy.ndarray) – Optional list of labels, to perform operation upon. Defaults to all unique intensities in image.

Return type:

ANTsImage

Example

>>> import ants
>>> img = ants.image_read(ants.get_data('r16'))
>>> labels = ants.get_mask(img,1,150) + ants.get_mask(img,151,225) * 2
>>> labels_dilated = ants.multi_label_morphology(labels, 'MD', 2)
>>> # should see original label regions preserved in dilated version
>>> # label N should have mean N and 0 variance
>>> print(ants.label_stats(labels_dilated, labels))
n3_bias_field_correction(downsample_factor=3)

N3 Bias Field Correction

ANTsR function: n3BiasFieldCorrection

Parameters:
  • image (ANTsImage) – image to be bias corrected

  • downsample_factor (scalar) – how much to downsample image before performing bias correction

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> image_n3 = ants.n3_bias_field_correction(image)
n3_bias_field_correction2(mask=None, rescale_intensities=False, shrink_factor=4, convergence={'iters': 50, 'tol': 1e-07}, spline_param=None, number_of_fitting_levels=4, return_bias_field=False, verbose=False, weight_mask=None)

N3 Bias Field Correction

ANTsR function: n3BiasFieldCorrection2

Parameters:
  • image (ANTsImage) – image to bias correct

  • mask (ANTsImage) – Input mask. If not specified, the entire image is used.

  • rescale_intensities (boolean) – At each iteration, a new intensity mapping is calculated and applied but there is nothing which constrains the new intensity range to be within certain values. The result is that the range can “drift” from the original at each iteration. This option rescales to the [min,max] range of the original image intensities within the user-specified mask. A mask is required to perform rescaling. Default is False in ANTsR/ANTsPy but True in ANTs.

  • shrink_factor (scalar) – Shrink factor for multi-resolution correction, typically integer less than 4

  • convergence (dict w/ keys iters and tol) – iters : maximum number of iterations tol : the convergence tolerance. Default tolerance is 1e-7 in ANTsR/ANTsPy but 0.0 in ANTs.

  • spline_param (float or vector) – Parameter controlling number of control points in spline. Either single value, indicating the spacing in each direction, or vector with one entry per dimension of image, indicating the mesh size. If None, defaults to mesh size of 1 in all dimensions.

  • number_of_fitting_levels (python:integer) – Number of fitting levels per iteration.

  • return_bias_field (boolean) – Return bias field instead of bias corrected image.

  • verbose (boolean) – enables verbose output.

  • weight_mask (ANTsImage (optional)) – antsImage of weight mask

Return type:

ANTsImage

Example

>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> image_n3 = ants.n3_bias_field_correction2(image)
n4_bias_field_correction(mask=None, rescale_intensities=False, shrink_factor=4, convergence={'iters': [50, 50, 50, 50], 'tol': 1e-07}, spline_param=None, return_bias_field=False, verbose=False, weight_mask=None)

N4 Bias Field Correction

ANTsR function: n4BiasFieldCorrection

Parameters:
  • image (ANTsImage) – image to bias correct

  • mask (ANTsImage) – Input mask. If not specified, the entire image is used.

  • rescale_intensities (boolean) – At each iteration, a new intensity mapping is calculated and applied but there is nothing which constrains the new intensity range to be within certain values. The result is that the range can “drift” from the original at each iteration. This option rescales to the [min,max] range of the original image intensities within the user-specified mask. A mask is required to perform rescaling. Default is False in ANTsR/ANTsPy but True in ANTs.

  • shrink_factor (scalar) – Shrink factor for multi-resolution correction, typically integer less than 4

  • convergence (dict w/ keys iters and tol) – iters : vector of maximum number of iterations for each level tol : the convergence tolerance. Default tolerance is 1e-7 in ANTsR/ANTsPy but 0.0 in ANTs.

  • spline_param (float or vector) – Parameter controlling number of control points in spline. Either single value, indicating the spacing in each direction, or vector with one entry per dimension of image, indicating the mesh size. If None, defaults to mesh size of 1 in all dimensions.

  • return_bias_field (boolean) – Return bias field instead of bias corrected image.

  • verbose (boolean) – enables verbose output.

  • weight_mask (ANTsImage (optional)) – antsImage of weight mask

Return type:

ANTsImage

Example

>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> image_n4 = ants.n4_bias_field_correction(image)
ndimage_to_list()

Split a n dimensional ANTsImage into a list of n-1 dimensional ANTsImages

Parameters:

image (ANTsImage) – n-dimensional image to split

Return type:

list of ANTsImage types

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'))
>>> image2 = ants.image_read(ants.get_ants_data('r16'))
>>> imageTar = ants.make_image( ( *image2.shape, 2 ) )
>>> image3 = ants.list_to_ndimage( imageTar, [image,image2])
>>> image3.dimension == 3
>>> images_unmerged = ants.ndimage_to_list( image3 )
>>> len(images_unmerged) == 2
>>> images_unmerged[0].dimension == 2
new_image_like(data)

Create a new ANTsImage with the same header information, but with a new image array.

Parameters:

data (ndarray or py::capsule) – New array or pointer for the image. It must have the same shape as the current image data.

Return type:

ANTsImage
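
Example

A minimal usage sketch (not part of the original docstring); the doubled array is illustrative and keeps the required shape:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = img.new_image_like(img.numpy() * 2.0)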

nonzero()[source]

Return non-zero indices of image

numpy(single_components=False)[source]

Get a numpy array copy representing the underlying image data. Altering this ndarray will have NO effect on the underlying image data.

Parameters:

single_components (boolean (default is False)) – if True, keep the extra component dimension in returned array even if image only has one component (i.e. self.has_components == False)

Return type:

ndarray
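
Example

A minimal usage sketch (not part of the original docstring) showing that the returned array is an independent copy:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> arr = img.numpy()
>>> arr[:] = 0  # img itself is unchanged, since arr is a copy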

property origin

Get image origin

Return type:

tuple

pad_image(shape=None, pad_width=None, value=0.0, return_padvals=False)

Pad an image to have the given shape or to be isotropic.

Parameters:
  • image (ANTsImage) – image to pad

  • shape (tuple) –

    • if shape is given, the image will be padded in each dimension until it has this shape

    • if shape and pad_width are both None, the image will be padded along each dimension to match the largest existing dimension so that it has isotropic dimensions.

  • pad_width (list of python:integers or list-of-list of python:integers) – How much to pad in each direction. If a single list is supplied (e.g., [4,4,4]), then the image will be padded by half that amount on both sides. If a list-of-list is supplied (e.g., [(0,4),(0,4),(0,4)]), then the image will be padded unevenly on the different sides

  • pad_value (scalar) – value with which image will be padded

Example

>>> import ants
>>> img = ants.image_read(ants.get_data('r16'))
>>> img2 = ants.pad_image(img, shape=(300,300))
>>> mni = ants.image_read(ants.get_data('mni'))
>>> mni2 = ants.pad_image(mni)
>>> mni3 = ants.pad_image(mni, pad_width=[(0,4),(0,4),(0,4)])
>>> mni4 = ants.pad_image(mni, pad_width=(4,4,4))
plot(overlay=None, blend=False, alpha=1, cmap='Greys_r', overlay_cmap='turbo', overlay_alpha=0.9, vminol=None, vmaxol=None, cbar=False, cbar_length=0.8, cbar_dx=0.0, cbar_vertical=True, axis=0, nslices=12, slices=None, ncol=None, slice_buffer=None, black_bg=True, bg_thresh_quant=0.01, bg_val_quant=0.99, domain_image_map=None, crop=False, scale=False, reverse=False, title=None, title_fontsize=20, title_dx=0.0, title_dy=0.0, filename=None, dpi=500, figsize=1.5, reorient=True, resample=True)

Plot an ANTsImage.

Use mask_image and/or threshold_image to preprocess images to be overlaid and display the overlays in a given range. See the wiki examples.

By default, images will be reoriented to ‘LAI’ orientation before plotting. So, if axis == 0, the images will be ordered from the left side of the brain to the right side of the brain. If axis == 1, the images will be ordered from the anterior (front) of the brain to the posterior (back) of the brain. And if axis == 2, the images will be ordered from the inferior (bottom) of the brain to the superior (top) of the brain.

ANTsR function: plot.antsImage

Parameters:
  • image (ANTsImage) – image to plot

  • overlay (ANTsImage) – image to overlay on base image

  • cmap (string) – colormap to use for base image. See matplotlib.

  • overlay_cmap (string) – colormap to use for overlay images, if applicable. See matplotlib.

  • overlay_alpha (float) – level of transparency for any overlays. Smaller value means the overlay is more transparent. See matplotlib.

  • axis (python:integer) – which axis to plot along if image is 3D

  • nslices (python:integer) – number of slices to plot if image is 3D

  • slices (list or tuple of python:integers) – specific slice indices to plot if image is 3D. If given, this will override nslices. This can be absolute array indices (e.g. (80,100,120)), or this can be relative array indices (e.g. (0.4,0.5,0.6))

  • ncol (python:integer) – Number of columns to have on the plot if image is 3D.

  • slice_buffer (python:integer) – how many slices to buffer when finding the non-zero slices of a 3D images. So, if slice_buffer = 10, then the first slice in a 3D image will be the first non-zero slice index plus 10 more slices.

  • black_bg (boolean) –

    if True, the background of the image(s) will be black. If False, the background of the image(s) will be determined by the values bg_thresh_quant and bg_val_quant.

  • bg_thresh_quant (float) –

    if black_bg is False, the background will be determined by thresholding the image at the bg_thresh quantile value and setting the background intensity to the bg_val quantile value. This value should be in [0, 1] - somewhere around 0.01 is recommended.

    • equal to 1 will threshold the entire image

    • equal to 0 will threshold none of the image

  • bg_val_quant (float) –

    if black_bg is False, the background will be determined by thresholding the image at the bg_thresh quantile value and setting the background intensity to the bg_val quantile value. This value should be in [0, 1]

    • equal to 1 is pure white

    • equal to 0 is pure black

    • somewhere in between is gray

  • domain_image_map (ANTsImage) – this input ANTsImage or list of ANTsImage types contains a reference image domain_image and optional reference mapping named domainMap. If supplied, the image(s) to be plotted will be mapped to the domain image space before plotting - useful for non-standard image orientations.

  • crop (boolean) – if true, the image(s) will be cropped to their bounding boxes, resulting in a potentially smaller image size. if false, the image(s) will not be cropped

  • scale (boolean or 2-tuple) – if true, nothing will happen to intensities of image(s) and overlay(s) if false, dynamic range will be maximized when visualizing overlays if 2-tuple, the image will be dynamically scaled between these quantiles

  • reverse (boolean) – if true, the order in which the slices are plotted will be reversed. This is useful if you want to plot from the front of the brain first to the back of the brain, or vice-versa

  • title (string) – add a title to the plot

  • filename (string) – if given, the resulting image will be saved to this file

  • dpi (python:integer) – determines resolution of image if saved to file. Higher values result in higher resolution images, but at a cost of having a larger file size

  • resample (bool) – if true, resample image if spacing is very unbalanced.

Example

>>> import ants
>>> import numpy as np
>>> img = ants.image_read(ants.get_data('r16'))
>>> segs = img.kmeans_segmentation(k=3)['segmentation']
>>> ants.plot(img, segs*(segs==1), crop=True)
>>> ants.plot(img, segs*(segs==1), crop=False)
>>> mni = ants.image_read(ants.get_data('mni'))
>>> segs = mni.kmeans_segmentation(k=3)['segmentation']
>>> ants.plot(mni, segs*(segs==1), crop=False)
plot_hist(threshold=0.0, fit_line=False, normfreq=True, title=None, grid=True, xlabel=None, ylabel=None, facecolor='green', alpha=0.75)

Plot a histogram from an ANTsImage

Parameters:

image (ANTsImage) – image from which histogram will be created
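
Example

A minimal usage sketch (not part of the original docstring), following the free-function call style used by the other plotting examples:

>>> import ants
>>> img = ants.image_read(ants.get_data('r16'))
>>> ants.plot_hist(img)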

plot_ortho(overlay=None, reorient=True, blend=False, xyz=None, xyz_lines=True, xyz_color='red', xyz_alpha=0.6, xyz_linewidth=2, xyz_pad=5, orient_labels=True, alpha=1, cmap='Greys_r', overlay_cmap='jet', overlay_alpha=0.9, cbar=False, cbar_length=0.8, cbar_dx=0.0, cbar_vertical=True, black_bg=True, bg_thresh_quant=0.01, bg_val_quant=0.99, crop=False, scale=False, domain_image_map=None, title=None, titlefontsize=24, title_dx=0, title_dy=0, text=None, textfontsize=24, textfontcolor='white', text_dx=0, text_dy=0, filename=None, dpi=500, figsize=1.0, flat=False, transparent=True, resample=False, allow_xyz_change=True)

Plot an orthographic view of a 3D image

Use mask_image and/or threshold_image to preprocess images to be overlaid and display the overlays in a given range. See the wiki examples.

ANTsR function: N/A

Parameters:
  • image (ANTsImage) – image to plot

  • overlay (ANTsImage) – image to overlay on base image

  • xyz (list or tuple of 3 integers) – selects index location on which to center display. If given, solid lines will be drawn to converge at this coordinate. This is useful for pinpointing a specific location in the image.

  • flat (boolean) – if true, the ortho image will be plotted in one row; if false, the ortho image will be a 2x2 grid with the bottom left corner blank

  • cmap (string) – colormap to use for base image. See matplotlib.

  • overlay_cmap (string) – colormap to use for overlay images, if applicable. See matplotlib.

  • overlay_alpha (float) – level of transparency for any overlays. Smaller value means the overlay is more transparent. See matplotlib.

  • cbar (boolean) – if true, a colorbar will be added to the plot

  • cbar_length (float) – length of the colorbar relative to the image

  • cbar_dx (float) – horizontal shift of the colorbar relative to the image

  • cbar_vertical (boolean) – if true, the colorbar will be vertical; if false, it will be horizontal underneath the image

  • axis (integer) – which axis to plot along if image is 3D

  • black_bg (boolean) – if True, the background of the image(s) will be black. If False, the background of the image(s) will be determined by the values bg_thresh_quant and bg_val_quant.

  • bg_thresh_quant (float) –

    if black_bg is False, the background will be determined by thresholding the image at the bg_thresh quantile value and setting the background intensity to the bg_val quantile value. This value should be in [0, 1] - somewhere around 0.01 is recommended.

    • equal to 1 will threshold the entire image

    • equal to 0 will threshold none of the image

  • bg_val_quant (float) –

    if black_bg is False, the background will be determined by thresholding the image at the bg_thresh quantile value and setting the background intensity to the bg_val quantile value. This value should be in [0, 1]

    • equal to 1 is pure white

    • equal to 0 is pure black

    • somewhere in between is gray

  • domain_image_map (ANTsImage) – this input ANTsImage or list of ANTsImage types contains a reference image domain_image and optional reference mapping named domainMap. If supplied, the image(s) to be plotted will be mapped to the domain image space before plotting - useful for non-standard image orientations.

  • crop (boolean) – if true, the image(s) will be cropped to their bounding boxes, resulting in a potentially smaller image size. if false, the image(s) will not be cropped

  • scale (boolean or 2-tuple) – if true, nothing will happen to intensities of image(s) and overlay(s) if false, dynamic range will be maximized when visualizing overlays if 2-tuple, the image will be dynamically scaled between these quantiles

  • title (string) – add a title to the plot

  • filename (string) – if given, the resulting image will be saved to this file

  • dpi (integer) – determines resolution of image if saved to file. Higher values result in higher resolution images, but at a cost of having a larger file size

  • resample (boolean) – resample image in case of unbalanced spacing

  • allow_xyz_change (boolean) – will attempt to adjust xyz after padding

Example

>>> import ants
>>> mni = ants.image_read(ants.get_data('mni'))
>>> ants.plot_ortho(mni, xyz=(100,100,100))
>>> mni2 = mni.threshold_image(7000, mni.max())
>>> ants.plot_ortho(mni, overlay=mni2)
>>> ants.plot_ortho(mni, overlay=mni2, flat=True)
>>> ants.plot_ortho(mni, overlay=mni2, xyz=(110,110,110), xyz_lines=False,
                    text='Lines Turned Off', textfontsize=22)
>>> ants.plot_ortho(mni, mni2, xyz=(120,100,100),
                    text='Example Ortho Text', textfontsize=26,
                    title='Example Ortho Title', titlefontsize=26)

quantile(q, nonzero=True)

Get the quantile values from an ANTsImage

Examples

>>> img = ants.image_read(ants.get_data('r16'))
>>> ants.quantile(img, 0.5)
>>> ants.quantile(img, (0.5, 0.75))
range(axis=None)[source]

Return range tuple along specified axis

reflect_image(axis=None, tx=None, metric='mattes')

Reflect an image along an axis

ANTsR function: reflectImage

Parameters:
  • image (ANTsImage) – image to reflect

  • axis (python:integer (optional)) – which dimension to reflect across, numbered from 0 to imageDimension-1

  • tx (string (optional)) – transformation type to estimate after reflection

  • metric (string) – similarity metric for image registration. see antsRegistration.

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16'), 'float' )
>>> axis = 2
>>> asym = ants.reflect_image(fi, axis, 'Affine')['warpedmovout']
>>> asym = asym - fi
reorient_image2(orientation='RAS')

Reorient an image.

Example

>>> import ants
>>> mni = ants.image_read(ants.get_data('mni'))
>>> mni2 = mni.reorient_image2()
resample_image(resample_params, use_voxels=False, interp_type=1)

Resample image by spacing or number of voxels with various interpolators. Works with multi-channel images.

ANTsR function: resampleImage

Parameters:
  • image (ANTsImage) – input image

  • resample_params (tuple/list) – vector of size dimension with numeric values

  • use_voxels (boolean) – True means interpret resample params as voxel counts

  • interp_type (python:integer) – one of 0 (linear), 1 (nearest neighbor), 2 (gaussian), 3 (windowed sinc), 4 (bspline)

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read( ants.get_ants_data("r16"))
>>> finn = ants.resample_image(fi,(50,60),True,0)
>>> filin = ants.resample_image(fi,(1.5,1.5),False,1)
>>> img = ants.image_read( ants.get_ants_data("r16"))
>>> img = ants.merge_channels([img, img])
>>> outimg = ants.resample_image(img, (128,128), True)
resample_image_to_target(target, interp_type='linear', imagetype=0, verbose=False, **kwargs)

Resample image by using another image as target reference. This function uses ants.apply_transform with an identity matrix to achieve proper resampling.

ANTsR function: resampleImageToTarget

Parameters:
  • image (ANTsImage) – image to resample

  • target (ANTsImage) – image of reference, the output will be in this space and will have the same pixel type.

  • interp_type (string) –

    Choice of interpolator. Supports partial matching.

    linear
    nearestNeighbor
    multiLabel (for label images, but genericLabel is preferred)
    gaussian
    bSpline
    cosineWindowedSinc
    welchWindowedSinc
    hammingWindowedSinc
    lanczosWindowedSinc
    genericLabel (use this for label images)

  • imagetype (python:integer) – choose 0/1/2/3 mapping to scalar/vector/tensor/time-series

  • verbose (boolean) – print command and run verbose application of transform.

  • kwargs (keyword arguments) – additional arguments passed to antsApplyTransforms C code

Return type:

ANTsImage

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16'))
>>> fi2mm = ants.resample_image(fi, (2,2), use_voxels=0, interp_type='linear')
>>> resampled = ants.resample_image_to_target(fi2mm, fi, verbose=True)
rgb_to_vector()

Convert an RGB ANTsImage to a Vector ANTsImage

Parameters:

image (ANTsImage) – RGB image to be converted

Return type:

ANTsImage

Example

>>> import ants
>>> mni = ants.image_read(ants.get_data('mni'))
>>> mni_rgb = ants.scalar_to_rgb(mni)
>>> mni_vector = mni.rgb_to_vector()
>>> mni_rgb2 = mni.vector_to_rgb()
set_direction(new_direction)[source]

Set image direction

Parameters:

new_direction (numpy.ndarray or tuple or list) – updated direction for the image. should have one value for each dimension

Return type:

None

set_origin(new_origin)[source]

Set image origin

Parameters:

new_origin (tuple or list) – updated origin for the image. should have one value for each dimension

Return type:

None

set_spacing(new_spacing)[source]

Set image spacing

Parameters:

new_spacing (tuple or list) – updated spacing for the image. should have one value for each dimension

Return type:

None
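
Example

A minimal usage sketch (not part of the original docstrings) of the header setters; the values are illustrative and must have one entry per image dimension:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img.set_spacing((2.0, 2.0))
>>> img.set_origin((0.0, 0.0))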

slice_image(axis, idx, collapse_strategy=0)

Slice an image.

Parameters:
  • axis (python:integer) – Which axis.

  • idx (python:integer) – Which slice number.

  • collapse_strategy (python:integer) – Collapse strategy for sub-matrix: 0, 1, or 2. 0: collapse to sub-matrix if positive-definite. Otherwise throw an exception. Default. 1: Collapse to identity. 2: Collapse to sub-matrix if positive definite. Otherwise collapse to identity.

Example

>>> import ants
>>> mni = ants.image_read(ants.get_data('mni'))
>>> mni2 = ants.slice_image(mni, axis=1, idx=100)
smooth_image(sigma, sigma_in_physical_coordinates=True, FWHM=False, max_kernel_width=32)

Smooth an image

ANTsR function: smoothImage

Parameters:
  • image – Image to smooth

  • sigma – Smoothing factor. Can be scalar, in which case the same sigma is applied to each dimension, or a vector of length dim(inimage) to specify a unique smoothness for each dimension.

  • sigma_in_physical_coordinates (boolean) – If true, the smoothing factor is in millimeters; if false, it is in pixels.

  • FWHM (boolean) – If true, sigma is interpreted as the full-width-half-max (FWHM) of the filter, not the sigma of a Gaussian kernel.

  • max_kernel_width (scalar) – Maximum kernel width

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16'))
>>> simage = ants.smooth_image(image, (1.2,1.5))
property spacing

Get image spacing

Return type:

tuple

split_channels()

Split channels of a multi-channel ANTsImage into a collection of scalar ANTsImage types

Parameters:

image (ANTsImage) – multi-channel image to split

Return type:

list of ANTsImage types

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> image2 = ants.image_read(ants.get_ants_data('r16'), 'float')
>>> imagemerge = ants.merge_channels([image,image2])
>>> imagemerge.components == 2
>>> images_unmerged = ants.split_channels(imagemerge)
>>> len(images_unmerged) == 2
>>> images_unmerged[0].components == 1
std(axis=None)[source]

Return std along specified axis

sum(axis=None, keepdims=False)[source]

Return sum along specified axis

symmetrize_image()

Use registration and reflection to make an image symmetric

ANTsR function: N/A

Parameters:

image (ANTsImage) – image to make symmetric

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') , 'float')
>>> simage = ants.symmetrize_image(image)
threshold_image(low_thresh=None, high_thresh=None, inval=1, outval=0, binary=True)

Converts a scalar image into a binary image by thresholding operations

ANTsR function: thresholdImage

Parameters:
  • image (ANTsImage) – Input image to operate on

  • low_thresh (scalar (optional)) – Lower edge of threshold window

  • high_thresh (scalar (optional)) – Higher edge of threshold window

  • inval (scalar) – Output value for image voxels in between lothresh and hithresh

  • outval (scalar) – Output value for image voxels lower than lothresh or higher than hithresh

  • binary (boolean) – if true, returns binary thresholded image; if false, returns binary thresholded image multiplied by original image

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read( ants.get_ants_data('r16') )
>>> timage = ants.threshold_image(image, 0.5, 1e15)
timeseries_to_matrix(mask=None)

Convert a timeseries image into a matrix.

ANTsR function: timeseries2matrix

Parameters:
  • image – image whose slices we convert to a matrix. E.g. a 3D image of size x by y by z will convert to a z by x*y sized matrix

  • mask (ANTsImage (optional)) – image containing binary mask. voxels in the mask are placed in the matrix

Returns:

array with a row for each image; shape = (N_IMAGES, N_VOXELS)

Return type:

ndarray

Example

>>> import ants
>>> img = ants.make_image( (10,10,10,5 ) )
>>> mat = ants.timeseries_to_matrix( img )
to_file(filename)[source]

Write the ANTsImage to file

Parameters:

filename (string) – filepath to which the image will be written

to_filename(filename)

Write the ANTsImage to file

Parameters:

filename (string) – filepath to which the image will be written
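
Example

A minimal usage sketch (not part of the original docstring); the temporary output path is illustrative:

>>> import ants
>>> import os, tempfile
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img.to_filename(os.path.join(tempfile.mkdtemp(), 'r16.nii.gz'))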

unique(sort=False)[source]

Return unique set of values in image
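
Example

A minimal usage sketch (not part of the original docstring); a binary mask is used so the expected values are obvious:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> mask = ants.get_mask(img)
>>> mask.unique()  # expected to contain only 0 and 1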

vector_to_rgb()

Convert a Vector ANTsImage to an RGB ANTsImage

Parameters:

image (ANTsImage) – vector image to be converted

Return type:

ANTsImage

Example

>>> import ants
>>> img = ants.image_read(ants.get_data('r16'), pixeltype='unsigned char')
>>> img_rgb = ants.scalar_to_rgb(img.clone())
>>> img_vec = img_rgb.rgb_to_vector()
>>> img_rgb2 = img_vec.vector_to_rgb()
view(single_components=False)[source]

Get a numpy array providing direct, shared access to the image data. IMPORTANT: If you alter the view, then the underlying image data will also be altered.

Parameters:

single_components (boolean (default is False)) – if True, keep the extra component dimension in returned array even if image only has one component (i.e. self.has_components == False)

Return type:

ndarray
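
Example

A minimal usage sketch (not part of the original docstring) illustrating that edits to the view propagate to the image:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> v = img.view()
>>> v[:10, :10] = 0  # zeroes the corresponding voxels of img itself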

weingarten_image_curvature(sigma=1.0, opt='mean')

Uses the Weingarten map to estimate image mean or Gaussian curvature

ANTsR function: weingartenImageCurvature

Parameters:
  • image (ANTsImage) – image from which curvature is calculated

  • sigma (scalar) – smoothing parameter

  • opt (string) – mean by default, otherwise gaussian or characterize

Return type:

ANTsImage

Example

>>> import ants
>>> image = ants.image_read(ants.get_ants_data('mni')).resample_image((3,3,3))
>>> imagecurv = ants.weingarten_image_curvature(image)

ANTsImage IO

ants.image_clone(image, pixeltype=None)[source]

Clone an ANTsImage

ANTsR function: antsImageClone

Parameters:
  • image (ANTsImage) – image to clone

  • pixeltype (string (optional)) – new pixel type for the cloned image

Return type:

ANTsImage
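
Example

An illustrative sketch (not from the upstream docstring), cloning the sample image while casting it to unsigned char:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img_uint8 = ants.image_clone(img, pixeltype='unsigned char')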

ants.image_header_info(filename)[source]

Read file info from image header

ANTsR function: antsImageHeaderInfo

Parameters:

filename (string) – name of image file from which info will be read

Return type:

dict
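
Example

An illustrative sketch (not from the upstream docstring); the sample image is first written to a temporary file so its header can be read back:

>>> import os, tempfile
>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> fname = os.path.join(tempfile.mkdtemp(), 'r16.nii.gz')
>>> ants.image_write(img, fname)
>>> info = ants.image_header_info(fname)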

ants.image_read(filename, dimension=None, pixeltype='float', reorient=False)[source]

Read an ANTsImage from file

ANTsR function: antsImageRead

Parameters:
  • filename (string) – Name of the file to read the image from.

  • dimension (int) – Number of dimensions of the image read. This need not be the same as the dimensions of the image in the file. Allowed values: 2, 3, 4. If not provided, the dimension is obtained from the image file

  • pixeltype (string) – C++ datatype to be used to represent the pixels read. This datatype need not be the same as the datatype used in the file. Options: unsigned char, unsigned int, float, double

  • reorient (boolean | string) –

    if True, the image will be reoriented to RPI if it is 3D;
    if False, no reorientation is performed;
    if a string, it should be the 3-letter orientation code to which a 3D input image will be reoriented.

    If the image is 2D, this argument is ignored.

Return type:

ANTsImage
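
Example

An illustrative sketch (not from the upstream docstring), using the bundled 2D 'r16' and 3D 'mni' sample images:

>>> import ants
>>> img2d = ants.image_read(ants.get_ants_data('r16'))
>>> img3d = ants.image_read(ants.get_ants_data('mni'), pixeltype='float')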

ants.image_write(image, filename, ri=False)[source]

Write an ANTsImage to file

ANTsR function: antsImageWrite

Parameters:
  • image (ANTsImage) – image to save to file

  • filename (string) – name of file to which image will be saved

  • ri (boolean) –

    if True, return image. This allows for using this function in a pipeline:
    >>> img2 = img.smooth_image(2.).image_write(file1, ri=True).threshold_image(0,20).image_write(file2, ri=True)
    

    if False, do not return image

ants.make_image(imagesize, voxval=0, spacing=None, origin=None, direction=None, has_components=False, pixeltype='float')[source]

Make an image with given size and voxel value or given a mask and vector

ANTsR function: makeImage

Parameters:
  • imagesize (tuple/ANTsImage) – input image size, or a mask image

  • voxval (scalar or vector) – constant voxel value, or a vector with one value per non-zero mask voxel

  • spacing (tuple/list) – image spatial resolution

  • origin (tuple/list) – image spatial origin

  • direction (list/ndarray) – direction matrix to convert from index to physical space

  • has_components (boolean) – whether the image has multiple components per pixel

  • pixeltype (string) – data type of image values

Return type:

ANTsImage
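
Example

An illustrative sketch (not from the upstream docstring): a constant 2D image, plus the mask-and-vector usage the description mentions; the numpy vector is made up for the example:

>>> import numpy as np
>>> import ants
>>> img = ants.make_image((64, 64), voxval=1.0, spacing=(2.0, 2.0))
>>> mask = ants.get_mask(ants.image_read(ants.get_ants_data('r16')))
>>> vec = np.arange(int(mask.sum()))          # one value per non-zero mask voxel
>>> img_from_mask = ants.make_image(mask, voxval=vec)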

ants.from_numpy(data, origin=None, spacing=None, direction=None, has_components=False, is_rgb=False)[source]

Create an ANTsImage object from a numpy array

ANTsR function: as.antsImage

Parameters:
  • data (ndarray) – image data array

  • origin (tuple/list) – image origin

  • spacing (tuple/list) – image spacing

  • direction (list/ndarray) – image direction

  • has_components (boolean) – whether the image has components

Returns:

image with given data and any given information

Return type:

ANTsImage
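
Example

An illustrative sketch (not from the upstream docstring), wrapping a random numpy array with explicit origin and spacing:

>>> import numpy as np
>>> import ants
>>> arr = np.random.randn(64, 64).astype('float32')
>>> img = ants.from_numpy(arr, origin=(0.0, 0.0), spacing=(2.0, 2.0))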

ants.matrix_to_images(data_matrix, mask)[source]

Unmasks rows of a matrix and writes as images

ANTsR function: matrixToImages

Parameters:
  • data_matrix (numpy.ndarray) – each row corresponds to an image array should have number of columns equal to non-zero voxels in the mask

  • mask (ANTsImage) – image containing a binary mask. Rows of the matrix are unmasked and written as images. The mask defines the output image space

Return type:

list of ANTsImage types

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> msk = ants.get_mask( img )
>>> img2 = ants.image_read(ants.get_ants_data('r16'))
>>> img3 = ants.image_read(ants.get_ants_data('r16'))
>>> mat = ants.image_list_to_matrix([img,img2,img3], msk )
>>> ilist = ants.matrix_to_images( mat, msk )
ants.images_from_matrix(data_matrix, mask)

Unmasks rows of a matrix and writes as images

ANTsR function: matrixToImages

Parameters:
  • data_matrix (numpy.ndarray) – each row corresponds to an image array should have number of columns equal to non-zero voxels in the mask

  • mask (ANTsImage) – image containing a binary mask. Rows of the matrix are unmasked and written as images. The mask defines the output image space

Return type:

list of ANTsImage types

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> msk = ants.get_mask( img )
>>> img2 = ants.image_read(ants.get_ants_data('r16'))
>>> img3 = ants.image_read(ants.get_ants_data('r16'))
>>> mat = ants.image_list_to_matrix([img,img2,img3], msk )
>>> ilist = ants.matrix_to_images( mat, msk )
ants.image_list_to_matrix(image_list, mask=None, sigma=None, epsilon=0.5)

Read images into rows of a matrix, given a mask - much faster for large datasets as it is based on C++ implementations.

ANTsR function: imagesToMatrix

Parameters:
  • image_list (list of ANTsImage types) – images to convert to ndarray

  • mask (ANTsImage (optional)) – Mask image, voxels in the mask (>= epsilon) are placed in the matrix. If None, the first image in image_list is thresholded at its mean value to create a mask.

  • sigma (scalar (optional)) – smoothing factor

  • epsilon (scalar) – threshold for mask, values >= epsilon are included in the mask.

Returns:

array with a row for each image shape = (N_IMAGES, N_VOXELS)

Return type:

ndarray

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = ants.image_read(ants.get_ants_data('r16'))
>>> img3 = ants.image_read(ants.get_ants_data('r16'))
>>> mat = ants.image_list_to_matrix([img,img2,img3])
ants.images_to_matrix(image_list, mask=None, sigma=None, epsilon=0.5)[source]

Read images into rows of a matrix, given a mask - much faster for large datasets as it is based on C++ implementations.

ANTsR function: imagesToMatrix

Parameters:
  • image_list (list of ANTsImage types) – images to convert to ndarray

  • mask (ANTsImage (optional)) – Mask image, voxels in the mask (>= epsilon) are placed in the matrix. If None, the first image in image_list is thresholded at its mean value to create a mask.

  • sigma (scalar (optional)) – smoothing factor

  • epsilon (scalar) – threshold for mask, values >= epsilon are included in the mask.

Returns:

array with a row for each image shape = (N_IMAGES, N_VOXELS)

Return type:

ndarray

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = ants.image_read(ants.get_ants_data('r16'))
>>> img3 = ants.image_read(ants.get_ants_data('r16'))
>>> mat = ants.image_list_to_matrix([img,img2,img3])
ants.matrix_from_images(image_list, mask=None, sigma=None, epsilon=0.5)

Read images into rows of a matrix, given a mask - much faster for large datasets as it is based on C++ implementations.

ANTsR function: imagesToMatrix

Parameters:
  • image_list (list of ANTsImage types) – images to convert to ndarray

  • mask (ANTsImage (optional)) – Mask image, voxels in the mask (>= epsilon) are placed in the matrix. If None, the first image in image_list is thresholded at its mean value to create a mask.

  • sigma (scalar (optional)) – smoothing factor

  • epsilon (scalar) – threshold for mask, values >= epsilon are included in the mask.

Returns:

array with a row for each image shape = (N_IMAGES, N_VOXELS)

Return type:

ndarray

Example

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> img2 = ants.image_read(ants.get_ants_data('r16'))
>>> img3 = ants.image_read(ants.get_ants_data('r16'))
>>> mat = ants.image_list_to_matrix([img,img2,img3])

Transforms

ANTsTransform

class ants.core.ants_transform.ANTsTransform(precision='float', dimension=3, transform_type='AffineTransform', pointer=None)[source]
apply(data, data_type='point', reference=None, **kwargs)[source]

Apply transform to data

apply_to_image(image, reference=None, interpolation='linear')[source]

Apply transform to an image

Parameters:
  • image (ANTsImage) – image to which the transform will be applied

  • reference (ANTsImage) – target space for transforming image

  • interpolation (string) –

    type of interpolation to use. Options are:

    linear nearestneighbor multilabel gaussian bspline cosinewindowedsinc welchwindowedsinc hammingwindowedsinc lanczoswindowedsinc genericlabel

Returns:

transformed image

Return type:

ANTsImage
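
Example

An illustrative sketch (not from the upstream docstring), applying a 2D affine transform to the sample image with itself as the reference space:

>>> import ants
>>> img = ants.image_read(ants.get_ants_data('r16'))
>>> tx = ants.new_ants_transform(dimension=2)
>>> tx.set_parameters((0.9, 0, 0, 1.1, 10, 11))
>>> warped = tx.apply_to_image(img, reference=img)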

apply_to_point(point)[source]

Apply transform to a point

Parameters:

point (list/tuple) – point to which the transform will be applied

Returns:

transformed point

Return type:

list

Example

>>> import ants
>>> tx = ants.new_ants_transform()
>>> params = tx.parameters
>>> tx.set_parameters(params*2)
>>> pt2 = tx.apply_to_point((1,2,3)) # should be (2,4,6)
apply_to_vector(vector)[source]

Apply transform to a vector

Parameters:

vector (list/tuple) – vector to which the transform will be applied

Returns:

transformed vector

Return type:

list
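
Example

An illustrative sketch (not from the upstream docstring), mirroring the apply_to_point example below; vectors are affected by the linear part of the transform only:

>>> import ants
>>> tx = ants.new_ants_transform()
>>> tx.set_parameters(tx.parameters * 2)
>>> vec2 = tx.apply_to_vector((1, 2, 3))   # expected to scale to (2, 4, 6)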

property fixed_parameters

Get fixed parameters of transform

invert()[source]

Invert the transform

property parameters

Get parameters of transform

set_fixed_parameters(parameters)[source]

Set fixed parameters of transform

set_parameters(parameters)[source]

Set parameters of transform

ANTsTransform IO

ants.create_ants_transform(transform_type='AffineTransform', precision='float', dimension=3, matrix=None, offset=None, center=None, translation=None, parameters=None, fixed_parameters=None, displacement_field=None, supported_types=False)[source]

Create and initialize an ANTsTransform

ANTsR function: createAntsrTransform

Parameters:
  • transform_type (string) – type of transform(s)

  • precision (string) – numerical precision

  • dimension (integer) – spatial dimension of transform

  • matrix (ndarray) – matrix for linear transforms

  • offset (tuple/list) – offset for linear transforms

  • center (tuple/list) – center for linear transforms

  • translation (tuple/list) – translation for linear transforms

  • parameters (ndarray/list) – array of parameters

  • fixed_parameters (ndarray/list) – array of fixed parameters

  • displacement_field (ANTsImage) – multichannel ANTsImage for non-linear transform

  • supported_types (boolean) – if True, return the list of supported transform types instead of creating a transform

Return type:

ANTsTransform or list of ANTsTransform types

Example

>>> import ants
>>> translation = (3,4,5)
>>> tx = ants.create_ants_transform( transform_type='Euler3DTransform', translation=translation )
ants.new_ants_transform(precision='float', dimension=3, transform_type='AffineTransform', parameters=None, fixed_parameters=None)[source]

Create a new ANTsTransform

ANTsR function: None

This is a simplified method for creating an ANTsTransform, mostly used internally. See create_ants_transform for more options.

Example

>>> import ants
>>> tx = ants.new_ants_transform()
ants.read_transform(filename, precision='float')[source]

Read a transform from file

ANTsR function: readAntsrTransform

Parameters:
  • filename (string) – filename of transform

  • precision (string) – numerical precision of transform

Return type:

ANTsTransform

Example

>>> import ants
>>> tx = ants.new_ants_transform(dimension=2)
>>> tx.set_parameters((0.9,0,0,1.1,10,11))
>>> ants.write_transform(tx, '~/desktop/tx.mat')
>>> tx2 = ants.read_transform('~/desktop/tx.mat')
ants.write_transform(transform, filename)[source]

Write ANTsTransform to file

ANTsR function: writeAntsrTransform

Parameters:
  • transform (ANTsTransform) – transform to save

  • filename (string) – filename of transform (file extension is “.mat” for affine transforms)

Return type:

N/A

Example

>>> import ants
>>> tx = ants.new_ants_transform(dimension=2)
>>> tx.set_parameters((0.9,0,0,1.1,10,11))
>>> ants.write_transform(tx, '~/desktop/tx.mat')
>>> tx2 = ants.read_transform('~/desktop/tx.mat')
ants.transform_from_displacement_field(field)[source]

Convert deformation field (multiChannel image) to ANTsTransform

ANTsR function: antsrTransformFromDisplacementField

Parameters:

field (ANTsImage) – deformation field as multi-channel ANTsImage

Return type:

ANTsTransform

Example

>>> import ants
>>> fi = ants.image_read(ants.get_ants_data('r16') )
>>> mi = ants.image_read(ants.get_ants_data('r64') )
>>> fi = ants.resample_image(fi,(60,60),1,0)
>>> mi = ants.resample_image(mi,(60,60),1,0) # speed up
>>> mytx = ants.registration(fixed=fi, moving=mi, type_of_transform = ('SyN') )
>>> vec = ants.image_read( mytx['fwdtransforms'][0] )
>>> atx = ants.transform_from_displacement_field( vec )

Metrics

ANTsMetric

class ants.core.ants_metric.ANTsImageToImageMetric(metric)[source]

ANTsImageToImageMetric class

set_fixed_image(image)[source]

Set Fixed ANTsImage for metric

set_fixed_mask(image)[source]

Set Fixed ANTsImage Mask for metric

set_moving_image(image)[source]

Set Moving ANTsImage for metric

set_moving_mask(image)[source]

Set Moving ANTsImage Mask for metric

ANTsMetric IO

ants.new_ants_metric(dimension=3, precision='float', metric_type='MeanSquares')[source]
ants.create_ants_metric(fixed, moving, metric_type='MeanSquares', fixed_mask=None, moving_mask=None, sampling_strategy='regular', sampling_percentage=1)[source]
Parameters:

metric_type (string) –

which metric to use options:

MeanSquares MattesMutualInformation ANTSNeighborhoodCorrelation Correlation Demons JointHistogramMutualInformation

Example

>>> import ants
>>> fixed = ants.image_read(ants.get_ants_data('r16'))
>>> moving = ants.image_read(ants.get_ants_data('r64'))
>>> metric_type = 'Correlation'
>>> metric = ants.create_ants_metric(fixed, moving, metric_type)
ants.supported_metrics()[source]
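
Example

An illustrative sketch (not from the upstream docstring), listing the metric types that can be passed to create_ants_metric:

>>> import ants
>>> metric_names = ants.supported_metrics()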