Documentation

General information about iCLOTS usage
iCLOTS software generates quantitative metrics from microscopy data obtained during use of a wide range of microfluidic and static assays.
Computational methods adapt to microscopy images and videomicroscopy of cells and cell solutions in a variety of common assay formats.
We demonstrate the use of these algorithms as applied to blood cells. Experimental assays performed with blood cells are subject to unique requirements and constraints, including samples composed of heterogeneous cell types as well as frequent integration of fluid flow to better recapitulate physiologic conditions, and as such provide ideal test cases to demonstrate the adaptability of iCLOTS.
iCLOTS has been extensively tested on all blood cell types (including red blood cells, white blood cells, platelets, and cell lines) and on various blood cell suspensions.
The presented tools can be applied in other fields that share the objective of tracking single-cell or single-body features or conditions as a function of time.
iCLOTS can accommodate data obtained using:
Standard microscopy slide or dish assays
Flow-based systems including:
Custom-made microfluidics
Commercially available microfluidics
Traditional flow chamber devices
iCLOTS has been designed with versatility and adaptability in mind: it has been lightly tested with animal cells and/or other, non-hematological types of cells as well.
iCLOTS is post-processing image analysis software
Users can continue to acquire imaging data using the methods they are accustomed to.
iCLOTS can be used with either previously collected data or new assays planned with the software's capabilities in mind.
Image processing capabilities are separated into four main categories:
Adhesion applications, useful for single-cell resolution measures of biological functionality.
Single cell tracking applications, useful for single-cell velocity measurements, typically as they flow through a microfluidic device. Video frames may contain multiple cells at once. We share the necessary methods and tools for a specialized biophysical flow cytometer assay capable of quantifying cellular mechanical properties via relative deformability measurements.
Velocity profile applications, useful for investigating rheological properties of suspensions under flow. Minimum, mean, and maximum velocity values for each video frame are also provided, suitable for monitoring changes in cell suspension speed.
Multiscale microfluidic accumulation applications, useful for insight into pathologic processes such as occlusion and obstruction in thrombosis.
A suite of file pre-processing applications assists users with preparing their data for analysis. Type of file required is specific to each application and is provided in the documentation below.
Each application produces detailed output files.
iCLOTS detects "events" in the imaging data provided by the user. Events typically represent individual cells or patterns of cells. Each event is labeled with a numerical index on both the original imaging files and the files as transformed by the image processing algorithms.
Numerical output metrics are calculated for each event and are returned within an Excel sheet.
For example, each cell "event" may be described by metrics like cell area or total fluorescence intensity of the cell.
Metrics are given for every feature or cell (single-cell resolution) and can be traced back to the original image using the labeled index.
iCLOTS may produce a large amount of data. In an effort to help researchers quickly make sense of their imaging dataset, iCLOTS automatically graphs results from the image processing analysis in common formats such as histograms or scatter plots.
A specialized set of graphs called a "pairplot" is included in the file outputs. A pairplot plots the pairwise relationships in a dataset by creating a grid of graphs such that each metric output by iCLOTS will be shared in the y-axis across a single row and in the x-axis across a single column. Graphs on the leading diagonal are histograms describing a single metric.
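For reference, a similar grid of graphs can be reproduced from an iCLOTS Excel export using the Python library seaborn. The sketch below is illustrative only; the file, sheet, and column names are hypothetical, and iCLOTS generates these plots automatically.
```python
# Minimal sketch only: reproduce a pairplot-style grid from an iCLOTS Excel export.
# File name, sheet name, and column names below are hypothetical examples.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_excel("Results.xlsx", sheet_name="All files")  # combined, all-cells sheet

# Off-diagonal panels: pairwise scatter plots of every metric against every other metric.
# Diagonal panels: histograms of a single metric.
# hue="File" colors points by the image file each cell came from.
sns.pairplot(df[["Area (um2)", "Circularity (a.u.)", "File"]], hue="File", diag_kind="hist")
plt.savefig("pairplot.png")
```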
The developers would like users to keep in mind that computational analysis is never perfect - some spurious features are to be expected. Users might find these data points don't significantly affect their conclusions or may find that manually removing obvious outliers is less time consuming than performing the analysis by hand.
Should the user need further interpretation of their results, the produced Excel files can be used in the machine learning-based clustering application.
Machine learning is a subset of artificial intelligence.
Machine learning clustering algorithms are an unsupervised approach designed to detect and mathematically characterize natural groupings and patterns within complex datasets, e.g. healthy/clinical sample dichotomies or subpopulations from a single sample.
iCLOTS implements k-means clustering algorithms, understood to be a strong general-purpose approach to clustering, where each data point is assigned a cluster label.
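A minimal sketch of this style of k-means clustering applied to an iCLOTS Excel export, using scikit-learn, is shown below; the file and column names are hypothetical, and the iCLOTS clustering application handles these steps through its own interface.
```python
# Minimal sketch only: k-means clustering of per-cell metrics exported by iCLOTS.
# File name and column names are hypothetical examples.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_excel("Results.xlsx", sheet_name="All files")
features = df[["Area (um2)", "Circularity (a.u.)"]].dropna()

# Standardize so no single metric dominates the distance calculation.
scaled = StandardScaler().fit_transform(features)

# Assign each data point (cell) a cluster label; k=2 could separate, for example,
# a healthy/clinical dichotomy or two subpopulations within one sample.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
features["Cluster label"] = labels
print(features.groupby("Cluster label").mean())
```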
All iCLOTS applications follow a common, easy-to-use interactive format.
Users follow a series of software menus to open a specific analysis window.
All analysis windows are designed with the inputs on the left, the image processing steps as applied in the center, and the outputs on the right.
The user uploads one or several microscopy images, time course microscopy series, or videomicroscopy files as inputs. These files automatically display on the screen and can be scrolled through using the scale beneath the files.
Users may click on the scale or can use <Left> and <Right> keyboard keys to scroll through images or video frames.
Users are guided through a series of steps to describe their data.
This could include choosing a region of interest (ROI) or indicating which immunofluorescence staining color channels are present.
Users must then adjust parameters to fit the iCLOTS image processing algorithms to their specific set of data.
Parameters are numerical factors that define how image processing algorithms should be applied.
This could be a number such as minimum or maximum cell area.
Every effort has been made to ensure that parameters are intuitive. If the role of a parameter is unclear, please access the on-screen help documentation using the "Help" button in the lower left-hand corner.
Note that in iCLOTS, “a.u.” represents arbitrary units, typically used to describe pixel intensity values.
Effects of changing parameters are shown in real time.
iCLOTS currently does not have a zoom function, but this is planned for a later release. In the meantime, if your data is relatively low-magnification, we suggest cropping a small region of interest using the video processing tools and testing parameters on that image, then applying the same parameters to the larger image.
The "Run analysis" button on the top right of the analysis screen initiates the finalized analysis using the parameters provided.
Typically an analysis takes seconds-to-minutes - this depends heavily on file size and number.
If analysis, particularly of video files, is taking more than 3-5 minutes, consider reducing the resolution or length of files using the video editing suite.
Graphical results are automatically displayed when the analysis is complete.
Output files include:
Tabular data as an Excel file
In applications where several files are analyzed, individual sheets are named after individual files. These file names may be cropped to about 15 characters to prevent corrupting the output file. Please make sure individual files within a folder are named sufficiently differently.
Graphical results as .png images
The initial imaging dataset as transformed by the image processing algorithms and/or labeled with indices.
Videos are returned as individual, sequentially numbered frames.
Users should consider practical experimental design concerns before use.
Choosing cell concentration:
For all experiments involving quantification of single cell events, in our experimental and software testing we chose cell concentrations or hematocrits to ensure that we could operate within a quantifiable dynamic range of the microfluidic devices for both healthy or untreated controls and experimental samples. iCLOTS in its current iteration cannot distinguish between overlapping cell events. Typically we perform an initial experiment with a range of cell concentrations such that the most adhesive samples can adhere without overlap, then use this concentration for all future experiments.
Choosing brightfield illumination vs. fluorescence microscopy:
Brightfield microscopy does not rely on any type of cell labeling. We've found some stains can affect cell membrane properties, e.g. R18 appears to damage the RBC membrane. In experiments where simple count or simple movement is quantified, brightfield microscopy is typically sufficient.
Blood cells naturally have a heterogeneous membrane appearance, which can affect area or other morphology measurements. To obtain the highest signal-to-noise ratio (i.e. the most apparent difference between image background and cell signal) we recommend staining cells or cell solutions with a stain indicating the cell membrane and using fluorescence microscopy. The fluorescence microscopy adhesion assay quantifies a secondary stain indicating some biological activity. Future versions of iCLOTS will incorporate secondary "functional" quantification in additional applications.
Choosing constant perfusion vs. pressure-driven flow in microfluidic experiments:
iCLOTS has been shown to produce accurate, reliable analyses of both constant perfusion (syringe pump) and pressure-driven flow across a range of microfluidic, flow-based experiments. While pressure-driven flow is more physiologically relevant, users may find they are limited by equipment availability or small sample sizes, or the experimental setup may necessitate the greater simplicity or ease-of-use of constant perfusion systems. Users should carefully consider the importance of physiological relevance in their assays. If constant perfusion is used, consider designing microfluidic devices with large bypass channels to prevent significant changes in pressure from channel clogging.
Over the course of long microfluidic experiments, factors such as a buildup of adhesive factors on channel walls, cell suspension settling, or other variables may lead to artifacts within data. The iCLOTS team suggests plotting quantitative metrics with frame number as the x-variable to ensure results are reasonably consistent over time.
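A minimal sketch of this drift check, assuming an iCLOTS Excel export with hypothetical sheet and column names, is shown below.
```python
# Minimal sketch only: plot a per-frame mean of one metric against frame number
# to check that results stay consistent over a long experiment.
# File, sheet, and column names are hypothetical examples.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("Results.xlsx", sheet_name="Video 1")
per_frame = df.groupby("Frame")["Velocity (um/s)"].mean()

per_frame.plot(xlabel="Frame number", ylabel="Mean velocity (um/s)")
plt.savefig("drift_check.png")
```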
Users may always access the application-specific documentation available here using the "Help" button in the bottom left of the analysis window.
Downloading and opening iCLOTS in Mac OS
Installation guide
iCLOTS is written in Python 3.7 and packaged natively using the open-source library PyInstaller version 5.6.2. iCLOTS has been designed as a standalone software to reach the widest range of users possible and for potential use in clinical environments. As such, no supporting software or software dependencies are required. No additional resources are needed to run the program.
iCLOTS is installed simply by downloading the appropriate files. Some users received errors downloading the software from the website using Microsoft Edge browser, and as such, the development team suggests using Google Chrome to download the software from the website.
On Mac OS, users click the tar.gz distribution file to open, then click the .app file to start the software.
The iCLOTS .app file has been developed on Mac OS Monterey version 12.5.1, but has been tested on other Mac OS versions, including Catalina and Ventura.
The iCLOTS software is approximately 150 MB, so may take 1-10 minutes to download, depending on internet speed. iCLOTS will take an additional 1-5 minutes to open, particularly for first-time use. On Mac OS, when opening, the icon will appear and "bounce" in the dock, disappear, and reappear when the software has loaded.
The development team has taken the necessary steps to identify ourselves as legitimate developers to Mac and Windows OS. Upon opening for the first time, you may receive some messages:
On Mac OS, you may be alerted that this is a software downloaded from the internet. This is common to most open-source projects, including other bioimage analysis software like Ilastik and CellProfiler. You may also be alerted that we are a new team of developers, and asked if you trust the source of this software. The source of the software is attributed to Meredith Fay, manuscript first author and iCLOTS lead developer. Please see a sample alert window produced by opening the software for the first time below:
Downloading and opening iCLOTS in Windows OS
Windows operating system
iCLOTS is written in Python 3.7 and packaged natively using the open-source library PyInstaller version 5.6.2. iCLOTS has been designed as a standalone software to reach the widest range of users possible and for potential use in clinical environments. As such, no supporting software or software dependencies are required. No additional resources are needed to run the program.
iCLOTS is installed simply by downloading the appropriate files. Some users received errors downloading the software from the website using Microsoft Edge browser, and as such, the development team suggests using Google Chrome to download the software from the website.
The iCLOTS .exe file has been developed on Windows 11, but has been tested on other Windows versions, including Windows 10.
The iCLOTS software is approximately 150 MB, so may take 1-10 minutes to download, depending on internet speed. iCLOTS will take an additional 1-5 minutes to open, particularly for first-time use.
The development team has taken the necessary steps to identify ourselves as legitimate developers to Windows OS. Upon opening for the first time, you may receive some messages:
On Windows OS, you may be alerted that Windows has protected your PC. Over time, as the iCLOTS development team builds a positive reputation with Windows, this window will no longer open upon downloading. There is an option to click "More info" (image 1, below) and "Run anyway" (image 2, below) from the message window. The source of the software will be attributed to Meredith Fay.
In rare cases where the user does not have administrative privileges on their computer, there may not be an option to "Run anyway." In this case, users should right-click on the download, select "Properties," and select to unblock the application (image 3, below). If issues with download persist, please feel free to contact the development team.
During testing, all software users received and accepted these messages, with no negative effect to their computers.
Adhesion image processing application 1: brightfield microscopy
Application that analyzes static, brightfield images of cells (tested extensively on platelets, RBCs, and WBCs) adhered to some surface. May also be suitable for use with preliminary digital pathology approaches, e.g. with blood smears. This application does not separate cells by type, but you could use the post-processing machine learning clustering algorithm to group cells.
Input files:
This application is designed to analyze a single image or a folder of image files (.jpg, .png, and/or .tif)
The same input parameters are applied to each image.
Users are led to select an "invert" setting for analysis:
You can indicate you would like to look for dark cells on a light background or light cells on a dark background.
Parameters to interactively adjust:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Maximum diameter: The maximum diameter of a cell to be detected (pixels), must be set as an odd integer to work with the analysis algorithms.
Minimum intensity: Minimum summed intensity of a cell to be detected (a.u.).
Output metrics:
Resolution: single-cell
Metrics: area (pixels, µm²), circularity (a.u.)
Circularity ranges from 0 (straight line) to 1 (perfect circle).
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
Labeled imaging data:
Each original image with each detected cell labeled with an index. Each index corresponds to single-cell data within the optional numerical data exports.
An Excel file containing:
Area and circularity for each cell - one sheet/image.
Area and circularity for all cells - one sheet (this combined sheet is best for analyzing replicates)/all files.
Descriptive statistics (minimum, mean, maximum, standard deviation values for area and circularity) for individual files and combined files. Cell density (n/mm²) is also provided.
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Histogram graphs for area and circularity for each individual image.
Pairplot graph for each individual image, for all images combined where one color represents all pooled data, and for all images combined where each color represents a different image file.
Pairplot including area and circularity metrics.
Some tips from the iCLOTS team:
Computational and experimental methods:
The tracking methods search for particles represented by image regions with Gaussian-like distributions of pixel brightness.
Analysis methods cannot distinguish between overlapping cells.
If cells are significantly overlapping, repeat the experiment with a lower cell concentration.
Owing to the heterogeneous appearance of certain cell types (e.g. the classic biconcave red blood cell shape, or the textured appearance of activated white blood cells), brightfield analysis may be challenging.
Consider using a fluorescent membrane stain coupled with our fluorescence microscopy adhesion applications if this does not conflict with your experimental goals, especially for WBCs/neutrophils.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
Err on the high side of maximum diameter and low side of minimum intensity parameters unless data is particularly noisy or there's a large amount of debris.
If you're unsure what parameter values to select, run the analysis with an artificially high maximum diameter and low minimum intensity and compare indexed cells to the resultant metrics - for example, perhaps you see a cell typically has a diameter of "x" so you set maximum diameter slightly higher to exclude debris, and a cell typically has a pixel intensity of "y" so you set minimum intensity just below this to exclude noise.
The maximum diameter parameter can behave non-intuitively if set too high for the sample presented. If you cannot detect clear cells, try lowering this parameter.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
In use cases where several files are analyzed, individual sheets are named after individual files. These file names may be cropped to about 15 characters to prevent corrupting the output file. Please make sure individual files within a folder are named sufficiently differently.
Excel and pairplot data includes a sheet/graph with all images combined
Only use this when analyzing replicates of the same sample.
Learn more about the methods forming the basis of our brightfield microscopy adhesion application:
Crocker and Grier particle tracking, used to find individual cells:
Crocker JC, Grier DG. Methods of Digital Video Microscopy for Colloidal Studies. Journal of Colloid and Interface Science. 1996;179(1):298-310.
Python library Trackpy, used to implement these algorithms:
Documentation/tutorial: http://soft-matter.github.io/trackpy/v0.5.0/tutorial/walkthrough.html
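For readers who want to see how these Trackpy methods relate to the parameters above, a minimal sketch (not the iCLOTS source code) is shown below; the file name and parameter values are hypothetical.
```python
# Minimal sketch only: Trackpy feature finding with the parameters described above.
# "Maximum diameter" maps to the odd-integer diameter argument, "minimum intensity"
# to minmass, and the "invert" setting to inverting the image before detection.
import trackpy as tp
from skimage import io, util

image = io.imread("adhered_cells.tif")   # assumed 8-bit grayscale image of adhered cells
image = util.invert(image)               # dark cells on a light background -> bright spots

features = tp.locate(image, diameter=15, minmass=500)

# One row per detected cell; 'mass' is the summed intensity used for filtering.
print(features[["x", "y", "mass"]].head())
```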
Adhesion image processing application 2: fluorescence microscopy
Application that analyzes static, fluorescence microscopy images of cells (tested extensively on platelets, RBCs, and WBCs) adhered to some surface. This application does not separate cells by type, but you could use the post-processing machine learning clustering algorithm to group cells.
Input files:
This application is designed to analyze a single image or a folder of image files (.jpg, .png, and/or .tif)
The same input parameters are applied to each image.
Users are led to select color channels for analysis, including:
A membrane stain (red, green, blue, or grayscale/white)
This stain typically represents the area/morphology of a cell.
A secondary "functional" stain (red, green, or blue - cannot be the same color as the membrane stain)
Optional additional color channel that typically represents some activity or characteristic.
Parameters to interactively adjust:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Minimum area: The minimum area (pixels) of a region (ideally, a cell) to be quantified
Can be used to filter out obvious noise.
Maximum area: The maximum area (pixels) of a region to be quantified
Can be used to filter out obvious debris or cell clusters.
Membrane stain threshold: Integer between 0 (black) and 255 (white/brightest) to be used for the main channel threshold.
Any value below this threshold becomes background.
Any value greater than or equal to this threshold becomes signal to further quantify.
Secondary stain threshold: like the membrane stain threshold, but for the functional/characteristic stain.
Output metrics:
Resolution: single-cell
Metrics:
From membrane stain: area (pixels, µm²), circularity (a.u.), texture (a.u.)
Circularity ranges from 0 (straight line) to 1 (perfect circle).
Texture is the standard deviation of all pixel intensity values within one cell, a method for describing membrane heterogeneity.
From functional stain: binary positive/negative stain, total fluorescence intensity of functional stain per cell (a.u.).
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from.
Labeled imaging data:
Each original image with each detected cell labeled with an index. Each index corresponds to single-cell data within the optional numerical data exports.
An Excel file containing:
Area, circularity, texture, and functional stain metrics for each cell - one sheet/image.
Area, circularity, texture, and functional stain metrics for all cells - one sheet (this combined sheet is best for analyzing replicates)/all files.
Descriptive statistics (minimum, mean, maximum, standard deviation values for area, circularity, texture, and functional stain metrics) for individual files and combined files. Cell density (n/mm²) is also provided.
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Histogram graphs for area and circularity and a positive/negative functional stain pie chart for each individual image.
Pairplot graph for each individual image, for all images combined where one color represents all pooled data, and for all images combined where each color represents a different image file.
Some tips from the iCLOTS team:
Computational and experimental methods:
For all fluorescence microscopy applications, each stain to quantify must be solely in one red/green/blue channel; no other colors are accepted in the current version of iCLOTS.
See the export options on your microscopy acquisition software.
After application of the thresholds, the image processing algorithms analyze each interconnected region of signal as a cell.
Application cannot distinguish between overlapping cells. If cells are significantly overlapping, please repeat the experiment with a lower cell concentration.
The Lam lab and associated collaborators have found that red blood cells can be difficult to stain fluorescently. Antibody staining signal is typically weak and we've found membrane stains such as R18 can affect mechanical properties of the red blood cells.
Consider using our brightfield adhesion application if this does not conflict with your experimental goals.
Functional stain represents some activity or characteristic of the cell, e.g. expression of a surface marker.
Consider that all pixel values should be below 255, the brightest color possible. If many pixels are equal to 255, any information about degree of intensity of the functional stain above the 255 value is lost. Most microscope acquisition software has a function to detect if laser power, gain, etc. settings are producing "maxed-out," too-high values.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio
Sometimes cells (e.g., activated platelets) have a high-intensity "body" and low-intensity spreading or protrusions.
Choose a high membrane stain threshold if you're primarily quantifying number of cells.
Choose a low membrane stain threshold if you're primarily quantifying the morphology of cells.
Err on the high side of maximum area and low side of minimum area parameters unless data is particularly noisy or there's a large amount of debris
If you're unsure what parameter values to select, run the analysis with an artificially high maximum area and low minimum area and compare indexed cells to the resultant metrics - for example, perhaps you see a cluster typically has an area greater than "x" so you set maximum area slightly lower, and obvious noise typically has an area less than "y" so you set minimum area slightly higher.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
In use cases where several files are analyzed, individual sheets are named after individual files. These file names may be cropped to about 15 characters to prevent corrupting the output file. Please make sure individual files within a folder are named sufficiently differently.
Excel and pairplot data includes a sheet/graph with all images combined.
Only use this when analyzing replicates of the same sample.
Functional/secondary stain metrics are reported in two ways:
Signal (binary): 0 indicates negative for staining, 1 indicates positive for staining. This can be useful for calculating a percent expression.
Fn. stain intensity (a.u.): summed value of all functional stain pixels within the membrane stain area. Take care interpreting this number, as range of intensity can vary image-to-image or even within image due to changes in laser power, bleaching, etc.
No intensity metrics are reported from the main color, as this color should indicate morphology only.
Learn more about the methods forming the basis of our fluorescence microscopy adhesion application:
Region analysis via python library scikit-image:
Relevant citation: van der Walt S, Schönberger JL, Nunez-Iglesias J, et al. scikit-image: image processing in Python. PeerJ. 2014;2:e453.
Documentation/tutorial: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html
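A minimal sketch of this thresholding and region-analysis approach, using scikit-image as cited above, is shown below; the file name, stain channels, thresholds, and area limits are hypothetical, and the circularity formula (4·π·area/perimeter²) is the standard definition rather than a guaranteed match to the iCLOTS implementation.
```python
# Minimal sketch only: threshold the membrane stain, treat each connected region of
# signal as a cell, and measure area, circularity, texture, and functional stain signal.
import numpy as np
from skimage import io, measure

rgb = io.imread("fluorescent_cells.png")   # assumed RGB image
membrane = rgb[:, :, 1]                    # e.g. green membrane stain
functional = rgb[:, :, 0]                  # e.g. red functional stain

membrane_threshold = 50                    # 0 (black) to 255 (brightest)
functional_threshold = 50
labels = measure.label(membrane >= membrane_threshold)   # connected regions = cells

for region in measure.regionprops(labels):
    if not (100 <= region.area <= 2500):                 # minimum/maximum area filters
        continue
    rr, cc = region.coords[:, 0], region.coords[:, 1]
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    texture = float(np.std(membrane[rr, cc]))            # membrane heterogeneity
    fn_pixels = functional[rr, cc]
    fn_intensity = int(fn_pixels[fn_pixels >= functional_threshold].sum())
    positive = int(fn_intensity > 0)                      # binary positive/negative stain
    print(region.label, region.area, round(circularity, 2), round(texture, 1),
          positive, fn_intensity)
```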
Adhesion image processing application 3: filopodia counter
iCLOTS includes a specialized version of the fluorescence microscopy application designed to count and characterize filopodia at single-cell resolution. The Lam lab has found that it can be hard to objectively count filopodia. iCLOTS applies the same parameters (how distinct a filopodium must be, minimum distance from other leading edges) to an image or series of images to reduce this subjectivity.
Number of filopodia per cell and descriptive statistics describing filopodia length per cell (minimum, mean, maximum, standard deviation) are reported in addition to cell area and membrane texture.
Input files:
This application is designed to analyze a single image or a folder of image files (.jpg, .png, and/or .tif)
The same input parameters are applied to each image.
Users are led to select a color channel that indicates the cell membrane or area/morphology (red, green, blue, or grayscale/white).
Future versions of iCLOTS will also incorporate methods for quantifying a secondary stain indicating some biological character or process as well.
Parameters to interactively adjust:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Minimum area: The minimum area (pixels) of a region (ideally, a cell) to be quantified
Can be used to filter out obvious noise.
Maximum area: The maximum area (pixels) of a region to be quantified
Can be used to filter out obvious debris or cell clusters.
Membrane stain threshold: Integer between 0 (black) and 255 (white/brightest) to be used for the main channel threshold.
Any value below this threshold becomes background.
Any value greater than or equal to this threshold becomes signal to further quantify.
Harris corner detection parameters: parameters necessary to detect the sharp "corners" created by filopodia in an image.
Corner sharpness: arbitrary unit parameter ranging from 0 to 2, with 0 indicating you'd like the most defined filopodia only.
Relative intensity: arbitrary unit parameter representing the minimum intensity of "peaks," calculated as the maximum value within the image multiplied by this relative threshold.
Minimum distance: minimum distance between detected filopodia (pix), also used with the peak finding algorithm.
Output metrics:
Resolution: single-cell
Metrics:
Area (pixels, µm²), circularity (a.u.), texture (a.u.), filopodia count (n), minimum/mean/maximum/standard deviation of length of all individual filopodia (if any) per cell.
Circularity ranges from 0 (straight line) to 1 (perfect circle).
Texture is the standard deviation of all pixel intensity values within one cell, a method for describing membrane heterogeneity.
Length of filopodia is calculated as the distance from a detected filopodium's end point to the centroid of the cell shape. You may want to normalize filopodia length to the area of the cell: a large cell will also have a larger mean distance.
Future versions of this application will give individual lengths as a vector. This may be useful for detecting directed response to some localized stimuli.
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from.
Labeled imaging data:
Each original image and each image with the membrane threshold applied with each detected cell labeled with an index. Each index corresponds to single-cell data within the optional numerical data exports.
An Excel file containing:
Area, circularity, texture, and filopodia metrics for each cell - one sheet/image.
Area, circularity, texture, and filopodia metrics for all cells - one sheet (this combined sheet is best for analyzing replicates)/all files.
Descriptive statistics (minimum, mean, maximum, standard deviation values for area, circularity, texture, and filopodia metrics) for individual files and combined files. Cell density (n/mm²) is also provided.
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Histogram graphs for filopodia per cell and mean filopodia length for each individual image.
Pairplot graph for each individual image, for all images combined where one color represents all pooled data, and for all images combined where each color represents a different image file.
Some tips from the iCLOTS team:
Computational and experimental methods:
We suggest a high microscopy magnification for this application; iCLOTS was tested on 100x magnification images.
For all fluorescence microscopy applications, each stain to quantify must be solely in one red/green/blue channel; no other colors are accepted in the current version of iCLOTS.
See the export options on your microscopy acquisition software.
After application of the thresholds, the image processing algorithms analyze each interconnected region of signal as a cell.
Application cannot distinguish between overlapping cells. If cells are significantly overlapping, please repeat the experiment with a lower cell concentration.
Searching for number of filopodia can be computationally expensive.
Analysis for filopodia may take longer than other iCLOTS adhesion applications.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio
Sometimes cells (e.g., activated platelets) have a high-intensity "body" and low-intensity spreading or protrusions.
Choose a low threshold; by counting filopodia, you're primarily quantifying the morphology of the cells.
Err on the high side of maximum area and low side of minimum area parameters unless data is particularly noisy or there's a large amount of debris
If you're unsure what parameter values to select, run the analysis with an artificially high maximum area and low minimum area and compare indexed cells to the resultant metrics - for example, perhaps you see a cluster typically has an area greater than "x" so you set maximum area slightly lower, and obvious noise typically has an area less than "y" so you set minimum area slightly higher.
It can be tricky to adjust all three Harris corner detection parameters to get a roughly accurate filopodia count.
We suggest doing a sensitivity analysis (trying a wide range of parameters and comparing results).
Ideally, conclusions are not significantly affected by small changes in parameters.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
In use cases where several files are analyzed, individual sheets are named after individual files. These file names may be cropped to about 15 characters to prevent corrupting the output file. Please make sure individual files within a folder are named sufficiently differently.
Excel and pairplot data includes a sheet/graph with all images combined.
Only use this when analyzing replicates of the same sample.
No intensity metrics are reported from the membrane color, as this color should indicate morphology only.
Learn more about the methods forming the basis of our filopodia counting microscopy adhesion application:
Harris corner detection:
Relevant citation: Harris, C. & Stephens, M. in Proceedings of Fourth Alvey Vision Conference 147—151 (1988).
Region analysis via python library scikit-image:
Relevant citation: van der Walt S, Schönberger JL, Nunez-Iglesias J, et al. scikit-image: image processing in Python. PeerJ. 2014;2:e453.
Documentation/tutorial: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html
Application of corner detection via python library OpenCV:
Relevant citation: Bradski, G. The OpenCV Library. Dr. Dobb’s Journal of Software Tools 2000 (2000).
Documentation/tutorial: https://docs.opencv.org/3.4/dc/d0d/tutorial_py_features_harris.html
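To illustrate how the cited Harris corner detection and peak-finding methods can flag filopodia tips, a minimal sketch is shown below; it is not the iCLOTS implementation, the parameter values and file name are hypothetical, and the mapping to the on-screen "relative intensity" and "minimum distance" parameters is approximate.
```python
# Minimal sketch only: the Harris corner response highlights sharp features such as
# filopodia tips; a peak finder keeps well-separated peaks above a relative threshold.
import cv2
import numpy as np
from skimage.feature import peak_local_max

gray = cv2.imread("platelet.png", cv2.IMREAD_GRAYSCALE)

# Harris corner response (large values at sharp "corners").
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

# Keep peaks at least min_distance apart and above a fraction of the maximum response.
tips = peak_local_max(response, min_distance=5, threshold_rel=0.05)

# Filopodia length: distance from each detected tip to the cell centroid
# (a placeholder centroid is used here; iCLOTS uses the centroid of the cell shape).
centroid = np.array([gray.shape[0] / 2, gray.shape[1] / 2])
lengths = np.linalg.norm(tips - centroid, axis=1)
print(len(tips), "candidate filopodia; mean length (pix):",
      float(lengths.mean()) if len(tips) else 0.0)
```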
Adhesion image processing application 4: transient adhesion
iCLOTS includes a specialized version of our adhesion applications coupled with our single cell tracking applications (see below) designed to measure adhesion time of individual cells within a suspension flowing through some kind of channel or microfluidic device, including traditional flow chambers and commercially available devices like the ibidi µSlide. Adhesion time is reported as transit time, the total time the individual cell is present within the field of view.
This application tracks one or many cells within a frame using adapted Crocker and Grier particle tracking methods. Cells are linked into individual trajectories. Cells can travel in any direction(s). Typically this application would be used to track cells transiting a microfluidic device, but other uses may be possible. This application will work for both brightfield and fluorescence microscopy applications, but no fluorescence intensity data is provided in this release.
Input files:
This application is designed to analyze a single video (.avi)
The same input parameters are applied to every frame.
The application will display the video in the center of the analysis window - users can scroll through frames using the scale bar below.
If your data is saved as a series of frames, please see the suite of video editing tools to convert to .avi
Users can optionally choose a region of interest from the video for analysis.
Currently, regions of interest are selected using a draggable rectangle. Later versions of iCLOTS will incorporate options for ROIs of other shapes.
Users are led to select an "invert" setting for analysis: you can indicate that you would like to look for dark cells on a light background, or light cells on a dark background.
Parameters to interactively adjust:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Maximum diameter: The maximum diameter of a cell to be detected (pixels), must be set as an odd integer to work with the analysis algorithms.
Minimum intensity: Minimum summed intensity of a cell to be detected (a.u.).
Can help filter out obvious noise, debris, or clumped cells.
Maximum intensity: Maximum summed intensity of a cell to be detected (a.u.)
Can also help filter out obvious noise, debris, or clumped cells.
Frames per second (FPS): the rate of imaging, a microscopy parameter
Note that FPS values pulled directly from videos can be inaccurate, especially if the video has been resized or edited in any way.
Higher FPS imaging settings provide more precise distance and transit time values.
Output metrics:
Resolution: single-cell
Metrics: first frame detected, last frame detected, transit time (s), distanced traveled (µm), velocity (µm/s), area (µm²), and circularity (a.u.)
For brightfield microscopy data analysis, if cell appearance is especially heterogeneous, the algorithm may detect a portion of the cell rather than the complete cell. Take care interpreting area and circularity measurements.
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
Labeled imaging data (optional):
Each frame of the video with each detected cell labeled with an index. Each index corresponds to single-cell data within the optional numerical data exports.
While exporting the labeled frames takes extra time, the developers suggest doing so anyways. It will be useful for troubleshooting outliers, etc.
In the video adhesion application, each detected cell is labeled with a different color to aid in easy interpretation and result-checking.
An Excel file containing:
All metrics - one sheet/video.
Additional details from Trackpy algorithm use.
Descriptive statistics (minimum, mean, maximum, standard deviation values for area, distance traveled, transit time and velocity) for individual files and combined files.
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Histogram graphs for area, circularity, and transit time for the complete video.
Pairplot graph.
Some tips from the iCLOTS team:
Computational and experimental methods:
The primary difference between the video adhesion and single cell tracking algorithms is the application of a pre-processing algorithm called "background subtraction." This algorithm removes features that don't move - like microfluidic channel walls, etc. - but also adhered cells.
The tracking methods search for particles represented by image regions with Gaussian-like distributions of pixel brightness.
It can be very tricky to get a good brightfield microfluidic video without significant debris. It may also be tricky to adjust parameters to exclude this debris.
If it does not conflict with your experimental goals try staining the cells.
It can be tricky to choose a good minimum-to-maximum intensity (mass) range.
Try running with a very low/very high value, respectively, and look at outputs to find a more suitable, narrow range.
You may also want to adjust the contrast of the video using the suite of video processing tools. Making the cells more distinct may help with tracking, but will not affect time-based results.
Analysis methods cannot distinguish between overlapping cells. If cells are significantly overlapping, repeat the experiment with a lower cell concentration.
If the analysis is taking an unacceptably long time, you can resize videos to be smaller.
This may cause you to miss the smallest cells - if size is important, we suggest waiting it out.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
Err on the high side of maximum diameter, low side of minimum intensity, and high side of maximum intensity parameters unless data is particularly noisy or there's a large amount of debris.
Maximum diameter can behave non-intuitively if set unnecessarily high. Lower if obvious cells are being missed.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Learn more about the methods forming the basis of our single cell tracking application:
Crocker and Grier particle tracking, used to find and track individual cells:
Crocker JC, Grier DG. Methods of Digital Video Microscopy for Colloidal Studies. Journal of Colloid and Interface Science. 1996;179(1):298-310.
Python library Trackpy, used to implement these algorithms:
Documentation/tutorial: http://soft-matter.github.io/trackpy/v0.5.0/tutorial/walkthrough.html
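A minimal sketch of the detection-plus-linking workflow behind these transient adhesion metrics, using the Trackpy library cited above, is shown below; the file name, parameter values, and frame rate are hypothetical.
```python
# Minimal sketch only: detect cells in every frame, link detections into trajectories,
# and report transit time as the span of frames each trajectory is visible divided by FPS.
import pims
import trackpy as tp

@pims.pipeline
def as_gray(frame):
    # Average the color channels so Trackpy receives 2-D grayscale frames.
    return frame[:, :, :3].mean(axis=2)

frames = as_gray(pims.open("transient_adhesion.avi"))
fps = 25.0  # imaging rate (frames per second), a microscopy setting

features = tp.batch(frames, diameter=15, minmass=500)
trajectories = tp.link(features, search_range=10, memory=3)

for particle, track in trajectories.groupby("particle"):
    transit_time = (track["frame"].max() - track["frame"].min()) / fps
    print(f"cell {particle}: transit time {transit_time:.2f} s")
```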
Single cell tracking image processing application 1: brightfield microscopy
This application tracks one or many cells within a frame using adapted Crocker and Grier particle tracking methods. Cells are linked into individual trajectories. Cells can travel in any direction(s). iCLOTS provides a distance traveled, transit time, and velocity (distance/time) for each tracked cell. Typically this application would be used to track cells transiting a microfluidic device, but other uses may be possible. A specialized secondary application analyzes fluorescence microscopy videos of cells transiting any device (see below).
The iCLOTS manuscript demonstrates use of this application primarily with the use of the Lam lab "biophysical flow cytometer" microfluidic device, a research-developed microfluidic designed to provide a relative measure of cell deformability, a mechanical property. A specialized version of the single cell tracking application, for both brightfield and fluorescence microscopy, is also provided (see below.)
Input files:
This application is designed to analyze a single video (.avi)
The same input parameters are applied to every frame.
The application will display the video in the center of the analysis window - users can scroll through frames using the scale bar below.
If your data is saved as a series of frames, please see the suite of video editing tools to convert to .avi
Users can optionally choose a region of interest from the video for analysis.
Currently, regions of interest are selected using a draggable rectangle. Later versions of iCLOTS will incorporate options for ROIs of other shapes.
Parameters to interactively adjust:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Maximum diameter: The maximum diameter of a cell to be detected (pixels), must be set as an odd integer to work with the analysis algorithms.
Minimum intensity: Minimum summed intensity of a cell to be detected (a.u.).
Search range: the distance (in pixels) to search for a feature in the subsequent frame.
Minimum distance traveled: the total detected distance a cell must travel to be considered a valid data point (in pixels).
This should be used mostly to filter out obvious noise. Keep in mind, depending on the rate of imaging you may not see a cell at every position it transits - e.g., for the biophysical flow cytometer we typically set minimum distance as one-third the length of the channel - maybe the first time the microscope captures the cell, it's 10-30 microns into the channel, but the last time it captures the cell, it's 10-30 microns from the end. In this case, it would appear the cell traveled less than the length of the channel.
Frames per second (FPS): the rate of imaging, a microscopy parameter
Note that FPS values pulled directly from videos can be inaccurate, especially if the video has been resized or edited in any way.
Higher FPS imaging settings provide more precise distance and transit time values.
Output metrics:
Resolution: single-cell
Metrics: area (pixels), velocity (µm/s)
Transit time (s) and distance traveled (µm) are also provided. Velocity is equal to distance traveled divided by transit time.
Area is provided in pixels only because the algorithm primarily detects moving shapes - this may include changes in intensity of the channel walls as the cell travels nearby. Please consider area a relative measurement.
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
Labeled imaging data (optional):
Each frame of the video with each detected cell labeled with an index. Each index corresponds to single-cell data within the optional numerical data exports.
While exporting the labeled frames takes extra time, the developers suggest doing so anyways. It will be useful for troubleshooting outliers, etc.
An Excel file containing:
Area and velocity for each cell - one sheet/video.
Descriptive statistics (minimum, mean, maximum, standard deviation values for area, distance traveled, transit time and velocity).
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Histogram graphs for area, distance traveled, transit time, and velocity for the individual video.
Pairplot graph.
Some tips from the iCLOTS team:
Computational and experimental methods:
An algorithm called "background subtraction" is applied to the video frames before tracking algorithms are used to detect cell movement. This removes features that don't move - like microfluidic channel walls, etc. The first displayed analysis image of any video will be white. If cells are stuck in channels for exceedingly long amounts of time, the background subtraction algorithm may also remove them. You may need to adjust experimental variables, like pump speed or device height, if cells are stuck for too long. Our application that tracks transient adhesion (see above) does not use background subtraction.
Cells transiting a device so closely that they clump together will be detected as one cell. Check area measurements for especially large values to ensure this has not happened. You may need to adjust experimental variables such as cell concentration to prevent clumping, particularly for WBCs.
Some quality measures are imposed on data points which may affect quantitative results. To calculate an accurate velocity measurement, cells must be present in at least 3 frames. Users may need to reduce pump speed if cells are transiting a device too quickly.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
Err on the high side of maximum diameter and low side of minimum intensity parameters unless data is particularly noisy or there's a large amount of debris.
If you're unsure what parameter values to select, run the analysis with an artificially high maximum diameter and low minimum intensity and compare indexed cells to the resultant metrics - for example, perhaps you see a cell typically has a diameter of "x" so you set maximum diameter slightly higher to exclude debris, and a cell typically has a pixel intensity of "y" so you set minimum intensity just below this to exclude noise.
Maximum diameter can behave non-intuitively if set unnecessarily high. Lower if obvious cells are being missed.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Excel and pairplot data includes a sheet/graph with all images combined
Only use this when analyzing replicates of the same sample.
Learn more about the methods forming the basis of our single cell tracking application:
Crocker and Grier particle tracking, used to find and track individual cells:
Crocker JC, Grier DG. Methods of Digital Video Microscopy for Colloidal Studies. Journal of Colloid and Interface Science. 1996;179(1):298-310.
Python library Trackpy, used to implement these algorithms:
Documentation/tutorial: http://soft-matter.github.io/trackpy/v0.5.0/tutorial/walkthrough.html
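A minimal sketch of the background subtraction pre-processing described in the tips above is shown below, using an OpenCV background subtractor as a stand-in for whatever method iCLOTS uses internally, followed by Trackpy detection; the file name and parameter values are hypothetical.
```python
# Minimal sketch only: remove stationary features (channel walls, stuck cells) with a
# background subtractor, then locate the remaining moving cells in each frame.
import cv2
import pandas as pd
import trackpy as tp

capture = cv2.VideoCapture("cell_tracking.avi")
subtractor = cv2.createBackgroundSubtractorMOG2()  # one possible subtraction method

frame_features = []
frame_number = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    moving = subtractor.apply(gray)            # first frame is treated as all foreground
    spots = tp.locate(moving, diameter=15, minmass=500)
    spots["frame"] = frame_number
    frame_features.append(spots)
    frame_number += 1
capture.release()

detections = pd.concat(frame_features, ignore_index=True)
print(detections.head())
```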
Single cell tracking image processing application 2: fluorescence microscopy
This application works in the same way as our single cell tracking image processing application for brightfield microscopy videos, but with an added fluorescence cell intensity output metric to describe the summed intensity of individual cells within a fluorescence microscopy video.
All inputs/outputs, methods, and tips and tricks remain the same. Ideally, the stain used for cells describes some functionality or property of the cell. The developers have found that it can be challenging to detect a strong fluorescence signal from moving cells. If troubleshooting experimental variables such as stain concentration, pump speed, and device height do not result in a stronger signal, use the "Edit contrast" video processing application with a gain (alpha) value that increases signal. Then, divide the output cell intensity metrics by this alpha value to remove any bias.
Users may optionally choose a region of interest to analyze. The application builds a map of potential channels from all fluorescence signal in the video. Usually this is a suitable representation of the microfluidic device.
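A minimal sketch of the gain correction suggested above is shown below; the alpha value and file name are hypothetical, and note that any pixels saturated at 255 by the contrast edit cannot be fully recovered by dividing.
```python
# Minimal sketch only: boost a weak fluorescence signal with a linear gain (alpha) for
# detection, then divide the measured summed intensity by the same alpha to remove bias.
import cv2
import numpy as np

alpha = 2.0  # gain used in the "Edit contrast" step (hypothetical value)

frame = cv2.imread("fluorescent_frame.png", cv2.IMREAD_GRAYSCALE)
boosted = cv2.convertScaleAbs(frame, alpha=alpha, beta=0)  # brightened copy for detection

# Stand-in for a per-cell summed intensity reported by the tracking output.
measured_intensity = float(boosted.astype(np.float64).sum())
unbiased_intensity = measured_intensity / alpha
print(unbiased_intensity)
```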
Specialized single cell tracking image processing applications: deformability assay
The iCLOTS manuscript demonstrates use of single cell tracking capabilities (see above) primarily with use of the Lam lab "biophysical flow cytometer" microfluidic device, a research-developed microfluidic designed to provide a relative measure of cell deformability, a mechanical property. We have included a specialized version of the single cell tracking application, available for both brightfield and fluorescence microscopy, for specific use with this assay. This specialized application tracks cells in the x-direction only and imposes special data quality requirements. This application may be useful for any single-cell resolution channel flow assays. For use with channel flow, rotate the video using our suite of video editing tools so that flow is horizontal.
Input files:
Users are led to choose a region of interest from the video for analysis:
When using with the biophysical flow cytometer device, please choose only the area of the smallest channels.
For any microfluidic device you may use, choose a region with only straight channel portions parallel to the bottom of the image (x-direction flow only).
Parameters to interactively adjust:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Maximum diameter: The maximum diameter of a cell to be detected (pixels), must be set as an odd integer to work with the analysis algorithms.
Minimum intensity: Minimum summed intensity of a cell to be detected (a.u.).
Search range: search range is automatically set to no more than 1/3 the channel length to ensure only the highest-quality data points are detected
Minimum distance traveled: minimum distance traveled is automatically set to 1/3 the channel length to ensure only the highest-quality data points are detected
Frames per second (FPS): the rate of imaging, a microscopy parameter
Note that FPS values pulled directly from videos can be inaccurate, especially if the video has been resized or edited in any way.
Output metrics:
Resolution: single-cell
Metrics: area (pixels), single cell deformability index (sDI, µm/s)
sDI is velocity, but presented as sDI to indicate that it represents a relative measure of cellular mechanical properties.
Transit time (s) and distance traveled (µm) are also provided. sDI is equal to distance traveled divided by transit time.
Area is provided in pixels only because the algorithm primarily detects moving shapes - this may include changes in intensity of the channel walls as the cell travels nearby. Please consider area a relative measurement.
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
Labeled imaging data (optional):
Each frame of the video with each detected cell labeled with an index. Each index corresponds to single-cell data within the optional numerical data exports.
While exporting the labeled frames takes extra time, the developers suggest doing so anyways. It will be useful for troubleshooting outliers, etc.
An Excel file containing:
Area and sDI for each cell - one sheet/video.
Descriptive statistics (minimum, mean, maximum, standard deviation values for area and sDI).
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Histogram graphs for area and sDI.
Pairplot graph.
Some tips from the iCLOTS team:
Computational and experimental methods:
An algorithm called "background subtraction" is applied to the video frames before tracking algorithms are used to detect cell movement. This removes features that don't move - like microfluidic channel walls, etc. If cells are stuck in channels for exceedingly long amounts of time, the background subtraction algorithm may also remove them. You may need to adjust experimental variables, like pump speed or device height, if cells are stuck for too long.
Cells transiting a device so closely that they clump together will be detected as one cell. Check area measurements for especially large values to ensure this has not happened. You may need to adjust experimental variables such as cell concentration to prevent clumping, particularly for WBCs.
Some quality measures are imposed on data points which may affect quantitative results. To calculate an accurate velocity measurement, cells must be present in at least 3 frames. Users may need to reduce pump speed if cells are transiting the device too quickly.
You may see cells that were detected in the on-screen background subtraction and cell detection analysis window but were not labeled in the output data, indicating they were not treated as suitable data points. This is because those cells did not meet these quality standards.
Choose the target device height of the biophysical flow cytometer device during fabrication methods carefully. The width of the smallest channels in the device is about 6 µm. The Lam lab typically fabricates microfluidic device masks at a height of 5 µm for red blood cells or a height of 12-15 µm for white blood cells and associated cell lines. Cells must deform to fit through the device for meaningful deformability metrics.
Depending on how "sticky" cells are, the assay may measure adherence vs. deformability. Coat channels with a bovine serum albumin (BSA) solution prior to use to prevent non-specific binding. The developers suggest not reusing devices for multiple experiments.
We typically use a pump speed of 1 µL/min coupled with an FPS of about 25 for use with the biophysical flow cytometer.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
Err on the high side of maximum diameter and low side of minimum intensity parameters unless data is particularly noisy or there's a large amount of debris.
If you're unsure what parameter values to select, run the analysis with an artificially high maximum diameter and low minimum intensity and compare indexed cells to the resultant metrics - for example, perhaps you see a cell typically has a diameter of "x" so you set maximum diameter slightly higher to exclude debris, and a cell typically has a pixel intensity of "y" so you set minimum intensity just below this to exclude noise.
Maximum diameter can behave non-intuitively if set unnecessarily high. Lower if obvious cells are being missed.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Excel and pairplot data includes a sheet/graph with all images combined
Only use this when analyzing replicates of the same sample.
Learn more about the methods forming the basis of our deformability application:
Crocker and Grier particle tracking, used to find and track individual cells:
Crocker JC, Grier DG. Methods of Digital Video Microscopy for Colloidal Studies. Journal of Colloid and Interface Science. 1996;179(1):298-310.
Python library Trackpy, used to implement these algorithms:
Documentation/tutorial: http://soft-matter.github.io/trackpy/v0.5.0/tutorial/walkthrough.html
Manuscripts detailing the use of the biophysical flow cytometer device:
Original manuscript: Rosenbluth MJ, Lam WA, Fletcher DA. Analyzing cell mechanics in hematologic diseases with microfluidic biophysical flow cytometry. Lab Chip. 2008 Jul;8(7):1062-70. doi: 10.1039/b802931h. Epub 2008 Jun 5. PMID: 18584080; PMCID: PMC7931849.
Use with neutrophils: Fay ME, Myers DR, Kumar A, Turbyfield CT, Byler R, Crawford K, Mannino RG, Laohapant A, Tyburski EA, Sakurai Y, Rosenbluth MJ, Switz NA, Sulchek TA, Graham MD, Lam WA. Cellular softening mediates leukocyte demargination and trafficking, thereby increasing clinical blood counts. Proc Natl Acad Sci U S A. 2016 Feb 23;113(8):1987-92. doi: 10.1073/pnas.1508920113. Epub 2016 Feb 8. PMID: 26858400; PMCID: PMC4776450.
Use with red blood cells, sickle cell disease: Guruprasad P, Mannino RG, Caruso C, Zhang H, Josephson CD, Roback JD, Lam WA. Integrated automated particle tracking microfluidic enables high-throughput cell deformability cytometry for red cell disorders. Am J Hematol. 2019 Feb;94(2):189-199. doi: 10.1002/ajh.25345. Epub 2018 Nov 28. PMID: 30417938; PMCID: PMC7007699.
Use with red blood cells, iron deficiency anemia: Caruso C, Fay ME, Cheng X, Liu AY, Park SI, Sulchek TA, Graham MD, Lam WA. Pathologic mechanobiological interactions between red blood cells and endothelial cells directly induce vasculopathy in iron deficiency anemia. iScience. 2022 Jun 15;25(7):104606. doi: 10.1016/j.isci.2022.104606. PMID: 35800766; PMCID: PMC9253485.
Use with hematopoietic stem cells: Ni F, Yu WM, Wang X, Fay ME, Young KM, Qiu Y, Lam WA, Sulchek TA, Cheng T, Scadden DT, Qu CK. Ptpn21 Controls Hematopoietic Stem Cell Homeostasis and Biomechanics. Cell Stem Cell. 2019 Apr 4;24(4):608-620.e6. doi: 10.1016/j.stem.2019.02.009. Epub 2019 Mar 14. PMID: 30880025; PMCID: PMC6450721.
iCLOTS manuscript pending peer-reviewed publication contains additional data describing use with red blood cells, reticulocytes, and cancer cell lines.
Velocity profile image processing application
This application tracks detected features (typically patterns of cells) and their displacement using adapted Shi-Tomasi corner detection and Kanade-Lucas-Tomasi feature tracking methods. This application is ideal for suspensions of cells where cells are significantly overlapping. Cell suspensions can travel in any direction(s). iCLOTS provides velocity (displacement/time) for each tracked feature. Minimum, mean, and maximum velocity values within a frame are reported for each frame. A velocity profile is generated based on a user-specified bin number from all events. Typically this application would be used to track cells transiting a microfluidic device, but other uses may be possible.
This application has been extensively tested on brightfield videomicroscopy. Usage with fluorescently labeled features is likely also possible.
Input files:
This application is designed to analyze a single video (.avi)
The same input parameters are applied to every frame.
The application will display the video in the center of the analysis window - users can scroll through frames using the scale bar below.
If your data is saved as a series of frames, please see the suite of video editing tools to convert to .avi
Users are led to choose a region of interest from the video for analysis:
Small defects in device walls or minor changes in microscopy illumination can present as features to be detected and tracked. Please choose only the channel area, excluding walls or other background. Current region of interest tools provide methods for choosing a rectangular shape. Later versions of iCLOTS will allow for customizable ROIs of varying geometries.
Parameters to interactively adjust:
Video descriptors
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image, used to convert pixel measurements into area or distance dimensions. Use 1 for no conversion.
Frames per second (FPS): The rate of imaging, a microscopy parameter
Note that FPS values pulled directly from videos can be inaccurate, especially if the video has been resized or edited in any way.
Number of bins: The number of bins to divide the channel into to create a profile.
The developers suggest a bin size of roughly the size of a cell - e.g., a 5 µm bin size was used for the iCLOTS manuscript.
Image labeling settings: The developers suggest exporting at least some frames with trajectories labeled to ensure that the analysis is working as intended. As videos taken with velocity measurements in mind can have a large number of frames, the team has provided two export options:
Export first 100 frames
Export one frame every 100 frames
Users can select either, both, or neither option using the series of checkboxes.
Shi-Tomasi corner finding:
Block size (pixels): A block size over which eigenvalues are computed to pick optimal features is required. For best results, the developers find that roughly the size of a cell (in pixels) works well.
The developers have set the maximum number of features that may be detected in a frame as 500, with a high required "quality level," to ensure the most representative data points are used. All features must be a minimum of two pixels apart. For users with coding experience, methods are available as scripts that can be further customized at github.com/LamLabEmory.
Kanade-Lucas-Tomasi feature tracking:
Window size: The most important parameter to select for this application! This is the area (in pixels) to search for a feature in the subsequent frame.
Err on the high side, as values too small can miss the fastest moving features.
If the created profiles look unusually blunted, your window size is likely too small.
An x-range and a y-range are set independently. If flow is unidirectional in the x-direction, you could set a low y-range to reduce computational expense, or vice versa.
Keep in mind, even if flow is unidirectional, the algorithm will search for features in every direction, e.g., if you're sure cells travel no more than 10 pixels to the right in the subsequent frame, you would set your x-direction window as 20 pixels, to account for the algorithm treating both sides as potential next locations.
The developers have set the possible number of iterations of the search such that only one subsequent frame is checked for a previously detected feature (1 iteration). The minimum distance a feature must travel is 1 pixel. For users with coding experience, methods are available as scripts that can be further customized at github.com/LamLabEmory.
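For reference, the two steps above correspond roughly to OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK calls. The sketch below processes a single pair of frames with illustrative parameter values (window size, µm-to-pixel ratio, FPS) and is not the exact iCLOTS implementation.

```python
import cv2
import numpy as np

um_per_pixel = 0.5   # assumed µm-to-pixel ratio
fps = 160            # assumed imaging rate

cap = cv2.VideoCapture("velocity_video.avi")   # hypothetical input video
ok, prev = cap.read()
ok, curr = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corner finding: up to 500 features, at least 2 pixels apart.
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.1,
                             minDistance=2, blockSize=7)

# Kanade-Lucas-Tomasi tracking: the search window should cover the fastest
# expected motion (here 20 px in x, 5 px in y).
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                           winSize=(20, 5), maxLevel=0)

good_new = p1[status == 1]
good_old = p0[status == 1]
displacement_px = np.linalg.norm(good_new - good_old, axis=1)   # pixels per frame
velocity_um_per_s = displacement_px * um_per_pixel * fps        # µm per second
cap.release()
```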
Output metrics:
Resolution: single-feature (e.g. single pattern of cells)
Metric: velocity (µm/s)
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
Labeled imaging data (optional):
Depending on settings selected, the first 100 frames and/or every 100th frame, labeled with displacement of features from the subsequent frame.
While exporting the labeled frames takes extra time, the developers suggest doing so anyway; the labeled frames are useful for troubleshooting outliers, etc.
A comma separated value (.csv) sheet containing the initial position and velocity of every feature tracked.
Up to 500 features are tracked per frame, for possibly several thousand frames, so these files tend to be large. They are most suitable for further computational analysis.
An Excel file containing:
Minimum, mean, and maximum velocity values per frame.
Minimum, mean, maximum, and standard deviation of velocity values for the entire video.
Profile information: mean velocity and standard deviation (separate sheets) for all values within the video.
Parameters used and time/date analysis was performed, for reference.
Graphical data:
Time course graph containing minimum (blue), mean (green), and maximum (red) velocity values for each frame.
Profile graph with the position and velocity of each feature tracked as single points, overlaid with a line representing the profile values.
Some tips from the iCLOTS team:
Computational and experimental methods:
Tracking cell suspension velocity accurately is highly dependent on the quality of data. All data presented in iCLOTS was taken at a frame rate of at least 160 frames per second. You may need a high-speed camera, depending on experimental goals. The Lam lab can provide recommendations for equipment upon request.
Calculating an accurate velocity profile relies on automatically-calculated linearly-spaced bins spanning the height of the channel. While cropping to a region of interest excluding channel walls is not required, for best profile results the team suggests doing so to avoid wall velocities that appear artificially slower.
The channel used in the experimental data presented in iCLOTS was intentionally shallow, at ~15 µm tall. It is difficult to detect distinct features in deeper channels.
The channel width used in the experimental data presented in iCLOTS was 70 µm, which we found to be ideal for calculating a velocity profile. At smaller widths, a "clean" profile may be hard to calculate. If channels are so small individual cells appear distinct, please try our single cell tracking methods.
Features are typically patterns of cells rather than a single individual cell. As such, no label index is provided. Exported frames are labeled with trajectories only.
Files taken at a high FPS rate can be quite large. To reduce computational expense:
You may be able to resize the video to a smaller resolution using the suite of video editing tools available in iCLOTS.
Typically you do not need a very long region of interest to calculate an accurate velocity profile. Try selecting only a short portion of the channel.
If you are not looking for changes in velocity over time, use relatively short video clips. The developers have found that 10 seconds at a high FPS is oftentimes sufficient to establish mean values and a profile. You can shorten longer videos using the suite of video editing tools.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Excel and pairplot data includes a sheet/graph with all images combined
Only use this when analyzing replicates of the same sample.
Learn more about the methods forming the basis of our velocity profile application:
Shi-Tomasi corner detection, used for finding features to track:
Shi J, Tomasi C. Good Features to Track. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). 1994:593-600.
Kanade-Lucas-Tomasi feature displacement algorithms, used for calculating velocity measurements:
Lucas B, Kanade T. An Iterative Image Registration Technique with an Application to Stereo Vision. Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI). 1981:674-679.
Python library OpenCV, used to implement these algorithms:
Initial description: Bradski G. The OpenCV Library. Dr Dobb's Journal of Software Tools. 2000.
Documentation/tutorial: https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html
Microfluidic accumulation image processing application 1: region of interest
Our accumulation-based image processing applications are designed to be multi-scale to fit a variety of researchers' needs. This scale is used to analyze accumulation and occlusion as indicated by fluorescence microscopy signal in any square region of interest selected by the user. The iCLOTS manuscript demonstrates use of this application with a small region from a commercially-available ibidi device.
Input files:
This application is designed to work with a single image or a folder of images describing timeseries data (.jpg, .png, and/or .tif).
The same input parameters are applied to each image.
After uploading one or several images, the user is prompted to choose an ROI from the first image.
The same ROI is applied to all images; take care that all images represent the same field of view.
It's important that frames are labeled sequentially in timeseries order.
Input parameters:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image.
Use value = 1 for no conversion.
Red, green, and/or blue threshold(s).
Users may select which color channel(s) they would like to analyze.
After channel selection, a spinbox to choose a threshold for each channel appears.
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
ROI from each image:
ROI from the last image.
Images with image processing (threshold) steps applied.
A corresponding .xlsx sheet containing:
For each selected channel:
Frame data: mean occlusion and accumulation.
Conversion notes:
To convert accumulation per frame into per timepoint, divide frame number by FPS imaging rate.
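As a rough illustration of how threshold-based occlusion and accumulation values can be computed for an ROI, the sketch below assumes occlusion is the fraction of ROI pixels above the chosen threshold and accumulation is the above-threshold area in µm². The threshold, ratio, file name, and imaging rate are placeholders, and the exact iCLOTS definitions may differ slightly.

```python
import numpy as np
from skimage import io

um_per_pixel = 0.5          # assumed µm-to-pixel ratio
fps = 0.1                   # assumed imaging rate (one frame every 10 s)
red_threshold = 50          # assumed red-channel threshold (a.u.)

roi = io.imread("roi_frame_0001.png")      # hypothetical ROI image (H x W x 3)
red_layer = roi[:, :, 0]

signal = red_layer > red_threshold
occlusion_fraction = signal.mean()                      # fraction of the ROI containing signal
accumulation_um2 = signal.sum() * um_per_pixel ** 2     # above-threshold area in µm²

frame_number = 1
timepoint_s = frame_number / fps           # convert frame number to a timepoint (seconds)
```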
Occlusion/accumulation graph:
For the time series, a line graph showing:
Occlusion (titled, left) for each color.
Accumulation (titled, right) for each color.
Some tips from the iCLOTS team:
Computational and experimental methods:
See input requirements: a time series, in the same field of view.
We are planning a coupled brightfield/fluorescence microscopy application for future iCLOTS releases.
Time series images must be in the proper alphabetical/numerical order.
If image names contain numbers, use preceding zeros to order properly.
i.e. 01, 02... 10 instead of 1, 2... 10
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
If you indicate more than one color channel, you might find the colors overlap in the analysis window, and you can't accurately see parameters as set.
Choose each color individually, select a good threshold for it, and then run the combined analysis with the thresholds you chose.
While the "trend" of accumulation/occlusion should stay roughly constant regardless of the threshold you set, the degree of accumulation/occlusion will decrease as the threshold increases.
If you are comparing conditions, make sure they were taken with the same imaging settings and use the same threshold values.
Ideally these experiments are direct control-to-experimental comparisons taken on the same day.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Learn more about the methods forming the basis of our multiscale microfluidic accumulation applications:
Region analysis via python library scikit-image:
Relevant citation: van der Walt S, Schönberger JL, Nunez-Iglesias J, et al. scikit-image: image processing in Python. PeerJ. 2014;2:e453.
Documentation/tutorial: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html
Microfluidic accumulation image processing application 2: complex geometry microfluidic device
Our accumulation-based image processing applications are designed to be multi-scale to fit a variety of researchers' needs. This scale is used to analyze accumulation, occlusion, and obstruction as indicated by fluorescence microscopy signal in a microfluidic device with complex geometry. Only the region of the device indicated by a channel stain or the summed signal from a time course is quantified. The iCLOTS manuscript demonstrates use of this application with a branching microfluidic device.
Input files:
This application is designed to work with a single image or a folder of images describing timeseries data (.jpg, .png, and/or .tif).
The same input parameters are applied to each image.
After uploading one or several images, the user is prompted to choose an ROI from the first image.
The same ROI is applied to all images; take care that all images represent the same field of view.
It's important that frames are labeled sequentially in timeseries order.
Input parameters:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image.
Use value = 1 for no conversion.
Red, green, and/or blue threshold(s).
Users may select which color channel(s) they would like to analyze.
After channel selection, a spinbox to choose a threshold for each channel appears.
The "map" comprising the area of the microfluidic device is created from the sum of all signal above threshold.
Output files:
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
ROI from each image:
The "map" used - the channel region(s) as detected.
Images with image processing (threshold) steps applied.
For each frame, each selected color as detected by the set threshold overlaid on the channel map (white color).
A corresponding .xlsx sheet containing:
For each selected channel:
Frame data: mean occlusion and accumulation per frame (all channels).
Conversion notes:
To convert accumulation per frame into per timepoint, divide frame number by FPS imaging rate.
Occlusion/accumulation graph:
For the time series, a line graph showing:
Occlusion (titled, left) for each color.
Accumulation (titled, right) for each color.
Some tips from the iCLOTS team:
Computational and experimental methods:
See input requirements: a time series, in the same field of view, with "complete" microfluidic channel signal.
Creating the map requires some signal at every point in that channel.
Consider staining the microfluidic channels - if this isn't possible, you may benefit from the region of interest-scale accumulation application.
We are planning a coupled brightfield/fluorescence microscopy application for future iCLOTS releases.
This would not require some bright, single-color signal at every height point in the channel.
Time series images must be in the proper alphabetical/numerical order.
If image names contain numbers, use preceding zeros to order properly.
i.e. 01, 02... 10 instead of 1, 2... 10
The Lam lab has developed these methods on an "endothelialized" branching microfluidic device.
See "Endothelialized Microfluidics for Studying Microvascular Interactions in Hematologic Diseases" manuscript by Myers and Sakurai et al., 2012, JOVE.
We are happy to share a detailed endothelialization protocol upon request.
We are happy to share the mask design files and instructions for fabrication upon request.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
If you indicate more than one color channel, you might find the colors overlap in the analysis window, and you can't accurately see parameters as set.
Choose each color individually, select a good threshold for it, and then run the combined analysis with the thresholds you chose.
While the "trend" of accumulation/occlusion should stay roughly constant regardless of the threshold you set, the degree of accumulation/occlusion will decrease as the threshold increases.
If you are comparing conditions, make sure they were taken with the same imaging settings and use the same threshold values.
Ideally these experiments are direct control-to-experimental comparisons taken on the same day.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Learn more about the methods forming the basis of our multiscale microfluidic accumulation applications:
Region analysis via python library scikit-image:
Relevant citation: van der Walt S, Schönberger JL, Nunez-Iglesias J, et al. scikit-image: image processing in Python. PeerJ. 2014;2:e453.
Documentation/tutorial: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html
Learn more about endothelialized microfluidic devices:
Myers DR, Sakurai Y, Tran R, et al. Endothelialized microfluidics for studying microvascular interactions in hematologic diseases. J Vis Exp. 2012(64).
Microfluidic accumulation image processing application 3: microchannel(s)
Our accumulation-based image processing applications are designed to be multi-scale to fit a variety of researchers' needs. This scale is used to analyze accumulation, occlusion, and obstruction as indicated by fluorescence microscopy signal in a series of straight microchannel(s) within some larger device. Individual channels as indicated by a channel stain or left-right extension of the summed signal from a frame are quantified. The iCLOTS manuscript demonstrates use of this application with a set of 32 of the smallest channels within a branching microfluidic device. A single image or a time series of images can be analyzed. Signal from red, blue, and/or green channels can be quantified.
Input files:
This application is designed to work with a single image or a folder of images describing timeseries data (.jpg, .png, and/or .tif).
The same input parameters are applied to each image.
Each image should consist of one or many straight portions of a microfluidic device.
After uploading one or several images, the user is prompted to choose an ROI from the first image.
This ROI should contain the straight channel portions.
The same ROI is applied to all images; take care that all images represent the same field of view.
The algorithm relies on left-to-right indexing to form the channel regions to analyze.
As such, channels should be perfectly horizontal.
iCLOTS provides a video-editing rotation tool that does not affect aspect ratio.
In order to create a complete channel area to analyze, some fluorescence signal must be present at every y pixel of the channel.
Staining the channels, or some feature of the channel, like a cell layer, helps with this.
Input parameters:
µm-to-pixel ratio: The ratio of microns (1e-6 m) to pixels for the image.
Use value = 1 for no conversion.
Red, green, and/or blue threshold(s).
Users may select which color channel(s) they would like to analyze.
After channel selection, a spinbox to choose a threshold for each channel appears.
Output files:
Region of signal is calculated with single pixel resolution.
Region of signal may not represent single cells.
All files are saved in a new folder titled "Results," located within the folder the original imaging data was selected from. Time and date are included in the folder name to identify when the analysis was performed and distinguish between different analyses.
ROI from each image:
The "map" used - the channel region(s) as detected.
Images with image processing (threshold) steps applied.
For each frame, each selected color as detected by the set threshold overlaid on the channel map (white color).
A corresponding .xlsx sheet containing:
For each selected channel:
Raw data: A percent y-occlusion for every frame, channel, and x-position within the channel.
Obstruction, or percent y-occlusion, indicates what percentage of the height of the microchannel contains signal.
Channel data: occlusion (area of signal), accumulation (pixels, µm²), and obstruction (% y-occlusion) for each channel in each frame.
Frame data: mean occlusion, accumulation, and obstruction per frame (all channels)
Conversion notes:
To convert accumulation per frame into per timepoint, divide frame number by FPS imaging rate
To convert an x-pixel coordinate to a measurement, multiply by the µm-to-pixel ratio.
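The obstruction (percent y-occlusion) calculation for a single horizontal channel can be sketched as below. The threshold, file name, and channel row bounds are illustrative assumptions; iCLOTS derives the channel rows from the detected channel map.

```python
import numpy as np
from skimage import io

threshold = 50                                 # assumed threshold (a.u.)
frame = io.imread("frame_0001.png")            # hypothetical time-series frame
y_top, y_bottom = 120, 140                     # assumed rows spanning one microchannel

channel = frame[y_top:y_bottom, :, 0]          # red layer of that channel
signal = channel > threshold
channel_height = channel.shape[0]

# For every x position, the percentage of the channel height containing signal.
percent_y_occlusion = 100 * signal.sum(axis=0) / channel_height   # one value per x column
mean_obstruction = percent_y_occlusion.mean()                     # channel-level obstruction
```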
Occlusion/accumulation graph:
For the time series, a line graph showing:
Occlusion (titled, left) for each channel (light lines) and mean (dark lines) for each color.
Accumulation (titled, right) for each channel (light lines) and mean (dark lines) for each color.
Some tips from the iCLOTS team:
Computational and experimental methods:
See input requirements: a time series, in the same field of view, with "complete" y-height horizontal channels.
The left-to-right indexing to form the channels requires some signal at every height point in that channel.
Consider staining the microfluidic channels.
We are planning a coupled brightfield/fluorescence microscopy application for future iCLOTS releases.
This would not require some bright, single-color signal at every height point in the channel.
Time series images must be in the proper alphabetical/numerical order.
If image names contain numbers, use preceding zeros to order properly.
i.e. 01, 02... 10 instead of 1, 2... 10
The Lam lab has developed these methods on an "endothelialized" branching microfluidic device.
See "Endothelialized Microfluidics for Studying Microvascular Interactions in Hematologic Diseases" manuscript by Myers and Sakurai et al., 2012, JOVE.
We are happy to share a detailed endothelialization protocol upon request.
We are happy to share the mask design files and instructions for fabrication upon request.
Choosing parameters:
Be sure to use µm-to-pixel ratio, not pixel-to-µm ratio.
While the "trend" of accumulation/occlusion should stay roughly constant regardless of the threshold you set, the degree of accumulation/occlusion will decrease as the threshold increases.
If you are comparing conditions, make sure they were taken with the same imaging settings and use the same threshold values.
Ideally these experiments are direct control-to-experimental comparisons taken on the same day.
Output files:
Analysis files are named after the folder containing all images (.xlsx) or image names (.png)
Avoid spaces, punctuation, etc. within file names.
Learn more about the methods forming the basis of our multiscale microfluidic accumulation applications:
Region analysis via python library scikit-image:
Relevant citation: van der Walt S, Schönberger JL, Nunez-Iglesias J, et al. scikit-image: image processing in Python. PeerJ. 2014;2:e453.
Documentation/tutorial: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html
Learn more about endothelialized microfluidic devices:
Myers DR, Sakurai Y, Tran R, et al. Endothelialized microfluidics for studying microvascular interactions in hematologic diseases. J Vis Exp. 2012(64).
Machine learning-enabled post-processing clustering algorithm application
iCLOTS may generate large datasets, depending on file inputs. Should users need additional interpretation of these large datasets, our machine learning application mathematically characterizes natural groupings within any number of pooled datasets, proving useful for detecting cell sample subpopulations or healthy/clinical sample differences. Methods to apply k-means clustering are provided.
This machine learning application applies clustering algorithms to any properly-formatted data, including iCLOTS data. Typically, a series of data points (e.g., cells) are represented by multiple metrics (e.g. velocity, size, or fluorescence intensity). Clustering is an unsupervised machine learning technique designed to mathematically characterize natural groupings within datasets (e.g., cell subpopulations from a single dataset or healthy-clinical dichotomies).
The iCLOTS development team suggests the review paper "A guide to machine learning for biologists" (Greener et al., Nature Reviews Molecular Cell Biology, 2022) for a better understanding of machine learning. Please also see this documentation's guidance on reporting computational results.
Machine learning workflow (steps), briefly:
Step 1: load data
The user provides one or more Excel files, each representing a single sample. Excel files must have only one sheet, which should contain only individual data points (e.g. cells) described using the same features (numerical metrics, e.g. velocity, size, or fluorescence intensity). A minimum of two features is required.
File loading is set up such that the user selects a folder containing all relevant Excel files.
Your files may fail to load if they are already open in Excel, as the application may read the temporary files created while a workbook is open. Please close any workbooks you're using before selecting a folder of files.
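For users who prefer to script this step, a minimal sketch of pooling a folder of single-sheet Excel files with pandas is shown below. The folder name is a placeholder, and the temporary-file filter reflects the open-workbook caveat above.

```python
import glob
import os
import pandas as pd

folder = "clustering_inputs"                    # hypothetical folder of .xlsx files
frames = []
for path in glob.glob(os.path.join(folder, "*.xlsx")):
    if os.path.basename(path).startswith("~$"):
        continue                                # skip temporary files from open workbooks
    df = pd.read_excel(path)                    # reads the single sheet of numerical metrics
    df["sample"] = os.path.splitext(os.path.basename(path))[0]   # keep the sample name as a label
    frames.append(df)

pooled = pd.concat(frames, ignore_index=True)   # all samples combined into one "pool"
numeric = pooled.select_dtypes("number")        # candidate features shared across files
print(numeric.corr())                           # pairwise correlation matrix, as in step 2
```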
Step 2: select features
After data is loaded, all datasets are combined into one "pool." Clustering is an unsupervised algorithm: no "labels" such as sample names are considered during clustering. Later outputs do provide the number of sample data points found in each cluster.
iCLOTS detects all numerical columns shared between files. Because iCLOTS does not produce any text metrics that could be considered a feature, the option to use text-based metrics is not included in v0.1.0. In later versions, text metrics will be converted to categorical values. In the meantime, you could change text-based categories to numerical categories on your own in Excel.
A correlation matrix is automatically displayed. A correlation matrix is a visualization of how much each pairwise combination of variables is correlated, or related. Highly related variables (value approaching -1 or 1) may bias results, e.g. considering both area (pix) and area (µm²) gives area undue influence on clustering.
In this step, users have the option to select what metrics they would like to retain for final analysis.
Step 3: select number of clusters to retain
After features are selected and submitted, a scree plot is generated.
A scree plot indicates a suggested optimal number of mathematically significant clusters to retain.
It is presented as a line plot of the sum of squared errors (SSE) of the distance to the closest centroid for all data points for each number of potential clusters (iCLOTS allows up to 12 clusters).
Typically, as the number of clusters increases, the variance, or sum of squares, for each cluster group decreases.
The "elbow" point of the graph represents the best balance between minimizing the number of clusters and minimizing the variance in each cluster.
The user can choose any number of clusters to group data into using the clustering algorithm.
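A minimal sketch of how such a scree plot can be generated with scikit-learn follows; random numbers stand in for the pooled, selected feature matrix, and any preprocessing shown is an assumption rather than a documented iCLOTS detail.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the pooled feature matrix (two features, 300 data points).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))

sse = []
for k in range(1, 13):                                   # up to 12 clusters, as in iCLOTS
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse.append(km.inertia_)                              # sum of squared distances to the closest centroid

plt.plot(range(1, 13), sse, marker="o")
plt.xlabel("Number of clusters")
plt.ylabel("Sum of squared errors (SSE)")
plt.show()                                               # look for the "elbow" point
```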
Step 4: k-means clustering algorithms are applied
After the number of clusters is selected and submitted, k-means clustering algorithms are applied to the pooled datasets.
Several types of clustering algorithms exist, but a k-means algorithm was selected for iCLOTS v0.1.0 as it is understood to be a robust general-purpose approach to discovering natural groupings within high-dimensional data.
The pooled data points are automatically partitioned into clusters that minimize within-cluster differences in the shared metrics.
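A minimal scikit-learn sketch of the clustering and silhouette-score steps is shown below, again with random stand-in data in place of the pooled feature matrix.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-in data: two roughly separated groups of points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(150, 2)), rng.normal(4, 1, size=(150, 2))])

n_clusters = 2                                           # chosen from the scree plot
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
labels = km.fit_predict(X)                               # one cluster label per data point

score = silhouette_score(X, labels)                      # -1 (poor) to 1 (well separated)
print(f"silhouette score: {score:.2f}")
```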
Step 5: review outputs
iCLOTS creates a series of graphs:
A mosaic plot, a specialized stacked bar chart, displays the number of data points from each dataset in each cluster as stacked segments. This is designed to assist the user in visualizing the contribution of each dataset to each cluster.
A pair plot, a pairwise series of scatterplots and histograms, shows each dataset (marker type) and cluster (color).
An excel file with all data points is also created:
This sheet has the original sample name, all numerical metrics used, and a cluster label for each data point.
Descriptive statistics for clusters, cluster label count per dataset, cluster number, and silhouette score are also included.
Silhouette score is a metric with a value from -1 (inappropriate clusters) to 1 (best clustering).
Some tips from the iCLOTS team:
Clustering techniques are well-suited to exploring distinguishing features between known populations and to finding new, previously imperceptible groupings within a single population. However, metrics describing populations of cells typically follow Gaussian distributions which may have significant overlap.
Learn more about the methods forming the basis of our machine learning application:
K-means clustering:
Relevant citation, algorithm: Lloyd, Stuart P. "Least squares quantization in PCM." Information Theory, IEEE Transactions on 28.2 (1982): 129-137.
Relevant citation, assessing goodness of clustering: Rousseeuw, P. J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics 20, 53-65, doi:https://doi.org/10.1016/0377-0427(87)90125-7 (1987).
Clustering via python library scikit-learn:
Relevant citation: Pedregosa, F. et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Documentation/tutorial: https://scikit-learn.org/stable/modules/clustering.html
Suite of video and image editing tools
iCLOTS provides a suite of video and image file editing tools to help users format their data for iCLOTS analysis. Briefly, users select a single file or a folder of .png, .jpg, .tif, and/or .avi files for modification and indicate any necessary parameters; the relevant image processing step is applied, and all edited files are returned in a new directory within the original folder.
Choose a region of interest (ROI)
This application is designed to crop file(s) to a region of interest.
After file(s) upload, a window displaying the file will appear, and a draggable rectangle allows the user to select an area they are interested in analyzing further.
In assays using microfluidics, small defects in microfluidic walls are often detected as cells or patterns of cells. The iCLOTS team suggests cropping all data taken using microfluidic systems to the channel area only.
Crop a video to a specified frame range
This application is designed to shorten a video to a specified start:finish frame range.
Input variables include the start frame number and the end frame number. If multiple files are selected, all will be cropped to the same range.
This script is most useful for removing portions of a video clearly affected by changes in microscopy acquisition settings such as changes in illumination or laser power, or for shortening videos to reduce computational analysis time.
To start and end at roughly a given time, multiply that time (in seconds) by the frames per second (FPS) imaging rate, a microscope acquisition setting you should be able to reference.
Edit the contrast of files
This application edits the contrast of file(s) using a two-step process: multiplication by a constant followed by addition of a constant.
Input variables include alpha, the constant each pixel's intensity value is multiplied by, and beta, the constant added to (or subtracted from) each pixel's intensity value.
Alpha is oftentimes called gain, and should be >1. This value controls contrast. High values of alpha cause the relatively bright pixels to become even brighter. Any value of alpha leaves black pixels as black (value 0).
Beta is oftentimes called bias. A value <0 decreases the overall brightness of the file, and a value >0 increases the overall brightness of the image.
Editing contrast can be useful in applications detecting movement. Features of interest, like cells, are more easily distinguished from background, like channels.
Take care in interpreting pixel intensity values after editing contrast - it may lead to bias in fluorescence-based results.
It can be hard to detect a strong signal from fluorescently stained, moving cells. If you adjust the contrast of those videos to better detect the cells, return to the original fl. int. values by performing the inverse calculation:
Fl. int. (original) = (Fl. int. (adjusted) - beta) / alpha
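A minimal OpenCV sketch of the alpha/beta edit and the inverse calculation above follows. The alpha, beta, and file name values are illustrative, and note that convertScaleAbs clips results to the 0-255 range, so the inverse is only exact for unclipped pixels.

```python
import cv2
import numpy as np

alpha, beta = 1.5, -20                         # assumed gain and bias values
frame = cv2.imread("fluorescent_frame.png")    # hypothetical image

# new value = alpha * original value + beta (clipped to the 0-255 range)
adjusted = cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)

# Approximate the original fluorescence intensities from the adjusted image.
original_estimate = (adjusted.astype(np.float64) - beta) / alpha
```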
Convert an image sequence to a video
May be useful depending on the timecourse outputs of your microscopy software. Single cell tracking, specialized deformability, and velocity applications require an .avi video input.
One single video is made from all images within the selected file folder.
All images must have the same dimensions.
Images must be named in the proper alphabetical/numerical order.
If image names contain numbers, use preceding zeros to order properly.
i.e. 01, 02, ... 10 vs. 1, 2, ... 10
The video uses the file folder name as a filename - please avoid spaces or punctuation within this name.
Inputs include a frames per second rate, the rate at which you would like your created video to play. This parameter does not affect later analysis.
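One common way to assemble such a video with OpenCV is sketched below; the folder name, codec, and FPS value are assumptions, and all images must share the same dimensions.

```python
import cv2
import glob

paths = sorted(glob.glob("image_folder/*.png"))        # hypothetical folder, sorted by name
first = cv2.imread(paths[0])
height, width = first.shape[:2]

fps = 25                                               # playback rate only; does not affect later analysis
writer = cv2.VideoWriter("image_folder.avi",
                         cv2.VideoWriter_fourcc(*"XVID"), fps, (width, height))
for path in paths:
    writer.write(cv2.imread(path))                     # frames are written in sorted order
writer.release()
```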
Normalize range of pixel intensity values
All images are scaled such that the lowest pixel value is 0 (black) and the highest pixel value is 255 (white). This can be useful for standardizing images taken during different experiments, etc.
This application is for use with image files only. All channels (red, green, blue) are normalized to the same range. Later iterations of iCLOTS will include options for normalizing all frames of a video file and for normalizing certain color channels only.
Use caution normalizing images to the same range. Normalizing can remove bias that comes from different laser power, gain, etc. settings, but can also introduce bias. It's almost always ideal to compare images taken during the same experiment.
Ideally the initial image pixel values are within a (0, 254) range. "Maxed out" pixel values (255, the highest possible value) cause loss of information - a range of intensities may have existed beyond the 255 value.
This application uses the following method of normalization:
new value = (original value - minimum value of the layer) / (maximum value of the layer - minimum value of the layer) * 255
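A minimal NumPy sketch of this min-max normalization is shown below, applied per color layer following the formula above; the released tool may handle channels slightly differently.

```python
import numpy as np
from skimage import io

image = io.imread("experiment_image.png").astype(np.float64)   # hypothetical image (H x W x 3)

normalized = np.zeros_like(image)
for c in range(image.shape[2]):
    layer = image[:, :, c]
    lo, hi = layer.min(), layer.max()
    if hi > lo:                                # skip flat layers to avoid dividing by zero
        normalized[:, :, c] = (layer - lo) / (hi - lo) * 255

normalized = normalized.astype(np.uint8)       # 0 (black) to 255 (white)
```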
Resize file(s)
This application resizes image(s) or video(s) using a resize factor, a constant that a frame's dimensions are multiplied by during the resize process.
For input parameter resize factor, <1 indicates reducing resolution, >1 indicates "increasing" resolution.
Decreasing resolution can help speed computational analysis of large files.
In applications quantifying movement of cells, sometimes lower resolution is sufficient.
In applications quantifying changes in morphology or size, maintain the highest resolution possible.
Artificially increasing resolution in post-processing oftentimes isn't useful. It is not possible to add information that the microscopy did not provide. It may lead to bias in morphological results by exaggerating changes in dimension.
Rotate file(s)
This application rotates image(s) or video(s) using a specified angle input parameter. All files are rotated to the same degree.
Angle value units are degrees. An angle value >0 rotates the file counterclockwise and an angle value <0 rotates clockwise.
This application is specially designed to maintain the original aspect ratio of the file such that the same µm-to-pixel ratio can be used during analysis.
Rotating videos or images such that microfluidic channels are horizontal is crucial for applications that rely on left-right indexing, such as microchannel occlusion or velocity profiles. It is suggested for one-directional movement quantification, such as the deformability application.
Rotating images has no effect on morphology measurements.
Convert a video to an image sequence
Application converts a single video to a sequence of images. This script is useful for .avi files that must be presented to iCLOTS as a time series, like the accumulation/occlusion applications.
Images are returned named with the video name plus a frame number.
Up to five preceding zeros are used to number the frames sequentially.
This will not work on files >99,999 frames, but iCLOTS cannot realistically handle that many images anyways.
Images are saved as .png files to avoid unnecessary compression.
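A minimal OpenCV sketch of exporting a video as a zero-padded .png sequence follows; the file name and the exact padding width are assumptions for illustration.

```python
import cv2

cap = cv2.VideoCapture("accumulation_video.avi")   # hypothetical input video
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Zero-padded numbering keeps frames in alphabetical/numerical order.
    cv2.imwrite(f"accumulation_video_{index:05d}.png", frame)
    index += 1
cap.release()
```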
Tips for reporting computational results
Journals have increasingly high standards for reporting computational results of any kind, including image analysis and machine learning analysis.
Reporting image analysis results
When reporting image analysis results, we suggest performing three specialized analyses:
Parameter sensitivity analysis
Repeat image processing analysis with a wide range of all parameters. Ideally your conclusions should not significantly change with small changes in parameters (your results are robust). You may want to try a negative control (parameter values where you would expect to see no detected events/cells, like a maximum diameter of 0 pixels) and a positive control (like a minimum threshold of 0 a.u.).
Reproducibility analysis
Repeat image processing analysis on several subsets of data from which you should be able to draw the same conclusion (e.g., in a video where you're quantifying suspension velocity, three 10-second clips of the same suspension in the same experiment).
Comparison to gold standard analysis (typically a manual analysis)
This may not be possible for all analyses, but you may want to manually characterize metrics like number and area of cells from a small subset of your data to make sure iCLOTS is providing you with reasonable results. ImageJ is a good basic-use image visualization and measuring tool.
Reporting machine learning results
iCLOTS v1.0b1 provides methods to implement k-means clustering.
Clustering seeks to group data points into a specified number (k) of clusters such that data points in the same group are more similar (in some sense) to each other than to those in other groups.
The metric silhouette score, a mean measure of how similar objects are to their own cluster compared to other clusters, is returned to assess consistency of clustering. This score ranges from -1 to 1, where a high value indicates clusters are assigned well. Low or negative values may indicate too many/too few clusters.
k-means clustering is understood to be a strong, general-purpose approach to clustering, but it may not be the best algorithm for your specific set of data or the hypothesis you're trying to test.
While iCLOTS retains information about what sample the data points came from, this label is not considered during the clustering process (the algorithm is "unsupervised", it doesn't consider if the data point was from a healthy control or a clinical sample).
Data points are returned with a cluster label/number, but this label does not indicate what the cluster may represent (e.g. healthy, clinical, a certain subpopulation or type of cell, etc.). The iCLOTS manuscript compared these cluster labels with the known sample labels (healthy, clinical) using a Chi-squared test of expected versus observed frequencies.
iCLOTS returns the count of each cluster label within each sample and a mosaic plot to facilitate this type of analysis for users.
Always consider the correlation matrix provided - numerical data inputs with too strong a correlation value (either positive or inverse) may bias results.
Always consider the ideal number of clusters as indicated by the scree plot.
This is not an exhaustive resource for interpreting and reporting machine learning results. Your journal likely has more specific guidelines. For more information, please also see:
An excellent, accessibly written guide to machine learning for biologists and life scientists:
Greener, J.G., Kandathil, S.M., Moffat, L. et al. A guide to machine learning for biologists. Nat Rev Mol Cell Biol 23, 40–55 (2022). https://doi.org/10.1038/s41580-021-00407-0
More details on clustering algorithms specifically: https://en.wikipedia.org/wiki/Cluster_analysis
Software information
Software availability and source code
iCLOTS source code is available at github.com/iCLOTS. For users with computational expertise, standalone methods are also available at github.com/LamLabEmory.
iCLOTS is built for 64-bit operating systems and requires at least 8 GB of RAM, with more suggested for larger datasets.
All code is licensed under the Apache License 2.0, a standard open-source license. github.com/iCLOTS also includes a code of conduct and contributing information, files standard to open-source projects.
Version 0.1.1 is being released as a beta version, a version of a piece of software that is made available for testing, typically by a group of users outside the development team, before a formal release. As such, your feedback is especially valuable. As iCLOTS continues to grow, all versions of iCLOTS will be maintained at this location.
While we have extensively tested iCLOTS on several machines, as with any software, operational errors ("bugs") may still be present. Users can contact us for prompt resolution by (1) filling out the contact form at iCLOTS.org/contact, (2) emailing the development team directly at lamlabcomputational@gmail.com, or (3) raising an issue in GitHub, which is particularly useful for users with computational experience. The development team is dedicated to resolving any issues promptly.
iCLOTS is a Python-based software built upon many successful open source packages, including:
Numpy (https://numpy.org/doc/stable/index.html)
Pandas (https://pandas.pydata.org/)
OpenCV (https://opencv.org/)
scikit-image (https://scikit-image.org/)
scikit-learn (https://scikit-learn.org/stable/)
Trackpy (http://soft-matter.github.io/trackpy/v0.5.0/)
Matplotlib (https://matplotlib.org/3.1.0/index.html)
Seaborn (https://seaborn.pydata.org/index.html)
pyinstaller (https://pyinstaller.org/en/stable/)
Data availability
A limited set of test data for every application is available at github.com/LamLabEmory and at iCLOTS.org/software (located under software files). This set of test data is designed to demonstrate software capabilities using real-world data.
All data used within our pending manuscript is also available without restriction upon request to corresponding author Wilbur Lam, MD, PhD (email: wilbur.lam@emory.edu).
Detailed experimental protocols and microfluidic device mask files are also available upon request.
The iCLOTS development team thanks the authors of the open-source Python packages listed above. Descriptions of each application reference which Python libraries were used to implement image processing and machine learning algorithms. Please consider citing these resources as well when publishing iCLOTS-generated data.
The iCLOTS development team would also like to acknowledge several open-source software products that served as an inspiration and guide during the creation of this project. Depending on your analysis goals, you might find these pieces of software more suitable for your own analysis:
ilastik, useful for advanced image segmentation (https://www.ilastik.org/):
Berg S, Kutra D, Kroeger T, et al. ilastik: interactive machine learning for (bio)image analysis. Nature Methods. 2019;16(12):1226-1232.
CellProfiler, useful for advanced cell morphology metrics (https://cellprofiler.org/):
Lamprecht MR, Sabatini DM, Carpenter AE. CellProfiler: free, versatile software for automated biological image analysis. Biotechniques. 2007;42(1):71-75.
Stirling DR, Swain-Bowden MJ, Lucas AM, Carpenter AE, Cimini BA, Goodman A. CellProfiler 4: improvements in speed, utility and usability. BMC Bioinformatics. 2021;22(1):433.
ImageJ and FIJI, strong multi-purpose image analysis toolkits (https://imagej.nih.gov/ij/):
Schneider CA, Rasband WS, Eliceiri KW. NIH Image to ImageJ: 25 years of image analysis. Nature Methods. 2012;9(7):671-675.