Hi all,
I am new to QuPath. I have 2 consecutive slides (cut next to each other during tissue sectioning) that I need to overlay, and I ran into an alignment issue. I have no scripting experience. I found 2 scripts from Mike_Nelson, one for calculating the transform and another for applying it.
I have these issues in my case:
1. These 2 scripts were written for older QuPath versions, but I am using QuPath v0.5.0; should I change anything?
2. I saw 2 different registration types, “RIGID” and “AFFINE”; which is best for a TMA core?
3. How can I judge the quality of my registration? I have set use_single_channel to “0” for now.
4. The main issue for now: after running the first (calculation) script, I tried to rotate the annotations (cell segmentation info) of the second image using the same matrix as the “apply transform” script. Then I got the errors below.
INFO: Image data set to ImageData: Fluorescence, spleenCD3_Opal650.tif
INFO: Succesful export: 90 objects were exported to I:\Qupath-tiles\overlays-test\overlay\spleenCD3_Opal650.geojson
INFO: Updating server metadata for 8 channels
INFO: Image data set to ImageData: Fluorescence, spleenCD3_Opal540Affine.ome.tif
ERROR: QuPath exception: class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap')
java.lang.ClassCastException: class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap')
at qupath.lib.measurements.MeasurementList.putAll(MeasurementList.java:234)
at qupath.lib.io.QuPathTypeAdapters$MeasurementListTypeAdapter.read(QuPathTypeAdapters.java:679)
at qupath.lib.io.QuPathTypeAdapters$MeasurementListTypeAdapter.read(QuPathTypeAdapters.java:634)
at com.google.gson.TypeAdapter.fromJsonTree(TypeAdapter.java:296)
at qupath.lib.io.QuPathTypeAdapters$PathObjectTypeAdapter.parseObject(QuPathTypeAdapters.java:535)
at qupath.lib.io.QuPathTypeAdapters$PathObjectTypeAdapter.read(QuPathTypeAdapters.java:464)
at qupath.lib.io.QuPathTypeAdapters$PathObjectTypeAdapter.read(QuPathTypeAdapters.java:280)
at com.google.gson.TypeAdapter.fromJsonTree(TypeAdapter.java:296)
at qupath.lib.io.QuPathTypeAdapters$PathObjectCollectionTypeAdapter.read(QuPathTypeAdapters.java:253)
at qupath.lib.io.QuPathTypeAdapters$PathObjectCollectionTypeAdapter.read(QuPathTypeAdapters.java:211)
at com.google.gson.Gson.fromJson(Gson.java:1214)
at com.google.gson.Gson.fromJson(Gson.java:1319)
at com.google.gson.Gson.fromJson(Gson.java:1261)
at qupath.lib.io.PathIO.addPathObjects(PathIO.java:865)
at qupath.lib.io.PathIO.readObjectsFromGeoJSON(PathIO.java:830)
at qupath.lib.io.PathIO.readObjects(PathIO.java:793)
at qupath.lib.io.PathIO.readObjects(PathIO.java:738)
at qupath.lib.gui.commands.InteractiveObjectImporter.promptToImportObjectsFromFile(InteractiveObjectImporter.java:181)
at qupath.lib.gui.commands.Commands.runObjectImport(Commands.java:1807)
at qupath.lib.gui.Menus$FileMenuManager.lambda$new$18(Menus.java:437)
at qupath.lib.gui.QuPathGUI.lambda$createImageDataAction$0(QuPathGUI.java:387)
at org.controlsfx.control.action.Action.handle(Action.java:423)
at org.controlsfx.control.action.Action.handle(Action.java:64)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:86)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:234)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:49)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.control.MenuItem.fire(MenuItem.java:459)
at com.sun.javafx.scene.control.ContextMenuContent$MenuItemContainer.doSelect(ContextMenuContent.java:1385)
at com.sun.javafx.scene.control.ContextMenuContent$MenuItemContainer.lambda$createChildren$12(ContextMenuContent.java:1338)
at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:247)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:234)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Scene$MouseHandler.process(Scene.java:3894)
at javafx.scene.Scene.processMouseEvent(Scene.java:1887)
at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2620)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:411)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:301)
at java.base/java.security.AccessController.doPrivileged(Unknown Source)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleMouseEvent$2(GlassViewEventHandler.java:450)
at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:424)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:449)
at com.sun.glass.ui.View.handleMouseEvent(View.java:551)
at com.sun.glass.ui.View.notifyMouse(View.java:937)
at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
at com.sun.glass.ui.win.WinApplication.lambda$runLoop$3(WinApplication.java:184)
at java.base/java.lang.Thread.run(Unknown Source)
Here is the matrix I got from the “apply transform” script, and this is the line I copied for rotating the annotations. It does not look like the matrix we usually get from affine alignment:
mapentry: spleenCD3_Opal650.tif=AffineTransform[[1.00668018050294, 0.097886238193699, -296.1002922434506], [-0.088266478464193, 1.000952756406568, 299.4115770646801]]
Here is the script I wrote myself (to rotate the annotations of the second image):
import qupath.lib.scripting.QP
import qupath.lib.awt.common.AffineTransforms

QP.transformSelectedObjects(
        AffineTransforms.fromRows(
                1.00668018050294, 0.097886238193699, -296.1002922434506,
                -0.088266478464193, 1.000952756406568, 299.4115770646801
        ).createInverse())
Here is the first script from Mike_Nelson, which I used to calculate the matrix for rotating:
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.objects.classes.PathClassFactory
import qupath.lib.roi.RoiTools
import qupath.lib.roi.interfaces.ROI
import java.awt.geom.AffineTransform
import javafx.scene.transform.Affine
import qupath.lib.images.servers.ImageServer
import java.awt.Graphics2D
import java.awt.Transparency
import java.awt.color.ColorSpace
import java.awt.image.BufferedImage
import org.bytedeco.opencv.global.opencv_core;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.TermCriteria;
import org.bytedeco.opencv.global.opencv_video;
import org.bytedeco.javacpp.indexer.FloatIndexer;
import org.bytedeco.javacpp.indexer.Indexer;
import qupath.lib.gui.dialogs.Dialogs;
import qupath.lib.images.servers.PixelCalibration;
import qupath.lib.regions.RegionRequest;
import qupath.opencv.tools.OpenCVTools
import java.awt.image.ComponentColorModel
import java.awt.image.DataBuffer
import static qupath.lib.gui.scripting.QPEx.*;
// Variables to set
//////////////////////////////////
String registrationType="AFFINE" //Specify as "RIGID" or "AFFINE"
String refStain = "Opal540" //stain to use as reference image (all images will be aligned to this)
String wsiExt = ".tif" //image name extension
//def align_specific=['N19-1107 30Gy M5']//If auto-align on intensity fails, put the image(s) that it fails on here
def AutoAlignPixelSize = 70 //downsample factor for calculating transform (tform). Does not affect scaling of output image
align_specific=null //When referencing an image, just include the slide name (stuff before _)
skip_image=0 // If 1, skips the images defined by 'align_specific'. If 0, skips all but image(s) in 'align_specific'
//Experimental features
use_single_channel=0 // Use a single channel from each image for alignment (set to the channel number to use). Set to 0 to use all channels.
iterations=50 // Number of times to iteratively calculate the transformation
mov_rotation=0 // rotation to apply to ALL moving images before calculating alignment. Strongly recommend ensuring proper orientation before loading into QuPath.
decrement_factor=1.1 // if iterations>1, by what factor to decrease AutoAlignPixelSize (increasing resolution of alignment). Set to 1 to leave AutoAlignPixelSize unchanged across iterations.
/////////////////////////////////
//Lim's code for file name matching
// Get list of all images in project
def projectImageList = getProject().getImageList()
// Create empty lists
def imageNameList = []
def slideIDList = []
def stainList = []
def missingList = []
// Split image file names to desired variables and add to previously created lists
for (entry in projectImageList) {
def name = entry.getImageName()
def (imageName, imageExt) = name.split('\\.')
def (slideID, stain) = imageName.split('_')
imageNameList << imageName
slideIDList << slideID
stainList << stain
}
// Remove duplicate entries from lists
slideIDList = slideIDList.unique()
stainList = stainList.unique()
print (slideIDList)
print (align_specific)
// Remove specific entries if causing alignment to not converge
if (align_specific != null)
if (skip_image == 1)
slideIDList.removeAll(align_specific)
else
slideIDList.retainAll(align_specific)
if (stainList.size() == 1) {
print 'Only one stain detected. Target slides may not be loaded.'
return
}
// Create Affine folder to put transformation matrix files
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)
// Process all combinations of slide IDs, tissue blocks, and stains based on reference stain slide onto target slides
for (slide in slideIDList) {
for (stain in stainList) {
if (stain != refStain) {
refFileName = slide + "_" + refStain + wsiExt
targetFileName = slide + "_" + stain + wsiExt
path = buildFilePath(PROJECT_BASE_DIR, 'Affine', targetFileName)
def refImage = projectImageList.find {it.getImageName() == refFileName}
def targetImage = projectImageList.find {it.getImageName() == targetFileName}
if (refImage == null) {
print 'Reference slide ' + refFileName + ' missing!'
missingList << refFileName
continue
}
if (targetImage == null) {
print 'Target slide ' + targetFileName + ' missing!'
missingList << targetFileName
continue
}
println("Aligning reference " + refFileName + " to target " + targetFileName)
//McArdle's code for image alignment
ImageServer<BufferedImage> serverBase = refImage.readImageData().getServer()
ImageServer<BufferedImage> serverOverlay = targetImage.readImageData().getServer()
def static_img_name = refFileName
def moving_img_name = targetFileName
def project_name = getProject()
def entry_name_static = project_name.getImageList().find { it.getImageName() == static_img_name }
def entry_name_moving = project_name.getImageList().find { it.getImageName() == moving_img_name }
def serverBaseMark = entry_name_static.readImageData()
def serverOverlayMark = entry_name_moving.readImageData()
Affine affine=[]
mov_width=serverOverlayMark.getServer().getWidth()
mov_height=serverOverlayMark.getServer().getHeight()
affine.prependRotation(mov_rotation,mov_width/2,mov_height/2)
for(int i = 0;i<iterations;i++) {
//Perform the alignment. If no annotations present, use intensity. If annotations present, use area
print("autoalignpixelsize:" + AutoAlignPixelSize)
if (serverBaseMark.hierarchy.nObjects() > 0 || serverOverlayMark.hierarchy.nObjects() > 0)
autoAlignPrep(AutoAlignPixelSize, "AREA", serverBaseMark, serverOverlayMark, affine, registrationType, use_single_channel)
autoAlignPrep(AutoAlignPixelSize, "notAREA", serverBaseMark, serverOverlayMark, affine, registrationType, use_single_channel)
AutoAlignPixelSize/=decrement_factor
if (AutoAlignPixelSize<1){
AutoAlignPixelSize=1
}
}
def matrix = []
matrix << affine.getMxx()
matrix << affine.getMxy()
matrix << affine.getTx()
matrix << affine.getMyx()
matrix << affine.getMyy()
matrix << affine.getTy()
new File(path).withObjectOutputStream {
it.writeObject(matrix)
}
}
}
}
if (missingList.isEmpty() == true) {
print 'Done!'
} else {
missingList = missingList.unique()
print 'Done! Missing slides: ' + missingList
}
/*Subfunctions taken from here:
https://github.com/qupath/qupath/blob/a1465014c458d510336993802efb08f440b50cc1/qupath-experimental/src/main/java/qupath/lib/gui/align/ImageAlignmentPane.java
*/
//creates an image server using the actual images (for intensity-based alignment) or a labeled image server (for annotation-based).
double autoAlignPrep(double requestedPixelSizeMicrons, String alignmentMethod, ImageData<BufferedImage> imageDataBase, ImageData<BufferedImage> imageDataSelected, Affine affine,String registrationType, int use_single_channel) throws IOException {
ImageServer<BufferedImage> serverBase, serverSelected;
if (alignmentMethod == 'AREA') {
logger.debug("Image alignment using area annotations");
Map<PathClass, Integer> labels = new LinkedHashMap<>();
int label = 1;
labels.put(PathClassFactory.getPathClassUnclassified(), label++);
for (def annotation : imageDataBase.getHierarchy().getAnnotationObjects()) {
def pathClass = annotation.getPathClass();
if (pathClass != null && !labels.containsKey(pathClass))
labels.put(pathClass, label++);
}
for (def annotation : imageDataSelected.getHierarchy().getAnnotationObjects()) {
def pathClass = annotation.getPathClass();
if (pathClass != null && !labels.containsKey(pathClass))
labels.put(pathClass, label++);
}
double downsampleBase = requestedPixelSizeMicrons / imageDataBase.getServer().getPixelCalibration().getAveragedPixelSize().doubleValue();
serverBase = new LabeledImageServer.Builder(imageDataBase)
.backgroundLabel(0)
.addLabels(labels)
.downsample(downsampleBase)
.build();
double downsampleSelected = requestedPixelSizeMicrons / imageDataSelected.getServer().getPixelCalibration().getAveragedPixelSize().doubleValue();
serverSelected = new LabeledImageServer.Builder(imageDataSelected)
.backgroundLabel(0)
.addLabels(labels)
.downsample(downsampleSelected)
.build();
//disable single channel alignment when working with Area annotations, unsure what bugs it can cause
use_single_channel=0
} else {
// Default - just use intensities
logger.debug("Image alignment using intensities");
serverBase = imageDataBase.getServer();
serverSelected = imageDataSelected.getServer();
}
scaleFactor=autoAlign(serverBase, serverSelected, registrationType, affine, requestedPixelSizeMicrons,use_single_channel);
return scaleFactor
}
double autoAlign(ImageServer<BufferedImage> serverBase, ImageServer<BufferedImage> serverOverlay, String regionstrationType, Affine affine, double requestedPixelSizeMicrons, use_single_channel) {
PixelCalibration calBase = serverBase.getPixelCalibration()
double pixelSizeBase = calBase.getAveragedPixelSizeMicrons()
double downsampleBase = 1
if (!Double.isFinite(pixelSizeBase)) {
// while (serverBase.getWidth() / downsampleBase > 2000)
// downsampleBase++;
// logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsampleBase)
pixelSizeBase=50
downsampleBase = requestedPixelSizeMicrons / pixelSizeBase
} else {
downsampleBase = requestedPixelSizeMicrons / pixelSizeBase
}
PixelCalibration calOverlay = serverOverlay.getPixelCalibration()
double pixelSizeOverlay = calOverlay.getAveragedPixelSizeMicrons()
double downsampleOverlay = 1
if (!Double.isFinite(pixelSizeOverlay)) {
// while (serverBase.getWidth() / downsampleOverlay > 2000)
// downsampleOverlay++;
// logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsampleOverlay)
pixelSizeOverlay=50
downsampleOverlay = requestedPixelSizeMicrons / pixelSizeOverlay
} else {
downsampleOverlay = requestedPixelSizeMicrons / pixelSizeOverlay
}
double scaleFactor=downsampleBase/downsampleOverlay
BufferedImage imgBase = serverBase.readRegion(RegionRequest.createInstance(serverBase.getPath(), downsampleBase, 0, 0, serverBase.getWidth(), serverBase.getHeight()))
BufferedImage imgOverlay = serverOverlay.readRegion(RegionRequest.createInstance(serverOverlay.getPath(), downsampleOverlay, 0, 0, serverOverlay.getWidth(), serverOverlay.getHeight()))
//Determine whether to calculate intensity-based alignment using all channels or a single channel
Mat matBase
Mat matOverlay
if (use_single_channel==0) {
//print 'using all channels'
imgBase = ensureGrayScale(imgBase)
imgOverlay = ensureGrayScale(imgOverlay)
matBase = OpenCVTools.imageToMat(imgBase)
matOverlay = OpenCVTools.imageToMat(imgOverlay)
} else {
matBase = OpenCVTools.imageToMat(imgBase)
matOverlay = OpenCVTools.imageToMat(imgOverlay)
int channel = use_single_channel-1
//print ('using channel ' + channel)
matBase = OpenCVTools.splitChannels(matBase)[channel]
matOverlay = OpenCVTools.splitChannels(matOverlay)[channel]
//use this to preview how the channel looks
//OpenCVTools.matToImagePlus('Channel:' + channel.toString(), matBase).show()
}
/////pete code block/////
//// New bit
// int channel = 2
// matBase = OpenCVTools.splitChannels(matBase)[channel]
// matOverlay = OpenCVTools.splitChannels(matOverlay)[channel]
// ///end pete code block///
Mat matTransform = Mat.eye(2, 3, opencv_core.CV_32F).asMat()
// Initialize using existing transform
// affine.setToTransform(mxx, mxy, tx, myx, myy, ty)
try {
FloatIndexer indexer = matTransform.createIndexer()
indexer.put(0, 0, (float)affine.getMxx())
indexer.put(0, 1, (float)affine.getMxy())
indexer.put(0, 2, (float)(affine.getTx() / downsampleBase))
indexer.put(1, 0, (float)affine.getMyx())
indexer.put(1, 1, (float)affine.getMyy())
indexer.put(1, 2, (float)(affine.getTy() / downsampleBase))
// System.err.println(indexer)
} catch (Exception e) {
logger.error("Error closing indexer", e)
TermCriteria termCrit = new TermCriteria(TermCriteria.COUNT, 100, 0.0001)
try {
int motion
switch (regionstrationType) {
case "AFFINE":
motion = opencv_video.MOTION_AFFINE
break
case "RIGID":
motion = opencv_video.MOTION_EUCLIDEAN
break
default:
logger.warn("Unknown registraton type {} - will use {}", regionstrationType, RegistrationType.AFFINE)
motion = opencv_video.MOTION_AFFINE
break
}
double result = opencv_video.findTransformECC(matBase, matOverlay, matTransform, motion, termCrit, null)
logger.info("Transformation result: {}", result)
} catch (Exception e) {
Dialogs.showErrorNotification("Estimate transform", "Unable to estimated transform - result did not converge")
logger.error("Unable to estimate transform", e)
return
}
// To use the following function, images need to be the same size
// def matTransform = opencv_video.estimateRigidTransform(matBase, matOverlay, false);
Indexer indexer = matTransform.createIndexer()
affine.setToTransform(
indexer.getDouble(0, 0),
indexer.getDouble(0, 1),
indexer.getDouble(0, 2) * downsampleBase,
indexer.getDouble(1, 0),
indexer.getDouble(1, 1),
indexer.getDouble(1, 2) * downsampleBase
)
indexer.release()
matBase.release()
matOverlay.release()
matTransform.release()
return scaleFactor
}
//to gather detection objects instead of annotation, change line ~250 to def pathObjects = otherHierarchy.getDetectionObjects()
def GatherObjects(boolean deleteExisting, boolean createInverse, File f){
f.withObjectInputStream {
matrix = it.readObject()
// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName()+".aff" == f.getName()}
if (entry == null) {
print 'Could not find image with name ' + f.getName()
return
}
def otherHierarchy = entry.readHierarchy()
def pathObjects = otherHierarchy.getDetectionObjects() //OR getAnnotationObjects()
// Define the transformation matrix
def transform = new AffineTransform(
matrix[0], matrix[3], matrix[1],
matrix[4], matrix[2], matrix[5]
)
if (createInverse)
transform = transform.createInverse()
if (deleteExisting)
clearAllObjects()
def newObjects = []
for (pathObject in pathObjects) {
newObjects << transformObject(pathObject, transform)
}
addObjects(newObjects)
}
}
//other subfunctions
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
// Create a new object with the converted ROI
def roi = pathObject.getROI()
def roi2 = transformROI(roi, transform)
def newObject = null
if (pathObject instanceof PathCellObject) {
def nucleusROI = pathObject.getNucleusROI()
if (nucleusROI == null)
newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
else
newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
} else if (pathObject instanceof PathTileObject) {
newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
} else if (pathObject instanceof PathDetectionObject) {
newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
newObject.setName(pathObject.getName())
} else {
newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
newObject.setName(pathObject.getName())
}
// Handle child objects
if (pathObject.hasChildren()) {
newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
}
return newObject
}
ROI transformROI(ROI roi, AffineTransform transform) {
def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
shape2 = transform.createTransformedShape(shape)
return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5)
}
static BufferedImage ensureGrayScale(BufferedImage img) {
if (img.getType() == BufferedImage.TYPE_BYTE_GRAY)
return img
if (img.getType() == BufferedImage.TYPE_BYTE_INDEXED) {
ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY)
def colorModel = new ComponentColorModel(cs, 8 as int[], false, true,
Transparency.OPAQUE,
DataBuffer.TYPE_BYTE)
return new BufferedImage(colorModel, img.getRaster(), false, null)
}
BufferedImage imgGray = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_GRAY)
Graphics2D g2d = imgGray.createGraphics()
g2d.drawImage(img, 0, 0, null)
g2d.dispose()
return imgGray
}
This one, also from Mike_Nelson, is for applying the transform:
import javafx.application.Platform
import javafx.scene.transform.Affine
import qupath.lib.images.ImageData
import qupath.lib.images.servers.ImageChannel
import qupath.lib.images.servers.ImageServer
import qupath.lib.images.servers.ImageServers
import java.awt.image.BufferedImage
import java.util.stream.Collectors
import qupath.lib.images.servers.TransformedServerBuilder
import java.awt.geom.AffineTransform
import static qupath.lib.gui.scripting.QPEx.*
def currentImageName = getProjectEntry().getImageName()
// Variables to set
//////////////////////////////////////////////////////////////
def deleteExisting = false // SET ME! Delete existing objects
def createInverse = true // SET ME! Change this if things end up in the wrong place
def performDeconvolution = false // If brightfield image, separate channels into individual stains (remember to set them in original image)
String refStain = "Opal540" // Specify reference stain, should be same as in 'Calculate-Transforms.groovy'
// Define an output path where the merged file should be written
// Recommended to use extension .ome.tif (required for a pyramidal image)
// If null, the image will be opened in a viewer
String pathOutput = buildFilePath(PROJECT_BASE_DIR, currentImageName + '.ome.tif')
double outputDownsample = 1 // Choose how much to downsample the output (can be *very* slow to export large images with downsample 1!)
//////////////////////////////////////////////////////////////
// Affine folder path
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
// Get list of all images in project
def projectImageList = getProject().getImageList()
def list_of_moving_image_names=[]
def list_of_transforms=[]
def list_of_reference_image_names=[]
// Read and obtain filenames from Affine folder
new File(path).eachFile{ f->
f.withObjectInputStream {
matrix = it.readObject()
def targetFileName = f.getName()
list_of_moving_image_names << targetFileName
def (targetImageName, imageExt) = targetFileName.split('\\.')
def (slideID, targetStain) = targetImageName.split('_')
def targetImage = projectImageList.find {it.getImageName() == targetFileName}
if (targetImage == null) {
print 'Could not find image with name ' + f.getName()
return
}
def targetImageData = targetImage.readImageData()
def targetHierarchy = targetImageData.getHierarchy()
refFileName = slideID + "_" + refStain + "." + imageExt
list_of_reference_image_names << refFileName
def refImage = projectImageList.find {it.getImageName() == refFileName}
def refImageData = refImage.readImageData()
def refHierarchy = refImageData.getHierarchy()
def pathObjects = refHierarchy.getAnnotationObjects()
// Define the transformation matrix
def transform = new AffineTransform(
matrix[0], matrix[3], matrix[1],
matrix[4], matrix[2], matrix[5]
)
if (createInverse)
transform = transform.createInverse()
if (deleteExisting)
targetHierarchy.clearAll()
list_of_transforms << transform
// def newObjects = []
// for (pathObject in pathObjects) {
// newObjects << transformObject(pathObject, transform)
// }
//targetHierarchy.addPathObjects(newObjects)
//targetImage.saveImageData(targetImageData)
}
}
list_of_reference_image_names=list_of_reference_image_names.unique()
//create linkedhashmap from list of image names and corresponding transforms
all_moving_file_map=[list_of_moving_image_names,list_of_transforms].transpose().collectEntries{[it[0],it[1]]}
//get currentImageName. NOTE, ONLY RUN SCRIPT ON REFERENCE IMAGES.
print("Current image name: " + currentImageName);
if (!currentImageName.contains(refStain))
//print 'WARNING: non-reference image name detected. Only run script on reference images'
throw new Exception("WARNING: non-reference image name detected: " + currentImageName + ". Only run script on reference images")
currentRefSlideName=currentImageName.split('_')
currentRefSlideName=currentRefSlideName[0]
print 'Processing: ' + currentRefSlideName
//Only keep entries that pertain to transforms relevant to images sharing the same SlideID and exclude any that contain
// refStain (there shouldn't be any with refStain generated as it's created below as an identity matrix, however running
// calculate-transforms.groovy with different refStains set can cause them to be generated, and override the identity matrix set below)
//print 'all_moving_file_map: ' + all_moving_file_map
filteredMap= all_moving_file_map.findAll {it.key.contains(currentRefSlideName) && !it.key.contains(refStain)}
//print 'filteredMap' + filteredMap
def reference_transform_map = [
(currentImageName) : new AffineTransform()
]
transforms=reference_transform_map + filteredMap
// Loop through the transforms to create a server that merges these
def project = getProject()
def servers = []
def channels = []
int c = 0
for (def mapEntry : transforms.entrySet()) {
print 'mapentry: ' + mapEntry
// Find the next image & transform
def name = mapEntry.getKey()
print(name)
def transform = mapEntry.getValue()
if (transform == null)
transform = new AffineTransform()
def entry = project.getImageList().find {it.getImageName() == name}
// Read the image & check if it has stains (for deconvolution)
def imageData = entry.readImageData()
def currentServer = imageData.getServer()
def stains = imageData.getColorDeconvolutionStains()
print(stains)
// Nothing more to do if we have the identity trainform & no stains
if (transform.isIdentity() && stains == null) {
channels.addAll(updateChannelNames(name, currentServer.getMetadata().getChannels()))
servers << currentServer
continue
} else {
// Create a server to apply transforms
def builder = new TransformedServerBuilder(currentServer)
if (!transform.isIdentity())
builder.transform(transform)
// If we have stains, deconvolve them
if (performDeconvolution==false)
stains=null // Mark's way of disabling stain deconvolution if a brightfield image is present
if (stains != null) {
builder.deconvolveStains(stains)
for (int i = 1; i <= 3; i++)
channels << ImageChannel.getInstance(name + "-" + stains.getStain(i).getName(), ImageChannel.getDefaultChannelColor(c++))
} else {
channels.addAll(updateChannelNames(name, currentServer.getMetadata().getChannels()))
//Mark modification: in addition to writing out deconvolved channels, include original RGB channels for viewing purposes
//Currently unsupported due to bugs when used in conjuction with fluorescent images, will leave the 2 lines below commented out
//channels.addAll(updateChannelNames(name, currentServer.getMetadata().getChannels()))
//servers << currentServer
}
servers << builder.build()
}
}
println 'Channels: ' + channels.size()
// Remove the first server - we need to use it as a basis (defining key metadata, size)
ImageServer<BufferedImage> server = servers.remove(0)
// If anything else remains, concatenate along the channels dimension
if (!servers.isEmpty())
server = new TransformedServerBuilder(server)
.concatChannels(servers)
.build()
// Write the image or open it in the viewer
if (pathOutput != null) {
if (outputDownsample > 1)
server = ImageServers.pyramidalize(server, outputDownsample)
writeImage(server, pathOutput)
} else {
// Create the new image & add to the project
def imageData = new ImageData<BufferedImage>(server)
setChannels(imageData, channels as ImageChannel[])
Platform.runLater {
getCurrentViewer().setImageData(imageData)
}
}
// Prepend a base name to channel names
List<ImageChannel> updateChannelNames(String name, Collection<ImageChannel> channels) {
return channels
.stream()
.map( c -> {
return ImageChannel.getInstance(name + '-' + c.getName(), c.getColor())
}
).collect(Collectors.toList())
}
print('Done')
Warpy - Registration of whole slide images at cellular resolution with Fiji and QuPath
Dear community,
We are happy to share Warpy, a whole slide image registration workflow that can reach cellular resolution over a large area (an area that typically requires more than an affine transformation). It is bridging QuPath, Fiji and elastix.
Briefly, a Project is defined in QuPath, and can be opened in Fiji to be registered either automatically, manually with BigWarp, or semi-automatically (BigWarp is used to correct the result of the automated registration). The resulting transformat…
I saw 2 different registration types, “RIGID” and “AFFINE”; which is best for a TMA core?
How can I judge the quality of my registration? I have set use_single_channel to “0” for now.
Also, for 2: AFFINE will usually give better results; the main reason to use RIGID is when you know there is no shear, for example two images of the exact same tissue slice taken on different systems.
For 3: you can’t really, if they are different slices. You can try calculating an intersection over union, which might give you some idea when comparing multiple attempts at the same alignment, but there is no ground truth for the alignment between serial sections.
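If you want to try the intersection-over-union idea, here is a minimal sketch in QuPath’s Groovy scripting, assuming you have already imported the transformed annotation into the reference image and selected the two annotations you want to compare (the selection handling is just a placeholder):

// Rough intersection-over-union between two selected annotations (sketch only, not ground truth)
import qupath.lib.roi.RoiTools
import static qupath.lib.gui.scripting.QPEx.*

def annots = getSelectedObjects().findAll { it.isAnnotation() } as List
if (annots.size() != 2) {
    print 'Select exactly two annotations first'
    return
}
def roiA = annots[0].getROI()
def roiB = annots[1].getROI()
def inter = RoiTools.combineROIs(roiA, roiB, RoiTools.CombineOp.INTERSECT)
def interArea = inter.isEmpty() ? 0 : inter.getArea()
def unionArea = roiA.getArea() + roiB.getArea() - interArea
print 'IoU: ' + (interArea / unionArea)

Values close to 1 mean the two outlines overlap almost completely; this is only useful for comparing different alignment attempts on the same pair of images.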
Hi, thanks for your quick reply!
After the image overlay, I would like to do spatial analysis between CD3 (T cells) in Panel 1 and CD14 (macrophages) in Panel 2 across the 2 serial sections. So, two different cell types on two different slides: is that possible?
It seems like I have to keep it as an AFFINE transformation? By the way, I tried Warpy before, but I didn’t know how to use it in QuPath.
Do you have any suggestions for my use of the AffineTransforms function (Issue 4 above)?
AffineTransforms.fromRows()
Zach:
I would like to do spatial analysis between CD3 (T cells) in Panel 1 and CD14 (macrophages) in Panel 2 in 2 serial sections
Depends on what you mean. If you want average distances between the two cell types, I’m not sure that works across serial sections. If you have the same rough areas defined (tumor vs stroma) and want to get the average density in the different areas, that is fine.
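For the density route, a rough sketch along those lines, assuming the merged image has detections whose classifications contain “CD3” or “CD14” and area annotations such as tumor/stroma (all of those names are placeholders for whatever you actually use):

// Count cells of each class inside each area annotation and report a density (sketch only)
import static qupath.lib.gui.scripting.QPEx.*

def pixelSize = getCurrentServer().getPixelCalibration().getAveragedPixelSizeMicrons()
def detections = getDetectionObjects()
for (annotation in getAnnotationObjects()) {
    def roi = annotation.getROI()
    def areaMM2 = roi.getArea() * pixelSize * pixelSize / 1e6   // pixels^2 -> mm^2
    for (targetClass in ['CD3', 'CD14']) {
        def n = detections.count { d ->
            d.getPathClass() != null &&
                d.getPathClass().toString().contains(targetClass) &&
                roi.contains(d.getROI().getCentroidX(), d.getROI().getCentroidY())
        }
        print annotation.getName() + ' / ' + targetClass + ': ' + (n / areaMM2) + ' cells per mm^2'
    }
}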
Warpy is a combination of QuPath and Fiji; it has its own documentation through the link, plenty of people have used it, and you can see other forum threads.
Zach:
Did you have some suggestions for my coding of the AffineTransforms function (so Issue 4)?
Your matrix doesn’t match the input described in the javadocs for AffineTransforms (QuPath 0.5.0), which you can find via the Help menu of the scripting interface.
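For what it’s worth, one way to sidestep the helper is to build the java.awt.geom.AffineTransform directly, as the two Mike_Nelson scripts above do. This is only a sketch: the constructor takes (m00, m10, m01, m11, m02, m12), whereas the printed matrix is [[m00, m01, m02], [m10, m11, m12]], and it assumes transformSelectedObjects from your own snippet is the right way to apply it in v0.5.0.

import java.awt.geom.AffineTransform
import qupath.lib.scripting.QP

// Values copied from the printed AffineTransform above, reordered for the constructor
def transform = new AffineTransform(
        1.00668018050294, -0.088266478464193,    // m00, m10
        0.097886238193699, 1.000952756406568,    // m01, m11
        -296.1002922434506, 299.4115770646801)   // m02, m12

// Apply the inverse to the selected objects, as in the original snippet
QP.transformSelectedObjects(transform.createInverse())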