`_pages/plugins/trackmate/detectors/trackmate-cellpose.md` (57 additions, 42 deletions)
@@ -50,45 +50,41 @@ This is documented [on this page](/plugins/trackmate/trackmate-conda-path).
{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-ui-01.png" align='center' width='300' %}
We document these parameters from top to bottom in the GUI.

- <!---

##### `Conda environment`

- Specify in what conda environment you installed Cellpose-SAM.
- If you get an error at this stage, it is likely because the conda path for TrackMate was not configured properly. Check [this page]((/plugins/trackmate/trackmate-conda-path).
- -->
+ Specify in which conda environment you installed Cellpose (v3.1.1 in the case of this detector).
+ If you get an error at this stage, it is likely because the conda path for TrackMate was not configured properly.
+ Again, check [this page](/plugins/trackmate/trackmate-conda-path) if this happens.
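If you want to double-check, outside of TrackMate, that the environment you select here really contains a Cellpose 3 installation, a quick sanity check from that environment's Python can help. This is a minimal sketch; the environment name `cellpose-3` is only an example and should match the one you created.

```python
# Run with the Python of the environment you plan to select in TrackMate,
# for instance: conda run -n cellpose-3 python check_cellpose.py
from importlib.metadata import version
from cellpose import core

print("Cellpose version:", version("cellpose"))  # should be a 3.x release for this detector
print("GPU available:", core.use_gpu())          # True only if GPU libraries and hardware are set up
```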

- ##### `Path to cellpose / Python executable`

- We must specify where is the cellpose executable, as it was installed outside of Fiji. Because there are several ways of installing cellpose (with Conda or using the precompiled executables), we have to accommodate several cases we document [later in this document](#installing-cellpose). Briefly:
+ ##### `Pretrained model`

- - If you installed cellpose via the precompiled executables, enter the _path to the executable_. For instance: `C:\Users\tinevez\Applications\cellpose.exe`
- - If you installed cellpose via Conda, enter the _path to the Python executable of the Conda environment you created for cellpose_. For instance: `C:\Users\tinevez\anaconda3\envs\cellpose\python.exe`
+ This list lets you select the pretrained models that ship with Cellpose 3.
+ They are documented in detail on the [Cellpose doc website](https://cellpose.readthedocs.io/en/v3.1.1.1/models.html).
+ The main ones are:
+ - the `cyto3` model, which excels at segmenting cells stained for their cytoplasm. This model supports the specification of a second channel (see below) where nuclei are stained, to guide cell segmentation.
+ - the `nuclei` model, trained to segment cell nuclei.
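For readers who already use Cellpose from Python, the model and channel settings of this detector map roughly onto a call like the sketch below. This is not TrackMate's actual code (TrackMate drives Cellpose in a separate process); the file name and channel indices are only examples.

```python
# Rough equivalent of choosing `cyto3` with a cytoplasm and a nuclei channel.
from cellpose import models
from skimage.io import imread

img = imread("frame_0001.tif")            # one time-point, e.g. shape (C, Y, X)

model = models.Cellpose(gpu=True, model_type="cyto3")
# channels = [cytoplasm channel, nuclei channel]; use 0 as second value to skip the nuclei channel.
# Note: here the diameter is in pixels, whereas the TrackMate field is in physical units.
masks, flows, styles, diams = model.eval(img, diameter=30, channels=[2, 1])
```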

- ##### `Pretrained model`
- Right now we only support three pretrained models of cellpose:
- - the `cyto` model ("Cytoplasm"), to segment cells stained for their cytoplasm;
- - the `nuclei` model ("Nuclei") to segment cell nuclei;
- - the `cyto2` model ("Cytoplasm 2.0"), which is the `cyto` model augmented with user-submitted images.
- - the `live cell` model from Cellpose was trained on the `LIVECELL`[dataset](https://sartorius-research.github.io/LIVECell/) of label free images of cells.
- - the `TissueNet` model from Cellpose was trained on the `TissueNet`[dataset](https://datasets.deepcell.org/data) available from deepcell.
- - "Custom" lets you specify a custom model you would have trained or downloaded, which is documented below.

##### `Path to custom model`

- If in the `Pretrained model` selection you pick `Custom`, a text field will appear above, allowing for entering the path to a custom cellpose model. You need to browse to the actually model file generated by the training. For instance: `D:\Projects\Brightfield\Cellpose model 171121-20211118T111402Z-001\171121\cellpose_residual_on_style_on_concatenation_off_train_folder_2021_11_17_19_45_23.398850`
+ There are radio buttons before the `Pretrained model` and `Path to custom model` parameters that let you select whether you want to use a pretrained model or a custom one.
+ When `Path to custom model` is selected, you need to browse to the actual model file.

Note that it must be an `absolute path` to the file.
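Again as a point of reference for Python users, a custom model file is what Cellpose's `CellposeModel` class expects as `pretrained_model`. A minimal sketch, with an example path and example parameters:

```python
# Loading a custom (retrained) model with the Cellpose 3 API. The path and
# parameter values are examples only.
from cellpose import models
from skimage.io import imread

img = imread("frame_0001.tif")
custom = models.CellposeModel(gpu=True,
                              pretrained_model=r"D:\models\my_custom_cellpose_model")
# CellposeModel.eval returns 3 values (no automatic diameter estimation).
masks, flows, styles = custom.eval(img, diameter=60, channels=[0, 0])
```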

- ##### `Channel to segment`
+ ##### `Target channel`

If your image has several color channels, choose here the channel to segment. The number of the channel corresponds to the _ImageJ channel_ in your image.
It should be the channel containing the cytoplasm staining for cell segmentation, or the nuclei staining for nuclei segmentation, e.g. with the `nuclei` model.

Note that TrackMate doesn't handle RGB stacks, so you might need to convert your image if it is not already a composite image (see below for a [tip on how to pass RGB images to TrackMate-Cellpose](#tip-passing-rgb-images-to-trackmate-for-cellpose)).

- ##### `Optional second channel`
+ ##### `Second optional channel`

- The `cyto`and `cyto2` pretrained models have been trained on images with a second channels in which the cell nuclei were labeled. It is used as a seed to make the detection of single cells more robust.
+ The `cyto3` (and `cyto2` and `cyto`) pretrained models have been trained on images with a second channel in which the cell nuclei were labeled. It is used as a seed to make the detection of single cells more robust.
It is optional, and this parameter specifies in which channel the nuclei are (the number of the ImageJ channel of the nuclei). Use '0' to skip using the second optional channel.
For the `nuclei` model, this parameter is ignored.
@@ -97,9 +93,19 @@ For the `nuclei` model, this parameter is ignored.
cellpose can exploit _a priori_ knowledge of the size of the cells. If you have a rough estimate of the cell size, enter it here. In TrackMate this parameter must be specified in physical units (for instance µm if the source image has a pixel size expressed in µm).
Enter the value '0' to have cellpose automatically determine the cell size estimate.

+ _Careful!_ I insist a bit on this point.
+ If you are used to running Cellpose from the Cellpose UI or a Python script, it expects the diameter to be in pixels.
+ But here in TrackMate, _the diameter is in the spatial units of your image_.
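TrackMate takes care of the conversion internally, using the image calibration. If you ever need to compare with a direct Cellpose run, the conversion is a simple division; a sketch with example numbers:

```python
# Converting the TrackMate "Cell diameter" (physical units) into the pixel
# diameter that Cellpose itself expects. The values are examples.
diameter_um = 40.0       # what you type in the TrackMate field
pixel_size_um = 0.65     # image calibration (Image > Properties... in Fiji)

diameter_px = diameter_um / pixel_size_um
print(f"Pass about {diameter_px:.0f} pixels as 'diameter' to Cellpose directly.")
```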

##### `Use GPU`

- If this box is checked, the GPU will be used _if cellpose was installed with required librairies and hardware to use the GPU_. If the GPU support is absent or incorrect, this setting will be safely ignored and the computation will rely on CPU only. Unchecking the box will force cellpose to use the CPU even if GPU support is available.
+ If this box is checked, the GPU will be used _if cellpose was installed with the required libraries and hardware to use the GPU_.
+ If GPU support is absent or incorrect, this setting will be safely ignored and the computation will rely on the CPU only.
+ Unchecking the box will force cellpose to use the CPU even if GPU support is available.

+ We took special care to implement multi-threaded processing when you use the CPU.
+ With this setting, there will be one Cellpose process running per thread you set in the _Edit > Options > Memory and Threads..._ configuration ("Parallel Threads for stacks").
+ Each image corresponding to a time-point will then be dispatched automatically to these multiple processes, running concurrently.
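The dispatcher itself is written in Java inside TrackMate, but the idea can be sketched in Python: if no usable GPU is found, whole time-points are farmed out to a small pool of CPU worker processes. This is a conceptual sketch only; the file names, model, and worker count are made up.

```python
# Conceptual sketch of the CPU fallback: one Cellpose worker per "parallel
# thread", each segmenting whole time-points. Not TrackMate's actual code.
from concurrent.futures import ProcessPoolExecutor
from cellpose import core, models
from skimage.io import imread

def segment_frame(path):
    model = models.Cellpose(gpu=False, model_type="cyto3")   # CPU-only worker
    masks, *_ = model.eval(imread(path), diameter=30, channels=[0, 0])
    return path, int(masks.max())                            # number of cells found

if __name__ == "__main__":
    frames = [f"frame_{t:04d}.tif" for t in range(10)]       # example time-points
    if core.use_gpu():
        # With a working GPU, processing frames sequentially in one process is fine.
        results = [segment_frame(f) for f in frames]
    else:
        with ProcessPoolExecutor(max_workers=4) as pool:     # ~ "Parallel Threads for stacks"
            results = list(pool.map(segment_frame, frames))
    print(results)
```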

##### `Simplify contours`

@@ -108,11 +114,11 @@ If this box is checked, the contour outlines generated by the masks returned by

### (Re)Training of a cellpose model

- To optimize CellPose on your data, you can retrain one its model and use it in TrackMate-CellPose. You can do the retraining easily through [CellPose 2.0 GUI](https://www.nature.com/articles/s41592-022-01663-4). When the model gives acceptable results, select it in TrackMate interface with the `custom_model` parameter.
+ To optimize Cellpose on your data, you can retrain one of its models and use it in TrackMate-Cellpose. The retraining is easily done through the [Cellpose 2.0 GUI](https://www.nature.com/articles/s41592-022-01663-4). When the model gives acceptable results, select it in the TrackMate interface with the `Path to custom model` parameter.

+ <!---
Note that in 3D stacks, it can be worth it to create xy, yz and xz images of the stack corrected for anisotropy and to draw annotations on it to train the model to make it more performant for the segmentation with the `3D mode`.
+ --->

## Tutorials
@@ -136,15 +142,21 @@ In the second panel, select the **cellpose detector**:

then click the `Next` button. You should see the configuration panel for cellpose.

- {% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-tutorial-03" align='center' width='300' %}
+ {% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-ui-01.png" align='center' width='300' %}

- We want to segment the touching cells and obtain their contour. So we will use the **Cytoplasm** model. The cytoplasm is imaged in the second channel (it happens to use the green LUT, but it would not matter) so we enter **2** for the `Channel to segment` parameter. The channel **1** contains the nuclei so we can specify it in the `Optional second channel` parameter. Finally, we can measure a rough estimate of the cell size using ImageJ ROI tools, which is about **40 µm**, and enter this value in the `Cell diameter` field.
+ In the `Conda environment` list, select the name of the environment in which Cellpose 3 is installed (**cellpose-3** in the example above).
+ We want to segment the touching cells and obtain their contour, so we will use the **cyto3** model, to be selected in the `Pretrained model` list.
+ The cytoplasm is imaged in the second channel (it happens to use the green LUT, but that does not matter), so we enter **2** for the `Target channel` parameter.
+ Channel **1** contains the nuclei, so we specify it in the `Second optional channel` parameter.
+ Finally, we can measure a rough estimate of the cell size using ImageJ ROI tools, which is about **40 µm**, and enter this value in the `Cell diameter` field.

- The movie is quite big, and will take several minutes to process using only the CPU (on my Mac it takes about 20 minutes without multithreading). Having a cellpose installation with GPU support really accelerate segmentation, but this is not available on Mac. Instead, for we developed TrackMate-cellpose so that on Mac, it can accelerate processing using multithreading. CPU multithreading is used only on Mac and if the `Use GPU` box is unchecked, which is why it is the case on the image above.
+ The movie is quite big, and will take several minutes to process using only the CPU (on my Mac it takes about 20 minutes without multithreading).
+ Having a cellpose installation with GPU support really accelerates segmentation.

Finally, for this movie we want the cell contour to follow the pixels of the masks generated by cellpose, so we uncheck the `Simplify contour` box.

- Then we are ready to launch segmentation. Click the `Next` button. The log window will echo the messages from cellpose. On a windows machine with GPU support, the detection step takes about 2 minutes. On a Mac with multithreading using 8 cores, it takes about 4 minutes.
+ Then we are ready to launch segmentation. Click the `Next` button. The log window will echo the messages from cellpose. On a Windows machine with GPU support, the detection step takes about 2 minutes.
+ On a 2022 MacBook Pro using the GPU, it takes about 4 minutes.

The **Initial thresholding** panel reports the quality histogram for all detections. Here, the quality is simply the area of each detection in pixel units. We see that there are some detections with a very small area, probably spurious objects. We can filter them out by setting a threshold value around 220.
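The same filtering can be reproduced outside TrackMate: starting from a Cellpose label image, the per-label pixel count is exactly the quality value used here, and small labels can simply be discarded. A minimal sketch (the 220-pixel threshold matches the value above; the file name is an example):

```python
# Removing objects smaller than 220 pixels from a Cellpose label image,
# mimicking the initial threshold on quality (= area in pixels).
import numpy as np
from skimage.io import imread

masks = imread("frame_0001_masks.tif")      # label image, 0 = background
areas = np.bincount(masks.ravel())          # pixel count for each label
small = np.flatnonzero(areas < 220)
small = small[small != 0]                   # never remove the background label

filtered = masks.copy()
filtered[np.isin(filtered, small)] = 0      # discard the spurious small objects
print(f"Kept {np.unique(filtered).size - 1} of {np.unique(masks).size - 1} objects.")
```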
@@ -174,7 +186,11 @@ Another key advantage of cellpose is that it is relatively easy and fast to trai

{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-tutorial-b-01" align='center' width='400' %}

- For instance the cells above were imaged in bright-field at high resolution. They are Glioblastoma-astrocytoma U373 cells migrating on a polyacrylamide gel. They have a rather complex shape and a low contrast. The segmentation of cells in such images is normally difficult with classical image processing techniques. In the following we will use the trackMate-Cellpose intragration to segment and track them. Our goal is to see if we discriminate between two types of cell locomotion based on dynamics and morphological features of single cells. We will see how to give more robustness to the analysis by filtering out some detections close to image border.
+ For instance, the cells above were imaged in bright-field at high resolution.
+ They are Glioblastoma-astrocytoma U373 cells migrating on a polyacrylamide gel.
+ They have a rather complex shape and a low contrast.
+ The segmentation of cells in such images is normally difficult with classical image processing techniques. In the following we will use the TrackMate-Cellpose integration to segment and track them.
+ Our goal is to see whether we can discriminate between two types of cell locomotion based on dynamic and morphological features of single cells. We will also see how to make the analysis more robust by filtering out detections close to the image border.

The data can be obtained from Zenodo:

@@ -183,18 +199,17 @@ The data can be obtained from Zenodo:
The dataset contains a custom cellpose model. We have trained it using the [ZeroCostDL4Mic platform](https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki). This cellpose model was trained for 500 epochs on 214 paired image patches (image dimensions: 520 x 696), with a batch size of 8, using the [Cellpose ZeroCostDL4Mic notebook (v 1.13)](https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/Beta%20notebooks/Cellpose_2D_ZeroCostDL4Mic.ipynb). The cellpose `cyto2` model was used as a training starting point. The training was accelerated using a Tesla K80 GPU.

To use the model you need to unzip the `Cellpose model 171121-20211118T111402Z-001.zip` file. The cellpose model file itself is the `cellpose_residual_on_style_on_concatenation_off_train_folder_2021_11_17_19_45_23.398850` file in the `171121` folder.
- We can use it in TrackMate by specifying "Custom" as a model in the interface:
+ We can use it in TrackMate by checking the radio button in front of the `Path to custom model` parameter:

{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-tutorial-b-02" align='center' width='350' %}

- Here I am running TrackMate on a Windows machine, and I used the cellpose installation recommended by BIOP people ([see below](#biop-conda-installation-for-gpu-support-on-windows)). So in the `Path to cellpose / Python executable` I have: `C:\Users\tinevez\anaconda3\envs\cellpose_biop_gpu\python.exe`
- As `Pretrained model` I picked **Custom** and in the `Path to custom model` text field I entered the path to the model file (in this case, ending in `.398850`).
+ For the `Path to custom model` parameter, I browsed to the model file (in this case, ending in `.398850`).

There is only one channel, so I used **0** for both the target and optional channels. The cell size measured with ImageJ is about 60 µm.

- On this Windows machine and with this cellpose installation I have GPU support, so I left the corresponding box checked. Later we will measure shape descriptors for the cell, so it is important to simplify their contours, and the corresponding box is also checked.
+ On the 2022 MacBook Pro used in this demo, the Cellpose installation has GPU support, so I left the corresponding box checked.
+ Later we will measure shape descriptors for the cells, so it is important to simplify their contours, and the corresponding box is also checked.

- Clicking `Next` starts the segmentation. In my case it completes in about 1 minute. In the **Initial thresholding** panel, choose a threshold of 0 to include all detections in the next step. We get the following results displayed in the **Spot filter** panel:
+ Clicking `Next` starts the segmentation. In my case it completes in about 30 seconds. In the **Initial thresholding** panel, choose a threshold of 0 to include all detections in the next step. We get the following results displayed in the **Spot filter** panel:

{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-tutorial-b-03" width='300' %}
{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-tutorial-b-04" width='300' %}
@@ -236,17 +251,17 @@ Then proceed as before for the tracking step and the plot generation. The area a

{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-tutorial-b-11" width='600' align='center' %}

+ <!-----------
### 3D segmentation with cellpose

A tutorial in the documentation for the [advanced version of the cellpose detector](trackmate-cellpose-advanced) demonstrates how to use cellpose for 3D cell segmentation and tracking.
+ -------------->

## Additional information

- ### Tip: Passing RGB images to TrackMate for cellpose
+ ### Tip: Passing RGB images to TrackMate for Cellpose

- On the cellpose webiste you can find a collection of test images to test with cellpose. They will work with the TrackMate integration as well, but they are RGB images. TrackMate does not support RGB images. So we give here a short optional procedure on how to feed RGB images to TrackMate and have them still segmented by cellpose as expected. Again, if you don't have RGB images as input, you can skip this section.
+ On the Cellpose website you can find a collection of test images to try with Cellpose. They will work with the TrackMate integration as well, but they are RGB images, and TrackMate does not support RGB images. So we give here a short, optional procedure on how to feed RGB images to TrackMate and still have them segmented by cellpose as expected. If you don't have RGB images as input, you can skip this section.

cellpose can and does work with RGB images. They are single-channel but encode the red, green and blue components of each pixel in one value, effectively behaving as a 3-channel 8-bit image. However, TrackMate cannot deal with RGB images. If you launch TrackMate with an RGB image you will receive an error message. The workaround is to convert them to a proper 3-channel 8-bit image before running TrackMate. cellpose will be able to run with them anyway. Here is how to do it.
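The procedure described next relies on the Fiji menus, but the conversion can also be scripted. Here is a minimal sketch using `tifffile` (the file names are examples); the point is to end up with an 8-bit image whose colors live on a channel axis instead of being packed into RGB pixels:

```python
# Converting a packed RGB image into a 3-channel, 8-bit composite that
# TrackMate accepts. File names are examples.
import numpy as np
import tifffile

rgb = tifffile.imread("cellpose_test_rgb.tif")    # shape (Y, X, 3), dtype uint8
channels = np.moveaxis(rgb, -1, 0)                # -> (3, Y, X): one plane per channel

# Save as an ImageJ hyperstack with an explicit channel axis.
tifffile.imwrite("cellpose_test_composite.tif", channels,
                 imagej=True, metadata={"axes": "CYX"})
```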
@@ -264,13 +279,13 @@ cellpose can and does work with RGB images. They are single-channel but encode r
{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-convert-rgb-04.png" %}
{% include img src="/media/plugins/trackmate/detectors/cellpose/trackmate-cellpose-convert-rgb-05.png" %}

- ### Installing cellpose
+ ### Installing Cellpose

{% include notice icon="tech"
content="This is the recommended way to install Python tools to be used with TrackMate." %}

{% include notice icon="tech"
- content="From now on we support **cellpose 3** in TrackMate. It is great, simple to install, and has GPU acceleration on all modern platforms." %}
+ content="This detector supports **Cellpose 3** in TrackMate. It is great, simple to install, and has GPU acceleration on all modern platforms. There is a separate detector for **Cellpose SAM**, linked above." %}

You need to have conda (or mamba) installed on your system.
Any flavor (Anaconda, miniconda, miniforge, mamba, micromamba, ...) works, but if you have to install one, we recommend [miniforge](https://github.com/conda-forge/miniforge).