add decimation to Images #295
j-friedrich wants to merge 3 commits into thunder-project:master from j-friedrich:decimate
Conversation
Another way to implement this would be to use

```python
def decimate(self, n):
    def decimate_block(block):
        # average each consecutive group of n frames along the time axis
        return np.array([block[i:i + n].mean(axis=0)
                         for i in np.arange(0, block.shape[0], n)])
    new_length = int(np.ceil(self.shape[0] / n))
    return self.map_as_series(decimate_block, value_shape=new_length, dtype=np.float64)
```

I would be interested to know if there's a performance difference between these. I know that we've had troubles with …
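As a quick sanity check of the block function above (plain NumPy, outside Spark; `decimate_block` is taken from the snippet, the default `n=2` is just for this sketch), the `ceil` accounts for a short trailing group when the series length is not a multiple of n:

```python
import numpy as np

def decimate_block(block, n=2):
    # average each consecutive group of n frames; the last group may be shorter
    return np.array([block[i:i + n].mean(axis=0)
                     for i in np.arange(0, block.shape[0], n)])

block = np.arange(5.0)               # length 5, not divisible by n=2
out = decimate_block(block)
print(out)                           # [0.5 2.5 4. ] -- last group has only 1 frame
print(int(np.ceil(len(block) / 2)))  # 3 == len(out), hence the ceil in new_length
```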
Images.map_as_series transforms to blocks, which is slow. The main idea behind decimation is to quickly reduce the size of the data first and only then transform to blocks for source extraction; temporally decimated data is the sweet spot between a single summary image and the whole data. I expect a clear performance difference between these implementations, most dramatically if images 1 to n are on the first node, n+1 to 2n on the second, and so on, requiring no shuffling between nodes at all. Of course it would be terrible if image i were on node i mod n. How do the images get distributed in the first place?
@j-friedrich @jwittenbach it'd be really awesome to see performance numbers on this for a large test dataset. I'd say we just measure it and go with whichever implementation is faster.
Decimates images to reduce data size (and effective imaging rate).
Sequentially averages N frames together, instead of merely taking every N-th frame, thereby better preserving SNR. This corresponds to running-mean filtering with window length N followed by subsampling by N.
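To illustrate the distinction in plain NumPy (independent of thunder; the helper name `decimate` exists only in this sketch), averaging each group of N frames keeps information from every frame, while subsampling discards N-1 of every N:

```python
import numpy as np

def decimate(frames, n):
    """Average each consecutive group of n frames along axis 0.

    A trailing group shorter than n is averaged over the frames it has.
    """
    return np.array([frames[i:i + n].mean(axis=0)
                     for i in range(0, frames.shape[0], n)])

frames = np.arange(6.0).reshape(6, 1)  # frames 0..5, one pixel each
print(decimate(frames, 2).ravel())     # [0.5 2.5 4.5] -- averaged pairs
print(frames[::2].ravel())             # [0. 2. 4.]    -- mere subsampling
```

Averaging N independent frames reduces the noise standard deviation by roughly a factor of sqrt(N), which is why this beats plain subsampling in SNR.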