Hello Tobias!
Thanks for a great paper!
I have a question about the actual dimensions of the last layers in your work. In the paper you state that the last FRRU96 and the concatenation of the two streams are followed by RU48; to me that means we need a dimensionality reductor such as 48 convolutions of size 1x1x(96+32). However, in the `FRRNABuilder::build` code I see that after the concatenation you add an RU going from `self.base_channels + self.lanes` to `self.base_channels`:
```python
network = self.add_ru(
    network, self.base_channels + self.lanes, self.base_channels)
```
In the `add_ru()` method, if `in_channels != out_channels`, an additional auto-reductor is added. Does this mean that lasagne in fact adds a reductor from `self.base_channels * self.multiplier + self.lanes` down to `self.base_channels` before the meaningful RU convolutions, and that this line in `FRRNABuilder::build` is just a false trail?
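To make sure I'm reading the code right, here is a minimal sketch of my understanding of the `add_ru()` behaviour. This is purely illustrative: `add_ru_sketch` and the layer tuples are hypothetical stand-ins, not the real lasagne layers from your repository.

```python
def add_ru_sketch(in_channels, out_channels):
    """Illustrative sketch of the add_ru() behaviour as I understand it:
    when the channel counts differ, a 1x1 "auto-reductor" convolution is
    inserted before the residual unit's own 3x3 convolutions."""
    layers = []
    if in_channels != out_channels:
        # hypothetical auto-reductor: 1x1 conv mapping in -> out channels
        layers.append(("conv1x1", in_channels, out_channels))
    # the RU itself then operates entirely at out_channels
    layers.append(("conv3x3", out_channels, out_channels))
    layers.append(("conv3x3", out_channels, out_channels))
    return layers

# With the numbers from my question (96 pooling-stream channels
# concatenated with 32 residual lanes, reduced to 48):
print(add_ru_sketch(96 + 32, 48))
# → [('conv1x1', 128, 48), ('conv3x3', 48, 48), ('conv3x3', 48, 48)]
```

If this matches what happens, the actual 128-to-48 reduction would live inside `add_ru()` rather than in the line quoted from `FRRNABuilder::build`.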