Hi,
I see this code in train.py:
model = DistributedDataParallel(model, device_ids=[args.local_rank], broadcast_buffers=False)
Since the queue is registered as a buffer, does this mean that each GPU keeps its own copy of the buffer and updates it independently? If so, should we sync the queue across GPUs? A sketch of what I mean by "sync" is below.
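For reference, this is the kind of synchronization I had in mind: gather the new keys from every rank before enqueueing, so each GPU applies the same update and the per-GPU queue copies stay identical. This is only a minimal sketch under my own assumptions (a buffer of shape (dim, K), a one-element queue_ptr buffer, and an evenly dividing batch size); the helper names here are hypothetical, not taken from this repo.

import torch
import torch.distributed as dist

@torch.no_grad()
def concat_all_gather(tensor):
    # Gather a tensor from all ranks and concatenate along dim 0.
    gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, tensor)
    return torch.cat(gathered, dim=0)

@torch.no_grad()
def dequeue_and_enqueue(queue, queue_ptr, keys):
    # Update the local queue buffer with keys gathered from all GPUs,
    # so every rank writes the same data at the same pointer position.
    keys = concat_all_gather(keys)            # (world_size * batch, dim)
    batch_size = keys.shape[0]
    ptr = int(queue_ptr)
    queue[:, ptr:ptr + batch_size] = keys.T   # assumes K % batch_size == 0
    queue_ptr[0] = (ptr + batch_size) % queue.shape[1]

Is something along these lines needed here, or is the per-GPU divergence of the queue intentional?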
Thanks!!