The code currently assumes a CUDA-enabled build of PyTorch, which isn't available on machines like a MacBook. Replacing the `.cuda()` calls with `.to(device)`, where `device` is chosen based on CUDA availability, should let the code run without hiccups on such devices.
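A minimal sketch of what that device-selection pattern could look like (the `Linear` model here is just a hypothetical stand-in for whatever model the code builds):

```python
import torch

# Pick the best available device: CUDA GPU, Apple's MPS backend, or CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Move the model and inputs with .to(device) instead of .cuda(),
# so the same code runs on GPU-less machines as well.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)
out = model(x)
print(out.shape)
```

The `getattr` guard keeps the check from raising on older PyTorch versions that predate the MPS backend.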