I'm not sure whether this is a problem with accelerate-llvm-ptx or CUDA, but whenever I run "stack test" on this repository at commit 49432b9265cc72606daf256b9ca641724e1edae9, it runs out of GPU memory and crashes. I'll try to come up with a simpler reproduction, but this is a problem I've had for years with every attempt I've made to develop a neural-net library using Accelerate. My hardware is an NVIDIA GeForce RTX 3050 Laptop GPU with 4 GiB of memory, running Arch Linux (up to date as of April 5th, 2026).
Also, when I watch nvtop while the tests run, GPU utilization often sits at only around 5%.