
Library seems to use up all available GPU memory. #85

@noahmartinwilliams

Description


I'm not sure whether this is a problem with accelerate-llvm-ptx or with CUDA, but whenever I run `stack test` for this project at commit 49432b9265cc72606daf256b9ca641724e1edae9, it runs out of GPU memory and crashes. I'm going to see if I can come up with a simpler test case, but this is a problem I've had for years now with every attempt I've made to develop a neural-net library using Accelerate. I have an NVIDIA GeForce RTX 3050 Laptop GPU with 4 GiB of memory, on Arch Linux (up to date as of April 5th, 2026).

Also, when I run `nvtop` to check how heavily the GPU is being used, it often shows only around 5% GPU utilization.
