RTX 4090D 48GB support? (RuntimeError: CUDA error: mapping of buffer object failed) #36

@ac101m

Description

NVIDIA Open GPU Kernel Modules Version

570.124.04-p2p

Please confirm this issue does not happen with the proprietary driver (of the same version). This issue tracker is only for bugs specific to the open kernel driver.

  • I confirm that this does not happen with the proprietary driver package.

Operating System and Version

Ubuntu 24.04.2 LTS

Kernel Release

6.8.0-57-generic

Please confirm you are running a stable release kernel (e.g. not a -rc). We do not accept bug reports for unreleased kernels.

  • I am running on a stable kernel release.

Hardware: GPU

NVIDIA GeForce RTX 4090D (48GB)

Describe the bug

In a system with four RTX 4090D GPUs (a modded Chinese variant with 48GB of VRAM), the error 'mapping of buffer object failed' occurs when attempting P2P communication. I've been able to reproduce the same error with several programs (tabbyAPI, a simple CUDA test program, the CUDA samples, etc). The GPUs are connected directly to the CPU, without PCIe switches.

Obvious stuff out of the way:

  • IOMMU disabled in BIOS
  • PCIe ACS disabled in BIOS
  • Virtualization disabled (SR-IOV and CPU virtualization)
  • Secure Boot disabled
  • Resizable BAR enabled
  • nouveau driver blacklisted

General system information:

  • Mainboard: ASRock Rack ROMED8-2T
  • CPU: EPYC 7532
  • 256GB RAM
  • 4x "Tronizm" (obscure Chinese brand) RTX 4090D 48GB

The cards also purport to be capable of P2P operation (according to the CUDA samples' deviceQuery):

<other 3 devices truncated>

Device 3: "NVIDIA GeForce RTX 4090 D"
  CUDA Driver Version / Runtime Version          12.8 / 12.0
  CUDA Capability Major/Minor version number:    8.9
  Total amount of global memory:                 48519 MBytes (50875924480 bytes)
MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
  (114) Multiprocessors, (128) CUDA Cores/MP:    14592 CUDA Cores
  GPU Max Clock rate:                            2520 MHz (2.52 GHz)
  Memory Clock rate:                             10501 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 75497472 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        102400 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 193 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from NVIDIA GeForce RTX 4090 D (GPU0) -> NVIDIA GeForce RTX 4090 D (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU0) -> NVIDIA GeForce RTX 4090 D (GPU2) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU0) -> NVIDIA GeForce RTX 4090 D (GPU3) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU1) -> NVIDIA GeForce RTX 4090 D (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU1) -> NVIDIA GeForce RTX 4090 D (GPU2) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU1) -> NVIDIA GeForce RTX 4090 D (GPU3) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU2) -> NVIDIA GeForce RTX 4090 D (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU2) -> NVIDIA GeForce RTX 4090 D (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU2) -> NVIDIA GeForce RTX 4090 D (GPU3) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU3) -> NVIDIA GeForce RTX 4090 D (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU3) -> NVIDIA GeForce RTX 4090 D (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU3) -> NVIDIA GeForce RTX 4090 D (GPU2) : Yes

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.8, CUDA Runtime Version = 12.0, NumDevs = 4
Result = PASS

Also nvidia-smi output:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.04             Driver Version: 570.124.04     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090 D      On  |   00000000:01:00.0 Off |                  Off |
| 30%   43C    P0             53W /  425W |       1MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 4090 D      On  |   00000000:81:00.0 Off |                  Off |
| 30%   42C    P0             54W /  425W |       1MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA GeForce RTX 4090 D      On  |   00000000:82:00.0 Off |                  Off |
| 30%   35C    P0             58W /  425W |       1MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA GeForce RTX 4090 D      On  |   00000000:C1:00.0 Off |                  Off |
| 30%   40C    P0             42W /  425W |       1MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

To Reproduce

  • Install the NVIDIA 570.124.04 driver from the .run file, without kernel modules.
  • Install the kernel modules from here: https://github.com/aikitoria/open-gpu-kernel-modules/tree/570.124.04-p2p (as I understand it, this is just a more up-to-date fork of this repository with patches for more recent driver versions).
  • Install CUDA 12.8 and cuDNN.
  • Compile and run the CUDA sample simpleP2P.

simpleP2P output:

[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 4

Checking GPU(s) for support of peer to peer memory access...
> Peer access from NVIDIA GeForce RTX 4090 D (GPU0) -> NVIDIA GeForce RTX 4090 D (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU0) -> NVIDIA GeForce RTX 4090 D (GPU2) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU0) -> NVIDIA GeForce RTX 4090 D (GPU3) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU1) -> NVIDIA GeForce RTX 4090 D (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU1) -> NVIDIA GeForce RTX 4090 D (GPU2) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU1) -> NVIDIA GeForce RTX 4090 D (GPU3) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU2) -> NVIDIA GeForce RTX 4090 D (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU2) -> NVIDIA GeForce RTX 4090 D (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU2) -> NVIDIA GeForce RTX 4090 D (GPU3) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU3) -> NVIDIA GeForce RTX 4090 D (GPU0) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU3) -> NVIDIA GeForce RTX 4090 D (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 D (GPU3) -> NVIDIA GeForce RTX 4090 D (GPU2) : Yes
Enabling peer access between GPU0 and GPU1...
CUDA error at simpleP2P.cu:129 code=205(cudaErrorMapBufferObjectFailed) "cudaDeviceEnablePeerAccess(gpuid[1], 0)"

As we can see, the GPUs purport to be capable of P2P communication, but P2P programs fall over at runtime.
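The capability check that deviceQuery and simpleP2P perform can be reproduced from Python via ctypes, which is handy for confirming the mismatch between the reported capability and the runtime failure. This is a sketch; the function name `peer_access_matrix` is mine, and it assumes `libcudart` is discoverable on the system (it returns None otherwise):

```python
import ctypes
import ctypes.util

def peer_access_matrix(num_devices):
    """Query cudaDeviceCanAccessPeer for every ordered device pair.

    Returns a {(src, dst): bool} dict, or None when the CUDA runtime
    library cannot be located (e.g. on a machine without CUDA).
    """
    libname = ctypes.util.find_library("cudart")
    if libname is None:
        return None
    cudart = ctypes.CDLL(libname)
    matrix = {}
    for src in range(num_devices):
        for dst in range(num_devices):
            if src == dst:
                continue
            can = ctypes.c_int(0)
            # cudaError_t cudaDeviceCanAccessPeer(int *canAccessPeer,
            #                                     int device, int peerDevice)
            err = cudart.cudaDeviceCanAccessPeer(ctypes.byref(can), src, dst)
            matrix[(src, dst)] = (err == 0 and can.value == 1)
    return matrix

if __name__ == "__main__":
    print(peer_access_matrix(4))
```

On this system all twelve ordered pairs come back True, yet the subsequent cudaDeviceEnablePeerAccess call fails with error 205 (cudaErrorMapBufferObjectFailed).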

A test program derived from the content of this gist exhibits the following error:

device(type='cuda', index=0)
device(type='cuda', index=1)
device(type='cuda', index=2)
device(type='cuda', index=3)
cuda:0 -> cuda:1
cuda:2 -> cuda:3
cuda:1 -> cuda:0
cuda:3 -> cuda:2
cuda:1 -> cuda:2
cuda:3 -> cuda:0
cuda:2 -> cuda:1
cuda:0 -> cuda:3
cuda:2 -> cuda:0
cuda:3 -> cuda:1
cuda:0 -> cuda:2
cuda:1 -> cuda:3
All 12 GPU transfer combinations are present
GPU cuda:0 can access: GPU cuda:1: ✓,  GPU cuda:2: ✓,  GPU cuda:3: ✓,  
GPU cuda:1 can access: GPU cuda:0: ✓,  GPU cuda:2: ✓,  GPU cuda:3: ✓,  
GPU cuda:2 can access: GPU cuda:0: ✓,  GPU cuda:1: ✓,  GPU cuda:3: ✓,  
GPU cuda:3 can access: GPU cuda:0: ✓,  GPU cuda:1: ✓,  GPU cuda:2: ✓,  
  0%|                                                                                                                                 | 0/12 [00:00<?, ?it/s]Testing cuda:0 to cuda:1
  0%|                                                                                                                                 | 0/12 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ac/local-projects/cuda-p2p-testing/./run.py", line 76, in <module>
    _ = warm_up.to(gpu2)
        ^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: mapping of buffer object failed
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Despite the error message, setting CUDA_LAUNCH_BLOCKING=1 has no effect on the error.

Bug Incidence

Always

nvidia-bug-report.log.gz


More Info

I wasn't expecting these GPUs to be well behaved exactly, and had already accepted that I probably wouldn't get good P2P speeds. However, if this patch can be adapted to these cards somehow, that truly would be excellent!

I expect the firmware has been tampered with in some way to support the additional memory, so that may also play a role here.
