Conversation
@wdhongtw wdhongtw commented Dec 4, 2025

Description

Avoid installing CUDA-related packages

  • Use PyTorch CPU version so we avoid installing CUDA.

This modification preserves the functionality and reduces the image size by about 7.7 GB:

wdhongtw/vllm-tpu   latest   d055fd2151a0   22 minutes ago      11.8GB
wdhongtw/vllm-tpu   base     07dbf76dbed8   About an hour ago   19.5GB
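The savings come from pulling PyTorch from its CPU-only wheel index rather than the default index, which bundles the multi-gigabyte CUDA runtime. A minimal sketch of the relevant Dockerfile line, assuming a pip-based install; the exact package pins are illustrative, not taken from this PR:

```dockerfile
# Install the CPU-only build of PyTorch. The CPU wheel index never
# serves CUDA-enabled wheels, so the nvidia-* dependencies are skipped.
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu
```

On a TPU image, where the GPU code paths are unused anyway, this keeps `import torch` working while dropping the CUDA libraries from every layer that installs it.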

This PR should be merged after #1245

Tests

Build the image and run benchmarking in the container.

Checklist

Before submitting this PR, please make sure:

  • I have performed a self-review of my code.
  • I have added the necessary comments to my code, particularly in hard-to-understand areas.
  • I have made or will make corresponding changes to any relevant documentation.

- Mount cache directory across layers when necessary.
- Allow cache directory usage for pip command.

Signed-off-by: Weida Hong <wdhongtw@google.com>
- Use PyTorch CPU version so we avoid installing CUDA.

Signed-off-by: Weida Hong <wdhongtw@google.com>
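The two cache-related commit items above (mounting a cache directory across layers and allowing pip to use it) are consistent with BuildKit cache mounts. A hedged sketch of that pattern; the cache path and requirements file are illustrative assumptions, not taken from this PR:

```dockerfile
# Share pip's download cache across builds and build stages via a
# BuildKit cache mount (requires DOCKER_BUILDKIT=1 or docker buildx).
# Note: this only helps if pip is NOT run with --no-cache-dir and
# PIP_NO_CACHE_DIR is unset, so pip is allowed to use the cache.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt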
@QiliangCui left a comment

Avoiding the CUDA install will be great!!

Can we do this change after #1245 so that we can have a cleaner base to diff?
