
[Bug]: Black screen when finishing generation #622

@Ma4a4a

Description

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

GPU: RX 560. When the AI finishes generating an image (progress reaches 100%), my screen turns black, the GPU fans spin up to 100% speed, and I have to reboot my PC.

Steps to reproduce the problem

  1. Start a generation
  2. Wait for it to finish
  3. The screen goes black and the GPU fans spin up to full speed

What should have happened?

Generation should finish normally: the image should appear, with no GPU errors.

What browsers do you use to access the UI?

Microsoft Edge

Sysinfo

sysinfo-2025-07-27-19-37.json

Console logs

venv "C:\sheesh\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-24-g63895a83
Commit hash: 63895a83f70651865cc9653583c69765009489f3
C:\sheesh\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\sheesh\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --lowvram --opt-split-attention --opt-sub-quad-attention --disable-nan-check --no-half
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute 'ORTPipelinePart'
Loading weights [e44c7b30c6] from C:\sheesh\models\Stable-diffusion\epicphotogasm_ultimateFidelity.safetensors
Creating model from config: C:\sheesh\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 13.5s (prepare environment: 17.8s, initialize shared: 2.1s, load scripts: 0.8s, create ui: 0.7s, gradio launch: 0.6s).
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
    response.raise_for_status()
  File "C:\sheesh\venv\lib\site-packages\requests\models.py", line 1026, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\sheesh\venv\lib\site-packages\transformers\utils\hub.py", line 342, in cached_file
    resolved_file = hf_hub_download(
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 1008, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 1115, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 1656, in _raise_on_head_call_error
    raise head_call_error
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 1544, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 1461, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 286, in _request_wrapper
    response = _request_wrapper(
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\file_download.py", line 310, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\sheesh\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 459, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6886806c-76d7e2226384c75f6b012784;7280d3f6-bcc5-42a6-8dc1-7103177ce320)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Program Files\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Program Files\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\sheesh\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\sheesh\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\sheesh\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\sheesh\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\sheesh\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\sheesh\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "C:\sheesh\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "C:\sheesh\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\sheesh\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\sheesh\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\sheesh\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper
    return func(*args, **kwargs)
  File "C:\sheesh\venv\lib\site-packages\transformers\modeling_utils.py", line 3540, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\sheesh\venv\lib\site-packages\transformers\utils\hub.py", line 365, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Applying attention optimization: Doggettx... done.
Model loaded in 17.5s (load weights from disk: 0.5s, create model: 5.0s, apply weights to model: 10.0s, apply float(): 0.9s, calculate empty prompt: 1.1s).
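For reference, the launch arguments shown in the log are presumably set via `COMMANDLINE_ARGS`; a minimal sketch of what that would look like in a stock `webui-user.bat` (the install path and file layout are assumptions, the flags are copied verbatim from the log above):

```shell
@echo off
rem webui-user.bat (sketch) -- flags copied verbatim from the console log above
set COMMANDLINE_ARGS=--use-directml --lowvram --opt-split-attention --opt-sub-quad-attention --disable-nan-check --no-half
call webui.bat
```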

Additional information

I've been testing this issue for a while. Here's what I have found:

First and foremost: I've been successfully using SD 1.5 for a year now and never hit this issue, or any other issue at all.

  1. In my case this is NOT a power supply issue! There is no overheating either.

  2. It occurs on both driver versions 25.5.1 and 24.9.1.

  3. I'm not sure this is a VRAM issue: I monitored VRAM usage and it never overflowed.

  4. 256x256 generates fine, even with --medvram; 512x512 triggers the issue.

  5. I've checked out several older commits, but the issue is still there.

P.S. Obviously I cannot provide the full log :( Sorry.
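In case it helps anyone reproduce the VRAM monitoring from point 3, here is a minimal sketch of how usage can be sampled during a generation. It assumes a Windows 10+ machine where the built-in `GPU Adapter Memory` performance counters are exposed through `typeperf`; the function names are hypothetical helpers, not part of the webui:

```python
"""Minimal VRAM sampler sketch (Windows 10+ only, assumes the built-in
'GPU Adapter Memory' performance counters are available via typeperf)."""
import csv
import io
import subprocess

# Dedicated-VRAM counter across all GPU adapters.
COUNTER = r"\GPU Adapter Memory(*)\Dedicated Usage"


def peak_usage_bytes(csv_text: str) -> int:
    """Parse typeperf CSV output and return the largest sample seen, in bytes."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    peak = 0
    for row in rows[1:]:          # first row is the counter-name header
        for cell in row[1:]:      # first column is the timestamp
            try:
                peak = max(peak, int(float(cell)))
            except ValueError:    # typeperf emits blanks for missing samples
                continue
    return peak


def sample(seconds: int = 30) -> int:
    """Poll typeperf once per second for `seconds` samples during a generation."""
    out = subprocess.run(
        ["typeperf", COUNTER, "-si", "1", "-sc", str(seconds)],
        capture_output=True, text=True, check=True,
    ).stdout
    return peak_usage_bytes(out)
```

Running `sample()` while kicking off a 512x512 generation and comparing the peak against the card's 4 GB of dedicated VRAM is how I would rule overflow in or out.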

Metadata

Labels
    directml (DirectML related or specific issue), pre-navi
