Security Risk: Unfiltered LLM Output May Lead to Arbitrary File Write #28

@glmgbj233

Description

File Paths:
/engy/src/engy/produce_files.py
/engy/src/engy/app_builder.py

Risk Description:
The current implementation writes LLM-generated content directly to files without proper validation or sanitization. This introduces multiple risks:

  1. Path Traversal Attacks — Malicious filenames such as ../../malicious.py could lead to unauthorized file access or overwrite.
  2. Executable Code Injection — Malicious code could be injected into output files, leading to security breaches if executed.
  3. Overwriting Critical Files — Important system or project files could be unintentionally or maliciously overwritten.

Vulnerable Code Patterns:

  1. In produce_files.py:

```python
# No validation on filename or content
with open(filename, "w") as f:
    f.write(block_content)
```

  2. In app_builder.py:

```python
# Directly passes LLM output to produce_files()
produce_files(responses[0])
```
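To make risk 1 concrete, here is a minimal, safe-to-run sketch of how an attacker-chosen filename escapes an output directory. The workspace path and filename are hypothetical stand-ins for values the LLM controls:

```python
import os

workspace = "/home/user/project/output"  # hypothetical output directory
filename = "../../.bashrc"               # attacker-chosen name in the LLM response

# Even if the name were joined onto an intended root, the ../ segments climb
# back out of it. The bare open(filename, "w") above is worse still: the
# write resolves against whatever the current working directory happens to be.
resolved = os.path.normpath(os.path.join(workspace, filename))
print(resolved)                          # /home/user/.bashrc
```

Whatever content the model emits is then written to that location verbatim, which is how risks 2 and 3 follow directly from risk 1.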

Suggested Fixes (a combined sketch of all four follows this list):

  1. Validate Filenames: Enforce a whitelist of allowed file extensions and prevent directory traversal (e.g., reject ../ or absolute paths).
  2. Scan File Content: Use regex or static analysis to detect and block potentially harmful content such as executable code.
  3. Restrict Write Locations: Limit all file writes to a controlled, sandboxed directory to contain potential damage.
  4. User Confirmation: Prompt the user for confirmation before creating or overwriting files, especially outside the default workspace.
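
A minimal sketch of how the four fixes could compose into a single wrapper. None of this is existing engy code: `safe_write`, `ALLOWED_EXTENSIONS`, `SUSPICIOUS_PATTERNS`, and `SANDBOX_DIR` are hypothetical names, and the regex denylist is a crude placeholder for proper static analysis:

```python
import os
import re

ALLOWED_EXTENSIONS = {".py", ".html", ".css", ".js", ".md", ".txt"}  # fix 1: whitelist
SUSPICIOUS_PATTERNS = [                                              # fix 2: crude denylist
    re.compile(r"\bos\.system\s*\("),
    re.compile(r"\bsubprocess\b"),
    re.compile(r"\beval\s*\("),
]
SANDBOX_DIR = os.path.abspath("./workspace")                         # fix 3: single write root


def safe_write(filename: str, content: str) -> None:
    # Fix 1: reject absolute paths, traversal segments, and unknown extensions.
    if os.path.isabs(filename) or ".." in filename.split(os.sep):
        raise ValueError(f"rejected path: {filename!r}")
    if os.path.splitext(filename)[1] not in ALLOWED_EXTENSIONS:
        raise ValueError(f"rejected extension: {filename!r}")

    # Fix 3: resolve inside the sandbox, then re-check after normalization,
    # which catches anything the string checks above might have missed.
    target = os.path.abspath(os.path.join(SANDBOX_DIR, filename))
    if os.path.commonpath([SANDBOX_DIR, target]) != SANDBOX_DIR:
        raise ValueError(f"path escapes sandbox: {filename!r}")

    # Fix 2: flag obviously dangerous content for manual review.
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(content):
            raise ValueError(f"suspicious content matched {pattern.pattern!r}")

    # Fix 4: confirm before overwriting an existing file.
    if os.path.exists(target):
        if input(f"Overwrite {target}? [y/N] ").strip().lower() != "y":
            return

    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, "w") as f:
        f.write(content)
```

With a wrapper like this, app_builder.py could route each file in responses[0] through safe_write() instead of handing the raw response to produce_files(), so nothing the model emits reaches the filesystem unchecked.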
