
LLM-to-LLM communication (LLM processor)  #57

@nembal

Description

In your blog post about the evolution of Deep Neural Nets, you argued that powerful general AIs capable of handling complex tasks would make fine-tuning local LLMs less important. More recently, however, you've suggested the concept of LLMs as processors, which assumes various models interact and communicate with one another directly.

I'm asking because I'm exploring LLM-to-LLM communication. I believe that while AGI will power many services, we'll also have specialized AIs or LLMs embedded in individual services. This points toward a decentralized computing system where not everything relies on a single model; instead, models can interact with each other directly. For that kind of AI/LLM communication, traditional APIs might not be enough. A kind of LLM gateway that improves connections between these models (discovery, communication, etc.) could be key.
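To make the gateway idea more concrete, here is a minimal sketch of what such an interface could look like. This is purely illustrative and all names (`LLMGateway`, `register`, `discover`, `relay`) are hypothetical; the handlers stand in for real model calls:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical LLM gateway: models register a capability description,
# other models discover capabilities and relay prompts through the gateway.

@dataclass
class LLMGateway:
    # capability name -> handler that turns a prompt into a response
    registry: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, capability: str, handler: Callable[[str], str]) -> None:
        """A model (or the service embedding it) advertises what it can do."""
        self.registry[capability] = handler

    def discover(self, capability: str) -> bool:
        """Another model asks whether some capability is available."""
        return capability in self.registry

    def relay(self, capability: str, prompt: str) -> str:
        """Forward a prompt from one model to another via the gateway."""
        if capability not in self.registry:
            raise KeyError(f"no model registered for {capability!r}")
        return self.registry[capability](prompt)

# Usage: a 'summarizer' model registers itself; a caller discovers and relays.
gateway = LLMGateway()
gateway.register("summarize", lambda text: text[:20] + "...")  # stand-in for a model call
assert gateway.discover("summarize")
result = gateway.relay("summarize", "A long document about decentralized LLM infrastructure")
```

In a real system the registry would hold network endpoints and richer capability metadata rather than in-process callables, but the discovery-then-relay shape is the part the gateway idea adds over point-to-point APIs.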

1/ What is your perspective on the current development of specialized LLMs and AI models?
2/ Do you see potential in gateway-style inter-AI/LLM infrastructure? If so, I would love to connect and share more.

Thank you, @karpathy, for any response!
