Plugin directory

The following plugins are available for LLM. Here’s how to install them.
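Installation uses LLM's own plugin commands; a quick sketch (llm-cluster is one of the plugins listed below, used here purely as an example):

```shell
# Install a plugin into the same environment as LLM itself
llm install llm-cluster

# List the plugins that are currently installed
llm plugins

# Remove a plugin again (-y skips the confirmation prompt)
llm uninstall llm-cluster -y
```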

Local models

These plugins all help you run LLMs directly on your own computer:

Remote APIs

These plugins can be used to interact with remotely hosted models via their API:

If an API model host provides an OpenAI-compatible API, you can also configure LLM to talk to it without needing an extra plugin.
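This is done through an extra-openai-models.yaml file in the LLM configuration directory. A sketch of an entry for a hypothetical local server (the model_id, model_name and URL below are placeholders, not real defaults):

```yaml
# Hypothetical entry: registers a model served from a local
# OpenAI-compatible endpoint under the alias "localmodel"
- model_id: localmodel
  model_name: my-model
  api_base: "http://localhost:8000/v1"
```

Once registered, the model can be used by its model_id like any other model, e.g. llm -m localmodel 'prompt here'.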

Embedding models

Embedding models generate embedding vectors for text, which LLM can then store and search.

Extra commands

  • llm-cmd accepts a prompt for a shell command, runs that prompt through a model and populates the result in your shell, so you can review it, edit it and then hit <enter> to execute it or Ctrl+C to cancel.

  • llm-python adds an llm python command for running a Python interpreter in the same virtual environment as LLM. This is useful for debugging, and also provides a convenient way to interact with the LLM Python API if you installed LLM using Homebrew or pipx.

  • llm-cluster adds an llm cluster command for calculating clusters for a collection of embeddings. Calculated clusters can then be passed to a Large Language Model to generate a summary description.

  • llm-jq lets you pipe in JSON data along with a prompt describing a jq program; it asks the model to generate that program, then executes it against the JSON.
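Usage sketches for the commands above, assuming the relevant plugins are installed (the prompts, collection name and JSON here are made-up examples, and output depends on the model):

```shell
# llm-cmd: ask for a shell command, review/edit it, then hit enter to run it
llm cmd undo last git commit

# llm-python: open a Python interpreter in LLM's own environment
llm python

# llm-cluster: group an existing embedding collection into 10 clusters
llm cluster my-collection 10

# llm-jq: describe a jq program in plain English and run the generated
# program against the piped-in JSON
echo '[{"name": "one"}, {"name": "two"}]' | llm jq 'extract the names'
```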

Just for fun