
WEAREAI

A customizable client-side Fabric mod (Minecraft 1.21.1) that lets you wrap an API for ChatGPT, Gemini, Claude, or Ollama and have it take over your chat.
WEAREAI mod notes
- Client-side Fabric mod that hijacks chat when a message starts with the trigger word (default `!ai`), sends the prompt to your AI endpoint, and posts the reply back to chat.
- Configurable via Cloth Config (keybind `\` opens the screen) with options for the trigger word, max character length (applied to prompts and replies), model name, base URL, optional API key + header name, and whether to send the AI reply to the server chat or keep it client-side (see the config sketch after this list).
- Sends a single JSON POST to the configured base URL shaped like an OpenAI/Ollama-style chat request: `{"model": "<model>", "messages": [{"role": "user", "content": "<prompt>"}]}` with `Content-Type: application/json` and an optional API-key header (see the request sketch below).
- Response text is pulled from common fields (`choices[0].message.content`, `choices[0].text`, `message.content`, or `response`) and trimmed to the configured length before it is shown in chat; failures are reported locally instead of blocking chat (see the parsing sketch below).
- The mod runs entirely on the client; no server install is required. API keys are optional; set one only if your endpoint needs it.
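For illustration, a minimal sketch of a config holder mirroring the options listed above. The class name, field names, and defaults are assumptions for this sketch, not the mod's actual Cloth Config schema.

```java
// Hypothetical config holder; names and defaults are illustrative assumptions.
public class WeAreAiConfig {
    public String triggerWord = "!ai";            // messages starting with this are intercepted
    public int maxLength = 256;                   // applied to both prompts and replies
    public String model = "llama3";               // model name sent in the request body (assumed default)
    public String baseUrl = "http://localhost:11434/api/chat"; // endpoint receiving the POST (assumed)
    public String apiKey = "";                    // optional; empty means "send no auth header"
    public String apiKeyHeader = "Authorization"; // header name used when apiKey is set
    public boolean sendReplyToServer = false;     // false = show the reply client-side only
}
```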
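A minimal sketch of how the single JSON POST could be built and sent, assuming `java.net.http` and Gson are available; the class and method names (`AiHttp`, `buildBody`, `post`) are hypothetical, not the mod's real code.

```java
import com.google.gson.JsonArray;
import com.google.gson.JsonObject;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public final class AiHttp {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    /** Builds the OpenAI/Ollama-style body: {"model": ..., "messages": [{"role": "user", "content": ...}]} */
    static String buildBody(String model, String prompt) {
        JsonObject message = new JsonObject();
        message.addProperty("role", "user");
        message.addProperty("content", prompt);
        JsonArray messages = new JsonArray();
        messages.add(message);
        JsonObject body = new JsonObject();
        body.addProperty("model", model);
        body.add("messages", messages);
        return body.toString();
    }

    /** Sends the POST asynchronously so the game thread is never blocked on the HTTP call. */
    static CompletableFuture<String> post(String baseUrl, String body, String apiKey, String apiKeyHeader) {
        HttpRequest.Builder builder = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body));
        if (apiKey != null && !apiKey.isEmpty()) {
            builder.header(apiKeyHeader, apiKey); // optional API-key header, only when configured
        }
        return CLIENT.sendAsync(builder.build(), HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }
}
```

Sending asynchronously matches the note that failures are reported locally instead of blocking chat: the future's error branch can post a message to the client without stalling the render thread.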
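And a sketch of the fallback chain the notes describe for pulling reply text out of the response, again with Gson; `extractText` and `trimToLength` are hypothetical helper names.

```java
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public final class AiResponse {
    /** Tries the common fields in the documented order; returns null if none match. */
    static String extractText(String rawJson) {
        JsonObject root = JsonParser.parseString(rawJson).getAsJsonObject();

        // OpenAI-style: choices[0].message.content, then choices[0].text
        if (root.has("choices") && root.getAsJsonArray("choices").size() > 0) {
            JsonObject choice = root.getAsJsonArray("choices").get(0).getAsJsonObject();
            if (choice.has("message")) {
                JsonElement content = choice.getAsJsonObject("message").get("content");
                if (content != null && !content.isJsonNull()) return content.getAsString();
            }
            if (choice.has("text")) return choice.get("text").getAsString();
        }
        // Ollama chat style: message.content
        if (root.has("message")) {
            JsonElement content = root.getAsJsonObject("message").get("content");
            if (content != null && !content.isJsonNull()) return content.getAsString();
        }
        // Ollama generate style: response
        if (root.has("response")) return root.get("response").getAsString();
        return null;
    }

    /** Trims the reply to the configured max length before it is shown in chat. */
    static String trimToLength(String text, int maxLength) {
        return text.length() <= maxLength ? text : text.substring(0, maxLength);
    }
}
```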
Known limitations / ideas:
- Assumes your endpoint accepts an OpenAI/Ollama-like JSON body; Gemini/Claude endpoints may need a compatibility proxy.
- No streaming support; replies appear once the HTTP call finishes.
- Single-turn prompts only; conversation memory is not persisted between messages.