
LoveLLM
A mod that uses a small embedded large language model to generate messages, overriding the ones normally produced by the love potion.
| Date uploaded | 4 days ago |
| Version | 1.0.2 |
| Download link | DevN-LoveLLM-1.0.2.zip |
| Downloads | 3388 |
| Dependency string | DevN-LoveLLM-1.0.2 |
This mod requires the following mods to function

BepInEx-BepInExPack
BepInEx pack for Mono Unity games. Preconfigured and ready to use.
Preferred version: 5.4.2100

README
LoveLLM Mod for R.E.P.O - Version 1.0
THIS MARKDOWN WAS GENERATED BY AI - SORRY IF THERE'S AN ISSUE, I'M LAZY AND DOING THIS FOR FUN
LoveLLM is a mod for the game R.E.P.O that overrides the love potion messages with responses generated by a small, configurable embedded large language model (LLM). The prompts that drive the LLM can be customized, and any model in the GGUF format is supported.
Features
- LLM Integration: Replaces the default love potion messages with responses generated by an embedded LLM.
- Customizable Prompts: Configure the behavior of the LLM using a text file that defines the prompt.
- Supports GGUF Models: The mod supports any large language model in the GGUF format, allowing for easy integration of different models.
- Configurable: The model and prompt settings can be customized to suit your playstyle or requirements.
- REQUIRES A CPU WITH AVX INSTRUCTIONS!
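If you're not sure whether your CPU has AVX, a quick way to check from outside the game is the snippet below. This is not part of the mod and assumes the optional py-cpuinfo package; it only reads the CPU feature flags.

```python
# Quick AVX check, independent of the mod. Requires: pip install py-cpuinfo
from cpuinfo import get_cpu_info

flags = get_cpu_info().get("flags", [])
print("AVX supported:", "avx" in flags)
print("AVX2 supported:", "avx2" in flags)
```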
Installation
- Download the mod:
  - Download the LoveLLM mod from the official mod repository or the source you got it from.
- Install the mod:
  - Place the mod files into the Plugins folder of your BepInEx installation directory (a possible layout is sketched after this list).
- Configure the model:
  - In the Models folder, locate the prompt.txt file. Open it with a text editor to modify the prompt or the behavior of the LLM.
  - Add your desired model in GGUF format and configure any other settings. https://huggingface.co/HuggingFaceTB
  - See https://blog.steelph0enix.dev/posts/llama-cpp-guide/#building-the-llama for instructions on how to convert a model to GGUF.
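A rough sketch of where things could end up, assuming the archive ships a plugin DLL plus the Models folder. The exact file and folder names below are illustrative, not guaranteed to match the release:

```
<R.E.P.O install>/
  BepInEx/
    plugins/
      LoveLLM.dll          # hypothetical plugin DLL name
      Models/
        Prompts.txt        # prompt definitions (see Configuration below)
        SmolLM2-1.7B.gguf  # default model referenced by the config
```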
Configuration
- prompt.txt: This is where you define the prompts. Modify this text file to adjust the prompts. The value after the semicolon controls which type of chat possess message gets generated (Love potion = makes your eyes pink).
  - 1 = LovePotion, 2 = Ouch, 3 = SelfDestruct, 4 = Betrayal, 5 = SelfDestructCancel
Example:
You think the person {playerName} is sexy. Tell {playerName} that you just cannot stop thinking about them. Be creative but keep it short, one sentence. Use slang;1
In this example, the prompt text is on the left-hand side of the semicolon and the chat effect id is on the right.
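As a sketch of how lines in this format could be split into a prompt and a chat effect id (illustrative only; the mod itself is a C# BepInEx plugin and its actual parsing code is not shown here):

```python
# Illustrative only: split prompt lines of the form "<prompt text>;<chat id>".
def parse_prompts(path):
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            # Split on the LAST semicolon: text before it is the prompt,
            # the number after it is the chat possess id (1-5).
            prompt, _, chat_id = line.rpartition(";")
            entries.append((prompt, int(chat_id)))
    return entries
```

Splitting on the last semicolon is just a defensive choice for this sketch; whether the mod tolerates semicolons inside the prompt text is not documented, so it is safest to avoid them.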
Example Prompt File
The person {playerName} broke your heart. Generate some profanity about {playerName} use internet short hand. Keep it short, two sentences. Don't censor;4
You're angry at the player whose name is {playerName}. Generate profanity about them. Keep it short, two sentences. Don't censor;4
You think the person {playerName} is hideous. You don't understand why anyone would like them. Generate a mean insult about {playerName}. Keep it short, two sentences;4
You think the person {playerName} is sexy. Come up with a pick-up line for {playerName}. Keep it short, one sentence.;1
You think the person {playerName} is sexy. Tell {playerName} that you just cannot stop thinking about them. Be creative but keep it short, one sentence. Use slang;1
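For illustration, filling in the {playerName} placeholder and looking up the effect name for an id could look like the sketch below. Again, this is not the mod's code, and the player name is made up.

```python
# Illustrative only: fill the {playerName} placeholder and map an id to its effect.
CHAT_EFFECTS = {1: "LovePotion", 2: "Ouch", 3: "SelfDestruct",
                4: "Betrayal", 5: "SelfDestructCancel"}

def build_prompt(template, player_name):
    # Plain string replacement; "{playerName}" is the placeholder used in the prompt file
    return template.replace("{playerName}", player_name)

template = ("You think the person {playerName} is sexy. "
            "Come up with a pick-up line for {playerName}. Keep it short, one sentence.")
print(build_prompt(template, "SomePlayer"))  # hypothetical player name
print(CHAT_EFFECTS[1])                       # -> LovePotion
```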
Example Config
## Plugin GUID: LoveLLM
[General]
## Prompt config. Each entry in the prompt config is delimited by a new line, and separated with a semicolon. The value to the left of the semicolon is the prompt and to the right is the chat possess id, so 1 is pink love potion and 4 is red.
# Setting type: String
# Default value: Models/Prompts.txt
Prompts = Models/Prompts.txt
## Ngl parameter to pass to llama
# Setting type: Int32
# Default value: 99
ngl = 99
## n_ctx parameter to pass to llama. Increasing this will increase the memory footprint and maybe get some better responses. Idk, it's finicky.
# Setting type: Int32
# Default value: 2048
n_ctx = 2048
## Model to load
# Setting type: String
# Default value: Models/SmolLM2-1.7B.gguf
Model = Models/SmolLM2-1.7B.gguf
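The mod embeds llama natively, but ngl and n_ctx correspond to standard llama.cpp parameters (number of GPU-offloaded layers and context window size). For reference, here is roughly how the same settings look through the llama-cpp-python bindings; this is an assumption for illustration, not what the mod actually runs:

```python
# Illustrative only (pip install llama-cpp-python): how ngl and n_ctx map onto
# llama.cpp parameters. The mod embeds llama natively; this is not its code.
from llama_cpp import Llama

llm = Llama(
    model_path="Models/SmolLM2-1.7B.gguf",  # default model from the config above
    n_gpu_layers=99,  # "ngl": number of layers to offload to the GPU
    n_ctx=2048,       # context window size; larger uses more memory
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say something short and sweet."}],
    max_tokens=48,
)
print(out["choices"][0]["message"]["content"])
```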
CHANGELOG
v1.0.0
- Initial release.