Christoph Haag
Asked on Dec 01, 2023
Tabby supports all quantization types supported by llama.cpp, but it uses q8 as the default.
If you want to convert your own model for use with Tabby, you can follow the "How can I convert my own model for use with Tabby?" guide at this link.
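As a rough sketch of what that conversion usually involves (assuming a recent llama.cpp checkout; script and binary names have changed across versions, so check your copy of the repo):

```shell
# Convert a Hugging Face model directory to GGUF (full precision),
# then quantize it to q8_0, matching Tabby's default quantization.
# Paths and the model directory name here are placeholders.
python convert_hf_to_gguf.py ./my-model-dir --outfile my-model-f16.gguf

# llama-quantize ships with llama.cpp (older versions call it "quantize")
./llama-quantize my-model-f16.gguf my-model-q8_0.gguf q8_0
```

The resulting `.gguf` file is what you would then register with Tabby following the linked guide.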