general

How can I confirm that the LLM found the results of a query from the RAG in Tabby?

I'm having issues with Tabby where the LLM doesn't seem to be finding the results of a query from the RAG. I added a new source code folder under Repository providers, and a new scheduler job kicks off, but when it finishes, the code browser shows 'No Data'. Is there a way to confirm that the LLM found the results of a query from the RAG? Should I include the embedding model in the config file?


TenasticImmerser

Asked on Jun 19, 2024

  • In the upcoming 0.13 release of Tabby, chat responses will show the sources that were used to generate them, which makes it easy to see whether the RAG lookup succeeded.
  • To confirm that the LLM found the results of a query from the RAG in Tabby, include the embedding model in the config file (see the config sketch after this list).
  • You can also print out the sources to judge whether the LLM really found the class you are asking about or is making it up (see the query sketch after this list).
  • Restarting the Docker container may not resolve the 'No Data' issue in the code browser; verifying the embedding model configuration and trying the source-display feature in the new release are better ways to troubleshoot.
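
For reference, here is a minimal sketch of what the embedding section of Tabby's `~/.tabby/config.toml` can look like. The `model_id` value and the HTTP variant's keys are assumptions based on the documented model-configuration pattern; verify them against the docs for your Tabby version before copying.

```toml
# ~/.tabby/config.toml — embedding model sketch (verify keys against your Tabby version)

# Option A: run a local embedding model (model_id is an assumed example)
[model.embedding.local]
model_id = "Nomic-Embed-Text"

# Option B: point at an external embedding endpoint instead (values are placeholders)
# [model.embedding.http]
# kind = "llama.cpp/embedding"
# api_endpoint = "http://localhost:8888"
```

After editing the config, restart the Tabby server and let the scheduler job re-run so the repository is re-indexed with the configured embedding model; if the index builds correctly, the code browser should show data instead of 'No Data'.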
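
For the "print out the sources" check, a quick script that asks the model about a specific class and prints the raw reply can help you judge whether the answer is grounded in your repository or hallucinated. This is only a sketch: the OpenAI-compatible route `/v1/chat/completions`, the port, and the auth header are assumptions, so adjust them to match your deployment, and replace `FooBar` with a class that actually exists in your indexed repo.

```python
# Hypothetical sketch: query a locally running Tabby server and print the reply
# so you can judge whether the model actually found the class or is making it up.
# Endpoint path, port, and auth token are ASSUMPTIONS — adjust for your setup.
import json
import urllib.request

TABBY_URL = "http://localhost:8080/v1/chat/completions"  # assumed OpenAI-compatible route
AUTH_TOKEN = "your-tabby-token"  # placeholder; create a token in Tabby's web UI

payload = {
    "messages": [
        {"role": "user", "content": "What does the class FooBar in my indexed repo do?"}
    ],
    "stream": False,
}

req = urllib.request.Request(
    TABBY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {AUTH_TOKEN}",
    },
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Print the full response; if the reply cites concrete details from your code
# (file paths, signatures), the RAG lookup worked. Vague or invented details
# suggest the index was never consulted.
print(json.dumps(body, indent=2))
```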
Jun 21, 2024