@psychemedia
Last active December 21, 2023 17:30
Example of running GPT4all local LLM via langchain in a Jupyter notebook (Python)
@jrobles98

> Thanks, that is helpful. However, it appears that these settings are already maxed out at 2048 by default.
>
> The file I tested with had only a few lines in it, so I think the problem might lie elsewhere.

Yes, indeed. I was hoping to find that limit documented for GPT4All, but I only found that the standard model uses 1024 input tokens. So maybe the quantized LoRA version uses a 512-token limit for some reason, although that doesn't make much sense, since quantization and LoRA only lose precision rather than dimensionality.

Anyway, I think the best way to improve on this is to try other models that we already know can handle 2048-token input. I suggest Vicuna, which was built mainly with the goal of maxing out input/output length.

If somebody can test this, that would be great.
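For reference, a minimal sketch of trying a 2048-token context with a ggml Vicuna file through LangChain's LlamaCpp wrapper (not from the gist itself; it assumes llama-cpp-python is installed, and the model path is a placeholder you would adjust):

```python
# Sketch only: load a local ggml Vicuna model with an explicit 2048-token context
# via LangChain's LlamaCpp wrapper (assumes llama-cpp-python is installed).
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain

llm = LlamaCpp(
    model_path="./ggml-vicuna-7b-4bit.bin",  # hypothetical local path
    n_ctx=2048,                              # request the full 2048-token context window
)

template = "Question: {question}\n\nAnswer:"
prompt = PromptTemplate(template=template, input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=llm)

print(chain.run("What is the capital of France?"))
```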

@JeffreyShran

JeffreyShran commented Apr 18, 2023

I'm actually using ggml-vicuna-7b-4bit.bin. This is the one I'm having the most trouble with. :)

@mtthw-meyer

This would be much easier to follow with the working code in one place instead of only scattered fragments.

@stellarkllc

Is it possible to use GPT4All as the LLM with sql_agent or pandas_agent instead of OpenAI?
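An untested sketch of what that wiring might look like, assuming LangChain's GPT4All wrapper and its create_pandas_dataframe_agent helper; the model file name is a placeholder, and whether a small local model reliably follows the agent's tool-use prompt format is a separate question:

```python
# Untested sketch: use a local GPT4All model as the LLM for LangChain's pandas agent
# in place of OpenAI. Adjust the model path to a ggml file you have downloaded.
import pandas as pd
from langchain.llms import GPT4All
from langchain.agents import create_pandas_dataframe_agent

llm = GPT4All(model="./ggml-gpt4all-l13b-snoozy.bin")  # hypothetical local model file

df = pd.DataFrame({"name": ["a", "b", "c"], "value": [1, 2, 3]})

agent = create_pandas_dataframe_agent(llm, df, verbose=True)
agent.run("What is the sum of the value column?")
```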

@zeke-john

I've installed all the packages and still get this: zsh: command not found: pyllamacpp-convert-gpt4all

@lingjiekong

> I've installed all the packages and still get this: zsh: command not found: pyllamacpp-convert-gpt4all

Try an older version of pyllamacpp: pip install pyllamacpp==1.0.7.
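A quick way to apply that suggestion from inside a notebook and confirm the converter script lands on PATH (the version pin is just the one mentioned in this thread, not a tested recommendation):

```python
# Pin pyllamacpp to the version suggested above, then check the converter is on PATH.
import shutil
import subprocess
import sys

subprocess.run([sys.executable, "-m", "pip", "install", "pyllamacpp==1.0.7"], check=True)

# Should print the full path to pyllamacpp-convert-gpt4all rather than None.
print(shutil.which("pyllamacpp-convert-gpt4all"))
```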
