Create-llama

Hi there, I am trying to spin up a new instance of Create LlamaIndex App. I used the command npx create-llama@latest and the app loads. I connected my OpenAI API key and set it to GPT-3.5 Turbo, but I got the following error when I submitted a message in the front-end Next.js app text area, despite just typing "hi there" into the box.
[Attachment: image.png]
I don't think that you should be getting this. Something is weird here tbh
How are you ingesting docs?
literally just tried the default standard PDF (101.pdf) initially, then deleted that and tried a smaller .txt file I wrote, ran npm run generate, and restarted the app
I put the .txt into /data in the Next.js app
Is this with the Next.js backend or Python?
or which settings in create-llama did you use?
this definitely worked fine for me just the other day
well, glad it's not the Python package 😆 I'm the main maintainer over there

Let me create a new app and see if I reproduce
settings were, er: HTML, Next.js, TypeScript, ESLint, and the chat-with-data option
Thanks Logan, appreciate it. I really want to use llamaindex!
I'm using Node 20.10.0
Reproduced it! I also fixed it.

It seems maxTokens is hardcoded to 2048. I'm not sure why this was done though.

In any case, edit app/api/chat/route.ts on line 47 and change maxTokens to something smaller, like 512.
[Attachment: image.png]
In fact, you can probably just remove that line entirely
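
Roughly, the change looks like this (a sketch only; the exact lines in the scaffolded route.ts depend on the create-llama version you generated, but the OpenAI constructor options are the relevant part):

```typescript
// app/api/chat/route.ts (approximate; your scaffold may differ slightly)
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  model: "gpt-3.5-turbo",
  // maxTokens: 2048, // the hardcoded value that triggered the error
  maxTokens: 512,     // smaller cap, or drop the option entirely to use the default
});
```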
Thanks. Out of interest, am I able to easily use my own open-source LLM? I have it hosted on an HTTP endpoint and I have a bunch of parameters I need to pass to it, rather than using your ContextChatEngine. Is that possible / a good idea? I mainly want to use LlamaIndex for the vector store (which I think runs locally?). Thanks
Yea the vector store part is mainly in memory, saved to disk

Sadly the LLM functions are a little lacking in TypeScript

We have a few integrations with other LLM Providers.

For a completely custom LLM though, I would just implement the LLM class directly and use it

Although if you aren't familiar with JS/TS, that might be a challenge 😅 An example of implementing Anthropic is here:
https://github.com/run-llama/LlamaIndexTS/blob/b9a5a0498a158e5281333f5176e482c64a5fd789/packages/core/src/llm/LLM.ts#L652
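
If it helps, here's a very rough sketch of the HTTP-wrapping side of a custom LLM. The endpoint URL, request payload, and response shape below are placeholders for your own hosted model, and the actual interface methods to implement should be copied from the Anthropic example linked above:

```typescript
import type { ChatMessage } from "llamaindex";

// Placeholder endpoint for your own hosted model.
const ENDPOINT = "http://localhost:8000/generate";

async function callHostedModel(messages: ChatMessage[]): Promise<string> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // Flatten chat history into a prompt; adjust to whatever your model expects.
      prompt: messages.map((m) => `${m.role}: ${m.content}`).join("\n"),
      // Pass any extra parameters your model needs here.
      temperature: 0.7,
      max_new_tokens: 512,
    }),
  });
  if (!res.ok) throw new Error(`LLM endpoint returned ${res.status}`);
  const data = await res.json();
  return data.text; // adjust to your endpoint's response shape
}
```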
sounds like it might be easier to use Pinecone or some hosted vector store
yeaaa maybe. Although there are other features offered by llama-index that make it worth using, unless you really only need pure LLM chat
I would like to make using more LLMs in the TS package easier
yea just trying to build a character chatbot with the backstory and other context being in the vector store
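
For that kind of setup, something like the following minimal sketch might be enough. It assumes a backstory file under ./data (a hypothetical path) and uses storageContextFromDefaults / VectorStoreIndex to persist the otherwise in-memory index to disk; exact query signatures vary between llamaindex versions:

```typescript
import {
  Document,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";
import fs from "node:fs/promises";

async function main() {
  // Load the character backstory (placeholder path); in create-llama,
  // `npm run generate` does this ingestion step over ./data for you.
  const text = await fs.readFile("./data/backstory.txt", "utf-8");

  // The default vector store is in-memory; persistDir saves it to disk
  // so the index can be reloaded instead of re-embedded every run.
  const storageContext = await storageContextFromDefaults({
    persistDir: "./storage",
  });
  const index = await VectorStoreIndex.fromDocuments(
    [new Document({ text })],
    { storageContext },
  );

  // Query the stored context (recent llamaindex versions take an options
  // object like this; older ones take a plain string).
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What is the character's backstory?",
  });
  console.log(response.toString());
}

main().catch(console.error);
```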