
why doesn't the pandas index read the headers of the df first? e.g. one of my columns is 'Ship State' but llama keeps using just 'State' in the queries
It does read it, but the LLM needs to be smart enough to figure it out. Or, you can change the prompt template
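To make the failure mode concrete: the column names are in the prompt, but if the model guesses `df['State']` instead of `df['Ship State']`, the generated pandas code simply errors. A toy illustration (the DataFrame here is made up for the example):

```python
import pandas as pd

# Toy DataFrame with the column name from the question.
df = pd.DataFrame({"Ship State": ["CA", "NY", "CA"], "Qty": [1, 2, 3]})

# These are the column names the prompt template shows the LLM:
print(list(df.columns))  # ['Ship State', 'Qty']

# If the LLM emits df['State'], pandas raises a KeyError
# when the generated code is evaluated.
try:
    df["State"]
except KeyError:
    print("KeyError: 'State' is not a column")

# The exact column name works fine:
print(df["Ship State"].tolist())  # ['CA', 'NY', 'CA']
```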

This is the default template: https://github.com/jerryjliu/llama_index/blob/4d032cf5528d130f9f366acea872e2d89d928f7b/gpt_index/prompts/default_prompts.py#L304
If you are using gpt-3.5, I suggest trying text-davinci-003
thank you but where do i add the modified prompt?
funny enough it understood everything quite well yesterday
From that file I linked, you can copy the code to create your own prompt

Then you can pass it into the query

index.query(..., pandas_prompt=DEFAULT_PANDAS_PROMPT)

https://github.com/jerryjliu/llama_index/blob/main/gpt_index/indices/struct_store/pandas_query.py#L93
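The template in that file is essentially a format string, so one option is to copy it and add an explicit instruction about exact column names. A sketch of the idea in plain Python string formatting (the `{df_str}`/`{query_str}` variable names follow the linked default prompt; the exact prompt class to wrap this in depends on your installed version, so check the file before using it):

```python
# Sketch of a stricter pandas prompt template. How you wrap this into
# a prompt object varies by llama_index version -- this just shows the
# templating idea with plain str.format().
CUSTOM_PANDAS_TMPL = (
    "You are working with a pandas dataframe in Python.\n"
    "Here is the output of `print(df.head())`:\n"
    "{df_str}\n\n"
    "Use ONLY the exact column names shown above (e.g. 'Ship State', "
    "not 'State'). Given the query below, write a Python expression "
    "using the variable `df` that answers it.\n"
    "Query: {query_str}\n"
    "Expression:"
)

rendered = CUSTOM_PANDAS_TMPL.format(
    df_str="  Ship State  Qty\n0         CA    1",
    query_str="total quantity shipped to CA",
)
print(rendered)
```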
lol it happens. OpenAI does not version their models sadly, so they can change at any time. It's super annoying actually
is the default model in llama gpt-3.5 now?
it's still text-davinci-003
i presume i can change it somewhere? πŸ™‚
definitely

Python
from langchain.chat_models import ChatOpenAI
from llama_index import ServiceContext, LLMPredictor

# can also use gpt-4 if you have access
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
isn't it for chat? will it work for gpt_index.indices.struct_store?
oh for sure. It's still an LLM. gpt-4 and gpt-3.5 just use a slightly different input system
tbh though the order of power goes gpt-4 > text-davinci-003 > gpt-3.5
so your mileage will vary if you use gpt-3.5
you've been very helpful, thank you
sorry for stupid question
nah no worries
what value does llama-pandas add vs if i just use gpt-3.5 with proper prompting?
llama index handles a lot of the under-the-hood stuff like prompt templating for you. You can always use the raw api to query gpt-3.5, it's just a bit more work.

Up to you πŸ™‚
ok , last question for today, promise!
i am struggling with response formatting. how can i 1) just get a simple string/text, and 2) is it possible to prompt llama-pd to show graphs when appropriate?
well, those were two questions πŸ™‚
1) unfortunately not at the moment. The easiest way would be to hook it up to something like a langchain agent to do that, using the index as a custom tool

2) Also not at the moment πŸ˜… This index is quite new at the moment. PRs definitely welcome if you have any changes in mind!
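On point 1, the "custom tool" route usually means putting a thin function in front of the index that coerces whatever it returns into a plain string, then registering that function with the agent. A minimal dependency-free sketch (`fake_index_query` is a stand-in for `index.query`, which is an assumption here, not the real API):

```python
# Minimal sketch: wrap the index behind a function that always returns
# a plain string, so it can be registered as a custom tool (e.g. with
# a langchain agent). fake_index_query stands in for index.query().
def fake_index_query(question: str) -> dict:
    # A real call would execute the generated pandas code;
    # here we stub the result.
    return {"response": 42, "pandas_instruction": "df['Qty'].sum()"}

def query_as_text(question: str) -> str:
    """Custom-tool wrapper: coerce the index response to a string."""
    result = fake_index_query(question)
    return str(result["response"])

print(query_as_text("total quantity?"))  # prints 42
```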