Install

Guys, I need help. The libraries are all installed, but I get this error:

Plain Text
from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTSimpleVectorIndex, LLMPredictor, PromptHelper
ModuleNotFoundError: No module named 'llama_index'
Are you using a venv? Highly recommend you do
No, I didn't use one. Thanks, I'll try.
πŸ‘ if you are using bash, here's how it looks

Plain Text
python -m venv venv
source venv/bin/activate
pip install llama-index
This keeps the Python packages separate for each project you work on, which is pretty helpful
No, I'm on Windows in VS Code
Ah OK. If you are using powershell, it's slightly different

It'll be

Plain Text
.\venv\Scripts\Activate.ps1

I think lol
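For reference, the full PowerShell flow would be something like this (a sketch; it assumes python is on your PATH and that your execution policy allows running local scripts):

Plain Text
python -m venv venv
.\venv\Scripts\Activate.ps1
pip install llama-index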
Also, GPTSimpleVectorIndex has been renamed to GPTVectorStoreIndex, so depending on the llama-index version you're using, you may see related errors
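In other words, the import changes with the version, roughly like this (a minimal sketch of the rename):

Plain Text
# older llama-index versions
from llama_index import GPTSimpleVectorIndex

# newer versions: same index, new name
from llama_index import GPTVectorStoreIndex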
Thanks guys for the tips, it's appreciated. Also, can anyone tell me who has built a llama-index + ChatGPT bundle for a Telegram bot, with additional training on a client's data?
Hi. Can someone tell me why the response is being truncated?

Plain Text
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 9 tokens
[retrieve] Total embedding token usage: 9 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 2061 tokens
[get_response] Total LLM token usage: 2061 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
[get_response] Total embedding token usage: 0 tokens
----------------------------------------
ChatGPT says: When your borders are violated, you need to show strength and defend your rights. You must be sure that your boundaries must be respected and you must be ready to take the necessary measures to protect yourself. You can read
What does your index setup look like?

Normally OpenAI defaults to 256 max tokens. But that looks like much less than 256 tokens πŸ€”
Plain Text
def build_index(file_path):
    max_input_size = 4096
    num_outputs = 512
    max_chunk_overlap = 20
    chunk_size_limit = 256

    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))
And what type of index are you using? A vector index?
Plain Text
from pathlib import Path
from llama_index import download_loader, SimpleDirectoryReader, GPTVectorStoreIndex, LLMPredictor, PromptHelper
from langchain.chat_models import ChatOpenAI

    # ...continuing inside build_index from the previous snippet
    download_loader('SimpleDirectoryReader')
    documents = SimpleDirectoryReader(input_files=[file_path]).load_data()
    index = GPTVectorStoreIndex.from_documents(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    return index
Try increasing the top k when you query (your chunk size is quite small)

Plain Text
query_engine = index.as_query_engine(similarity_top_k=5)

Or optionally, increase the chunk size (usually around 1024 is optimal)
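Against the build_index you posted, both changes together would look roughly like this (same variable names as your snippet; the values are just the suggested ones):

Plain Text
# inside build_index: larger chunks, so each retrieved node carries more text
chunk_size_limit = 1024  # was 256
prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

# at query time: retrieve more chunks per question
query_engine = index.as_query_engine(similarity_top_k=5)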
Thank you, I will try
It still cuts the response off 😦
Oh wait, looking at your code, are you using the service context?

The prompt helper and llm predictor should go into the service context, then the service context goes into the index πŸ˜…
I'm new to this πŸ™‚
Plain Text
DocxReader = download_loader("DocxReader")

loader = DocxReader()
documents = loader.load_data(file=Path(r'C:\Users\adepu\Desktop\SF_DA\python\my_tg_bot\docsΠ³Ρ€Π°Π½ΠΈΡ†Ρ‹.docx'))

file_path = input('Enter the path of the file/doc:')

def build_index(file_path):
    max_input_size = 4096
    num_outputs = 256
    max_chunk_overlap = 20
    chunk_size_limit = 1024

    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))

    download_loader('SimpleDirectoryReader')
    documents = SimpleDirectoryReader(input_files=[file_path]).load_data()
    index = GPTVectorStoreIndex.from_documents(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    return index


index = build_index(file_path=file_path)
query_engine = index.as_query_engine(similarity_top_k=5)

def chatbot(prompt):
    return query_engine.query(prompt)

while True:
    print('########################################')
    pt = input('ASK: ')
    if pt.lower() == 'end':
        break
    response = chatbot(pt)
    print('----------------------------------------')
    print('ChatGPT says: ')
    print(response)
Try adding this

Plain Text
from llama_index import ServiceContext

sc = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
...
index = GPTVectorStoreIndex.from_documents(documents, service_context=sc)
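Putting it together, your build_index would look something like this (a sketch against the 0.6-era llama_index API used in this thread; the parameter values are the ones from your code):

Plain Text
from llama_index import (SimpleDirectoryReader, GPTVectorStoreIndex,
                         LLMPredictor, PromptHelper, ServiceContext)
from langchain.chat_models import ChatOpenAI

def build_index(file_path):
    max_input_size = 4096
    num_outputs = 256
    max_chunk_overlap = 20
    chunk_size_limit = 1024

    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))

    # the predictor and prompt helper go into the service context...
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)

    documents = SimpleDirectoryReader(input_files=[file_path]).load_data()
    # ...and the service context goes into the index
    return GPTVectorStoreIndex.from_documents(documents, service_context=service_context)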
Super, it worked πŸ™‚ but it responds in a different language than the request πŸ™‚
Fixed it. Thank you very much, I will keep digging in and improving πŸ™‚
Ah, that's a little annoying. Maybe add something like "Respond using language X" to the query?
There are also internal prompt templates that are written in English...
but maybe the above will work lol
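If appending it to the query isn't enough, the internal templates can also be overridden; here's a rough sketch assuming the 0.6-era QuestionAnswerPrompt API (the {context_str} and {query_str} placeholders come from llama_index's default template, and the target language is just an example):

Plain Text
from llama_index import QuestionAnswerPrompt

QA_TEMPLATE = QuestionAnswerPrompt(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using the context, answer the question in Russian: {query_str}\n"
)

query_engine = index.as_query_engine(similarity_top_k=5, text_qa_template=QA_TEMPLATE)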
Everything is fine. Works great. Thank you for your time.