Find answers from the community

TikGrig
Offline, last seen 3 months ago
Joined September 25, 2024
Hey, wanted to ask if SentenceSplitter is deprecated? Doesn't seem to be in the latest version.
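(For anyone searching later: as far as I can tell the class itself wasn't removed, the import path just moved between releases. A minimal check, trying both paths; the chunk sizes are placeholder values:)

try:
    from llama_index.core.node_parser import SentenceSplitter  # 0.10+ releases
except ImportError:
    from llama_index.node_parser import SentenceSplitter  # 0.9.x releases

splitter = SentenceSplitter(chunk_size=1024, chunk_overlap=20)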
10 comments
Hey guys, recently gpt-3.5-turbo started making up too much stuff and saying 'suppose' far too often.

Maybe you are aware of what has changed recently, and maybe have suggestions for me?

Here's what I am using:

from llama_index import ServiceContext, get_response_synthesizer, set_global_service_context
from llama_index.llms import OpenAI
from llama_index.memory import ChatMemoryBuffer
from llama_index.prompts.default_prompts import DEFAULT_REFINE_PROMPT

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(
    llm=llm, chunk_size=1024, callback_manager=callback_manager
)
set_global_service_context(service_context)
response_synthesizer = get_response_synthesizer(response_mode="refine")

memory = ChatMemoryBuffer.from_defaults(token_limit=2000)
chat_engine = testrail_project_index.as_chat_engine(
    chat_mode="context",
    memory=memory,
    service_context=service_context,
    refine_template=DEFAULT_REFINE_PROMPT,
    system_prompt=(
        f"Your name is Mantis, assisting only with questions related to {project_name}. "
        "You are able to have normal interactions as long as they don't discuss something "
        f"outside the context. Only talk about the product {project_name} based on its test "
        "cases through data provided to you in context. If something is not related to the "
        "context, explain that you can't answer."
    ),
    response_synthesizer=response_synthesizer,
    similarity_top_k=5,
)
11 comments
Hey guys, I am encountering a weird issue. I am using the context chat engine and for most of my messages I get great replies, but for some (pretty specific ones, like "how to hold an order") I get the following error:


Exception in thread Thread-18 (process_event):
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/tigran/Desktop/Project llama/chat.py", line 417, in process_event
    response = chat_engine.chat(text)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tigran/Desktop/Project llama/venv/lib/python3.11/site-packages/llama_index/chat_engine/context.py", line 125, in chat
    prefix_messages = self._get_prefix_messages_with_context(context_str_template)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tigran/Desktop/Project llama/venv/lib/python3.11/site-packages/llama_index/chat_engine/context.py", line 114, in _get_prefix_messages_with_context
    context_str = context_str_template.format(system_prompt=system_prompt)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: '"content"'
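(For anyone hitting the same thing, a hedged guess rather than a confirmed diagnosis: the KeyError suggests the retrieved node text contains a literal {"content"} fragment, which str.format then treats as a replacement field. A minimal workaround sketch, assuming you control ingestion, is to escape curly braces in the document text before indexing:)

def escape_braces(text: str) -> str:
    # double the braces so str.format leaves them alone
    return text.replace("{", "{{").replace("}", "}}")

for doc in documents:  # `documents` is whatever your loader returned
    doc.text = escape_braces(doc.text)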
9 comments
Question:

Do chat engines use similarity_top_k? If not, how can I make them look inside more nodes when answering than they do now?
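(For anyone searching later: the context chat engine forwards retriever kwargs such as similarity_top_k, as the other snippets in this thread already do. A minimal sketch:)

chat_engine = index.as_chat_engine(  # `index` is your VectorStoreIndex
    chat_mode="context",
    similarity_top_k=10,  # retrieve more nodes per message than the default
)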
2 comments
Hey guys, wanted to ask if there's a way to get sources at the end of the chat engine response? By sources I mean showing the user where the answer's content came from.
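(A minimal sketch of one way, assuming a context chat engine: the chat response carries the retrieved nodes in source_nodes, and their metadata can be shown as citations. The "file_name" key is just an example; use whatever metadata your loader attaches.)

response = chat_engine.chat("how to hold an order")
print(response.response)
for node_with_score in response.source_nodes:  # nodes the answer drew on
    print(node_with_score.node.metadata.get("file_name"), node_with_score.score)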
3 comments
Hey guys, is there a way to store the nodes in MongoDB with encryption, so I'll be able to decrypt them when using them for querying?
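(A minimal sketch under the assumption that you encrypt the node text yourself before writing to Mongo and decrypt it after reading; Fernet is just an example cipher here, not something LlamaIndex provides out of the box:)

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this somewhere safe, e.g. a secrets manager
cipher = Fernet(key)

ciphertext = cipher.encrypt(node.get_content().encode())  # before inserting into Mongo
plaintext = cipher.decrypt(ciphertext).decode()           # after reading, before querying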
2 comments
Hey guys, I have this code, where I use the chat engine I had set up earlier based on the documentation here: https://docs.llamaindex.ai/en/stable/examples/docstore/MongoDocstoreDemo.html


It answers correctly using the storage_context, but when I restart my server, it still answers, just no longer from the context; it only uses the system_prompt. I've tried lots of things but still don't know why the storage_context doesn't fully work after the restart. It seems to load the index correctly.

def initialize_confluence_chat_engine(self, team_id):
    try:
        team_document = user_collection.find_one({'team_id': team_id})
        if not team_document or 'confluence_index_id' not in team_document:
            raise ValueError("Confluence settings have not been configured for this team.")
        confluence_index_id = team_document['confluence_index_id']
        storage_context = storage_context_manager.get_storage_context(team_id, mongo_uri, mongo_db_name)
        confluence_space_index_from_storage = load_index_from_storage(storage_context, index_id=confluence_index_id)
        space_names = team_document.get('space_names', '')

        chat_engine = confluence_space_index_from_storage.as_chat_engine(
            chat_mode="context",
            service_context=service_context,
            refine_template=DEFAULT_REFINE_PROMPT,
            system_prompt=f"""Your name is Tiko. You're set up to assist with the Confluence spaces: {space_names}...""",
            response_synthesizer=response_synthesizer,
            similarity_top_k=5
        )
        return chat_engine
    except Exception as e:
        logging.error(f"An error occurred while initializing the Confluence chat engine: {e}")
        traceback.print_exc()
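(One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: MongoDocumentStore and MongoIndexStore persist nodes and index metadata, but with the default in-memory vector store the embeddings are not in Mongo, so a restart can leave the engine with nothing to retrieve. get_storage_context above is your own helper, so this is only a minimal sketch of what its Mongo-backed equivalent might look like, following the linked demo:)

from llama_index import StorageContext
from llama_index.storage.docstore import MongoDocumentStore
from llama_index.storage.index_store import MongoIndexStore

storage_context = StorageContext.from_defaults(
    docstore=MongoDocumentStore.from_uri(uri=mongo_uri, db_name=mongo_db_name),
    index_store=MongoIndexStore.from_uri(uri=mongo_uri, db_name=mongo_db_name),
)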
3 comments
TikGrig

Hello,

Quick question: How can I retrieve nodes from the vector index? I am using MongoDB Reader.
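(A minimal sketch of one way, assuming `index` is the vector index built from the MongoDB Reader documents:)

retriever = index.as_retriever(similarity_top_k=5)
nodes = retriever.retrieve("your query here")  # returns NodeWithScore objects
for n in nodes:
    print(n.score, n.node.get_content()[:200])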
19 comments
I have built a context chat engine (which is working quite well!). I have two separate functions that create VectorStoreIndex A and VectorStoreIndex B respectively when they are called.

I want to handle the case where, once VectorStoreIndex A exists and VectorStoreIndex B also gets created, a new context chat engine is created that uses the nodes of both VectorStoreIndex A and VectorStoreIndex B. How can I make this work? I explored the documentation and some session videos but didn't find quite the solution I wanted. What would you suggest? I am just an enthusiast, so I lack some technical knowledge.
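(A minimal sketch of one approach, assuming both indexes keep their nodes in a readable docstore: pull the nodes out of each and build a third index over all of them for the chat engine. Note this re-embeds the nodes.)

all_nodes = list(index_a.docstore.docs.values()) + list(index_b.docstore.docs.values())
combined_index = VectorStoreIndex(nodes=all_nodes)
chat_engine = combined_index.as_chat_engine(chat_mode="context", similarity_top_k=5)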
12 comments
TikGrig

Hey!

I've changed chat_mode="condense_question" to chat_mode="openai" and now I am getting lots of random responses outside the context. For the same question, condense_question mode was able to answer within the context, while openai/best didn't (it answered outside the context; I noticed that it doesn't call the query function every time I send a message).

Is there a way to fix it? Or is there a better way to implement OpenAIAgent in my case? (Still learning, so sorry if I lack some knowledge.)
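(One hedged idea, since in openai mode the agent decides per message whether to call the query tool: nudge it with a system prompt that tells it to always consult the tool first. Whether your installed version also supports forcing tool calls outright is worth checking separately.)

chat_engine = index.as_chat_engine(
    chat_mode="openai",
    system_prompt=(
        "Always call the query tool to look up the indexed documents "
        "before answering; never answer from general knowledge alone."
    ),
)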
6 comments
TikGrig

Hey!

My ReAct chat engine makes the correct observation, but its final response doesn't mention it, so as a chat user it reads strangely.

Is there a way to fix it? (Or maybe I'm doing something wrong?)
6 comments
Hello, can I make my chat_engine consider the conversation history when querying, i.e. take what has already been said into account when it retrieves?
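(A minimal sketch of one built-in way: the condense_question chat mode rewrites each new message into a standalone question using the chat history before querying the index.)

chat_engine = index.as_chat_engine(chat_mode="condense_question")
chat_engine.chat("What is an order hold?")
chat_engine.chat("And how do I remove it?")  # "it" is resolved from the history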
2 comments