Find answers from the community

falconview_99
Joined September 25, 2024
In the Paul Graham essay example on GitHub (https://github.com/jerryjliu/llama_index/tree/main/examples/paul_graham_essay/data) I notice that the ASCII/text data is pretty neatly organized into lines, each followed by \n\n. In general, if we do have the luxury of controlling the contents of the txt file, what's the best way to structure the .txt files for ingest (VectorDB) to optimize subsequent performance (search and otherwise)? How long should groups of lines be? Newlines between them? Etc.
4 comments
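One answer here depends on how the ingester splits text, but a common convention is exactly what the example data does: short, self-contained paragraphs separated by blank lines, packed into chunks of a few hundred to ~1000 characters. A minimal, library-free sketch of paragraph-aware chunking (the `pack_paragraphs` helper and `target_chars` default are illustrative assumptions, not llama_index API):

```python
# Sketch: pack blank-line-separated paragraphs into chunks of roughly
# `target_chars` characters, so each chunk stays a coherent unit and no
# paragraph is split mid-thought. Hypothetical helper, not llama_index API.

def pack_paragraphs(text: str, target_chars: int = 1000) -> list[str]:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would overshoot.
        if current and size + len(para) > target_chars:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += len(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks


text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
chunks = pack_paragraphs(text, target_chars=35)
```

The point of the blank-line convention is that a splitter can prefer paragraph boundaries over arbitrary character offsets, which tends to keep retrieved chunks semantically whole.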
Anyone using llamaindex as a means to do indexing and search together with an OpenAssistant backend? Although I've tinkered with llamaindex and VectorIndex to build doc-Q&A with OpenAI, I'm not entirely clear on which parts of that to change when using an OpenAssistant model on the backend (oasst-sft-4-pythia-12b-epoch-3.5). Any online examples or snippets floating around?
3 comments
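In most doc-Q&A stacks the index and retrieval stay untouched when you swap backends; only the object that turns (context, question) into an answer changes. A library-free sketch of that seam (all class names here are hypothetical illustrations, not llama_index or OpenAssistant API):

```python
# Sketch of the backend-swap seam: retrieval is unchanged, only the
# completion backend differs. Class names are hypothetical.

from typing import Protocol


class CompletionBackend(Protocol):
    def complete(self, prompt: str) -> str: ...


class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        # Real code would call the OpenAI API here.
        return f"[openai] {prompt}"


class OpenAssistantBackend:
    def complete(self, prompt: str) -> str:
        # Real code would run a local HF pipeline for the
        # oasst-sft-4-pythia-12b checkpoint here.
        return f"[oasst] {prompt}"


def answer(question: str, retrieved_context: str, llm: CompletionBackend) -> str:
    # The prompt template is the main backend-specific piece to tune:
    # OpenAssistant models were trained on <|prompter|>/<|assistant|>
    # markers, whereas OpenAI chat models take role-tagged messages.
    prompt = f"Context:\n{retrieved_context}\n\nQuestion: {question}"
    return llm.complete(prompt)
```

So in practice the things to change are the LLM object handed to the query layer and the prompt template, not the vector index itself.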
Hello community and all, a sorta-noob question: how is it that when a) creating an index from docs, and also b) setting up inference (e.g., loading the same index file and querying it), in BOTH cases we can instantiate an LLM object and in both cases we can set "temperature" (higher values mean more 'creative' responses)? Does setting 'temperature' have any meaning at index-creation time? If so, what? I can see that it would have an impact at inference time. Thanks.
1 comment
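For context on what temperature actually does: it rescales logits before the softmax at token-sampling time, so it only matters when the model generates text (i.e., at query time). Embedding-based index construction involves no sampling, which is why the setting is plausibly inert there. A small sketch of the effect:

```python
# Sketch: temperature divides the logits before softmax.
# T < 1 sharpens the distribution (less "creative"),
# T > 1 flattens it toward uniform (more "creative").

import math


def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [2.0, 1.0, 0.0]
cool = softmax_with_temperature(logits, 0.5)  # peaked: top token dominates
warm = softmax_with_temperature(logits, 2.0)  # flatter: more randomness
```

With the cooler setting the top token gets much more of the probability mass than with the warmer one, which is the whole "creativity" knob in one line of math.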