Updated 4 months ago

At a glance

The community member is using LLM Sherpa as a parser and wants to keep the existing chunk sizes when converting the chunks to nodes. The replies suggest creating the nodes on the fly with TextNode and then indexing them with VectorStoreIndex. The community member is impressed with LlamaIndex's capabilities and grateful that it is open-source.

So I'm using LLM Sherpa as my parser, and it already produces the chunks. Is there a way to keep the chunk sizes when converting them to nodes?
8 comments
You can just create the nodes on the fly:

Plain Text
from llama_index.core.schema import TextNode

node = TextNode(text=text_chunk, metadata={...})

and then index them directly:

Plain Text
VectorStoreIndex(nodes=nodes, ...)
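To make the idea concrete, here is a minimal, self-contained sketch of the pattern being suggested. The `Node` dataclass and `chunks_to_nodes` helper below are illustrative stand-ins, not part of llama_index or LLM Sherpa: the point is that each pre-made chunk maps to exactly one node with its text passed through unchanged, so the parser's chunk sizes are preserved.

```python
from dataclasses import dataclass, field

# Illustrative stand-in for llama_index's TextNode, used only to show the
# 1:1 chunk-to-node mapping; in practice you would construct TextNode itself.
@dataclass
class Node:
    text: str
    metadata: dict = field(default_factory=dict)

def chunks_to_nodes(chunks, source="doc.pdf"):
    # One node per pre-made chunk; the text is not re-split or merged,
    # so whatever chunk sizes the parser produced are preserved.
    return [Node(text=c, metadata={"source": source}) for c in chunks]

chunks = ["First chunk of parsed text.", "Second, longer chunk of parsed text."]
nodes = chunks_to_nodes(chunks)

# Chunk boundaries and sizes survive the conversion.
assert [n.text for n in nodes] == chunks
assert [len(n.text) for n in nodes] == [len(c) for c in chunks]
print(len(nodes))  # → 2
```

With real llama_index, the list built this way would be passed straight to VectorStoreIndex(nodes=nodes, ...), which indexes the nodes as given rather than re-chunking them.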
LlamaIndex continues to blow my mind on the daily
Ty for making it open-source
haha glad you are getting some use out of it! 💪