
So I'm using LLMSherpa as my parser, and it already produces the chunks. Is there a way to keep the chunk sizes when converting to nodes?
from llama_index.core.schema import TextNode

# Wrap each existing chunk in a TextNode directly
node = TextNode(text=text_chunk, metadata={...})
You can just create the nodes on the fly, and then:

VectorStoreIndex(nodes=nodes, ...)
LlamaIndex continues to blow my mind on the daily. Ty for making it open-source!
haha glad you are getting some use out of it! 💪