Find answers from the community

Yj
Offline, last seen 3 months ago
Joined September 25, 2024

Router

What is the difference between a ReAct agent and a router? Both let the LLM pick which tool to use, don't they?
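Roughly speaking: a router makes a single LLM-driven selection and forwards the query to the chosen tool or query engine, while a ReAct agent runs a reason-act-observe loop and may call several tools before answering. Below is a minimal plain-Python sketch of that control-flow difference only; select_tool, llm_reason, and the tool callables are hypothetical placeholders, not LlamaIndex APIs.

Python
# Conceptual sketch: illustrates the control flow, not actual LlamaIndex classes.

def run_router(query, tools, select_tool):
    # Router: one LLM call picks a tool, then the query is forwarded to it.
    chosen = select_tool(query, tools)   # single selection step
    return tools[chosen](query)          # answer comes straight from that tool

def run_react_agent(query, tools, llm_reason, max_steps=5):
    # ReAct agent: iterative reason -> act -> observe loop; may use several tools.
    history = []
    for _ in range(max_steps):
        thought, action, action_input = llm_reason(query, history)
        if action == "finish":
            return action_input          # the model decided it has the answer
        observation = tools[action](action_input)
        history.append((thought, action, observation))
    return "Stopped after max_steps without a final answer."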
1 comment
How do enterprises host local LLM models? I know you can host local LLMs on your own computer using Ollama,

but how do enterprises do it? Do they use AWS? I need some insights, thank you.
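For context, one common pattern is to run an open model behind an OpenAI-compatible inference server (for example vLLM or TGI) on the company's own GPU machines, whether that is AWS/GCP instances or an on-prem Kubernetes cluster, and then point internal applications at that endpoint. A hedged client-side sketch, assuming such a server is already running at a hypothetical internal URL:

Python
# Sketch: client side of a self-hosted, OpenAI-compatible LLM endpoint.
# The base_url is a hypothetical internal host; the server itself (e.g. vLLM or TGI)
# would be deployed separately on the company's GPU infrastructure.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example.com:8000/v1",  # hypothetical endpoint
    api_key="not-needed-for-internal-server",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whatever model the server loaded
    messages=[{"role": "user", "content": "Hello from inside the company network"}],
)
print(response.choices[0].message.content)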
8 comments
There is an error in the documentation: the exact code shown there won't work.

Plain Text
%env GITHUB_TOKEN=github_pat_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
import os

from llama_index.readers.github import GithubRepositoryReader

github_token = os.environ.get("GITHUB_TOKEN")
owner = "jerryjliu"
repo = "llama_index"
branch = "main"

documents = GithubRepositoryReader(
    github_token=github_token,
    owner=owner,
    repo=repo,
    use_parser=False,
    verbose=False,
    ignore_directories=["examples"],
).load_data(branch=branch)


Plain Text
TypeError: GithubRepositoryReader.__init__() got an unexpected keyword argument 'github_token'


https://docs.llamaindex.ai/en/stable/examples/data_connectors/GithubRepositoryReaderDemo/
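For what it's worth, newer versions of the reader appear to take a GithubClient object instead of a raw github_token, which would explain the TypeError. A sketch of what likely works (verify against the current docs page; directory filtering is omitted here since that argument also changed between versions):

Python
import os

from llama_index.readers.github import GithubClient, GithubRepositoryReader

github_token = os.environ.get("GITHUB_TOKEN")
github_client = GithubClient(github_token=github_token, verbose=False)

documents = GithubRepositoryReader(
    github_client=github_client,  # newer API: pass a client, not github_token
    owner="jerryjliu",
    repo="llama_index",
    use_parser=False,
    verbose=False,
).load_data(branch="main")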
5 comments
The LlamaIndex documentation is cleaner than LangChain's, but LangChain has WAY more tutorials.

I've been trying out both.

Glad to see the LlamaIndex team has been upping its game on YouTube as well, though; hopefully they post more often.
1 comment
Something you guys could improve in the documentation:

explaining the retrievers module. I recently started trying new retrievers from the documentation, but there is zero explanation of what each one is for, why it is good, etc.

For example, auto-retrieval and BM25: as someone who doesn't come from a data science background, I have no idea what they do, how they differ from normal RAG retrievers, or what kinds of scenarios they are good for.
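For anyone else landing here: BM25 scores documents by keyword/term frequency rather than embeddings, so it tends to shine on exact tokens like IDs, error codes, and product names, and is often combined with an embedding retriever in hybrid setups. A rough sketch, assuming the llama-index-retrievers-bm25 package is installed and the example texts are made up:

Python
# Sketch: keyword-based BM25 retrieval over a few nodes (no embedding model needed).
# Assumes: pip install llama-index-retrievers-bm25
from llama_index.core.schema import TextNode
from llama_index.retrievers.bm25 import BM25Retriever

nodes = [
    TextNode(text="Error E1234 means the GPU driver is out of date."),
    TextNode(text="Our refund policy allows returns within 30 days."),
]

retriever = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=1)

# BM25 matches on term overlap, so the exact token "E1234" scores strongly,
# which a purely semantic (embedding) retriever can sometimes miss.
for result in retriever.retrieve("What does error E1234 mean?"):
    print(result.score, result.node.text)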
5 comments
I'm building a system like this:

  1. The user gives an input.
  2. The LLM uses PandasQueryEngine to query a CSV to see if it can find any matching rows.
  3. If no match is found in the CSV file,
  4. then use a fine-tuned model to predict the output based on the given input.
Can someone teach me how to do steps 3 + 4? I don't know how to make the LLM take another decision IF the input is not found in the CSV.
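A minimal sketch of the fallback decision (steps 3 and 4) in plain Python: the column names and the finetuned_predict function are hypothetical placeholders, and the same if/else can wrap a PandasQueryEngine call instead of the raw DataFrame lookup shown here.

Python
import pandas as pd

def answer(user_input: str, df: pd.DataFrame, finetuned_predict) -> str:
    # Step 2: look for matching rows in the CSV (the matching rule is just an example).
    matches = df[df["question"].str.contains(user_input, case=False, na=False, regex=False)]

    if not matches.empty:
        # Found in the CSV: return the stored answer directly.
        return str(matches.iloc[0]["answer"])

    # Steps 3-4: nothing matched, so fall back to the fine-tuned model.
    return finetuned_predict(user_input)

# Usage (hypothetical CSV with "question" and "answer" columns):
# df = pd.read_csv("faq.csv")
# print(answer("how do I reset my password", df, finetuned_predict=my_model_call))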
1 comment
If I use a model with a higher context length (128k) vs. a lower context length (16k), will the higher context length help when I do RAG?

Does a larger-context-length model have a larger chunk size, or node size?

Can someone please explain, thank you. Or link me to an article that talks about this.
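In short: chunk (node) size is set by your splitter/node parser, not by the model, so a 128k model does not automatically give you bigger chunks. The larger window mainly lets you fit more retrieved chunks (a higher similarity_top_k) or longer chunks into the prompt without truncation. A small sketch of where those knobs live in LlamaIndex; the values are illustrative only:

Python
# Sketch: chunk size and top-k are explicit settings, independent of model context.
from llama_index.core.node_parser import SentenceSplitter

# The same splitter works for a 16k or a 128k model; chunk size is your choice.
splitter = SentenceSplitter(chunk_size=1024, chunk_overlap=100)

# With a 128k-context model you can afford to retrieve more or larger chunks, e.g.
# index.as_query_engine(similarity_top_k=20) instead of similarity_top_k=3,
# because more retrieved text still fits in the prompt.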
5 comments
LlamaParse doesn't seem to extract the full file, for example the header and footer. It doesn't recognize the footer in this case (sometimes the header as well), which is a very important factor for me in determining the section when I'm building a RAG pipeline.

Can anyone help with this?
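One thing that sometimes helps (worth verifying against the current LlamaParse docs and your file) is passing a parsing_instruction that asks it to keep headers and footers, for example:

Python
# Sketch: nudging LlamaParse to keep headers/footers via parsing_instruction.
# Whether this works well depends on the document and the LlamaParse version;
# treat it as an experiment, not a guaranteed fix.
from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",  # your LlamaCloud API key
    result_type="markdown",
    parsing_instruction=(
        "Preserve page headers and footers in the output; do not drop them, "
        "since they identify the document section."
    ),
)

documents = parser.load_data("report.pdf")  # hypothetical file
print(documents[0].text[:500])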
7 comments

thanks dev
4 comments