Find answers from the community

sickness272
Joined September 25, 2024
Yeah, I know that's what I have been using, but I would like the integration to be through LlamaIndex, specifically for function calling. No one seems to integrate this.
2 comments
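A minimal sketch of function calling through LlamaIndex, assuming the `llama-index-llms-anthropic` package and a hypothetical `multiply` tool; `predict_and_call` lets a function-calling LLM pick and invoke a tool directly:

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.anthropic import Anthropic


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# FunctionTool infers the tool schema from the signature and docstring.
multiply_tool = FunctionTool.from_defaults(fn=multiply)

llm = Anthropic(model="claude-3-5-sonnet-20240620")

# predict_and_call asks the model to choose a tool, then executes it.
response = llm.predict_and_call([multiply_tool], "What is 6 times 7?")
print(str(response))
```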
Or anyone who can help: does Anthropic on Bedrock not support function calling?
4 comments
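If I understand correctly, the older `Bedrock` LLM class does not do tool calling, but the Converse-based integration does. A sketch, assuming the `llama-index-llms-bedrock-converse` package and valid AWS credentials:

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.bedrock_converse import BedrockConverse


def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm = BedrockConverse(
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
)

# The Bedrock Converse API supports tool use, so function calling works here.
response = llm.predict_and_call(
    [FunctionTool.from_defaults(fn=add)], "What is 2 plus 3?"
)
print(str(response))
```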
Is there any way to use the messages API of models with images?
5 comments
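At the raw API level, Anthropic's messages endpoint accepts base64 image blocks alongside text. A sketch using the `anthropic` SDK directly (the image path is a placeholder):

```python
import base64

import anthropic

# Placeholder path; the API expects base64-encoded image data.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64,
                    },
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(message.content[0].text)
```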
Is there any kind of prompt caching in place? How can I intercept LLM calls to put a cache layer in front? Is there any mechanism in place to do this, instead of implementing it by hand?
8 comments
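I'm not aware of a built-in cache for this, but one workaround is a delegating wrapper around the LLM's `chat` method (LlamaIndex LLMs are Pydantic models, so monkey-patching attributes can be awkward). A naive in-memory sketch:

```python
import hashlib


class CachedChatLLM:
    """Wrap any object exposing .chat(messages, **kwargs) and
    memoize responses keyed on the stringified messages."""

    def __init__(self, llm):
        self._llm = llm
        self._cache = {}

    def chat(self, messages, **kwargs):
        key = hashlib.sha256(
            "\n".join(str(m) for m in messages).encode("utf-8")
        ).hexdigest()
        if key not in self._cache:
            self._cache[key] = self._llm.chat(messages, **kwargs)
        return self._cache[key]

    def __getattr__(self, name):
        # Delegate everything else to the wrapped LLM.
        return getattr(self._llm, name)
```

The same idea extends to `complete`, or to a Redis-backed store instead of a dict, if you need persistence across processes.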
sickness272 · Tools
In query pipelines I want to have an LLMMultiSelector that selects tools, and then ask a question independently to each tool. How can I stream the multiple values of the output of LLMMultiSelector each to its own tool, and then summarize? I don't see a way to express this with pipelines.
3 comments
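Query pipelines don't seem to express a one-to-many fan-out like this easily; one workaround is plain Python orchestration around the selector. A sketch, assuming `tools` is a list of `FunctionTool`s (or query engine tools), `llm` is any LLM, and `question` is the query string:

```python
from llama_index.core.selectors import LLMMultiSelector

selector = LLMMultiSelector.from_defaults(llm=llm)

# Select which tools apply; the choices are the tools' metadata.
result = selector.select([t.metadata for t in tools], query=question)

# Ask the question independently to each selected tool.
answers = [str(tools[sel.index](question)) for sel in result.selections]

# Summarize the per-tool answers with a final LLM call.
summary = llm.complete(
    f"Question: {question}\n\nAnswers from different tools:\n\n"
    + "\n\n".join(answers)
    + "\n\nSummarize these into a single answer."
)
print(str(summary))
```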
Does anyone know how to use Anthropic models through Vertex?
1 comment
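One route is the `anthropic` SDK's Vertex client, which authenticates with your Google Cloud credentials. A sketch (project ID and region are placeholders):

```python
from anthropic import AnthropicVertex

# Placeholder project/region; Claude models are served from specific regions.
client = AnthropicVertex(project_id="my-gcp-project", region="us-east5")

message = client.messages.create(
    model="claude-3-5-sonnet@20240620",
    max_tokens=512,
    messages=[{"role": "user", "content": "Hello from Vertex!"}],
)
print(message.content[0].text)
```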
I am having issues with instrumentation span handlers not catching LLM calls to Bedrock. What should I do?
8 comments
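One thing to check is that handlers are attached to the root dispatcher, since handlers registered on a named dispatcher can miss events fired elsewhere. A minimal sketch with an event handler and the built-in `SimpleSpanHandler`:

```python
import llama_index.core.instrumentation as instrument
from llama_index.core.instrumentation.event_handlers import BaseEventHandler
from llama_index.core.instrumentation.span_handlers import SimpleSpanHandler


class PrintEventHandler(BaseEventHandler):
    """Print every instrumentation event, including LLM start/end events."""

    @classmethod
    def class_name(cls) -> str:
        return "PrintEventHandler"

    def handle(self, event, **kwargs) -> None:
        print(event.class_name(), event.timestamp)


# The root dispatcher sees events from all modules.
dispatcher = instrument.get_dispatcher()
dispatcher.add_event_handler(PrintEventHandler())
dispatcher.add_span_handler(SimpleSpanHandler())
```

If Bedrock calls still don't show up, it may be that the particular LLM integration doesn't emit instrumentation events; the Converse-based integration is newer and may behave differently.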
Hey guys, I have looked around the documentation and found nothing about getting the index at which text was split using TokenTextSplitter, for example.

For now it returns the usual metadata, page number and document name, but I would like to have the index at which it was split.

I think LangChain has an option for this; is there something similar here?
Maybe a callback?
2 comments
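Nodes carry optional `start_char_idx`/`end_char_idx` fields, but they aren't always populated; a fallback is to recover the offsets by searching for each chunk in the source text (this assumes the splitter keeps chunk text verbatim, which TokenTextSplitter does):

```python
from llama_index.core import Document
from llama_index.core.node_parser import TokenTextSplitter

splitter = TokenTextSplitter(chunk_size=256, chunk_overlap=32)
doc = Document(text=long_text)  # long_text is your source string
nodes = splitter.get_nodes_from_documents([doc])

cursor = 0
for node in nodes:
    start = node.start_char_idx
    if start is None:
        # Chunks can overlap, so search from just past the previous match.
        start = doc.text.find(node.text, cursor)
    node.metadata["start_index"] = start
    cursor = start + 1
```

This mirrors LangChain's `add_start_index=True` behavior.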