Find answers from the community

em
Hi there again,

Having issues with llm.achat at the moment.

My chat messages look like:

[
    ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='[system_prompt]', additional_kwargs={}),
    ChatMessage(role=<MessageRole.USER: 'user'>, content=[
        {'text': '<document>{18000 token document}</document>', 'type': 'text', 'cache_control': {'type': 'ephemeral'}},
        {'text': 'Please create a summary of the document that is at least 2500 tokens long. Currently the document is 18000 tokens long.', 'type': 'text'}
    ], additional_kwargs={})
]

Executed by:
llm.achat(
    messages,
    extra_headers={
        "anthropic-beta": "prompt-caching-2024-07-31",
    },
)

It creates the response and uses the tokens, but then hangs indefinitely. I can't share the exact text due to privacy concerns.
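
A minimal sketch for isolating the hang, assuming the messages list above and an already-constructed Anthropic llm instance (the 120-second timeout is arbitrary); wrapping the same call in asyncio.wait_for makes a hang surface as an explicit TimeoutError instead of blocking forever:

import asyncio

async def summarize(llm, messages):
    # Same call as above, but with a hard timeout so a hang raises
    # asyncio.TimeoutError instead of waiting indefinitely.
    return await asyncio.wait_for(
        llm.achat(
            messages,
            extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
        ),
        timeout=120,
    )

# response = asyncio.run(summarize(llm, messages))
# print(response.message.content)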
1 comment
Hi there, I'm getting errors regarding the embedding token count when upserting nodes into Pinecone.

Can anyone explain how the token count for a Node object is calculated? It looks like the metadata counts towards the token count; my testing seems to confirm this.
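
A quick way to check this is to look at the exact text a node hands to the embedding model, which includes metadata by default. A minimal sketch, assuming a tiktoken cl100k_base tokenizer (swap in whatever tokenizer your embedding model actually uses):

import tiktoken
from llama_index.core.schema import MetadataMode, TextNode

node = TextNode(
    text="...",
    metadata={"source": "report.pdf", "page": 12},
)
# Text as the embedding model sees it; metadata is included unless the
# keys are listed in node.excluded_embed_metadata_keys.
embed_text = node.get_content(metadata_mode=MetadataMode.EMBED)
enc = tiktoken.get_encoding("cl100k_base")
print(len(enc.encode(embed_text)), "tokens sent to the embedder")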
2 comments
em

Anthropic

Hey everyone,

When are we planning to allow haiku-3.5 as an option for the Anthropic models?
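
For reference, this is roughly how the model is selected in the Anthropic integration; whether the haiku-3.5 identifier is accepted depends on the installed llama-index-llms-anthropic version, so treat the model string below as an assumption to verify:

from llama_index.llms.anthropic import Anthropic

# "claude-3-5-haiku-20241022" is assumed here; older integration versions
# may reject model names that are not in their supported list.
llm = Anthropic(model="claude-3-5-haiku-20241022", max_tokens=1024)
print(llm.complete("Say hello in one sentence."))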
3 comments
em

Json

Hey gang! I've been using LlamaParse for the past couple of weeks and I'm loving it so far.

I want to be able to get the page number for each of the text chunks I store in the database for later reference. I saw that it's in the JSON object when I run the parse through the web sandbox; is this possible with the library? If not, is it planned?
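
One hedged sketch of a workaround: the raw JSON result (the same object the web sandbox shows) carries per-page entries, which can be attached to documents as metadata. The "pages", "md", and "page" keys below are assumptions to verify against your LlamaParse version:

from llama_parse import LlamaParse
from llama_index.core import Document

parser = LlamaParse(api_key="llx-...", result_type="markdown")
json_results = parser.get_json_result("report.pdf")

documents = []
for page in json_results[0]["pages"]:
    # Keep the page number alongside the page text so chunks built from
    # this document can be traced back later.
    documents.append(
        Document(text=page["md"], metadata={"page_number": page["page"]})
    )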
2 comments