Anthropic

What the heck: working code now throwing: AttributeError: 'Anthropic' object has no attribute 'get_tokenizer'
Oh... hrmm... https://discord.com/channels/1059199217496772688/1059200010622873741/1305527139973730376
The anthropic client lib updated and broke the integration. It's fixed if you use the latest versions of both anthropic and the integration.
Shoot, "pip install -U llama_index" hit errors (sometimes I hate Python package management)
Plain Text
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
llama-index-llms-ollama 0.3.2 requires llama-index-core<0.12.0,>=0.11.0, but you have llama-index-core 0.12.13 which is incompatible.
llama-index-llms-anthropic 0.3.1 requires anthropic[vertex]<0.29.0,>=0.26.2, but you have anthropic 0.44.0 which is incompatible.
llama-index-llms-anthropic 0.3.1 requires llama-index-core<0.12.0,>=0.11.0, but you have llama-index-core 0.12.13 which is incompatible.
This worked
pip install -U llama_index.llms.anthropic
but other stuff is busted
Plain Text
llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'prepare_chat_history': "Could not resolve authentication method. Expected either api_key or auth_token to be set. Or for one of the `X-Api-Key` or `Authorization` headers to be explicitly omitted"
so now the tokenizer needs the anthropic api key:
Plain Text
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-api03-ntK..."

before the call to Anthropic().tokenizer
Also, the Anthropic() call defaults to claude-2.1, so your code needs to instantiate with a good model before getting the tokenizer, e.g.
Plain Text
    llm = Anthropic(model="claude-3-haiku-20240307", max_tokens=MAX_TOKENS)
    tokenizer = llm.tokenizer
    Settings.tokenizer = tokenizer

where Settings is the one from llama_index.core ...
I think that makes sense, since I suppose the tokenizer could need to match the model (although it doesn't change often).
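If the tokenizer is broken mid-upgrade and you just need a rough count to get unblocked, a zero-dependency stopgap is a characters-per-token heuristic (~4 chars/token is a common rule of thumb; this is my own assumption, not Anthropic's real tokenizer, so counts are only approximate):

```python
def approx_token_count(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (heuristic, not exact)."""
    return max(1, len(text) // 4)

print(approx_token_count("Hello from the llama_index workflow!"))
```

Good enough for sanity-checking prompt sizes while the real integration is sorted out, but don't use it for anything where the exact count matters.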
I'm not sure where I got the code originally ... hope the examples in the docs are okay.
Yea, that needs a tweak, it looks like. But besides that, seems all is well
And yea I know, confusing that anthropic would update the tokenizer like that 🤷‍♂️

For future reference too, every integration is a package. llama-index isn't even a real package, just a wrapper around some starter installs.
this is probably more about my environment because the original example worked in a jupyter notebook.
I'm using workflows and async which may be causing some of the issue, but it's weird...
πŸ˜₯
https://github.com/run-llama/llama_index/pull/17607/files