drewskidang
Offline, last seen last month
Joined September 25, 2024
Does the embedding fine-tuning example work with any model, or just BGE?
Is there any function-calling capability for Gemini?
3 comments
@Logan M aside from token cost... have you noticed LLM re-rankers performing better than cross-encoders?
1 comment
@Logan M also AttributeError: 'Anthropic' object has no attribute 'get_tokenizer'
Do we have a fix for this, or do we not use a tokenizer for Anthropic anymore?
17 comments
Can we use entity extraction with LLMs? Getting issues with BERT.
5 comments
Using vLLM and getting some issues:
TypeError: Unexpected keyword argument 'use_beam_search'
7 comments
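For context: `use_beam_search` appears to have been removed from newer vLLM releases, so wrapper code that still forwards it fails with exactly this TypeError. One generic workaround is to filter kwargs against the target callable's signature before passing them on. A minimal sketch, assuming nothing about the wrapper itself (`sampling_params` below is a hypothetical stand-in, not vLLM's API):

```python
import inspect

def filter_supported_kwargs(func, kwargs):
    """Drop any kwargs the target callable no longer accepts."""
    params = inspect.signature(func).parameters
    # If the callable takes **kwargs itself, pass everything through.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

# Hypothetical stand-in for a constructor that dropped `use_beam_search`:
def sampling_params(temperature=1.0, top_p=1.0):
    return {"temperature": temperature, "top_p": top_p}

safe = filter_supported_kwargs(
    sampling_params, {"temperature": 0.6, "use_beam_search": True}
)
print(sampling_params(**safe))  # {'temperature': 0.6, 'top_p': 1.0}
```

The removed argument is silently dropped instead of crashing the call; whether silent dropping is acceptable depends on your use case.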
Any plans to integrate a real-time voice assistant? I've got some free time for building again, if it's even needed.
3 comments
Any plans for doing Anthropic's RAG context with other models? It's so slow because I'm tier 1 for Anthropic.
7 comments
Isn't the Anthropic RAG chunk thing really similar to the metadata summary extractor?

Adding more context to the chunk?
8 comments
Plain Text
    437             The target length or query length the created mask shall have.
    438     """
--> 439     _, key_value_length = mask.shape
    440     tgt_len = tgt_len if tgt_len is not None else key_value_length
    441 

ValueError: too many values to unpack (expected 2)
Having trouble with the entity extractor.
1 comment
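"too many values to unpack (expected 2)" at `_, key_value_length = mask.shape` means the attention mask has more than two dimensions at that point (e.g. it was already expanded to 4-D), while the transformers helper expects a 2-D (batch, seq_len) mask. A dependency-free reproduction using plain tuples in place of array shapes:

```python
# A 2-D mask shape unpacks cleanly into (batch, key_value_length)...
mask_shape_2d = (1, 8)
_, key_value_length = mask_shape_2d

# ...but a shape already expanded to 4-D raises the exact error above.
mask_shape_4d = (1, 1, 8, 8)
try:
    _, key_value_length = mask_shape_4d
except ValueError as err:
    message = str(err)

print(message)  # too many values to unpack (expected 2)
```

So the fix is usually on the caller's side: pass the mask before it is expanded, or check which mask format the transformers version you're on expects.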
Does LlamaIndex support LLMs as embedding models, lol? I just want to compare the difference because of the leaderboards.
1 comment
Somebody help please

Plain Text
completion_response = model.complete("To infinity, and")
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv\Lib\site-packages\text_generation\client.py", line 154, in chat
    raise parse_error(resp.status_code, payload)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  ext_generation\errors.py", line 81, in parse_error
    message = payload["error"]
              ~~~~~~~^^^^^^^^^
KeyError: 'error'
6 comments
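The `KeyError: 'error'` means the TGI server returned a failure response whose JSON body has no `"error"` key, and `text_generation`'s `parse_error` assumes one is always present. A defensive variant is easy to sketch; `parse_error_safe` below is a hypothetical name, not the library's API:

```python
def parse_error_safe(status_code: int, payload: dict) -> Exception:
    """Build an exception from an error payload without assuming its shape."""
    message = (
        payload.get("error")
        or payload.get("message")
        or f"unrecognized error payload: {payload!r}"
    )
    return RuntimeError(f"HTTP {status_code}: {message}")

# A body with no "error" key no longer raises KeyError:
print(parse_error_safe(404, {"detail": "Not Found"}))
```

Either way, the real problem to chase is whatever made the server return that status code in the first place (wrong endpoint, model not loaded, etc.).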
How do I integrate a cache with Qdrant? I want to store my queries.
9 comments
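Qdrant itself doesn't cache queries for you; a common pattern is a small client-side cache keyed on the query text, so repeated queries skip the vector search entirely. A minimal sketch with a stand-in search function (a real setup would call the Qdrant client there):

```python
import hashlib

class QueryCache:
    """Client-side cache: query text -> previously retrieved results."""

    def __init__(self):
        self._store = {}

    def _key(self, query: str) -> str:
        return hashlib.sha256(query.encode("utf-8")).hexdigest()

    def get_or_search(self, query: str, search_fn):
        key = self._key(query)
        if key not in self._store:
            self._store[key] = search_fn(query)  # e.g. a Qdrant search call
        return self._store[key]

calls = []
def fake_search(q):  # stand-in for client.search(...)
    calls.append(q)
    return [f"hit for {q}"]

cache = QueryCache()
cache.get_or_search("what is RAG?", fake_search)
cache.get_or_search("what is RAG?", fake_search)
print(len(calls))  # 1 -- the second call was served from cache
```

Hashing the exact query string only catches verbatim repeats; catching paraphrases would need a semantic cache (embedding the query and matching by similarity), which is a bigger build.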
b'{"status":{"error":"Wrong input: Vector params for text-sparse are not specified in config"},"time":0.000035159}'
Can someone help? I'm confused.

I have both, but am I doing something wrong?
7 comments
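That error means the collection was created without sparse vector parameters under the name `text-sparse`: for hybrid search, Qdrant needs both a dense vectors config and a `sparse_vectors` entry declared at collection-creation time, under the exact name the upsert/query uses. A sketch of the create-collection body shape as a plain dict (mirroring Qdrant's REST schema; the names and the size 384 are assumptions for illustration):

```python
# Shape of a create-collection body for hybrid (dense + sparse) search.
collection_config = {
    "vectors": {  # dense vectors
        "text-dense": {"size": 384, "distance": "Cosine"},
    },
    "sparse_vectors": {  # sparse vectors MUST be declared too
        "text-sparse": {},  # empty params = server defaults
    },
}

assert "text-sparse" in collection_config["sparse_vectors"]
```

If the collection already exists without the sparse entry, it generally has to be recreated (or the sparse name in the client config changed to match what the collection actually declares).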
drewskidang
·

Kwargs

{
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.9,
}

Is there a way to add this in the OpenAILike class? I tried additional_kwargs, but to no avail.
1 comment
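For anyone hitting the same wall: the usual pattern is that a client stores an instance-level `additional_kwargs` dict and merges it into every request body, with per-call arguments winning. Whether the OpenAI-compatible server honors fields like `do_sample` is a separate question. A toy, dependency-free model of that merge (`OpenAILikeSketch` is an illustrative stand-in, not LlamaIndex's class):

```python
class OpenAILikeSketch:
    """Toy model of how a client merges per-instance kwargs into each request."""

    def __init__(self, model, additional_kwargs=None):
        self.model = model
        self.additional_kwargs = dict(additional_kwargs or {})

    def build_request(self, prompt, **call_kwargs):
        # Per-call kwargs override instance-level additional_kwargs.
        body = {"model": self.model, "prompt": prompt}
        body.update(self.additional_kwargs)
        body.update(call_kwargs)
        return body

llm = OpenAILikeSketch(
    "my-local-model",
    additional_kwargs={"do_sample": True, "temperature": 0.6, "top_p": 0.9},
)
print(llm.build_request("Hello"))
```

If the kwargs reach the request body but have no effect, the server is likely ignoring the unknown fields rather than the client dropping them.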
drewskidang
·

Metadata

What are the requirements for the custom metadata extractor?
1 comment
drewskidang
·

Llms

Does llama-index support Gemini Pro Experimental or Command R/R+?
8 comments
So I'm using LLM Sherpa as my parser, and it already does the chunking. Is there a way to keep the chunk sizes when converting to nodes?
8 comments
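If the parser has already produced final chunks, the usual answer is to wrap each chunk in its own node rather than running a splitter over the concatenated text, so the original boundaries survive. In LlamaIndex that would presumably mean building `TextNode` objects directly; here is a dependency-free sketch of the idea with a stand-in node type:

```python
from dataclasses import dataclass, field

@dataclass
class Node:  # stand-in for a library node type such as TextNode
    text: str
    metadata: dict = field(default_factory=dict)

def chunks_to_nodes(chunks):
    """One node per pre-made chunk: no re-splitting, sizes preserved."""
    return [Node(text=c, metadata={"chunk_index": i}) for i, c in enumerate(chunks)]

chunks = ["Section 1 text...", "Section 2 text..."]
nodes = chunks_to_nodes(chunks)
print(len(nodes), nodes[0].metadata)  # 2 {'chunk_index': 0}
```

The key design choice is skipping the node parser / text splitter step entirely and feeding the pre-built nodes straight into the index.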
Is RAPTOR meant for PDFs? I have nested XML files, and I'm also not sure which splitter is best for XML files.
1 comment
AttributeError: 'TransformQueryEngine' object has no attribute 'chat'
10 comments
drewskidang
·

Pinecone

Is it possible to use a knowledge graph with Pinecone? I'm looking for examples, but it looks like the vector stores are local.
1 comment
Having trouble importing this:
'''
from llama_index.core.program import (
    DFFullProgram,
    DataFrame,
    DataFrameRowsOnly,
)
'''
8 comments
from llama_index.core.response import StreamingResponse
I can't import this for some reason.
16 comments