Find answers from the community

DangFutures
Offline, last seen 3 months ago
Joined September 25, 2024
DangFutures
·

Scores

Curious why the similarity score is higher but the correctness score is lower
4 comments
DangFutures
·

@Logan M



Plain Text
from llmsherpa.readers import LayoutPDFReader
from llama_index.readers.schema.base import Document

llmsherpa_api_url = "https://readers.llmsherpa.com/api/document/developer/parseDocument?renderFormat=all"
pdf_path = "2023190_riteaid_complaint_filed.pdf"  # a local file path also works, e.g. /home/downloads/xyz.pdf

# Parse the PDF into a layout-aware document
pdf_reader = LayoutPDFReader(llmsherpa_api_url)
doc = pdf_reader.read_pdf(pdf_path)

# Wrap each layout chunk in a LlamaIndex Document
documents = []
for chunk in doc.chunks():
    documents.append(Document(text=chunk.to_context_text(), extra_info={}))

Is this how you would use the library to chunk it? I'm a bit confused lol
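
For reference, a minimal sketch of what you might do next with those documents, assuming the legacy VectorStoreIndex API:

Plain Text
from llama_index import VectorStoreIndex

# Build a vector index over the chunked documents and query it
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What is the complaint about?"))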
26 comments
Sorry, is this how you change the model's prompt template? lol


Plain Text
from llama_index.prompts import PromptTemplate

# Define your custom prompt format
template = (
    "<|system|>\n"
    "{System}\n"
    "<|user|>\n"
    "{User}\n"
    "<|assistant|>\n"
    "{Assistant}"
)
# Create a PromptTemplate with your custom format
custom_prompt = PromptTemplate(template)
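
A hedged sketch of rendering that template; the System/User/Assistant values below are placeholder examples:

Plain Text
# Fill the slots and render the final prompt string
prompt_str = custom_prompt.format(
    System="You are a helpful assistant.",
    User="What is a vector index?",
    Assistant="",
)
print(prompt_str)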
7 comments
'whisper' has no attribute 'load_model'. Any suggestions?
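
A likely fix, assuming the error comes from the unrelated "whisper" package on PyPI shadowing OpenAI's Whisper:

Plain Text
# pip uninstall whisper
# pip install -U openai-whisper
import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")  # audio.mp3 is a placeholder path
print(result["text"])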
1 comment
Was looking at the MTEB benchmark and saw https://huggingface.co/intfloat/e5-mistral-7b-instruct
as number one... it's an LLM
https://huggingface.co/spaces/mteb/leaderboard
Can we use the finetuning repo to fine-tune Mistral?
3 comments
Somebody help me pls
6 comments
Following the example: ImportError: cannot import name 'JSONNodeParser' from 'llama_index' (/usr/local/lib/python3.10/dist-packages/llama_index/__init__.py)
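
A hedged guess at the fix: in many llama_index versions, JSONNodeParser lives in the node_parser submodule rather than the package root:

Plain Text
from llama_index.node_parser import JSONNodeParser

parser = JSONNodeParser()
# nodes = parser.get_nodes_from_documents(documents)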
1 comment
Trying to use the Mixtral API for the dataset generator
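
A rough sketch of pointing the generator at a non-default model, assuming the legacy DatasetGenerator/ServiceContext API; mixtral_llm is a placeholder for however you reach Mixtral (e.g. an OpenAI-compatible hosted endpoint):

Plain Text
from llama_index import ServiceContext
from llama_index.evaluation import DatasetGenerator

# mixtral_llm is a placeholder LLM object; swap in your own client
service_context = ServiceContext.from_defaults(llm=mixtral_llm)

dataset_generator = DatasetGenerator.from_documents(
    documents,
    service_context=service_context,
)
questions = dataset_generator.generate_questions_from_nodes()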
8 comments
Sorry, been struggling to set up a custom class using the vLLM wrapper. LangChain doesn't accept quantization.

Plain Text
from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="TheBloke/Llama-2-7b-Chat-AWQ", quantization="AWQ")
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
4 comments
Is there a way to use LangSmith with LlamaIndex? Been getting this error since the update. I would prefer to use the LlamaIndex LLM wrapper, but LangSmith only takes LangChain wrappers
3 comments
DangFutures
·

HyDE

For hyde = HyDEQueryTransform(include_original=True):
is there a reason why the OpenAI API key is needed? I'm assuming it's because a GPT model is creating the hyde_doc. Can we configure which model creates the hyde_doc?
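
A sketch of one possible answer: in some legacy versions, HyDEQueryTransform accepts an llm_predictor argument, which would let you swap the model; worth double-checking the signature in your version. my_local_llm is a placeholder:

Plain Text
from llama_index import LLMPredictor
from llama_index.indices.query.query_transform.base import HyDEQueryTransform

hyde = HyDEQueryTransform(
    include_original=True,
    llm_predictor=LLMPredictor(llm=my_local_llm),  # placeholder LLM
)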
1 comment
Is there a way to integrate LlamaIndex and LangChain hub? Want to use the JSON loader and node parser from LlamaIndex, but need this from LangChain for the model:

Plain Text
from langchain import hub
QA_CHAIN_PROMPT = hub.pull("rlm/rag-prompt-mistral")

I thought LLMs were supposed to make life easier
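
One possible bridge, assuming the legacy LangchainPromptTemplate adapter in llama_index.prompts (worth verifying against your version):

Plain Text
from langchain import hub
from llama_index.prompts import LangchainPromptTemplate

# Wrap the LangChain hub prompt so LlamaIndex query engines can use it
lc_prompt = hub.pull("rlm/rag-prompt-mistral")
qa_template = LangchainPromptTemplate(template=lc_prompt)

# e.g. query_engine.update_prompts({"response_synthesizer:text_qa_template": qa_template})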
3 comments
DangFutures
·

Cite

Does LlamaIndex have a way to clean up the node citations? I'm trying to create a dataset that has three columns: question, example, and answer. Was hoping to use the node citations for the example column, but it looks kind of meh
1 comment
Is there a way to configure the finetune-3.5 example's data format to {"instruction": "...", "input": "...", "output": "..."} for other models besides gpt-3.5?
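
A small, hedged sketch of converting the OpenAI chat-style JSONL that the finetuning example emits into an instruction-style format. It assumes each record has a "messages" list with system/user/assistant roles; the file names are illustrative:

Plain Text
import json

with open("finetuning_events.jsonl") as f_in, open("instruct_data.jsonl", "w") as f_out:
    for line in f_in:
        record = json.loads(line)
        # Map each role to its content (last message wins per role)
        roles = {m["role"]: m["content"] for m in record["messages"]}
        converted = {
            "instruction": roles.get("system", ""),
            "input": roles.get("user", ""),
            "output": roles.get("assistant", ""),
        }
        f_out.write(json.dumps(converted) + "\n")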
4 comments
This is my last question for the week, then I'll go back to spamming next weekend. I'm a bit confused about the adapter-layer fine-tuning compared to the standard fine-tuning example. Are the adapter embeddings designed to improve the current embeddings, or can they be used to embed other stuff as well? #llamaforlife
2 comments
Is there a way to save the gpt-3.5 fine-tuned model, or call it again in a different notebook after using the finetune example?
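
For what it's worth, the fine-tuned model lives on OpenAI's servers, so one approach is to reuse its model ID in a fresh notebook; the ID below is a made-up placeholder:

Plain Text
from llama_index.llms import OpenAI

# Reuse the fine-tuned model by the ID from the OpenAI dashboard / job output
ft_llm = OpenAI(model="ft:gpt-3.5-turbo-0613:my-org::abc123", temperature=0)
print(ft_llm.complete("Hello!"))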
3 comments
Is there a way to use bge-large and Anthropic with my existing Pinecone index? It keeps using bge-small and trying to use llama.cpp in the service context... also would 10/10 hire Logan if I could
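
A sketch of pinning both models explicitly in a ServiceContext, assuming the legacy pre-0.10 API (ANTHROPIC_API_KEY must be set; vector_store is a placeholder for your existing Pinecone store):

Plain Text
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.llms import Anthropic
from llama_index.embeddings import HuggingFaceEmbedding

# Pin the LLM and embedding model so the defaults (OpenAI / bge-small) aren't used
service_context = ServiceContext.from_defaults(
    llm=Anthropic(model="claude-2"),
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5"),
)

# index = VectorStoreIndex.from_vector_store(vector_store, service_context=service_context)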
5 comments
I'm a little confused on nodes vs. embeddings. I notice better search queries when using embeddings, but I'm wondering how LlamaIndex enhances retrieval when vector databases are used
11 comments
When loading llama2 locally, do I need to set it as the service context??? The local llama example doesn't change the service context. When I ran a query, the model said it was OpenAI, not llama2
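
A sketch of forcing the local model everywhere via a global service context, assuming the legacy API; local_llm is a placeholder for your loaded llama2 wrapper:

Plain Text
from llama_index import ServiceContext, set_global_service_context

# Make the local model the default so queries don't fall back to OpenAI
service_context = ServiceContext.from_defaults(llm=local_llm, embed_model="local")
set_global_service_context(service_context)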
14 comments
Can we increase the number of workers for the dataset generator?
3 comments
Using Chainlit, and it currently requires this for my model to stream
4 comments
Is there a Baseten integration? LangChain isn't working for me

Plain Text
import requests

resp = requests.post(
    "https://model-nwx4707q.api.baseten.co/production/predict",
    headers={"Authorization": "Api-Key YOUR_API_KEY"},
    json="MODEL_INPUT",
    stream=True,
)

# Stream the response chunks as they arrive
for content in resp.iter_content():
    print(content.decode("utf-8"), end="", flush=True)
3 comments
@kapa.ai what does CallbackManager([finetuning_handler]) do?
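
For context, a sketch of the surrounding setup from the legacy finetuning flow (file name illustrative): the handler records every LLM call made through the service context so they can later be dumped as finetuning data.

Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.callbacks import CallbackManager, OpenAIFineTuningHandler

# The handler intercepts each LLM call made through this service context
finetuning_handler = OpenAIFineTuningHandler()
callback_manager = CallbackManager([finetuning_handler])

service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-4", temperature=0),
    callback_manager=callback_manager,
)

# ... run queries, then dump the captured calls for fine-tuning
finetuning_handler.save_finetuning_events("finetuning_events.jsonl")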
10 comments
Is there an easy way to compare RAG between different models?
3 comments
Indexing a fat dataset

Are there any libraries to speed up the processing? Using bge-large on an A100.
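
One knob worth trying, assuming the legacy HuggingFaceEmbedding API: raise the embedding batch size so the A100 stays saturated (the default is a conservative 10):

Plain Text
from llama_index.embeddings import HuggingFaceEmbedding

# Larger batches keep the GPU busy; tune 64 up or down for your VRAM
embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-large-en-v1.5",
    embed_batch_size=64,
    device="cuda",
)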
4 comments