
So the generate.py code is giving me an error saying app.engine is not a module. What should I do?
Attachment
image.png
The environment is probably not being picked up. Make sure your env is active.
It was something else. Right now I'm facing the issue of the API not getting any body data
And in "no-cors" mode, the API gives me an "unprocessable entity" error
I'm definitely not the best with Python
This is FastAPI, right?
Yes, a website is basically just requesting it via fetch
You can check out the Swagger docs, see what kind of data the endpoint requires, and then format the data from the frontend accordingly.

Assuming your server is running on port 8000, just open

localhost:8000/docs in the browser and it should show the Swagger UI for your backend
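You can also pull the schema programmatically; FastAPI serves the raw OpenAPI spec at /openapi.json by default:

Plain Text
# Fetch the auto-generated OpenAPI spec and list the available routes.
import requests

spec = requests.get("http://localhost:8000/openapi.json").json()
print(list(spec["paths"].keys()))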
Oh lol, I didn't know Swagger was on there
Yeah, FastAPI provides it by default
But yeah, I send it directly like I'm supposed to
A 422 error means the data is not coming in the required format
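For example, with create-llama's default chat schema, a correctly formatted request looks roughly like this (the /api/chat path is an assumption, check where your router is mounted):

Plain Text
# Hypothetical request matching the {"messages": [...]} schema;
# the /api/chat path is an assumption from create-llama's defaults.
import requests

payload = {"messages": [{"role": "user", "content": "Hello!"}]}
resp = requests.post("http://localhost:8000/api/chat", json=payload)
print(resp.status_code, resp.json())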
Even Swagger doesn't seem to work
It, too, got no messages through
Attachment
image.png
Can you share the full error trace?
I was only able to take a screenshot
Ah, so your error is that you're trying to access the first record, but it's not present in that list
Yeah, because there's seemingly nothing in that list
Maybe it has something to do with this
Attachment
image.png
You can check in the network call whether the data is sent or not
If nothing is passed, then you most probably need to check your frontend code
Well, I definitely send something
Attachment
image.png
It sends data here too, btw ^
The API just doesn't seem to get it
Ah, so the frontend is fine. Need to check the backend: before accessing, check whether messages contains anything or not
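Something like this, as a minimal sketch (names are illustrative):

Plain Text
# Minimal guard: make sure the list is non-empty before indexing into it.
if not data.messages:
    raise HTTPException(
        status_code=status.HTTP_400_BAD_REQUEST,
        detail="No messages provided",
    )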

Can you share the code? Maybe I can take a look
Sure, it's highly modified though, due to the app.engine errors etc. I added the functions to it
What files do you need?
Plain Text
from typing import List
import timeit
from fastapi.responses import JSONResponse

from fastapi.responses import StreamingResponse
from llama_index.chat_engine.types import BaseChatEngine

from app.engine.index import get_chat_engine
from fastapi import APIRouter, Depends, HTTPException, Request, status
from llama_index.llms.base import ChatMessage
from llama_index.llms.types import MessageRole
from pydantic import BaseModel

chat_router = r = APIRouter()


class _Message(BaseModel):
    role: MessageRole
    content: str


class _ChatData(BaseModel):
    messages: List[_Message]


@r.post("")
async def chat(
    request: Request,
    data: _ChatData,
    chat_engine: BaseChatEngine = Depends(get_chat_engine),
):
    start = timeit.default_timer()

    # check preconditions and get last message
    if len(data.messages) == 0:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="No messages provided",
        )
    lastMessage = data.messages.pop()
    if lastMessage.role != MessageRole.USER:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Last message must be from user",
        )
    # convert messages coming from the request to type ChatMessage
    messages = [
        ChatMessage(
            role=m.role,
            content=m.content,
        )
        for m in data.messages
    ]
    print(data)
    # query chat engine
    response = chat_engine.chat(messages[0].content)
    # convert response to JSON
    stop = timeit.default_timer()
    print("Time: ", stop - start)
    # round the elapsed time to two decimal places (seconds)
    time = round(stop - start, 2)
    response_data = {
        "message": messages[0].content,
        "answer": str(response),  # the response object itself is not JSON serializable
        "timeinfo": time,
        # "sourcetext": response.get_formatted_sources()
        "chat_history": [x.json() for x in messages],
    }

    return JSONResponse(content=response_data)
It fit exactly within the limit
If you need more, lmk
No, I think this is fine
print(data)

When you do this, do you get anything?
But there has to be something in there
Otherwise it would say "No messages provided"
IT POPS THE LAST MESSAGE
Line 40: lastMessage = data.messages.pop()
Attachment
image.png
I am such an idiot lmao
Haha no no, even I didn't see that
Missed by me too πŸ˜†
But why is it there though lmao? That was part of the code generated by create-llama
There we go, it's generating
Attachment
image.png
Plain Text
    response = chat_engine.chat(lastMessage.content, messages)


Will this then directly have the chat history there (when I give it to it)?
Yes, if messages contains the entire chat history, it will be used that way
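So the relevant part of the handler ends up like this, as a minimal sketch of the fixed flow:

Plain Text
# Pop the user's latest message, convert the rest to ChatMessage objects,
# and pass them along as the chat history.
lastMessage = data.messages.pop()
messages = [ChatMessage(role=m.role, content=m.content) for m in data.messages]
response = chat_engine.chat(lastMessage.content, messages)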
YESSS
Attachment
image.png
YESSS!!! Thank you so so so so so much
Attachments
image.png
image.png
Let's hope everything else runs as flawlessly
Now I just gotta give it a basic set of instructions and a TON of data to read through, and then we're good
Can I just paste in TXTs, CSVs, and PDFs without having to change stuff?
I would need more info on this. There should be an endpoint that takes files, right?
I meant for its dataset
Like, can I easily drop them in here?
Attachment
image.png
Yeah, I think that should work.
Okay, let's hope
Hey, one last question: I was trying to make it log its sources, but now it seems to take ages to get a response back, even after generating
The response is taking a long time to form after you ask a query?
Also, to log the sources, you can get them all from response.source_nodes
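For example, to just print them (a minimal sketch, assuming llama_index's NodeWithScore objects expose .score and .node):

Plain Text
# Log each retrieved source: its similarity score and a snippet of its text.
for sn in response.source_nodes:
    print(sn.score, sn.node.get_content()[:200])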
Are you using OpenAI or an open-source model?
An open-source model
That could be the reason for the long response generation time
"It could be" πŸ˜…
Let's hope it generates within the next 5h
TypeError: Object of type NodeWithScore is not JSON serializable
Are you sending the whole response.source_nodes?
I wanna show the whole source
Try this; it should give you a dict, which should work

Plain Text
source = response.source_nodes.__dict__
Okay, running it now
'list' object has no attribute '__dict__'
Gonna try this
Attachment
image.png
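Since response.source_nodes is a plain list, __dict__ won't exist on it. Converting each NodeWithScore by hand should work instead; a minimal sketch, assuming the usual llama_index node API (.node.get_content(), .node.metadata, .score):

Plain Text
# Build JSON-safe dicts from each NodeWithScore before returning them.
sources = [
    {
        "text": sn.node.get_content(),
        "metadata": sn.node.metadata,
        "score": sn.score,
    }
    for sn in response.source_nodes
]
response_data["sourcetext"] = sources  # now JSON serializable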