Create llama

Is there a create-llama project generation that is all in Python, by any chance? We have built our pipelines in Python; I don't want to switch languages to TypeScript.
Sadly, if you want an actual production app, TypeScript is favored by 99% of people 😅

Otherwise, for small POCs, I like to use Streamlit
Ughh toggling between languages makes my head 🤯
@Logan M ok, so let's say you have built the query pipeline using fancy DAG stuff. Do you rewrite all that code in TS? Seems crazy to me.
No, you typically have a backend in Python (hosted on a FastAPI server) and a frontend in JS/TS (usually I'd recommend React or Next.js)
Ok, how can I get the FastAPI server up and running quickly? Is there something to do this as fast as possible?
create-llama has an option to create a FastAPI backend. Or you can just write one yourself (FastAPI is super easy to use)
Here's a dummy app if you wanted something to work off of (I made this to test something at one point)

Python
from llama_index.llms.openai import OpenAI
from llama_index.core import SummaryIndex, Document

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


@app.get("/")
async def root():
    return {"message": "Hello World"}


@app.get("/test")
async def test():
    index = SummaryIndex([Document.example()])
    chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
    response = await chat_engine.astream_chat("Tell me a fact about LLMs.")

    async def gen():
        async for r in response.async_response_gen():
            yield str(r)

    return StreamingResponse(gen())


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, loop="asyncio")
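Stripped of FastAPI, the streaming part of the snippet above is just an async generator being drained. A minimal standalone sketch of that pattern (the token list here is made up, standing in for what `astream_chat` would yield):

```python
import asyncio
from typing import AsyncIterator


async def fake_token_stream() -> AsyncIterator[str]:
    # Stand-in for response.async_response_gen() from astream_chat().
    for token in ["LLMs ", "are ", "large ", "language ", "models."]:
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield token


async def consume() -> str:
    # StreamingResponse does essentially this: iterate the async
    # generator and forward each chunk to the client as it arrives.
    chunks = []
    async for token in fake_token_stream():
        chunks.append(token)
    return "".join(chunks)


if __name__ == "__main__":
    print(asyncio.run(consume()))
```

Swap `fake_token_stream()` for the real token generator and you get the behavior of the `/test` route: the client sees tokens as they are produced instead of waiting for one final payload.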
Ok, thanks, let me ponder this