Find answers from the community

Updated 2 months ago

Create-llama

At a glance

The post asks for the best channel to get help on create-llama. The comments provide the following information:

A community member suggests posting queries on the channel, as someone from the team will pick it up. Another community member notes that the newer versions of create-llama no longer provide the option to create a NextJS frontend when picking the Express framework, and asks if this feature will be brought back.

Another community member has a few create-llama related questions: they want to know if the different options are documented somewhere, and they are interested in building various types of applications (Agentic RAG, Data Scientist, Financial Report Generator, Form Filler, Code Artifact Agent, Information Extractor). They mention that they followed the blog post, created an "Agentic RAG", and then customized the .env to use Ollama, but it didn't work as it requires the OpenAI agent library. They express that it would be useful to know which options are layers over OpenAI APIs and which ones support other local LLMs.

The final comment includes an error message related to installing the llama-index-embeddings-ollama package, which seems to be incompatible with the llama-index-agent-openai package.

hey! what's the best channel to get help on create-llama
4 comments
You can post your queries here. Someone from the team will pick it up πŸ™‚
It seems like the newer versions of create-llama don't provide the option to create a NextJS frontend when picking the Express framework. Will this come back?
Another create-llama related question: Are the different options documented somewhere?

Plain Text
 What app do you want to build? » - Use arrow-keys. Return to submit.
>   Agentic RAG
    Data Scientist
    Financial Report Generator (using Workflows)
    Form Filler (using Workflows)
    Code Artifact Agent
    Information Extractor


I followed the create-llama blog post, created an "Agentic RAG", and then customized the .env to use Ollama. It doesn't work, as it requires the OpenAI agent library.

Knowing which options are layers over OpenAI APIs and which ones support other local LLMs would be very useful.
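
For context, switching the generated app to Ollama is typically done through the .env file. A minimal sketch of such a configuration (the variable names are assumptions based on create-llama's generated templates and may differ between versions; the model names are placeholders for whatever models you have pulled locally):

Plain Text
MODEL_PROVIDER=ollama
MODEL=llama3.1
EMBEDDING_MODEL=nomic-embed-text
OLLAMA_BASE_URL=http://localhost:11434

As the error below shows, changing the provider in .env is not always enough: the generated project may still pin OpenAI-specific packages in its dependencies.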
This is the error message I got:

Plain Text
poetry add llama-index-embeddings-ollama
Using version ^0.4.0 for llama-index-embeddings-ollama

Updating dependencies
Resolving dependencies... (0.7s)

Because no versions of llama-index-agent-openai match >0.3.0,<0.3.1 || >0.3.1,<0.3.2 || >0.3.2,<0.3.3 || >0.3.3,<0.3.4 || >0.3.4,<0.4.0
 and llama-index-agent-openai (0.3.0) depends on llama-index-core (>=0.11.0,<0.12.0), llama-index-agent-openai (>=0.3.0,<0.3.1 || >0.3.1,<0.3.2 || 
>0.3.2,<0.3.3 || >0.3.3,<0.3.4 || >0.3.4,<0.4.0) requires llama-index-core (>=0.11.0,<0.12.0).
And because llama-index-agent-openai (0.3.1) depends on llama-index-core (>=0.11.0,<0.12.0)
 and llama-index-agent-openai (0.3.2) depends on llama-index-core (>=0.11.0,<0.12.0), llama-index-agent-openai (>=0.3.0,<0.3.3 || >0.3.3,<0.3.4 || 
>0.3.4,<0.4.0) requires llama-index-core (>=0.11.0,<0.12.0).
And because llama-index-agent-openai (0.3.3) depends on llama-index-core (>=0.11.0,<0.12.0)
 and llama-index-agent-openai (0.3.4) depends on llama-index-core (>=0.11.0,<0.12.0), llama-index-agent-openai (>=0.3.0,<0.4.0) requires llama-index-core (>=0.11.0,<0.12.0).
Because no versions of llama-index-embeddings-ollama match >0.4.0,<0.5.0
 and llama-index-embeddings-ollama (0.4.0) depends on llama-index-core (>=0.12.0,<0.13.0), llama-index-embeddings-ollama (>=0.4.0,<0.5.0) requires 
llama-index-core (>=0.12.0,<0.13.0).
Thus, llama-index-embeddings-ollama (>=0.4.0,<0.5.0) is incompatible with llama-index-agent-openai (>=0.3.0,<0.4.0).
So, because app depends on both llama-index-agent-openai (^0.3.0) and llama-index-embeddings-ollama (^0.4.0), version solving failed.
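
Reading the solver output: the app pins llama-index-agent-openai ^0.3.0, which requires llama-index-core 0.11.x, while llama-index-embeddings-ollama ^0.4.0 requires llama-index-core 0.12.x, so no common core version exists. One possible way out (a sketch, not verified against this template) is to move both packages onto the same llama-index-core line, e.g. by bumping the agent package to a 0.4.x release built on core 0.12:

Plain Text
poetry add llama-index-agent-openai@^0.4.0 llama-index-embeddings-ollama@^0.4.0

Alternatively, if the OpenAI agent package is not actually used after switching to Ollama, removing it (`poetry remove llama-index-agent-openai`) before adding the Ollama embeddings package would also let the solver succeed.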