shekhargaikwad_
Joined September 25, 2024
@everyone I wish to demonstrate to the community that Zephyr-7b-beta has a running and inference cost that is 13 times lower than GPT-3.5 and 6 times lower than Llama 70b. Can I make this cost comparison in a Colab notebook?
How would I do that, so that it shows something like the cost per query?
9 comments
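A rough way to sketch this comparison in a Colab notebook (every per-token price below is a placeholder assumption for illustration, not a quoted rate — substitute the real prices for your deployments before drawing any conclusions about the 13x/6x claim):

```python
# Illustrative cost-per-query comparison. The USD prices per 1K tokens
# below are ASSUMED values for the sketch, not real quotes.
PRICES_PER_1K_TOKENS = {
    "gpt-3.5-turbo": 0.0015,
    "llama-2-70b": 0.0007,
    "zephyr-7b-beta": 0.0001,
}

def cost_per_query(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of a single query for the given model."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * PRICES_PER_1K_TOKENS[model]

# Compare all models on the same hypothetical query shape:
for model in PRICES_PER_1K_TOKENS:
    print(f"{model}: ${cost_per_query(model, 800, 200):.6f} per query")
```

Averaging `cost_per_query` over a batch of real prompts from your workload, rather than one fixed query shape, gives a fairer per-query figure.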
@Logan M Hey, I actually built a chatbot using create-llama and deployed it on Vercel. Initially it responded to my messages, but after 2-3 conversations it stopped responding. I ran the model locally again and got an error saying the quota had been exceeded. I've since changed to 2-3 different OpenAI keys, but I'm still not able to get it working; I reinstalled every repo and still get the same error. What's the issue?
Note: I have tried changing many API keys and multiple installations with multiple backends, but the issue is still there.
2 comments
Hey @Logan M
Can you help me figure out what the issue is?
I've tried everything. Is there any way I can add it to the environment instead of the .toml file?
4 comments
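One way to read the key from the environment instead of a secrets `.toml` file, using only the standard library (a sketch — `OPENAI_API_KEY` is the conventional variable name, adjust it to whatever your app actually expects; set the variable in your shell or in your host's environment-variables dashboard):

```python
import os

def get_openai_key() -> str:
    """Read the OpenAI key from the environment rather than a .toml file."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        # Fail loudly at startup instead of with a confusing auth error later.
        raise RuntimeError("OPENAI_API_KEY is not set; export it before starting the app")
    return key
```

Note that if the error you keep hitting is "quota exceeded" (HTTP 429), that is usually a property of the billing account behind the key, so rotating keys on the same account will not fix it.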
@Logan M I just created a chatbot using RAGs and am trying to deploy it, but I'm getting this error
3 comments
@Logan M
I'm currently working on enhancing the user interaction of our chatbot, which is built using Create-llama. At present, when the chatbot is greeted with salutations like 'Hi' or 'Hello', it responds with 'Hello! How can I assist you today?'. I'd like to add a more personalized and informative introduction to this response.

Specifically, I want the chatbot to introduce itself with the following message before addressing the user's query:

'Hello! I'm your Research Assistant, powered by LLAMAINDEX 🌾 I'm here to provide you with valuable insights and information to support your farming activities. If you have questions or need advice on agricultural practices, pest control, government schemes, or anything else related to farming, feel free to ask! 🤖 Your feedback helps me improve, so don’t hesitate to share your thoughts using the 👍🏾 or 👎🏽 buttons. Now, how may I assist you in nurturing your crops today?'

Could you guide me on how to implement this change in our chatbot's code? Any pointers on where in the Create-llama framework I can modify the greeting response would be greatly appreciated.
4 comments
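A framework-agnostic way to sketch the requested behavior: create-llama-generated backends typically expose a system prompt you can edit to shape responses, but if you specifically want the introduction prepended once, to the first reply only, the logic can be wrapped around whatever chat call you already have. Here `chat_fn` is a placeholder standing in for your engine's chat function, not a create-llama API:

```python
# Assumed intro text taken from the post above.
INTRO = (
    "Hello! I'm your Research Assistant, powered by LLAMAINDEX \U0001F33E "
    "I'm here to provide you with valuable insights and information to "
    "support your farming activities. If you have questions or need advice "
    "on agricultural practices, pest control, government schemes, or "
    "anything else related to farming, feel free to ask!"
)

def with_intro(chat_fn):
    """Wrap a chat function so the intro is prepended to the first reply only."""
    greeted = False

    def wrapped(message: str) -> str:
        nonlocal greeted
        reply = chat_fn(message)
        if not greeted:
            greeted = True
            return f"{INTRO}\n\n{reply}"
        return reply

    return wrapped
```

For a per-session greeting you would keep one wrapped function per chat session; a global flag like this would greet only the first user across all sessions.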
@Logan M

How do I increase the context length? I'm getting this error:

[1] C:\Users\MSI\test_chatbot\backend\node_modules\openai\error.js:43
[1] return new BadRequestError(status, error, message, headers);
[1] ^
[1]
[1] BadRequestError: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 4153 tokens. Please reduce the length of the messages.
3 comments
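The 4097-token window in that error is a property of the model itself and cannot be raised; the practical fix is to shrink the request until it fits, usually by trimming old chat history and/or the amount of retrieved context. A minimal sketch of history trimming (the ~4-characters-per-token estimate is a rough assumption; a tokenizer such as tiktoken gives exact counts):

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. Assumption, not exact.
    return max(1, len(text) // 4)

def trim_history(messages: list, budget: int = 3000) -> list:
    """Keep the system message plus the most recent turns within a token budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first, keep while under budget
        cost = approx_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))  # restore chronological order
```

A budget of ~3000 for a 4097-token model leaves headroom for the completion; tune it to your `max_tokens` setting.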
found 0 vulnerabilities
PS C:\Users\MSI\chatbot_llama\frontend> npm run dev

chatbot_llama@0.1.0 dev
NEXT_PUBLIC_CHAT_API=http://localhost:8000/api/chat next dev

'NEXT_PUBLIC_CHAT_API' is not recognized as an internal or external command,
operable program or batch file.
PS C:\Users\MSI\chatbot_llama\frontend>


I've tried reinstalling create-llama. I didn't get the option of a chat engine, but I proceeded anyway, and now I'm getting this error.
2 comments
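The `'NEXT_PUBLIC_CHAT_API' is not recognized` message is a Windows shell issue rather than a create-llama bug: the `VAR=value command` prefix in the `dev` script is Unix shell syntax, which cmd/PowerShell do not understand. One common fix (assuming a standard Next.js `package.json`) is to install `cross-env` as a dev dependency with `npm install --save-dev cross-env` and prefix the script with it:

```json
{
  "scripts": {
    "dev": "cross-env NEXT_PUBLIC_CHAT_API=http://localhost:8000/api/chat next dev"
  }
}
```

Alternatively, moving `NEXT_PUBLIC_CHAT_API=http://localhost:8000/api/chat` into a `.env.local` file in the frontend directory lets Next.js pick it up without any shell-specific syntax.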