
Updated 5 months ago

Is there a way to implement a JSON file?

At a glance
The community member is asking whether a JSON file can be used as input, containing information such as the context, prompt, index, model, and history, with another JSON file as output containing the chosen index, model, and updated history. The community members suggest using a POST request with Flask for a web app, and Postman, to achieve this. They also discuss calling llama_index from a C# app and the need to format the output as JSON for use on the C# side. One community member provides an example of how to use C# to hit a Python server and get the JSON output, which they suggest could remove the need for a JSON file.
Is there a way to implement a JSON file as input, containing information such as the context, the prompt, the index to choose, and the model, as well as the history, and to output another JSON file containing, as before, the chosen index, the model, and a history with the context and the exchanges between the user and the AI? It seems to me that this is possible via a POST request with Flask for a web app, and using Postman too. Thanks for your help!

Here is the schema of the JSON file I thought of:

Request:

{
  "Prompt": "my text",
  "Index": "my_index",
  "Model": "test1",
  "History": [
    { "Type": "System", "Prompt": "some text" },
    { "Type": "User", "Prompt": "some text" },
    { "Type": "Assistant", "Prompt": "some text" }
  ]
}

Response:

{
  "IsError": false,
  "Error": null,
  "Result": {
    "Prompt": "the_prompt",
    "Index": "my_index",
    "Model": "test1",
    "History": [
      { "Type": "System", "Prompt": "some text" },
      { "Type": "User", "Prompt": "some text" },
      { "Type": "Assistant", "Prompt": "some text" },
      { "Type": "User", "Prompt": "some text" },
      { "Type": "Assistant", "Prompt": "prompt result" }
    ]
  }
}
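The request-to-response transformation in that schema can be sketched as a small Python function. Everything here is illustrative: the function names are hypothetical, and the actual model call (e.g. a llama_index query) is stubbed out with a placeholder.

Python

```python
import json

def answer(prompt, index, model):
    """Placeholder for the real model call (e.g. a llama_index query)."""
    return "prompt result"

def handle_turn(request_json):
    """Take a request JSON string in the schema above and build the response."""
    payload = json.loads(request_json)
    history = payload.get("History", [])
    # Record the new user message, then the assistant's reply.
    history.append({"Type": "User", "Prompt": payload["Prompt"]})
    reply = answer(payload["Prompt"], payload.get("Index"), payload.get("Model"))
    history.append({"Type": "Assistant", "Prompt": reply})
    # Wrap everything in the IsError/Error/Result envelope.
    return json.dumps({
        "IsError": False,
        "Error": None,
        "Result": {
            "Prompt": payload["Prompt"],
            "Index": payload.get("Index"),
            "Model": payload.get("Model"),
            "History": history,
        },
    })
```

The same function body works whether the JSON arrives as a file on disk or as the body of a POST request.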
11 comments
What is the use of context?
You can use any Python framework to create a POST endpoint, pick out the query, and run it against the query_engine. Once the response is returned, you can build a JSON response as per your requirement.
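As a sketch of that shape, here is a minimal POST endpoint built on Python's standard library (a Flask route would look similar). The query_engine call is replaced by a placeholder, and the route and field names are illustrative, not a fixed API.

Python

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_query(question):
    """Stand-in for running the question against a real query_engine."""
    return f"answer to: {question}"

class BotHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Pick the query out of the JSON request body...
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # ...run it, and wrap the result in a JSON response.
        reply = run_query(payload["question"])
        body = json.dumps({"answer": reply}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# To serve: HTTPServer(("127.0.0.1", 8000), BotHandler).serve_forever()
```

Any HTTP client, including a C# app or Postman, can then POST to this endpoint and parse the JSON it returns.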
Yeah, I managed to find out how everything works with POST requests. It's because I have an app in C# where I wanted to integrate my program with llama_index, and the easiest way I found was to send or receive a JSON file from my C# app.
Now I'm trying to implement the serialisation and to deal with the history part; I don't know how to do it :/
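For the history part, one simple option is to keep it as the same list of Type/Prompt dicts from the schema and let Python's json module do the serialisation. A hypothetical round-trip helper (the function names are made up for illustration):

Python

```python
import json

def dump_history(history):
    """Serialise the conversation history to a JSON string."""
    return json.dumps(history, ensure_ascii=False)

def load_history(text):
    """Restore the history list; an empty string means a fresh conversation."""
    return json.loads(text) if text else []

def append_exchange(history, user_prompt, assistant_prompt):
    """Record one user/assistant exchange in place and return the history."""
    history.append({"Type": "User", "Prompt": user_prompt})
    history.append({"Type": "Assistant", "Prompt": assistant_prompt})
    return history
```

On the C# side, the same list deserialises naturally into a `List` of small objects with `Type` and `Prompt` properties.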
llama_index is supported in C# ? That's great. Didn't know that
No, it's not, haha!
That's why I want to format my output as a JSON file,
so it can be used in my C# app.
You can try this approach

C#
using System;
using System.Net.Http;
using System.Text;

var client = new HttpClient();
var request = new HttpRequestMessage
{
    RequestUri = new Uri("http://localhost:8000/api/v1/bot/ChatBot-Widget"),
    Method = HttpMethod.Post
};

request.Headers.Add("Accept", "*/*");
request.Headers.Add("User-Agent", "Thunder Client (https://www.thunderclient.com)");

// JSON body carrying the query for the Python server
var bodyString = "{ \"question\": \"your_own_query\" }";
request.Content = new StringContent(bodyString, Encoding.UTF8, "application/json");

// Send the request and read the JSON response as a string
var response = await client.SendAsync(request);
var result = await response.Content.ReadAsStringAsync();
Console.WriteLine(result);
You can use this to hit your Python server and get back the JSON output.
This removes the need for a JSON file, in my opinion.