Anyone using llamaindex for indexing and search together with an OpenAssistant backend? I've tinkered with llamaindex and VectorIndex to build doc-Q&A with OpenAI, but I'm not clear on which parts need to change when an Open Assistant model (oasst-sft-4-pythia-12b-epoch-3.5) sits in the backend. Any online examples or snippets floating around?