I've been working on a new search

I've been working on a new search algorithm for code [https://twitter.com/ocolegro/status/1676602607106760705], since it seems apparent to me that half the battle of self-coding agents is getting the right context.

I indexed llama index over the weekend and have some early-stage demo work showing search over it in this repo here [https://github.com/emrgnt-cmplxty/automata/blob/main/examples/demo_search.ipynb]. It would be great to get some feedback or discussion around this if anyone has a chance to take a look!
[Attachment: Screenshot_2023-07-10_at_5.44.23_PM.png]
Interesting, so you're trying to return pieces of code that might be used to answer the user's question?
Yes, a context is constructed downstream using these results.
The results of the symbol rank seem more accurate; the vector results won't really help you create an index.
Within my own codebase, symbol rank has been very useful for getting reasonable docs written automatically.
All the class docs here were self-generated using GPT-4 and are generally accurate [https://automata.readthedocs.io/en/latest/]. I'm trying to branch out to other open-source repositories now.
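(For readers unfamiliar with the term: the "symbol rank" discussed above appears to be a PageRank-style ranking over a graph of code symbols, where symbols referenced by many other well-connected symbols score higher. The thread doesn't spell out the exact algorithm, so the following is only a minimal illustrative sketch; the graph and symbol names are made up.)

```python
# Hedged sketch of a PageRank-style "symbol rank" over a code-reference
# graph. This is NOT the automata implementation, just an illustration
# of the general idea: heavily-referenced symbols rank higher.

def symbol_rank(graph, damping=0.85, iterations=50):
    """graph: dict mapping each symbol to the list of symbols it references."""
    symbols = list(graph)
    n = len(symbols)
    rank = {s: 1.0 / n for s in symbols}
    for _ in range(iterations):
        # Every symbol keeps a small baseline score...
        new_rank = {s: (1.0 - damping) / n for s in symbols}
        # ...and referenced symbols receive a share of their referrers' score.
        for src, targets in graph.items():
            if not targets:
                continue
            share = damping * rank[src] / len(targets)
            for dst in targets:
                if dst in new_rank:
                    new_rank[dst] += share
        rank = new_rank
    return sorted(rank.items(), key=lambda kv: kv[1], reverse=True)

# Toy reference graph: VectorStore is referenced by both index classes,
# so it should come out on top.
graph = {
    "ListIndex": ["VectorStore"],
    "TreeIndex": ["VectorStore"],
    "VectorStore": [],
}
ranked = symbol_rank(graph)
print(ranked[0][0])  # VectorStore
```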
Ah, this is something I was trying to achieve myself, because if we just embed huge blocks of code directly as vectors, important parts of the code block tend to get cut off. I want it to know that the 150 lines of code belong to that function, and that it must keep them within a single chunk. I was thinking that using "Question Commenting or Instructions" might also help here.