Find answers from the community

sansmoraxz
Offline, last seen 2 months ago
Joined September 25, 2024
OK I figured out how to use LSPs.

For anyone still interested, here's a demo: https://gist.github.com/sansmoraxz/374776fd6a10eaf870cdd1fdba96e08f

I guess the next step would be to investigate how effective LLM agents are at interacting with codebases. Hopefully this provides a better alternative to generating and using vector embeddings for vendored dependencies.
3 comments
Any tips on improving the code splitter?
4 comments
Is it just me, or is claude-3 difficult to ground for biases by default?
2 comments
Do issues in GitHub occasionally get buried with no interaction?
10 comments
How long is the legacy package going to be part of the built-in dependency chain?
1 comment
not sure if I like this approach
6 comments
A bit unrelated, but does anyone have good guides on how to set up and use LSP?
1 comment
Are there any plans for standardizing the metadata in responses? Parsing the raw response to count tokens, for example, will not scale across the many different providers.
18 comments
Hi, just want to poll here: how much customization do people do when using the code splitter (from LlamaIndex, LangChain, or any proprietary engines)?

Thinking from the perspective of extraction, some file-level metadata (such as import statements or the package name) gets dropped by the existing implementation. Ideally this should be captured in metadata.

Walking over the AST is a chore: it slows down the chunking process and can go wrong in millions of ways due to cases being unaccounted for.

Alternatively, tree-sitter query expressions can help, but those are language-specific and only about as powerful as regular expressions. Sometimes you need to write wrapper code to extract surrounding blocks, such as the parent containing a method node (a class, an interface, another method, or something else).

Or I may be entirely wrong about my approach, and there might be something far simpler and easier to pull off.
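To make the two points above concrete, here is a minimal sketch using only Python's built-in ast module (all function names here are illustrative, not part of any splitter's API): it captures the imports and module docstring that a naive chunker drops, and walks a child-to-parent map to recover the scope enclosing a method node.

```python
import ast

def file_level_metadata(source: str) -> dict:
    """Collect file-level metadata that chunkers tend to drop."""
    tree = ast.parse(source)
    imports = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imports.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            imports.append(node.module or "")
    return {"imports": sorted(set(imports)),
            "docstring": ast.get_docstring(tree)}

def enclosing_scope(tree: ast.Module, target: ast.AST) -> str:
    """Walk up a child->parent map to find the class/function containing target."""
    parents = {child: parent
               for parent in ast.walk(tree)
               for child in ast.iter_child_nodes(parent)}
    parts, node = [], target
    while node in parents:
        node = parents[node]
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            parts.append(node.name)
    return ".".join(reversed(parts))

src = '''"""Demo module."""
import os
from collections import OrderedDict

class C:
    def m(self):
        pass
'''
tree = ast.parse(src)
meta = file_level_metadata(src)
method = next(n for n in ast.walk(tree)
              if isinstance(n, ast.FunctionDef) and n.name == "m")
print(meta["imports"])                # ['collections', 'os']
print(enclosing_scope(tree, method))  # C
```

This only covers Python, of course; tree-sitter would be the cross-language analogue, at the cost of the per-language queries and wrapper code described above.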
Turns out claude responses can be empty if you are sending a completed messages sequence. I monkey patched an interceptor there.
Otherwise:

Plain Text
File ~/.local/lib/python3.10/site-packages/llama_index/llms/bedrock/utils.py:150, in AnthropicProvider.get_text_from_response(self, response)
    149 def get_text_from_response(self, response: dict) -> str:
--> 150     return response["content"][0]["text"]

IndexError: list index out of range

Should probably have some second-stage validation.


Temporary monkey patch fix:
Plain Text
import llama_index.llms.bedrock.utils

def _get_text_from_response(self, response: dict) -> str:
    # Guard against Claude returning an empty content list.
    if response["content"]:
        return response["content"][0]["text"]
    return ""

llama_index.llms.bedrock.utils.AnthropicProvider.get_text_from_response = _get_text_from_response
10 comments
Weird quirk in anthropic messages API.

Plain Text
ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: messages: final assistant content cannot end with trailing whitespace


This applies to all the content blocks of the last assistant message, i.e. the prefill (assuming we are passing an array as content).
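A defensive workaround (my own sketch, not llama-index or Anthropic code) is to rstrip the text blocks of the final assistant message before invoking the model:

```python
def strip_assistant_trailing_whitespace(messages: list[dict]) -> list[dict]:
    """Trim trailing whitespace from the last assistant message's text,
    which the Anthropic messages API rejects on prefilled prompts."""
    if not messages or messages[-1].get("role") != "assistant":
        return messages
    last = dict(messages[-1])
    content = last["content"]
    if isinstance(content, str):
        last["content"] = content.rstrip()
    else:  # array of content blocks
        last["content"] = [
            {**block, "text": block["text"].rstrip()}
            if block.get("type") == "text" else block
            for block in content
        ]
    return messages[:-1] + [last]

msgs = [{"role": "user", "content": "Q?"},
        {"role": "assistant", "content": [{"type": "text", "text": "Answer: "}]}]
print(strip_assistant_trailing_whitespace(msgs)[-1]["content"][0]["text"])  # Answer:
```

The function returns a new list and copies the blocks it modifies, so the caller's original messages are left untouched.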
4 comments
I am trying to lift the existing interfaces to Golang.

Although I may be missing something: why do CallbackHandlers work with the native types and not the specialized CBEvent?
3 comments
I wonder how Bedrock Agents orchestration could be done dynamically. It does require Lambdas to be executed to provide the results. I'm open to suggestions.
11 comments
@Logan M are there any plans for Golang integration?
7 comments
I have a suggestion.

There is an issue trying to port the prompts across various models. Sometimes it goes swimmingly, like GPT-4 to Claude or vice versa. Other times it struggles, like trying to use those prompts with multilingual models, small models, custom specialized models, and so on.

So how about a centralized hub for storing these prompt sets, which could be set dynamically through Settings? The community could create and share entire prompt sets.
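As a rough illustration of the idea (everything here is hypothetical; no such hub or API exists), a prompt set could be little more than a mapping from task names to templates, keyed by model family, with a fallback set when a model has no tuned prompts:

```python
# Hypothetical prompt-set registry; the families, tasks, and templates
# below are made up purely for illustration.
PROMPT_SETS = {
    "gpt-4": {"qa": "Answer using the context.\nContext: {context}\nQuestion: {query}"},
    "claude": {"qa": "Here is some context:\n{context}\n\nPlease answer: {query}"},
}

def get_prompt(model_family: str, task: str, fallback: str = "gpt-4") -> str:
    """Look up a task prompt for a model family, falling back to a default set."""
    family = model_family if model_family in PROMPT_SETS else fallback
    return PROMPT_SETS[family][task]

print(get_prompt("claude", "qa").splitlines()[0])  # Here is some context:
print(get_prompt("mistral-small", "qa") == PROMPT_SETS["gpt-4"]["qa"])  # True
```

A community hub would then just be a shared store of such registries that Settings could point at.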
18 comments
Hi, is there any way to use MultiStepQueryEngine without OpenAI?
33 comments