Hazmat1027
Offline, last seen 2 months ago
Joined September 25, 2024
Has anyone had luck implementing the observability integrations from https://docs.llamaindex.ai/en/stable/module_guides/observability ? I'm trying to get LiteralAI working and I see some very basic logs in my console but nothing on the cloud instance. Can anyone point me to Jupyter notebooks that give a bit of context?
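For reference, a minimal sketch of the one-line global-handler wiring that docs page describes (assuming the llama-index >= 0.10 import layout; the Literal AI lines are an assumption about that SDK's entry points, not verified):

Python
import llama_index.core

# The "simple" handler just prints events to the console -- likely the
# "very basic logs" described above. Cloud backends need their own
# handler registered plus credentials in the environment.
llama_index.core.set_global_handler("simple")

# Assumption: the Literal AI SDK instruments LlamaIndex via its client
# (hypothetical wiring -- check the Literal AI docs for the exact call):
# from literalai import LiteralClient
# client = LiteralClient(api_key="...")
# client.instrument_llamaindex()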
9 comments
Hazmat1027 · Nest
Hey, y'all! I ran into an issue using SimpleDirectoryReader with the TitleExtractor transformation, and I wanted to run it by y'all before raising a GitHub issue.

The extractor's async calls collide with the asyncio call in IngestionPipeline.run and trip the nested-async guard in asyncio_run:

Python
# Quoted from llama_index.core.async_utils (imports added for context):
import asyncio
from typing import Any, Coroutine

def asyncio_run(coro: Coroutine) -> Any:
    """Gets an existing event loop to run the coroutine.

    If there is no existing event loop, creates a new one.
    """
    try:
        loop = asyncio.get_running_loop()
        if loop.is_running():
            raise RuntimeError(
                "Nested async detected. "
                "Use async functions where possible (`aquery`, `aretrieve`, `arun`, etc.). "
                "Otherwise, use `import nest_asyncio; nest_asyncio.apply()` "
                "to enable nested async or use in a jupyter notebook.\n\n"
                "If you are experiencing this while using async functions and not in a notebook, "
                "please raise an issue on github, as it indicates a bad design pattern."
            )
        else:
            return loop.run_until_complete(coro)
    except RuntimeError:
        return asyncio.run(coro)


I edited my local copy of that file to just call nest_asyncio.apply() there instead of raising, but hand-patching the library isn't a viable long-term fix.
Have y'all
a) seen this before?
b) found a decent workaround?

If there's a different transform I could use to achieve the same thing without running into that error, that'd be great (I haven't found one).
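As a hedged sketch of the user-side version of that patch (rather than editing the library): apply nest_asyncio in your own code before calling the pipeline, or stay fully async and await arun. The data directory here is hypothetical:

Python
import nest_asyncio

from llama_index.core import SimpleDirectoryReader
from llama_index.core.extractors import TitleExtractor
from llama_index.core.ingestion import IngestionPipeline

# Applying nest_asyncio up front means asyncio_run's nested-loop guard
# never fires when TitleExtractor runs async extraction inside pipeline.run().
nest_asyncio.apply()

docs = SimpleDirectoryReader("./data").load_data()  # hypothetical data dir
pipeline = IngestionPipeline(transformations=[TitleExtractor()])
nodes = pipeline.run(documents=docs)

# Or, from async code, await the pipeline instead:
#     nodes = await pipeline.arun(documents=docs)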
5 comments
Hey, y'all! I have an app set up where users can switch between a swath of different settings. One of those settings switches the ChatEngineMode between Context, Best, FLARE, HyDE, and the BaseQueryEngine. I want every query to perform a retrieval, but I'm seeing the engine spit out a pure LLM response a good amount of the time, despite my efforts to tweak the system prompt and ReActChatFormatter. Is there a way to force the llama_index engines to retrieve before responding? They don't seem to respect the system prompt at all.
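For context, a minimal sketch of the relevant difference between the modes (assumes a configured embedding model/LLM; the toy document is hypothetical). The "context" mode retrieves before every response by construction, while "best" wraps retrieval as an agent tool the LLM may choose to skip, which is where pure-LLM answers can slip through:

Python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine.types import ChatMode

index = VectorStoreIndex.from_documents([Document(text="toy doc")])

# CONTEXT runs retrieval on every user message and injects the results;
# BEST/REACT let the agent decide whether to call the retriever at all.
forced = index.as_chat_engine(chat_mode=ChatMode.CONTEXT)
agentic = index.as_chat_engine(chat_mode=ChatMode.BEST)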
4 comments
Has anyone had trouble with Anthropic models and ChatMode Context? I get this error when astream_chat tries to run awrite_response_to_history:

Plain Text
Encountered exception writing response to history: <asyncio.locks.Event object at 0x7f606d050690 [unset]> is bound to a different event loop

The method in question is llama_index/core/chat_engine/types.py -> StreamingAgentChatResponse.awrite_response_to_history().
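For context, the same error can be reproduced outside LlamaIndex in a few lines (Python 3.10+ semantics, where an asyncio.Event binds to whichever loop first awaits it):

Python
import asyncio

event = asyncio.Event()

async def wait_briefly() -> None:
    # wait() binds the Event to the currently running loop on first use.
    try:
        await asyncio.wait_for(event.wait(), timeout=0.01)
    except asyncio.TimeoutError:
        pass

asyncio.run(wait_briefly())  # Event binds to loop A, which then closes
asyncio.run(wait_briefly())  # loop B -> "... is bound to a different event loop"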
4 comments
Hey, y'all! I'm trying to filter my Milvus collection by an array field and I'm having some trouble. There doesn't seem to be a way to express Milvus's ARRAY_CONTAINS operator through the MetadataFilter class. Has anyone solved this yet?
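In case it helps, a hedged workaround sketch that bypasses the MetadataFilter abstraction and issues the array predicate directly through pymilvus (the connection details, collection name, and field names here are all hypothetical):

Python
from pymilvus import Collection, connections

connections.connect(host="localhost", port="19530")  # assumed local Milvus

collection = Collection("my_collection")  # hypothetical collection name
collection.load()

# Milvus evaluates ARRAY_CONTAINS natively in a boolean expression, so the
# filter runs server-side even though MetadataFilter can't express it.
hits = collection.query(
    expr='ARRAY_CONTAINS(tags, "llama_index")',  # "tags": hypothetical ARRAY field
    output_fields=["id", "text"],
)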
19 comments