Find answers from the community

Updated 4 months ago

Hi! Where can I find the key and engine?

At a glance

The community members are discussing how to find the key and engine required to use the GoogleSearchToolSpec in a Python script. The comments suggest that the key can be obtained from the developer console, and the search engine ID can be created through the Google Programmable Search Engine. However, the community members note that Google does not make this process straightforward.

Additionally, the community members discuss the ability to create custom spec tools, with one member providing examples and guidance on how to do so. They also discuss an issue with the output of the agent.chat() function, where the LLM is struggling to provide a satisfactory answer to the query about the last time Barack Obama visited Michigan.

Useful resources
Hi! Where can I find the key and engine to add in GoogleSearchToolSpec? google_spec = GoogleSearchToolSpec(key="your-key", engine="your-engine")
10 comments
You'll have to use the developer console to get those.

I think the readme page here has the proper link:
https://llamahub.ai/l/tools-google_search
I saw this page but I couldn't understand how to get the key and engine.
There's a big button on the page that says "Get Key".

That's the key.

Then the search engine ID comes from creating a custom search engine through Google's Programmable Search Engine.
Google does not make this straightforward, it seems, but I was able to figure it out just now by reading the pages πŸ™‚
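Once both values are in hand, wiring them up is just a matter of passing them to the tool spec. A minimal sketch, assuming the llama-hub import path for the Google search tool linked above (both string values are placeholders for your own credentials):

```python
# Sketch: plug the developer-console API key and the Programmable Search
# Engine ID into the tool spec. Both strings below are placeholders.
from llama_hub.tools.google_search.base import GoogleSearchToolSpec

google_spec = GoogleSearchToolSpec(
    key="your-api-key",       # from the "Get a Key" button on the developer console page
    engine="your-engine-id",  # ID of the custom search engine you created
)
```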
Thanks. And I have another question. Can we create our own spec tool?
Definitely!

There are a few options. You can convert any function to a tool. Note that any docstring you write will help the LLM understand how to use the tool: https://gpt-index.readthedocs.io/en/latest/core_modules/agent_modules/agents/usage_pattern.html#get-started
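To see why the docstring matters, here is a toy sketch of roughly what that conversion does. In llama_index itself the one-liner is FunctionTool.from_defaults(fn=multiply); the make_tool and ToolMetadata names below are illustrative stand-ins, not the library's API:

```python
import inspect
from dataclasses import dataclass

@dataclass
class ToolMetadata:
    name: str
    description: str

def make_tool(fn):
    # Package the function's name, signature, and docstring: this description
    # is what the LLM reads when deciding whether and how to call the tool.
    sig = inspect.signature(fn)
    doc = inspect.getdoc(fn) or ""
    return ToolMetadata(name=fn.__name__, description=f"{fn.__name__}{sig}\n{doc}")

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the product."""
    return a * b

tool = make_tool(multiply)
```

A vague or missing docstring leaves the LLM guessing at the tool's purpose, which is why descriptive docstrings help so much.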

You can also create a complete tool spec, but you'll have to follow an example from the repo. It's pretty easy though; this one is pretty simple to follow:
https://github.com/emptycrown/llama-hub/blob/main/llama_hub/tools/yelp/base.py

Basically the spec_functions list at the top defines which functions get exported as tools. Note again that making the docstring descriptive like this helps a lot.
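The pattern can be sketched in plain Python. The BaseToolSpec below is a minimal stand-in for the llama_index class of the same name, and WeatherToolSpec is a made-up example, not from the repo:

```python
class BaseToolSpec:
    # Minimal stand-in for llama_index's BaseToolSpec (illustration only).
    spec_functions = []

    def to_tool_list(self):
        # Only method names listed in spec_functions are exported as tools.
        return [getattr(self, name) for name in self.spec_functions]

class WeatherToolSpec(BaseToolSpec):
    spec_functions = ["get_temperature"]

    def get_temperature(self, city: str) -> str:
        """Return a canned temperature reading for a city (placeholder data)."""
        return f"The temperature in {city} is 20 C"

    def refresh_cache(self):
        """Not listed in spec_functions, so never exposed to the LLM."""
        pass

tools = WeatherToolSpec().to_tool_list()
```

Helper methods stay private simply by being left out of spec_functions, so the LLM only ever sees the functions you choose to export.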
Why am I getting the following output when I run this question?
agent.chat('when is the last time barrack obama visited michigan')

=== Calling Function ===
Calling function: google_search with args: {
"query": "last time Barack Obama visited Michigan"
}
Got output: Content loaded! You can now search the information using read_google_search
========================
=== Calling Function ===
Calling function: read_google_search with args: {
"query": "When is the last time Barack Obama visited Michigan?"
}
Got output:
It is not possible to answer this question with the given context information.
========================
AgentChatResponse(response="I'm sorry, but I couldn't find information about the last time Barack Obama visited Michigan.", sources=[ToolOutput(content='Content loaded! You can now search the information using read_google_search', tool_name='google_search', raw_input={'args': (), 'kwargs': {'query': 'last time Barack Obama visited Michigan'}}, raw_output='Content loaded! You can now search the information using read_google_search'), ToolOutput(content='\nIt is not possible to answer this question with the given context information.', tool_name='read_google_search', raw_input={'args': (), 'kwargs': {'query': 'When is the last time Barack Obama visited Michigan?'}}, raw_output='\nIt is not possible to answer this question with the given context information.')])
Seems like the LLM is just struggling to search. OpenAI updates often, and tbh it usually makes things worse without telling anyone.

Under the hood, the search-and-load tool creates an index from the Google search results and then uses that as a query engine to answer questions.

You could try setting a global service context with a different LLM; otherwise it defaults to GPT-3.
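Setting the global service context looks roughly like this. A sketch assuming the legacy (pre-0.10) llama_index import paths; the model choice here is just an example:

```python
# Sketch: route all default LLM calls through a stronger model instead of
# the default. Assumes legacy (pre-0.10) llama_index import paths.
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI

service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-4"))
set_global_service_context(service_context)
```

With the global context set, the agent and the index built from the search results both pick up the new LLM without any per-call configuration.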