How are folks sending Python objects to an LLM as context for summarization?
Are you transforming them to JSON first (and then to a string)? Do you have to augment your prompt to explain to the LLM what each field means/does? Do some LLMs happily accept serialized objects? Does pydantic help? Is there a library/service that can do this for you?
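For concreteness, here's a minimal sketch of the "pydantic -> JSON string" approach I had in mind (assuming pydantic v2; the Order model and its fields are just a made-up example):

    from pydantic import BaseModel, Field

    # Hypothetical example model -- in practice this would be whatever
    # object you want the LLM to summarize.
    class Order(BaseModel):
        order_id: str = Field(description="Internal order identifier")
        customer_note: str = Field(description="Free-text note left by the customer")
        total_cents: int = Field(description="Order total in cents")

    order = Order(
        order_id="A-123",
        customer_note="Please ship before Friday",
        total_cents=4999,
    )

    # model_json_schema() carries the field descriptions, so the prompt can
    # "explain what each field means" without hand-writing that text.
    schema = Order.model_json_schema()
    payload = order.model_dump_json(indent=2)

    prompt = (
        "Summarize the following order for a support agent.\n\n"
        f"Field meanings (JSON Schema): {schema}\n\n"
        f"Order data:\n{payload}"
    )
    print(prompt)  # this string is what would get sent to the LLM as context

Curious whether that's roughly what people are doing, or whether there's a cleaner pattern.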