Discussion about this post

Hassan LÂASRI:

When employing a multi-LLM strategy, avoid having one model craft or refine prompts for the others. While a model may attempt the task, it cannot account for the internal workings of its counterparts.

A more reliable method, based on my experience with megaprompts, is this:

1. First, establish and refine the core specifications, circulating them across the models until they form a robust, consensus foundation.

2. Then, let each model generate its response independently from that single source.
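The two steps above can be sketched as a simple fan-out, assuming a hypothetical `call_model(name, prompt)` wrapper around each provider's SDK (the function below is a placeholder, not any real API):

```python
def call_model(name: str, prompt: str) -> str:
    # Placeholder: in practice, dispatch to each provider's own SDK here.
    return f"[{name}] response to: {prompt[:40]}"

def fan_out(spec: str, models: list[str]) -> dict[str, str]:
    # Step 2: every model answers from the same agreed-upon spec;
    # no model rewrites or "optimizes" the prompt for another.
    return {name: call_model(name, spec) for name in models}

# Step 1 would have produced this single, consensus specification.
spec = "Summarize the quarterly report in 200 words."
answers = fan_out(spec, ["model-a", "model-b", "model-c"])
for name, text in answers.items():
    print(name, "->", text)
```

The key design point is that `spec` is frozen before the fan-out, so differences between the answers reflect each model's own behavior rather than an intermediary's rewording.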
