Prompt Engineering. Tree of Thought + Chain of Thought

    Prompt engineering is a dynamic, creative and powerful skill to learn in the current era of AI and LLMs. LLMs are a powerful tool for solving a wide variety of problems, but to use them well we need to learn their language: a combination of plain English, systems design, problem solving, question asking and other skills. LLMs also respond well to specific keywords depending on the scenario. The better the instructions, the better the output.

    Most problems, be it design, analysis, coding, building or writing, require a well thought out plan to assess different options in detail and then choose the best path, ending with a final result or summary report. The plan requires analyzing multiple scenarios with a high level view of all required inputs, analysis/calculations and outputs. The detailed analysis is then done along each path, where the real details and nitty-gritty are examined for every data point and feature. Humans create this high level plan, the detailed analysis plan and the final presentation of results. They also review the results and make a judgement call based on skill, experience, gut feel and so on to reach a conclusion. That, at a very high level, is how humans solve problems.

    Extrapolating this to different scenarios, industries, problems, creative work, writing, innovation and so on creates a giant, complex, networked decision tree with many branches of interconnected and evolving nodes that need to work asynchronously in parallel. Coordinating their work toward a final result requires a master controller algorithm, like the brain and its billions of networked neurons. I don’t think one person, or even a group of people working separately, can possibly solve all these problems. The number of paths to analyze is akin to a combinatorial explosion, too complex for a human or a group to comprehend and process. But when networked together, the possibilities are endless.

    This is where Tree of Thought (ToT) and Chain of Thought (CoT) can be used quite effectively. ToT creates the plan to analyze multiple paths, as high level tree branches, optimizing and ranking them for each scenario. Each branch is then analyzed in detail with all its inputs using CoT, traversing down the branch as deep as required. You can also force the LLM to self-review its ToT paths and CoT analysis to make sure it covers all possibilities and misses no edge cases. The more detailed, precise and relevant the prompt instructions, the better the resulting plan and detailed analysis.
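
    To make this flow concrete, here is a minimal Python sketch of one way to wire it up. The call_llm function is just a placeholder for whatever LLM client you use, and the prompts, branch count and review step are illustrative assumptions, not a fixed recipe.

    def call_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to the LLM of your choice and return its text reply."""
        raise NotImplementedError("wire this up to your LLM API")

    def tot_cot_analysis(problem: str, context: str, n_branches: int = 3) -> str:
        # 1. ToT: plan several high level solution paths (tree branches) and rank them.
        plan = call_llm(
            f"Problem: {problem}\nContext: {context}\n"
            f"Propose {n_branches} distinct high level approaches (tree branches), "
            "rank them, and list the inputs each one needs."
        )

        # 2. CoT: analyze each branch in detail, step by step, as deep as needed.
        branch_reports = []
        for i in range(1, n_branches + 1):
            branch_reports.append(call_llm(
                f"Plan:\n{plan}\n\n"
                f"Analyze branch {i} step by step (chain of thought), showing "
                "intermediate reasoning and any calculations."
            ))

        # 3. Self-review: force the model to check the plan and analyses for missed edge cases.
        review = call_llm(
            "Review the plan and branch analyses below and point out missed edge cases "
            "or inconsistencies.\n\nPlan:\n" + plan + "\n\n" + "\n\n".join(branch_reports)
        )

        # 4. Combine everything into a final recommendation and summary report.
        return call_llm(
            "Combine the branch analyses and the review into a final recommendation "
            "with a short summary report.\n\n" + "\n\n".join(branch_reports) + "\n\n" + review
        )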

    Creating such a design by hand is very challenging and fraught with oversight, analysis paralysis and hindsight risk. Various techniques and tools exist in different industries and research labs, but all require a high level of human input and experience to create the entire plan. With a well designed prompt you can instead ask the LLM to create a detailed, structured ToT and CoT prompt that performs a more thorough and insightful analysis. Effectively, you are prompting the LLM to generate its own detailed analysis prompt. Executing that generated prompt with all the relevant data as context then automates the analysis process, once it has been reviewed, refined and its results accepted. It’s like a senior, experienced analyst doing the work of multiple junior analysts networked together in parallel, potentially 24/7, without getting physically or mentally tired of the many scenarios to analyze and process.
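
    Below is a small sketch of that two-stage idea, assuming the same call_llm placeholder as in the earlier snippet; the function name, arguments and the manual review step are illustrative, not a prescribed interface.

    def call_llm(prompt: str) -> str:
        """Same placeholder as above: send `prompt` to your LLM and return its reply."""
        raise NotImplementedError("wire this up to your LLM API")

    def generate_and_run_analysis(generator_instructions: str, context_data: str) -> str:
        # Stage 1: ask the LLM to write the detailed ToT + CoT analysis prompt.
        analysis_prompt = call_llm(
            generator_instructions + "\n\nContext data:\n" + context_data
        )

        # Review and refine `analysis_prompt` here (human in the loop) before trusting it.

        # Stage 2: execute the generated prompt against the relevant data as context.
        return call_llm(analysis_prompt + "\n\nContext data:\n" + context_data)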

    I have been using this concept for some time now, and the results I am seeing are quite promising and can be extended to all kinds of problems in different industries. LLM prompting opens up many possibilities for solving various problems, with further enhancements like recursive self-improvement if designed to do so. It’s not that hard: a few iterations of human input and you can get the LLM to do amazing things. This is not a new concept; it’s a well known prompt engineering technique, though I am not sure how widely it is used.

    The only limitation is how much compute you have access to, as each scenario run of a ToT+CoT prompt with context data consumes a lot of tokens and GPU time. But I am quite excited by all the problems I can solve. This is just scratching the surface; the possibilities are endless, limited only by your imagination, your desire to solve problems and your access to compute. I hope this article helps. I’ll share a simple prompt generator below that you can use to generate the ToT+CoT prompt. It’s a simple prompt; you can write much better and more detailed prompts for your own problems.

    Sample ToT+CoT prompt generator:

    “You are an expert analyst that generates a prompt for analyzing different scenarios using tree of thought and chain of thought. You will be given this {…} data for context. ToT instructions are here {…}. Your overall goal is to create an analysis plan that analyzes each scenario in detail. You will plan the analysis using tree of thought. You will then analyze each path in detail using chain of thought. CoT detailed instructions are here {…}. Final analysis summary combining the results, giving recommendations, and other specific instructions are here {…}.”
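
    If you want to assemble that generator prompt programmatically, here is one way to keep the pieces organized; the template wording and the placeholder strings are only examples of what you would otherwise paste into the {…} slots.

    GENERATOR_TEMPLATE = (
        "You are an expert analyst that generates a prompt for analyzing different "
        "scenarios using tree of thought and chain of thought.\n"
        "Context data: {context_data}\n"
        "Tree of thought instructions: {tot_instructions}\n"
        "Chain of thought instructions: {cot_instructions}\n"
        "Final summary and recommendation instructions: {summary_instructions}\n"
    )

    generator_prompt = GENERATOR_TEMPLATE.format(
        context_data="<your data here>",
        tot_instructions="<how to plan and rank the branches>",
        cot_instructions="<how deep to analyze each branch>",
        summary_instructions="<how to combine results and give recommendations>",
    )
    # analysis_prompt = call_llm(generator_prompt)  # using the placeholder client from earlier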

    Happy prompt engineering!