Fork LLM chats
Say you’re learning about protein biology by interrogating ChatGPT about it. You become interested in the history of protein sequencing and want to explore how the technique has evolved. Problem: a long back-and-forth with the LLM about the history of protein sequencing will clutter the UI and make the model stupider (especially when you return the conversation to the original topic).
Basically, I am describing this Nick Cammarata complaint:
Solution, as a reply to Cammarata pointed out:
Notably, this is not a reset of the conversation history, but literally a fork:
I do this all the time, to the point where I thought everyone knew about the forking trick. But if a seasoned LLM researcher and power user like Cammarata wasn’t doing it, I figured it was worth making a post about.
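To make the fork semantics concrete, here’s a minimal Python sketch of what “fork” means here: the new branch keeps the message history up to the edited turn and diverges from there, whereas a reset would start from an empty history. The `Chat` and `fork_at` names are mine, purely for illustration, not any actual chat product’s API.

```python
from dataclasses import dataclass, field

# A minimal model of the fork semantics: the forked chat shares the
# message prefix up to the edited turn, then diverges independently.
# (Chat and fork_at are illustrative names, not ChatGPT's API.)

@dataclass
class Chat:
    messages: list = field(default_factory=list)  # (role, text) pairs

    def say(self, role: str, text: str) -> None:
        self.messages.append((role, text))

    def fork_at(self, index: int) -> "Chat":
        # A fork keeps everything *before* the edited message, so the
        # model still sees the shared context -- unlike a reset, which
        # would throw the whole history away.
        return Chat(messages=self.messages[:index])

main = Chat()
main.say("user", "Explain protein biology basics.")
main.say("assistant", "Proteins are chains of amino acids...")
main.say("user", "Tell me about the history of protein sequencing.")
main.say("assistant", "Sanger sequenced insulin in the 1950s...")

# Fork from just after the basics: the sequencing-history tangent
# never enters this branch's context.
branch = main.fork_at(2)
branch.say("user", "How does protein transport work?")

assert branch.messages[:2] == main.messages[:2]  # shared prefix
assert len(branch.messages) == 3                 # independent tail
```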
Of course, this workflow is far from perfect. In particular, an obvious quality-of-life improvement would be exposing the underlying tree structure:
I predict that this tree would still be unwieldy for many chats. It’d be great to get an LLM to label every branch (e.g., “History of protein sequencing”, “How protein transport works”, etc.), and to allow users to prune branches off the tree (not by deleting them permanently, but by hiding them).
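Here’s a rough sketch of what that could look like as a data structure, assuming each fork point becomes a node with an LLM-generated label. `BranchNode`, `prune`, and the labels are all hypothetical; the point is that pruning is a one-flag change, since hidden branches are skipped at render time rather than deleted.

```python
from dataclasses import dataclass, field

# A sketch of the proposed tree UI: every fork point is a node,
# branches carry LLM-generated labels, and "pruning" sets a hidden
# flag instead of deleting anything. All names are hypothetical.

@dataclass
class BranchNode:
    label: str                      # e.g. an LLM-generated summary
    children: list = field(default_factory=list)
    hidden: bool = False            # pruned = hidden, not deleted

    def prune(self) -> None:
        self.hidden = True

    def render(self, depth: int = 0) -> None:
        if self.hidden:
            return                  # skip pruned subtrees entirely
        print("  " * depth + self.label)
        for child in self.children:
            child.render(depth + 1)

root = BranchNode("Protein biology basics")
history = BranchNode("History of protein sequencing")
transport = BranchNode("How protein transport works")
root.children += [history, transport]

history.prune()   # hide the tangent without losing it
root.render()
# Protein biology basics
#   How protein transport works
```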