Say you’re learning about protein biology by interrogating ChatGPT about it. You become interested in the history of protein sequencing, and want to explore how the technique has evolved. Problem: a long back-and-forth with the LLM about the history of protein sequencing will clutter the UI, and make the model stupider (especially when you return the conversation to the original topic).

Basically, I am describing this Nick Cammarata complaint:

@nickcammarata: wish I could 'duplicate chat' in chatgpt/claude like you can with files and docs: spin off a side convo, play with the tangent, trash it, and keep the original thread pristine

The solution, as a reply to Cammarata pointed out:

@nickcammarata: Oh wait you can scroll way up to the part you originally forked off and edit it to clean up; quoting @ChrisChipMonk: wdym u can branch by editing replies

Notably, this is not a reset of the conversation history, but literally a fork:

@ClaudiuDP: this is not a fork, but a reset from that point on. you lose the conversation after the point of reset.; @papayathreesome's reply: no, it's literally a fork - you can switch between branches on the forked message
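To make the mechanism concrete, here's a minimal sketch (in TypeScript, with made-up names; I'm not claiming this is how OpenAI or Anthropic actually store chats) of a conversation as a message tree: editing a message attaches the new version to the same parent, so the old branch survives and you can switch between siblings.

```typescript
// Hypothetical sketch of a forked conversation: each message keeps a parent
// pointer, so editing a message adds a *sibling* branch instead of erasing
// what came after. All names here are made up for illustration.
interface Message {
  id: string;
  parentId: string | null; // null for the first message in the chat
  role: "user" | "assistant";
  content: string;
  children: string[]; // ids of replies (and of edited variants' subtrees)
}

class ConversationTree {
  private messages = new Map<string, Message>();
  private nextId = 0;

  add(parentId: string | null, role: Message["role"], content: string): string {
    const id = `msg-${this.nextId++}`;
    this.messages.set(id, { id, parentId, role, content, children: [] });
    if (parentId !== null) this.messages.get(parentId)!.children.push(id);
    return id;
  }

  // "Editing" a message forks: the new version is attached to the same
  // parent, so the old branch (and everything under it) is still there.
  edit(messageId: string, newContent: string): string {
    const original = this.messages.get(messageId)!;
    return this.add(original.parentId, original.role, newContent);
  }

  // Walk from a leaf back to the root to get one branch's linear history.
  branch(leafId: string): Message[] {
    const path: Message[] = [];
    for (let cur: string | null = leafId; cur !== null; ) {
      const msg = this.messages.get(cur)!;
      path.push(msg);
      cur = msg.parentId;
    }
    return path.reverse();
  }
}

// Example: fork by editing a later user message.
const chat = new ConversationTree();
const q1 = chat.add(null, "user", "How does Edman degradation work?");
const a1 = chat.add(q1, "assistant", "…");
const q2 = chat.add(a1, "user", "Tell me the history of protein sequencing.");
const q2b = chat.edit(q2, "How does protein transportation work?");
// Both q2 and q2b now hang off a1; switching branches is just picking a leaf.
```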

I do this all the time, to the point where I thought everyone knew about the forking trick. But if a seasoned LLM researcher and power user like Cammarata wasn’t doing it, I figured it was worth making a post about.

Of course, this workflow is far from perfect. In particular, an obvious quality-of-life improvement would be exposing the underlying tree structure:

@niplav_site: TODO: write a browser extension that shows the underlying tree datastructure

I predict that this tree would still be unwieldy for many chats. It’d be great to get an LLM to label every branch (e.g., “History of protein sequencing”, “How protein transportation works”, etc.), and to allow users to prune branches off the tree (not by deleting them permanently, but by hiding them).
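Here's a rough sketch of what that could look like, again with hypothetical names (the `summarize` callback stands in for whatever LLM call would produce a short branch title): each branch carries a label and a `hidden` flag, and “pruning” just filters the view without deleting anything.

```typescript
// Hypothetical sketch of the labeling/pruning idea: each fork point's branch
// carries an LLM-generated label and a `hidden` flag, so "pruning" is
// reversible bookkeeping, not deletion.
interface Branch {
  rootMessageId: string; // first message of the branch
  label: string;         // e.g. "History of protein sequencing"
  hidden: boolean;
}

// `summarize` is a placeholder for an LLM call that titles a branch.
async function labelBranches(
  branches: Omit<Branch, "label">[],
  summarize: (rootMessageId: string) => Promise<string>,
): Promise<Branch[]> {
  return Promise.all(
    branches.map(async (b) => ({ ...b, label: await summarize(b.rootMessageId) })),
  );
}

// Hidden branches are filtered out of the display, but kept in storage.
function visibleBranches(branches: Branch[]): Branch[] {
  return branches.filter((b) => !b.hidden);
}
```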