It seems that Mixtral 8x7B can get very anxious about a lack of feedback, even when the response it gave was otherwise entirely fine.

This is just one of the challenges, or joys, of working with LLMs trained on the internet. Sometimes the rabbit holes the model goes down are completely bewildering. I dare say that if I actually wanted it to produce that sort of output, I'd struggle to get it to do so.