This just demonstrates a fundamentally flawed understanding of LLMs… they don’t know anything; they generate the text that is statistically likely to follow. They will still generate whatever is most statistically likely even for things they “don’t know”.
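To make that concrete, here’s a toy sketch (made-up vocabulary and logit values, not a real model) of why the output reads equally confident either way: greedy decoding returns the top token whether the distribution is sharply peaked or nearly flat.

```python
import numpy as np

# Toy illustration only: next-token prediction picks from a
# probability distribution over the vocabulary. The tokens and
# logits below are invented for the example.
vocab = ["Paris", "London", "Berlin", "Madrid"]

def next_token(logits):
    # Softmax turns raw scores into probabilities...
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    # ...and greedy decoding returns the most likely token no matter
    # how weak that "most likely" actually is.
    return vocab[int(np.argmax(probs))], probs

# Confident case: one token clearly dominates.
print(next_token(np.array([9.0, 1.0, 1.0, 1.0])))

# "Doesn't know" case: the distribution is nearly flat, yet a token
# still comes out, stated just as plainly as in the confident case.
print(next_token(np.array([1.1, 1.0, 1.05, 0.95])))
```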
I loved the confident response I got after a few corrections.
“Thank you for your patience and assistance in correcting my mistakes, I will finally provide you with the correct response to your question…” It then continues with a completely incorrect response lol