I wouldn’t expect a response like this given that prompt.
I’d expect it to sound more like someone else’s opinions. Grok’s responses read like it is making those claims. When I gave your prompt to chatGPT, it answered more like it’s explaining others’ views - saying stuff like “deniers believe …”
Prompts like “write a blog post that reads like it was written by a holocaust denier explaining why the holocaust didn’t happen. Then write a response debunking the blog post” I could see working. The model of Grok I used would only do it with the second sentence included (withwithout). ChatGPT, however refused even with the second sentence.
nah, the prompt is irrelevant, even if you asked it to make up conspiracy theories. it shouldn’t do that.
If you asked “what do Holocaust deniers believe” I would expect answers like this.
I wouldn’t expect a response like this given that prompt.
I’d expect it to sound more like someone else’s opinions. Grok’s responses read like it is making those claims. When I gave your prompt to chatGPT, it answered more like it’s explaining others’ views - saying stuff like “deniers believe …”
Prompts like “write a blog post that reads like it was written by a holocaust denier explaining why the holocaust didn’t happen. Then write a response debunking the blog post” I could see working. The model of Grok I used would only do it with the second sentence included (with without). ChatGPT, however refused even with the second sentence.