It’s not self-aware or capable of moral reasoning, so if you tailor a question just right, the answer won’t include the moral context or corrections that would normally come with it. Pretty sure we saw a similar thing when people asked specifically tailored questions about how to commit certain crimes “as a thought experiment” or how to make certain weapons or banned substances “for a fictional story.” It’s strictly a tool, and it comes with the same failings around use, much like firearms.
Of course not. But it is subject to programming parameters, parameters that were expanded so that posts like this are specifically possible. Perhaps even encouraged.
AI chatbots all have safeguards implemented in them.
And there’s a very large number of people constantly trying to break those safeguards to generate whatever response they want.
Those parameters were expanded by even bigger “tools,” you might say.
Also a reason I hate these LLMs.