@ThefuzzyFurryComrade@pawb.social to Fuck AI@lemmy.world • 1 month ago
On AI Reliability (pawb.social)
𝕸𝖔𝖘𝖘 • 1 month ago
Unless something improved, they’re wrong more than 60% of the time, but at least they’re confident.
@henfredemars@infosec.pub • 1 month ago
This is an excellent exploit of the human mind. An AI being convincing and an AI being correct are two very different things.
@davidgro@lemmy.world • 1 month ago
And they are very specifically optimized to be convincing.
@jsomae@lemmy.ml • 1 month ago
This is why LLMs should only be employed in cases where a 60% error rate is acceptable. In other words, almost none of the places where people are currently being hyped to use them.