Of course, it’s an image generated by AI that has been trained to generate images that can’t be distinguished from reality. So of course the AI thinks it’s real lol
It’s been trained to generate images that it thinks* can’t be distinguished from reality
And if it could distinguish better, it could also generate better.
Not necessarily, but the errors would be less obvious, or weirder, since it would have spent more time in training
Weirder? Interesting, like how for example?
Weirder in that it gets better at “photorealism” (textures, etc.) but the subjects might be nonsensical. Teaching it only how to avoid automated detection won’t teach it what scenes mean.
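To put it concretely, the dynamic being described upthread is basically the GAN training loop: the generator’s only grade is whether the discriminator is fooled, and nothing in that objective rewards the scene making sense. A rough PyTorch sketch of the generic setup (the tiny MLPs and sizes here are purely illustrative, not any particular model):

```python
import torch
import torch.nn as nn

# Toy illustrative networks -- real image GANs use conv nets, but the
# training signal is the same: the generator is only graded on fooling D.
latent_dim, data_dim = 16, 64

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real samples from generated ones.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # detach: don't update G on the D step
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: its *only* objective is making D say "real".
    #    Nothing here scores whether the output is semantically coherent.
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random "real" data, just to show the loop runs.
for _ in range(3):
    print(train_step(torch.randn(32, data_dim)))
```

A better discriminator drives the textures toward realism (that’s the whole signal), but “indistinguishable to D” is not the same objective as “the scene makes sense.”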
Absolutely not