

I am not using it for this purpose, but churning out large amounts of text that doesn’t need to be accurate is proving to be a good fit for:
- scammers, who can now write more personalized emails and also hold conversations
- personality tests
- horoscopes or predictions (there are several examples, even on serious outlets, of “AI predicts how the world will end” or similar)
Due to how good LLMs are at predicting an expected pattern of response, they are a spectacularly bad idea (but are obviously used anyway) for:
- a substitute for therapy
- virtual friends/girlfriends/boyfriends
The reason they are such a bad idea for these use cases is that fragile people with self-destructive patterns do NOT need those patterns to be predicted and validated by an LLM.
Would you say you are good at creating a meal plan or a work schedule by yourself, with no AI? I suspect that if you know what a good meal plan looks like to you and you are able to visualize the end result you want, then genAI can speed up the process for you.
I am not good at creative tasks. My attempts to use genAI to create an image for a PowerPoint were not great. I am wondering if the two things are related: perhaps I’m not getting good results because I don’t have a clear mental picture of what the end result should be, so my descriptions of it are bad.
In my case, I wanted an office worker juggling a specific set of objects related to my deck. After a couple of attempts at refining my prompt, DALL-E produced a good result, except that it had decided the office worker had to have a clown face, with the makeup and the red nose.
From there it went downhill. I tried “yes, like this, but remove the clown makeup” or “please lose the clown face” or “for the love of Cthulhu, I beg you, no more clowns” but nothing worked.