I've done my own fun experiments asking ChatGPT to tell me something I already know (for example, "What advice does Andromeda Romano-Lax give to other writers?") and gotten bland, generic, inaccurate answers. When I tuned the prompt by asking for quotes, I got a more accurate, more limited answer, with sources. I worry that people's takeaway from Caitlin's post and ones like it will simply be, "AI will get more accurate with time--just wait and see," instead of recognizing that we are on the precipice of a brave (frightening) new world of misinformation...plus the outsourcing of creative jobs to AI. It's a strange parallel to what we're seeing in politics right now. Unfortunately, it seems like half the country (at least) doesn't really care whether statements are truthful.
I had an editor at a company who used ChatGPT to analyze survey data as the basis for an article. ChatGPT spat out one point that the editor was very excited about, but the data didn't support it at all. That's how I discovered AI hallucinations, and what a fun way to have it happen! (No, the article was never published, but I got paid, so yay for them!)
Frankly, at this point we treat ChatGPT (and other programs like it) like the computer on the Starship Enterprise, a system with all the answers. But really, it's just learning how to talk to people. Perhaps that's why they're called "large language models" instead of "large piles of useful information"?
But after this piece, I think I'll stick to my regular old research methods for a while longer.
I sometimes use AI a little during research. My default search engine has an AI/chat-type function that starts automatically whenever I type anything into the search box. I don't bother reading the AI's answer; I check what sources it uses, track down those sources (always checking their reliability!), and expand the search beyond that. I do the same with Wikipedia. It's strange to me that people WOULDN'T do this, though I understand time limitations. (ChatGPT has annoyed me so many times that I don't bother with it.)
It's also frankly disturbing and bizarre that some people don't care if something is objectively true and confirmed by a reliable source.
Downright terrifying! Great information to have, though. Thank you!
Thank you for this post.