3 Comments

I had an editor at a company who used ChatGPT to analyze data from a survey they had done as the basis for an article. ChatGPT spat out one point that the editor was very excited about, but the data didn't support it at all. This is how I discovered AI hallucinations, and what a fun way to have it happen! (No, the article was never published, but I got paid, so yay for them!)

Frankly, at this point we consider ChatGPT (and other programs like it) to be like the computer on the Starship Enterprise, a system with all of the answers. But really, it's just learning how to talk to people. Perhaps that's why they call them "Large Language Models" instead of "Large Piles of Useful Information"?

But after this piece, I think I'll stick to my regular old research methods for a while longer.


Downright terrifying! Great information to have, though. Thank you!

Author:

I've done my own fun experiments asking ChatGPT to tell me something I already know (for example, "What advice does Andromeda Romano-Lax give to other writers?") and got bland, generic, inaccurate answers in response. When I tuned the prompt by asking for quotes, I got a more accurate, more limited answer, with sources.

I worry that people's takeaway from Caitlin's post and ones like it will simply be "AI will get more accurate with time--just wait and see," instead of recognizing that we are at the precipice of a brave (frightening) new world of misinformation, plus the outsourcing of creative jobs to AI. This is such a strange parallel to what we're seeing in politics right now. Unfortunately, it seems like half the country (at least) doesn't really care whether statements are truthful.
