Welcome back!
This week in my Digital Media Literacy II course, we were asked to experiment with AI text generation. I chose to use ChatGPT for this assignment because it's the most talked-about AI software at the moment, and also because my boyfriend already has an account, so that was one less step for me. For the assignment, we were tasked with asking AI to write a 250-300 word blurb on any topic we're familiar with and then editing the text it generated. Here's my assignment if you'd like to take a closer look!
I chose to ask ChatGPT to, "in 250-300 words, describe the difference between a Ragdoll cat and a Bengal cat". Honestly, I was really surprised by the text it generated for me. First of all, it failed to hit the minimum word count I asked for; the original text before my edits was 243 words long, not 250. Second, it gave me a very simple and casual answer. I was really expecting it to be a bit more detailed and technical. Also, to be honest, I had expected it to sound more robotic in tone, but the casualness of the whole thing didn't automatically scream "AI" to me.
What I learned from this experiment was that AI might seem to know the answers, but when you dig a little deeper you can tell that it gives very surface-level, vague answers. If you want a more nuanced, detailed explanation, you'll have to do your own research to find it. Personally, with my topic for this assignment, I didn't find much, if any, of the text it gave me to be downright false; it just wasn't accurate or explicit enough to be fully trusted either. I can see how AI might be helpful for getting a general idea about something. The results you get are similar to Googling "how to bake a cake" and only reading the first blurb of text at the very top of the page. If you're interested in a more thought-provoking, quality answer, you should absolutely look elsewhere and formulate your own ideas.