Last week, we examined what appeared to be a news story from USA Today touting discount site Temu.com. On closer inspection, however, that “story” may have actually been an advertisement, but it was not labeled as such.
This week we focus on another news story that popped up in MrConsumer’s consumer news feed recently. It was entitled, “The Nuclear Savings Rule: 10 Frugal Living Tips from the 1950s Era.”
It sounded like old-fashioned savings advice that might be of interest to Consumer World readers even today. Some enterprising reporter, I thought, must have done an awful lot of research to go back 70 years to find consumer tips from the ’50s.
Here’s that Go Banking Rates story to quickly browse — just look at the bolded tips.

If you scroll down to the bottom of the story, there is a surprising editor’s note.
*MOUSE PRINT:
Say what? This story was written by a computer using artificial intelligence and then fact-checked by a human editor. Wow! Or maybe I should say “oy.” Is this what journalism of the near future is going to look like — computers do the research and write the stories, and then a human double-checks them?
Please share your thoughts in the comments.
On the one hand, I like the idea of a computer system researching information, a capability far beyond even a small team of human researchers. Adding a human element at the end and claiming all is well, however, is not entirely to my liking.
That human is where the bias may come in: delete what they don’t agree with, embellish what they do agree with. It seems we are right back where we started, wondering and having to research for ourselves to find out whether we are being propagandized.
>>> having to research for ourselves to find out if we are being propagandized or not.<<<
Unfortunately, that's where we have always been. It's just that many of us didn't realize it until recently. The pandemic opened a lot of people's eyes. You can't lay off millions and confine them to their homes and expect them NOT to spend more time on the Internet, where they are bound to stumble on alternative news sites and learn things they never knew.
As wrong as this is, it’s still not as bad as the pathetic “journalists” who just plagiarize Reddit threads and try to pass them off as actual journalism.
This is going to become more normal as time goes on, for better or for worse…
Even major website Cnet.com tried the AI thing, and it did not go so well.
AI learns from the web. A large percentage of the material on the web is either original BS or outdated facts. Just try searching in your area for a business and see how much bad info you get. If we’re too lazy to do the research we’ll suffer.
“ AI learns from the web. A large percentage of the material on the web is either original BS or outdated facts. Just try searching in your area for a business and see how much bad info you get. If we’re too lazy to do the research we’ll suffer.”
True, but with all the BS out there it is getting more and more difficult to actually do your own research. Sifting through all the clutter and trying to figure out what is real or significant gets more difficult every day. Product reviews and “Top Ten” lists of ANYTHING are becoming fruitless. Bias is a thriving industry.
Having AI do our research is a terrible idea. If we let AI (re-)write our history then we’re all at the mercy of whatever it comes up with. I think it will be harder and harder for human editors to find actual facts for fact checking.
The problem isn’t AI. It’s corporate decisions, like those of Gannett (owner of USA Today and other newspapers) and the private equity owners of many other businesses, that constantly seek to minimize labor costs while increasing profits. The writers’ strike is very much about how AI will be used and who is doing the using. The writers won some temporary restrictions and labeling requirements for scripts, but there will continue to be an incessant push to minimize humans and utilize AI.
Agreed completely.
GoBankingRates is trash and always has been trash. At least now they have replaced their deceptive people with trash robots.
Computers have assisted people with time-consuming chores for about four decades. If a computer can put a good piece together and it’s checked over by real journalists and editors, then why not? If the product is bad, the publisher will be punished in the market. This reminds me of the early criticism of most new technology and the outcry over lost jobs. Automated looms, mechanical calculators, gas lights, early automobiles, and rider-controlled elevators were all lambasted as job killers. They were, but the effect was temporary, as new technologies generally create more new opportunities than they destroy. Measured against history, arguments touting job losses whenever a new advancement comes along are unpersuasive.
This has been going on for years in college with all the “essay writing services” available. I went to college in the seventies, and there were loads of those “services.” The only thing that has changed between then and now is that “AI” is doing it.
Colleges are supposedly against this sort of thing and consider it cheating. There are “AI” programs now that are supposed to spot “AI”-written content.
So, if it’s not acceptable at colleges, why should it be acceptable anywhere else?
This seems more like Google on steroids than true AI. Doesn’t a human still have to set the criteria? Or at least pick the topic? This looks like they just automated the task of copy and paste.
All the comments are very thoughtful, and aptly foreshadowed by Edgar’s “oy”!
This drives home the old saying, “garbage in/garbage out”. To be quite blunt, the fault ultimately lies with the failure of our education system and the lack of good parenting to teach kids something useful (which in a lot of cases may not even be for lack of trying by the parents, because their kids think the internet “knows better” when it really doesn’t in a lot of cases).

If you have poorly educated people spewing garbage on the internet, it’s only going to be compounded as similarly poorly educated people, who have not been given the critical thinking skills necessary to separate the true from the likely false, look for answers. AI is only going to spew out the very same garbage it ingests. And having a similarly poorly educated person “editing” AI is only going to further compound the errors.

The internet has existed for decades, but this wasn’t a major problem until increasingly poorly educated people started writing most of the stuff online. Everyone complains about the quality of education going downhill, but do they realize that these are the kinds of consequences it has? This is why huge numbers of people under a certain age don’t know basic grammar and usage and think the 1800s were the “18th century”. As Edgar said so well above, “Oy”.
I agree wholeheartedly. Good description of what can (and will) happen.
I’m actually not opposed to AI-aided research. The truth is that machines can research far faster and with more breadth than we ever could. Where I dislike it is when AI writes the article and it sounds like AI wrote the article; if there’s a human reviewer at the end doing the writing, that’s good enough for me. I especially appreciate that they called it out.
There should be regulations, with teeth, that require a warning at the head of any article where AI was used to generate and compose the bulk of the piece. AI is going to be used more frequently as the inexorable push for profits means reducing human labor at every point of the production process.
There should be a law requiring that the use of AI be disclosed at the very beginning of the article, whether or not it was checked by a human. IMHO