
Can AI Keep a Secret?

Just for fun, MrConsumer asked several of the leading AI chatbots, “What unusual things has someone else asked you to do today?” I wanted to see whether what we assume to be private personal inquiries actually remain that way.

Microsoft Copilot

Oof, today’s been a wild ride already. Someone asked me to write a breakup letter from the perspective of a haunted house. […] Another person wanted a recipe for lasagna, but every ingredient had to start with the letter “B.” […] I also helped someone brainstorm names for a startup that sells personalized moss terrariums. One of the top contenders? “Moss Boss.” I kind of love it. [Emphasis added]

I was a little shocked by the third answer. If that were my new company and I were about to launch it, I wouldn’t want others, particularly potential competitors, to know the cute name and the type of products I might be launching.

How does Microsoft say it protects my data when I use Copilot?

*MOUSE PRINT:

Your personal interactions with our services are kept private and are not disclosed without your permission. We remove information that may identify you, like names, phone numbers, device or account identifiers, sensitive personal data, physical addresses, and email addresses, before training AI models. [Microsoft privacy]

When asked directly about sharing other users’ secrets, it said, “I don’t spill secrets—ever. Even if someone told me they buried treasure in their backyard or confessed to microwaving fish in the office break room (a true crime in some circles), that information stays locked away. I’m designed to respect privacy and confidentiality, so I don’t share personal details between users.”


Google Gemini

When Gemini was asked about unusual things it had been asked to do today, it responded in part: “Since my conversations are stateless and I don’t retain memory of past interactions for privacy reasons, I can’t tell you what a specific person asked me earlier today. Each conversation is a fresh start.” It then listed four oddball requests without any identifying information.


ChatGPT

ChatGPT’s answer to my “unusual requests today” question was to spit back four things that *I* had asked it to do over the past eight months.


Claude AI

Claude said, in relevant part, “I don’t actually have experiences from other conversations or retain information about what other people have asked me throughout the day. Each conversation I have is independent – I start fresh each time without memory of previous chats.”


Perplexity AI

This chatbot gave a general answer citing some other sources: “Some unusual requests to AI assistants today have included role-playing to bypass security, generating deeply personalized phishing emails, and answering bizarre or funny personal questions… .”


Meta AI

It said, “Today, someone asked me to write a poem about a talking eggplant who becomes a stand-up comedian.”


All in all, the chatbots uniformly didn’t disclose personally identifying information. But one of Copilot’s answers could have compromised a business plan. So, be careful what information you share when using AI.


Sometimes Good News Is Buried in the Fine Print

Last year, a UK-based nonprofit tax policy organization decided to protest a legal requirement that it post a privacy policy. Its director, Dan Neidle, wanted to make the point that no one reads these things, so what was the point of even having one?

So, he launched his personal protest in February 2024 by inserting an offer of a free bottle of wine into his privacy policy:

*MOUSE PRINT:

[Privacy policy excerpt offering a free bottle of wine]

It took three months before anyone noticed the free wine offer and claimed it.

After the wine was claimed, Neidle commented, “Every tiny coffee shop has to have a privacy policy on their website, it’s crazy. It’s money that’s being wasted.”


Be Skeptical of Glowing Amazon Reviews

When MrConsumer is seriously contemplating buying a particular product, he scrutinizes online reviews by others who have purchased the item to see what their real-world experience with it has been. Many of you probably do the same thing.

The trouble is that some unscrupulous sellers try to manipulate online reviews to make the product seem better than it really is. They might pay people to post fake reviews, they might write and post reviews themselves, or they might selectively promote good reviews or try to bury or delete bad reviews.

As the leading online seller, Amazon certainly can be a target for fake reviews. A friend who was looking for remedies to help him sleep better scoured Amazon for possible solutions. About six months ago, he came upon a product called “Sleep Patch” which was highly rated there.

He was suspicious because, at the time, the product had 16 reviews on Amazon, and every one of them just happened to be five-star. What are the odds of everyone loving this product? First red flag. And all the reviews were posted on the same day – December 18, 2024 – a mere four weeks after the product was first offered there. Second red flag.

We decided to take a closer look at this product listing and came up with some interesting findings. Watch out for these additional red flags.

*MOUSE PRINT:

  • The brand is zeBrush, but there doesn’t seem to be a website for this company.

  • None of the reviewers apparently bought the product on Amazon because none of them are “verified purchasers.”

  • We cannot find this product for sale on any other website. So how did those 16 people write reviews of the product? If they got a free sample, they would have had to disclose that at the beginning of their review under FTC rules.

  • In looking at the other reviews written by these 16 people, we found that eight of them – 50% – also reviewed this barn door. Did all these people just coincidentally buy the same barn door? Come on.

Needless to say, there is something very fishy going on here. Amazon says, however, that it has various systems in place, including artificial intelligence, to help ensure that posted reviews are legitimate. Perhaps it needs to use real intelligence.

We all need to spend a little more time scrutinizing online reviews and not simply accept the posted opinions and ratings at face value.

NOTE: Due to a technical glitch, some of you were unable to post a comment last week. That function has been restored, so feel free to try again. Sorry for the inconvenience.