Quick Note from Talkspace: Because we provide online messaging therapy, we frequently hear from potential clients who want to be sure they are chatting with a therapist, not a chatbot. All of our therapists are licensed, flesh-and-blood humans, but we understand the concern. Whether it’s online therapy, social media or online dating, everyone deserves to chat with the humans they believe they are connecting with. We made this guide so people can answer the big question: Bot or not?
When we message with people on the Internet, we deserve to know they are, well, people. At a time when bots drive more than 60% of web traffic, it’s reasonable for consumers to be wary of chatbots masquerading as humans.
These bots chat with you on sites such as Tinder and Facebook. Programmers design them to simulate real conversation long enough to convince you to buy something, click on a link or hand over personal information.
The key to detecting and reporting them is understanding how they work in various contexts. Then you can exploit their weaknesses and out them as robots!
Quick Fundamentals of Chatbots
Chatbots are computer programs designed to simulate exchanging messages with a human. They match messages from real humans with combinations of keywords and other responses stored in their database. More advanced bots can use audio and visuals such as animations.
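To make the keyword-matching idea concrete, here is a minimal sketch of how such a bot might work. The rules and canned replies below are invented for illustration, not taken from any real bot:

```python
# A toy keyword-matching chatbot. Each rule maps a set of trigger
# keywords to one canned reply; anything unrecognized gets a fallback.
RULES = {
    ("price", "cost", "cheap"): "Great news! Our premium plan is on sale right now.",
    ("hello", "hey"): "Hey there! How are you doing today?",
    ("photo", "pic"): "I have more photos on my profile, just click the link!",
}
FALLBACK = "Tell me more about that."

def reply(message: str) -> str:
    text = message.lower()
    for keywords, response in RULES.items():
        if any(keyword in text for keyword in keywords):
            return response
    # No keyword matched: fall back to a generic conversational nudge.
    return FALLBACK

print(reply("How much does it cost?"))
# prints: Great news! Our premium plan is on sale right now.
```

Notice that any message outside the bot’s keyword list, including filler words like “Umm,” lands on the same generic fallback. That weakness drives most of the detection tactics in this guide.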
Before You Put Your Guard Up: The Good and Harmless Bots
Not all chatbots are bad. Some of them tell you they are bots before you begin chatting. These are usually customer service chatbots designed to take pressure off customer service reps and substitute for them during off hours and weekends. Other types include social media chatbots that automatically send a thank you message when you follow someone new.
You might also receive bot-like responses that are actually the result of a customer service rep using a tool to save time typing, a pattern ChatToolTester.com founder Robert Brandl described to us.
Don’t waste your energy outing these guys. They aren’t trying to take advantage of you. Save it for long conversations and “people” you chat with outside of customer service, such as those on online dating platforms.
Top Signs You are Talking with a Bad Bot
Now we get into the malicious chatbots: the ones trying to sell you something, take your personal information or cheat you out of money you paid to chat with an online therapist. Here are the patterns to look for:
Mentions a Product or Service
The only product or service that should come up quickly during online chats is the one you are using to facilitate the chat. It isn’t suspicious for someone to mention Tinder while they are chatting on Tinder. Anything that doesn’t grow naturally from the conversation is most likely a disguised sales pitch.
Sends a Link Without You Asking for One
Unless the link is directly related to a topic you brought up of your own accord, it is most likely spam. You can stop the conversation after you see it.
Asks for Personal Financial Information
Real person or not, receiving a message asking for any personal financial information such as your credit card number means it’s time to say goodbye.
Responds Suspiciously Quickly
Real humans need to sleep, and they take more than 0.1 seconds to type a detailed response. They won’t respond instantly at all hours of the night.
When you chat with a bot, your messages are being run through a series of if-then rules. Programmers only have so much time to code responses, so some replies might have more than one trigger. A human would not respond in exactly the same way to different questions or comments.
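You can see this weakness in a toy rule set like the one below (the triggers and reply are hypothetical): two unrelated messages hit the same trigger and get a character-for-character identical reply, something a human almost never produces.

```python
# Hypothetical if-then rules where several unrelated triggers share
# one canned reply, a common shortcut when coding time is limited.
CANNED = "That's so interesting! By the way, have you seen my photos?"

def bot_reply(message: str) -> str:
    text = message.lower()
    # Three unrelated topics wired to the same response.
    if "work" in text or "weather" in text or "weekend" in text:
        return CANNED
    return "Tell me more!"

# Two very different messages, one identical answer:
a = bot_reply("How was your weekend?")
b = bot_reply("This weather is awful, right?")
print(a == b)  # prints: True
```

Asking two clearly different questions and receiving the exact same wording twice is a strong signal you are not talking to a person.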
Does Not Speak Naturally
Most people don’t chat with a Hemingway level of clarity and brevity. Real humans use lots of sentence fragments when they’re chatting.
Or They Do the Opposite
Other bots will try too hard to speak casually by using an excess of “lols,” emoji and similar characters.
Sometimes the way a bot produces text reflects errors in its programming. It can be something like two spaces between every sentence, extra periods or bizarre indentations.
Stresses How Much “He” or “She” Doesn’t Speak Your Language Very Well
Starting the conversation with the “sorry excuse my bad English” line is clever because it makes the user more forgiving and less suspicious of some of the aforementioned bot patterns. Try to be patient because it could be rude if you accuse the person of being a bot when they really are struggling with English. Still, look for those patterns and consider some of the tactics below.
How to Out These Bots: The Best Bot Bait
Chatbots have become advanced, but there are still ways to trip them up and out them as the imposters they are. Taking the offensive can be necessary, and helpful for your safety, when you encounter bots that don’t make the above mistakes on their own. Here are some tips from programmers and people who have encountered these pretenders:
Asking the Right Questions (As in the Weird Ones)
Account executive and self-described “computer geek” Chris Orris has encountered chatbots and offered some advice to Talkspace. He recommends outsmarting them by typing questions one wouldn’t typically ask in certain situations. A human would be confused but able to answer the questions accurately. On the other hand, a bot would inadvertently reveal itself.
Here are some sample questions Orris offered (awkward ones designed to reveal bots):
- “Man, you sound like you’re having the same kind of Monday I’m having.”
- “I hear music in the background. Or is that just me?”
- “You know, you sound a lot like my wife.”
- “Dishwasher? Are you from the Pittsburgh area?”
- “I saw something like what you’re talking about when I was visiting Spain. Have you ever been to Spain?”
The key is focusing on the “person” you are talking to. Because this person doesn’t exist, the bot will have trouble keeping its character believable and consistent.
Try Saying “Umm”
Most bots are not great at responding to filler words. Interjections such as “Umm” and “Hmm” might trip them up. CAP Management Services digital marketer Jane Dizon learned this after interacting with customer service chatbots.
“Since bots don’t understand fillers like ‘Ohhh’ and ‘Hmmm,’ they tend to use generic responses like ‘Tell me more’ and ‘Let’s talk more about that,’” Dizon said.
Try a few fillers of your own and then look for generic responses similar to those Dizon mentioned.
Crack a Sarcastic Joke
Sarcasm is a nearly insurmountable challenge for bots. Even over text, there’s something very human about tone and irony. Programmer Dimitri Semenikhin, who built a customer support chatbot for yacht buyers, recommends using sarcastic jokes to detect malicious bots.
“A bot will interpret sarcasm as genuine and won’t be able to answer a joke unless it’s widely used,” Semenikhin told Talkspace.
Here are some great sarcastic jokes that aren’t popular enough for bots to recognize:
- A clean house is the sign of a broken computer.
- Take my advice — I’m not using it.
Ask to See a Video
Asking to see a video can be a great idea when it is possible and appropriate. At Talkspace, some of our clients want to be sure their therapist is bona fide, not a bot.
“Some of them ask if I am real and like it when I post a video so they can see that I am real,” said Talkspace therapist Shannon McFarlin.
If the “person” refuses to provide a video and does not offer a valid reason, keep your guard up. Try to be patient, however, in contexts such as dating.
Places Where You Might Encounter Bots and Chatbots
The signs we mentioned earlier apply to all places people encounter malicious chatbots, but there are nuances for each context. Understanding the specifics for each environment such as online dating and Twitter will help you more rapidly beat the bot.
Online Dating Sites and Apps
There are thousands of chatbots on online dating services, especially those that require minimal profile text, such as Tinder. Most of these bots take the persona of someone physically attractive. They primarily target users by being flirtatious or by dangling the prospect of naked photos and videos.
As the founder of Zones, a dating app where people use a game-based mechanic to match, Nikolay Pokrovsky has spent many hours dealing with bots and is well-versed in their strategies on dating websites and messengers. He said many chatbots will chat long enough to offer users a link that leads to malware or a porn site that uses bots for marketing.
Pokrovsky recommends users maintain a critical attitude during the beginning of their chats. Someone offering to show you naked pictures right off the bat or asking for money over the Internet isn’t normal behavior, he said.
Here are a few more red flags you can look for in online dating chats and profiles:
- only one photo available
- their profile has a link
- they are overly sexual or aggressive
Interested in learning more about online dating? Check out “The Complete Online Dating Guide for Women”!
Social Media Sites Such as Twitter and Instagram
Twitter bots are a popular tool for spamming or inflating follower counts. Most bots will leave you alone. If you follow one, however, expect a direct message trying to sell you something.
Like chatbots on dating sites, Twitter bots often use photos of attractive people and profiles full of sexual messages or images.
Similar bots exist on other platforms and market products by requesting/following people and then messaging them. Look for red flags similar to ones users see on online dating sites such as only one photo, little text, explicit imagery, excessive product mentions and links.
Online Therapy Platforms
The chatbot issue is one of the most common concerns among people who are considering online messaging therapy with a licensed therapist. Understandably, people want to be sure they are paying to chat with a bona fide therapist who will help them work toward a happier life.
As we mentioned earlier, no matter what therapy platform you’re using, one of the easiest ways to gain peace of mind is asking the therapist to provide a video. If that doesn’t seem like enough or if you feel asking is rude, there are ways to analyze the text from the therapist.
Real therapists are able to draw complex connections between messages and issues the client has raised over the course of many hours, days, weeks and months. Chatbots tend to focus only on the present. They ask a lot of questions designed to make you talk more rather than offering analysis.
ELIZA, the first therapy chatbot ever created, illustrates the pattern well, and modern versions of it still work the same way: the bot grabs the text users submit and rephrases it as a question designed to make the client talk more.
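A minimal ELIZA-style sketch shows how this mirroring works. The pattern and the reflection table below are a simplified invention for illustration, far cruder than the real ELIZA:

```python
import re

# A toy ELIZA-style rewriter: it reflects the client's own words back
# as an open-ended question instead of offering any real analysis.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(message: str) -> str:
    match = re.search(r"i feel (.+)", message.lower().rstrip(".!?"))
    if match:
        # Swap first-person words for second-person ones, then ask back.
        feeling = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
        return f"Why do you feel {feeling}?"
    return "Can you tell me more about that?"

print(eliza_reply("I feel anxious about my job."))
# prints: Why do you feel anxious about your job?
```

Nothing here analyzes the client’s situation; the bot is simply rearranging the client’s own words, which is exactly why chatbot “therapy” feels hollow over a long conversation.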
When bots do produce a lot of text rather than asking short questions, it isn’t original. More advanced chatbots attempting to function as therapists tend to resort to quoting canned material instead of offering their own analysis.
Chatbots can encourage clients to share their feelings and thoughts, but they don’t have the skills and experience necessary to provide therapy.
What To Do Once You’ve Found a Malicious Chatbot
So you’ve outed the bad bot. Now it’s time to report it so you can foil the malicious spamming schemes of the programmers who created it. The way users can do this depends on the platform, although most have rules and help centers designed for similar issues.
Pokrovsky showed us how users can report “people” on his app.
His staff investigates a profile after users report it, deactivating it if it displays patterns bots typically use. The profile automatically deactivates if three or more people report it. Most other sites follow similar protocols.
Reporting might be the most you can do to combat bots. Using bots in most ways — even the malicious ones — is not illegal, Pokrovsky said.
Nonetheless, the biggest victory is realizing who — or rather what — you are talking to early on. Your time is valuable, so knowing how to spend as much of it as possible with actual humans is crucial.