Staying Safe from Chatbot Scams: Your Ultimate Guide (2024)

The lifelike quality of automated chat features and devices can make people comfortable sharing their personal information. Criminals, however, often exploit these popular technologies in phishing schemes.


By Aliza Vigderman, Senior Editor, Industry Analyst. Last updated on Jun 11, 2024.

Automated chatbots serve many purposes today, from ordering pizza and checking bank account balances to providing mental health support. One in three U.S. men even uses ChatGPT for help with their relationships.

The uses for chatbots are nearly endless, but businesses primarily use them to engage website visitors and increase online purchases. This is one of the main drivers of the chatbot market’s predicted 25 percent year-over-year growth.

As businesses find more applications for chatbots, bad actors are also finding new ways to exploit chatbot users and steal their personal information. And unfortunately, most people are not aware of this threat. According to our recent study, 58 percent of adults said they didn’t know chatbots could be manipulated to gain access to personal information.

This guide will help you use chatbots safely and securely and show you how to spot a scammy chat before you share sensitive information.

Table of Contents

  • Online chatbot risks and vulnerabilities
  • Recent high-profile chatbot scams
  • How to stay safe while using chatbots
  • AI home assistant risks and vulnerabilities
  • Ways to secure your AI home assistant devices
  • Dating app chatbot scams
  • Conclusion: What you should never share with chatbots

Online chatbot risks and vulnerabilities

Although chat features benefit businesses and customers alike, they carry certain risks and vulnerabilities you should be aware of. Most people are not confident that these online chats are completely secure. In fact, only 11 percent of respondents were very or extremely confident that companies had sufficiently secured their chat features.

If customers would not feel safe typing their credit card number over a public Wi-Fi network, they should be just as wary when ordering through a chatbot. According to our research, people are more concerned about their information security in online banking chats than in online retailers’ chats. That makes sense, given how damaging the sensitive information tied to a bank account could be in the wrong hands.

Even in retail shopping chats, however, criminals could access victims’ credit card information if the company suffers a data breach: any credit card or personal data shared and stored in an automated chat can be stolen and misused. No matter which chatbot you use, it’s essential to stay vigilant and protect your personal information.

Recent high-profile chatbot scams

Chatbot scammers often impersonate trustworthy brands in their schemes. According to Kaspersky research, major brands like Apple, Amazon, and eBay are the ones phishers impersonate most often. Here are a few examples of chatbot scams in which bad actors pretended to represent major companies to obtain sensitive information from victims.

DHL chatbot scam

DHL is a courier, package delivery, and express mail service company. In May 2022, a phishing scam impersonating DHL spread widely, though it did not start in chatbot form. The scammers asked unsuspecting recipients to pay additional shipping costs to receive a package, and victims had to share their credit card information to pay the charge.

First, the victims received an email about DHL package delivery problems. If they clicked on the email link, they were eventually directed to a chatbot. The chatbot conversation may have seemed trustworthy to some users since it included a captcha form, email and password prompts, and even a photo of a damaged package. However, there were a few tell-tale signs that this was a scam:

  • The “From” field in the email was blank.
  • The website address was incorrect (dhiparcel.com, not dhl.com); the sketch below shows how such lookalike domains can be flagged automatically.
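
Lookalike domains such as dhiparcel.com can even be caught programmatically. The following is a minimal Python sketch of the idea; the TRUSTED_DOMAINS list and similarity threshold are illustrative assumptions, not a real phishing filter, which would also check registrar records, TLS certificates, and reputation databases.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist: domains of companies you actually deal with.
TRUSTED_DOMAINS = ["dhl.com", "amazon.com", "facebook.com"]

def check_link(url: str, threshold: float = 0.6) -> str:
    """Classify a URL's domain as trusted, a suspicious lookalike, or unknown."""
    host = urlparse(url).hostname or ""
    # Naive registrable domain: keep the last two labels ("www.dhl.com" -> "dhl.com").
    domain = ".".join(host.split(".")[-2:])

    if domain in TRUSTED_DOMAINS:
        return f"{domain}: matches a trusted domain"
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return f"{domain}: SUSPICIOUS lookalike of {trusted} (similarity {similarity:.2f})"
    return f"{domain}: unknown domain; verify before clicking"

print(check_link("https://www.dhl.com/track"))    # matches a trusted domain
print(check_link("https://dhiparcel.com/track"))  # flagged as a lookalike of dhl.com
```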

Did You Know: Phishing scams don’t stop at your inbox. Cybercriminals have expanded their phishing playbook to include text messages. Read our guide to phishing text messages, known as “smishing,” to learn more.

Facebook Messenger chat scam

Meta’s Facebook and Messenger have also been targeted in a chatbot scam. Some Facebook users received a fraudulent email claiming that their page violated community standards and that their account would be deleted automatically if they didn’t appeal the decision within 48 hours.

A link took unsuspecting readers to an automated support chat within Facebook Messenger. The chatbot directed them to share their Facebook username and password with the scammers, which was the scheme’s goal.

There were a few clear signs that this was a scam:

  • The email’s sending domain and “from” address did not belong to Facebook, Meta, or Messenger.
  • The website was not an official Facebook support page.
  • The Facebook page associated with the chatbot had no posts or followers.
  • The chatbot prompted users to share their usernames and passwords.

How to stay safe while using chatbots

Chatbots can be hugely valuable and are typically very safe, whether you’re using them online or in your home via a device such as the Amazon Echo Dot. Still, a few telltale signs may indicate that a scammy chatbot is targeting you.

Here are a few ways to stay safe and spot fraudulent chats while on the internet:

  • Use chatbots only on websites you have navigated to yourself (do not use chatbots on websites you reached by clicking on links in suspicious emails or texts).
  • If you receive an email with a link to a chatbot, always verify the “from” address before clicking on any links. For example, if the sender claims to be from Walmart, the from address’s domain should match Walmart’s web address (one way to check this is sketched just after this list).
  • Only click on links in texts or emails you were expecting or from senders you know.
  • Ignore tempting offers and incredible prizes, especially when they appear out of nowhere. Chatbot scams typically originate from pop-ups and links in websites, emails, and text messages; the Apple iPhone 12 scam, for example, started with a text message. Text messages give scammers an advantage over email because shortened URLs hide the questionable destination, and abbreviated or incorrect grammar is less conspicuous in a text.
  • Follow the maxim that if a prize or offer seems too good to be true, it is.
  • If you receive a suspicious message, do an internet search for the company name and the offer or message in question. If the offer is real and valid, you will likely find more information about it on the company’s website. If it is fake, you might find news reports about the scam or no information at all.
  • Use two-factor or multi-factor authentication to further protect your accounts from unauthorized users.
  • Be wary of unknown or random requests for your payment information or personal details via chatbots. A legitimate company will not ask you sensitive questions via chat.
  • Always keep your security software and browsers updated.
  • Be vigilant about suspicious chatbot messages and report any malicious activity.
  • Refrain from using unsecured or public Wi-Fi networks to access your sensitive information.
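
To make the advice about verifying the “from” address concrete, here is a minimal Python sketch of one way to check a sender’s domain against a company’s official web address. The walmart.com example is an assumption for illustration, and real mail clients perform far more thorough checks; note the guard against the common trap where a scam domain merely ends with the brand’s name.

```python
from email.utils import parseaddr

def sender_domain_ok(from_header: str, official_domain: str) -> bool:
    """True only if the sender's domain is the official domain or a true subdomain of it."""
    _, address = parseaddr(from_header)  # "Walmart <deals@walmart.com>" -> "deals@walmart.com"
    if "@" not in address:
        return False  # a blank or malformed "From" field is itself a red flag
    domain = address.rsplit("@", 1)[1].lower()
    # Careful: "evilwalmart.com".endswith("walmart.com") is True,
    # so require an exact match or a real subdomain boundary.
    return domain == official_domain or domain.endswith("." + official_domain)

print(sender_domain_ok("Walmart <deals@walmart.com>", "walmart.com"))       # True
print(sender_domain_ok("Walmart <deals@mail.walmart.com>", "walmart.com"))  # True
print(sender_domain_ok("Walmart <deals@evilwalmart.com>", "walmart.com"))   # False
```

The blank “From” field in the DHL scam described earlier would fail this check immediately.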

If you receive frequent texts from unknown numbers that contain suspicious links, you can adjust your carrier settings to filter out spam calls and texts, or download mobile apps that block shady numbers and texts. Forward any scam text messages you receive to 7726 (SPAM) so your carrier can investigate, and report the fraud to the Federal Trade Commission (FTC) at ReportFraud.ftc.gov.

AI home assistant risks and vulnerabilities

Chatbot threats aren’t only online; most Americans have already opened their homes to very similar interfaces. The same conversational AI powering internet chatbots is coded into virtual personal assistants (VPAs) like Amazon’s Alexa, Google’s Assistant, and Apple’s Siri.

More than half of respondents in a survey we conducted own an AI assistant, meaning that roughly 120 million Americans regularly share personal information with such devices. Amazon’s Alexa is the most popular model today.

Home assistants offer convenience by handling household tasks with an understanding of our personal needs and preferences. That same functionality, however, makes them a privacy and security risk, and understandably, not everyone trusts them.

More than 90 percent of users have doubts about home assistant security, and fewer than half express any level of confidence in them.

This skepticism is justified. Voice-activated assistants lack many of the protocols that can foil a browser-based scambot. Rather than requesting password logins through verifiable web pages, assistants accept commands from anyone, with no visual confirmation of the remote connection. That allows for fraud on either end of the exchange. Add always-on microphones, and you have a recipe for potential disaster.

Some VPA security lapses are borderline comical, like when children commandeer them to buy toys with parental credit cards. Other stories feel far more nefarious. For instance, Amazon admits to listening in on Alexa conversations even as Google and Apple face litigation for similar practices.

Third-party hacks are particularly frightening since virtual assistants often have access to personal accounts and may be connected to household controls (lighting, heating, locks, and more).

Outside breaches of AI assistants usually fall into one of the following categories:

  • Eavesdropping: The most basic exploit of an always-on microphone is turning it into a spying device. Constant listening is a feature, not a bug: devices passively await “wake words,” manufacturers review interactions to improve performance, and some VPAs allow remote listening for communication or monitoring. Hijacking this capability would effectively plant an evil ear right in your home. Beyond creepily invading your privacy, such intrusions could capture financial details, password clues, and blackmail material, and could even confirm that a residence is empty.
  • Imposters: Smart speakers connect customers to services via voice commands with little verification, a vulnerability that clever programmers abuse to fool unwitting users. Hackers can employ a method called “voice squatting,” where unwanted apps launch in response to commands that sound like legitimate requests. An alternate approach called “voice masquerading” involves apps pretending to close or connect elsewhere. Rather than obeying requests to shut down or launch alternate apps, corrupt programs feign execution, then collect information intended for others. According to our study, only 15 percent of respondents knew of these possible hacks.
  • Overriding: Several technological tricks can allow outsiders to control AI assistants remotely. Because devices use ultrasonic frequencies to pair with other electronics, encoded voice commands transmitted above 20 kHz are inaudible to humans yet still received by compliant smart speakers. These so-called “dolphin attacks” can be broadcast on their own or embedded within other audio. Researchers have also triggered assistant actions via light commands: fluctuating lasers that devices interpret as voices, enabling control from afar. Eighty-eight percent of AI assistant owners in our study had never heard of these dastardly tactics.
  • Self-hacks: Researchers recently uncovered a method to turn smart speakers against themselves. Hackers within Bluetooth range can pair with an assistant and use a program to force it to speak audio commands. Since the chatbot is chatting with itself, the instructions are perceived as legitimate and executed – potentially accessing sensitive information or opening doors for an intruder.

Ways to secure your AI home assistant devices

Manufacturers issue updates to address security flaws, but when personal assistant divisions like Amazon’s Alexa unit are losing billions of dollars, such support is likely to be slashed. Luckily, there are simple steps consumers can take to help safeguard their devices.

  1. Mute the mic: Turning off an assistant’s microphone when it’s not in use may reduce fun and convenience (no spontaneous song requests or weather reports), but it blocks many hacking attacks outright.
  2. Add a PIN or voice recognition: Most AI assistants can require voice-matching, personal identification numbers, or two-factor authentication before executing costly commands. Activating these safeguards keeps a device from obeying unauthorized users and stops children from making purchases without authorization.
  3. Delete and prevent recordings: Users can revoke manufacturers’ permission to record or review audio commands. Within the “Alexa Privacy” (Amazon) or “Activity Controls” (Google) settings, owners can find the option to disallow saving or sending recordings. Apple devices no longer record users by default, though owners can opt-in and allow it.
  4. Set listening notifications: Configuring assistants to emit audible alerts when actively listening or acknowledging commands provides a reminder when ears are open…and can uncover external ultrasonic or laser attacks.
  5. Disable voice purchasing: Impulse buying with only a sentence spoken aloud is a modern marvel that’s rarely necessary. Given their security shortcomings, allowing virtual assistants to access or execute financial transactions is risky. Have the assistant place items on a list instead, then confirm the purchase from a more secure terminal. Users might even save money by reconsidering late-night splurges.
  6. Keep your firmware updated: Make sure your AI assistant devices always run the latest software. Updates patch the flaws that attackers exploit on devices running outdated software.
  7. Keep your Wi-Fi network secured: Don’t just assume your Wi-Fi network is unhackable. Use a strong password and enable WPA3 encryption if possible.
  8. Limit third-party apps: Don’t just connect every single third-party app possible. Only enable the ones you trust.

Finally, remember the baseline safety practices that apply to any online device. Some people treat virtual assistants like family members rather than gadgets, but it’s still critical to keep firmware updated, use strong passwords, and connect them only to secured personal routers.

Dating app chatbot scams

Beware of chatbot scams if you’re looking for love on dating apps. From January 2023 to January 2024, scammers’ use of chatbots and AI-generated text on dating apps increased by 2,087 percent.

Scammers use bots to sign up for new accounts and create fake dating profiles on a massive scale: reported romance scams cost victims in the United States about $1.3 billion in 2022. You may be especially at risk if you’re between 51 and 60 years old, according to Barclays research.

AI technology such as ChatGPT is allowing scammers to be more successful at making conversation with potential victims. Still, there are signs that you may be talking to a bot or a scammer relying on AI to make conversation:

  • The profile photos are “too good to be true” (you can try Google Images to see if the pictures are elsewhere online)
  • The profile is bare bones and has few details
  • The user asks to move the conversation off the app as soon as possible (before the app kicks the scammer off due to user reports or AI detector tools)
  • You notice personality and tone inconsistency, which is common when multiple scammers converse through a single profile
  • They refuse to do video or phone calls
  • They send messages at odd times of the day
  • They ask you for money
  • They avoid answering personal questions or provide vague responses
  • They exhibit poor grammar or unusual phrasing despite claiming to be native speakers
  • They send links or attachments without context or explanation

As recently as a few years ago, one person could run about 20 dating scams per day. With automated chatbots, it’s now possible to work hundreds of thousands of scams at once.

>> Related: Is ChatGPT Safe?

Conclusion: What you should never share with chatbots

Most of the time, chatbots are legitimate and as safe as any other apps or websites. Security measures like encryption, data redaction, and multi-factor authentication keep information secure on chatbots. If you accidentally give a legitimate chatbot your Social Security number or date of birth, redaction features can automatically erase the data from the transcript.
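
As an illustration of how such redaction might work under the hood, here is a minimal, hypothetical Python sketch that masks common PII patterns in a transcript before it is stored. The regular expressions are simplistic assumptions for demonstration; production systems rely on trained PII detectors that recognize far more formats.

```python
import re

# Simplistic, illustrative patterns; real systems use trained PII detectors.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(transcript: str) -> str:
    """Replace recognizable PII in a chat transcript with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[REDACTED {label}]", transcript)
    return transcript

print(redact("My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."))
# -> My SSN is [REDACTED SSN] and my card is [REDACTED CARD].
```

Even so, treat redaction as a safety net rather than a guarantee.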

That said, tread carefully when sending personally identifiable information (PII) and other sensitive data. Sometimes sending PII is unavoidable, since interacting with a chatbot can be similar to logging into an account and submitting the information yourself. Here are some personal data points you should never send via chatbot:

  • Social Security number
  • Credit card numbers
  • Bank information or account numbers
  • Medical information

At a minimum, do not overshare. Refrain from volunteering information beyond what is required; for instance, don’t give your complete address if the chatbot asks only for your ZIP code. Think carefully before sending any personal information online: you would never need to share your Social Security number to check on a package delivery or get your bank balance.

FAQs

Are AI chatbots safe?

Some AI chat programs are renowned for their security features. Microsoft’s Azure Bot Service, for example, is built on Microsoft’s secure cloud architecture and includes features like identity management, data encryption, and industry-standard compliance.

How do you use AI chatbots safely?

Be aware of unknown or random requests for your payment information or personal details via chatbots. If it's a real company, they will not ask you sensitive questions via chat. Always keep your security software and browsers updated. Be vigilant about suspicious chatbot messages and report any malicious activity.

How can you tell if someone is using a chatbot?

While bots can mimic humans, they are always subject to their programming, and reviewing their posts may reveal patterns in that programming. Bots are also created for a purpose, which can give them tunnel vision: they may seem obsessed with a particular topic or repeat the same link too often.

Are my chatbot conversations private?

Not every chatbot conversation is private. It depends on who made the chatbot and how serious they are about security. Before you share anything personal, make sure your chatbot has your back!

What are the real dangers of chatbots?

If a chatbot is trained on inaccurate or misleading information, it will spread that misinformation to anyone who interacts with it. Privacy is another concern: chatbots can collect and store large amounts of personal information from users, which can be vulnerable to hacking or mishandling.

Does ChatGPT sell your data?

So naturally, you might assume that OpenAI has found a way to sell or monetize your data. Luckily, that's not the case. According to an OpenAI support page, your ChatGPT conversations aren't shared for marketing purposes.

Why not use a chatbot?

Chatbots have limited responses, so they're not often able to answer multi-part questions or questions that require decisions. This often means your customers are left without a solution, and have to go through more steps to contact your support team.

Are chatbots a security risk?

API vulnerabilities present another significant security risk for chatbots, particularly when these interfaces are used to share data with other systems and applications. Exploiting API vulnerabilities can give attackers unauthorized access to sensitive information such as customer data, passwords, and more.

Is ChatGPT safe?

Malicious actors can use ChatGPT to gather information for harmful purposes. Since the chatbot has been trained on large volumes of data, it knows a great deal of information that could be used for harm if placed in the wrong hands.

How can you tell a bot from a real person?

Bots often exhibit behaviors that give them away. Watch for generic or repetitive responses, unnatural typing speed, irrelevant or meaningless replies, and an inability to engage in meaningful conversation.

How do I know if a text is from a chatbot?

Look for creativity and originality. Human writers often inject their work with a personal voice reflecting their unique perspective, producing fresh and engaging content. Chatbot-generated text may lack these qualities, appearing generic or formulaic because of its reliance on predefined algorithms and patterns.

What is the dark side of chatbots?

Identity Theft: AI chatbots can gather personal information from conversations, and this data can be used for identity theft. Fraudsters may use the information to impersonate individuals, open fraudulent accounts, or commit other types of financial fraud.

What shouldn't you use ChatGPT for? ›

Financial Information

Just like you wouldn't leave your banking or social security number on a public forum online, you shouldn't be entering them into ChatGPT either. It's fine to ask the platform for finance tips, to help you budget, or even tax guidance, but never put in your sensitive financial information.

Do chatbots keep your chat history?

It depends on the provider. You should always know how long a chatbot stores user data and delete it when it’s no longer necessary. OpenAI has a 30-day retention policy, but other companies may keep your data for longer or shorter periods. Check any chatbot provider’s terms of service to understand its data storage policies.

Is ChatGPT AI safe?

ChatGPT is generally considered safe to use. It is a large language model trained on a massive dataset of text and code, which allows it to generate accurate and relevant text. However, there are some potential risks associated with using it.

Are AI apps safe?

They can access your information: Some AI apps might need things like your photos or messages to work properly. It's important to make sure only the app can access this information and no one else. They can be tricked: Like any technology, AI apps can be fooled by bad people who want to steal information or cause harm.

Can people see your AI chats?

Under normal circumstances, chats with AI characters are private unless shared publicly. Only the user and the character can see these chats; creators of the AI character can’t see private chats unless they’re made public.
