Web Portal Leaves Kids' Chats with AI Toy Open to Anyone with Gmail Account
By Andy Greenberg, wired.com
Just about anyone with a Gmail account could access Bondu chat transcripts.
Credit: Bondu
Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.
So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy.
Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations: the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves.
In total, Margolis and Thacker discovered that the data Bondu left unprotected-- accessible to anyone who logged in to the company's public-facing web console with their Google username-- included children's names, birth dates, family member names, "objectives" for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, essentially all conversations the toys had engaged in other than those that had been manually deleted by parents or staff.
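WIRED's account doesn't specify the underlying bug, but the behavior the researchers describe matches a classic web flaw: a backend that authenticates a visitor (any valid Google sign-in) without ever authorizing them (checking that this particular account belongs to a parent or employee). A minimal sketch of that flaw class-- not Bondu's actual code, with all endpoint and helper names invented:

```python
# Hypothetical sketch of authentication mistaken for authorization.
# Not Bondu's actual code; endpoint and helper names are invented.
from flask import Flask, request, abort, jsonify
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

app = Flask(__name__)
CLIENT_ID = "example.apps.googleusercontent.com"  # placeholder

@app.route("/transcripts")
def transcripts():
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    try:
        # This proves only that the caller owns *some* Google account.
        claims = id_token.verify_oauth2_token(
            token, google_requests.Request(), CLIENT_ID)
    except ValueError:
        abort(401)
    # BUG: nothing checks that claims["email"] belongs to a parent or
    # staff member, so any signed-in Google user reaches the data.
    return jsonify(load_all_transcripts())  # hypothetical data helper

# The fix is an explicit authorization step after sign-in, e.g.:
#     if claims["email"] not in AUTHORIZED_USERS: abort(403)
```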
"It felt pretty intrusive and really weird to know these things," Thacker says of the children's private chats and documented preferences that he saw. "Being able to see all these conversations was a massive violation of children's privacy."
When Thacker and Margolis alerted Bondu to its glaring data exposure, they say, the company acted to take down the console in a matter of minutes before relaunching the portal the next day with proper authentication measures. When WIRED reached out to the company, Bondu CEO Fateen Anam Rafid wrote in a statement that security fixes for the problem "were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users." He added that Bondu "found no evidence of access beyond the researchers involved."
(The researchers note that they didn't download or keep any copies of the sensitive data they accessed via Bondu's console, other than a few screenshots and a screen-recording video shared with WIRED to confirm their findings.)
"We take user privacy seriously and are committed to protecting user data," Anam Rafid added in his statement. "We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections," as well as hiring a security firm to validate its investigation and monitor its systems in the future.
While Bondu's near-total lack of security around the children's data that it stored may be fixed, the researchers argue that what they saw represents a larger warning about the dangers of AI-enabled chat toys for kids. Their glimpse of Bondu's backend showed how detailed the information is that it stored on children, keeping histories of every chat to better inform the toy's next conversation with its owner.
(Bondu thankfully didn't store audio of those conversations, auto-deleting them after a short time and keeping only written transcripts.)
Even now that the data is secured, Margolis and Thacker argue that it raises questions about how many people inside companies that make AI toys have access to the data they collect, how their access is monitored, and how well their credentials are protected. "There are cascading privacy implications from this," says Margolis. "All it takes is one employee to have a bad password, and then we're back to the same place we started, where it's all exposed to the public internet."
Margolis adds that this sort of sensitive information about a child's thoughts and feelings could be used for horrific forms of child abuse or manipulation. "To be blunt, this is a kidnapper's dream," he says. "We're talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody."
Margolis and Thacker point out that, beyond its accidental data exposure, Bondu also-- based on what they saw inside its admin console-- appears to use Google's Gemini and OpenAI's GPT-5, and as a result may share information about kids' conversations with those companies. Bondu's Anam Rafid responded to that point in an email, stating that the company does use "third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing." But he adds that the company takes precautions to "minimize what's sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren't used to train their models."
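Anam Rafid's statement doesn't say what "minimize what's sent" means in practice, but a common approach is a redaction pass that scrubs identifying details from a transcript before it leaves for a third-party model. A hypothetical sketch, with illustrative patterns only:

```python
# Hypothetical redaction pass of the kind "minimize what's sent"
# suggests. The patterns are illustrative, not Bondu's pipeline.
import re

def minimize(transcript: str, child_name: str) -> str:
    text = transcript.replace(child_name, "[CHILD]")
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)  # birthdays
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)     # addresses
    return text

print(minimize("Maya said her birthday is 04/12/2019.", "Maya"))
# [CHILD] said her birthday is [DATE].
```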
The two researchers also warn that part of the risk of AI toy companies may be that they're more likely to use AI in the coding of their products, tools, and web infrastructure. They say they suspect that the unsecured Bondu console they discovered was itself "vibe-coded"-- created with generative AI programming tools that often lead to security flaws. Bondu didn't respond to WIRED's question about whether the console was programmed with AI tools.
Warnings about the risks of AI toys for kids have grown in recent months but have largely focused on the threat that a toy's conversations will raise inappropriate topics or even lead them to dangerous behavior or self-harm. NBC News, for instance, reported in December that AI toys its reporters chatted with offered detailed explanations of sexual terms, tips about how to sharpen knives, and even seemed to echo Chinese government propaganda, stating for example that Taiwan is a part of China.
Bondu, by contrast, appears to have at least attempted to build safeguards into the AI chatbot it gives children access to. The company even offers a $500 bounty for reports of "an inappropriate response" from the toy. "We've had this program for over a year, and no one has been able to make it say anything inappropriate," a line on the company's website reads.
Yet at the same time, Thacker and Margolis found, Bondu was leaving all of its users' sensitive data entirely exposed. "This is a perfect conflation of safety with security," says Thacker. "Does 'AI safety' even matter when all the data is exposed?"
Thacker says that prior to looking into Bondu's security, he'd considered giving AI-enabled toys to his own kids, just as his neighbor had. Seeing Bondu's data exposure firsthand changed his mind.
"Do I really want this in my house? No, I don't," he says. "It's kind of just a privacy nightmare."
This story originally appeared on wired.com.
The Single Sign-on Trap: How One Click Snitches on You to 10,000 Sites
By Kim Komando
Here's how clicking Sign in with Google or Sign in with Facebook lets tech giants track every site you visit and everything you do there.
Credit: ChatGPT
You know that Sign in with Google or Facebook button? The one you click because who wants to set up another account? Yeah, I get it. One click, you're in. Super convenient.
But here's what they don't want you to know.
The second you click that button, the site gets your name, email and profile photo. Sometimes your phone number and birthday, too. And Google or Facebook? Let us count the ways.
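What does the site actually receive? With "Sign in with Google," it's the claims inside an OpenID Connect ID token-- a signed JWT the site decodes. A short sketch (the token itself arrives through the OAuth flow; the claim values below are examples):

```python
# Decode a JWT payload to see what a "Sign in with ..." button hands
# the site. Decoding only; real code must also verify the signature.
import base64, json

def id_token_claims(jwt: str) -> dict:
    payload = jwt.split(".")[1]             # header.payload.signature
    payload += "=" * (-len(payload) % 4)    # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Typical claims in a Google ID token:
#   "email":   "you@gmail.com"
#   "name":    "Your Name"
#   "picture": "https://lh3.googleusercontent.com/..."
#   "sub":     a stable Google user ID-- the same identifier every
#              time, which is what makes cross-site linking easy.
```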
What they're tracking
Tech folks call this single sign-on, or SSO. Sounds harmless, right? It's not. You're not just logging in-- you're handing over a permission slip to track you on tens of thousands of sites.
- Shopping sites: They know you browsed engagement rings, visited a jeweler and searched "how to propose." And look, you bought plus-size clothes, baby gear or anti-anxiety meds.
- News sites: They track articles you read. Political leanings? Check. Financial concerns? Noted. Job hunting? That's why you're reading career advice at 2 p.m.
- Dating apps: They know you're on Tinder, Hinge or Match, along with your swipes.
- Health sites: They see you researching diabetes, fertility clinics or therapists.
Meta admitted in 2024 that it uses SSO data to "improve ad targeting and user experience." Translation: They're selling everything about you to who knows who.
The profile they're building
After a few months, they have:
- Your shopping patterns: what you buy, when, how much you spend
- Your health concerns: conditions you're researching, medications you're comparing
- Your relationship status: dating apps, wedding planning sites, divorce lawyers
- Your political views: news sites, petition sites, donation pages
- Your financial situation: loan comparison sites, credit card apps
That's why you google knee pain once and suddenly every site you visit shows you knee brace ads for 6 months.
How to stop it
- Stop using SSO: Create unique accounts for each site.
- Check what's already connected: For Google, go to myaccount.google.com/permissions. For Facebook, head to Settings & privacy > Settings > Apps and websites. Revoke access to anything you don't use all the time.
- Use email aliases: Apple folks, check out Hide My Email. It creates unique email addresses for each site that forward to your real inbox. Companies can't connect them back to you. Gmail users, add a plus sign and any text you want between your username and the @ symbol (see the sketch below). It all comes to your inbox, but companies still see your username.
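A tiny helper makes the Gmail trick concrete. Note the caveat above: because the alias still contains your real username, it helps you trace who leaked or sold your address rather than hide your identity. The base address here is a placeholder:

```python
# Build a per-site Gmail plus-address so you can tell which site
# leaked or sold your email. Placeholder address; any Gmail works.
def plus_alias(base: str, site: str) -> str:
    user, domain = base.split("@")
    tag = "".join(c for c in site.lower() if c.isalnum())
    return f"{user}+{tag}@{domain}"

print(plus_alias("you@gmail.com", "GasBuddy"))  # you+gasbuddy@gmail.com
```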
Now, if you're looking for true private email, hit this link for a 7-day free trial of StartMail.
The Grocery Store is Watching You: How 'Smart Shelves' Track your Face and Every Move
By Kim Komando
The Current unmasks the surveillance pricing and hidden cameras taking over Kroger and Walmart aisles. Discover the smart shelf tech that uses facial recognition to detect your age and gender while tracking exactly how long you hesitate in front of a product. Learn why the FTC is investigating these digital tags and find out how to stop the data harvest.
Credit: ChatGPT
Smart shelves
Kroger rolled out EDGE in 500 stores-- expanding to 2,600 this year. EDGE is short for Enhanced Display for Grocery Environment, which means AI tech and cameras on shelves. Walmart's doing the same thing. Devices are in 60 stores now, ramping up to 2,300.
Built with Microsoft, cameras detect your age and gender. Woman in her 30s? Here's a baby formula coupon. College-age guy? Energy drinks on sale. Older male? Sensitive toothpaste is buy one/get one free.
Digital tags can change prices on the spot. Snowstorm coming? Bread and milk jumped $2. Store's dead early in the morning? Here's a deal. Lunchtime rush? Sandwiches cost more.
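Neither retailer publishes its pricing logic, so take this as purely illustrative: the scenarios above boil down to simple rules keyed to demand signals, something like the following hypothetical sketch:

```python
# Hypothetical ESL price rules implied by the scenarios above.
# No retailer publishes its actual logic; everything here is made up.
from datetime import datetime

def shelf_price(base: float, category: str, now: datetime,
                storm_warning: bool, shoppers_in_store: int) -> float:
    price = base
    if storm_warning and category in {"bread", "milk"}:
        price += 2.00                       # pre-storm demand spike
    if shoppers_in_store < 20:              # dead store: flash deal
        price *= 0.90
    if category == "sandwich" and 11 <= now.hour < 14:
        price *= 1.15                       # lunchtime rush markup
    return round(price, 2)
```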
Kroger and Walmart both say they'd never use this for surge pricing. Yeah, right.
They're timing you
Cameras, WiFi and sensors track which aisles you walk down, products you pick up, how long you hesitate and when you walk away empty-handed.
They know you stood in front of the pasta sauce for 23 seconds. They know you picked up the organic brand, looked at the price and grabbed the store brand instead.
You on sale
Kroger sells your shopping data. Your name, address, phone number, purchase history, location data, health information-- hello, hemorrhoid cream-- along with your age, marital status, gender and race.
Americans are spending more of their income on food than at any time in the last 30 years. Grocery stores looked at that and thought, "How can we squeeze more?"
Regulators are looking into how grocery stores use AI and electronic shelf labels (ESLs) to update prices between the time a customer picks up an item and when they reach the checkout.
Here's how to spot the cameras:
- Digital price tags, not paper stickers. Those are ESLs.
- Black domes at eye level on shelves and the end of aisles.
- Digital screens showing ads that change when you approach.
Fight back
- Pay cash. Harder to link your purchases to a profile.
- Skip the loyalty app. Ask the cashier for a store number. Most have one. Or try your area code + 867-5309. (Thanks, Tommy Tutone.) Works more often than you'd think.
- Turn off Bluetooth. Your phone pings even when you're not connected.
- Disable auto-join guest WiFi: In your settings, make sure auto-join is turned off.
- Wear a hat and sunglasses: Yes, really. Makes it harder for them to scrape your age and gender.
AI Mode to Enable 'Personal Intelligence'
By Ryan Whitwam for Ars Technica
Personal Intelligence is optional and rolling out first to AI Pro and AI Ultra subscribers.
Credit: Google
Google believes AI is the future of search, and it's not shy about saying it. After adding account-level personalization to Gemini earlier this month, it's now updating AI Mode with so-called "Personal Intelligence." According to Google, this makes the bot's answers more useful because they are tailored to your personal context.
Starting today, the feature is rolling out to all users who subscribe to Google AI Pro or AI Ultra. However, it will be a Labs feature that needs to be explicitly enabled-- subscribers will be prompted to do this. Google tends to expand access to new AI features to free accounts later on, so free users will most likely get access to Personal Intelligence in the future. Whenever this option does land on your account, it's entirely optional and can be disabled at any time.
If you decide to integrate your data with AI Mode, the search bot will be able to scan your Gmail and Google Photos. That's less extensive than the Gemini app version, which supports Gmail, Photos, Search, and YouTube history. Gmail will probably be the biggest contributor to AI Mode-- a great many life events involve confirmation emails. Traditional search results when you are logged in are adjusted based on your usage history, but this goes a step further.
If you're going to use AI Mode to find information, Personal Intelligence could actually be quite helpful. When you connect data from other Google apps, Google's custom Gemini search model will instantly know about your preferences and background-- that's the kind of information you'd otherwise have to include in your search query to get the best output. With Personal Intelligence, AI Mode can just pull those details from your email or photos.
For example, as in the video below, you could ask about clothing options for an upcoming trip. Instead of telling the robot when and where you're going in the prompt, it can get that information from your email confirmation. When AI Mode uses your personal context in a response, it will cite it in-line the same way it does for websites.
Personal Intelligence in AI Mode.
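Google hasn't said how AI Mode assembles that context under the hood, but the behavior it describes is the familiar retrieve-then-prompt pattern: fetch relevant personal records, prepend them to the query, and have the model cite them. A hypothetical sketch-- search_gmail() and ask_model() are invented stand-ins, not Google APIs:

```python
# Illustrative retrieve-then-prompt pattern, not Google's actual
# implementation. search_gmail() and ask_model() are hypothetical.
def answer_with_personal_context(query: str) -> str:
    docs = search_gmail(query, max_results=3)   # e.g. flight confirmations
    context = "\n".join(f"[{i}] {d}" for i, d in enumerate(docs, 1))
    prompt = (
        "Answer using the user's personal context below where relevant, "
        "and cite sources in-line as [n].\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return ask_model(prompt)

# answer_with_personal_context("What should I pack for my trip?")
```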
Perfectly imperfect
Google says, as it often does, that AI is not perfect. AI Mode with Personal Intelligence can make mistakes, drawing the wrong conclusions from the data it mines from your account. In that case, Google suggests using a follow-up prompt to correct it and get more accurate information. It's similar to the way you might refine a traditional Google search when the links aren't to your liking.
AI Mode and Google AI are generally supposed to improve over time to reduce such failures. The way you use the service contributes to that, but Google says the model is not being trained directly on your email or photos, even if you connect them to AI Mode. Instead, Google uses your prompts and the resulting output to train its AI models. Access to Gmail and Photos can be revoked at any time, but it sounds like there won't be a simple way to toggle off Personal Intelligence for a single query, which is possible in Gemini.
The Mobile Data Heist: How 5 Popular Apps Sell your Driving Habits to Allstate
By Kim Komando
Your favorite safe driving and weather apps are doubling as full-time trackers for the insurance industry. Kim Komando exposes how Life360, Fuel Rewards, Routely, GasBuddy and MyRadar monetize your every turn to jack up your monthly premiums. Discover how to shut down the surveillance and pull your free LexisNexis report.
Credit: ChatGPT
Larry Johnson in Atlanta installed Life360 to keep tabs on his teenage kids. Good parenting, right?
Then he got quoted insane car insurance rates. When he pushed back, he learned the truth. That family safety app had been tracking every turn, every hard brake, every mile his family drove, and it sold all that information to insurance companies.
Larry had no clue. Neither do the 45 million other Americans getting spied on right now.
The 5 apps - go check your phone
- Life360: The family tracker. Selling your driving data to Arity, which is owned by Allstate. Yeah, that Allstate.
- GasBuddy: That feature rating your fuel efficiency? It's powered by Arity. Surprise.
- MyRadar: Innocent little weather app. Same tracking garbage hidden inside.
- Fuel Rewards: Saving you 3 cents a gallon while selling you out.
- Routely: Marketed to gig workers. Monetizing your every mile.
Insurance companies buy driving scores based on your speed, braking and routes. Then they use them to raise your rates. You never agreed to this. You never even knew.
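Arity's scoring formula is proprietary, but phone-based telematics scores are generally built from sensor-derived event counts. A hypothetical sketch of the shape of the data and the math-- the fields and weights here are invented for illustration:

```python
# Hypothetical telematics trip record and "driving score". Arity's
# real formula is proprietary; these fields and weights are made up.
from dataclasses import dataclass

@dataclass
class TripSummary:
    miles: float
    hard_brakes: int           # decelerations past a g-force threshold
    rapid_accels: int
    phone_handling_secs: int   # screen use while the car is moving
    night_miles: float         # driving during high-risk hours

def driving_score(t: TripSummary) -> int:
    penalty = (4 * t.hard_brakes
               + 3 * t.rapid_accels
               + t.phone_handling_secs // 30
               + int(t.night_miles))
    return max(0, 100 - penalty)

print(driving_score(TripSummary(12.4, 3, 1, 95, 0.0)))  # 82
```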
Shut them down
- iPhone: Settings > Privacy & Security > Location Services. Find the offenders. Change them to Never or While using. Tap each one and toggle OFF Precise location.
- Android: Settings > Location > App permissions > [App Name]. Choose Don't allow or Allow only while using the app.
Or delete them. GasBuddy isn't worth your insurance jumping $300 a year.
See what they know about you
You can request your driving report like you pull a credit report. It's free once a year. You might be shocked at what's already in your file.
LexisNexis is the big one. Insurance companies use it constantly to check your history before giving you a quote.
- Go to consumer.risk.lexisnexis.com.
- Click the red rectangle marked Request a Consumer Disclosure Report.
- Fill out the form with your name, address, date of birth, SSN and driver's license number. Yes, you need to give them all that info to confirm it's you. They have it already.
- They'll mail you instructions to access your report online.
Rather talk to a human? Call 1-888-497-0011.
Your report will show what driving data they have on file, any claims history and who they've shared it with. If something's wrong, you have the legal right to dispute it under the Fair Credit Reporting Act. Same rules as your credit report.
These apps promised to keep your family safe or save you a few bucks on gas. Instead, they've been selling your every move to the highest bidder.
Check your phone. Pull your report. Delete the snitches.
These apps shared your data without asking.
FTC Bans GM from Selling Drivers' Location Data for 5 Years
By Sergiu Gatlan for BleepingComputer
The US Federal Trade Commission has finalized an order with General Motors (GM) and its subsidiary, OnStar, settling charges that they collected and sold the location and driving data of millions of drivers without consent.
General Motors owns the GMC, Cadillac, Chevrolet, and Buick brands and produces over 6.1 million vehicles each year. OnStar, GM's subsidiary, provides digital in-car services such as navigation, communications, security, emergency services, and remote diagnostics.
As the FTC claimed in its January 2025 complaint, GM collected precise geolocation data and detailed driving behavior information from millions of vehicles-- without customers' consent-- every 3 seconds through OnStar's now-discontinued "Smart Driver" feature, which was marketed as a driving-habits self-assessment tool rather than a data-collection mechanism.
This data was then sold to third parties, including consumer reporting agencies, which then provided it to insurance companies, leading to higher insurance rates or denial of coverage.
The finalized order approved by the commission bans GM from sharing consumers' geolocation and driver behavior data with consumer reporting agencies for 5 years.
Also, for the full 20-year duration of the order, GM must obtain express consent from consumers before collecting, using, or sharing their connected vehicle data, with exceptions for emergency services.
The company must allow US consumers to request copies of their data and seek its deletion, provide vehicle owners the ability to disable precise geolocation data collection, and enable them to opt out of location and driving behavior data collection-- with some limited exceptions.
"This fencing-in relief is appropriate given GM's egregious betrayal of consumers' trust," the FTC said on Wednesday.
"The Federal Trade Commission has formally approved the agreement reached last year with General Motors to address concerns," a GM spokesperson told BleepingComputer, noting that "it's important to note there is no monetary payment."
"As vehicle connectivity becomes increasingly integral to the driving experience, GM remains committed to protecting customer privacy, maintaining trust, and ensuring customers have a clear understanding of our practices."
One year ago, in January 2025, Texas Attorney General Ken Paxton also filed a lawsuit against car insurance firm Allstate for unlawfully collecting and selling driving data from over 45 million Americans.
The tracking activity was carried out by adding an SDK developed by Allstate subsidiary Arity to popular apps such as Life360, GasBuddy, Fuel Rewards, and Routely, without drivers' consent.
The lawsuit also names several carmakers, including Toyota, Lexus, Mazda, Chrysler, Jeep, Dodge, Fiat, Maserati, and Ram, which also allegedly collected and sold data directly to Allstate and Arity.
Update January 15, 10:19 EST: Added GM statement.
Your Smart TV is Spying on You More than Alexa ever Could
By Kim Komando
Your living room screen takes a screenshot every few seconds to build a profile on you. The Current exposes the ACR trackers in every major smart TV and shows you how to shut them down.
Credit: ChatGPT
I bet you never thought that while you're watching TV, it's watching you right back. The surveillance never stops; that's one thing you need to know. But I always have your back.
Remember when TVs just showed you stuff? Those days are gone.
Why your TV was so cheap
That beautiful 65-inch 4K you got on Black Friday for $400? It should've cost $1,200. The reason it was so cheap: You're not the customer. You're the product.
Every smart TV sold today has ACR-- automatic content recognition-- built in. It screenshots your screen every few seconds and matches those shots against a database. Cable, streaming, DVDs, gaming, even what's on your laptop via HDMI. Then it sells that data to advertisers and data brokers.
What they know about you
They're tracking every show and movie you watch, when and how long. Your Netflix and Hulu habits, even though you pay for those. What games you play. Which commercials you skip.
Combine that with your IP address and purchase history, and they've built a profile. They know you watch true crime at night, cartoons in the morning and fall asleep to HGTV.
Who's buying?
- Advertisers: Watch a Chevy commercial? You'll see Chevy ads on your phone an hour later.
- Data brokers: Experian merges your TV habits with credit card purchases, then sells access. Another reason to use Incogni to remove your info from data brokers. If you're not in their databases, they cannot sell your info.
- Political campaigns: They know whether you watch Fox News or MS NOW-- formerly MSNBC-- and target you accordingly.
- Insurance companies: Some are using viewing data to assess "lifestyle risk."
Turn it off
Think this is illegal? In 2017, Vizio paid $2.2 million to the FTC for tracking 11 million TVs without consent. Here's the kicker: Samsung, LG, Sony and TCL still do the exact same thing. They buried the consent on page 47 of terms you didn't read.
Every brand hides these settings differently. Samsung calls them "Viewing Information Services." For LG, it's "Live Plus." Vizio buries them in "Reset & Admin."
I've got a free step-by-step guide for every major brand. It only takes 2 minutes once you know where to look.
Why this matters
"I don't care if they know I watch The Office reruns" misses the point. Your viewing habits reveal your income level, political leanings, health concerns and vulnerabilities. That profile gets sold, leaked or hacked. Unlike a credit card, you can't change your behavioral patterns.
California Residents have a New Way to Dodge Spam Calls and Texts
By Stephen Council for the SF Gate
A California flag flies on May 9, 2023, in San Francisco. The state is the first in the nation to launch a portal for people to get their information deleted by data brokers.
Justin Sullivan/Getty Images
California residents have a new way to protect their identities online for years to come-- and it takes less than 5 minutes of work.
For years, the state has led the nation in a push for digital privacy, giving residents the right to ask companies to delete their stored personal data. But it's a tall task to contact individual companies or data brokers and request one-by-one deletions. So on Jan. 1, California launched a first-in-the-nation portal that allows residents to wipe away a large part of their digital footprints in one fell swoop.
The site is dubbed the "Delete Request and Opt-out Platform," or "DROP." It takes a few minutes of clicking, filling out basic forms and verifying contact information. Californians who complete the process will force data brokers to delete much of the information they've collected. And that will give those residents better protection from spam calls, targeted fraud and stalkers, Consumer Reports senior policy analyst Matt Schwartz told SFGATE.
What does 'DROP' actually do?
Under California law, consumers have the right to force a company to delete the personal information it's gathered about them. You could do this by emailing one company at a time--Snapchat, Facebook, Google, etc.-- but there are likely hundreds of companies with some piece of your data. And data brokers, who buy and sell this information, are not nearly as well-known as the social media giants.
Data brokers make up an unseen web behind our experience of the internet. As our data use creates information about our locations, spending habits and app signups, data brokers can buy that information and sell it to willing buyers. Over time, brokers build up troves of data that even end up feeding into advertising, background checks and more.
The new system streamlines the data deletion process for Californians. "DROP" puts the burden of work on the data brokers to comply, rather than on consumers to repeatedly clean up their digital footprints.
Starting on Aug. 1, data brokers will begin processing the deletion requests. They'll look for a match in their records to the information each person provides, and then delete things like browsing history, email addresses, phone numbers and geolocation data, plus inferred data like political views and living arrangements. Brokers will be allowed to keep data that's publicly available, like real estate ownership or criminal complaints. If they don't comply, they'll face state penalties.
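The regulations leave the matching mechanics to each broker, but the workflow described amounts to a normalize-and-match pass over the DROP list. A hypothetical sketch-- the field names and matching rule are illustrative, not the CPPA's spec:

```python
# Hypothetical broker-side DROP processing: match deletion requests
# against internal records and purge the hits. Illustrative only.
def normalize(s: str) -> str:
    return "".join(s.lower().split())

def apply_drop_list(requests: list[dict], records: list[dict]) -> list[dict]:
    targets = {(normalize(r["name"]), normalize(r["email"])) for r in requests}
    remaining = []
    for rec in records:
        if (normalize(rec["name"]), normalize(rec["email"])) in targets:
            continue                     # matched: delete this record
        remaining.append(rec)
    return remaining
```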
What's the benefit of getting data brokers to delete your data?
Beyond gaining better control over your data, the California Privacy Protection Agency is pushing two main reasons for participating in "DROP."
One is to cut down on spam and scam texts and calls, which have become more sophisticated in recent years. The second is for security-- the agency touts a chance to "decrease risk of identity theft, fraud, AI impersonations, or that your data is leaked or hacked." California has hundreds of data brokers in its registry. Deleting your data should mean you'll get fewer targeted ads and less personalized content online, the agency notes, but it's a precaution many will be willing to take.
Schwartz added that the information data brokers give for background checks-- which can be used to set loan terms, validate a tenant or hire a new employee-- can often be inaccurate, because there isn't as much accountability as for, say, credit agencies. As a result, deleting data may also mean fairer treatment for people going through such checks.
Though California is the only state with such a system so far, Schwartz said he's already been hearing chatter in states like Connecticut and Vermont about potentially following in the state's footsteps.
"People are definitely waiting to see how the regulations were written, because that was a multi-year process, but now that that's kind of all done, the system's been built, there's kind of a proof of concept," he said. "I think you might see more interest in this."
How to Disable ACR on your TV - Why it Makes Such a Big Difference
By Chris Bayer for ZDNET
Your smart TV comes with privacy risks. Here's how to avoid one of the biggest with just a few steps.
Adam Breeden/ZDNET
Did you know that whenever you turn on your smart TV, you invite an unseen guest to watch it with you?
These days, most mainstream TVs use automatic content recognition (ACR), a type of ad-tracking technology that collects data on everything you watch and sends it to a central database. Manufacturers then use this information to understand your viewing habits and deliver highly targeted ads.
What's the incentive behind this invasive technology? According to market research firm eMarketer, in 2022, advertisers spent an estimated $18.6 billion on smart TV ads, and those numbers are only going up.
To understand how ACR works, imagine a constant, real-time Shazam-like service running in the background while your TV is on. It identifies content displayed on your screen, including programs from cable TV boxes, streaming services, or gaming consoles. ACR does this by capturing continuous screenshots and cross-referencing them with a vast database of media content and advertisements.
According to The Markup, ACR can capture and identify up to 7,200 images per hour, or approximately 2 images every second. This extensive tracking offers money-making insights for marketers and content distributors because it can reveal connections between viewers' personal information and their preferred content. By "personal information," I mean email addresses, IP addresses-- and even your physical street address.
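That rate-- 7,200 frames an hour works out to roughly one every half second-- is feasible because each frame is reduced to a compact fingerprint and matched against a reference database, rather than stored as a full image. A minimal sketch of the idea using a simple "average hash"; production ACR systems use far more robust audio and video fingerprints:

```python
# Minimal illustration of frame fingerprinting, the core idea behind
# ACR. Real systems use much more robust audio/video fingerprints.
from PIL import Image  # pip install pillow

def frame_fingerprint(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8))  # tiny grayscale
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:                # 64-bit hash: each pixel vs. average
        bits = (bits << 1) | (p > avg)
    return bits

def same_content(a: int, b: int, max_distance: int = 5) -> bool:
    return bin(a ^ b).count("1") <= max_distance  # Hamming distance

# Matching a captured frame's hash against a database of hashes for
# known shows and ads is what identifies the content on your screen.
```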
By understanding what viewers watch and engage with, marketers can make decisions on content recommendations to create bespoke advertising placements. They can also track advertisements that lead to purchases.
But the most disturbing part is the potential for exploitation. In the wrong hands, sensitive information gathered through ACR could be exploited or misused, which may result in security risks or, at worst, identity theft.
Because ACR operates clandestinely in the background, many of us aren't even aware of its active presence each time we're enjoying our favorite shows. Opting out of ACR is complex and sometimes challenging: better protecting your privacy can take several dozen clicks through your TV's settings.
If you, like me, find this feature intrusive or unsettling, there's a way to disable this data collection on your smart TV. It might take some patience, but below is a how-to for five major brands demonstrating how to turn off ACR.
How to turn off ACR on a smart TV
For Samsung TVs:
- Press the Home button on your remote control.
- Navigate to the left to access the sidebar menu.
- In the sidebar menu, choose the Privacy Choices option.
- Select the Terms & Conditions, Privacy Policy option.
- Ensure that the checkbox for Viewing Information Services is unchecked. This will turn off ACR and any associated ad targeting.
- Select the OK option at the bottom of the screen to confirm your changes.
For an LG TV:
- Press the Home button on your remote control to access the home screen.
- Press the Settings button on your remote.
- In the settings side menu, select the Settings option.
- Navigate to and select the General option.
- In the General menu, choose System.
- Select Additional Settings.
- In Additional Settings, locate and toggle off the Live Plus option.
LG further allows you to limit ad tracking, which can be found in Additional Settings.
- In the Additional Settings menu, select Advertisement.
- Toggle on the Limit AD Tracking option.
You can also turn off home promotions and content recommendations:
- In the Additional Settings menu, select Home Settings.
- Uncheck the Home Promotion option.
- Uncheck the Content Recommendation option.
For a Sony TV:
- Press the Home button on your remote control to access the main menu.
- Navigate to and select Settings.
- Choose Initial Setup.
- Scroll down and select Samba Interactive TV.
- Select Disable to turn off Samba TV, which is Sony's ACR technology.
Sony also allows for enhanced privacy by disabling ad personalization:
- Go to Settings.
- Select About.
- Choose Ads.
- Turn off Ads Personalization.
As an extra step, you can entirely disable the Samba Services Manager, which is embedded in the firmware of certain Sony Bravia TVs as a third-party interactive app.
- Go to Settings.
- Select Apps.
- Select Samba Services Manager.
- Choose Clear Cache.
- Select Force Stop.
- Finally, select Disable.
If your Sony TV uses Android TV, you should also turn off data collection for Chromecast:
- Open the Google Home app on your smartphone.
- Tap the Menu icon.
- Select your TV from the list of devices.
Remember that while these steps will significantly reduce data collection, they may also limit some smart features of your TV. Also, it's a good idea to periodically check these settings to ensure they remain as you've set them. Especially after software updates, your revised settings may sometimes revert to their default state.
The driving force behind targeted advertisements on smart TVs is ACR technology, and its inclusion speaks volumes about manufacturers' focus on monetizing user data rather than prioritizing consumer interests.
For most of us, ACR offers few tangible benefits, while the real-time sharing of our viewing habits and preferences exposes us to potential privacy risks. By disabling ACR, you can help keep your data to yourself and enjoy viewing with some peace of mind.
These days, most mainstream TVs use automatic content recognition (ACR), a type of ad-tracking technology that collects data on everything you watch and sends it to a central database. Manufacturers then use this information to understand your viewing habits and deliver highly targeted ads.
What's the incentive behind this invasive technology? According to market research firm eMarketer, in 2022, advertisers spent an estimated $18.6 billion on smart TV ads, and those numbers are only going up.
To understand how ACR works, imagine a constant, real-time Shazam-like service running in the background while your TV is on. It identifies content displayed on your screen, including programs from cable TV boxes, streaming services, or gaming consoles. ACR does this by capturing continuous screenshots and cross-referencing them with a vast database of media content and advertisements.
According to The Markup, ACR can capture and identify up to 7,200 images per hour, or approximately 2 images every second. This extensive tracking offers money-making insights for marketers and content distributors because it can reveal connections between viewers' personal information and their preferred content. By "personal information," I mean email addresses, IP addresses-- and even your physical street address.
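To make the matching idea concrete, here's a minimal, purely illustrative Python sketch of ACR-style fingerprinting-- not any vendor's actual code. It reduces a frame to a 64-bit "average hash" and looks it up in a hypothetical database of known content; the frame data, titles, and match threshold are all invented for the example.

```python
# Illustrative sketch only -- not any vendor's actual ACR implementation.
# Idea: reduce each captured frame to a tiny fingerprint ("average hash"),
# then look it up in a database of fingerprints for known shows and ads.
# The article's figure of ~2 captures per second is the assumed sampling rate.

def average_hash(frame):
    """64-bit fingerprint of an 8x8 grayscale frame: each bit records
    whether a pixel is brighter than the frame's mean brightness."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical reference database: fingerprint -> content title.
KNOWN_CONTENT = {}

def register(title, frame):
    KNOWN_CONTENT[average_hash(frame)] = title

def identify(frame, max_distance=5):
    """Match a captured frame, tolerating small pixel differences
    (compression artifacts, channel logos, and the like)."""
    h = average_hash(frame)
    for known_hash, title in KNOWN_CONTENT.items():
        if hamming(h, known_hash) <= max_distance:
            return title
    return None  # unrecognized content

# Toy 8x8 "frame" of brightness values (0-255) standing in for a screenshot.
ad_frame = [[200 if (x + y) % 2 else 30 for x in range(8)] for y in range(8)]
register("Car Ad, 30-second spot", ad_frame)

# A slightly noisy capture of the same frame still matches.
captured = [row[:] for row in ad_frame]
captured[0][0] += 12
print(identify(captured))  # -> Car Ad, 30-second spot
```

Real systems use far more robust fingerprints and match against millions of reference clips, but the pipeline-- capture, fingerprint, look up, log-- has the same basic shape.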
By understanding what viewers watch and engage with, marketers can tailor content recommendations, create bespoke advertising placements, and track which advertisements lead to purchases.
But the most disturbing part is the potential for exploitation. In the wrong hands, sensitive information gathered through ACR could be misused, creating security risks or, at worst, enabling identity theft.
Because ACR operates clandestinely in the background, many of us aren't even aware it's active while we're enjoying our favorite shows. Opting out is possible but rarely straightforward: shutting it off can take several dozen clicks through your TV's settings.
If you, like me, find this feature intrusive or unsettling, you can disable it. It takes some patience, but below are step-by-step instructions for turning off ACR on 5 major TV brands.
How to turn off ACR on a smart TV
For Samsung TVs:
- Press the Home button on your remote control.
- Navigate to the left to access the sidebar menu.
- In the sidebar menu, choose the Privacy Choices option.
- Select the Terms & Conditions, Privacy Policy option.
- Ensure that the checkbox for Viewing Information Services is unchecked. This will turn off ACR and any associated ad targeting.
- Select the OK option at the bottom of the screen to confirm your changes.
For an LG TV:
- Press the Home button on your remote control to access the home screen.
- Press the Settings button on your remote.
- In the settings side menu, select the Settings option.
- Navigate to and select the General option.
- In the General menu, choose System.
- Select Additional Settings.
- In Additional Settings, locate and toggle off the Live Plus option.
LG further allows you to limit ad tracking, which can be found in Additional Settings.
- In the Additional Settings menu, select Advertisement.
- Toggle on the Limit AD Tracking option.
You can also turn off home promotions and content recommendations:
- In the Additional Settings menu, select Home Settings.
- Uncheck the Home Promotion option.
- Uncheck the Content Recommendation option.
For a Sony TV:
- Press the Home button on your remote control to access the main menu.
- Navigate to and select Settings.
- Choose Initial Setup.
- Scroll down and select Samba Interactive TV.
- Select Disable to turn off Samba TV, the 3rd-party ACR technology used on Sony TVs.
Sony also allows for enhanced privacy by disabling ad personalization:
- Go to Settings.
- Select About.
- Choose Ads.
- Turn off Ads Personalization.
As an extra step, you can entirely disable the Samba Services Manager, which is embedded in the firmware of certain Sony Bravia TVs as a 3rd-party interactive app.
- Go to Settings.
- Select Apps.
- Select Samba Services Manager.
- Choose Clear Cache.
- Select Force Stop.
- Finally, select Disable.
If your Sony TV uses Android TV, you should also turn off data collection for Chromecast:
- Open the Google Home app on your smartphone.
- Tap the Menu icon.
- Select your TV from the list of devices.
- Tap the 3 dots in the upper right corner.
- Choose Settings.
- Turn off Send Chromecast device usage data and crash reports.
For a Hisense TV:
- Press the Home button on your remote control to access the main menu.
- Navigate to and select Settings.
- Choose System.
- Select Privacy.
- Look for an option called Smart TV Experience, Viewing Information Services, or something similar.
- Toggle this option off to disable ACR.
To disable personalized ads and opt out of content recommendations:
- In the Privacy menu, look for an option like Ad Tracking or Interest-Based Ads.
- Turn this option off.
- Look for options related to content recommendations or personalized content.
- Disable these features if you don't want the TV to suggest content based on your viewing habits.
For a TCL TV (and other Roku-powered TVs):
- Press the Home button on your TCL TV remote control.
- Navigate to and select Settings in the main menu.
- Scroll down and select the Privacy option.
- Look for Smart TV Experience and select it.
- Uncheck or toggle off the option labeled Use Info from TV Inputs.
For extra privacy, TCL TVs offer a few more options, all of which can be found in the Privacy menu:
- Select Advertising.
- Choose Limit ad tracking.
- Again, select Advertising.
- Uncheck Personalized ads.
- Now, still in the Privacy menu, select Microphone.
- Adjust Channel Microphone Access and Channel Permissions as desired.
Remember that while these steps will significantly reduce data collection, they may also limit some of your TV's smart features. It's also a good idea to check these settings periodically to make sure they remain as you set them-- software updates can sometimes revert them to their defaults.
The driving force behind targeted advertisements on smart TVs is ACR technology, and its inclusion speaks volumes about manufacturers' focus on monetizing user data rather than prioritizing consumer interests.
For most of us, ACR offers few tangible benefits, while the real-time sharing of our viewing habits and preferences exposes us to potential privacy risks. By disabling ACR, you can help keep your data to yourself and enjoy viewing with some peace of mind.
5 Signs Someone Might be Taking Advantage of Your Security Goodness
Not everyone in a security department acts in good faith, and bad faith actors will do what they can to outmaneuver those who do. Here's how to spot them.
Credit: zwolafasola / stock.adobe - Dark Reading
Wikipedia defines "good faith" as "a sincere intention to be fair, open, and honest, regardless of the outcome of the interaction." A person who acts in good faith must be truthful and forthcoming with information, even if it affects the end state of a negotiation or transaction. In other words, lying and withholding information, by their very nature, make an interaction anything but good faith.
For many security professionals, good faith is the only way they know how to operate. Unfortunately, the security profession, like any profession, has its share of bad faith actors, too. For example, consider a co-worker who is underperforming and introducing unnecessary risk into the security organization. In certain cases, underperformers will look to sabotage others rather than improve the quality of their work. Or, as another example, consider a bad faith actor who is out to gain competitive intelligence or other information that can be used for any number of purposes, including social engineering.
How can good faith security practitioners identify bad actors and understand when they're being taken advantage of? Here are 5 signs.
1. Information hoarding: Ever had a conversation, meeting, chat correspondence, or email exchange that feels more like an interrogation than a two-way exchange of information? This is a well-known trick of-- and a telltale sign of-- a bad faith actor. By the time most good faith actors catch on to the fact that the information flow is entirely 1-way, they've already given the bad faith actor a wealth of information.
2. My way or the highway: As a generally rational bunch, good faith actors understand that life is a give and take. But bad faith actors know only how to take, making it difficult to negotiate. Their only concern is what they want, and they will employ a variety of tactics to get it while offering little to nothing in return. Unfortunately, good faith actors often fall for this approach, as they would rather disengage and get back to constructive activities than wrestle in the mud with a bad faith actor.
3. False generosity: When bad faith actors seek to manipulate people or situations, they will sometimes make what appears to be a generous offer. In reality, these offers often come at a tremendous cost. How so? If a good faith actor accepts a bad faith actor's offer, it could be used against them in the future. The bad faith actor could also point to the good faith actor who accepted the offer as a way to convince others of their "good nature" and "generosity."
4. Bait and switch: Bait and switch is one of the oldest tricks in the book. As the Latin phrase so aptly states, caveat emptor: Buyer beware. Bad faith actors will often promise something they have absolutely no intention of delivering in order to extract what they want from good faith actors. Once they have what they were after, they go quiet or become evasive. The chances of a good faith actor ever receiving what they were promised are very slim.
5. Promoting a narrative: One way bad faith actors seek out, persuade, and take advantage of new victims is by surrounding themselves with a chorus of approvers. This "posse," of sorts, may consist of witting and/or unwitting accomplices. In some cases, accomplices were recruited via lies or manipulation. In other cases, the accomplices may have their own motivations for why they wish to partake in certain bad faith activities. In any event, bad faith actors will often promote a narrative to help convince new audiences they can be believed. This can be difficult to navigate and often catches good faith actors by surprise.
In the end, a heaping dose of awareness-- and even a bit of healthy cynicism-- of misleading behaviors can stop bad faith actors from taking advantage and achieving their goals.
© vocalbits.com