security meets culture
X's New Location Feature Sparks Controversy, but Is the Data Reliable?
Flock Safety Is a Privacy Nightmare and It's Getting Worse
How to Disable ACR on Your TV - Why It Makes Such a Big Difference
By Chris Bayer for ZDNET
Your smart TV comes with privacy risks. Here's how to avoid one of the biggest with just a few steps.
Adam Breeden/ZDNET
Did you know that whenever you turn on your smart TV, you invite an unseen guest to watch it with you?
These days, most mainstream TVs use automatic content recognition (ACR), a type of ad-tracking technology that collects data on everything you watch and sends it to a central database. Manufacturers then use this information to understand your viewing habits and deliver highly targeted ads.
What's the incentive behind this invasive technology? According to market research firm eMarketer, in 2022, advertisers spent an estimated $18.6 billion on smart TV ads, and those numbers are only going up.
To understand how ACR works, imagine a constant, real-time Shazam-like service running in the background while your TV is on. It identifies content displayed on your screen, including programs from cable TV boxes, streaming services, or gaming consoles. ACR does this by capturing continuous screenshots and cross-referencing them with a vast database of media content and advertisements.
According to The Markup, ACR can capture and identify up to 7,200 images per hour, or approximately two images every second. This extensive tracking offers money-making insights for marketers and content distributors because it can reveal connections between viewers' personal information and their preferred content. By "personal information," I mean email addresses, IP addresses, and even your physical street address.
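To make the mechanism concrete, here is a toy Python sketch of fingerprint-style matching: hash a captured frame, then look for the closest match in a database of known content. Everything here is invented for illustration; real ACR systems use far more robust perceptual and audio fingerprints and server-side databases, and nothing below reflects any vendor's actual implementation.

```python
# Toy ACR-style matching: hash a frame, compare to a mock fingerprint database.

def average_hash(pixels):
    """Collapse a grayscale frame (2D list of 0-255 values) to a bit string:
    each pixel becomes 1 if it is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

def identify(frame, database, max_distance=2):
    """Return the best-matching title, or None if nothing is close enough."""
    h = average_hash(frame)
    title, ref = min(database.items(), key=lambda kv: hamming(h, kv[1]))
    return title if hamming(h, ref) <= max_distance else None

# A captured 4x4 grayscale frame and a tiny made-up fingerprint database.
frame = [[200, 200, 10, 10],
         [200, 200, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
db = {"Ad: SodaCo 30s spot": average_hash(frame),
      "Show: Cooking Hour": "0" * 16}
print(identify(frame, db))  # matches the ad fingerprint exactly
```

Run at the rate The Markup describes, a loop like this fires roughly twice per second, which is why the resulting viewing log is so granular.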
By understanding what viewers watch and engage with, marketers can make decisions on content recommendations to create bespoke advertising placements. They can also track advertisements that lead to purchases.
But the most disturbing part is the potential for exploitation. In the wrong hands, sensitive information gathered through ACR could be exploited or misused, which may result in security risks or, at worst, identity theft.
Because ACR operates quietly in the background, many of us aren't even aware it's active while we're enjoying our favorite shows. Opting out is possible but rarely straightforward: protecting your privacy can take several dozen clicks through your TV's settings menus.
If you, like me, find this feature intrusive or unsettling, you can disable data collection on your smart TV. It takes some patience, but below are step-by-step instructions for five major brands.
How to turn off ACR on a smart TV
For Samsung TVs:
- Press the Home button on your remote control.
- Navigate to the left to access the sidebar menu.
- In the sidebar menu, choose the Privacy Choices option.
- Select the Terms & Conditions, Privacy Policy option.
- Ensure that the checkbox for Viewing Information Services is unchecked. This will turn off ACR and any associated ad targeting.
- Select the OK option at the bottom of the screen to confirm your changes.
For an LG TV:
- Press the Home button on your remote control to access the home screen.
- Press the Settings button on your remote.
- In the settings side menu, select the Settings option.
- Navigate to and select the General option.
- In the General menu, choose System.
- Select Additional Settings.
- In Additional Settings, locate and toggle off the Live Plus option.
LG further allows you to limit ad tracking, which can be found in Additional Settings.
- In the Additional Settings menu, select Advertisement.
- Toggle on the Limit AD Tracking option.
You can also turn off home promotions and content recommendations:
- In the Additional Settings menu, select Home Settings.
- Uncheck the Home Promotion option.
- Uncheck the Content Recommendation option.
For a Sony TV:
- Press the Home button on your remote control to access the main menu.
- Navigate to and select Settings.
- Choose Initial Setup.
- Scroll down and select Samba Interactive TV.
- Select Disable to turn off Samba TV, which is Sony's ACR technology.
Sony also allows for enhanced privacy by disabling ad personalization:
- Go to Settings.
- Select About.
- Choose Ads.
- Turn off Ads Personalization.
As an extra step, you can entirely disable the Samba Services Manager, which is embedded in the firmware of certain Sony Bravia TVs as a third-party interactive app.
- Go to Settings.
- Select Apps.
- Select Samba Services Manager.
- Choose Clear Cache.
- Select Force Stop.
- Finally, select Disable.
If your Sony TV uses Android TV, you should also turn off data collection for Chromecast:
- Open the Google Home app on your smartphone.
- Tap the Menu icon.
- Select your TV from the list of devices.
- Tap the three dots in the upper right corner.
- Choose Settings.
- Turn off Send Chromecast device usage data and crash reports.
For a Hisense TV:
- Press the Home button on your remote control to access the main menu.
- Navigate to and select Settings.
- Choose System.
- Select Privacy.
- Look for an option called Smart TV Experience, Viewing Information Services, or something similar.
- Toggle this option off to disable ACR.
To disable personalized ads and opt out of content recommendations:
- In the Privacy menu, look for an option like Ad Tracking or Interest-Based Ads.
- Turn this option off.
- Look for options related to content recommendations or personalized content.
- Disable these features if you don't want the TV to suggest content based on your viewing habits.
For a TCL TV (and other Roku-powered TVs):
- Press the Home button on your TCL TV remote control.
- Navigate to and select Settings in the main menu.
- Scroll down and select the Privacy option.
- Look for Smart TV Experience and select it.
- Uncheck or toggle off the option labeled Use Info from TV Inputs.
For extra privacy, TCL TVs offer a few more options, all of which can be found in the Privacy menu:
- Select Advertising.
- Choose Limit ad tracking.
- Again, select Advertising.
- Uncheck Personalized ads.
- Now, still in the Privacy menu, select Microphone.
- Adjust Channel Microphone Access and Channel Permissions as desired.
Remember that while these steps will significantly reduce data collection, they may also limit some of your TV's smart features. It's also a good idea to check these settings periodically, especially after software updates, since they can revert to their defaults.
The driving force behind targeted advertisements on smart TVs is ACR technology, and its inclusion speaks volumes about manufacturers' focus on monetizing user data rather than prioritizing consumer interests.
For most of us, ACR offers few tangible benefits, while the real-time sharing of our viewing habits and preferences exposes us to potential privacy risks. By disabling ACR, you can help keep your data to yourself and enjoy viewing with some peace of mind.
Google's AI Is Now Snooping on Your Emails - Here's How to Opt Out
By Lance Whitney for ZDNET
A new change quietly rolling out allows Google to access your private messages and attachments to train its AI models, likely without your knowledge. Opting out takes just moments.
ZDNET
Are you OK with Google snooping on your private emails to help train its AI without your permission? Nope, didn't think so. Apparently that's what the company has been doing.
In a Thursday blog post, security firm Malwarebytes detailed a new change now rolling out to Gmail users in which their private emails and attachments are being used to train the company's Gemini and other AI tools. Specifically, your emails could be analyzed to improve such features as Gmail's Smart Compose, Smart Reply, and predictive text. But it doesn't stop there. Google may also be snooping on your data in Chat, Meet, and Drive.
Enabled without your knowledge or permission?
The problem here is that these options could be enabled automatically without your knowledge or permission. I checked the three Gmail settings described by Malwarebytes. All three were turned on.
The setting for "Turn on smart features in Gmail, Chat, and Meet" allows Google to use your content in Gmail, Chat, and Meet to provide smart features. The setting for "Smart features in Google Workspace" grants Gemini access to your data, allowing it to summarize your content. The third setting for "Smart features in other Google products" taps into your data in other products to suggest everything from restaurants to event tickets.
Now, you may be fine with Google analyzing your private data if it means you can use all its cool AI tools to answer your questions, improve your content, and personalize your experience. That's not the point. Rather, the issue here is twofold.
First, Google seems to be opting you in to these features without your permission. Second, the company doesn't seem to have notified its users about this. As a Gmail user, I don't recall seeing any notifications about this change.
If you think this sounds unethical, you're not alone. A proposed class-action lawsuit filed on November 11 in federal court in San Jose, California, alleges that Google secretly granted Gemini access to the private communications of Gmail, Chat, and Meet users. As reported by Bloomberg on November 12, the suit charges that doing so without the consent of users and making it difficult to opt out may be a violation of the California Invasion of Privacy Act.
So far, Google hasn't publicly chimed in on the lawsuit. I reached out to the company for comment and will update the story if I get a response.
How to stop Google snooping on your data
If you don't want Google snooping on your data for AI training, you can certainly turn off any or all of the three key settings. Here's how.
On the desktop, sign in to the Gmail website, click the Gear icon in the upper right, and then select the button to view all settings. At the General screen on the Settings page, look for the Smart features section. If the setting for Turn on smart features in Gmail, Chat, and Meet is turned on, click the checkbox to turn it off.
In the next section for Google Workspace smart features, click the button to manage Workspace smart feature settings. At the pop-up window, turn off the switches for Smart features in Google Workspace and Smart features in other Google products.
In the Gmail mobile app, tap the three-line icon in the upper left and select Settings. In the iOS app, tap the setting for Data privacy. In the Android app, tap the name of your Google account. Turn off the switch for Smart features. Tap the option for Google Workspace smart features and then turn off the switches for Smart features in Google Workspace and Smart features in other Google products.
How to Blur Your Home on Google Maps
By Nelson Aguilar for CNET
One tiny Maps tweak keeps strangers from seeing your home.
Google Maps
Google Maps is the most popular navigation app. In addition to making it easy to get where you're going, Google Street View gives you a picture of any address. While this can be helpful, you might not want your front door, yard or driveway visible to anyone making a search.
For one, it raises privacy concerns. If you'd rather not have your personal space shown to strangers online, there are simple steps you can take to make it less exposed.
Here's how to help protect your privacy and limit how much strangers can see of your home.
You'll need to do this on your computer since the blurring feature isn't available in the Google Maps application on iOS or Android. It is accessible through the web browser on your mobile device, but it's rather difficult to use, so your best option is a trusted web browser on your Mac or PC.
At maps.google.com, enter your home address in the search bar at the top-right, hit return, then click the photo of your home that appears.
Next, you'll see the Street View of your location. Click Report a Problem at the bottom-right. The text is super tiny, but it's there.
Now, it's up to you to choose what you want Google to blur. Using your mouse, adjust the view of the image so that your home and anything else you want to blur are all contained within the red and black box. Use your cursor to move around and the plus and minus buttons to zoom in and out, respectively.
Once you're finished adjusting the image, choose what you're requesting to blur underneath:
- A face
- Your home
- Car/license plate
- A different object
You'll be asked to provide a bit more detail about what you want blurred in case the image is busy with cars, people and other objects.
Also, be completely sure that what you select is exactly what you want blurred. Google cautions that once you blur something on Street View, it's blurred permanently.
Finally, enter your email (this is required), complete the captcha if needed, and click Submit.
You should then receive an email from Google that says it'll review your report and get back to you once the request is either denied or approved. You may receive more emails from Google asking for more information regarding your request. Google doesn't offer any information on how long your request will take to process, so just keep an eye out for any further emails.
How 6 Devices Secretly Track You Everywhere
OpenAI's New Web Browser Has ChatGPT Baked In - That's Raising Privacy Questions
How Neighbors Could Spy on Smart Homes
By Mirko Zorz for Helpnet Security
Image: freepik.com
Even with strong wireless encryption, privacy in connected homes may be thinner than expected. A new study from Leipzig University shows that someone in an adjacent apartment could learn personal details about a household without breaking any encryption. By monitoring the wireless traffic of nearby smart devices, the "nosy neighbor" can infer what people are doing, when they are home, and even which room they are in.
Listening through the wall
The researcher tested what information could be learned from encrypted WiFi and Bluetooth Low Energy (BLE) signals. The experiment simulated a neighbor who sets up three cheap antennas along a shared wall. These antennas collected wireless data from a mock smart home next door filled with connected light bulbs, sensors, plugs, and a few everyday devices such as smartphones.
The observer never decrypted any data. Instead, the analysis focused on what leaks through side channels, the parts of communication that remain visible even when payloads are protected. Every wireless packet exposes timing, size, and signal strength. By watching these details over time, the researcher could map out daily routines.
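To see how much metadata alone can reveal, here is a minimal Python sketch that profiles per-device activity using nothing but packet timing, size, and signal strength. The packet list is invented; a real observer would feed in captures from a WiFi card in monitor mode, but the analysis side looks much like this.

```python
# Side-channel analysis sketch: payloads stay encrypted, so we only see
# per-packet metadata -- hour of day, frame size, RSSI, and sender MAC.
# The data below is synthetic, invented for illustration.
from collections import defaultdict

# (hour_of_day, frame_size_bytes, rssi_dbm, device_mac)
packets = [
    (7, 120, -60, "aa:bb"),   # small morning chatter: smart plug?
    (7, 118, -61, "aa:bb"),
    (20, 900, -55, "cc:dd"),  # heavy evening bursts: streaming box?
    (20, 905, -54, "cc:dd"),
    (20, 910, -55, "cc:dd"),
]

# Count packets per device per hour -- a crude daily-routine profile.
activity = defaultdict(lambda: defaultdict(int))
for hour, size, rssi, mac in packets:
    activity[mac][hour] += 1

for mac, hours in activity.items():
    busiest = max(hours, key=hours.get)
    print(f"{mac}: most active at {busiest}:00 ({hours[busiest]} packets)")
```

Even this toy version shows the shape of the leak: wake times, evening routines, and empty-house gaps fall straight out of the counts, no decryption required.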
WiFi sniffer based on a Raspberry Pi with the TP-Link antenna. SD Card adapter for scale. (Source: Research paper)
Identifying devices by their patterns
Even encrypted devices leave distinct traces. Packet frequency, transmission bursts, and radio signal strength helped identify which devices were in use. Over days of monitoring, the study could classify smart plugs, lights, and air sensors with notable accuracy. The system also detected when devices changed state, such as a lamp being turned on or a vacuum starting its cleaning cycle.
Bartosz Wojciech Burgiel, penetration tester at DigiFors and the author of the study, told Help Net Security that better hardware could widen the attack surface. He said, "I think that more advanced antennas-- i.e. the ones which allow for CSI monitoring, could create new possibilities for behavioral fingerprinting in this setting. I can't tell you much on the accuracy of CSI in obstructed settings-- i.e. when you're listening through the wall. Given the black box nature of this passive monitoring, even if the CSI was accurate, you would have no ground truth to 'decode' the readings to assign them to human behavior. So technically it would be advantageous, but you would have a hard time in classifying this data."
Once these patterns were established, a passive observer could tell when someone was awake, working, cooking, or relaxing. Activity peaks from a smart speaker or streaming box pointed to media consumption, while long quiet periods matched sleeping hours. None of this required access to the home's WiFi network.
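The classification idea can be sketched as a nearest-centroid match over simple traffic features. The feature values and device labels below are invented for illustration; the study's actual pipeline and accuracy figures are in the paper.

```python
# Toy device fingerprinting: match observed traffic features against
# per-device-type "fingerprints" (centroids). All numbers are made up.

# (mean packet size in bytes, packets per minute) per device type
centroids = {
    "smart plug": (100.0, 2.0),
    "light bulb": (80.0, 1.0),
    "streaming box": (1200.0, 300.0),
}

def classify(mean_size, rate):
    """Return the device type whose fingerprint is closest in feature space."""
    def dist(label):
        s, r = centroids[label]
        return ((mean_size - s) ** 2 + (rate - r) ** 2) ** 0.5
    return min(centroids, key=dist)

print(classify(1100.0, 280.0))  # bursty, large frames -> streaming box
print(classify(95.0, 2.5))      # small, infrequent -> smart plug
```

State changes (a lamp switching on, a vacuum starting) show up as abrupt shifts in the same features, which is how the study detected them.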
Locating people and rooms
The next part of the experiment used the signal strength of different devices to estimate their location. By comparing readings from multiple antennas, the researcher could perform trilateration, estimating where signals originated inside the apartment. While not precise enough to pinpoint exact positions, the results were accurate enough to divide the home into zones such as kitchen, office, and bedroom.
When residents moved around with smartphones or wearables, their approximate paths through the apartment could be tracked in near real time. Over multiple days, these traces made it possible to sketch the layout of rooms and identify which areas were used most often.
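The trilateration step can be sketched as follows, assuming an illustrative log-distance path-loss model whose constants (transmit power at 1 m, path-loss exponent) would have to be calibrated per environment; the paper's exact method may differ, and through-wall RSSI is far noisier than this clean simulation.

```python
# RSSI trilateration sketch: convert signal strength to distance, then
# solve for position from three receivers. Constants are illustrative.
import math

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Invert the log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(anchors, distances):
    """Solve for (x, y) from three anchors and distances by subtracting
    circle equations, which yields a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]  # antenna positions (meters)
true_pos = (2.0, 1.5)                           # device somewhere next door
dists = [math.dist(true_pos, a) for a in anchors]
rssis = [-40.0 - 20 * math.log10(d) for d in dists]  # simulated readings
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssis])
print(est)  # ~ (2.0, 1.5)
```

With real through-wall readings the estimate degrades to room-level zones rather than exact coordinates, which matches the study's kitchen/office/bedroom resolution.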
Learning about personal behavior
Beyond devices and locations, the study explored what this information reveals about people. Correlating traffic from multiple devices exposed behavioral patterns. A surge in kitchen device activity followed by a drop in motion sensors could suggest someone preparing dinner and leaving the room. Repeated evening peaks from a smart TV and game console indicated entertainment habits.
The research also captured probe requests, signals that WiFi devices send while looking for familiar networks. These requests sometimes included the names of previously connected networks, which can reveal places the user has visited, such as workplaces or cafés. During one case study, the appearance of a new smartphone pattern indicated that a guest had arrived, and their movements could be followed until the device left range.
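A sketch of what probe-request leakage enables, using invented observations: per-device profiles of previously used network names, plus guest detection when an unfamiliar device appears. Note that modern phones increasingly randomize MAC addresses and omit SSIDs from probes, which blunts this particular technique.

```python
# Probe-request profiling sketch. The (MAC, SSID) observations are invented;
# a passive capture of 802.11 probe requests would supply them in practice.
from collections import defaultdict

observations = [
    ("11:22", "HomeNet"),
    ("11:22", "Acme-Corp-Guest"),  # hints at a workplace
    ("11:22", "CafeLuna-Free"),    # a place the owner visits
    ("99:88", "HotelAirport"),     # a MAC never seen before: a guest?
]

known_devices = {"11:22"}
profiles = defaultdict(set)  # per-device history of probed network names
guests = set()
for mac, ssid in observations:
    profiles[mac].add(ssid)
    if mac not in known_devices:
        guests.add(mac)

print(sorted(profiles["11:22"]))
print(guests)  # {'99:88'}
```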
A quiet but serious privacy problem
The findings show that privacy exposure in smart homes goes beyond traditional hacking. Even with WPA2 or WPA3 encryption, network traffic leaks enough side information for outsiders to make inferences about occupants. A determined observer could build profiles of daily schedules, detect absences, and learn which devices are in use.
For security professionals, this highlights an often overlooked threat category: passive data collection in physical proximity. Unlike network intrusions, these attacks require no access credentials, malware, or interaction with the target network. They depend only on being within radio range.
Limited defenses
The study noted few practical countermeasures for consumers. Randomizing device identifiers and reducing unnecessary broadcasting could help, but most off-the-shelf smart devices do not offer these options. Strong encryption remains essential but cannot hide metadata such as timing or signal strength. Shielding rooms or lowering transmission power may reduce exposure but are impractical for most homes.
Burgiel was blunt about the limits of realistic defenses. He said, "While there are some theoretical countermeasures, I don't think that realistically anything can be done against such attacks. It is practically impossible to obscure or mask the wireless communication outside of the house. Theoretically someone could place all devices deep within the home, such that no signal leaks outside of the walls. But then effectively one room in your house could be smart."
He offered caveats about partial options. "There are some ways to 'hide' BLE, however I can't say how it would perform in a smart home setting. For WiFi, you can hide your BSSID, such that it's not broadcasted, but as I explained in my methodology, it would not stop a motivated attacker."
Burgiel also described a disruptive countermeasure that is possible in theory but hard in practice. "The only, albeit unrealistic, defense against such attacks is setting up dupes. You can easily spoof a device's transmitter MAC address, either Bluetooth or WiFi, and send random bytes such that in Wireshark their communication appears as if it came from the same device. By doing this, you could inject random patterns into the data stream making pattern recognition more challenging. But I do not know how the routers or hosts would react to such interference."
He added a final practical warning about that trick. "This countermeasure has one weakness. An attacker with spatially separated antennas would be able to tell the dupes and the original devices apart by examining their RSSI fingerprint. So you would have to either locate them very close to the true devices and match their TX, or spread them throughout the apartment such that the attacker does not know which of the devices is the original."
When the nosy neighbor becomes an insider threat
While the experiment used a domestic scenario, similar methods could apply in offices, labs, or corporate apartments where smart sensors are common.
Monitoring signal emissions and auditing device behavior could become part of security hygiene, especially in areas handling sensitive work. The "nosy neighbor" in this study might be an actual neighbor today, but the same techniques could be used by corporate spies or investigative actors tomorrow.
Even encrypted devices leave distinct traces. Packet frequency, transmission bursts, and radio signal strength helped identify which devices were in use. Over days of monitoring, the study could classify smart plugs, lights, and air sensors with notable accuracy. The system also detected when devices changed state, such as a lamp being turned on or a vacuum starting its cleaning cycle.
Bartosz Wojciech Burgiel, penetration tester at DigiFors and the author of the study, told Help Net Security that better hardware could widen the attack surface. He said, "I think that more advanced antennas-- i.e. the ones which allow for CSI monitoring, could create new possibilities for behavioral fingerprinting in this setting. I can't tell you much on the accuracy of CSI in obstructed settings-- i.e. when you're listening through the wall. Given the black box nature of this passive monitoring, even if the CSI was accurate, you would have no ground truth to 'decode' the readings to assign them to human behavior. So technically it would be advantageous, but you would have a hard time in classifying this data."
Once these patterns were established, a passive observer could tell when someone was awake, working, cooking, or relaxing. Activity peaks from a smart speaker or streaming box pointed to media consumption, while long quiet periods matched sleeping hours. None of this required access to the home's WiFi network.
Locating people and rooms
The next part of the experiment used the signal strength of different devices to estimate their location. By comparing readings from multiple antennas, the researcher could perform trilateration, estimating where signals originated inside the apartment. While not precise enough to pinpoint exact positions, the results were accurate enough to divide the home into zones such as kitchen, office, and bedroom.
When residents moved around with smartphones or wearables, their approximate paths through the apartment could be tracked in near real time. Over multiple days, these traces made it possible to sketch the layout of rooms and identify which areas were used most often.
Learning about personal behavior
Beyond devices and locations, the study explored what this information reveals about people. Correlating traffic from multiple devices exposed behavioral patterns. A surge in kitchen device activity followed by a drop in motion sensors could suggest someone preparing dinner and leaving the room. Repeated evening peaks from a smart TV and game console indicated entertainment habits.
The research also captured probe requests, signals that WiFi devices send while looking for familiar networks. These requests sometimes included the names of previously connected networks, which can reveal places the user has visited, such as workplaces or cafés. During one case study, the appearance of a new smartphone pattern indicated that a guest had arrived, and their movements could be followed until the device left range.
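Extracting those network names from a captured probe request is a simple type-length-value walk: in the 802.11 tagged-parameter section, element ID 0 carries an SSID. The frame bytes below are synthetic; real capture requires monitor-mode radio hardware, which this sketch deliberately omits.

```python
# Sketch: pulling previously-joined network names out of probe requests.
# Parses the tagged-parameter section of an 802.11 probe request, where
# element ID 0 carries an SSID. The frame bytes here are synthetic.
def ssids_from_tagged_params(data: bytes):
    """Walk type-length-value elements; element ID 0 is an SSID."""
    ssids, i = [], 0
    while i + 2 <= len(data):
        elem_id, length = data[i], data[i + 1]
        value = data[i + 2 : i + 2 + length]
        if elem_id == 0 and length > 0:  # 0 = SSID element
            ssids.append(value.decode("utf-8", errors="replace"))
        i += 2 + length
    return ssids

# Synthetic tagged params: SSID "HomeOffice" followed by a rates element.
frame = bytes([0, 10]) + b"HomeOffice" + bytes([1, 4, 0x82, 0x84, 0x8B, 0x96])
print(ssids_from_tagged_params(frame))  # ['HomeOffice']
```

A name like "HomeOffice" or a workplace SSID leaking out of a guest's phone is what let the study infer where that device had been before.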
A quiet but serious privacy problem
The findings show that privacy exposure in smart homes goes beyond traditional hacking. Even with WPA2 or WPA3 encryption, network traffic leaks enough side information for outsiders to make inferences about occupants. A determined observer could build profiles of daily schedules, detect absences, and learn which devices are in use.
For security professionals, this highlights an often overlooked threat category: passive data collection in physical proximity. Unlike network intrusions, these attacks require no access credentials, malware, or interaction with the target network. They depend only on being within radio range.
Limited defenses
The study noted few practical countermeasures for consumers. Randomizing device identifiers and reducing unnecessary broadcasting could help, but most off-the-shelf smart devices do not offer these options. Strong encryption remains essential but cannot hide metadata such as timing or signal strength. Shielding rooms or lowering transmission power may reduce exposure but are impractical for most homes.
Burgiel was blunt about the limits of realistic defenses. He said, "While there are some theoretical countermeasures, I don't think that realistically anything can be done against such attacks. It is practically impossible to obscure or mask the wireless communication outside of the house. Theoretically someone could place all devices deep within the home, such that no signal leaks outside of the walls. But then effectively one room in your house could be smart."
He offered caveats about partial options. "There are some ways to 'hide' BLE, however I can't say how it would perform in a smart home setting. For WiFi, you can hide your BSSID, such that it's not broadcasted, but as I explained in my methodology, it would not stop a motivated attacker."
Burgiel also described a disruptive countermeasure that is possible in theory but hard in practice. "The only, albeit unrealistic, defense against such attacks is setting up dupes. You can easily spoof a device's transmitter MAC address, either Bluetooth or WiFi, and send random bytes such that in Wireshark their communication appears as if it came from the same device. By doing this, you could inject random patterns into the data stream making pattern recognition more challenging. But I do not know how the routers or hosts would react to such interference."
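The obfuscation effect Burgiel describes can be simulated without touching a radio. The sketch below invents hourly packet counts for a device with an obvious evening habit, then mixes in random decoy transmissions attributed to the same spoofed MAC; the observer sees only the sum, and the peak-to-baseline contrast that made the habit legible collapses.

```python
# Sketch: why random decoy traffic frustrates pattern recognition.
# Simulates hourly packet counts for a device with a clear evening peak,
# then adds decoy transmissions from a spoofed duplicate MAC address.
import random
import statistics

random.seed(7)
hours = range(24)
real = [50 if 19 <= h <= 22 else 2 for h in hours]  # evening TV habit
decoy = [random.randint(20, 60) for _ in hours]     # same spoofed MAC
observed = [r + d for r, d in zip(real, decoy)]     # what the sniffer sees

def contrast(series):
    """Ratio of peak activity to the median hour; high = obvious habit."""
    return max(series) / statistics.median(series)

print(f"real pattern contrast:  {contrast(real):.1f}")
print(f"with decoys, contrast:  {contrast(observed):.1f}")
```

This only models the statistical side of the trick; as Burgiel notes, how routers and hosts would react to the spoofed frames is an open question.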
He added a final practical warning about that trick. "This countermeasure has one weakness. An attacker with spatially separated antennas would be able to tell the dupes and the original devices apart by examining their RSSI fingerprint. So you would have to either locate them very close to the true devices and match their TX, or spread them throughout the apartment such that the attacker does not know which of the devices is the original."
When the nosy neighbor becomes an insider threat
While the experiment used a domestic scenario, similar methods could apply in offices, labs, or corporate apartments where smart sensors are common.
Monitoring signal emissions and auditing device behavior could become part of security hygiene, especially in areas handling sensitive work. The "nosy neighbor" in this study might be an actual neighbor today, but the same techniques could be used by corporate spies or investigative actors tomorrow.
Plug the Leaks in Your Digital Life
By Komando Deals
Hackers, scammers and snoopers are getting slicker every day. Your gear should, too. I did the digging-- so you don't have to-- and found the best tools to outsmart them.
Guard your cards
RFID blocking cards-- $5, 50% off-- stop fraud before it even starts. Slip one into your wallet, and forget it's there. Your ID and financial info stay safe and sound.
Eyes where they shouldn't be?
Scan your hotel room, Airbnb or sketchy bathroom with a hidden-camera detector-- $32, 47% off. Even sniffs out GPS trackers and listening devices.
Safe family travels
A passport holder-- $31, 15% off-- with built-in RFID blocking protects your important documents without adding bulk. Comes with a SIM card holder and an ejector pin.
Swipe out snoops
These security rollers-- $28, 22% off-- cover sensitive details on bills, letters and forms with a swipe. Fast way to secret-proof your mail without a shredder.
Write safe, write smart
Fraudsters love to "wash" checks and rewrite new amounts and names. Uniball's gel pens-- $13, 5% off-- chemically bond with paper, so your autograph can't be changed.
Sharp, safe and clear
Emeet's NOVA 4K webcam-- $57, 5% off-- has a privacy cover when you need it. Autofocus and dual noise-canceling mics make video calls super crisp.
Motion-sensing watchdog
This solar-powered security camera-- $67, 16% off-- shows 360° panoramic views and full-color night vision. The best part? Totally wire-free.
Your files, on lockdown
With military-grade encryption, cloud backup and multiple passwords, this USB flash drive-- $38, 10% off-- is peace of mind on a stick.
Peek-proof your phone
Stay private in public spaces. Grab some anti-spy tempered glass shields for iPhones-- $18, 5% off-- and Samsungs-- $16, 16% off.
It's a trap
I'm talking about public USB ports. These data blockers-- $8, 17% off-- are invisible shields. Works with Androids and iPhone 15s and newer.
Stay two steps ahead
From AirTag holders to security envelopes, see here for more must-haves that actually work.
5 Signs Someone Might be Taking Advantage of Your Security Goodness
Not everyone in a security department is acting in good faith, and they'll do what they can to bypass those who do. Here's how to spot them.
Wikipedia defines "good faith" as "a sincere intention to be fair, open, and honest, regardless of the outcome of the interaction." A person who acts in good faith must be truthful and forthcoming with information, even if it affects the end state of a negotiation or transaction. In other words, lying and withholding information, by their very nature, make an interaction anything but good faith.
For many security professionals, good faith is the only way they know how to operate. Unfortunately, the security profession, like any profession, has its share of bad faith actors, too. For example, consider a co-worker who is underperforming and introducing unnecessary risk into the security organization. In certain cases, underperformers will look to sabotage others rather than improve the quality of their work. Or, as another example, consider a bad faith actor who is out to gain competitive intelligence or other information that can be used for any number of purposes, including social engineering.
How can good faith security practitioners identify bad actors and understand when they're being taken advantage of? Here are 5 signs.
1. Information hoarding: Ever had a conversation, meeting, chat correspondence, or email exchange that feels more like an interrogation than a two-way exchange of information? This is a well-known trick of-- and telltale sign of-- a bad faith actor. By the time most good faith actors catch on to the fact that the information flow is entirely one-way, they've already given the bad faith actor a wealth of information.
2. My way or the highway: As a generally rational bunch, good faith actors understand that life is a give and take. But bad faith actors know only how to take, making it difficult to negotiate. Their only concern is what they want, and they will employ a variety of tactics to get what they want while offering little to nothing in return. Unfortunately, good faith actors often fall for this approach, as they would rather disengage and get back to constructive activities than get dirty wrestling in the mud with a bad actor.
3. False generosity: When bad faith actors seek to manipulate people or situations, they will sometimes make what appears to be a generous offer. In reality, these offers often come at a tremendous cost. How so? If a good faith actor takes a bad faith actor up on an offer, it could be used against them in the future. The bad faith actor could also attempt to convince others of their "good nature" and "generosity" by pointing to a good faith actor who took the offer.
4. Bait and switch: Bait and switch is one of the oldest tricks in the book. As the Latin phrase so aptly states, caveat emptor: Buyer beware. Bad faith actors will often make promises of something they have absolutely no intention of giving to extract what they want from good actors. Once they have what they were after, they go quiet or become evasive. The chances of a good faith actor ever seeing what they wanted are very slim.
5. Promoting a narrative: One way bad faith actors seek out, persuade, and take advantage of new victims is by surrounding themselves with a chorus of approvers. This "posse," of sorts, may consist of witting and/or unwitting accomplices. In some cases, accomplices were recruited via lies or manipulation. In other cases, the accomplices may have their own motivations for why they wish to partake in certain bad faith activities. In any event, bad faith actors will often promote a narrative to help convince new audiences they can be believed. This can be difficult to navigate and often catches good faith actors by surprise.
In the end, a heaping dose of awareness of misleading behaviors-- plus a bit of healthy cynicism-- can stop bad faith actors from taking advantage and achieving their goals.
© vocalbits.com