#privacy

Public notes from activescott tagged with #privacy

Friday, April 10, 2026

If you use Signal, you actually have an advantage here, now that you know about this vulnerability. Signal has a setting that blocks the content of messages from appearing in their notifications. That way, even if someone accesses your alerts, all they'll see is you received a Signal message—not who sent it or what it contains.

To turn it on, open Signal, tap your profile in the top-left corner, then tap "Settings." Under Notification Content, choose "No Name or Content" to keep both the sender and the message text out of the alert. You can compromise here and choose "Name Only" if you want to know who a message is from before you open it; just remember, an intruder may also see you received a message from that person if they scrape your iPhone's notifications.

Sunday, April 5, 2026

Microsoft is running one of the largest corporate espionage operations in modern history.

Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.

The user is never asked. Never told. LinkedIn’s privacy policy does not mention it.

Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world. This is illegal and potentially a criminal offense in every jurisdiction we have examined.

LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm. The element is zero pixels wide, hidden off-screen, and sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.

Every time you open LinkedIn in a Chrome-based browser, LinkedIn’s JavaScript executes a silent scan of your installed browser extensions. The scan probes for thousands of specific extensions by ID, collects the results, encrypts them, and transmits them to LinkedIn’s servers. The entire process happens in the background. There is no consent dialog, no notification, no mention of it in LinkedIn’s privacy policy.
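The probing technique described above is possible because Chrome extensions that declare `web_accessible_resources` expose files at predictable `chrome-extension://<ID>/<path>` URLs, so a page can attempt to load such a file and infer installation from success or failure. Here is a minimal sketch of that general mechanism; the function name, the candidate list, and the injected loader are illustrative assumptions, not excerpts from LinkedIn's actual bundle, which may work differently.

```javascript
// Hypothetical sketch of extension probing via web-accessible resources.
// `candidates` lists extension IDs and a resource path each is known to expose;
// `loadResource` is the environment's loader. In a browser it might be:
//   (url) => fetch(url).then(r => { if (!r.ok) throw new Error("miss"); })
async function probeExtensions(candidates, loadResource) {
  const detected = [];
  for (const { id, resource } of candidates) {
    try {
      await loadResource(`chrome-extension://${id}/${resource}`);
      detected.push(id); // resource loaded: extension is installed
    } catch {
      // load failed: extension absent, or resource not web-accessible
    }
  }
  return detected; // list of installed extension IDs, ready to transmit
}
```

Nothing in this loop is visible to the user: no permission prompt fires, and each failed probe is just a silently swallowed network error.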

This page documents exactly how the system works, with line references and code excerpts from LinkedIn’s production JavaScript bundle.

See https://browsergate.eu/how-it-works/

Tuesday, March 31, 2026

A filter composed of several other filters (AdGuard Base filter, Social media filter, Tracking Protection filter, Mobile Ads filter, EasyList and EasyPrivacy), simplified specifically for better compatibility with DNS-level ad blocking.

The direct link to the filter: https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt

Please note that to use this filter, your blocker must support basic ad-blocking rules syntax. It does not make much sense to extract just the hosts file.
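To illustrate why the hosts file alone loses information (the domains below are placeholders, not entries from the actual filter): a hosts file can only map exact hostnames, while adblock-style rules can cover all subdomains at once, carve out exceptions, and carry modifiers.

```
# hosts-file style: one exact hostname per line, nothing more
0.0.0.0 tracker.example.com

! adblock style: blocks the domain and every subdomain
||example.org^
! exception rule: never block this particular subdomain
@@||cdn.example.org^
! modifier: give this rule priority over exception rules
||ads.example.net^$important
```

A hosts-file extraction would keep only the first kind of entry and silently drop the exceptions and modifiers, which is why the filter asks for real rule-syntax support.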

Wednesday, February 25, 2026

The danger here isn’t just about one contract; it’s about the precedent. If the Pentagon successfully bullies Anthropic into submission or replaces it with a more “flexible” competitor, we are effectively witnessing the birth of an intentionally unethical AI.

The Death of Human Agency

When AI is integrated into weaponry for “all lawful purposes” without restrictions on autonomy, we invite the Responsibility Gap. If an AI-driven drone swarm misidentifies a target, who is at fault? By removing the “human-in-the-loop” requirement, the military is seeking a weapon that offers the ultimate prize of war: lethality without accountability.

Surveillance as a Service

Existing U.S. laws were written for wiretaps, not for generative AI that can ingest millions of data points to build predictive profiles. Under an “all lawful purposes” mandate, an LLM could be turned into a digital Panopticon. Anthropic has warned that current laws have not caught up to what AI can do in terms of analyzing open-source intelligence on citizens.

The Moral Race to the Bottom

If the Pentagon blacklists Anthropic, it sends a clear message to competitors: Safety is a liability. To win government billions, firms will be incentivized to strip away safety layers. Reports already suggest OpenAI, Google, and xAI have shown more “flexibility” regarding the Pentagon’s demands.

The Pentagon’s “supply chain threat” maneuver is a scorched-earth tactic designed to force Silicon Valley to choose between its values and its bottom line.

If Anthropic stands firm, it may lose $200 million in revenue and a seat at the defense table. But if it caves, it may well be providing the operating system for the very “Terminator” future it was founded to prevent. In the world of 2026, the most dangerous threat to the supply chain might just be an AI that has been ordered to stop caring about ethics.

Sunday, January 25, 2026

Sunday, January 4, 2026

Saturday, December 27, 2025

The consequences of getting caught in this expanding digital cage can be dire. In rural China, a family’s home is ringed by security cameras that alert authorities whenever they try to go to Beijing to complain about local officials. Near San Antonio, a driver is stopped as part of a secretive U.S. Border Patrol program that uses license plate readers to monitor millions of drivers and detain those whose travel patterns are deemed suspicious. In Gaza, AI-powered technology helps the Israeli military decide who to kill.

Wednesday, December 3, 2025

State legislation attempting to protect privacy is, hopefully, forthcoming.

A report last month out of the University of Washington found that several local police departments authorized U.S. Border Patrol to use their license plate reader databases. In other cases, Border Patrol had backdoor access without express permission, and in some instances police conducted searches on behalf of the federal agency. Worries extend beyond immigration.

Authorities in Texas this year searched thousands of the cameras, as far away as Washington state and Illinois, in their search for a woman believed to have had a self-administered abortion.

Tuesday, November 18, 2025

The cities’ move to exempt the records from disclosure was a dangerous attempt to deny transparency and reflects another problem with the massive amount of data that police departments collect through Flock cameras and store on Flock servers: the wiggle room cities seek when public data is hosted on a private company’s server.

If a government agency is conducting mass surveillance, EFF supports individuals’ access to data collected specifically on them, at the very least. And to address legitimate privacy concerns, governments can and should redact personal information in these records while still disclosing information about how the systems work and the data that they capture.

Privacy in general matters because you never know how your data might be used even if you’re a good guy.

On Thursday, a Skagit County Superior Court judge ruled that pictures taken by Flock cameras in the cities of Sedro-Woolley and Stanwood qualify as public records, and therefore must be released as required by the state's Public Records Act, court records show.

Flock's cameras, also called automated license plate readers, continuously and indiscriminately capture time- and location-stamped photos of any passing vehicles. Those images are then stored, and information about the vehicles, including their condition, make, model and license plate number, is added to a searchable database controlled by the customer.

Last week's Skagit County ruling could oblige the dozens of Washington police agencies which use Flock cameras, ostensibly to help them find stolen vehicles, crime suspects and missing people, to release the photos and data they collect — an outcome privacy advocates warned was possible.

The ruling also exacerbated concerns about potential misuse of Flock data, which swelled after University of Washington researchers released a report Oct. 21 showing federal immigration agencies like ICE and Border Patrol had accessed the data of at least 18 Washington cities, often without their police departments' knowledge. The report raised concerns that the agencies might be using the data to target and arrest immigrants as part of Trump's immigration crackdown.