privacy
Senate Republicans Vote To Gut Internet Privacy
Hamza Shaban, writing for BuzzFeed:
The Senate voted Thursday to make it easier for internet service providers to share sensitive information about their customers, a first step in overturning landmark privacy rules that consumer advocates and Democratic lawmakers view as crucial protections in the digital age. The vote was passed along party lines, 50-48, with all but two Republicans voting in favor of the repeal and every Democrat voting against it. Two Republican Senators did not vote.
Disgusting. This is what buying policy looks like, folks. Kate Tummarello of the Electronic Frontier Foundation also did a write-up, and included a particularly scary piece of information:
Republicans in the Senate just voted 50-48 (with two absent votes) to approve a Congressional Review Act resolution from Sen. Jeff Flake which—if it makes it through the House—would not only roll back the FCC’s rules but also *prevent the FCC from writing similar rules in the future*.
(emphasis added)
This may not seem like a big deal, but it very much is, especially in an age where ISPs and the data brokers to whom they sell your information are frequently hacked.
More shameful behavior from Senate Republicans whose retirement can’t possibly come soon enough.
Dropbox employee’s password reuse led to theft of 60M+ user credentials
Kate Conger, reporting at TechCrunch:
Dropbox disclosed in 2012 that an employee’s password was acquired and used to access a document with email addresses, but did not disclose that passwords were also acquired in the theft. Because Dropbox stores its user passwords hashed and salted, that’s technically accurate — it seems that hackers were only able to obtain hashed files of Dropbox user passwords and were unable to crack them. But it does appear that more information was taken from Dropbox than was previously let on, and it’s strange that it’s taken this long for the breach to surface.
Don’t reuse passwords, folks. Find a password manager and learn to love it. There are 1Password, LastPass, Dashlane, and many others. That means there’s no excuse for you to keep using your dog’s name combined with your college graduation year or whatever terrible password you’re using for everything.
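For the curious, “hashed and salted” is the detail doing the heavy lifting in that quote. Here is a minimal Python sketch of the idea (an illustration only, not Dropbox’s actual scheme) showing why a random per-user salt plus a deliberately slow hash makes a stolen password file far less useful than stolen plaintext passwords:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest). A random per-user salt means two users with the
    same password still get different hashes, defeating precomputed tables."""
    salt = salt or os.urandom(16)
    # PBKDF2 is intentionally slow, so brute-forcing a stolen hash file is costly.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify(password, salt, expected):
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("hunter2", salt, stored))                        # False
```

Even if attackers walk off with `stored` and `salt`, they still have to guess the password itself, which is exactly why reusing one password everywhere is so dangerous.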
Secret Cameras Record Baltimore’s Every Move From Above
Pritchett had no idea that as he spoke, a small Cessna airplane equipped with a sophisticated array of cameras was circling Baltimore at roughly the same altitude as the massing clouds. The plane’s wide-angle cameras captured an area of roughly 30 square miles and continuously transmitted real-time images to analysts on the ground. The footage from the plane was instantly archived and stored on massive hard drives, allowing analysts to review it weeks later if necessary.
It must be the NSA or the CIA or the FBI, right? They must have a warrant, right? They must be deleting the video after a certain period of time, right?
Wrong.
It’s the Baltimore Police Department. The article and accompanying video clarify the motivation of the company providing the technology and the service to BPD. Founder Ross McNutt says he hopes technology like his will have a deterrent effect on crime in cities where its deployment is disclosed. That’s a good goal, but it’s not the BPD or the company’s founder I’m worried about.
Anything on a hard drive that isn’t air-gapped is vulnerable to exfiltration by hackers. That includes a massive digital video recorder covering an entire city for an indeterminate amount of time.
Scary stuff.
Vizio TVs spy on you, here’s how to disable it
Vizio’s technology works by analyzing snippets of the shows you’re watching, whether on traditional television or streaming Internet services such as Netflix. Vizio determines the date, time, channel of programs — as well as whether you watched them live or recorded. The viewing patterns are then connected to your IP address - the Internet address that can be used to identify every device in a home, from your TV to a phone.
This is a damn good reason not to buy a Vizio TV. I won’t rant about opt-out/opt-in again. But I found Vizio generally had a good price-to-quality ratio: not top-shelf hardware, but not top-shelf prices, either. So this shadiness is a shame.
A shamey-ness?
Anyway, props to Samsung and LG, who, according to Julia Angwin at ProPublica, require user consent before enabling the sort of tracking Vizio turns on by default.
Disable Vizio "Smart Interactivity"
Vizio obviously knows how shady its default spying is because they have a page named after the feature which begins with information on how to turn it off:
VIA TV Interface
- Press the MENU button on your TV's remote.
- Select Settings.
- Highlight Smart Interactivity.
- Press RIGHT arrow to change setting to Off.
VIA Plus TV Interface
- Press the MENU button on your TV's remote or open HDTV Settings app.
- Select System.
- Select Reset & Admin.
- Highlight Smart Interactivity.
- Press RIGHT arrow to change setting to Off.
The how and why of sneaky ultrasonic ad tracking
Dan Goodin reports over at Ars Technica on the development of technology which can use inaudible frequencies to tie together multiple unconnected devices. He explains:
The ultrasonic pitches are embedded into TV commercials or are played when a user encounters an ad displayed in a computer browser. While the sound can't be heard by the human ear, nearby tablets and smartphones can detect it. When they do, browser cookies can now pair a single user to multiple devices and keep track of what TV commercials the person sees, how long the person watches the ads, and whether the person acts on the ads by doing a Web search or buying a product.
Goodin cites a letter from the Center for Democracy and Technology to the Federal Trade Commission [PDF] describing the technical aspects of the practice and the privacy implications. I won’t repeat what Goodin or CDT have already explained with clarity. Instead, I wanted to talk about the inability of users like us to opt out of cross-device tracking.
Why don’t the companies developing and using these tracking technologies just tell us what they’re doing and give us the option to opt out? Obviously, requiring us to opt in would be the most honorable and least user-hostile approach. But I’ll concede that as being firmly in the “never gonna happen” column.
I am open to the possibility that I set up a straw man in the next section of this article, so feel free to point it out to me if that’s what you think. Just be constructive.
Concerns about using a straw man aside, the only logic I can see undergirding the failure to offer an opt-out mechanism is a concern that a large number of users would in fact opt out. That would obviously reduce or, in a worst-case scenario for tracking companies, eliminate the population of tracked individuals.
The only problem with that is that it’s bullshit.
We opt in to terms of service and privacy policies all over the web every day without reading a word of them. Projects like ToS;DR and TOSback aim to make us better informed about what we’re agreeing to and how those agreements change over time. They are fascinating and important projects but primarily the domain of geeks like me (and, since you’re reading this, possibly you, as well).
The truth is the overwhelming majority of people click “Yes” or “Agree” or “Continue” or whatever other button or link gets them to the web content or software they want to use. Here’s a quote from an AdWeek article published in May 2015, citing a survey done by photography website ScoopShot:
More than 30 percent of the 1,270 survey respondents said they never read the ToS when signing up to a social network. 49.53 percent only read the ToS ‘sometimes,’ and only 17.56 percent of people ‘always’ read the ToS.
Yes, that’s only one study, and yes, it was conducted on SurveyMonkey, but it’s a decent sample size. And can you honestly tell me that you, or anyone else you know, actually read the terms and policies of the sites and software you use? Probably not.
Is there any other reason, then, that creepy advertising tracking technology doesn’t offer an opt-out, just like the ones we never actually make use of throughout the rest of the web? Yes, I think there is.
Most websites have terms of service and privacy policies, although they are usually relegated to minuscule links at the very bottom of the website’s footer section. The European Union requires cookie notifications. But when is the last time you decided not to use a website like Facebook or the BBC website because you read their policies and didn’t consent to them? I’ll answer for the overwhelming majority of us: never, ever.
It’s their ubiquity coupled with the dominant user response of wildly clicking “Yes” until they get what they came for that makes website policies such a compelling topic of discussion. The companies building the technology that uses inaudible sound to tell advertisers that your phone, computer, television and tablet all belong to the same person can minimize conversation about their products by refusing to present you with an opt-out mechanism.
It’s that desire to remain invisible and as uncontroversial as possible for as long as possible that motivates them to be so sneaky. One commenter on Goodin’s Ars article puts it very well:
that advertisers keep basing their technological "progress" off of malware research and techniques is very telling.
It sure is. The reality is that I am one of those weirdos who doesn’t care if I’m tracked, but I do care when I’m not asked to consent to it. I propose that some privacy-minded geeks more intelligent than I develop some sort of ultrasonic ad-cancelling noise generation software for us to use in our homes and offices to thwart secret ultrasonic cross-device ad tracking. You have to take that one and run with it; I’m just an ideas man.
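Since I’m just the ideas man, here’s only the roughest sketch of what such a jammer might look like: a Python toy, assuming the third-party numpy and sounddevice packages and speakers that can actually reproduce near-ultrasonic frequencies (many can’t), that plays band-limited noise in the 18–20 kHz range these beacons reportedly occupy:

```python
import numpy as np
import sounddevice as sd  # pip install sounddevice

SAMPLE_RATE = 48_000        # Hz; must be high enough to reproduce ~20 kHz tones
LOW, HIGH = 18_000, 20_000  # approximate near-ultrasonic band used by beacons

def ultrasonic_noise(seconds):
    """Generate band-limited noise intended to mask ultrasonic ad beacons."""
    samples = int(seconds * SAMPLE_RATE)
    spectrum = np.fft.rfft(np.random.randn(samples))
    freqs = np.fft.rfftfreq(samples, d=1 / SAMPLE_RATE)
    spectrum[(freqs < LOW) | (freqs > HIGH)] = 0   # keep only the 18-20 kHz band
    noise = np.fft.irfft(spectrum, n=samples)
    return 0.5 * noise / np.max(np.abs(noise))     # normalize, leave headroom

# Play ten seconds of masking noise; a real jammer would loop indefinitely.
sd.play(ultrasonic_noise(10.0), SAMPLE_RATE, blocking=True)
```

Whether this would actually confuse a beacon decoder, rather than just annoy the family dog, is left as an exercise for those more intelligent geeks.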
Facial Recognition Software Moves From Overseas Wars to Local Police
This is troubling:
Lt. Scott Wahl, a spokesman for the 1,900-member San Diego Police Department, said the department does not require police officers to file a report when they use the facial recognition technology but do not make an arrest. The department has no record of the stops involving Mr. Hanson and Mr. Harvey, and Lieutenant Wahl said that he did not know about the incidents but that they could have happened.
Should police departments be allowed to use facial recognition?
Yes.
Should they be able to use it with minimal consent, oversight and reporting requirements?
No.
Image from Wikimedia

The ethics of modern web ad-blocking
Marco Arment, creator of Instapaper and, more recently, Overcast:
This won’t be a clean, easy transition. Blocking pop-ups was much more incisive: it was easy for legitimate publishers to avoid one narrowly-useful Javascript function to open new windows. But it’s completely reasonable for today’s web readers to be so fed up that they disable all ads, or even all Javascript. Web developers and standards bodies couldn’t be more out of touch with this issue, racing ahead to give browsers and Javascript even more capabilities without adequately addressing the fundamental problems that will drive many people to disable huge chunks of their browser’s functionality.
I vacillate between Ghostery and uBlock, but they do the same thing: disable the scripts that power advertisements and tracking on the web. Some sites respect their visitors and present unobtrusive, high-quality advertisements. I whitelist those because, even if I’m unlikely to look at the ads and far less likely to actually click on them, the respect the publisher showed me deserves reciprocation.
But Arment is right. There’s no nice way to say it: publishers with shitty ads won’t remain viable much longer in the face of increased user awareness and response. The ability to use ad blockers in iOS 9 will only accelerate the downfall of sites with shitty ads.
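For context on how those iOS 9 ad blockers work under the hood: a Safari content blocker is essentially a static list of trigger/action rules handed to the browser as JSON. Here is a hypothetical two-rule list, written as Python that emits the JSON; the tracker domain and CSS selector are invented for illustration, not taken from any real blocklist:

```python
import json

rules = [
    {
        # Block third-party scripts served from a made-up tracking domain.
        "trigger": {
            "url-filter": r"https?://.*\.tracker\.example/.*",
            "load-type": ["third-party"],
            "resource-type": ["script"],
        },
        "action": {"type": "block"},
    },
    {
        # Hide leftover ad containers on every page instead of loading them.
        "trigger": {"url-filter": ".*"},
        "action": {"type": "css-display-none", "selector": ".ad-banner"},
    },
]

# The resulting JSON is what a content-blocker extension ships to Safari.
print(json.dumps(rules, indent=2))
```

Because the rules are compiled ahead of time, the browser never has to tell the extension what you are browsing, which is a nicer privacy model than the ad networks it blocks.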

Federal Court’s data breach decision shows new tilt toward victims, class-action lawsuits
John Fontana writes at ZDNet:
In an interesting twist, the Court said the fact Neiman Marcus offered free credit monitoring services was evidence that there was harm to these victims. The ruling turned on its head the way courts historically view such services as compensation for harm while negating a victim's right to file a lawsuit (re: standing).
This may get very interesting very fast: if companies are at risk of being held to have tacitly admitted liability by offering credit protection services to potential breach victims, they will stop offering those services.
The possibility of class actions instead of free credit monitoring may appeal to those whose data has been stolen, but it’s not really a great trade at all. Credit monitoring is expensive and the industry is still suffering growing pains, but class actions usually net plaintiffs an insignificant amount of money in damages while making lawyers very, very rich.
China-Tied Hackers That Hit U.S. Said to Breach United Airlines
This is starting to look like a concerted effort to gather a specific data set for some sort of coordinated use:
The previously unreported United breach raises the possibility that the hackers now have data on the movements of millions of Americans, adding airlines to a growing list of strategic U.S. industries and institutions that have been compromised. Among the cache of data stolen from United are manifests -- which include information on flights’ passengers, origins and destinations -- according to one person familiar with the carrier’s investigation.
Tor Project seeks Executive Director
The Tor Project, makers of anonymizing browsing tools, is looking for a new Executive Director:
The position provides the high-profile opportunity to assume the voice and face of Tor to the world, and particularly to the global community of Internet organizations dedicated to maintaining a stable, secure and private Internet. In this position, the successful candidate will be able to exercise their deep leadership experience to manage a virtual team of culturally diverse volunteer developers. The candidate will have the opportunity to draw support from their stature in the wider community of Internet privacy foundations and activist organizations to advance external development initiatives.
Tor is used by everyone from political dissidents to child pornographers to access a darknet, unreachable from the Internet most people know. Read more about the Tor Project at Wikipedia.
Tor Project logo uploaded by Wikimedia Commons user Flugaal

When a Company Is Put Up for Sale, in Many Cases, Your Personal Data Is, Too
I have written about this before, but it’s worth reminding you. These days many companies offer an official privacy policy and an easier-to-read but not so official abridged version. Sometimes the two do not agree:
One example is Nest, an Internet-connected thermostat company that enables people to control their home energy use via their mobile devices. Acquired by Google for $3.2 billion last year, Nest has different online privacy pages with seemingly conflicting statements. One page, in colloquial English, says that the company values trust: “It’s why we work hard to protect your data. And why your info is not for sale. To anyone.”
Another page, containing Nest’s official privacy policy, however, says: “Upon the sale or transfer of the company and/or all or part of its assets, your personal information may be among the items sold or transferred.”
I know privacy policies are long and boring, but it’s worth at least scanning them to get a sense of what will happen to the information the company collects about you if they ever sell or go under. You may not like what you find.
Photograph by KylaBorg, of graffiti by Zabou

NASA, Verizon developing tech to track drones via cell towers

Mark Harris reports at The Guardian:
That $500,000 project is now underway at Nasa’s Ames Research Center in the heart of Silicon Valley. Nasa is planning the first tests of an air traffic control system for drones there this summer, with Verizon scheduled to introduce a concept for using cell coverage for data, navigation, surveillance and tracking of drones by 2017. The phone company hopes to finalise its technology by 2019.
This is fascinating to me because the documents obtained by The Guardian describe the purpose of the partnership as to “jointly explore if cell towers and communications could possibly support Unmanned Aerial Systems (UAS) Traffic Management (UTM) for communications and surveillance of UAS at low altitudes” (emphasis added).
NASA typically focuses on altitudes so high they’re, well, in space, so why are they involved with developing low-altitude drone tracking technology?
I want to note that I’m not necessarily opposed to someone in the government being able to keep track of all the drones that will inevitably be zipping around. I’m just not sure why NASA is involved, and I wonder whether their choice of Verizon as a partner serves as a tacit confirmation of that cellular network’s claims of coverage supremacy over its competitors.
There will be some related surveillance stories in tomorrow’s Modern Law newsletter, so sign up to get an email with five links I haven’t blogged about yet.
Image © Nevit Dilmen
EFF Wins Battle Over Secret Legal Opinions on Government Spying
The EFF said in a press release yesterday:
The U.S. Department of Justice today filed a motion to dismiss its appeal of a ruling over legal opinions about Section 215 of the Patriot Act, the controversial provision of law relied on by the NSA to collect the call records of millions of Americans. As a result of the dismissal, the Justice Department will be forced to release a previously undisclosed opinion from the Office of Legal Counsel (OLC) concerning access by law enforcement and intelligence agencies to census data under Section 215.
That’s good news. Census data has historically been analyzed only in the aggregate, with individual records held by the United States Census Bureau for 72 years before public release. I’m interested in reading the OLC opinion when it’s finally released.
Message scanning lawsuit against Facebook won’t go away
John Timmer reports at Ars Technica:
The court responded to this request by pursuing an extraordinarily rare course of action: it read Facebook’s entire terms of service. And, in this case, their vague language—typically used to provide broad immunity—became a liability: “[the document] does not establish that users consented to the scanning of their messages for advertising purposes, and in fact, makes no mention of ‘messages’ whatsoever.”
Be specific with those Terms of Service. Really specific.
Americans’ Cellphones Targeted in Secret U.S. Spy Program
Devlin Barrett reports at The Wall Street Journal:
The program cuts out phone companies as an intermediary in searching for suspects. Rather than asking a company for cell-tower information to help locate a suspect, which law enforcement has criticized as slow and inaccurate, the government can now get that information itself. People familiar with the program say they do get court orders to search for phones, but it isn’t clear if those orders describe the methods used because the orders are sealed.
Subprime auto lenders use technology to compel payment
Michael Corkery and Jessica Silver-Greenberg, reporting at the New York Times DealBook blog:
Ms. Bolender was three days behind on her monthly car payment. Her lender, C.A.G. Acceptance of Mesa, Ariz., remotely activated a device in her car’s dashboard that prevented her car from starting. Before she could get back on the road, she had to pay more than $389, money she did not have that morning in March.
This is as stark an illustration of the intersection of law and technology as I’ve linked to in a while. While the tech can be a blunt instrument in a world of nuance (some borrowers are doing their best, others are surely not), I don’t oppose it. Assuming everyone was aware of the terms of the loan, it’s a valid contract, etc.
But this sentence gave me pause:
Using the GPS technology on the devices, the lenders can also track the cars’ location and movements.
Again, there probably isn’t anything illegal about it, assuming a valid contract. But in a world of automated license plate scanning and associated geo-behavioral profiling, is a GPS device overkill?
I suppose the business model itself is unnerving. After all, if you need to use a GPS device to manage risk, maybe you shouldn’t be making the loan in the first place. Borrowers using subprime auto loans probably just can’t afford to get a car.
Some drivers volunteer for activity-tracking devices as a way of qualifying for reduced car insurance premiums. Such people can already afford insurance, though, and allow their provider to track their behavior in exchange for added savings.
Maybe it’s less the tech involved and more the word “subprime,” which to me invariably suggests a corporation taking advantage of someone who can’t actually afford what they’re getting, and will inevitably default.
FBI Director dislikes encryption on Apple and Google devices
Encryption of data on mobile devices is a big selling point in our post-Snowden world. But FBI Director James Comey isn’t happy about it:
What concerns me about this is companies marketing something expressly to allow people to place themselves beyond the law.
David Kravets of Ars Technica reports Comey has “reached out” to the companies about the issue. Absent new or amended legislation, though, there is little he can do about it, precisely because there is such a sales incentive to marketing encryption these days.
Anonymous Instagram users role-play with stolen baby photos
Blake Miller of Fast Company has this chilling article:
Jenny had become a victim of a growing—and to many, alarming—new community that exists primarily on Instagram: baby role-players. Instagram users like Nikki steal images of babies and children off the Internet, give them a new name, and claim them as their own. Sometimes they create entire fake families.
The sad thing is there is relatively little protection to be had from the law in situations like this. You may be able to sue someone using your likeness in a commercial venture without your permission, but non-commercial use of the nature described above is rarely protected in the same way.
Instagram users should review their privacy settings by reading the company’s help pages about controlling your visibility and setting photos and video as viewable only to approved followers.
Keep this in mind, though: even if you set your content as private, sharing a link to a photo or video on a social network like Twitter or Facebook will allow anyone with that link to view it.
People who do steal your photos and pretend they’re your child or your child’s parent are violating Instagram’s Terms of Service, which prohibits impersonation, among other things (emphasis mine):
You must not defame, stalk, bully, abuse, harass, threaten, *impersonate* or intimidate people or entities […]
The Fast Company article to which I link above includes a statement from Instagram that the company does remove the stolen images when users report such activity.
This story is another lesson to be mindful of not only what you share online, but how you share it. After all, a private company like Instagram could simply choose to ignore concerns like these, and users would have no recourse. Social networks can be a rich and vibrant way to stay in touch, but we are all responsible for what and how we share.
Apple can’t bypass your iOS passcode
Apple says in the latest revision of its page on government information requests:
On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.
Sure enough, the company also updated its Legal Process Guidelines (PDF) to reflect the increase in user privacy:
For all devices running iOS 8.0 and later versions, Apple will no longer be performing iOS data extractions as the data sought will be encrypted and Apple will not possess the encryption key.
This is obviously good news for people concerned about the amount of our data swishing around in the binary ocean, ripe for government fishing expeditions.
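To make the “cannot bypass your passcode” claim concrete: when the key protecting your data is derived from your passcode (Apple’s real design also entangles it with a device-unique hardware key, which this toy sketch omits), there is simply no spare key for Apple to hand over. A heavily simplified illustration, assuming the third-party cryptography package:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

passcode = b"123456"   # the only secret the user holds
salt = os.urandom(16)  # stored on the device; useless without the passcode

# Stretch the passcode into an encryption key. Without the passcode the key
# cannot be reconstructed, so the ciphertext stays ciphertext.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
key = base64.urlsafe_b64encode(kdf.derive(passcode))

ciphertext = Fernet(key).encrypt(b"photos, messages, contacts, call history ...")
print(Fernet(key).decrypt(ciphertext))  # only the passcode holder can do this
```

Hand that ciphertext to anyone, warrant or no warrant, and without the passcode it is just noise.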
However, it’s also worth noting the overwhelming majority, 93 percent, of law enforcement requests to Apple are made at the behest of the customer themselves, usually in the case of a lost or stolen device.
You can find more information about what Apple discloses to law enforcement at its transparency reports page.
Privacy advocates, tech companies nudge Congress to protect ‘abandoned’ e-mails
The Email Privacy Act would prevent the government from using mere administrative subpoenas to access email older than 180 days. The distinction, included in the Stored Communications Act, was based on the need for users to access and download email from a service provider’s servers. The logic was that if someone hadn’t downloaded their email in six months or more, they had effectively abandoned it.
Of course, things are no longer that simple in the age of constant synchronization and push notifications on mobile devices.
Previously: