Google strikes back (May 13, 2013). Posted by Cameron Shelley in STV202.
I have posted before about some of the dissing that Google Glass has received so far, even before its introduction. For example, it has been compared to the Segway, a personal mobility vehicle that never took off as its inventors intended.
(Loic Le Meur/Wikimedia commons)
But Google has been fighting back, countering the criticism with some positive PR. For example, Google has publicized some warm feelings from beta testers:
Mary Lambert got cooking instructions using Glass. “The friend who I was doing it with could see what I was doing and was like ‘No no no, that’s all wrong,’ which was really helpful and I didn’t expect it,” she says.
So, interacting with Glass is better than interacting with friends? Well, friends are not always well-informed or compliant, unlike Glass.
Critics, I suppose, would argue that early adopters like everything new–like the Segway–whether it is really a good idea or not.
Another time-honoured way to take some heat off Glass is to identify a marginalized social group with whom everyone is sympathetic and who might surely benefit from the new technology. In this case, Google points out that war veterans could use Glass to have a better experience at war memorials:
[Sarah] Hill is convinced that leading a virtual tour for veterans while wearing Google Glass would be completely different for them than showing the group just a DVD. She says it gives them the ability to ask questions and request certain sights and sounds, like the waves on the beaches of Normandy or the waterfalls at the World War II memorial.
This use of Glass is similar to another new technology, also discussed by NPR, namely Sony’s new Entertainment Action Glasses. The purpose of these specs is to help deaf people enjoy movies in theaters:
Sony Entertainment Access Glasses are sort of like 3-D glasses, but for captioning. The captions are projected onto the glasses and appear to float about 10 feet in front of the user. They also come with audio tracks that describe the action on the screen for blind people, or they can boost the audio levels of the movie for those who are hard of hearing.
This technology might well be really enjoyable for deaf movie fans and could help to boost attendance in theaters somewhat. Of course, Sony’s Glasses won’t be recording any movies, perhaps unlike sets of Google Glass in front of the big screen.
This comparison suggests that veterans might be better served by simpler and more specialized gear like the Action Glasses, which would also likely be a lot cheaper than Glass.
I do not know whether Google Glass will succeed in the marketplace or not. However, its road to success would be smoother if Google could somehow assure institutions like restaurants and movie theaters that its gear won’t imperil their business enough to make them want to ban it.
(Stop the cyborgs/Wikimedia commons)
More dissing for Google Glass (May 7, 2013). Posted by Cameron Shelley in STV202.
Google Glass is months from its commercial introduction, but it is already getting a rough ride. I noted in a recent posting that some commentators have found Glass to be too “dorky”. The basic complaint is that wearing Glass will make you seem rude and out-of-it to the people around you. Fear of this condition (“Glass eye”?), that is, not wanting to appear dorky in public, will inhibit uptake of Google Glass by consumers.
As noted by the New York Times, resistance to Google Glass is already gathering steam:
The glasses-like device, which allows users to access the Internet, take photos and film short snippets, has been pre-emptively banned by a Seattle bar. Large parts of Las Vegas will not welcome wearers. West Virginia legislators tried to make it illegal to use the gadget, known as Google Glass, while driving.
“This is just the beginning,” said Timothy Toohey, a Los Angeles lawyer specializing in privacy issues. “Google Glass is going to cause quite a brawl.”
Glass is poised to turn all its wearers into paparazzi, recording one another on the sly. Google responds that it has considered this privacy concern: Google Glass must be turned on via voice or manual command, and the subject must be directly in the wearer's line of sight before recording can start. Thus, people will know when they are being recorded. Maybe. The article notes that some developers have already hacked Glass, allowing the user to begin recording with merely a wink.
Then there is the issue of distraction. The state of West Virginia has already considered legislation that would ban drivers from using Google Glass behind the wheel, which would otherwise be permitted as a hands-free device under current law. No doubt, other jurisdictions will be considering similar measures in the near future.
Google Glass will not be permitted in some private venues. The 5 Point Cafe in Seattle, for example, has already banned customers from using the gear on the grounds that patrons want a private experience there. Also, Glass will not be permitted in Las Vegas casinos, which prohibit people from using any recording device.
In a CNN article, former secretary of Homeland Security, Michael Chertoff, compares Google Glass to drone technology:
Now imagine that millions of Americans walk around each day wearing the equivalent of a drone on their head: a device capable of capturing video and audio recordings of everything that happens around them. And imagine that these devices upload the data to large-scale commercial enterprises that are able to collect the recordings from each and every American and integrate them together to form a minute-by-minute tracking of the activities of millions.
I guess that Chertoff has the unarmed variety of drone in mind, although more than a few Americans do carry heat. Certainly, there is potential for abuse here, but Google’s use of the data could presumably be regulated, as their use of Street View data is. Perhaps Google’s computers could track only those who have explicitly agreed to it.
Google views concerns like this one as over-reactions. By and large, they feel, people will continue to treat each other much as before:
Thad Starner, a pioneer of wearable computing who is a technical adviser to the Glass team, says he thinks concerns about disruption are overblown.
“Asocial people will be able to find a way to do asocial things with this technology, but on average people like to maintain the social contract,” Mr. Starner said. He added that he and colleagues had experimented with Glass-type devices for years, “and I can’t think of a single instance where something bad has happened.”
How hard were they looking?
And what is Google’s view of the social contract? My view is that I do not expect to be tracked under normal circumstances, even when in public. I do not expect to be tailed by police, for example, unless they have “probable cause” to suspect that I am up to something nefarious. Is that Google’s view?
Google does seem to have something different in mind:
Like many Silicon Valley companies, Google takes the attitude that people should have nothing to hide from intrusive technology.
“If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place,” said Eric Schmidt, then Google’s chief executive, in 2009.
In other words, Google takes the view that, by appearing in public, people have implicitly agreed to be tracked; otherwise, they should not have made themselves visible. Is that really the social contract?
More likely, gear like Google Glass will require people to re-negotiate the social contract. It will be interesting to see how that plays out.
In the meantime, Google Glass got some more love over the weekend, this time from Saturday Night Live.
Perhaps users of Glass will be the ones to worry about their public appearances.
In the privacy of your car? (April 5, 2013). Posted by Cameron Shelley in STV302.
The term “black box” can refer to a recording device, often associated with planes or trains, that records operational data during usage. The black box was in the news not long ago in connection with the fate of Air France flight 447 that crashed in the mid-Atlantic in 2009. The black box contained data from the instruments and control systems of the plane that helped investigators to reconstruct the accident, with a view to preventing similar disasters in future.
What is less well known is that cars often contain black boxes as well, where they are known as Event Data Recorders (EDRs). Like black boxes in airplanes, EDRs were designed to record instrument and control data with a view to safety, that is, the reconstruction of accidents for purposes of preventing future problems. The nature of an EDR depends on the make of the car and local regulations, but a typical list of data captured would include:
- engine rpms;
- applications of the brakes;
- applications of the accelerator.
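To make the idea concrete, here is a minimal sketch of what a series of EDR samples might look like. The field names and trigger convention are my own invention, not any real EDR format, which varies by manufacturer and jurisdiction:

```python
from dataclasses import dataclass

@dataclass
class EDRSample:
    """One time-stamped sample of the kind of data an EDR might log.

    Hypothetical fields for illustration only; real EDR formats vary
    by make of car and local regulations.
    """
    timestamp_s: float      # seconds relative to a trigger event (e.g. airbag deployment)
    engine_rpm: int         # engine speed
    brake_applied: bool     # whether the brakes were applied
    accelerator_pct: float  # throttle position, 0-100

# An accident reconstruction might replay the seconds before a crash:
samples = [
    EDRSample(-2.0, 3200, False, 80.0),
    EDRSample(-1.0, 3400, False, 85.0),
    EDRSample(-0.5, 1200, True, 0.0),
]

# When did the driver first hit the brakes?
braking_started = next(s.timestamp_s for s in samples if s.brake_applied)
print(braking_started)  # -0.5 (half a second before the trigger)
```

Even this toy example shows why the data is legally sensitive: it answers "did the driver brake in time?" with apparent precision.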
Naturally, such data is useful not only for safety but for legal purposes, for example, determination of fault in an accident. So, who owns the data, and who can gain access to it? For what purpose?
The issue is not new but has received renewed interest because the US National Highway Traffic Safety Administration has proposed making the devices mandatory on all cars, starting next year.
The Electronic Frontier Foundation has proposed that people should be able to opt out of having the data recorded, and that police should require a warrant before having access to any data available.
If US law is unclear on the subject, it appears that Canadian law is no further ahead. Here is an excerpt from a recent article on the topic:
There is no legislation in Canada specifically governing the admission into evidence of EDR data. The federal and provincial Evidence Acts should be amended to permit the admission of the evidence either as a “business record” or on a basis similar to blood-alcohol test devices.
I am not sure what that means, but it sounds interesting.
What sort of privacy protections, if any, are appropriate for EDRs?
TV watches you! (March 21, 2013). Posted by Cameron Shelley in STV302.
In America, you watch television.
In Soviet Russia, television watch you.
Soon, America will become more like the old Soviet Union. FastCompany notes that Samsung is set to offer a television that tracks the viewing habits of its users, in order to make entertainment recommendations. Google and Panasonic have already announced similar plans for their upcoming TV offerings.
Along with their resolution, the level of surveillance offered by these screens will be quite high, which will be of interest to advertisers:
Rob Enderle, an analyst and consultant with Enderle Group, said this model will become the norm as television gravitates to Internet platforms.
“Increasingly, TVs will know who is watching them and I expect advertisers will know shortly thereafter. This should result in shows and commercials you like more and even better products, but far less privacy.”
Stu Lipoff, a fellow at the Institute for Electrical and Electronics Engineers, said TV on mobile devices will have similar characteristics, with considerable amounts of data which can be gleaned about viewers.
Of course, millions of Xbox users have already agreed to this sort of arrangement, so it is not completely new.
Still, does it sound creepy or Orwellian? Unlike the novel 1984, users will be given options regarding the level of privacy that they want:
[Chinese manufacturer] TCL’s [Haohong] Wang says, meanwhile, the TV makers are not interested in tracking people and will allow them options.
“We are an equipment company. What we want is to give a good user experience,” he said. And if viewers feel uncomfortable with being monitored they don’t have to use those features, he said: “They can just turn it off.”
In other words, the default will be full surveillance. The alternative will be something less. Nice.
(Lali Masriera/Wikimedia commons)
Child pornography in the cloud (March 12, 2013). Posted by Cameron Shelley in STV302.
Bill Snyder at Infoworld provides an interesting piece on a case of child pornography found in the cloud. The home and computer of a Maryland church deacon were searched by police:
When Baltimore County police served a search warrant at the home of 67-year-old William Steven Albaugh, they recovered numerous files allegedly containing graphic images and videos of young children being subjected to sexual abuse. The police found the material on his home computer and a number of USB drives, but how did they know it was there in the first place?
It turns out that Albaugh is a subscriber to Verizon’s high-speed Internet service, and used its cloud feature to back up his data. The terms of service, which Albaugh–like many–probably did not read carefully, allow Verizon to examine the data its customers keep there:
Verizon “shall have the right, but not the obligation, to monitor use of, and to screen, refuse, move or remove any content transmitted to or from, any Additional Service for compliance with law or the terms of this Agreement.”
In fact, Verizon uses a Microsoft program, PhotoDNA, to examine data and identify files that might contain child pornography. American law allows Verizon to do this. Once child pornography is identified, Verizon is bound by law to alert the authorities.
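The general technique here is matching files against a database of hashes of known illegal material. PhotoDNA itself uses a robust perceptual hash that survives resizing and re-encoding; the sketch below substitutes an ordinary cryptographic hash just to illustrate the match-against-a-list idea (the "known" hash set is invented for the demo):

```python
import hashlib

# Hypothetical database of hashes of known illegal files.
# For demonstration, it contains only the SHA-256 of the bytes b"test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_file(data: bytes) -> bool:
    """Return True if this file's hash matches a known entry.

    A real system like PhotoDNA uses a perceptual hash, so that
    cropped or re-compressed copies of an image still match; a
    cryptographic hash like SHA-256 only matches exact copies.
    """
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(flag_file(b"test"))        # True: in the demo set
print(flag_file(b"other data"))  # False
```

The key privacy point is that the provider never needs a human to look at your files to scan them: matching is automated and happens wherever the data is stored.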
Canada has implemented a project called “Project Cleanfeed Canada”, based on a British model, that maintains a list of websites containing child pornography that Canadian ISPs have agreed to block access to. Last year’s Bill C-30, providing police with broad powers to monitor the Internet activities of Canadians–and justified as a means to deal with the problem of child pornography–was recently withdrawn by the government due to public backlash.
It is not clear to me–and I am no expert–how cloud data is treated in Canadian law. Are your storage providers allowed to examine the data you have stashed in the cloud? Do they examine it, and for what purpose?
There are legitimate reasons for governments, and perhaps service providers, to examine data stored in their facilities. However, the rights and responsibilities of each party seem to be unclear. Also, many users of cloud services are probably unaware that their data may be considered the property of the service provider under the law. I suspect that this issue is one that it would be good to sort out.
Instagram’s day in the doghouse (December 19, 2012). Posted by Cameron Shelley in STV202, STV302.
Instagram set off a firestorm this week when it announced changes to its terms of service. As critics read the new terms:
- Instagram can share information about its users with Facebook, its parent company, as well as outside affiliates and advertisers.
- You could star in an advertisement — without your knowledge.
- Underage users are not exempt.
- Ads may not be labeled as ads.
- Want to opt out? Delete your account.
Instagram has since clarified its position, arguing that it never meant to appropriate users’ photos in this way and it is clarifying its language to assure its users of this. Whether Instagram really made a mistake or employed the trial balloon strategy usual for Facebook is a matter for debate.
“Instagram isn’t going to sell your photos, Facebook is going to use them to access data to further help them push more and more targeted ads,” said Kerry Morrison, CEO of Endloop Mobile, a Toronto-based app developer.
Facebook did buy Instagram–for $1 billion–remember?
So, whatever else it means, this faux pas by Instagram illustrates a couple of issues regarding privacy and free services on the Internet. First, terms of service may be changed at any time, particularly when a service provider is bought by another company. Second, the default will always be opt-out, that is, you are a part of the service’s monetization program unless you explicitly opt out (if that is even possible).
One way to opt out is to adopt a service that you pay for up front. However, opting out of a popular service is difficult and even costly, as Instagram is well aware. Many people have invested much time, effort, and money in setting up their Instagram accounts. Deleting those accounts would mean losing that investment and the connections that come with it. Another option might be to pay to opt out while remaining in the service. This solution might be reasonable, although it hardly seems to be in Facebook’s “DNA”.
Children on Facebook (November 29, 2012). Posted by Cameron Shelley in STV302.
It is not news that some children lie to get accounts on Facebook. Young people tend to be intensely social, and Facebook provides a means for connecting with friends when they are not otherwise available. However, the US Children’s Online Privacy Protection Act (COPPA) requires providers of services to children under 13 to maintain stringent safety and privacy controls. Since Facebook does not, by design, provide such strict controls, it does not comply with the Act. Instead, it forbids children under 13 from joining the service.
(Deryk Hodge/Wikimedia commons)
If children under 13 join the service, Facebook has measures in place to remove their accounts. For example, users can flag suspected under-age accounts with a reporting tool. According to Facebook, about 20,000 users per day are removed for violating the age restriction. Whether or not this measure is adequate is a matter of controversy. As of last year, some 7.5 million under-age users were on the service in the US.
Facebook would like to allow young children to use its service and is exploring ways it might do so. According to Mark Zuckerberg, their motivation is that Facebook could be key in childhood education:
“Education is clearly the biggest thing that will drive how the economy improves over the long term,” Zuckerberg said. “We spend a lot of time talking about this.”
At least, getting more children on Facebook would be good for the company’s economy.
A new study of underage Facebook use provides more fuel for the fire. The issue concerns privacy measures that Facebook takes for users between 13 and 18 years of age:
Facebook has long said that it is difficult to ferret out every deceptive teenager and points to its extra precautions for minors. For children ages 13 to 18, only their Facebook friends can see their posts, including photos.
A child could be found, for instance, if she was 10 years old and said she was 13 to sign up for Facebook. Five years later, that same child would show up as 18 years old – an adult, in the eyes of Facebook — when in fact she was only 15. At that point, a stranger could also see a list of her friends.
As a result, the current situation creates an incentive for children to lie about their ages, a lie that then makes information about them publicly available a few years later when they, and their friends, are still minors. Without COPPA, children would be less likely to lie and would be subject to more stringent privacy measures for longer.
The situation provides an example of unintended consequences. In this case, a law that was designed to protect the privacy of minors tends to expose them instead because of the circumstances in which it operates. Such examples remind us that even the seemingly most rational and straightforward plans can fail due to limitations of knowledge or capacity to judge how our actions will turn out.
Privacy and emails (November 19, 2012). Posted by Cameron Shelley in STV302.
The General Petraeus affair has brought new attention to an established issue, that is, privacy of emails. Petraeus was brought low when the FBI accessed an anonymous Gmail account that Petraeus had used to conduct an affair with his biographer, Paula Broadwell. As many commentators have pointed out, it seems odd or ironic that the head of the CIA resorted to a simple (and not very secure) ruse to conduct secret business.
In any event, the ease with which the FBI obtained access to Petraeus’s emails underlines the lack of privacy that people with such accounts enjoy, according to critics:
The fact that police can get that information with a subpoena — just a letter, usually without approval of a judge — is deeply disturbing to civil libertarians like Chris Calabrese, legislative counsel for privacy issues at the ACLU.
The position of the ACLU is that such information should require a warrant, just as if the police were intending to search your house. Needless to say, the police do not take the same view of the matter:
But Scott Burns, executive director of the National District Attorneys Association, says Americans should understand how much more work that would create. “The difference is, an investigative subpoena is a one- or two-pager, and a search warrant is a book report,” Burns says.
All those extra warrants, he says, would make life “incredibly difficult” for police.
This issue of the privacy of someone’s email illustrates a common trade-off: enjoying privacy means being able to conduct your affairs without enduring the scrutiny of the state. As then-Canadian-Minister-of-Justice (and later Prime Minister) Pierre Trudeau once said, “There is no place for the state in the bedrooms of the nation.” However, privacy also means being able to hide misdeeds from the public, even when the public has a stake in the outcome. (What if Petraeus’s mistress had been a foreign agent?) The issue becomes one of how to balance these competing and legitimate interests in a way that is fair to all concerned.
Although I am sure that it would be small comfort to Petraeus, it is worth noting that his arch-adversaries have similar difficulties. According to ABC News, the Taliban has had problems keeping its email list secret:
In a Dilbert-esque faux pas, a Taliban spokesperson sent out a routine email last week with one notable difference. He publicly CC’d the names of everyone on his mailing list.
The names were disclosed in an email by Qari Yousuf Ahmedi, an official Taliban spokesperson, on Saturday. The email was a press release he received from the account of Zabihullah Mujahid, another Taliban spokesperson. Ahmedi then forwarded Mujahid’s email to the full Taliban mailing list, but rather than using the BCC function, or blind carbon copy which keeps email addresses private, Ahmedi made the addresses public.
The email list consists mostly of journalists, who would be the natural recipients of a press release. However, it also includes a number of Afghan legislators, academics, and activists, whose loyalties are now, no doubt, under review.
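The mistake is easy to see in code: CC'd addresses live in the message headers that every recipient receives, while BCC addresses belong only on the delivery envelope. Here is a sketch using Python's standard email library (addresses are invented for the example):

```python
from email.message import EmailMessage

# Hypothetical press list.
press_list = ["reporter1@example.com", "reporter2@example.com"]

# The mistake: CC puts the whole list into the headers,
# so every recipient sees every other address.
leaky = EmailMessage()
leaky["From"] = "spokesperson@example.com"
leaky["Cc"] = ", ".join(press_list)

# The fix: keep the list out of the headers entirely and supply the
# addresses only on the SMTP envelope, e.g. via smtplib's
# send_message(msg, to_addrs=press_list). Recipients then see no list.
private = EmailMessage()
private["From"] = "spokesperson@example.com"
# smtp.send_message(private, to_addrs=press_list)  # addresses never appear in headers

print("Cc" in leaky)    # True: the list is visible to all recipients
print("Cc" in private)  # False: nothing to leak
```

BCC works the same way in any mail client: the server delivers to the envelope recipients, but the message itself carries no record of them.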
Stewart Brand once said that “information wants to be free.” The tendency of information to escape confinement on the Internet is notorious, as these examples demonstrate. If you are determined to communicate in secret on the ‘net, then you should look at these hints from the New York Times. Good luck!
E-textbooks track students (November 9, 2012). Posted by Cameron Shelley in STV302.
A recent article in the Chronicle outlines how e-textbooks can be used to track students’ reading behavior:
Say a student uses an introductory psychology e-textbook. The book will be integrated into the college’s course-management system. It will track students’ behavior: how much time they spend reading, how many pages they view, and how many notes and highlights they make. That data will get crunched into an engagement score for each student.
The idea is that faculty members can reach out to students showing low engagement, says Sean Devine, chief executive of CourseSmart. And colleges can evaluate the return they are getting on investments in digital materials.
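What might "crunching" those metrics into an engagement score look like? CourseSmart's actual formula is not public; the sketch below invents a plausible one (the weights and normalization constants are my own) just to show the shape of the computation:

```python
def engagement_score(minutes_read, pages_viewed, annotations,
                     weights=(0.5, 0.3, 0.2),
                     norms=(300.0, 200.0, 50.0)):
    """Hypothetical engagement score: a weighted sum of reading metrics,
    each normalized against an expected level and capped at 1.0, then
    scaled to 0-100. All constants here are invented for illustration.
    """
    metrics = (minutes_read, pages_viewed, annotations)
    score = sum(w * min(m / n, 1.0)
                for w, m, n in zip(weights, metrics, norms))
    return round(100 * score, 1)

# A student who read 150 minutes, viewed 100 pages, made 10 annotations:
print(engagement_score(150, 100, 10))  # 44.0
```

Note what such a score cannot capture: a student who prints the pages, or reads a friend's paper copy, scores zero, which is exactly the kind of false "low engagement" signal an instructor would need to interpret with care.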
So, the payoff is that the tracking data can allow the educator to tailor the educational experience to each student, to help ensure their (mutual) success. Given the important role of education in society, that outcome is a clear win.
Of course, using textbooks to track students sounds a bit creepy. Purveyors of the e-textbooks have thought about the privacy issue as well. They have designed the system so that students can opt out of having their behavior tracked if they so choose. Problem solved!
I submit that this provision does not completely settle the issue: Why is an opt-out scheme the correct choice? Perhaps students should be required to opt in before tracking occurs.
This problem of setting the default in a tracking system is neither trivial nor novel. A couple of years ago, for example, I discussed a similar issue with the Kindle, which also tracks its users’ reading behavior by default. Probably, many Kindle users never realize their data is being collected and mined, thus diminishing their freedom to choose. An opt-in system would do a better job of assuring that users know that they are being surveilled by their gear. Of course, the opt-in default would probably lower compliance, unless students could be convinced that participation is in their best interests.
Surveillance and resistance (July 19, 2012). Posted by Cameron Shelley in STV202, STV302.
Steve Mann, a professor of computer engineering at the University of Toronto, has endured a strange experience at a McDonalds in Paris. Professor Mann wears a special set of augmented reality eyeglasses called EyeTap. Apparently, the device drew the ire of some employees of the restaurant, who confronted Mann and tried to take the glasses away. In his blog post, Mann notes that the device is attached to his head and “does not come off my skull without special tools”. In the end, Mann and his EyeTap were forcibly ejected from the premises.
(EyeTap blog/Steve Mann)
It is unclear what motivated the alleged assault. However, Mann points to an incident last year in which an American woman was forcibly ejected from a Paris McDonalds after she photographed the menu and, in the process, one of the employees. In both cases, McDonalds has denied that any physical altercation took place. However, it seems plausible to conclude that, in both cases, the employees were responding adversely to what they took to be intrusive surveillance on the job.
There is some irony in this situation. Both Professor Mann and his assailants view their actions as acts of resistance, that is, opposition to the introduction of unwelcome technologies. Professor Mann’s EyeTap device is meant, among other things, to promote sousveillance, that is, the surveillance of the authorities by members of the public. Through such sousveillance, government authorities such as the police can be held to account. This use of technology acts as a counter to the powers of surveillance that authorities can acquire through technology.
However, the difference between sousveillance and surveillance is only one of perspective. It may be that the McDonalds employees took Mann with his EyeTap as a surveilling authority figure, whether official or self-appointed. Not that this consideration excuses their hostile response.
The incident illustrates the tensions that come along with the arrival of ubiquitous, networked, and now mobile sensors such as Google Glass. It seems that the time has come to start resolving these tensions. McDonalds, for example, needs to set a policy for how employees may deal appropriately with members of the public wielding cameras and other recording devices. What should that policy say?