High-tech soup bowl (February 7, 2013). Posted by Cameron Shelley in STV202, Uncategorized.
From FastCompany comes this piece on the Anti Loneliness Ramen Bowl (ALRB). As a look at the picture suggests, the ostensible purpose of the bowl is to allow people to eat their miso soup without missing a Tweet.
Taken at face value, this design would seem to qualify as a gimmick, that is, a design that is more clever than it is useful. Of course, the design may not be intended seriously. The article suggests that the bowl is, in fact, “a wry commentary on the complex relationship between food and phone.” Perhaps, then, it should be compared to Philippe Starck’s Juicy Salif. The JS is ostensibly a lemon squeezer but not a particularly good one (by most accounts).
Instead, the JS was deliberately made as a conversation piece and even an invitation to ponder your decisions as a consumer. (Do you want a useful implement to squeeze lemons or an artwork to show off to visitors? Why?)
So, is the purpose of the ALRB really to let you watch your iPhone while eating (or even to give you permission to do so), or is it to invite you to ponder the secondary role that food occupies during wired meals? Let me put the question another way: Would you buy one? Why, or why not?
Danger! Texting! (December 1, 2011). Posted by Cameron Shelley in STV202, STV302.
From Technology Review comes this brief article about a smartphone app that warns its users of approaching cars. The app is called WalkSafe and is being developed by researchers at Dartmouth College.
This device brings to mind a trope about how people distracted by their gadgets do dumb things, and how they may be protected from their folly. In 2006, there was Rick Mercer’s Blackberry helmet to protect the addled craniums of Blackberry addicts. In 2008, there was a story about padding lampposts in London to soften the blow as Blackberry addicts walked heedlessly into them. Earlier this year, there was the actual story of a woman who fell into a fountain in a shopping mall while texting, which was captured by CCTV cameras and posted to YouTube. More recently, Rick Mercer ranted about the people he almost ran over while they crossed the street, texting without looking:
The WalkSafe app will help to alleviate this problem. Maybe?
As ever, one first worries about the miracle of risk compensation. Recall this earlier discussion of the aware car, a system that monitors drivers for symptoms of exactly the same sort of distraction. A potential problem is that such a system could actually encourage drivers to indulge in distractions, under the impression that the system will save them. Similarly, pedestrians busily texting may assume that WalkSafe will let them know if a car is approaching, at least on the camera side of their phone. In that event, having outsourced their situational awareness to their gear, pedestrians may walk and text even more obliviously than before. Such behavior could negate any safety gains provided by the app.
Here is my suggestion: Create an app that temporarily locks out the texting function of the smartphone when the carrier is in a crosswalk. Many crosswalks in Canada are equipped with speakers that beep or chirp in order to alert blind pedestrians. Perhaps the smartphone mike could pick up the noise, lock out texting, and snap texters into a heightened state of situational awareness, allowing them to save themselves from collisions.
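As a thought experiment, the chirp-detection step of that suggestion might look something like the following minimal Python sketch. Everything here is hypothetical: the chirp frequency, the sample rate, and the power threshold are made-up values, and a real app would need access to the phone's live microphone stream. The sketch uses the Goertzel algorithm to measure how strongly a single tone is present in one frame of audio:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of one frequency component of a frame (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)  # nearest frequency bin
    omega = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def should_lock_texting(samples, sample_rate,
                        chirp_freq=2800.0, threshold=1e4):
    """Lock texting when the crosswalk chirp tone dominates the frame.

    chirp_freq and threshold are illustrative values only; real
    crosswalk signals vary by city and would need field calibration.
    """
    return goertzel_power(samples, sample_rate, chirp_freq) > threshold
```

A real implementation would also have to cope with traffic noise, with crosswalk signals that use different tones, and with the latency of checking the microphone continuously, so the threshold and frame length would need tuning against recordings of actual intersections.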
See anything suspicious? (May 13, 2011). Posted by Cameron Shelley in STV202, STV302.
FastCompany has an interesting piece on a smartphone app that lets you report suspicious items or events, e.g., a strange parked car. The app, called “iSee-iSay”, was developed by a concerned citizen, George Perera, as a kind of suggestion to the US Department of Homeland Security. Perhaps they will buy the app from him, or develop a similar one, he hopes.
(Image courtesy of CZmarlin via Wikimedia Commons.)
Of course, one question that arises in connection with this app is what constitutes a suspicious thing or event, and whether the system will be open to abuse.
Users can simply and anonymously report white vans, brown paper bags, loud ticking, people with cameras, the sounds of footsteps in the woods, or anything else suspicious that they see. That info is sent to a “fusion center” in their respective state, which has staffers from the FBI, DHS, and so on at hand to review the reports, according to Perera.
Does he worry about a high rate of false positives? “You get false positives now,” he says. “Yes, you get those. But you know what? If you get one good one and can stop a 9/11 situation, or like the train in England, if you can stop one of those and save 100s or 1,000s of lives, it was worth it.”
Perhaps. With some experience, it might be possible to do a good enough job in sorting out the actual problems from all the inevitable false alarms. There are many similar security schemes afoot, e.g., one for scanning crowds for would-be arsonists.
There could be some unintended and undesirable consequences too. As noted by Anna Minton, the simple presence of security devices can serve to make people feel less secure, reminding them of their fears, even if their security is not really under threat. Having a suspicious activity monitor in your smartphone could make you more suspicious of the innocent activities of others, thus generating not only more false positive reports but also more mistrust among people.
You might reply that this problem would not necessarily come with an iSee-iSay app. After all, we have had the 911 emergency reporting system in place for years, and it does not seem to have overwhelmed emergency responders or generated undue fear of disasters among the public. That is true. However, as a society, we already had a pretty good grasp of what should be considered an emergency. Our notion of what is suspicious, it seems to me, is not so well fixed. Thus, it could be altered in an unhealthy way by the presence of suspicion-reporting software.
Candid smartphone (January 27, 2011). Posted by Cameron Shelley in STV202, STV302.
First, it was Candid Microphone, a radio show in which Allen Funt took a microphone into the world to record the funny things said by ordinary people. Then, there was Candid Camera, in which Funt did something similar with a TV camera. It was amusing for the audience to see and hear the odd or embarrassing things that their fellow-citizens said and did.
(Image courtesy of Editor at Large via Wikimedia Commons.)
Today, as you know, we have YouTube, which allows anyone to upload videos of people doing noteworthy things. The latest such “viral” video involves a woman falling into a shopping mall fountain while texting. “It’s funny ’cause it’s not me”, as Homer Simpson once said.
This video was apparently captured by the security guards from a security camera in the mall. Nothing new there: Mall cops have been putting together tapes of amusing and naughty shoppers’ activities since CCTV cameras were introduced. Well, if your job were to watch parked cars all day, you would crave some stimulation too.
Of course, it’s different when the shoe is on the other foot. Mike Masnick of TechDirt has recently drawn our attention to people who use their smartphone cameras to film authorities (mis)handling their complaints. Here are a few recent examples:
- A woman was complaining to police about being assaulted by a police officer. The police response was unsympathetic, at which point she began to film them. She was then arrested for eavesdropping.
- A pilot is being disciplined by the TSA after he filmed some shortcomings in security at San Francisco Airport.
- A man was charged with various offenses by the TSA after he refused to show ID before boarding a flight, although showing ID is not required. The man filmed the whole episode and uploaded it to YouTube.
In each of these cases, the people filming video seem to have been well within their rights to do so. Furthermore, the transparency that ubiquitous filming brings with it can promote a social good: in the end, it can pressure authorities to follow the rules instead of acting arbitrarily. Of course, recording people’s behavior, and making a show of doing so, also turns up the social “temperature”. That is, it may make people defensive or angry where they would not otherwise be so.
So, incidents like these raise an interesting question (among others): Are we prepared for the emotional consequences of radical transparency in public places? Will we simply adjust, as Mark Zuckerberg suggests, to the new world in which transparency, not privacy, is the default setting of our lives? Or are we simply ill-equipped to deal with a world in which anything we do may be subject to the review of numberless people whom we do not know and who do not know us?
Assault alert (March 26, 2010). Posted by Cameron Shelley in STV202, STV302.
A FastCompany article mentions a smartphone app, called LightAlert, that has been developed for a competition by two university students. The app will alert the user when she enters an area where a sexual assault has occurred. The idea is to allow the user then to make an informed decision about whether or how to proceed.
There are some clear potential wins with such an app. A woman who is not familiar with a region might not know where sexual assaults tend to occur. This app would provide that knowledge, gleaned from police reports on the Internet. In an era where people are becoming less knowledgeable about their physical surroundings, such an app could provide important information in a timely way.
I can see three social issues that need to be considered:
- What constitutes an “area”? That is, how near to a reported assault location do you need to be to trigger an alert? Probably, this feature could be configured by the user. An issue of fairness arises: If the distance is very large, then perfectly safe areas will trigger alerts. People and businesses in them could be tarred with a bad reputation that they do not deserve. If the distance is small, then unsafe areas will fail to trigger alerts, leading women who think they are safe (no alert) into danger spots.
- A similar point would arise from the latency of the assault reports considered. Should an assault generate a warning if it occurred ten years ago? Five? One? The user could probably configure this setting too, but an issue of fairness still persists, as with distance.
- What sort of liability will the app provider assume? Suppose that a woman is assaulted in an area she entered when no alert sounded, but the lack of alert was due to a bug or technical glitch in the app. (Software of any complexity will fail to work as intended sometimes.) Would the app developer be open to a lawsuit on the grounds of negligence? If so, for how much? Probably, the vendor of the app will need some good insurance before going to market.
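To make the first two trade-offs concrete, here is a minimal Python sketch of the alert decision, with the radius and the recency window as user-configurable parameters. The function names, the data layout, and the default values are my own invention for illustration, not anything from the actual app:

```python
import math
from datetime import date, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(user_lat, user_lon, reports, today,
                 radius_m=250.0, max_age=timedelta(days=365)):
    """Alert when any report is both close enough and recent enough.

    `reports` is a list of (lat, lon, date) tuples, e.g. gleaned from
    police reports; radius_m and max_age stand in for the two
    user-configurable settings discussed above.
    """
    for lat, lon, when in reports:
        recent = (today - when) <= max_age
        near = haversine_m(user_lat, user_lon, lat, lon) <= radius_m
        if recent and near:
            return True
    return False
```

Note how the fairness issues fall directly out of the two parameters: widen `radius_m` or `max_age` and perfectly safe areas get flagged; narrow them and genuinely risky areas are missed.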
I suppose you could imagine other issues, a little more far-fetched. With the aggressive marketing of weapons like personal tasers to women, a woman who receives an alert may be more likely to identify someone as a potential assailant and take pre-emptive action, perhaps without justification.
None of these points establishes that the app is a bad idea. Instead, it simply needs to be thought through a little further.