If the floppy disk fits?
November 22, 2010 | Posted by Scott Campbell in STV100, STV202
I’m not going to take credit for the initial observation for this one. Look closely at the “save” button at the bottom.
Clearly, using a 3.5″ floppy disk to represent ’save’ is out of place on a smart phone. More so since Sony has stopped selling the once-ubiquitous-now-obsolete storage technology.
Out of curiosity, I went looking for other obsolete icons and quickly found the awkward floppy in the OpenOffice.org for OSX toolbar.
There it is, third from the left. I don’t use Microsoft Office, but it looks like Word 2010 is no better (look way up in the top left).
Fourth from the left on my toolbar is an envelope to represent “email document”, which might be another case of obsolescence. In the US, first-class mail volume is below 1964 levels. Then again, Gmail, Google’s mail service, seems to be doing okay and cleverly co-opted the envelope for its logo.
They famously crowdsourced a promotional video a few years ago, using the “m-velope” as the star of a video.
Going back to that toolbar, you’ll also notice that second from the right is an icon that represents “print document”:
The only problem, of course, is that the printer in the icon is ejecting paper on the top, like some obsolete tractor-feed dot-matrix printer from the 1980s. Look around the internet or the average consumer electronics store, and you won’t see many top-eject printers left that resemble that icon.
Computer icons change all the time, and evolve for various reasons. Presumably, usability enters into the designers’ criteria, which is why we can get stuck with familiar images representing abstractions of activities that are technologically obsolete.
All of which reminds me of Marshall McLuhan, who once said:
If it works, it’s obsolete.
Which was his way of pointing out that we live and work with obsolete technologies all the time: they are the technologies we’re comfortable with, the ones that aren’t disruptive or altering the status quo. Which is no excuse for lazy design, of course. Not changing something simply to avoid change might be as bad as change for the sake of change.
The Panaudicon
November 22, 2010 | Posted by Cameron Shelley in STV202, STV302
The philosopher Jeremy Bentham is well known for his suggested design for a prison, namely the Panopticon. The design called for a circular array of prison cells wrapped around a central watch tower. The cells would be arranged so that the prisoners could be seen at all times, both by the guards in the central tower, and by the other prisoners. The tower, however, would be equipped with one-way mirrors so that the guards could see the prisoners but not the other way around. The principle of the design was that the radical transparency of the prisoners’ lives would enable the guards (or other prisoners) to see and deal appropriately with every transgression of the rules. The prisoners would learn the essentials of good behaviour and the futility of bad behaviour and so would be rehabilitated for their eventual return to society.
(Image courtesy of Majorly via Wikimedia Commons.)
The Panopticon calls for a trade-off: the total loss of privacy through ubiquitous surveillance in exchange for the prevention of wrongdoing. Although Bentham’s proposal has had some limited influence on prison design, some commentators have worried that we are in the process of constructing a surveillance society, that is, a ubiquitous surveillance system for the entire public realm, fashioned from CCTV, cell phone cameras, airport scanners, aerial drones, and the like. In short, we may be in the process of setting up a Panopticon in which everyone in public is a prisoner, and every prisoner is also a jailer. The question is: Is this trade-off a good one or a Faustian bargain?
Well, Bentham neglected one issue in his design, namely the role of sound. Transparency might enable prison guards to catch beatings or thefts, but one-way mirrors might filter out other sorts of socially unacceptable behaviour, such as uttering threats or singing copyrighted songs like “Happy Birthday.” However, we need not pass over the matter in silence today, especially when there exists a ubiquitous array of sensitive microphones available in public. Most cell phones have the ability to record conversations occurring over them, or can simply be used as audio recorders. Thus, it is a simple matter to record your conversations or ambient sounds and share them over the ‘net. At a more organized level, the ShotSpotter system comprises a network of microphones that relays information to a central program, employed by some police forces in order to sort out gunshots from other noises. If only such a system could be re-purposed to report suspicious sounds of all kinds to a central database.
(Image courtesy of Ousk via Wikimedia Commons.)
Done! A British company called Audio Analytic has developed software that can take sounds as input and pick out the ones that contain aggressive speech:
“A lot of incidents just can’t be picked up by video only systems,” said Chris Mitchell, Audio Analytic’s boss, on BBC World Service’s Digital Planet.
“For example in a hospital where somebody, or a nurse, is being threatened early hours in the morning – that’s a very difficult thing for CCTV guards who monitor hundreds of channels worth of video signals on 20 screens or so to pick up.”
“Our system picks out the most salient characteristics. These are things related to pitch, tone, intonation.”
Of course, not all aggressive vocalizations are actionable. The article mentions that, in testing, the software identified as aggressive a man who got mad at a coffee machine because it “ate” his money. Such instances are known as “false positives”, events that the system classifies as hits (here, unacceptable verbal aggression) that are actually misses (here, inoffensive or unactionable vocalizations). The false positive rate, Mitchell comments, is quite low with this system.
Of course, it would be easy for any system to have a low false positive rate in this situation: merely classify almost all sounds as inoffensive. If your system hardly ever classifies any sound as aggressive, it will not likely classify innocent sounds as aggressive.
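That point is easy to make concrete with a toy calculation. The confusion-matrix counts below are invented for illustration (they are not Audio Analytic’s figures), but they show how a system that flags almost nothing can boast a tiny false positive rate while missing nearly every real incident:

```python
# Hypothetical confusion-matrix counts for an aggression detector.
# All numbers are invented for illustration.

def rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)   # innocent sounds flagged as aggressive
    fnr = fn / (fn + tp)   # genuinely aggressive sounds missed
    return fpr, fnr

# A cautious system: it flags almost nothing as aggressive.
fpr, fnr = rates(tp=1, fp=2, tn=9970, fn=27)
print(f"cautious: FPR={fpr:.4f}, FNR={fnr:.3f}")   # tiny FPR, but misses 27 of 28 incidents

# A more eager system: it catches most incidents at the cost of more false alarms.
fpr, fnr = rates(tp=26, fp=300, tn=9672, fn=2)
print(f"eager:    FPR={fpr:.4f}, FNR={fnr:.3f}")
```

The cautious system’s false positive rate is a fraction of a percent, which sounds impressive until you notice its false negative rate is over 95%. A headline FPR number tells you nothing without the FNR beside it.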
What about false negatives, that is, sounds that are offensive but are judged innocent by the software? The article contains no comments on the matter, nor does any other article that I have come across. An important design issue for the software is to balance the interests of people who are harmed by false positives, in this case, people who are deemed aggressive but are not (e.g., just repeating something they heard on YouTube), against the interests of people who are harmed by false negatives (e.g., nurses in a hospital whose patients become verbally abusive but are not identified as such). I would suggest that a public discussion of the abilities of software such as this one is in order, to ensure that such trade-offs are made appropriately.
Of course, this point brings us back to the Panopticon. By adding the ability to collect, aggregate, and analyze sounds occurring in public, we now have the ability to achieve total auditory transparency. Let’s call it the “Panaudicon”! As in a hospital or prison setting, this ability brings with it the problem of dealing with the errors such a system will inevitably make, both false positive and false negative. Balancing the frequency of these errors will be a test of our sense of fairness in the public deployment of technology.
Megamind and technology
November 18, 2010 | Posted by Cameron Shelley in STV100
Last weekend, I went with my daughter and her friend to see Megamind 3D. It is good if you want 90 minutes of light entertainment and/or look back kindly on the Superman movies with Christopher Reeve and the earlier TV serials starring George Reeves. (Alert – spoilers follow!)
(Image courtesy of Imdb.com.)
The movie revolves around the super-villain Megamind, an alien refugee on Earth gifted with an extraordinary cranium and technological inventiveness. Naturally, he struggles to defeat the super-hero Metro Man, a different alien refugee gifted with extraordinary physical powers, invulnerability, and good looks. The plot twist comes when Megamind unexpectedly succeeds in killing Metro Man and is able to bend Metro City (which he pronounces to rhyme with “atrocity”) to his evil will. His problem is that, since his evil consisted entirely in his opposition to Metro Man, Megamind had never considered what his policies as leader of Metro City would be. He eventually returns to a dominant motivation from his childhood, that is, trying to win the love of others, the lovely reporter Roxanne Ritchi in particular.
(Image courtesy of Imdb.com.)
Anyway, the movie parodies many of the themes and motifs from the Superman of the Cold War era. In that way, it displays an old ambivalence in American culture about the nature of high technology and its role in society. Americans have harnessed technology to industrialize their country and massively raise their standard of living. America as we know it is hardly imaginable without its technology. Yet, technology, unlike Superman with his unalterable regard for “truth, justice, and the American way,” sometimes seems like a hired gun ready to work for the highest bidder no matter what the consequences.
One of the seminal moments in the American technology-society relationship in the 20th Century was the launch of Sputnik by the Soviet Union in 1957. The notion that the Russians would soon own the space above the United States was shocking and demanded a response. Thus, the “Space race” was born.
(Image courtesy of the US Air Force via Wikimedia Commons.)
One component of the shock was that the US might not inevitably dominate the world of technology. Instead, it appeared that technology might allow an inferior political system, that is, communism, to dominate a superior system, that is, democratic capitalism. It is an old sort of concern. In his play The Clouds, the ancient Greek comedic playwright Aristophanes presents the new-fangled learning of Socrates as merely a way of making a weak argument look better than a strong argument. In brief, he presents Socrates’ philosophical ideas as a kind of intellectual fancy dress that can be applied to a lot of stupid and subversive ideas in order to make them appear like the smartest and most progressive ideas around. Truly great but old-fashioned looking ideas seem to pale in comparison. (This trope of intellectuals dressing up dumb ideas to hoodwink people has been fruitfully employed by conservatives ever since.) Substitute “communists” for philosophers and “technology” for fancy dress and you get something like the following: communists may be able to use technology to help their inferior political system appear better than the superior, American political system. Thus, we should regard technology with some suspicion.
The trope comes through in Megamind as it picks up on and parodies the Superman of the Cold War era. Metro Man is physically powerful, invulnerable, completely virtuous, and also handsome. Megamind is none of these things. Instead, he is clever, skulking, grasping, and unsightly. However, he is able to use technology to make up some ground on Metro Man, e.g., by developing death rays, a dehydration gun, an invisible car, a powerful robotic exoskeleton, a holographic projector that alters his appearance, and so on. He is like Sputnik, a reminder that natural American strength and virtuosity are always in danger from weak, evil, but cunning and inventive opponents. Of course, the same perspective could be applied to Islamist terrorists today.
So, how to reconcile the threat represented by technology with a deep regard for it and its importance to the good life in American culture? Megamind performs the maneuver creatively. Metro Man simply removes himself from the scene. Being super-powerful, invulnerable and totally virtuous means that he faces no true challenges in life: He is always able to save Metro City and is never tempted to do it harm. His victories have become as tedious as they are inevitable. So, he fakes his own death in order to pursue a career in an area outside his gifts, namely music. The consequences for Megamind are devastating. Like Wile E. Coyote without the Roadrunner, Megamind has no idea what to do with Metro City now that it is within his grasp. His life was all about his opposition to Metro Man. Without that opposition, he is rudderless. By accident, he also starts to become involved with the reporter Roxanne Ritchi and, when she is threatened by the super-villain Tighten, he discovers the virtue that he had lost sight of as a youth, namely the desire to love and be loved by others. And so, he takes on the role of super-hero in defending Ritchi and Metro City from Tighten. Of course, not possessing any of Metro Man’s natural gifts, Megamind must make use of his cleverness and technological prowess to accomplish his new goal. Thus, the conundrum is resolved: technology may not make you virtuous but, in the hands of good people, it is enough to ensure the triumph of virtue.
Well, it works for Megamind, but is it really true?
Knowledge = power = energy = mass = gravity?
November 15, 2010 | Posted by Scott Campbell in STV100
Many people are familiar with the aphorism that “Knowledge is power”. I’ve often attributed it to Francis Bacon, but the notion is much older, even showing up in the Book of Proverbs (24:5): “A wise man is strong; yea, a man of knowledge increaseth strength.” (Given how much Google must know about us all, I guess we should be glad that their motto and philosophy include “Don’t be evil”.)
With that in mind, I often ask my students who holds the most power with respect to technology. As it turns out, aside from the occasional technocratic movement, very few engineers reach a position of high political power (Quick! Can you name any U.S. Presidents or Canadian Prime Ministers with a technical background?) For a variety of reasons, experts often have very little power when it comes to the fruit of their labours, at least when things escape the narrow confines of their sub-specialized discipline. As many experts, both technical and scientific, have discovered, their specialized knowledge can rarely be converted to social influence or power when ideologies or politically expedient decision-making get in the way.
This was the lesson last week when the U.S. White House was accused of tinkering with a deepwater drilling report:
The Interior Department’s inspector general issued a report this week asserting that officials in the office of Carol M. Browner, the White House coordinator for energy and climate change policy, had changed some wording and moved some sentences in an agency report that ended up misrepresenting the views of the technical experts.
The Interior Department report, issued at the end of May, made two dozen recommendations for improving the safety of offshore drilling and said that until those changes were adopted, all drilling in water deeper than 500 feet should be suspended. It said that the recommendations had been “peer-reviewed” and approved by a panel of outside engineers and oil drilling experts.
Shortly after the report appeared, the technical advisers angrily complained that while they had endorsed a number of the safety recommendations, they had not concurred that a blanket deepwater drilling ban was needed. They said such a ban would punish companies with good safety records and lead to thousands of lost jobs.
This scenario highlights what may be the worst part of technical experts getting tangled in a political mess: the veneer of scientific legitimacy resting atop a decision crafted by political self-interest.
And for the record, Herbert Hoover was a mining engineer and Alexander Mackenzie was a mason.
Why cities are green
November 15, 2010 | Posted by Cameron Shelley in STV202
A recent FastCompany article summarizes why cities are more environmentally sustainable than rural settlements. Although cities effect devastating changes on the land they are sited on, the residents of cities have a smaller environmental footprint, per person, than do their rural counterparts. Efficiency is the key.
Cities hold over half the world’s population but cover only 3% of its surface. Density also lends itself to energy savings overall. A recent study found that the average London resident produced half the greenhouse gas emissions of the average Brit, and that the average New Yorker produced about a third of the emissions of the average American.
The compactness of cities and their infrastructure means that people’s social and economic activities can occur more efficiently than similar activities would in rural areas.
(Image courtesy of chensiyuan via Wikimedia Commons.)
Of course, it does not follow from cities being more efficient than the alternative that they create a more sustainable lifestyle. Increased efficiency can sometimes result in decreased sustainability, through a phenomenon called the Jevons paradox. Basically, when a resource can be consumed more efficiently, more people may start to consume it, and consume it more voraciously, resulting in an overall increase in consumption. If this paradox applied to cities, then we might expect the growth of cities to touch off a spate of population growth and increasing consumption of resources per person. Why does this not happen?
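A back-of-the-envelope sketch shows how the rebound works. The numbers below are invented purely for illustration: efficiency doubles, yet total consumption still rises because adoption and per-user demand grow faster than the efficiency gain:

```python
# The Jevons paradox in miniature. All figures are hypothetical.

def total_consumption(users, service_demand, efficiency):
    """Resource consumed = (service demanded per user / efficiency) x users."""
    return users * service_demand / efficiency

# Before: 1000 users each demanding 10 units of service at baseline efficiency.
before = total_consumption(users=1000, service_demand=10, efficiency=1.0)

# After: efficiency doubles, making the service cheaper. Suppose that draws
# 50% more users, each demanding 60% more service.
after = total_consumption(users=1500, service_demand=16, efficiency=2.0)

print(before, after)  # 10000.0 12000.0 -> total consumption went UP
```

Per unit of service, consumption halved; in aggregate, it grew by a fifth. That is the paradox in a nutshell, and it is why efficiency alone cannot explain why city growth does not trigger runaway consumption.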
Part of the answer, the article suggests, comes from social norms inculcated by city life, especially lowering the birth rate. Urban dwellers tend to have fewer children, on average, than do rural dwellers. The reason may be due, in part, to the economic and educational opportunities that city life affords to women. Basically, women find that there are other routes to fulfillment and prosperity than having a large family, and they come to prefer those options.
Also, in his recent book Whole Earth Discipline, Stewart Brand argues that cities are a hotbed of innovation, not just at the level of, say, the high-tech industry but also at the level of the slums where slum-dwellers make efficient use of the physical resources at hand and of their social network.
Roughly speaking, cities seem to allow people to fulfill their life goals, both in terms of material prosperity and social standing, in a way that requires less overall consumption than do rural settlements. This conclusion is not beyond dispute: City life imposes externalities on rural areas, e.g., pressure to adopt industrialized agriculture to feed the city’s population, that skew the assessment in favour of cities. However, a plausible case can be made that encouraging city living is a major step in the direction of an overall sustainable lifestyle. Importantly, it is not the mere efficiency of cities relative to rural settlements that supports this case. It is also the social rewards that cities provide that prevent their inhabitants from simply using the efficiency of cities to increase their overall consumption of the world’s resources.
Location, location, location
November 11, 2010 | Posted by Cameron Shelley in STV302
Besides his claim that, “The medium is the message,” Canadian scholar Marshall McLuhan is famous for his prediction that the Western world was set to become a “global village.” To make a long story short, McLuhan thought that electronic media such as television tend to collapse distance. People could be anywhere and literally get a picture of what was happening elsewhere in the world. Almost any location or event could be seen from anywhere else. The effect of TV coverage of the war in Vietnam seemed to bear out McLuhan’s view. This collapse of distance would tend to reduce the world into an elaborate small town or village, where everyone can keep an eye on everyone else’s doings and perhaps get on each other’s nerves.
One implication of this view seems to be that location would cease to be an important datum for people. That is, it would no longer matter so much where in the world things happened, so long as they were televised or, at least, open to viewing from elsewhere. If you could turn on the tube and see what was happening in any given place, who would care to distinguish one place from another one?
Although the Internet seems to satisfy McLuhan’s notion of a global village, it has not brought about the demise of location. It is true that powerful Internet services, such as Facebook, have mitigated distance as an obstacle to communication and observation. Services such as eBay allow netizens to shop for and purchase goods from around the world. Online news services have brought the news from dozens of news services into easy reach of anyone, anywhere, with Internet access. However, people are hardly done with location.
Online maps and map services retain huge importance in people’s lives. Google services such as Maps, Streetview, and Earth, allow users to explore and learn about particular places, to distinguish them from other places through their unique appearance and history. Users can also augment these services to further embellish and deepen their connection with particular places. For example, Patrick Cain has constructed a service called Poppy Files, that presents users with a map showing where in Toronto Canadian war veterans used to live. The map is covered in poppies designating these locations. Click on a poppy, and you will receive information about the veteran who lived at that location. If you choose, you can add information to the map.
In addition, there are services such as Foursquare, Google’s Latitude, and Facebook Places, that allow users to track their travels, often with the assistance of GPS-enabled smartphones, and post them easily online for their friends and others to see. Users can also “check in” to a given location when they arrive, which allows them to build up their association with the place, even becoming its “mayor” if they hang out at that spot often enough.
Also, governments have been asserting their geographic sovereignty on the Internet. The Chinese government has constructed the so-called Great Firewall of China, a suite of filtering software designed to keep netizens in China from locating sensitive information from outside the country. Nations such as India, Saudi Arabia, and the United Arab Emirates have been pressuring RIM to make cell phone traffic in their territory open to police monitoring. Even the BBC’s popular iPlayer software, which allows Internet users to watch BBC shows over the ‘net, works only if accessed from within the UK. (Recent reports suggest that iPlayer may go global next year, however.)
Clearly, people’s interest and investment in location and place is far from over, despite the powerful solvent embodied by the broadly and massively connected nature of the ‘net. However, perhaps the recent re-assertion of location is only a swansong, a temporary and rearguard action against the reduction of location. Will location continue its resurgence, and how? Or will it fade in importance as the ‘net continues to develop?
Complexity vs. consumerism: The Chevy Volt
November 8, 2010 | Posted by Cameron Shelley in STV202, STV302
In an earlier post, I noted the argument made by Matthew Crawford that modern design is tending more and more towards consumerism. That is, designs have a tendency to become more complex and less accessible over time. Crawford contrasts early motorbikes with modern ones as an example: Early bikes were open to intervention and modification by their owners. They positively invited exploration. Modern bikes, by contrast, are highly computerized and actively resist intervention and modification by their owners. Crawford deplores this trend as a promotion of consumerism, that is, modern gear is designed to appear like a black box that is merely to be consumed, used up, and then thrown out when it has become boring or broken down. The opportunity for designers to educate users about their gear via its design is deliberately passed over.
Of course, another explanation for the increasing complexity of our gear is that the computerization that makes for more complex designs makes for more efficient designs too. So, designers computerize things like motor bikes to make them work better (a point that Crawford seems to concede) and the trade-off is that their users are excluded from tinkering with them.
(Image courtesy of Mariordo via Wikimedia Commons.)
Wired.com notes the presence of this trade-off in the Chevy Volt. In case you had not heard, the Volt is GM’s entry into the electric car market, with an all-electric range of some 40 miles (ca. 65 km). The Volt is highly computerized, sporting over 100 electronic controllers, a unique IP address for each vehicle, and an astonishing 10 million lines of code!
For comparison, the new Boeing 787, which is widely considered to be the most electronic airliner ever, has around 8 million lines of code. And that includes the complex avionics and navigation systems. The new F-35 Joint Strike Fighter? Around 6 million.
I can personally imagine a program (or suite of programs) containing hundreds or even thousands of lines of code, but 10 million just boggles the mind! How could anyone really understand what is going on in the digital depths of the Volt?!
As the Wired.com article points out, this level of complexity will surely inhibit owners from messing with the Volt’s inner workings:
We’re not software engineers here at Autopia, but with all those lines of software code, anybody looking to tweak a Volt may have quite a puzzle on their hands. Sure the days of a new intake manifold and a four barrel carb are long gone, but now it looks like the modern version of ‘chipping’ a car is far from adequate for the new cars on the block.
So, is the Volt the ultimate (to date) consumer gadget, the latest way Detroit has found to turn its clients into dependent ignoramuses? Or is it part of a necessary progression of technology in which consumer education must be traded off for gains in efficiency and performance?
Did you remember your electric toothbrush?
November 8, 2010 | Posted by Scott Campbell in STV100
A while ago I was fortunate enough to find someone giving away a free tablesaw. The cast-iron Craftsman was at least three decades old, and while it obviously hadn’t been mistreated in any way, it was equally obvious that it hadn’t been used recently, because the electric motor that powered the saw blade wouldn’t start. Fortunately, it was a belt-driven system, so it was trivial (and cheap) to remove the motor and have it repaired. Had the motor been impossible to repair, replacement would have been a relatively straightforward option: the table saw was designed to accommodate a range of widely available small industrial motors. In fact, I already had a few sitting around that could have been used.
Which got me thinking about how many electric motors I actually did have around the house:
- In the garage, I have at least half a dozen power tools, a powered garage door opener, and a vehicle with a starter, powered windows, CD player, windshield wipers, and almost certainly more motors that I’m not aware of.
- In the kitchen, the refrigerator compressor, the microwave turntable, and the oven exhaust fan account for the big appliances, but we also have a variety of electric mixers, blenders, and choppers. So, probably a dozen more there.
- In the living areas we have a few ceiling fans, and the furnace and air conditioning systems use at least three or four more motors. Maybe seven or eight in total? Plus the exhaust fan in the bathroom.
- Of course, most computers still have hard drives and cooling fans and a DVD drive. That’s another dozen little motors in our house. The DVD player, a handful of CD players, and at least a dozen children’s toys have little battery powered motors.
That’s at least five dozen electric motors, and I’m sure I’ve missed a few.
The whole exercise made me think of Paul David’s “The dynamo and the computer: An historical perspective on the modern productivity paradox.” Writing in 1990, David was speculating about one of the more disappointing aspects of the computer revolution to that point: the productivity paradox, which he summarizes by quoting economist Robert Solow, who once observed that:
“We see computers everywhere but in the productivity statistics.”
David’s point was to think about previous large scale technological transitions and take the long view. Perhaps, even in 1990, despite being over a decade past the personal computer revolution, about two decades since the invention of microprocessors and at least four decades since the invention of digital computers, we hadn’t given the technology enough time to have a significant effect. David draws this from the long shift from one general-purpose engine to another during the “Second Industrial Revolution”, when it took over two decades after the invention of the dynamo before the use of electric motors in manufacturing exceeded that of steam engines. It wasn’t until the 1920s that electrification could be said to have provided an industrial productivity revolution. David admits other factors were probably relevant to that analysis, and that computers are clearly not the same as dynamos, though both can be said to provide general purpose “power”.
For what it’s worth, we have four personal computers at home, far fewer general purpose computing devices than there are general purpose motors. Most of the motors will last much longer, and many are more replaceable and repairable than my computers. That the motors outnumber the computers is not a particularly profound observation, but we rarely think about them and how vital they are. The imbalance ought to give one pause to remember the many invisible and relatively ancient technologies that we simply (and probably literally) could not live without. I don’t really know that I’m more productive now than in 1990.
All in all, it was, as David Edgerton might have said, a “shock of the old”.
The movie screen that watches you
November 4, 2010 | Posted by Cameron Shelley in STV302
Russian comedian Yakov Smirnoff was famous in the 1980s for his “In Soviet Russia …” jokes, especially this one:
In America, you watch television.
In Soviet Russia, television watch you.
Good for a laugh. Once.
Anyway, technology has caught up with the old joke. FastCompany reports that movie theater companies are interested in technology that will allow them to gauge your reaction to the film:
Some theaters have already been watching you for years, but only to make sure you’re not recording the show. Cameras embedded in the screen can detect the tell-tale infra-red signature of a digital camera. But that was just the first step. Aralia Systems, a U.K. high-tech security firm, just earned nearly $350,000 in a grant from the University of the West of England to turn those cameras into a system for gauging audience reaction to films and advertising.
We’re not talking about a dumb clapometer-style system, either. The intention is to produce rich data that can measure the details of an individual’s face. Aralia will leverage 3-D face recognition technology that the university is already developing. When you sit in the audience of a theater with their system, you’ll be illuminated with an infra-red beam, and three or more cameras will continually monitor the crowd to create stereoscopic images–just like the 3-D digital cameras that are now launching on the consumer markets.
Cool! The immediate idea is to use the software to gauge the audience’s reaction to the film. If they do not like it, then perhaps the presentation can be tweaked before the next showing. The article downplays any privacy concerns on the grounds that the system will not care who you are, just how you react to what you are seeing.
(Image courtesy of Lampak via Wikimedia Commons.)
Perhaps, but it is not hard to imagine how such a system could be harnessed to one that does care who you are. Theaters could use facial recognition software to identify you via your Facebook photos (and the like) and build up a database of what movies you go to, what you eat there, where you sit, when you go, who you go with, and so on. This data could be “monetized” in a number of ways, including targeted advertising, and sales of consumer profiles to businesses and governments.
So, you had better behave at the theater. Don’t see The Social Network if you are hoping for a job at Facebook! And why were you seated next to three different women on three succeeding nights? Playing the field? On the upside, your e-health software can remind you not to have butter on your popcorn next time.
iPhone users oversleep in Europe: Coming soon to Canada?
November 2, 2010 | Posted by Cameron Shelley in STV302
The Globe and Mail reports that an iPhone glitch has made millions late for work across Europe:
iPhones in Europe did properly make the switch to Daylight Savings today. But due to a bug in the phone, the switch caused alarms to go off an hour later. So if you had an alarm set for 8 a.m., your phone switched properly to daylight savings and then went off at 9 a.m.
The article goes on to point out that the same thing happened earlier in Australia and New Zealand, so it wasn’t wholly unexpected.
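Apple never published the root cause, but the general failure mode is a classic one: recurring alarms advanced by fixed 24-hour steps of elapsed time, rather than recomputed as “8:00 on the local clock” each day. A minimal sketch of that class of bug (hypothetical alarm logic, not Apple’s actual code), using the autumn 2010 changeover in Paris:

```python
# Sketch of how a recurring alarm can misfire across a DST transition.
# The alarm logic here is hypothetical; it illustrates the failure mode,
# not Apple's actual implementation.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")
utc = ZoneInfo("UTC")

# Alarm first fires at 8:00 local on Sat 30 Oct 2010, the last day of CEST.
first = datetime(2010, 10, 30, 8, 0, tzinfo=paris)

# Buggy recurrence: add exactly 24 elapsed hours to the absolute fire time.
buggy_next = (first.astimezone(utc) + timedelta(hours=24)).astimezone(paris)

# Correct recurrence: rebuild "8:00 on the local clock" for the next day.
correct_next = datetime(2010, 10, 31, 8, 0, tzinfo=paris)

print("buggy:  ", buggy_next.strftime("%H:%M"))    # 07:00 -- an hour off
print("correct:", correct_next.strftime("%H:%M"))  # 08:00
```

After the clocks change, the two methods disagree by exactly one hour, which is the size of error users reported. Whether the naive arithmetic lands early or late depends on the direction of the transition and on exactly where in the pipeline the offset is misapplied, but the lesson is the same: alarms tied to wall-clock time must be recomputed against the time zone database, never advanced by fixed intervals.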
Be warned, Canada! Your turn comes this weekend!
(Image courtesy of Art Renewal Center via WikiMedia Commons.)
I suppose that this incident could be more ammunition in the debate over the logic of daylight saving time. I would observe, instead, that the name “iPhone” seems like more and more of a misnomer. Only at hotels did I ever rely on a phone to wake me up on time (the original “wake-up call”). What would be a better name for a device that has become so indispensable to so many? The “iCantLiveWithoutIt”? What would you suggest?