3D food printer February 28, 2011Posted by Cameron Shelley in : STV202 , comments closed
On the latest Spark, Nora Young discusses 3D food printing with Jeffrey Lipton at Cornell. Yes, that is the use of a 3D printer to construct food. The printer can, for example, print out a cookie! Right now, this technology is working in university labs but you may find it available for home use in five to ten years.
(3D printer image courtesy of CharlesC via Wikimedia Commons.)
What implications will this have for society and our concept of food? Hard to say but here are some possibilities:
- The art of cooking may become more intellectual. As you may know from your personal experience or from watching the Iron Chef and its descendants, good cooking is, in no small part, a physical skill. The feel, taste, and smell of food as it is prepared is an important component of the ability to cook. With food printing, the physical skills seem less necessary. After all, the printer does the work of putting the food together. What is left is the intellectual effort of conceiving of the food and planning its composition. Doubtless, this activity will be done largely on a computer where food printers are concerned.
- Another issue could be intellectual property. Foods from the printer would be copies of prototypes stored as computer data files. If those files were copyrighted, then each confection would involve a royalty payment to the author. Of course, there would be open-source food too, no doubt. “Anyone want a copyleft croissant?”
- As I look around, my eyes light upon my inkjet printer. It was quite cheap to purchase; most of the expense comes from buying those damn little ink cartridges. If I become dependent on a food printer for my meals, does that mean I will have to pay through the nose for goo cartridges from HP?
Anyway, promoters of home computers back in the ’80s said they would store our recipes for us. No one imagined that we might come to rely on them for turning the recipes into meals too.
This sentence ends with a computer. February 25, 2011Posted by Scott Campbell in : STV100, STV302 , comments closed
Definitions can be tricky things. New words appear regularly (like retox, magnetricity, and splinternet) just as others are dropped from common usage (like snollygoster, fusby, and skirr). Natural languages and words evolve continuously, and meanings can change to include or exclude particular circumstances.
What, for instance, is a computer? As I’ve already noted, the word originally referred to a person and their occupation. From the OED:
1. A person who makes calculations or computations… Now chiefly hist.
It’s a word that goes back to the 17th century:
1613 ‘R. B.’ Yong Mans Gleanings 1, I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number.
This meaning persisted well into the 20th century, but overlapped with other understandings. By the middle of the 19th century, the word included devices or machines “for performing or facilitating calculation”, and by the middle of the 20th century it had acquired its more modern definition:
An electronic device (or system of devices) which is used to store, manipulate, and communicate information, perform complex calculations, or control or regulate other devices or machines, and is capable of receiving information (data) and of processing it in accordance with variable procedural instructions (programs or software); esp. a small, self-contained one for individual use in the home or workplace, used esp. for handling text, images, music, and video, accessing and using the Internet, communicating with other people (e.g. by means of email), and playing games.
Which brings me to a recent story. A US Federal Court has accepted the argument that a cellphone is also a computer, at least when it comes to sentencing guidelines.
Neil Kramer pleaded guilty in U.S. District Court in Springfield to bringing a 15-year-old girl across state lines in 2008 to have sex. But the man objected when federal prosecutors moved to make his sentence longer for use of a computer. Prosecutors argued his cellphone qualifies as a computer under the definition in federal law. U.S. District Judge Richard Dorr agreed, sentencing Kramer to 14 years in prison, a term that the judge said was more than two years longer than he otherwise would have imposed.
Kramer appealed, arguing he only used his phone to make calls and send text messages, so it shouldn’t be considered a computer. But a three-judge panel of the St. Louis-based 8th Circuit upheld the sentence, finding the federal definition of computer is broad enough to encompass cellphones.
According to the article, the relevant statutory definition of a computer “includes a data processing device that performs ‘logical, arithmetic or storage functions’”. Apparently typewriters and handheld calculators are excluded from this definition, a rather arbitrary and, it must be said, obsolete boundary. Does it include cash registers? Even their primitive mechanical forms in the late 19th century would easily fulfill that definition. And how many crimes can be carried out without buying something at some point?
What would suddenly bring cellphones within the realm of computers? I suppose a cynic might say that the prosecutor was looking to increase the sentence by any means necessary. Certainly, the typical smartphone today fits the definition from the OED, and arguably the simpler cellphones that preceded it did too. Both are complex electronic devices packed with microprocessors, but they have evolved from special- to general-purpose devices, capable of replacing the desktop computer for many daily personal computing tasks, including virtually everything the OED mentions when it comes to handling text, images, music, video, email and games. So, as the uses evolved, the meaning did too.
I’ve seen a parallel trend among undergraduates in the last few years. Each term I ask my students to rank the most important technologies. Until about a year ago the top three answers were consistently, and in order: computers, internet and electricity. No big surprises, really, and it is a useful way of showing students their own technological biases. But then the list started to shift, with the internet overtaking computers, and then smartphones rising up the list to tie and eventually overtake computers. What made a technology important, they argued, was its applications and uses, and for them, computers were fixed desktop devices with which they had little connection.
Sherry Turkle has written quite a bit about the most recent generation’s preference for abstractions and simulations rather than low-level understandings. Perhaps this helps explain the shift? Though smartphones are clearly computing devices, they are also emphatically consumer devices, and unfriendly to hardware hacking. Many are impossible to open and their batteries cannot be replaced easily. Smartphones (and their siblings, tablets) are redefining the personal computing experience, and in the hands of a younger generation they are redefining what computing is and, it seems, quite literally giving it a new name.
Good vibrations? February 25, 2011Posted by Cameron Shelley in : STV202, STV302 , comments closed
This week comes news that Boeing is considering installing vibrating seats into its cockpits as a way of warning the pilots that some action is needed:
The idea comes from patent documents filed by the Boeing company. The airline maker could integrate a simple device that would provide a tactile signal to the crew that would remind them when certain actions need to be taken such as radio calls, and could even be activated remotely by air traffic controllers.
I suppose that it makes sense to explore a haptic alarm modality, in addition to the traditional flashing lights and beeping speakers.
(Image courtesy of Brandrodungswanderfeldhackbau via Wikimedia Commons.)
For me, the notion is eerily reminiscent of the use of the “stick shaker” alarm built into the Bombardier Dash 8 Q400 that crashed two years ago near Buffalo, New York. The Colgan Air pilots appeared not to notice several visual alerts that their plane was not where it was supposed to be. Finally, the stick shaker vibrated the control columns to warn of an impending stall. Regrettably, the pilot’s response was to pull up, causing the stall, instead of putting the nose down to maintain lift.
The pilots were faulted for spending too much time chatting and texting, and not enough time monitoring their instruments. Also, the co-pilot complained of being tired from a cross-country flight she had taken to arrive at her flight assignment. In addition, the pilot’s training was faulted because of his improper, but instinctive, reaction to the shaker.
To judge from these news items, flying commercial planes is becoming a case of de-skilling, or what Donald Norman calls inappropriate automation. That is, humans who once had the job of operating a piece of equipment are instead given the job of monitoring it as it becomes increasingly automated. Commercial aircraft are now highly computerized and carry software that can fly them through most conditions. As a result, pilots do less of the flying and are placed in the more passive and less stimulating role of monitor-readers.
One response is to accept this situation and even to put the software in charge of the pilots. This approach is evident in the seat vibrator design. Like the crew of the Discovery One in 2001, the pilots of the plane are becoming more like cargo, to be stored in stasis and then awakened only when the computer requires them. This approach seems not very stimulating, nor very dignified, for the pilots.
Perhaps this is a case that could benefit from gamification. Instead of boring pilots for hours with largely uninteresting instrument readouts, a way could be found to design the pilots into the process of flying the “smart” plane. For example, what if they could stay sharp with some simulated maneuvers during flight that were related to the planned route (assuming conditions allow for it, of course)? Could they use weather and radar data about the route ahead to optimize their flight path?
I’m just “thinking out loud” here. My point is simply that it is not a given that automation requires designers to treat operators like an afterthought or a fifth wheel. Although that route may be the easiest, it is certainly not the most innovative or imaginative one.
Genetic determinism, public safety, and personal responsibility February 23, 2011Posted by Cameron Shelley in : STV203 , comments closed
An American appeals court has recently overturned an unusually long sentence handed down to Gary Cossey for possession of child pornography. In 2009, the judge sentenced Cossey to 6.5 years plus subsequent supervision instead of the more usual 4.5 years. The judge’s reason for the longer sentence alluded to his belief in genetic determinism: that our genes dictate our behavior. Specifically, the judge held that Cossey’s behavior was due to some genetic condition that medical science would soon discover. The appeals court rejected the sentence on the grounds that the judge based it on speculation about what science might (or might not) show in future. Judges should stick to established facts and not pet theories, of course.
(Image courtesy of Hekerui via Wikimedia Commons.)
As Pete Shanks points out, the appeals court did not take issue with the underlying notion of genetic determinism. Our genes are not our destiny, but the ruling raises questions about the extent to which judges hold that notion to be true. Is it common among judges? If so, what effects is it having on sentencing? Let’s think about that for a moment. The judge’s reasoning seems to be based on the following idea:
- This man has a gene that leads to antisocial behavior.
- Such genetic conditions increase danger to public safety.
- Increased danger to public safety justifies increased sentences.
- So, this man should receive an increased sentence.
In short, the judge’s ruling was motivated by a combination of genetic determinism and a concern for public safety.
Now, contrast this case with that of Abdelmalek Bayout, who confessed to a murder in 2007. In November 2009, an Italian judge reduced Bayout’s sentence on the grounds that he tested positive for genetic variants related to aggression. The judge’s reasoning might be summarized as follows:
- This man has a gene that leads to antisocial behavior.
- Such genetic conditions reduce a man’s responsibility for his actions.
- Reduced responsibility justifies reduced sentences.
- So, this man should receive a reduced sentence.
Like the first argument, this one makes an appeal to genetic determinism (premise 1). Yet, the two arguments arrive at opposed conclusions: the first leads to a longer sentence whereas the second leads to a shorter one.
Perhaps the problem is that judges are being asked to treat genetic tests of defendants as a kind of testimony to their character. A person’s character has long been considered relevant to sentencing, with people of good character being given more breaks than people of bad character. Genetic tests are a poor indicator of character in this sense. It makes me wonder if this use of genetic tests should simply be regarded as inadmissible.
“Repo games” February 18, 2011Posted by Cameron Shelley in : STV202, STV302 , comments closed
There has been a lot of attention around the concept of “gamification”. The basic idea is to apply design elements of video games to activities in real life. FourSquare, for example, allows people to “level up” to become the “mayor” of a place by checking in to it more often than anyone else. Jane McGonigal, in her book Reality Is Broken, argues that gamification is a potent way of improving aspects of the real world that are not working.
(Image courtesy of Flor Anette Zuniga Rivero via Wikimedia Commons.)
I was reminded of this perspective when I found out about a new game show called Repo Games, which is going to be offered by Spike TV. On this show, people whose cars have been repossessed will have a chance to win them back from the repo men. All they have to do is correctly answer 3 out of 5 trivia questions, and they can drive away with their car, all debts forgiven!
As impressive as this show sounds, I was even more impressed by a suggestion made by John Hodgman in his “You’re welcome” segment on a recent Daily Show. He put forward that the US Government could effect much-needed budget cuts if it turned Social Security into a tontine, in which the last old person alive would win all the money. Of course, one old person would not need the trillions of dollars currently in the Social Security budget, so a huge cost reduction could be effected. Also, if Repo Games is any guide, the competition among the elderly could provide a profitable, Thunderdome-like spectacle for a TV audience. But I digress.
So what? Computer games are becoming a spectator sport. Tournaments of StarCraft draw hundreds of thousands of viewers in South Korea, for example. If conventional video games can be (re-)monetized through a secondary, spectator market, then so can gamified activities. Quiz shows such as Repo Games are perhaps evidence of this. If a gamified activity is interesting enough, then people may pay to watch it and talk about it. Indeed, if it proves possible to monetize gamified activities in this way, then designers of these games might prefer to gamify them in ways that maximize their potential for secondary, commercial sales. Higher education, for example, might be made more engaging (and watchable) if students were occasionally able to vote others off the campus or engage in combat for bonus marks, a la Survivor.
Lasers arrive too slowly for Marines in Iraq February 17, 2011Posted by Cameron Shelley in : STV202 , comments closed
Here is an interesting piece from Wired about how bureaucracy delayed the arrival of a laser-based security system destined for Marines in Iraq. The system in question is a non-lethal laser gun designed to dazzle car drivers by shining an intense light in their eyes. The expected result is that the drivers must slow down and stop or else lose control of their cars. The idea is that this system can be used at military checkpoints to give soldiers an alternative to firing bullets at cars that do not slow down as they approach the checkpoint.
The Marine Corps made an urgent request for the technology in 2005. As the article explains, Iraqi driving habits have long featured weaving and speeding. The result is that drivers would unwittingly speed towards military checkpoints in a manner similar to that of a suicide bomber. Marines at the checkpoints would flash their headlights and fire warning shots but not always to effect. In the end, the Marines would fire on the drivers to protect themselves, resulting in a number of innocent casualties. Having the laser guns on hand would give the soldiers a way of halting the cars without killing the drivers.
An urgent request for the dazzlers was made in September 2005. Such requests are supposed to take no more than six months to resolve. However, this request was not even addressed until six months had passed. The delay was caused, in part, by a dispute over which system to purchase. Also, the Marine force in Iraq tried to purchase the dazzlers directly, circumventing the bureaucracy at Development Command. Only after Development Command had stopped that purchase did it proceed with a purchase of its own. In the meantime, according to a report prepared for the Corps brass, an estimated 50 innocent Iraqis were killed at checkpoints who might have been merely incapacitated had the dazzlers been available.
This incident is, among other things, an illustration of what R. G. Little calls institutional bias. Institutions and organizations can have a kind of built-in bias towards precaution or permissiveness in their adoption of designs or technologies. For various reasons, the bias is not always healthy. Famously, the Space Shuttle program had a permissive bias, which contributed to the Challenger disaster. In the current example, the Marine Corps could be accused of having an overly precautionary bias.
Of course, it is easy to dump on bureaucracies for being slow-moving and self-serving. However, we should bear in mind that precaution has its uses. A permissive system allows for the rapid acquisition of both useful and useless or even harmful technologies. Imagine, for instance, that the Corps urgently acquired a bomb detector (cf. this story about dogs vs high-tech detectors) that turned out to be defective. Probably, many soldiers and civilians would be harmed before the detector was abandoned. In that event, the bureaucracy might be blamed for not being slow-moving enough.
I suppose the moral is that we certainly want to keep watch on the biases of our institutions to ensure that they fit with the circumstances that they are in. However, in order to do so, we have to consider and balance the errors that come with both permissive and precautionary approaches to progress.
Medical information technology is not a panacea February 10, 2011Posted by Cameron Shelley in : STV202, STV302 , comments closed
In reading this TIME article about a doctor’s experience with medical information technology, I was struck by his conclusion that, although the technology can be a great boon, it is not a panacea. That expression provides a nice medical analogy to the claim, discussed before in this blog, that technology is just a tool. As noted previously, that expression has several contradictory meanings but, here, it means that we have an obligation to be critical about computerizing an existing system.
(Image courtesy of Jejecam via Wikimedia Commons.)
In this article, Dr. Meisel makes it clear that health information technology has a lot to offer. He recounts a case in which an elderly woman was brought to the ER obviously in a bad way but without medical records or family members. Thus, he had to treat the patient with almost no knowledge of her previous medical history. Had her medical records been available for retrieval from a distributed database, her care could have proceeded more promptly and accurately than it did.
However, although health information systems help to open some channels of communication, they tend to close others. Dr. Meisel mentions two:
- Patient status used to be represented in hospital wards on a giant whiteboard (which you may well have seen). Health IT systems replace this board with a small computer monitor. Unfortunately, secluded monitors do not invite impromptu conferences among doctors and nurses in the way that the big whiteboards did. Such conferences could produce insights that helped with patient care. Has the computerized system unintentionally reduced this benefit?
- Previously, when Dr. Meisel ordered an X-ray, he would have to go to the Radiology department to get it. There, he would encounter the radiologist. They would often discuss the X-ray, and the conversation might reveal something that neither had noticed on their own. Now, X-rays are delivered electronically with the radiologist’s comments. Such a system tends to discourage casual discussion. What important information is going unnoticed as a result?
Computerization of health information systems can and does produce great benefits but important aspects of the previous, informal information system can get lost because they go unnoticed by the analysts designing the system, or because the analysts do not acknowledge their importance.
Why do designers make these mistakes? There are many contributing factors but I will comment briefly on just one: the designers are trained and paid to make the existing system more efficient by reducing it in various ways. For example, networking different medical record databases reduces the time and effort needed to dig up relevant information about a patient. Not a bad thing! However, the computerized system also reduces the number of channels through which information flows. This reduction makes things speedier but also removes feedback loops (think casual meetings) built into the informal system. Since they are achieving desirable reductions in time and effort, designers may not think critically about whether all the reductions are a good thing. This phenomenon is an instance of what I have called motivated design. When designers aim to do good, they sometimes do not think through or properly evaluate the consequences of their work. If Dr. Meisel is right, this phenomenon may be adversely affecting patient care.
What can be done? Perhaps the designers of computerized systems need to develop a greater appreciation for the drama of human interactions in informal information systems. That is, what characters are involved in the informal system? What roles do they play? What happens when they interact? Is it important? If so, should it be reduced at all? Or, can it be accommodated in a computerized system?
Will this Tweet be on the test? February 9, 2011Posted by Scott Campbell in : STV100, STV302 , comments closed
How might political revolutions in Tunisia and Egypt be connected to internet Usage Based Billing in Canada? Why, the power of social media and networks of course!
It has been fascinating watching revolutions unfold half a world away. Thanks largely to the power of electronic mass media, live video, commentary and analysis can be had as events transpire. Even when governments try to pull the plug, information can escape and the people can still march. That suggests an important lesson about the apparent relevance of social media: maybe Facebook and Twitter aren’t as significant as we in the West would like to think. As Malcolm Gladwell recently observed about the protests and changes underway in Egypt,
There are a thousand important things that can be said about their origins and implications … but surely the least interesting fact about them is that some of the protesters may (or may not) have at one point or another employed some of the tools of the new media to communicate with one another. Please. People protested and brought down governments before Facebook was invented. They did it before the Internet came along.
While not everyone is happy with Gladwell’s understanding of social networks and media, here his observation does not seem controversial.
What about the 2009 Iranian “Twitter Revolution”? Again, not everyone thinks that social networking sites are the keys to a cyber-utopia or a human rights panacea. Evgeny Morozov has studied the relationship between technology and authoritarian governments, and his recent book and writings have pointed out that a) technology is not new to rebellion (Xerox machines were used to organize Polish anti-communist protests, to say nothing of the printing presses and books that have driven countless others), so we should avoid overestimating the role of the technology du jour, and b) these tools are not limited to the disaffected masses. Dictators can and probably would use them to their own advantage. For instance, most social networking services seem to decrease privacy and many have begun incorporating geolocation services. These are features the 21st century’s Che Guevaras might rather avoid, at least if they want to retain their freedom. Tracking individuals is almost too easy these days.
Personally, I really don’t know if “The Revolution will be Twittered”. Can political change happen in 140 characters or less?
I guess it can if you are Canada’s Industry Minister Tony Clement. Famous for his fast fingers at home or in Parliament, he has been known to tweet about personal events and major policy decisions, and even to hold impromptu political debates in the twitterverse. And when the CRTC announced a new Usage Based Billing policy not too long ago, it was Clement’s Twitter feed that announced that his government would review and likely overturn that decision. All of which might explain why he has been crowned the Tory Twitter King.
Hmm. Is this the new way of government, or is it just an old-fashioned policy leak? And if Clement lives by the sword, he might die by it as well: he has already been made to look foolish in Twitter-land. And while there might be a few hundred people who read Hansard for kicks, thousands subscribe to Clement’s Twitter feed and correspond with him very publicly, and the archives are readily accessible.
To an extent, this discussion turns on the issue of whether technology is “just a tool”, something that comes up quite a bit on this blog. But what came to my mind, in keeping with Cameron’s rumblings earlier this week about the future of education, is this: will lectures soon be twittered? (No, I did not miss the irony that these thoughts are being published on a blog.)
iConfess February 9, 2011Posted by Cameron Shelley in : STV302 , comments closed
Thanks to Darcy K for pointing out this tempting item: the Catholic Church has given its blessing to an iPad/iPhone app that guides users through confession. This app is reportedly the first to receive an official imprimatur from the Church. Also, it is currently one of the top-selling apps in the App Store in Canada.
(Image courtesy of Steve Nagata via Wikimedia Commons.)
The app is designed to “walk” sinners through confession and so can be used right in the confessional. It also acts as an aide-mémoire:
It reminds users when their last confessions were and keeps track of sins they have previously confessed.
It also advertises features such as password protection to allow multiple users, a “custom examination of conscience” based on age, sex and marital status, the ability to add sins that aren’t listed and a choice of seven different acts of contrition — prayers that express sorrow for sins.
Wow! It tracks your sins? Can you say “privacy issue”?
Anyway, given my posting from yesterday, this app raises the question: Does the app automate absolution from sin? Apparently not:
However, absolution or release from the sin can still only come from a priest.
I assume that a future release will take care of this deficiency.
Well, we have already commented on robo-ethics and ethical drones in this blog. At the moment, robots are not well equipped to make complex ethical decisions. But then, people are not perfect at it either, and it is conceivable that computers will become better at grinding out ethical judgments than human beings. Could they also be better at meting out absolution? I seem to recall a segment on Michael Moore’s show, TV Nation, where Sarah Silverman went around confessionals in New York “pricing” absolution for the sin of lust. Each priest quoted a different price. An app could at least make the cost consistent across churches.
Crowdsourcing education February 7, 2011Posted by Cameron Shelley in : STV302 , comments closed
One interesting thing that blog entries can do is to juxtapose two unrelated items to see how they might connect. Today, I have found a couple of things that make an interesting pair. First, there is a New York Times article that discusses the possibility of online courses with no human instructors. So far, no institution has created a course without a human being running the show. Why not? I suppose no one has yet figured out how to do it, although it seems quite plausible.
This is where the second article comes in. This Technology Review blog discusses an experiment where a program managed a group of humans in the construction of an encyclopedia entry, using the crowdsourcing tool Mechanical Turk. Appropriately, the project was called “My boss is a robot.” The experiment was successful enough that its creators are looking for other projects to manage with a similar approach.
(Image courtesy of Mikael Nordin via Wikimedia Commons.)
So, I would suggest that the robot boss be put in charge of organizing university courses. It would be interesting to see what courses are most amenable to this approach. I suspect that many introductory classes, e.g., Calculus 101 or Introductory Psychology, might lend themselves to almost total automation. More specialized courses would be tougher to automate entirely as they involve more specialized knowledge of a field and often need to be conducted in a more flexible and responsive manner. Of course, I could be wrong.
Would such a system appeal to universities? Maybe. Of course, many universities employ inexpensive, human sessional instructors to deliver the low-level mega-courses, so development costs for the robot boss would constitute a substantial barrier to entry in the marketplace. However, Bill Gates is throwing a fair amount of money into this area, and he has very deep pockets and was a university dropout. So, your professor may be a robot sooner than you think!
And I, for one, welcome our robot overlords to uWaterloo!