Centre for Society, Technology and Values, University of Waterloo

Robo-grading is great! August 15, 2014

Posted by Cameron Shelley in STV202

Automatic or robo-grading software assigns marks to written material submitted by students. These software packages have some advantages, including quickness of response and the ability to handle large numbers of assignments at once. Also, they seem to perform about as well as human graders, on standardized assignments, at least. As such, the use of robo-graders for large classes is clearly an attractive prospect for administrators of such courses.

At the same time, robo-graders are controversial. Les Perelman of MIT argues that, since robo-graders do not really understand what they are doing, their use will encourage the wrong sort of writing:

Robo-graders do not score by understanding meaning but almost solely by use of gross measures, especially length and the presence of pretentious language.

Recently, three computer science students, Damien Jiang and Louis Sobel from MIT and Milo Beckman from Harvard, demonstrated that these machines are not measuring human communication. They have been able to develop a computer application that generates gibberish that one of the major robo-graders, IntelliMetric, has consistently scored above the 90th percentile overall. In fact, IntelliMetric scored most of the incoherent essays they generated as “advanced” in focus and meaning as well as in language use and style.

Having students turn in gibberish to game robo-graders would clearly undermine the purpose of the assignments.
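Perelman's criticism can be made concrete with a toy scorer. The sketch below is purely illustrative (IntelliMetric's actual model is proprietary, and the weights and word list here are invented), but it shows how a grader that rewards only surface features will rank long gibberish above short, coherent prose:

```python
# Toy essay scorer illustrating Perelman's criticism: it measures only
# gross features (length, fancy vocabulary), not meaning.
# The feature weights and word list are invented for illustration.

PRETENTIOUS = {"paradigm", "dichotomy", "plethora", "quintessential", "juxtaposition"}

def toy_score(essay: str) -> float:
    """Return a 0-100 'grade' based purely on surface features."""
    words = essay.lower().split()
    if not words:
        return 0.0
    length_score = min(len(words) / 500, 1.0) * 60          # longer is "better"
    fancy_ratio = sum(w.strip(".,") in PRETENTIOUS for w in words) / len(words)
    vocab_score = min(fancy_ratio * 10, 1.0) * 40           # pretentious words help
    return length_score + vocab_score

# Coherent but short prose scores poorly...
short_essay = "Robo-graders reward length, not meaning."
# ...while long gibberish built from fancy words scores highly.
gibberish = " ".join(["paradigm dichotomy plethora quintessential juxtaposition"] * 100)

assert toy_score(gibberish) > toy_score(short_essay)
```

Any student (or program) that discovers these weights can max out the score without writing anything meaningful, which is precisely the gaming problem described above.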

So, are robo-graders good or bad?

Some recent work suggests that robo-graders might have a positive role as assistants to students during the writing process. Students who are able to use robo-graders to obtain feedback on drafts of their work show more willingness to revise and improve it. For example, a recent study by Khaled El Ebyary of Alexandria University and Scott Windeatt of Newcastle University showed that students responded much more positively to robo-feedback than they did to human feedback:

Comments and criticism from a human instructor actually had a negative effect on students’ attitudes about revision and on their willingness to write, the researchers note. By contrast, interactions with the computer produced overwhelmingly positive feelings, as well as an actual change in behavior—from “virtually never” revising, to revising and resubmitting at a rate of 100 percent.

Crucially, the students’ writing seemed to improve as a result of their revisions.

The key to this improvement seems to be that students did not feel judged by the robo-grading system. Receiving criticism from a human may induce feelings of shame, but the same does not appear to be the case with criticism from a software package. In other words, receiving robo-grading is socially technotonic, at least when contrasted with the alternative.

This observation suggests that robo-graders may have a positive role to play in assignment marking, as long as they are not given the final say.

Theodore Van Kirk dies at 93 August 14, 2014

Posted by Cameron Shelley in Uncategorized

Theodore Van Kirk died recently at the age of 93. Van Kirk was a US airman who was the last surviving crewman of the Enola Gay, the plane that dropped the first atomic bomb on Hiroshima on August 6, 1945.

Van Kirk had some interesting thoughts about war in general and atomic weapons in particular. For example, he believed that the use of the bomb shortened the war and, thus, prevented continuing loss of life:

“I honestly believe the use of the atomic bomb saved lives in the long run. There were a lot of lives saved. Most of the lives saved were Japanese,” Van Kirk said.

On this view, the alternative to the bomb was an invasion of Japan itself, an undertaking that promised to be destructive and bloody. Not an inviting prospect. It was that alternative that justified the use of the bomb, Van Kirk argued:

“We were fighting an enemy that had a reputation for never surrendering, never accepting defeat,” he said. “It’s really hard to talk about morality and war in the same sentence.”

He continued: “Where was the morality in the bombing of Coventry, or the bombing of Dresden, or the Bataan Death March, or the Rape of Nanking, or the bombing of Pearl Harbor? I believe that when you’re in a war, a nation must have the courage to do what it must to win the war with a minimum loss of lives.”

Even if this argument is accepted, it still leaves open the matter of dropping the bomb on civilians.

At the same time, Van Kirk became skeptical about the usefulness of warfare:

“The whole World War II experience shows that wars don’t settle anything. And atomic weapons don’t settle anything,” he said. “I personally think there shouldn’t be any atomic bombs in the world — I’d like to see them all abolished.”

This point reminds us that there are still atomic weapons in the arsenals of at least eight countries, some 17,300 warheads in all.

The selfie files August 12, 2014

Posted by Cameron Shelley in STV202, STV302

If the drone has any competition as the technology darling of the year, it is the selfie. So, here are some recent items of interest where the front-facing camera on that smart phone is concerned:

The technology scholar Neil Postman argued that technologies have a “philosophy”:

… which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.

Do these examples suggest anything about the “philosophy” of that front-facing camera on the smart phone?

And now, the drone news August 11, 2014

Posted by Cameron Shelley in STV202

I hate to drone on, but those neat little aerial vehicles keep making the news:

What would Johnny Dronehunter say to that?

Ideology of the TV remote August 1, 2014

Posted by Cameron Shelley in STV202

Caetlin Benson-Allott has posted an article in The Atlantic giving a précis of her upcoming book on the history of the TV remote control. It is an interesting post and bodes well for the longer treatment.


(Image: TV remote control. Dave Croker/Wikimedia Commons)

In an earlier posting, I talked about some effects that the TV remote had on the nature of TV programming. Benson-Allott’s piece, however, has more to do with how the introduction of the remote changed the way that people organized their houses and their lives around their entertainment systems:

Yes, in fact—this seemingly innocuous media accessory has also changed the way we inhabit our houses and experience our families. The effects of remote controls have cascaded through the home, affecting how we arrange our domestic spaces, whom we share them with, and what we do there.

Here, I will just summarize the author’s claims about the ideology of the remote, that is, about the “ideal” lifestyle that remote controls were designed to bring about.

Early remote controls in the 1920s were boxes hooked up to radio sets by cables. They allowed owners to adjust volume, change stations, and turn the set on or off without getting up. The space in the living room around the radio set had already become somewhat specialized for that purpose, that is, as the place where the family gathered to consume entertainment. The remote control had the effect of reinforcing that assignment by making consumers of entertainment even more immobile. No need to get up with the remote handy!

This specialization and the immobility associated with it were conveyed as part of a luxury lifestyle. However, this luxury also locked householders in as merely consumers (and not producers) of in-house entertainment:

Many households still embraced the “luxury” of sedentary media consumption that these early remotes provided, but the devices offered only a negative form of liberty (rather like the leash that allows the dog to go outside).

Of course, most homeowners could not afford sets with remote controls. So, they really did remain luxury items.

With the advent of TV remotes in the 1950s, the promise they made expanded to the provision of greater control over the boob tube. One TV remote introduced in 1953 was called the “Blab-Off” because it was designed to give users the ability to tune out advertising chatter or other unwelcome noises without having to get up or turn the set off. Still, TV remotes remained a luxury:

Television sets occupied over 70 percent of U.S. households by 1956 and over 95 percent by 1969, but as of 1979, only 17 percent of U.S. television households were using a remote control.

The situation changed with the adoption of cable TV and VCRs. With these new sources of programming, people had much more selection in what to watch, necessitating some aid in navigating from source to source. The remote control was just the solution for this problem. People became much more willing to shell out the extra expense for a remote control when faced with managing a multi-component entertainment system.

Thus, the “entertainment center” was born as both a new piece of furniture and a new lifestyle:

As furniture and ideology, the entertainment center draws on the media-as-furniture design of radio and television consoles to create a television stand with extra shelving that accommodates but also demands ancillary components like cable boxes and VCRs. Remote control became the interface through which to command the family’s new centralized multi-media environment.

The entertainment center became the dominant piece of furniture in the living room and the focus of activity there. Chairs in living rooms are typically oriented for optimum viewing of the center and not so much for conversation or other non-TV-related activities.

The remote control became the indispensable interface to the center. To this day, most living rooms (like my own) have several of them displayed conspicuously. (If they are hidden away, Benson-Allott notes, there is the fearful possibility they will get lost or, at least, not be around when you want to watch something.)

For a while, the TV remote could still be a sign of the luxury lifestyle, showing that the owner possessed a high-end media system. However, remote controls increasingly came to suggest the tech-savvy and modern lifestyle led by the owner:

Remotes suggested that their owner was himself high-tech and in demand, so busy and important he could not possibly cross the room to change the CD himself.

As with other high-tech stuff, the author notes, the remote is also perceived as falling in the masculine social sphere. Ads for remote controls typically portray male users. Blocky and button-laden design and stubbornly phallic shape tend to reinforce this association. The level of control that remotes grant to the individual who wields them seems to go along with the assertiveness that is often seen as a masculine trait.

Given this association, it is interesting to note that most functions provided by modern remote controls go unused by the majority of people. Also, most people do not use universal remotes that attempt to do the work of the many remotes that come with each component of an entertainment system. Perhaps the design of remote controls is geared more to providing the illusion of god-like omnipotence than the reality of it.

One exception to the trend of increasing visual complexity is the Apple remote, which harkens back to the design of the first TV remote controls equipped with only a dial and two or three buttons. The Apple remote seems to represent a different ideology, that of integration into an Apple-ly lifestyle with its promise of simplicity and discernment:

Today, the Apple Remote offers to restore order to the household—as long as the home remains an iHome. After all, the Apple Remote can only deliver on the simplicity of its design if you use it to the exclusion of all other remotes, and by extension all other brands.

In other words, the Apple remote is like the original iPod, which achieved its simplicity through integration into a complex music service provided through iTunes.

Whether or not the ideology of the Apple remote overtakes that of the competition remains to be seen. In any event, I look forward to the appearance of Benson-Allott’s book “Remote Control” later this year.

Designer babies or reproductive choices? July 31, 2014

Posted by Cameron Shelley in STV203

A Calgary fertility clinic, the Regional Fertility Program, made news recently when a doctor at the clinic advised a client that he would not help her conceive a baby of a different race. Dr. Greene said that the policy against mixing races has been in force for decades and that it is better for families to have children who resemble their parents:

“I’m not sure that we should be creating rainbow families just because some single woman decides that that’s what she wants,” Dr. Greene said. He went on to say the clinic’s approach is consistent with the spirit of Ottawa’s Assisted Human Reproduction Act, which discourages doctors from helping create “designer babies.”

The clinic says that the policy was revised last year to remove the prohibition on mixing races (although their website was updated to reflect this fact only days ago). Thus, Dr. Greene was voicing his own opinion and not that of the company.

Federal Health Minister Rona Ambrose says that the policy contravenes the Assisted Human Reproduction Act. The Act does contain an anti-discrimination clause:

Since this clause applies to the client and not the offspring, it is not clear that Dr. Greene was in contravention of it. However, his description of his client as “some single woman” suggests he might be guilty of discrimination on the basis of marital status.

Instead, Dr. Greene may have in mind the first clause of this section of the Act:

His argument is that it is better for babies to resemble their parents, apparently an appeal to this stricture.

It could be argued that Dr. Greene does not have a right to make such a determination:

And Sara Cohen, a fertility law attorney in Toronto, said she assumes the clinic had good intentions and was looking out for what it considered to be the best interests of the child. “But it is inappropriate for a clinic to make greater social policy for a province,” Ms. Cohen said.

However, neither Dr. Greene nor the clinic claims to be setting policy for the province as far as I can tell. Moreover, doctors are enjoined to recommend therapies to their patients solely for the good of those patients and not for the good of society. It may be that Dr. Greene views his recommendation against “rainbow families” in this light.

Also, as noted in the Globe and Mail article, the Supreme Court “gutted” the Act when it ruled that fertility clinics fall under the jurisdiction of the provinces. For the most part, the effect is that private clinics can set whatever restrictions they see fit on their services.

Then there is the issue of “designer babies,” a concept that Dr. Greene appeals to in justifying his position. The Act does forbid certain genetic modifications to embryos created through IVF:

Of course, merely fertilizing the egg of a woman of one race with sperm from a man of another race does not qualify as an alteration. That is simply how IVF works. Moreover, a woman can select as a partner a man of another race and have children with him without fear of violating any law regarding miscegenation.

(Imagine if Dr. Greene had advised a female patient against having sex with a man of another race because, “Think of the babies!”)

This point also undermines Dr. Greene’s appeal to the child’s best interests. In a society where women may freely conceive children of mixed race, we should presume that those children are not regarded as problematic or unacceptable, the continued presence of some racism in the society notwithstanding. So, we should not infer that a mixed-race child would face injury or harm so severe that non-existence would be in its best interest.

Canada has an unfortunate and oft-forgotten past with eugenic policies, that is, policies about what sort of babies are acceptable or unacceptable to society. Overtly eugenic laws have been repealed. Is it appropriate that such policies can still be applied in private clinics?

GM humans July 30, 2014

Posted by Cameron Shelley in STV203

A report in The Independent discusses debate over the meaning of the term genetically modified, as applied to people. The issue arises in the context of the British government’s move towards approval of mitochondrial DNA replacement during IVF procedures.

Nearly every cell in the human body contains mitochondria, organelles whose function is to provide energy for cellular metabolism. This function is regulated by DNA within the mitochondria which, unlike nuclear DNA, is inherited only from the person’s mother. Some illnesses, such as mitochondrial myopathies, result from defects in this mitochondrial DNA (mDNA).

Advances in IVF technology will soon allow for replacement of the mDNA in a cell with mDNA from a different cell. Such replacement could allow parents to ensure that their offspring do not inherit serious mDNA defects. Of course, replacement of mDNA with that of a donor means that those offspring will have three genetic parents, a conceptual novelty.
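The “three genetic parents” point can be illustrated with a toy inheritance model (a deliberate oversimplification with invented names and one-allele “genomes”; real genetics is vastly more complex):

```python
# Toy model of IVF with mitochondrial DNA replacement. Each "person" is a
# dict with nuclear DNA (inherited from both parents) and mDNA (inherited
# from the mother only). All names and genomes are invented illustrations.

def conceive(mother, father, mdna_donor=None):
    """Combine parental DNA; if a donor is given, her mDNA replaces the mother's."""
    mdna_source = mdna_donor if mdna_donor is not None else mother
    return {
        "nuclear": mother["nuclear"][:1] + father["nuclear"][:1],  # one allele each
        "mdna": mdna_source["mdna"],
    }

mother = {"nuclear": ["A1", "A2"], "mdna": "defective"}
father = {"nuclear": ["B1", "B2"], "mdna": "healthy"}   # father's mDNA is never passed on
donor  = {"nuclear": ["C1", "C2"], "mdna": "healthy"}

# Without replacement, the defect is inherited from the mother:
assert conceive(mother, father)["mdna"] == "defective"

# With replacement, the child carries the donor's healthy mDNA but
# none of the donor's nuclear DNA -- hence "three genetic parents":
child = conceive(mother, father, mdna_donor=donor)
assert child["mdna"] == "healthy"
assert child["nuclear"] == ["A1", "B1"]
```

The child inherits nuclear DNA from two people but mDNA from a third, which is exactly the novelty at issue in the debate below.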

For this reason, and because it involves genetic manipulation, use of the mDNA replacement technology is controversial. In an apparent attempt to head off some of the controversy, the British government has adopted a “working definition” of the expression genetically modified on which cells produced via mDNA replacement would not officially be considered GM:

The Health Department accepts the germ-line of future generations will be altered, but it insists, in its official response to the public consultation published last week, that this does not amount to genetic modification. “There is no universally agreed definition of ‘genetic modification’ in humans – people who have organ transplants, blood donations or even gene therapy are not generally regarded as being ‘genetically modified’,” the response says.

Of course, this statement is somewhat incoherent: If “genetically modified” has no precise meaning, then how does the Health Department know that people who have undergone genetic therapy, for example, are not considered genetically modified?

Interestingly, both supporters and critics of the policy deride the government’s method of getting the therapy approved:

Lord Winston told The Independent: “The Government seems to have come to the right decision but used bizarre justification. Of course mitochondrial transfer is genetic modification and this modification is handed down the generations. It is totally wrong to compare it with a blood transfusion or a transplant and an honest statement might be more sensible and encourage public trust.”

David King, from the pressure group Human Genetics Alert, said the Government is “playing PR games based on very dubious science” because any changes to the mitochondrial genes will amount to genetic modification. “Their restriction of the term to nuclear inheritable changes is clearly political. They don’t want people like me saying that they are legalising GM babies,” Dr King said.

So, is the government right that modification of non-nuclear DNA is not genetic modification? Or, is this definition just a way of trying to avoid criticism?

(Image: “Mitochondrial DNA versus Nuclear DNA” by the University of California Museum of Paleontology (UCMP) and the National Center for Science Education, from “Marshalling the Evidence,” Understanding Evolution, 22 April 2014. <http://undsci.berkeley.edu/article/0_0_0/endosymbiosis_07>. Licensed under CC BY-SA 3.0 via Wikimedia Commons.)

GPS users have smaller brains! July 29, 2014

Posted by Cameron Shelley in STV202

Actually, according to a Citylab post, UberX drivers who use GPS navigation systems tend to have less well-developed spatial cognition skills than do professional cab drivers. Studies have shown that London cab drivers, who must know their city by heart, have exceptionally well-developed posterior hippocampi, a brain region associated with memory function.




(Image: -IcyJ-/Flickr.com)

Recent research by Veronique Bohbot of McGill found that drivers who use GPS systems to navigate cities are less well endowed when it comes to their hippocampi:

According to Bohbot’s research, there are two ways of navigating. Spatial navigation methods are what we might use without GPS—using landmarks and visual cues to create cognitive maps that help us orient ourselves and get where we want to go. Stimulus-response navigation, however, is triggered when we go on “auto-pilot,” thoughtlessly following a path because we’ve either done it before or are following the directions of our handy GPS devices.

UberX drivers tend to rely on GPS services; hence their inferiority to London cab drivers and their ilk. As a result, UberX drivers will not know their cities as well as the professionals do.

The conclusion of the post is that UberX drivers may not offer as good a service as pros, not knowing which areas to avoid at certain times, which ones are clogged by construction, and so forth. However, Uber has a fix: Customers can use the service’s driver rating system to incentivize drivers to learn their way around better. A driver who takes the optimal route should get higher ratings than one who gets needlessly stuck in traffic with passengers. Since drivers with higher ratings will get more business, and perhaps charge more, competition for good ratings will make up for the problem.

This procedure may be asking too much of passengers. A stranger to Washington D.C., for example, could not be expected to know that a driver’s performance is sub-optimal. Even the best drivers get stuck in traffic and even the worst get lucky from time to time. How is an outsider to know the difference?

It seems more likely that Uber itself has the required information. Comparison of the performance of different drivers at similar tasks at similar times would seem to be a good way of rating their skills.
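A minimal sketch of that comparison, assuming Uber logs trip times (the trip data, route keys, and scoring rule below are invented for illustration, not Uber's actual method):

```python
# Hypothetical sketch: rate drivers by comparing their trip times against
# the median time for the same route at the same hour. All data invented.
from collections import defaultdict
from statistics import median

# (driver, route, hour-of-day, minutes taken)
trips = [
    ("alice", "airport-downtown", 17, 22),
    ("bob",   "airport-downtown", 17, 35),
    ("carol", "airport-downtown", 17, 24),
    ("alice", "uptown-station",   9,  11),
    ("bob",   "uptown-station",   9,  18),
]

def relative_performance(trips):
    """Return each driver's average ratio of trip time to the route/hour median.
    Below 1.0 means faster than typical; above 1.0 means slower."""
    by_key = defaultdict(list)
    for driver, route, hour, minutes in trips:
        by_key[(route, hour)].append(minutes)
    medians = {k: median(v) for k, v in by_key.items()}
    ratios = defaultdict(list)
    for driver, route, hour, minutes in trips:
        ratios[driver].append(minutes / medians[(route, hour)])
    return {d: sum(r) / len(r) for d, r in ratios.items()}

scores = relative_performance(trips)
assert scores["alice"] < scores["bob"]  # alice is consistently faster
```

A driver whose average ratio sits well above 1.0 is consistently slower than peers on the same routes at the same times, a far more informative signal than a stranger's star rating.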

Also, UberX drivers could subscribe to navigation services that take account of real-time traffic conditions. One example would be Waze, discussed in this earlier blog posting. Perhaps Uber could negotiate special terms for its drivers with such services.

In any event, the research provides another perspective on how GPS navigation services can lead to deskilling, that is, reduction in the skills required to perform an activity. Famously, assembly line work deskilled the labor involved. People working in Ford’s auto plant needed to know only how to turn a given nut or join two parts together, rather than how to assemble an entire car. Deskilling enabled Ford to make cars much more cheaply but made the work less intrinsically interesting.

Similarly, GPS navigation systems may end up making chauffeur transportation much cheaper but may also end up making it less mentally challenging for the drivers. This issue does not mean that Uber drivers will necessarily become more stupid. They may compensate by taking on more challenging tasks besides driving, such as witty conversation with passengers or texting while driving.

The drone report July 28, 2014

Posted by Cameron Shelley in STV202, STV302

More stories from the Drone News Network:

Keep watching the skies!

Crocs and iPads July 24, 2014

Posted by Cameron Shelley in STV202

There is a conjunction you probably were not expecting. However, recent news has connected them with a common issue: obsolescence.

First, there is news that Crocs is about to downsize. It will be closing 100 of its 600 stores and laying off about 180 of its 5000 employees. This move follows on a 44% drop in profit over the most recent quarter. Is the comfortable but gaudy footwear on its way out?



(Image: “CrocsAccessories” by jespahjoy, Flickr. Licensed under CC BY 2.0 via Wikimedia Commons.)

Crocs have enjoyed a good run. The footwear kicked off in 2002 at the Ft. Lauderdale Boat Show as a shoe adapted for the yachting set. However, people soon took to them, to the tune of $850 million in annual sales in 2007. Celebrities such as Jack Nicholson, Al Pacino, Mario Batali, and George W. Bush wore them in public. Still, sales have since declined, despite attempts by the company to diversify into high heels, dress shoes, and winter boots.

Why have Crocs become unpopular? One possibility is stylistic obsolescence, that is, Crocs have simply gone out of fashion. The shoes are strongly styled and thus exposed to the winds of change. Carey Dunne of Fast Company says that consumers simply came to their senses and realized how awful the shoes really look. Andrew Clark of The Guardian argues that Crocs management was “caught out” by the 2008 recession:

“That initial style put them on the map. They moved quickly to expand it, not only in the US but around the world,” says Jim Duffy, a sportswear analyst at stockbroker Stifel Nicolaus. “But at the same time, the economy slowed down. They had overdistributed the product, they’d become too heavily dependent on the one style and they had inventory management problems.”

In other words, the recession created an artificial glut from which the product never recovered.

Second comes news of a slowdown in Apple iPad sales. Sales of the tablet were down 19 percent from the previous quarter and 9 percent from the same quarter last year. Although Apple CEO Tim Cook says that he is not worried, it is an unusual situation for the company. This is especially so since sales of Mac computers and iPhones have grown.


(Image: iPad stand)

So, have tablets become obsolete already? Will Oremus at Slate provides three explanations for the situation:

  1. There are limitations on tablets that do not apply to computers or phones. Tablets are not as powerful as the former and not as portable as the latter. Thus, they are “third devices” which only so many people will have a use for.
  2. Tablets are “too durable.” Most iPad buyers are first-time purchasers, suggesting that people who already have iPads are not replacing them yet. The new iPads are not so much better than the previous ones that many people want to upgrade.
  3. Unlike iPads, iPhone purchases are driven by phone carriers who subsidize the cost of phones in exchange for multi-year contracts. In the absence of this sort of economic stimulus, people who might like to upgrade from their old iPad are unwilling to spend the money.

These points relate to “functional obsolescence”, where a design becomes obsolete because another one with more utility appears. The utility of iPads is somewhat restricted in the first place and has not increased rapidly compared to other devices. Also, the cost of iPads means that people are more likely to find more important things to spend their money on.

In short, despite their decline in sales, iPads should not be considered obsolete. If anything, it is their lack of obsolescence that has depressed sales.

All this suggests that obsolescence is not a simple phenomenon and manifests itself in different ways.
