
Modernism or decorum? April 24, 2015

Posted by Cameron Shelley in : STV202 , add a comment

I have been reading Schlender and Tetzeli’s new biography of Steve Jobs, “Becoming Steve Jobs”. I can recommend it both on its own merits and as a complement to Isaacson’s biography of Jobs. I noted an anecdote in the new book that illustrates a tension in design agendas that is of interest for this blog.

Steve Jobs was known for his support for modernism in design, that is, an emphasis on simplicity, minimalism, and the essentials. One facet of the modernist agenda is a certain disregard for traditions or local customs in design. Modernist housing, for example, tends not to resemble traditional housing designs.

A more traditional approach to design places emphasis on decorum (among other things). Roughly speaking, decorum refers to honoring custom and tradition, and designs that fit with their cultural context. This notion was promulgated by the Roman architect Vitruvius. Housing that displays decorum tends to resemble traditional housing and use materials that help it to fit in with surrounding buildings.

The tension between modernism and traditionalism in design is obvious. This tension was manifested in the design of the headquarters of Pixar, an animation company bought and run by Steve Jobs in 1986. Of course, Jobs wanted a modernist structure. However, Pixar co-founders John Lasseter and Ed Catmull had a different idea, as Schlender and Tetzeli relate (pp. 331-332):

Lasseter and Catmull also resisted the idea of a minimalist, glass-and-steel headquarters. It didn’t fit with either their industrial neighborhood or the rich, colorful, fantastical work being done by Pixar employees. “Pixar is warmer than Apple or NeXT,” says Lasseter. “We’re not about the technology, we’re about the stories and the characters and the human warmth.” They voiced their concerns to Tom Carlisle and Craig Paine, the architects Steve had hired for the job. Carlisle and Paine hired a photographer to shoot the brickwork of the lofts in the surrounding neighborhood, and in San Francisco. Then, at the end of one of the days when Steve was working from Pixar’s Point Richmond headquarters, they laid dozens of those photos out on the table of a conference room. “He walked in and I remember him looking at all these beautiful photographs, all the details, and he walked around and around,” remembers Lasseter. “Then he looked at me and he goes, ‘I get it, you guys are right. John, you’re right.’ He got it, and he became a giant advocate for that look.”

The building used bricks from a foundry in Arkansas that could reproduce the palette of bricks found in the earlier structures in Emeryville. Also, the buildings echo some of the industrial look of the Del Monte factory that used to occupy the site.

You can get some idea of the building from these photos of it, and also from the Google Street View perspective outside its main entrance. Swing around and look at the other buildings nearby.

Of course, Jobs got to indulge his own tastes more in the Apple Campus 2, which does not emphasize decorum and which, as a result, some have compared to a spaceship.

The comparison of these two buildings illustrates that the tension between modernism and decorum remains alive and in play, even for a modern technology giant.

Seeing with new eyes April 22, 2015

Posted by Cameron Shelley in : STV203, STV205 , add a comment

Gizmag has a short piece on an Italian “research studio” called Mhox that has proposed a project to manufacture replacement eyeballs. The project is known as EYE—Enhance Your Eye—and would use bioprinting technology to create its wares.

The replacement eyes would come in three varieties. “Heal” would replace the eyes of the visually impaired with eyeballs set for normal acuity. “Enhance” would replace normal eyes with eyes that are 15/10, that is, that can see at 15 feet what normal eyes can make out only 10 feet away. In addition, Enhance would allow for electronic filtering of vision, e.g., color enhancement. “Advance” would be the Enhance eyeball but also fitted with wireless connectivity, so that viewers can share what they see with others.
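The “15/10” figure is a Snellen fraction, and it is easy to turn into a single number. A small sketch (my own illustration, not anything from the Gizmag piece) that converts a Snellen fraction into decimal acuity, where 1.0 is normal (20/20) vision and larger numbers are sharper:

```python
# Hypothetical helper: convert a Snellen fraction such as "15/10" into
# decimal acuity (testing distance divided by the distance at which a
# normal eye resolves the same detail).

def decimal_acuity(snellen: str) -> float:
    """Convert a Snellen fraction like '15/10' to decimal acuity."""
    test_distance, normal_distance = (float(x) for x in snellen.split("/"))
    return test_distance / normal_distance

print(decimal_acuity("20/20"))  # 1.0, normal vision
print(decimal_acuity("15/10"))  # 1.5, the Enhance eyeball's claimed acuity
```

On this scale, the Enhance eyeball would be half again as sharp as a normal eye.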

The proposal raises many interesting issues. Designer Filippo Nasetti notes that “We believe it challenges the contemporary concepts of natural and synthetic”. On the face of it, any eyeball created on a 3D printer would qualify as synthetic simply because it did not result from the normal route of human growth. Arguably, the Heal could be regarded as natural to the extent that it accurately replicates a normal human eye. At a stretch, any of these designs could be regarded as natural in the sense that it would be only normal or natural for at least some people to want them.

These points suggest that the real conceptual challenge raised by this project is between what is normal and what is abnormal. People with visual defects would not “normally” expect to grow a new eyeball. It is certainly not normal at present for someone to have an eyeball equipped with WiFi. Successful development and marketing of these products could change that. On this view, what is normal is whatever is common or widely accepted, which can be quite malleable.

Normalcy can also refer to what is ideal. In this sense, the normal human body includes two arms, two legs, one head, no tail, and no WiFi. Deviations may be considered weird, disgusting, or even abominations. The recent, perhaps temporary, shelving of Google Glass suggests that people are wary about the presence of others with enhanced vision. Perhaps they would have similar qualms about people with enhanced eyeballs.

Having said that, amputees with prosthetic limbs have deservedly enjoyed broader acceptance, e.g., as Olympic athletes. Prostheses, once designed to appear rather medical or just odd, are now designed to celebrate their unusualness, without compromising the humanness of their users. Perhaps synthetic, super-eyeballs could also be normal, or even cool, in this sense.

[Image: MHOX EYE eyeball concept (Courtesy of MHOX Design)]

The group estimates that the upgraded eyeballs could be ready by about 2027.

Can science be understood backwards? April 21, 2015

Posted by Scott Campbell in : STV210, STV302 , add a comment

A recent paper in the arXiv pre-print archive by Google researchers came to my attention. Titled “On the Shoulders of Giants”, it explores how the frequency with which older research is cited has evolved:

First, how often are older articles cited and how has this changed over time. Second, how does the impact of older articles vary across different research fields. Third, is the change in the impact of older articles accelerating or slowing down. Fourth, are these trends different for much older articles.

Their analysis was based on articles published between 1990 and 2013 and their conclusions were rather interesting. In short, the number of older articles being cited is increasing: “In 2013, 36% of citations were to articles that are at least 10 years old; this fraction has grown 28% since 1990”, a trend that appears to be accelerating. Their explanation for the phenomenon is that it’s easier to locate older journal articles thanks to online databases and so “significant advances aren’t getting lost on the shelves” and can continue to influence researchers for many years.
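The statistic at the heart of the paper is simple to state precisely. A minimal sketch (my own illustration, not the Google researchers’ actual code) of computing, from a list of citation pairs, the fraction of a given year’s citations that reach back at least ten years:

```python
# Toy version of the paper's statistic: what fraction of citations made
# in a given year point to articles at least `age_threshold` years old?

def fraction_to_older(citations, year, age_threshold=10):
    """citations: iterable of (citing_year, cited_year) pairs."""
    this_year = [(c, p) for c, p in citations if c == year]
    older = [(c, p) for c, p in this_year if c - p >= age_threshold]
    return len(older) / len(this_year)

# Toy data mirroring the reported figure: in 2013, 36 of 100 citations
# reach back ten or more years.
data = [(2013, 2000)] * 36 + [(2013, 2010)] * 64
print(fraction_to_older(data, 2013))  # 0.36
```

The real analysis, of course, runs over Google Scholar’s full index rather than a hundred toy pairs, and breaks the figure down by field and by article age.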

The context that I learned of this research was through a fellow historian, who pointed out that the history of science might finally have begun to influence the actual practice of contemporary science. While I would agree that historians having a practical impact on worldly affairs in their own time does seem to be a good thing, I also think we should be careful what we wish for. Like any wish granted by a magic genie–in this case, Google Scholar’s massive database of scientific articles–the manner in which the wish is granted can be capricious and have unexpected side-effects.

Let me explain. Consider another paper (PDF) by another historian, Ian Milligan: “Illusionary Order: Online Databases, Optical Character Recognition and Canadian History, 1997-2010”. Milligan argues that online newspaper archives have started to skew historical research in an unforeseen way. As I’m sure the Google researchers would agree, easier access to the past via online databases–either newspaper archives or scientific journals–has made the past come alive in remarkable ways. However, there is a bias to be wary of. Milligan notes that with Canadian history dissertations:

In 1998, a year with 67 dissertations, the Toronto Star was cited 74 times. However it was cited 753 times in 2010, a year with 69 dissertations. Similar data appears in the Canadian Historical Review (CHR), a prestigious peer-reviewed journal.

What used to be a laborious process for historians, sitting in a library or archive at a microfilm reader, scanning each page of one local newspaper after another, has been replaced by sitting at home conducting keyword searches in an online database. And not just any database, but the online archives of the Toronto Star and The Globe and Mail. Since these two databases are relatively complete and easily accessible to the typical graduate student or Canadian historian, the research is biased towards those sources.

Google goes out of its way to obscure the ranking algorithm it uses for its online search services, for a variety of commercial reasons. However, one consequence for its users is that they can only see the online world through the lens of that hidden algorithm. (Others have said similar things before.)

As an aside, we see this all the time in undergraduate studies. In my courses, I call it “collaborating with Google”: a class of students is given a research assignment and all return essays with a small number of highly similar sources. When a likely keyword is entered into Google’s search engine, those highly similar sources all appear on the first page of results. Students may be working independently of each other, but not independently of the technology.

So, to bring it back to my original point, a question worth considering is how this bias will have an impact on the practice of scientific research. Yes, a window on the past has been opened, but (to mangle a metaphor) we should be careful about who controls the blinds on that window and who does the landscaping on the other side. From what I can tell, the Google researchers failed to consider this.

Bio-inspired recycling bins April 21, 2015

Posted by Cameron Shelley in : STV202 , add a comment

CityLab has posted a piece about a new design for recycling bins by Kin Wai Michael Siu of Hong Kong. Mr. Siu has observed that compliance with recycling programs in many major cities is quite low. The problem is not awareness: many cities have mounted educational campaigns to promote recycling. Instead, a significant part of the problem lies with the recycling bins themselves:

The usage of recycling bins in many densely populated cities is still low though recycling has been promoted for years. It is found that many people are more reluctant to use recycling bins because the bins are covered with dirt or damaged.

To me, this suggests that recycling bins should be emptied and cleaned regularly. However, Mr. Siu holds that the design of the bins themselves might help to overcome the problem.

As a result, he has designed bins whose mouths resemble the gaping maws of baby birds. See photos at the A’Design Awards site. The idea is that people will enjoy the appearance of the new bins and be motivated to “feed” them:

Green Hunger’s image of baby birds, i.e. the posture of waiting to be fed, urges an immediate participation in recycling. … The participants, especially children, loved the idea of baby birds as if they were feeding them. It can be concluded that Green Hunger makes recycling become a pleasurable daily practice for everyone, including the young and disabled.

To help with sorting, the bins will even signal instructions to people’s mobile phones. Plus, the bins have been designed to be easy to clean.

Several design agendas are evident in this work. Clearly, the overall agenda is environmentalism, protection of the environment from pollution in the form of trash. Within this agenda, one aim of the design is to make garbage sorting and disposal fun for people. In this respect, it is reminiscent of the World’s Deepest Bin, a trash receptacle that played a cartoonish noise whenever things were thrown in it. In addition, the design is also an example of biophilia, that is, the imitation of natural objects in artifacts as a means of generating positive feelings in people who interact with them. The point of these measures is to counteract the technostress that people feel when interacting with something that they consider to be disgusting or contagious.

The Green Hunger bins appear to have succeeded in their initial trials. The question remains: Would they succeed in general? Would you like to feed trash to these birds?

Cryogenic suspension April 17, 2015

Posted by Cameron Shelley in : STV203 , comments closed

Cryogenic suspension refers to the freezing of a deceased person in the hope that they may be thawed at a future time when medical know-how will allow them to be resuscitated. The first person subjected to this procedure was apparently Dr. James Bedford, a professor of psychology at the University of California, who was frozen after his legal death in 1967. His body remains stored in a facility at the Alcor Life Extension Foundation. He has been joined there by about 130 others, including baseball star Ted Williams.

Motherboard reports that the system now has its youngest member, 2-year old Matheryn Naovaratpong of Thailand. Matheryn was diagnosed with a rare form of childhood brain cancer. Despite aggressive treatment, the condition proved fatal earlier this year. Rather than giving up, her parents had neurosurgeons from Alcor extract and freeze the remainder of her brain. Their hope is that, someday, medical technology will enable doctors to cure the cancer and also reconstruct a body for her brain, perhaps through 3D printing.

The idea of freezing people and resuscitating them in future has long been a staple of science fiction. For example, in the cartoon series Futurama, pizza delivery boy Philip J. Fry is accidentally frozen and then revived in the 31st Century, when preserved human heads are often mounted on robot bodies to sustain them indefinitely.

Current technologies make the notion seem somewhat plausible. Embryos are routinely frozen and thawed. 3D printed organs, like external ears, are also in development. In a previous blog posting, I wondered when someone would start printing off entire humans. It seems that this notion has become more widespread. Also, its resemblance to religious doctrines about resurrection and eternal life may lend it some credence as well.

Still, the procedure comes with no guarantees. It may simply not work. If it does work, it may not provide the expected result. For example, a reconstructed brain fitted to a synthesized body might be considered more a facsimile of a person, like a clone, than the same person revived. However, would the illusion of continuity of personal identity be sufficient? Or, by selling what may well be a mere illusion, is Alcor deceiving people like the parents of poor Matheryn Naovaratpong?

DNA databases and politics April 14, 2015

Posted by Cameron Shelley in : STV203 , comments closed

Most modern police services have access to some sort of DNA database. Typically, the database contains information about the DNA of convicted (or, sometimes, suspected) criminals. DNA taken from crime scenes can be analyzed and compared to DNA information in the database. A “match” between a sample and an individual’s DNA casts suspicion on that individual.
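To make the idea of a “match” concrete, here is a much-simplified sketch of the comparison. Real forensic systems such as CODIS compare pairs of alleles at a set of standardized STR loci; the locus names below are real CODIS loci, but the profiles and the matching logic are illustrative only, not any actual forensic software:

```python
# Simplified STR-profile comparison: a crime-scene sample (possibly
# partial) "matches" a database entry if the allele pair agrees at
# every locus the sample covers. Allele order within a pair is ignored.

def profiles_match(sample, reference):
    """sample, reference: dicts mapping locus name -> (allele, allele)."""
    return all(
        sorted(sample[locus]) == sorted(reference.get(locus, ()))
        for locus in sample
    )

crime_scene = {"D8S1179": (12, 14), "TH01": (7, 9)}            # partial profile
database_entry = {"D8S1179": (14, 12), "TH01": (7, 9), "FGA": (20, 22)}

print(profiles_match(crime_scene, database_entry))  # True
```

Note that a match on a partial profile only narrows the field; it casts suspicion rather than establishing identity, which is part of why database composition matters so much.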

One reason for placing the DNA of convicted criminals in the database is that they are at a greater risk of committing future offenses than are members of the general public. Having convicted criminals in the database, then, effectively gives police a list of possible suspects when further crimes are committed.

Of course, past criminal activity is not the only conceivable predictor of future crime. For example, genetic conditions themselves have sometimes been thought to enhance the risk of criminality. In the 1960s, the XYY syndrome (aka “super-male syndrome”) was linked to criminal behavior. The research caused much concern about men with the syndrome, with several governments considering taking special precautions against them. Happily, further research tended to refute this bit of genetic determinism. However, the idea that genetics can predict criminality persists today, e.g., in current research on genetics and sex crimes.

To past criminal behavior or genetic endowment we might add childhood delinquency. Alaska state senator Charlie Huggins, R-Wasilla, has suggested that school records might be used to place children in the forensic DNA database:

“With some degree of confidence, I think that by the time particularly young men, but maybe young men and women, are in middle school, we can already predict the ones we need to get their DNA samples,” Huggins told a room that included the state’s corrections commissioner, Ron Taylor. “Because they’re going to go see Mr. Taylor in a few years. That’s unfortunate but it’s all too true.”

In other words, in the view of Sen. Huggins, bad behavior at school predicts criminal behavior in later life. The state should prepare itself by requiring a DNA sample for its forensic records.

I do not know if the Senator’s claim is true. Even if it is true to some extent, it raises the issue of how authorities should respond. The Senator’s view seems to be to accept the situation as inevitable and to add delinquents preemptively to the list of suspects of future crimes. Joshua Decker, the executive director of the American Civil Liberties Union of Alaska, takes exception to this view:

“That’s certainly a cynical way of viewing the world,” Decker said. “And we would hope that no one, particularly an Alaska state senator, would write off some of our kids.”

If criminality is not regarded as inevitable, then it may be that delinquency can be approached through interventions, as noted by Marny Rivera, an associate professor at the University of Alaska Anchorage’s Justice Center:

And if the state gets information suggesting that some children are predisposed toward criminal behavior, Rivera said, “we might use some of that to provide services or assistance to reduce the likelihood of bad behavior.”

Such a response is not so high-tech as DNA fingerprinting but is certainly worth consideration.

This case draws attention to a political issue concerning forensic DNA databases. Inclusion in a database is justified by increased risk of criminal behavior as predicted by past criminal convictions. The same logic could be extended to other predictors, such as an individual’s genetics or their school record. As such, construction of forensic databases can become a general policy response to the problem posed by criminality. It may even be seen as an alternative to programs aimed at preventing people from falling into criminality in the first place.

To what extent should expansion of forensic DNA databases be seen as the preferred way to deal with people, even children, who seem especially at risk of becoming criminals?

DNN: 13 April 2015 April 13, 2015

Posted by Cameron Shelley in : STV202 , comments closed

In future, we can look forward to many interactions with small drones. What kind of interactions will those be?

Interactions with drones can be hostile, comforting, scary, annoying, or enlightening. Which ones would you prefer?

Abra-CGA-dabra! April 11, 2015

Posted by Scott Campbell in : STV210, STV100 , comments closed

The British science fiction writer Arthur C. Clarke was, among other things, famous for his three “Laws”. Since they haven’t been mentioned yet on our blog, I’ll recount all three:

  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.

The third law was always my favourite, but according to Wikipedia, the American writer Charles Fort (known for his interest in supernatural phenomena) presaged it by a good 40 years in his 1932 book Wild Talents with:

“…a performance that may some day be considered understandable, but that, in these primitive times, so transcends what is said to be the known that it is what I mean by magic.”

I felt like I was witnessing this kind of technological sorcery when I watched the following video. It’s a “demo”, which in personal computing culture is a visual or auditory feat intended to demonstrate a programmer’s skill and ability to exploit the native hardware in an impressive or astounding way. That is a stunted way of saying that a demo is a kind of magic. To an extent, this demo speaks for itself, but for maximum effect please watch it in full-screen mode, at the highest resolution, and with the sound on!

So, what did I just see, you may be asking, and what was so magical? In short, the CGA video hardware is normally capable of producing just four colours at once (typically black, cyan, magenta, and white) and the speaker is only capable of one mono tone at a time. Somehow (insert the magic here) the demo programmers have managed 1,000 colours and, by the end, something that sounds like real music.
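To see where the four-colour ceiling comes from: in CGA’s standard 320×200 graphics mode, each pixel is stored as just two bits, so a byte of video memory holds four pixels. A small sketch of that packing (my own illustration of the memory layout; the demo’s actual magic involves low-level tricks, such as exploiting the composite video output, that go far beyond this):

```python
# CGA-style 2-bits-per-pixel packing: four pixels per byte, leftmost
# pixel in the high bits. With only 2 bits, each pixel can select just
# one of 4 palette entries -- hence the four-colour limit.

def pack_pixels(pixels):
    """Pack 2-bit pixel values (0-3) four per byte; assumes len(pixels)
    is a multiple of 4."""
    out = bytearray()
    for i in range(0, len(pixels), 4):
        byte = 0
        for p in pixels[i:i + 4]:
            byte = (byte << 2) | (p & 0b11)
        out.append(byte)
    return bytes(out)

def unpack_byte(b):
    """Recover the four 2-bit pixels stored in one byte."""
    return [(b >> shift) & 0b11 for shift in (6, 4, 2, 0)]

packed = pack_pixels([0, 1, 2, 3])
print(packed[0])          # 27, i.e. 0b00011011
print(unpack_byte(27))    # [0, 1, 2, 3]
```

Getting a thousand colours out of a format like this is exactly why the demo reads as sorcery.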

The amazing thing here is that the technology is from 1981. It’s an IBM PC: commodity, business-oriented hardware from the early 1980s. It was the standard business personal computer of its time, never intended to compete with a home personal computer like the Commodore 64 that could be used for games (the C64, for example, had a special microchip just for audio, as the demo notes).

The demo is not sufficiently advanced technology or science, just sufficiently determined effort to exploit as much as possible of the 34-year-old hardware, in such a way that even people in 2015 were stunned when they first saw the demo. I may as well invoke Clarke’s first and second laws here as well, because the demo really does appear to achieve the impossible, and if you’re remotely familiar with the computer hardware of the time, you can’t help but grin at the achievement. Or sorcery.

So why post this? In part, who doesn’t like a magic show? But also, it serves as a reminder that obsolescence remains difficult to define. It’s hard to even find an IBM PC that old in working condition, let alone find a group of people this devoted to one.

Cubed, by Nikil Saval April 10, 2015

Posted by Cameron Shelley in : STV202 , comments closed

I recently finished reading “Cubed: A secret history of the workplace” by Nikil Saval. The book relates the development of the modern office and its inhabitants, white-collar workers, from the nineteenth century to the modern day. It has deservedly received good reviews. The title refers to the fact that the cubicle has been the most notable design outcome of this development, with all the social and managerial practices that it embodies.

I will not attempt a complete review but it will be instructive to compare two social agendas that Saval identifies in his history. The first is Taylorism, named after the famous efficiency expert Frederick Taylor. On Saval’s account (pp. 45ff), Taylor held the view that manual laborers tended to work as slowly as they could manage without getting fired. Workers spent too much time smoking and chatting and not enough working for Taylor’s liking. His solution was what is now called deskilling. That is, he broke down work routines into component movements, discovered how each movement might be optimized, and enforced the resulting routines on workers. The result was that the knowledge required to do the work was displaced from the laborers themselves and built into their routines instead. Work was done more efficiently and could be performed by cheaper, unskilled workers.

Taylorism also accelerated a similar reorganization of office work through what was called “scientific management”. Such managers recommended a strongly hierarchical model of office organization, with managers directing the paperwork of their subordinates. W. H. Leffingwell, one of Taylor’s disciples, penned a large study showing how the circulation of papers, envelopes, pens, etc. through the office space could be made more efficient. The office itself was altered accordingly. Office spaces became large, open floors in high-rise buildings like the Larkin Building. Documents were stored in banks of file cabinets instead of the desks of individuals. The “efficiency desk”, minus the plethora of pigeon holes and drawers of Victorian desks, was configured into large grids on the office floor. The system, not the individual, was supreme, often overseen literally from above by managers stationed on platforms or balconies.

In the 1960s, another agenda for the office space arose (pp. 220ff). It centered on the “knowledge worker“, the well-educated, white-collar employee whose initiative was crucial to the success of their employers. For work purposes, the most valuable attribute of knowledge workers is not their ability to follow directions from above but their personal initiative guided by specialized knowledge tucked away in their individual brains. Such workers were most productive when left more to their own devices and prompted into casual encounters with each other.

Office design responded in a number of ways. For example, Robert Propst designed the Action Office, a set of reconfigurable office furniture, including tables, chairs, cabinets and partitions, that knowledge workers could assemble according to their immediate needs. In addition, office design consultants produced open floor plans that would allow for flexible work arrangements and also force fortuitous, unplanned encounters between employees, from which crucial innovations would fly like sparks from an anvil.

As is well known, the unfortunate product of the Action Office was its cheaper and more popular derivative, the cubicle, for which the book is named. Cubicle partitions provide the illusion of privacy and autonomy without depriving management of the control over office work that they have been largely unwilling to give up.

Saval explores these and other agendas that have been applied to the organization of office work, and their effects on office design and life. For anyone with an interest in this important topic, the book is well worth the read.


“Office-Cubicals-5205” by Loadmaster (David R. Tribble)

“Man was born free yet everywhere he is in cubicles.” —Saval, after Rousseau.

Navigation apps and personal safety April 8, 2015

Posted by Cameron Shelley in : STV202 , comments closed

Personal navigation assistance is a popular category of smartphone app. Basic navigation apps simply provide directions from point A to point B. More complex ones consider mode of travel, time of day, traffic conditions, etc. One more controversial type of navigation app is the so-called ghetto-avoidance app. These apps typically combine navigational directions with information gleaned from police records with the object of helping people avoid unsavory places.

Of course, people have a legitimate interest in avoiding risk of harm to themselves. However, there is concern that the risk assessments made by the apps reflect not actual risk but prejudice, e.g., helping white people to avoid minorities. The whole idea is unpleasantly reminiscent of redlining, the old practice of marking out low-income or minority neighbourhoods and penalizing their residents.

With this in mind, I was interested to read this piece in Slate on another safety-oriented navigation app called Rudder. Rudder provides users with walking directions that favour routes with good lighting:

Rudder’s data comes from municipal records on local lighting, and so far the service has light information for 12 cities, including Boston, Chicago, and San Francisco, plus international cities like Paris and Vancouver. Rudder can’t offer well-lit routes in other places yet, but more cities should be added soon.

The target audience is college students, particularly female ones, I assume.

The approach makes sense. Good lighting is thought to reduce risk of assault. At the very least, good lighting allows people to monitor their surroundings and make informed risk assessments for themselves. Also, on the broken-windows theory of crime, people looking to commit crimes would take the absence of lighting as a sign that criminal behaviour is more tolerated there. Avoiding such locations would then help to avoid people with criminal intent.
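The routing problem this implies is an ordinary shortest-path search with a twist: poorly lit street segments are made to “cost” more than their actual length. A sketch of how an app like Rudder might do this; the graph, distances, and penalty factor here are all hypothetical, not anything Rudder has published:

```python
# Lighting-weighted route search: Dijkstra's algorithm where each
# segment's effective cost is its length, multiplied by a penalty
# when the segment is poorly lit.

import heapq

def safest_route(graph, start, goal, dark_penalty=3.0):
    """graph: {node: [(neighbour, metres, is_lit), ...]}.
    Returns (total effective cost, path) or None if unreachable."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metres, lit in graph.get(node, []):
            if nxt not in seen:
                step = metres * (1.0 if lit else dark_penalty)
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

campus = {
    "dorm":    [("alley", 100, False), ("main_st", 150, True)],
    "alley":   [("library", 100, False)],
    "main_st": [("library", 120, True)],
}
print(safest_route(campus, "dorm", "library"))
# (270.0, ['dorm', 'main_st', 'library']): the well-lit main street
# (270 m walked) beats the shorter but dark alley (200 m, cost 600).
```

Everything interesting, of course, hides in the data: which segments count as “lit” comes from municipal records, and the penalty factor encodes a value judgment about how much extra walking a darker street is worth.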

There remains the possibility that Rudder would simply help to perpetuate the stigmatization of certain neighbourhoods. Low-income and minority areas may attract less spending on public infrastructure such as street lighting, effectively guiding users toward whiter, more affluent places. It may also be that women who do not use the app, or who ignore its directions, and who then suffer an assault will be blamed for their predicament.

In the end, it may be that campaigns to effect better street lighting and public safety would serve college women better than apps that help them avoid the dark.
