Printing people
February 21, 2012 | Posted by Cameron Shelley in STV203
Technology Review has a piece about researchers who are using 3D printers to print muscle tissue. So far, the purpose of the research is modest: to create tissue that can be used for drug testing. Of course, the longer-term goals are more ambitious:
So far, Organovo has made only small pieces of tissue, but its ultimate goal is to use its 3-D printer to make complete organs for transplants. Because the organs would be printed from a patient’s own cells, there would be less danger of rejection.
This research joins other efforts to use 3D printing technology to create living tissues. Dr. Anthony Atala, for example, is pursuing a similar program with the goal of printing organs for transplant. You can see his recent TED talk below.
All this research sounds great! However, I wonder how far this work will take us. Will it eventually allow researchers to print entire people, for example? And if it could, what would we do with that ability?
High-tech weapons won’t clean up warfare
February 9, 2012 | Posted by Cameron Shelley in STV302
Philosopher Peter Singer has written a piece for the Boston Globe in which he argues that smart weapons like drones won’t clean up warfare. Singer points out that technological progress in weapons has sometimes been seen as bringing an end to warfare itself:
The poet John Donne predicted in 1621 that the invention of better cannons would mean that wars would “come to quicker ends than heretofore, and the great expence of bloud is avoyed.’’ Alfred Nobel thought the same of dynamite, as did the inventors of everything from the machine gun to the atomic bomb.
Of course, many of these inventions have most obviously succeeded in making warfare nastier and broadening its impact on civilian populations.
Thus, we should be cautious about claims that modern “smart” weapons will make warfare somehow cleaner or more acceptable. Remotely piloted drones, for example, seem to have made warfare more precise, allowing for “surgical” strikes against enemies, in contrast with the carpet bombing of past campaigns. In future, the hope is that, as drones become more computerized and thus more autonomous, clever programming will make them even more judicious killers than their erstwhile human masters:
For instance, there has been research, largely funded by the military, on how to create an “ethical governor’’ for unmanned weapons. Think Watson, the Artificial Intelligence program that won “Jeopardy,’’ given a law degree and dropped into war. Software would program robotic weapons to act ethically, such as only being able to fire in situations that conform to the Geneva Conventions.
Singer is rightly skeptical that such a plan will work out in any straightforward way. (See this previous blog post for more information.) Inevitably, the adoption of more autonomous drones will create problems that will be difficult to anticipate or to encode in an explicit set of rules, just as in the case of self-driving cars.
However, there may be a silver lining to this latest manifestation of the military-industrial complex. Developing and adopting high-tech war gear is damned expensive! The US military’s F-35 fighter program, for example, will cost $382 billion, according to The Economist. And the figure only gets better if you consider the program’s total cost:
What horrified the senators most was not the cost of buying F-35s but the cost of operating and supporting them: $1 trillion over the plane’s lifetime. Mr McCain described that estimate as “jaw-dropping”. The Pentagon guesses that it will cost a third more to run the F-35 than the aircraft it is replacing.
There are many reasons why budgets for high-tech military projects have become so astronomical. One of them, I submit, is that systems whose complexity grows at Moore’s-law rates are simply harder, and costlier, to develop.
Why is this awful acceleration of expenditure a good thing? Perhaps it is, insofar as it makes warfare too expensive to engage in lightly. The pricier war gets, especially in comparison to its expected rewards, the less apt policymakers are to pursue it. I should assure you, at this point, that I am not naive enough to suggest that expense is the only factor that gets considered when hostilities are contemplated, nor that the outbreak of war is determined by rational economic calculation. I merely assert that it is a factor, and that it may become more important as the expense of developing weapons systems climbs sky-high.
If so, then we may have cause to say something nice about ethical governors for drones. They may not make war any less ugly but they may make it prohibitively pricey.
“Unstoppable, autonomous and out of control.”
October 3, 2011 | Posted by Scott Campbell in STV100
The post title sounds a bit like the tag-line to a Hollywood blockbuster about talking robots intent on taking over the Earth, or maybe a ragtag group of misfits intent on taking over the Earth.
What I was trying to summarize is the deterministic feeling some people have about technology in general: that it is an inescapable and certain force that shapes human fate and social circumstances. It is often linked to the idea that technological change, understood as “progress,” is inevitable.
Writing on the Atlantic Monthly’s website, Alexis Madrigal observed this same kind of technological determinism in the acceptance or rejection of Facebook’s new “frictionless sharing” feature:
…where applications will be allowed to post about activities, like what news articles you’ve read, or what music you are listening to, without your explicitly deciding to share or “like” that bit of content (definition via http://www.informationweek.com/thebrainyard/news/social_networking_consumer/231700022).
At least one pundit feels this new and highly controversial feature is here to stay (“Why Facebook’s frictionless sharing is the future“), but Madrigal points out that not everyone agrees and some online services have declined to participate. In his words:
What’s important here, I think, is that Facebook is trying to push the idea that their version of ‘frictionless sharing’ is some kind of inevitable technological development about which people have no choice. “It’s like resisting cars, boyo!” But the idea that technologies run these independent paths with no intermediation from humans is far from established fact. People shape technologies as much as technologies shape people. When’s the last time you heard about supersonic flight? That was supposed to be the next big thing! But it had some problems and people said, “No, thanks.”
Indeed! Societies do reject technologies, and Madrigal is referring to the very same example we discuss in STV100: passenger supersonic transport planes. Though the French and British were able to cooperate for once and build the Concorde, the Americans never got that far, largely because the public rejected the technology outright. It wasn’t just that a plane hopping from New York to LA or Chicago would be a noisy nuisance, or that the technology was held up by technical issues that couldn’t be resolved (after all, the Concorde flew for decades with a very good record). The American SST represented the government-military-academic-industrial complex of the mid- to late 20th century, and rejecting it was a way of saying “No, thanks” to the entire idea that expensive, blindly-funded and military-derived technology was unstoppable or out of control. Thus, no American SST.
Societies can and do reject technologies all the time. Even massive technological systems, which seem incredibly hard to shift and may persist for decades, will eventually decline. To borrow from Thomas Hughes, for every Charles Darwin who is there to explain the rise and evolution of a technological system, there must also be an Edward Gibbon to chronicle the decline and fall. In the late 1990s, who could have predicted Microsoft would fall beneath Apple’s shadow? I suspect that Facebook will eventually meet a similar fate. More worrisome to many, and without blaming this directly on Facebook or even on technology, is how much our concept of privacy will change in the meantime.
“I’ve been working on the Facebook, all the livelong day”
August 9, 2011 | Posted by Scott Campbell in STV404
Is there a better technological symbol of Canadian nationhood than the railway? Unquestionably, the Canadian Pacific Railway was vital to the foundation of the country, and the image of Lord Strathcona hammering “The Last Spike” is one every school child must encounter at some point in their Canadian history lectures.
E. J. Pratt’s epic poem, “Towards the Last Spike” memorializes the moment, but according to one dissertation Pratt may be the exception to the rule, at least culturally speaking: “Canadian literature has in fact made very little deliberate effort to propagate the idea that the railway is a vital symbol of Canadian unity and identity.”
While investigating this, I was reading R. Douglas Francis’ The Technological Imperative in Canada and came across a remarkable passage describing the Victorian experience of railways, particularly among those who grew up in the pre-rail era:
It was only yesterday; but what a gulf between now and then! Then was the old world. Stage-coaches, more or less swift, riding-horses, pack-horses, highway-men, knights in armour, Norman invaders, Roman legions, Druids, Ancient Britons painted blue, and so forth — all these belong to the old period… But your railroad starts the new era, and we of a certain age belong to the new times and the old one. We are of the time of chivalry as well as the Black Prince or Sir Walter Manny. We are of the age of steam.
Could anyone in their mid-thirties or older not read that and think that instead of railroads, we ought to substitute the internet? How easily it can be rewritten to coincide with the opening decade of the 21st century:
It was only yesterday; but what a gulf between now and then! Then was the old world. Postcards, more or less swift, newspapers, books, the six o’clock news, TV networks, news anchors, hockey on the radio, encyclopedias printed on paper, and so forth — all these belong to the old period… But your internet starts the new era, and we of a certain age belong to the new times and the old one. We are of the time of fingers smudged with ink as well as the Black Prince of Hollinger International. We are of the age of electronics.
The whole thing could almost be drawn from the Beloit Mindset List, the annual tongue-in-cheek list of common knowledge that an 18-year-old entering university or college won’t have. (From this year’s list: “12. Clint Eastwood is better known as a sensitive director than as Dirty Harry.”) The list probably wouldn’t exist without email (it started as a popular email forward in the late 1990s), but ironically it has now been published in a book.
Progress? Everything old is new again? History repeating itself? Or am I just getting old?
More on nuclear risks
April 1, 2011 | Posted by Cameron Shelley in STV202
Yesterday brought more commentary from Science and Nature regarding what can be learned from the disaster at Fukushima. Let me continue the discussion from this post by noting some points relevant to risk assessment.
(Image courtesy of César via Wikimedia Commons.)
This article in Science notes that the possibility of a large earthquake in the region had already been raised in the scientific literature. Japanese researchers who excavated sediments in the region found evidence for a major earthquake that resulted in a large tsunami, one that had been recorded by Japanese historians in 869 AD. Their work also prompted them to estimate the hazard of another such quake occurring:
They estimated the Jogan earthquake’s magnitude at 8.3 and concluded that it could recur at 1000-year intervals. “The possibility of a large tsunami striking the Sendai Plain is high,” they wrote in a 2001 article in the Journal of Natural Disaster Science.
In spite of this article, the possibility of such a large quake and tsunami was not considered in risk assessments of the safety of the Fukushima plant. Yukinobu Okamura, the lead scientist in studies that confirmed the initial work, states that an expert panel did not heed his concerns during a review of the plant’s safety in 2008. The reasons for ignoring this concern remain unclear.
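To see why dismissing a “1000-year” event is a gamble, consider a back-of-envelope calculation (mine, not the article’s): if such quakes arrive roughly like a Poisson process with a 1000-year mean recurrence interval, the chance of one striking during a reactor’s operating life is small but far from negligible. The 40-year lifetime below is an assumed figure for illustration.

```python
import math

def recurrence_probability(interval_years: float, window_years: float) -> float:
    """Probability of at least one event within a window, assuming events
    arrive as a Poisson process with the given mean recurrence interval."""
    return 1.0 - math.exp(-window_years / interval_years)

# The ~1000-year interval comes from the 2001 study quoted above;
# the 40-year plant lifetime is an assumption for illustration.
p = recurrence_probability(1000, 40)
print(f"Chance of a Jogan-scale event in a 40-year plant life: {p:.1%}")
# -> about 3.9%: long odds in any given year, but hardly a hazard
#    that a safety review can dismiss outright.
```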
There is also uncertainty about the cause of the explosion in the spent-fuel storage pool for reactor 4. The purpose of this pool is to cool the reactor’s fuel when it is not in use, and to shield workers from the radiation it gives off. The pool exploded on March 15, four days after the initial disaster, yet calculations had suggested that such a problem should take several weeks to develop:
During normal operation, 7 meters of roughly 40°C water sit between the top of the fuel rods and the surface of the 1425-ton pool. The water is constantly circulated and replenished. There’s little doubt that temperatures in the pool would have risen steadily after power was lost. But several scientists have independently calculated that it would take much longer than 4 days—perhaps as much as 3 weeks—for the heat of the fresh fuel in the #4 pool to evaporate or boil off the water.
The upshot is that there is some failure mode for this pool that its designers and operators do not yet understand.
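For the curious, the sort of estimate the quoted scientists made is easy to reproduce in rough form. The pool figures (1425 tons of water at about 40°C) come from the Science quote above; the decay-heat figure is my own assumption, for illustration only, and the sketch ignores circulation, heat losses, and partial uncovering of the rods:

```python
# Back-of-envelope boil-off estimate for a spent-fuel pool.
WATER_MASS_KG = 1425e3   # the 1425-ton pool described in the article
START_TEMP_C = 40.0      # normal operating temperature, per the article
C_WATER = 4186.0         # specific heat of water, J/(kg*K)
L_VAPOR = 2.26e6         # latent heat of vaporization, J/kg
DECAY_HEAT_W = 2e6       # assumed decay heat of the stored fuel (2 MW)

# Energy to bring the whole pool to boiling, then to boil it all away.
heat_to_boiling = WATER_MASS_KG * C_WATER * (100.0 - START_TEMP_C)
heat_to_evaporate = WATER_MASS_KG * L_VAPOR
days = (heat_to_boiling + heat_to_evaporate) / DECAY_HEAT_W / 86400
print(f"Time to boil the pool dry: {days:.0f} days")
# -> about 21 days with these numbers, in line with the "perhaps as
#    much as 3 weeks" figure, and far longer than the 4 days observed.
```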
Finally, this article in Nature outlines some of the lessons from the Chernobyl disaster that might be applied to Fukushima. One of those lessons concerns the waning of international interest in the Chernobyl site once the dust had settled. Funding from international bodies is needed to study the continuing effects of radiation on the people and environment affected by the disaster there, as well as for the construction of new containment structures to prevent any further problems from arising.
But the international Chernobyl Shelter Fund that supports the US$1.4-billion effort still lacks about half of that cash, and the completion date has slipped by almost ten years since the shelter plan was agreed in principle in 2001.
The disaster at Fukushima will likely mobilize the international community to pony up the dough to get this work accomplished. Hopefully it will not take yet another disaster for the consequences of Fukushima to be properly understood and dealt with.
Among other things, these points serve to remind us that the likelihood of some events, and the hazards that they pose, are subject to uncertainty and disagreement. So, one of the unfortunate lessons of the Fukushima disaster is that we must avoid overconfidence in assessing the risks posed by new technologies.
Nuclear risks
March 30, 2011 | Posted by Cameron Shelley in STV202
Is now the time for a discussion of the pros and cons of nuclear power? In the aftermath of the disaster at the Fukushima I power plant, doubts about the advisability of nuclear power are proliferating like atoms of radioactive iodine. Given the “hot” climate, it would seem reasonable to postpone decisions about the future of nuclear power until the facts, and cooler heads, can prevail. This approach is recommended in a recent Globe and Mail article by uWaterloo Professor Jatin Nathwani:
There’s a compelling need for a perspective based on solid evidence and assessments to help guide our decisions as they pertain to management of the crisis and subsequently a plan for energy futures. In the unfolding tragedy in Japan – the earthquake and the tsunami – depicting the ferocity of Mother Nature to deliver unforgiving destruction and pain is the central story. And yet, we have grafted onto this bleak tale our anxieties about nuclear risks, driven largely by incomplete information.
I don’t know if “grafted” is the best choice of term here. After all, the Fukushima disaster has surely demonstrated that the ability of nuclear power plants to resist natural events such as earthquakes and tsunamis is relevant to a complete assessment of the risks posed by nuclear power. Certainly, we may have much to learn in this regard, as the notable past nuclear disasters of Three Mile Island and Chernobyl were man-made.
The risks involved in nuclear power are usefully outlined in this article by Elizabeth Kolbert. Professor Nathwani notes that relatively few people have been killed or injured from accidents in nuclear power plants. As Kolbert adds, the threats to life and limb from other power sources may be more considerable:
Every time there’s an accident, proponents of nuclear power point out that risks are also associated with other forms of energy. Coal mining implies mining disasters, and the pollution from coal combustion results in some ten thousand premature deaths in this country each year. Oil rigs explode, sometimes spectacularly, and so, on occasion, do natural-gas pipelines. Moreover, burning any kind of fossil fuel produces carbon-dioxide emissions, which, in addition to changing the world’s climate, alter the chemistry of the oceans.
So, nuclear power seems like a win in terms of public safety and climate-friendliness.
However, there are other risks to be considered, as Kolbert points out. One, of course, is terrorist attack. Another is the problem of evacuating people from the area of a power plant in the event of a disaster. Many plants reside near populated areas that would be difficult to evacuate. Also, there is the problem of what to do with the spent fuel:
After several decades and billions of dollars’ worth of studies, the U.S. still does not have a plan for developing a long-term storage facility for radioactive waste, much of which will remain dangerous for millennia.
Regulating nuclear power is expensive, in part because of people’s fears about it. Those fears, groundless or not, must be addressed, adding considerably to the cost of the system.
Then there are risks involved in simply postponing public discussion. One such risk is that, in the absence of much public interest in the matter (other matters tend to attract public attention more consistently than nuclear power), an attitude of complacency may set in. As mentioned in this previous posting, operators of nuclear plants prefer to simply not think about what could go really wrong with them, leading to a lack of preparation. Zealous public engagement certainly presents challenges, but so does public apathy.
Another matter to ponder is that, even when the facts about nuclear power have been gathered and consensus reached, they may be inadequate to determine public policy. Rare events, such as a devastating earthquake, are perhaps too difficult to predict with much accuracy. Also, simply amassing facts does not, by itself, necessarily lead to a consensus of interpretation among the public or among experts. Thus, multiple, mutually incompatible narratives about the future of nuclear power may fit equally well with the empirical record.
Finally, decisions about how to proceed with nuclear power are determined not only by whatever facts are available but by values as well. As Kolbert points out, nuclear power did not arrive on the (American) scene as the result of a rational calculation but, in part, as a means of reconciling Americans to the ongoing development of nuclear weapons. In the words of President Eisenhower at the ground-breaking of the Shippingport, PA, plant:
“My friends, through such measures as these, and through knowledge we are sure to gain from this new plant we begin today, I am confident that the atom will not be devoted exclusively to the destruction of man, but will be his mighty servant and tireless benefactor,” the President said.
We, as a society, are apparently still unsure about how nuclear power fits in with our priorities and our way of life. Time will surely bring new and relevant facts to light. However, it will also bring novel plant designs, new environmental circumstances, and unforeseen and persistent social challenges. So, it is not clear that tomorrow will be a more advantageous time to discuss nuclear power than is today.
Biotechnology and design
March 28, 2011 | Posted by Cameron Shelley in STV202, STV203
Check out this TEDx talk by Paul Wolpe concerning ethical issues in bioengineering. Wolpe outlines a number of ways in which the ability of researchers to design organisms presents challenges to our collective wisdom. Among the possibilities are:
- remote-controlled rats, that is, rats with cybernetic implants that allow people to guide their actions with a remote control, and
- rat-bots, that is, robots with brains composed of networked rat neurons.
One of the many ethical questions raised by this research, as Wolpe points out, is, “At what point is it not acceptable to deprive a ‘bot of either type of its autonomy?” As our creations get smarter and more independent, they will try to do things that their minders do not wish them to do. So, the minders will use the remote control to overrule the ‘bot. However, when a ‘bot gets smart enough and capable enough, does that veto power become unethical? I assume it would be unethical to implant such a remote control in a human being. So, where does the line fall?
(Image courtesy of Jeblad via Wikimedia Commons.)
In my classes on design, I point out that there are two principles of ethics (out of many) that are particularly important for designers:
- Things that are morally impermissible may be physically possible, and
- Things that are morally obligatory may be physically unnecessary.
The second point is usually good news for designers. It implies that it is possible to make the world a better place, through design. The first point presents the flip side, if you like. It implies that it is possible to make the world a worse place, through design. Clearly, the designer faces a dilemma: How to tell when a design will make things better or worse?
This bio-engineering research challenges us on many levels. What is the difference between living and non-living things? What sort of living things is it good to design and create? What limits ought there to be on the autonomy of those things? Both remote-control rats and rat-bots seem, to me, already to be pushing the limits of the permissible, if only because they seem so creepy and coercive. Have a look at the video and see what you think.
GMO mosquitoes
January 28, 2011 | Posted by Cameron Shelley in STV202, STV203
The Malaysian Institute of Medical Research has concluded an experiment in which 6000 genetically modified mosquitoes were released into the wild. The male mosquitoes were engineered to be able to mate with wild females but to be unable to produce offspring. They take mating opportunities away from fertile males, thus producing a drop in the local mosquito population. With fewer mosquitoes around, it is hoped that the incidence of dengue fever, which is transmitted by mosquito bites, will be significantly reduced. Results of the experiment await analysis.
(Image courtesy of USDA via Wikimedia Commons.)
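The logic of the scheme is worth a quick sketch. In the classic sterile-insect-technique model (due to Knipling), a female’s chance of a fertile mating falls with the fertile share of males, so flooding an area with sterile males drives the population down, generation after generation. The numbers below are illustrative assumptions, not figures from the Malaysian trial:

```python
# A minimal sketch of the sterile-male logic (the classic Knipling
# model), with made-up numbers; not the Malaysian team's actual model.
GROWTH_RATE = 5.0        # surviving offspring per female (assumed)
STERILE_RELEASE = 6000   # sterile males released each generation (assumed)
population = 2000        # starting wild population, half male (assumed)

for generation in range(1, 7):
    fertile_males = population / 2
    # A female's odds of a fertile mating = fertile share of all males.
    fertile_fraction = fertile_males / (fertile_males + STERILE_RELEASE)
    population *= GROWTH_RATE * fertile_fraction
    print(f"Generation {generation}: ~{population:.0f} mosquitoes")
# The fertile fraction shrinks every generation, so the decline
# accelerates: the population crashes within a handful of generations.
```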
The scheme is somewhat controversial because it was carried out without much in the way of prior consultation with the local population:
“I am surprised that they did this without prior announcement given the high level of concerns raised not just from the NGOs but also scientists and the local residents,” said Third World Network researcher, Lim Li Ching. “We don’t agree with this trial that has been conducted in such an untransparent way. There are many questions and not enough research has been done on the full consequences of this experiment.”
A very similar incident came to light recently on Grand Cayman Island. British biotech firm Oxitec released GM mosquitoes on the island in 2010 with essentially the same plan as the Malaysian researchers. The experiment generated controversy for the same reason, that is, the local population was not much consulted:
But that lack of a public debate doesn’t sit well with the collaborators in a big international project, in which Oxitec is a key member, to develop and test GM mosquitoes. The program, funded by a $19.7 million grant from the Bill & Melinda Gates Foundation and led by Anthony James of the University of California, Irvine, has spent years preparing a study site in the Mexican state of Chiapas for cage studies and a possible future release of another strain of Oxitec mosquitoes. The work includes extensive dialogues with citizen groups, regulators, academics and farmers. The project, one of Gates’s Grand Challenges in Global Health, would “never” release GM mosquitoes the way Oxitec has now done in Grand Cayman, says James.
Do the local residents have a right to greater participation in the introduction of GM animals?
The situation is reminiscent of the introduction of GM crops and foods. In North America, it was done with a minimum of public debate or notification. In Europe, debate was much more extensive. The result is that North Americans (perhaps unknowingly) grow and consume a great deal of GM food, whereas Europeans have proven quite reluctant to do the same.
Besides people’s right to know, the GM mosquito releases might also call to mind illicit medical experiments. The Tuskegee syphilis experiment involved doctors observing poor, black residents of Tuskegee, Alabama, as they endured the illness, in spite of the fact that effective treatments were readily available and the nature of syphilis was already well enough understood. Only last year, it was revealed that a number of soldiers, prisoners, and mental patients in Guatemala were subjected to a related syphilis study, also without their informed consent or offers of treatment. The GM mosquito experiments are not of the same variety and are intended to help reduce disease. Still, the conduct of experiments of a medical nature without the consent of the people who might be affected is troublesome.
More silver nanoparticles
January 21, 2011 | Posted by Cameron Shelley in STV202
Silver nanoparticles are microscopic pieces of silver, between 1 and 100 nm in size. That’s small! They have an antibiotic effect, that is, they tend to kill microbes on exposure. This effect has made them very attractive as an alternative to antibacterial chemicals. They have been impregnated into socks (to keep them from smelling so much) and into medical supplies such as face masks, vacuum cleaners, and food washers; they are even available simply by the jug.
(Nanosilver image courtesy of the Connections Website.)
Concerns have been raised about the eventual health effects of unleashing all this nanosilver into the environment. The particles can and do become detached from socks, jugs, etc., and get flushed down the drain, where they can accumulate in sewage sludge or the broader environment. The ecological effects of this accumulation are not well understood.
Of course, these issues have not stopped researchers from developing more applications. For example, researchers have recently developed “killer paper”, a food wrapping impregnated with nanosilver. The object of this development is to help prevent food from spoiling while in transit. Perhaps this treatment will prove more effective than current measures, such as heat treatment, refrigeration, and irradiation.
The research raises several questions. First, what will be the effects of the nanosilver on the environment? At present, no one knows. Second, is this technology really going to contribute most effectively to the prevention of food waste? According to Jonathan Bloom, author of American Wasteland: How America Throws Away Nearly Half of Its Food (and What We Can Do About It), Americans waste between 1/4 and 1/2 of all the food they produce. Although spoilage does contribute to this problem (have you ever discovered a rotten lettuce in the back of your fridge?), far more crucial are wastage in production, over-stocking in stores, and over-consumption at home and in restaurants. If we are really serious about reducing food waste, then we have bigger fish to fry (if you’ll excuse the expression) than inventing alternatives to current food preservation methods. From that perspective, “killer paper” appears to be more of a solution in search of a problem than a remedy to the problem of food waste.
Bell Labs and Beehives, but no more Kodachrome.
January 5, 2011 | Posted by Scott Campbell in STV100
A set of old computer photos has been showing up on a few blogs recently. The photos are the property of Lawrence Luckham, who worked at the famous Bell Labs in the late 1960s and captured a few of his colleagues and computers on film. The rest of my post will probably make more sense if you take a quick look at his photo gallery.
As you can see, there is a wide variety of hardware: big mainframes, like an IBM 360; much smaller minicomputers, like the Honeywell DDP-516; some experimental data terminals and the vast tape library.
What is more striking, however, is that the women clearly outnumber the men. This could be a product of Mr. Luckham’s photographic preferences, but it does offer some reassuring visual evidence that women really have been involved in the history of computing. Indeed, in the late 19th and early 20th century, the word “computer” likely referred to a person whose job involved manual computation, and a significant percentage of these human computers were women. When the modern era of computing began in the 1940s, many of the first programmers were women, including the famous Grace Hopper and the less famous Canadian Beatrice “Trixie” Worsley. Unfortunately, the number of women involved in computer science and the computing industry is declining, for a variety of reasons (UW has an excellent summer program for high-school girls that is trying to reverse this trend).
The brilliant colours in the photos are also impressive — and not just because they prove the 1960s weren’t in black and white! What they brought to my mind was the news from last week that the last roll of Kodachrome film was just processed, a year after Kodak stopped producing the film and the related developing chemicals.
Why would they stop, after 75 years of making the most popular colour photographic film? Digital cameras, of course, have made analog film obsolete. And what is inside a digital camera instead of film? Computers orders of magnitude smaller than the big iron in Luckham’s photos. More specifically, a special semiconductor sensor known as a CCD (Charge-Coupled Device) that captures the image, and a specialized microchip known as a DSP (Digital Signal Processor) that processes the image. And where were these two technologies invented? Bell Labs, of course. (The inventors of the CCD were recognized for their work with the 2009 Nobel Prize in Physics.)