Sliding injuries April 26, 2012 Posted by Cameron Shelley in: STV202, comments closed
(Deutsche Fotothek/Wikimedia commons)
Tara Parker-Pope of the New York Times points out how children may be injured on slides because their parents go down with them. What happens is that parents sometimes use the slide with their children in their laps, either for the fun of it (admit it!) or at the request of reluctant children. Unfortunately, this configuration of sliding parent and child can have an unanticipated outcome:
But without warning, Hannah’s sneaker caught on the side of the slide. Although Ms. Dickman grabbed the leg and unstuck her daughter’s foot, by the time they reached the ground, the girl was whimpering and could not walk. A doctor’s visit later revealed a fractured tibia.
The reason for the increased risk is that the impact of the child’s foot on the side of the slide is harder due to the extra momentum imparted by the weight of the parent. Children sliding on their own who get a shoe caught on the side can simply stop and extricate themselves.
Of course, the risk can be mitigated through technique, either by adults not sliding with children in their laps or, at least, by removing the children’s shoes and making sure their feet do not touch the sides of the slide. There might also be some possibilities in design, perhaps having a hoop at the slide entrance that is too small for adults to fit through easily so as to discourage them from using the slide. Any other ideas?
Stay in your lane! January 24, 2012 Posted by Cameron Shelley in: STV202, STV302, comments closed
The New York Times has an article discussing computerized systems that allow cars to stay in their lanes by themselves. Basically, forward-facing cameras on the rear-view mirrors attempt to track the painted lines on the road and keep the car in between them. If the car begins to drift out of the lane, then the system can display a message, issue a verbal warning, or even instruct the steering system to bring the car back within its lane. The system can be disabled by the driver and remains inactive if the car’s turn signal is on.
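The decision logic described above might be sketched roughly as follows. This is a hypothetical illustration, not the actual product code; the offset thresholds and function names are my own assumptions:

```python
# Hypothetical sketch of one cycle of a lane-keeping system.
# The thresholds below are invented for illustration.

WARN_THRESHOLD_M = 0.3    # drift from lane centre before warning
STEER_THRESHOLD_M = 0.5   # drift before corrective steering

def lane_keeping_step(lane_offset_m, turn_signal_on, lines_visible):
    """Return the action the system should take this cycle."""
    # The article notes the system stays inactive when the driver
    # signals a lane change or the cameras cannot see the lines.
    if turn_signal_on or not lines_visible:
        return "inactive"
    drift = abs(lane_offset_m)
    if drift >= STEER_THRESHOLD_M:
        return "steer_back"
    if drift >= WARN_THRESHOLD_M:
        return "warn_driver"
    return "no_action"
```

Note that the "inactive" cases come first: a system like this must fail quietly when its inputs are unreliable, which is presumably why glare and faded markings merely disable it rather than cause erratic steering.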
(Don O’Brien/Wikimedia Commons)
The basic idea is definitely a win. After all, when a car leaves its lane unintentionally, the risk of an accident increases. The article also notes some of the limitations of this kind of system. For example, marks on the roadway may not be very distinct and thus would be difficult to follow. Also, glare from the sun or from the headlights of oncoming cars can wash out the image in the cameras, rendering the system unusable. I presume that the system is programmed not to take any action when it detects such circumstances.
Another issue pointed out in the article is that of risk compensation, the tendency of people to increase their acceptable level of risk due to their knowledge that safety equipment is present. If drivers believe that their cars will automatically keep them in their lanes, then they may cease to pay close attention to the road, raising the risk of an accident due to some unnoticed problem. The likelihood of such behavior increases further given that most drivers will have other gear, e.g., cell phones, that soak up any spare attention.
I would just add that there is the potential for privacy and security issues as well. The article does not say, but I imagine that the system tracks the history of its use in an on-board computer. Thus, someone could access the car’s computer and figure out whether or not it left its lane. It might be possible to correlate this information with any GPS or other data available in order to reconstruct a complete log of the car’s whereabouts and behavior. Who should be allowed access to this data? Of course, the police could obtain it with a court order. But does it belong to the car’s owner or the manufacturer? Car manufacturers might find uses for the data, perhaps to mine it to analyze how their cars are actually driven on the roads, and how they perform under different conditions. Nissan supplies similar data from its cars to a social network called Car Wings that it runs. This network tells drivers how they have been doing, compared to their past history and to other drivers. These comparisons are intended to help drivers use their cars more efficiently. Will data from lane-keeping systems end up on a proprietary network? Will manufacturers strike a deal with Facebook to post the data there?
Then there is the security issue. The computer systems in cars are vulnerable to hacking, even from outside the car itself. Security researchers have been able to hack into cars and control the vehicle brakes, one wheel at a time, for example. A car with an autonomous lane-keeping system might allow a malicious intruder the opportunity to take over the steering system as well. It is important for the acceptance of more automated automobiles that they do not become more vulnerable to outside interference.
Is phoning/texting while driving addictive? December 20, 2011 Posted by Cameron Shelley in: STV202, STV302, comments closed
Here is an interesting piece from the New York Times on how to understand why drivers use their phones while driving, even when they understand the issues attached to that behavior. The debate over driving while distracted by networked gizmos is not new (see here, for example). As distracted driving continues to contribute to accidents, calls for action increase. The NTSB recently called for a ban on talking or texting on phones while driving throughout the US. Ontario recently enacted a ban on hand-held cell phone use while driving.
However, such laws seem to be more honored in the breach than in the observance. Drivers in BC, for example, seem to be largely ignoring the ban in their province. Furthermore, it is not clear how well such a ban could be enforced.
So, why do people engage in behavior they know to be risky and that has, in some places, been made illegal? Driving while texting has often been compared with driving while drunk, which poses a similar risk of injury. Yet, the analogy does not help to explain the behavior, as Dr. Greenfield of the University of Connecticut School of Medicine notes:
… people who drive drunk do not find any satisfaction in doing so. In contrast, checking e-mail or chatting while driving might relieve the tedium of being behind the wheel.
Instead, Greenfield and others compare driving while distracted with smoking, an addictive activity that some people also enjoy in the car.
Although cell phones are clearly not narcotics, they do have qualities that could make them addictive or, at least, habit-forming:
Part of the lure of smartphones, he said, is that they randomly dispense valuable information. People do not know when an urgent or interesting e-mail or text will come in, so they feel compelled to check all the time.
“The unpredictability makes it incredibly irresistible,” Dr. Greenfield said. “It’s the most extinction-resistant form of habit.”
In other words, cell phones provide a randomly scheduled operant conditioning regime to the user, a well-known and powerful way of creating behaviors that become ingrained and are difficult to undo (“extinguish”).
Dr. Paul Atchley of the University of Kansas argues that addiction is not the right notion, instead preferring an economic analysis. In his view, people respond quickly to cell phone beeps and burps because the information they provide rapidly loses value. A text that has just arrived may contain “hot news” that is most informative if read right away. The same text becomes less informative or valuable if the user puts off reading it for a while. It gets stale, and newer items pop up in the meantime, which tends to devalue the old ones.
Along the same lines, Clifford Nass of Stanford University points out that texts contain not just any old information but often convey information from friends. Because people are social animals, such information is important to them and, thus, they are responsive to it:
Drivers are typically isolated and alone, he said, and humans are fundamentally social animals.
The ring of a phone or the ping of a text becomes a promise of human connection, which is “like catnip for humans,” Dr. Nass said.
“When you tap into a totally fundamental, universal human impulse,” he added, “it’s very hard to stop.”
Each of these explanations suggests various approaches to address the problem. Perhaps cell phones could have a “driver mode”, like airplane mode, that disables or re-schedules the behavior of the cell phone. If the cell phone provided messages on a regular schedule, say every 5 minutes, then the compulsive reinforcement could be reduced or avoided. The phone could even generate boring auto-messages (spam) for the purpose. After a while, perhaps drivers would no longer care so much about that last beep and leave the phone on the “hook”.
Outgoing texts or calls could also be diverted to a temporary storage queue, for delivery after the driving mode is turned off. In that way, there would be less motivation for sending the message during driving itself instead of simply waiting for a break in the trip.
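The two mechanisms just described, releasing incoming messages only on a fixed schedule and queuing outgoing ones until the trip ends, could be sketched as follows. Everything here (class name, interval, method names) is my own invention, purely to make the idea concrete:

```python
# A minimal sketch of the hypothetical "driver mode" discussed above.
from collections import deque

class DriverMode:
    """Hold incoming texts and release them only on a fixed schedule."""

    def __init__(self, release_interval_s=300):   # e.g. every 5 minutes
        self.release_interval_s = release_interval_s
        self.inbox = deque()    # held incoming messages
        self.outbox = deque()   # outgoing messages queued for later
        self._last_release_s = 0.0

    def receive(self, message):
        # Incoming message is held silently; no beep reaches the driver.
        self.inbox.append(message)

    def send(self, message):
        # Outgoing message is queued instead of being sent immediately.
        self.outbox.append(message)

    def tick(self, now_s):
        """Release any held messages, but only on the regular schedule."""
        if now_s - self._last_release_s >= self.release_interval_s:
            self._last_release_s = now_s
            released = list(self.inbox)
            self.inbox.clear()
            return released
        return []

    def end_trip(self):
        """Deliver everything queued once driving mode is switched off."""
        sent = list(self.outbox)
        self.outbox.clear()
        return sent
```

The point of the fixed schedule is to break the variable-ratio reinforcement that Dr. Greenfield describes: if the phone only ever produces messages every five minutes, there is nothing to be gained by checking it in between.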
Of course, measures like these would work only so long as drivers honored the “drive mode”. They could cheat just by turning drive mode off and returning to normal operating mode. Dr. Atchley’s work suggests that there might be a way of dealing with this problem also. He tested teens to see if they would accept some sort of reward in lieu of receiving a text right away. He found that they would. Perhaps turning on “driving mode” could be associated with a reward which would be diminished or lost if it is turned off before the trip is over. We might, in effect, gamify driving without texting.
I note that this approach is taken by the TextNoMore Android app. It will be interesting to see if it works.
There is one more factor that may contribute to texting/phoning while driving. We have all seen other people doing so while driving virtuously ourselves. Next time you are on the road, glance at the drivers of other cars around you and you will soon note some who seem to be looking down at their flies quite a bit, or who have a phone nailed to their ear. Being social animals, people will always feel the sting of seeing scofflaws getting away with something. A natural response is to feel that this situation is unfair: Why do they get the benefit of texting while driving without suffering any penalty, while I do not? One way in which people seek redress is to engage in the illicit behavior themselves.
Perhaps a way to address this issue is to publicly shame those who text and drive. I do not mean they should be put in the stocks in the town square. Think instead of those radar speed signs that measure and display your speed as you pass them by. These signs seem to be somewhat effective in getting people to slow down. I suggest that similar signs be developed that can detect the presence of phone/text signals emanating from cars in traffic and display this fact for all to see. In that way, transgressors can be reminded to use “drive mode” and everyone else can see this occur. Then the appearance of unfairness is also addressed. Of course, the coverage of such signs must be limited, so it is not a perfect solution.
Danger! Texting! December 1, 2011 Posted by Cameron Shelley in: STV202, STV302, comments closed
From Technology Review comes this brief article about a smartphone app that warns its users of approaching cars. The app is called WalkSafe and is being developed by researchers at Dartmouth College.
This device brings to mind a trope about how people distracted by their gadgets do dumb things, and how they may be protected from their folly. In 2006, there was Rick Mercer’s Blackberry helmet to protect the addled craniums of Blackberry addicts. In 2008, there was a story about padding lampposts in London to soften the blow as Blackberry addicts walked heedlessly into them. Earlier this year, there was the actual story of a woman who fell into a fountain in a shopping mall while texting, which was captured by CCTV cameras and posted to YouTube. More recently, Rick Mercer ranted about the people he almost ran over while they crossed the street, texting without looking:
The WalkSafe app will help to alleviate this problem. Maybe?
As ever, one first worries about the miracle of risk compensation. Recall this earlier discussion of the aware car, a system that monitors drivers for symptoms of exactly the same sort of distraction. A potential problem is that such a system could actually encourage drivers to indulge in distractions, under the impression that the system will save them. Similarly, pedestrians busily texting may assume that WalkSafe will let them know if a car is approaching, at least on the camera side of their phone. In that event, having outsourced their situational awareness to their gear, pedestrians may walk and text even more obliviously than before. Such behavior could negate any safety gains provided by the app.
Here is my suggestion: Create an app that temporarily locks out the texting function of the smartphone when the carrier is in a crosswalk. Many crosswalks in Canada are equipped with speakers that beep or chirp in order to alert blind pedestrians. Perhaps the smartphone mike could pick up the noise, lock out texting, and snap texters into a heightened state of situational awareness, allowing them to save themselves from collisions.
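Detecting a known chirp tone in a microphone signal is not exotic; something like the Goertzel algorithm would do. The sketch below is purely illustrative — the 2800 Hz tone, the sample rate, and the power threshold are all assumptions on my part, not properties of any real crosswalk speaker:

```python
# Toy sketch of the crosswalk-chirp lockout idea: estimate the power
# at a single frequency (Goertzel algorithm) and lock texting while
# the chirp tone is present. All constants are assumed for illustration.
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of one frequency bin in an audio frame (Goertzel)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def texting_locked(samples, sample_rate=8000, tone_hz=2800, threshold=1e4):
    """True while the (assumed) crosswalk tone is audible."""
    return goertzel_power(samples, sample_rate, tone_hz) > threshold

# A frame actually containing the tone should trip the lock:
frame = [math.sin(2 * math.pi * 2800 * t / 8000) for t in range(400)]
```

A real implementation would need to cope with noise, Doppler shift, and the fact that crosswalk tones vary by jurisdiction, but the basic detection is cheap enough to run continuously on a phone.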
Balls on the playground? November 16, 2011 Posted by Cameron Shelley in: STV202, comments closed
Matt Gurney at the National Post comments on Earl Beatty Public School in Toronto that has banned the use of balls in the playground. Apparently, the school board was reacting to several near misses and one incident in which a parent was struck in the head by a soccer ball and suffered a concussion. In response, the board has banned the use of soccer balls, footballs, volleyballs, basketballs and tennis balls. Only foamy, Nerf-style balls are allowed for play on the grounds. (I imagine that non-foam balls are still allowed for gym classes.)
Gurney is sarcastic, calling the ban “brave”, and wondering rhetorically what other risks the board and parents will see fit to terminate next. Play involves risk, so the only way to eliminate risk is to ban play altogether.
I suspect that there is more going on here than Gurney lets on. A concussion can be a serious injury and calls for a serious response. Probably, many readers are aware of the ongoing issue of NHLer Sidney Crosby whose concussion has sidelined him since this January. With this in mind, the school board may have decided that a ban was the only action in their arsenal that acknowledged the seriousness of the incident on the playground.
Playground safety has become a minor controversy. On the face of it, it seems that playgrounds could not be made too safe. After all, no one wants the children to get injured, so any affordable safety measure seems warranted. However, as Edward Tenner has noted, there is a payoff to having safety risks on playgrounds: It teaches kids how to deal with risk and danger at a time when they are fairly resilient to falls and bruises. Encountering moderate safety risks helps children to acquire confidence in themselves that stands them in good stead later in life. Without such experience, they may grow up to be overly fearful of danger. This point does not imply that we should place children in mortal peril or anything like that (unlike at Hogwarts), but that there is a “sweet spot” of risk that can be considered healthy.
So, there are grounds to object to a ban on balls in the playground: It may reduce injuries in the short term but hamper development in the long term. As schools are intended to promote development, banning balls from the playground is counter-productive.
Of course, this point does not address the other aspect of the problem that the school faces, namely that of dealing appropriately with a serious injury to a parent. Without knowing more about the exact circumstances, it is difficult to say anything specific. In general, however, I would think it appropriate that the school should carry insurance to help compensate anyone who is injured in this sort of “freak” accident. A payout may or may not undo the damage, but it could demonstrate that the school cares and takes its responsibilities seriously, without detracting from the children’s education.
First armed, law-enforcement drone purchased November 4, 2011 Posted by Cameron Shelley in: STV202, comments closed
Just recently comes the news that the police department of Montgomery County, Texas has purchased an aerial drone capable of carrying weapons. The ShadowHawk, from Vanguard Defense Industries, looks like a miniature black attack helicopter, and is piloted remotely by an operator on the ground, much like the aerial drones that the US military uses in Afghanistan and Pakistan.
The drone carries a set of cameras and is designed to gather information, although it can also be used to take less-than-lethal action:
He [Michael Buscher, CEO of Vanguard Defense Industries] said they are designed to carry weapons for local law enforcement. “The aircraft has the capability to have a number of different systems on board. Mostly, for law enforcement, we focus on what we call less lethal systems,” he said, including Tazers that can send a jolt to a criminal on the ground or a gun that fires bean bags known as a “stun baton.”
Wow! Extreme discomfort from above!
My initial reaction was that I hope it will prove more reliable than the toy helicopter that I got (myself) for last Christmas. It flew for about a dozen flights and then could no longer get off the ground. However, judging from the video, that fear is probably ungrounded.
I suppose that there are two obvious concerns regarding the uptake of this drone. The first concerns privacy. Clearly, the drone can be used for surveillance. Under what authority can it be deployed? Would training its cameras on someone’s backyard constitute a police search under the law? If so, a judge would have to issue a warrant. If not, then are the police acquiring an intrusive new power?
The second concern turns on fairness. I can foresee circumstances where the drone would come in handy in following lawbreakers or even incapacitating people who pose a threat to public order. However, there may come times when the use or even the mere presence of a drone could serve to, say, tazer people who do not deserve it or to deter people from exercising their freedom of expression out of fear of a potential hovering menace.
There are also issues of flight safety, as raised in a report by the Government Accountability Office:
Pilots of small aircraft have expressed concerns that drones cannot practice the see-and-avoid rule that keeps aircraft from colliding in mid-air. Since the drone’s camera may be aimed somewhere else, pilots said police controllers may not be able to see and avoid other aircraft in the area during a sudden police emergency.
Finally, there is the possibility that the drones could exacerbate fears of crime. The presence of fences, guards, and cameras can have the effect of heightening people’s fears about criminal activity, even out of proportion to the actual threat. Such things remind them about the worst-case scenario of what may occur in a public space shared with strangers. The over-use of drones could increase this tendency. Over-use may be incentivized for police, who may benefit from fear of crime, especially in an era of public budget cutbacks. It will be interesting to see if drone activity tends to increase when the police budget is discussed in council meetings.
Your robot co-driver June 15, 2011 Posted by Cameron Shelley in: STV202, STV302, comments closed
According to this FastCompany article, researchers at MIT have been designing prototype automated driving systems. One of the goals of this research is to develop a kind of co-driver, a program that tracks your driving and that of other drivers around you so that it can intervene in the case of an emergency:
… they’ve built an algorithm that tries to predict how cars will accelerate and decelerate at intersections or corners and can thus compensate to move itself out of an area where it predicts the two vehicles could collide while maneuvering. It uses a game theory-like decision system, grabbing data from in-car sensors and other sensors in roadside and traffic light units, elements of the future intelligent driving system.
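The prediction step in the quoted passage can be caricatured with simple kinematics: extrapolate each car's position from its current speed and acceleration, and flag a conflict if the trajectories come too close. The toy sketch below is my own illustration, not MIT's algorithm, which the article says also folds in game-theoretic reasoning and roadside sensor data:

```python
# Toy 1-D collision predictor: constant-acceleration extrapolation
# over a short horizon. Thresholds are invented for illustration.

def predict_position(p0_m, v0_mps, a_mps2, t_s):
    """Basic kinematics: position after t seconds."""
    return p0_m + v0_mps * t_s + 0.5 * a_mps2 * t_s * t_s

def collision_predicted(car_a, car_b, horizon_s=3.0, step_s=0.1,
                        min_gap_m=2.0):
    """Check predicted separation of two cars over a short horizon.

    Each car is a (position, velocity, acceleration) tuple along the
    same line of travel.
    """
    t = 0.0
    while t <= horizon_s:
        gap = abs(predict_position(*car_a, t) - predict_position(*car_b, t))
        if gap < min_gap_m:
            return True     # an intervention (e.g. braking) is warranted
        t += step_s
    return False
```

A real system would work in two dimensions, carry uncertainty on every measurement, and predict the *other* driver's likely maneuvers rather than assuming constant acceleration, which is where the game-theoretic machinery comes in.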
Imagine the first time that you are approaching an intersection and the car countermands your driving instructions, e.g., your use of the accelerator, to slow down to reduce the risk of collision.
(Image courtesy of Magnus Manske via Wikimedia Commons.)
It sounds like an intriguing development, and who would not approve of a system that seems likely to save lives? Of course, there could be unintended consequences. Readers of this blog will probably ask themselves whether or not this safety system could induce drivers to be more careless in their driving practices, on the assumption that their robot co-driver will bail them out of any difficulties. In his book Why Things Bite Back, Edward Tenner discusses examples of how safety equipment in sports, for example, made players more reckless or aggressive, resulting in more frequent or more devastating injuries. The failure of anti-lock brakes to reduce accidents has been attributed to similar causes, so the concern is plausible.
Beyond that, the development of robot co-drivers poses some thorny ethical issues. It appears, for example, that the aim of the current system is to save the life of the driver in an accident. However, why is that outcome the best one to aim for? If the car finds itself in a situation where a bad accident, a pile-up, say, seems likely, then it might be preferable to sacrifice the driver in order to save others in the vicinity. This scenario sounds somewhat like the notorious Trolley problem, in which people are asked what they would do if they had to, for example, push a man under a trolley in order to prevent it from running over a group of others standing on the tracks further away.
In driver education, we do not train drivers to make these sorts of calculations. Instead, folklore suggests that drivers instinctively protect themselves. In the folklore of driving, the front passenger seat of a car is known as the death seat because a car driver will swerve away from an oncoming vehicle, thus placing the person in the passenger side between the driver and the threat. This maneuver is not a calculated decision, just a natural instinct in a split-second situation. Of course, a computerized co-driver with lots of information about the situation may well have the opportunity to decide who gets to live and who does not. So, we need to think about how this decision is to be made. Is it, as in the movie I, Robot, to be based on a risk assessment? If so, how? If not, why not? How would you program the co-driver to behave in such circumstances?
More on nuclear risks April 1, 2011 Posted by Cameron Shelley in: STV202, comments closed
Yesterday brought more commentary from Science and Nature regarding what can be learned from the disaster at Fukushima. Let me continue the discussion from this post by noting some points relevant to risk assessment.
(Image courtesy of César via Wikimedia Commons.)
This article in Science notes that the possibility of a large earthquake in the region had already been raised in the scientific literature. Japanese researchers excavating sediments in the region found evidence for a major earthquake that resulted in a large tsunami, one that had been recorded by Japanese historians in 869 AD. Their work also prompted them to estimate the hazard of another such quake occurring:
They estimated the Jogan earthquake’s magnitude at 8.3 and concluded that it could recur at 1000-year intervals. “The possibility of a large tsunami striking the Sendai Plain is high,” they wrote in a 2001 article in the Journal of Natural Disaster Science.
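To put the quoted hazard estimate in rough numeric terms: if such an event recurs on average once every 1000 years, a simple Poisson model gives the chance of at least one occurrence in any given window. The 40-year plant lifetime below is my own assumption, chosen only for illustration:

```python
# Rough Poisson hazard calculation for a 1-in-1000-year event.
# The 40-year "plant lifetime" window is an assumed figure.
import math

def prob_at_least_one(mean_interval_years, window_years):
    """P(at least one event in the window) under Poisson arrivals."""
    rate = 1.0 / mean_interval_years
    return 1.0 - math.exp(-rate * window_years)

# Chance during an assumed 40-year plant life: about 4%.
print(round(prob_at_least_one(1000, 40), 3))
# Chance over the ~1100 years since the 869 AD Jogan quake: about two-thirds.
print(round(prob_at_least_one(1000, 1100), 3))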
In spite of this article, the possibility of such a large quake and tsunami was not considered in risk assessments of the safety of the Fukushima plant. Yukinobu Okamura, the lead scientist in studies that confirmed the initial work, states that an expert panel did not heed his concerns during a review of the safety of the Fukushima plant in 2008. The reasons for not attending to this concern remain unclear.
There is also uncertainty about the cause of the explosion in the spent-fuel storage pool for reactor 4. The purpose of this pool is to cool the fuel for reactor 4 when it is not in use and to shield workers from the radiation it gives off. There was an explosion in the pool on March 15, four days after the initial disaster. Calculations had suggested that such a problem should take several weeks to develop:
During normal operation, 7 meters of roughly 40°C water sit between the top of the fuel rods and the surface of the 1425-ton pool. The water is constantly circulated and replenished. There’s little doubt that temperatures in the pool would have risen steadily after power was lost. But several scientists have independently calculated that it would take much longer than 4 days—perhaps as much as 3 weeks—for the heat of the fresh fuel in the #4 pool to evaporate or boil off the water.
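The quoted back-of-envelope calculation is easy to reproduce in rough form. The pool mass and starting temperature come from the passage above; the 2 MW decay-heat figure is an assumption of mine, chosen only to show the time scale involved:

```python
# Rough estimate of how long decay heat would take to boil off the
# reactor 4 pool. Pool mass and start temperature are from the quoted
# passage; the 2 MW decay-heat figure is an assumed value.

SPECIFIC_HEAT_WATER = 4186      # J/(kg*K)
LATENT_HEAT_VAPOR = 2.26e6      # J/kg
POOL_MASS_KG = 1.425e6          # the quoted 1425-ton pool
START_TEMP_C = 40               # quoted starting temperature
DECAY_HEAT_W = 2e6              # assumed heat output of the stored fuel

# Energy to bring the whole pool to boiling, then evaporate it all.
heat_to_boiling = POOL_MASS_KG * SPECIFIC_HEAT_WATER * (100 - START_TEMP_C)
heat_to_evaporate = POOL_MASS_KG * LATENT_HEAT_VAPOR

days = (heat_to_boiling + heat_to_evaporate) / DECAY_HEAT_W / 86400
print(round(days))  # ~21 days: the same order as the quoted "3 weeks"
```

Since the pool in fact exploded after four days, either the decay heat was far higher than assumed, or (as the scientists suspect) water was being lost by some route other than evaporation, which is precisely the unexplained failure mode.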
The upshot is that there is some failure mode for this pool that its designers and operators do not yet understand.
Finally, this article in Nature outlines some of the lessons from the Chernobyl disaster that might be applied to Fukushima. One of those lessons concerns the effect of waning interest in the shut-down Chernobyl reactors once the dust had settled. Funding from international bodies is needed to study the continuing effects of radiation on the people and environment affected by the disaster there, as well as for the construction of new containment structures to prevent any further problems from arising.
But the international Chernobyl Shelter Fund that supports the US$1.4-billion effort still lacks about half of that cash, and the completion date has slipped by almost ten years since the shelter plan was agreed in principle in 2001.
The disaster at Fukushima will likely mobilize the international community to pony up the dough to get this work accomplished. Hopefully it will not take another disaster in the future for the consequences of the Fukushima disaster to be properly understood and dealt with.
Among other things, these points serve to remind us that the likelihood of some events, and the hazards that they pose, are subject to uncertainty and disagreement. So, one of the unfortunate lessons of the Fukushima disaster is that we must avoid overconfidence in assessing the risks posed by new technologies.
Nuclear risks March 30, 2011 Posted by Cameron Shelley in: STV202, comments closed
Is now the time for a discussion of the pros and cons of nuclear power? In the aftermath of the Fukushima Power Plant I disaster, doubts about the advisability of nuclear power are proliferating like atoms of radioactive iodine. Given the “hot” climate, it would seem reasonable to postpone decisions about the future of nuclear power until the facts, and cooler heads, can prevail. This approach is recommended in a recent Globe and Mail article by uWaterloo Professor Jatin Nathwani:
There’s a compelling need for a perspective based on solid evidence and assessments to help guide our decisions as they pertain to management of the crisis and subsequently a plan for energy futures. In the unfolding tragedy in Japan – the earthquake and the tsunami – depicting the ferocity of Mother Nature to deliver unforgiving destruction and pain is the central story. And yet, we have grafted onto this bleak tale our anxieties about nuclear risks, driven largely by incomplete information.
I don’t know if “grafted” is the best choice of word here. After all, the Fukushima disaster has surely demonstrated that the ability of nuclear power plants to resist natural events such as earthquakes and tsunamis is relevant to a complete assessment of the risks posed by nuclear power. Certainly, we may have much to learn in this regard, as the notable past nuclear disasters of Three Mile Island and Chernobyl were man-made.
The risks involved in nuclear power are usefully outlined in this article by Elizabeth Kolbert. Professor Nathwani notes that relatively few people have been killed or injured from accidents in nuclear power plants. As Kolbert adds, the threats to life and limb from other power sources may be more considerable:
Every time there’s an accident, proponents of nuclear power point out that risks are also associated with other forms of energy. Coal mining implies mining disasters, and the pollution from coal combustion results in some ten thousand premature deaths in this country each year. Oil rigs explode, sometimes spectacularly, and so, on occasion, do natural-gas pipelines. Moreover, burning any kind of fossil fuel produces carbon-dioxide emissions, which, in addition to changing the world’s climate, alter the chemistry of the oceans.
So, nuclear power seems like a win in terms of public safety and climate-friendliness.
However, there are other risks to be considered, as Kolbert points out. One, of course, is terrorist attack. Another is the problem of evacuating people from the area of a power plant in the event of a disaster. Many plants reside near populated areas that would be difficult to evacuate. Also, there is the problem of what to do with the spent fuel:
After several decades and billions of dollars’ worth of studies, the U.S. still does not have a plan for developing a long-term storage facility for radioactive waste, much of which will remain dangerous for millennia.
Regulating nuclear power is expensive, in part because of people’s fears about it. Those fears, groundless or not, must be addressed, adding considerably to the cost of the system.
Then there are risks involved in simply postponing public discussion. One such risk is that, in the absence of much public interest in the matter (other matters tend to attract public attention more consistently than nuclear power), an attitude of complacency may set in. As mentioned in this previous posting, operators of nuclear plants prefer to simply not think about what could go really wrong with them, leading to a lack of preparation. Zealous public engagement certainly presents challenges, but so does public apathy.
Another matter to ponder is that, when the facts about nuclear power have been gathered and consensus reached, they may be inadequate to determine public policy. Rare events, such as a devastating earthquake, are perhaps too difficult to predict with much accuracy. Also, simply amassing facts does not, by itself, necessarily lead to a consensus of interpretation in the public or among experts. Thus, multiple and mutually inconsistent narratives about the future of nuclear power may fit equally well with the empirical record.
Finally, decisions about how to proceed with nuclear power are determined not only by whatever facts are available but by values as well. As Kolbert points out, nuclear power did not arrive on the (American) scene as the result of a rational calculation but, in part, as a means of reconciling Americans to the ongoing development of nuclear weapons. In the words of President Eisenhower at the ground-breaking of the Shippingport, PA, plant:
“My friends, through such measures as these, and through knowledge we are sure to gain from this new plant we begin today, I am confident that the atom will not be devoted exclusively to the destruction of man, but will be his mighty servant and tireless benefactor,” the President said.
We, as a society, are apparently still unsure about how nuclear power fits in with our priorities and our way of life. Time will surely bring new and relevant facts to light. However, it will also bring novel plant designs, new environmental circumstances, and unforeseen and persistent social challenges. So, it is not clear that tomorrow will be a more advantageous time to discuss nuclear power than is today.
Send in the ‘bots March 21, 2011 Posted by Cameron Shelley in: STV202, STV302, comments closed
Here is a short, interesting article combining the topics of my recent posts on the 11 March earthquake in Japan and recent strides made by robots. In particular, this article notes the curious fact that, although Japan is one of the most robot-friendly nations around, it does not have robots working in the stricken Fukushima nuclear power plant. Instead, a small group of human operators have been struggling to keep the power plant under control themselves.
(Image courtesy of Jiuguang Wang via Wikimedia Commons.)
One can only hope that these workers are not suffering excessively from radiation exposure. I am reminded of the “human-robots” or “bio-robots” of Chernobyl: Military personnel employed to shovel debris on top of the failed reactors in an attempt to contain the radiation leakage. Needless to say, many of those people suffered badly from radiation exposure.
In any event, why is it that the Japanese plant lacks robots, when these are common in the nuclear plants of the EU, for example? The article offers a few reasons. The first is simply bad timing:
Kim Seungho, a nuclear official who engineered robots for South Korea’s atomic power plants, said: “You have to design emergency robots for plants when they are being built so they can navigate corridors, steps and close valves.”
The Fukushima plant was built in the 1970s, well before robots were able to work on sophisticated tasks.
Of course, this point raises the issue of why robots could not have been designed to do useful work in the plant after its construction.
A second reason is simple denial:
Kim, a deputy director in nuclear technology for the Korea Atomic Energy Research Institute, said budget constraints and denial have kept emergency robots out of many plants in his country and around the world.
“Nuclear plant operators don’t like to think about serious situations that are beyond human control,” he said by telephone.
There is often a trade-off between efficiency and robustness in design. That is, people often prefer to have cheaper but less resilient systems, especially if they do not credibly foresee a failure of the system. The result, in the event of a failure, is that people other than the system’s designer, maker, or owner pay the extra price. In the case of the failure of the Fukushima reactors’ safety systems, the price is evacuation and possible radiation exposure for the plant’s operators and its neighbours.
Another issue might be the amount of autonomy to be granted to emergency robots. This is an emerging issue for military drones that fly armed over the territory of potential military targets. In the event of some problem, e.g., lack of communication with home base, under what circumstances would the drones be granted permission to fire without explicit authorization? A similar issue arises in the case of emergency robots in a power plant: In the event that human operators are unavailable or out of contact, what should the robots be authorized to do? I imagine that this problem is no small one. Still, it seems as though we should be discussing it.
Of course, given their absence, it is unclear what difference robots might have made to efforts to cope with this disaster. However, it may be a good bet that power plants currently without robots for assistance are now in the market for some.