Your robot valet is here May 2, 2012Posted by Cameron Shelley in : STV202 , comments closed
A recent column on robots in FastCompany describes a condo in Florida that will have robot valets. Well, almost. Residents in the building will drive their cars into parking bays where an automatic system will take over. The system acts like the automated manager in a storage facility, taking the car to a slot in a set of (large) shelves within the bowels of the condo or retrieving it on command. A video explains:
The system promises some advantages for users over self-parking in a parking structure:
- Convenience: the system takes some driving time off the hands of car owners. This might be especially fruitful for taking the car out of storage, since the driver can send a command remotely so that the system will have the car ready when the driver steps out.
- Efficiency: A shelving system without the need for driving lanes, ramps, etc. should be able to fit more cars into a given space than a conventional parking structure.
- Safety: Although not raised in the video, an automated parking system could improve safety. Transitioning between modes is challenging for safety in most systems, and entering or leaving a parking lot creates new opportunities for damage. An automated system might well do a better job than the manual one.
- Happiness: The valet system is undoubtedly cool and will also relieve owners of the need to enter parking structures, which tend to be unpleasant at best. So, drivers will be happy!
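To put a rough number on the efficiency point above, here is a back-of-the-envelope sketch. Every figure below is an assumption invented for illustration, not data from the video; the point is only that eliminating lanes and ramps shrinks the floor area each car consumes.

```python
# Illustrative only: all figures are assumed, not taken from the article.
# A conventional garage loses much of its floor plate to driving lanes and
# ramps; an automated shelving system stores cars nearly edge to edge.

CAR_FOOTPRINT_M2 = 12.5  # assumed stall of ~2.5 m x 5.0 m

def cars_per_floor(floor_area_m2, overhead_factor):
    """Cars that fit on one floor, given area lost to circulation.

    overhead_factor scales the stall footprint to include each car's
    share of lanes, ramps, or rack machinery (assumed values).
    """
    return int(floor_area_m2 / (CAR_FOOTPRINT_M2 * overhead_factor))

floor = 2000.0  # m^2, hypothetical floor plate
conventional = cars_per_floor(floor, 2.2)  # ~55% of area lost to lanes/ramps
automated = cars_per_floor(floor, 1.2)     # ~17% lost to racks/lift shafts
print(conventional, automated)
```

Under these assumed overhead factors, the same floor holds roughly 72 cars conventionally versus about 133 on shelves, which is the kind of gain the efficiency claim relies on.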
As ever, there are some potential challenges that remain:
- Casual access: It is not clear what access people have to their cars when they do not want to drive. Ever left a bag in the trunk by accident? It is easy and efficient to just visit your car in a parking structure and retrieve your stuff. It would be wasteful if the system had to fetch the car just so that the driver can get into the trunk.
- Entropy: At some point, the system will not match the right car with its owner. How frequently will it make such mistakes and how well will it cope with them?
- Efficiency: A car share system would be still more efficient than a bank full of idle, individually-owned vehicles. Such a system would also help to reduce the entropy problem, since the need to match cars to owners would not occur. Of course, the condo owners may not be into sharing cars.
- Security: Since the system is accessible remotely, e.g., by text message, it will make the cars available to hackers. What sort of measures will be in place to prevent tampering or theft?
The system represents an interesting idea, the transfer of storage technology to the parking garage. Still more interesting would be a real robot valet that could park your car in an existing structure, making it compatible with existing facilities. Of course, that would be even more of a challenge.
Sliding injuries April 26, 2012Posted by Cameron Shelley in : STV202 , comments closed
(Deutsche Fotothek/Wikimedia Commons)
Tara Parker-Pope of the New York Times points out how children may be injured on slides because their parents go down with them. What happens is that parents sometimes use the slide with their children in their laps, either for the fun of it (admit it!) or at the request of reluctant children. Unfortunately, this configuration of sliding parent and child can have an unanticipated outcome:
But without warning, Hannah’s sneaker caught on the side of the slide. Although Ms. Dickman grabbed the leg and unstuck her daughter’s foot, by the time they reached the ground, the girl was whimpering and could not walk. A doctor’s visit later revealed a fractured tibia.
The reason for the increased risk is that the impact of the child’s foot on the side of the slide is harder due to the force imparted by the weight of the parent. By themselves, children who get shoes stuck on the side can simply stop and extricate themselves.
Of course, the risk can be mitigated through technique, either by adults not sliding with children in their laps or, at least, by removing the children’s shoes and making sure their feet do not touch the sides of the slide. There might also be some possibilities in design, perhaps having a hoop at the slide entrance that is too small for adults to fit through easily so as to discourage them from using the slide. Any other ideas?
Woman falls off pier while texting! March 22, 2012Posted by Cameron Shelley in : STV302 , comments closed
I am sure this headline is one that every editor has secretly desired to write. Unfortunately, it accurately captures an incident in St. Joseph, Michigan, in which a woman fell into Lake Michigan while texting and walking along a pier. Her husband and a passerby jumped into the water to save the victim, and both were rescued by emergency responders summoned by a call to 9-1-1.
Because no serious injuries resulted, the incident can be viewed with amusement. It is also reminiscent of other times when people became too distracted for their own good while walking and texting.
What to do? Some intrepid app creators have tried to make the phone part of the solution instead of part of the problem. For example, there is the Walksafe app that uses the camera on a smartphone to try to detect and warn the bearer of oncoming vehicles as they text and walk heedlessly. How, though, will they deal with clients walking along piers or around shopping malls?
I would suggest a new approach to the problem: flip it! People are generally better at contextual awareness than are phones, whereas phones can generate text messages more efficiently than people can. What we need, then, is an app that writes and sends text messages for the client, while the ambulatory client watches out for traffic and water, thus keeping the phone safe from harm. Call it “AutoText”. There is now a smartphone at the bottom of Lake Michigan that would agree with me, I am sure, if it could.
Social robots January 25, 2012Posted by Cameron Shelley in : STV202, STV302 , comments closed
A recent issue of the Communications of the ACM has an article about the social life of robots. In the article, Wright points out that people’s established view of robots is that of the solitary automaton, single-mindedly carrying out its programmed function. Think of Robby the Robot or even the Terminator. Social robots, however, would be at home working in groups and even interacting with humans.
(German Federal Archive/Wikimedia Commons)
As Wright points out, the potential of social robots seems clear:
In theory, collaborative robots hold enormous potential. They could augment human workers in high-risk situations like firefighting or search and rescue, boost productivity in construction and manufacturing, and even help us explore other planets.
Given the potential payoff of robots that collaborate to complete their assigned tasks, why has it taken so long for research in robots to engage with this problem?
One answer is, of course, that organizing the activities of a team of complex robots is a non-trivial endeavor:
At the most basic level, collaborative robots need access to each others’ sensory data, so they can not only “see” via their fellow robots, but in some cases reconcile perceptual differences as well. They then must learn to merge that shared spatial data into a unified whole, so that the robots can converge effectively in a physical space.
To function as a team, robots must learn to negotiate decision-making processes in a distributed, multi-agent environment.
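A minimal sketch of one step the quoted passage describes: each robot holds a local occupancy grid of the space, and the team merges them into one shared map, reconciling disagreements between robots by a simple majority vote. All names and the voting rule are my own invention, not anything from Wright's article.

```python
# Hypothetical sketch: merge each robot's local occupancy grid into a shared
# team map. Cell values: 0 = free, 1 = occupied, None = not observed by that
# robot. Disagreements are reconciled by majority vote, with ties resolved
# conservatively in favour of "occupied".

def merge_grids(grids):
    """Combine per-robot occupancy grids (all the same shape) into one."""
    rows, cols = len(grids[0]), len(grids[0][0])
    merged = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = [g[r][c] for g in grids if g[r][c] is not None]
            if votes:  # at least one robot observed this cell
                merged[r][c] = 1 if sum(votes) * 2 >= len(votes) else 0
    return merged

robot_a = [[0, 1], [None, 0]]
robot_b = [[0, 0], [1, 0]]
robot_c = [[0, 1], [1, None]]
print(merge_grids([robot_a, robot_b, robot_c]))  # -> [[0, 1], [1, 0]]
```

Even this toy version shows why the problem is non-trivial: the robots must agree on a common coordinate frame before their grids can be aligned at all, and a bare majority vote discards information about which robot's sensors are more trustworthy.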
Unstated but also present is the issue of safety. That is, if the activities of a group of robots are difficult to co-ordinate, then the result of letting them loose is difficult to predict. Single robots can occasionally act in unpredictable ways, sometimes creating issues of safety for anyone around them. Imagine then the problems that an ensemble of robots might create.
This issue is not necessarily cause for pessimism or alarm. I suspect that diligent research can lead to an acceptable level of safety and control for collaborative robots. My suggestion would simply be that roboticists consider the safety issue from the start, and build safety features into the basic design of their robots rather than treating safety as something that can just be bolted on sometime later.
The “aware” car September 6, 2011Posted by Cameron Shelley in : STV202 , comments closed
Here is an interesting segment from PBS about the development of the “aware” car at the MIT AgeLab. Researchers at the Lab are developing a system that monitors drivers for signs of distraction, impairment, etc. in case some intervention is needed for safety reasons. The emphasis is on problems that might arise with senior drivers, whose capacities may be reduced by age, impaired by medications, and so on.
Prof. Coughlin points out that this project is an urgent one, given the wave of baby boomers now entering retirement and old age. Both men and women in this cohort will probably expect to drive longer and maintain their independence, in an environment where alternatives such as public transportation are not always or widely available. So, to protect drivers and the public, and to delay institutionalization of seniors, rapid introduction of monitoring systems is appropriate. Plus, as Prof. Coughlin also points out, baby boomers are precisely the group that will be able to afford all this expensive new gear.
All of these points are correct but the conclusion that increased safety and longevity will ensue is not assured. Consider the now well-known story of ABS brakes. Decades were spent developing braking systems for cars that would help prevent skidding and loss of control for the purpose of increasing driver safety. However, the increase in safety did not materialize as expected. The reasons are controversial but may include the phenomenon of risk compensation (previously mentioned here), the tendency of people to respond to perceived increases in safety, perhaps induced by the presence of safety gear, by increasing the riskiness of their behavior.
So, we cannot know that new safety systems, however technically proficient, will increase actual safety without knowing how seniors will respond to their presence. Perhaps their driving habits will remain unchanged and the new systems will steer them clear of accidents or wake them up when they fall asleep at the wheel. Or, seniors may respond to these systems by driving more aggressively and even when they feel tired, thus negating the effects of the new gear. If the project proceeds apace, as Prof. Coughlin suggests it will, we may start to find out soon.
Your robot co-driver June 15, 2011Posted by Cameron Shelley in : STV202, STV302 , comments closed
According to this FastCompany article, researchers at MIT have been designing prototype automated driving systems. One of the goals of this research is to develop a kind of co-driver, a program that tracks your driving and that of other drivers around you so that it can intervene in the case of an emergency:
… they’ve built an algorithm that tries to predict how cars will accelerate and decelerate at intersections or corners and can thus compensate to move itself out of an area where it predicts the two vehicles could collide while maneuvering. It uses a game theory-like decision system, grabbing data from in-car sensors and other sensors in roadside and traffic light units, elements of the future intelligent driving system.
Imagine the first time that you are approaching an intersection and the car countermands your driving instructions, e.g., your use of the accelerator, to slow down to reduce the risk of collision.
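As a rough sketch of the kind of prediction the quoted passage describes: extrapolate two vehicles' motion toward an intersection and flag a conflict if their predicted occupancy windows overlap. The real MIT system is far richer (game-theoretic, fed by roadside sensors); the function names, thresholds, and constant-acceleration model here are invented for illustration.

```python
# Toy sketch: predict when each car occupies the intersection zone under a
# constant-acceleration model, then check whether the windows overlap.
# All parameters are assumed values, not from the MIT prototype.

def arrival_window(distance_m, speed_ms, accel_ms2,
                   zone_len_m=6.0, horizon_s=10.0, dt=0.1):
    """Return (t_enter, t_exit) for when the car occupies the zone."""
    pos, v, t = 0.0, speed_ms, 0.0
    t_enter = t_exit = None
    while t < horizon_s:
        if t_enter is None and pos >= distance_m:
            t_enter = t
        if t_enter is not None and pos >= distance_m + zone_len_m:
            t_exit = t
            break
        pos += v * dt
        v = max(0.0, v + accel_ms2 * dt)  # no driving in reverse
        t += dt
    return t_enter, (t_exit if t_exit is not None else horizon_s)

def conflict(win_a, win_b):
    """Do the two predicted occupancy windows overlap?"""
    if win_a[0] is None or win_b[0] is None:
        return False  # one car never reaches the intersection in time
    return win_a[0] < win_b[1] and win_b[0] < win_a[1]

me = arrival_window(distance_m=40.0, speed_ms=15.0, accel_ms2=0.0)
other = arrival_window(distance_m=35.0, speed_ms=12.0, accel_ms2=1.0)
if conflict(me, other):
    print("brake: predicted overlap in the intersection")
```

A co-driver built on this idea would run such predictions continuously and, on a predicted overlap, countermand the accelerator, which is exactly the moment of intervention imagined above.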
(Image courtesy of Magnus Manske via Wikimedia Commons.)
It sounds like an intriguing development, and who would not approve of a system that seems likely to save lives? Of course, there could be unintended consequences. Readers of this blog will probably ask themselves whether or not this safety system could induce drivers to be more careless in their driving practices, on the assumption that their robot co-driver will bail them out of any difficulties. In his book Why Things Bite Back, Edward Tenner discusses examples of how safety equipment in sports, for example, made players more reckless or aggressive, resulting in more frequent or more devastating injuries. The failure of anti-lock brakes to reduce accidents has been attributed to similar causes, so the concern is plausible.
Beyond that, the development of robot co-drivers poses some thorny ethical issues. It appears, for example, that the aim of the current system is to save the life of the driver in an accident. However, why is that outcome the best one to aim for? If the car finds itself in a situation where a bad accident, a pile-up, say, seems likely, then it might be preferable to sacrifice the driver in order to save others in the vicinity. This scenario sounds somewhat like the notorious Trolley problem, in which people are asked what they would do if they had to, for example, push a man under a trolley in order to prevent it from running over a group of others standing on the tracks further away.
In driver education, we do not train drivers to make these sorts of calculations. Instead, folklore suggests that drivers instinctively protect themselves. In the folklore of driving, the front passenger seat of a car is known as the death seat because a car driver will swerve away from an oncoming vehicle, thus placing the person in the passenger side between the driver and the threat. This maneuver is not a calculated decision, just a natural instinct in a split-second situation. Of course, a computerized co-driver with lots of information about the situation may well have the opportunity to decide who gets to live and who does not. So, we need to think about how this decision is to be made. Is it, as in the movie I, Robot, to be based on a risk assessment? If so, how? If not, why not? How would you program the co-driver to behave in such circumstances?
Send in the ‘bots March 21, 2011Posted by Cameron Shelley in : STV202, STV302 , comments closed
Here is a short, interesting article combining the topics of my recent posts on the 11 March earthquake in Japan and recent strides made by robots. In particular, this article notes the curious fact that, although Japan is one of the most robot-friendly nations around, it does not have robots working in the stricken Fukushima nuclear power plant. Instead, a small group of human operators have been struggling to keep the power plant under control themselves.
(Image courtesy of Jiuguang Wang via Wikimedia Commons.)
One can only hope that these workers are not suffering excessively from radiation exposure. I am reminded of the “human-robots” or “bio-robots” of Chernobyl: Military personnel employed to shovel debris on top of the failed reactors in an attempt to contain the radiation leakage. Needless to say, many of those people suffered badly from radiation exposure.
In any event, why is it that the Japanese plant lacks robots, when these are common in the nuclear plants of the EU, for example? The article offers a few reasons. The first is simply bad timing:
Kim Seungho, a nuclear official who engineered robots for South Korea’s atomic power plants, said: “You have to design emergency robots for plants when they are being built so they can navigate corridors, steps and close valves.”
The Fukushima plant was built in the 1970s, well before robots were able to work on sophisticated tasks.
Of course, this point raises the issue of why robots could not have been designed to do useful work in the plant after its construction.
A second reason is simple denial:
Kim, a deputy director in nuclear technology for the Korea Atomic Energy Research Institute, said budget constraints and denial have kept emergency robots out of many plants in his country and around the world.
“Nuclear plant operators don’t like to think about serious situations that are beyond human control,” he said by telephone.
There is often a trade-off between efficiency and robustness in design. That is, people often prefer to have cheaper but less resilient systems, especially if they do not credibly foresee a failure of the system. The result, in the event of a failure, is that people other than the designer, maker, or owner pay the extra price. In the case of the failure of the Fukushima reactors’ safety system, the price is evacuation and possible radiation exposure for the plant’s operators and its neighbours.
Another issue might be the amount of autonomy to be granted to emergency robots. This is an emerging issue for military drones that fly armed over the territory of potential military targets. In the event of some problem, e.g., lack of communication with home base, under what circumstances would the drones be granted permission to fire without explicit authorization? A similar issue arises in the case of emergency robots in a power plant: In the event that human operators are unavailable or out of contact, what should the robots be authorized to do? I imagine that this problem is no small one. Still, it seems as though we should be discussing it.
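One way to frame the authorization question is as a tiered-autonomy policy: the longer a robot goes without operator contact, the more it may do on its own, but irreversible actions always wait for a human. The sketch below is purely hypothetical; the action names and thresholds are invented to make the idea concrete, not drawn from any real plant's procedures.

```python
# Hypothetical tiered-autonomy policy for an emergency robot. Actions are
# ranked by how hard they are to undo; all names/thresholds are invented.

REVERSIBLE = {"observe", "report", "close_valve"}
IRREVERSIBLE = {"vent_radioactive_gas", "flood_reactor"}

def authorized(action, seconds_since_contact):
    """Decide whether the robot may perform `action` on its own."""
    if action in IRREVERSIBLE:
        return False  # never without explicit human authorization
    if action == "close_valve":
        # a physical intervention: allowed autonomously only after a
        # prolonged loss of contact (assumed 10-minute threshold)
        return seconds_since_contact > 600
    return action in REVERSIBLE  # observing and reporting are always fine

print(authorized("observe", 0))              # -> True
print(authorized("close_valve", 120))        # -> False
print(authorized("vent_radioactive_gas", 9999))  # -> False
```

Even this toy policy makes the hard part visible: someone must decide, in advance, which actions count as irreversible and how long a silence justifies acting alone, and those are precisely the questions the post says we should be discussing.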
Of course, given their absence, it is unclear what difference robots might have made to efforts to cope with this disaster. However, it may be a good bet that power plants currently without robots for assistance are now in the market for some.
Flight simulators can contribute to accidents September 9, 2010Posted by Cameron Shelley in : STV100, STV202, STV302 , comments closed
A study recently done for USA Today indicates that flight simulator training can, in rare instances, contribute to accidents. Their report claims that 522 fatalities in US airline accidents can be attributed to problems stemming from simulator training.
(Image courtesy of US Navy, via Wiki Commons.)
The basic idea is that the picture of reality painted in a simulation can be at odds with reality itself, thus creating expectations about plane handling in pilots that do not pan out in practice. For example:
Last month, the NTSB blamed deficient simulator training in part for the Dec. 20, 2008, crash of a Continental Airlines jet in Denver.
The Boeing 737-500 skidded off a runway at high speed and burst into flames because of the pilot’s inability to steer while trying to take off in gusty cross-winds, the NTSB ruled. Six people suffered severe injuries.
Of course, simulator training generally works well and, indeed, can be credited with preventing many casualties since its introduction in the 1970s. But its record of success may produce a sense of complacency about its upgrading and revision.
This case presents an interesting illustration of one of the recurring themes of this blog, namely people’s relationships with their tools. Is technology “just a tool”, as is often said? This example shows that the answer is “no” and “yes”, depending on how you interpret the expression itself:
- In one sense, a tool is something that facilitates some activity, e.g., a hammer facilitates driving nails. A hammer is “just a tool”, then, if it effects no other significant changes in people’s lives. Marshall McLuhan argued that media, like TV, are not merely tools because, in addition to conveying data, they present a particular way of perceiving the world that users feel compelled to adopt. Computer flight simulators seem to provide a good example of this point: They present a picture of reality that is so convincing to pilot trainees that they perceive the real world as the simulator has it.
- Sometimes, when people say that technology is “just a tool”, they mean that merely applying new technology does not necessarily solve all our old problems. The use of e-voting machines, for example, does not somehow mean the end of electoral fraud or even vote counting issues. In this sense, flight simulators are “just a tool”, meaning that their application does not justify complacency about the quality of pilot training.
Probably, few people would want technology to be “just a tool” in the first sense: Imagine if technology never significantly changed people’s lives! Would you care to return to a stone-age lifestyle? The price of accepting technologies that have the power to transform our lives and perceptions of things is sometimes being blind to or unprepared for the downside of our new tools. Sticking to certain values, such as the dignity of human life, helps to ensure that technology remains “just a tool” in the second sense, that is, something that makes life better as time goes on.
iPads already installed in cars April 7, 2010Posted by Cameron Shelley in : STV202, STV302 , comments closed
Despite barely having made it to store shelves, the iPad has already been installed in cars. The safety hazard, to drivers and others, of this use of the iPad is obvious enough. The FastCompany article notes, somewhat tongue in cheek, that the iPad will be a relief for bored drivers:
As it is now, in-vehicle screens are usually out of sight of the driver, who has that lame, media-free job of driving the car from point A to point B. So boring. But now that the rich colors, the zooming menus, the practically unlimited library of games is available literally at your iFingertips, will the driver be able to resist looking over, or, Heaven forbid, participating?
Driving can be boring. However, it seems to me that some people are more like druggies who cannot be away from their Internet fix for more than a minute. Thus, it would be useless to urge people to refrain simply out of a sense of self-preservation: Driving while iPadding is clearly not a rational act; drivers who play with their iPads will just tell themselves that they are so good at multitasking that they are an exception to the rule.
Well, Korea and China have camps where Internet addicts go for treatment (or torment). Perhaps therapy instead of fines would be an appropriate measure for those who cannot resist the siren song of the iPad while at the wheel.