The smart guns are here (May 16, 2013). Posted by Cameron Shelley in STV202.
Guns have been in the news a lot lately, due to the development of 3D-printable ones. However, gun news is not confined to additive manufacturing. From NPR comes an item about a "smart gun", that is, a rifle that does the aiming and firing for the shooter. The TrackingPoint rifle has a computerized scope with a laser range-finder and heads-up display that corrects the shooter's aim for environmental conditions, and can also delay pulling the trigger until it computes that the shot will hit its mark. It also allows the shooter to record video of each shot for review, or for posting to YouTube.
The video promotes the rifle’s use as a means of making hunting more efficient. As noted in the article, this efficiency will not suit purists, who point out that the system has the effect of deskilling the practice of hunting. I am reminded of a quote from an episode of The Simpsons, where Lenny, a gun enthusiast and NRA member, discourses on why hunters need assault rifles:
“Assault weapons have gotten a lot of bad press lately, but they’re manufactured for a reason: to take out today’s modern super animals, such as the flying squirrel, and the electric eel.”
Of course, as is often the case when tools become automated, purists will stick to the old ways, while people who otherwise would not engage in hunting may take it up with the new gear that makes it easier to score. They will be encouraged by the familiar, first-person-shooter look of the heads-up display.
Naturally, the new technology poses security issues. The TrackingPoint rifle seems like a godsend to anyone planning an assassination. Aware of the issue, company President Jason Schauble notes that the scope is password protected:
“It has a password protection on the scope. When a user stores it, he can password protect the scope that takes the advanced functionality out. So the gun will still operate as a firearm itself, but you cannot do the tag/track/exact, the long range, the technology-driven precision guided firearm piece without entering that pass code,” he says.
I wonder how many of the devices will have their passwords stuck to them on Post-it notes? In any event, the password scheme seems unimpressive. Given that the scope requires users to look into it, eye scanning might be more apropos. Even then, it is unclear how robust the password system will be, or whether having to enter a pass code will deter people who want the system for malicious purposes.
Besides assassinations, some users may be inclined to appropriate the system for various stunts. Some will imitate William Tell and shoot objects perched on heads. Others may find excitement in getting the system to do odd things that the designers have probably not considered. Think of Auto-Tune, a system originally designed to correct variations of pitch in singing but quickly used to produce odd and inventive new sound effects instead. TrackingPoint hackers will likely find ways to get the system to produce interesting patterns of shots, tracing "X"s and "O"s or spelling names with bullet holes, perhaps.
It will be interesting to see how this gun factors into the ongoing gun control debate in the US. Is access to smart guns an inalienable right? Or should they be regulated in some way? Perhaps the best move would be not to ban smart guns but to produce a weapon smart enough not to pull the trigger at all.
Responsive cities (September 28, 2012). Posted by Cameron Shelley in STV 201.
Here is a recent TEDx talk by Kent Larson of MIT on how new technologies of mobility, public spaces, and architecture can make cities more "responsive" or livable in the future.
The technologies on display are impressive and I like the respect shown for people’s autonomy in his ideas. (That is, he does not design for people as if they were cattle or gas molecules.) One question left in my mind is the affordability of the designs he is working on: They strike me as pricey. He clearly hopes to knock down the cost through economies of scale, but the issue remains an obstacle. This is especially so because much of the coming wave of urbanization will take place in developing countries that do not have piles of cash to spend on fancy gear.
Another issue is the lack of consideration for public transit. Larson seems to hold that sharing systems like Velib and personal, autonomous vehicles will get people around. However, as Jarrett Walker argues at Human Transit, mass transit will likely remain the most efficient way of moving large numbers of people around cities.
Is your car too smart? (May 18, 2012). Posted by Cameron Shelley in STV202.
Robert Charette at IEEE Spectrum reports on complaints that computerized amenities in modern cars are too confusing for many drivers. Car producers are packing more and more computerized features into the cabins of cars for operators to interact with. Unfortunately, the operators sometimes have difficulty figuring out how to work their new toys. The MyFord Touch system, for example, has attracted complaints that it is too complex for people to operate.
To deal with the problem, Ford is trying to persuade dealers to become more expert in the operations of these new, in-car electronics. With greater training, dealers can then help to train buyers in the ins and outs of their new computerized gear.
Charette puts the problem down to poor design:
Of course, it might help if car designers spent a little more time with their human factors counterparts to make the operations of the electronics more transparent and easy to use. There has been several occasions where I would have been more than pleased to explain in detail to the designers of several of the electronic systems on my Toyota Sienna how they got it dreadfully wrong. Needing a couple of hundred page manual to explain how to use my car’s electronics is a symptom of the problem.
It would not be the first time that poor design sabotaged a new, in-car system. Early versions of BMW's iDrive system were notoriously difficult to deal with, requiring drivers to use a joystick to navigate a hierarchical menu system to operate the simplest functions, e.g., the radio, all while driving at speed. Usability has always been an issue for the software industry, which has a tendency to present functions in a way that prioritizes their abstract relationships rather than their practical uses. Now that cars are ever more computerized, these problems with software design have become problems for automotive design.
The response of having dealers train users is reminiscent of the problem of training new drivers when cars themselves were a new invention. In his book "User Unfriendly", Joseph Corn documents how the first automobiles confounded their users. Cars were radically different from the familiar horse and buggy, and early operators experienced many problems with driving and just keeping their machines working. Auto manufacturers responded by training dealers enough to talk up the features of their cars, and by issuing manuals that purported to explain their purchases to new car owners. Neither effort was much of a success. The situation was remedied only when new technologies made cars reliable enough to work without so much fuss from their drivers.
It may be that this will have to happen with in-car electronics too. That is, instead of training drivers to adapt to their gear, automakers will have to design the gear so that drivers can operate it without so much preparation.
Your robot valet is here (May 2, 2012). Posted by Cameron Shelley in STV202.
A recent column on robots in FastCompany describes a condo in Florida that will have robot valets. Well, almost. Residents in the building will drive their cars into parking bays where an automatic system will take over. The system acts like the automated manager in a storage facility, taking the car to a slot in a set of (large) shelves within the bowels of the condo or retrieving it on command. A video explains:
The system promises some advantages for users over self-parking in a parking structure:
- Convenience: the system takes some driving time off the hands of car owners. This might be especially useful for taking the car out of storage, since the driver can send a command remotely so that the system will have the car ready when the driver steps out.
- Efficiency: A shelving system without the need for driving lanes, ramps, etc. should be able to fit more cars into a given space than a conventional parking structure.
- Safety: Although not raised in the video, an automated parking system could improve safety. Transitioning between modes is challenging for safety in most systems, and entering or leaving a parking lot creates new opportunities for damage. An automated system might well do a better job than the manual one.
- Happiness: The valet system is undoubtedly cool and will also relieve owners of the need to enter parking structures, which tend to be unpleasant at best. So, drivers will be happy!
As ever, there are some potential challenges that remain:
- Casual access: It is not clear what access people have to their cars when they do not want to drive. Ever left a bag in the trunk by accident? It is easy and efficient to just visit your car in a parking structure and retrieve your stuff. It would be wasteful if the system had to fetch the car just so that the driver can get into the trunk.
- Entropy: At some point, the system will not match the right car with its owner. How frequently will it make such mistakes and how well will it cope with them?
- Efficiency: A car share system would be still more efficient than a bank full of idle, individually-owned vehicles. Such a system would also help to reduce the entropy problem, since the need to match cars to owners would not occur. Of course, the condo owners may not be into sharing cars.
- Security: Since the system is accessible remotely, e.g., by text message, it will make the cars available to hackers. What sort of measures will be in place to prevent tampering or theft?
The system represents an interesting idea, the transfer of storage technology to the parking garage. Still more interesting would be a real robot valet that could park your car in an existing structure, making it compatible with existing facilities. Of course, that would be even more of a challenge.
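The store-and-retrieve logic behind such a system can be sketched in a few lines. This is an invented illustration, not the condo's actual software; the class and method names are mine. Note that the `location` mapping is exactly the "entropy" worry above: matching cars back to owners is the one piece of state the system must never get wrong.

```python
class RoboticGarage:
    """Toy sketch of an automated parking bay (hypothetical, for illustration)."""

    def __init__(self, num_slots):
        self.free = list(range(num_slots))  # empty shelf slots
        self.location = {}                  # licence plate -> occupied slot

    def store(self, plate):
        """Shelve a car dropped off in the parking bay."""
        if plate in self.location:
            raise ValueError("car already stored")
        if not self.free:
            raise RuntimeError("garage full")
        slot = self.free.pop()
        self.location[plate] = slot
        return slot

    def retrieve(self, plate):
        """Fetch a car; meant to be triggered remotely (e.g. by text message)
        so the car is waiting when the owner arrives -- the convenience point."""
        slot = self.location.pop(plate)  # raises KeyError if the match fails
        self.free.append(slot)
        return slot
```

Even this toy version shows where the casual-access problem comes from: there is no operation for "open the trunk in place", only a full retrieve.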
Run that red light! (December 2, 2011). Posted by Cameron Shelley in STV202.
Traffic lights are a common way to establish control over how people share intersections. Of course, not everyone obeys traffic lights (although Monty Python likes them), especially when they are red. One solution has been the red light camera, an automatic camera that identifies cars that run the red lights and then issues tickets to the owners. Of course, those cameras are themselves somewhat controversial, with people complaining that they are abused by local authorities looking to make money on traffic violations.
A new idea being pursued by researchers at MIT is software that predicts who is likely to run a red light:
Using data collected from DOT-sponsored surveillance of a busy intersection in Christiansburg, Virginia, to track vehicle speed and location, the researchers could determine, within two seconds of a car approaching an intersection, with 85 percent accuracy whether it would run a red light.
Eventually, the researchers propose, this software could be incorporated into inter-vehicle communications systems, so that cars on the road can predict the behavior of other cars and react accordingly. For example:
“Even though your light might be green, it may recommend you not go because there are people behaving badly that you may not be aware of,” said Jonathan How, an aeronautics and astronautics professor who co-created the algorithm.
The system is obviously far from production, but it does sound like something that might appear on the road someday.
Although the system is envisioned as an aid to drivers, giving them recommendations about how to proceed at an intersection, it is really an advance in driving automation. That is, cars will, in effect, see around corners and thus be in a better position to control the car than the driver is in. A recommendation such as "Stop! A car may be about to run the red light ahead!" about two seconds before the event will be of little use to a driver and, indeed, could just cause alarm and panic. It would be better for the car simply to slow itself down while informing the driver of what is happening.
Of course, a car that constantly intervenes in its own control will be very frustrating for drivers. Think of the recent Eco-Pedal by Nissan, a gas pedal that pushes back if the driver presses too hard on it. The idea is to save fuel, but many drivers will likely find this sort of negative feedback too interfering and just turn it off.
As ever, there is also the possibility that such a safety system will create an incentive for drivers to behave badly. In this case, if someone thinks that most other cars on the road have this system installed, they may think it less risky to run a red light. After all, the safety gear in other cars will take care of the problem. Risk compensation strikes again.
In addition, we have to consider what sorts of bias might be present in how the system operates. Although it seems to be highly accurate, it will make mistakes. As a safety system, I assume that drivers who have the system installed in their cars will want to minimize false negatives, that is, instances where the system falsely concludes that another driver intends to obey the red light when, in fact, he will run it. This bias will help to reduce t-bone collisions in intersections. However, this bias will allow relatively more false positives, that is, instances where the system falsely concludes that another driver intends to run the red light when, in fact, he will obey it. In those cases, cars with the safety system will slow or stop needlessly when faced with a green light. This will reduce traffic flow and could result in rear-end collisions as drivers further back fail to anticipate this outcome. How shall we program the cars to deal with these conflicting interests?
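The tradeoff can be made concrete with a toy classifier. This is not the MIT researchers' actual model; the risk score, the thresholds, and the sample cars below are all invented for illustration. The point is only that a single threshold governs both error rates: a "cautious" (low) threshold catches every real runner but needlessly flags drivers who would have stopped, while a "strict" (high) threshold does the reverse.

```python
def predict_runner(speed_mps, distance_m, decelerating, threshold):
    """Flag a car as likely to run the red light (hypothetical risk score:
    fast, close to the intersection, and not braking => risky)."""
    score = speed_mps / 30.0 + (1.0 - min(distance_m, 100.0) / 100.0)
    if not decelerating:
        score += 0.5
    return score >= threshold

# Made-up observations: (speed m/s, distance m, decelerating, ran_the_light)
cars = [
    (22.0, 15.0, False, True),   # fast, close, no braking: ran it
    (18.0, 40.0, False, True),
    (12.0, 30.0, True,  False),  # braking: stopped
    (8.0,  60.0, True,  False),
    (15.0, 80.0, False, False),  # fast but far away: stopped in time
]

def error_rates(threshold):
    """Return (false positives, false negatives) over the sample cars."""
    fp = fn = 0
    for speed, dist, decel, ran in cars:
        flagged = predict_runner(speed, dist, decel, threshold)
        if flagged and not ran:
            fp += 1
        if not flagged and ran:
            fn += 1
    return fp, fn

strict = error_rates(1.8)    # high bar to flag: misses a real runner
cautious = error_rates(1.0)  # low bar to flag: flags two compliant drivers
```

On this sample, the strict setting yields no false positives but one false negative, and the cautious setting the reverse: zero false negatives at the price of two false positives, i.e. needless slowing at green lights.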
Finally, when cars are programmed to do all the driving for us, we may start to wonder why we have traffic lights at all.
Does IT take away jobs? (November 10, 2011). Posted by Cameron Shelley in STV302.
The Economist has recently published an intriguing blog post on the displacement of jobs by IT. The entry is framed as an examination of the so-called Luddite fallacy: the view that efficiencies realized through the automation of work lead to job losses. It was this view that supposedly led the Luddites to smash their bosses' stocking frames. (I feel constrained to point out that this mischaracterizes the Luddites of history, who actually approved of automation but fought their employers' project of undermining established labour standards.) As the article explains, this view is seen as a mistake:
Economists see this as a classic example of how advancing technology, in the form of automation and innovation, increases productivity. This, in turn, causes prices to fall, demand to rise, more workers to be hired, and the economy to grow.
In brief, the unemployed find new jobs and move on.
The article then raises the current economic situation as a counter-example. Automation and innovation continue apace today, yet unemployment remains stubbornly high. Why have the unemployed not yet moved on to find other jobs? The answer may be that automation really has eliminated that work.
The contention in the article, if I understand it correctly, is basically that automation is beginning to dominate the economy as a whole. In the past, for example, agricultural workers whose jobs were made redundant by farm equipment could move to the city and find work in factories. Now, however, the ability of computerized machinery to perform not only manual labour but also cognitive labour has left people without another sector to which they can migrate. In effect, the article points out, the machines have become not only manual labourers but also mid-level, white-collar employees.
This is where the problem hits close to home, if you like. Until now, people could find remunerative, white-collar work by undergoing advanced training, so that they could perform cognitive work that computerized technology could not. In short, people could go to university. However, as doctors, lawyers, and programmers are made redundant, it becomes harder for universities to give human beings a competitive advantage, certainly one that will last them long in view of the pace of technological progress. Perhaps the increasing drive to make university studies more efficient or, at least, cheaper by putting courses online can be viewed as a response to this pressure. However, when students can no longer expect to find jobs as a result of their studies, they will probably stop signing up.
The article tries to finish on an up-beat note. Humans still have comparative advantages over computers: “the ability to imagine, feel, learn, create, adapt, improvise, have intuition, act spontaneously.” Whether these advantages will translate into employment is unclear, but it could happen. It may also suggest a resurgence of the liberal arts in education; these are the areas where imagination, feeling, creativity, and so on are traditionally cultivated. This is not to say that such qualities are lacking in the more technological disciplines, but these have tended to emphasize analytical skills and approaches more and more over the years. Shifting to a more artsy, less analytic curriculum might not prove easy or agreeable. However, it may prove necessary.
Quilting 2.0 (January 3, 2011). Posted by Cameron Shelley in STV202, STV302.
Any quilters out there? What kind of sewing machine do you use? A recent IEEE Spectrum blog outlines how quilting has gone hi-tech with heavily computerized sewing machines. Unlike your parents' machines, the new machines come equipped with computers that can help quilters both to design new quilt patterns and to execute the fancy stitching needed to realize their designs. The newest machines thread themselves, provide digitally enhanced images of the sewing area, and have sophisticated computer-aided design software built in.
These new machines provide some fantastic benefits, including:
- Enabling less-skilled quilters to design and create beautiful and challenging quilts that would have been beyond their skills with a regular machine;
- Automating the stitching process to make the sewing less tedious than before;
- Computerizing the stitching process to make the sewing more accurate than before;
- Creating a community of quilters who can share their designs and experiences over the Internet.
The computerized sewing machines seem to do for quilting what the computer printer did for the typing pool:
“For us, it’s really printing with thread,” explains Dean Shulman, the senior vice president of Brother’s sewing division.
Of course, with all these gains come some potential losses.
- The repetitive nature of sewing, which can be meditative for some quilters, is lost;
- Computerization of sewing machines will accelerate their obsolescence. Until now, a sewing machine could be expected to work fruitfully for decades; computerized machines, like laptops, will become uncompetitive in a few years.
- The ready availability of complex sewing designs and techniques could overwhelm the design skills of some quilters. I am thinking of the rise of chartjunk in the 1980s, that is, distracting graphics made possible by software packages that provided lots of doodads but not much guidance on how to use them properly.
One quilter sums up her perspective in this way:
“Computerized technologies can be fickle and fragile,” complains Stephanie Gordon, the owner of Swamp Quilts, in Gainesville, Fla. “I can understand the allure of a very high-tech computer, but I like to stick with the basics. My machine is not digital, but it is high quality, durable, and does exactly what I need it to do.”
For me, the apparent progress in removing the (hand) sewing from quilting also raises an epistemological question: Can someone really be said to know how to quilt if they do not know how to sew? I am reminded of a recent experience at an airport restaurant where I ordered a sandwich but asked to substitute an ingredient. The cook replied that he could not satisfy my request: The ingredients are delivered from a central location and the cooking process does not allow for alterations. The cook’s abilities and, I had the impression, skills were limited to assembling ingredients prepared elsewhere and processing them only as the cooking gear allowed. In short, my food was made by someone who did not know how to cook. Likewise, I am led to ask, have the new sewing machines made it possible for someone to make a quilt who does not know how to quilt? Or should we just say that the concept of quilting has to change along with the means of doing it?
Ethical drones? (April 1, 2010). Posted by Cameron Shelley in STV202, STV205, STV302.
A recent article in The Economist notes one response to the continuing controversy over the use of drones for attack and assassination by the US military and CIA: The drone could be programmed to think for itself on the ethics of its missions.
The software conscience that Dr Arkin and his colleagues have developed is called the Ethical Architecture. Its judgment may be better than a human’s because it operates so fast and knows so much. And—like a human but unlike most machines—it can learn.
After each strike the drone would be updated with information about the actual destruction caused. It would note any damage to nearby buildings and would subsequently receive information from other sources, such as soldiers in the area, fixed cameras on the ground and other aircraft. Using this information, it could compare the level of destruction it expected with what actually happened. If it did more damage than expected—for example, if a nearby cemetery or mosque was harmed by an attack on a suspected terrorist safe house—then it could use this information to restrict its choice of weapon in future engagements. It could also pass the information to other drones.
The Ethical Architecture seems to involve a limited form of utilitarian calculus, weighing benefits against harms to reach a decision. The harms and benefits considered are those most easily and immediately observed and quantified, e.g., the number of casualties and damage to infrastructure. Less tangible considerations, like fairness or resentment, will likely not be factored in. Hopefully, the operators of the drones will understand the limitations of the Ethical Architecture and not concede ethical deliberation to it.
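The limited calculus the article describes can be sketched as follows. To be clear, this is my own invented caricature, not Dr Arkin's actual Ethical Architecture; the weapon names, the damage scores, and the correction rule are all hypothetical. It captures only the two quantifiable pieces: a benefit-versus-harm comparison, and the feedback step in which observed damage inflates future harm estimates.

```python
# Hypothetical expected collateral-damage scores per weapon (invented units).
WEAPONS = {
    "small_missile": 2.0,
    "large_missile": 5.0,
}

def permitted(weapon, expected_benefit, damage_history):
    """Allow a strike only if expected benefit outweighs predicted harm.

    damage_history holds past ratios of actual to expected damage for this
    weapon; the worst observed ratio inflates the harm estimate, a crude
    stand-in for the way the article says the drone learns from each strike
    and restricts its choice of weapon after bad outcomes.
    """
    correction = max(damage_history) if damage_history else 1.0
    predicted_harm = WEAPONS[weapon] * correction
    return expected_benefit > predicted_harm
```

Before any strikes, a large missile with an expected benefit of 6 clears the bar (6 > 5); after one strike that did twice the expected damage, the same choice is refused (6 < 10). Everything the sketch leaves out, fairness, resentment, and every other intangible, is precisely what the paragraph above worries about.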
I should point out, too, that military drones are not the only place where ethical software might soon be deployed. Consider the increasing deployment of automated driving features in cars, for example. As computers in our cars take over more and more of the tasks of driving, they will have to make more of the decisions. A collision avoidance system, for instance, may have to decide who lives and who does not when a sudden collision becomes unavoidable and the only variable left to consider is how it will occur.