Excerpt from The Technical and Social History of Software Engineering
(To be published by Addison Wesley in the autumn of 2013)
Copyright © 2013 by Capers Jones. All rights reserved.
As many readers know, Google has already been working on a prototype of a wearable computer called “Google Glass.” The device looks like an ordinary pair of glasses but contains an embedded computer with lenses that can display information.
This concept has been greeted with both favorable and unfavorable comments. The favorable view is that the new device can provide useful information such as weather alerts, traffic problems, and emergency messages. The unfavorable view is that the displayed images might occlude or interfere with ordinary or peripheral vision and hence cause automobile accidents. In fact, laws and regulations prohibiting Google glasses from being worn under certain circumstances are already being promulgated. However, not enough empirical evidence is available to know whether the favorable or the unfavorable view is the more realistic.
It is premature to judge the device because it is not yet commercially available, but the concept is of both technical and social interest. In thinking about the implications of the Google glasses, it is fairly obvious that computers are now small enough to be easily embedded in clothes or worn as glasses. The question is, what benefits might they provide over and above normal computers, pads, and smart phones?
Here are some hypothetical features that may or may not be included in the Google glasses but are certainly technically possible. The first and most compelling feature would be the ability to have the glasses monitor the health condition of the person wearing them. Factors such as pulse rate, temperature, blood pressure, and other surface conditions could be monitored in real time. With an accelerometer the glasses could also check for accidents such as falls or collisions. In case of a medical emergency the glasses could automatically summon assistance, which the wearer might be unable to do after a stroke or heart attack that left him or her unconscious.
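The accident-detection idea can be sketched in a few lines. One simple approach (purely illustrative, not anything Google has announced) is to watch the magnitude of the accelerometer vector for a sharp spike followed by near-total stillness, suggesting an impact after which the wearer is motionless. The thresholds below are assumptions chosen for the example, not clinical values:

```python
import math

# Hypothetical fall detector: flags a fall when acceleration spikes
# above an impact threshold and the wearer is then motionless.
# Thresholds are illustrative assumptions, not clinical values.
IMPACT_G = 3.0   # spike suggesting a collision or fall (in g units)
STILL_G = 0.2    # near-zero movement afterward

def detect_fall(samples):
    """samples: list of (x, y, z) accelerometer readings in g units."""
    magnitudes = [math.sqrt(x*x + y*y + z*z) for x, y, z in samples]
    for i, m in enumerate(magnitudes):
        if m >= IMPACT_G:
            aftermath = magnitudes[i + 1:]
            if aftermath and all(a <= STILL_G for a in aftermath):
                return True  # spike followed by stillness: summon help
    return False

# A quiet walk (about 1 g from gravity alone) raises no alarm;
# a spike followed by stillness does.
print(detect_fall([(0, 0, 1.0), (0.1, 0, 0.9), (0, 0, 1.1)]))    # False
print(detect_fall([(0, 0, 1.0), (2.5, 1.8, 1.2), (0, 0, 0.05)])) # True
```

A production detector would of course need to filter sensor noise and distinguish falls from, say, setting the glasses down on a table, but the basic signal pattern is the same.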
A second potential use might be improved night vision by light amplification. This is perhaps a bit tricky in 2013 but should be feasible by the end of the decade. A potential downside is that most forms of light amplification for night vision are somewhat bulky, but that is perhaps a solvable problem in the future.
A third potential feature would be very valuable to those who are hard of hearing: showing closed captions for movies and television shows that do not currently have captions. Also valuable for hard-of-hearing users would be capabilities such as those provided by Dragon NaturallySpeaking, translating spoken words into visible text that would appear on the glasses.
This instant translation would allow a profoundly deaf person to understand verbal information in close to real time. In fact, computers are fast enough today, and will certainly be faster in 2019, for real-time translation to occur easily. This idea might be opposed by the deaf community, but since the capability does not actually exist in 2013, that is an unknown factor.
Yet another service for Google glasses might be synchronization with hearing aids or cochlear implants, so that important messages such as storm warnings or evacuation orders arriving via the Web could be routed to those devices as well as being displayed, assuming they had Bluetooth or some other short-range connectivity.
Once spoken words are captured, it would also be possible to use an automatic natural language translation program. This would be very useful for international travelers. It is theoretically possible to have a kind of science-fiction capability in which, for example, a conversation between a Japanese speaker and an English speaker would be simultaneously translated into both languages. If both parties were wearing Google glasses, they might be able to carry on what would be pretty close to a normal conversation. The Google Translate application already does this, and coupling it with a verbal tool similar to Dragon would make global travel a great deal more convenient than it is today.
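The two-way conversation described above is, at bottom, a three-stage pipeline: speech recognition, text translation, and display (or synthesis) on the other wearer's glasses. The toy sketch below uses a tiny phrase table as a stand-in for a real engine such as Google Translate; the function names and the phrase-lookup rule are assumptions made for illustration only:

```python
# Toy pipeline: recognize -> translate -> display. The phrase table is
# a stand-in for a real translation engine; genuine speech recognition
# and Google Translate are, of course, far more sophisticated.
PHRASES_EN_TO_JA = {
    "hello": "konnichiwa",
    "thank you": "arigato",
    "good morning": "ohayo gozaimasu",
}

def recognize(utterance):
    # Stand-in for speech recognition: assume audio is already decoded
    # to text; we only normalize it.
    return utterance.lower().strip()

def translate(text, table):
    # Phrase lookup; a real engine would handle grammar and context.
    return table.get(text, "[untranslatable]")

def display_caption(text):
    # Stand-in for rendering the caption on the other wearer's lenses.
    return f"[glasses display] {text}"

spoken = "Thank you"
print(display_caption(translate(recognize(spoken), PHRASES_EN_TO_JA)))
# [glasses display] arigato
```

Each stage could run on the glasses or be offloaded to a server; the latency budget is what determines whether the conversation feels "pretty close to normal."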
In fact, if the translated conversations could also be routed to hearing aids and cochlear implants, two deaf people who speak totally different languages might be able to converse fairly well.
Somewhat surprisingly, Google glasses would also be of benefit to blind people if they included sensors and artificial-intelligence routines and could communicate with hearing aids or audible devices. For example, a blind person approaching an intersection could receive a verbal warning that the traffic light was red. It would also be possible, by turning the head to the left and right, for the glasses to provide additional warnings such as “High-speed auto approaching. Danger.”
Yet another feature for the blind would be the ability to scan text and convert it into spoken words. This might enable a blind person wearing Google glasses to “read” ordinary books and eBooks by merely aiming the glasses at them.
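The read-aloud feature is again a pipeline: capture an image, run optical character recognition, then hand the result to a text-to-speech engine in short utterances. A minimal sketch with stub stages follows; the function names and the fixed chunking rule are illustrative assumptions, not any announced API:

```python
def ocr(image):
    # Stand-in for optical character recognition: here the "image" is
    # already a string serving as a placeholder for recognized text.
    return image

def to_speech_chunks(text, max_words=5):
    # Break recognized text into short utterances so a text-to-speech
    # engine can speak them as the wearer scans across the page.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

page = "Call me Ishmael Some years ago never mind how long"
for chunk in to_speech_chunks(ocr(page)):
    print(f"[speak] {chunk}")
```

In practice the chunking would follow sentence boundaries and reading order rather than a fixed word count, but the division of labor among the three stages would be the same.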
Google glasses would have more value for the hard of hearing than for the blind, but the capabilities exist in 2013 to create a new family of wearable assistive devices that integrate voice-to-text conversion and natural language translation.
Google glasses might also be of use to those with physical handicaps such as quadriplegics. If the glasses respond to voice commands, then those who can speak could use them to communicate. There are other future possibilities besides the ones discussed here, but these are all fairly important for potential users with physical handicaps.