Progress in the interface between Us (the Humans) and It (Intelligent Technology) has mainly consisted of extending and improving the physical and communication channels through which we are shown the virtual world and effect our will upon it, or, indirectly through it, upon the real world.
Better screens, 3D virtual-reality headsets, spatialized audio and haptic feedback (shakers and vibrators) bring us closer to the virtual environment. Augmented reality blends that environment with the real world.
Going the other way, we’ve moved beyond keyboards and mice with voice, face and gesture recognition.
But these are things that let Us tell It what we want to tell it: the mechanical things related to movement and control, the informational things like textual content and commands. We are more than our words and gestures.
In a face-to-face conversation we are also responding to what we sense about the other person’s emotional and physical state. Are they happy or sad? Curious or mad? We sense this from their stance, the dynamics of their facial expressions, the sheen of their skin, their sweat and their breathing. This context can change the meaning of their words; it can explain and inform why they are saying and doing what they are. Are they saying “Ow” as a comment, or because they are in pain?
Progress has been made in giving machines the ability to sense things that can be used to guesstimate those emotional states.
The simplest idea to wrap your head around is interpreting facial expressions to determine emotions. This can be done with a camera and software, and the accuracy of such systems keeps improving as they proliferate. As this page shows, ’emotion recognition’ is alive and thriving. It also references semantic analysis of speech to determine emotional state and context.
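To make the idea concrete, here is a minimal sketch of the classification step such systems perform. Everything here is invented for illustration: a real system extracts facial features from camera frames with computer-vision models, while this toy uses made-up per-emotion “prototype” feature vectors and a nearest-prototype match.

```python
import math

# Hypothetical prototype feature vectors, one per emotion:
# (mouth_corner_lift, eye_openness, brow_raise), each normalized to 0..1.
# These numbers are invented for illustration, not from any real model.
PROTOTYPES = {
    "happy":     (0.9, 0.6, 0.5),
    "sad":       (0.1, 0.4, 0.3),
    "surprised": (0.5, 0.9, 0.9),
    "angry":     (0.2, 0.7, 0.1),
}

def classify(features):
    """Return the emotion whose prototype is nearest in Euclidean distance."""
    return min(PROTOTYPES, key=lambda e: math.dist(features, PROTOTYPES[e]))
```

A lifted mouth corner, e.g. `classify((0.85, 0.6, 0.5))`, lands nearest the “happy” prototype; wide eyes and raised brows land on “surprised”. Real classifiers learn these boundaries from labelled data rather than hand-picked prototypes, but the shape of the problem is the same.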
But those systems generally require a single, cooperative user who stays stationary, facing and talking to the system. Not everyone, and not every situation, can fit within those restrictions.
What if the system is passively monitoring passers-by? What if the people are standing rather than sitting? (People come in different shapes and sizes.) What if they’re wearing sunglasses? Have facial hair? A speech impediment or a strong accent?
Those are all potential failure cases for brittle systems. The real world needs robust systems that can cope with all of these cases and more.
Back in 2008 ScienceDaily.com ran this: “‘Tiny Radio Antennas’ Under Skin Could Act As Remote Sensors Of Humans’ Emotional, Physiological State.” The researchers discovered that sweat glands have another use: as very-high-frequency antennae. The summary: “Scientists have discovered a method for remote sensing of the physiological and emotional state of human beings. The researchers believe the discovery could theoretically help remotely monitor medical patients, evaluate athletic performance, diagnose disease and remotely sense the level of excitation — which could have significant implications for technology in the biomedical engineering, anti-terror and security technology fields.”
They’re talking about Remote Sensing. From a Distance. Of course, they have to shine a 75 GHz or 100 GHz ‘light’ at you . . .
I don’t know what has happened with this tech in the past 8 years, but the latest work along these lines uses the same frequencies as your wireless modem, meaning it could likely be implemented by adapting technology we already manufacture.
This article, “Detecting emotions with wireless signals,” describes a system called “EQ-Radio” that “is 87 percent accurate at detecting if a person is excited, happy, angry or sad — and can do so without on-body sensors.” It can detect your breathing and your heartbeat, and with fairly good precision: “By recovering measurements of the heart valves actually opening and closing at a millisecond time-scale, this system can literally detect if someone’s heart skips a beat.” Wow!
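As a rough illustration of the signal-processing end of this (not MIT’s actual EQ-Radio algorithm, which does much harder work separating heartbeats from breathing in RF reflections), here is a sketch of recovering a heart rate from a periodic chest-movement signal with simple peak detection. The signal below is synthetic; a real system would extract it from the reflected radio waves.

```python
import math

def detect_peaks(signal, threshold):
    """Indices of local maxima that rise above the given threshold."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]

def heart_rate_bpm(signal, sample_rate_hz, threshold=0.5):
    """Estimate beats per minute from the mean interval between peaks."""
    peaks = detect_peaks(signal, threshold)
    if len(peaks) < 2:
        return None
    intervals = [b - a for a, b in zip(peaks, peaks[1:])]
    mean_interval = sum(intervals) / len(intervals)  # in samples
    return 60.0 * sample_rate_hz / mean_interval

# Synthetic "heartbeat": a 1.2 Hz (72 BPM) sine sampled at 50 Hz for 10 s.
fs = 50
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(fs * 10)]
```

Running `heart_rate_bpm(sig, fs)` on this clean signal recovers roughly 72 BPM. The hard part in practice is not this arithmetic but isolating a clean beat signal from noisy reflections, which is where the research effort goes.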
On the same day I saw that, I also saw this article: “Link between walk, aggression discovered.” In it, researchers have shown that yes, a swagger is a pretty good indicator that a person is likely to be aggressive. It might sound like they are just restating something ‘we all know already,’ but so-called common knowledge is not usually amenable to computer use. Computers need more definitive information than the ‘feeling’ we all get.
In some situations this type of information is potentially important to know.
A medical assist system would be more effective if it knew more about the state of its patient. An advertising instrument might better serve its purpose if it could gauge the emotional reaction of the viewer – you don’t really want to anger people if that is not your aim.
As the swagger-equals-aggression study mentions, a security CCTV system would benefit from being able to determine the mental states of the people it sees, so that those exhibiting signs of hostility, suspicion or furtive behaviour could be flagged.
Between those extremes are all the more mundane services we don’t have yet that will be more effective if they take the emotional and physical state of Us into account. Back in the late 1970s it was nearly impossible to describe to people what the micro-computer revolution was going to mean, even when you had glimpses of what was possible. I know this from personal experience. Just the idea that you would be carrying a telephone around in your pocket was a stretch. Add in all the other things that makes possible, and almost no one would believe it would all happen within their lifetimes. (Every record I own in a little box in my pocket? Get outta here!)
When I recounted a few of these news items to a coworker, her first response was something to do with Big Brother. I understand that, and I suspect the reaction would be the same from a significant portion of the people in our society. But that is a reaction to the potential uses and abuses of a tool, and these are not the only possible, or likely, uses.
Do I feel comfortable with the idea that I and other citizens who pose no threat would be subjected to scrutiny of this nature? Maybe not 100 percent, but I’m not as uncomfortable as some, either. It would be nice to know that CCTV systems can guesstimate who deserves more scrutiny from humans, but I also know that some scenarios would include people who get flagged because they have legitimate reasons for agitation, fear and/or suspicion. Which is why properly trained humans should be the final arbiter of whether action needs to be taken. (And that does not mean overworked, poorly trained and poorly vetted workers – wishful thinking, I know.)
Whatever we may think, this technology is rapidly pervading our world, and we would be better served by meeting it with understanding and reason instead of fear and suspicion. You can’t fix what you don’t understand, and those who can will filter you out if they think you’re just ignorantly parroting fear-mongering you heard on the web.
Here’s one for you: as people age their health changes for the worse, and many end up taking a daily regimen of medications. They have to employ simple techniques like day-of-the-week-labelled pill holders, and they have to try to monitor their changing physiology and report back to their doctor. What if the pill dispenser did that? If it could measure heart rate, respiration and perspiration in real time, it might be able to provide a better record to the health-care professionals managing the patient. It might be able to provide on-the-spot recommendations about what to take and when, based on current readings of the patient’s state. And if it could do a lot of this without requiring the person to be properly hooked up to the equipment, so much the better. With the Baby Boomer bulge reaching old age, the cost of caring for them becomes a real burden on the health-care system. Anything that helps alleviate or improve that should be welcomed.
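A sketch of what that dispenser’s record-keeping side could look like. The class names, vital-sign fields and “normal” ranges here are all invented for illustration; a real device would use clinically validated thresholds set per patient.

```python
from dataclasses import dataclass, field

# Hypothetical normal ranges (low, high) for contactless vitals readings.
NORMAL = {"heart_bpm": (50, 100), "resp_rpm": (10, 20)}

@dataclass
class DoseEvent:
    """Vitals captured at the moment a dose is dispensed."""
    time: str
    heart_bpm: float
    resp_rpm: float

    def flags(self):
        """Names of any readings outside their normal range."""
        return [name for name, (lo, hi) in NORMAL.items()
                if not lo <= getattr(self, name) <= hi]

@dataclass
class DispenserLog:
    """Running record the device could report to the care team."""
    events: list = field(default_factory=list)

    def record(self, event):
        self.events.append(event)
        return event.flags()  # anything worth surfacing right now
```

So a morning dose logged with an elevated heart rate, `log.record(DoseEvent("08:00", 115, 14))`, would come back flagged for `heart_bpm`, while a normal evening reading would pass silently into the record for the doctor’s next review.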
Wait . . . what’s that smell? Is that my computer overheating and is it experiencing emotions about all this?