How are Alexa, Siri and other artificial intelligence (AI) tools intervening in dangerous situations in daily life? That’s an evolving question that SUNY Oswego communication studies faculty member Jason Zenor continues to explore, including in an award-winning publication.
In “If You See Something, Say Something: Can Artificial Intelligence Have a Duty to Report Dangerous Behavior in the Home,” published in the Denver Law Review, Zenor recounted a 2017 incident in which, according to police, a jealous man threatening his girlfriend at gunpoint unknowingly caused their Amazon Echo’s Alexa to call the police, leading to his arrest.
While the incident made national news -- in part because of its relative rarity -- Zenor noted it represents the tip of the iceberg for how AI is evolving to interact with daily online activity.
How AI can save lives
“You can find a few dozen stories over the last several years where Siri or Alexa saved a life, such as with crime, accidents, heart attacks or the like,” Zenor explained. “In those situations, the victim has their phone or in-home device set up to recognize ‘Call 911’ or ‘emergency.’ This is a simple setting, and most devices are now set up for this automatically.”
Zenor’s publication, recognized as a top paper by the Freedom of Expression division at the 2021 National Communication Association conference, explored the trend further. His research found that smartphones and in-home devices are not yet capable of anything beyond responding to direct requests to call 911. But artificial intelligence is at work behind the scenes in other situations.
“Facebook and other tech companies can monitor things like bullying, hate speech and suicidal tendencies in online spaces through AI,” Zenor noted. “But it is still looking for certain words and will respond with pre-approved responses like hotline numbers. In-home AI and other devices are set up to listen when we want them to -- but they still need certain prompts, though the language ability is getting better.”
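That kind of monitoring can be illustrated in a few lines of code. The Python sketch below is purely illustrative -- the flagged phrases, canned messages and function name are hypothetical, not drawn from any company's actual system -- but it captures the core mechanism Zenor describes: no real understanding, just matching certain words and returning a pre-approved response.

```python
# Minimal, illustrative sketch of keyword-based flagging: scan a message
# for flagged phrases and return a pre-approved response if one is found.
# All phrases, messages and names here are hypothetical examples.

# Hypothetical flagged phrases mapped to pre-approved responses.
CANNED_RESPONSES = {
    "call 911": "Connecting you to emergency services.",
    "hurt myself": "Help is available. Call or text the 988 crisis line.",
    "kill myself": "Help is available. Call or text the 988 crisis line.",
}

def check_message(text: str) -> str | None:
    """Return a pre-approved response if the text contains a flagged phrase."""
    lowered = text.lower()
    for phrase, response in CANNED_RESPONSES.items():
        if phrase in lowered:
            return response
    return None  # No flagged phrase found; the system stays silent.

if __name__ == "__main__":
    print(check_message("Alexa, call 911"))      # pre-approved emergency reply
    print(check_message("What's the weather?"))  # None
```

Real moderation systems rely on far more sophisticated language models than a phrase list, but the design point stands: the reply is pre-approved, not improvised.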
AI is not yet making a big difference in home safety -- other than in-home audio and video serving as after-the-fact evidence -- because of the complicated nature of doing so. “In fact, it is more likely right now that perpetrators will use apps to track and surveil their victims than it is that an AI will help a victim, although certainly not proactively,” Zenor noted. But the field is making strides elsewhere.
Predictive prevention
“Outside the home, predictive AI is being used in both health care and law enforcement,” Zenor said. “This is admirable in health care and similar to screenings that health care facilities now give to patients -- such as for depression, drug abuse or safety in the home. But in both of these spheres, it is only predictive, and we also run into issues of implicit bias programmed into AI leading to disparate treatment based on race, sexuality, income and other factors -- and this is already happening in the justice system. Any time someone is reported, it can lead to unnecessary involvement with law enforcement or mental health systems that changes the trajectory of someone's life. This can have grave consequences.”
Relatedly, these questions raise legal issues such as privacy, criminal procedure, duty to report and liability.
“The first question that will need to be answered is what is the 'status' of our AI companions,” Zenor explained. “The courts are slowly giving more privacy protection to our connected devices. No longer can law enforcement simply ask the tech companies for the data. But if AI advances to be more anthropomorphic and less of a piece of tech, then the question is what is the legal parallel? Is it law enforcement seizing our possessions -- as it does with phones and records -- or will the in-home AI be more like a neighbor or family member reporting us? The former invokes the Fourth Amendment; the latter does not, as committing a crime or harm is not protected by general privacy laws.”
The other side of the coin involves proactive duties to report. “Generally, people have no duty to report,” Zenor said. “The exception is certain relationships -- such as teachers, doctors or parents -- who would have a duty to report possible harms when it comes to those to whom they have a responsibility such as students, patients or children.”
New legal issues
Liability issues could complicate the picture even further, and could lead to unexpected lawsuits for companies using AI.
“Once you do act, then you do have a duty of due care,” Zenor said. “If you do not use due care and it leads to an injury, then there could be liability. So, companies may open themselves up to liability if they program AI to be able to respond and it goes wrong. Conversely, if companies could program AI to do this and choose not to, then there will certainly be, at a minimum, PR issues -- but I could see it turning into class-action negligence cases when deaths do occur.”
As with many issues related to the evolution of technology, individuals and society have to consider trade-offs.
“Ultimately, we have to consider how much more encroachment into our private lives we are willing to accept in exchange for protecting us from harm,” Zenor noted. “This is not a new question -- it arises every time we have an advancement in technology. Ultimately, privacy is a social construction in the law -- what we as a society consider to be the boundaries. We seem to become more comfortable as time passes, and technology natives see no issue while older generations think of it as a gross violation.”
So how, and how often, will AI intervene when attempting to provide help in the future?
“My best guess is that there will be incidents that make the news where AI saves a life, and there will be public pressure to add more safety features to the technology,” Zenor said. “AI will advance enough that machines become companions, like our pets, so we will have a relationship with them that includes divulging private information that they could keep permanently. As it is today, we would expect that if our companion could save us, then they would try to -- many people own pets as a form of protection or as service animals. The big issue from this will be liability. I assume companies will seek out liability protections either through waivers in terms-of-service agreements or through special legislation similar to 'Good Samaritan' laws.”