I’m not saying this is actually a viable tactic that would work, but after reading this article you might consider it a viable last-ditch attempt, a Hail Mary pass as they say, to save your life from an AI with a weapon.
Readers of this blog might be forgiven for thinking I’ve drunk the AI Kool-Aid, and I suppose I have, in a way. But nothing is perfect, and that goes for AIs, even the AIs that train themselves.
What the article gets at, essentially, is that the new crop of self-trained AIs may not suffer from the same vulnerability to corrupted data in the training set, because they don’t have a training set of data, but they can still be attacked another way. Self-training AIs develop what are called policies, which help them deal with whatever they might encounter out there in the real world. Think of a policy as a way to handle a generic type of situation the AI might find itself in. Much as a fencer learns all the standard attacks and ripostes, an AI, like us, learns by experience what to expect.
If an AI is confronted with completely unexpected behavior, it gets confused, and the current crop of AIs exhibit confused responses that can lead to unexpected behaviors and outcomes. If this happens on the road with a self-driving car AI, or on the battlefield with a self-propelled killer robot AI, the consequences could be catastrophic for the AI. And for any people around it.
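To make the idea concrete, here is a minimal toy sketch of the policy-plus-confusion pattern described above. This is purely illustrative and not from the article: the policy is modeled as a simple lookup table built from experience, and every name in it (`train_policy`, `act`, the example states and actions) is hypothetical. The point is only to show where "confused" behavior comes from: a state the policy has never encountered falls through to an arbitrary fallback.

```python
# Illustrative sketch only: a policy as a table mapping known
# situations to learned responses. All names are hypothetical.

def train_policy(experience):
    """Build a state -> action table from (state, best_action) pairs."""
    policy = {}
    for state, action in experience:
        policy[state] = action
    return policy

def act(policy, state):
    # In-distribution: the policy has a learned response for this state.
    if state in policy:
        return policy[state]
    # Out-of-distribution: no learned response exists, so the agent
    # falls back to an arbitrary default. This is the "confused"
    # behavior an attacker could deliberately provoke.
    return "default_action"

experience = [("oncoming_car", "brake"), ("green_light", "go")]
policy = train_policy(experience)

print(act(policy, "oncoming_car"))      # learned response: brake
print(act(policy, "kangaroo_on_road"))  # never seen: arbitrary fallback
```

A real self-trained agent generalizes far better than a lookup table, of course, but the failure mode is analogous: present it with a situation unlike anything in its experience and its response is no longer grounded in anything it learned.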
So my tongue-in-cheek title, hilarious as it is, is not 100% joke or jest: it just might work in the right conditions, and under those conditions you may not have any other alternative.