Turks segment
If you take non-AI software and hook it up to a machine gun (akin to the one used in the Breaking Bad finale…), of course that opens up dangerous possibilities. Hooking up software - including software that writes software, and can do so in unpredictable ways - to physical systems that can do harm, accidental or not, is fraught with hazard. It’s the same diff: a human or company makes non-AI software that does harm, or a human or company hooks up canned AI software/slop that does harm.
When a programmer writes non-AI software that uses a defective component s/he did not buy, liability falls on him/her (on the principle that there was no “implied contract” from having used a free component…). And when you use AI software that was purchased and it does not do what it was supposed to do (or even what it was implied to do…), liability falls on the AI software company. If the AI software was free, it falls on the genius who decided to hook it up to a physical system that can do harm.
[Note that a boilerplate disclaimer on software a company is selling does NOT get it off the hook for liability - there’s a lot of precedent out there enshrining that principle.]
At the end of the day, this is all about software liability -NOT AI.
So we need to make sure good software liability laws are in place - they are not. Just look at the way the AI industry wants to trample on existing intellectual property laws. They also want software liability to stay loosey-goosey, so that after the damage is done, victims’ only recourse is civil litigation that many cannot possibly afford.
Frankly, much of this “the AI is dangerous” talk is just a hype vehicle for the AI Industrial Complex (Google, Microsoft, Meta, X, …) to generate interest in their slop. Look at how that works: they control what you see online, and what you’re seeing is the message that AI is mind-blowing (I myself think it merits research and can help with queries - but sorry, it ain’t mind-blowing!)
For better insights on AI, watch some Ed Zitron videos.
https://www.youtube.com/results?search_query=ed+zitron+ai
What I have seen so far is an attempt by the AI Industrial Complex to buffalo users into diminished expectations from software. It’s a greed thing. <sarcasm>Surprising, considering the stalwarts of corporate America that are backing it</sarcasm> (though Anthropic is not as greasy as the other players, like Sam Altman…)
And for this, we’re locking up all software development funding in it.