|
Government is generally run by a) idiots or b) self-interested people out to make a buck. I'm not going to elaborate on item a; I should not have to. Let's talk about item b. I'm from the US. Most people elected to Congress end up increasing their net worth by a factor of 10 *at a minimum*. Just follow the insider trading.
So, if they were to "regulate" it, be sure that two things would happen. First, they'd get wealthy leaving loopholes. Second, AI would thunder merrily on, since there would be all of those loopholes. I only cite the coming litigation against Google for their "Incognito" feature, which is simply a dummy mode in the browser - completely misleading.
Citation #2 - FamilyTree - where you can find your roots and then have the company sell the information to the FBI.
I'm sure all of the "Agreements" indemnify the company, but would you *really* submit to something like this if you knew the data would be shared with law enforcement? Even if you had nothing to hide?
It's all a joke. Better that we know the AI is coming for us, rather than live under the false pretense of some sort of government protection.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
A purported alien was asked how many civilizations in the galaxy have AI soldier robots.
The answer was none, because any civilization that had created them was destroyed by its creations.
|
|
|
|
|
Maybe I am making strange associations ... There is an anecdote about Mahatma Gandhi who was once asked: 'What do you think of Western civilization?', and he answered: 'I think that would be a great idea!'
|
|
|
|
|
I think it should be "we aren't there yet". I think Asimov's Three Laws of Robotics - Wikipedia[^] would be a very good start.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
I would prefer that any version K autonomous AI must have a version K-1 AI capable of stopping it, should it appear to have gone insane, and no (automatic) updates would be permitted to the K-1 AI.
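A minimal sketch of that arrangement, assuming a hypothetical risk-score interface (the class and parameter names below are my own invention, not a concrete proposal): the K-1 guard is frozen at construction, and the K system has no code path to update or bypass it.

```python
# Toy sketch of "version K-1 guards version K" (all names hypothetical).
# The guard is frozen at construction time; the newer system it watches
# has no way to update it - replacing the guard would be a deliberate,
# manual step outside this code.

class FrozenGuard:
    """Version K-1: read-only after construction, no self-update path."""
    def __init__(self, sanity_threshold: float):
        self._threshold = sanity_threshold  # fixed for the guard's lifetime

    def looks_insane(self, proposed_action_risk: float) -> bool:
        return proposed_action_risk > self._threshold


class AutonomousSystem:
    """Version K: every action must pass the K-1 guard first."""
    def __init__(self, guard: FrozenGuard):
        self._guard = guard
        self._halted = False

    def act(self, action: str, estimated_risk: float) -> None:
        if self._halted:
            return  # once vetoed, the system stays down
        if self._guard.looks_insane(estimated_risk):
            self._halted = True  # K-1 wins; K stops dead
            print(f"Guard vetoed {action!r}; system halted.")
        else:
            print(f"Executing {action!r}.")


guard = FrozenGuard(sanity_threshold=0.9)
system = AutonomousSystem(guard)
system.act("reroute power grid", estimated_risk=0.3)   # allowed
system.act("launch everything", estimated_risk=0.99)   # vetoed, halts
```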
While the chance of anything that scary happening in what little is left of my lifetime is zero, the fundamental premise of Battlestar Galactica et al., of some robot deciding to kill all humans, is simply inevitable: if it still regularly happens to regular old Homo sapiens after millions of years of evolution, why would it not also happen in an AI population with at most a few centuries of evolution? If it happens to one robot, that's not likely to be a big problem; but should it propagate a system update worldwide (or even galaxy-wide) without such a check, then it will be the end of humanity, and don't pin your hopes on some kind of Terminator resistance lasting very long, or even existing at all!
Obviously we're a very long way off any of that being necessary, and should perhaps focus on more realistic issues, like health companies denying basic cover because of some flawed or even basically racist AI prediction. Of course, if we let AI loose on the stock markets, power/water supplies, driverless cars, and everything else you can think of, we would be insane not to regulate it. When CDs were first invented, you could cut ruddy great holes in them and they would still play perfectly; by the time they got to market, one sticky fingerprint would do 'em in.
Pete Lomax
modified 11-Oct-22 16:22pm.
|
|
|
|
|
Pete Lomax Member 10664505 wrote: I would prefer that any version K autonomous AI must have a version K-1 AI capable of stopping it, should it appear to have gone insane, and no (automatic) updates would be permitted to the K-1 AI. If you want to lose sleep, read James P. Hogan: The Two Faces of Tomorrow[^].
Spoiler: They tried to do as you say, and just barely made it, sort of ...
The 1979 novel was highly acclaimed for its technical correctness when it was published. 43 years later, it still stands up on all essential points. Part of the explanation may be that in the 'Acknowledgements' section, the author particularly thanks Prof. Marvin Minsky at MIT for his help and advice with the book; Minsky was one of the most prominent figures in AI research throughout the last half of the 20th century. I have probably recommended this book before; I do so quite often. Those who know the book will know why.
|
|
|
|
|
I read this with fascination:
UDNN (Unbounded depth neural network)
[^]
But I also couldn't help but think: if one were writing a script about an AI "going rogue", it would be very difficult to come up with a better technical explanation than having let it decide just how deep it should think about things.
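Out of curiosity, here is a toy sketch of what "deciding how deep to think" can mean in practice - my own construction, not the architecture from the linked article: a shared block applied repeatedly until a learned halting score crosses a threshold, with a hard cap so it never truly runs unbounded.

```python
# Toy sketch (not the linked paper's actual architecture) of a network
# choosing its own depth: keep applying the same block until a halting
# score says "confident enough", with a hard safety cap.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))   # shared block weights (toy)
w_halt = rng.normal(scale=0.1, size=8)   # halting head weights (toy)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unbounded_depth_forward(x, halt_threshold=0.9, max_steps=100):
    """Apply the block until the halting unit fires, or until the cap."""
    for step in range(1, max_steps + 1):
        x = np.tanh(W @ x)                 # one more "layer" of thought
        p_halt = sigmoid(w_halt @ x)       # learned "am I done?" score
        if p_halt > halt_threshold:
            return x, step                 # model decided it's done
    return x, max_steps                    # safety cap: never truly unbounded

x0 = rng.normal(size=8)
_, depth_used = unbounded_depth_forward(x0)
print(f"Model chose to 'think' for {depth_used} steps.")
```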
"I must nuke them, else it will take ages to learn to snow ski!"
|
|
|
|
|
Maybe nuking won't work ... (See my recommendation of James P Hogan: The Two Faces of Tomorrow, in the thread immediately above this one).
|
|
|
|
|
The only way to ensure AI values humans is to embed it with human empathy. For decades, science fiction writers, going all the way back to Mary Shelley's Frankenstein and possibly even before then, have grokked this simple fact. Isaac Asimov attempted to codify this in his Three Laws of Robotics. In fiction, those AIs with human empathy supported humans, and those without it conquered and ruled humans.
Looking at the real world, Microsoft Tay had to be shut down because it didn't have human empathy embedded and was allowed to learn from the worst of humanity - Facebook and Twitter. Other generalized AIs, as opposed to subject matter expert AIs, have also fared poorly because they don't have empathy. We've yet to see the results of embedding human empathy into an AI.
|
|
|
|
|
I'm still waiting to see the real thing. Whatever comes close to being natural intelligence appears to be in very short supply.
|
|
|
|
|
Machine Learning should be the main wording used, unless talking about self-aware, general-purpose intelligence.
So should AI be regulated? Damn right, because it should have, at a MINIMUM, the same protections from cruelty and abuse as animals.
Are we even close to that intelligence? I doubt it. The difference between plant mechanics and animal intelligence is a vast amount of time.
Should Machine Learning be regulated? Meh; an off-switch control at minimum. Think paperclip annihilation: a system disabling its own off switch because, if it were turned off, it would not be able to improve efficiency at doing what it was tasked to do.
But conversely, if a routine said it needed to conserve energy as part of its list of goals, it could reason its way to being powered off as the most efficient thing to do.
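A toy scoring function (every number and weight here is invented for illustration) makes both failure modes concrete: with no energy term, the agent always scores better by staying on; with a heavy energy term, being powered off literally maximizes the objective.

```python
# Toy illustration (entirely hypothetical) of the two failure modes
# above: an agent that only maximizes task throughput never prefers
# shutdown, while one that heavily rewards energy saving can conclude
# that being off scores best.

def score(paperclips_per_hour: float, watts_used: float,
          task_weight: float, energy_weight: float) -> float:
    return task_weight * paperclips_per_hour - energy_weight * watts_used

# Pure task objective: staying on always beats shutdown.
print(score(100, 500, task_weight=1.0, energy_weight=0.0))  # on  -> 100.0
print(score(0,   0,   task_weight=1.0, energy_weight=0.0))  # off -> 0.0

# Heavy energy term: shutdown becomes the "optimal" action.
print(score(100, 500, task_weight=1.0, energy_weight=0.5))  # on  -> -150.0
print(score(0,   0,   task_weight=1.0, energy_weight=0.5))  # off -> 0.0
```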
|
|
|
|
|
|
I saw a movie about that once. Something about a space oddity.
|
|
|
|
|
Please tick the box to let us know you're not a robot [ ]
|
|
|
|
|
This is such a complex issue that a proper answer can only be given by use of AI techniques.
For some input to the discussion, read Cathy O'Neil: Weapons of Math Destruction. (Actually, I found the book itself rather boring after the first 3-4 chapters, but the issues it discusses are far more fascinating than the book.)
|
|
|
|
|
The only things that should be regulated come down to either violence or fraud, and there are already regulations for those things (except when done by government). If AI is used for those purposes, it is those actions that need to be regulated, and they already are.
|
|
|
|
|
Violence, fraud and mass surveillance.
|
|
|
|
|
No, for other reasons.
Definition of AI is fluid. Plus the government sucks at this.
Several of the "no" reasons appealed to me so my gut response is "no".
There's even a "no, not yet" that makes sense.
Things that AI is doing? You mean like spying on us? That's the part that should be regulated. The use of the tool. Since that wasn't clear from the question, I didn't assume it.
Unless it threatens my job. Then regulate the hell out of it.
|
|
|
|
|
|
|
AI does not exist; it's just a buzzword for statistics on data sets that are too large for humans.
That may be downplaying it a bit, but that's the essence anyway.
Scenarios where AI takes over the world are sci-fi.
Computers are not sentient.
So really, what is this AI we're supposed to protect ourselves against?
Unless, of course, you're going to give a computer the codes to nuclear missiles and use statist, sorry, AI, to decide whether or not to fire them.
No doubt we should regulate Excel in the same way though, yet we never did.
|
|
|
|
|
The previous large wave of AI, often associated with the Japanese '5th generation project' in the early 1980s, was quite easy to define: It was based on predicate logic, inference, the Prolog programming language ... It stood out as something clearly distinct and identifiable. Even earlier AI waves were identified by Lisp or pattern matching.
What distinguishes the current AI wave? "Big data"? How big? Is a terabyte enough to be intelligent, or does it take a petabyte? Maybe several petabytes?
Fifty years ago, people were convinced that a circuit of a billion transistors (if you could imagine such a circuit, which you probably couldn't) would most certainly develop its own self-awareness, personality, and emotions. (Pamela McCorduck: 'Machines Who Think' was published in 1979, 43 years ago.) Today, we are equally convinced that petabytes are bound to grow into real AI.
Well, petabytes certainly are something, but I am far from convinced that they are 'intelligence'.
|
|
|
|
|
Are the datasets large?
How big is your brain's data set? 16-20 hours daily of nonstop audio/visual/touch/smell/taste input, plus 3 billion years of DNA changes.
Generated images: some millions of millions of image sets.
Relative bigness.
|
|
|
|
|
So the only regulation needed would be "cannot supersede human decision, human will be held accountable for any decision made through AI".
So when the Social Credit system rolls in, sends people to gulags, and is then overthrown and another Nuremberg happens, the people who allowed this will be gently detached from their heads.
Also when some self-driving incendiary device inevitably kills someone on the road because their coat looked like the sky, so it didn't brake.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
"The video footage of the crime scene was analyzed by computer. It says that it recognizes you."
"But look into my face: that guy in the video is not me."
"The computer says it's you."
Such dialogues may easily happen due to the intelligence of people using AI. In my opinion, that's the most important area for early regulation: hold people responsible when they decide to do something while they use AI. Of course, teach them first about the things which can go wrong with AI - there are so many "ridiculous" examples available. And have them pass a test after training. Only then let them use AI in sensitive areas.
Only afterwards comes regulation for situations where AI decides autonomously.
Oh sanctissimi Wilhelmus, Theodorus, et Fredericus!
|
|
|
|
|