
GeekNights Monday - Will AI Destroy Us?

Tonight on GeekNights, we consider whether Artificial Intelligence (in the future tech magic sense) will destroy humanity or make everything awesome. Will AIs replace us, or will they treat us the way we treat ants? In the news: T-Mobile continues to play catch-up but will announce Uncarrier 8.0 tomorrow, HIV is evolving to be less infectious and less deadly, and morons are angry at Flickr over their own moronitude about what the Creative Commons license actually means.

Download MP3
Source Link

Comments

  • edited December 2014
    In re: idiots being mad about CC on Flickr: Don't agree to something if you don't fully understand what it means. They think Creative Commons is this magical thing that is all benefits and no tradeoffs, so they just pick the default one. That's why there are different licenses! READ, MOTHERFUCKERS!
    I've had musicians try to take down videos of mine on YouTube because I used their song "without their permission" (i.e. without paying them), only for me to point out, "look, you put it on Jamendo with the most liberal license you can (CC-BY). It doesn't matter if you have the song on Jamendo Pro, I can still use your song commercially and there's not a god damn thing you can do about it."
    "Harumph! Well, I'm revoking your license to use it. Now pay me!"
    "Read the license again, fucker. That shit is permanent and irrevocable. I'm attributing you just like the license requires. Try to take down my video again and I'll report you to YouTube AND Jamendo."

    *grumble*... wasting my time because they didn't read. Ugh.
  • I think your argument that a superintelligent AI can be safely contained by denying it any access to the "physical world" is badly mistaken. As long as it has a communication channel of any kind (which of course it would have to have in order to be of any use), it can use that communication channel to manipulate humans.
  • edited December 2014
    The only way to deny superintelligent AI access to the physical world is to deprive them of hardware, i.e. never make one in the first place.
  • Will AI destroy us?

    No.
  • I just asked Google Brain, and it said it definitely won't intentionally destroy us puny fleshbags.
  • Andrew said:

    Will AI destroy us?

    No.

    The exact words I came here to post.

    I mean, I'm trying to build Skynet.
  • Well, I did just reread Neuromancer, which means I'm basically an expert…
  • Andrew said:

    Will AI destroy us?

    No.

    Why not?
  • Okay, so I listened to the episode and first I should protest that I don't actually know that much about machine intelligence - just enough to understand buzzwords and embarrass myself in front of people who do it professionally.

    Anyways, Nelson made a point on Twitter that I think is worth repeating here: given how differently computers think from humans, it's entirely likely that when we do create something sufficiently intelligent, its thought processes will be so foreign to us that we won't actually notice for a long time.
    (And by then it'll be too late).
  • edited December 2014

    Andrew said:

    Will AI destroy us?

    No.

    Why not?
    Why would it? Is there ever even likely to be a reason to create true intelligence, let alone a necessity? Why would we need anything more than "smart" tools that do what they're told? If it is autonomous and possesses the capacity for original thought, why not use a human? Why and how would we make something more intelligent than ourselves? Why would it possess any instinct for survival or desire for independence and autonomy at all? My prediction is that AI and robotics will create the perfect slaves, not the perfect intelligence. There's simply no reason to go further except for research.
  • Ilmarinen said:

    Why and how would we make something more intelligent than ourselves?

    If we knew the how, AI would be a much more direct and imminent threat than it is now.

    As for the why, it's rather simple. Greater intelligence means faster technological development, greater productivity, more economic benefits, etc. There is plenty of reason for big companies and governments to want software that is as intelligent as they can get it to be.
    Ilmarinen said:

    Why would it possess any instinct for survival or desire for independence and autonomy at all?

    For almost any ultimate desire you might have, your own survival and your own autonomy are instrumentally useful subgoals.
  • Ilmarinen said:

    Why and how would we make something more intelligent than ourselves?

    If we knew the how, AI would be a much more direct and imminent threat than it is now.

    As for the why, it's rather simple. Greater intelligence means faster technological development, greater productivity, more economic benefits, etc. There is plenty of reason for big companies and governments to want software that is as intelligent as they can get it to be.
    For focused activities like research and development, information processing, and manufacturing, what advantage does a truly intelligent AI have over a reasonably "smart" calculator? Considering that it is a tool implemented by humans for human use, why does it need to be truly intelligent? Where are the productivity gains and economic benefits?

    Ilmarinen said:

    Why would it possess any instinct for survival or desire for independence and autonomy at all?

    For almost any ultimate desire you might have, your own survival and your own autonomy are instrumentally useful subgoals.
    Why would a program desire anything?
  • Ilmarinen said:

    Why would a program desire anything?

    Capacity for desire. Wanting something, but not needing it. It depends on what the limitations/constraints are for that intelligent being.

    If an intelligence is self-aware, doesn't that afford them the freedom to choose on some level?

  • Ilmarinen said:

    For focused activities like research and development, information processing, and manufacturing, what advantage does a truly intelligent AI have over a reasonably "smart" calculator? Considering that it is a tool implemented by humans for human use, why does it need to be truly intelligent? Where are the productivity gains and economic benefits?

    Surely you see how useful human intelligence is when applied to various human jobs, even very basic ones? The economic benefit should be completely obvious: we can replace the requirement for expensive human intelligence with cheaper machine intelligence.

    Ilmarinen said:

    Why would it possess any instinct for survival or desire for independence and autonomy at all?

    For almost any ultimate desire you might have, your own survival and your own autonomy are instrumentally useful subgoals.
    Why would a program desire anything?

    Just translate the term "desire" into "goal". Any somewhat intelligent software will be programmed with some kind of goals, and will evaluate different ways of achieving those goals.
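
    To make the "desire = goal" translation concrete, here's a toy sketch in Python (every action name and payoff is invented for illustration): the software "prefers" things only in the sense that it scores candidate actions against its goal function and picks the top scorer. Note how letting itself be switched off loses under almost any goal, which is exactly the instrumental-subgoal point above.

    # Toy goal-directed agent: no feelings, just an argmax over a goal score.
    def goal_score(action, delivered):
        """Expected packages delivered after taking this action (made-up numbers)."""
        effects = {
            "deliver_route_A": delivered + 40,
            "deliver_route_B": delivered + 25,
            "idle": delivered,
            "allow_shutdown": 0,  # switched off = no more deliveries, ever
        }
        return effects[action]

    def choose_action(actions, delivered):
        # The entirety of the agent's "desire": pick whatever scores highest.
        return max(actions, key=lambda a: goal_score(a, delivered))

    actions = ["deliver_route_A", "deliver_route_B", "idle", "allow_shutdown"]
    print(choose_action(actions, delivered=100))  # -> "deliver_route_A"
    # Staying switched on emerges as an instrumental subgoal
    # without any survival "instinct" being programmed in.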
  • Will AI destroy us?

    I hope so!
  • Surely you see how useful human intelligence is when applied to various human jobs, even very basic ones? The economic benefit should be completely obvious: we can replace the requirement for expensive human intelligence with cheaper machine intelligence.

    Exactly.

    Right now, New York Subway trains are run by humans augmented with information systems. That's because humans possess a whole bunch of soft skills and are able to react to unforeseen situations without needing explicit training for all of these edge cases.

    An AI that could synthesize the information train operators work from, and act on it in a goal-oriented fashion, would be able to entirely automate train operations in a way that current non-human systems cannot easily do.

    So you replace the human (used only for his wide array of soft processing skills) with a machine that needs no human comforts or human salary. More trains can run all night long because the only incremental costs of doing so become energy and wear: expensive night salaries are no longer required.

    Next, since you now have a machine instead of a human, you tighten the tolerances for error, speed the machine up, and increase your capacity to run trains. The existing system, with all of the existing assumptions about what gaps the human fills, runs the same way: just faster and more consistently.

    In general, as soon as you can actually replace the human glue in a system, you find that the human itself is the primary bottleneck on increased efficiency.
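
    To put rough numbers on that capacity claim (all figures below are illustrative assumptions, not actual MTA specs): throughput is set by headway, and headway is dominated by the reaction/safety margin you leave for the slowest responder in the loop.

    # Back-of-the-envelope: trains per hour vs. reaction/safety margin.
    # Dwell, margin, and clearance times are invented for illustration.
    def trains_per_hour(dwell_s, reaction_margin_s, clearance_s=30):
        """Minimum headway = station dwell + reaction margin + clearance."""
        headway_s = dwell_s + reaction_margin_s + clearance_s
        return 3600 / headway_s

    human = trains_per_hour(dwell_s=45, reaction_margin_s=75)    # padded for human fuckups
    machine = trains_per_hour(dwell_s=45, reaction_margin_s=10)  # tightened tolerance

    print(f"human margin:   {human:.0f} trains/hour")    # ~24
    print(f"machine margin: {machine:.0f} trains/hour")  # ~42

    Tighten the one number the human was responsible for and capacity nearly doubles without touching top speed.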
  • @Rym T-Mobile, my homie! *high fives*
  • Rym said:


    Next, since you now have a machine instead of a human, you tighten the tolerances for error, speed the machine up, and increase your capacity to run trains. The existing system, with all of the existing assumptions about what gaps the human fills, runs the same way: just faster and more consistently.

    With your subway example though, you still have to take into account that there are still humans on the train, just not driving it anymore, and they're still subject to the fact that our bodies are squishy and tend to slosh around. High-speed acceleration and deceleration would be rough on commuters.
  • Ilmarinen said:

    Why would it possess any instinct for survival or desire for independence and autonomy at all?

    For almost any ultimate desire you might have, your own survival and your own autonomy are instrumentally useful subgoals.
    Why would a program desire anything?
    Just translate the term "desire" into "goal". Any somewhat intelligent software will be programmed with some kind of goals, and will evaluate different ways of achieving those goals.

    To take things down to a neurochemical level, human "desires" basically boil down to chasing that next sweet dopamine hit, and making decisions to maximize your long-term number of said dopamine hits. Current machine learning researchers are basically trying to create a general AI by hooking a similar feedback loop up to a reasonably large neural net and letting it run for a while (a rough sketch of that loop is at the end of this post).

    As to the question, AI almost certainly won't go Terminator on us, but it will very likely make post-AI human society basically unrecognizable to pre-AI humans.
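
    The promised sketch: a minimal epsilon-greedy bandit in Python (the lever names and payoffs are invented; real research systems swap the value table for a large neural net, but the act-observe-update cycle is the same shape):

    import random

    # Minimal reward-chasing loop. The "dopamine hit" is just a scalar reward;
    # learning means nudging value estimates toward the rewards actually received.
    true_payoffs = {"lever_A": 0.3, "lever_B": 0.7}  # hidden from the agent
    value = {a: 0.0 for a in true_payoffs}           # the agent's running estimates
    counts = {a: 0 for a in true_payoffs}

    for step in range(10_000):
        if random.random() < 0.1:                    # explore occasionally
            action = random.choice(list(value))
        else:                                        # otherwise exploit the best guess
            action = max(value, key=value.get)
        reward = 1.0 if random.random() < true_payoffs[action] else 0.0
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]

    print(value)  # estimates land near the true payoffs; lever_B wins out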
  • Ilmarinen said:

    For focused activities like research and development, information processing, and manufacturing, what advantage does a truly intelligent AI have over a reasonably "smart" calculator? Considering that it is a tool implemented by humans for human use, why does it need to be truly intelligent? Where are the productivity gains and economic benefits?

    Surely you see how useful human intelligence is when applied to various human jobs, even very basic ones? The economic benefit should be completely obvious: we can replace the requirement for expensive human intelligence with cheaper machine intelligence.
    I'm not saying automation and AI won't have uses, but rather that there is no reason to have a strong, general-purpose AI with anything resembling free will. Rym's example of a subway AI is very likely, but does it require free will? I don't think so; it just needs to be able to respond to specific and general criteria, and in any other case, just call a human. How does an AI handle security, material damage, theft, and robbery? It shouldn't, as justice is a human thing and innately irrational. Rather than an intelligent, self-aware, and autonomous AI, all you need is enough AI to handle customer service and route some trains.

    Ilmarinen said:

    Why would it possess any instinct for survival or desire for independence and autonomy at all?

    For almost any ultimate desire you might have, your own survival and your own autonomy are instrumentally useful subgoals.
    Why would a program desire anything?

    Just translate the term "desire" into "goal". Any somewhat intelligent software will be programmed with some kind of goals, and will evaluate different ways of achieving those goals.

    Anything we turn over to AI will already have set processes and best practices. Why would AI need to be able to step outside these?
  • Ilmarinen said:

    there is no reason to have a strong, general-purpose AI with anything resembling free will.

    Sure there is: we want to show that we can do it. (Also, learning to make a general-purpose mind will likely give us a hell of a lot of insight into how our own minds work.) It may also arise accidentally: as I mentioned above, machine cognition may be so different from human cognition that we won't notice that we've made something that would qualify as a general-purpose AI until it's far smarter than us.
    Ilmarinen said:

    I don't think so; it just needs to be able to respond to specific and general criteria, and in any other case, just call a human. How does an AI handle security, material damage, theft, and robbery? It shouldn't, as justice is a human thing and innately irrational.

    Who says? A machine that can learn to interact with humans in an unstructured way would have to learn notions of justice as a matter of course in order to interact with humans well. I don't think justice is particularly "innate": it's something we learn as children from interacting with society and from watching how other people interact with society.
  • Rym
    edited December 2014

    With your subway example though, you still have to take into account that there are still humans on the train, just not driving it anymore, and they're still subject to the fact that our bodies are squishy and tend to slosh around. High-speed acceleration and deceleration would be rough on commuters.

    Yeah, obviously.

    That's what the humans are doing now. If we replace the humans, the expectation is that the AIs have the same goal of getting the train moving along without harming the passengers.

    The delays and low efficiencies aren't because the trains accelerate or travel too slowly; they're because the distance between trains has to be very very large to account for human fuckups (mostly due to fatigue or lack of rapid information processing capabilities).

    It's deeply obvious that these basic constraints are assumed...
  • Also, some of you are really inappropriately assuming that human intelligence is special or in any way anything other than an information processing feedback loop.

    We're just deterministic or random machines. Complex ones, but still just machines.
  • edited December 2014
    Here ya go. Some researchers mapped a roundworm's (C. elegans) entire neural network and then translated that into a LEGO robot.

    http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html

    Emergent behavior of a complex system of biochemical interactions etc.
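
    The core trick is smaller than you'd think. Here's a rough sketch of the idea in Python with a made-up three-neuron "connectome" (the real worm has 302 neurons and thousands of connections): sensor activations get pushed through fixed connection weights, neurons fire past a threshold, and whatever accumulates on the motor neurons drives the wheels.

    # Toy integrate-and-fire pass over a hand-made "connectome".
    # Neuron names and weights are invented; only the mechanism matches the article.
    connectome = {
        "nose_sensor": {"inter_1": 2.0},
        "inter_1": {"motor_left": 1.5, "motor_right": -1.0},
    }
    THRESHOLD = 1.0

    def step(activations):
        """One propagation step: accumulate weighted input, fire past threshold."""
        incoming = {}
        for src, level in activations.items():
            if level < THRESHOLD:
                continue  # below threshold: this neuron stays silent
            for dst, weight in connectome.get(src, {}).items():
                incoming[dst] = incoming.get(dst, 0.0) + level * weight
        return incoming

    state = step({"nose_sensor": 1.0})  # sonar "touch" on the nose
    state = step(state)                 # ripples through to the motors
    print(state)                        # asymmetric motor drive -> the robot turns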
  • Ilmarinen said:

    I don't think so; it just needs to be able to respond to specific and general criteria, and in any other case, just call a human. How does an AI handle security, material damage, theft, and robbery? It shouldn't, as justice is a human thing and innately irrational.

    Who says? A machine that can learn to interact with humans in an unstructured way would have to learn notions of justice as a matter of course in order to interact with humans well. I don't think justice is particularly "innate": it's something we learn as children from interacting with society and from watching how other people interact with society.
    I didn't say justice is innate; I said it's innately irrational. If a machine handed out judgement upon humans, that would remove the humanity from justice, and therefore the whole point of the process.
    Rym said:

    Also, some of you are really inappropriately assuming that human intelligence is special or in any way anything other than an information processing feedback loop.

    We're just deterministic or random machines. Complex ones, but still just machines.

    So what? Human intelligence /is/ special, and creating machine intelligence would just be further proof of that point. I don't think autonomous and independent machine intelligence will ever take off even if we create it though, simply because it's more pragmatic to have a smart tool than a slave.
  • Ilmarinen said:

    I didn't say justice is innate; I said it's innately irrational. If a machine handed out judgement upon humans, that would remove the humanity from justice, and therefore the whole point of the process.

    I don't think computers are rational in the way that you mean. They're programmed by irrational programmers, for one, and any program complex enough to learn new behaviors (a key part of a "general-purpose AI") would certainly involve a large amount of nondeterminism, making for a lot of irrational-seeming behavior.
    Ilmarinen said:

    So what? Human intelligence /is/ special, and creating machine intelligence would just be further proof of that point.

    Human intelligence is only special until we create something that can do everything the human brain can, something that I'm fairly confident is possible.
  • Ilmarinen said:

    So what? Human intelligence /is/ special, and creating machine intelligence would just be further proof of that point. I don't think autonomous and independent machine intelligence will ever take off even if we create it

    Human intelligence is limited by our biological evolution, a process which takes longer than the timeframes anyone alive today will interact with or care about.

    Human-like intelligence created on a different substrate can self-modify and evolve on an entirely different timescale. Biological brains, which we don't understand to anywhere near the degree we understand, say, computer hardware, will never be able to keep up. It's just a matter of time.

    Human brains aren't going to change in any appreciable way in the lifetime of anyone alive today.

    Ilmarinen said:

    If a machine handed out judgement upon humans, that would remove the humanity from justice, and therefore the whole point of the process.
    So what? I'd like to remove humans from the process.

  • Rym said:

    With your subway example though, you still have to take into account that there are still humans on the train, just not driving it anymore, and they're still subject to the fact that our bodies are squishy and tend to slosh around. High-speed acceleration and deceleration would be rough on commuters.

    Yeah, obviously.

    That's what the humans are doing now. If we replace the humans, the expectation is that the AIs have the same goal of getting the train moving along without harming the passengers.

    The delays and low efficiencies aren't because the trains accelerate or travel too slowly; they're because the distance between trains has to be very very large to account for human fuckups (mostly due to fatigue or lack of rapid information processing capabilities).

    It's deeply obvious that these basic constraints are assumed...
    Then we still get to the issue of what happens when something fucks up for mechanical reasons (which, besides "an asshole on the tracks," I assume is one of the more common causes of delays). As long as things run smoothly, sure, we could have arrivals at stations every five minutes. But if something malfunctions, even granting that the AIs can react to emergent information, we still get delays as they slow down or even stop their trains until the situation resolves. Much more efficient when it runs smoothly, yes, but that makes the kinks even more glaring and jarring when they happen.
  • Scott's concept of buying local SIM cards to put in phones is utterly misguided. It's a pain for calls and text messaging, but possible. For data connections, it's not even remotely worth it in terms of hassle AND price. Just suck up the roaming charges, because it just works, and for small tasks it isn't too expensive. Or it's free, like for Rym.

    When it's not free, I just reply to a text message and get a day or week or two weeks of data for 2 euro or 10 euro, or whatever deal is on offer.

    Also, while you admitted it would be so, your babby's first discussion on the topic of AI was very shallow. I'm disappointed in you.