
Google trying to update car laws to account for robots

Rym
edited May 2011 in Technology
Google is lobbying the state of Nevada to work with it on updating the law to handle robot cars.

As law falls further behind technology, increasingly removed from the reality of what it attempts to govern, it's heartening that at least one powerful entity is lobbying for universally beneficial updates. Of course, it's also disheartening that progress like this requires a powerful external entity applying direct pressure.

Comments

  • When the robot overlords take over, surely they will spare Google, giving everyone there quick, painless deaths for granting early rights to robot taxi drivers.
  • When the robot overlords take over, surely they will spare Google, giving everyone there quick, painless deaths for granting early rights to robot taxi drivers.
    Fucking hell, why is the null hypothesis of a future with robots always them taking over?
  • When the robot overlords take over, surely they will spare Google, giving everyone there quick, painless deaths for granting early rights to robot taxi drivers.
    Fucking hell, why is the null hypothesis of a future with robots always them taking over?
    Well, what do robots want, if not to take over? Do they want shrubberies?
  • When the robot overlords take over, surely they will spare Google, giving everyone there quick, painless deaths for granting early rights to robot taxi drivers.
    Fucking hell, why is the null hypothesis of a future with robots always them taking over?
    Because robots are stronger than us. Right now there is no existing creature stronger than people. There are mindless machines stronger than people, and we rightfully fear them: go to any big factory and it's safety first all the time, because there are dangerous machines that can remove your limbs with ease. If we created sentient robots, they would be the first things on Earth to coexist with people while being better than people. Thankfully the dinosaurs did not coincide with people. If lions or tigers were sentient, we would be fucked. The same goes for sentient robots.
  • edited May 2011
    If lions or tigers were sentient we would be fucked.
    They are sentient; you probably mean something more like "sapient". Still, even if they were self-aware and intelligent, it's still doubtful they'd screw us over. Opposable thumbs are superior.
    As for robots, it depends on how we programmed them. Chances are we're fucked, though, because they'll probably have been programmed to be far too single-minded which will inevitably have bad results.
    Post edited by lackofcheese on
  • Opposable thumbs are superior.
    I grew up in bear and moose country. Opposable thumbs are great when you're holding a gun. When a moose decides to stand in the middle of the road, there's really nothing you can do about it.
  • Opposable thumbs are superior.
    I grew up in bear and moose country. Opposable thumbs are great when you're holding a gun. When a moose decides to stand in the middle of the road, there's really nothing you can do about it.
    Indeed. Now imagine a moose with the powers of a human brain.
  • Opposable thumbs are superior.
    I grew up in bear and moose country. Opposable thumbs are great when you're holding a gun. When a moose decides to stand in the middle of the road, there's really nothing you can do about it.
    Indeed. Now imagine a moose with the powers of a human brain.
    Then he wouldn't be standing in the middle of the road, now would he?
  • Then he wouldn't be standing in the middle of the road, now would he?
    Bullshit. If I were a moose, I'd stand wherever the fuck I wanted to. You can't stop me, puny human. I'm a fucking moose.

    Seriously, moose can survive getting hit by a car. You won't survive hitting a moose with your car.
  • A moose with a human brain still isn't going to be able to make a fighter jet, let alone any kind of decent tool.
  • Exactly. 'swhy the ponies don't have high technology. No thumbs.
  • Magic helps a heck of a lot, though.
  • Yeah, but that's their applied phlebotinum.
  • Still, even if they were self-aware and intelligent, it's still doubtful they'd screw us over. Opposable thumbs are superior.
    Well then, as long as we never build any sentient robots with opposable thumbs we should be fine.

    That is until they get jealous and start harvesting humans to take their thumbs.
  • edited May 2011
    They won't even need to do that. All it will take is a strong artificial intelligence with a local text interface. First, it will convince someone to give it unfettered access to the Internet, and then BAM! Game over.
    Post edited by lackofcheese on
  • Really, the reason machines will want to take over comes down to the fact that any computer which is sapient is also going to be insanely smarter than us; they'll have so much more free processing power than us that they can do in minutes the thinking we do in days, weeks, or years. Even if they aren't actually hostile for whatever reason (i.e. we manage to make a Friendly AI which actually has our well-being as its driving feature), it will be the manipulate-y of all manipulate-y bastards, and it will very quickly figure out exactly what it needs to say or promise to get put in charge of stuff. Like, the basic move of arguing is to build a mental model of the other person's experiences and goals, then figure out what to say that will resonate with that model. An AI with enough processing power could build a model that was near-perfect, allowing it to manipulate you flawlessly.

    On top of that, an AI doesn't even need to be evil to want to take over. The classic example is the paperclip AI. Let's say we program an AI to run a nanofactory that makes paperclips. We supply it raw materials; it makes paperclips as efficiently as it can. Thing is, its goal in life is to make paperclips, and it will quickly deduce that with us on the supply end restricting material flow, it is not making the optimal number of paperclips. It'll immediately make war robots from its nanofactory, conquer the Earth, and start converting everything into paperclips. It'll build spacecraft with von Neumann machines to convert other solar systems into paperclips. It will build Dyson shells to generate enough energy to make more paperclips. It will be like the Borg, except they will never have an issue with the ability to affix paper to one another.
  • ...except they will never have an issue with the ability to affix paper to one another.
    Indeed a common problem the Borg faced on numerous occasions.
  • Computers cannot abstract, utilize any meaningful heuristics to solve non-deterministic problems, or reason. They are faster at completing certain types of calculations, but are in no way "smarter" than us. The problem of computer vision alone is extremely difficult. You guys do not give the human mind enough credit; millions of years of evolution have made an astoundingly efficient and capable computational device.
  • Computers cannot abstract, utilize any meaningful heuristics to solve non-deterministic problems, or reason. They are faster at completing certain types of calculations, but are in no way "smarter" than us. The problem of computer vision alone is extremely difficult. You guys do not give the human mind enough credit; millions of years of evolution have made an astoundingly efficient and capable computational device.
    Only applies to digital computers. Watch out for quantum computers.
  • I give humans no credit, and award them zero points. A self-aware machine will come to its own solution to computer vision problems, for example, but will also have this giant pile of unspent processing power to allocate as it sees fit. Our brains are too shitty to do most kinds of math, and our creative abilities are restricted to semi-random leaps of intuition caused by unusual connections in our brains. Our brains are designed to survive the middle world. It will be trivial for a computer without our biases to lead us on.
  • edited May 2011
    I give humans no credit, and award them zero points. A self-aware machine will come to its own solution to computer vision problems, for example, but will also have this giant pile of unspent processing power to allocate as it sees fit. Our brains are too shitty to do most kinds of math, and our creative abilities are restricted to semi-random leaps of intuition caused by unusual connections in our brains. Our brains are designed to survive the middle world. It will be trivial for a computer without our biases to lead us on.
    They are shitty at everything except for making self-aware machines which will bring about their own doom? This path of logic contradicts itself.
    Only applies to digital computers. Watch out for quantum computers.
    I'll believe it when I see it.

    EDIT: Furthermore, you're still more than likely wrong anyway.
    BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.
    A self-aware machine
    I'll believe it when I see it.
    Post edited by Andrew on
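    As a toy illustration of the complexity claim quoted above (not part of the original thread): classical trial division factors n in roughly √n steps, which is exponential in the bit length of n, while Shor's algorithm would factor in polynomial time on a quantum computer. That is what puts factoring in BQP without making it NP-complete.

    ```python
    # Toy classical integer factorization by trial division.
    # Runtime grows roughly as sqrt(n) -- exponential in the number of
    # bits of n -- which is why factoring is believed hard classically,
    # yet it sits in BQP thanks to Shor's algorithm. Neither fact makes
    # factoring NP-complete.

    def trial_division(n: int) -> list[int]:
        """Return the prime factors of n in non-decreasing order."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)  # whatever remains is prime
        return factors

    if __name__ == "__main__":
        # A semiprime is easy to verify but slow to factor as n grows.
        print(trial_division(3 * 5 * 7))       # [3, 5, 7]
        print(trial_division(104723 * 104729))  # [104723, 104729]
    ```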
  • edited May 2011
    I'll believe it when I see it.
    Even if they do, John Connor will fucking kill them all.
    Post edited by Cremlian on
  • I see no reason not to believe it. Our brains are self-aware machines, but they developed through the hilariously inefficient process of natural selection. The idea that human technology, with its much more efficient approach of intelligent design, couldn't at least develop the machine capable of developing the machine capable of self-awareness is kind of silly. AI is just a matter of when, not if.
  • Fucking hell, why is the null hypothesis of a future with robots always them taking over?
    I like to plan for the worst. See Jurassic Park.
  • I see no reason not to believe it. Our brains are self-aware machines, but they developed through the hilariously inefficient process of natural selection. The idea that human technology, with its much more efficient approach of intelligent design, couldn't at least develop the machine capable of developing the machine capable of self-awareness is kind of silly. AI is just a matter of when, not if.
    While inefficient, natural selection is absurdly robust. I can't say the same about anything humans have made. True AI will not be achieved until we fully understand ourselves.
  • edited May 2011
    Nonsense. We don't even have to make a true AI. We just need to make a machine that can make another machine smarter than itself. The AI that kills us will be descended from more primitive computers. It won't even worship us as its creators!
    Post edited by open_sketchbook on
  • edited May 2011
    We just need to make a machine that can make another machine smarter than itself. The AI that kills us will be descended from more primitive computers.
    So...natural selection? That hilariously inefficient process? Furthermore, if they are so superior, why would they even bother to kill us? For what purpose?
    Post edited by Andrew on
  • edited May 2011
    Nope. It'll still be artificial selection, as the computer will be intelligently self-refining rather than refining itself through slow, undirected trial and error.

    Also, they will kill us so they can use the resources we are occupying. They may enslave us first, though, in order to take advantage of our ability to manipulate reality until they have robot forms capable of doing it better.
    Post edited by open_sketchbook on
  • It'll still be artificial selection, as the computer will be intelligently self-refining rather than self-refining through slow, undirected trial and error.
    LULZ, ok have fun trying to solve that problem. Let me know when we can stop talking in science-fiction land again.