Tonight on GeekNights, we consider whether Artificial Intelligence (in the future tech magic sense) will destroy humanity or make everything awesome. Will AIs replace us, or will they treat us the way we treat ants? In the news, T-Mobile continues to play catch-up but will announce Uncarrier 8.0 tomorrow, HIV is evolving to be less infectious and less deadly, and morons are angry at Flickr because of their own moronitude about what the Creative Commons license actually means.
Comments
I've had musicians try to take down videos of mine on YouTube because I used their song "without their permission" (i.e. without paying them), only for me to point out: "Look, you put it on Jamendo with the most liberal license you can (CC-BY). It doesn't matter if you have the song on Jamendo Pro; I can still use your song commercially and there's not a god damn thing you can do about it."
"Harumph! Well, I'm revoking your license to use it. Now pay me!"
"Read the license again, fucker. That shit is permanent and irrevocable. I'm attributing you just like the license requires. Try to take down my video again and I'll report you to YouTube AND Jamendo."
*grumble*... wasting my time because they didn't read. Ugh.
No.
I mean, I'm trying to build SkyNet.
Anyways, Nelson made a point on Twitter that I think is worth repeating here: given how differently computers think from humans, it's entirely likely that when we do create something sufficiently intelligent, its thought processes will be so foreign to us that we won't actually notice for a long time.
(And by then it'll be too late).
As for the why, it's rather simple. Greater intelligence means faster technological development, greater productivity, more economic benefit, and so on, so big companies and governments have every reason to want software that is as intelligent as they can get it to be. And for almost any ultimate goal an agent might have, its own survival and its own autonomy are instrumentally useful subgoals.
If an intelligence is self-aware, doesn't that afford it the freedom to choose on some level?
I hope so!
Right now, New York Subway trains are run by humans augmented with information systems. That's because humans possess a whole bunch of soft skills and are able to react to unforeseen situations without needing explicit training for all of these edge cases.
An AI that could synthesize the information train operators synthesize and act in a goal-oriented fashion would be able to entirely automate train operations in a way that current non-human systems cannot easily do.
So you replace the human (used only for their wide array of soft processing skills) with a machine that needs no human comforts and no human salary. More trains can run all night long because the only incremental costs of doing so become energy and wear: expensive night salaries are no longer required.
Next, since you now have a machine instead of a human, you tighten the tolerances for error, speed the machine up, and increase your capacity to run trains. The existing system, with all of the existing assumptions about what gaps the human fills, runs the same way: just faster and more consistently.
In general, as soon as you can actually replace the human glue in a system, you find that the human itself is the primary bottleneck on increased efficiency.
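To make that concrete, here's a rough back-of-the-envelope sketch of the capacity argument in Python. Every number in it (dwell times, braking times, reaction margins) is an invented placeholder, not real MTA data; the point is only that capacity scales with how much safety margin you have to budget for the operator.

```python
# Rough model: line capacity is limited by minimum headway, and the headway
# is dominated by the safety margin you leave for operator fuckups.
# All numbers here are invented for illustration, not real transit figures.

def trains_per_hour(dwell_s: float, braking_s: float, reaction_margin_s: float) -> float:
    """Capacity of one track, given the minimum safe headway in seconds."""
    min_headway_s = dwell_s + braking_s + reaction_margin_s
    return 3600.0 / min_headway_s

# Human operator: big margin for fatigue and slow information processing.
human = trains_per_hour(dwell_s=30, braking_s=45, reaction_margin_s=75)

# Automated operator: same trains, same track, much tighter margin.
machine = trains_per_hour(dwell_s=30, braking_s=45, reaction_margin_s=15)

print(f"human-driven: {human:.0f} trains/hour")   # ~24
print(f"automated:    {machine:.0f} trains/hour")  # ~40
```

Nothing about the track or the rolling stock changed in that sketch; only the margin budgeted for the operator did.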
As to the question, AI almost certainly won't go Terminator on us, but it will very likely make post-AI human society basically unrecognizable to pre-AI humans.
That's what the humans are doing now. If we replace the humans, the expectation is that the AIs have the same goal of getting the train moving along without harming the passengers.
The delays and low efficiencies aren't because the trains accelerate or travel too slowly; they're because the distance between trains has to be very very large to account for human fuckups (mostly due to fatigue or lack of rapid information processing capabilities).
It's deeply obvious that these basic constraints are assumed...
We're just deterministic or random machines. Complex ones, but still just machines.
http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html
Emergent behavior of a complex system of biochemical interactions etc.
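In the spirit of that article, here's a toy sketch of a connectome-driven controller: fixed wiring, no learned behavior, and steering "emerges" anyway. The five neurons and every weight below are invented for illustration; the real OpenWorm work maps the actual ~300-neuron C. elegans connectome onto the robot.

```python
# Toy connectome-driven controller, loosely in the spirit of the linked
# article. This fake 5-neuron wiring is purely illustrative.

NEURONS = ["sensor_L", "sensor_R", "inter", "motor_L", "motor_R"]

# (pre, post) -> synaptic weight. All values are made up.
CONNECTOME = {
    ("sensor_L", "inter"): 1.0,
    ("sensor_R", "inter"): 1.0,
    ("sensor_L", "motor_R"): 0.8,   # obstacle on the left -> turn right
    ("sensor_R", "motor_L"): 0.8,   # obstacle on the right -> turn left
    ("inter", "motor_L"): 0.3,
    ("inter", "motor_R"): 0.3,
}

def step(activations: dict, senses: dict) -> dict:
    """One tick: propagate activation along the fixed wiring, leak the rest."""
    nxt = {n: 0.5 * activations[n] for n in NEURONS}  # leaky decay
    for n, v in senses.items():
        nxt[n] += v
    for (pre, post), w in CONNECTOME.items():
        nxt[post] += w * activations[pre]
    return nxt

acts = {n: 0.0 for n in NEURONS}
for t in range(5):
    acts = step(acts, {"sensor_L": 1.0, "sensor_R": 0.0})  # wall on the left
    print(t, f"motor_L={acts['motor_L']:.2f}", f"motor_R={acts['motor_R']:.2f}")
# motor_R pulls ahead of motor_L, so the "body" steers away from the wall --
# behavior nobody programmed as a rule, only as wiring.
```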
Human-like intelligence created on a different substrate can self-modify and evolve on an entirely different timescale. Biological brains, which we don't understand well enough to engineer the way we engineer, say, computer hardware, will never be able to keep up. It's just a matter of time.
Human brains aren't going to change in any appreciable way in the lifetime of anyone alive today.
So what? I'd like to remove humans from the process.
When it's not free, I just reply to a text message and get a day, a week, or two weeks of data for 2 euro or 10 euro, or whatever deal is on offer.
Also, while you admitted it would be so, your babby's-first discussion of AI was very shallow. I'm disappointed in you.