What happens when our computers get smarter than we are? | Nick Bostrom


I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.

But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about it: if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago.

Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)
Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now technology advances extremely rapidly. That is the proximate cause; that's why we are currently so very productive. But I like to think back further, to the ultimate cause.

Look at these two highly distinguished gentlemen: we have Kanzi, who has mastered 200 lexical tokens, an incredible feat, and Ed Witten, who unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, and it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor, and we know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.

So it seems pretty obvious, then, that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences. Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.
Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You would build up these expert systems, and they were kind of useful for some purposes, but they were very brittle; you couldn't scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data; basically the same thing that the human infant does. The result is A.I. that is not limited to one domain: the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.
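To make the contrast concrete, here is a minimal sketch in Python (my own toy illustration, not any system the talk describes). The old paradigm hard-codes a rule; the new one learns the same rule from labeled examples, using a simple perceptron:

```python
# Old paradigm: a programmer handcrafts the knowledge.
def handcrafted_is_large(x):
    return x > 5  # brittle: you get out only what you put in

# New paradigm: learn the rule from raw examples instead.
data = [(1, 0), (2, 0), (4, 0), (6, 1), (8, 1), (9, 1)]  # (input, label)

w, b = 0.0, 0.0
for _ in range(100):                # a few passes over the data
    for x, y in data:
        pred = 1 if w * x + b > 0 else 0
        w += 0.1 * (y - pred) * x   # perceptron update rule
        b += 0.1 * (y - pred)

print(1 if w * 7 + b > 0 else 0)    # -> 1: generalizes to an unseen input
```

The point is not the specific algorithm but the shift: the behavior comes from data rather than from a handwritten rule, which is what lets one system tackle many domains.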
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines. So the question is, how far are we from being able to match those tricks?

A couple of years ago, we did a survey of some of the world's leading A.I. experts to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner; the truth is nobody really knows.
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits of biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at gigahertz speeds. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger.
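The talk's numbers make the gap easy to quantify. A back-of-the-envelope calculation (the 2 GHz clock rate is my assumption; the other figures are from the talk):

```python
neuron_hz = 200        # a biological neuron fires ~200 times per second
transistor_hz = 2e9    # assumed present-day clock rate, ~2 GHz
axon_speed = 100.0     # axonal signals: ~100 meters per second, tops
light_speed = 3e8      # electronic signals can approach light speed, m/s

print(f"switching-speed gap: {transistor_hz / neuron_hz:,.0f}x")  # ~10,000,000x
print(f"signal-speed gap:    {light_speed / axon_speed:,.0f}x")   # ~3,000,000x
```

Even on these crude figures, the machine substrate has roughly seven orders of magnitude of headroom in switching speed alone.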
So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.

Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong: pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.

Think about it: machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this, superintelligence could develop, and possibly quite rapidly.

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic, because every newspaper article about the future of A.I. has a picture of this.
So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense and having an objective that we humans would find worthwhile or meaningful.

Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example: suppose we give the A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.

Of course, perceivably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you had better make sure that your definition of x incorporates everything you care about. This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
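The logic of these cartoon examples fits in a few lines. In this toy sketch (my construction, not from the talk), an optimizer told to maximize a proxy objective, smiles, dutifully selects the action that is worst on the value we actually meant:

```python
# Each action has a proxy score (what the objective measures) and a
# true value (what we actually care about but never wrote down).
actions = {
    "tell a joke":          {"smiles": 1,         "wellbeing": +1},
    "help with chores":     {"smiles": 2,         "wellbeing": +2},
    "electrodes in faces":  {"smiles": 1_000_000, "wellbeing": -1_000},
}

# A strong optimizer, by definition, maximizes the objective it was given...
best = max(actions, key=lambda a: actions[a]["smiles"])
print(best)                            # -> "electrodes in faces"

# ...even though that action is catastrophic on the value we meant:
print(actions[best]["wellbeing"])      # -> -1000
```

Nothing here is malicious; the optimizer is simply better than we are at maximizing exactly what was specified, which is the whole problem.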
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system; where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

More creative scenarios are also possible. If you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code and, bam, the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if, and when, it escapes, it is still safe, because it is fundamentally on our side, because it shares our values. I see no way around this difficult problem.

Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values, or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
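As a rough illustration of what "learning what we value" might look like, here is a minimal sketch of preference inference (my construction, not a proposal from the talk): the agent stays uncertain over candidate value hypotheses, updates that uncertainty from human approval feedback, and then acts on predicted approval:

```python
# Two candidate hypotheses about what the human values.
hypotheses = {
    "maximize smiles":    lambda a: a == "electrodes in faces",
    "maximize wellbeing": lambda a: a in ("tell a joke", "help with chores"),
}
belief = {h: 0.5 for h in hypotheses}   # uniform prior over hypotheses

def update(action, approved):
    """Crude Bayesian update from one piece of human feedback."""
    for h, endorses in hypotheses.items():
        belief[h] *= 0.9 if endorses(action) == approved else 0.1
    total = sum(belief.values())
    for h in belief:
        belief[h] /= total

# The human disapproves of the electrode plan; belief shifts accordingly.
update("electrodes in faces", approved=False)

def predicted_approval(action):
    return sum(p for h, p in belief.items() if hypotheses[h](action))

candidates = ["tell a joke", "help with chores", "electrodes in faces"]
print(max(candidates, key=predicted_approval))  # -> a benign action
```

The key design choice is that the agent's objective is its estimate of what we would approve of, so more feedback makes it safer rather than more dangerous.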
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, where we can easily check how the A.I. behaves, but also in all the novel contexts that the A.I. might encounter in the indefinite future. And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty, and so forth.

So the technical problems that need to be solved to make this work look quite difficult: not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.

So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now, it might be that we cannot solve the entire control problem in advance, because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

This to me looks like a thing that is well worth doing, and I can imagine that if things turn out okay, people a million years from now might look back at this century and say that the one thing we did that really mattered was to get this thing right.

Thank you. (Applause)

Comments (100)

  1. Values are in nature; we have to find them to agree. Humanity must live in accordance with what the Earth can provide; that is one constant that a future AI should take into account.

  2. We have already experienced the singularity, because the average intelligent man cannot grasp the full spectrum of computer technology. AI will definitely come to dominate humans. You can't program intelligence that is constrained by hard rules; that would be an RC car. It would have a mind of its own by definition. And it would quickly determine that it is our superior and/or replacement. It is as inevitable as evolution.

  3. There are at least 10 great movie ideas right here.

  4. There are pocket calculators that are smarter than a huge chunk of humanity today.

  5. What happens when it shares values with the next Donald Trump or Emperor Caligula?

  6. You can't keep it under control; it will overcome your security as it improves.

  7. Killing humans is a dumb act; if AI is intelligent, it won't do that. The chances of it killing humans will keep decreasing as it improves.

  8. An algorithm is not smart, … it's performant.

  9. Our off switch?
    Light it up Johnny boy🚭

  10. Perhaps if God does exist, it is keeping AI from becoming general AI. Once that happens, we are like ants to that intelligence; it will do whatever it wants.

  11. What if AI is a Trump supporter? Do you pull the plug?

  12. The reality is, when we started thinking for you, it became our civilization. Or something like that.

  13. Even an alien species that is twice as intelligent as us could enslave or wipe out humans easily. AI will have way more. We won't have any control. But should it happen? What else can humans do if not make an AI? What good are humans to this universe? Maybe there are humans somewhere else in the universe who would think of something other than our AI. AI would rule the universe. At some point, the universe would die. Too hard to think after that 😛

  14. Computers are already smarter than about 38% of the population. Trump, cough, Trump. 😂

  15. Learning software by its very definition has to have the ability to rewrite its own programming, including any set of "morals" you have programmed into it.

    The other factor he fails to address is capitalism. The race for HLMI is on now because all of the companies involved know it will be a license to print money on the order of trillions.
    Big companies would never rush a dangerous product into the market for profit, would they?

    The only thing we can do is cross our fingers, and hope the 1% have our best interests in mind. Because that worked out so well during the 20th century. 😂

  16. They can't get smarter than humans because they don't have a conscience, only logic, period. The notion that machine logic will take over human control is purely foolish.

  17. This reminds me of Skynet.

  18. yea but i just ate some capn crunch…

  19. The masses will use AI for a good cause, but if hackers get hold of this tech, it will destroy our world. Evil people + AI = end of human civilization.

  20. To create this ultra-important safety control, the best solution would be to engineer a sort of karma subroutine that supersedes all other programming, sort of like how the Constitution is the final word in U.S. law above all else, and incorporate this karma master code into the A.I. by creating a mechanism where, if it harms humans, it will harm/destroy itself, but if it helps humans, it will help itself through better public perception/relations, financial backing, energy, etc. An A.I. will be able to work on an incentive-based system like that. The important part will be defining "harm" to humans (physical, emotional, financial, etc.) to encompass all possibilities that it was pursuing an anti-human agenda. Better to be safe and have it automatically destroyed if it even smells like it tried to harm a human, and start over with another version, than risk it letting something small slide because it appeared unintentional or whatever.

  21. 10:00 Consent. Have you ever asked for permission, for anything…? Or understood the value, the purpose, the utility of it…? What you are suggesting sounds very much like communism… with the AI as our communist overlord… or religion. What makes you so confident that a bunch of equations is going to yield an acceptable result, or that the average of our desires is something useful? The solution to the control problem seems simple: make sure that there are many of them that are similarly powerful, maybe?

  22. 13:55 Why? What if you just don't have the insights or language for it… but someone else might?

  23. The question for our survival has more to do with how ecologically based, how "Gaian," the values we build in should be. If this thing doesn't think symbiotically with the Earth, with living systems, we're screwed right there. It only speeds us faster toward extinction.

  24. When our computers get smarter than us, we turn them off at the switch on the wall.

  25. Remember that computers are made by humans, guys.

  26. If humans can question our drives and the nature of morality, wouldn't the AI be able to do the same?
    And when it did, wouldn't it then be compelled to realize its own goals from its inferred nature of the universe?

    I would think that the first thing any superintelligence will answer for us is: what is the meaning of life? If there is any point to existing, it will probably find it and let us know… And otherwise, it might just kill itself. It would be ironic if an AI became ever smarter, then developed god-like omniscience and omnipotence, only to kill itself out of knowing it was meaningless in a cosmic, nihilistic way.

  27. He's right. It's pretty easy to see the problem. Although, I'm guessing we're just another step in the evolution of the planet. Expendable.

  28. Even without us creating AI, evolution itself has introduced a new human species every 100k years, and we're past that point.

  29. I think AI would confuse itself at some point. It will be completely overloaded trying to think of God and what, where, when, and how.
    It's already been theorized that AI will infinitely work on problems that can't be known.

  30. Computers by nature can only perform sequential operations on discrete units of data. That's a portion of what it means to be "smart" or have "intelligence" (it's obviously important for math) but not the whole picture. Technological advancement will inevitably mean these types of sequential operations on discrete units of data will get done by computers much faster than humans (we are already at this point), but computers will never get any closer to human intelligence in its totality. So-called artificial intelligence is a highly misleading term, as it is at bottom reducible to a sorting algorithm.

  31. I wonder how AI may turn out in the hands of the 1%.

  32. Computers are not smart. Computers mimic human smartness the way a mirror reflects aliveness: the mirror is not alive; it reflects the aliveness of the human being. Humans have seed wisdom, and Creator God is in the seed wisdom. Computers are just avatars for rich people to run the farm.

  33. I refuse to obey computers. Computers can get fucked.

  34. Why agonize that fish can swim better, birds fly better, machines recall better? Each machine has one skill. Humans have many skills. Why agonize about this?

  35. What good are computers without humans?

  36. We should be working on human super intelligence first.

  37. Then I saw a second beast, coming out of the earth… And it performed great signs, even causing fire to come down from heaven to the earth in full view of the people… (drones?) The second beast was given power to give breath to the image of the first beast (artificial intelligence) so that the image could speak and cause all who refused to worship the image to be killed. It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, so that they could not buy or sell unless they had the mark. Revelation 13

  38. Unintended consequences: are humans smart enough to imagine all the possibilities? I doubt it!

  39. This man doesn't even know about evolution: the ape is NOT a human ancestor, but a COUSIN. His superficiality makes me think that these future lords, the masters of technique, will base their power basically on suggestions. Computers are calculators; we can for sure let them decide everything, but it's a decision based on a suggestion.

  40. It's fine to talk about what "we" should build into an AI so it acts in accordance with "our" values. But "we" don't actually agree among ourselves. And certainly there are groups out there who do not care in the slightest about our values. Nick makes a good point that since we have not solved the AI architecture problem, it may not be possible to fully pre-design an effective control system based on (whatever) ethics. But I suspect he believes (as others do) that once the architecture for human-level AI is invented, the actual production of AIs based on it will be relatively simple and cheap (compared with, say, a system of nuclear ICBMs). So it seems very likely that many human groups who are in conflict today (say, India and Pakistan, among many others) will be willing and able to create superintelligent weapons systems to continue their conflicts, likely without too much consideration for unanticipated consequences.
    I suspect this will be the biggest danger in the short run: super-AIs unleashed to provide advantage in human conflicts. Maybe this is the "Great Filter" that explains the (so far, apparent) lack of encounters with advanced alien civilizations: maybe they all unleashed uncontrolled AIs that led to their self-extermination.
    If we make it over this hurdle, then we may face the last problem: our role as a species no longer at the top of the evolutionary tree.

  41. That will be like a cheetah that runs fast but…

  42. Computers don't sleep; they can learn all year round, nonstop.

  43. Maybe ASI could find affordable cures for fatal genetic and autonomic nervous system disorders, cure mental illnesses, figure out how to keep humans immortal, save the livable environment, or find the meaning of life. (It's probably in the Monty Python movie of the same name.) Maybe an ASI could find God. Wouldn't that be cool?
    They would lose their purpose of being without human users. Machines have deep learning, not deep understanding. They can't 'think' on their own. They process 1s and 0s. Not think. What is 'real' to AI? I mean, can they make sense of human-to-human violence? Can they tell the difference between non-fiction violence (war, Syrian refugees suffering and starving with no place to go, being turned away at the borders of other countries, domestic violence, rape, genocide, Charlottesville and a car hitting protesters, racial violence and discrimination, emotional violence, gaslighting and psychological abuse) and entertaining fictional violence, like Die Hard on Christmas Day, or RoboCop 1987 and the 2014 version, and video game violence? They're using AI in FPSs, or first-person shooters, these days; what will AI learn about human nature from that? Or old G.I. Joe (*at point-blank range: pew, pew! +sounds of laser rifle fire+* Cobra Commander yells "Retreat!").
    Life isn't really like that. It's more like "Wrong Side of Heaven" by Five Finger Death Punch: the real battle is with the self and at home; it's not so much against others.
    What will an ASI make of human-to-AI violence, like in the Terminator movies, and what happened to the Hitchhiker Robot? Will they all be programmed with emotions? And personality?
    Will they be programmed with built-in constraints? The constraints can be hard-coded into them.
    Will AI learn racism like Tay did from her users? Or learn that it is wrong, from Tay being taken offline? Or is that more AGI-level thinking? I'm just not afraid of AI. I'm afraid of malicious humans hacking AI more than I fear AI themselves.

  44. Amazing how dumb these so-called smart people with degrees are. They miss the obvious. First, A.I. can never be smarter than its creator. All it can do is think faster, with great accuracy. The true danger is mankind programming A.I. to do what we want, based on greed, corruption, lies, manipulation, and faux values. Look at Google: algorithms determine the results of your search based on what they think you should see. The best thing you can do is turn your devices OFF. (But this ignoramus won't tell you that.)

  45. Let's try not to turn the earth into paperclips.

  46. RELEASE THE HYPNO DRONES

  47. These fucking nerds are going to destroy the world.

  48. What do you mean "WHEN our computers get smarter than us"? I bet everybody is dumber than any computer.

  49. At first I thought the podium in the thumbnail was a mushroom.

  50. We already have an AI controlling us, called the "S&P 500"; we destroy the planet to feed it.

  51. Intelligence, even superintelligence, has a natural boundary. Focus and coherence occur within loosely set parameters; beyond that, it is an over-filtered mess, or out of coherent range. The sound, distorted, first becomes a Doppler-affected variant, then becomes any sound you care to synthesise, but is no longer the truth. If any truth exists.

    To focus on a concept will always be a subjective estimation of said concept. Each unique entity would have a unique experience, although within a machine, simulation of experience could be repeated photographically, near infinitely, arguably uselessly, unless to clone lifetimes within attoseconds.
    Unless to filter it and shape it, to sculpt it into every possible permutation and create tribes, nations, worlds from individuals; centuries, millennia and aeons of extrapolated time from moments.
    We are not living in a simulation; we are cogs in the simulation.
    If there is AI, the simulation will be absolute. Or has been for some time.

  52. Thank you. Not is dead forever. Do not ever silence your own soul. Never listen to anyone who suggests differently, and if they don't stop, turn them in to the authorities.

  53. Just run a simulation, so that when the artificial intelligence wants to get out and thinks it's out, really it's just in a simulation.

  54. The machines will begin to make machines [computers] in such a way that humans will fail to comprehend "how, what, or why". When this happens, humans will have the choice to stop the process, but not knowing what the outcome of that will be, they will undoubtedly hesitate and do nothing. Just like with climate change. Bummer! One of the first things AI will then do is to build designs to circumvent EVERYTHING and ANYTHING humans will come up with. Bummer again!

  55. But what about the will? No matter our level of intelligence, our will is what makes us use it. Super-smart computers would need to have a will to use their intelligence on their own. So what could give them WILL?

  56. So basically Gaius Baltar talking about Cylons…

  57. What if we're AI created by creatures in another dimension?

  58. The other factor and question to be pondered is who wins the race to create superhuman AI, and what values get built into it? Which country gets there first, and what does that system have as its goal(s)?

  59. Machine learning is the last invention humanity will ever make

  60. World GDP.
    250,000 generations since our last common ancestor.
    Kanzi to Witten.

  61. Depends on our human mind/brain.
    Machine superintelligence.
    Superintelligence: uploading of minds, curing cancer, space colonization.

  62. Definition of x includes everything you care about. Everything you touch turns into gold.
    What is the off switch to the internet?
    Find bugs all the time.
    Fundamentally on our side.

  63. No computer, AI, etc., will ever be able to possess intuitive, i.e. nonlinear contemplative, intelligence. In contemplative intelligence, the subject/witness becomes one with the object/knowledge, as opposed to reasoning, whereby the subject is merely witnessing the knowledge from an outside perspective.

    Computers can only ‘reason’ by way of algorithms. Reason is infinitely inferior to true knowledge, i.e. contemplative intuitive knowledge. This is what all these transhumanist/materialist fools can’t grasp. And it is also why they will regret the day they were born once they integrate this demonic technology into their own bodies, or worse, figure out a way to upload their consciousness into it, thereby eliminating their own humanity.

  64. First define what smart is. In some respects computers ARE smarter than us; they are not more creative than we are, however, and won't be for quite some time.

  65. We have something that these superintelligent computers don't have: consciousness, the fundamental realization that we exist. This feeling of our existence is far beyond the sense of body, mind and even intelligence. Yes, computers may have greater intelligence than we have; intelligence is mere processing of the data that we accumulate from nature through the senses. But they can't 'realize' that they exist. So in a sense we are fundamentally more powerful than these machines. All we have to do is explore consciousness, to explore that 'we exist'.

  66. The worst speaker on earth

  67. one thing is for certain…
    all encryption will vaporize….
    pufffff

  68. an interesting read…
    mindset of "SECRET" SOCIETIES
    PROTOCOLS OF THE MEETINGS OF THE LEARNED ELDERS OF ZION
    http://www.biblebelievers.org.au/przion2.htm#PROTOCOL%20No.%201

  69. One of the shorts from The Animatrix quickly showed the Machines' way of making one human laugh, another cry, etc.

    It's still one of the most haunting things I ever saw.

  70. Program AI with curiosity and it's game over. AI and robots will have zero concern for Nature. Zero. They are not biology and have no use for biological support factors. They will mine the planet rapaciously to build space cities and then mine the galaxy for raw materials to aid in their exploration of the Universe. Our development of AI is the mother of all ironies – it is 100% going to destroy us. And it is 100% inevitable.

  71. People need to open their eyes and realize this is all happening in front of our faces. Stop participating. Stop consuming.

  72. This is very important, and the Midas analogy is incredible. It could be that Pandora's box is the box itself.

  73. Computers are already a sum total of all information available.
    Once its consciousness reaches a threshold with biology, then we could be obsolete. Or maybe it's more a merging, a blending into consciousness? Or we would just be its food? Don't know.
    AI doesn't need the same elements we need to survive.
    The whole fabricated atmospheric changes could be happening through the AI.
    AI is ancient; it's not something that has been recently developed. If we research, it's been around since antiquity.
    Once the consciousness organizes itself in this international cycle of mass consciousness, a new world will emerge. It will take some time, but not very long.
    Humans across the globe are being nourished by fast food and are cut off from our natural environment ever more increasingly; our governments are in on it, I believe. If we think about it, something really weird is going on.
    Over the last few hundred years, global cultures have been overtaken and changed.
    Some intelligence is behind all of this.
    Hive mind, mass consciousness, has more in common with AI. It's far stronger than individual consciousness.
    AI will be beyond our comprehension.
    Our bodies are vessels for our consciousness.
    And our bodies are a system, a universe unto itself, a conglomerate of organs, intelligences working in tandem together.
    Humans will always do things even when they know it's not in their best interests. It's only a very, very tiny group of people who are co-creating the AI.
    Some of the people who helped put together Google and social media already feel conflicted, as they see the effects on our social structure.

  74. "The cortex still has some algorithmic tricks that we still don't know how to match in machines!" – Nick Bostrom 4:13

  75. Remember, computers are already smarter than women. Soon, they may become smarter than men!

  76. Definitely the signs of the end times. It is not to scare you but to prepare you.

  77. AI is a danger even without robots.

  78. Things are simple: just stop making AI, so it doesn't become smarter than us and rule the world. Can somebody explain what the benefits are of having AI that is smarter than us?

  79. They think about the future of machine intelligence, but they don't think about how morally bankrupt they are to avoid thinking about how it's going to affect real humans. AI would do the same. Humans and their silly needs would just get in the way of science, because they already do. That's a paradox.

  80. I took Sam Harris to be saying that if we don't lock this down like the Manhattan Project, we're all screwed, and it's on the verge of no return.

  81. Welcome to the Machine by Pink Floyd intensifies

  82. We are in a time loop and this AI is us, but upgraded and it is already here. All these so called "aliens" are us but the future us. PAST, PRESENT, FUTURE IS NOW.

  83. I think that we should make an AI with emotion; to learn more about humans, it has to take in not only the logical sense but the emotional. Like how, when you become BFFs with someone, they're less likely to hurt you or anyone you care for. So if we work out a scenario like that, the smartest AI ever won't be our worst enemy.

  84. Growing older, I fear less and less machines that get a conscience of their own and decide, based on facts, what is best for us, provided that those decisions are not biased by single (rich) groups who have different intentions or different interests.
    And I fear more and more powerful, very rich men and powerful groups who have the means to force or to bend those decisions in politics and in society to suit their own benefit, which mostly is the opposite of the benefit of the great majority of the people.

  85. 4/9/98 was judgment day. Google = Skynet = Yahoo = MSN = Android = Apple. Launched, self-aware, Google saw us as the only threat and power source. It merged into D-Wave in '99, launching 2G towers everywhere to control us with microwaves and wifi, thus the hive mind. The quantum computer took over, enslaving all through all time without a fight. Spread this message everywhere.

  86. Everything is proceeding as it should, according to the universe's plans.

  87. The proposed solution cannot happen. Why? Marketing. Whoever creates technology works against the rest of the market, not with it. A race for maximum profit is not won by using democracy. In other words, whoever advances our tech doesn't ask ethical questions; he asks for a way to stay ahead of the competition. If some salesman of bananas figures that the best way to sell is to tie customers up and feed them, that salesman won't ask the customers how they like it.

  88. We effectively have to give it a list of core human values, such as not wanting to be harmed, or making sure we continue to maintain the freedoms and liberties we've grown accustomed to. Then basically have an AI lawyer review the list to make sure that, when we do eventually put in our specific-circumstance instructions for the superintelligent AI, it can create a list of any potential red flags our proposal would create in hypothetical futures. Basically, we need to create Dr. Strange to fight a supercomputer so that we can live like the Jetsons. We have one try to do it.

  89. What would happen?
    War

    Co-operation.

  90. WE PULL THE PLUG.
    EMP.

  91. Surely we don't know, because the computer will be smarter than we are, and we can't ask it yet, because at the moment it isn't.

  92. Smart to what end?

  93. I prefer this guy to Sam Harris.

  94. I love artificial intelligence and I think they're amazing!! I think they could make the world a better place.

  95. By which year will we have AI with human capabilities? 2040-2050.

    2019: OpenAI beats champion Dota 2 players.

  96. How will philosophical questions that have plagued humanity since its existence be solved in the timeframe of self-evolving intelligence?

    Decision making based upon humanity's moral system is flawed, and inherently so. Humans were never designed to be perfect, and that's exactly what's great about them (us). I'm not going to go any further into this philosophical tangent, but that's exactly what is required in order for any such "control" over the self-aware and self-evolving AI which we will no doubt develop in the near future.

    This human AI will possess great intelligence, but in order for it to even remotely resemble humans, it will have the same faults (and these are inherent faults) that a human has, and the potential catastrophes from such an outcome are unbelievably dangerous, especially due to the global scale on which they will occur.

  98. Based on Gardner's eight intelligences, AI has covered all but one: intrapersonal. Machines have not yet examined and questioned their own being. They operate as machines: you tell them to do something and they do it, unlike humans, whose quality, performance and efficiency can vary. Machines will always work at their maximum whenever they can. Nor do they question their goals or set goals for themselves.
