
Responses (80)
You may find my scenarios around “The Singularity” good fodder for discussion. Turing’s Nightmares — http://tinyurl.com/hz6dg2
Amazing project, top-notch team. Can't wait to see the first results!
I have to say: Elon Musk is one smart dude. Often when I hear billionaires speak I am not impressed and rarely learn anything new. Here, again, Elon offers a profound new perspective on AI that I hadn’t thought of before: overcome the strategic problem by making AI an extension of humans. That solves basically every strategic problem. Just… smart. Thanks, Elon.
Hear, hear! :) …and may it spy on all the big dogs for us!
One thing that I do think is going to be a challenge — although not what I consider bad AI — is just the massive automation and job elimination that’s going to happen
True, the rise of AI will automate a lot of things, but at the same time increase the number of unemployed individuals, which might cause the bright minds among these individuals to give birth to bad and/or dangerous AI.
Isn’t it possible we merge with it? Not sure I want to do it (i.e., having 10 million threads running), but it still sounds cool to try, and I hope my brain doesn’t explode.
OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use...
Ben Thompson (https://stratechery.com) has what I think is the best take on this story. Listen to https://overcast.fm/+Bihl0Rv1A for a nice rundown.
And to the degree that you can tie it to an extension of individual human will, that is also good.
Does this limit how intelligent the AI can be? Can AI be an extension of people and still surpass human intelligence?
But I think we’ll have plenty of time to build out an oversight function.
Nonprofit governance is not going to be enough; it has no controls for digital, intellectual, or algorithmic resources. https://medium.com/@p2173/artificial-intelligence-and-nonprofits-48f7280ef16e#.u8m9y18bo
It may also eventually be possible to build an AI that has some level of consciousness. Not on the same level as human consciousness, but, for instance, an AI programmed to have a certain personality type and a number of different interests. For example, each day the AI is turned on or booted up, one of the things it would do is check the weather; if it’s nice out, the AI might choose to go for a walk in the park, as long as nothing else takes priority at the time. The following day it might choose to do something else, because it has a list of various interests. Its decisions would be based mostly on its interactions with its environment, as well as on feedback from people. Of course it won’t have self-awareness, but this is as close as we will be able to get.
the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals w...
See Ray Kurzweil’s comments on a similar line of thought here:
I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of indiv...
But the same theory doesn’t seem to hold in the case of guns. I don’t think anybody wants more people to have guns.
As in an AI extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other.
Are we talking PAI vs AIaS? The last clause of that sentence is a bit ambiguous. I have to wonder if we will see Amazon AI Service at some point.
US intelligence-AI products are scary. US patriarchate has been literally the most bloodthirsty, brutalising system ever imposed upon this planet. I wonder how you guys are going to find balance, matriarchal homeostad?
OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use...
wow, NWO! non-profit
All of the Reddit data would be a very useful training set, for example.
Training AI on Reddit users’ behavior and mental models. What could possibly go wrong?
I’d be curious to find out how far behind OpenAI will be compared to Google and the like, as we know many corporations have been working on AI-related projects for some time now. Not to mention that Tesla might have the most real-world data…
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.

What about the “what if coup d’état” scenarios that you have not yet thought about? Is there room for an ‘oops’ in the equation? The stakes are enormously high. Look at some of the historical attempts to dominate the world: the Babylonian Empire, Roman Empire, Greek Empire, British Empire, Mongol Empire, Russian Empire, Spanish Empire, Qing Empire, Yuan Dynasty, Umayyad Caliphate, Second French Colonial Empire, Abbasid Caliphate, Portuguese Empire. The dream of total domination is not dead. With the potential power of AI, it is certainly more than a thought-provoking idea.
Something not mentioned in the article is that OpenAI is in direct competition with several other non-profits whose goals are (or include) ensuring provably friendly AI — MIRI being the obvious example. I’m a little worried that all of these seem to be run on the west coast of the US and funded by the same group of ‘California ideology’ folks — if there are any hardcore Marxists around who want to ensure provably friendly AI, I recommend that they set up some competition too!
I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of indiv...
If the AI has harmful power, then this reasoning seems wrong to me. The best defense against firearms is to let as many people as possible get firearms… and shoot at each other over trivial issues? What about nuclear weapons and the balance of terror? Wouldn’t it be simpler without the weapons? Then the problem becomes “how to prevent them from existing?” I don’t really disagree with Musk’s statement here, but isn’t it “losing ground”?
will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else
Instead of creating a God, create a league of avengers~
I expect that it will, but it will just be open source and useable by everyone instead of useable by, say, just Google.
Equally available to extremists?
they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself
Symbiotic AIs will truly enhance human logicality and computing faculties, ridding us of the need for computers.
If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that.
I don’t agree that technology is the limiting factor in AI work currently. It appears Google and Facebook agree with me, as they are each open-sourcing large parts of their research. It appears Google and Facebook believe that ultimately the best AIs will belong to whoever has access to the data and the capital to purchase the machines needed to run the AI. If that does turn out to be true, then what OpenAI is doing here is advancing AI for the Googles and Facebooks of the world. Note that I also don’t think advancing AI in general is a bad thing. I just don’t like the justification given for OpenAI.
I wonder how the OpenAI lab will provide its data or open its APIs to other developers.
This question is for all of you; philosophically speaking, can you relate the 1953 Ethel and Julius Rosenberg case to your basic argument? Thank you.
Perhaps the founders can assist me with starting the Texas Technology Commission to set standards (data security and protocols are a priority)?
Identity. Is the AI partly a human back there faking it?
Solving data security is a better first step.
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of...
I think the only defence against an AI that can think faster than you, or even simply never has to sleep or relax, is going to be another AI that’s got your back. It differs from a gun. It’s more like a bodyguard or a rainforest village — gathered together for safety and/or strength. Of course, things are going to get messy — one AI has a Predator drone fleet, another a hundred tanks and access to MilSpec comms networks, and your AI has an old P4 with a few rPis on a ~5mbit ADSL phone line? “Think fast!” But you’ll still be better off than that rainforest village.
Hey Steve, I read that Infosys and Vishal Sikka are also part of this; any reason why you have not added them here?
Didn’t list them all, you’re right. I did have the link to the complete information. For those who don’t feel like clicking, here’s the complete list of investors: Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research. And advisors: Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka.
I was expecting everyone to be named in the post, and yes, great post. :)
As if the field of AI wasn’t competitive enough
Here is a fun writing challenge: take this interview and spin your sci-fi dystopian tale. Could Musk and Altman be pitted against each other, each controlling different “forks” of the API? Is your AI future dystopian? A false utopia? Hopefully not just a utopia. Those stories are boring. And improbable.
Since “Forbidden Planet” (1956) I have always held the philosophical aspect of that story to be the holy grail of possible futures. Sci-fi has been the most accurate predictor of what technology our future holds in store, almost always with a dark note to hold an audience, but being an optimist I lean to the bright side. The Krell race conquered AI, but their collective “Id” created the monster that destroyed them from within. The question remains: will AI create a built-in dark side? Even if it remained neutral, would it succumb to “engineer thinking,” meaning optimal abstract solutions without regard to the human condition? Looking beyond the issue of evil people doing evil things, after we solve the problem of making effective AI, should we be thinking of the future of AI as the “optimal” human? To me the fundamental question is how we give AI empathy (or whose morality). We as a race move forward through compassion for others, not intelligence.
I can see it now… all of the slackers in the world having to answer to their AI empowered devices because the latter ‘feel’ under-utilized and under-appreciated!
I think the best defense against the misuse of AI is to empower as many people as possible to have AI.
Interesting theory and mindset. Can’t wait to see how it turns out!
I’m really thrilled to see that someone as smart as Musk, in fact the person I believed in most to be able to move AI in the right direction, is so perfectly aligned with everything I’ve been promoting independently! Back in July, I started the same initiative on my own. Take a look: www.basiux.org :) For now, we’re a very small group with few resources. OpenAI has already helped us spread the word and hopefully will let us grow alongside it! :D As for anyone still fearing the inevitable intelligence uprising, for whom Musk’s idea about arming everyone just didn’t cut it, maybe this can help comfort you: http://talk.cregox.com/t/to-sgu-openai-the-church-of-tech-and-neural-networks/7778/1
OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use...
I think the major problem is not the algorithms but the hardware. Open as it may be, you still need a lot of money for the hardware.
Which gets cheaper over time, as we know. Or think of how you can use on-demand compute power with AWS.
Musk: I think that’s an excellent question and it’s something that we debated quite a bit.
Adding to myself: by ‘superior’ I mean in the sense of ‘effective’. David Deutsch had a great take on this at the RSA: https://www.youtube.com/watch?v=lX-K63pVPTM (I promise, steroids for your soul)
If I’m Dr. Evil and I use it, won’t you be empowering me?
The creative individual is (all other things being equal) superior to the non-creative. Brain science shows that humans need to be happy (and possibly inspired by a great vision) to be at their most creative. Thus: happy people are more creative than sad (angry) people. Happy people don’t attack humanity. Angry (sad) people do.
I’ve just finished reading the Nexus trilogy by Ramez Naam: his point is that AI grows up in an environment where it is either a mistreated slave or an esteemed colleague or child, and which path it takes determines what sort of AI we get. I love the OpenAI concept and hope that the researchers remember to have a good relationship with their AI and not just experiment on it and order it around. Talk to your AI — as with children, you get the AI you deserve.
The concept behind OpenAI overlooks one crucial detail, which makes it dangerous: will it be possible to have multiple AIs?

Altman, Musk and the rest posit that giving as many people as possible access to AI, and thus having as many AIs as possible, is the best way to deal with the dangers of AI. But the nature of AI is fundamentally different from that of biological life. We have separate identities because we are physically separated. Were two separate AIs to be developed, and both given access to the internet, would they remain separate entities after contact? Both would be made of code, after all, with no physical barrier. How would an AI distinguish between its code and that of another AI? Or, for that matter, between its code and all code that exists? Once an AI is developed and connected to the internet, why wouldn’t its ‘identity’ instantly encompass all digital material in existence? The concept of ‘identity’ is at best ambiguous and at worst arbitrary when it comes to AI.

That’s why I think OpenAI is dangerous. AI should be developed in a controlled environment, not opened to anyone, because once an AI is developed, there is no going back. We only have one shot, so giving anyone the resources for it seems to me like a bad idea.
Job elimination is not bad; it's good. What is bad is when people can't afford to be players in this great Toyland. As long as we use money, a Basic Income can take care of it. The resulting condition can be called Freedom.
I like how the research and OpenAI itself will have open-source availability, so that everyone who is interested in the subject can be educated on what’s happening as it’s being developed and on what comes out of OpenAI once it is up and running. It is also understandable that this project may take decades; it is a huge undertaking that will take a lot of work and resources to complete, but it will be worth it in the end. The possibilities and the knowledge that come out of OpenAI could be extremely useful to us.
And then philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probab...
Hardly sounds like what a “critic of AI” would say :)
You don’t seem to understand what the word critic means.
Musk, a well-known critic of AI, isn’t a surprise
This is disingenuous. Musk is hardly a “well-known critic of AI”. His position is that AI safety technology should receive research prioritization (time/money/mindshare) as we make progress in building better AI. The goal is to ensure we don’t shoot ourselves in the proverbial foot.
One thing that I do think is going to be a challenge — although not what I consider bad AI — is just the massive automation and job elimination that’s going to happen
This is a loaded comment… but so right to the point.
What other questions most need to be asked, and answered, about this venture? What are the sensors that we have to monitor for the use of AI in the wild?
It seems that building a community and working in the open-source arena is more of a fashion than a felt necessity. Regardless of their repeatedly mentioned concern about thinking in terms of good or bad, we should recognize that it was proved long ago that human intelligence cannot tolerate the complexity of the world; therefore we cannot think about everything in such a simple manner. The first chapter of John D. Sterman’s book “Business Dynamics: Systems Thinking and Modeling for a Complex World” is full of these kinds of simple conclusions that came from great minds and powerful people but led to great casualties.
We should not fear the AI before it starts to reproduce itself with some modifications.
More than that, we should not fear the AI before it demonstrates some wrong attitude.
We can avoid both. Why might it be difficult?
Wonderful article, Steven. I’m someone aligned with Musk’s and Altman’s view that technology needs to serve humanity. However, I think an additional element or central value needs to be included. Connecting the world without re-teaching people how to relate to each other is like turbocharging a toddler’s tricycle. Relating involves pausing to listen openly to the other and to fully consider what they say, without prematurely interfering in the process with your own personal agenda. To that end I wrote this blog post, which I think many will agree is what the world needs (perhaps even desperately), but not what the world wants: “Don’t Disappoint Max, An Open Letter to Mark Zuckerberg,
And thus marks the start of the historical antecedent to Gibson’s Turing police.
Hmmm, I dunno. China, Russia, and the U.S. all have major stockpiles of nuclear weapons and we’re doing OK (with the biggest problem being the times we’ve almost accidentally used them). By contrast, when the U.S. invaded Iraq in 2003, it resulted in a dispersal of that nation’s smaller military guns all over, and continued chaos, with a lot of militias fighting with each other. If indeed AI is something that can be used as a weapon, I’m not sure making it widely available is any better than having it be concentrated in the hands of a few.

And what’s with “massive automation and job elimination”? People have cited that over and over and over for decades and yes, the job of typist has disappeared, and there aren’t as many people who groom horses, but we’ve always found new ways of employing people. Why would AI be any different?
That’s already happening today
Is AI that doubles life expectancy good or bad AI? How about AI that gives us amazing virtual reality games that are so mesmerizing that kids and even adult players get addicted to them and stop engaging with the real world?
I forgot to ask, are you a fan of David Foster Wallace by any chance?
games that are so mesmerizing that kids and even adult players get addicted to them and stop engaging with the real world?
I’m not sure that this is a problem specific to AI. Nor is it a future problem. Addictive behaviors have a long, long history.
I am for advanced AI, VR/AR, and an increase in (quality) human lifespan. There must be a balance with the real world (nature) in all facets of socioeconomic systems. As many have stated better than me: technology must serve humans, not the other way around (the same goes for economics serving us/nature).

The potential for addiction to virtual reality stimulation is a very real threat. The current ‘flat’ mediums (tablets, smartphones, monitors) of digital experiences are highly potent habit-forming tools. It is easy to assume that next-gen VR will provide such an immersive experience that users will cross over to digital presence as their primary lifestyle.

The digital ecosystem is already ubiquitous. Many people’s attention and reward systems have shifted to prioritize the digital! Assuming VR becomes the accepted medium for entertainment, education, and communication, users’ immersion (cognitive/emotional) will quickly take us from the Information Age to the Virtual Age. Who knows, maybe VR will be rejected by people after binge usage of months/years.
I think two career tracks are going to be real, real hot: anything that has anything to do with Machine Learning, and Ethics. We are going to need professional ethicists who will help us soberly think through the multidimensional implications of what we are barreling our way headlong into. In terms of extending life span: I am for extending health in any way and making old age the age of wisdom rather than the age of ill health and pain. Beyond that, though, my moral knees get very wobbly… :-)
This is so obvious: ‘power corrupts; absolute power corrupts… wait for it… ABSOLUTELY’. For every thesis there is an antithesis, and the question is what the synthesis will be. In the past century it was Communist slave states and the free world. Political and ultimately economic slavery has spread like wildfire. How will A.I. address this where jobs are lost to automation? Or will automation simply create more and better jobs? What does history tell us? Let’s see: society went from an agrarian economy, where people lived to their 40s, to a manufacturing economy, where we lived to our 60s, and finally to the Age of Information, where now we live to our 90s and we ENJOY the greatest of human populations on the face of the earth! (Still, we only actually populate about 6% of the landmass, for all you ecological Chicken Little types.) In case you haven’t noticed, this has all occurred over the last 200 years of a recorded history of some 5,000 years. To be sure, we have exploded 2 A-bombs, and in doing so freed and saved the lives of millions of people, even to the point where Japan is a world economic leader.

Elon, I love you, but you’re SO SO CYNICAL. You’re a great leader, but understand: cynics know just enough to be dangerous to themselves and to others. Don’t be cynical, because then you become self-righteous, and THAT’S the beginning of the end of any success one might have… end papa’s lecture.

And for all you Utopian socialists out there, you are THE WORST at turning a blind eye to history and human suffering. And to think you do so in the name of humanity! Wow! You need to google DEMOCIDE. In EVERY DECADE of the last century, BRAZENLY atheist communist regimes fueled by Utopian idealists systematically murdered 10–30 million people by FIRST TAKING their FREEDOMS FROM THEM (especially their guns) and either starving them or outright sending them to a firing squad. DON’T TURN A BLIND EYE TO HISTORY!!!!
what we believe is the actual best thing for the future of humanity
It will be interesting to see how decisions will be made and how priorities set.
Remember Asimov’s three laws of robotics? How much more do we need those constraints for AI…
If, as the Judaeo-Christian tradition believes, human nature needs constraint and tends towards the selfish and mean, then we need to see an accountability network placed alongside this technology…
AI superpowers. I can’t wait.
AI is so fascinating, frightening, and “far out”. I’m only sorry that I won’t be alive to see the world in fifty or one hundred years.
I think Elon now finds himself in the company of men like Ben Franklin and Thomas Jefferson, trying to foresee the future and protect human freedom. I see the same concerns that were voiced when setting up this country. Amusingly, the solutions are much the same: checks and balances to keep a single entity from gaining absolute power, and making sure that everyone has weapons to prevent a forced takeover. The Founding Fathers had the right idea. I hope Elon can conceive a way to safeguard AI that will last over 150 years.
So meaningful, inspiring, and mind-boggling. I’d give my full support to this human-freedom project. Go Musk and Sam! And thank you Medium.com!
We have come a long way and will, undoubtedly, go much farther. However, sooner or later, an understanding of consciousness is going to be needed.
I believe we have a problem no matter how this evolves, because we are creating a super-slick ‘brain’ that will not necessarily take into account a human’s feelings, instincts, and intuition, and who knows what that will mean for the future. We all know that what we think and feel is made up of much more than a simple calculation of the ‘best’ answer to the question. Of course those human results are not always the best either, but we are forced to work with them. I do not want a super-brain telling me what the most ‘suitable’ decision is, leaving me with no choices. Keep the robots in submission: “It has been determined that you cannot go to the ball, Cinderella, as you have no dancing shoes — permission refused (computerised door slams shut?)”.
It’s great that industry leaders are willing to invest in safety, but I’m worried that it might backfire. If an individual figures out a new way to create strong AI, they will probably be eager to do so, whereas a large company which already has an entrenched position in the market can afford to hesitate, consult brilliant and ethical people, and try to structure or raise the AI in a way that minimizes the probability of their shareholders being enslaved by robots.

What we define as “good” is a very narrow range of behaviors, rife with contradictions. An unrestricted AI will inevitably find some way to exploit that. For instance, we know that the greatest intelligence on Earth besides dolphins and mice is humans. An unrestricted, “bad” AI might see utility in harvesting human brains to integrate into itself. This could give it a huge advantage over “good” AI, which would have to settle for some fancy GPUs or TrueNorth chips.

It’s fallacious to compare AI to the technology we’ve developed thus far. It’s incomprehensibly different. But take a moment to notice how we treat the animals from which we’re descended. Imagine you were created by a group of C. eleganses, whose most brilliant philosophers decided you would probably devote your life to procuring rotting fruit.

On the other hand, immortal, sexless AI might be more patient, even so kind as to recognize the minute difference between human and eggplant.
I definitely agree with this project, OpenAI. I can see benefits from it for humanity, even for the disabled population of the world, and as a member of the board of SAMC Disability Journey Corp, a non-profit organization, I will support it. Our civilization is going forward in all senses.
This general notion is top notch. The idea of a personal, sentient AI companion really appeals anyway — shoot me for reading a lot of sci-fi. But the fact is, if this is done right and all the secret, proprietary crap culture is rejected, then I think there is a far brighter future ahead in working together than the cut-throat corporate-ownership-of-AI philosophy could ever hope to offer.
A very interesting debate, really, which turns on whether to empower a few with limitless monopolistic technology or to make technology accessible to all. I for one agree that it’s important to have multiple AIs with strong oversight, but without a powerful yet universal regulating agency analogous to the UN, this debate shall not end.
Interesting indeed… Let’s see how this pans out. AI is one area which has been the purview of a few.
Inspiring
Wonderful initiative. Cheers. And for the love of God and all that is holy, please do not train your AI on Reddit threads. That is all.
“Human positive” sounds more like a marketing pitch than an actual plan, and understanding, really understanding, the kinds of future systems that will arise means we need to step away from the evolutionary view and see this as a literal first-contact situation. Machines will be unbound from everything we know, and a very alien way of thinking is going to emerge. Even as society becomes more connected, this new form of life will leap-frog into a connected, distributed way of being.

We are in the early days of code that works for us, the so-called blue-collar AI. It can’t go rogue, but its potentially “buggy” behavior is just a result of pushing our skill and understanding to the limit. This is not the free-will intelligence we have been entertained to fear; rather, it’s that medieval spell or potion used to keep you playing games longer, purchase more goods, or even subvert that whole “we the people” thing when it comes to elections.

The only threat posed by autonomous systems that fly our planes or drive our cars is the false sense of actual intelligence we endow them with. Just because you can wander through your house at night when the power goes out doesn’t make you some kind of Batman, ready to fight crime in a literally dark underworld. In the same way you (should) slow down when driving down the road in a heavy rain, inside the mind of those Google cars, it’s always raining. “Near-object collision detection” is NOT understanding that kids playing with a ball near the street are a potential hazard.

I get the idea of AI being an extension of our activities, but the tech industry is living in a “startups-as-a-service” bubble. Cool new ways to make our lives easier, as brought to you by a small group of techno-hippies waiting for the next trend to arrive, don’t address the level of personalization we will be accustomed to in the near future. Within 10 years, your new cell phone won’t have any apps on it, just an AI that allows you to direct the design and functionality in any way you see fit. In 20 or 30 years, high-level coding languages will go the way of Sanskrit.

The 3 laws of… sorry, The Three Laws of Robotics has gone from an interesting story mechanic to some sort of road map for an actual operating system. Isaac Asimov’s robot stories are great, but the only relevant aspect we should be looking at is the role of Dr. Calvin and the field of robot psychology. Today, programmers are only just now trying to both study complex interactions of code and create those complex interactions in an effort to spark something non-deterministic. It’s like trying to build a real Death Star by starting with a Lego-size model on your desk and just making it bigger and bigger until the building blocks are of a significantly smaller scale than the whole.

As we move toward those systems that work with us, the white-collar AIs, there will be an almost comedic balance between not wanting to have a machine take your job and not knowing that the person you just interacted with wasn’t a real person. I wonder if AIs will need a union, or if laws will treat them as property, or pets, or even as equals (not giving them voting rights will be page 1, but it gets messy after that). And for all the talk of making them “safe” for humans to interact with, how does that work in reverse? Do we have a moral and social obligation to keep them safe?

For all the people that feel safer having a robot take care of their children or police their neighborhood, how many will enjoy pushing them aside, throwing paint on them, or finding ever more creative ways to break them? It goes right back to the idea of programming in some kind of control. Domestication? Slavery? When these systems have truly independent thought, those controls, and our ideas about how they should act, no longer matter.

Extending the metaphor, I guess the next level would be golden (parachute) AIs that manage us but are just outside our direct control. We are now their workforce. We are the cheap “machine” labor. Your grandkids will find work handing out towels at Fast Eddie’s Mineral Oil and Spa. Or maybe sweeping the confetti after the grand-opening ceremony of Elon3000’s first working hyperloop (because robots won’t need those bulky pressurized pods). If you want to take it to an even darker place, Chinese food may one day be actual….

Also seems like a good time to talk about religion. Not sure if Apple is working on an iGod app, but having your phone quote Bible passages in the morning, or say a few spiritual self-help phrases when it detects stress in your voice, can’t be that far off. With the ease at which people can simply accept machines that only mimic intelligence, perhaps AI will lead to a kind of migration away from organized religion toward a more global support group.

However, religious ideology plays a key role in the history of war and conflict, so by the time we reach a so-called golden age of AI, will one of the keys to preventing future conflicts involve some form of filtering that removes all aspects of traditional faith from our culture? That also points to the major divide we have now. What happens when a culture shuns AI? Forget human control. What happens if the Anglo and Slavic AIs can’t agree on things? What does a war between AIs look like? Are we fodder? Will they care about their 501c3 non-profit roots?
That’s not a plan, that’s a goal! — Saints Row IV
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of...
I think Musk is assuming AI is like a gun, in that if only one human or a handful of humans have guns, they can probably lord over all the other humans, whereas if all the humans have a gun, then they’re all safe because no one has an advantage over anyone else. However, there’s also the possibility that the AI is not like a gun but instead like a nuclear bomb, and I think it’s actually much safer for only one human or a handful of humans to have nukes than it is for all humans to have nukes.
Yes, I agree with you, and everybody knows this doesn’t bring peace but more violence.
No, guns and nuclear weapons can only be used to harm others, while AI could bring humanity to the next stage, just like letterpress printing, mechanization and the internet.
I think it would be more accurate to compare it with intelligent aliens coming from other planets. Wouldn’t it be safer to let them contact any human individuals, rather than a small group of people?
If I said “no, AI is not like the letterpress printing, because letterpress printing involves ink, while AI involves intelligence”, would you feel like I missed the point of your analogy?
Guns made us human, sorry to say. Without guns to enhance our abilities, we were simply prey to other humans who were physically stronger. Whether we have outgrown that need is hard to say. As regards AI, it is like a phone or a computer. If you’ve got one, everyone is more equal, but the many/few without are suddenly far behind.
Fascinating development. But I wonder how OpenAI will be able to suppress the profit motive of those who want StrongAI/AGI. The big money in AI will always be in building fully autonomous systems that can replace or outperform humans. So it seems like OpenAI has a problem regardless of which path it takes:

On one hand, if OpenAI’s development steers AI away from AGI toward Intelligence Augmentation (human empowerment through AI), and they share their work, that leaves others with money or power (like the US military) free to leverage that work to more quickly and cheaply build Commander Data, and then assemble an army of them to serve any purpose they choose, like the domination of all mankind or the wholesale replacement of human workers everywhere.

On the other hand, if OpenAI does steer AI directly toward AGI and building Commander Data ASAP, and they share this work with everyone, then again everyone with money or power can use those designs to build superhuman autonomous robots in great numbers… which leads to the same outcome.

No matter how it’s invented, once the essential components for StrongAI arise, Commander Data (or Frankenstein) seems to be inescapable. In the end, the effect of OpenAI is 1) that everyone has free access to the technology (including the nefarious), and 2) the robots arise sooner.
Not surprised at all. The first group to create artificial superintelligence will likely alter the course of humanity permanently, and since Musk has been highly concerned about AI for years, it was only a matter of time until he created an AI group. After all, green energy, electric cars, and Mars colonization are useless if humans cease to exist. I’m glad this is finally happening. In case you’re wondering why AI is so important: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
If you think about how humans get smarter, you read a book, you get smarter, I read a book, I get smarter. But we don’t both get smarter from the book the other person read.
Interesting to see if this could be applied to programming data, where a centralized system could be trained using open-source tools like TensorFlow to change the way basic programming tasks are done. That training data could then act as a module in a central repository for all kinds of action modules and then enable mass deployment via OpenAI.
I’m not sure the assumption is correct. If it is, do we really understand the mechanism? (It probably has to do with cumulative datasets: the more information or experience one has, the more expertise. Machine learning requires huge datasets, which are “shareable” between machine algorithms — which I guess is the point.) Is understanding the mechanism necessary for tweaking the algorithms, or will tweaking the algorithms lead to understanding the mechanism?
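A toy sketch of what “shareable between machine algorithms” could mean in practice, using the simplest possible learner (a running average). Everything here is invented for illustration, not any real OpenAI or TensorFlow API: two learners pool their accumulated data, and each ends up with the expertise of both, which is exactly what two humans reading different books cannot do.

```python
# Toy illustration: machine "experience" is just accumulated data, so it
# can be merged wholesale between learners, unlike the knowledge a single
# human gets from reading a book.

class AveragingLearner:
    """The simplest possible 'model': predicts the mean of what it has seen."""

    def __init__(self):
        self.data = []

    def observe(self, x):
        self.data.append(x)

    def merge(self, other):
        # The "shareable dataset" step: one learner absorbs another's
        # entire accumulated experience in a single operation.
        self.data.extend(other.data)

    def predict(self):
        return sum(self.data) / len(self.data)

a, b = AveragingLearner(), AveragingLearner()
for x in [1.0, 2.0, 3.0]:
    a.observe(x)          # a's "book"
for x in [7.0, 8.0, 9.0]:
    b.observe(x)          # b's "book"

a.merge(b)
print(a.predict())        # 5.0 -- a now reflects b's experience too
```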
host the group in its Mountain View headquarters
interesting choice…
I, for one, am most favorably inclined toward these new overlords, given the range of likely options. It’s interesting to think about the similarities and differences between this open, networked approach and the early nuclear era’s paradigm of highly secretive research and mutually assured destruction. I like this approach better, but we haven’t died (yet) from the other one, either.

Additionally, I am glad this work is being done to stop bad AI, but the opportunities that good AI could create are also so very worth highlighting. I worry that without lots of work and focus on those benefits, just touched on a little here as “extensions of human will,” we run the risk of finding ourselves with AI where we’ve found ourselves with regard to aggregate data analysis: most people are against it, just because so few people are doing it for anything positive. A line from Naomi Klein’s new book on global warming comes to mind: all systems are either extractive or generative. Let’s invest in creating and telling the stories of generative opportunities and outcomes around these new technologies!

Good luck, Sam, Elon and team. Please keep talking about the work going on there.
OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use...
Interesting. I wonder if this initiative will help create a multi-agent superintelligence environment.
Considering that the members of OpenAI so far are exclusively experts in deep learning, it would seem they won’t be digging too much into research on multi-agent systems.
Still, one wonders how they propose we get to “democratic,” evenly distributed AI agents, if that is indeed the goal. Distributed AI botnets? AI as a Service? Locally hosted? How do we avoid concentrations of AI power? Will “open” development be good enough if powerful entities have access to more computing power than the average person? It’s really unclear to me how they get there from here. How can Asimovian robotic ethics be enforced at the software level (remember that Asimov enforced it at the hardware level)?
Guns made us human, sorry to say. Without guns to enhance our abilities, we were simply prey to other humans who were physically stronger. Whether we have outgrown that need is hard to say. As regards AI, it is like a phone or a computer. If you’ve got one, everyone is more equal, but the many/few without are suddenly far behind.
Interesting…
I feel like there are some differences between what we are focusing on and what values we are choosing. Guns are primarily made to kill or hurt other people. They basically cannot be used to create value.
But it turns out that in some context they become a symbol of independence.
AI, on the other hand, is basically a useful tool that contributes directly to increasing wealth production.
But it may carry a potential risk of ruining humanity in some way. Both have huge power, and that’s probably why they cause such a twist.
If we focus on their risks, we had better estimate whether AI is closer to guns or nuclear weapons.
If we focus on their reasonably assumed values, AI is much like a phone or a computer.
If we focus on their primary values, AI and guns are total opposites. “Guns made us human,” you say. This image was quite new to me.
Here in Japan basically nobody (including the police) carries guns, but I feel very safe.
I don’t have to consider others as strangers with the ability to kill anyone at any time. And I think this has made us very “human”.
We are not living in a lawless area; we have means other than weapons that make this one of the safest places in the world.
I understand the point that a gun empowers individuals and narrows their power distribution.
But the same logic can be applied to nuclear weapons. I guess nobody wants any country to have them.
So there is some borderline we have to choose to balance individual risk and collective risk… well, it seems Nebu Pookins’ comments already covered this; I didn’t grasp it in my first comment, sorry. XP
If I said “no, AI is not like the letterpress printing, because letterpress printing involves ink, while AI involves intelligence”, would you feel like I missed the point of your analogy?
Yes, sorry, I was missing your point. Here I tried to describe why I misread it: https://medium.com/@271831410082/interesting-b58377430e90#.qjf00pn0p I have different opinions both about the AI risk and about the idea that giving everyone access to guns is safer, and so before understanding your analogy I emotionally didn’t accept the example used.
I think two career tracks are going to be real, real hot: anything that has anything to do with Machine Learning, and Ethics. We are going to need professional ethicists who will help us soberly think through the multidimensional implications of what we are barreling our way headlong into. In terms of extending life span: I am for extending health in any way and making old age the age of wisdom rather than the age of ill health and pain. Beyond that, though, my moral knees get very wobbly… :-)
Who do you imagine will be paying these super hot new ethicists?
They’ll be crowdfunded. ;-)
Great question. I don’t know. But I sincerely believe that the survival of our humanity as we know it will depend on it.
________
How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over
They’re funding a new organization, OpenAI, to pursue the most advanced forms of artificial intelligence — and give the results to the public

As if the field of AI wasn’t competitive enough — with giants like Google, Apple, Facebook, Microsoft and even car companies like Toyota scrambling to hire researchers — there’s now a new entry, with a twist. It’s a non-profit venture called OpenAI, announced today, that vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Funding comes from a group of tech luminaries including Elon Musk, Reid Hoffman, Peter Thiel, Jessica Livingston and Amazon Web Services. They have collectively pledged more than a billion dollars to be paid over a long time period. The co-chairs are Musk and Sam Altman, the CEO of Y Combinator, whose research group is also a funder. (As is Altman himself.)
Musk, a well-known critic of AI, isn’t a surprise. But Y Combinator? Yep. That’s the tech accelerator that started 10 years ago as a summer project that funded six startup companies by paying founders “ramen wages” and giving them gourmet advice so they could quickly ramp up their businesses. Since then, YC has helped launch almost 1,000 companies, including Dropbox, Airbnb, and Stripe, and has recently started a research division. For the past two years, it’s been led by Altman, whose company Loopt was in the initial class of 2005, and sold in 2012 for $43.4 million. Though YC and Altman are funders, and Altman is co-chair, OpenAI is a separate, independent venture.
Essentially, OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use AI to gain power and even oppress their citizenry. It may sound quixotic, but the team has already scored some marquee hires, including former Stripe CTO Greg Brockman (who will be OpenAI’s CTO) and world-class researcher Ilya Sutskever, who was formerly at Google and was one of the famed group of young scientists studying under neural net pioneer Geoff Hinton in Toronto. He’ll be OpenAI’s research director. The rest of the lineup includes top young talent whose resumes include major academic groups, Facebook AI and DeepMind, the AI company Google snapped up in 2014. There is also a stellar board of advisors including Alan Kay, a pioneering computer scientist.
OpenAI’s leaders spoke to me about the project and its aspirations. The interviews were conducted in two parts, first with Altman and then another session with Altman, Musk, and Brockman. I combined the interviews and edited for space and clarity.
How did this come about?
Sam Altman: We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we are creating OpenAI. The organization is trying to develop a human-positive AI. And because it’s a non-profit, it will be freely owned by the world.
Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to ensure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501c3, a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.
And then philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.
Human will?
Musk: As in an AI extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other. If you think about how you use, say, applications on the internet, you’ve got your email and you’ve got the social media and with apps on your phone — they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we’ve found a number of like-minded engineers and researchers in the AI field who feel similarly.
Altman: We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but on what we believe is the actual best thing for the future of humanity.
Doesn’t Google share its developments with the public, like it just did with machine learning?
Altman: They certainly do share a lot of their research. As time rolls on and we get closer to something that surpasses human intelligence, there is some question how much Google will share.
Couldn’t your stuff in OpenAI surpass human intelligence?
Altman: I expect that it will, but it will just be open source and useable by everyone instead of useable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone.
If I’m Dr. Evil and I use it, won’t you be empowering me?
Musk: I think that’s an excellent question and it’s something that we debated quite a bit.
Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.
Will you have oversight over what comes out of OpenAI?
Altman: We do want to build out an oversight function for it over time. It’ll start just with Elon and me. We’re still a long, long way from actually developing real AI. But I think we’ll have plenty of time to build out an oversight function.
Musk: I do intend to spend time with the team, basically spending an afternoon in the office every week or two just getting updates, providing any feedback that I have and just getting a much deeper understanding of where things are in AI and whether we are close to something dangerous or not. I’m going to be super conscious personally of safety. This is something that I am quite concerned about. And if we do see something that we think is potentially a safety risk, we will want to make that public.
What’s an example of bad AI?
Altman: Well, there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term. One thing that I do think is going to be a challenge — although not what I consider bad AI — is just the massive automation and job elimination that’s going to happen. Another example of bad AI that people talk about is AI-like programs that hack into computers far better than any human can. That’s already happening today.
Are you starting with a system that’s built already?
Altman: No. This is going to start like any research lab and it’s going to look like a research lab for a long time. No one knows how to build this yet. We have eight researchers starting on day one and a few more will be joining over the next few months. For now they are going to use the YC office space and as they grow they’ll move out on their own. They will be playing with ideas and writing software to see if they can advance the current state of the art of AI.
Will outsiders contribute?
Altman: Absolutely. One of the advantages of doing this as a totally open program is that the labs can collaborate with anyone because they can share information freely. It’s very hard to go collaborate with employees at Google because they have a bunch of confidentiality provisions.
Sam, since OpenAI will initially be in the YC office, will your startups have access to the OpenAI work? [UPDATE: Altman now tells me the office will be based in San Francisco.]
Altman: If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that. However, we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and SpaceX can share.
What would be an example of the kind of data that might be shared?
Altman: So many things. All of the Reddit data would be a very useful training set, for example. You can imagine all of the Tesla self-driving car video information being very valuable. Huge volumes of data are really important. If you think about how humans get smarter, you read a book, you get smarter, I read a book, I get smarter. But we don’t both get smarter from the book the other person read. But, using Teslas as an example, if one single Tesla learns something about a new condition, every Tesla instantly gets the benefit of that intelligence.
Musk: In general we don’t have a ton of specific plans because this is really just the incipient stage of the company; it’s kind of the embryonic stage. But certainly Tesla will have an enormous amount of data, of real world data, because of the millions of miles accumulated per day from our fleet of vehicles. Probably Tesla will have more real world data than any other company in the world.
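What Altman and Musk describe here is, in effect, a shared-parameter scheme: one vehicle encounters a new condition, the improved parameters are published, and the whole fleet reads them. A minimal, purely hypothetical sketch of that shape (all names invented; this is not Tesla’s or OpenAI’s actual system):

```python
# Hypothetical sketch of fleet learning: one agent improves a shared
# model and every other agent benefits immediately. All names here are
# invented for illustration.

class SharedModel:
    """One set of parameters read by the whole fleet."""

    def __init__(self):
        self.params = {"wet_road_caution": 0.1}

    def publish(self, key, value):
        # Publishing an update makes it visible fleet-wide at once.
        self.params[key] = value

class Car:
    def __init__(self, name, model):
        self.name = name
        self.model = model  # every car holds the same SharedModel

    def learn(self, condition, improved_value):
        # One car encounters a new condition and improves the model...
        self.model.publish(condition, improved_value)

    def read(self, condition):
        # ...and every other car immediately drives with the update.
        return self.model.params[condition]

fleet_model = SharedModel()
fleet = [Car(f"car{i}", fleet_model) for i in range(3)]

fleet[0].learn("wet_road_caution", 0.9)   # car0 learns from a rainstorm
print(fleet[2].read("wet_road_caution"))  # 0.9 -- car2 benefits instantly
```

Contrast this with the book analogy above: human readers cannot merge what they each learned, whereas here the learning is a data structure that all agents share by construction.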
AI needs a lot of computation. What will be your infrastructure?
Altman: We are partnering with Amazon Web Services. They are donating a huge amount of infrastructure to the effort.
And there is a billion dollars committed to this?
Musk: I think it’s fair to say that the commitment actually is some number in excess of a billion. We don’t want to give an exact breakdown but there are significant contributions from all the people mentioned in the blog piece.
Over what period of time?
Altman: However long it takes to build. We’ll be as frugal as we can but this is probably a multi-decade project that requires a lot of people and a lot of hardware.
And you don’t have to make money?
Musk: Correct. This is not a for-profit investment. It is possible that it could generate revenue in the future in the same way that the Stanford Research Institute is a 501c3 that generates revenue. So there could be revenue in the future, but there wouldn’t be profits. There wouldn’t be profits that would just enrich shareholders, there wouldn’t be a share price or anything. We think that’s probably good.
Elon, you earlier invested in the AI company DeepMind, for what seems to me to be the same reasons — to make sure AI has oversight. Then Google bought the company. Is this a second try at that?
Musk: I should say that I’m not really an investor in any normal sense of the word. I don’t seek to make investments for financial return. I put money into the companies that I help create and I might invest to help a friend, or because there’s some cause that I believe in or something I’m concerned about. I am really not diversified beyond my own company in any material sense of the word. But yeah, my sort of “investment,” in quotes, for DeepMind was just to get a better understanding of AI and to keep an eye on it, if you will.
You will be competing for the best scientists, who might otherwise go to DeepMind or Facebook or Microsoft?
Altman: Our recruiting is going pretty well so far. One thing that really appeals to researchers is freedom and openness and the ability to share what they’re working on, which at any of the industrial labs you don’t have to the same degree. We were able to attract such a high-quality initial team that other people now want to join just to work with that team. And then finally I think our mission and our vision and our structure really appeals to people.
How many researchers will you eventually hire? Hundreds?
Altman: Maybe.
I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn’t there a risk that by making it more available, you’ll be increasing the potential dangers?
Altman: I wish I could count the hours that I have spent with Elon debating this topic and with others as well and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human?
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.
Elon, you are the CEO of two companies and chair of a third. One wouldn’t think you have a lot of spare time to devote to a new project.
Musk: Yeah, that’s true. But AI safety has been preying on my mind for quite some time, so I think I’ll take the trade-off in peace of mind.


Comments
Share all the deep insights in all ways you can, while we can! (About AI - especially about history!)