
The Singularity


  • The Singularity

    Very interesting blog entry by Paul Allen in Technology Review about Ray Kurzweil's Singularity theory.

    Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

    While we suppose this kind of singularity might one day occur, we don't think it is near. In fact, we think it will be a very long time coming...
    Paul Allen: The Singularity Isn't Near

    Some of the best stuff is in the comments:

    The problem of course is that you can automate a great deal of mundane tasks, and hand production over to robot factories for better, higher quality, lower cost production. Most intellectual tasks such as accounting can be handled by computers. Most service tasks can be handled by automated voice systems (though at present they are rather awful, that need not be the case, and they could be improved), etc.

    In the end the question becomes "What will all of the surplus people do to earn a living?"...

  • #2
    Re: The Singularity

    Originally posted by suki
    Some of the best stuff is in the comments: In the end the question becomes "What will all of the surplus people do to earn a living?"
    Mining!



    • #3
      Re: The Singularity

      The article is a nice counterpoint to the technotopians' Panglossian view.

      What is really interesting is looking at technotopian views before and after past 'singularity' events like the Industrial Revolution.

      At the beginning of the Industrial Revolution, the technotopians of that era saw the vast increases in productivity as leading to a new era of prosperity and happiness for all.

      Instead, the ugly twin realities of rural displacement and urban poverty, one new and the other old, were introduced.

      The technotopians' view was, however, vindicated for the upper classes at the time, and eventually - with much reform - translated into overall societal improvement.

      Still a far, far cry from prosperity and happiness for all.

      The other major missing component, which neither Allen et al. nor the technotopians address, is the economic reality of complexity.

      As I've noted before in other posts, until we achieve true soup-to-nuts automation of the full resource-to-product cycle, the sheer cost of the technologies needed for even ongoing improvements in computer hardware itself sets an upper limit.

      The entire 1st world, plus the upper tiers of the BRICs and emerging economies, is able to support a total of 250 fabs. The majority of these make products with the complexity of a slide rule.



      • #4
        Re: The Singularity

        i'm not sure if scott aaronson coined the description of "the singularity" as "the rapture of the nerds," but i love it.


        The Singularity Is Far

        In this post, I wish to propose for the reader’s favorable consideration a doctrine that will strike many in the nerd community as strange, bizarre, and paradoxical, but that I hope will at least be given a hearing. The doctrine in question is this: while it is possible that, a century hence, humans will have built molecular nanobots and superintelligent AIs, uploaded their brains to computers, and achieved eternal life, these possibilities are not quite so likely as commonly supposed, nor do they obviate the need to address mundane matters such as war, poverty, disease, climate change, and helping Democrats win elections.

        Last week I read Ray Kurzweil’s The Singularity Is Near, which argues that by 2045, or somewhere around then, advances in AI, neuroscience, nanotechnology, and other fields will let us transcend biology, upload our brains to computers, and achieve the dreams of the ancient religions, including eternal life and whatever simulated sex partners we want. (Kurzweil, famously, takes hundreds of supplements a day to maximize his chance of staying alive till then.) Perhaps surprisingly, Kurzweil does not come across as a wild-eyed fanatic, but as a humane idealist; the text is thought-provoking and occasionally even wise. I did have quibbles with his discussions of quantum computing and the possibility of faster-than-light travel, but Kurzweil wisely chose not to base his conclusions on any speculations about these topics.

        I find myself in agreement with Kurzweil on three fundamental points. Firstly, that whatever purifying or ennobling qualities suffering might have, those qualities are outweighed by suffering’s fundamental suckiness. If I could press a button to free the world from loneliness, disease, and death—the downside being that life might become banal without the grace of tragedy—I’d probably hesitate for about five seconds before lunging for it. As Tevye said about the ‘curse’ of wealth: “may the Lord strike me with that curse, and may I never recover!”

        Secondly, there’s nothing bad about overcoming nature through technology. Humans have been in that business for at least 10,000 years. Now, it’s true that fanatical devotion to particular technologies—such as the internal combustion engine—might well cause the collapse of human civilization and the permanent degradation of life on Earth. But the only plausible solution is better technology, not the Kaczynski/Flintstone route.

        Thirdly, were there machines that pressed for recognition of their rights with originality, humor, and wit, we’d have to give it to them. And if those machines quickly rendered humans obsolete, I for one would salute our new overlords. In that situation, the denialism of John Searle would cease to be just a philosophical dead-end, and would take on the character of xenophobia, resentment, and cruelty.

        Yet while I share Kurzweil’s ethical sense, I don’t share his technological optimism. Everywhere he looks, Kurzweil sees Moore’s-Law-type exponential trajectories—not just for transistor density, but for bits of information, economic output, the resolution of brain imaging, the number of cell phones and Internet hosts, the cost of DNA sequencing … you name it, he’ll plot it on a log scale. Kurzweil acknowledges that, even over the brief periods that his exponential curves cover, they have hit occasional snags, like (say) the Great Depression or World War II. And he’s not so naïve as to extend the curves indefinitely: he knows that every exponential is just a sigmoid (or some other curve) in disguise. Nevertheless, he fully expects current technological trends to continue pretty much unabated until they hit fundamental physical limits.

        I’m much less sanguine. Where Kurzweil sees a steady march of progress interrupted by occasional hiccups, I see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed—the tiny amount of good humans have accomplished in constant danger of drowning in a sea of blood and tears, as happened to so many of the civilizations of antiquity. The difference is that this time, human idiocy is playing itself out on a planetary scale; this time we can finally ensure that there are no survivors left to start over.

        (Also, if the Singularity ever does arrive, I expect it to be plagued by frequent outages and terrible customer service.)

        Obviously, my perceptions are as colored by my emotions and life experiences as Kurzweil’s are by his. Despite two years of reading Overcoming Bias, I still don’t know how to uncompute myself, to predict the future from some standpoint of Bayesian equanimity. But just as obviously, it’s our duty to try to minimize bias, to give reasons for our beliefs that are open to refutation and revision. So in the rest of this post, I’d like to share some of the reasons why I haven’t chosen to spend my life worrying about the Singularity, instead devoting my time to boring, mundane topics like anthropic quantum computing and cosmological Turing machines.

        The first, and most important, reason is also the reason why I don’t spend my life thinking about P versus NP: because there are vastly easier prerequisite questions that we already don’t know how to answer. In a field like CS theory, you very quickly get used to being able to state a problem with perfect clarity, knowing exactly what would constitute a solution, and still not having any clue how to solve it. (In other words, you get used to P not equaling NP.) And at least in my experience, being pounded with this situation again and again slowly reorients your worldview. You learn to terminate trains of thought that might otherwise run forever without halting. Faced with a question like “How can we stop death?” or “How can we build a human-level AI?” you learn to respond: “What’s another question that’s easier to answer, and that probably has to be answered anyway before we have any chance on the original one?” And if someone says, “but can’t you at least estimate how long it will take to answer the original question?” you learn to hedge and equivocate. For looking backwards, you see that sometimes the highest peaks were scaled—Fermat’s Last Theorem, the Poincaré conjecture—but that not even the greatest climbers could peer through the fog to say anything terribly useful about the distance to the top. Even Newton and Gauss could only stagger a few hundred yards up; the rest of us are lucky to push forward by an inch.

        The second reason is that as a goal recedes to infinity, the probability increases that as we approach it, we’ll discover some completely unanticipated reason why it wasn’t the right goal anyway. You might ask: what is it that we could possibly learn about neuroscience, biology, or physics, that would make us slap our foreheads and realize that uploading our brains to computers was a harebrained idea from the start, reflecting little more than early-21st-century prejudice? Unlike (say) Searle or Penrose, I don’t pretend to know. But I do think that the “argument from absence of counterarguments” loses more and more force, the further into the future we’re talking about. (One can, of course, say the same about quantum computers, which is one reason why I’ve never taken the possibility of building them as a given.) Is there any example of a prognostication about the 21st century written before 1950, most of which doesn’t now seem quaint?

        The third reason is simple comparative advantage. Given our current ignorance, there seems to me to be relatively little worth saying about the Singularity—and what is worth saying is already being said well by others. Thus, I find nothing wrong with a few people devoting their lives to Singulatarianism, just as others should arguably spend their lives worrying about asteroid collisions. But precisely because smart people do devote brain-cycles to these possibilities, the rest of us have correspondingly less need to.

        The fourth reason is the Doomsday Argument. Having digested the Bayesian case for a Doomsday conclusion, and the rebuttals to that case, and the rebuttals to the rebuttals, what I find left over is just a certain check on futurian optimism. Sure, maybe we’re at the very beginning of the human story, a mere awkward adolescence before billions of glorious post-Singularity years ahead. But whatever intuitions cause us to expect that could easily be leading us astray. Suppose that all over the universe, civilizations arise and continue growing exponentially until they exhaust their planets’ resources and kill themselves out. In that case, almost every conscious being brought into existence would find itself extremely close to its civilization’s death throes. If—as many believe—we’re quickly approaching the earth’s carrying capacity, then we’d have not the slightest reason to be surprised by that apparent coincidence. To be human would, in the vast majority of cases, mean to be born into a world of air travel and Burger King and imminent global catastrophe. It would be like some horrific Twilight Zone episode, with all the joys and labors, the triumphs and setbacks of developing civilizations across the universe receding into demographic insignificance next to their final, agonizing howls of pain. I wish reading the news every morning furnished me with more reasons not to be haunted by this vision of existence.

        The fifth reason is my (limited) experience of AI research. I was actually an AI person long before I became a theorist. When I was 12, I set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov’s Three Laws of Robotics. I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user’s question and output an intelligent, Three-Laws-obeying response. Later, at Cornell, I was lucky to learn from Bart Selman, and worked as an AI programmer for Cornell’s RoboCup team—an experience that taught me little about the nature of intelligence but a great deal about how to make robots pass a ball. At Berkeley, my initial focus was on machine learning and statistical inference; had it not been for quantum computing, I’d probably still be doing AI today. For whatever it’s worth, my impression was of a field with plenty of exciting progress, but which has (to put it mildly) some ways to go before recapitulating the last billion years of evolution. The idea that a field must either be (1) failing or (2) on track to reach its ultimate goal within our lifetimes, seems utterly without support in the history of science (if understandable from the standpoint of both critics and enthusiastic supporters). If I were forced at gunpoint to guess, I’d say that human-level AI seemed to me like a slog of many more centuries or millennia (with the obvious potential for black swans along the way).

        As you may have gathered, I don’t find the Singulatarian religion so silly as not to merit a response. Not only is the “Rapture of the Nerds” compatible with all known laws of physics; if humans survive long enough it might even come to pass. The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails). After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata. But in the latter case, we know the intuition is mistaken. There is a ceiling to computational expressive power. Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster. Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow. But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.

        Yet while I believe the latter kind of singularity is possible, I’m not at all convinced of Kurzweil’s thesis that it’s “near” (where “near” means before 2045, or even 2300). I see a world that really did change dramatically over the last century, but where progress on many fronts (like transportation and energy) seems to have slowed down rather than sped up; a world quickly approaching its carrying capacity, exhausting its natural resources, ruining its oceans, and supercharging its climate; a world where technology is often powerless to solve the most basic problems, millions continue to die for trivial reasons, and democracy isn’t even clearly winning over despotism; a world that finally has a communications network with a decent search engine but that still hasn’t emerged from the tribalism and ignorance of the Pleistocene. And I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 18th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.

        http://www.scottaaronson.com/blog/?p=346
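        Aaronson's aside that "every exponential is just a sigmoid (or some other curve) in disguise" is easy to check numerically: in its early phase a logistic curve is nearly indistinguishable from pure exponential growth, and only near its ceiling do the two diverge. A quick sketch (the growth rate and carrying capacity here are arbitrary illustrative values, not a model of any real trend):

```python
import math

def logistic(t, K=1000.0, r=0.1, x0=1.0):
    """Logistic growth: looks exponential at first, saturates at capacity K."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

def exponential(t, r=0.1, x0=1.0):
    """Pure exponential growth with the same rate and starting value."""
    return x0 * math.exp(r * t)

# Early on, the two curves are nearly indistinguishable...
early_gap = abs(logistic(10) - exponential(10)) / exponential(10)

# ...but far out, the logistic has flattened while the exponential keeps going.
late_ratio = logistic(200) / exponential(200)

print(f"relative gap at t=10: {early_gap:.4f}")
print(f"logistic/exponential at t=200: {late_ratio:.2e}")
```

        With these parameters the two curves agree to within a fraction of a percent early on, while far out the logistic has saturated at K and the exponential is several orders of magnitude larger - which is exactly why extrapolating an early trend cannot tell you where the ceiling is.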



        • #5
          Re: The Singularity

          Originally posted by jk
          Quoth Aaronson: "Secondly, there's nothing bad about overcoming nature through technology."



          • #6
            Re: The Singularity

            i should have known that was jonathan coulton!
            who else?
            i'm still waiting for the things that make me weak and strange to get engineered away. but i'm not holding my breath.



            • #7
              Re: The Singularity

              Oh, dear, every silver cloud has a dark lining.

              What Paul Allen says is misdirected, or is some kind of straw man. There are other approaches than simply mimicking the human brain.
              http://www.azorobotics.com/news.aspx?newsID=2162
              Will we download our consciousness to computers by 2045? That seems to be the most extreme claim. But why does that mean the other claims - that computers will exceed us (they already have in many ways), that things will get drastically better, at least for some, that there will be radical health/life extension, that energy will become essentially free (grid parity has already been passed in the tropics), and so on - will not happen?

              Isn't this like my dad being taken to the ER in cardiac arrest, and then after he is stabilized, someone saying, well, he still has cancer, doesn't he?

              We went from feudalism and serfdom and slavery just over a century ago. From one quarter of people dying of smallpox, and half dying of simple communicable diseases that are now relatively easy to control. From no surgery, no pain control, murder rates 50 times what they are now, and many people dying of simple dental infections. From malnutrition and illiteracy. From my mom using kerosene lamps when she was a girl. From my great-grandmother being blind for the last 30 years of her life due to cataracts that can now be repaired for a few thousand dollars. From it taking 6 months to go from New York to Hawaii. From the street in Yokohama of 1876 reproduced in The Last Samurai. From the dirt roads in Gangs of New York (people from New York asked me why there were dirt roads in the movie, and I told them, "because the roads were dirt roads"). From no knowledge of bacteria and viruses. From physicians not knowing they needed to wash their hands after performing autopsies before going to the maternity ward. From polio. From nearly no communication. From no knowledge of DNA. From no birth control. From no civil aviation. From no satellites. From no knowledge of the structure of this universe. From a life expectancy half of what it is now in the developed world. From no instant messaging. From HIV being almost certain death in 1990 to a life expectancy now of 46 years after diagnosis with treatment.

              If I had an iPad and the current Internet, I could have stood on a stage and outperformed every single human being on Earth in 2000.
              And that change happened in just a decade.

              I have always been a technooptimist, and I have always been wrong... because I was not optimistic enough. Biotech has advanced far faster than I ever dreamt possible. The fundamental fact remains: the cells in our bodies know what to do, they just need to be signaled properly.

              We will become Greek gods. Unfortunately, very stupid Greek gods, but Greek gods nonetheless. And reaching for immortality. And that our reach should forever exceed our grasp, or what's a Heaven for?



              • #8
                Re: The Singularity

                Originally posted by mooncliff
                If I had an iPad and the current Internet, I could have stood on a stage and outperformed every single human being on Earth in 2000.
                I disagree.

                Your iPad doesn't allow you to play a musical instrument, only to play a recording someone else has made.

                Your iPad doesn't help you build a house, build a car, build a factory, lay down a road, put in an electrical grid, perform an operation, govern, or any of a myriad of activities necessary to modern human existence.

                Your iPad lets you read email, watch videos, listen to music, and chat. So does your cell phone. So does your PC.

                There is value in this, but the value is exactly what is noted above: it isn't that the iPad provides anything fundamentally different from what you can get via other means.

                It only speeds things up. What you can do with an iPad is no different from what you could get via television in the 1950s (videos), the telegraph in the 1850s (chat and email), or record players (1890s).

                It isn't a revolution in the sense of fundamental change as you saw with electrification, the internal combustion engine, or even mass irrigation.



                • #9
                  Re: The Singularity

                  Originally posted by c1ue
                  The article is a nice counterpoint to the technotopians' Panglossian view.

                  What is really interesting is looking at technotopian views before and after past 'singularity' events like the Industrial Revolution.

                  At the beginning of the Industrial Revolution, the technotopians of that era saw the vast increases in productivity as leading to a new era of prosperity and happiness for all.

                  Instead, the ugly twin realities of rural displacement and urban poverty, one new and the other old, were introduced.

                  The technotopians' view was, however, vindicated for the upper classes at the time, and eventually - with much reform - translated into overall societal improvement.

                  Still a far, far cry from prosperity and happiness for all.

                  The other major missing component, which neither Allen et al. nor the technotopians address, is the economic reality of complexity.

                  As I've noted before in other posts, until we achieve true soup-to-nuts automation of the full resource-to-product cycle, the sheer cost of the technologies needed for even ongoing improvements in computer hardware itself sets an upper limit.

                  The entire 1st world, plus the upper tiers of the BRICs and emerging economies, is able to support a total of 250 fabs. The majority of these make products with the complexity of a slide rule.
                  Well, the source of the problems with the technotopian predictions about the Industrial Revolution is easy to discern. The plain fact is that the massive increases in productivity simply allowed more humans to exist than ever before. There have always been the poor, the destitute, and those who live on the very knife edge of subsistence; presumably this will always be the case, unless and until some Malthusian food-supply disaster rears its ugly head. What the Industrial Revolution did was allow the whole of humanity to have more for less effort per capita, and that effect was concentrated in those areas that quickly adopted the new technologies.


                  This is what I think will happen with all technological advances. The commentary asks,
                  In the end the question becomes "What will all of the surplus people do to earn a living?"...
                  I think the answer will be the same as it's always been with technology: humans will supply less and less strenuous and/or "real" labor to earn their keep. Right now you can live, though not luxuriously, by standing for 8 hours a day flipping burgers at a slow rate for 6 of those hours and at a "feverish pace" for about 2. Under the subsistence-agriculture model that held until several generations ago, the wage for such a trivial labor effort was death; the current wage is an uncomfortable life with some of the amenities of a modern lifestyle. In the future, when "machines do everything and are smarter than everyone" - which is probably as certain as flying cars and jetpacks for everyone - I'd guess the Jetsons economic situation is the most likely outcome: push a button for two hours a week for an upper-middle-class income.



                  • #10
                    Re: The Singularity

                    Originally posted by c1ue
                    I disagree.

                    It isn't a revolution in the sense of fundamental change as you saw with electrification, the internal combustion engine, or even mass irrigation.
                    The iPad is not a revolution in itself, but there is no doubt that we have lived/are living through a revolution in IT and communications.



                    • #11
                      Re: The Singularity

                      Originally posted by unlucky
                      The iPad is not a revolution in itself, but there is no doubt that we have lived/are living through a revolution in IT and communications.
                      And information (via improved communications). Look at the leap the US made when libraries were placed around the country. Now consider what people in even the remotest regions will be able to access.

                      One of the things that struck us traveling through Asia was that nearly everyone had a cell phone (the plans are *dirt* cheap) and computers with Internet access were nearly everywhere we went.

                      OK, the access speeds were not the greatest, but you usually could get what you wanted.

                      I wonder if years from now we will be taking advantage of some technological breakthrough that would have never occurred if some kid in a backwater hadn't been able to get his start from the internet....



                      • #12
                        Re: The Singularity

                        Originally posted by jpatter666
                        I wonder if years from now we will be taking advantage of some technological breakthrough that would have never occurred if some kid in a backwater hadn't been able to get his start from the internet....
                        Yes, and it's impossible really to predict all the effects that these changes will bring. Kurzweil and futurologists in general rarely get it right. I recall in the '70s we were promised flying cars... but nobody predicted the Internet, and if they had, we'd probably have thought, "Lots of computing devices connected to each other... so what? I want my flying car!"



                        • #13
                          Re: The Singularity

                          Kurzweil recently wrote a response to Allen's blog entry - also published in Tech Review. Here he addresses this point by noting that advances are unpredictable in both their timing and direction, but that it is the sum of research in a large number of directions that expands the envelope of technological capability, and the rate of this expansion has led to his prediction:

                          Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.

                          Kurzweil Responds: Don't Underestimate the Singularity

                          http://www.technologyreview.com/blog/guest/27263/?p1=A4
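                          Kurzweil's thermodynamics analogy is easy to illustrate numerically: each individual random walk is unpredictable, yet the ensemble obeys a predictable law (mean near zero, variance close to the number of steps). A minimal sketch, with arbitrary walker counts chosen only for illustration:

```python
import random

def random_walk(steps, rng):
    """One particle: a sum of +/-1 steps, individually unpredictable."""
    return sum(rng.choice((-1, 1)) for _ in range(steps))

def ensemble_stats(walkers, steps, seed=0):
    """Statistics over many walkers: the aggregate is highly predictable."""
    rng = random.Random(seed)
    finals = [random_walk(steps, rng) for _ in range(walkers)]
    mean = sum(finals) / walkers
    var = sum((x - mean) ** 2 for x in finals) / walkers
    return mean, var

# Theory for +/-1 steps: mean ~ 0, variance ~ number of steps.
mean, var = ensemble_stats(walkers=2000, steps=100)
print(f"ensemble mean: {mean:.2f}, variance: {var:.1f} (theory: 0 and 100)")
```

                          No single walker's endpoint can be predicted, but the ensemble mean and variance land close to theory every time - which is the structure of Kurzweil's claim, whatever one thinks of applying it to technology trends.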



                          • #14
                            Re: The Singularity

                            Originally posted by suki
                            Here he addresses this point by noting that advances are unpredictable in both their timing and direction, but that it is the sum of research in a large number of directions that expands the envelope of technological capability, and the rate of this expansion has led to his prediction:
                            The sheer number of companies engaged in semiconductor research is falling, not increasing - primarily because the overall number of companies in the semiconductor business is falling.

                            And that is happening because the sheer cost of creating a product at each successive technology node is increasing geometrically - such that it is literally impossible for a startup to even afford to make its first cutting-edge chip.

                            Thus by Kurzweil's own logic the singularity is receding into the distance.
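                            The "increasing geometrically" point compounds quickly. The numbers below are purely hypothetical assumptions (a made-up starting cost and per-node cost multiplier, not industry data), but they show why a fixed cost ratio per node soon prices out any startup:

```python
def fab_cost(start_cost, ratio, nodes):
    """Cost after `nodes` transitions when cost grows by `ratio` per node."""
    return start_cost * ratio ** nodes

# Hypothetical: a $1B leading-edge fab whose cost multiplies 1.5x per node.
START, RATIO = 1.0, 1.5  # billions of dollars; illustrative values only
for n in (0, 5, 10):
    print(f"after {n:2d} node transitions: ${fab_cost(START, RATIO, n):.1f}B")
```

                            Under these assumed figures, ten node transitions multiply the entry cost by more than fiftyfold, so fewer and fewer players can afford each successive generation.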



                            • #15
                              Re: The Singularity

                              There is the alternate viewpoint: that natural processes will come full circle and we will be driven, by events over which we have no control, into a new phase of human history. This report about the near miss of a comet that had broken into many pieces is an excellent example. http://www.wired.co.uk/news/archive/...ronomy-mystery

                              There was another report some years ago that South East Asia had been hit by seven separate asteroid fragments that, today, would have wiped humanity off the local surface. Again, the East Coast of the US was once hit by an asteroid that devastated the entire seaboard. So the very first thing to remember is that there is no certainty.

                              Personally I think we are well on the way to the same problems faced in the science-fantasy movie Forbidden Planet, where the civilisation had been destroyed by the huge machines it built, which reproduced its "Id" and destroyed it. A wonderful movie that shows us, today, where we might be in another few hundred years. http://en.wikipedia.org/wiki/Forbidden_Planet
