User:Vuara/hyperintelligence hyperlanguage


G. Bugliarello, Hyperintelligence—The next evolutionary step, The Futurist, Dec. 1984 (quote: R. Kriesche, No. 1 "Brainwork", p. 11/12).

"...a most fascinating perspective appears: the development of a new language for the description of the entire knowledge and the feelings that are included in a NETWORK. Such a NETWORK will have 'feelings' different from those of the nodes within the system—not oppositional but, of a more comprehensive nature, just as the mind is more than the sum total of the processes in the individual neurons. At the same time, this NETWORK will dispose of a knowledge that is more comprehensive and homogeneous than that contained in the sum total of its nodes ... We might design an entire hierarchy of functions within a NETWORK analogous to those in the brain. We might look into the reflexes, the formation of pulsations in the NETWORK. And with the relation between stimuli and centrifugal signs, which is the key towards the decisive function of the feedback in the brain ... the 'hyperlanguage' of NETWORKS will be an expression of their 'hyperintelligence', and vice versa—in two processes dependent upon and enforcing each other."

+++

http://transhumanism.com/index.php/weblog/print/26/#note34

What is Transhumanism? Why Be a Transhumanist?

Mark Walker, June 29, 2002

Mark Walker Research Associate, Trinity College, University of Toronto

There is a specter that ought to be haunting the world: the specter of Transhumanism. Transhumanism, I will argue, is the philosophical thesis that we ought to employ technology in the near-term for the purpose of attempting to perfect ourselves. Even those not friendly to Transhumanism ought to pay heed, for within the lifetime of most of those alive today humanity will, in all likelihood, possess the technology to radically reengineer humans, to create new and better persons. Clearly, the philosophical, political and social ramifications of Transhumanism are staggering indeed; arguably, they are unprecedented in human history. Moreover, the fact that the technologies necessary for such experiments are imminent (most expert opinion clusters around the years 2020 to 2050 [1]) means that Transhumanism ought to be on everyone's lips. It is no exaggeration to say that Transhumanism ought to be the headline news every evening on television and on the front page of every morning newspaper. We stand at a crucial juncture in human history. My aim here is to outline what I consider to be the essential core of Transhumanism. Specifically, the definition I offered above contains three theses:

1. The Technology Thesis: Within a hundred years humanity will possess the technology to reengineer Homo sapiens.
2. The Ideal Thesis: The goal in the reengineering task is to perfect ourselves.
3. The Ethical Thesis: We ought to employ technology to realize this ideal.

I will discuss each of these in turn. The exposition of Transhumanism will also reveal why one ought to be a Transhumanist.

1. Person Engineering Technologies

In this section I shall discuss three of the more radical person engineering technologies (PETs) of relevance to Transhumanism: genetic engineering, artificial intelligence research and nanotechnology. I will concentrate on the case of using PETs to improve on the human level of intelligence; although we should bear in mind that, to the extent that Transhumanists may want to perfect other properties of a person, the discussion will perforce be limited. Let us take as our point of departure the view of humanity from a scientific and naturalistic perspective [2]. Darwin's great conceptual innovation was to lend plausibility to the idea that we may think of biological organisms as exhibiting design without having to postulate a divine artificer. Part of Darwin's argument turned on the observation that species are not fixed types, but are evolved and evolving. Applying these insights to the topic of human wisdom or intelligence [3], we may conclude that our intelligence is not the product of the benevolent activities of some divine artificer but the result of the natural selection of random mutations. The history of our intelligence lies in a secular phylogeny, that is, with our apelike ancestors, and indeed, with even more "primitive" organisms. Of course this insight speaks only to the question of "where have we come from?" not to the question of "where are we going?" If we are concerned exclusively with the course that natural selection might take, we are engaged in some serious long-range forecasting: natural evolution typically [4] takes tens of thousands, if not hundreds of thousands, of years [5]. This is not what Transhumanism is about; it is about the near-term. Let us consider first how genetic engineering will allow us to alter Homo sapiens in ways that it would take natural selection hundreds of thousands, if not millions, of years to duplicate.
Take as our first observation the familiar correlation between intelligence and brain size, that is, other things being equal, a larger brain correlates with greater intelligence [6]. For example, our brain is larger than that of an orangutan, and an orangutan's brain is larger than a Great Dane's. The level of intelligence among these three species follows this same progression, i.e., we are more intelligent than orangutans, and they are more intelligent than Great Danes. It seems plausible to hypothesize that the hypothetical species Homo bigheadus, with a brain volume of 2600 cc, ought to be more intelligent and have greater conceptual abilities than Homo sapiens with their measly 1300 cc. Certainly this is the sort of reasoning that is used to explain the vast difference in intelligence between humans and apes, i.e., apes (although similar in body weight) have much smaller brains. Technologically speaking, there does not seem to be any principled reason why we could not genetically engineer a primate with a 2600 cc brain. Thus, if the correlation between brain size and intelligence cited above holds, then it seems that there is a good probability that Homo bigheadus will be much more intelligent than humans. In other words, it seems a perfectly valid piece of naturalized speculation to investigate the following scientific hypothesis:

Hypothesis 1: A primate with a brain volume of 2600 cc will exceed humans in intelligence by the same margin as humans exceed chimpanzees.

To put this in some perspective, a great ape with the same body size as a human would have a brain of about 400 cc, while an Australopithecine of human body weight projects to a brain of approximately 600 cc. Homo sapiens, of course, enjoy a brain of approximately 1300 cc. If we create Homo bigheadus, how intelligent might we expect them to be, given the relationship between intelligence and brain size versus the log of body weight? It is difficult to say, in part because we have no interval measure for interspecies comparisons of intelligence. That is, we do not have some (recognized) scale which would allow us to state that humans are, say, 15 times as intelligent as an orangutan but only 5 times as smart as Australopithecus robustus. At best we have some rough and ready ordinal rankings of intelligence. As noted, we may say that orangutans are more intelligent than a Great Dane, and Homo sapiens more intelligent than orangutans, with Australopithecus robustus falling somewhere in between. Nevertheless, even with mere ordinal rankings of intelligence we might guess that Homo bigheadus would eclipse us in intelligence in a very dramatic fashion indeed, e.g., we might properly expect that the difference between our intelligence and theirs would be more like the difference between human and Australopithecine intelligence than, say, that between human and Homo erectus intelligence. Again, since we have a grasp only on ordinal rankings of intelligence, it is hard to be much more precise than this. We might even suppose that this is some sort of iterative process: Homo bigheadus creates Homo biggerheadus, creatures with brains 4000 cc in size, and Homo biggerheadus creates Homo evenbiggerheadus, and so on. No doubt many will find the thought of such an experiment "fantastic" (to put it mildly). Yet incredible as it may seem, it is not a question of whether we will have the technological ability to perform an experiment along the lines suggested by this hypothesis.
The only question is when we will have the ability. Consider that the basic information and techniques necessary for such an experiment are already available; it is really a matter of working through the myriad of details. There are, for instance, several methods for genetic engineering. One such technique is the microinjection procedure. Basically, DNA is injected into the developing egg of an organism; this DNA attaches itself to the chromosomes and then can be passed on genetically to succeeding generations in the usual fashion. Over fifteen years ago, researchers were able to partially correct a genetic defect in mice employing this method. The strain of mice in question suffers from reduced levels of a growth hormone that results in dwarfism. By inserting the DNA that contains the information for a rat's growth hormone, the researchers were able to reverse this condition [7]. Since the technology necessary for genetic engineering is already available to us, the real trick is finding the appropriate genes that control the growth of the brain. This may not be that difficult. The crude map of the human genome we now possess certainly could be of some assistance. There is also evidence from our phylogenetic cousin the common chimpanzee. As is well known, there is an incredible genetic similarity between the species, e.g., King and Wilson have found that "…the average [human] polypeptide is more than 99 percent identical to its chimpanzee counterpart." [8] The idea would be to discover the genes that have altered the allometric curve of the brain in humans as compared with chimps. From there it would be a relatively simple matter to manipulate them in the genome of a human zygote, and the recipe should be complete [9]. The ease with which we might create a larger brain through genetic engineering is underscored by the fairly recent discovery of homeobox genes: genes that control the development of the body plans of a large number of organisms.
For our purposes what is of interest is that there are a number of homeobox genes that control the growth of various brain regions [10]. For example, if you want to make a larger brain in a frog embryo, simply insert some RNA from the gene X-Otx2 and voilà - you have a frog embryo with a larger brain; specifically, the mid and forebrain mass is increased [11]. Homeobox genes also come in various forms of generality. Otx2 is obviously very general in its scope; in contrast, for example, Emx1 controls the growth of the isocortex (one of the two regions of the neocortex). Thus, if we believe that intelligence and wisdom might be aided by tweaking one area of the brain or another, there may be just the right homeobox gene for this task. Of course this simplifies many, many problems. It is much as if one had said back in 1957, with the launch of Sputnik, that landing men on the moon was merely a question of working through a myriad of details. This was of course true, but it is not to belittle all the problems and technical innovations that were required to achieve this end, e.g., problems of miniaturization. Remember, vacuum tubes were still in use back in 1957! Similarly, there are a host of difficulties that would have to be solved in creating such creatures; let me just mention a couple in passing. First, there are general considerations of physiology, e.g., a larger brain might require increased blood flow, which might mean increasing the size or strength of the heart. Would we have to adjust the allometric curve of the heart and other vital organs? Perhaps the skeletal structure would have to be altered in order to support the additional cranial weight. We might have to look at extending the life span of these creatures in order to allow them enough time to develop to their full potential [12]. Second, one may wonder about the sufficiency (or perhaps even necessity) of creating greater intelligence by dramatically increasing the gross brain size.
It has been speculated, for example, that it is the greater development of our neocortex, as compared with other primates, that is primarily responsible for our greater intelligence, or that due consideration ought to be given to the fact that we exhibit much more hemispheric specialization of cognitive tasks. It may be that the task of attempting to create more intelligent beings ought to focus on the quality as opposed to the quantity of the brain [13]. Thus, it should be clear from what has just been said that there is really nothing so simple as "the crucial genetic engineering test". There are a number of tests that we might perform depending on the relative weight we assign to these variables. For instance, one group of researchers might suppose that doubling the mass of the neocortex ought to be sufficient for testing whether we can make more intelligent creatures, while another might focus on increasing the total mass of the brain by 50%. Interpreting what could reasonably be expected from such tests would probably require input from a number of diverse academic fields. Whether increasing the gross size of the brain to 2600 cc would be necessary or sufficient for a radical increase in intelligence is thus an open question. The general principle - that we might be capable of experimentally manipulating the intelligence of various creatures, including humans - does seem scientifically respectable. Certainly it seems scientifically respectable to suggest that we might be able to experimentally increase the intelligence of any non-human animal. It is difficult to see why humans might be exempt from this line of inductive reasoning. How long would it take to prepare this recipe? As a conservative estimate, it would be safe to say that sometime in the twenty-first century we should possess the relevant knowledge and technology.
If nothing else, it seems that we could in fairly short order have some idea of the efficacy of such procedures by studying other species such as rats. We might, for instance, today attempt to genetically engineer a rat with a brain twice the normal size and observe how this affects its level of intelligence. Such procedures would be achievable in the short-term and provide some evidence as to what might be feasible in our own case. Another PET is based on extrapolations from computer science. The possibility that computers might be able to out-think us has been put forward by a number of researchers; one of the most prominent is Professor Hans Moravec at Carnegie Mellon University [14]. Moravec's conjecture has two essential components: (1) an estimate of how long it will take to develop (affordable) computers with the requisite amount of computing power, and (2) an estimate of how much computing power will be necessary to simulate human intelligence. The key unit of measurement here is MIPS, a million instructions per second. Moravec predicts that robots capable of executing 100 million MIPS will be commercially available around 2040, and these should equal or surpass human intelligence. He claims that "…mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!" [15] Presumably, the artists and philosophers etc., in 2050 will also be our artificial progeny. Moravec's estimate of how much computer power is necessary to simulate the power of the human brain relies on two quantities that have a fair degree of empirical support. One is the 0.02 gram of neural processing circuitry at the back of the human retina. This tissue is devoted to detecting edges and motion in the visual field.
Moravec notes that these tissues perform about 10 million detections per second, that is, there are approximately a million image regions performing 10 detections per second. Data from experiments in robot vision suggest that 1,000 MIPS would be necessary to simulate the 0.02 gram of neural tissue at the back of the retina. Moravec then reasons that, since the entire human brain is 75,000 times heavier than the 0.02 gram of neural tissue, a computer with 75,000 times the computing power is necessary to model human intelligence. In round numbers, then, a computer with 100 million MIPS should be equal to humans in intelligence. It perhaps goes without saying that Moravec's claims are contentious [16]. I do not propose to defend his estimates here; rather, I think the important point to observe is that his inductive reasoning is grounded in empirical data and as such is scientifically respectable. Moravec may be wrong (as he himself admits) that robots will usurp humans as the scientists and (presumably) philosophers and artists of the future, but it seems a conjecture that is at least worthy of our attention. The third PET we need to discuss is nanotechnology. Nanotechnology works from a very simple idea: we can build anything we want to if we have the right building blocks, a design schematic, and a device to put the building blocks together. Nanotechnologists argue that we can use some of the building blocks of our universe itself: atoms and molecules [17]. The schematic in this case would be the complete atomic structure of the item we would like to create. The means to assemble the atoms as specified by the schematic are miniature robots (nanobots). To see how this works, let us take a simple example. Suppose you are thirsty, so you type into your computer 'water'. The computer looks up the schematic for water and of course finds H2O, at which point the computer directs the nanobots to begin assembling water molecules based on this formula.
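Before returning to the nanotechnology example, it is worth noting that Moravec's scaling argument reduces to a short calculation. The figures below are the ones quoted above (the 0.02 gram of retinal tissue, the 1,000 MIPS robot-vision estimate, and the 75,000-to-1 brain-to-sample mass ratio); this is a minimal sketch of the arithmetic, not Moravec's own code:

```python
# Moravec's scaling estimate, using the figures quoted in the text.
retina_sample_g = 0.02            # grams of retinal processing tissue
mips_for_sample = 1_000           # MIPS needed to match it (robot-vision data)
brain_to_sample_ratio = 75_000    # whole brain is ~75,000x heavier

mips_for_brain = mips_for_sample * brain_to_sample_ratio
print(f"{mips_for_brain:,} MIPS")  # 75,000,000 MIPS, i.e. ~100 million in round numbers
```

Note that the ratio implies a total brain mass of about 1,500 grams (0.02 g x 75,000), which is consistent with the typical human brain.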
The nanobots would trundle off to your atom warehouse and begin taking hydrogen and oxygen atoms from storage containers. When enough of these are procured and placed in your glass, you will have a glass of water. This example simplifies a number of problems [18], but it does serve to illustrate the most important moral of nanotechnology: anything can be made so long as we have a supply of atoms or molecules and the right schematic. In terms of perfecting our intelligence there are at least a couple of ways in which nanotechnology could be utilized. One is simply to augment the existing human brain. Suppose we obtain the schematic for making neurons and then instruct the nanobots to add several million neurons to your brain every day until you have a brain the same size as Homo bigheadus (obviously the nanobots would need to do something about expanding your skull) [19]. A second way is to have the nanobots reverse engineer a brain and use this information to upload a person's mind to a computer. It is generally believed, for example, that some complex but ultimately physical process in the brain stores a person's memories. By having the nanobots provide us with a detailed neural schematic of a brain, we should be able to recreate these memories when we use a computer to model this atomic structure. The logical conclusion of this line of thought is that once the nanobots have completely analyzed the atomic structure of your brain and uploaded this information to a computer, your essence will thereby be transferred to a computer environment [20]. Obviously there are a number of ways of combining these technologies and procedures: one could, for instance, imagine a genetically engineered cyborg with an AI implant which works with an interface manufactured by nanotechnology. Clearly, much more needs to be said about the feasibility of these technologies; however, the primary aim of this paper is not to review the technical aspects of this project.
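The schematic-plus-warehouse picture in the water example can be caricatured in a few lines of code. Everything here (the schematic table, the warehouse, the `assemble` function) is an illustrative assumption, not a real nanotechnology interface; it merely shows the bookkeeping the example describes: look up a formula, check the store, withdraw the atoms.

```python
from collections import Counter

# Hypothetical schematic store: item -> atoms per molecule (H2O for water).
SCHEMATICS = {"water": Counter({"H": 2, "O": 1})}

def assemble(item, n_molecules, warehouse):
    """Deduct the atoms needed for n_molecules of item from the warehouse."""
    need = Counter({atom: k * n_molecules
                    for atom, k in SCHEMATICS[item].items()})
    if any(warehouse[atom] < k for atom, k in need.items()):
        raise ValueError("not enough atoms in the warehouse")
    warehouse.subtract(need)   # the nanobots withdraw the atoms from storage
    return need

store = Counter({"H": 10, "O": 10})
used = assemble("water", 3, store)   # three molecules of water
print(dict(used))                    # {'H': 6, 'O': 3}
print(store["H"], store["O"])        # 4 7 left in storage
```

The point of the toy is the "most important moral" above: given a schematic and a supply of building blocks, assembly is just mechanical accounting.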
It is to be hoped that enough has been said to indicate that the technical expertise required by the project is imminent - or at least worthy of considering whether it is indeed imminent. To round out our discussion of the technology thesis we need to discuss its status within the overall theory of Transhumanism. The first point to make in this connection is that Transhumanists consider PETs, for all practical purposes, the necessary and sufficient means for achieving the ideal of Transhumanism. Obviously there are other logically possible ways that the ideal endorsed by Transhumanism might be realized. Various religions have suggested that some form of prayer or worship will allow humans to move to higher cognitive levels, but this is not part of the Transhumanist thesis. Also, historically, philosophers like Plato maintained that it was possible for some humans to move beyond a merely human understanding of the universe to a godlike understanding of the universe via philosophical reflection [21]. Plato's theory, in other words, was that philosophical reflection is both necessary and sufficient for moving beyond a merely human cognitive level. So while there are other logically possible means to achieve the ideal, they are not of particular relevance to Transhumanists. Why? Not to put too fine a point on it, religion [22] and the a priori conception of philosophy [23] have failed, and failed miserably, in this connection. The second point follows naturally from the first: the technological thesis of Transhumanism is open to empirical confirmation or refutation in a number of ways. The most straightforward way would be if it turns out that it is technologically impossible to reengineer persons. This seems exceedingly implausible given what we know about the nature of persons; but it cannot be ruled out a priori. Imagine, for example, that it turns out that the essence of a person is an immaterial soul that is beyond the reach of any technological manipulation.
If this is the case then I believe that we will have to concede that Transhumanism is not viable, as one of its central tenets is false. (And who knows? Perhaps in this case, appearances to the contrary notwithstanding, a priori philosophy or religious practice are more viable means to pursue the ideal of perfection.) The third point is simply to underscore the fact that the discussion here has concentrated on intelligence-enhancing PETs, but similar remarks apply to other Transhumanist technologies. Human cryonics, for example, is the freezing of the dead in the hope that future technology will allow them to be resuscitated and repaired. In this way, some Transhumanists hope to obtain immortality. Other less radical technologies are of interest to Transhumanists as well, e.g., wearable computers, artificial organs, designer drugs, etc. Ultimately, these technologies are part of the transitory stage for humans as they redevelop their very natures. Finally, technology for Transhumanists is a means to an end; it is not the end in itself. Thus, the criticism that Transhumanists are mere technology worshippers is misplaced. As a sociological observation, it is probably true that most Transhumanists find technology intrinsically appealing and fascinating, but this does not form part of the theory of Transhumanism. Thus, for example, encouraging space exploration is sometimes mentioned as being part of the Transhumanist project and on occasion even works its way into definitions of Transhumanism [24]. Unless it can be shown that exploration of space is somehow directly connected with the project of reengineering persons, this seems to me to be a mistake [25]. The insistence that one have a certain protechnology attitude beyond achieving the ideals of Transhumanism would, in my view, be an unnecessary accretion to the theory.

2. The Ideal of Transhumanism: Perfection

The second thesis of Transhumanism, as I have defined it, is that the telos or goal of Transhumanism is to perfect ourselves. This certainly does not agree with all suggested uses of the term. Robin Hanson, for example, defines 'Transhumanism' in a way that does not mention this point at all: "Transhumanism is the idea that new technologies are likely to change the world so much in the next century or two that our descendants will in many ways no longer be "human"." [26] There is no suggestion here that our descendants will in any way be better than ourselves, only different. It is consistent with Hanson's definition, for example, that our descendants no longer resemble us because they are bacteria-like. Similarly, attempts to define Transhumanism in terms of becoming posthuman suffer the same sort of difficulty [27]. A posthuman, at least in one sense of the term, is a creature that has evolved (using technology) from humans. But unless we specify that the posthuman is an improvement in some manner, the term does not help us understand the importance or viability of Transhumanism. Again, a posthuman might be a bacteria-like organism that has evolved from humans. On the other hand, if 'posthuman' means something like a being that is an improvement on human beings, then this does not differ significantly from the present definition, since the meaning of 'posthuman', in this case, will imply the ideal thesis. Mitch Porter's definition seems to come closer to articulating something like the ideal thesis. "Transhumanism," he writes, "is the doctrine that we can and should become more than human." [28] Of course, if we use genetic engineering to become a chimerical creature with a human head and an elephant body, then we will have become "more than human", at least in terms of weight. Presumably Porter means that whatever we become ought to be considered an improvement or an attempt to move towards perfection.
A definition that comes closest to explicitly announcing the ideal thesis is Anders Sandberg's; he writes: "Transhumanism is the philosophy that we can and should develop to higher levels, both physically, mentally and socially using rational methods." Sandberg's definition seems to indicate the idea of improving or perfecting ourselves with the suggestion that we should develop to "higher levels". So what is it to perfect ourselves? To approach this question we must distinguish between 'type-perfection' and 'property-perfection'.

Type-Perfection: The thesis that those individuals who best realize the essential properties of the individual's type or species best exemplify the ideal of perfection. [29]

Property-Perfection: The thesis that those individuals who best realize some property or properties best exemplify the ideal of perfection.

An example may help to clarify this distinction. Let us suppose we are in charge of perfecting the intelligence of a monkey. If we do this according to the type conception of perfection, then there is only so much we can do for our simian friend. We might use technology to make him better able to solve problems like procuring food and mates and avoiding predators; but we would have to be careful not to go too far. If we perfect his intelligence too much then we are in danger of making him something other than a monkey. If we opt for the property conception of perfection then this is not a worry. Suppose we reengineer the monkey's brain to give him linguistic capacities on a par with the best of humanity, the mathematical abilities of our best mathematicians, the scientific abilities of our best human scientists, and so on. Suppose too that we reengineer our friend to walk upright, so he can get around libraries and laboratories more easily, provide him with reengineered hands with the same dexterity as a human's, and finally (so he can attend grad school) remove his tail. Our efforts here, arguably, will not result in a smart monkey but a postmonkey. Property-perfection is the key concept for Transhumanists. Transhumanists seek not to perfect (say) intelligence within the parameters of what is possible for the species Homo sapiens; rather, they seek to perfect the property of intelligence as far as possible. Pursuing a course of property-perfection will, in all likelihood, lead to the speciation of Homo sapiens, i.e., our descendants (and perhaps even some of us) will become posthuman. As is perhaps obvious, a type-perfection of humans could never in itself lead to a speciation event, i.e., humans becoming posthumans, for by definition type-perfection means perfection of an entity qua its type or species.
Thus, type-perfection is logically incompatible with Transhumanism, since the goal is to develop beyond the limitations of our human form. One consequence of this is that in general the idea of a perfect human or human perfectibility - which plays such a pivotal role historically in the perfectionist intellectual tradition [30] - is of little interest to Transhumanists, for precisely the same sort of reasons that the idea of the perfect monkey is not of interest either. In both cases there is little one can do without exceeding the bounds of the species. True, most of us would be pleased to have the intellectual abilities of Plato, Mozart, and Einstein, and would jump at the chance to take a pill that gave us such great (but human) abilities; but, in the grand scheme of things, this may be a very modest ambition. The intellectual abilities of a posthuman philosopher, musician or scientist may make even the celebrated geniuses of our species look apelike in comparison. Which properties ought we to develop and attempt to perfect, and in what manner? Type-perfectionists have at their disposal a natural answer to the question of which properties to perfect and to what degree, namely, those properties that are essential to a thing's nature should be perfected to the extent that they are consistent with and realize the nature of that species or type. For humans, of course, these are the properties that constitute our human nature; and these properties should be developed only in so far as they fall within the scope of being human [31]. Since, as we have said, type-perfection is logically incompatible with Transhumanism, we do not have this "natural" answer to fall back on. So what should Transhumanists say to the question of which properties to develop and in what manner? I don't propose to answer this important and difficult question here in detail; rather, let me lay out somewhat schematically the sorts of considerations that might be involved.
One step, and seemingly the first, would be to decide which properties of ourselves we would like to develop, which we would like to see remain unchanged, and which we might hope to eliminate. Let us take the case of intelligence again. Presumably this is a property we would wish to perfect, while selfish drives or murderous impulses are perhaps examples of properties we may wish to eliminate altogether [32]. This, then, is what we might think of as the 'inventory step': deciding which properties we would like to enhance, which we would like to eliminate, and which we would like to see remain unchanged. As part of this process there is what we may think of as the 'analytic step', which asks us to consider whether the property is itself a composite of simpler properties. Thus, with our present example, we must ask whether intelligence itself is a univocal property. Some psychologists believe in something called 'general intelligence' while others see intelligence as composed of distinct faculties; Howard Gardner, for example, argues that there are eight types of intelligence, so if Gardner is correct then we might have to ask which of these types of intelligence we would like to develop [33]. Then there is what we might think of as the 'vector analysis step': in what direction do we want to develop the property, e.g., do we want to increase mathematical computation speed, to be more efficient, or perhaps slow it down so we might savor the creative process? The 'means step' requires us to consider the question of the technological means to achieve the proposed development: for example, are we to use genetic engineering, the science of AI, or some other technology? Then there is the 'synthetic step': how is the property in question to be combined with the other properties? As we noted previously, increasing the intelligence of a hominid by increasing its brain mass might require increasing the efficiency of the circulatory and respiratory systems.
The steps just listed constitute the theoretical aspect of articulating the ideal of property-perfection. Understanding the ideal will also require an experimental step. We might think, for example, that a perfect memory means never forgetting anything, but then realize that it is sometimes desirable to forget. Finally, the process of perfection will require a reevaluation step as we assess the desirability of attempts at property-perfection. Specifically, this will take the form of a dialectical interplay between theory and experimentation: experiments may cause us to revise our theories about perfection, and our revised theories may lead to new experiments in perfection. Property-perfection, then, encourages, or at least is consistent with, any number of different experiments in obtaining the good life, or indeed the perfect life. It may be that, ultimately, one unique set of properties instantiated in one particular way proves superior to all others. (God was often said to be the paragon of such a being.) In that case, it might be that all posthumans, after a certain amount of experimentation, freely gravitate towards this one superior form. On the other hand, it may be that different combinations of properties make for equally good lives and it is simply a matter of personal preference which to adopt. Obviously, from our present intellectual vantage point we do not know which is the case. Property-perfection, to its credit, is consistent with either scenario. One final reason to prefer property-perfection is that it may be that the most desirable sorts of properties are not in our possession, even in a protean form. Think again of our simian friend. Let us suppose, as many in fact believe, that monkeys do not possess a language. One property that they lack, then, is the capacity for language. Yet millions of years of "experimentation" in our evolutionary history have developed the property of language capacity in humans. 
The evolutionary precedent provides some hope that if there are other desirable properties which we do not possess - suppose, for example, it is possible for creatures of a certain intelligence to possess a hyperlanguage [34], which is as sophisticated in comparison to languages as languages are to simian protolanguages - our experiments may lead to such fortuitous discoveries. The connection with type- and property-perfection is fairly straightforward: if we were concerned exclusively with type-perfection then we could not seek to develop properties that do not belong to our type; just as, if we were to type-perfect a monkey, it would be inappropriate to add the capacity for language (of human complexity), for then our monkey would no longer be a monkey but a postmonkey. On the other hand, if we were in the process of property-perfecting the intelligence of a monkey, then adding something like linguistic capacity is much to the point. We may discover that the ideal of perfection, properly understood, is not so much about becoming more of what we are, but becoming what we are not. No doubt the mere mention of 'perfection' will raise the hackles of some. I believe that I have removed one of the most offensive applications of this term: that we should attempt to become perfect human types. This view is sometimes associated with political ideologies that do not even merit mentioning. By appealing to property-perfection we move well beyond the unpleasantness of this association. However, it still may be wondered why our theory should appeal to something as "rigid" and "puritanical" sounding as 'perfection'. Perhaps, it might be thought, we should be content with simply stating that the goal is to improve ourselves. Admittedly there is not a lot of difference between these different ways of stating the goal of Transhumanism, at least in the short-term. For if our goal is to perfect ourselves, it looks as though this will be a long and iterative process. 
The first iteration of this process may lead us to improving ourselves, but it is not likely to terminate in anything like perfection. Take again the case of intelligence: I believe that we have the technology to attempt to improve upon the level of human intelligence, but I don't think we are close to being able to perfect intelligence - for any number of reasons - although perhaps our descendants will realize this task. One reason to use the term 'perfection' rather than merely 'improvement' is that it clarifies the axiological structure: 'perfection' is an end in itself. To say that our goal is merely to 'improve humans' is to invite the questions of 'how far?' and 'to what end?' I believe that when we spell out our answers to these questions we will be led back to property-perfectionism. For the answer to the question of how far we should improve ourselves is that we should develop properties of ourselves in the manner described above, i.e., to the extent that they contribute to the type of beings that we would like to be. We are forced to answer the second question, for to leave it unanswered could potentially land Transhumanists in the company of those who want to improve humans to (say) make them better servants of some evil regime. Obviously this is not what Transhumanism is about. Any improvement in ourselves is intended for its own sake, i.e., self-improvement is taken as an end in itself. Of course, when we combine these answers - that improvement is directed to the task of developing ourselves as far as we can towards being the type of beings that we would like to be, and that this improvement is not intended to serve some other purpose but is an end in itself - we are landed right back in property-perfectionism. It might be wondered whether property-perfection, thus construed, is too weak to provide any constraint on the application of PETs. 
After all, to say that we ought to experiment with any number of combinations and permutations of properties seems to be saying, "Anything goes". To respond to this, let me close this section by considering some non-Transhumanist applications of PETs. Take the example of 'Devolutionists' (pace the great band Devo) who argue as follows: "Creatures with human (or better) intelligence pose too great a risk to their own survival and the survival of other life forms. Therefore, what humanity should do is use genetic engineering to remake ourselves to be more like our apelike ancestors, i.e., we ought to refashion ourselves along the lines of Homo erectus or one of the Australopithecines." According to the Devolutionists' manifesto, no more humans will be born. Parents who want offspring will raise genetically modified apelike children. The hope is that, in the end, as the last of humanity dies off, there will be no creatures intelligent enough to use the dangerous technology that we have created. Our apelike descendants will return to foraging on the plains and in the forests. Our cities will be abandoned and eventually be overgrown as nature reclaims her dominion. The Devolutionists' position looks, prima facie, like a Transhumanist position, albeit a nonstandard one. The Devolutionists, after all, advocate the use of PETs and the creation of posthumans (specifically, apelike posthumans). One might think that it is the (ultimate) anti-technological stance that prohibits them from joining the ranks of Transhumanists. However, as we noted in section 2, Transhumanists do not necessarily think that technology is an end in itself. 
In any event, I think that concentrating on the technology aspect of the Devolutionists' program does not get to the heart of the matter, namely: they have a fundamentally different goal from that of Transhumanists. Devolutionists are concerned merely to promote the continued existence of life on this planet, whereas Transhumanists are concerned with the process of the perfection of posthuman life forms (and thus, a fortiori, with the continued existence of at least some life). It is the absence of concern for improvement or perfection that prohibits the Devolutionists from joining the ranks of Transhumanists. Similar remarks apply to those - let us call them the 'Heracliteans' - who might advocate using PETs simply because they want to promote change itself. The Heracliteans do not specify, however, that change be towards perfection. As we noted above, Transhumanists also sharply diverge from those who might seek to use PETs to improve humans with a view to the subordination of one group of people by another. Such ambitions are logically incompatible with the goal of Transhumanism. To summarize, defining Transhumanism in terms of the goal of perfection provides a clear and comprehensible formulation of the goal of Transhumanism. It articulates the somewhat vague proposals that we ought to become "more than human" or "move to higher levels". It assists in distinguishing Transhumanism from other proposals for the use of PETs and it helps to explain what it is that Transhumanists are struggling for.

3. The Imperative to Perfect Ourselves

The third and final thesis of Transhumanism is the claim that we ought to attempt to perfect ourselves. This idea is captured in two of the definitions cited above. Porter, recall, says, "Transhumanism is the doctrine that we can and should become more than human". Sandberg says that we "can and should develop higher levels, both physically, mentally, and socially." The use of 'should' in these definitions is, I believe, intended to have moral force. Robin Hanson's definition, "Transhumanism is the idea that new technologies are likely to change the world so much in the next century or two that our descendants will in many ways no longer be "human"", does not contain any normative elements; rather, it is entirely descriptive (or, more precisely, predictive). How plausible is this as a definition? Suppose one believed that there is a strong probability (but not a certainty) that in the next century or two technologies will change the world so much that many of our descendants will in many ways no longer be "human"; but that one thinks this will be a disaster, and strains every nerve (as Aristotle says) of one's being to work against this future, even while acknowledging that the probabilities are not favorable. It seems that on Hanson's definition this person ought to be considered a Transhumanist. In contrast, reflect on someone who thinks that the probability of a future where technology is used to perfect our descendants (or ourselves) is extremely unlikely, but nevertheless believes that this is the best possible future and strains every nerve to bring this future about. This person would seem not to be a Transhumanist according to Hanson's definition. This result seems strange. When we think of other 'isms' like 'socialism' or 'capitalism', it is not agreement upon the likelihood that these projects will succeed that binds their adherents, but the fact that they believe that the project is of positive value and seek to promote the project. 
The lesson to be drawn here is that it is not the claim that humans will succeed in applying PETs, but that they ought to, which best characterizes Transhumanism. One of the issues Transhumanists must consider as they develop accounts of their ethical commitments is the scope to which the ethical imperative applies. One of the weakest forms of the imperative says that one ought to employ technology for the purpose of attempting perfection only if one chooses to adopt the ideal thesis. This is much like the circumstance of a violinist whose snooze alarm has gone off for the third time and who says to herself, "I ought to get up and practice the violin". Given her choice to pursue perfection in violin playing, we can make sense of her claim. She is not saying that this is in some sense a duty that all of humanity shares, but only those like herself who have chosen to pursue a certain goal in a certain way. For practical purposes this is much the same as saying that individuals have the right to opt for a Transhumanist future for themselves. The parallel follows from the fact that rights may be invoked or waived at the discretion of the individual and that rights define in part the domain of ethical conduct, e.g., the right to an abortion does not mean that one must have an abortion, since one may waive this right; and it would not make sense to say that one has the right to some immoral activity like murder. Thus, to say that individuals have the right to a Transhumanist future means that they may, at their discretion, invoke or waive the claim to this future for themselves, and that this future is not in itself an immoral activity. One of the broadest forms of the ethical imperative that might be of interest is the position that asserts that it applies to all of humanity, i.e., the 'we' of "we ought to use technology to perfect ourselves" is an imperative that enjoins all of humanity to strive for self-perfection [35]. 
This might strike some as parochial at best, and verging on totalitarian at worst. This may be so, but it may not be quite so obvious as it first appears. It certainly does not follow from the broad scope of the imperative that any political or moral sanctions ought to be invoked against those individuals who do not strive for a Transhumanist future - just as presently we might feel that it is an ethical failure on the part of individuals not to develop or perfect their talents, yet we do not invoke political or moral sanctions against such individuals. Imagine some brilliant young mathematician decides to give up mathematics for a life of drinking and generally carousing. Here we might be inclined to say that she ought not to waste her life in this way, i.e., we may feel that this choice marks an ethical failure on her part, even though we would not be inclined to incarcerate her for her slothful ways. Obviously, the scope of the ethical imperative is something that Transhumanists must investigate. This much seems at least certain: Transhumanism is committed at least to the weaker thesis that it is not immoral for some individuals to use PETs to pursue their own perfection. We have examined some possible answers to the question of to whom the ethical imperative is intended to apply; we should now consider two sorts of objections to the Transhumanist's imperative: principled objections and specific objections. By 'specific objections' I mean considerations that say that we ought not to employ PETs in the near-term but leave open the possibility that we might be ethically justified in doing so in the distant future. By 'principled objection' I intend the thesis that it is wrong always and everywhere to employ PETs. An example might help us differentiate these two. 
Suppose in five hundred years our sun unexpectedly starts to cool down and it is determined that life on earth will be extinct unless something drastic - like moving the earth closer to the sun - can be done. However, any proposed technological fix is beyond the capabilities of humans. The most promising course of action is to use PETs to attempt to create beings smarter than humans, to see if they can fathom some means to save the planet from certain destruction. Would it be wrong in this instance to apply PETs? Someone who had raised principled objections to Transhumanism would have to say yes. On the other hand, it is quite consistent with the position of those who raise merely specific objections to say that circumstances dictate that we ought to use PETs now. Given that one accepts the 'earth is near death' (END) scenario, it is hard to see what the objection to Transhumanism could be in this circumstance. Not only is the survival of humanity at stake, but of all life on earth. Perhaps the most plausible position would be one that attempted to demonstrate that the unethicalness of Transhumanism follows from consideration of some basic moral principles of an ethical system. What I propose to do, then, is consider this thought in light of two well-known ethical systems: Utilitarianism and Deontology. I shall argue that rather than demonstrating the unethicalness of Transhumanism, they may in fact imply (other things being equal) the ethical imperative of Transhumanism. My argument is based on a simple point: PETs may allow us to create more ethical persons, which seems a clear case of moral progress. In any event, let us consider the principled objection in light of the two ethical theories mentioned. Utilitarianism says to always act so as to promote the greatest happiness or pleasure for all. Can Utilitarianism serve as the foundation for a principled ethical objection to Transhumanism? 
If we are considering principled objections to Transhumanism, the question for Utilitarianism resolves to the question of whether pursuing Transhumanism will always and everywhere fail to increase the total happiness or pleasure of the relevant class of individuals [36]. However, in at least some circumstances, such as the END scenario, Transhumanism looks like the most promising means to increase the total happiness. Since we know that the long-term consequence of the END state of affairs is that there will be no happiness or pleasure, the only sort of objection a Utilitarian could raise to Transhumanism under these circumstances is that Transhumanism could only bring about great suffering and little happiness. But of course we have no reason to suppose that this is the case. Indeed, we have reason to suppose that PETs could lead to much greater total happiness. Even with essentially unaltered humans, the application of technology might allow for an increase in the total happiness or pleasure of humanity, e.g., pharmaceuticals, or electrical stimulation of the pleasure centers of the brain. PETs offer the potential of a much larger reward. Consider that the range and depth of pleasure or happiness that a human can experience, as compared with say a mouse, is enormous. Both humans and mice take pleasure in some eating and mating experiences, but there is a whole world of pleasures beyond the experience of mice: reading a good book, creating a work of art, playing chess, and so on. PETs introduce the possibility that there may be whole worlds of pleasure that are beyond our cognitive purview, just as there are many pleasures we experience that are beyond those of mice. One way in which we might perfect ourselves, then, is to make ourselves better able to experience deeper pleasures and sources of happiness. 
Far from constituting a principled objection to Transhumanism, then, it may be that Utilitarianism actually constitutes a promising means to support the ethical imperative of Transhumanism. Deontology might look like a natural place to resist Transhumanism, for it does not evaluate all moral actions in terms of their consequences. Perhaps the easiest way to think of this theory is negatively: it says that judgments of moral right or wrong are (at least partly) independent of our evaluation of the consequences of those actions. Take a crude example. What should you say when Aunt Martha asks how you like her new green hat? A Utilitarian perhaps might say that you ought to tell a little "white lie" and tell her that the hat is beautiful. The little lie does no appreciable harm and makes Aunt Martha happy; thus, in keeping with Utilitarianism, your action increases the total happiness of the universe. A Deontologist might argue that even though telling her the truth - that the hat is an affront to all good taste - will hurt her feelings (i.e., lead to a bad consequence), this cannot override our obligation not to lie. Similarly, a Deontologist might argue that despite all the good consequences that might ensue from implementing PETs, we have a duty not to use PETs. Thus, given that the Deontologists accept the claim that we ought not to use PETs, they can quite consistently admit the points made against the Utilitarian, i.e., Deontologists can admit that by employing PETs we might avoid the fate suggested by the END scenario, and that it may well be that PETs may allow us to explore entire realms of pleasure or happiness unavailable to unaltered humans. For, as we have said, the Deontologist can claim that such positive consequences cannot override our duty not to employ PETs. We have said that this seems to be a consistent position for a Deontologist to hold, but this is (obviously) not to say that it is justified or justifiable. 
So we must ask: what is the basis for the judgment that we have a duty not to use PETs? Certainly there might be a good prima facie case for this if the idea was to apply this technology to people against their will. Let us then restrict the idea of using PETs only on rational, fully informed, consenting adults. What considerations could a Deontologist then raise against this proposal? Such procedures do not seem to violate the strictures laid out by Immanuel Kant, the most famous deontologist. The application of PETs does not seem to violate his famous categorical imperative: treat others as ends and never merely as means [37]. For again we are not imagining that these procedures are being applied against an individual's will, so it is not that Transhumanists are urging that we treat people as mere means. In fact, there is an argument that says that not only is Deontology compatible with Transhumanism but that Deontology actually implies Transhumanism. This argument starts with the premise that we have a duty to increase the scope of where we might ethically act, i.e., it is our duty to increase the realm of our duties. Think, for example, of a drunk who says that he was not under any obligation to save the drowning child because he was too inebriated. In a sense he is of course right. If we say that he ought to have saved the child, this implies that he can save the child. His defense is that he could not save the child; hence, it could not be the case that he ought to save the child. Of course we might not be so willing to let him off the ethical hook. We may think that he has a duty not to be so incapacitated in the first place. Similarly, one might think that as individuals we are not under any obligation to help all the starving children of the world; since we cannot help all of them, it is false to say that we ought to help all of them. But what if the application of PETs meant that one could help all of them, or at least a lot more of them? 
Would we then have a duty to use PETs on ourselves? Some Transhumanists may think so. If this is the case, then the claim that we are not obligated to reduce a lot of the suffering of this world because we cannot will ring hollow, just as the drunk's claim to diminished obligation is true but marks a moral failure on his part. In any event, if we have a duty to increase the scope of the realm where we act morally, and PETs are the best way to realize this end, then it follows that we have a duty to pursue PETs. Obviously a lot more needs to be said about the relation between general ethical theories and the ethical judgments of Transhumanists. My limited ambition here is simply to indicate that a principled objection based on the ethical theories of Utilitarianism or Deontology does not look particularly promising. Let us turn now to the question of specific objections to Transhumanism. As indicated above, the basic thought is that the inherent dangers of employing PETs, at least at this stage of our understanding of these emerging technologies, mean that we should postpone their application for the foreseeable future. Let us call those who advocate this position the 'Moratoriumists'. The Moratoriumists do not say that we should never use PETs in the future; rather, they are agnostic on this question. Their position, then, is that humanity simply does not presently possess sufficient wisdom to use PETs. Whether humanity will enjoy sufficient wisdom in 100, 1,000, or 10,000 years is not for us to say today. On first inspection, the Moratoriumists' position seems to have prudence and ethics on its side. They can quite consistently admit that much good might come from the application of PETs, but so too might enormous amounts of evil. Prudence and ethics seem to advise us to postpone the use of this technology so that we have time to study it. To do otherwise would seem reckless and, indeed, immoral. 
The plausibility of the Moratoriumists' thinking, I believe, stems from paying insufficient attention to the details of different possible future histories. Let me describe briefly three quite different developments. The 'Frozen' future scenario is one where all technological progress is stopped. This point is worth underscoring: within the Frozen future we must imagine not only basic research into PETs being banned, but all technological research being prohibited. Thus, if it were instituted today, the world would be no more technologically advanced in the year 2101 than it is in 2001. The Transhumanists' future is of course one where PETs are pursued and implemented. Finally, there is the 'Steady-As-She-Goes' (SASG) future, which allows for a comparatively unrestricted development of nonPET technology but severely curtails and regulates PETs. The SASG future of course describes the policy of most of the developed world. In the U.S., for example, there is almost no public awareness of the potential dangers of nanotechnology, but even (merely) therapeutic PETs like those based on stem cell research cause a huge public outcry. Let us ask which of these possible future histories is consistent with the stated goal of the Moratoriumists, which is to avoid a technological disaster. Clearly not the Transhumanist future, for near-term deployment of PETs is what specifically defines the opposition between Transhumanists and Moratoriumists. The Frozen future looks consistent with the Moratoriumists' position; after all, one way to stop the development of PETs is to ban all technological research. Obviously, considerations of both feasibility and desirability make the Frozen future quite unpalatable. A worldwide ban on technological improvement would likely need to be enforced by a tremendous amount of military force, for no doubt it would meet resistance from any number of quarters, e.g., it conflicts with the goals of any number of state, business, scientific and technological organizations. 
The sorts of draconian measures that would be necessary to institute this future may well be worse than the sorts of evils that it is designed to avoid. Certainly forcing the idea of a Frozen future on large parts of an unwilling humanity has the potential for triggering a world war. Presumably most Moratoriumists would be tempted by the SASG future. The thinking, no doubt, is that such a policy does not require the sorts of draconian measures necessitated by the Frozen future option, yet it will provide us with protection from technological disaster. By continuing the policy of banning or severely limiting the exploration of PETs we avoid the risks of technology spawning uncontrollable postmonsters [38]. However, there are a number of problems with the thinking that says that the SASG policy will achieve what the Moratoriumists hope for. First, and perhaps foremost, the potential for technological disaster from the development of nonPETs may be greater than that of PETs, e.g., the chance that all human life will be wiped out by an accidental or purposeful deployment of nanotechnology (designed for nonPET purposes) is not negligible and may be quite high. If this is the case then it may be that the SASG policy does not fulfill its stated purpose of significantly reducing the possibility of harm, in which case it may be that Moratoriumists are forced either to adopt the Frozen future option, or to renounce their position and side with Transhumanists who believe that wiser creatures than humans might have a better chance of avoiding disaster. Second, there is the proliferation problem. Even if no more basic research in, say, genetic engineering were to occur throughout the entire world, there is, as I argued above, sufficient information now to attempt to genetically reengineer persons. There is every reason to suppose that this knowledge will tend to proliferate over time. 
What this means is that the present state of affairs, where, because of their hegemony, Western governments control this technology, is not likely to last long. As non-Western countries gain access to this technology, the same sorts of problems that the Frozen future faces must be faced by the SASG policy: how can these technologies be controlled and monitored on a global basis? Third, the line between PETs and nonPETs is blurry and will continue to be eroded over time. This sort of problem is familiar enough already. One of the problems with selling nuclear energy equipment to other countries is that the same technology will provide a basis from which to construct a nuclear weapons program. The same problem is inherent in the three PETs we discussed in section 1: AI research, genetics and nanotechnology have a number of nonPET applications, yet these same developments may be quite easily turned to person engineering. What these considerations show is that the SASG policy is not likely to satisfy the objectives of the Moratoriumists. The SASG policy might succee