User:Vuara/Hyperlanguages transcend human languages


http://www.hyperreal.org/~mpesce/knowable.html

Knowable and Unspeakable

"Everything must have a natural cause." "Everything must have a supernatural cause." Let these two asses be set to grind corn.

                 - Malaclypse the Elder, The Honest Book of Truth

Krishnamurti, in the tradition of the Eastern mystics, taught that language functioned as a mediation of the real. It's not surprising, actually. Wittgenstein said that philosophy existed only because language is imperfect. But language is not a static thing; its powers are not complete; the world we describe today - on the edge of an Eschaton - is not the same world we spoke of ten millennia ago. Except in one very important way...which I will come to presently.

Perception and consciousness cannot effectively be separated from our innate linguistic capabilities; what the mystics teach is abandonment of the practice of language. What arrives, in the silence, is still spoken, but is perhaps the voice of the world, rather than the voice-in-your-head. The Zen Abbot takes the students to a place beyond words so that they can listen to the reality of the moment. It is not the abandonment of language that is sought after; it is the abandonment of our own language.

Why? Because words do make up the world. Everything that we know - and everything that we can know - must be linguistically mediated; even the revelatory states of mind - which are necessarily indescribable - have their own internal language; we cannot speak of these experiences because we do not have the words for them. But these words do exist.

Our project, in part, is to uncover these words, bring them out into manifest being, and let them work out their memetic infections of mind to produce a new understanding in all who encounter them.

This project, in fact, happens to be isomorphic with the Great Work of Western magic, a tradition at least as old as John Dee (and quite probably going back to Pythagoras, who got it from the Medes, who may have scooped the Egyptians, etc.). The angelic encounters described by Dee in "A True Relation" gave us an entirely new language - Enochian - which, to all appearances, has a comprehensible syntax, regular & irregular in all of the ways a human language ought to be. The angels Dee contacted sought to instruct him in a new language, with new words; this would necessarily lead to a new opening of possibilities for the Elizabethan mystic, and gave him curious new powers which eventually got him into a bit of trouble.

Enochian has, in the four hundred years following "Relation", been studied extensively by such Western mystics as Eliphas Levi, MacGregor Mathers, Aleister Crowley, etc. It's considered one of the principal touchstones of the tradition of Magick as taught in the principal Western schools.

The trick of the game, or so it seems to me, is to go from human language into hyperlanguage, where every object is itself and simultaneously conflated with the universal, a place where words do not divide, but rather, implicitly reflect consciousness back toward an underlying unity of being. It may be that Enochian has this capability; I know that poets reach toward this Eschaton in their own use of their confined words, and - occasionally - strike their mark.

A few of the words from this century have had a particularly magical effect; words like "Noosphere" and "Gaia" and "Global Village" in actuality announced and pronounced something into being. Each may have existed before these words had been articulated, but none could be seen, even though they might be completely self-evident. Language shapes perception completely.

Words do make the world. This is the basic teaching of all the magical traditions I've encountered; each takes a different approach to broadening the lexicon of the postulant. The koan uses words to tie knots in reason and frees the being for a greater revelation than common sense will allow; the mage meditates on the names of his allies, and invokes them into form; the kabbalist contemplates the Tetragrammaton and sees the godhead as the vowels behind the prison walls of the consonants YHVH. The Eschatologist searches for the word which sums everything and integrates the cosmos into a singularity.

That we can even contemplate such a thing means it likely does exist. So we wait, mouths open, babbling a nonsense stream of glossolalia, listening for angels' tongues.

Santa Monica 1 Ix (18 January 1999)

+++

http://www.transhumanist.com/volume10/prolegomena.html

PROLEGOMENA TO ANY FUTURE PHILOSOPHY

Mark Alan Walker

Research Associate, Trinity College, University of Toronto

www.markalanwalker.com

Abstract

Since its inception, philosophy has struggled to reconcile the apparent finitude of humans with the traditional telos of philosophy—the attempt to unite thought and Being, to arrive at absolute knowledge, at a final theory of everything. In response, some pragmatists, positivists, and philosophical naturalists have offered a deflationary account of philosophy: the ambitions of philosophy ought to be scaled back to something much more modest. Inflationism is offered as an alternative: it is conjectured that philosophy might make more progress towards the traditional telos if we attempt to create beings (through the application of technology) who are as far removed from us in intelligence as we are from apes. Rather than deflating the ambitions of philosophy we ought to consider inflating the abilities of philosophers.




1. Introductory

The turn of the millennium provides a natural opportunity to reflect on the future of philosophy. For the last hundred years or so we have heard the call for the end or “death” of traditional philosophy—what might be thought of as the “Plato to Hegel” canon. This call has been issued by some of the most important thinkers and movements in this period: from James to Rorty, Nietzsche to Derrida, from logical positivism to naturalized epistemology. If there is a common thread here, it is that there is a gap between the ambitions of philosophy and the abilities of human philosophers. On one side looms the seemingly transcendent telos of philosophy, namely: what has been described as the attempt to unify thought and Being, to obtain the absolute conception, to realize absolute knowledge, to discover a final theory of everything. On the other side of the gap stand humans; human, all too human humans. The trouble is how to square this vaulting ambition with a modern understanding of the etiology of humanity. If we believe, for example, that Homo sapiens are the result of natural selection of random mutations, then how plausible is it to believe that we might pursue the transcendent conception of wisdom embodied in the telos of traditional philosophy? Certainly such an aspiration would seem less formidable if we believed that we had a divine element within us—if, for instance, we believed that a divine artificer had created our souls out of some divine stuff, or at least if humanity turned out to be the embodiment of Geist. What are we to say about the ambition of philosophy given that our ancestry can be traced not to the divine, but to slime?

         There are, I believe, three answers currently on offer. One is to deny that there is ultimately any teleological gap as described. Perhaps the mistake is to believe that we require a divine phylogeny to realize this ambition. As we will see below, Davidson’s reflections on the notions of ‘truth’, ‘belief’, and ‘meaning’ seem to place him firmly in this category of “denial”. A second alternative is what is sometimes described as a ‘deflationary response’: we ought to adjust the teleology of philosophy to make it more “human”, i.e., to abandon the attempt to unify thought and Being. There have been various proposals by pragmatists, naturalists, and others as to how to scale down the ambition of philosophy and provide it with a “human face”. Philosophy, in other words, is to be given an easier, more realistic—as it were—task like “breaking the crust of convention” (Dewey) or figuring out how “things hang together” (Sellars). A third riposte is what might be termed ‘stoic resolve’. The idea is that we can acknowledge that there may well be a chasm between our abilities and the ambitions of philosophy; nevertheless, we ought to keep a stiff upper lip and soldier on. Nagel, for instance, bravely inveighs against the self-image of the age by adopting exactly this sort of view:


…if truth is our aim, we must be resigned to achieving it to a very limited extent, and without certainty. To redefine the aim so that its achievement is largely guaranteed, through various forms of reductionism, relativism, or historicism, is a form of cognitive wish-fulfillment. Philosophy cannot take refuge in reduced ambitions. It is after eternal and nonlocal truth, even though we know that is not what we are going to get.



While we may never adequately realize the ambitions of philosophy, the activity of philosophizing, says Nagel, is ennobling and important for the human spirit: “Philosophy is the childhood of the intellect, and a culture that tries to skip over it will never grow up.”[1]

These are the three familiar responses. There is actually a fourth response, one that has yet to make itself heard. The idea, in a slogan, is that it is not we who ought to abandon philosophy, but that philosophy ought to abandon us. Consider that as a mere point of logic, if there is a gap between the telos of philosophy and humanity then there are at least two means to close this gap: either philosophy can be scaled down into something more human, or philosophers can be scaled up into something more than human. The idea would be to create better philosophers, ones more naturally suited to realizing the ambitions of philosophy. This view then is diametrically opposed to deflationary accounts which would alter philosophy to provide it with a more “human face”. The ‘inflationary’ experiment proposes to create philosophers with a more “god-like face”.

         Inflationism offers the most profound challenge to philosophy and indeed humanity in its entirety.[2] In order to make this case I argue as follows. As a purely practical point, I believe that it may be that soon—very soon—we will have the technological means to attempt to create beings who may usurp our position as the most intelligent beings on earth. These creatures, with their superior intellect, may well turn out to be better philosophers—the philosophers of the future. On the theoretical plane I hope to show that reflection on the possibility of creating such creatures provides a compelling challenge to the three competing metaphilosophical views outlined above. Finally, I argue that philosophy is uniquely equipped among the academic disciplines to reflect on the potential impact of such a technological revolution. As such, philosophy has a particular responsibility to take up the sorts of questions addressed here, and to do so with some urgency.

2. Some Questions Concerning Technology

It may seem strange (to put it mildly) to suggest that technology will have such profound implications for the metaphilosophical positions outlined above. The link here is provided by the Darwinian revolution in biology. Two points are salient for our discussion: One is that we may think of biological organisms as exhibiting design without having to postulate a divine artificer, the other is that species are not fixed but have evolved and are evolving. Applying these insights to the topic of human intelligence we may conclude that our intelligence is not the product of the benevolent activities of some divine artificer but the result of the natural selection of random mutations. The history of our intelligence lies in a secular phylogeny, that is, with our apelike ancestors and indeed even more “primitive” organisms. Since some grand architect has not fixed our intelligence, we may also ask where it might evolve. Of course, if we are concerned exclusively with the course that natural selection might take we are engaged in some serious long-range forecasting. Natural evolution typically takes tens of thousands, if not hundreds of thousands of years.[3] However, there are other means that will allow us to alter Homo sapiens in ways in which it would take natural selection hundreds of thousands if not millions of years to duplicate. Let us consider three.

One is that which would result from applying the techniques of genetic engineering to the task of creating a more intelligent species. Consider as some very preliminary evidence the familiar correlation between intelligence and brain size, that is, other things being equal, a larger brain correlates with greater intelligence.[4] For example, our brain is larger than that of an orangutan, and an orangutan’s brain is larger than a Great Dane’s. The level of intelligence among these three species follows this same progression, i.e., we are more intelligent than orangutans, and they are more intelligent than Great Danes. It seems plausible to hypothesize that a creature who had a brain size of 2200 cc ought to be more intelligent and have greater conceptual abilities than Homo sapiens with their measly 1300 cc. Certainly this is the sort of reasoning that is used to explain the vast difference in intelligence between humans and apes, i.e., apes (although similar in body weight) have much smaller brains.

Technologically speaking, there does not seem to be any principled reason why we could not genetically engineer the aforementioned creature with a 2200 cc brain. If the correlation between brain size and intelligence cited above holds, then it would seem that this creature has a good probability of being more intelligent than humans. In other words, it seems a perfectly valid piece of naturalized speculation to investigate the following scientific hypotheses:


Hypothesis 1: A primate with a brain volume of 2200 cc will exceed humans in intelligence by the same margin as humans exceed that of chimpanzees.


As a corollary to hypothesis 1, the following hypothesis might be entertained.


Hypothesis 2: A primate as described in hypothesis 1 will be capable of gleaning information and thinking about aspects of the universe that will exceed human ability in this regard by the same order of magnitude that human ability exceeds that of chimpanzees.[5]


To put this in some perspective, consider that, adjusting for body size, a great ape with the same body size as a human would be expected to have a brain of about 400 cc. For australopithecines of similar body size we would project a brain of approximately 600 cc. Homo sapiens, of course, enjoy a brain of approximately 1300 cc. If we engineer a creature, let us call it ‘Homo bigheadus’, with a brain of 2200 cc, how intelligent might we expect it to be, assuming that the same relationship between intelligence and brain size relative to body weight continues to hold? It is difficult to say, in part because we have no interval measure for interspecies comparisons of intelligence. That is, we do not have some recognized scale which would allow us to state that humans are, say, 15 times as intelligent as an Orang but only 5 times as smart as Australopithecus robustus. At best we have some rough and ready ordinal rankings of intelligence. As noted, we may say that Orangs are more intelligent than a Great Dane, and Homo sapiens more intelligent than Orangs, with Australopithecus robustus falling somewhere in between. Nevertheless, even with mere ordinal rankings of intelligence we might guess that Homo bigheadus would eclipse us in intelligence in a very dramatic fashion, e.g., we might properly expect that the difference between our intelligence and theirs would be more like the difference between human and australopithecine intelligence than, say, the difference between human intelligence and that of Homo erectus. Again, since we have a grasp only on the ordinal ranking of intelligence, it is hard to be much more precise than this. We might even suppose that this is some sort of iterative process: Homo bigheadus creates Homo biggerheadus, creatures with brains 4000 cc in size, and Homo biggerheadus creates Homo evenbiggerheadus, etc.
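To make the allometric reasoning above concrete, here is a minimal sketch, in Python, of one way of normalizing brain size against body size. It assumes Jerison's encephalization quotient for mammals, EQ = brain mass / (0.12 × body mass^(2/3)), which is a standard formulation but not one the paper itself invokes, and the body masses used are round illustrative figures rather than data from the text. Brain volume in cc is treated as roughly equal to brain mass in grams.

def encephalization_quotient(brain_g, body_g):
    # Jerison's EQ: observed brain mass divided by the brain mass
    # expected for a typical mammal of the same body mass.
    return brain_g / (0.12 * body_g ** (2.0 / 3.0))

species = {
    "great ape (chimp-sized)":         (400, 45_000),
    "australopithecine":               (600, 40_000),
    "Homo sapiens":                    (1_300, 65_000),
    "'Homo bigheadus' (hypothetical)": (2_200, 65_000),
}

for name, (brain_cc, body_g) in species.items():
    print(f"{name:35s} EQ = {encephalization_quotient(brain_cc, body_g):.1f}")

On these illustrative figures the hypothetical 2200 cc creature comes out well above the human value, which is all the sketch is meant to show: the conjecture is that the curve keeps rising, not that any particular number can be read off it.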

         No doubt many will find the thought of such an experiment “fantastic” (to put it mildly). Yet incredible as it may seem, it is not a question of whether we will have the technological ability to perform an experiment along the lines suggested by these hypotheses. The only question is when we will have that ability. Consider that the basic information and techniques necessary for such an experiment are already available; it is really a matter of working through the myriad of details. There are, for instance, several methods for genetic engineering. One such technique is the microinjection procedure. Basically, DNA is injected into the developing egg of an organism; this DNA attaches itself to the chromosome and then can be passed on genetically to succeeding generations in the usual fashion. Over fifteen years ago, researchers were able to partially correct a genetic defect in mice employing this method. The strain of mice in question suffers from reduced levels of a growth hormone that results in dwarfism. By inserting the DNA that contains information for a rat's growth hormone, the researchers were able to reverse this condition.[6]

Since the technology necessary for genetic engineering is already available to us, the real trick is finding the appropriate genes that control the growth of the brain. This may not be that difficult. The crude map of the human genome we now possess certainly could be of some assistance in this regard. There is also evidence from our phylogenetic cousin the common chimpanzee. As is well known, there is an incredible genetic similarity between our species, e.g., King and Wilson have found that “…the average [human] polypeptide is more than 99 percent identical to its chimpanzee counterpart.”[7] The idea would be to discover the genes that have altered the allometric curve of the brain in humans as compared with chimps. From there it would be a relatively simple matter to manipulate them in the genome of a human zygote, and the recipe should be complete.[8] The ease with which we might create a larger brain through genetic engineering is underscored by the fairly recent discovery in developmental genetics of homeobox genes: genes that control the development of the body plans of a large number of organisms. For our purposes what is of interest is that there are a number of homeobox genes that control the growth of various brain regions.[9] For example, if you want to make a larger brain in a frog embryo, simply insert some RNA from the gene X-Otx2 and voilà—you have a frog embryo with a larger brain; specifically, the mid and forebrain mass is increased.[10] Homeobox genes also come in various forms of generality. Otx2 is obviously very general in its scope; in contrast, for example, Emx1 controls the growth of the isocortex (one of the two regions of the neocortex). Thus, if we believe that intelligence and philosophical wisdom might be aided by tweaking one area of the brain or another, there may be just the right homeobox gene for this task.

         Of course this simplifies many, many problems. It is much as if one had said back in 1957, with the launch of Sputnik, that landing men on the moon was merely a question of working through a myriad of details. This was of course true, but that is not to belittle all the problems and technical innovations that were required to achieve this end, e.g., problems of miniaturization. Remember, vacuum tubes were still in use back in 1957! Similarly, there are a host of difficulties that would have to be solved in creating such creatures; let me just mention a couple in passing. First, there are general considerations of physiology, e.g., a larger brain might require increased blood flow, which might mean increasing the size or strength of the heart. Would we have to adjust the allometric curve of the heart and other vital organs? Perhaps the skeletal structure would have to be altered in order to support the additional cranial weight. We might have to look at extending the life span of these creatures in order to allow them enough time to develop to their full potential.[11] Second, one may wonder about the sufficiency (or perhaps even necessity) of creating greater intelligence by dramatically increasing the gross brain size. It has been speculated, for example, that it is the greater development of our neocortex, as compared with other primates, that is primarily responsible for our greater intelligence, or that due consideration ought to be given to the fact that we exhibit much more hemispheric specialization of cognitive tasks. It may be that the task of attempting to create more intelligent beings ought to focus on the quality as opposed to the quantity of the brain.[12] Thus, it should be clear from what has just been said that there is really nothing so simple as “the crucial genetic engineering test”. There are a number of tests that we might perform depending on the relative weight we assign to these variables. For instance, one group of researchers might suppose that doubling the mass of the neocortex ought to be sufficient for testing whether we can make more intelligent creatures, while another might focus on increasing the total mass of the brain by 50%. Determining what could reasonably be expected from such tests would probably require input from a number of diverse academic fields. Whether increasing the gross size of the brain to 2200 cc would be necessary or sufficient for a radical increase in intelligence is thus an open question. The general principle—that we might be capable of experimentally manipulating the intelligence of various creatures, including humans—does seem scientifically respectable. Certainly it seems scientifically respectable to suggest that we might be able to experimentally increase the intelligence of any non-human animal. It is difficult to see why humans might be exempt from this inductive thinking.
           How long would it take to prepare this recipe? As a conservative estimate, it would be safe to say that sometime in the twenty-first century we should possess the relevant knowledge and technology. If nothing else, it seems that we could in fairly short order have some idea of the efficacy of such procedures by studying other species such as rats. We might, for instance, today attempt to genetically engineer a rat with a brain twice the normal size and observe how this affects its level of intelligence. Such procedures would be achievable in the short term and provide some evidence as to what might be feasible in our own case. Ethics aside, genetically engineering the human zygote in this way is technically feasible today.
         Another possibility for creating greater intelligences is based on extrapolations from computer science. The possibility that computers might be able to out-think us has been put forward by a number of researchers, one of the most prominent being Professor Hans Moravec at Carnegie Mellon University.[13] Moravec's conjecture has two essential components: (1) an estimate of how long it will take to develop (affordable) computers with the requisite amount of computing power, and (2) an estimate of how much computing power will be necessary to simulate human intelligence. The key unit of measurement here is MIPS, a million machine instructions per second. Moravec predicts that robots capable of executing 100 million MIPS will be commercially available around 2040, and these should equal or surpass human intelligence. He claims that “…mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!”[14] Presumably, the artists and philosophers, etc., in 2050 will also be our artificial progeny. Moravec bases his estimate of how much computer power is necessary to simulate the power of the human brain on two quantities that have a fair degree of empirical support. One quantity is the 0.02 gram of neural processing circuitry at the back of the human retina. This tissue is devoted to detecting edges and motion in the visual field. Moravec notes that these tissues perform about 10 million detections per second, that is, there are approximately a million image regions performing 10 detections per second. Data from experiments in robot vision suggest that 1,000 MIPS would be necessary to simulate this 0.02 gram of neural tissue at the back of the retina. Moravec then reasons that, since the entire human brain is 75,000 times heavier than the 0.02 gram of neural tissue, a computer with 75,000 times the computing power is necessary to model human intelligence. In round numbers, then, a computer with 100 million MIPS should be equal to humans in intelligence. It perhaps goes without saying that Moravec's claims are contentious.[15] I do not propose to defend his estimates here; rather, I think the important point to observe is that his inductive reasoning is grounded in empirical data and as such is naturalistically respectable. Moravec may be wrong (as he himself admits) that robots will usurp humans as the scientists and (presumably) philosophers of the future, but it seems a conjecture that is at least worthy of our attention.
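As a check on the arithmetic, the scaling argument just summarized reduces to a one-line calculation. The figures below are simply those quoted in the text (about 1,000 MIPS for the 0.02 gram of retinal processing tissue, and a brain roughly 75,000 times heavier than that tissue); nothing here goes beyond the paper's own numbers.

retina_tissue_g = 0.02          # neural processing circuitry at the back of the retina
retina_equiv_mips = 1_000       # robot-vision estimate for simulating that tissue
brain_to_retina_ratio = 75_000  # whole brain mass / retinal processing tissue mass

brain_equiv_mips = retina_equiv_mips * brain_to_retina_ratio
print(f"Brain-equivalent computing power: about {brain_equiv_mips:,} MIPS")
# Prints 75,000,000 MIPS, i.e. roughly 100 million MIPS in round numbers.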
         Another means of attempting to continue the vector of the natural selection of intelligence is directed selection. The idea, then, would be to selectively breed humans on the basis of intelligence. Estimates for how many generations it would take to make a statistically significant difference in intelligence, based on studies of other species such as rats and dogs, range from three to ten generations.[16] As is well known, Plato advocated selective breeding of the best humans; it is interesting to think what might have happened if we had taken up the master's suggestion.[17] If we conservatively allow 3 generations per century, then this would mean that under Plato's program there would have been sufficient time for 75 generations, probably more than ample time to reap tremendous leaps in intelligence. Indeed, this may have been a sufficient interval for speciation to have occurred, given that it can happen in ten generations with fruit flies, and in as few as thirteen generations with salmon.
         Clearly there are a number of ways of combining these technologies and procedures: one could, for instance, opt for selectively breeding genetically engineered cyborgs. And there are other procedures that I have not mentioned, for example, in vivo augmentation of our brains with neurons created from stem cells. However, the primary aim of this paper is not to review the technical aspects of this project. On the other hand, I have covered some of the empirical details here at some length in the hope of convincing the reader that the expertise required by the project is imminent—or at least that the question of whether it is imminent is worth taking seriously. Even if one assigns a very low probability to the likelihood of any of these coming to fruition, the enormity of the consequences ought to be cause for reflection. For ease of reference, in what follows we may think of super-intelligent beings (SIBs) as any creature—genetically enhanced or non-naturally selected humans, or future supercomputers—whose intelligence transcends our own by the same magnitude as ours eclipses that of the apes.


3.0 Implications for Philosophy


(Fragment 83) The wisest man will appear an ape in relation to God, both in wisdom and beauty and everything else.

(Fragment 79) Man is called childish compared with divinity, just as a boy compared with man.[18]


This is Heraclitus' description of the human epistemic situation. These analogies suggest just how the idea of a higher intelligence might be directly relevant to philosophy.[19] The “phylogenetic analogy” proposes that we simply imagine adult humans as standing at some midpoint between less developed forms of intelligence, such as an ape's, and the higher form of intelligence often attributed to divine forms of understanding or knowledge. The “ontogenetic analogy” makes the same point, but with reference to a human child as opposed to an ape. The connection with philosophy is straightforward if we think of ‘philosophy’ in its etymological sense as “the love of wisdom.” If Heraclitus is correct, we may never fully grasp wisdom. We may not in principle be capable of passing, as Hegel quipped, from the love of wisdom to wisdom itself.[20]

         Perhaps the greatest challenge the SIBs present to philosophy concerns the idea of “epistemic superiority”. Yet it might be remarked that, as suggestive as Heraclitus' analogies are, they lack the articulation that one might expect in a philosophical theory on the subject. The hope, in other words, is that these analogies might be elucidated in more detail, but an immediate and obvious problem presents itself. We are able to describe certain ways in which the perspective of a child or a chimp is more limited, e.g., we might cite the fact that no chimpanzee will be capable of understanding the Critique of Pure Reason, Gödel's Incompleteness Proof, Munch's “Scream”, Baudelaire's “Spleen”, or Dostoevsky's “Notes from Underground”. The trouble lies in the fact that it is difficult to say how our understanding is limited without presupposing access to a higher understanding. Just as only we can appreciate exactly what it is that a child fails to know or understand, so too, it seems, only creatures who transcend our understanding would be able to detail our limitations. Perhaps a full philosophical account of our epistemic limitations is not something we are in a position to formulate or even appreciate. It is perhaps conceivable that only creatures like the SIBs can provide the appropriate sort of philosophical theory on this subject—at least as it concerns humans.
         This seems to leave us in a philosophical quandary. On the one hand, these analogies are, well, merely analogies. They are insinuative of how our perspective might be circumscribed, but they do not provide a philosophical theory of this limitation. On the other hand, if we try to articulate a philosophical theory about these limitations, it seems we are in danger of begging the question: we cannot know too much about that which we do not know. Is there a means to further articulate these analogies in a manner that does not beg the question? 

One proposal for further explication of the idea of the (conjectured) epistemic superiority of the SIBs is via the notion of conceptual schemes. The basic idea is that the SIBs might be said to be epistemically superior in that they possess a conceptual scheme that is more encompassing than our own. Whether this proposal is itself an improvement on Heraclitus' analogies is open to question, since the coherence of the notion of conceptual schemes has been disputed. This is precisely the line Davidson takes in “On the Very Idea of a Conceptual Scheme”—perhaps the best-known recent discussion of the question. A brief look at what Davidson has to say in this regard is in order.

One of Davidson’s most important recommendations is to connect the idea of conceptual schemes with that of language translation:



where conceptual schemes differ, so do languages. But speakers of different languages may share a conceptual scheme provided there is a way of translating one language into the other. Studying the criteria of translation is therefore a way of focusing on criteria of identity for conceptual schemes.[21]



Davidson’s conclusion, that no solid meaning can be attached to the idea of a conceptual scheme, turns on his argument for the claim that all languages are essentially intertranslatable. In other words, the idea is that if we are to make sense of the idea of conceptual schemes, then there must be a failure of translation between languages; but since all languages are intertranslatable, the idea of a conceptual scheme is a mere philosophical fiction.

         An interesting, and I believe important, omission in Davidson’s argument is the possibility of asymmetrical failure of translation. This is not an oversight on his part, for Davidson remarks parenthetically that “…(I shall neglect possible asymmetries).”[22] Given that the idea of asymmetrical failure of translation seems such an obvious maneuver for the conceptual scheme proponent to adopt, it is curious Davidson does not inform us as to why he neglects these asymmetries. In any event, it is perhaps worth considering for a moment what the conceptual scheme proponent might say in favor of the idea of asymmetrical failure of translation. 
         If we limit the field to two languages, our home (H) language and the target (T) language to be translated, then the following translation possibilities present themselves: 

1. H <—> T

2. H —> T

3. H <— T

4. H — T

The arrows in the schema represent the direction of full translatability. Thus, the first case represents the idea that our home language is fully translatable into the target language, and the target language is fully translatable into our home language. The second case expresses the idea that our language is fully translatable into the target language, but our language lacks the expressive powers necessary to fully translate the target language. If we allow the idea of giving some sort of numerical indices to languages to represent their expressive powers, then we might think of the second case as that where the home language has less expressive power than the target language. The third case represents the idea that the home language has greater expressive power than the target language, and thus the target language is not able to fully translate the home language. The fourth case represents the idea of symmetrical failure. We cannot translate the target language with our language, and the target language cannot translate our home language. It might be plausible to maintain in this case that the languages differ in their expressive resources, although one is not necessarily richer than the other.

         Of these four, prima facie at least, logical possibilities, Davidson considers only two, the first and the fourth case. Thus, Davidson’s argument amounts to attempting to demonstrate that the fourth case is a conceptual impossibility, that is, we cannot make sense of mutually untranslatable languages. The conclusion of Davidson’s argument is that all languages must conform to the first case, i.e., all natural languages are essentially intertranslatable. The second and third possibilities are precisely the sorts of cases that, as we have seen, Davidson says he shall neglect. 

Heraclitus’ analogies, however, indicate just how such asymmetries might occur. If we take the language of an adult human as the home language, and compare it with the language that a five year old speaks, the target language, then the third translation possibility best describes this situation. We are able to fully translate what five year old children say, but there are aspects of adult speech that transcend their understanding and linguistic resources. The situation, it seems, might be reversed if we were to apply this translation framework to the speech of the SIBs. We might imagine that the SIBs are able to fully translate what we say into their language, but we might not be able to translate all their utterances into our language. Obviously, this is the sort of scenario that the second translation possibility describes.
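The four possibilities can be stated a little more formally. The following toy sketch (in Python; the class and case descriptions are purely illustrative, not anything drawn from Davidson) encodes translatability between a home language H and a target language T as a pair of yes/no facts, which makes it easy to see that the two asymmetric cases Davidson sets aside are perfectly well-formed possibilities alongside the two he considers.

from dataclasses import dataclass

@dataclass(frozen=True)
class TranslationRelation:
    h_into_t: bool  # can everything sayable in H be fully translated into T?
    t_into_h: bool  # can everything sayable in T be fully translated into H?

    def describe(self):
        if self.h_into_t and self.t_into_h:
            return "Case 1: mutual translatability (Davidson's conclusion)"
        if self.h_into_t:
            return "Case 2: T outstrips H (e.g. a SIB's speech and ours)"
        if self.t_into_h:
            return "Case 3: H outstrips T (e.g. adult speech and a five-year-old's)"
        return "Case 4: mutual failure of translation"

for case in (TranslationRelation(True, True), TranslationRelation(True, False),
             TranslationRelation(False, True), TranslationRelation(False, False)):
    print(case.describe())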

         The intention here is not to argue that Davidson is wrong to dismiss the idea of conceptual schemes. The limited ambition here is merely to indicate that Davidson's argument is incomplete precisely along the vector where it would be most plausible to compare our conceptual resources with those we might envision for the SIBs.[23] But this is critical for Davidson's project, since he wants to show that the idea is incoherent—and this would mean that he has to analyze every permutation.
         Leaving aside for the moment the question of conceptual schemes, let us attend again to the original question about ‘epistemic superiority’.  A slightly less direct means to criticize this notion would be to question it by attending to its implications. One thing that seems certain is that the idea of epistemic superiority implies some sort of skepticism; for we are conjecturing that the SIBs might know things that we are incapable of comprehending. One way to criticize the idea of epistemic superiority, then, would be to reject the sort of skepticism that it implies.  But what sort of skepticism is at stake here? 

Consider the contrast between ‘justificatory’ and ‘noetic’ skepticism. The former concerns the limits of our ability to justify claims, the latter the limits of our thoughts. Although these limits are not always clearly distinguished, both, it seems, are relevant to the skeptical doctrine.[24] A preliminary way to get a handle on this distinction is to think of it in terms of attributions of error and ignorance. Typically, although not invariably, justificatory skeptics present their case via the possibility of error, while noetic skepticism is explicated in terms of ignorance. It might be useful to think of such attributions in terms of a (quasi) scientific example. For simplicity's sake, let us suppose that there are just two contested hypotheses about the nature of the universe. The standard model, or big-bang theory, H1, describes the universe as evolving from some primordial singularity. The steady state hypothesis, H2, proposes that the universe has always been more or less as it is now. The justificatory skeptic might be seen as arguing that we do not know that H1 is true because there is at least a conceptual possibility that we are in error. While our best available empirical evidence supports H1, it is logically possible that H2 is true. We might be asked by a skeptic to imagine that an evil demon has arranged all sorts of false clues; e.g., the “alleged” background radiation of the universe left over from the big bang was simply planted there by the epistemic fiend in an attempt to mislead us. The justificatory skeptic does not suggest that we are ignorant of the conceptual alternatives, for they allow that we might entertain the possibility that H2 is true.

Noetic skepticism, in contrast, does not challenge the justification for any particular hypothesis, but questions whether we are capable of formulating the correct hypothesis in the first place. Noetic skepticism claims that the hypothesis that correctly describes the truth might be beyond the “reach of our minds”—to use Nagel’s formulation. We cannot even entertain the true hypothesis as a possible object of belief, according to this line of skepticism, never mind the subsidiary question of whether such a hypothesis can be justified. To extend the example above, the noetic skeptic might agree that H1 and H2 describe the only two hypotheses about the universe that are worthy of human scientific scrutiny. However, suppose the “complexity” hypothesis H3 is true. It suggests that the theory that best describes the universe must posit a billion billion billion billion billion billion initial conditions, and each of these initial conditions requires at least the same number of bits of information to describe it. Such a hypothesis, let us suppose, is far too complex for any human to conceive. The noetic skeptic then argues that the possibility of H3 demonstrates that we might forever be ignorant about the truth of the universe.

         Noetic skepticism is not without its critics. Of the a priori strategies to reject such doubts, perhaps the most famous is Hegel’s attempt to vanquish noetic skepticism. In the Phenomenology of Spirit, we are told that humans have reached the point of absolute knowledge. In the Logic we are provided with (Hegel’s version of) a complete description of (at least the main features of) reality. Davidson also falls in the a priori camp. As we have seen, the argument in “On the Very Idea of a Conceptual Scheme” is that the idea of conceptual schemes turns on the idea of a failure of translation between languages. Davidson then provides us with a transcendental argument to demonstrate that intertranslatability is a condition sine qua non of languagehood in general.[25] Having ruled out the possibility of (massive) ignorance, i.e., noetic skepticism, Davidson also rules out the possibility of (massive) error[26], i.e., justificatory skepticism, with his “Omniscient Interpreter” argument.[27] Davidson thus is epistemically “optimistic” in much the same way as Hegel. 

It is possible to conceive of an empirically based epistemic optimism as well. Such optimism seems to be widespread (although certainly not ubiquitous) in the sciences. In physics, for example, the question of how close theoretical science is to finding a “final theory,” or what is sometimes known as a “theory of everything,” is often mooted. Perhaps the most prominent recent contribution to this debate is Stephen Hawking's lecture “Is the End in Sight for Theoretical Physics?”, where he argued that the goal of theoretical physics might be achieved by the end of this century. Realizing this goal would mean that we “have a complete, consistent, and unified theory of the physical interactions which would describe all possible observations.”[28] Hawking is not alone among physicists in making such prophetic statements—although most extend the time frame beyond the end of this century.

On the other hand, there are thinkers who are firmly planted in the naturalistic tradition who seem to make a compelling case for taking skepticism seriously. Fodor, for instance, raises exactly this sort of point:

…so long as the class of accessible concepts is endogenously constrained, there will be thoughts that we are unequipped to think. And, so far, nobody has been able to devise an account of the ontogeny of concepts which does not imply such endogenous constraints. This conclusion may seem less unbearably depressing if one considers that it is one which we unhesitatingly accept for every other species. One would presumably not be impressed by a priori arguments intended to prove (e.g.) that the true science must be accessible to spiders.[29]

Fodor and Chomsky seem to endorse noetic skepticism, as they entertain the possibility that human reason might be limited with respect to the sorts of thoughts and truth that we might be capable of entertaining. Furthermore, they do so according to what seems to be naturalistic precepts, i.e., they see noetic skepticism as a consequence of considering Homo sapiens as a biological product formed by the process of natural selection. In effect, then, their views are theoretical analogues to the experiment described above.

         The line of thought we are considering, then, is that the notion of epistemic superiority ought to be rejected because it implies a false or absurd consequence, namely, noetic skepticism. It is beyond the domain of the present work to explore the rejection of noetic skepticism on empirical grounds, other than to note that it is difficult to see how the physicists could be confident in their epistemic optimism in advance of the experimental outcome described above.[30]

But what of Davidson's a priori argument? I believe that there is room to speculate whether Davidson’s argument against noetic skepticism (even if correct) is sufficient to squash the notion of epistemic superiority. For let us assume that Davidson is correct that all languages are intertranslatable. What follows? It would seem that the idea that the SIBs might possess a view of the universe that transcends our own would have to be abandoned. For if such a view is to be expressed in a language, then Davidson’s argument shows that this idea must be rejected. But must the (conjectured) transcendent view of the SIBs be couched in a language?

         Consider first the relation of the idea of language to what seems to be the more general concept of communication. In other words, language may be used as a form of communication, but not all communication is in the form of language.[31] If all natural languages are intertranslatable, then it follows that creatures that are “lower” on the phylogenetic scale, such as chimpanzees and honey bees, do not possess languages. Nevertheless, such creatures are able to glean information about the world and communicate it to conspecifics. The fact that such creatures possess these abilities indicates that their means of communicating might be thought of as ‘protolanguages’. By analogical reasoning, then, it seems that, for all we know, it is possible that there are other forms of communicating and gleaning information about the world that stand to languages as languages stand to protolanguages. Imagine that beings that stand to us in intelligence as we do to chimpanzees communicate by means of a ‘hyperlanguage’. Hyperlanguages transcend human languages in the same manner in which human languages transcend the protolanguages of chimpanzees. If this case can be made, then it is possible to maintain the skeptical position that thought and language might not be able to comprehend reality in all its complexity, even though all languages are essentially intertranslatable.

To fill in the details of this argument, reflect on why chimpanzees and monkeys might be thought of as having a “protolanguage”. To what extent the other primates possess a language, if at all, is a much-contested issue. For the present purposes it is sufficient to make the somewhat banal observation that at least some of the other primates possess a form of communication that enables them to announce information about their environment. It has been known for some time, for example, that the East African vervet monkeys make different-sounding calls in response to three different predators: leopards, eagles and snakes. Commenting on the observation, Seyfarth and Cheney write:



Each call elicited a distinct, apparently adaptive, escape response from nearby vervets. Alarm calls given about leopards caused monkeys to run into trees, where monkeys seemed safe from feline attack. Eagle alarms caused them to look up in the air or run into bushes. Snake alarms caused the animals to stand on their hind legs and look into the grass.[32]



Yet on the Davidsonian view, the information gleaned and communicated by the nonhominid primates does not merit the appellation of a ‘language’, since even the chimpanzee is not capable of understanding all the information that might be communicated by means of a language. No chimpanzee, for example, is going to be able to translate the terms necessary to express quantum physics. Thus, according to Davidson’s criteria of languagehood, chimpanzees do not have a language.

But at this point it may be wondered whether Davidson has the argumentative resources to deny the possibility of hyperlanguages, since the argument in “On the Very Idea of a Conceptual Scheme” does not seem to speak to this possibility. The fact that languages are said to be intertranslatable is of no concern, since hyperlanguages, by definition, are not intertranslatable with languages. For hyperlanguages stand in the same relation to languages in which languages stand to protolanguages, i.e., hyperlanguages, languages, and protolanguages all share the feature of being employed by creatures in gleaning information about the world and communicating it with conspecifics. Where they differ is in the complexity of the information that might be represented. Thus, languages are capable of representing information of greater complexity than that of protolanguages, and hyperlanguages are capable of representing information of greater complexity than languages.[33]

It might be thought that this line of argument begs the question against Davidson's original association of languages and conceptual schemes. However, the same motivation that underlies the distinction between languages, hyperlanguages, and protolanguages can be employed for distinguishing between concepts, hyperconcepts, and protoconcepts. Clearly the stacks of papers on concept mastery in animals and human infants that fill our libraries are mistaken if Davidson is correct, for such creatures cannot employ concepts since they do not possess a language. At best what such creatures might be said to possess, on the Davidsonian view at least, are protoconcepts and a protoconceptual scheme. If this is the case, then it would seem that we ought to say that hyperconcepts and a hyperconceptual scheme stand in the same relation to concepts and a conceptual scheme as the latter stand to protoconcepts and a protoconceptual scheme.

If this line of argument can be made out, then Davidson's transcendental argument, which concludes that all natural languages are intertranslatable, is less significant than it might first appear. This argument “defines” language in a manner such that there is little reason to suppose that languages are the only means of communicating information, or that they are the most sophisticated means of communication. Natural languages might be just that: a natural kind in the order of communication. Other forms of communication are more primitive, and hence are termed ‘protolanguages’, while some are more sophisticated: they are ‘hyperlanguages’.[34]
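One way to picture the proposed taxonomy is as a simple ordering of communication systems by representational power, with full translatability running only "upward". The sketch below (in Python; the numeric ranks are placeholders and only the ordering carries any content) merely illustrates the structure being claimed, under the assumptions of the argument above; it is not itself an argument for that structure.

RANKS = {"protolanguage": 0, "language": 1, "hyperlanguage": 2}

def fully_translatable(source, target):
    # A system can be fully expressed in any system of equal or greater
    # representational power, but not in a lesser one (the asymmetry).
    return RANKS[source] <= RANKS[target]

assert fully_translatable("protolanguage", "language")       # vervet-style calls into English
assert not fully_translatable("language", "protolanguage")   # quantum physics into vervet calls
assert fully_translatable("language", "hyperlanguage")       # our speech into a SIB hyperlanguage
assert not fully_translatable("hyperlanguage", "language")   # the converse fails, by hypothesis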

The skeptical upshot turns on the question of whether the final philosophical and scientific theory of everything might ultimately be expressed in a language. Are languages just too primitive to express the true theory of everything? Does the final philosophical and scientific account of everything require a hyperlanguage for its expression? Is it possible that the epistemic superiority of the SIBs might lie precisely in the fact that they have evolved beyond the need to express their views in a language? What shall we term this—‘logos skepticism’?

A similar line of thought leads one to wonder whether truth itself is an adequate “vehicle” for the final philosophical theory of everything. One “modern” way to understand truth is that it is a property that is properly applied only to (declarative) sentences. If we accept this modern view in conjunction with the Davidsonian position on language intertranslatability, then this implies that chimpanzees are never in possession of the truth. Since chimpanzees are not language users (according to Davidson), a fortiori, truth does not even come into play. If not truth, then, it seems that there still has to be some neutral description of chimpanzees by which we may describe them as communicating correct or incorrect information about their environment. Some primates, for example, have been observed “intentionally” attempting to mislead their comrades by evincing “false” cries of “food over in this direction”. The “ploy” is to misdirect the troop while the signaler doubles back to the real food source. We might say that, while chimps do not communicate truth or falsehoods, they do communicate both information and misinformation or “prototruths” and “protofalsehoods”. If the SIBs communicate with a hyperlanguage, then they too are not in possession of the truth. Perhaps they seek the Hypertruth. Truth (even if it is a woman) may be human, all too human.[35] Have philosophers hitherto set their sights too low? Should philosophy seek Hypertruth? What shall we term this sort of skepticism? Aletheia skepticism?[36] That is, the doctrine that philosophical wisdom might require a standard higher than truth itself.

What has been said thus far tells mostly against those—like Davidson or Hegel—who are inclined to think that there is no gap between human abilities and the telos of (traditional) philosophy. It would be rash to believe that these remarks constitute a “refutation” of Davidson's position. Rather, I think they are best seen as providing an inflationist's perspective on his position. A more considered treatment is beyond the scope of this paper.[37]

I want to turn now very briefly to the other two contending metaphilosophical positions mentioned above, namely, “deflationism” and Nagel's stoic resolve. Since these positions acknowledge the teleological gap, there is no reason to think that they might be averse to the critique of Davidson. James, one of the founding members of modern deflationism, for instance, writes:


I firmly disbelieve, myself, that our human experience is the highest form of experience extant in the universe. I believe rather that we stand in much the same relation to the whole of the universe as our canine and feline pets do to the whole of human life. They inhabit our drawing rooms and libraries. They take part in scenes of whose significance they have no inkling. They are merely tangent to curves of history, the beginnings and ends and forms of which pass wholly beyond their ken. So we are tangent to the wider life of things.[38]


The critique of Davidson offered above is obviously in the same spirit as the dose of humility that James offers. This is of course not surprising since James’ deflationism and inflationism are agreed that there is a potential teleological gap. What would it take to close this gap? Concerning this question Putnam, a contemporary heir to the deflationary tradition, writes:


…I can sympathize with the urge to know, to have a totalistic explanation which includes the thinker in the act of discovering the totalistic explanation in the totality of what it explains. I am not saying that this urge is “optional,” or that it is the product of events in the sixteenth century, or that it rests on a false presupposition because there aren’t really such things as truth, warrant, or value. But I am saying that the project of providing such an explanation has failed.

It has failed not because it was an illegitimate urge—what human pressure could be more worthy of respect than the pressure to know—but because it goes beyond the bounds of any explanation that we have. Saying this is not, perhaps, putting the grand projects of Metaphysics and Epistemology away for good—what another millennium, or another turn in human history as proud as the Renaissance, may bring forth is not for us today to guess—but it is saying that the time has come for a moratorium on Ontology and a moratorium on Epistemology. Or rather, the time has come for a moratorium on the kind of ontological speculation that seeks to describe the Furniture of the Universe and to tell us what is Really There and what is Only a Human Projection, and for a moratorium on the kind of epistemological speculation that seeks to tell us the One Method by which all our beliefs can be appraised.[39]



What is interesting is Putnam’s conditional rejection of the grand questions of philosophy. For Putnam does not say that these questions ought never to be asked by humanity again, only that at present we ought not to ask them. He allows that in another historical epoch these questions might be worth pursuing. As Putnam correctly observes, we cannot guess today what “another millennium, or another turn in history as profound as the Renaissance, may bring forth.” Rorty, it seems, would have us smash the “mirror of nature”. Putnam, with a keener historical sense in this instance, would have us merely put the mirror of nature in the closet to wait for happier times.

         What Putnam says in this quote would seem to apply, a fortiori, to a change in our biology as well. It would seem that we could not today guess what a change in our biology of the same magnitude as that of the development of the hominid line from australopithecines would bring. If we genetically alter the human zygote in such a way as to create larger brained creatures, they may well possess the sort of godlike perspective presupposed by traditional philosophy. At the very least, I do not think we can discount such a possibility a priori. Sometime in the next 5 to 25 years we will in all likelihood possess the expertise to genetically engineer the human zygote in the aforementioned manner. 

Thus, deflationists are flat-out wrong to say that the ambition of philosophy cannot be pursued. What is required—at least by the traditional telos of philosophy—is that we radically improve ourselves. Obviously there is no guarantee that we will be successful in this endeavor, nor is it likely to be easy. But this is not exactly late-breaking news. Plato, the greatest inflationist of them all, instructed us in the Republic that the philosopher's ascent from the cave would be difficult, and that it would result in a radical transformation of those individuals who could complete this arduous task. Specifically, they will become godlike—a point on which Aristotle agrees.[40] For those who want to remain shackled to the cave wall we might gladly donate our copies of Contingency, Irony, and Solidarity as manuals for self-help. Qua footnotes to Plato, it is clear where our duty lies.

         Obviously it is still open to the deflationist to say that we ought not to pursue the telos of philosophy. They might argue, for example, that the procedures outlined—genetically altering the human zygote, selective breeding, or the construction of transcendent computers—are unethical. But this would be to argue for deflationism on totally new grounds. For traditionally deflationists have argued that we cannot realize the telos of traditional philosophy, i.e., the unity of thought and Being—not that we ought not.[41]
         Of the positions delimited, inflationism agrees most closely with stoic perseverance, both about the difficulties facing philosophy and about the viability of the alternatives. We saw, for instance, how Nagel, one of its leading exponents, criticized Davidson’s view that we cannot make sense of a transcendent conceptual scheme.[42]

We saw as well, in the initial survey of the four options, that Nagel rejects deflationism in the strongest terms. The residual question is whether we ought to become something more or wallow in stoicism. Nagel at times seems to point the way to inflationism itself:


         There is a persistent temptation to turn philosophy into something less difficult and more shallow than it is. It is an extremely difficult subject…I do not feel equal to the problems treated in this book. They seem to me to require an order of intelligence wholly different from mine.[43] 


For if philosophical questions require an order of intelligence wholly different from that of Homo sapiens, as Nagel seems to suggest, then the obvious move is to attempt to create a different intelligence. But, as with deflationism, it is open to those who opt for stoic perseverance to argue that inflationism is not a “live” option, since it requires us to perform unethical acts. As noted above, I do not propose to respond to this charge.

         Yet clearly this ethical question looms large, i.e., ought we to attempt to create transcendent thinkers? A negative answer would obviously foreclose the possibility of exploring inflationism—at least for those that hold and heed such norms. As noted in the introduction, the ambition here is merely to point out the theoretical possibility of inflationism. The practical question of which of these four positions we ought to adopt is beyond the scope of the diminutive ambitions of this paper. 
         There is one further obstacle, in addition to any potential ethical barriers, that might stand in the way of inflationism, namely, that pesky thing known as ‘reality’. As noted above, it is an empirical question whether we can in fact create higher intelligences. If it proves impossible then, as I see it, inflationism is done. It is hard to see how, for instance, a deflationist might be moved (other than to laughter) if one insists that there is a possible world where the telos of philosophy is realized. The cash value of inflationism lies in the fact that it is an empirically testable means to further the ambitions of philosophy. 



4.0 Prolegomena to Any Future Philosophizing.



         In section 2 I argued that there is a technological revolution afoot that may radically alter human history and indeed humanity itself. In section 3 I indicated how such a revolution might affect our understanding of philosophy. In this section I want to indicate how philosophy might affect our understanding of the nature of this technological revolution. To focus this discussion, let us take as central three questions on which we might suspect (following Kant) that philosophy exhibits some acumen: What can we know? What should we do? What may we hope?

4.1 What can we know?

Let us suppose that technology raises the prospect of transcendent intelligences. Who better than philosophers to cogitate on this prospect? The debate between Davidson and Nagel on the viability of transcendent intelligences is merely the latest incarnation of a controversy that extends back to the dawn of western philosophy itself. Epistemologically, for instance, this is perhaps the divisive issue between Hegel and Kant. That is, Kant allows the possibility of transcendent intelligences; such creatures may know things in themselves, while such knowledge is inaccessible to humans. Hegel, of course, had little patience for the notion of things in themselves, at least at the end of history.[44] While philosophy has in this century all but forgotten the question of its relation to human and higher intelligence, prior to this century the topic was of central concern. Indeed, philosophy has over twenty-five hundred years of experience to bring to bear on this question.[45] I think it is incumbent upon philosophers to draw on this history in order to help frame the terms of this debate.

         Here is but one example. A recent article has us imagine what sort of constraints might operate on creatures with brains the size of Jupiter. The author’s fundamental premise is that “The laws of physics impose constraints on the activities of intelligent beings regardless of their motivations, culture or technology.”[46] The author then proceeds to investigate the sort of information-processing ability that such a brain might enjoy, e.g., the author supposes that the speed of light imposes a constraint on the processing speed of a Jupiter-sized brain. Let us grant the author’s fundamental premise. It certainly does not follow that the Jupiter-sized brains are bound by the speed of light unless it turns out that we happened to get this physical law correct with our puny 1300 ccs of brain matter. In other words, even if physical laws constrain the activities of intelligent beings, we need some independent support for the claim that our view of these laws is even close to the mark. The author does not seem to take seriously the possibility that there might be intelligences that radically transcend our own. Perhaps such creatures might smile at our claim that the speed of light is a fundamental physical law in the same way we smile at Lord Kelvin’s claim that heavier-than-air flying machines are impossible. To think of the Jupiter-sized brains in such human terms is to seriously underestimate what might be at stake in implementing the technologies noted above. Philosophy ought to bring its expertise on the question of transcendence to this debate.
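To fix ideas, here is the sort of back-of-the-envelope bound at issue (an illustrative calculation of the kind the author’s premise licenses, not a figure quoted from Sandberg’s paper; it assumes our currently measured value of c and Jupiter’s diameter of roughly 1.4 × 10^8 m):

$$ t_{\text{signal}} \approx \frac{d}{c} \approx \frac{1.4 \times 10^{8}\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 0.5\ \text{s}, \qquad \text{as against} \quad \frac{0.15\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 5 \times 10^{-10}\ \text{s for a human brain.} $$

On our physics, then, a one-way signal across a Jupiter-sized brain takes roughly a billion times longer than one across a human brain. The objection above is simply that this bound is only as secure as the puny-brained physics that generates it.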

4.2 What should we do?

If the assessment here is correct, then it may be that we are on the verge of a technological and cultural revolution of unparalleled proportion. At present there seem to be two clear options. We might embrace this technology and attempt to create beings who are better equipped to complete philosophy (and science). The other option is to decide that the wisest course of action is not to employ this technology. Either course presents us with huge moral and ethical challenges. Obviously the former option raises the possibility of the extinction of the human species. What if our transcendent children turn on us? If the latter course is chosen, we face the difficult moral challenge of justifying this course of action, and the further challenge of policing it. To effect a worldwide moratorium on such research would require international cooperation on a scale not yet realized. Our success in stopping the proliferation of nuclear weapons technology—which is by its very nature much more tractable, since it requires a much larger industrial base—ought not to encourage us. The genetic research outlined above could be carried out in any number of labs throughout the world. If philosophy has primary jurisdiction over any domain of discourse, then morality is as good a candidate as any. Given the enormity of these moral questions, philosophers, I believe, have a duty to take them up in conversation.

4.3 What May We Hope For?

How we answer this question (as Kant clearly saw) is intimately intertwined with how we answer the previous two. The upper bound on what we may hope for might be that our transcendent children will fulfill the prophecy of the second coming with only a small inversion of the etiology: it is not God who creates a man-god, but we humans who create god-like beings. The lower bound on what we may hope for is that, in repressing the proliferation of these technologies, we do not have to institute a regime of social control so repressive that George Orwell’s 1984 looks like complete anarchy in comparison.

4.4 A Call to Arms

         Our battle is with the future. Thus far the battle has been engaged by those most closely linked with technology—engineers, scientists and science-fiction writers. A case in point is a recent article by Bill Joy, co-founder of Sun Microsystems, which has caused a stir in certain circles.[47] Joy brings his computer background to bear in assessing the prospects for technology in the twenty-first century. Joy’s version of the future is bleak in the extreme, in contrast to the mostly upbeat or utopian visions offered by other technologists such as Moravec and Kurzweil. Wherein lies the truth? The answer will depend on how we answer the questions ‘What can we know?’, ‘What should we do?’ and ‘What may we hope for?’ Even if there is only a remote possibility that I am correct about the imminent nature of this technology, and its potentially radical implications for humanity, I believe that philosophers are duty-bound to cogitate on these questions with some urgency. This is the call to arms.




Acknowledgements:

   I’m grateful for comments from Peter Menzies, Malcolm Murray, Philip Pettit, Huw Price, Graham Priest, and two anonymous referees, and to the audience at the Canadian Philosophical Association meeting (May 2001, Quebec City, Quebec), where an earlier version of this paper was presented. The paper has also benefited from correspondence with Jason Grossman and Richard Rorty.


References

Barnes, J. (ed) 1984. The Complete Works of Aristotle. 2 vols. Princeton: Princeton University Press.

Boncinelli, E., and Mallamaci, A. 1995. Homeobox genes in vertebrate gastrulation. Current Opinion in Genetics and Development 5, 619-627.

Bonner, J. T. 1980. The Evolution of Culture in Animals, Princeton: Princeton University Press.

Crow, T. 1999. Did Homo Sapiens Speciate on the Y Chromosome? Available online: http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.001

Davidson, D. 1984a. Inquiries into Truth and Interpretation, Oxford: Oxford University Press.


1984b. In Defence of Convention T. Reprinted in 1984a.


1984c. The Method of Truth in Metaphysics. Reprinted in 1984a.


1984d. On the Very Idea of a Conceptual Scheme. Reprinted in 1984a.


1986. A Coherence Theory of Truth and Knowledge. In Truth and Interpretation, ed., Ernest Lepore, Oxford: Basil Blackwell.

Descartes, R. 1981. The Philosophical Works of Descartes, Vol. 1, translated by E. S. Haldane and G.R.T Ross, Cambridge: Cambridge University Press. 

Dretske, F. 1981. Knowledge and the Flow of Information, Cambridge: MIT press.

Dunbar, R. I. M. 1993. Coevolution of neocortical size, group size and language in humans. Behavioral and Brain Sciences, 16, 681-735.

Finkelstein, R. and Boncinelli, E. 1994. From fly head to mammalian forebrain: The story of otd and Otx. Trends in Genetics 10, 310-315.

Fodor, J. 1983. The Modularity of Mind. Cambridge: The MIT Press.

Frankfurt, H. 1974. The Logic of Omnipotence. Reprinted in Readings in the Philosophy of Religion, ed. By B. A. Brody, Englewood Cliffs, N. J.: Prentice-Hall.

Hamilton, E. and Cairns, H. (eds). 1961. The Collected Dialogues of Plato. Translated by Lane Cooper, Princeton: Princeton University Press.

Hammer, R. E., Palmiter, R. D., and Brinster, R. L. 1984. Partial Correction of Murine Hereditary Growth Disorder by Germ-Line Incorporation of a New Gene. Nature, 311, 65-67.

Hawking, S. 1980. Is the End in Sight for Theoretical Physics? Reprinted in John Boslough’s Stephen Hawking’s Universe, New York: Avon Books.

Hegel, G. W. F. 1979. Phenomenology of Spirit. Translated by A.V. Miller, Oxford: Oxford University Press.

Hendry, A.P., J.K. Wenburg, P. Bentzen, E.C. Volk, and T.P. Quinn. (2000). “Rapid evolution of reproductive isolation in the wild: evidence from introduced salmon.” Science 290: 516-518.

Holland, P., Ingham, P., and Krauss. S. 1992. Development and Evolution: Mice and flies head to head. Nature 358, 627-628

James, W. 1995. Pragmatism: A New Name for Old Ways of Thinking. London: Dover Publications.

Jerison, H. 1973. Evolution of the Brain and Intelligence. New York: Academic Press.

Joy, B. 2000. Why the Future Does Not Need Us. Wired, April 2000.

Kant, I. 1950. Prolegomena to Any Future Metaphysics. Translated by Carus and Beck, New York: The Bobbs-Merrill Company, Inc.


1965. The Critique of Pure Reason, translated by Kemp Smith, Toronto: Macmillan.

King, M. C. and Wilson, A. C. 1975. Evolution at two levels in Humans and Chimpanzees. Science, 188, pp. 110-4.

McGinn, C. 1993. Problems in Philosophy: The Limits of Inquiry, Oxford: Blackwell.

Moravec, H. 1998a. Robot: Mere Machine to Transcendent Mind, Oxford: Oxford University Press.


1998b. When will computer hardware match the human brain? Journal of Evolution and Technology (www.Transhumanism.com).


1999. Rise of the Robots. Scientific American, Vol. 281, pp. 124-135.

Nagel, T. 1986. The View from Nowhere. Oxford: Oxford University Press.

Nietzsche, F. 1974. The Gay Science. Translated by Walter Kaufmann, New York: Vintage Books.


1986. Human all too Human. Translated by R. J. Hollingdale, Cambridge: Cambridge University Press.


1989. Beyond Good and Evil. Translated by W. Kaufmann, New York: Vintage Books.

Putnam, H. 1992. Why is a Philosopher? Reprinted in Realism with a Human Face, Cambridge: Harvard University Press.

Rorty, R. 1979. Transcendental Arguments, Self-reference, and Pragmatism. In Transcendental Arguments and Science, ed. by P. Bieri, R. P. Horstmann, and L. Kruger, Dordrecht: D. Reidel Publishing Co.


1982. The World Well Lost. Reprinted in Consequences of Pragmatism, Minneapolis: University of Minnesota Press.

Rovane, C. 1986. The Metaphysics of Interpretation. In Truth and Interpretation, ed. Ernest Lepore, Oxford: Basil Blackwell.

Sandberg, A. 1999. The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains, This Journal, http://www.transhumanist.com/, Vol. 5.

 Sawaguchi, T. 1992. The size of the neocortex in relation to ecology and social structure in monkeys and apes. Folia Primatologica, 58, 131-45.

Seyfarth, R. M. and Cheney, D. L. 1992. Meaning and Mind in Monkeys. Scientific American, Volume 267, no. 6.

Walker, M. 1994. Becoming Gods. Unpublished Ph.D. thesis, Australian National University.


1999. "On the Intertranslatability of All Natural Languages. Unpublished manuscript.


2000. Naturalism and Skepticism: Can Skepticism be Scientifically tested? Manuscript. (www.markalanwalker.com)


2002. On the Fourfold Root of Philosophical Skepticism. Manuscript in Preparation.

Westphal, M. 1998. History and Truth in Hegel’s Phenomenology, third edition, Indiana: Indiana University Press.

Wilbur, J. B., and Allen, H. J. (eds). 1979. The Worlds of the Early Greek Philosophers, Buffalo: Prometheus Books.





[1] Thomas Nagel, The View from Nowhere, Oxford: Oxford University Press, 1986, p. 10 and p.12.

[2] It has been suggested to me that 'inflationism' has a number of negative connotations and that 'amplificatory' might work better here. While I agree with the sentiment, I think 'inflationism' tends to make more perspicuous the contrast with 'deflationism'.

[3] Although on occasion a single genetic mutation may be sufficient for speciation. It has even been argued (controversially) that Homo sapiens speciated on a single genetic change involving the lateralization and hemispheric specialization of the brain. See T. Crow (1999), “Did Homo Sapiens Speciate on the Y Chromosome?”, available online (http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.001).

[4] The locus classicus on this subject is H. Jerison’s Evolution of the Brain and Intelligence, New York: Academic Press, 1973. The correlation is actually between intelligence and brain volume versus the log of body weight. A direct comparison between humans and chimps is a little unfair, since the chimps’ body weight is less, on average, than a human’s. This does not affect the main point, which is that the chimpanzee has a proportionately smaller brain than Homo sapiens. The point about comparing body weight is that the intelligence of a creature is thought to be a function of its “surplus” brain mass: a larger body requires more brain mass to control and motor its operations. Thus, although a whale has a larger brain than Homo sapiens it also has a much larger body. The correlation is by no means perfect. Dolphins have a higher brain versus body weight ratio; if we are to believe this correlation has no exceptions then we ought to accept the conclusion that dolphins are more intelligent than humans. Jerison provides much illumination on these issues.
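One standard way of making the notion of “surplus” brain mass precise is Jerison’s encephalization quotient (EQ). The following is the formulation usually attributed to him, stated here as a rough sketch rather than a quotation from his book, with approximate illustrative values:

$$ \mathrm{EQ} = \frac{E}{0.12\,P^{2/3}}, $$

where E is a creature’s actual brain mass and P its body mass, both in grams, so that EQ measures how much larger the brain is than would be expected for a typical mammal of that body size. On roughly this measure Homo sapiens scores about 7 and the chimpanzee about 2.5, which is the disproportion at issue in the text.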

[5] I discuss this in more detail in my manuscript "Naturalism and Skepticism" (www.markalanwalker.com/natu.htm)

[6] R. E. Hammer, R. D. Palmiter, R. L. Brinster, Nature, 311, 65, 1984. Unfortunately, the treatment was not a total success, as the growth hormone production was inappropriately controlled. An excess of growth hormone in the treated mice resulted in gigantism—mice one and a half times their normal size. (The sound scientific basis for a bad sci-fi movie.)

[7] M. C. King and A. C. Wilson, “Evolution at two levels in Humans and Chimpanzees,” Science, 188, pp. 110-4.

[8] In fact I have argued elsewhere that it might be best to look at the genetic differences between the common chimpanzee and the “pygmy chimpanzee”, since the latter has a smaller, probably neotenous, brain compared with the former.

[9] See P. Holland, P. Ingham, and S. Krauss (1992), “Development and Evolution: Mice and flies head to head”, Nature 358, 627-628; and R. Finkelstein and E. Boncinelli (1994), “From fly head to mammalian forebrain: The story of otd and Otx”, Trends in Genetics 10, 310-315.

[10] See Boncinelli, E., and A. Mallamaci (1995) “Homeobox genes in vertebrate gastrulation”. Current Opinion in Genetics and Development 5, 619-627.

[11] In nature we see a fairly reliable correlation between length of juvenile period and brain size in primates, see J. T. Bonner, The Evolution of Culture in Animals, Princeton: Princeton University Press, 1980, p. 50.

[12] It has been argued, for example, that the neocortex ought to be used as the relevant brain structure in studying the evolution of primate intelligence since primate encephalization is principally a result of an increase in neocortical size, and “higher” cognitive functions are primarily attributed to the neocortex. In addition to the classic discussion in Jerison, op. cit., see R.I.M. Dunbar, “Coevolution of neocortical size, group size and language in humans,” (with commentary), Behavioral and Brain Sciences, 16, 681-735 and Sawaguchi, T. “The size of the neocortex in relation to ecology and social structure in monkeys and apes,” Folia Primatologica, 1992, 58, 131-45 and numerous references therein.

[13] Hans Moravec, Robot: Mere Machine to Transcendent Mind, Oxford: Oxford University Press, 1998; “Rise of the Robots,” Scientific American, Vol. 281, no. 6, December 1999, pp. 124-135; and “When will computer hardware match the human brain?,” Journal of Transhumanism, www.Transhumanism.com, vol. 1, March 1998.

[14] “Rise of the Robots,” op. cit., p. 135.

[15] Some thoughtful criticisms can be found in the replies to Moravec’s Transhumanism article, op. cit.

[16] See Hendry, A.P., J.K. Wenburg, P. Bentzen, E.C. Volk, and T.P. Quinn. (2000). “Rapid evolution of reproductive isolation in the wild: evidence from introduced salmon.” Science 290: 516-518.

[17] Plato, The Republic 459-461.

[18] The Worlds of the Early Greek Philosophers, edited by J. B. Wilbur and H. J. Allen, Buffalo: Prometheus Books 1979, p. 72.

[19] Cf. Plato, Laws, 631, 716.

[20] The Phenomenology of Spirit, paragraph 5.

[21] Donald Davidson, “On the Very Idea of a Conceptual Scheme”, reprinted in Inquiries into Truth and Interpretation, Oxford: Oxford University Press, 1984, p. 184.

[22] Ibid. p. 185.

[23] Nagel, op. cit., makes a similar argument against Davidson, pp. 93-99.

[24] This distinction seems to be operating at some level in the work of Descartes and Kant. Thus Descartes: “I would not even dare to say that God cannot arrange that a mountain should exist without a valley, or that one and two should not make three; but I only say that He has given me a mind of such a nature that I cannot conceive a mountain without a valley or a sum of one and two which would not be three, and so on, and that such things imply contradictions in my conception.” (Letter to Arnauld, 29 July 1648). Quoted in H. Frankfurt, “The Logic of Omnipotence”, reprinted in Readings in the Philosophy of Religion, ed. by B. A. Brody, Englewood Cliffs, N. J.: Prentice-Hall, 1974, p. 343, note 3. Kant too clearly thought our understanding was limited in comparison to God. Kant contends that we labor under the misfortune of not having an intellectual intuition like God, but merely a sensuous intuition; hence, we may only know objects as phenomena, not noumena, The Critique of Pure Reason, translated by Kemp Smith, Toronto: Macmillan, 1965, B308-310. Sometimes the problem is put in terms of our having to think rather than use pure intuition: “…all his [God’s] knowledge must be intuition, and not thought, which always involves limitation,” ibid., B 71. Of course Descartes and Kant both tended to concentrate on what I am calling ‘justificatory skepticism’ to the neglect of noetic skepticism. Much of the tradition, unfortunately, in my view, has had a similar focus. A closely allied distinction is made by Nagel in his The View from Nowhere, op. cit., p. 90: “In the last chapter we discussed skepticism with regard to knowledge. Here I want to introduce another form of skepticism—not about what we know but about how far our thoughts can reach. I shall defend a form of realism according to which our grasp on the world is limited not only in respect of what we can know but also in respect of what we can conceive. In a very strong sense, the world extends beyond the reach of our minds.” His distinction is not exactly the same as the one discussed here. The difference lies in the fact that Nagel seems to suggest at certain points that the world does in fact transcend our ability to conceptualize it, whereas the skepticism here asserts merely that we leave open the possibility of such a transcendence. I discuss these various types of skepticism in my unpublished "On the Fourfold Root of Philosophical Skepticism".

[25] Davidson acknowledges the transcendental nature of his argument in “In Defence of Convention T”, in Inquiries into Truth and Interpretation, op. cit., p. 72. Carol Rovane argues in “The Metaphysics of Interpretation”, in Truth and Interpretation, ed. Ernest Lepore, Oxford: Basil Blackwell, 1986, pp. 417-29, that there is a strain of transcendental argumentation in Davidson. Rorty argues that Davidson’s conceptual scheme argument is a transcendental argument to end all transcendental arguments, “Transcendental Arguments, Self-reference, and Pragmatism”, in Transcendental Arguments and Science, ed. by P. Bieri, R. P. Horstmann, and L. Kruger, Dordrecht: D. Reidel Publishing Co., pp. 95-103.

[26] Although Davidson does not directly speak to the possibility of higher intelligences in his discussion of conceptual schemes, this is clearly an implication of his argument as Rorty, “The World Well Lost”, reprinted in Consequences of Pragmatism, Minneapolis: University of Minnesota Press, 1982, and Nagel, The View from Nowhere, op. cit., have clearly seen.

[27] Davidson appeals to the notion of an omniscient interpreter as a means to guarantee the veracity of our beliefs in his “The Method of Truth in Metaphysics”, reprinted in Inquiries into Truth and Interpretation, op. cit., pp. 199-214. The omniscient interpreter has a return engagement in “A Coherence Theory of Truth and Knowledge”, reprinted in Truth and Interpretation, op. cit., p. 307.

[28] “Is the End in Sight for Theoretical Physics”, reprinted in John Boslough’s Stephen Hawking’s Universe, (New York: Avon Books, 1985) p. 119.

[29] The Modularity of Mind, (Cambridge: The MIT Press, 1983), pp. 125-6. I would be impressed if the spiders themselves made the arguments—although I am not sure I would believe the arguments.

[30] See my "Naturalism and Skepticism: Can Skepticism be scientifically tested" op. cit., for details on how to naturalize noetic skepticism.

[31] Cf. F. Dretske’s opening remarks in Knowledge and the Flow of Information, Cambridge: MIT press, 1981, p. vii: “In the beginning there was information. The word came later.”

[32] R. M. Seyfarth, and D. L. Cheney, “Meaning and Mind in Monkeys”, Scientific American, December 1992, Volume 267, no. 6, p. 122.

[33] Cf., Plato, Cratylus, 392.

[34] An alternative way to reflect on the import of Davidson’s position is to consider the a priori aspect of his transcendental argument. We know a priori that if we encounter any language user then their language is translatable into our own. Now suppose we create or happen upon beings with brains the size of a football stadium (wherever would they find a well-fitting hat?). Wheeling out our favorite Davidsonian transcendental argument, we announce to them that if they think or employ a language, then we can translate their language. They respond (in our language) that if that is the way we want to define ‘language’, so be it. When they (the stadium-brained creatures) communicate with one another they employ a hyperlanguage and think in hyperthought. It is only when they communicate with us puny-brained creatures that they must resort to using a language. Just as when we communicate with apes we do not employ a language but only a protolanguage. It is difficult to see how the argument could prove anything more than this about creatures with such large brains—in an a priori fashion—unless one thought of language as analogous to the transcendental ego or Hegel’s Geist, as opposed to an evolutionary adaptation. In other words, at best what Davidson has shown is that it is inconsistent to speak about languages failing to be mutually intertranslatable. To show that our view of the universe dovetails with an omniscient view would require showing that an omniscient interpreter must speak only in a language. That is, what Davidson seems to require is some sort of “completeness” theorem to show that language exhausts the possibility of “higher” forms of communication.

[35] Cf. Nietzsche, Beyond Good and Evil, preface; and Human, All Too Human.

[36] Re Heidegger: Is there a poet’s irony in the idea that the question of Being had to be successively submerged in the Greek trinity of metaphysics, science, and technology, only to be reborn by technology itself? Perhaps the famous Der Spiegel remark—Only a god can save us now—does not look so hopeless.

[37] Some of these issues are discussed in my unpublished Ph.D. dissertation, "Becoming Gods". I discuss Davidson's views in more detail in my "On the Intertranslatability of all Natural Languages."

[38] James (1995). Pragmatism: A New Name for Old Ways of Thinking, London: Dover publications, p. 299.

[39] “Why is a Philosopher?”, reprinted in Realism with a Human Face, op. cit. pp. 117-8.

[40] See note 45 for references for Plato and Aristotle’s view on this matter.

[41] Some popular discussions of this issue suggest that the implementation of these technologies cannot be stopped. My own view is not nearly so fatalistic. But there certainly is a problem here. The more these various technologies advance, the easier it will be for fewer and fewer individuals to perform the relevant sorts of experiments in the attempt to create higher intelligences. For example, Moravec estimates that 100 million MIPS is sufficient for this sort of experiment. We might then legislate that no computer should exceed, say, 10 million MIPS. But then how do we stop 10 graduate students from linking 10 of these computers together after a night at the pub? (Graduate student supervision could take on a whole new meaning.) Many biological laboratories, particularly ones funded from private sources, operate in a certain amount of secrecy for “proprietary” reasons. Even one such small laboratory may be able, in the coming century, to genetically engineer Homo bigheadus.

[42] We are perhaps in a position to add a few other names to this list: Fodor and Chomsky, op. cit., and Colin McGinn, (Problems in Philosophy: The Limits of Inquiry, Oxford: Blackwell, 1993).

[43] Nagel, op. cit. p. 12.

[44] Cf. M. Westphal, History and Truth in Hegel’s Phenomenology, third edition, Indiana: Indiana University Press, p. 37: “I have argued in another place that Kant’s dualism and finitism are the expression of a religious world-view, since the thing-in-itself is so clearly defined as the thing-for-God. If this is true, we will have to conclude that as the problem of the Phenomenology developed and took on dimensions transcending the narrowly epistemological, Hegel moved closer to the spirit of Kant and to real engagement with his thought. For both of them the question of knowledge becomes the question of man in relation to God.” To which we might add that this makes Kant and Hegel very Greek.

[45] Xenophanes, fragments 18 and 34, The Worlds of the Early Greek Philosophers, op. cit., p. 56. Heraclitus, op. cit. Plato, Phaedrus, 247; Parmenides, 134-5; Timaeus, 1178-9. Aristotle, Nicomachean Ethics, 1177b. Aquinas, Summa Theologica, 1a, 3, prologue. Descartes, Meditation 5. Spinoza, Ethics, propositions 14 and 15. Kant op. cit., B71 and Prolegomena to Any Future Metaphysics, section 58. Hegel, Phenomenology of Spirit, paragraph 8. Nietzsche, The Gay Science, section 125.

[46] Anders Sandberg, "The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains," This Journal, http://www.transhumanist.com/, Vol. 5, 1999.

[47] “Why the Future Does Not Need Us” in Wired, April 2000. While technologists have led the debate thus far, philosophers have had some part. Joy recounts an encounter with John Searle in his article:

I had missed Ray's [Kurzweil] talk and the subsequent panel that Ray and John [Searle] had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious.

While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.