Elementary Information and Information Systems Theory
When one physical thing interacts with another a change in "state" occurs. For instance, when a beam of white light, composed of a full spectrum of colours, is reflected from a blue surface all colours except blue are absorbed and the light changes from white to blue. When this blue light interacts with an eye it causes blue-sensitive cones to undergo a chemical change of state, which causes the membrane of the cone to undergo an electrical change of state, and so on. The number of distinguishable states that a system can possess determines the amount of information that can be encoded by the system.
A system with two distinguishable states can encode one "bit" of information; the binary symbols "1" and "0" label these two states. More generally, a system with N distinguishable states can encode log2(N) bits.
The binary system is useful because it is probably the simplest encoding of information and any object can represent a binary "1". In electrical digital systems an electrical pulse represents a "1" and the absence of a pulse represents a "0". Information can be transferred from place to place with these pulses. Things that transfer information from one place to another are known as "signals".
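As a minimal sketch of this relationship (Python; the state counts are hypothetical examples), the information capacity of a system follows from the number of states it can distinguish:

```python
import math

def capacity_in_bits(distinguishable_states: int) -> float:
    # Information capacity of a system with N distinguishable states.
    return math.log2(distinguishable_states)

print(capacity_in_bits(2))    # a binary symbol: 1.0 bit
print(capacity_in_bits(256))  # a system with 256 states: 8.0 bits
```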
Information is encoded by changes of state; these changes can occur over time or as variations in density, temperature, colour etc. in the three directions in space. The writing on this page is spatially encoded.
It is interesting that our spoken communication uses a narrow band of sound waves. This favours the temporal encoding of information, in other words speech is largely a one dimensional stream of symbols. In vision, somesthesis, sound location and some of the other senses the brain uses spatial encoding of information as well as encoding over time.
The rearrangement or replacement of a set of information so that some or all of the original information becomes encoded as another set of states is known as "processing". Devices that perform these actions are known as "information processors". The brain is predominantly an information processor.
Information systems in general have transducers that convert the state of signals in the world into signals impressed on another carrier; they then subject these signals to various processes and store them.
The spatial encoding in the brain generally preserves the relation of what is adjacent to what in the sensory field. This allows the form (geometry) of stimuli to be encoded.
Information transfers in the brain occur along numerous parallel "channels" and processes occur within each channel and between channels. Phenomenal consciousness at any moment contains a continuum of simultaneous (parallel) events. Classical processes take time so phenomenal experience is likely to be, at any instant, a simultaneous output of processes, not a classical process itself.
Classification, signs, sense, relations, supervenience etc.
A sign is a symbol or a combination of symbols, such as a word or a combination of words. A referent is "...that to which the sign refers, which may be called the reference of the sign" (Frege 1892). Statements and concepts usually express relations between referents.
The sense of statements depends on more than the simple referents within them. For instance, "the morning star is the evening star" is true in terms of the referents but dubious in terms of the sense of the morning and evening stars, because the morning star is Venus as seen in the morning and the evening star is Venus as seen in the evening. So the sense of the expression "the morning star" depends on both the referent "Venus" and the referent "morning", and probably on other associations such as "sunrise", "mist" etc.
Each sign is related to many other signs and it is these groups of relationships that provide the sense of a sign or a set of signs. A relation is an association between things; it can be understood in the abstract as "what is next to what". Relations occur in both time and space. When a ball bounces, the impact with the floor changes the direction of the ball, so "direction" is related to "impact"; the ball is round, so "ball" is related to "round". Similarly, the morning is next to the presence of the morning star, so "morning" and "morning star" are related. Relations are the connections that allow classification.
According to the physical concept of information all abstract signs are physical states of a signal; they are abstract only in the sense that they are related exclusively to other signs rather than to a physical thing. The process of treating an abstract idea as if it were a concrete thing that contains other concrete things is known as reification.
It is possible to have statements that have a sense but apparently no reference. As Frege put it, the words 'the celestial body most distant from the Earth' have a sense but may not have a reference. There can be classes of things that have not yet acquired any members or have no members. In a physical sense a particular class is a sign that refers to a particular state or set of states. Classes can be arbitrary such as "big things" being all things that have a state of being over one metre long. Classes and sets are very similar, sometimes sets are defined as being a class that is an element of another class. The term "set" has largely superseded the term "class" in academic publications since the mid twentieth century.
The intension of a set is its description or defining properties. The extension of a set is its members or contents. In mathematics a set is simply its members, or extension. In philosophy there is considerable discussion of the way that a given description can describe more than one thing; in other words, one intension can have several extensions. The set of things that are "tables" has the properties "legs", "flat surface" etc. The extension of "tables" is all the physical tables. The intension of "tables" may also include "stools" unless there is further clarification of the properties of "tables". Intensions are functions that identify the extensions (the members of a set) from the properties.
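The point can be made concrete with a minimal sketch, in which an intension is modelled as a predicate function and an extension as the set of items satisfying it. The items and their properties here are hypothetical:

```python
# An intension as a predicate function; an extension as the matching items.
items = [
    {"name": "kitchen table", "legs": 4, "flat_surface": True},
    {"name": "stool",         "legs": 3, "flat_surface": True},
    {"name": "beach ball",    "legs": 0, "flat_surface": False},
]

def table_intension(item):
    # An under-specified intension: "has legs and a flat surface".
    return item["legs"] > 0 and item["flat_surface"]

extension = [i["name"] for i in items if table_intension(i)]
print(extension)  # ['kitchen table', 'stool'] - the stool slips in because
                  # the intension lacks further clarifying properties
```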
Classification is performed by information systems and by the information processing parts of the nervous system. A simple classification is to sort symbols according to a set of rules; for instance, a simple sort classifies words by letter sequence. There are numerous classification systems in the visual system, such as arrangements of neurons that produce a single output when a particular orientation of a line is viewed, or when a particular face is seen. The processes that identify attributes and properties of a thing are usually called filters.
The output of filters becomes the properties of a set and specifies the relations between sets. These relations are stored as address pointers in computers or connections in the nervous system.
An intension uses these properties and relations to identify the things that are members of the set in the world. Clearly, the more specific the filters, the more accurate the intension.
A database is a collection of signs. A fully relational database is a database arranged in related sets with all relationships represented by pointers or connections. In conventional usage a relational database is similar but more sophisticated, redundant relationships and wasteful storage being avoided. Conventional relational databases obey "Codd's rules". A hierarchical database only contains pointers that point from the top of a classification hierarchy downwards. Events and persistent objects are also known as entities; the outputs of filters related to an entity are known as the attributes of the entity. In practice a system requires an event filter to record an entity (in a computer system the event filter is usually a single data entry form and the attributes are filtered using boxes on the screen to receive typed input).
In information systems design there are many ways of representing classification hierarchies; the most common is the entity diagram, which assumes that the attributes of an entity define it and are stored together physically with the symbols that represent the entity. This adjacent storage is purely for convenient management of storage space and reduction of the time required for retrieval in modern computers.
Filters contain processing agents of varying degrees of sophistication from simple sorting processes to "intelligent" processes such as programs and neural networks. It is also possible to arrange filters in the world beyond an information processor. For instance, an automatic text reading machine might turn over the pages of a book to acquire a particular page. A human being might stroke an object to confirm that the texture is as it appears to be and so on.
Scientists routinely use external transducers and filters for the purpose of classification. For instance, a mass spectrometer could be used to supply details of the atomic composition of an item. External filters allow us to distinguish between things that are otherwise identical (such as two watery compounds XYZ and H2O) or to acquire properties that are unobservable with biological transducers such as the eyes and ears. The scientist plus his instruments is a single information system. In practice the referent of a set is determined by applying transducers and filters to the world and looking up the results in a relational database. If the result is the original set then a referent has been found. A sophisticated system may apply "fuzzy logic" or other methods to assign a probability that an object is truly a member of a particular set.
It is also possible to classify information according to relationships in time (e.g.: starting a car's engine is related to the car moving away). Within an information system the output from the filter for "starting engine" might precede that from the filter for "starts moving". In information systems design, procedures that involve successions of events can be arranged in classification structures in the same way as data; this technique is known as structured programming (esp. Jackson structured programming).
Hierarchies related to a single entity are frequently stored together as objects and the information processing that results is known as object oriented programming. A fully relational database would, in principle, contain all the objects used in a structured information system. In Part III the storage and sequential retrieval of related functions in the brain is described.
It has been pointed out by McCarthy and Hayes (1969) that an information processor that interacts with the environment will be producing continuous changes in all of its classifications (such as position etc.) and also changes in theories (structured programs that are predictive processes) about the world. In a serial processor, such as a Turing Machine with a one dimensional tape, the presence of changes in the world would create a huge burden on the machine. In a parallel processor, such as a biological neural network, the reclassifications should be straightforward. The problem of adapting an information system to changes in the world, most of which have little effect on the processes performed by the system, is known as the frame problem. The frame problem is usually stated in a form such as "how is it possible to write formulae that describe the effects of actions without having to write a large number of accompanying formulae that describe the mundane, obvious non-effects of those actions?" (Shanahan 2004).
Chalmers (1996) introduced the terms primary intension and secondary intension. A primary intension is a high level description where the properties of a set may be insufficient to specify the contents of the set in the physical world; for instance, the term "watery" might specify several liquids with various compositions. A secondary intension is specific, so that it applies to one substance in the world (H2O). In the context of information systems primary intensions differ from secondary intensions as a result of inadequate filtering and classification. (See the note below for details of Putnam's twin earth thought experiment.)
The problem of matching the properties and relations of an item in a relational database with an item in the world involves the problem of supervenience. Supervenience occurs when the properties and relations in the database for an item are the same as the output from filters applied to the item. In other words, in an information system information does not supervene directly on a thing, it supervenes on information derived from the thing. Chalmers described supervenience in terms that are accessible to an information systems approach:
"The properties of A supervene on the properties of B if no two possible situations are identical with respect to the properties of A while differing with respect to the properties of B (after Chalmers 1996)."
In terms of information processing the properties are changes in state derived from a transducer that are subject to classification with a filter. The properties of a predictive program would supervene on the input from transducers applied to an object if it correctly identified the sets and sequence of sets that are discovered at all times.
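Chalmers' definition lends itself to a direct, if simplistic, computational check. The following sketch (Python; the situations and property names are hypothetical) tests whether A-properties supervene on B-properties across a set of situations:

```python
# Supervenience check: A supervenes on B if no two situations agree on
# all B-properties while differing on some A-property.
def supervenes(situations, a_props, b_props) -> bool:
    for s1 in situations:
        for s2 in situations:
            same_b = all(s1[p] == s2[p] for p in b_props)
            same_a = all(s1[p] == s2[p] for p in a_props)
            if same_b and not same_a:
                return False  # B fixed but A differs: supervenience fails
    return True

situations = [
    {"wavelength": 470, "reported_colour": "blue"},
    {"wavelength": 470, "reported_colour": "blue"},
    {"wavelength": 650, "reported_colour": "red"},
]
print(supervenes(situations, ["reported_colour"], ["wavelength"]))  # True
```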
Information theory is consistent with physicalism. Philosophers coined the term physicalism to describe the argument that there are only physical things. In token physicalism every event is held to be a physical event and in type physicalism every property of a mental event is held to have a corresponding property of a physical event. Token physicalism is consistent with information theory because every bit of information is a part of an arrangement of a physical substrate and hence a physical event. Type physicalism would be consistent with information theory if it is held that mental events are also arrangements of substrates. It is sometimes held that the existence of abstract mental entities means that token physicalism does not correspond to type physicalism. In terms of information theory abstract entities would be derived sets of information that are arrangements of substrates. Hence information theory does not distinguish between type and token physicalism.
The reader should be cautioned that there is an extensive literature associated with supervenience that does not stress the way that information is embodied and representational. (The removal of these constraints will lead to non-physical theories of information).
It is sometimes asked how conscious experience containing a quale that is a colour, such as blueness, can supervene on the physical world. In terms of information systems the question is back to front: blueness is very probably a phenomenon in the physical brain - it is certainly unlike an arrangement of stored bits in an information system. The question should read "what physical theory supervenes on information in the signals related to the phenomenon called blue?"
The simple answer is that there is no widely accepted description available of the physical nature of the experience called blue (there are several theories however). A common mistake is to say that the secondary intension of the quale blue is known; this is not the case. The physical basis of electromagnetic radiation or of the absorption of light is known to some extent, but these are almost certainly not the physical basis of the "blue" of experience. The quale "blue" is probably a particular substrate that has a state, not an encoded state on a generalised substrate.
Information is the patterns and states of an underlying substrate or carrier. This leaves us with exciting questions such as: what is it like to be the substrate itself rather than simply the information impressed upon it? Can only particular substrates constitute conscious experience? How can we relate the properties of this experience to information about the physical world?
The substrate of information is not part of the problem of access consciousness which deals with the problem of the flow of information from place to place.
Frege, G. (1892) On Sense and Reference. http://en.wikisource.org/wiki/On_Sense_and_Reference
Pruss, A.R. (2001) The Actual and the Possible. in Richard M. Gale (ed.), Blackwell Guide to Metaphysics, Oxford: Blackwell. http://www.georgetown.edu/faculty/ap85/papers/ActualAndPossible.html
Menzies, P (2001). Counterfactual Theories of Causation. Stanford Encyclopedia of Philosophy http://plato.stanford.edu/entries/causation-counterfactual/
McCarthy, J. and Hayes, P.J. (1969), "Some Philosophical Problems from the Standpoint of Artificial Intelligence", Machine Intelligence 4, ed. D.Michie and B.Meltzer, Edinburgh: Edinburgh University Press, pp. 463–502.
Shanahan, M. (2004) "The frame problem". Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/frame-problem/
The construction of filters: Bayesian and Neural Network models
This is a stub and needs expanding.
Qualia and Information
The problem of the generalised nature of information is addressed by several "thought experiments" which are described below.
The problem of "intensions" is tackled in Putnam's twin earth thought experiment which was discussed above but is given in more detail below.
Absent and fading qualia
Absent qualia
Block (1978) argued that the same functions can be performed by a wide range of systems. For instance, if the population of China were equipped with communication devices and a set of rules they could perform almost any function, but would they have qualia? The argument considers the fact that systems which process information can be constructed of a wide range of materials and asks whether such systems will also have qualia.
This argument also occurs when the physical structure of computing devices is considered, for instance a computing machine could be constructed from rolling steel balls. Would the steel balls at one instant possess the quale 'blue' and then, as a result of the movement of one ball to another position, possess the quale 'red'? Can an arrangement of balls really have qualia or are they absent? It is incumbent upon proponents of functional organisation to describe why identical balls arranged as O O OOO can be the quale red and yet those arranged as OOO O O can be the quale blue. They must also take into account Kant's "handedness problem": the balls OOO O O look like O O OOO when viewed from behind. Red and blue, as arrangements of things, would be identical depending on the viewing point. How can a processor have a viewing point when it is itself the steel balls?
Fading qualia
Pylyshyn (1980) introduced a thought experiment in which a human brain is progressively replaced by synthetic components, and asked what would happen to consciousness during this replacement.
Chalmers (1996) considers the problem in depth from the point of view of functional organisation. (i.e.: considering replacement of biological components with components that perform the same functions). The argument is straightforward: if phenomenal consciousness is due to functional organisation then replacement of biological parts with artificial parts that duplicate the function should allow phenomenal consciousness to continue.
But suppose phenomenal consciousness is not due to functional organisation. What would we expect then?
Chalmers argues that consciousness could not suddenly disappear during replacement of the brain because functions could be replaced in tiny stages; unless qualia could reside in a single tiny place in the brain, disappearing qualia would be ruled out.
Chalmers considers the alternative idea of fading qualia, where slow replacement of parts reduces experience progressively. This "fading" is described in terms of qualia fading from red to pink and experience in general becoming more and more out of step with the world. Chalmers dismisses the idea of fading qualia on the grounds that people do not have abnormal experiences, like fading colours, except in the case of pathology. More specifically, he argues that since it seems intuitively obvious that silicon implants could be devised to stand in, in any relevant functional role, for the original brain matter, we might reasonably assume that during the carbon - silicon transformation the organism's functional state, including all its dispositions to notice and report what experiences it is having, can be preserved. The absurd consequence is then supposed to consist in a being whose qualia have significantly faded continuing to report them as they originally were; without noticing the change.
Crabb (2005) has argued that there are hidden premises in this argument, and once these are exposed the desired conclusion is seen to be unwarranted. Thus, consider the assumption that during the silicon implantation process the person's functional state can be preserved in any relevant respect. This is very likely the case. Certainly, we have no a priori reason for ruling out the possibility; for surely technology might be employed to achieve any functional state we desire. In principle, then, it just has to be possible to preserve such functional traits as the noticing and reporting of the original qualia. But then, as Crabb observes, the alleged absurdity of issuing such reports in the presence of severely faded qualia depends on a further assumption: that during the implantation process the noticing and reporting functions have been preserved in such a way that we should still expect that noticing and reporting to remain fairly accurate. Chalmers completely overlooks this requirement. In effect, then, he is arguing in a circle. He is arguing that faded qualia in the presence of the original functional states are very unlikely, because a conscious being will tend to track its own conscious states fairly accurately. Why? Because the preservation of the original functional states during the implantation process is of the sort required to preserve the faithfulness of the subject's tracking. How do we know this? Well, because it is just generally true to say that a conscious being would be able, in respect of noticing and reporting, to track its conscious states. In short, then, he is saying that qualia could not fade with functional states intact, because in general that just could not happen.
Consider the following example. The original human subject Joe starts out seeing red things and experiencing vivid red qualia. He reports them as such. Then an evil scientist implants a device between Joe's visual cortex and his speech centre which effectively overrides the output from the red zone of the visual cortex, and ensures that come what may experientially, Joe will report that his qualia are vivid. We could assume a similar intervention has also been effected at the noticing centre, whatever that might be. Plausibly, then, Joe will continue to notice and report vivid qualia even though his own are severely faded. Now Crabb's question is this: why would Chalmers assume that the item-for-item silicon substitutions he envisaged would not themselves allow this sort of noticing and reporting infidelity? And unless he can provide a good reason, his thought experiment with Joe and his fading qualia simply does not work. Of course the functional states can be preserved during the silicon substitutions, but we have no reason to suppose that noticing and reporting fidelity can too. Consequently, there is no inference to an absurd situation, and therefore no reason to reject the possibility of fading qualia.
It is possible that at some stage during the replacement process the synthetic parts alone would have sufficient data to identify objects and properties of objects so that the experience would be like blindsight. The subject might be amazed that subjective vision was disappearing. However, Chalmers denies that new beliefs, such as amazement at a new state, would be possible. He says that:
"Nothing in the physical system can correspond to that amazement. There is no room for new beliefs such as "I can't see anything," new desires such as the desire to cry out, and other new cognitive states such as amazement. Nothing in the physical system can correspond to that amazement."
On the basis of the impossibility of new beliefs Chalmers concludes that fading qualia are impossible. Again, though, he has failed to explain why he thinks the original belief set can be preserved come what may, and in such a way as to preserve belief and reporting fidelity.
Notwithstanding these objections, then, according to Chalmers, if fading qualia do not occur then qualia must also exist in "Robot", a totally synthetic entity, so absent qualia do not occur either. Therefore, Robot should be conscious. He concludes the fading qualia argument by stating that it supports his theory that consciousness results from organizational invariance, a specific set of functions organised in a particular way:
"The invariance principle taken alone is compatible with the solipsistic thesis that my organization gives rise to experience. But one can imagine a gradual change to my organization, just as we imagined a gradual change to my physical makeup, under which my beliefs about my experience would be mostly preserved throughout, I would remain a rational system, and so on. For similar reasons to the above, it seems very likely that conscious experience would be preserved in such a transition"
The response to this should now be obvious. What exactly does remaining 'a rational system' entail? If it entails the preservation of noticing and reporting fidelity, then it follows that Joe's qualia would not fade. But there is no independent support for this entailment. It remains perfectly reasonable to assume that Joe's qualia would fade, and therefore that the only way he could end up misreporting his fading qualia as bright would be through a breakdown in fidelity, of the sort Crabb describes.
Chalmers notes that if qualia were epiphenomenal and not due to functional organisation then the argument would be false. This is rather unfortunate because it makes the argument tautological: if it is assumed that conscious experience is due to functional organisation then the argument shows that conscious experience is due to functional organisation. The role of epiphenomenal, or apparently epiphenomenal, consciousness brings the philosopher back to the problem of change, where consciousness does not appear to be necessary for change (functions) but change does not seem to be possible without consciousness.
There are other interesting questions related to the fading qualia argument, for instance: Can all of organic chemistry be replaced by inorganic chemistry - if not why not? If information always has a physical substrate and conscious experience is the arrangement of that substrate then how could conscious experience be the same if the substrate is replaced? At the level of molecular and atomic interactions almost all functions involve electromagnetic fields, if identical function is achieved at scales below the size of an organelle in a cell in the brain would the functional elements, such as electromagnetic fields, have been changed? (i.e.: is the replacement feasible or would it be necessary to use organic parts to replace organic parts at small scales?).
The reader may have spotted that Chalmers' fading qualia argument is very similar to Dennett's argument about the non-existence of qualia. In Dennett's argument qualia are dubiously identified with judgements and then said to be non-existent. In Chalmers' argument an attempt is made to identify qualia with beliefs about qualia so that they can be encompassed by a functionalist theory.
The reader may also have noticed that the argument, by using microscopic progressive replacement, preserves the form of the brain. The replacement is isomorphic, but it is not explained anywhere why form should need to be preserved as well as function. To examine functionalism the argument should allow each replacement module to be of any size and placed anywhere in the world. Furthermore, it should be possible for the functions to be asynchronous. But the argument is not a simple examination of functionalism. If form is important, why is it important? Would a silicon replacement necessarily be able to achieve the same four dimensional form as the organic original?
Pylyshyn, Z. (1980) The "causal power" of machines. Behavioral and Brain Sciences 3:442-444.
Chalmers, D.J. (1996). The Conscious Mind. Oxford University Press.
Chalmers, D.J. Facing Up to the Problem of Consciousness (summary of above at http://cogprints.org/316/00/consciousness.html).
Crabb, B.G. (2005) "Fading and Dancing Qualia - Moving and Shaking Arguments", Deunant Books
Putnam's twin earth thought experiment
The original Twin Earth thought experiment was presented by philosopher Hilary Putnam in his important 1975 paper "The Meaning of 'Meaning'", as an early argument for what has subsequently come to be known as semantic externalism. Since that time, philosophers have proposed a number of variations on this particular thought experiment, which can be collectively referred to as Twin Earth thought experiments.
Putnam's original formulation of the experiment was this:
- We begin by supposing that elsewhere in the universe there is a planet exactly like earth in virtually all respects, which we refer to as ‘Twin Earth’. (We should also suppose that the relevant surroundings of Twin Earth are identical to those of earth; it revolves around a star that appears to be exactly like our sun, and so on.) On Twin Earth there is a Twin equivalent of every person and thing here on Earth. The one difference between the two planets is that there is no water on Twin Earth. In its place there is a liquid that is superficially identical, but is chemically different, being composed not of H2O, but rather of some more complicated formula which we abbreviate as ‘XYZ’. The Twin Earthlings who refer to their language as ‘English’ call XYZ ‘water’. Finally, we set the date of our thought experiment to be several centuries ago, when the residents of Earth and Twin Earth would have no means of knowing that the liquids they called ‘water’ were H2O and XYZ respectively. The experience of people on Earth with water, and that of those on Twin Earth with XYZ would be identical.
Now the question arises: when an earthling, say Oscar, and his twin on Twin Earth (also called 'Oscar' on his own planet, of course; indeed, the inhabitants of that planet necessarily call their own planet 'earth'. For convenience, we refer to this putative planet as 'Twin Earth', and extend this naming convention to the objects and people that inhabit it, in this case referring to Oscar's twin as Twin-Oscar, or Toscar) say 'water', do they mean the same thing? Ex hypothesi, their brains are molecule-for-molecule identical. Yet, at least according to Putnam, when Oscar says 'water', the term refers to H2O, whereas when Toscar says 'water' it refers to XYZ. The result of this is that the contents of a person's brain are not sufficient to determine the reference of the terms he uses, as one must also examine the causal history that led to his acquiring each term. (Oscar, for instance, learned the word 'water' in a world filled with H2O, whereas Toscar learned 'water' in a world filled with XYZ.) This is the essential thesis of semantic externalism. Putnam famously summarized this conclusion with the statement that "meaning just ain't in the head."
In terms of physical information systems such as occur in the brain this philosophical argument means that if there are inadequate external filters available the information system will confuse XYZ with H2O; it will conclude that they are the same thing and have no difference in meaning. For the information system meaning is in the classification structures assigned by the system. If the system is provided with better transducers and filters then new meanings will arise within the system. However, for an information system 'meaning' is no more than a chain of relations because this is the nature of information (i.e.: arrangements of an arbitrary carrier). Other types of meaning would require phenomena other than simple information processing.
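A minimal sketch of this point, with hypothetical samples and filter names: classified only by superficial filters, the two liquids fall into the same set; adding a compositional filter (standing in for the mass spectrometer mentioned earlier) separates them.

```python
samples = [
    {"name": "Earth liquid",      "clear": True, "potable": True, "formula": "H2O"},
    {"name": "Twin Earth liquid", "clear": True, "potable": True, "formula": "XYZ"},
]

superficial_filters = ["clear", "potable"]
extended_filters = superficial_filters + ["formula"]

def classify(sample, filters):
    # The class sign is just the tuple of filter outputs.
    return tuple(sample[f] for f in filters)

print({s["name"]: classify(s, superficial_filters) for s in samples})
# both map to (True, True): the system treats them as one set ("water")
print({s["name"]: classify(s, extended_filters) for s in samples})
# now the classes differ: new meanings arise with better filters
```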
In Putnam's thought experiment the world can be different but the meaning for the individual is the same if the brain is the same. If there is a type of meaning other than a chain of relations would Putnam's experiment suggest that this type of 'meaning' occurs as a phenomenon in the brain or in the world beyond the body?
Putnam, H. (1975/1985) The meaning of 'meaning'. In Philosophical Papers, Vol. 2: Mind, Language and Reality. Cambridge University Press.
The Inverted Qualia Argument
The possibility that we may each experience different colours when confronted by a visual stimulus is well known and was discussed by John Locke. In particular, the idea of spectrum inversion, in which the spectrum is exchanged, blue for red and so on, is often considered. It is then asked whether the subject of such an exchange would notice any difference. Unfortunately it turns out that colour is not solely a matter of the spectrum but depends on hue, saturation and lightness. If the colours are inverted, all the axes of colour would need to be exchanged, and the relations between the colours would indeed still be discernibly different.
Some philosophers have tried to avoid this difficulty by asking questions about qualia when the subject has no colour vision. For instance, it is asked whether a subject who saw things in black and white would see the world differently from one who saw the world in white and black.
This sort of discussion has been used as an attack on Behaviourism, where it is argued that whether a tomato is seen as black or white the subject's behaviour towards the tomato will be the same. So subjects can have mental states independent of behaviours.
Block (1990) has adapted this argument to an inverted earth scenario in which it is proposed that a subject goes to another planet which is identical to earth except for the inversion of visual qualia. He points out that behaviours would adjust to be the same on the inverted earth as on the actual earth. All functions would be identical but the mental state would be different so it is concluded that mental states are not processes.
Chalmers (1996) approaches this argument by assuming that the absent and fading qualia arguments have proven his idea of organisational invariance. He then introduces the idea that conscious experience only exists for the durationless instant and notes that, given these assumptions, a person would not be aware that the quale red had been switched for the quale blue.
"My experiences are switching from red to blue, but I do not notice any change. Even as we flip the switch a number of times and my qualia dance back and forth, I will simply go about my business, noticing nothing unusual."
Block, N. (1990). Inverted Earth, Philosophical Perspectives, 4: 53–79.
See also:
- Block, N. Qualia. http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/qualiagregory.pdf
- Byrne, A. (2004). Inverted Qualia. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/qualia-inverted/
- Shoemaker, S. (2002). Content, Character, and Color II: A Better Kind of Representationalism. Second Whitehead Lecture. http://web.archive.org/20040306235426/humanities.ucsc.edu/NEH/shoemaker2.htm
The Knowledge Argument
Much of the philosophical literature about qualia has revolved around the debate between physicalism and non-physicalism. In 1982 Frank Jackson proposed the famous "Knowledge Argument" to highlight how physical knowledge might not be enough to describe phenomenal experience:
"Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. (It can hardly be denied that it is in principle possible to obtain all this physical information from black and white television, otherwise the Open University would of necessity need to use color television.)
What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false. Jackson (1982).
The Knowledge argument is a category mistake because a description of the universe, such as information about science, is a set of symbols in a particular medium such as ink on paper. These symbols provide the recipe for experiments and other manipulations of nature, and predict the outcome of these manipulations. The manipulations of nature are not the same as the set of symbols describing how to perform these manipulations. Scientific information is not the world itself and the truth or falsehood of Physicalism is unaffected by the knowledge argument.
If the Knowledge Argument is interpreted as an argument about whether information about the nature of the colour red could ever be sufficient to provide the experience that we call red then it becomes more relevant to the problem of consciousness, but it is then a debate about whether information processors could be conscious; this is covered below. Those interested in a full discussion of the Knowledge Argument should consult Alter (1998) and especially the link given with this reference.
The problem of machine and digital consciousness
[edit | edit source]Information processing and digital computers
Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machine that implements the instructions need not be electronic, it could be mechanical or fluidic.
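A minimal sketch of this carrier-independence (Python; the carriers and the one-step "program" are hypothetical): the same encoded state and the same transformation can be impressed on quite different carriers.

```python
image_row = [1, 0, 0, 1]           # the state to encode

as_pulses = ["pulse" if b else "gap" for b in image_row]   # electrical carrier
as_balls  = ["ball" if b else "slot" for b in image_row]   # steel-ball carrier

def invert(encoded, one, zero):
    # One step of a "program": swap the states representing 1 and 0.
    return [zero if cell == one else one for cell in encoded]

print(invert(as_pulses, "pulse", "gap"))  # ['gap', 'pulse', 'pulse', 'gap']
print(invert(as_balls, "ball", "slot"))   # ['slot', 'ball', 'ball', 'slot']
# The information and the process are the same; only the carrier differs.
```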
Digital computers implement information processing. From the earliest days of digital computers people have suggested that these devices may one day be conscious. One of the earliest workers to consider this idea seriously was Alan Turing. Turing proposed the Turing Test as a way of discovering whether a machine can think. In the Turing Test a group of people would ask a machine questions and if they could not tell the difference between the replies of the machine and the replies of a person it would be concluded that the machine could indeed think. Turing's proposal is often confused with the idea of a test for consciousness. However, phenomenal consciousness is an internal state so the best that such a test could demonstrate is that a digital computer could simulate consciousness.
If technologists were limited to the use of the principles of digital computing when creating a conscious entity they would have the problems associated with the philosophy of 'strong' artificial intelligence. The term strong AI was defined by Searle:
..according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind (J. Searle in Minds, Brains and Programs. The Behavioral and Brain Sciences, vol. 3, 1980).
If a computer could demonstrate Strong AI it would not necessarily be more powerful at calculating or solving problems than a computer that demonstrated Weak AI.
The most serious problem with Strong AI is John Searle's "Chinese Room argument", in which it is demonstrated that the contents of an information processor have no intrinsic meaning; at any moment they are just a set of electrons or steel balls etc. The argument is reproduced in full below:
"One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a "script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call the "program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from tile point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my "answers"—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program."
In other words, Searle is proposing that if a computer is just an arrangement of steel balls or electric charges then its content is meaningless without some other phenomenon. Block (1978) used the analogy of a system composed of the population of China communicating with each other to suggest the same idea, that an arrangement of identical things has no meaningful content without a conscious observer who understands its form.
Searle's objection does not convince Direct Realists because they would maintain that 'meaning' is only to be found in objects of perception.
The meaning of meaning and the Symbol Grounding Problem
In his Chinese Room Argument Searle shows that symbols on their own do not have any meaning. In other words, a computer that is a set of electrical charges or flowing steel balls is just a set of steel balls or electrical charges. Leibniz spotted this problem in the seventeenth century.
Searle's argument is also, partly, the Symbol Grounding Problem; Harnad (2001) defines this as:
"the symbol grounding problem concerns how the meanings of the symbols in a system can be grounded (in something other than just more ungrounded symbols) so they can have meaning independently of any external interpreter."
Harnad defines a Total Turing Test in which a robot connected to the world by sensors and actions might be judged to be indistinguishable from a human being. He considers that a robot that passed such a test would overcome the symbol grounding problem. Unfortunately Harnad does not tackle Leibniz's misgivings about the internal state of the robot being just a set of symbols (cogs and wheels, charges etc.). The Total Turing Test is also doubtful if analysed in terms of information systems alone; for instance, Powers (2001) argues that an information system could be grounded in Harnad's sense if it were embedded in a virtual reality rather than the world around it.
So what is "meaning" in an information system? In information systems a relation is defined in terms of what thing contains another thing. Having established that one thing contains another this thing is called an attribute. A car contains seats so seats are an attribute of cars. Cars are sometimes red so cars sometimes have the attribute "red". This containing of one thing by another leads to classification hierarchies known as a relational database. What Harnad was seeking to achieve was a connection between items in the database and items in the world outside the database. This did not succeed in giving "meaning" to the signals within the machine - they were still a set of separate signals in a materialist model universe.
Aristotle and Plato had a clear idea of meaning when they proposed that ideas depend upon internal images or forms. Plato, in particular, conceived that understanding is due to the forms in phenomenal consciousness. Bringing this view up to date, it implies that the way one form contains another gives us understanding. The form of a car contains the form we call seats, etc. Even things that we consider to be "content" rather than "form", such as redness, require an extension in space so that there is a red area rather than red by itself (cf. Hume 1739). So if the empiricists are correct our minds contain a geometrical classification system ("what contains what") or geometrical relational database.
A geometrical database has advantages over a sequential database because items within it are highly classified (their relations to other items being implicit in the geometry) and can also be easily related to the physical position of the organism in the world. It would appear that the way forward for artificial consciousness would be to create a virtual reality within the machine. Perhaps the brain works in this fashion and dreams, imagination and hallucinations are evidence for this. In Part III the storage of geometrically related information in the "Place" area of the brain is described. But although this would be closer to our experience it still leaves us with the Hard Problem of how the state of a model could become conscious experience.
- Harnad, S. (2001). Grounding Symbols in the Analog World With Neural Nets—a Hybrid Model, Psycoloquy: 12,#34 http://psycprints.ecs.soton.ac.uk/archive/00000163/#html
- Powers, D.M.W. (2001) A Grounding of Definition, Psycoloquy: 12,#56 http://psycprints.ecs.soton.ac.uk/archive/00000185/#html
Artificial consciousness beyond information processing
The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between physicalism and dualism. Dualists believe that there is something non-physical about consciousness whilst physicalists hold that all things are physical.
Physicalists are not limited to those who hold that consciousness is a property of encoded information on carrier signals. Several indirect realist philosophers and scientists have proposed that, although information processing might deliver the content of consciousness, the state that is consciousness is due to some other physical phenomenon. The eminent neurologist Wilder Penfield was of this opinion, and scientists such as Arthur Stanley Eddington, Roger Penrose, Hermann Weyl, Karl Pribram and Henry Stapp amongst many others have also proposed that consciousness involves physical phenomena subtler than information processing. Even some of the most ardent supporters of consciousness in information processors, such as Dennett, suggest that some new, emergent, scientific theory may be required to account for consciousness.
As was mentioned above, neither the ideas that involve direct perception nor those that involve models of the world in the brain seem to be compatible with current physical theory. It seems that new physical theory may be required and the possibility of dualism is not, as yet, ruled out.
The Computability Problem and Halting of Turing Machines
[edit | edit source]The Church-Turing thesis
In computability theory the Church–Turing thesis, Church's thesis, Church's conjecture or Turing's thesis, named after Alonzo Church and Alan Turing, is a hypothesis about the nature of mechanical calculation devices, such as electronic computers. The thesis claims that any calculation that is possible can be performed by an algorithm running on a computer, provided that sufficient time and storage space are available.
This thesis, coupled with the proposition that all computers can be modelled by Turing Machines, means that Functionalist theories of consciousness are equivalent to the hypothesis that the brain operates as a Turing Machine.
Turing machines
A Turing Machine is a pushdown automaton made more powerful by relaxing the last-in-first-out requirement of its stack. (Interestingly, this seemingly minor relaxation enables the Turing machine to perform such a wide variety of computations that it can serve as a model for the computational capabilities of all modern computer software.)
A Turing machine can be constructed using a single tape. There is no requirement for the arrangement of data on the tape to be congruent with the arrangement of the input or output, so a two dimensional square in the world would be handled as a string or set of strings in the machine, yet the machine can still compute a known function. This is problematic in consciousness studies because phenomenal consciousness has many things simultaneously present in several directions at an instant, and this form is not congruent with a one dimensional tape.
A Turing machine consists of:
- A tape which is divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendible to the left and to the right, i.e., the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written to before are assumed to be filled with the blank symbol.
- A head that can read and write symbols on the tape and move left and right.
- A state register that stores the state of the Turing machine. The number of different states is always finite and there is one special start state with which the state register is initialized.
- An action table (or transition function) that tells the machine what symbol to write, how to move the head ('L' for one step left, and 'R' for one step right) and what its new state will be, given the symbol it has just read on the tape and the state it is currently in. If there is no entry in the table for the current combination of symbol and state then the machine will halt.
Note that every part of the machine is finite; it is the potentially unlimited amount of tape that gives it an unbounded amount of storage space.
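The components listed above are enough to write a tiny simulator. The following sketch (Python) implements the action-table cycle just described for a hypothetical machine that flips a string of 1s to 0s and halts on the blank symbol:

```python
def run_turing_machine(tape, action_table, state="start"):
    tape = dict(enumerate(tape))   # sparse tape, extendible in both directions
    head = 0
    while (state, tape.get(head, "0")) in action_table:
        symbol = tape.get(head, "0")
        write, move, state = action_table[(state, symbol)]
        tape[head] = write                      # write the new symbol
        head += 1 if move == "R" else -1        # move the head
    # No entry for the current (state, symbol): the machine halts.
    return [tape[i] for i in sorted(tape)]

flipper = {
    ("start", "1"): ("0", "R", "start"),  # flip a 1 and move right
    # no entry for ("start", "0"): halt on the blank symbol
}
print(run_turing_machine(["1", "1", "1"], flipper))  # ['0', '0', '0']
```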
Another problem that arises with Turing Machines is that some problems can be shown to be undecidable, so that no machine computing them can be guaranteed to halt.
The halting problem
The proof of the halting problem proceeds by reductio ad absurdum. We will assume that there is an algorithm described by the function halt(a, i) that decides whether the algorithm encoded by the string a will halt when given as input the string i, and then show that this leads to a contradiction.

We start by assuming that there is a function halt(a, i) that returns true if the algorithm represented by the string a halts when given as input the string i, and returns false otherwise. (The existence of the universal Turing machine proves that every possible algorithm corresponds to at least one such string.) Given this algorithm we can construct another algorithm trouble(s) as follows:
    function trouble(string s):
        if halt(s, s) = false:
            return true
        else:
            loop forever
This algorithm takes a string s as its argument and runs the algorithm halt, giving it s both as the description of the algorithm to check and as the initial data to feed to that algorithm. If halt returns false, then trouble returns true; otherwise trouble goes into an infinite loop. Since all algorithms can be represented by strings, there is a string t that represents the algorithm trouble. We can now ask the following question:
- Does trouble(t) halt?
Let us consider both possible cases:
- Assume that trouble(t) halts. The only way this can happen is that halt(t, t) returns false, but that in turn indicates that trouble(t) does not halt. Contradiction.
- Assume that trouble(t) does not halt. Since halt always halts, this can only happen when trouble goes into its infinite loop. This means that halt(t, t) must have returned true, since trouble would have returned immediately if it had returned false. But that in turn would mean that trouble(t) does halt. Contradiction.
Since both cases lead to a contradiction, the initial assumption that the algorithm halt exists must be false.
This classic proof is typically referred to as the diagonalization proof, so called because if one imagines a grid containing all the values of halt(a, i), with every possible a value given its own row and every possible i value given its own column, then the values of halt(s, s) are arranged along the main diagonal of this grid. The proof can be framed in the form of the question: what row of the grid corresponds to the string t? The answer is that the trouble function is devised such that halt(t, i) differs from every row in the grid in at least one position: namely, on the main diagonal, where t = i. This contradicts the requirement that the grid contains a row for every possible a value, and therefore constitutes a proof by contradiction that the halting problem is undecidable.
The simulation argument
According to this argument (Bostrom 2003) the universe could be a giant computer simulation that contains people as well as objects. Bostrom seems to believe that at any instant a collection of bits of information, like electrons on silicon or specks of dust on a sheet, could be conscious. He states that:
"A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences."
He then goes on to argue that because of this assumption human beings could be simulations in a computer. Unfortunately, without tackling the problem of how a pattern of dust at an instant could be a person with 'conscious experience', the simulation argument is flawed. In fact even a person made of a moving pattern of dust over several instants is problematical without the assumptions of naive realism or dualism. Bostrom puts 'mental states' beyond physical explanation (i.e.: he simply assumes that conscious mental states could exist in a pattern of electrons, dust or steel balls etc.). In view of this dualism, Bostrom's argument reduces to the proposal that the world is a digital simulation apart from something else required for endowing the simulations of people in the world with consciousness.
Notes and References
Note 1: Strictly this is the quantum 'amplitude' for the electron to go in a particular direction rather than the probability.
The philosophical problem
- Chalmers, D. (1996). The Conscious Mind. New York: Oxford University Press.
Epiphenomenalism and the problem of change
- Huxley, T.H. (1874). On the Hypothesis that Animals are Automata, and its History. The Fortnightly Review 16: 555–580.
The Problem of Time
- Atmanspacher, H. (1989). The aspect of information production in the process of observation, in: Foundations of Physics, vol. 19, 1989, pp. 553–77
- Atmanspacher, H. (2000). Ontic and epistemic descriptions of chaotic systems. In Proceedings of CASYS 99, ed. by D. Dubois, Springer, Berlin 2000, pp. 465–478. http://www.igpp.de/english/tda/pdf/liege.pdf
- de Broglie, L. (1925). On the theory of quanta. A translation of Recherches sur la théorie des quanta (Ann. de Phys., 10e série, t. III, Janvier–Février 1925) by A.F. Kracklauer. http://www.nonloco-physics.000freehosting.com/ldb_the.pdf
- Brown, K. (????) Mathpages 3.7 Zeno and the Paradox of Motion. http://www.mathpages.com/rr/s3-07/3-07.htm
- Brown, K. (????) Mathpages Zeno and Uncertainty. http://www.mathpages.com/home/kmath158.htm
- Franck, G. (1994). Physical Time and Intrinsic Temporality. Published in: Harald Atmanspacher and Gerhard J. Dalenoort (eds.), Inside Versus Outside. Endo- and Exo-Concepts of Observation and Knowledge in Physics, Philosophy, and Cognitive Science, Berlin: Springer, 1994, pp. 63–83. http://www.iemar.tuwien.ac.at/publications/GF_1994a.pdf
- Lynds, P. (2003). Time and Classical and Quantum Mechanics: Indeterminacy vs. Discontinuity. Foundations of Physics Letters, 16(4), 2003. http://doc.cern.ch//archive/electronic/other/ext/ext-2003-042.pdf
- McCall, S. (2000). QM and STR: The combining of quantum mechanics and relativity theory. Philosophy of Science 67 (Proceedings), pp. S535–S548. http://www.mcgill.ca/philosophy/faculty/mccall/
- McTaggart, J.M.E. (1908) The Unreality of Time. Published in Mind: A Quarterly Review of Psychology and Philosophy 17 (1908): 456–473. http://www.ditext.com/mctaggart/time.html
- Petkov, V. (2002). Montreal Inter-University Seminar on the History and Philosophy of Science. http://alcor.concordia.ca/~vpetkov/absolute.html
- Pollock, S. (2004) Physics 2170 - Notes for section 4. University of Colorado. http://www.colorado.edu/physics/phys2170/phys2170_spring96/notes/2170_notes4_18.html
- Weyl, H. (1920). Space, Time, Matter. (Dover Edition).
Further reading:
- James, W. (1890). The Principles of Psychology. Chapter XV: The Perception of Time.
- McTaggart, J.M.E. (1908). The Unreality of Time. Mind: A Quarterly Review of Psychology and Philosophy 17 (1908): 456–473.
- McKinnon, N. (2003). Presentism and Consciousness. Australasian Journal of Philosophy 81:3 (2003), 305–323.
- Lamb, A.W. (1998). Granting Time Its Passage. Twentieth World Congress of Philosophy, Boston, Massachusetts, U.S.A., 10–15 August 1998.
- Franck, G. (1994). Physical Time and Intrinsic Temporality. Published in: Harald Atmanspacher and Gerhard J. Dalenoort (eds.), Inside Versus Outside. Endo- and Exo-Concepts of Observation and Knowledge in Physics, Philosophy, and Cognitive Science, Berlin: Springer, 1994, pp. 63–83.
- Lynds, P. (2003). Subjective Perception of Time and a Progressive Present Moment: The Neurobiological Key to Unlocking Consciousness.
- Whitehead, A.N. (1920). "Time". Chapter 3 in The Concept of Nature. Cambridge: Cambridge University Press (1920): 49–73.
- Franck, G. (2003). How Time Passes: On Conceiving Time as a Process. Published in: R. Buccheri, M. Saniga, W.M. Stuckey (eds.), The Nature of Time: Geometry, Physics and Perception (NATO Science Series), Dordrecht: Kluwer Academic, 2003, pp. 91–103.
- Savitt, S.F. (1998). There's no time like the present (in Minkowski space-time).
- Le Poidevin, R. (2004) The Experience and Perception of Time. Stanford Encyclopedia of Philosophy.
- Norton, J. (2004) The Hole Argument. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/spacetime-holearg/index.html
- Rovelli, C. (2003) Quantum Gravity. Book. http://www.cpt.univ-mrs.fr/~rovelli/book.pdf
- Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. New York and Oxford: Oxford University Press.
- Stein, H. (1968). On Einstein-Minkowski Space-Time. The Journal of Philosophy 65: 5–23.
- Torretti, R. (1983). Relativity and Geometry. Oxford, New York, Toronto, Sydney, Paris, Frankfurt: Pergamon Press.
The existence of time
- Clay, E.R. (1882). The Alternative: A Study in Psychology, p. 167. (Quoted in James 1890).
- Gombrich, Ernst (1964) 'Moment and Movement in Art', Journal of the Warburg and Courtauld Institutes XXVII, 293–306. Quoted in Le Poidevin, R. (2000). The Experience and Perception of Time. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/time-experience
- James, W. (1890). The Principles of Psychology. http://psychclassics.yorku.ca/James/Principles/prin15.htm
- Lindner et al. (2005) Attosecond double-slit experiment. Accepted for Physical Review Letters. http://arxiv.org/abs/quant-ph/0503165
- Paulus, G.G. et al. (2003). Phys. Rev. Lett. 91, 253004 (2003). http://web.archive.org/web/20040702094913/http://mste.laser.physik.uni-muenchen.de/paulus.pdf
- Physics Web. New look for classic experiment. http://physicsweb.org/articles/news/9/3/1/1?rss=2.0
- Rea, MC. (2004). Four Dimensionalism. The Oxford Handbook for Metaphysics http://web.archive.org/web/20040407104606/http://www.nd.edu/~mrea/Online%20Papers/Four%20Dimensionalism.pdf
- Römer, H. (2004). Weak Quantum Theory and the Emergence of Time. http://arxiv.org/PS_cache/quant-ph/pdf/0402/0402011.pdf
- Ambjørn, J. et al. (2004). Emergence of a 4D world from causal quantum gravity. Phys. Rev. Lett. 93 (2004) 131301. http://www.arxiv.org/PS_cache/hep-th/pdf/0404/0404156.pdf
Useful Links
- The web site of Dr Paulus, one of the principal physicists working on these femtosecond laser projects. http://faculty.physics.tamu.edu/ggp/
Relationalism, Substantivalism etc..
- Earman, J. (2002). Thoroughly Modern McTaggart. Philosophers’ Imprint. Vol. 2 No. 3. August 2002. http://www.umich.edu/~philos/Imprint/frameset.html?002003+28+pdf
- Einstein, A. (1916b). Die Grundlage der allgemeinen Relativitätstheorie. Annalen der Physik 49, 769 (1916); translated by W. Perrett and G.B. Jeffery as The Foundations of the General Theory of Relativity, in The Principle of Relativity (Dover, New York, 1952), pp. 117–118. Pointed out by Lusanna and Pauri in their draft of "General Covariance and the Objectivity of Space-Time Point Events".
- Gardner, M. (1990). The New Ambidextrous Universe: Symmetry and Asymmetry, from Mirror Reflections to Superstrings. W.H. Freeman & Co., New York.
- Gaul, M. & Rovelli, C. (1999). Loop Quantum Gravity and the Meaning of Diffeomorphism Invariance. http://arxiv.org/PS_cache/gr-qc/pdf/9910/9910079.pdf
- MacDonald, A. (2001). Einstein's Hole Argument. Am. J. Phys. 69, 223-225 (2001). http://faculty.luther.edu/~macdonal/HoleArgument.pdf
- Norton, J.D. (1993). General covariance and the foundations of general relativity: eight decades of dispute. Rep. Prog. Phys. 56 (1993) 791–858. http://www.pitt.edu/~jdnorton/papers/decades.pdf
- Norton, J.D. (1999) A Conjecture on Einstein, the Independent Reality of Spacetime Coordinate Systems and the Disaster of 1913. http://philoscience.unibe.ch/lehre/sommer05/Einstein%201905/Texte/113
- Pooley, O. (2002). Handedness, parity violation, and the reality of space. To appear in Katherine Brading and Elena Castellani (eds.), Symmetries in Physics: Philosophical Reflections (Cambridge: Cambridge University Press). http://web.archive.org/web/20030624084411/http://users.ox.ac.uk/~ball0402/papers/parity.pdf
Quantum theory and time
- Hagan, S., Hameroff, S.R. and Tuszynski, J.A. (2002). Quantum computation in brain microtubules: Decoherence and biological feasibility. Physical Review E, Volume 65, 061901. http://arxiv.org/abs/quant-ph/0005025
- Hawking, S. (1999) The future of quantum cosmology. http://web.archive.org/web/20030311142458/http://www.hawking.org.uk/ps/futquan.ps
- Isham, C.J. (1993). Canonical quantum gravity and the problem of time. In Integrable Systems, Quantum Groups, and Quantum Field Theories, pages 157–288. Kluwer Academic Publishers, London, 1993. http://arxiv.org/PS_cache/gr-qc/pdf/9210/9210011.pdf
- Isham, C.J. (1995). Structural Issues in Quantum Gravity. http://lanl.arxiv.org/PS_cache/gr-qc/pdf/9510/9510063.pdf
- Jacobson, T. (1995). Thermodynamics of Spacetime: The Einstein Equation of State. Phys.Rev.Lett. 75 (1995) 1260-1263 http://lanl.arxiv.org/PS_cache/gr-qc/pdf/9504/9504004.pdf
- Tegmark, M. (2000). The Importance of Quantum Decoherence in Brain Processes. Phys.Rev. E61 (2000) 4194-4206 http://arxiv.org/PS_cache/quant-ph/pdf/9907/9907009.pdf
- Zeh, H.D. (2001). The Physical Basis of the Direction of Time. Fourth edition. Springer-Verlag. ISBN 3-540-42081-9. http://www.rzuser.uni-heidelberg.de/~as3/time-direction/
The problem of qualia
- Alter, T. (1998). "A Limited Defense of the Knowledge Argument." Philosophical Studies 90: 35–56. But especially the discussion at the following web site: http://host.uniroma3.it/progetti/kant/field/ka.html
- Anglin, J.R. & Zurek, W.H. (1996). Decoherence of quantum fields: pointer states and predictability. Phys. Rev. D53 (1996) 7327–7335. http://arxiv.org/PS_cache/quant-ph/pdf/9510/9510021.pdf
- Bacciagaluppi, G. (2004). The role of decoherence in quantum theory. http://plato.stanford.edu/entries/qm-decoherence/
- Dennett, D. (1991), Consciousness Explained, Boston: Little Brown and Company
- Dretske, F. (2003). Experience as Representation. Philosophical Issues 13, 67–82. http://web.archive.org/20021124185049/humanities.ucsc.edu/NEH/dretske1.htm
- Jackson, F. (1982) Epiphenomenal Qualia. Philosophical Quarterly, 32 (1982), pp. 127–36. http://instruct.westvalley.edu/lafave/epiphenomenal_qualia.html
- Lehar, S. (2003) Gestalt Isomorphism and the Primacy of the Subjective Conscious Experience: A Gestalt Bubble Model. (2003) Behavioral & Brain Sciences 26(4), 375–444. http://cns-alumni.bu.edu/~slehar/webstuff/bubw3/bubw3.html
- Levine, J. (1983). "Materialism and Qualia: The Explanatory Gap". Pacific Philosophical Quarterly 64: 354–61.
- Lycan, W. (1987). Consciousness, Cambridge, Mass : The MIT Press.
- Ogborn, J. & Taylor, E.F. (2005) Quantum physics explains Newton's laws of motion. Physics Education 40(1). 26–34. http://www.eftaylor.com/pub/OgbornTaylor.pdf
- Strawson, G. (1994). Mental Reality, Cambridge USA: the MIT Press, Bradford Books.
- Tye, M. (1995). Ten Problems of Consciousness (Bradford Books, MIT Press).
- Tye, M. (2003). Visual qualia and visual content revisited. Ed. David Chalmers. OUP. http://sun.soci.niu.edu/~phildept/MT/Visual.pdf
- Tye, M. (2003). Qualia. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/qualia/
- Zurek, W.H. (2003). Decoherence, einselection and the quantum origins of the classical. Rev. Mod. Phys. 75, 715 (2003) http://arxiv.org/PS_cache/quant-ph/pdf/0105/0105127.pdf
Machine and digital consciousness
- Block, N. (1978). "Troubles with Functionalism". In W. Savage (ed.), Perception and Cognition: Minnesota Studies in Philosophy of Science, Vol. IX, Minnesota University Press, 1978, pp. 261–362; reprinted in Block (ed.) (1980), vol. I, pp. 268–305; reprinted (excerpt) in Lycan (ed.) (1990), pp. 444–468.
- Sternberg, E. (2007). Are You a Machine? The Brain, the Mind and What it Means to be Human, Prometheus Books.
- Searle, J.R. (1980). Minds, Brains and Programs. The Behavioral and Brain Sciences, vol. 3. Cambridge University Press.
- Bostrom, N. (2003). Are You Living in a Computer Simulation? Philosophical Quarterly, Vol. 53, No. 211, pp. 243–255.