SL Psychology/Memory


Directions

This content should include the following items:

  1. Types of memory (episodic, semantic, procedural)
  2. Processes (encoding, storage, recall)
  3. Early research and theories: Ebbinghaus and Bartlett
  4. Models of memory: Dual Process, Levels of Processing, Working memory, Parallel Distributed Processing Model
  5. Ecological Approaches to memory research
  6. Theories of Forgetting: Amnesia, repression, and context dependency
  7. Biological elements of memory (engrams)
  8. Memory aids

Content

Models of Memory

Dual Process Model of Memory

In the dual process model of memory there are two stores: short-term memory (STM) and long-term memory (LTM). Atkinson and Shiffrin proposed this two-process model in 1968, describing how information flows through the two stores. The model also includes a sensory memory that precedes short-term memory: all incoming sensory stimuli must pass through it before they can become part of one's memory. The model places particular emphasis on rehearsal and on a person's ability to attend to incoming information. Lastly, the dual process model depicts how information is lost from each store: data in sensory memory is lost when it is not attended to, data in STM can be lost through displacement, and data in LTM can be lost through interference.

In this model, sensory input is received by the brain. The brain then processes this information and "pays attention" to one type of information; the rest of the sensory input is forgotten. The input that is attended to is sent to the short-term memory, where it is either held for up to 20 seconds and then forgotten, or transferred to the long-term memory, where it can stay for very long periods of time without ever having to be refreshed. This model explains why a person can only pay attention to so much at a time and still remember later what was going on.
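The flow described above can be sketched as a toy simulation. The following Python snippet is only an illustrative sketch of the multi-store idea, not a cognitive model: the seven-item capacity, the attention filter, and the rehearsal rule are simplified assumptions chosen for demonstration.

```python
from collections import deque

# Toy sketch of the dual process flow: sensory input -> attention ->
# short-term store (items lost by displacement) -> rehearsal -> long-term store.
# The capacity and the rehearsal rule are illustrative assumptions only.

SHORT_TERM_CAPACITY = 7            # assumed capacity; older items are displaced beyond it
short_term = deque(maxlen=SHORT_TERM_CAPACITY)
long_term = set()

def perceive(stimuli, attended):
    """Only attended stimuli pass from sensory memory into the short-term store."""
    for item in stimuli:
        if item in attended:
            short_term.append(item)    # unattended items are simply lost

def rehearse(item):
    """Rehearsal transfers an item from the short-term store into the long-term store."""
    if item in short_term:
        long_term.add(item)

perceive(["dog", "tree", "seven", "rain"], attended={"dog", "seven"})
rehearse("dog")
print(list(short_term), long_term)     # ['dog', 'seven'] {'dog'}
```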

Research that supports the existence of a dual process model of memory includes cases of anterograde amnesia, such as that of Clive Wearing (reported in 1988). This form of amnesia allows a firm distinction to be drawn between STM and LTM: people suffering from anterograde amnesia are unable to retain new information for more than a short period. They are therefore trapped in a world based purely on their STM, and none of the data held in their STM can be transferred to LTM (due to damage to the hippocampus).

In addition to this research, free recall experiments convey the difference between LTM and STM. The results of most of these experiments fall into a pattern known as the serial position curve. Participants are presented with a list of words, and the serial position curve is a plot of the percentage of participants remembering each word against the position of that word in the list. The curve consists of the primacy effect, an asymptote, and the recency effect. The primacy effect refers to the superior recall of words from the beginning of the list; it can be inferred that participants were able to place those early words into their STM, rehearse them, and in turn transfer them into LTM. The asymptote describes the poor recall of words from the middle of the list. This occurs primarily because there are too many words in the middle portion of the list, so the STM is not able to hold them long enough for them to be rehearsed and transferred to LTM. Lastly, the recency effect reflects the fact that participants recall more of the words towards the end of the list: these items enter the STM and are not displaced, because no new information arrives to displace them, so participants are able to use the short-term store to their advantage (Murdock, 1962).

Murray Glanzer and Anita Cunitz (1966) tested the idea that rehearsal of the early words might lead to better memory by presenting the list at a slower pace, so that there was more time between words and participants had more time to rehearse. As hypothesized, if the primacy effect is due to rehearsal, increasing the time between words should increase memory for the early words, and this is what was found (Glanzer and Cunitz, 1966). Superior memory for stimuli presented at the end of a sequence is called the recency effect, as discussed above (Murdock, 1962). One possible explanation for the better memory for words at the end of the list is that the most recently presented words are still in short-term memory. To test this idea, Glanzer and Cunitz had their participants count backwards for 30 seconds immediately after hearing the last word on the list. This counting prevented rehearsal and allowed time for information to be lost from short-term memory. The result was as hypothesized: the delay caused by the counting eliminated the recency effect. Glanzer and Cunitz therefore concluded that the recency effect is due to the storage of recently presented items in short-term memory, which produces the better memory for words at the end of the serial position curve. The idea behind their conclusions is that words rehearsed during the presentation of the list are transferred into long-term memory.
But what is the evidence that short-term memory (working memory) and long-term memory are separate processes? Neuropsychological evidence has shown that they are indeed two separate processes, occurring in two separate processing mechanisms and operating independently of each other, in the sense that each has its own allocated resources and capacity (see the cases of Clive Wearing, H.M., and K.F.).

Limitations

Some criticisms of this model are that it is too simplistic and under-emphasises interaction between the stores. In addition, STM and LTM are more complex and less unitary than the dual process model would lead some to believe. This criticism is supported by the working memory model of STM by Baddeley and Hitch (1974) and by research into the semantic, episodic, imagery and procedural encoding of LTM. Lastly, some critics feel that rehearsal is too simple a process to account for the transfer of information from STM to LTM. The Levels of Processing framework (Craik and Lockhart, 1972) is an alternative account of how someone might go about retaining information.

Levels of Processing

When a person receives information, what they do with it is as important as how the information is received. Craik and Lockhart argued against the predominant view of fixed memory stores and proposed that there are many different ways in which data can be retained. Both agreed that data is easier to transfer from STM to LTM when the data is understood and can be related to past memories. Craik and Lockhart felt that rehearsal alone was too simplistic to account for data being stored in LTM: the longer data is processed and analyzed, the longer the memory trace lasts in the LTM. These psychologists established three levels at which verbal data can be processed: structural, phonetic and semantic, each with its own specific characteristics that aid further memory retention. The structural level consists of merely looking at how words are structured or organized to gain some meaning. Analyzing a word at the phonetic level looks at how the word sounds compared with another word in order to retain meaning. Semantic processing looks at the meaning of the word, and this level of processing is the most effective of the three.

The levels that these men proposed were tested and supported in 1975 by Craik and Tulving. In their study they tested the effect of depth of processing on memory by giving subjects words accompanied by questions that required the use of the different levels of processing. From the results the researchers were able to see that words processed at the semantic level were recognized more frequently than words processed at the other two levels. Because of such testing, many researchers began to wonder why deep processing occurred at all. In turn, modifications to the levels of processing framework were made, some of which made all three levels equally able to help people retain information. Elaboration (Craik and Tulving, 1975) was one such modification; it found that complex semantic processing produced much better recall of words than simple semantic processing. Distinctiveness (Eysenck and Eysenck, 1980) showed that when words were processed phonetically, recall was better if the words were distinctive or unusual. The last two modifications were Effort (Tyler, 1979) and Personal Relevance (Rogers, 1977). Effort conveyed the idea that words would have better recall if they were presented as difficult anagrams, and Personal Relevance showed that words had better recall if they were accompanied by an associated question that related to that specific participant.

Strengths and Limitations

Evaluation of the Levels of Processing approach to memory reveals both strengths and weaknesses. Its strength is that it has real implications for what actually occurs during the process of learning. However, the word "deep" is very ambiguous, and defining what counts as deep processing can at times be difficult. Moreover, why deep processing is so effective is another mystery to many psychologists. It was later found that semantic processing does not always lead to better retrieval (Morris, 1977), which challenges this approach's findings and assumptions. Lastly, the approach describes rather than explains, which casts doubt on its validity.

Working Memory Model of Memory

As many people continued to argue against the dual process model of memory, only a few psychologists took the initiative to introduce new models of memory. Baddeley and Hitch (1974) created the working memory model to challenge the dual process model and its short-term memory store. Working memory is an active store concerned with information currently in conscious awareness, and it is separated into three specific components: the central executive, the phonological loop, and the visuospatial sketchpad. The central executive is the controlling mechanism for the other two components; it must be very attentive due to its limited capacity and is in essence a modality-free mechanism. The phonological loop consists of two subsystems, the articulatory control system and the phonological store. The articulatory control system is the ‘inner voice’ that uses verbal rehearsal in order to retain information. This system's capacity is based on time, and ultimately the information in this system is data that we are preparing to speak or maintaining for later use. The second part of the phonological loop, the phonological store, is considered the ‘inner ear’ and holds information as a phonological memory trace. This trace normally lasts for 1.5 to 2 seconds if it is not refreshed via the articulatory control system. Information can also transfer into the phonological store from the sensory register or from the LTM. The visuospatial sketchpad is the ‘inner eye’ that holds visual and spatial information from either the sensory register or the LTM.
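Purely as an illustration of the component structure described above, the following Python sketch lays out the three components as simple classes. The component names, the roughly two-second trace, and the rehearsal-based refresh come from the description above; the class layout and fields are assumptions made for the sake of the example.

```python
# Illustrative sketch only: the working memory components as plain data holders.
# The ~2-second phonological trace and rehearsal-based refresh come from the text;
# everything else about the layout is an assumption for demonstration purposes.

class PhonologicalLoop:
    TRACE_DURATION_S = 2.0               # phonological trace fades after ~1.5-2 seconds

    def __init__(self):
        self.articulatory_control = []   # 'inner voice': items under verbal rehearsal
        self.phonological_store = {}     # 'inner ear': item -> remaining trace time (s)

    def refresh(self, item):
        """Verbal rehearsal resets an item's fading phonological trace."""
        self.phonological_store[item] = self.TRACE_DURATION_S


class VisuospatialSketchpad:
    def __init__(self):
        self.items = []                  # 'inner eye': visual and spatial information


class CentralExecutive:
    """Modality-free controller that allocates limited attention to the two subsystems."""
    def __init__(self):
        self.phonological_loop = PhonologicalLoop()
        self.visuospatial_sketchpad = VisuospatialSketchpad()
```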

Evidence and Support

Evidence for this model of memory comes from experiments that use concurrent tasks, in which two tasks are performed at the same time. These tasks show that the STM cannot be as simplistic as the dual process model makes it out to be. With concurrent tasks, different modalities must be used, and thus the different components of the working memory model must be used separately. These types of experiments further show that if one modality or one specific component is being used, then another component that is similar in nature will be interfered with and rendered non-functional (Baddeley and Hitch, 1974).

Strengths and Limitations of the Working Memory Model

This model provides a more complex and thorough explanation of the first stages of memory storage than the unitary STM store does. The model generalizes well to reading, mental arithmetic, and verbal reasoning. Moreover, the working memory model makes up for where the STM concept falls short when it comes to explaining brain-damaged patients (and their ability to use STM). However, despite the numerous positives of this model, the central executive component remains unclear in terms of what its true function is.

Parallel Distributed Processing Model

In the Parallel Distributed Processing (PDP) Model, the storage of memory is outlined in a very different way. This model is the youngest of all the models discussed so far; it was not until the 1980s that it truly came into favor. PDP is a model that stresses the parallel nature of neural processing and the distributed nature of neural representations. Moreover, this model uses basic principles of technology as well as mathematics to convey how memories are stored. As mentioned above, the Parallel Distributed Processing Model is primarily made up of neural networks that interact to store memory. The two basic principles that this model follows are:

1. Any given mental state can be described as an N-dimensional vector of numeric activation values over the neural units in a network.
2. Memory is created by modifying the strength of the connections between neural units.

The connection strengths, or "weights", are generally represented as an N×N matrix. These principles are based on the idea of connectionism, of which the PDP model is a major representative. Connectionism is the approach in cognitive science that aims to model mental or behavioural phenomena as interconnected networks of simple units. The framework in which the PDP model operates consists of:

• A set of processing units, represented by a set of integers.
• An activation for each unit, represented by a vector of time-dependent functions.
• An output function for each unit, represented by a vector of functions on the activations.
• A pattern of connectivity among units, represented by a matrix of real numbers indicating connection strength.
• A propagation rule spreading the activations via the connections, represented by a function on the output of the units.
• An activation rule for combining inputs to a unit to determine its new activation, represented by a function on the current activation and propagation.
• A learning rule for modifying connections based on experience, represented by a change in the weights based on any number of variables.
• An environment which provides the system with experience, represented by sets of activation vectors for some subset of the units.

This framework is purely mathematical, which allows researchers who work with this model to operate within it with relative ease.
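The two principles above can be made concrete with a minimal sketch. The following Python/NumPy example stores a single pattern (an N-dimensional vector of ±1 activations) by building an N×N weight matrix with a simple Hebbian rule, and then recovers it from a degraded cue. This is a generic connectionist illustration written for this text, not the specific networks used by McClelland and Rumelhart.

```python
import numpy as np

# Minimal connectionist sketch: a mental state is an N-dimensional activation
# vector, and "memory" is nothing more than the N x N matrix of connection weights.

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # one stored mental state (+1/-1 units)
N = pattern.size

# Hebbian-style learning rule: strengthen connections between co-active units.
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0.0)                     # no self-connections

# A degraded cue: the same pattern with two units flipped.
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1

# Propagation/activation rule: each unit takes the sign of its weighted input.
recalled = np.sign(weights @ cue)

print("stored :", pattern)
print("cue    :", cue)
print("recalled matches stored:", np.array_equal(recalled, pattern))
```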

Support

Evidence for this model has been generally accepted in psychology. Some studies sought to connect the priming effect to the PDP account of memory storage; these studies were conducted in laboratory settings and were based upon recalling information that had already been presented. Further evidence gained from these tests was that priming is due to the spreading activation of information (McClelland and Rumelhart, 1985). Data for this model has also been collected through computer simulation, and it is evidence of this kind that meets with the disapproval of other psychologists in the cognitive field. The major question these psychologists ask is how such computer-generated data can truly relate to anything in the human realm. In addition, some believe that human thought is more systematic than the PDP model allows, and they further state that computers are not able to correctly display behavior that humans can (Fodor and Pylyshyn, 1988).

Evaluations of this model are, for the most part, positive. The model puts an emphasis on learning and other phenomena of memory. Moreover, although much testing is done on computers, another key part of the data is collected in laboratory settings; in turn, connectionist psychologists are able to relate their data to the real world. However, they have so far been unable to provide clear predictions and explanations about recall and recognition memory and how the PDP model accounts for them.

Possible Essay Questions

How has modern technology influenced memory research, and how might it continue to do so? Discuss triangulating methods, and the strengths and limitations of some of the corresponding memory models.

How has the advent of the Parallel Distributed Processing Model affected the legitimacy of other memory models?

What are some of the strengths and limitations of laboratory-based research?

Types of Memory

Procedural: This type of memory refers to our knowledge of how to do something rather than something that has happened. Examples usually include skills, such as knowing how to walk, talk, write or ride a bike. While we may have episodic memories of learning how to ride a bike, these memories need not be recalled every time we ride a bike. Compared with the other types of memory, procedural memory is highly resistant to forgetting as well as to brain damage that can harm other types of memory. Patients suffering from amnesia forget many of their past experiences but often do not forget how to speak or write.
Declarative Memory: This type of memory refers to knowledge of facts and concepts as well as specific events. Declarative memory is divided into two categories: episodic and semantic.
Episodic: This category of memory refers to our recall of specific events which are connected to a place and time. This type of memory is what is referred to when someone talks about remembering something with their “mind’s eye.”
An example of a type of episodic memory is a flashbulb memory of a dramatic or traumatic event, such as the World Trade Center falling or the death of Princess Diana. Researchers Brown and Kulik argued that a special neural mechanism is triggered by such an event, especially if the event is shocking and has emotional repercussions (1977). These memories are generally permanent, and many specific details can be recalled from them, such as the place where the news was heard.
Semantic: This type of memory refers to abstract ideas and general knowledge, regardless of any context relating to the time or location at which the memory was stored. Examples include knowing the meaning of words, learning facts from a textbook and placing objects into categories.

Some scientists believe that the different types of memory are stored in different parts of the brain. Wheeler et al. (1997) reported that out of 26 PET scan studies, 25 studies showed more brain activity in the right prefrontal cortex during episodic memory retrieval than during semantic memory retrieval.

Memory Processes

Encoding: This refers to the processing of information for storage in either the short term or long term memory. Information must be processed so that it can easily be retrieved later.
Storage: Not all of the sensory information that we perceive actually enters our memory. Only the sensory information that we attend to is sent to our sensory memory. The more relevant information is encoded and stored in the short term memory while the rest is filtered out to simplify our memory processing. Information that is repeated in the short term memory is more likely to be encoded into the long term memory, along with information that is more personally relevant. Craik and Lockhart (1972) determined that:

1. the deeper the processing, the more memorable the information;

2. deeper levels of analysis produce stronger and longer lasting memory traces than shallow levels of analysis.

Retrieval: This is what we do when we remember something from our long term memory. Information in short term memory is not said to be retrieved, because it is only held there until it is either forgotten or sent to the long term memory. According to the primacy effect, the first few items on a list are better recalled than the middle ones because people have more time to rehearse the first few items, thereby encoding them into the long-term store (Atkinson and Shiffrin, 1968).

Early Memory Research

Hermann Ebbinghaus was a German psychologist and one of the earliest researchers of memory. He tested long term memory by memorizing nonsense syllables [consonant-vowel-consonant trigrams which do not spell any word in any language]. Ebbinghaus conducted this experiment by reading a list of about twenty nonsense syllables, reading each one first and then saying it aloud to himself. He followed this procedure for each item and spent the same amount of time on each nonsense syllable. His results showed that material was forgotten most rapidly within the first hour, and that the rate of forgetting slowed thereafter. This is referred to as the forgetting curve.
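Later textbooks often approximate the forgetting curve with an exponential decay of retention over time. The sketch below only illustrates the general shape (rapid early loss, slower loss later); the decay constant is an arbitrary assumption rather than one of Ebbinghaus's measured values, and Ebbinghaus's own data declines more slowly at long delays than a pure exponential.

```python
import math

# Rough illustration of the forgetting curve's shape: retention falls fastest
# soon after learning, then levels off. R(t) = exp(-t / S), where S is an
# assumed "memory strength" constant chosen purely for demonstration.

def retention(hours_since_learning, strength=20.0):
    return math.exp(-hours_since_learning / strength)

for t in [0, 1, 9, 24, 48, 144]:       # hours since learning
    print(f"{t:>4} h -> {retention(t):.0%} retained")
```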
Ebbinghaus also came up with the learning curve, in which he tried to recall a list of twenty nonsense syllables. The more he repeated the series, the more he could remember, until finally he recalled the whole list. However, his theory is somewhat qualified by Craik and Lockhart (1972), who found that elaborative rehearsal (the depth of understanding and processing) better determined memorability than maintenance rehearsal (simple repetition) alone.
Ebbinghaus also identified what is termed the overlearning effect. According to Ebbinghaus, being able to recall the material [in this case the twenty nonsense syllables] perfectly, two times in a row, constitutes mastering the subject. Therefore, Ebbinghaus concluded that when a person continues to memorize material he or she has already mastered, that person is “overlearning.” When overlearning occurs, the forgetting curve becomes shallower and it takes more time for the material to be forgotten.

Ebbinghaus invented several tests of retention such as:

RECALL -- simply try to remember each item. Ebbinghaus used two types of recall task:
-FREE RECALL -- attempt to recall the list items; order is not important.
-SERIAL RECALL -- attempt to recall the list items in the order studied.
RECOGNITION – subjects are given a large list of nonsense syllables and try to recognize which of them had been on the list studied. This technique is usually a simpler test of memory than recall, because a person may be able to recognize an item from the prompt [the list] that he or she could not recall from memory alone.
SAVINGS -- rememorize the list (usually used after a long retention interval, when neither recall nor recognition gives much evidence of prior learning). Compare the number of repetitions required to learn the list the first time to the number required the second time. Savings is the most helpful test of memory, as it will indicate some remaining effect of previous learning even when recall and recognition don’t.
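The comparison described under SAVINGS is conventionally expressed as a savings score: the percentage reduction in the number of repetitions (or the amount of time) needed to relearn the list. A minimal sketch, assuming repetitions are used as the measure of learning effort:

```python
def savings_score(original_trials, relearning_trials):
    """Percentage of the original learning effort saved when relearning the list."""
    return 100.0 * (original_trials - relearning_trials) / original_trials

# Hypothetical example: a list first learned in 20 repetitions is relearned in
# only 12 repetitions after a delay, so 40% of the original effort is saved.
print(savings_score(20, 12))   # 40.0
```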
Frederic Bartlett was a British psychologist and professor at the University of Cambridge. His major contributions to the study of memory include research into reconstructive memory. Bartlett theorized that we do not passively memorize information; rather, we try to make sense of memories and store them in a way that makes sense based on what we already know. In this sense, he presumed that we may often remember events or information incorrectly because we remember what we think should have happened, or what we prefer to remember. He referred to this as “effort after meaning.” Bartlett’s research tested British subjects’ recall of an unfamiliar North American folk story titled “The War of the Ghosts.” Due to the many cultural differences, subjects tended to recall the story incorrectly because they tried to alter the story to make it more coherent and omitted the details that were unfamiliar.
