Models and Theories in Human-Computer Interaction/The Paradox of Technology

The Paradox of Technology (Zack Stout)

In The Psychology of Everyday Things, Donald Norman proposed a model to account for how users interact with a wide variety of systems. The model offers insight into why a system can fail to solve a problem any better than its predecessor, even though it incorporates more advanced technology.

The Model

The model is built around a mapping: the relationship between controls and the things they affect. Norman advocates natural mapping, which takes advantage of physical and spatial relationships between controls and their effects. A natural mapping leads to immediate understanding, making the system easier to use.
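As a rough sketch of the idea, the following fragment contrasts an arbitrary control layout with one that mirrors the physical arrangement it controls. The stove, the coordinate scheme, and the is_natural test are hypothetical illustrations, not anything defined in Norman's text.

```python
# Hypothetical four-burner stove: burner positions laid out on a 2x2 grid.
burner_positions = {"back-left": (0, 1), "back-right": (1, 1),
                    "front-left": (0, 0), "front-right": (1, 0)}

# Arbitrary mapping: the controls sit in a straight row, so nothing about
# their placement tells the user which burner each one operates.
row_controls = {"front-left": (0, 0), "back-left": (1, 0),
                "front-right": (2, 0), "back-right": (3, 0)}

# Natural mapping: each control occupies the same relative position as the
# burner it operates, so the layout itself explains the relationship.
grid_controls = dict(burner_positions)

def is_natural(controls, burners):
    """Loose, hypothetical test: a mapping counts as 'natural' here if every
    control shares the relative position of the burner it operates."""
    return all(controls[name] == pos for name, pos in burners.items())

print(is_natural(row_controls, burner_positions))   # False
print(is_natural(grid_controls, burner_positions))  # True
```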

To describe how a person uses a system, Norman proposes a seven-stage model of action that roughly corresponds to a typical interaction between a person and a system: the user forms a goal, executes actions based on their mental map of the system, and evaluates the result. The model is divided into three main areas: goal formation, execution, and evaluation.

Goal formation is simply the user deciding what they want to accomplish. Execution is divided into three stages: forming an intention (deciding how the goal will be accomplished), specifying the action sequence (detailing the exact steps to take), and executing the action (actually performing those steps). Evaluation, the final part of the interaction, is determining what the results of the action were and how well they fit the goal. It is likewise divided into three stages: perceiving the state of the system (seeing what happened after the action), interpreting that perception (relating it to what the action was supposed to do), and evaluating the outcome (deciding whether the correct result occurred).
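The stages can also be sketched as a single pass of an interaction loop. The sketch below is a loose illustration, not Norman's formulation; the run_cycle function, the Lamp device, and the goal/plan parameters are hypothetical stand-ins.

```python
def run_cycle(goal_check, plan, device):
    """One pass through the seven stages, sketched as a plain loop."""
    # 1. Goal: already formed by the user, captured here in `goal_check`.
    # 2. Intention / 3. Action specification: captured in `plan`, an ordered
    #    list of operations the user has decided to perform.
    # 4. Execution: perform the actions on the device.
    for action in plan:
        action(device)
    # 5. Perception: observe the resulting state of the system.
    state = device.state()
    # 6. Interpretation: relate what was observed to what was expected.
    # 7. Evaluation: decide whether the interpreted outcome satisfies the goal.
    return goal_check(state)


class Lamp:
    """A hypothetical one-switch lamp, included only to make the sketch runnable."""
    def __init__(self):
        self.on = False
    def toggle(self):
        self.on = not self.on
    def state(self):
        return self.on


lamp = Lamp()
# Goal: the lamp should be on. Plan: flip the switch once.
print(run_cycle(goal_check=lambda on: on, plan=[Lamp.toggle], device=lamp))  # True
```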

Paradox

The central theme of Norman's argument is that as more features are added, more controls must be added, which is not always to the user's benefit. This widens the gulf of execution: the gap between what the user intends to do and the actions the system allows. If this gap is large, the system appears difficult to use. A further problem with adding controls to what was once a simple system is that users tend to blame themselves for the system's deficiencies: the task they are trying to accomplish seems simple, so when it fails they assume the fault is theirs rather than the design's.

Applying the previous model, the basic foundation of these assertions lies in the execution and evaluation phases. The user defines a goal, such as setting the time on an alarm clock. In the execution phase they decide to adjust the time by pressing the button labeled “hour”, perform that action, and then observe that the time does not change. Because this does not match the goal, they must go through the seven stages again. This time they notice the button labeled “clock” and hold it while changing the time, at which point the time actually changes. The user may then blame themselves for not pressing the “clock” button in the first place, rather than blaming a poorly designed system. Simple as it is, the example shows how the model helps assign the blame correctly to the system rather than the user.
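A minimal sketch of that walkthrough, assuming a hypothetical alarm clock whose “hour” button only works while “clock” is held, might look like this; the class and method names are invented for illustration.

```python
class AlarmClock:
    """Hypothetical alarm clock: the 'hour' button only changes the time
    while the 'clock' button is held down."""
    def __init__(self):
        self.hour = 12
        self.clock_held = False

    def hold_clock(self, held: bool) -> None:
        self.clock_held = held

    def press_hour(self) -> None:
        # The allowable action is "hold clock, then press hour";
        # pressing "hour" alone silently does nothing.
        if self.clock_held:
            self.hour = (self.hour % 12) + 1


clock = AlarmClock()

# First pass: the intended action ("press hour") is not an action the device
# accepts on its own -- the gulf of execution.
clock.press_hour()
print(clock.hour)      # 12: nothing changed, so evaluation fails

# Second pass: after noticing the "clock" button, the user revises the plan.
clock.hold_clock(True)
clock.press_hour()
clock.hold_clock(False)
print(clock.hour)      # 1: the outcome now matches the goal
```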

Taken together, this fits with the idea that technological advances made without careful consideration of user interaction may produce a system that looks better on paper, but fails to be adopted once real users interact with it.