Models and Theories in Human-Computer Interaction/What’s a Model
Is HCI really the most visible part of computer science?
- Paradigm of windows, menus, icons, and mouse pointing
- New mobile and virtual trends
- Early work on office contexts
- HCI pervades every field and discipline, from everyday interfaces like online banking to highly specialized ones like cockpits and surgical systems
- Methods: from experimental quantitative through context-sensitive qualitative
- HCI has become a primary test bed for two broad innovations in design methods: participatory design and ethnographically driven design.
- HCI is young
- Needs to be multidisciplinary, hence renaissance-oriented
GOLDEN AGE?
- Originally a joining of software engineering and human-factors engineering.
- Waterfall development crisis in the 1970s: a linear model of development
- The PC's arrival collided with a design methodology unsuited to user-oriented systems development. Perhaps this had mattered less in the past, when systems were less widespread and used only by elites who had the time and ability to cope with their shortfalls.
- Cognitive science and the representational theory of mind
- Card, Moran, and Newell's GOMS model: from thoughts to behavior. Prior human-factors models did not seek to integrate or explain the human micro-components of mind, thought, and behavior (see the sketch after this list).
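To make the GOMS idea concrete, here is a minimal sketch in the spirit of its simplest variant, the Keystroke-Level Model, which predicts a practiced user's execution time by summing empirically estimated times for primitive operators. The operator durations are the classic Card, Moran, and Newell estimates; the task sequence and function name are hypothetical illustrations, not examples from the chapter.

```python
# Keystroke-Level Model (KLM) sketch: predict how long a practiced user
# takes to perform a routine task by summing primitive operator times.
# Durations (seconds) are the classic Card, Moran & Newell estimates;
# real analyses calibrate them per user and device.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press (average typist)
    "P": 1.10,  # point at a screen target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before a unit of action
}

def klm_estimate(sequence):
    """Predicted execution time for an operator sequence,
    e.g. "MPK" = think, point at a menu item, click it."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Hypothetical task: delete a file through a menu.
# M (decide) P (point at file) K (click) M (recall command) P (point) K (click)
print(f"Predicted time: {klm_estimate('MPKMPK'):.2f} s")  # 5.46 s
```

The point is less the specific numbers than the stance: the model turns claims about mental operations into testable, quantitative predictions about behavior.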
Cognitive science foundation
- Newell's 1985 CHI address provoked critical responses
- I would argue that even before that, one could sense the boiling over of a need for unified concern in this multidisciplinary science. From Wiener's cybernetics and systems theory to [ADDRESS ABOUT FUTURE OF COMPUTERS], by the mid-20th century there was already a sense that computers would become important and widespread enough to demand concern for the future and for how to integrate them effectively into human society.
- Newell’s vision implicitly marginalized this work, motivating the emergence of alternative cognitive-science paradigms.
- Situatedness: Suchman's (1987) study of photocopier use described a variety of usability problems with advanced photocopier user interfaces. She considered the interaction between the person and the machine as a sort of conversation that frequently fails because the participants do not understand one another.
- Study as critique: roots in Derrida's deconstructionism?
- Suchman brought field-study concepts, techniques, and sensibilities from anthropology, ethnomethodology, and sociology, and applied them to a real, problematic situation of human-computer interaction.
- Internationalization of HCI, driven by International Federation for Information Processing (IFIP) conferences held in Europe and by initiatives within major computer companies.
- Bødker's (1991) application of activity theory to HCI: Russian beginnings with Vygotsky, Marxist foundations, culture, tools, cooperation
- Historical factors (sources of ideas) in the development of theory in HCI:
  - differentiation within the original cognitive-science community of HCI (distancing from GOMS)
  - the growing multidisciplinary constituency of cognitive science itself
  - internationalization
  - technology: the PC, networks
- An unsuspected future: current challenges that multidisciplinarity has brought about
  - fragmentation, the forming of camps
  - tension between depth and breadth in scientific expertise
- Researchers and practitioners
  - the one-way flow from theory to application has been supplanted by a more interactive view in which practice plays a more central role in articulating requirements for theory and technology and in evaluating their efficacy in application
  - my sense is that we are ever more driven toward producing practical, applied knowledge
- the mediocrization of the field: external factors such as schedule, budget, and compatibility with standard solutions often prevail, with no understanding of the underlying theoretical aspects and no adherence to the methodological exigencies that address questions of validity and veracity
- Carroll indicates the need for synthesizing a comprehensive and coherent methodological framework
- need to remain educated: a lifelong-learner model
- A 1988 Association for Computing Machinery (ACM) task force enumerated HCI as one of nine core areas of the computer-science discipline (Denning et al., 1989), and a joint curriculum task force of the ACM and the IEEE (Institute of Electrical and Electronics Engineers) recommended the inclusion of HCI as a common requirement in computer-science programs (Tucker & Turner, 1991). Despite the recommendation, even now HCI is required only in a subset of undergraduate computer-science tracks. For example, at the University of Iowa the Informatics track has a human-computer interaction course requirement, but the Computer Science track does not (University of Iowa Computer Science Department, accessed 2014). Nor did the recommendation migrate to computer-engineering curricula, which can cause a disconnect among professionals about what overall software development requires (University of Iowa Electrical and Computer Engineering undergraduate tracks, accessed 2014).
- inaccessibility of advanced-level materials on HCI science and theory. This could be mitigated if the research had a more centralized sphere: if HCI could publish in its own journals and conferences, maintained by a professional organization that put effort into combining the works of various researchers and practitioners. In mathematics, the two main professional associations are the Mathematical Association of America (MAA) and the American Mathematical Society (AMS). While these two organizations do not capture all mathematical research, they do capture large segments of mathematical research and teaching (the MAA is regarded by some as the more teaching-oriented of the two, the AMS as the more research-oriented). If HCI were to follow this example, even as fragmented and multidisciplinary as the field is, we could see increased access to knowledge and materials on HCI science and theory.
What's a Model (Sivabalan Umapathy)
A step before the Models
Computing has spread rapidly into everyday life. Even before personal computing was pushed to the forefront by smart personal devices, people had been interacting with machines on a micro scale. Systems such as ATMs, ticket vending machines, and printers with displays provided micro-interactions. Those interactions were small, sparsely used, and task-focused; for these reasons they were either easy to design or users did not care enough to complain.
Carroll notes that the traditional waterfall method prevented a user-oriented design approach, resulting in ill-designed software interfaces, and that the actors involved in the waterfall method overlooked the importance of HCI. This oversight should be attributed not to the project owners but to the methodology. A careful look at the companies that did employ human-oriented design (such as Xerox) helps us understand the root cause of the issue.
Companies selling mass-consumer machines adhered to the basic principle of "making it simple," and they had clear incentives to do so. Xerox, for instance, did not want an overloaded customer-support operation or a fleet of technicians servicing its machines. Simplicity had a direct effect on the business, so a visible ROI existed, and there was a direct business case to justify user-based research.
But not all software systems had such a driving business case. For instance, an e-commerce site's business model (little support, no after-sales service, and so on) lacked the strength to impose a compelling case. The problem is much like that of an ATM or a ticket vending machine: in both cases, unsatisfied customers had no sway over the business.
This highlights the need for "cases" for HCI. A case is a high-level goal and should be measurable: outcomes such as efficiency or a reduced support apparatus form a good case. Qualitative outcomes such as user satisfaction or reduced cognitive load can serve only as a starting point; for example, reducing cognitive load so as to enable the full spectrum of users is a measurable case.
Clearly defining these cases helps one draw boundaries around the breadth of focus required, as the small sketch below illustrates.
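As a toy illustration of how a measurable case might be expressed, consider the "reduction of support apparatus" outcome mentioned above. All numbers here are invented for the example, not drawn from the chapter:

```python
# Hypothetical illustration: turning an HCI "case" into a measurable goal.
# The figures below are invented; a real case would use observed data.
baseline_tickets_per_month = 1200   # support tickets before the redesign
redesign_tickets_per_month = 780    # support tickets after the redesign
cost_per_ticket = 15.0              # average handling cost in dollars

# Measurable outcomes: relative reduction in support load, and the
# resulting savings, which together justify the user-based research.
reduction = 1 - redesign_tickets_per_month / baseline_tickets_per_month
monthly_savings = (baseline_tickets_per_month
                   - redesign_tickets_per_month) * cost_per_ticket

print(f"Support load reduced by {reduction:.0%}")   # 35%
print(f"Monthly savings: ${monthly_savings:,.0f}")  # $6,300
```

Framed this way, the case gives the designer a concrete target and the business a visible ROI.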
Model as a guiding force
Once the cases are defined, a designer needs a way to translate those goals into deliverables.
How was this normally approached in the past? Most past interfaces were designed on "engineering hunches," while some were designed by artists. The translating principles applied by engineers were totally different from those used by media designers, and both fell short. The situation is best summarized as blindfolded people left to find a path through a forest.
The first help came in the form of human-factors principles, which can fairly be called the founding fathers of HCI principles. However, they were too abstract to convert directly into practice; only leading practitioners (such as Alan Cooper and Don Norman) could successfully translate them into implementations. Knowing a principle is one thing; applying it in context is another.
The cult of skeuomorphism in iOS, and the later shakeup that tore it apart, is an excellent demonstration of this principle-context gap. Skeuomorphism is supposed to evoke positive emotion and help users recall a familiar mental model, but applying it to modern contexts and interactions (such as the Rolodex) does not deliver the right solution for the context at hand.
This gap between principles and applied context resulted in non-standard methodologies.
A model fills this gap. It guides the designer in translating goals into a design and provides a uniform way to apply principles to context. A model can also serve as a communication tool for reviewing and validating a design, and HCI researchers (as distinct from usability evaluators) can use a model to define the factors of the system they are studying.
Inclusive multidisciplinarity of HCI is a strength, not a weakness (Amara Poolswasdi)
Because of the expansion of human-computer interaction and its scientific foundations, attaining a working breadth of knowledge is challenging but certainly not impossible. The scope of HCI is quite broad, but as a multidisciplinary field that is to be expected. When this scope is applied with great multiplicity to all the fields of study it touches, the volume of theories, methods, application domains, and systems may seem overwhelming. However, these are all tools and perspectives with which to interpret and analyze the value of HCI across disciplines. The field is not meant to be limiting in scope, overwhelming in volume, or paralyzing by nature: these theories and models are simply different ways to interpret and understand the same problem.
As HCI continues maturing as an academic field of study, its scientific foundations will continue expanding. The concept of fragmentation should not be embraced by current academics and practitioners of HCI: it is in our nature to understand and synthesize this information, and the practical fragmentation of the field achieves the opposite of the intended effect.
At both the practitioner and academic levels there will naturally be quite a bit of fragmentation; that is a necessary byproduct of specialization. It is the responsibility of individual practitioners and academics, in collaboration with those in leadership roles, to provide an environment in which a holistic HCI perspective still permeates the underpinnings of our field while allowing for differentiation and specialization. This is a practical application of Gestalt theory, where the whole is greater than the sum of its parts.
Is the framework “Computer as Human, Human as Computer” able to explain all our behaviors in everyday tasks? (Wei-Ting Yen)
[edit | edit source]Everyday tasks VS. Non-everyday tasks
Most tasks we do in daily life are routine and require only a little thought and planning, such as brushing our teeth, taking a shower, or watching TV. They are usually done relatively quickly, often simultaneously with other activities, because neither time nor mental resources may be available; in most cases, everyday activities are actually done subconsciously. Non-everyday activities, on the other hand, such as computing income tax, learning new software, or making a complex online purchase, are done consciously and usually require considerable mental processing effort. It is therefore no surprise that the tasks most frequently studied by HCI researchers are non-everyday tasks.
Conscious behavior vs. subconscious behavior
The exact relationship between conscious and subconscious thought is still under great debate; the resulting scientific puzzles are complex and not easily solved. Norman (2002) explained the differences between the two. He believed subconscious thought matches patterns: it operates by finding the best match between one's past experience and the current one. It proceeds rapidly and automatically, without effort, so it is good at detecting general trends and at recognizing the relationship between what we now experience and what has happened in the past; people can make predictions about general trends from only a few examples. However, Norman also noted that subconscious thought can be biased toward regularity and structure, and it may not be capable of symbolic manipulation or of careful reasoning through a sequence of steps.
Conscious thought is quite different. Norman believed that conscious thought tends to be slow and labored. People use it to ponder decisions, compare choices, rationalize decisions, and find explanations; accordingly, formal logic, mathematics, and decision theory are the tools of conscious thought. However, Norman pointed out that conscious thought is severely limited by the small capacity of short-term memory.
Conscious and subconscious modes of thought are not opposed to each other. Norman believed that both are powerful and essential aspects of human life; both can provide insightful leaps and creative moments, and both are subject to errors, misconceptions, and failures.
Reference: Norman, D. A. (2002). The Design of Everyday Things. New York: Basic Books.
Similarities and Differences between Models and Theories (Daniel Giranda)
Both "model" and "theory" are common terms when discussing HCI (human-computer interaction), and while they are used in similar ways, they are not the same. To communicate an idea accurately, the terms should not be used interchangeably. I agree with the assessment outlined in session 2.
Similarities
Both models and theories are used to gain a better understanding of observable phenomena. While their methods may differ in process and in scale, models and theories alike can be tested, expounded, improved, and debunked. Both can be used in multiple fields, and more often than not they have practical uses or can be translated into practical use. Both are also predictive tools that can give one an idea of an outcome before it has been observed.
Differences
In HCI design, models are used to guide the creation of an interface, and this guide is often less technical than a theory. A common model outside HCI is the blueprint used in the construction of a house. While there is certainly a technical and scientific approach to a model, other considerations enter into its creation: a model takes into account human factors, such as psychological, socio-technical, and collaborative factors, more than most theories do. Models are also often more limited in scope than theories.
Theories differ in that they must be tested repeatedly; they are bound by science and the scientific method to explain phenomena. The human elements included are limited; theories are less concerned with graphical representations and more with raw data that proves a point. All theories considered credible in the scientific world are peer reviewed and tested multiple times to ensure they produce the same results. This makes theories harder than models to change or modify, since a rigorous process is involved in proving a theory.
Technology's Effect on Model Development (Zack Stout)
The history of HCI presented in the chapter traces the growth of HCI theories from the 1970s to the 1990s. In the 1970s, Carroll states, software development had stagnated under the waterfall model, with human factors also facing difficulties. This led directly to the initial golden age of HCI studies.
Carroll attributes this golden age to the idea that computing was in trouble and needed a multidisciplinary approach to overcome the crisis. The initial approach had the main goal of bringing cognitive science to computer-software development.
Carroll goes on to list the developments of the 1980s, identifying four major areas that increased the knowledge of HCI: first, the increased use of cognitive science to explain abductive reasoning, learning by exploration, and the development of mental models; second, the multidisciplinary focus of HCI research; third, increased internationalization. The fourth area, and the most interesting, is the increase in technology, in both capability and proliferation.
Increase of Technology Leads to Increase in Models
Carroll does not get into the details of what brought about the multidisciplinary approach to HCI. Perhaps, instead of being relegated to the sidelines, the increase in technology should be thought of as the catalyst for the involvement of other disciplines. As technology proliferated, it became easier for other disciplines to become involved with computer interaction, as evidenced by the rise of personal computers in the 1980s and then the Internet in the 1990s. At the same time, the cost of computers decreased, allowing more people to purchase them. This brought in people outside the direct area of computer science (CS), and as more non-CS users got involved with computers, the difficulties became more apparent, increasing both the development of and the incentive to develop models for HCI.
Fragmentation's Benefits Outweigh Any Negatives (Tyler Liechty)
The fragmentation of HCI (human-computer interaction) is attributable to the growth of knowledge and theories, as noted by Carroll. With the overabundance of theories comes difficulty in maintaining a comprehensive knowledge of HCI, which Carroll notes as a downside of the field's growth. But for a comprehensive knowledge of any field to be feasible, the field has to be static and discrete, and for a young field such as HCI that is an unreasonable expectation. Carroll states specifically, in the last paragraph of section 1.3, that 'fragmentation is a threat to future development'. It would seem, however, that any effort to prevent this fragmentation would be more harmful than the fragmentation itself.
With each new group of 'immigrants' to HCI comes another set of use cases and practitioners to expand the scope of HCI and to further test its theories. To see this as a detriment to the field would be to dismiss the new users as a source of further refinement of the theories. It may lead to many 'quick and dirty' ethnographies, but it provides a large body of data with which to analyze the theories being applied. This fragmentation can also enable the other broad principle of HCI, participatory design. Without this fragmented and sometimes insulated development of similar theories, practitioners might not be able to apply the theories of HCI as readily, and would not produce the data needed to confirm the quality of those theories.
Real Costs of HCI Fragmentation (Richard Lee)
Carroll's material on the scientific fragmentation of HCI in the '80s, '90s, and '00s was informative, but it would be a mistake to view the trend in the past tense. The fragmentation has in no way diminished; it has instead become further institutionalized, and it must be met with ever more passionate evangelism.
The hard fact is that fervor without facts has little impact on the annual budget, yet the evidence one might present to justify the effort, time, and expense of both HCI research and the implementation of subsequent product improvements is itself time-consuming and expensive to produce.
I would argue that modern advances in real-user monitoring and in behavioral metrics, through both the gathering of applicable user-driven data and its analysis, should be heavily leveraged to justify the continued application and evolution of HCI models and theories.
The argument is often made that 'good enough' is sufficient when bringing a product to market, yet regardless of the scope of use (not every example is a downed plane or a raging nuclear reactor), improving the user's experience via the human-computer interface is guaranteed to have a positive net impact for the organizations that choose to invest in such efforts.
Direct revenue is of course a primary driver and metric in determining the feasibility of HCI research and application, but other factors are in play: consumer confidence in specific products, and in companies as a whole, is at stake.
With the proliferation of software and hardware choices, and their integration into our daily lives becoming more complete every day, a user's interaction with a small, seemingly insignificant member of a much larger family of products can determine the likelihood of further adoption. Negativity bias, combined with the one-to-many reach of social media and of broadcast entertainment on-air and online, can lead to instances where a single user's negative experience literally determines the success of a product or company in the marketplace.
Of course, it’s important to remember that there’s more to improving interaction than simply addressing usability issues, but those of us willing to take on the challenge of advancing the state of humanity’s relationship with technology are those best positioned to continue combating the fragmentation in our multifaceted field of science.
Understanding the Functional Purpose of Human-Computer Interaction (HCI) (Hestia Sartika)
According to Carroll, HCI is an intersection of the social and behavioral sciences with computer and information technology. This correlation helps developers understand how users navigate, visualize, and process virtual environments, and it pioneered the development of voice, video, hypertext links, digital libraries, privacy, information visualization, interactive tutorials, and graphical interfaces. It has also changed how homes, offices, schools, and universities use technology and become accustomed to it. HCI's past focus has been integration, analysis, support, and design for professionals and organizational processes. An important question is how HCI can continue its success 20 years on, which means focusing on other areas such as in-home and in-school learning integration.
In the 1970s the waterfall development method had become an issue in software engineering: the method was meticulously slow and unreliable, which made developing user interfaces and end-user applications a challenge. At the end of the 1970s, cognitive science presented a multidisciplinary focus consisting of linguistics, anthropology, philosophy, psychology, and computer science, and advanced two principles:
1. "Cognitive science was the representational theory of mind, the thesis that human behavior and experience can be explained by explicit mental structures and operations."
2. "An effective multidisciplinary science should be capable of supporting and benefiting from application to real problems."
HCI thus experienced a golden age of science in the sense that it addressed real issues of language, perception, motor activity, problem solving, and communication within the software development process. For example, Card, Moran, and Newell (1983) developed Goals, Operators, Methods, and Selection rules (GOMS), which provoked controversy and new research; Malone (1981) developed analyses of fun and of the role of intrinsic motivation in learning, based on studies of computer-game software; and Carroll (1985) developed a psycholinguistic theory of names based on studies of filenames and computer commands.
HCI acts as a model in that it has molded and given structure to software-development-process issues, connecting technology to human behavior and experience.
Let 100 flowers blossom, yet be sure that all the 100 bloom from the earth. (Jieun Lee, Eric Vorm)
Carroll's expression for inclusive multidisciplinarity, "let 100 flowers blossom," is a perfect metaphor for HCI's beautiful diversity. However, as he pointed out, there is an irony in "the tension between depth and breadth": the field's success and broad coverage carry a problem, fragmentation.
Carroll's argument is interesting: the greater the scope an intellectual field attains, the more selectively its researchers insulate their knowledge. At some point, the enlargement of HCI's scope was accelerated by the rapid convergence of technologies and study areas, which made researchers overly narrow in scope. In a vicious cycle, the need for foundational theory has kept increasing, and along with it the fragmentation of the field.
In my opinion, the major cause of fragmentation is HCI's heavy focus on practical applications. Practitioners are the majority, and they want instant, efficient solutions that can be readily adapted to their work.
The case is like a gardener who wants blossoms of various flowers but disregards the condition of the earth. Even if the soil is not nourished, flowers will continue blooming for several years; in the end, however, the earth will dry out, and recovery will take a long time.
I do not mean that practical approaches are worthless; rather, they are the flowers of HCI. In other words, fragmentation, the focus on breadth over depth, is inevitable, and I would argue it may even enable the success of HCI. However, focusing too heavily on applications may dry out the source: the foundation of theories.
Another perspective on this issue, also explored in Carroll's writing, is the concept of identity. Here we speak of identity as the answer to the question: "What is HCI?"
The problem of fragmentation is that as a field grows in scope and begins to blend so much with other disciplines, it runs the risk of losing its distinctness and may end up as nothing at all. This is where the problem of breadth over depth comes into play.
The breadth is the scope of the field; both the variety of problems it seeks to address as well as the number of tools and approaches it seeks to employ. In this case, we must consider how many of these domains, paradigms, and tools are unique to HCI. Participatory and ethnographic design methodologies may be said to be distinctly HCI (though some may disagree). Other than these, however, most tools or techniques are largely shared or borrowed from other fields such as sociology, psychology, cognitive science, computer science, engineering, etc. Similarly, the domains in which HCI practitioners work are domains of interest for many other fields, such as those earlier listed. This makes answering the question "what is HCI" increasingly difficult, since we cannot directly answer by pointing to either a domain of interest, nor an approach as being uniquely HCI.
The answer to this problem, in part, could be to increase focus on depth. Depth here refers to the roots of a field, which primarily come in the form of major theories. Theories must be rigorously developed, rigorously tested, and rigorously validated; this process strengthens the depth of a field much as repeatedly heating and cooling steel adds strength. HCI, for the most part, lacks its own major theories and hence has very shallow roots.
The identity of the field, as illustrated by Carroll (and others), may cease to exist if we focus too much on breadth and not enough on depth.