Models and Theories in Human-Computer Interaction/Introduction to Models and Theories in HCI


The Golden Age of HCI: Sheena Bove

I completely agree that we are in the golden age of HCI, and I think we will continue to make discoveries and develop theories for quite some time as technology advances and becomes easier for everyone to access. In the 1980s and 90s, the personal computer was becoming available, but it was still fairly expensive, and owners needed to invest quite a bit of time in learning how to use it. During this time it was very important to improve usability and learnability in order to sell more computers, or at least the idea of them. Until learnable operating systems came out, people had to understand command syntax to use a computer. When Microsoft introduced Windows, anyone could purchase a computer and begin to use it.

Technology has advanced quite a bit since then, and now nearly everyone has not only a personal computer but also mobile devices and watches that they use throughout the day for multiple things. As technology advances even further and we interact with it even more, HCI studies will become even more vital. I don't see this golden age slowing down anytime soon; nearly everything we use has some type of interface or unique interaction, from vehicles to TVs and even refrigerators.

We are in a golden age (Qing Guo)

Carroll claims that this is the golden age of HCI, and I firmly agree with this statement. Looking back at the history of HCI, we can see many different fields coming together and combining into a multidisciplinary science based on applied engineering and technology. From a business point of view, HCI can address software, hardware, interaction, the physical environment, virtual worlds, and more. In a word, HCI is used in all aspects of our daily life wherever humans and machines need to communicate. HCI does nothing but make the interaction between humans and machines as efficient, easy, and enjoyable as possible.

Take smartphones as an example of this point. In the past, "the machine" generally referred to computers; we used a mouse, keyboard, and screen as the input and output systems. But because the mouse and keyboard are too large to be portable, the places where these interfaces could be used were limited, until Apple introduced touch-screen smartphones. Now we can have input and output at the same time on a small screen, which is why modern people rely much more on electronic mobile devices in daily life. HCI can be applied in any place and environment, at any time, by any kind of user. So I think this is the time that HCI is blooming, since it is prevalent everywhere.


Trending Toward The Invisible: Dana Lynn

In response to the scientific fragmentation of HCI and the blending of software engineering and human factors, there is no doubt that the field is experiencing a renaissance. As we move toward more ambient technology and invisible computing, the intersection between humans and computers is less opaque and more transparent. Human-centric computer interaction design is where the computer is actually interacting with humans rather than vice versa. Innovative companies such as Google are proving this theory with products like Google Glass and Google's self-driving car. These examples are seemingly light-years from the boxy calculators and clunky PCs that are shelved in the technology archives of places like Hewlett-Packard and Apple Computer.

The constraints and limitations of human-computer interaction are practically non-existent, if not invisible. HCI, however, is progressively going beyond computational thinking.

“Computational thinking is about engaging with what the computer can and cannot do, and explicitly thinking about it.”

“If Computational Thinking involves, for example, understanding the power and limits of digital representations, and how those serve as metaphors in thinking about other problems, then those representations have to be visible.”

Humans no longer have to think about how they interact with computers. Computers are embedded into the fabric of our existence and have become acutely aware of the human presence.

If companies need to rethink their approach to human-centered design, then the science of HCI should undergo the same scrutiny. With computers being complex and human needs articulated through simple gestures or interactions, the two are like oil and water: they don't mix. Ubiquitous computing seems to be the current trend, and HCI needs to catch up. Technologies that are anticipatory, responsive, and adaptive to the needs of human interaction will soon become natural in form.

[1]


Human Meets Machine - We Are the Metaphor (Eric Andren)

Relating human to machine provides a great way to explain technologies on a level familiar to our being. It also provides a wonderful outlet for connecting technology and nature, and for expanding technology in a direction better informed by the systems we observe in nature every day.

By realizing that the technologies and machines we create are more or less reverse-engineered, low-functioning versions of natural phenomena, we can allow the development of newer technologies to follow and be informed by these physical systems and so increase usability.

We use metaphors in interfaces, such as windows and other iconic representations like the file or the trash can. Metaphors are used to increase familiarity, make the user more comfortable, and help the interface reach a higher level of usability. What would be more comfortable than an interface that reflected the systems we have been influenced by since the moment we were conceived?

Tapping into this emotional outlet can create a marriage between human and system, a level of unity provided by the very phenomena that created humanity, which we are slowly gaining a greater understanding of and infusing into our technologies.

Fragmented Flower Blossoms (Hestia Sartika)

HCI is a fragmented field in the sense that it is a multidisciplinary blend of cognitive science, social science, psychology, information technology, and computer science. Understanding these disciplines shouldn't be a barrier to expanding the success of HCI. The cost-benefit question is how HCI can integrate human factors with technology easily. As society becomes more dependent on and accustomed to the rapid growth of technology, simplicity has become scarcer, because we are trying to understand minuscule details that make everything more complex.

According to Carroll, the ironic downside of this fragmentation is that there are so many theories and methods that it is challenging for individuals to attain them all or to find someone who possesses all of this knowledge. Since HCI has diversified rapidly while lacking a good model of effective technology development between researchers and practitioners, and faces external factors such as schedule, budget, and compatibility, the HCI community has decided that it is important to manage cost-benefit trade-offs in methods and techniques.

There are too many theories and methods, but if we set that aside and understand usability as it is, we can see that less is more. For example, the downfall of MySpace and the rise of Facebook were driven in part by the simplicity of Facebook's interface: it is easy to look at, has less clutter, and its advertisements are chosen by users and more personalized.

Do visual displays of information help in problem solving and creative thinking? (Mike Morgan)

In Carroll's book, HCI Models, Theories and Frameworks, he talks about the three-stage model of human visual processing. The second stage focuses on pattern identification: the visual system making sense of the lower-level features found in the first stage. He references Zhang and Norman's research on visual displays, which indicates that visual displays of information actually help in problem solving and creative thinking, particularly for complex problems that are difficult to recreate in one's own mind. The external visual display functions like an extension of the mind, as an information store for further reflection on the problem, and helps us not only to surface the salient aspects and solutions of the problem but also to reimagine alternative solutions to it.

The UX research I have done in my lab supports this claim. When we attempted to help our customers, who are end users of enterprise HR software, visualize conflicting events (for example, an employee having multiple salaries in the same time period, which the system does not allow) via a horizontal timeline with colored bars as a metaphor, participants were able to quickly recognize the salient regions of the timeline where conflicts existed in the form of overlaps. Participants were then able to manipulate the visuals to correct the conflicts more easily than if the conflict had been represented in a less visual and more semantic manner. For example, representing this same information as rows of dates in Excel is more cognitively burdensome for end users trying to determine conflicts, because they are forced to use their mathematical skills to calculate date ranges and overlaps. They don't have the benefit of an external memory aid (like a timeline) to solve the problem and need to rely on their own internal representations. With an external timeline visualization as a memory aid and as input for an enhanced internal representation of the conflict, they were not only able to immediately identify the overlaps (conflicting date ranges), they were also able to provide feedback and suggestions on improving the user experience. In this case, they offered suggestions on adding color coding and patterns to the overlapping sections in order to differentiate them from the rest of the timeline.
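
To make concrete the kind of computation that the timeline visualization externalizes, here is a minimal sketch in Python of the date-range overlap check a user would otherwise perform mentally when scanning rows of dates in a spreadsheet. The record labels and dates are hypothetical illustrations, not data from the actual HR system.

```python
from datetime import date

# Hypothetical salary records for one employee: (label, start, end).
salary_periods = [
    ("Salary A", date(2023, 1, 1), date(2023, 6, 30)),
    ("Salary B", date(2023, 6, 1), date(2023, 12, 31)),
    ("Salary C", date(2024, 1, 1), date(2024, 3, 31)),
]

def find_overlaps(periods):
    """Return every pair of periods whose date ranges intersect."""
    overlaps = []
    for i in range(len(periods)):
        for j in range(i + 1, len(periods)):
            label_a, start_a, end_a = periods[i]
            label_b, start_b, end_b = periods[j]
            # Two ranges overlap when each one starts before the other ends.
            if start_a <= end_b and start_b <= end_a:
                overlaps.append(
                    (label_a, label_b, max(start_a, start_b), min(end_a, end_b))
                )
    return overlaps

for a, b, start, end in find_overlaps(salary_periods):
    print(f"{a} conflicts with {b} from {start} to {end}")
# Salary A conflicts with Salary B from 2023-06-01 to 2023-06-30
```

This is exactly the arithmetic that the timeline makes visible as overlapping colored bars, so participants can perceive a conflict instead of having to compute it.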

The ability to more easily solve a problem due to an external visualization might also be improved with learning strategies. In John Hattie and Gregory Yates' book Visible Learning and the Science of How We Learn, they describe five learning strategies summed up as CRIME: chunking, rehearsal, imagery, mnemonics, and elaboration. Specifically, imagery and elaboration could be relevant tools for solving problems when used with external visual displays. Imagery is when you are able to use your own mind to recreate a picture of something in order to improve recall. An external visual display representing a complex problem with sufficient salient cues for pattern identification can be used as input to "upgrade" an individual's pre-existing internal visual representation of the problem.

Elaboration is another learning strategy that might improve the utility of external visual displays as problem-solving and creative-thinking tools. Elaboration involves taking information in the world which needs to be remembered and linking it to meaningful information stored in long-term memory. The act of linking this new information to old ensures that the new information will remain accessible in the future. If the patterns identified in a complex external visual display can be linked to meaningful information from long-term memory and retrieved easily for future recall, this strategy could become a way to further enhance an individual's visual representation of a problem. Experts within a specific field who are trying to solve a complex problem could probably benefit greatly from using this learning strategy, since they are cognitively invested in their work, so bridging meaningful connections between their work and the external world probably comes more easily to them than to novices.


Introduction to Models and Theories in HCI (Jeff Bazer)

In Carroll’s book, 'HCI Models, Theories and Frameworks', he states that HCI “is concerned with understanding how people make use of devices and systems that incorporate or embed computation, and how such devices and systems can be more useful and more usable.” Carroll goes on to talk about what HCI professionals do: they “analyze and design user interfaces and new user-interface technologies”, have “created software tools and development environments to facilitate the construction of graphical user interfaces”, and have “pioneered the use of voice and video in user interfaces, hypertext links, interactive tutorials and context-sensitive help systems.”

Carroll also goes on to talk about some of the work in mobile computing, information visualization for digital libraries, and navigation techniques for virtual environments. Since this book was published in 2003, I think this section is even more important today than ever. According to Smart Insights, the number of mobile users has increased dramatically and surpassed desktop users. This is important because, while the concepts in the book are a solid starting point, they must be adapted for mobile, which involves a different style of interaction.

Another concept Carroll talks about is scientific fragmentation and its current challenges. Carroll states that in the 1980s it was reasonable to expect HCI professionals to have a fairly comprehensive understanding of the concepts and methods in use, but in today's world it is far more challenging for individuals to attain that breadth of knowledge because there are so many theories, methods, application domains, and systems. I think this is spot on, because as HCI has grown over the past decade, more and more models and theories have emerged, many of them having to do with mobile. This makes it harder to know all of them, and we start to run into specialization, where researchers and HCI professionals focus on just a couple of concepts instead of all of them.

Works Cited: Bosomworth, Danyl (2015, January 15). Statistics on mobile usage and adoption to inform your mobile marketing strategy. Smart Insights. http://www.smartinsights.com/mobile-marketing/mobile-marketing-analytics/mobile-marketing-statistics/


HCI…Golden Age? (Valerie Van Ee)

"This first decade of HCI was a golden age of science in the sense that there was wide tacit agreement as to the overarching research paradigm. And a lot got done.” (Page 3)

Carroll writes in the introduction about how HCI brought cognitive science to software development: a joining of a variety of disciplines in order to bring about positive influence by including human behavior and experience as part of the development process. This produced many new theories and models (such as GOMS) whose focus was development with humans as a critical part of the process. One major change was the shift from waterfall development to an agile process: getting users involved early and having a cyclical software development process.

I agree that this was a major time in HCI history; it produced a Golden Age. However, the focus was on the workplace: making people more efficient and effective. I disagree that it was the only Golden Age or that the age has now passed. It was the start of an age that is still growing more golden and expanding today.

As devices and technology exceed expectations and grow into new dimensions, it would be naive to think that HCI research has peaked. New depths and breadths are being explored every day in relation to HCI. As personal computing expands, personalization (such as wearables) and augmentation increase. HCI is still establishing its roots and learning what it is to become.


HCI Today and Tomorrow (Shaun Broyhill)

When looking back at the beginning of computer development, it can be seen that design was more about getting the job done than about how to get the job done. After all, computers were a new, undiscovered country; very few people had explored the domain.

As others have written in this wiki, I am examining what Carroll writes about the HCI golden age. I too disagree with the belief that we have already experienced the golden age; rather, I feel that the Golden Age of HCI is only now beginning. While previous advancements in human-computer interaction were gradual, with changes progressing through improvements in GUIs and other minimal peripherals such as computer mice and keyboards, there were no major leaps forward. However, in the last decade, more advances in the HCI field have been brought to fruition than in the previous fifty years. Introductions such as the iPhone, virtual reality, advanced voice recognition, health-care initiatives, artificial intelligence, and the wearables market are bringing HCI models, theories, and ideas front and center.

I feel that while we still have some major advances ahead in these areas, we may reach the peak of scientific achievement within HCI in the next couple of decades. Some advances will still occur, just not at the rapid-fire rate we see today. To keep allowing advances to occur, we need to investigate the responsible use of these theories and design practical applications of these theories and models.


Mental models are about what the user believes, not facts? (CheeKang Tan)

What are mental models? Mental models represent people's thought processes about how to interact with the real world. They reflect each individual's unique life experience, perceptions, and view of the world. We study mental models to build theories that can be used to analyze, draw conclusions about, and systematize individual behaviors. This helps humans develop more advanced mechanisms and improve the management of natural resources.

I agree that our mental models allow us to make assumptions about how things work, and that this thinking helps us make instantaneous decisions and act on them. With computers, we understand how an application or system works by learning and using it. Over time, we come to expect certain situations and become familiar with the system or application through interacting with the computer. This also helps us develop a better conceptual model of the system, which increases motivation, reduces complexity, and improves accuracy, efficiency, and problem-solving skill.

I also agree that a mental model is what the user believes: each individual user has their own mental model, and different users have their own unique mental models of the same user interface. Therefore, it is important for a developer or designer to make a user interface basic and natural enough to communicate with users so that they form an accurate mental model.


Old School: Waterfall (Magann Orth)

An organization must identify its methodology for software development. In addition, explaining and training on roles and responsibilities is extremely important. The organization I work in typically builds its own software instead of purchasing it. Often the technical teams will promise the world, so functional users think, "This is great! We can have exactly what we want!" However, the technical team will then start to come forward with the compromises the user will have to make.

The technical team is used to the waterfall method. The waterfall method for software development means the designer goes through a full development cycle before involving the users. The users, while they do not actually know the term, want an agile environment: users and functional teams want to give feedback throughout the development process. There are multiple problems with this push (developers, waterfall) and pull (users, agile) that I see every day at work.

First, the technical team is only working from what they think the functional people want. In my experience, functional users are not the best at explaining what they want, which is why an iterative process (agile) is important. Second, the software development team usually mimics what is currently done on paper or the current process. However, this leaves room for conflict with users who want to expand and improve the overall experience. Users often need something to react to (agile) instead of simply imagining at the beginning what they want the end result to be (waterfall). Third, the technical teams often get frustrated that they have 'designed' already, and do not want to go back through the process to reiterate or recode. Lastly, a huge issue with the waterfall method is time. Essentially, my organization is working from a waterfall perspective but with agile feedback. This causes the technical development time to linger: the whole product is developed, but then users give feedback, and that feedback typically pushes development back even further. I have been on implementations that were meant to take 6 months and lasted 3 years because of the overall mentality of waterfall vs. agile development.

This is why it is important for organizations to clearly define their methodology, roles, and responsibilities.

The Benefits of Being Multi-disciplinary: Multiple Sources of Models and Theories (Nick Sturtz)

HCI is a multidisciplinary field of study, bringing together technology and human factors. While that can lead to fragmentation in the field, one benefit is that HCI researchers are able to apply (sometimes with modifications) the established models and theories from the various disciplines that make up HCI. In his first chapter, Carroll discusses how cognitive science (itself a multidisciplinary field) coalesced to begin to describe how people think and to model that process. HCI took these cognitive-science theories and models and evolved them into models and theories that can be used to describe how people interact with computers; one result was the GOMS (Goals, Operators, Methods, and Selection rules) model. HCI researchers also have models and theories from computer science, which help to describe the inner workings of computers and applications as well as how computer systems interact with each other and with their input and output devices. From these computer-science models and theories, HCI researchers can get an understanding of how the computer systems will work and can then begin to study the interaction of the human and the computer.

While it is a benefit to have so many different models from various disciplines, Carroll also correctly mentions that this can cause some trouble for HCI researchers, because there are so many things they need to be familiar with. This can leave the HCI researcher with a very shallow knowledge of the models and theories from the various disciplines. A possible remedy could be to have specialties form within the HCI field, allowing some people to focus more on the human aspect while others focus on the computer piece and the interactions. While HCI is still a young field of study, we are lucky that, being multidisciplinary, we had foundations in older fields of study and have been able to borrow from each of them to move our field forward more quickly than if we had to create these models and theories on our own.
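
As a rough illustration of what such an engineering model looks like in practice, the sketch below uses the Keystroke-Level Model, a simplified member of the GOMS family, to estimate task execution time by summing primitive operator times. The operator values are the commonly cited textbook approximations, and the task breakdown is a hypothetical example rather than one taken from Carroll's chapter.

```python
# Keystroke-Level Model (KLM), a simplified GOMS variant: task time is the
# sum of primitive operator times. The values are standard approximations
# (Card, Moran & Newell); real analyses would calibrate them empirically.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average-skill typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(operators):
    """Estimate execution time in seconds for a sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: delete a file by clicking its icon, then pressing Delete.
task = ["M", "P", "K", "M", "K"]  # think, point, click, think, keypress
print(f"Estimated execution time: {klm_estimate(task):.2f} s")  # about 4.36 s
```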

The Importance of Being Multidisciplinary: Computer Science meets Social Science (Jessica Ashdown)

The field of Human-Computer Interaction, as Carroll astutely points out, is young, and its future depends on its commitment to multidisciplinary science. This could not be more truthful or more important in the field today. Traditionally, design was focused on what programmers wanted to program and then on what designers wanted to design. It was about what looked "cool" or "new" or "innovative," with little to no consideration of the end user. It might look "cool," but is it easy to use? It might be "clean" or "minimal" design, but can the user figure out how to use it? A simple Google search for interfaces and objects with bad usability pulls up countless results, all stemming from poor or no consideration of the end user and of how an interface or object will actually be used. Enter user-centered design, which draws on many disciplines, not just computer science or graphic design. User-centered design also draws upon the social sciences: sciences that focus solely on people and how they work, how they perceive things, what motivates them, and how they interact with the world. Once these three disciplines (programming, design, and social science) came together, an astounding thing began to happen. Interfaces and objects were being celebrated by people as "easy to use" and even "delightful to use." This was only made possible by the interaction between human factors and computer science (hence, human-computer interaction). It is through the continued work of these disciplines, and others, working together that the future of technology and user-centered design will be created. Drawing upon expertise in multiple areas, I believe, can only strengthen and empower interfaces, and thus user experiences, to become better and better.

Ethnographically Driven Design (Lawrence Greer)

One of the first things which really struck a chord with me from our readings was the concept of ethnographically driven design. It states that you need to study the actual work practices and processes of users when developing new technology to support them, not just the procedures and guidelines they are supposed to be following. Basically, look at how the users are actually working, not how they are supposed to be working. I enthusiastically support this approach, and have shared this method with some of my coworkers.

I have seen many issues arise when this methodology is not followed and tools are made based on what the users are supposed to be doing instead. When I first began working for the hospital system, one of the earliest projects I worked on was making new electronic assessment forms for our nurses to use with our electronic medical record system. I would meet with a committee of nurse managers to review their existing documentation, review with them the capabilities of the new system we were implementing, and then work with them to create assessments that would fit the needs of their various units. Some nurse managers would bring floor nurses with them to these meetings to get their input, while others would not. After I finished creating the new assessments, the nurse managers would review them with their staff before they were approved to go to production.

I quickly learned that the assessments for units whose nurse managers had brought floor nurses with them to the design meetings required far fewer revisions and were accepted by the staff and ready for production far more quickly. The floor nurses, who would be the ones actually using the assessments on a daily basis, knew the actual processes they used every day, while the nurse managers were basing things on the formal procedures, and those two things did not always match up. Furthermore, after go-live, when the new system went fully into production, the units which had not originally had floor nurses involved in the initial design process often required additional revisions to compensate for process issues which had not been taken into account in the initial build or caught in the staff review. I did not encounter nearly as many issues with the assessments I built for units which had input from staff more familiar with the true day-to-day operations and processes of their unit.


Scope and HCI practitioners (Cara MacLaughlin)

In the beginning, a fairly narrow focus allowed practitioners to work closely with researchers. The role of the practitioner was to further develop and apply the researchers' concepts solely in terms of the end product. Practitioners eventually became integrated into the development cycle where they were expected to not only apply the models and theories, but also understand the foundation of said models and theories. While the need for researchers has not diminished at all, this understanding can be seen as an additional hat practitioners must wear.

As more fields of study are enveloped under the umbrella of human-computer interaction, the scope has expanded greatly. While practitioners definitely have a "home" field of study, each must have a base knowledge of all intersecting fields. Beyond this base knowledge, however, Carroll is correct that practitioners often purposefully limit which aspects of HCI they choose to immerse themselves in. Depending on the demands of the employer and the interests of the practitioner, it is far easier to learn only what needs to be understood and applied. This means that "modern day" practitioners are less knowledgeable about every aspect of the all-encompassing field than the original HCI practitioners were.

While it should be seen as progress that the value of and need for human-computer interaction practitioners has increased, the expectation that a practitioner is a Renaissance wo/man in such a vast field is illogical at best. At the same time, compartmentalizing skills is not a solution either. This is a multidisciplinary field and the strengths and weaknesses of it as such need to be accepted and dealt with by all parties involved.

Fragmentation is Bad? (Chris Vermilya)

Over the past 30-40 years, the modern field of HCI has emerged and expanded, in large part following the creation and development of “the personal computer, and its word-processing and spreadsheet software”. HCI attracted people from numerous other fields to study how their own work overlapped with the emerging domain, including anthropology, ethnomethodology, psychology, sociology, and communication studies, among many more. With this influx of people from different backgrounds inevitably came a fragmentation of the field. Where it was once possible for someone to have a “fairly comprehensive understanding of the methods in use,” it is today “far more challenging for individuals to attain that breadth of working knowledge.”

It is curious that Carroll finds this fragmentation a wholehearted negative aspect of the growth of the subject. With the expansion of any field of study, yes, it is unfortunate that one cannot study all there is to know, but this unfortunate feeling is nostalgia. It would not make sense for an orthopedist to also have a complete working knowledge of a brain surgeon’s procedures, nor would it make sense for two HCI researchers focusing on different topics (perhaps optics and haptics) to have a complete working knowledge of each other’s work. As the field continues to develop, it makes sense that those within it might specialize in certain areas instead of learning the “too many theories, too many methods, too many application domains, too many systems” that may not pertain to their line of research. Fragmentation is a healthy sign of growth in any subject because it indicates both progress and increasing interest.

Fragmentation in HCI (Tim O’Brien)

As the field of computing has grown, so too has the field of HCI. This has led to increased fragmentation of the field. It is no longer realistic for HCI professionals to have familiarity with the full breadth and depth of the subject. In many ways this fragmentation is to be expected and reflects the significant integration of HCI into the field of computer science. After all, who is an expert on all programming languages and networking protocols?

In his book HCI Models, Theories, and Frameworks, Carroll expresses concern about the growing fragmentation of the HCI field and the lack of depth in its application. He points to the increasing breadth of the field and narrowly scoped practitioners as drivers of the fragmentation and believes the future success of HCI relies on decreasing it.

I agree with Carroll that the field would be improved if practitioners were more knowledgeable about and engaged with the subject. But I disagree that excessive fragmentation threatens the field of HCI. On the contrary, narrowing the scope of HCI would threaten its future even more. The computing field is a wide and varied landscape that is continually changing, and HCI specialties will arise in all corners of it. Much of the theoretical work will have broad application, but as the field grows, new research will be increasingly specialized and find applications only within specific sectors.

Jack of All Trades, Master of None (Alex Whigham)

In Carroll's textbook, "HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science", he discusses the fragmentation of the HCI field and suggests that it is a "threat to future development." He presents a valid concern that as researchers diverge from one common goal, they tend to spend less time working together and sharing information and findings that could lead to a more complete understanding of HCI as a whole. However, I disagree with the idea that fragmentation of the field would prevent development.

It is nearly impossible to understand the relationships among many disciplines and further develop and implement interactions between those disciplines without first gaining a complete knowledge of their fundamentals and underlying concepts. For example, it would be impossible for a computer scientist to sit down and write an artificially intelligent being without first having a very thorough knowledge of neuroscience, what it means to be intelligent, and everything that plays a role in the intelligence of a being. For this reason, the artificial intelligence field has been broken down into research subgroups such as natural language processing, motion and manipulation, perception, social intelligence, and more. This does not mean that these more focused research areas are not working toward a common goal. Once there is a general knowledge of how the brain perceives stimuli such as signals generated by light entering the eyes as well as how the brain generates motion and manipulates objects in order to understand them, the areas can begin to merge and focus on the interactions between the two.

Because HCI is still very young, there isn't a significant knowledge base defining the field or the many disciplines that are encompassed by HCI. Therefore, in order to move forward and gain a more complete understanding of the breadth of Human Computer Interaction, we must first strip it down to its foundation to be sure we fully understand each individual area. Only then can we begin to understand how they work together.

HCI – A Dynamic and Ever-changing Field (Brian Finn)

One of the introductory notions found in the study of HCI is that the field itself is dynamic and multidisciplinary. Although historically the core research and purpose of HCI have been well defined, the borders of the field can be a gray area, as important topics in HCI are rooted in computer science, design, behavioral science, and other fields of study. According to Human Computer Interaction - brief intro [2]: “This dialectic of theory and application has continued in HCI. It is easy to identify a dozen or so major currents of theory, which themselves can be grouped (roughly) into three eras: theories that view human-computer interaction as information processing, theories that view interaction as the initiative of agents pursuing projects, and theories that view interaction as socially and materially embedded in rich contexts” (Carroll, 2014). Technology advancements also contribute to the scope of study for HCI and have highlighted how dynamic the field can be. Technology trends, the development of contributing fields of study, and further advancement of HCI-specific theories will continuously drive change in the field and will ultimately lead to new applications of HCI and new ways for humans to interface with the digital world.

Fragmentation vs Specialization (Breann Bowe)

Warning that “fragmentation is a threat to future development,” Carroll expresses his concern that multidisciplinary HCI practice may depart from the field’s foundation and decrease practitioner expertise. Although I understand this point of view, I believe this fragmentation would be better described as specialization and would loosely liken it to specialization in the medical field.

In a hospital setting, a patient may be admitted for treatment after a motor vehicle accident under the care of a Trauma physician. However, should they develop heart, kidney, or brain issues, the Trauma physician would consult a specialist to determine the best course of action. All of these physicians went to medical school, but eventually went on to specialize in a certain part or parts of the human body. Now they can work together and assist one another in achieving a common goal—healing the patient.

Just as HCI practitioners may specialize, much like physicians, they must also stay up-to-date on the evolution of their field and be mindful of any major developments. It may not be necessary to maintain a vast breadth of knowledge on the field as a whole but, rather, be considerate of how current research may inform one’s specialty. In the end, I believe specializing does not pose a threat to the future development of HCI.

Knowing the User (Jason Ugie)

HCI is continuing to evolve as technology continues to be the dominant interface for humans. In turn, the social sciences and cognitive science will continue to expand their influence. As Carroll points out, the emergence of the cognitive sciences influenced how software is developed. Waterfall methodologies put customer needs last; over the years, as the Internet and mobile have become the new platforms for customer interaction, agile development has come to include HCI/UX members as part of the product team, so every step of development has a user advocate.

Understanding the interaction between humans and systems has never been more important. Emerging technologies such as wearables are going to reshape current paradigms, just as social media and texting have already done. Solving the pains that products address requires knowing who the users are and building interfaces and functionality that they will understand and that motivate them to return. Employing multiple disciplines to cross-reference demographics is important. Carroll talks about Suchman's research on how people interact with photocopiers. Her research illustrates the important roles that the cognitive and social sciences play in improving and enhancing applications so that users connect with solutions that make their lives better.

Carroll also brings up fragmentation, and while his points are understood, I feel there is a lot of fragmentation in the market too, which adds to the fragmentation in HCI. Right now there are many different touch points for users, and HCI is continually finding ways to introduce new research, from understanding usability considerations for gadgets to determining whether the brains of millennials are being rewired by their hyperconnected lives. Further, every product has a different understanding of its users, making common theories and practices difficult to standardize.

Digital Literacy and User Experience (Karen Doty)

User experience is concerned with designing products that are not only easy to use but also purposeful. There are many ways to define user experience. According to Russ Unger and Carolyn Chandler, authors of A Project Guide to UX Design: For User Experience Designers in the Field or in the Making, “User experience design is the creation and synchronization of the elements that affect users’ experience with a particular company, with the intent of influencing their perceptions and behaviour”. We are currently living in a world where the user is required to become more and more digitally literate in order to understand how to communicate. Becoming literate today encompasses navigating the use of technology, understanding how to best incorporate technology into daily use, and being able to be a productive user of technology.

As productive designers and theorists of user experience, we must have an understanding of how best to reach a changing society. Children born in the past five years are often considered digital natives, in that they don't remember a time without tablets, smartphones, and wifi. However, not every child falls into this category. Providers of good user experience must design to fit the needs of all users while continuing to support an increasingly digitally literate society.