Human Computer Interaction:
Analytic Evaluation

Author

Mick McQuaid

Published

March 23, 2023

Week ELEVEN

(ten if you don’t count spring break)

Today

  • Q and A from last time
  • Discussion leading (Mick)
  • Design Critique (Mick)
  • Article Presentation (Mick)
  • Break (break may be earlier or later in sequence)
  • Tutorial on Components

Q and A from last time

Learning

  • It was cool seeing and evaluating a usability test! It was also important to review the 5 Es and acknowledge that some of them have changed and why.
  • The most important point I learned today was about user testing; the video you showed us of the girl was really interesting.
  • I don’t know if it was necessarily the most important, but my favorite part of the class was the usability test with the little girl. I’m very interested in the adaptations in usability tests for different age groups and levels of tech familiarity. This was a perfect example of some of those adaptations.

More Learning

  • Empirical evaluation in UX runs on a spectrum; there are different tools for generating different insights.
  • We learned lots of things related to the user test.

Q&A

Similar to interviewing, are usability tests something that you get better at over time?

Answer: Experience helps, but usability tests are easier to scaffold than interviews, so beginners can be successful at them.

Discussion

Focus Groups

I am really interested in using focus groups for UX evaluation; however, I’m confused about why they aren’t considered an empirical UX method. Maybe the distinction between empirical methods and analytic methods is another example of a categorization that just varies by company, but I would find focus groups helpful for getting a breadth of perspectives on a certain design.

More on focus groups

My question is whether focus groups are beneficial in the research process, and in what scenarios. Some participants may not pitch in, either because they’re shy or because they don’t get a chance to, and there’s always a chance of participants being influenced by others’ answers.

Even more on focus groups

Potential limitations include the fact that participants can be influenced by others and that some people may feel uncomfortable sharing in a group setting. To address these issues, UX designers can structure discussions to encourage open dialogue, limit group size, and use a skilled facilitator. It is important to use focus groups in conjunction with other research methods to ensure the accuracy and reliability of the data.

Subjectivity

What are some of the biggest challenges in analyzing qualitative user data? How can we ensure that subjective opinions are not given undue weight and that findings are accurately represented?

Success

How do we determine the success of a UX evaluation? Is it based on the impact it has on product design and development, or on the satisfaction and feedback of individual users?

Iteration

Chapter 26 talks about knowing when it becomes time to stop iterating based on data from evaluations, but how do teams determine when it is time to stop evaluating, or when evaluation processes may be lacking in thoroughness?

Experts

I found myself wondering whether methods like heuristic evaluation (HE) and UX expert tests result in UX design teams designing products and systems for UX designers rather than for their desired target user base.

Trust

Building on the discussion about the emotional impact of design and intuitive user interfaces, I wanted to ask how one would use heuristic evaluation to make privacy/data collection understandable for the user in a way that truly gives them freedom to take relevant action. How does one go about analyzing whether a system truly communicates trust and care with respect to user data without repeating clichés like “we care about user privacy”? Also, as a novice researcher, how do I use heuristics/cognitive evaluation without limiting myself to my personal perspective?

Critical Incidents

What counts as a critical incident varies from user to observer. For me, critical incidents are a new concept that puts a name to something routinely done in evaluative research. My question is, would failure to complete a task be a critical incident, or is task completion considered a metric? Are critical incidents more detailed than simply stating that a participant failed at a task?

Metrics

How can UX designers ensure they are using metrics effectively and making data-driven design decisions? What are the best practices for collecting, analyzing, and interpreting metrics in UX design?

Design Critique

  • My twenty-year-old camera bag!
  • It influences my choice of laptop!
  • I forgot I had it on my shoulder at the airport TSA checkpoint
  • It’s quiet and dependable and parts that wear out can be replaced
  • The design has been updated (I’ll show you)

Article Presentation

Prereqs

If you don’t know anything about ableism, disability identity, or the history of disability rights, check out https://mickmcquaid.com/accessibilitySlides.html

Initial interviews found …

… two forms of access inequality

  1. Access differential: the gap between the access that nondisabled and disabled students experience, and
  2. Inequitable access: the degree of inadequacy of existing accommodations to address inaccessibility

(reported in Shinohara, McQuaid, and Jacobo (2020))

Method

Reflexive thematic analysis (Braun and Clarke (2022))