I really enjoyed conducting the Think Alouds and felt that I learned a lot from doing them with real users on a real website. Our participants included a graduate student at UMBC who was familiar with the Erickson School, a working graduate of UMBC who was not familiar with the Erickson School and was not interested in graduate school, and a working graduate of UMBC who was not familiar with the Erickson School but was interested in pursuing some type of graduate education. I found it valuable to interview people who were both familiar and unfamiliar with the Erickson School because our primary users represent both of these groups. Also, familiarity with the school seemed to make some tasks easier to complete and led to less harsh comments when something went wrong.
I found it sometimes difficult to capture all of the good comments I heard as I furiously took notes, and it was often difficult not to respond to amusing comments or comments that I really agreed with. By using this method, I was able to gather information about the website's usability that I couldn't easily have guessed on my own. Thinking about the UARs, I felt that I didn't learn many new things from the Heuristic Evaluations that I hadn't already flagged as issues. I found it difficult to identify critical incidents in certain situations because the list of tasks motivated participants to keep trying things they might have given up on if they didn't have a clear goal or weren't part of these user tests. For example, all participants were able to complete Task #3, but all of them also made negative comments about the location of the content. If they hadn't been directed to look for an article, they might never have noticed the content at the bottom of the page.