A multinational company recognized that it needed to rethink its IT ticketing system. Dated interfaces, complex processes, and complicated language were causing employee dissatisfaction, ultimately leading employees to waste significant time trying to resolve IT issues both within the current system and through workarounds. My role as a design researcher was to test the design concepts and the performance of specific user interface elements.
Sprint 1 Research Questions
/ How do we increase self-service adoption of the ticket resolution application?
/ How do we reduce user error and decrease the time taken to successfully close a ticket?
/ What features would people find useful in a future version of the application?
I conducted 7 one-on-one moderated concept testing sessions. I shared concepts with participants, who were located around the world, through screen-sharing software. The concepts were presented to each participant in a different order to counteract recency and primacy biases.
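The study does not specify the exact counterbalancing scheme used, but one simple approach to varying presentation order across participants is to rotate the concept list. The function and concept labels below are illustrative assumptions, not the actual study materials:

```python
def rotated_orders(concepts, n_participants):
    """Yield one presentation order per participant by rotating the
    concept list, so each concept appears in each position roughly
    equally often across the schedule of sessions."""
    k = len(concepts)
    for i in range(n_participants):
        shift = i % k
        yield concepts[shift:] + concepts[:shift]

# Hypothetical concept labels for a 7-participant schedule.
concepts = ["Concept A", "Concept B", "Concept C"]
for participant, order in enumerate(rotated_orders(concepts, 7), start=1):
    print(participant, order)
```

A full balanced Latin square would also control for carryover effects between adjacent concepts; simple rotation only balances position.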
Participants were guided through a series of open-ended questions about the following:
1. Initial preference
2. Feature identification
3. Cognitive walkthrough
4. Rating against previously defined design principles
The lead designer and I worked together to ensure that the concept prototypes were meaningfully different from each other and that they were developed to a level of fidelity appropriate to the specific research questions.
Concept Testing Findings
I probed participants along themes of visual preference; screen- and task-specific information flows; apparent, intended, and ideal information hierarchy; the usefulness of user interface controls; and interactions with those controls.
There was a clear preference for one concept, both in visual appeal and in overall ratings. Participants also preferred seeing more information, even when they did not understand the details, because it gave them a feeling of progress. The cognitive walkthrough helped us discover previously unidentified use cases, such as bulk processing and submitting tickets on behalf of others. Some findings went beyond the scope of the research questions but shaped hypotheses for future product development sprints.
Sprint 2 Research Questions
The second sprint focused on detailed design. Research questions included:
/ What are the most intuitive search criteria?
/ What are people's expectations of filtering mechanisms?
/ How might people use 'filter', 'sort', and 'search' in unison?
/ What is the most clear and visually appealing style for progress bars?
/ How specific should estimated completion times be?
/ What is the perceived utility of activity logs used internally by resolution teams?
/ What kinds of notifications should be surfaced and how?
I conducted hour-long one-on-one moderated usability tests with 9 participants. While the participant sample in the previous sprint consisted of 'super-users', this round included participants with more varied usage patterns. For each of 10 user interface elements, 2-3 different options were tested.
Usability Testing Findings
There were distinct differences in how the two user groups used the current system and developed workarounds, and in what they needed from a self-service application. This led the team to prioritize the needs of one group over the other, and to relabel and highlight certain terms to address the prioritized use cases. Specific findings and recommendations around the needs of the other user group informed future feature considerations and a strategy for transitioning all user groups to the new system.
While the findings of the first sprint's research led to a decision to show more information rather than less, this round of testing with users who had more varied usage patterns showed the need for more upfront filtering mechanisms before displaying detailed views. The difference in ticket volume between super-users (who open 50-100 tickets at a time) and low-volume users (who open only 2-3 tickets per year) was significant enough that the two groups would use user interface controls differently. Low-volume users did not need filters at all, while super-users did not need sorting functionality at all and instead needed to operate on tickets in bulk. The user experience goal was to increase self-service adoption and thereby reduce the need for 'super-users'. But by addressing the needs of super-users in the short term, we ensured that the transition was smooth and effective.
Logic and meaning had to be consistent with what people expected, and at the same time, logic and visual design had to set people's expectations correctly. For example, when the ticket title was visually highlighted through larger text, people expected to be able to scan titles to find the one they were looking for. For titles to be easy to scan, they needed to make sense to the employees, not necessarily to the internal team resolving the ticket. It therefore became important for the ticket title to be editable when no other markers of differentiation made sense to users.
The research informed the product strategy, design, and development of the new ticketing application. The team also delivered a product roadmap highlighting the near-, mid-, and long-term features to be designed and built over the next 18 months, based on the research findings.