Design research for virtual scheduling flow

evaluative · generative · mixed-method · SaaS


Overview

Nava PBC is a civic technology consultancy. My contract was with the U.S. Department of Veterans Affairs (VA), where I worked on Caseflow, an internal SaaS application used to keep track of Veterans' cases, schedule benefits hearings, and send out hearing decisions. Unfortunately, the application was designed for in-person hearings, and a huge backlog of cases had built up by the start of the pandemic.

Goal

Reduce the VA's huge case backlog by adapting Caseflow to handle virtual hearings, reducing error rates, and automatically notifying participants of scheduled hearings.

(Did we meet it? Yes! Read on to see how...)


Role



UX Researcher/Designer. 
Other designers and engineers 
assisted with feedback, prototype creation, synthesis, and pushing to production.

Tools



Figma, Mural

Team



me, 
1 UX designer, 
1 design lead, 
3 engineers, 
1 product manager

Timeline



5 weeks






Identifying Users and Key Needs

To avoid redundancy, I reviewed previous usability testing for inspiration. From that review and the research goal above, I identified our two user groups and developed the key needs.


Key Needs:

  1. Virtual hearings need unique, non-overlapping hearing times

  2. Virtual hearings notify participants (Veterans and representatives) by email almost immediately after they're scheduled, leaving little room for error: hearing coordinators need to schedule accurately, and judges need to mark their decisions correctly.
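Key Need 1 boils down to interval-overlap checking. A minimal sketch of what a conflict check could look like (function and variable names are hypothetical, not Caseflow's actual implementation):

```python
from datetime import datetime, timedelta

def overlaps(start_a, end_a, start_b, end_b):
    """Two intervals overlap if each one starts before the other ends."""
    return start_a < end_b and start_b < end_a

def find_conflicts(proposed_start, duration_minutes, scheduled):
    """Return the scheduled hearings that collide with a proposed slot.

    `scheduled` is a list of (start, end) datetime pairs.
    """
    proposed_end = proposed_start + timedelta(minutes=duration_minutes)
    return [
        (start, end)
        for start, end in scheduled
        if overlaps(proposed_start, proposed_end, start, end)
    ]

# A toy docket with two 30-minute hearings.
docket = [
    (datetime(2020, 7, 1, 9, 0), datetime(2020, 7, 1, 9, 30)),
    (datetime(2020, 7, 1, 10, 0), datetime(2020, 7, 1, 10, 30)),
]
# A 9:15 slot collides with the 9:00-9:30 hearing; a 9:30 slot does not.
print(find_conflicts(datetime(2020, 7, 1, 9, 15), 30, docket))
```

A real scheduler would also need to account for timezones and per-judge dockets, but the core "unique, non-overlapping times" constraint is just this check run before saving.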


Users:

  1. VA Hearing Coordinators: The “worker bees”; they do the scheduling and use Caseflow daily, for most of their day.

  2. Veterans Law Judges: They go into Caseflow once or twice a week to check off their decisions on a case.



Determining research questions

I wrote down research questions and mapped each one onto Key Needs 1 and 2 (usually captured in an Excel table). In a feedback meeting with my sprint team, I sorted the questions into primary (must-answer) and secondary (would be nice to answer).

Hypothesis

A hearing scheduling experience designed for virtual hearings — one which enables Hearing Coordinators to schedule efficiently and effectively while minimizing errors — will help the Board achieve the goal of increasing the number of hearings held year to year and reducing the average number of days to complete a decision.

Primary Research Questions
  1. How easily and efficiently can Hearing Coordinators schedule virtual hearings to custom times? (Needs 1 & 2)
  2. What are Hearing Coordinators' expectations around flexibility to select custom hearing times and overbook hearing days? (Need 1)
  3. Do Judges understand the redesigned Daily Docket page? (Need 2)
Secondary Research Questions
  1. How do Hearing Coordinators schedule and update virtual hearings? (Need 1)
  2. What contextual information that we haven't considered yet might Hearing Coordinators find helpful when choosing a hearing day and time? (Needs 1 & 2)
  3. What concerns might Hearing Coordinators have about displaying both Legacy and AMA hearing requests in the same scheduling queue? (Need 1)



Determining methods: mixed-method moderated interviews

I conducted mixed-method usability tests with Caseflow Hearing Coordinators and Judges over the span of 3 weeks. These included:

Moderated Interviews (instead of unmoderated)
Why? Unmoderated usability tests with a service like UsabilityHub would have been the least labor-intensive option, but I chose moderated sessions because I wanted to 1) gather contextual information about users' current experience and 2) make participants comfortable enough to keep returning for future testing.
Contextual Inquiry
Why? To answer the secondary research questions, it would be ideal to observe our participants in their natural habitat. So for the first part, I asked them to share their screens and schedule a hearing using the current Caseflow software.
Think-Aloud / Concept Testing
Why? Previous usability testing gave us enough to develop a new prototype to get in front of users (and in my opinion, the sooner that happens, the better). For the second part, I asked users to complete a series of tasks while thinking aloud.



Planning deliverables


Research Plan

Included recruitment email templates and calendar invites. Link Incoming.
Moderator Guide

2 separate guides: one for the hearing coordinators, and one for the Veterans Law Judges.
Usability Note-Taking Template

I created a note-taking template and ran training sessions with engineers before the user sessions to teach them how to take notes. They appreciated the face-to-face user interaction, and it was a great exercise in research democratization.
Equity Pause Workshop

Made in Mural; used to mitigate biases before the synthesis phase. Link incoming.



Synthesis Process

  1. Ran an equity pause workshop in Mural to mitigate biases in the synthesis process
  2. Compared notes to transcripts
  3. Highlighted significant quotes in Google Docs
  4. Transferred quotes to a hybrid affinity diagram / journey map in Mural
  5. Led an affinity diagramming workshop to create high-level insights
  6. Led a workshop to prioritize insights and categorize them in JIRA


Synthesis Mural
Action Priority Matrix to prioritize insights
Definitions of the high risk / low risk / high effort / low effort categories as they pertain to the project and client goals.
Affinity diagramming workshop written instructions. Helpful for anyone who has ADHD or who came in late!


The synthesis Mural was structured as a journey map of Figma screenshots of each frame in the prototype, so reviewers could easily see the quotes under each step in the journey (and remember what each screen looked like).

I chose the action priority matrix in order to deliver the user story opportunities with the highest potential return on investment, since the VA had such a huge backlog of cases. Action priority matrices require a good understanding of business and client goals, because impact assessment is inherently subjective; we had that understanding thanks to our longstanding relationship with the VA and consistent check-ins with client stakeholders throughout the project.



Insights / Design


Examples of actionable insights generated: custom timeslots portion
  • Insight 1: Hearing coordinators often work with multiple tabs open. They use this to keep track of what hearings they’ve already scheduled, but have to keep clicking back and forth.
  • Action 1: We added more detailed information in the success banner at the end of the workflow because that was a direct request.

  • Insight 2: Even though this was a pattern elsewhere in the app, the hearing coordinators couldn’t tell that the Veteran’s name directly linked to the Case Details page.
  • Action 2: Realizing that this was not in keeping with the hearing coordinators’ mental model, we made the “Case Details” page a separate, findable link.

  • Insight 3: Hearing coordinators prioritized the Veteran’s timezone and wanted it front and center. However, it was still useful to see the Judge’s timezone at the same time.
  • Action 3: Instead of having the times sit side-by-side, I used the principle of scale in visual hierarchy to have the more important time zone at the top, and the less important on the bottom in a smaller font.
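Action 3 amounts to rendering one hearing instant in two timezones, with the Veteran's shown more prominently. A minimal sketch using Python's zoneinfo (the zone names and function are illustrative, not Caseflow's code):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

def hearing_times(utc_time, veteran_tz, judge_tz):
    """Render one hearing instant in both participants' timezones.

    The Veteran's time is returned first: in the final design it sits
    on top, in a larger font, per the visual-hierarchy principle.
    """
    fmt = "%I:%M %p %Z"
    return (
        utc_time.astimezone(ZoneInfo(veteran_tz)).strftime(fmt),
        utc_time.astimezone(ZoneInfo(judge_tz)).strftime(fmt),
    )

hearing = datetime(2020, 7, 1, 14, 30, tzinfo=ZoneInfo("UTC"))
veteran, judge = hearing_times(hearing, "America/Denver", "America/New_York")
print(veteran)  # 08:30 AM MDT
print(judge)    # 10:30 AM EDT
```

Storing the hearing in UTC and converting at display time keeps both renderings consistent; the UI then only has to decide which one gets the larger type.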






Immediate Outcomes


  • Both the primary and secondary research questions were answered
  • The design features I incorporated into the tested prototype were pushed to production within 2 weeks, saw universal adoption, and prompted lots of “thank you” emails from our awesome VA staff. Hearing coordinators felt that they were making “less mistakes” when scheduling hearings.

Click to watch full prototype flow.



Over time, the number of hearings increased by 50% in one year, hearing coordinators saved time (one reported saving 10-15 minutes per case), and scheduling efficiency improved by 7% in one month.


“Your genuine empathy and concern for participants when you were running the interview was awesome.” -Observing Designer

“Thanks to you for continuing to make our research process open and accessible to non-designers” -Sprint Team Engineer

“Thanks so much, I really enjoy these sessions and I really feel heard” 

-Participant