Assessing Product Experience @ Blackbaud
Overview
Problem: Until two years ago, when the program was halted, the Blackbaud UX team relied on a biannual product experience survey to assess all of its major products. The existing instrument was overly complicated (both to fill out and to analyze) and included sub-optimal benchmarking metrics.
Solution: I redesigned the survey, streamlining the flow of questions, simplifying the language, and adding updated benchmarking metrics and questions that robustly assess usability and satisfaction. I then distributed the survey to twenty thousand users of RENXT, a Blackbaud fundraising application.
My Role: UX Researcher
Time Frame: May 2019 - August 2019
  
   
Process
Analysis Highlights
Benchmark Scores
Because the survey now includes updated benchmarking metrics, the UX team can measure how Blackbaud's products compare with previous years' results as well as with other products in the industry.
Tasks by Satisfaction & Ease of Use
After indicating which tasks they perform in RENXT, respondents rated each task by satisfaction and ease of use on a five-point Likert scale.
The tasks that fall within the red oval are those with sub-optimal scores and relatively high numbers of users (areas to focus on moving forward).
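For illustration, a minimal pandas/matplotlib sketch of this task-level aggregation, assuming a hypothetical long-format export (renxt_task_ratings.csv) with respondent_id, task, satisfaction, and ease_of_use columns; the file and column names are assumptions, not the actual survey export:

    import pandas as pd
    import matplotlib.pyplot as plt

    # One row per respondent-task pair; ratings are on a 1-5 Likert scale.
    ratings = pd.read_csv("renxt_task_ratings.csv")

    # Mean satisfaction, mean ease of use, and number of respondents per task.
    summary = (
        ratings.groupby("task")
        .agg(
            satisfaction=("satisfaction", "mean"),
            ease_of_use=("ease_of_use", "mean"),
            n_users=("respondent_id", "nunique"),
        )
        .reset_index()
    )

    # Scatter each task by its two mean scores, sized by how many users perform it,
    # so low-scoring, high-traffic tasks stand out.
    fig, ax = plt.subplots()
    ax.scatter(summary["ease_of_use"], summary["satisfaction"], s=summary["n_users"])
    for _, row in summary.iterrows():
        ax.annotate(row["task"], (row["ease_of_use"], row["satisfaction"]))
    ax.set_xlabel("Mean ease of use (1-5)")
    ax.set_ylabel("Mean satisfaction (1-5)")
    ax.set_title("RENXT tasks by satisfaction and ease of use")
    plt.show()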
Benchmark Scores by Role
By segmenting the respondent pool by role, I identified which roles had the highest and lowest benchmark scores, highlighting the user types that require more attention.
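A similar sketch for the role segmentation, again assuming hypothetical role, benchmark_score, and respondent_id columns in the response export:

    import pandas as pd

    responses = pd.read_csv("renxt_survey_responses.csv")  # one row per respondent

    # Mean benchmark score and respondent count per role, sorted ascending so the
    # lowest-scoring roles (the user types that need the most attention) come first.
    by_role = (
        responses.groupby("role")
        .agg(
            mean_score=("benchmark_score", "mean"),
            respondents=("respondent_id", "count"),
        )
        .sort_values("mean_score")
    )
    print(by_role)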
Qualitative Analysis, Organized Into "Topic Reports"
Using MAXQDA (a qualitative analysis software program), I categorized over 500 responses to the following open-ended questions:
Q: What, if anything, do you find frustrating or unappealing about the web view of RENXT? 

Q: What new capabilities would you like to see for the web view of RENXT?
I produced eleven Topic Reports, each focusing on a different feature or area of the product; within each report, I further distinguished comments as either "usability problems" or "new features."
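The coding itself was done in MAXQDA; purely as an illustration of the Topic Report structure, the sketch below tallies an assumed CSV export of coded comments (topic, comment_type, and comment are hypothetical column names):

    import pandas as pd

    coded = pd.read_csv("coded_comments.csv")  # one row per coded comment

    # Count of "usability problem" vs. "new feature" comments per topic,
    # mirroring how each Topic Report groups its comments.
    report_index = coded.pivot_table(
        index="topic",
        columns="comment_type",
        values="comment",
        aggfunc="count",
        fill_value=0,
    )
    print(report_index)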
Evaluating the Instrument
The high survey completion rate (the percentage of respondents who completed the survey once they opened it) validates the survey's simplified design:
- My Survey: 78% completion rate
- The Predecessor Survey: 63% completion rate
  
The streamlined analysis process also validates the survey's simplified design; I structured the questions and response types so that the analysis would be as short and simple as possible:
- My Survey: 2.5 weeks to complete the analysis
- The Predecessor Survey: 4-6 weeks to complete the analysis
  
Feedback from my colleagues confirmed that the insights I generated were actionable and that the product experience benchmarking program I developed would continue after my departure.