Product Experience Benchmarking @ Blackbaud
Problem: Until two years ago, the Blackbaud UX team relied on a bi-annual product experience survey to assess all of its major products; the program was halted when a lead researcher left the team. The existing instrument was overly complicated, both to fill out and to analyze, and it used sub-optimal benchmarking metrics. In short, it did not give the UX team the actionable insights they needed.
Solution: I was tasked with relaunching the product experience survey program. I redesigned the survey, adding updated benchmarking metrics and questions that would robustly assess usability and satisfaction both quantitatively and qualitatively, and I ensured that the survey would be easy to fill out and easy to analyze. I distributed the survey to 21,461 users of a Blackbaud fundraising application (RENXT), analyzed the data, and presented my results to the UX team and the RENXT executive leadership team.
My Role: UX Researcher
Time Frame: May '19 - August '19 (four months)

Analysis Highlights
Benchmark Scores
By including common benchmarking metrics in the survey, the UX team can measure how Blackbaud's products compare to others in the industry. Each time the survey is distributed to users of a given product, the new benchmark scores can be added to a cumulative graph, illustrating change over time.
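The write-up doesn't name the specific metrics used; as an illustration only, one widely used industry benchmarking instrument is the System Usability Scale (SUS), whose standard scoring can be sketched as:

```python
def sus_score(responses):
    """Convert ten 1-5 SUS item responses into a 0-100 score.

    Odd-numbered items are positively worded (each contributes response - 1);
    even-numbered items are negatively worded (each contributes 5 - response).
    The contributions are summed and multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A respondent who strongly agrees with every positive item and strongly
# disagrees with every negative item earns the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Because scores like this are comparable across products and survey waves, each wave's mean can be appended to the cumulative graph described above.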
Tasks by Satisfaction & Ease of Use
After indicating which tasks they perform using RENXT, respondents rated each task by satisfaction and ease of use on a five-point Likert scale. The shaded green and blue sections of the graph below represent the zones where tasks are meeting or exceeding industry benchmarks for satisfaction and ease of use. Bubble size indicates the number of users who perform each task. The tasks that fall within the red oval are those with sub-optimal scores and relatively high numbers of users, and are thus areas to focus on moving forward.
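The prioritization rule the red oval encodes — sub-benchmark scores combined with a large user base — can be sketched as a simple filter. The task names, benchmark values, and user-count threshold below are hypothetical:

```python
# Hypothetical benchmarks and threshold; the real values came from
# industry data and the survey itself.
SAT_BENCHMARK = 4.0
EASE_BENCHMARK = 4.0
MIN_USERS = 100

tasks = [
    # (task, mean satisfaction, mean ease of use, number of users)
    ("Record a gift", 4.3, 4.1, 950),
    ("Build a query", 3.2, 2.9, 640),
    ("Export a report", 3.5, 3.4, 80),
]

def focus_areas(tasks):
    """Return tasks below benchmark on either dimension with many users."""
    return [
        name for name, sat, ease, users in tasks
        if (sat < SAT_BENCHMARK or ease < EASE_BENCHMARK) and users >= MIN_USERS
    ]

print(focus_areas(tasks))  # → ['Build a query']
```

"Export a report" scores poorly but affects few users, so it falls outside the oval; "Build a query" scores poorly and is widely used, so it becomes a focus area.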
Benchmark Scores by Role
The UX team designs with all user types in mind; however, some user types (or "roles") are more populous than others, and some use a particular product more extensively than others. By segmenting the survey pool by role, I identified which roles had the highest benchmark scores and which had the lowest, thereby highlighting the user types that require more attention.
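The segmentation step amounts to grouping scores by role and comparing means. A minimal sketch, with entirely hypothetical roles and scores:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (role, benchmark score) pairs from individual respondents.
responses = [
    ("Gift Officer", 72), ("Gift Officer", 68),
    ("Database Admin", 55), ("Database Admin", 61),
    ("Event Manager", 80), ("Event Manager", 77),
]

# Group scores by role, then compute each role's mean benchmark score.
by_role = defaultdict(list)
for role, score in responses:
    by_role[role].append(score)

means = {role: mean(scores) for role, scores in by_role.items()}
lowest = min(means, key=means.get)   # the role most in need of attention
highest = max(means, key=means.get)

print(lowest, highest)  # → Database Admin Event Manager
```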
Qualitative Analysis, Organized Into "Topic Reports"
Using MAXQDA (a qualitative analysis software program), I categorized 520 responses to the following open-ended questions:
Q: What, if anything, do you find frustrating or unappealing about the web view of Raiser’s Edge NXT? 

Q: What new capabilities would you like to see for the web view of Raiser’s Edge NXT?
I subsequently produced eleven Topic Reports, each focusing on a different feature or area of the product, in which I further distinguished comments as either "usability problems" or "new features." I shared the Topic Reports with the Product Managers, UX Designers, and Developers working on the relevant features and areas.
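MAXQDA coding is done by hand in a GUI, but the categorization step has a rough programmatic analogue. A simplified keyword-matching sketch (the topic names and keywords are hypothetical, and real qualitative coding is far more nuanced):

```python
# Hypothetical topic keywords; the actual coding was done manually in MAXQDA.
TOPICS = {
    "Reporting": ["report", "export", "dashboard"],
    "Search": ["search", "filter"],
    "Performance": ["slow", "loading", "lag"],
}

def code_comment(comment, comment_type):
    """Assign a comment to topics by keyword match.

    comment_type is 'usability problem' (from the frustration question)
    or 'new feature' (from the capabilities question).
    """
    text = comment.lower()
    matched = [t for t, kws in TOPICS.items() if any(k in text for k in kws)]
    return [(t, comment_type) for t in matched] or [("Uncategorized", comment_type)]

print(code_comment("Reports are slow to export", "usability problem"))
# → [('Reporting', 'usability problem'), ('Performance', 'usability problem')]
```

Grouping the coded comments by topic then yields one bucket per Topic Report, with each comment already labeled as a usability problem or a feature request.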
Evaluating the Instrument
The high survey completion rate (the percentage of respondents who completed the survey once they opened it) validates the survey's simplified design:
- My Survey: 78% completion rate
- The Predecessor Survey: 63% completion rate
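With respondent counts in hand, the gap between the two completion rates can be checked for statistical significance with a two-proportion z-test. Only the rates above come from the surveys; the counts in this sketch are hypothetical:

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 78% vs. 63% completion; the respondent counts (3,000 each) are hypothetical.
z, p = two_proportion_z(0.78, 3000, 0.63, 3000)
print(z > 1.96 and p < 0.05)  # significant at the 95% level
```

At sample sizes anywhere near these, a 15-point gap is far too large to attribute to chance.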

The streamlined analysis process also validates the survey's simplified design; I chose the questions and response formats strategically so that analysis would be as short and simple as possible:
- My Survey: 2.5 weeks to complete the analysis
- The Predecessor Survey: 4-6 weeks to complete the analysis

Feedback from my colleagues confirmed that the insights I generated were actionable, and that the product experience benchmarking program I developed would be continued after my departure.