Declassified
Created a course evaluation feature within Fizz to surface raw, honest class reviews. Led UI/UX design, A/B testing, and AI integration to enhance decision-making for college students.
Background
Context + Motivation
Fizz is a social media platform for college students, with each community exclusive to students at a single university. Known for the candor its anonymous style of interaction invites, Fizz is where students go for a wide variety of tasks that let them interact with the raw, honest version of their peers.
When deciding whether or not to take a course, university students rely heavily on their peers' posts on Fizz for the same reason that upperclassmen are generally considered the best source of advice: honest course evaluations help students understand what they're truly getting themselves into with a class.
As an active Fizz user who had interacted with many others, I designed and prototyped an AI-integrated course evaluation feature to see how the process of collecting information on a class from Fizz could be streamlined.
Solution
Declassified is an in-app course evaluation feature that allows Fizz users to see what students really think about their classes based on posts they've made.
Early Research
Initial Insights
To start, I tested the way users currently look for course information on Fizz to identify pain points.
Data collection methods included a hierarchical task analysis, annotated pain points and process notes for a randomly selected tester, and a UI audit of the current user journey for this task.
Annotated testing observations
Annotated Hierarchical Task Analysis
UI Audit of Current Evaluation Process
Testing revealed that the current process of looking for class information was far more arduous than users anticipated; a typical search required multiple rounds of queries to build a holistic understanding of a course. The most common sentiments expressed were…
It is difficult to find information about specific aspects of a class in a single query.
Students generally feel very strongly about the classes they choose to post about.
Users assume that most questions related to a course have already been asked.
Students believe course reviews will be more honest on Fizz than on university-run evaluation sites.
From testing observations, I concluded...
Fizz users need a single location within the app where they can find raw opinions on multiple aspects of their university's classes.
As a brainstorming exercise, I created low-fidelity sketches of potential interface strategies to address this.
Deciding to design an in-app evaluation feature, I asked these questions…
HMW utilize the unique candor of Fizz to promote course evaluation?
HMW reduce the cognitive load of users trying to find course information?
HMW make older ideas on common class topics more visible?
HMW better understand which aspects of a course students want unfiltered opinions on?
HMW design a social feature that discourages addictive scrolling habits?
…and used them to create guiding principles for my prototype.
Design
Creating an informative, simplistic process
My main goal in creating the flow of this feature was to condense the multi-step process of searching for class information into as few steps as possible while still applying as many of the 10 Usability Heuristics as I could. The first step was being intentional about what I did and did not want my platform to include:
✅ What we are…
an internal evaluation system
an advice-seeking space for class-related inquiries
an efficient process that doesn't intentionally create lengthy user visits
a way to see up-to-date/trending thoughts on classes
❌ What we are not…
a course scheduling platform
a hateful, unproductive space to insult professors
an attention trap to deceive students into rabbit hole-esque scrolling on Fizz
Declassified User Journey
Prototyping
What are people looking for? What do people want to know about a class that requires a higher level of honesty?
The goal of this platform was to properly reflect the experience of asking a friend for honest thoughts on a course. Some class aspects testers mentioned looking for included:
attendance requirement
teaching style (lecture, sections, labs, etc.)
comparative workload (e.g., don't take alongside other heavy classes)
must takes/must avoids
how loosely the prerequisites are enforced
Ideation
I created low-fidelity sketches to better understand what I wanted the flow of information to be. What was the best way to pass information along? How much information already accessible on other platforms should be included here?
Low-Fidelity Prototyping
I initially evaluated two different user flows. Because so much information needed to be presented without overwhelming or boring the user, I used A/B flows to understand how users might interact with the system from two alternative starting points: from a class-related post, and from the evaluator tab itself.
These ideas were then translated into quick wireframes that explored both avenues.
Medium-Fidelity Prototyping
The main objective in moving wireframes into interactive prototypes was to create a prototype that could explain the feature's purpose through its functionality alone.
I opted to use a grayscale color scheme to better focus on functionality and flow rather than details and polish.
Creating a Design System
High-fidelity prototypes were based on 2 distinct visual design languages.
In my initial user testing, I found that dark mode is a common preference amongst students, so I wanted my design language to piggyback off of this and set black as the primary color. Fizz's visual language is simple, all white/black (depending on light/dark mode) with hints of color for emphasis.
Because of the time constraint of this project and the constraint of building within an established platform, I created visual languages that vary in emotion rather than in color palette alone (for the most part): one more calming and structurally elusive, the other more emotive and organizationally candid.
VISUAL A: "The Playful Minimalist"
VISUAL B: "Pop of Color"
A/B Feature comparison
Prototyping AI Integration
The role of AI in my feature was clear: if students wanted to know what everyone else thought without having to read post after post, AI would condense the information and provide it as a summary (a rough sketch of how this could work follows the list below).
I decided to use AI for three aspects of my prototype:
The "Overview" section of each class' landing page, condensing the complete course evaluations for each quarter of STATS 60 (available to students through the university) along with the basic information about the class from our course catalog into a description that provides insight on student feelings rather than solely on objective course thoughts.
"The Scoop": professor-specific blurbs and quarter-specific reviews and preferences from opinions mentioned in evaluations.
The tags assigned to each course. Though they might appear to be placeholders, I used AI to compare my tag set against student feedback to determine which tags belonged on which card.
AI information prompt example
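The actual prompt isn't reproduced here, but as a rough illustration of how this condensing step could be wired up, the sketch below feeds catalog text and evaluation excerpts to an LLM and asks for a student-voiced overview plus matching tags. It assumes an OpenAI-style chat-completion client; the model name, tag list, and the function and parameter names (build_overview, catalog_description, evaluation_excerpts) are my own hypothetical choices, not part of the prototype.

```python
# Hypothetical sketch of the AI summarization step: condense official course
# evaluations + catalog info into an "Overview" blurb and pick matching tags.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment;
# the model name, tag list, and data shapes are illustrative only.
from openai import OpenAI

client = OpenAI()

CANDIDATE_TAGS = ["attendance required", "heavy workload", "great lectures",
                  "take with a light quarter", "loose prerequisites"]

def build_overview(course_code: str, catalog_description: str,
                   evaluation_excerpts: list[str]) -> str:
    """Ask the model for a short, student-voiced overview plus 2-3 tags."""
    prompt = (
        f"You are summarizing student sentiment for {course_code}.\n"
        f"Catalog description:\n{catalog_description}\n\n"
        "Student evaluation excerpts:\n"
        + "\n".join(f"- {e}" for e in evaluation_excerpts)
        + "\n\nWrite a 3-4 sentence overview focused on how students FEEL "
          "about the class (workload, teaching style, attendance), not just "
          "objective facts. Then, on a new line starting with 'TAGS:', list "
          f"the 2-3 best-fitting tags from: {', '.join(CANDIDATE_TAGS)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage with made-up inputs:
# print(build_overview("STATS 60", "Introduction to statistical methods…",
#                      ["Attendance wasn't taken but the psets were long"]))
```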
Testing
Understanding user comfort
User testing for this product, outside of initial ideation testing, was completed with two primary goals:
to determine if functionality clearly conveyed purpose
to gauge accessibility and usability through the 10 Usability Heuristics (as defined by the Nielsen Norman Group).
Testing Strategy
Each round of testing began with an Initial Testing Plan based on observed behaviors from the previous round.
Sample Testing Plan
For medium- and (initial) high-fidelity testing, I tasked 6 users with using my feature to find information about a class from other students and instructed them to think aloud so I could better understand what was and wasn't intuitive.
To facilitate usability tests, I utilized QuantUX to track user movement across the app.
Round 1: Medium-Fidelity
QuantUX heat maps and user journeys
Sample user testing summary from Round 1
Round 2: High-Fidelity (Functionality + Communicated Purpose)
The first part of high-fidelity prototype testing compared user preferences between the 2 visual design languages.
User testing visuals
Usability Testing Quotes from User A
Round 3: High-Fidelity (Usability + Cognitive Load)
The second part of high-fidelity prototype testing measured the cognitive load of the feature through simultaneous foot-tapping at one tap per second (a beep track was provided for pacing).
One user testing the cognitive load of the feature (foot tapping not pictured, but the activity is reflected in the accelerometer data)
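As a hypothetical sketch (not the analysis actually used in this project), the snippet below shows one way tapping consistency could be quantified from that accelerometer trace: detect tap spikes in the signal and measure how far the inter-tap intervals drift from the one-second target. The sampling rate, peak threshold, and function name are assumptions.

```python
# Hypothetical analysis of the foot-tapping trace: find tap peaks in the
# accelerometer magnitude signal and measure drift from the 1-tap-per-second
# pace. Sampling rate and threshold are illustrative only.
import numpy as np
from scipy.signal import find_peaks

def tap_consistency(accel_xyz: np.ndarray, sample_rate_hz: float = 50.0):
    """Return mean inter-tap interval (s) and its standard deviation."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)      # combine x, y, z axes
    # Taps show up as sharp spikes; require peaks at least ~0.5 s apart.
    peaks, _ = find_peaks(magnitude,
                          height=magnitude.mean() + magnitude.std(),
                          distance=int(0.5 * sample_rate_hz))
    intervals = np.diff(peaks) / sample_rate_hz        # seconds between taps
    return intervals.mean(), intervals.std()

# A mean interval drifting away from 1.0 s (or a larger standard deviation)
# during a task would suggest the task is demanding more of the user's attention.
```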
Likert Scale Evaluations
Accessibility audit (WCAG) + some prototype refinements
Testing Summary
A/B testing and usability testing surfaced specific insights along the way that helped me condense the flow of information without overwhelming users or encouraging unhealthy scrolling habits. The feature's purpose was understood and embraced through functionality and layout alone, and I found that, over time, users gravitated towards Visual B ("Pop of Color").
Reception
Gauging user excitement
I posted the feature mockups on Fizz to gauge user perception; though I was initially prepared for harsh criticism, I was delighted to see that those who chose to interact with the post were very excited by the concept.
Additionally, I had the opportunity to present the final product to the CEO/Co-Founder of Fizz, Teddy Solomon, and the Head of Product, David Vasquez. The idea was received well, and both spoke to the product's novelty and creativity, stating "they were surprised they hadn't thought of this before" (so exciting!!).
Conclusion
Lessons learned + next steps
Since then…
This was the first project I'd undertaken with this intensity and timeline; though the process of testing, designing, and creating so quickly, in just 3 weeks, was new to me, it was a challenge I am forever grateful for. I'm proud of the work I was able to produce, the people I got to speak to, and the growth as a designer I couldn't have achieved otherwise.
I'm incredibly grateful to Saad Riaz, Sarah Chung, Amelie Or, Allie Montoya, and all of the users, testers, and people who helped me along this process. Thank you all so much!
When I revisit this…
In my inevitable revisitation of this project, I hope to explore the aspects of this venture that I previously couldn't due to time constraints. Some of these aspects include:
large-scale testing with 50+ users
AI integration using an LLM
further exploration into the necessity of the product as a feature versus a separate platform
…who knows what else! The world is our oyster!
Feel free to play with the prototype! Best viewed in full screen.
Thank you so much for reading!