Insights Powered by Machine Learning
CATEGORY
NEW FEATURE
DURATION
4 MO
YEAR
2023
FEATURE OVERVIEW
Funnel Insights powered by machine learning is a concept for a feature that proactively notifies users of significant conversion rate differences in their funnels.
ROLE
Lead Product Designer
TASK
Lead design and research efforts to create the first machine learning-powered feature in FullStory.
RESPONSIBILITIES
Collaborate with data science to learn the capabilities of the algorithm, recruit participants on UserTesting, conduct 10+ hours of interviews to understand users’ wants and pain points, design a template for testing, conduct 10+ hours of value sessions with customers to de-risk value, and present findings to leadership.
The Process
Learning the Algorithm
Collaborated with data science to understand the full potential of the algorithm
Discovery
Conducted interviews to better understand what users want to be notified about
Value Sessions
Conducted value sessions with customers to determine the viability of the feature
Our Recommendation
Shared findings with the company’s leadership team along with a recommendation for next steps
Learning the Algorithm
With the data science team’s new algorithm reaching a solid state, I wanted to fully understand its capabilities before jumping into the design process.
Discovery
We needed to better understand what users want from a feature like this. What types of insights do they care about?
Value Sessions
After learning what’s important to users, we had to evaluate whether the algorithm’s current state provided enough value.
Our Recommendation
Following the completion of the pilot program, we gathered as a team to discuss how to move forward. Discussions around this feature are still ongoing.
Gaining Context
To gain the right context before designing this feature, we held 7+ hours of working sessions in which the data science team slowly walked us through how the algorithm works. Through these sessions I learned what types of insights could be generated and what factors are taken into account to generate them. This understanding helped tremendously when figuring out what data should be surfaced to users.
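To make the concept concrete, here is a minimal, purely hypothetical sketch of how a “significant conversion rate difference” between two funnel segments might be flagged. It uses a simple two-proportion z-test with made-up numbers; it is not FullStory’s actual algorithm, which the case study does not detail.

```python
# Purely illustrative: one way an insight like "segment A converts
# significantly worse than segment B" could be flagged. This is NOT
# the actual algorithm; the segments, numbers, and threshold are made up.

from math import sqrt, erf

def conversion_rate_difference(conv_a, total_a, conv_b, total_b):
    """Two-proportion z-test on conversion rates for two user segments."""
    p_a, p_b = conv_a / total_a, conv_b / total_b
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical funnel step: mobile vs. desktop users reaching checkout.
rate_a, rate_b, p = conversion_rate_difference(120, 1000, 180, 1000)
if p < 0.05:
    print(f"Notify: {rate_a:.0%} vs {rate_b:.0%} conversion (p={p:.3f})")
```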
Defining a Valuable Insight
An open question we had as a team was: what defines a valuable insight? Value lies heavily in the eye of the beholder, and it was up to our users to help us define it. We gathered data on which attributes (i.e., user, behavioral, and error) users care about and at what level of complexity they want to see them. Having users identify what’s important to them gave us a better understanding of what the workflow and design might look like.
Session Structure
The structure of these sessions was very intentional and had each participant build a narrative around the concept of data. We asked questions like, “If Data were an employee at your company, what would you want them to tell you?” This exercise sparked really interesting conversations that dug deep into what users really want from our product.
Creating a Pilot Program
De-risking value and feasibility were top priorities for the team before moving forward with the feature. Given the current engineering constraints, would we be providing enough value for our customers? My PM and I set out to create a research pilot program to speak directly with our top customer accounts about whether they wanted a feature like this in the product. We assessed this through value sessions where we walked through a design concept and collected scores on five statements that helped assess value.
Wizard of Oz and Value
I collaborated closely with the data science team to identify the most important data points to display in the interface. This allowed us to use the Wizard of Oz method: we created a design template that was manually populated with real customer data before each session. The mocks showcased real insights generated by the algorithm, ranging from very simple to complex. After showing each insight, we ran a value scoring exercise where we walked through five statements and asked each participant to give a score from 1 to 5. These scores were then taken into account during analysis to determine whether the feature’s current state provided enough value. Below is an example of one of the statements participants scored.
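For a sense of how scores like these might feed into analysis, here is a minimal, purely hypothetical sketch of aggregating value-session scores. The participants, scores, and the 4.0 bar are illustrative assumptions, not the team’s actual analysis.

```python
# Hypothetical sketch of aggregating value-session scores.
# Participant names, scores, and the 4.0 bar are illustrative only.

from statistics import mean

# Each participant scores five value statements from 1 (strongly
# disagree) to 5 (strongly agree).
sessions = {
    "Participant A": [5, 4, 4, 3, 5],
    "Participant B": [4, 4, 3, 4, 4],
    "Participant C": [5, 5, 4, 4, 5],
}

# Average score per statement across all participants.
per_statement = [mean(scores) for scores in zip(*sessions.values())]

for i, avg in enumerate(per_statement, start=1):
    print(f"Statement {i}: {avg:.1f}")

# A simple overall signal: does the concept clear an agreed-on bar?
overall = mean(per_statement)
print(f"Overall value score: {overall:.1f} "
      f"({'clears' if overall >= 4.0 else 'below'} the 4.0 bar)")
```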
Presenting to Leadership
While value was being de-risked, the data science and engineering teams were conducting research of their own to dig into technical constraints. Taking both sets of findings into account, we came up with a recommendation for what the future of this feature may look like and recently presented it to company leadership to get buy-in.