Highlights
Before you got into product, you had some experience teaching high school science. You said that teaching exposed you to outcome-based thinking. Can you talk a little bit about that experience and what outcome-based thinking was like in that environment?
Michelle: I didn't have a typical path into product and I think that's something you hear more and more these days. The path is always fairly windy.
I started out actually as a pre-med student. I was really focused on becoming a doctor. I did everything that I possibly could to get the experiences that would lead me to this path of one day becoming a physician.
So, I spent some time in my senior year of college working on a health mission to El Salvador. One of the things I was doing there was organizing and leading a group of students to run both onsite, one-day medical brigades in different rural communities across El Salvador and public health workshops. These workshops focused on things like dental hygiene and feminine care.
However, what I started realizing in that kind of environment is that I was actually never in the room with the doctor. My goal was to be in the room with doctors talking to patients so I could spend my time hearing, absorbing, learning. This wasn't happening. I had a couple of other experiences too that weren't quite what I wanted. And so, I actually stepped back and said, you know, I'm not sure if going straight into medicine is the right thing for me.
That led me to believe that maybe I just needed to take a pause and reevaluate. Medicine is also very cost-prohibitive. Considering my options, I remembered that I had minored in education as an undergrad. So I ended up joining Teach for America.
I knew that I could do science. And so I said, “Hey, I would love to teach high school science.” I was looking to go back to my home state of Texas and teach ninth and tenth grade chemistry and biology.
I started to really get the whole curriculum down. I knew how to balance chemical equations, the cycle of life– but I still doubted whether I could really teach students effectively. How could I measure if I was getting the students to where they needed to be?
At that time, everything was really oriented and organized around an end of year test. It was a state standardized test. In order to progress to the next grade you had to achieve a score of 70 or higher on this test. So as a new teacher, I'm like, “okay, well, let's take a step backwards.”
If my end goal is to get these kids to be able to pass a test at 70% proficiency or higher, how do I organize an entire quarter or a year around the outcomes that are needed to show that they actually understand? Teach for America had a really systematic way to approach teaching using objectives. They also made sure you were thinking about classroom management.
So, one of the things that I really started leaning into was objective-based thinking. If your objective is "What are the cycles of photosynthesis?", you have to be able to break that down into its component steps and then align the content and the exercises around those individual steps.
What I ended up doing was creating a tracker that broke down every single objective and aligned each one to the specific point in the text or curriculum that covered that topic.
And then, at the end of the day, when I had small quizzes or tests, I could align every question back to a specific objective. Then I could look at my color-coded Excel sheet and see whether or not the students actually got the objective. What percentage of my students understood and mastered that objective? How many were in yellow or red?
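To make the mechanics of that tracker concrete, here is a minimal sketch of the calculation it performs. The objective names, quiz data, and color cutoffs are hypothetical illustrations (the original lived in a color-coded Excel sheet), though the 70% line mirrors the state test's passing score.

```python
from collections import defaultdict

# Each quiz question is tagged with the objective it assesses (hypothetical names).
QUESTION_TO_OBJECTIVE = {
    "q1": "balance-chemical-equations",
    "q2": "balance-chemical-equations",
    "q3": "photosynthesis-steps",
}

def mastery_by_objective(responses):
    """responses: {student: {question_id: answered_correctly (bool)}}"""
    correct, total = defaultdict(int), defaultdict(int)
    for answers in responses.values():
        for question, is_correct in answers.items():
            objective = QUESTION_TO_OBJECTIVE[question]
            total[objective] += 1
            correct[objective] += int(is_correct)
    return {obj: correct[obj] / total[obj] for obj in total}

def color(pct):
    # Assumed cutoffs; 70% mirrors the state test's passing bar.
    return "green" if pct >= 0.70 else "yellow" if pct >= 0.50 else "red"

scores = mastery_by_objective({
    "ana": {"q1": True, "q2": True, "q3": False},
    "ben": {"q1": False, "q2": True, "q3": False},
})
for obj, pct in scores.items():
    print(f"{obj}: {pct:.0%} ({color(pct)})")
```

The same per-objective view is what drives the cohorting she describes next: students below the line on an objective can be paired with students above it.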
With that, I could actually create different cohorts or sections within my classroom that paired up students who were stronger with students who were weaker in a particular area or objective. I could also use it to say, "wow, every single one of my students is in the red. They did not master this objective."
It helped me target areas where I might need to either rejig the way I was communicating something, or go back in and self-reflect.
I was really able to leverage this approach to move our students' outcomes to some of the highest in the district.
Can you tell me more about how you translated your experience inside the classroom into a career in product management?
Michelle: The most surprising thing about my career is that when I reflect, I attribute a lot of my success to those formative moments in the classroom.
In the classroom, I wasn't just looking at an aggregate of data; I was also using it to experiment. I had an A/B testing perspective. It gave me the ability to look at the different tools I used, what prevented the outcomes, and what accelerated them. I could reflect on different tactics I used and evaluate how students responded to them. Then I'd ask myself, "how can I integrate that into seemingly less obvious objectives or pieces of the curriculum?"
Throughout your career, were there any defining moments where you really started to gain your product sense? And what were the pieces of information you were soaking in to become the product leader you are today?
Michelle: My background in science set me up for a very outcome-based hypothesis-driven framework to approach a lot of problems. What data do I have that leads me to believe XYZ hypothesis? How am I going about evaluating that? What experiments might I come up with?
In my world of education technology at Pearson and Cengage, we really didn't have the ability to A/B test. But we did have the ability to talk to users. We could come up with different solutions based on these hypotheses and then go put them in front of users, ask them questions, and see how they reacted.
But it wasn't just about what users were saying or what they were asking for. It was about digging past what they said to understand how they were actually engaging with the product and its flow. Where were they running into friction points? That really helped me build the intuition I needed.
In my earliest years, it was really about talking to users and deeply developing that skill of empathy. I was putting myself directly in their shoes. There were times when I would flip the persona. I would take the lens of my 60-year-old mother. How is she going to interact with this? How do we get her to become an active user? That's where I started.
Once I was able to join an organization like Kayak, which had A/B testing built into its DNA, pairing user-generated insights with actual data and the ability to test those hypotheses really unlocked my ability to move quickly, but also to orient on the right problems.
Is there an example from your experience where the data spoke the same language to everybody in the room, but your intuition told you that the room was being misguided by that data? How did you handle that?
Michelle: At Netflix I worked in the kids and family area, where our audience was kids and their parents/caregivers. Generally, if you think about the audience of Netflix, it's a very broad population: different devices, different demographics, different locations.
But what keeps bringing people back to Netflix is a library of content that continues to be refreshed. Users discover new content, new favorites, and it's exciting. There's always something new to discover. The company has built itself around the "binge" it created. Netflix is oriented around "how do we ensure that there's always something new for a user to discover?"
For kids, that's not really what the data says. The research says that kids want their favorites. They don't want the next new thing. They want to rewatch their favorites, over and over and over again.
It doesn't matter if it's the 10th time or the millionth time. They want to see that thing over and over again. And there's a variety of reasons why this is true. From a developmental standpoint, you're still learning the language at that age. You're learning interactions. You want comfort, you want safety, you want something predictable and known.
We could see that in the research. When I first joined Netflix, though, the organization was really trying to continuously push on helping kids find their next favorite.
There were a number of reasons why that was important to the business at the time. But my intuition was saying that we really needed to build trust with our youngest members– the kids and their parents.
We came up with this hypothesis that kids aren’t asking their parents for Netflix, they're asking for Pokemon. Kids aren’t turning Netflix on to browse. They know what they want.
And if we can help build up that trust and that comfort– and really that self-sufficiency that a kid can come in and navigate on their own– then it’s better for the business. We’re helping the kids build a system of comfort.
We could then use that comfort to fulfill our business and content goals of helping them discover. That was the big learning for me there. I came in with the common belief that we needed to get kids to discover new content in order to get them to come back to Netflix.
But the data was saying something different, the research was saying something slightly different, and I think we proved that out with experimentation.
Can you tell me more about those experiences where you're doing a user interview but your observations of the user are more useful than their actual responses?
Michelle: That ability is one of the things that sets a great product manager apart from a good one. It's being able to move past what users are saying and interpret it deeply– really examining how they're saying it and what they're doing in conjunction with what they're saying.
Here at Hinge, one of our goals is to build a dating app designed to be deleted. We want to help users find their match, their pair.
We are the intentional dating app, which means we primarily attract a population of users who are interested in longer-term relationships and not necessarily hookups.
So one of the things that users always come and write to us about is that they want to understand what the other user is looking for. What is their intention?
Are they looking for something casual? Are they looking for something long-term, something serious? Having these answers is going to help them find the cohort of users that best represents their intentions as well.
Well, it turns out it's not as cut-and-dried as that. If we just put a feature on the app that asks something like, "are you looking for a long-term relationship or for something casual?", the answer is going to be shaped by who you actually meet on the app.
Do I meet this person or that person? And then how do I respond to them? It’s really not as straightforward as saying yes or no. It’s much more nuanced than that.
When we deliver features like this, it never works out the way users expect it to. It's either too restrictive or not specific enough, and we're not meeting their exact needs.
So, the question then isn’t “are you looking for a long-term relationship or are you looking for something casual?” It's more, “how do I understand how much you're going to invest in this experience?”
Whether or not it turns out to be a long-term or short-term relationship doesn't matter as much here. The real questions users are asking themselves are: Do I put a lot of effort into my profile? Do I write longer answers to my prompts? When I send a like, do I send it with a comment or not? So there are other signals around intentionality as a topic that are actually more meaningful than saying "I want a long-term relationship" or "I'm looking for something casual."
How are you thinking about the future of what tools you use to gain information? What are the tools you use to help get your team up to 11?
Michelle: I actually think this is one of the most important things a product team can do, and it determines whether they are really successful or just meeting the standard. Product teams are a team. They need to be symbiotic and have close relationships with one another– data scientists, researchers, designers, product managers, engineers, QA.
We're all working together to facilitate some end goal and verbal and written communication are the two most important things to ensure that there is alignment. We all need to understand what is happening, when it's happening, and why it's happening. That’s why it’s really important to have written documentation.
Did we really set up the narrative of what's going on? "Here's some background context and some data we have that supports the idea that if we do X, Y, and Z, these outcomes will be the result." Beyond just having the framework, this is why documentation is important. The other thing that I think is really important upfront is the decision-making criteria around whether something will or will not be successful.
That's where you really start to dig in with your data scientists to understand how we are going to measure this. What do we need to have in place to make decisions? And then, based on that, we respond by making the decision. Aligning on this upfront saves a lot of back and forth in the debate over how we are going to proceed.
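As a minimal sketch of what "aligning on decision criteria upfront" can look like, here are pre-registered ship/iterate/abandon rules written down before a test launches. Every metric name and threshold here is a hypothetical illustration, not any company's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class DecisionCriteria:
    primary_metric: str
    min_lift: float            # smallest lift worth shipping
    max_p_value: float         # significance bar agreed with data science
    guardrail_metric: str
    max_guardrail_drop: float  # tolerated regression on the guardrail

# Agreed with the team before launch, not after the results arrive.
CRITERIA = DecisionCriteria(
    primary_metric="profile_completion_rate",
    min_lift=0.02,
    max_p_value=0.05,
    guardrail_metric="day7_retention",
    max_guardrail_drop=0.01,
)

def decide(lift: float, p_value: float, guardrail_delta: float,
           c: DecisionCriteria = CRITERIA) -> str:
    """Apply the pre-agreed criteria mechanically once results are in."""
    if guardrail_delta < -c.max_guardrail_drop:
        return "abandon"   # guardrail regressed beyond tolerance
    if lift >= c.min_lift and p_value <= c.max_p_value:
        return "ship"
    return "iterate"       # inconclusive: refine the hypothesis

print(decide(lift=0.03, p_value=0.01, guardrail_delta=-0.002))  # -> ship
```

Writing the rule down first is what removes the post-hoc debate she mentions.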
That's number one. Before we get to the actual execution of a product spec, you need to gain alignment on the roadmap of work.
Then, all of that is aligned to what you as a product team are actually trying to deliver both for your users and then for your business. How are you going to measure that?
I really work with my teams to say, "Hey, what research and data do we have? What have we seen be successful? What gaps do we have within our current structure that are preventing us from serving those users? Or, if we don't do this, are we at risk of major competitors coming in?"
It’s all about enabling product teams to do a lot of that discovery work for themselves. As a leader, it’s important to be clear on those outcomes.
I do think that people oftentimes will jump straight to solutions, right? If we see a 30% drop in a certain metric then people will want to find a solution right away.
In fact, I think it's more important to understand first the variety of reasons that might be contributing to that. And which of those are the biggest areas for us to go and investigate?
Do we have enough information about any one of these opportunities? So I encourage my teams to go very deep in opportunity assessment before they even get down to solutions, because everybody can have solutions. The question is, are you working on the most important areas and do those then actually align to those important opportunities?
Or are you working on a solution that actually only impacts 10% of users when there's another big opportunity area that impacts 90% of users? A test in that area, versus the other, would be that much more effective and have higher potential.
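The back-of-the-envelope arithmetic behind that comparison is simple: expected impact is roughly reach times per-user lift. A tiny illustration, with made-up numbers:

```python
def expected_impact(reach: float, lift_per_user: float) -> float:
    """Rough opportunity size: share of users affected x average lift each."""
    return reach * lift_per_user

small = expected_impact(reach=0.10, lift_per_user=0.05)  # 0.5% overall
large = expected_impact(reach=0.90, lift_per_user=0.05)  # 4.5% overall
print(large / small)  # 9.0 -- same solution quality, 9x the potential
```

Holding solution quality constant, simply picking the bigger opportunity area first multiplies the potential payoff.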
Can you talk a bit more about opportunity trees? What does that look like?
Michelle: A friend of mine actually, a few years back, shared an article from Teresa Torres where she talks a little bit about opportunity assessment trees and why we need to do more discovery.
We spend so much time brainstorming what features to build and not a lot of time talking about which areas we could research and explore, and then potentially build features to solve problems within.
In this framework you start with your objective, your outcome. For a particular team, it could be to increase the rate of profile completion.
So we want people to complete their profiles at a higher rate than they are today. And there could be a variety of reasons why people aren't completing their profiles. The onboarding process might be too long. The onboarding process may be unclear. Maybe users don't have time.
There are then a bunch of big areas that could be causing this friction. Without a proper discovery process you might assume that onboarding is too long and then run off to fix that.
There are all these gaps that could present themselves, but you would not have explored them. And so the way that this works is rather than jumping to solutions, you actually start with what are all the things that could be potentially contributing to this problem.
That's where we start. We start with OKRs. Then we talk about all the ways users might be impeded, or ways success might be facilitated. Next, we look at the data we have on each of these.
From there, that’s when you can assign some opportunity sizes to those different areas. Here you can work with your team and let your imaginations run wild with solutions.
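Here is a minimal sketch of what such a tree can look like as a data structure, using the profile-completion example above; the opportunity names, sizes, and solutions are hypothetical:

```python
# Objective -> opportunities (sized) -> candidate solutions.
tree = {
    "objective": "increase profile completion rate",
    "opportunities": [
        {"name": "onboarding feels too long",    "size": 0.40,
         "solutions": ["shorten required steps", "save-and-resume"]},
        {"name": "onboarding steps are unclear", "size": 0.35,
         "solutions": ["inline examples", "progress indicator"]},
        {"name": "users don't have time now",    "size": 0.25,
         "solutions": ["remind-me-later nudge"]},
    ],
}

# Work the largest opportunity first; the rest become the iteration roadmap.
for opp in sorted(tree["opportunities"], key=lambda o: -o["size"]):
    print(f"{opp['size']:.0%}  {opp['name']}: {opp['solutions']}")
```

Because the smaller branches are already mapped, a failed test doesn't send the team back to square one; it just moves them to the next branch.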
Finally, it's really about executing on different A/B tests, having your hypotheses tight, and having your experiment frameworks clearly articulated. I've found that with this kind of framework, my teams are able to run 30 to 40% more effective tests and get far more learnings out, because you're not just building to fundamentally change something. You're really building to learn here and to improve.
Latif: Yeah, I see that as a great lateral-thinking methodology. Someone might fixate on a problem and maybe the problems adjacent to it on either side, but this tree says every possible problem in that domain needs to be uncovered. That sort of brainstorming experience requires some creativity, and it can really have an impact.
Michelle: The nice thing about that is that now you have a whole roadmap, right? So you're not having to go back and say, “alright, well, this thing didn't work. Now we have to go through this process all over again.” You kind of did all of that work upfront.
If this particular feature or test that you aligned on doesn't produce the results that you were interested in you have a very clear way to iterate. You're not stuck.
About Our Guest – Michelle Parsons
Big thank you to our guest, Michelle Parsons, for joining us on this episode of Product to Product! To learn more about her product thinking, follow Michelle on LinkedIn.