July Release

Posted by Nathan in product updates

The July release for Much Finer is now live – we’ve been working tirelessly through the best Seattle summer (ever!) to get all the features in on time. While we may not be as tan as we’d like, we are really excited about how far the product has evolved this month.

What’s new:

Test Interactive Designs using Videos and Prototypes

Conducting experiments with static images works well for some design decisions, such as content layout and information hierarchy. However, it isn’t very useful for getting feedback on more complex interaction models. The July release changes this by adding support for two new types of designs: Hyperlinks and Embedded Videos.

Hyperlinks allow you to link directly to a prototype or an existing feature, so a respondent from our panel can interact directly with the design. These prototypes can be anything you can link to: Flash, JavaScript, HTML, etc.

Embedded Videos are short clips of your feature being used that have been uploaded to a video hosting service like YouTube or Vimeo. Simply copy and paste the embed code into Much Finer and we’ll show the videos to the panelists so they can see your features in action.

[Image: embedded video example]

New Experiment for Icons and Logos

We are also introducing a second experiment type specifically for testing Icon and Logo designs. The experiment works just like the Design Concept test, except you can test up to 5 design concepts simultaneously. This makes it easy to get customer data early on in the design process to better understand how your customers will react to the concepts.

[Screenshot: Much Finer’s new report summary]

Automatic Confidence

The biggest quality improvement we are releasing in July is dynamic sizing of the user panel. Much Finer will automatically increase the size of your panel until we are confident the results are statistically valid, or until we are reasonably confident it is a tie.
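
To make the idea concrete, here’s a minimal sketch of how dynamic panel sizing could work, assuming a simple two-sided test against a 50/50 split. The batch size, confidence threshold, and panel cap below are illustrative stand-ins, not our production parameters:

```python
import math

def z_score(wins: int, n: int) -> float:
    """Two-sided z statistic against H0: the designs are tied (p = 0.5)."""
    return (wins / n - 0.5) / math.sqrt(0.25 / n)

def run_panel(collect_batch, batch_size=25, max_panel=400):
    """Grow the panel until one design clearly wins, or call a tie.

    collect_batch(k) is assumed to return how many of k new
    respondents voted for design A.
    """
    wins = n = 0
    while n < max_panel:
        wins += collect_batch(batch_size)
        n += batch_size
        z = z_score(wins, n)
        if abs(z) >= 1.96:  # ~95% confident there is a real winner
            return ("A" if z > 0 else "B"), wins / n, n
    return "tie", wins / n, n  # panel capped: reasonably confident tie
```

One caveat the sketch glosses over: re-checking a fixed threshold after every batch inflates false positives, so a real implementation needs a sequential-testing correction.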

This is a really good thing – it will improve the percentage of experiments that have a single clear winner from 56% to an estimated 75%, without raising the cost of each experiment. Over the coming months we will continue to adjust the algorithms and panel sizes to achieve our target of 80% clear wins.

What’s Up Next?

While these features are available to all Much Finer customers today, we are still working through a few case studies with the new features, and we expect to post those results over the next couple weeks.

Unfortunately, there will be no August release. We are all going to be on vacation for the month of August, but we’ll be back in September, ready to rock and roll. The deep investment areas for us over the next couple months will be adding additional types of experiments to our suite, and deepening our analytical capabilities.

Have a great month, and we’ll see y'all in September!

Redesigning The Experiment Report

Posted by Nathan in case studies

A big redesign is always an onerous undertaking. But it can also be rewarding – you get to take a fresh look at old problems and see if you can come up with better, simpler solutions. Fortunately for us, we haven’t had time to acquire very much baggage. But we do have many familiar challenges: some features need to lose 10 lbs, others need to accommodate growth, we have to manage a few very strong opinions, and we have to figure out how to get the site-wide framework to flex enough for our new design. All in a short 4-week release cycle.

These are exactly the types of problems that Much Finer is designed to help solve, so we thought we should dogfood our own tools and see if they really work. In the next couple of sections, I’ll take you through a few of the design decisions that we tested using Much Finer.

Project Background

The short background is that we are updating the look and feel of the Experiment Results page at Much Finer, where our customers go to see the results of their experiments. We want to make sure this page makes it really easy to understand your results and complete your own analysis of the feedback. There were two big feature areas that we looked at as part of the redesign: the Results Summary and the ‘Winner’ treatment. We wanted to create a design that is really easy to understand, prominent enough to anchor the page, and distinctive enough to establish our own brand design.

A Better Result Summary

Customers should be able to quickly understand the results, and everything they need to have full confidence in the experiment should be an easy click away. We also want the summary to look professional and a little bit distinctive. We think this design will be the primary visual our customers use, so we want it to be special. Based on those high-level goals, we came up with a few different approaches:

  • #1 One Line – cleanly summarizes all the information in one line so you can see more results on the page.
  • #2 Designs on Top – gives more prominence to the design and the metrics; easier to read when the differences between designs are subtle.
  • #3 Metrics on Top – similar to Designs on Top, but assumes the metrics are more important.
  • #4 One Line, One Metric – similar to One Line, but simplifies the metrics and provides a larger image.
 
Experiment                                  A      B      Responses
Metrics on Top vs. Designs on Top           67%    33%    95
Metrics on Top vs. One Line                 88%    12%    50
Metrics on Top vs. One Line, One Metric     84%    16%    55
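
If you want to sanity-check margins like these yourself, a two-sided test against a 50/50 split is enough. Here’s a quick sketch using the normal approximation (these are the rows from the table above, but this is not our internal analysis):

```python
import math

def two_sided_p(wins: int, n: int) -> float:
    """p-value for H0: respondents split 50/50 (normal approximation)."""
    z = abs(wins / n - 0.5) / math.sqrt(0.25 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Rows from the table above: (experiment, share of votes for A, responses)
for label, share, n in [
    ("Metrics on Top vs. Designs on Top", 0.67, 95),
    ("Metrics on Top vs. One Line", 0.88, 50),
    ("Metrics on Top vs. One Line, One Metric", 0.84, 55),
]:
    print(f"{label}: p = {two_sided_p(round(share * n), n):.4f}")
```

All three rows come out well below p = 0.05, which is what makes these margins decisive rather than noise.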

Of the 4 design directions, Metrics on Top was the hands-down winner, destroying everything we put up against it. Interestingly enough, only half of our beta customers chose this option – the rest were evenly split between the other options. And my favorite design was One Line, because it was succinct and didn’t take up too much vertical space. Looking through the verbatim feedback, the reasons folks liked Metrics on Top quickly converged:

  1. Percentage of Votes was the most important bit of content, and almost everyone expected the most important thing to be in the upper left-hand corner. When other content was placed there, it took them more effort to parse the results because they needed to hunt for this metric.
  2. The bigger image was better because it clarifies which two designs are being compared. This context is especially important if you have multiple experiments running, or are sharing the report with your team.
  3. They liked the image being contained in some type of frame, because different designs can radically change how the summary looks – e.g. a light background, a dark background, a calendar-like grid, etc.

There were also a few themes that kept appearing among folks who liked one of the other 3 design directions:

  • Images are too big – the design comp in Metrics on Top is about 2x the height of the metrics, which attracted too much attention to the image and made it more difficult to see the metrics and the rest of the content on the page. In the final design, we reduced the image to the same height as the metrics.
  • There is too much noise in the summary – another common reason for choosing an alternative design was that the alternatives often had less information and looked cleaner. We addressed this by removing the summary title (which was redundant) and diminishing the color of the metric headers.

Winner Treatment

The second most important design consideration we had was how to visually represent which design concept was the winner.

  • #1 Borders
  • #2 Highlighted Background
 
Experiment                           A      B      Responses
Borders vs. Highlighted Background   37%    63%    65

The Final Design

In this release we set out to take our rough product and add a professional and distinctive look to it – one that also makes the product easier to use. The feedback from the Side By Side experiments was really helpful in grounding us in the perspective of real people who don’t spend all day thinking about our product, and in giving us real data to support the design directions. The other really valuable tool was our beta customers. We received a lot of ongoing feedback from them that led to a lot of very specific improvements.

The moment of truth… once we put all of these changes together, does the new design outperform the old one? (Of course we had to run that experiment too!)

[Image: May Design]
[Image: June Design]
 
Experiment                   A      B      Responses
May Design vs. June Design   7%     93%    55

Yes!!! The new design does in fact significantly outperform the old one.

June Release

Posted by Nathan in product updates

Today we’re happy to announce our June release is now live – just in time for summer in Seattle. The June release brings a lot of professional polish and quality to the product, with improvements in nearly every corner. Most of these ideas have come directly from beta customer recommendations, so please keep the feedback coming!

What’s new:

Radically Improved Reports

We’ve completely redesigned our Experiment Report, conducting many Side By Side experiments and implementing some really good ideas from customers along the way. We started off with three goals for the redesign:

  1. Reports should be super-easy to analyze
  2. We should be as transparent as possible in how we collect and analyze results
  3. Reports should look professional and distinctive

We still have a few experiments running (it’s hard not to test all your ideas once you start seeing the data ;) and we will publish a deep-dive on the redesign and the experiments we used within the next week. Until then, you can take a look. (If you’re feeling nostalgic, you can check out the old report format.)

[Screenshot: Much Finer’s new report summary]

More Flexible Experiments

Our experiments continue to get better. With this release you can now ask your own open-ended questions in your experiments. You can use these to ask follow-up questions, or to ask general questions about the use case. You will receive answers from every respondent.

We’ve also added two additional demographic questions to assess Household Income and Highest Education Level. These will appear in all experiment results from today onwards. Our panel tends to skew a little higher in both household income and education than the general U.S. population.

[Screenshot: Much Finer’s new download results]

Quality Improvements

The quality of your experiments is very important to us, so we allocate time every release to ship features that continually raise the quality bar. These features often don’t have any new user interface, so they can be hard to notice, but they are there, quietly making your results better.

  • Feedback Ranking – we are introducing a ranking algorithm for open-ended responses that ensures you see the highest-quality responses first. We currently rank responses based on four attributes, including the length of the response. You can override this ranking in your published reports by using the Star button to push responses to the top, or the Flag button to remove them altogether (a rough sketch of this kind of ranking follows this list).
  • Option Randomization – the order of the answers in a survey can introduce a natural bias over time. This is true for any type of survey, whether it is online, by telephone, or face-to-face. To mitigate this in our experiments, we have introduced option randomization. This will be mostly transparent in your experiment reports; however, when respondents refer to the options as left or right, we will clarify by appending the full option name (see the second sketch after this list).
  • Click-To-Enlarge Images – we’ve noticed that a few survey respondents have reported difficulty seeing small details in screenshots. To address this, respondents can now click on each image in an experiment and see it at full size.
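
For the curious, here’s the rough shape of the feedback-ranking idea. Response length is the only attribute we’ve named above; the data model and scoring placeholder below are illustrative, not the actual four-attribute algorithm:

```python
from dataclasses import dataclass

@dataclass
class Response:
    text: str
    starred: bool = False  # Star button: pin a response to the top
    flagged: bool = False  # Flag button: remove a response entirely

def quality_score(r: Response) -> float:
    # Length is the one attribute named above; the other three
    # attributes are not public, so this placeholder stops here.
    return min(len(r.text), 500) / 500

def ranked(responses: list[Response]) -> list[Response]:
    visible = [r for r in responses if not r.flagged]
    # Starred responses sort first, then by descending quality score.
    return sorted(visible, key=lambda r: (not r.starred, -quality_score(r)))
```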
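
And here’s a minimal sketch of the option-randomization behavior, including the left/right clarification (the function names are illustrative, not our internals):

```python
import random
import re

def randomized_order(options: list[str]) -> list[str]:
    """Give each respondent an independently shuffled left-to-right order."""
    order = list(options)
    random.shuffle(order)
    return order

def clarify_positions(comment: str, left: str, right: str) -> str:
    """Append the full option name when a respondent writes 'left' or
    'right', so reports stay readable despite the randomization."""
    comment = re.sub(r"\b(left)\b",
                     lambda m: f"{m.group(1)} ({left})", comment, flags=re.I)
    comment = re.sub(r"\b(right)\b",
                     lambda m: f"{m.group(1)} ({right})", comment, flags=re.I)
    return comment

# Example: the respondent saw ["Design B", "Design A"] and wrote:
print(clarify_positions("I prefer the left option", "Design B", "Design A"))
# -> "I prefer the left (Design B) option"
```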

What’s Up for July?

Now that we’ve got something polished, we’re going to use July to get out and talk to more User Researchers and product teams to better understand how they might use this. I’ll personally be in San Francisco for 2 weeks, meeting with folks in the Bay Area, and then we’ll be back towards the end of the month.

The big features that we’ll be working on for July are two additional types of experiments:

  • Creative Side By Side Experiment – we will be adding a special type of experiment for specifically testing creative designs such as logos, buttons or video game elements. These experiments will be different in that you will be able to test up to 5 design options.
  • Value Prop Side By Side Experiment – we will also add another special experiment type to help teams decide on which problem area they should go after. We will enable you to express the value prop as either a short video or a paragraph, and will also allow up to 5 options.

And you can expect that we’ll keep improving core quality and the UX based on feedback and experimentation.

About

Much Finer provides User Research as a Service for Product Designers, so they can get high-quality user research earlier in the development cycle.
