Improving outcomes by improving trust in tools



10-20% of traffic accidents involve fault shared by multiple parties, yet Root adjusters assigned complete liability to a single party 99% of the time, which meant that Root was overpaying on insurance claims. The Liability Tool was meant to change all that by giving adjusters an easy-to-use calculator for determining the liability percentage of each involved party.


  • My role: Lead UX designer

  • Team: Product owner, developers

  • How do we improve the Liability Tool so that more adjusters assign fault to more than one party?

  • Do we need to improve anything else, such as training, KPIs, or other people-based processes?



I interviewed six adjusters of different specialties about how they used the Liability Tool: their process, where the tool fit in, and how they ultimately made decisions on their claims. Through these interviews, I found that veteran and new adjusters alike didn't trust the tool, struggled to find the Calculate button, and were graded only on whether they used the tool, not on the outcome.


The original Liability Tool


The adjusters identified four reasons not to trust the tool:

  • It provided no help in the moment

  • There were no definitions of what high, medium, or low breaches were

  • The numbers were very specific and gave adjusters no wiggle room for negotiations

  • The tool overwhelmed adjusters by asking them to rate too many involved parties

The Calculate Button

No one recognized the button as a button. Adjusters only learned it existed because they were graded on clicking it.

Improper Grading

Adjusters were graded only on whether they used the tool, which meant we could very easily change what they were graded on. But I convinced my product manager that changing the grading criteria without updating the tool would burn through goodwill. By first making fairly minor (hopefully) UI changes that made the tool easier to use and more trustworthy, adjusters would be less upset when we changed how they were graded.


Contextual Help

Ultimately, the biggest component of contextual help would be providing definitions of each question and what each type of breach meant. While the content experts worked on defining each of those items, I began to work on different ways to display the information.

We decided to prioritize the sidebar because it gave us more room to add new information later, and because it could be useful across the entire platform rather than for this tool alone.

More Human Numbers

Algorithms tend to output exact numbers, while people tend to round to the nearest 5 or 10. Exact numbers also left adjusters feeling they had less wiggle room: their experience was being diminished, and they weren't given the right kind of ammunition for negotiating with the other insurance company.

This design became more of an engineering question than a design question. We ultimately decided to round to the nearest 5 or 10 and then create a 10% range around it.
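The rounding-and-range logic described above can be sketched in a few lines. This is my illustration, not the production implementation: the function name is hypothetical, and I've assumed rounding to the nearest 5 with a symmetric 10-point band clamped to 0-100.

```python
def liability_range(raw_pct: float) -> tuple[int, int]:
    """Round a raw liability percentage to the nearest 5, then
    build a 10-point range around it, clamped to 0-100."""
    rounded = round(raw_pct / 5) * 5   # nearest multiple of 5
    low = max(0, rounded - 5)          # keep the band within 0-100
    high = min(100, rounded + 5)
    return low, high

# e.g. a raw model output of 62.4% becomes the range 55-65
print(liability_range(62.4))
```

A band like 55-65 gives the adjuster a defensible starting point and room to negotiate, rather than a single number like 62.4 that invites false precision.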


A demonstration of how the range might appear on the tool


Options I worked through to illustrate how to manage involved parties

A More Manageable Tool

I needed the content experts to tell me the proper way to limit the number of involved parties; I didn't want to put our adjusters at a legal disadvantage.

Ultimately, we decided to limit the calculator to just the drivers (and not witnesses, for example). Version 1 would not allow us to add or subtract people, but future versions might if the need arose.


Different Calculate button options

The Button

All we needed to do was to make the button more obvious so that new adjusters wouldn't miss it.

Though I pushed for an auto-update feature, which would have made the button moot entirely, it was not prioritized for version 1 because of the level of effort involved. We instead went with a bolder, more obvious button.

End results
  • The button was easier to find

  • The tool returned a range of numbers rather than a single exact value

  • The tool only asked about the necessary people

Future updates would include:

  • The sidebar to allow us to provide in-context help information

  • Changing the adjusters’ KPI to actually measure how often they find multiple parties at fault

Unfortunately, I was laid off before I was able to find out the actual impact of my work.


My knee-jerk reaction had been that the tool technically worked, but something about the human element kept it from influencing behavior in the right direction. Research confirmed this, and showed there was a lot we could do to influence that human element. I always feel lucky when I get a chance to actually perform research. The changes themselves are relatively small, but they will give adjusters far more help going forward. The change with the biggest impact will likely be the adjusters' updated KPI, which actually measures the expected behavior.