When using machine learning to master data, a key part of the workflow is understanding how your model is performing and taking the right steps to improve its precision and accuracy. The standard process for testing a model is technically complex and is usually carried out manually. This project aimed to build that functionality directly into the software, with a process and results that non-technical users could understand.
The standard process for evaluating how a machine learning model is performing involves selecting test data correctly and scoring the model only on records it was never trained on (avoiding "data leakage"). The goal of this project was to simplify this process for non-technical users and to present them with better information about their model's accuracy, inside the product itself.
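For readers curious what that workflow looks like in practice, here is a minimal sketch of the hold-out process using scikit-learn on synthetic data. The classifier, split ratio, and dataset below are illustrative assumptions, not what our product actually uses:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the matching data a real model would consume.
X, y = make_classification(n_samples=1_000, random_state=42)

# Hold out a test set up front; the model never sees these records during
# training, which is what prevents "data leakage".
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)          # train only on the training split

predictions = model.predict(X_test)  # score only on the held-out split
print("Accuracy: ", accuracy_score(y_test, predictions))
print("Precision:", precision_score(y_test, predictions))
```

Doing this correctly by hand is exactly the kind of step that is easy for a non-technical user to get wrong, which is why we wanted the product to handle it.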
Due to the technical nature of the project, my peers on the engineering team had already begun researching solutions from a technical point of view. They outlined the results of their experiments, along with some key machine learning best practices, in a series of documents and meetings. We agreed that a unique challenge of this project would be striking a delicate balance between technical accuracy and ease of comprehension.
During the initial exploration phase, I dug deeper into the personas that would need to interact with the software in order for accuracy to be calculated correctly. This uncovered that the user responsible for the accuracy score may be different from the user who actually trains the model (most often by verifying the set of testing records). That realization led me to investigate a user experience in which each persona would have a dedicated interface.
Scope and timing concerns eventually led us to combine the solicitation and verification interfaces, but the investigation and designs align well with future company goals around a more specialized, modular, persona-based system, and I expect this design research to be useful again.
Designing the display of accuracy metrics was an extremely collaborative process involving many different stakeholders, from our salespeople all the way to our documentation writers. The solution I arrived at uses progressive disclosure: simple explanations of each metric via helper text and tooltips up front, with the option to delve into the technical details through additional documentation for those who are interested.
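As a rough illustration of how those layers can be structured (the metric names, copy, and URLs below are hypothetical placeholders, not the shipped strings), each metric carries a plain-language summary for helper text, a slightly deeper tooltip, and a link out to the full documentation:

```python
# Hypothetical sketch of progressive-disclosure copy for accuracy metrics.
METRIC_COPY = {
    "precision": {
        "summary": "How often a predicted match is actually correct.",
        "tooltip": "Of the records the model labeled as matches, the share "
                   "that verifiers confirmed were true matches.",
        "docs_url": "https://docs.example.com/metrics/precision",
    },
    "accuracy": {
        "summary": "How often the model's predictions are correct overall.",
        "tooltip": "The share of all verified test records that the model "
                   "labeled correctly.",
        "docs_url": "https://docs.example.com/metrics/accuracy",
    },
}

def helper_text(metric: str) -> str:
    """Return the first-layer explanation shown next to a metric."""
    return METRIC_COPY[metric]["summary"]
```

Structuring the copy this way keeps the first layer short enough for a glance while making the path to deeper detail explicit for each metric.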
Although scoped back significantly from my initial design explorations, the solution shipped quickly so that a more comprehensive process of customer feedback could begin. The persona-based investigations have already proven useful on other projects, and initial feedback from early-adopting customers is overwhelmingly positive.