UI designs with a fun background
Auto Tag
skills
User testing, product thinking, visual design
tools
Figma, Whimsical, Zeplin (handoff)
team
1 designer, 1 product manager

During my time at MSD, I embarked on a cool project – building the data enrichment platform! I had the opportunity to represent data in multiple views and handle different data types in the UI. This case study is a culmination of my learnings from working on the project. The designs you'll see are revamped versions of the screens I crafted at MSD. Sadly, I can't show you the exact screens due to NDA stuff, but I've jazzed up the designs a bit for some extra flair!

UI designs in a collage

Context

Vue.ai prioritizes enhancing data quality and organization, utilizing machine learning tools to efficiently structure and label product data. The company offers services such as product tagging, catalog visualization, document data extraction, and content enrichment. Vue.ai also provides AI-driven business intelligence models for insights and predictions, empowering teams to develop their own ML applications.

No-code data classification & enrichment

The product focuses on enhancing and organizing data. Users can use a preset or build their own custom classification model to predict data from images or text. Users build a taxonomy, tag some products, and train the system; the model then auto-tags the rest of the products. QA analysts play a key role by fixing wrong predictions, which improves the model over time. The trained models and custom taxonomy can be easily exported to fit into existing systems.

It's an AI-led model engine that empowers you to build your own custom models that progressively & automatically enrich your data for you. - Vue.ai
Ideal user flow
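
To make that flow concrete, here's a minimal sketch of the data shapes such a train-predict-correct loop passes around. All of the names below are my own illustration, not Vue.ai's actual schema:

```typescript
// Illustrative only: these types are my own naming, not Vue.ai's real API.

// A node of the user-built taxonomy: an attribute and its allowed values.
interface Attribute {
  name: string;       // e.g. "neckline"
  values: string[];   // e.g. ["v-neck", "crew", "halter"]
}

// What the trained model emits for each product during auto-tagging.
interface Prediction {
  productId: string;
  attribute: string;  // e.g. "neckline"
  value: string;      // e.g. "v-neck"
  confidence: number; // 0..1, surfaced in the UI so QA can triage low scores
}

// A QA analyst's fix for a wrong prediction, fed back to improve the model.
interface Correction {
  productId: string;
  attribute: string;
  correctedValue: string;
}
```

The confidence score is what lets QA analysts prioritize which auto-tags to review first.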

So, what's the problem?

User feedback highlighted two main issues.

Inability to tag in bulk

Currently, users have two methods for data enrichment. The cluster view groups products based on similarity to a target attribute value, while the bulk edit view lets users tag multiple products for a single attribute. However, in both cases, users can't classify across different levels of the taxonomy, meaning they can't assign multiple attribute-value pairs at the same time. As a result, tagging products takes a long time.

Inability to localize search

There's no way for users to restrict an attribute search to a specific area of an image. For instance, predicting a detailed attribute like 'neckline closure type' would be more efficient if the model could narrow its search to just the neckline area of the image.

Pain points for the user

Constraints and Dependencies

The project is tied to the QA analysis platform, since the two share many data enrichment features. The designs need to work smoothly on both platforms and be flexible enough to handle different data types like images, documents, and text.

💡 Ensuring compatibility and integration between the two platforms is essential for a seamless and effective user experience.

Key design decisions

In designing this project, we opted to reuse and build upon existing components. We focused on surfacing more metrics and made a conscious decision to steer away from popups. This approach enhances the user experience by leveraging familiar elements and providing clearer insights without the disruption of pop-up interfaces.

Ideating the solution

Annotate to localize search

The solution for localizing search for an attribute on an image involved annotations. Over multiple iterations, we made the following key design decisions (a rough data sketch follows the figure below):

- make the UI and interactions similar to other annotation tools to ensure user familiarity
- allow users to tag products while annotating, making annotating and organizing a seamless process
- facilitate reviewing and approving tags and annotations together, as they depend on each other to build the model
General principles we wanted to follow for annotating
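
To illustrate the third decision, here's a rough sketch of how an annotation and its tag could travel as a single record (again, my own illustrative shape, not the product's actual schema):

```typescript
// Illustrative only: one record couples the image region with the tag it
// justifies, so the pair can be reviewed and approved together during QA.
interface AnnotatedTag {
  productId: string;
  imageId: string;
  // Normalized bounding box (fractions of image dimensions) localizing
  // where the attribute appears, e.g. the neckline area of a garment photo.
  region: { x: number; y: number; width: number; height: number };
  attribute: string; // e.g. "neckline closure type"
  value: string;     // e.g. "button placket"
  status: "pending" | "approved" | "rejected"; // one review state for both
}
```

Keeping the region and the tag in one record is what makes a single approve or reject action cover both.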

Maintaining mental models across modules

We used a hierarchical tree to handle large taxonomy structures when bulk editing, so that the mental model of taxonomies stayed consistent across modules. We iteratively added interactions to make bulk editing more efficient; a data sketch follows the figure below.

How our tagging platform deals with large taxonomies
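
In data terms (again, illustrative types rather than the platform's real API), a taxonomy maps naturally onto a recursive tree, and a bulk edit becomes one operation that applies tags drawn from any level of it:

```typescript
// Illustrative only: a taxonomy as a recursive tree, mirroring the
// hierarchical tree component in the bulk-edit view.
interface TaxonomyNode {
  label: string;             // e.g. "apparel" -> "tops" -> "neckline"
  children?: TaxonomyNode[]; // branches of the hierarchy
  values?: string[];         // leaf nodes carry the taggable values
}

// One bulk action assigns attribute-value pairs from different levels of
// the tree to many products at once (hypothetical helper, not a real API).
function bulkTag(
  productIds: string[],
  tags: Array<{ attribute: string; value: string }>,
): void {
  for (const id of productIds) {
    for (const { attribute, value } of tags) {
      // The real product would call its tagging service here; logging
      // just demonstrates the shape of the operation.
      console.log(`tag ${id}: ${attribute} = ${value}`);
    }
  }
}

// e.g. tag many products with pairs from two different taxonomy levels:
// bulkTag(ids, [
//   { attribute: "category", value: "tops" },
//   { attribute: "neckline", value: "v-neck" },
// ]);
```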

Outcome

Grid View

Metadata in read-only mode

Metadata in edit mode

Tags in read-only mode - shows confidence score along with attribute-value pair

Tags in edit mode

Detailed View

Detailed view - gallery mode

Detailed view - comparison mode

Tabular View

Table view with no products selected

Table view with a product selected