
B2B PRODUCTS

5 NIQ products including Activate and Discover

TARGET USER

Retailers and Manufacturers

TEAM

5 Product Designers, Product Managers, Architect teams

TIMELINE

3 months (on and off)

MY ROLE

Lead Product Designer

Product Designer for both Activate and Discover

PROBLEM

NielsenIQ users often encounter navigation issues, primarily due to the divergent user experiences across the more than 100 NIQ applications.

NielsenIQ offers an extensive array of more than 100 applications that help over 100K retailers and manufacturers, such as Loblaws, Walmart, Johnson & Johnson, and Coca-Cola, understand consumer purchasing behavior using their own data or market data.

Over time, the divergent technologies across these applications have driven up maintenance costs and limited scalability.

Furthermore, users who have access to multiple NIQ apps have voiced complaints about the divergent user experiences among them.

To address the issue, our primary objective for the Convergence project is to unify the navigation systems across all applications.

A glimpse at how navigation and page structures diverge in each app, making them difficult to use and navigate, and costly to maintain

PROCESS

Double diamond model & Cross-functional collaboration

As a group of five designers, we collaborated with internal stakeholders such as Product Managers, the Training team, the Customer Success team, and the Architecture teams. We aimed to solve multiple problems within this project:

  • Global navigation: navigating from one NIQ app to another
  • In-app navigation: navigating to the different modules of an app
  • Local navigation: navigating within each module
  • How the navigation affects the overall page structure, and how each page hierarchy changes in each specific use case

DISCOVER
  • Collect information
  • Review and audit current app navigation, IA, and page structure
  • Define the specific needs of some applications
  • Identify challenges and tech constraints
DEFINE
  • Define project goals and scope of work
  • Project plan and regular meetings
DEVELOP
  • Low-fidelity wireframes
  • Low-fidelity prototypes
  • Competitor analysis
  • Industry benchmarking
DELIVER
  • Usability test (prototype test, A/B test)
  • Card sorting
  • Usability test results
  • High-fidelity wireframes and prototypes

DISCOVER

Challenges: disparity in information architecture, user flows, special needs of each app, and different tech stacks

  1. Disparity in information architecture hierarchy
  2. Disparity in user flows
  3. Special needs of each app

A sophisticated navigation solution was required for Activate, where we seamlessly merge two platforms into one, representing a challenge for information architecture and multi-level navigation.

One NIQ app has more than three side panels, challenging our efforts to accommodate a scalable navigation solution within the same screen.

The tech stack also differs across applications; the screenshot shows Precima's tech stack as an illustration. We use Angular components and Qlik for charts and visualization.

DEFINE 

Parity in terms of information architecture, user flow and page structure of all the apps

Parity in terms of information architecture and user flow

  1. Finding similar themes
  2. Finding user flows and main user goals
  3. Reaching parity on themes and user flows

Parity in terms of page structure

Goals:

  1. Identify local navigations & relevant UI elements (e.g., breadcrumb, dropdown)
  2. Create common page structures (blueprints)
  3. Reach parity across multiple apps and resolve edge cases

To deal with this complexity, we consolidated the solutions into high-level wireframes. These wireframes served as blueprints that we presented to our UX collective group to gather additional feedback.

An example of our blueprint, showing how we consider the high-level hierarchy on each specific page

Industry best practices for navigation & Competitor Analysis

We also looked into industry best practices for navigation and how competitors design their navigation and page structures.

DEFINE

Narrow down the options into version A & B for usability testing

IDEATE & ITERATE

After multiple rounds of ideation and feedback iterations tailored to each application's specific use cases, we converged our low-fidelity wireframes and prototypes into two main directions for A/B testing with users:

  • A top navigation with mega menu
  • A side (left) navigation 

Version A: Navigation on top with the mega menu

Version B: Side-navigation 

TEST & USER INSIGHTS

Usability testing methodologies: Card Sorting, Usability Test (Prototype), A/B Test

Using the Maze platform, I created three tests to validate our design decisions. The results of the tests are tied to UX KPIs such as success rate, task completion rate, and user satisfaction.
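Maze reports these KPIs automatically; the sketch below only illustrates how they can be derived from raw session data. The session records, field names, and values are hypothetical.

```python
from statistics import mean

# Hypothetical session records from a usability test. "direct" means the
# user completed the task via the expected navigation path.
sessions = [
    {"completed": True,  "direct": True,  "seconds": 18, "satisfaction": 5},
    {"completed": True,  "direct": False, "seconds": 42, "satisfaction": 4},
    {"completed": False, "direct": False, "seconds": 60, "satisfaction": 2},
    {"completed": True,  "direct": True,  "seconds": 25, "satisfaction": 4},
]

def ux_kpis(sessions):
    """Compute the three KPIs tracked in the test."""
    n = len(sessions)
    return {
        # share of users who finished the task at all
        "task_completion_rate": sum(s["completed"] for s in sessions) / n,
        # share of users who finished via the expected path
        "direct_success_rate": sum(s["direct"] for s in sessions) / n,
        # mean post-task satisfaction rating (1-5 scale)
        "avg_satisfaction": mean(s["satisfaction"] for s in sessions),
    }

print(ux_kpis(sessions))
```

Comparing these numbers between two prototype variants is what turns a plain usability test into an A/B test.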

To ensure the usability of the design, we tested and validated the following design assumptions:

CARD SORTING

  • the themes we identified in the previous step are correct
  • the hierarchy of the categories and their items is correct
  • the terminology of the themes is correct, and no wording confuses or misleads users

PROTOTYPE

  • Users’ interaction with the new navigation
  • Measure UX KPIs, both quantitative metrics and qualitative insights: success rate, task completion rate, user satisfaction

A/B TEST

  • Which solution, A or B, is preferred? Why?

CONCLUSION

The usability test winner: Version B

 


In conclusion, Version B demonstrated superior performance in the A/B test:

  • clear hierarchy
  • visibility of the menu options
  • ease of access
  • reduced task completion time

Other considerations:

  • Scalability: leaving more room for the main content and for future features
  • Responsiveness and a consistent user experience, maintaining its position regardless of the screen size

TEST RESULTS

What did our users say and do?

 

CARD SORTING

The findings from the card sorting exercise indicate a positive outcome overall, affirming that our category structures align closely with users’ mental models. While most categories resonated well, there was a notable exception with “Custom Product List,” which users associated more with Reports rather than the anticipated Data Tool category.

The findings lead to the following conclusions:

  • the themes we identified in the previous step are correct
  • the hierarchy of the categories and their items is largely correct, apart from the “Custom Product List” exception
  • the terminology of the themes is correct
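A simple way to surface findings like the “Custom Product List” mismatch is to tally, per card, which category each participant placed it in. The sketch below assumes hypothetical open-card-sort data; the card and category names are illustrative only.

```python
from collections import Counter

# Hypothetical card-sort results: for each participant, the category
# they placed each card into. Names echo the finding in the text.
sorts = [
    {"Custom Product List": "Reports", "Sales Dashboard": "Reports", "Data Export": "Data Tools"},
    {"Custom Product List": "Reports", "Sales Dashboard": "Reports", "Data Export": "Data Tools"},
    {"Custom Product List": "Data Tools", "Sales Dashboard": "Reports", "Data Export": "Data Tools"},
]

def placement_agreement(sorts):
    """For each card, count how often participants chose each category."""
    tally = {}
    for participant in sorts:
        for card, category in participant.items():
            tally.setdefault(card, Counter())[category] += 1
    return tally

agreement = placement_agreement(sorts)

# The majority category per card reveals mismatches with the designed IA:
for card, counts in agreement.items():
    winner, votes = counts.most_common(1)[0]
    print(f"{card}: {winner} ({votes}/{sum(counts.values())})")
```

A card whose majority category differs from the designed IA (here, most participants filing “Custom Product List” under Reports rather than Data Tools) is a candidate for recategorization or relabeling.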

A/B TEST: USERS’ INTERACTION WITH THE PROTOTYPE

The A/B test results clearly favor Version B, showing its superiority over Version A across key metrics such as usability score, task completion speed, and user satisfaction rate. That said, the results also show we still have lots of room for improvement.

Version A (Top navigation)

Version B (Side-navigation)

USERS’ SATISFACTION

Some quotes from our users in the test

ROOM FOR IMPROVEMENTS

Moving forward

Using Maze, we were able to achieve multiple goals in the usability test:

  • Test faster, with more users
  • Test 2 versions at the same time
  • Collect both quantitative UX metrics and qualitative insights at the same time to validate the design and measure our UX success

Moving forward, we have identified terminology and interaction designs that can be improved; we will iterate and conduct the prototype test again.

(For confidentiality reasons, more info or the prototype can be shared in a presentation. Please contact amyngo2k2@gmail.com for more details.)


UX BOOKS

 

Discovery Research

UX Strategy using Kano Model

Cognition and Psychology

Gestalt Principles and Mental Models

Information Architecture 

Design Principles & Heuristics

Design Thinking

Design Sprints

Lean UX

 

Books:

Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction design: beyond human-computer interaction. John Wiley & Sons.

Unger, Russ. A Project Guide to UX Design (Voices That Matter). Pearson Education. Kindle Edition. 

Process

Write a Research Plan

Define Methodologies

Survey

Interview

Quantitative & Qualitative Data

Data Analysis

Usability Test: Moderated and unmoderated

A/B Test

 

Methodology

Market Research

Competitive Analysis (SWOT)

Comparative Analysis

Usability Scale

Affinity Map

Gamestorming

Workshop

MoSCoW Method

Tree Test

Card Sorting/ Hybrid Card Sorting

UX Laws

Gestalt Principles

 

 

Book:

Hall, Erika. Just Enough Research. A Book Apart, 2013.

Spacing

Books (UX & UI):

Krug, Steve. Don’t Make Me Think: A Common Sense Approach to Web Usability. New Riders.

Useful links

Checklist for Usability Test

Tools:

www.usertesting.com

www.trymyui.com

Books:

Rubin, Jeffrey, and Dana Chisnell. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Indianapolis, IN: Wiley, 2008.

Warfel, Todd Zaki. Prototyping: A Practitioner’s Guide. 1st edition. Brooklyn, N.Y: Rosenfeld Media, 2009.

Gothelf, Jeff, and Josh Seiden. Lean UX: Designing Great Products with Agile Teams. 2nd edition. O’Reilly Media, 2016. pp. 73–89.

Wireframe

Prototype

 


 
