Heuristic evaluation is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”). The analysis results in a list of potential usability issues.

In general, heuristic evaluation is difficult for a single individual to do because one person will never be able to find all the usability problems in an interface. Luckily, experience from many different projects has shown that different people find different usability problems. Therefore, it is possible to improve the effectiveness of the method significantly by involving multiple evaluators.
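The effect of adding evaluators can be quantified. Nielsen and Landauer modeled the proportion of problems found by n evaluators as 1 − (1 − L)^n, where L is the probability that a single evaluator finds any given problem (about 0.31 averaged over their case studies). A minimal sketch, assuming that published average:

```python
def proportion_found(n, l=0.31):
    """Expected share of usability problems found by n evaluators.

    Uses the model 1 - (1 - l)**n reported by Nielsen and Landauer;
    l = 0.31 is their cross-project average, not a universal constant.
    """
    return 1 - (1 - l) ** n

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n} evaluators -> {proportion_found(n):.0%} of problems")
```

With these assumptions, a single evaluator finds roughly a third of the problems, while five evaluators together find over 80%, which is why the method is almost always run with several evaluators.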

Heuristic Evaluation is used to improve the usability, utility, and desirability of your designs.


The best practice is to use established heuristics like Nielsen and Molich’s 10 rules of thumb and Ben Shneiderman’s 8 golden rules as a stepping-stone and inspiration while making sure to combine them with other relevant design guidelines and market research. Designers are encouraged to establish their own design-specific heuristics to evaluate their products, systems, and websites.

Though many groups have developed heuristics, one of the best-known sources is the set published by Nielsen in 1994. Nielsen refined the list originally developed in 1990 by himself and Rolf Molich. Nielsen’s heuristics include:

  • Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
    • Breadcrumbs
    • Page titles
    • Highlighted navigation items and icons
    • Error, information, and alert messages
    • Wizard step indicators (“1 of 3”)
    • On/off states
  • Match between system and the real world: The system should speak the users’ language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
    • Familiar icons
    • Navigation in a logical order
    • Terminology and labeling (no error codes)
    • Color codes (red, green, etc.)
    • Clickable text and buttons
  • User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
    • Cancel payment 
    • Prev (Back)/ Next
    • Edit / Tab
    • Logout
    • Overlapping windows
    • Undo/Redo
  • Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
    • Industry Standard Colors
    • Uppercase/ Lowercase
    • Formatting (Number, Date, Time)
    • Forms
    • Icons with labels
    • Tooltips
    • Abbreviations
  • Error prevention: Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action (e.g., tooltips).
  • Recognition rather than recall: Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  • Flexibility and efficiency of use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user, so that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
      • Shortcut keys
      • Auto-filling personal information
  • Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  • Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  • Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
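For use during an evaluation, the ten heuristics above can be kept as a simple machine-readable checklist that each evaluator fills in. This is only an illustrative sketch; the H1–H10 identifiers are a convenient convention, not part of Nielsen’s original list:

```python
# Nielsen's ten heuristics as a scoring template. The identifiers
# (H1-H10) and the scorecard structure are illustrative assumptions.
NIELSEN_HEURISTICS = {
    "H1": "Visibility of system status",
    "H2": "Match between system and the real world",
    "H3": "User control and freedom",
    "H4": "Consistency and standards",
    "H5": "Error prevention",
    "H6": "Recognition rather than recall",
    "H7": "Flexibility and efficiency of use",
    "H8": "Aesthetic and minimalist design",
    "H9": "Help users recognize, diagnose, and recover from errors",
    "H10": "Help and documentation",
}

def blank_scorecard():
    """One row per heuristic, ready for an evaluator to log issues into."""
    return {hid: {"name": name, "issues": []}
            for hid, name in NIELSEN_HEURISTICS.items()}
```

Each evaluator gets a fresh scorecard and appends the interface elements that violate a heuristic to the matching row, which makes the later collation step mechanical.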


Heuristic evaluation should be used in the early evaluation phase of a project. It helps uncover preliminary usability issues before they become expensive to fix.


Choosing and developing new heuristics is a task in itself; there are no fixed recommendations, as each design presents its own set of tasks, constraints, functions, styles, and other variables. However, most heuristic evaluations involve between five and ten items, chosen on the basis of their applicability to the overall usability of the system, website, or application being tested.

Here’s how you can get started in generating and conducting your own heuristic evaluation:

  • Establish an appropriate list of heuristics: Use Nielsen and Molich’s 10 heuristics and Ben Shneiderman’s 8 golden rules as inspiration and stepping stone. Make sure to combine them with other relevant design guidelines and market research.
  • Select your evaluators: Choose your evaluators carefully. Use a minimum of two to three evaluators, depending on the project. Your evaluators should not be your end users; they should typically be usability experts, preferably with domain expertise in your product’s industry.
  • Brief your evaluators: Brief them so they know exactly what they are meant to do and cover during their evaluation. The briefing session should be standardized so that all evaluators receive the same instructions; otherwise, you may bias their evaluation. Within this brief, you may ask the evaluators to focus on a selection of tasks, but sometimes they may state which tasks they will cover on the basis of their experience and expertise.
  • First evaluation phase: The first evaluation generally takes around two hours, depending on the nature and complexity of your product. The evaluators will use the product freely to gain a feel for the methods of interaction and the scope. They will then identify specific elements that they want to evaluate.
  • Second evaluation phase: In the second evaluation phase, the evaluators will carry out another run-through, whilst applying the chosen heuristics to the elements identified during the first phase. The evaluators would focus on individual elements and look at how well they fit into the overall design.
  • Record problems: The evaluators should either record problems themselves, or you should record problems as they carry out their tasks. Ask the evaluators to be as detailed and specific as possible when recording problems.
  • Debriefing session: The debriefing session involves collaboration between the different evaluators to collate their findings and establish a complete list of problems. They should then be encouraged to suggest potential solutions for these problems on the basis of the heuristics.
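The record-and-debrief steps above can be sketched in code. In this minimal sketch the record fields, the merge key (heuristic plus interface element), and the sample reports are illustrative assumptions; the 0–4 severity scale follows Nielsen’s commonly used severity ratings (0 = not a problem, 4 = usability catastrophe):

```python
def collate(findings):
    """Merge findings from all evaluators into one problem list.

    Duplicate reports of the same (heuristic, element) pair are kept
    once, at the highest severity any evaluator assigned; the result
    is sorted worst-first for the debriefing session.
    """
    merged = {}
    for f in findings:
        key = (f["heuristic"], f["element"])
        if key not in merged or f["severity"] > merged[key]["severity"]:
            merged[key] = f
    return sorted(merged.values(), key=lambda f: f["severity"], reverse=True)

# Example: two evaluators file three raw reports describing two
# distinct problems (all data is hypothetical).
reports = [
    {"evaluator": "A", "heuristic": "Visibility of system status",
     "element": "checkout wizard", "severity": 3,
     "note": "No step indicator (e.g. '1 of 3') during checkout."},
    {"evaluator": "B", "heuristic": "Visibility of system status",
     "element": "checkout wizard", "severity": 2,
     "note": "User cannot tell how many steps remain."},
    {"evaluator": "B", "heuristic": "Error prevention",
     "element": "date field", "severity": 4,
     "note": "Free-text date field accepts past travel dates."},
]
problems = collate(reports)
```

Collating on a shared key like this makes the overlap between evaluators visible, which is useful evidence when prioritizing fixes in the debriefing session.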


Below is a heuristic evaluation of www.irctc.com, the official Indian Railways site, for your reference.

