Where would such a checkmark go? Because HashCheck supports multiple files, the concept of a single match or non-match does not make much sense in the file properties dialog. That is why there is no compare-hash function in the file properties dialog--there is instead a search function: you paste in a hash, and if none of the files has a matching hash, it reports "not found"; otherwise, it finds every file whose hash matches (there may be more than one). A checkmark does not fit this multi-file context. If you are suggesting that a checkmark be placed in the results text box, that is not feasible: it is a regular text box, not a "rich text" box, and rich text carries real performance costs when thousands of hashed files are displayed. In the HashVerify window, there is no graphical icon for two reasons: performance (for people who hash and verify large directory structures with tens of thousands of files, graphics like that hurt UI responsiveness and footprint) and scaling (non-scalable graphics look rather ugly for users with a high DPI).

No, it's not. A lot of care has gone into HashCheck's performance, to make sure that it is at least as fast as all of the common checksumming utilities (HashTab, QuickSFV, md5sum, etc.). In my personal experience, the two are about the same in speed in most situations, with HashCheck being slightly faster than HashTab; I've never seen HC work noticeably slower than HT. It should be noted that the most important factor in determining speed is disk access. If a file that HashCheck is working on is fully cached in memory, it can process the file at several hundred MB per second. If the file is not cached in memory, then it's limited by the disk access speed, which is usually around 30-100 MB/s, depending on factors like disk speed, fragmentation, where on the disk the file is located, etc.
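The disk-bound behavior described above is easy to picture with a minimal sketch (this is not HashCheck's actual code, which is a native Windows shell extension; the buffer size here is an illustrative assumption): the hasher reads the file in large chunks and feeds them to the digest, so when the data is already in the file-system cache the loop runs at memory speed, and when it is not, the loop spends most of its time blocked on the read.

```python
import hashlib

# Illustrative chunk size only -- not HashCheck's real buffer size.
CHUNK_SIZE = 2 * 1024 * 1024  # 2 MiB

def hash_file(path, algorithm="md5"):
    """Hash a file in large chunks, the way a disk-bound hasher reads data."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Each read() either returns quickly (cached data) or blocks on
        # the disk -- in the uncached case the hash computation itself is
        # nowhere near the bottleneck.
        while chunk := f.read(CHUNK_SIZE):
            h.update(chunk)
    return h.hexdigest()
```

Run the same function twice on a large file and the second run is typically far faster, purely because the first run pulled the file into the cache.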
In most use cases, HashCheck's worker thread sits idle, waiting for the disk to deliver data. HashCheck also uses a very large read buffer to minimize the effect of disk-access overhead (this is especially apparent when comparing HashCheck against most other hashing utilities with data over a high-latency network drive; in certain exceptional situations, HashCheck can be over twice as fast as some other checksumming utilities). Because the disk is the bottleneck in most cases, direct comparisons are very hard: if you hash a file with HashCheck and then hash the same file again with HashTab, HashTab will win, because the file will be covered by the file-system cache the second time through. To ensure a fair comparison, one has to fully read the file first, and then hash it with HC and HT so that both benefit from the file being in the cache. Or one has to do a lot of other disk access before each hash operation, to force the file out of the file-system cache (which, depending on the size of your system memory, may be several hundred MB in size), so that neither HC nor HT gets the benefit of the file being cached (this may not always work, especially if you also have secondary non-FIFO caching like SuperFetch). Or one has to hash two different files with similar sizes, fragmentation, and physical location on disk (which is hard to control).
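The first fair-comparison method above can be sketched as follows (a hypothetical harness, not anything shipped with HashCheck; the function names and the idea of timing in-process hashes rather than the actual GUI tools are my own simplifications): read the file once to warm the file-system cache, then time each hash run so both benefit equally from cached data.

```python
import hashlib
import time

def warm_cache(path):
    """Read the whole file once so later runs hit the file-system cache."""
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass

def timed_hash(path, algorithm):
    """Return the wall-clock time to hash the file with the given algorithm."""
    start = time.perf_counter()
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(1024 * 1024):
            h.update(chunk)
    h.hexdigest()
    return time.perf_counter() - start

def fair_compare(path):
    # Warm the cache first, so neither run gets an unfair cold-disk penalty.
    warm_cache(path)
    return timed_hash(path, "md5"), timed_hash(path, "sha1")
```

Without the `warm_cache` step, whichever run goes first pays the disk-read cost and the comparison measures cache state, not hashing speed.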