I saw that this wiki page describes a problem with the LCS approach to linear token diffs. This paper seems to have a solution for the related HCS (heaviest common subsequence) problem. If tokens were weighted by the number of characters they contain (perhaps with keywords at a reduced weight and whitespace at zero weight), I'd expect that to produce good diffs in most cases. Per-language weight tuning would be possible as well.
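To make the idea concrete, here is a minimal Python sketch of a heaviest common subsequence: the usual LCS dynamic program, except that instead of maximizing the *count* of matched tokens it maximizes their total weight. The weighting function and the keyword set are hypothetical stand-ins for the per-language tuning described above, not anything from the paper or the linked wiki page.

```python
# Hypothetical weighting, per the suggestion above: whitespace counts for
# nothing, keywords count at half weight, everything else by character count.
KEYWORDS = frozenset({"if", "else", "for", "while", "return", "def"})

def token_weight(tok):
    if tok.isspace():
        return 0.0
    if tok in KEYWORDS:
        return len(tok) * 0.5
    return float(len(tok))

def heaviest_common_subsequence(a, b, weight=token_weight):
    """O(len(a) * len(b)) DP: like LCS, but maximize total matched weight."""
    n, m = len(a), len(b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + weight(a[i - 1])
            dp[i][j] = max(dp[i][j], dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one maximum-weight common subsequence.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1] and dp[i][j] == dp[i - 1][j - 1] + weight(a[i - 1]):
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[n][m], out[::-1]
```

The payoff over plain LCS shows up when token order forces a choice: given `["if", "longname", "=", "value"]` versus `["longname", "if", "=", "value"]`, a count-based LCS is indifferent between keeping `if` or `longname`, but the weighted version prefers the 8-character identifier, which is usually what a human reading the diff wants anchored.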