
Canonicalization

In computer science, canonicalization (sometimes standardization or normalization) is a process for converting data that has more than one possible representation into a 'standard', 'normal', or canonical form. This can be done to compare different representations for equivalence, to count the number of distinct data structures, to improve the efficiency of various algorithms by eliminating repeated calculations, or to make it possible to impose a meaningful sorting order.

Files in file systems may in most cases be accessed through multiple filenames. For instance, in Unix-like systems the string /./ can be replaced by /. In the C standard library, the function realpath() performs this task; it also handles /.. components referring to parent directories, collapses sequences of multiple slashes, removes trailing slashes, and resolves symbolic links.

Canonicalization of filenames is important for computer security. For example, a web server may have a restriction that only files under the CGI directory C:\inetpub\wwwroot\cgi-bin may be executed. This rule is enforced by checking that the path starts with C:\inetpub\wwwroot\cgi-bin and only then executing it. While the file C:\inetpub\wwwroot\cgi-bin\..\..\..\Windows\System32\cmd.exe initially appears to be in the CGI directory, it exploits the .. path specifier to traverse back up the directory hierarchy in an attempt to execute a file outside of cgi-bin. Permitting cmd.exe to execute would be an error caused by a failure to canonicalize the filename to its simplest representation, C:\Windows\System32\cmd.exe, and is called a directory traversal vulnerability. With the path canonicalized, it is clear the file should not be executed.

In Unicode, many accented letters can be represented in more than one way. For example, é can be represented as the character U+0065 (LATIN SMALL LETTER E) followed by U+0301 (COMBINING ACUTE ACCENT), but it can also be represented as the precomposed character U+00E9 (LATIN SMALL LETTER E WITH ACUTE). This makes string comparison more complicated, since every possible representation of a string containing such glyphs must be considered. To deal with this, Unicode provides the mechanism of canonical equivalence; in this context, canonicalization is Unicode normalization.

Variable-length encodings in the Unicode standard, in particular UTF-8, may create an additional need for canonicalization in some situations. By the standard, UTF-8 has exactly one valid byte sequence for each Unicode character, but some byte sequences are invalid, i.e. they cannot be obtained by encoding any string of Unicode characters into UTF-8. Some sloppy decoder implementations may accept such invalid byte sequences as input and produce a valid Unicode character as output. With such a decoder, some Unicode characters effectively have more than one corresponding byte sequence: the valid one and some invalid ones. This can lead to security issues similar to the directory traversal problem described above. Therefore, if a filter (e.g. a regular expression written in UTF-8) is applied to UTF-8 strings that will later be passed to a decoder that allows invalid byte sequences, the strings should be canonicalized before filtering. In this context, canonicalization is the process of translating every string character to its single valid byte sequence. An alternative to canonicalization is to reject any strings containing invalid byte sequences.

In web search and search engine optimization (SEO), URL canonicalization deals with web content that has more than one possible URL. Having multiple URLs for the same web content can cause problems for search engines, specifically in determining which URL should be shown in search results.
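The filename canonicalization and path check described above can be sketched in Python, whose os.path.realpath behaves much like the C library's realpath(). This is a minimal illustration using Unix-style paths rather than the Windows paths in the text; the helper name is ours, and a production check would need to handle errors and symlink races as well.

```python
import os.path

def is_safe_path(requested: str, allowed_root: str) -> bool:
    """Canonicalize a requested path before comparing it with the
    allowed root -- a prefix check on the raw string is not enough."""
    # realpath collapses ".", "..", and repeated slashes, and resolves
    # symbolic links, like realpath() in the C standard library.
    canonical = os.path.realpath(requested)
    root = os.path.realpath(allowed_root)
    # commonpath avoids the "/srv/cgi-bin-evil" string-prefix pitfall.
    return os.path.commonpath([canonical, root]) == root

# A naive startswith() check is fooled by ".." components:
attack = "/srv/cgi-bin/../../../etc/passwd"
naive_ok = attack.startswith("/srv/cgi-bin")      # True -- wrongly allowed
real_ok = is_safe_path(attack, "/srv/cgi-bin")    # False -- correctly denied
```

The key design point is that canonicalization happens on both sides of the comparison before any policy decision is made.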
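The Unicode example above (é as U+0065 plus U+0301 versus the precomposed U+00E9) can be demonstrated with Python's standard unicodedata module, which implements the Unicode normalization forms:

```python
import unicodedata

decomposed = "e\u0301"    # U+0065 followed by U+0301 (combining acute accent)
precomposed = "\u00e9"    # U+00E9, LATIN SMALL LETTER E WITH ACUTE

# The two spellings render identically but compare unequal as raw strings.
assert decomposed != precomposed

# Normalizing both to NFC (canonical composition) makes them equal...
assert unicodedata.normalize("NFC", decomposed) == precomposed
# ...and NFD goes the other way, to the fully decomposed canonical form.
assert unicodedata.normalize("NFD", precomposed) == decomposed
```

Comparing strings only after normalizing both sides to the same form (NFC or NFD) is exactly the canonicalization-before-comparison pattern described above.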
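The "reject invalid byte sequences" alternative mentioned above is what a strict UTF-8 decoder does. As a small illustration, b"\xc0\xaf" is an overlong (and therefore invalid) encoding of "/": a lenient decoder that accepted it would give "/" a second byte sequence, recreating the traversal problem; Python's built-in decoder rejects it.

```python
# The standard defines exactly one valid byte sequence per character:
assert "/".encode("utf-8") == b"\x2f"

# b"\xc0\xaf" decodes to "/" under a sloppy decoder, but is overlong
# and thus invalid UTF-8; a strict decoder raises an error instead.
try:
    b"\xc0\xaf".decode("utf-8")
    rejected = False
except UnicodeDecodeError:
    rejected = True
assert rejected
```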
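URL canonicalization can be illustrated with a small sketch using Python's urllib.parse. The rules applied here (lowercasing the scheme and host, stripping default ports and fragments, defaulting an empty path to "/") are our own illustrative selection; real search engines apply many more, and which URL they ultimately pick as canonical is their decision.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Collapse superficially different URLs to one comparable form
    (an illustrative subset of real canonicalization rules)."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    scheme = scheme.lower()
    host = netloc.lower()
    # Strip the default port for the scheme.
    if (scheme, host.rsplit(":", 1)[-1]) in (("http", "80"), ("https", "443")):
        host = host.rsplit(":", 1)[0]
    # Drop the fragment: it is client-side only and never sent to the server.
    return urlunsplit((scheme, host, path or "/", query, ""))

# Three spellings of the same page collapse to one canonical URL:
urls = ["HTTP://Example.com:80/index.html#top",
        "http://example.com/index.html",
        "http://EXAMPLE.COM/index.html"]
assert len({canonical_url(u) for u in urls}) == 1
```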
