Machine translation software usability

The sections below give objective criteria for evaluating the usability of machine translation software output.

Do repeated translations converge on a single expression in both languages? That is, does the translation method show stationarity or produce a canonical form? And does the translation become stationary without losing the original meaning? This metric has been criticized as not being well correlated with BLEU (BiLingual Evaluation Understudy) scores.

Is the system adaptive to colloquialism, argot or slang? The French language has many rules for creating words in the speech and writing of popular culture. Two such rules are: (a) the reverse spelling of words, such as femme to meuf (this is called verlan); and (b) the attachment of the suffix -ard to a noun or verb to form a new noun. For example, the noun faluche means 'student hat'. The word faluchard, formed from faluche, can colloquially mean, depending on context, 'a group of students', 'a gathering of students' or 'behavior typical of a student'. The Google translator as of 28 December 2006 did not derive constructed words such as those formed by rule (b), as shown here (a toy sketch of this derivation appears at the end of this section):

Il y a une chorale falucharde mercredi, venez nombreux, les faluchards chantent des paillardes! ==> There is a choral society falucharde Wednesday, come many, the faluchards sing loose-living women!

(A human translation would read roughly: 'There is a faluchard choir on Wednesday, come one and all, the faluchards sing bawdy songs!') French argot has three levels of usage.

The United States National Institute of Standards and Technology conducts annual evaluations of machine translation systems based on the BLEU-4 criterion (an illustrative computation appears below). A combined method called IQmt, which incorporates BLEU and the additional metrics NIST, GTM, ROUGE and METEOR, has been implemented by Gimenez and Amigo.

Is the output grammatical or well-formed in the target language? Using an interlingua should be helpful in this regard: with a fixed interlingua, one should be able to write a grammatical mapping from the interlingua to the target language. Consider the following Arabic-language input and the English translation produced by the Google translator as of 27 December 2006. This output does not parse under any reasonable English grammar:

وعن حوادث التدافع عند شعيرة رمي الجمرات -التي كثيرا ما يسقط فيها العديد من الضحايا- أشار الأمير نايف إلى إدخال 'تحسينات كثيرة في جسر الجمرات ستمنع بإذن الله حدوث أي تزاحم'. ==> And incidents at the push Carbuncles-throwing ritual, which often fall where many of the victims - Prince Nayef pointed to the introduction of 'many improvements in bridge Carbuncles God would stop the occurrence of any competing.'

(The Arabic means, roughly: 'Regarding the stampede incidents at the stoning-of-the-Jamarat ritual, in which many victims often fall, Prince Nayef pointed to the introduction of "many improvements to the Jamarat Bridge that will, God willing, prevent any crowding."')

Do repeated re-translations preserve the semantics of the original sentence? For example, consider an English input passed multiple times into and out of French using the Google translator as of 27 December 2006 (a minimal fixed-point check is sketched below). As noted above, this kind of round-trip translation is a very unreliable method of evaluation.

An interesting peculiarity of Google Translate as of 24 January 2008 (corrected as of 25 January 2008) was the following result when translating from English to Spanish, which shows an embedded joke in the English-Spanish dictionary, one with some added poignancy given recent events (Heath Ledger died on 22 January 2008):

Better a day earlier than a day late. ==> Heath Ledger is dead
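A system that is adaptive in the sense of rule (b) could attempt the derivation mechanically. The following Python sketch is a toy illustration only, assuming a hypothetical two-entry lexicon BASE_GLOSSES; it is not a description of how any real translator works.

    BASE_GLOSSES = {"faluche": "student hat", "montagne": "mountain"}  # hypothetical mini-lexicon

    def gloss_ard_word(word):
        """Rule (b): gloss a French word formed with the suffix -ard.

        Strips the suffix, restores a final 'e' if needed, and looks the base
        noun up in the (hypothetical) lexicon.  Returns None when unknown.
        """
        if not word.endswith("ard"):
            return None
        base = word[:-len("ard")]
        for candidate in (base, base + "e"):
            if candidate in BASE_GLOSSES:
                # -ard words denote people or behavior associated with the base noun.
                return f"person or behavior associated with a {BASE_GLOSSES[candidate]}"
        return None

    print(gloss_ard_word("faluchard"))  # person or behavior associated with a student hat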
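The BLEU-4 criterion mentioned above can be reproduced with standard tooling. The sketch below uses NLTK's sentence_bleu to score the machine output from the French example against a hand-written reference translation; the reference wording is supplied here purely for illustration and was not part of the original evaluation.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # Machine output quoted above versus an illustrative hand-written reference,
    # both lowercased and naively tokenized on whitespace.
    reference = ("there is a faluchard choir on wednesday , come one and all , "
                 "the faluchards sing bawdy songs !").split()
    hypothesis = ("there is a choral society falucharde wednesday , come many , "
                  "the faluchards sing loose-living women !").split()

    # BLEU-4: geometric mean of 1- to 4-gram precisions times a brevity penalty.
    # Smoothing avoids a zero score when higher-order n-grams never match.
    score = sentence_bleu(
        [reference], hypothesis,
        weights=(0.25, 0.25, 0.25, 0.25),
        smoothing_function=SmoothingFunction().method1,
    )
    print(f"BLEU-4: {score:.3f}")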
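The stationarity and round-trip questions can likewise be automated. Below is a minimal fixed-point check, assuming a hypothetical translate(text, source, target) function standing in for whatever MT system is under test. Reaching a fixed point demonstrates stationarity only; a human judge must still verify that the original meaning survived, which is why round-trip translation remains an unreliable evaluation method.

    def round_trip_fixed_point(text, translate, src="en", pivot="fr", max_iters=10):
        """Translate src -> pivot -> src repeatedly until the text stops changing.

        `translate(text, source, target)` is a hypothetical stand-in for the
        MT system under test.  Returns the final text and the pass count.
        """
        seen = {text}
        for i in range(1, max_iters + 1):
            text = translate(translate(text, src, pivot), pivot, src)
            if text in seen:  # converged to a fixed point (or entered a cycle)
                return text, i
            seen.add(text)
        return text, max_iters  # no convergence within the iteration budget

With a real backend, this exposes whether repeated passes stabilize on a canonical form, drift away from it, or cycle between alternatives.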

[ "Example-based machine translation", "Interlingual machine translation", "translation error rate", "Hybrid machine translation", "Evaluation of machine translation", "Arabic machine translation" ]