Mojibake (文字化け; IPA: [mod͡ʑibake]; lit. "character transformation"), from the Japanese 文字 (moji) "character" + 化け (bake, pronounced "bah-keh") "transform", is the garbled text that results from text being decoded using an unintended character encoding.[1] The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system. The display may include the generic replacement character � in places where the binary representation is considered invalid. A replacement can also involve multiple consecutive symbols, as viewed in one encoding, when the same binary code constitutes one symbol in the other encoding. This is either because of differing constant-length encodings (as in Asian 16-bit encodings versus European 8-bit encodings), or because of the use of variable-length encodings (notably UTF-8 and UTF-16).

Failed rendering of glyphs due to either missing fonts or missing glyphs in a font is a different issue that is not to be confused with mojibake. Symptoms of this failed rendering include blocks with the code point displayed in hexadecimal or using the generic replacement character �. Importantly, these replacements are valid and are the result of correct error handling by the software.

To correctly reproduce the original text that was encoded, the correspondence between the encoded data and the notion of its encoding must be preserved. Since mojibake is an instance of a mismatch between these, restoring the text can be achieved by manipulating the data itself, or by just relabeling it.

Mojibake is often seen with text data that has been tagged with a wrong encoding; it may not even be tagged at all, but moved between computers with different default encodings. A major source of trouble is communication protocols that rely on settings on each computer rather than sending or storing metadata together with the data.

The differing default settings between computers are in part due to differing deployments of Unicode among operating system families, and partly due to the legacy encodings' specializations for different writing systems of human languages. Whereas Linux distributions mostly switched to UTF-8 (around 2004[citation needed]) for all uses of text, Microsoft Windows still uses codepages for text files that differ between languages. For some writing systems, an example being Japanese, several encodings have historically been employed, causing users to see mojibake relatively often. As a Japanese example, the word mojibake "文字化け", when encoded in UTF-8, is incorrectly displayed as "æ–‡å—åŒ–ã‘" in software that assumes text to be in the Windows-1252 or ISO-8859-1 encodings, usually labelled Western.
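This round trip can be sketched in Python. One caveat: Python's cp1252 codec strictly rejects the few bytes Windows-1252 leaves undefined (here 0x81), so errors="replace" stands in for a browser's more lenient behaviour:

```python
# Encode the Japanese word "mojibake" as UTF-8 bytes...
data = "文字化け".encode("utf-8")

# ...then decode those bytes as if they were Windows-1252 text.
# Byte 0x81 is undefined in Windows-1252, so a strict decoder needs
# errors="replace"; a browser would typically pass it through silently.
garbled = data.decode("cp1252", errors="replace")
print(garbled)  # begins with "æ–‡å", matching the example above
```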

If the encoding is not specified, it is up to the software to decide it by other means. Depending on the type of software, the typical solution is either configuration or charset detection heuristics. Both are prone to mispredict in not-so-uncommon scenarios.

The encoding of text files is usually governed by an OS-level setting, which depends on the brand of operating system and possibly the user's language. Therefore, the assumed encoding is systematically wrong for files that come from a computer with a different setting, for example when transferring files between Windows and Linux. One solution is to use a byte order mark, but for source code and other machine-readable text, many parsers do not tolerate this. Another is storing the encoding as metadata in the filesystem. Filesystems that support extended file attributes can store this as user.charset.[2] This also requires support in software that wants to take advantage of it, but does not disturb other software.

While a few encodings are easy to detect, in particular UTF-8, there are many that are hard to distinguish (see charset detection). A web browser may not be able to distinguish a page coded in EUC-JP from one coded in Shift-JIS if the coding scheme is not assigned explicitly, either using HTTP headers sent along with the documents or using the HTML document's meta tags, which substitute for missing HTTP headers when the server cannot be configured to send the proper ones; see character encodings in HTML.

Mojibake also occurs when the encoding is wrongly specified. This often happens between encodings that are similar. For example, the Eudora email client for Windows was known to send emails labelled as ISO-8859-1 that were in reality Windows-1252.[3] The Mac OS version of Eudora did not exhibit this behaviour. Windows-1252 contains extra printable characters in the C1 range (the most frequently seen being the typographically correct quotation marks and dashes) that were not displayed properly in software complying with the ISO standard; this especially affected software running under other operating systems such as Unix.

People think they are using ASCII, and tend to label whatever superset of ASCII they actually use as "ASCII", perhaps for simplification. Even in academic literature, the word "ASCII" can be found used as an example of something not compatible with Unicode, where evidently "ASCII" is Windows-1252 and "Unicode" is UTF-8.[1] Note that UTF-8 is backwards compatible with ASCII.

When there are layers of protocols, each trying to specify the encoding based on different information, the least certain information may be misleading to the recipient. For example, consider a web server serving a static HTML file over HTTP. The character set may be communicated to the client in any of three ways:

in the HTTP header. This information can be based on server configuration (for instance, when serving a file off disk) or controlled by the application running on the server (for dynamic websites).

in the file, as an HTML meta tag (http-equiv or charset) or the encoding attribute of an XML declaration. This is the encoding that the author meant to save the particular file in.

in the file, as a byte order mark. This is the encoding that the author's editor actually saved it in. Unless an accidental encoding conversion has happened (by opening it in one encoding and saving it in another), this will be correct.

Much older hardware is typically designed to support only one character set, and the character set typically cannot be altered. The character table contained within the display firmware will be localized to have characters for the country the device is to be sold in, and typically the table differs from country to country. As such, these systems will potentially display mojibake when loading text generated on a system from a different country. Likewise, many early operating systems do not support multiple encoding formats and thus will end up displaying mojibake if made to display non-standard text. Early versions of Microsoft Windows and Palm OS, for example, are localized on a per-country basis and will only support the encoding standards relevant to the country the localized version will be sold in, and will display mojibake if a file containing text in an encoding format different from the one the OS is designed to support is opened.

Applications using UTF-8 as a default encoding may achieve a greater degree of interoperability because of its widespread use and backward compatibility with US-ASCII. UTF-8 also has the ability to be directly recognised by a simple algorithm, so that well-written software should be able to avoid mixing UTF-8 up with other encodings.
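That recognition amounts to checking whether the bytes are well-formed UTF-8, which a strict decoder already does. A minimal sketch:

```python
def looks_like_utf8(data: bytes) -> bool:
    """Return True if data is well-formed UTF-8 (a strict decode succeeds)."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Valid UTF-8 for "文" passes; a lone Latin-1 pound-sign byte (0xA3) is a
# continuation byte with no lead byte, so it is not well-formed UTF-8 and a
# legacy encoding must be assumed instead.
print(looks_like_utf8("文".encode("utf-8")))  # True
print(looks_like_utf8(b"\xa3"))              # False
```

The check is reliable in practice because almost any legacy 8-bit text containing non-ASCII bytes fails it, though short ASCII-only data is, of course, valid in nearly every encoding at once.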

The difficulty of resolving an instance of mojibake varies depending on the application within which it occurs and the causes of it. Two of the most common applications in which mojibake may occur are web browsers and word processors. Modern browsers and word processors often support a wide array of character encodings. Browsers often allow a user to change their rendering engine's encoding setting on the fly, while word processors allow the user to select the appropriate encoding when opening a file. It may take some trial and error for users to find the correct encoding.

The problem gets more complicated when it occurs in an application that normally does not support a wide range of character encodings, such as in a non-Unicode computer game. In this case, the user must change the operating system's encoding settings to match that of the game. However, changing the system-wide encoding settings can also cause mojibake in pre-existing applications. In Windows XP or later, a user also has the option to use Microsoft AppLocale, an application that allows the changing of per-application locale settings. Even so, changing the operating system encoding settings is not possible on earlier operating systems such as Windows 98; to resolve this issue on earlier operating systems, a user would have to use third-party font rendering applications.

Mojibake in English texts generally occurs in punctuation, such as em dashes (—), en dashes (–), and curly quotes (“, ”, ‘, ’), but rarely in character text, since most encodings agree with ASCII on the encoding of the English alphabet. For example, the pound sign "£" will appear as "Â£" if it was encoded by the sender as UTF-8 but interpreted by the recipient as CP1252 or ISO 8859-1. If iterated, this can lead to "Ã‚Â£", "Ãƒâ€šÃ‚Â£", "ÃƒÆ’Ã¢â‚¬Å¡Ãƒâ€šÃ‚Â£", etc.
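The iterated corruption can be reproduced by repeatedly encoding as UTF-8 and re-decoding as Windows-1252 (a Python sketch):

```python
text = "£"
for _ in range(3):
    # Sender writes UTF-8; recipient misreads the bytes as Windows-1252.
    text = text.encode("utf-8").decode("cp1252")
    print(text)
# Prints "Â£", then "Ã‚Â£", then "Ãƒâ€šÃ‚Â£"
```

Each pass roughly doubles the damage, because every non-ASCII character of the previous garble becomes two or three bytes of UTF-8, each misread as yet another character.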

Some computers did, in older eras, have vendor-specific encodings which caused mismatches also for English text. Commodore-brand 8-bit computers used PETSCII encoding, particularly notable for inverting the upper and lower case compared to standard ASCII. PETSCII printers worked fine on other computers of the era, but flipped the case of all letters. IBM mainframes use the EBCDIC encoding, which does not match ASCII at all.

Users of Central and Eastern European languages can also be affected. Because most computers were not connected to any network during the mid- to late-1980s, there were different character encodings for every language with diacritical characters.

Mojibake caused by a song title in Cyrillic (Моя Страна) on a car audio system

Mojibake may be colloquially called krakozyabry (кракозя́бры, IPA: [krɐkɐˈzʲæbrɪ̈]) in Russian, where the problem was and remains complicated by the several systems for encoding Cyrillic.[4] The Soviet Union and early Russian Federation developed KOI encodings (Kod Obmena Informaciej, Код Обмена Информацией, which translates to "Code for Information Exchange"). This began with the Cyrillic-only 7-bit KOI7, based on ASCII but with Latin and some other characters replaced with Cyrillic letters. Then came the 8-bit KOI8 encoding, an ASCII extension which encodes Cyrillic letters only with high-bit-set octets corresponding to 7-bit codes from KOI7. It is for this reason that KOI8 text, even Russian, remains partially readable after stripping the eighth bit, which was considered a major advantage in the age of 8BITMIME-unaware email systems. For example, the words "Школа русского языка" (shkola russkogo yazyka), encoded in KOI8 and then passed through the high-bit stripping process, end up rendered as "[KOLA RUSSKOGO qZYKA". Eventually KOI8 gained different flavors for Russian/Bulgarian (KOI8-R), Ukrainian (KOI8-U), Belarusian (KOI8-RU) and even Tajik (KOI8-T).
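The high-bit stripping can be simulated with Python's koi8-r codec. A sketch; exactly which Latin letters, brackets and cases appear depends on the KOI7/KOI8 variant, so the output's case may differ in detail from the example above:

```python
data = "Школа русского языка".encode("koi8-r")

# Strip the eighth bit of every byte, as a 7-bit-only mail path would,
# then read the survivors as plain ASCII.
stripped = bytes(b & 0x7F for b in data).decode("ascii")
print(stripped)  # a case-flipped Latin transliteration, still readable
```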

Before Unicode, it was necessary to match text encoding with a font using the same encoding system. Failure to do this produced unreadable gibberish whose specific appearance varied depending on the exact combination of text encoding and font encoding. For example, attempting to view non-Unicode Cyrillic text using a font that is limited to the Latin alphabet, or using the default ("Western") encoding, typically results in text that consists almost entirely of vowels with diacritical marks. (KOI8 "Библиотека" (biblioteka, library) becomes "âÉÂÌÉÏÔÅËÁ".) Using Windows-1251 to view text in KOI8 or vice versa results in garbled text that consists mostly of capital letters (KOI8 and Windows-1251 share the same ASCII region, but KOI8 has uppercase letters in the region where Windows-1251 has lowercase, and vice versa). In general, Cyrillic gibberish is symptomatic of using the wrong Cyrillic font. During the early years of the Russian sector of the World Wide Web, both KOI8 and Windows-1251 were common. As of 2017, one can still encounter HTML pages in Windows-1251 and, rarely, KOI8 encodings, as well as Unicode. (An estimated 1.7% of all web pages worldwide, all languages included, are encoded in Windows-1251.[5]) Though the HTML standard includes the ability to specify the encoding for any given web page in its source,[6] this is sometimes neglected, forcing the user to switch encodings in the browser manually.
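Both symptoms can be reproduced directly with Python's koi8-r, latin-1 and cp1251 codecs:

```python
data = "Библиотека".encode("koi8-r")

# KOI8 bytes read with a Western (Latin-1) encoding: accented vowels.
print(data.decode("latin-1"))   # âÉÂÌÉÏÔÅËÁ

# The same bytes read as Windows-1251: mostly capital Cyrillic letters,
# because the two encodings swap the upper- and lowercase regions.
print(data.decode("cp1251"))    # вЙВМЙПФЕЛБ
```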

Prior to the creation of ISO 8859-2 in 1987, users of various computing platforms used their own character encodings, such as AmigaPL on Amiga, Atari Club on Atari ST, and Masovia, IBM CP852, Mazovia and Windows CP1250 on IBM PCs. Polish companies selling early DOS computers created their own mutually incompatible ways to encode Polish characters and simply reprogrammed the EPROMs of the video cards (typically CGA, EGA, or Hercules) to provide hardware code pages with the needed glyphs for Polish, arbitrarily located without reference to where other computer sellers had placed them.

The situation began to improve when, after pressure from academic and user groups, ISO 8859-2 succeeded as the "Internet standard" with limited support from the dominant vendors' software (today largely replaced by Unicode). With the numerous problems caused by the variety of encodings, even today some users tend to refer to Polish diacritical characters as krzaczki ([kshach-kih], lit. "little shrubs").

Slovenian, Croatian, Bosnian and Serbian, the variants of the Yugoslav Serbo-Croatian language, add to the basic Latin alphabet the letters š, đ, č, ć, ž, and their capital counterparts Š, Đ, Č, Ć, Ž (officially only č/Č, š/Š and ž/Ž in Slovenian, although the others are used when needed, mostly in foreign names). All of these letters are defined in Latin-2 and Windows-1250, while only some (š, Š, ž, Ž, Đ) exist in the usual OS-default Windows-1252, and are there because of some other languages.

Although mojibake can occur with any of these characters, the letters that are not included in Windows-1252 are much more prone to errors. Thus, even nowadays, "šđčćž ŠĐČĆŽ" is often displayed as "šðèæž ŠÐÈÆŽ", although ð, è, æ, È, Æ are never used in Slavic languages.
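The substitution can be reproduced by encoding with the Central European code page and decoding with the Western one (a Python sketch):

```python
text = "šđčćž ŠĐČĆŽ"

# Windows-1250 bytes misread with the Western code page: š and ž survive
# (both code pages put them at the same byte values), while đ, č, ć fall
# on ð, è, æ, letters no Slavic language uses.
print(text.encode("cp1250").decode("cp1252"))  # šðèæž ŠÐÈÆŽ
```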

When confined to basic ASCII (most user names, for example), common replacements are: š→s, đ→dj, č→c, ć→cj, ž→z (capital forms analogously, with Đ→Dj or Đ→DJ depending on word case). All of these replacements introduce ambiguities, so reconstructing the original from such a form is usually done manually if required.

The Windows-1252 encoding is important because the English versions of the Windows operating system are the most widespread, not localized ones.[citation needed] The reasons for this include a relatively small and fragmented market, which increases the price of high-quality localization; a high degree of software piracy (in turn caused by the high price of software compared to income), which discourages localization efforts; and people preferring English versions of Windows and other software.[citation needed]

The drive to differentiate Croatian from Serbian, Bosnian from Croatian and Serbian, and now even Montenegrin from the other three creates many problems. There are many different localizations, using different standards and of different quality. There are no common translations for the vast amount of computer terminology originating in English. In the end, people use adopted English words ("kompjuter" for "computer", "kompajlirati" for "compile", etc.), and if they are unaccustomed to the translated terms, they may not understand what some option in a menu is supposed to do based on the translated phrase. Therefore, people who understand English, as well as those who are accustomed to English terminology (which is to say most, because English terminology is also mostly taught in schools because of these problems), regularly choose the original English versions of non-specialist software.

Newer versions of English Windows allow the ANSI codepage to be changed (older versions require special English versions with this support), but this setting can be and often was incorrectly set. For example, Windows 98/Me can be set to most non-right-to-left single-byte codepages including 1250, but only at install time.

Hungarian is another affected language, which uses the 26 basic English characters, plus the accented forms á, é, í, ó, ú, ö, ü (all present in the Latin-1 character set), plus the two characters ő and ű, which are not in Latin-1. These two characters can be correctly encoded in Latin-2, Windows-1250 and Unicode. Before Unicode became common in e-mail clients, e-mails containing Hungarian text often had the letters ő and ű corrupted, sometimes to the point of unrecognizability. It is common to respond to an e-mail rendered unreadable (see examples below) by character mangling (referred to as "betűszemét", meaning "garbage lettering") with the phrase "Árvíztűrő tükörfúrógép", a nonsense phrase (literally "flood-resistant mirror-drilling machine") containing all accented characters used in Hungarian.

This was very common in the DOS era, when text was encoded with the Central European CP 852 encoding but the operating system, software, or printer used the default CP 437 encoding. Note that the lowercase letters are mainly correct, the exceptions being ő (ï) and ű (√). Ü/ü is correct because CP 852 was made compatible with German. Nowadays this occurs mainly on printed prescriptions and cheques.
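Assuming Python's cp852 and cp437 codecs, the DOS-era symptom can be sketched as follows; only ő and ű land on wrong glyphs:

```python
# Hungarian letters written on a CP 852 system but printed by a device
# that only knows CP 437.
for ch in "őű":
    print(ch, "->", ch.encode("cp852").decode("cp437"))
# ő -> ï, ű -> √
```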

The CWI-2 encoding was designed so that the text remains fairly readable even if the display or printer uses the default CP 437 encoding. This encoding was heavily used in the 1980s and early 1990s, but nowadays it is completely deprecated.

The default Western Windows encoding is used instead of the Central European one. Only ő-Ő (õ-Õ) and ű-Ű (û-Û) are wrong, but the text is completely readable. This is the most common error nowadays; due to ignorance, it occurs often on webpages or even in printed media.
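This, too, is a one-line Python sketch: of the accented Hungarian letters, only ő and ű change when Windows-1250 bytes are read as Windows-1252, since the rest sit at the same byte values in both code pages:

```python
text = "Árvíztűrő tükörfúrógép"

# Windows-1250 bytes misread with the default Western code page.
print(text.encode("cp1250").decode("cp1252"))
# Árvíztûrõ tükörfúrógép  (only ű->û and ő->õ differ)
```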

This is mainly caused by wrongly configured web services or webmail clients, which were not tested for international usage (as the problem remains concealed for English texts). In this case, the actual (often generated) content is in UTF-8; however, it is not declared in the HTML headers, so the rendering engine displays it with the default Western encoding.

These are languages for which the ISO-8859-1 character set (also known as Latin 1 or Western) has been in use. However, ISO-8859-1 has been obsoleted by two competing standards: the backward-compatible Windows-1252, and the slightly altered ISO-8859-15. Both add the euro sign € and the French œ, but otherwise any confusion of these three character sets does not create mojibake in these languages. Furthermore, it is always safe to interpret ISO-8859-1 as Windows-1252, and fairly safe to interpret it as ISO-8859-15, in particular with respect to the euro sign, which replaces the rarely used currency sign (¤). However, with the advent of UTF-8, mojibake has become more common in certain scenarios, e.g. exchange of text files between UNIX and Windows computers, due to UTF-8's incompatibility with Latin-1 and Windows-1252. But UTF-8 can be directly recognised by a simple algorithm, so that well-written software should be able to avoid mixing UTF-8 up with other encodings; thus this was most common when much software did not yet support UTF-8. Most of these languages were supported by MS-DOS's default CP437 and other machine-default encodings, except ASCII, so problems when buying an operating system version were less common. Windows and MS-DOS are not compatible, however.

In Swedish, Norwegian, Danish and German, vowels are rarely repeated, and it is usually obvious when one character gets corrupted, e.g. the second letter in "kÃ¤rlek" (kärlek, "love"). This way, even though the reader has to guess between å, ä and ö, almost all texts remain legible. Finnish text, on the other hand, does feature repeating vowels in words like hääyö ("wedding night"), which can sometimes render text very hard to read (e.g. hääyö appears as "hÃ¤Ã¤yÃ¶"). Icelandic and Faroese have ten and eight possibly confounding characters, respectively, which thus can make it more difficult to guess corrupted characters; Icelandic words like þjóðlöð ("outstanding hospitality") become almost entirely unintelligible when rendered as "Ã¾jÃ³Ã°lÃ¶Ã°".
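Each of these examples is a single UTF-8-read-as-Latin-1 confusion, reproducible in Python:

```python
for word in ("kärlek", "hääyö", "þjóðlöð"):
    # UTF-8 bytes misread as ISO-8859-1: every non-ASCII letter
    # becomes two garbage characters.
    print(word, "->", word.encode("utf-8").decode("latin-1"))
# kärlek -> kÃ¤rlek, hääyö -> hÃ¤Ã¤yÃ¶, þjóðlöð -> Ã¾jÃ³Ã°lÃ¶Ã°
```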

In German, Buchstabensalat ("letter salad") is a common term for this phenomenon, and in Spanish, deformación (literally "deformation").

Some users transliterate their writing when using a computer, either by omitting the problematic diacritics, or by using digraph replacements (å → aa, ä/æ → ae, ö/ø → oe, ü → ue, etc.). Thus, an author might write "ueber" instead of "über", which is standard practice in German when umlauts are not available. The latter practice seems to be better tolerated in the German language sphere than in the Nordic countries. For example, in Norwegian, digraphs are associated with archaic Danish, and may be used jokingly. However, digraphs are useful in communication with other parts of the world. As an example, the Norwegian football player Ole Gunnar Solskjær had his name spelled "SOLSKJAER" on his back when he played for Manchester United.

An artifact of UTF-8 misinterpreted as ISO-8859-1, "Ring meg nÃ¥" ("Ring meg nå"), was seen in an SMS scam raging in Norway in June 2014.[7]

The writing systems of certain languages of the Caucasus region, including the scripts of Georgian and Armenian, may produce mojibake. This problem is particularly acute in the case of ArmSCII or ARMSCII, a set of obsolete character encodings for the Armenian alphabet which have been superseded by Unicode standards. ArmSCII is not widely used because of a lack of support in the computer industry. For example, Microsoft Windows does not support it.

Another type of mojibake occurs when text is erroneously parsed in a multi-byte encoding, such as one of the encodings for East Asian languages. With this kind of mojibake, more than one (typically two) characters are corrupted at once, e.g. "k舐lek" (kärlek) in Swedish, where "är" is parsed as "舐". Compared to the above mojibake, this is harder to read, since letters unrelated to the problematic å, ä or ö are missing, and it is especially problematic for short words starting with å, ä or ö such as "än" (which becomes "舅"). Since two letters are combined, the mojibake also seems more random (over 50 variants compared to the normal three, not counting the rarer capitals). In some rare cases, an entire text string which happens to include a pattern of particular word lengths, such as the sentence "Bush hid the facts", may be misinterpreted.
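The "Bush hid the facts" case is a double-byte misreading of plain ASCII: the 18 ASCII bytes happen to pair up into 9 valid UTF-16 code units. A Python sketch, assuming little-endian UTF-16 as in the well-known Windows Notepad incident:

```python
data = "Bush hid the facts".encode("ascii")

# The 18 single bytes pair up into 9 little-endian UTF-16 units,
# every one of which falls in the CJK Unified Ideographs block.
misread = data.decode("utf-16-le")
print(misread)  # nine CJK characters, beginning with 畂 (U+7542)
```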

In Japanese, the phenomenon is, as mentioned, called mojibake (文字化け). It is a particular problem in Japan due to the numerous different encodings that exist for Japanese text. Alongside Unicode encodings like UTF-8 and UTF-16, there are other standard encodings, such as Shift-JIS (Windows machines) and EUC-JP (UNIX systems). Mojibake, as well as being encountered by Japanese users, is also often encountered by non-Japanese when attempting to run software written for the Japanese market.

In Chinese, the same phenomenon is called luànmǎ (Pinyin; Simplified Chinese: 乱码, Traditional Chinese: 亂碼, meaning "chaotic code"), and can occur when computerised text is encoded in one Chinese character encoding but is displayed using the wrong encoding. When this occurs, it is often possible to fix the issue by switching the character encoding without loss of data. The situation is complicated because of the existence of several Chinese character encoding systems in use, the most common ones being Unicode, Big5, and Guobiao (with several backward-compatible versions), and the possibility of Chinese characters being encoded using Japanese encodings.

It is easy to identify the original encoding when luànmǎ occurs in Guobiao encodings:

Newspapers have dealt with this problem in various ways, including using software to combine two existing, similar characters; using a picture of the personality; or simply substituting a homophone for the rare character in the hope that the reader would be able to make the correct inference.

A similar effect can occur in Brahmic or Indic scripts of South Asia, used in such Indo-Aryan or Indic languages as Hindustani (Hindi-Urdu), Bengali, Punjabi, Marathi, and others, even if the character set employed is properly recognized by the application. This is because, in many Indic scripts, the rules by which individual letter symbols combine to create symbols for syllables may not be properly understood by a computer missing the appropriate software, even if the glyphs for the individual letter forms are available.

A particularly notable example of this is the old Wikipedia logo, which attempts to show the character analogous to "wi" (the first syllable of "Wikipedia") on each of many puzzle pieces. The puzzle piece meant to bear the Devanagari character for "wi" instead used to display the "wa" character followed by an unpaired "i" modifier vowel, easily recognizable as mojibake generated by a computer not configured to display Indic text.[9] The logo as redesigned in May 2010[ref] has fixed these errors.

The idea of plain text requires the operating system to provide a font to display Unicode codes. This font differs from OS to OS for Sinhala, and it produces orthographically incorrect glyphs for some letters (syllables) across all operating systems. For instance, the 'reph', the short form for 'r', is a diacritic that normally goes on top of a plain letter. However, it is wrong for it to go on top of some letters like 'ya' or 'la', yet this happens on all operating systems. This appears to be a fault of the internal programming of the fonts. On Macintosh and iPhone, the muurdhaja l (dark l) and 'u' combination and its long form both yield wrong shapes.

Some Indic and Indic-derived scripts, most notably Lao, were not officially supported by Windows XP until the release of Vista.[10] However, various sites have made fonts available to download for free.

The examples in this article do not have UTF-8 as the browser setting, because UTF-8 is easily recognisable, so if a browser supports UTF-8 it should recognise it automatically, and not try to interpret something else as UTF-8.

Newline — The conventions for representing the line break differ between Windows and Unix systems. Though most software supports both conventions (which is trivial), software that must preserve or display the difference (e.g. version control systems and data comparison tools) can get substantially more difficult to use if not adhering to one convention.

Byte order mark — The most in-band way to store the encoding together with the data is to prepend it. This is by intention invisible to humans using compliant software, but will by design be perceived as "garbage characters" by non-compliant software (including many interpreters).
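In Python, the utf-8-sig codec prepends exactly this mark, and software that decodes the bytes with a legacy code page sees the notorious "ï»¿" garbage:

```python
data = "Hello".encode("utf-8-sig")   # BOM bytes EF BB BF, then the text
print(data[:3])                      # b'\xef\xbb\xbf'

# Compliant software strips the mark again...
print(data.decode("utf-8-sig"))      # Hello

# ...while a Windows-1252 reader renders it as garbage characters.
print(data.decode("cp1252"))         # ï»¿Hello
```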

HTML entities — An encoding of special characters in HTML, mostly optional, but required for certain characters to escape interpretation as markup.

While failure to apply this transformation is a vulnerability (see cross-site scripting), applying it too many times results in garbling of these characters. For example, the quotation mark " becomes &quot;, &amp;quot;, &amp;amp;quot; and so on.
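Python's html module shows the effect of applying the escape once versus repeatedly (a sketch):

```python
import html

s = '"'
for _ in range(3):
    # Escaping already-escaped text re-escapes the leading "&",
    # garbling the entity a little more on every pass.
    s = html.escape(s)
    print(s)
# &quot; then &amp;quot; then &amp;amp;quot;
```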

^ a b "Will Unicode soon be the universal code?" IEEE Spectrum, vol. 49, issue 7, p. 60 (July 2012). "The advantage of Unicode is that if everyone adopted it, it would eradicate the problem of mojibake, Japanese for 'character transformation.' Mojibake is the jumble that results when characters are encoded in one system but decoded in another."