Dr. Dobb's Portal, September 2007

In the new global economy, companies have the opportunity to market their wares to billions of customers who don’t speak a word of English. Java was designed from the ground up to help programmers deploy internationalized software. In this article I’m going to show you how Java makes your woes in the area of character sets and encodings melt like cotton candy on your tongue.

Dipping My Toes in the Global Pool

For the past year I’ve been spending a steadily increasing amount of time at work dealing with internationalization of our products. My division of Cisco makes an IP-based phone system that interfaces with users and administrators at dozens of different points. Users have alphanumeric displays on their phones, perform personal and system administration on web pages, and listen to voice prompts when collecting messages. From the user interface perspective there is a lot going on.

Like most new products made in a skunk-works atmosphere, Cisco CallManager was developed with little or no thought towards our international customers. The focus was on quickly developing a stable product with as many features as possible. Of course, our success at this strategy led to immediate discontent from our business partners in Europe and Asia. It turns out that telephone users in France really do want their instruction manuals written in French.

The Four Problems of Text Internationalization

On the surface, modifying your product for users of another language seems simple enough: translate everything and distribute the results. Unfortunately it just isn’t that easy. Translation is only the first of four big problems. Translating written material might present some logistical problems, but these are usually more budgetary than technical.

The other three problems are more technical in nature, and Java provides tools to deal with all three. In order, these problems are:

  • Managing user-seen content once it has been translated into multiple languages.
  • Selecting an appropriate character set and rendering text that uses it.
  • Properly encoding text in a given character set so that it can be stored and transmitted in a world of eight-bit bytes.

Java helps you deal effectively with all three of these problems. The first, management of translated content, is handled using Java Resource Bundles. This article is going to talk about the next two: character sets and encodings.
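Resource bundles are outside the scope of this article, but a tiny sketch shows the idea. (The bundle name Messages and the key greeting below are hypothetical, not taken from any real project.)

import java.util.Locale;
import java.util.ResourceBundle;

public class Greeter {
 public static void main(String[] args)
 {
  // Loads Messages_fr.properties if present, falling back to Messages.properties.
  ResourceBundle bundle = ResourceBundle.getBundle( "Messages", Locale.FRENCH );
  System.out.println( bundle.getString( "greeting" ) );
 }
}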

Character Sets

Cisco’s business-class IP Phones have a nice LCD screen that presents call status information to users in a fairly friendly way. One of the first problems we ran into when internationalizing the entire phone system was that this display only supported the 7-bit ASCII character set. This had the unfortunate effect of changing the name of Señor Nuñez to “Senor Nunez” when stored as a speed dial. That’s because the character set we were using lacked the letters commonly seen in other countries using the Roman alphabet.

A short-sighted solution to this problem might be to expand the character set to a full eight bits, using the upper 128 character positions for the commonly missed characters. And in fact, there is a standard character set called ISO-8859-1 that does just that. Figure 1 shows how ISO-8859-1 populates the upper half of the character space with 96 characters used in Western Europe. (The first 32 positions in the upper half of the space are not used.)

A screen capture of the Character Map application on a Windows PC, with the Western character set selected. The characters with a value greater than 128 are highlighted.
Figure 1 - The Upper 96 Positions of the ISO-8859-1 Character Set

We quickly modified our phone to accept a new font, and soon found that we could properly render names of people in France, Germany, Italy, the Netherlands, and so on. As long as we confined our sales efforts to our friends in NATO all was well. Figure 2 shows our phone happily rendering most of the ISO-8859-1 character set, ready to march onto desktops anywhere the Euro is honored.

A Cisco phone showing the ISO-8859-1 character set in its display, mostly Western alphabet characters combined with diacritical marks.
Figure 2 - A Pan-European Phone

Even in today’s global economy, many manufacturers of software and other products find this state of affairs to be just dandy. But as soon as you try to sell a phone in Greece or the Russian Federation, you’re out of luck. Languages such as Russian and Greek just don’t have enough characters in common with Western European languages to fit into the ISO-8859-1 character set. Never fear: ISO-8859-5 and ISO-8859-7 were created to deal with exactly this problem.

A table showing the Greek character set between 0xA0 and 0xFF.
Figure 3 - The Upper 96 Positions of ISO-8859-7, a Greek Character Set

If you take a look at Figure 3, you can see the seeds of a problem being sown. Adding a new character set to the phone is a manageable problem - one that operating systems like Windows already handle quite effectively. We simply have to make it possible for our telephone to download one font in France, and a different one in Greece.

But now the phone finds itself in a situation that can seem a bit baffling to mossback programmers such as yours truly. If a multinational company has a Señor Nuñez listed in its corporate directory, things will work just fine when a user in France or Italy looks up his name. But unfortunately, a user in Greece will see the name rendered as Seρor Nuρez. We now have a fundamental problem: a given numeric value is rendered differently depending on the character set in use. A lower case ‘n’ with a tilde over it in ISO-8859-1 transforms itself into a lower case Greek rho when we switch to ISO-8859-7.
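A quick sketch (my own illustration, not code from the phone) makes the problem concrete: hand Java a single byte with the value 0xF1 and ask two different character sets what it means.

import java.io.UnsupportedEncodingException;

public class OneByteTwoFaces {
 public static void main(String[] args) throws UnsupportedEncodingException
 {
  byte[] raw = { (byte) 0xF1 };
  // The same byte value decodes to two different characters.
  System.out.println( new String( raw, "ISO-8859-1" ) );  // prints the n with a tilde
  System.out.println( new String( raw, "ISO-8859-7" ) );  // prints a Greek rho
 }
}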

This unpleasant situation causes a huge paradigm shift for programmers working on internationalization. There’s no longer any such thing as “plain text.” When we store a user’s name in the database, we now have to also store the name of the character set that properly renders it. The same thing holds true for error messages, speed number labels, soft-keys, you name it.

Or does it?

Java to the rescue - Part 1

Java deals with this problem in an effective way - it coerces you rather firmly into using Unicode for all character strings. C++ developers have a choice between narrow strings and wide strings - which aren’t necessarily Unicode. For better or worse, Java eliminates that choice.

The nice part about this is that in Unicode, U+00F1 is always the ñ character, and the lower case rho, ρ, is always U+03C1. Even if I don’t have a rho character on my keyboard, I know that I can use Java’s escaping mechanism to represent it as “\u03C1”, inconvenient as that may be.
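Here is a minimal sketch of those escapes in action (the class name is mine):

public class Escapes {
 public static void main(String[] args)
 {
  String senor = "Se\u00F1or Nu\u00F1ez"; // the n-with-tilde written as a Unicode escape
  String rho = "\u03C1";                  // GREEK SMALL LETTER RHO
  System.out.println( senor + " " + rho );
 }
}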

This is a nice feature, because it means that at least internally, a Java string is a string is a string. You don’t have to worry about what character set it is from - it’s Unicode.

It's worse than you think

In a perfect world, Java’s insistence on Unicode would spill over to every file system, network packet type, and so on, and everything would be fine. But unfortunately, there are still billions of web browsers in the world configured to read text from an ISO-8859-X character set. And when our attention turns to Asia, things get even worse, for two reasons.

  1. China, Japan, and Korea have character sets composed of thousands of ideographs. To compound this problem, there are competing character sets used to create Chinese web pages: Taiwan and the PRC tend to use two different character sets, known as Traditional (or Big5) and Simplified (or GB2312).
  2. These character sets don’t fit in a single byte, and accordingly must be encoded in order to be written to byte-oriented files and networks. Unicode is most commonly encoded as UTF-8, in which a single 16-bit character becomes one, two, or three bytes. The Chinese, Japanese, and Korean character sets use their own encoding schemes, usually a row/column value packed into two bytes.

Naturally, the different character sets that I’ve mentioned here are incompatible with one another. Needless to say, the encoding schemes are incompatible as well.
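A short sketch (mine, not part of Listing 1) shows what that incompatibility looks like in practice: the same two-character string becomes a different byte sequence, of a different length, under each encoding.

import java.io.UnsupportedEncodingException;

public class EncodingSizes {
 public static void main(String[] args) throws UnsupportedEncodingException
 {
  String music = "\u97f3\u6a02"; // the two ideographs meaning "music"
  // Two characters, three different byte sequences.
  System.out.println( "UTF-8: " + music.getBytes( "UTF-8" ).length + " bytes" );
  System.out.println( "Big5:  " + music.getBytes( "Big5" ).length + " bytes" );
  System.out.println( "GBK:   " + music.getBytes( "GBK" ).length + " bytes" );
 }
}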

Simply storing your data internally as Unicode doesn’t solve the problem of incompatible character sets and encodings. But, the good news is, Java has built-in library support for converting to and from these encodings any time you convert to or from bytes during an I/O operation.

The OutputStreamWriter and InputStreamReader classes each have a constructor that takes just a reference to a stream object (and falls back to the platform’s default encoding), and another that takes both a stream object and the name of an encoding.

If you search through the Java docs for “Supported Encodings”, you’ll see that Java has built-in support for a huge library of character sets and encodings. Converting one of these to or from Unicode is simply a matter of instantiating a class with the correct encoding parameter.
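Reading is just as easy as writing. The sketch below is my own; the file name matches the Big5 page that Listing 1, later in this article, writes. An InputStreamReader turns the encoded bytes back into ordinary Unicode strings:

import java.io.*;

public class WebReader {
 public static void main(String[] args)
 {
  try {
  // Tell the reader which encoding the bytes on disk use;
  // everything past this point is plain Unicode.
  FileInputStream fis = new FileInputStream( "c:/temp/page_big5.htm" );
  BufferedReader in = new BufferedReader( new InputStreamReader( fis, "BIG5" ) );
  String line;
  while ( ( line = in.readLine() ) != null )
    System.out.println( line );
  in.close();
  }
  catch ( Exception e )
  {
    System.out.println( "Exception " + e );
  }
 }
}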

See the Code

Figure 4 shows a sample Chinese language Web page that is written in Unicode and encoded with UTF-8. Users with the latest operating systems and browsers will usually be able to render this page properly.

My sample web page, showing Chinese characters properly rendered.
Figure 4 - Welcome to the Classical Music Site

But not everyone has a Unicode-capable computer, operating system, and browser. A user who browsed to this page with a browser set to use the Big5 character set would see the screen shown in Figure 5:

A capture of the same web site with the Chinese characters not properly rendered.
Figure 5 - Welcome to the Illegible Music Site
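You don’t even need a browser to reproduce the damage in Figure 5. A tiny sketch (mine): encode a phrase from Listing 1 as UTF-8 bytes, then decode those same bytes as if they were Big5.

import java.io.UnsupportedEncodingException;

public class Mojibake {
 public static void main(String[] args) throws UnsupportedEncodingException
 {
  String welcome = "\u6b61\u8fce"; // the two ideographs meaning "welcome"
  byte[] utf8 = welcome.getBytes( "UTF-8" );
  // Decoding UTF-8 bytes with the wrong character set produces gibberish.
  System.out.println( new String( utf8, "BIG5" ) );
 }
}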

Solving this problem is easy with Java. All I have to do is develop my content in Unicode, then use Java’s built-in classes to churn out localized versions suitable for users of whatever encodings are needed.

Listing 1 shows the WebWriter class that I used for this article. This program has a complete copy of a web page’s content stored in an internal string. By using three different encodings, it creates web pages for browsers set to Unicode, Big5, and GB2312. As you can see, choosing the correct character set and encoding from Java is trivial.

import java.io.*;

public class WebWriter {
  static String eol = System.getProperty( "line.separator" );
  static String s =
  "<HTML>" + eol +
  "<BODY>" + eol +
  "<TABLE cellspacing=\"5\">" + eol +
  " <TR>" + eol +
  "  <TD><img src=\"michael.jpg\"></TD>" + eol +
  "  <TD><H2>" +
  "\u6b61\u8fce\u5149\u81e8\u53e4\u5178\u97f3\u6a02\u7a7a\u9593" +
  "</H2></TD>" + eol +
  "  <TD><img src=\"violin.jpg\"></TD>" + eol +
  " </TR>" + eol +
  "</BODY>" + eol +
  "</HTML>" + eol;

 public static void main(String[] args)
 {
  try {
  // Write the same Unicode content three times, once per target encoding.
  // UTF-8 for Unicode-aware browsers.
  FileOutputStream fos =
      new FileOutputStream("c:/temp/page_utf8.htm");
  Writer out = new OutputStreamWriter( fos, "UTF8" );
  out.write( s );
  out.close();
  // GBK (a superset of GB2312) for Simplified Chinese browsers.
  fos = new FileOutputStream( "c:/temp/page_gb.htm" );
  out = new OutputStreamWriter( fos, "GBK" );
  out.write( s );
  out.close();
  // Big5 for Traditional Chinese browsers.
  fos = new FileOutputStream( "c:/temp/page_big5.htm" );
  out = new OutputStreamWriter( fos, "BIG5" );
  out.write( s );
  out.close();
  }
  catch ( Exception e )
  {
    System.out.println( "Exception " + e );
  }
 }
} 
Listing 1 - WebWriter.java

Details

You can see the actual HTML files created for this article using the links below. Note that well-written web pages use meta tags or HTTP headers - for example, <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> - to tell the browser which encoding and character set to use. (See section 5.2.2 of the HTML specification for details.) These pages intentionally provide no such information, which makes experimentation a bit easier.

To view the web pages in their correct encoding, you will need to change your browser encoding setting, unless it guesses right based on content. In Internet Explorer, you select this from the View|Encoding portion of the menu. Firefox is nearly the same: View|Character Encoding. If you are an English-speaking computer user, you will undoubtedly have to install Chinese or Unicode fonts as well. If you’re lucky, this will be a semi-automatic process.

The web pages can be found here:

If you look at the source of these pages, each Chinese ideograph occupies two or more bytes, which an ANSI text editor will render as something like this:

­¹âÅR¹ÅµäÒô˜·¿Õég

The text shown above is the GB-encoded version: ten Chinese characters occupying twenty bytes. For non-Chinese speakers, the following table shows the translation of the individual characters, as well as the more meaningful translation of the short phrases consisting of multiple characters.

Ideograph | Unicode | GB2312 | Big5 | Character Meaning                 | Phrase Meaning
歡        | U+6B61  | 2722   | C577 | happy, pleased, glad; joy; enjoy  | Welcome
迎        | U+8FCE  | 5113   | AAEF | receive, welcome, greet           |
光        | U+5149  | 2566   | A5FA | light, brilliant, shine; only     | to
臨        | U+81E8  | 3357   | C17B | draw near, approach; descend      |
古        | U+53E4  | 2537   | A56A | old, classic, ancient             | classic
典        | U+5178  | 2168   | A8E5 | law, canon; documentation; classic |
音        | U+97F3  | 5084   | ADB5 | sound, tone, pitch, pronunciation | music
樂        | U+6A02  | 3254   | BCD6 | happy, glad; enjoyable; music     |
空        | U+7A7A  | 3153   | AAC5 | empty, hollow, bare, deserted     | space
間        | U+9593  | 2868   | B6A1 | interval, space; place, between   |
Table 1 - Translation of the ideographs in the sample web page

Conclusion

Successful products today need to support customers all over the world. Using Unicode for your core content makes this much easier, and Java is ready to help you on this path. More importantly, Java makes it simple to continue talking to devices on the edges of your network that are still using old-school character sets and encodings.

Unfortunately we still have to use human beings to do the difficult work of translating our content from one language to another, but outside of that Java does everything we need.