I was recently looking into how to draw something on screen in a .NET WinForms application so that it appears at the “correct” size. I found a number of issues around the handling of screen resolution and DPI, and in order to remember what’s what, and perhaps help somebody with the same problems, I thought I’d write it all down.

Some definitions

For a start, what does DPI really mean? It is often used interchangeably with “resolution”, which isn’t exactly wrong, but it is confusing. The term “resolution” is usually associated with displays, or perhaps that’s just my understanding. When I hear somebody talking about resolution, I expect to hear things like 1024x768, and I’m sure I’m not alone. This kind of resolution information defines the total number of pixels that can be displayed. But resolution alone doesn’t tell me everything there is to know about how a picture is going to look when displayed, because there’s a second factor that matters just as much: the physical size of the display area. On a 15 inch screen, 1024x768 used to look okay. On your 21 inch flat screen it’s probably not so great. That’s where DPI comes into play. It means “dots per inch”, and that’s exactly what it defines: the number of pixels an output device can render per inch of physical real estate. For example, my old Dell 2001 FP is a 20 inch screen. That’s of course its diagonal; the visible area is 16x12 inches (I’m surprised to find that this works out to exactly a 20 inch diagonal - well done, Dell!). It has a native resolution of 1600x1200 pixels. Easy maths: that’s 100 DPI!

On the other hand, I’m now using two 28 inch screens at a resolution of 1920x1200. Their visible area is 23.39x14.57 inches, which gives a DPI of 82.1 in the horizontal direction and 82.4 in the vertical direction. With a large screen like this, DPI tends to be much lower, because resolution hasn’t been keeping up with the growth of screen sizes. It is also interesting that the two values I calculated aren’t identical; the difference seems too large to put down to measuring errors. That’s perfectly possible, of course, because the DPI values in the X and Y directions would only be identical if the pixels on the device were perfectly square.
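Just to make the arithmetic explicit, here’s a tiny sketch of that calculation; the numbers are simply the ones quoted above:

// DPI is just the pixel count divided by the physical size in inches.
static double Dpi(int pixels, double inches) {
  return pixels / inches;
}

// The Dell 2001 FP: 1600x1200 pixels over 16x12 inches.
Console.WriteLine(Dpi(1600, 16.0));  // 100
Console.WriteLine(Dpi(1200, 12.0));  // 100

// The 28 inch screens: 1920x1200 pixels over 23.39x14.57 inches.
Console.WriteLine(Dpi(1920, 23.39)); // ~82.1
Console.WriteLine(Dpi(1200, 14.57)); // ~82.4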

Why is DPI relevant?

So why bother knowing these precise values for DPI? Well, it’s simple: because unless you know these values, you can’t predict how large something is going to appear when displayed. On printers, for example, knowing the DPI “resolution” of the output is very important. If you print a line that’s defined to be an inch long in your DTP application, you expect it to be an inch long when it comes out of the printer. Although it surprises some people, the same is true for displaying something on screen. An application should be able to display a document on screen in such a way that an element which is going to be an inch long on printed paper is also an inch long on screen. This is easily possible when taking DPI into account, and entirely impossible when ignoring it.

Just in case you still think that’s a weird idea, consider a simple UI element, like a button, or perhaps a piece of text. Is it more useful to configure the button to be 80 pixels wide or 2cm wide? Does a text height of 10 pixels (I’m talking about labels or the like) give a better result than a height of 4mm? In conjunction with text there are other measurements like pt and similar, which confuse things further. The point is, without taking the actual screen DPI values into account, you just don’t know what size something is going to be when the user sees it. A small conversion sketch follows below.
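As a rough sketch of what taking DPI into account means in code (the helper name and the DPI figures in the example are just my own illustration, not anything Windows provides):

// Convert a length in millimeters into pixels for a given DPI value.
// There are 25.4 millimeters to an inch.
static int MillimetersToPixels(double millimeters, double dpi) {
  return (int) Math.Round(millimeters / 25.4 * dpi);
}

// A 2cm wide button needs about 76 pixels on a true 96 DPI screen,
// but only about 65 pixels on my 82 DPI screens.
int widthAt96 = MillimetersToPixels(20, 96); // 76
int widthAt82 = MillimetersToPixels(20, 82); // 65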

Of course some people realized the importance of DPI a long, long time ago. In conjunction with computer displays, the DDC and EDID standards allow your computer to receive information about the screen’s physical dimensions directly from the screen itself, which in turn allows it to calculate the DPI values for the current resolution.

So what’s wrong in Windows?

There are several things that go wrong in Windows. For a start, display drivers don’t typically seem to evaluate the EDID information to work out DPI values automatically. I’m not an expert at this level, but I assume it would theoretically be possible to do this at the driver level, and maybe the driver even has some influence over the actual Windows DPI configuration that simply isn’t exposed to the end user of the system. Maybe some graphics drivers do something in this regard, I don’t know. It doesn’t matter all that much though, because Windows has always (largely) ignored or misinterpreted DPI for its own UI rendering. Very recently things have started to change a bit with the introduction of vector based graphics through WPF and Silverlight, but of course it’s going to take ages until that reaches the majority of applications that make up the OS, as well as 3rd party apps.

There are a few places where Windows takes DPI into account to some extent. For one thing, there’s some sort of scaling of UI elements going on, which usually results in rather crazy looking layouts. I don’t know anybody who really understands exactly what’s going on there, and I’ve never found it relevant enough to look into this and understand the details.

There’s one other thing though, which most people have observed at one point or another: the “large fonts” setting in Windows. I seem to remember that traditionally you could only choose between “normal” and “large” fonts. These days, i.e. in Windows 7, there’s a dialog that lets you select a “percentage of normal size” for your font display. So far, so good — this is of course a valuable feature to have.

If only they did it right… let’s think about it. As I said above, knowing about DPI allows me to render a piece of text so that it appears, for example, exactly 8mm high on screen. If I want that piece of text scaled up by 25%, I’ll render it to appear 1cm high on screen. Again, knowing my DPI makes this possible, and if I can read 1cm high text better, that’s great. What MS does, on the other hand, defies any logic I can come up with (having accepted long ago that there are always several different kinds): they go and change the DPI value for the display depending on the “font size” setting. They simply claim that my display has a DPI value different from the one it really has. Wow.

As a side note

They don’t even get it right! Check out these two screen shots (from Windows 7, in spite of the appearance):

[Screenshot: font100.png, the scaling dialog set to 100%]

[Screenshot: font200.png, the scaling dialog set to 200%]

Now, as you can see I’m asked to enter the “percentage of normal size”. I haven’t measured precisely, but I can believe that in the second shot, the height of the text is roughly twice that of the first shot, so the scaling of the font itself may be correct. But, tell me: how does that relate to the information that says ”… at 192 pixels per inch”?

Logically, I can see two different scenarios. Number one, I have an element that is, say, 100 pixels wide. When displaying it at a DPI of 96, it would be just over an inch wide. At a DPI of 192, it would be just over half an inch wide. The higher the DPI, the smaller the object, if it has a fixed width in pixels.

Number two, I have an element whose size is not described in device pixels, but rather in a “real world” measurement, like inches or centimeters. If I have an element that is one inch wide, then I’d have to draw it over 96 pixels at 96 DPI and over 192 pixels at 192 DPI. Right? Right.

How MS manage to claim that the size of 9pt Segoe UI grows while the “fake DPI” grows as well is beyond me.

Oh, and the “point” they are referring to? That’s a real-world measurement, the typographic point. According to its modern definition it is 1/72 of an inch, or roughly 0.3528mm. Go figure.

Moving on: so why might they choose an approach of scaling by DPI? At a glance it seems like this might be a good idea if the objective is to scale the entire UI at once. By claiming a DPI twice as large as the one my screen really has, everything that’s specified in real-world units gets rendered with twice as many pixels and therefore appears twice as large on screen. I think this would look crazy, rather like one of those magnifying glass implementations in pixel based drawing programs. I also think it doesn’t make all that much sense for a computer UI; after all, there are many elements of varying importance on screen, and while I may want some of them to be larger so I can see them better, that doesn’t apply to every separator line and similar graphical gimmick. I’m almost sure MS don’t do this either.

… and there’s more

Okay, so it appears that Windows has at least some semblance of an understanding of DPI internally, regardless of the funny ideas with font sizes and large vs small DPI values. One minor issue, comparatively speaking, is the fact that you can only configure one DPI value, while it seems obvious that there should be two, one for the horizontal and one for the vertical direction. Well, I’m calling it minor… I’m sure it can be a rather big issue when carefully designing documents for large print output on your high res display while using a compromise DPI value that’s the same for X and Y. More important is something else though: Windows enforces a particular range of DPI values. The 100% item in the dialogs shown above is the minimum value you can enter. Remember my 28 inch screens I described above? They have a DPI of 82. It’s impossible to configure Windows to this value. Why? Brilliant, just brilliant.

and some unknowns

I honestly don’t have a clue what the “Use Windows XP style DPI scaling” checkbox in those dialogs is for. I can’t really use it either; it just toggles on and off on its own.

Something for developers

If you’re a developer, there’s not too much you can do about the OS’s non-grasp of DPI information, but unfortunately you’re the one who gets criticized by customers if your display looks funny. Whatever use it is, there is at least some functionality in Windows that lets you query DPI information to the extent that the OS is aware of it. Generally this is quite simple, and it has been supported through Windows API calls forever. There’s a helpful function called GetDeviceCaps that you can call with a device context handle and a constant called either LOGPIXELSX or LOGPIXELSY (hey, finally separate values! where would they come from at this point…?) in order to retrieve DPI info. Of course there’s a lot you can do wrong when using Windows API functions. Here’s an example, with the caveat that I’m not checking for errors from the API functions like I should. Oh, and I’m abusing Size as a container for the value pair. Well, if you just copy&paste, that’s your own fault :-)

[DllImport("user32.dll", SetLastError = true)]
public static extern bool SetProcessDPIAware( );
[DllImport("gdi32.dll")]
public static extern int GetDeviceCaps(IntPtr hdc, int nIndex);
[DllImport("user32.dll", SetLastError = true)]
public static extern IntPtr GetDC(IntPtr hWnd);
[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
public static extern bool ReleaseDC(IntPtr hWnd, IntPtr hDC);

public static Size GetScreenDPI( ) {
  // no error checking here - being lazy
  var dc = GetDC(IntPtr.Zero);
  try {
    return new Size(
      GetDeviceCaps(dc, (int) DeviceCap.LOGPIXELSX),
      GetDeviceCaps(dc, (int) DeviceCap.LOGPIXELSY));
  }
  finally {
    ReleaseDC(IntPtr.Zero, dc);
  }
}

The pinvoke definitions are courtesy of pinvoke.net. Oh, and I found one thing that a lot of blog posts get “wrong”: they call CreateDC and DeleteDC instead of GetDC and ReleaseDC. My example simply retrieves the device context that already exists for the desktop, instead of creating a new one, which I believe should be rather more efficient. Some other examples I found also leave out the bit with LOGPIXELSY; not something I would do. Can’t be that lazy. Now, if you try this code, you may be a bit surprised to find that you don’t seem to get actual DPI information back. There’s a reason for that: MS think you’re going to do no good with it, and therefore they don’t give it to you unless you prove that you’re in fact clever enough. That means you need to call the function at the top of the example, SetProcessDPIAware, during your application initialization; otherwise GetDeviceCaps will simply not tell you what the real values are. Great, eh?
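For illustration, here’s a minimal sketch of how the pieces above might be wired up in a WinForms Program.Main; MainForm simply stands in for whatever your main form happens to be called:

[STAThread]
static void Main( ) {
  // Opt in to real DPI values before anything queries them or creates windows.
  SetProcessDPIAware();

  // GetDeviceCaps now reports the configured DPI instead of a default value.
  var dpi = GetScreenDPI();
  Console.WriteLine("Screen DPI: {0} x {1}", dpi.Width, dpi.Height);

  Application.EnableVisualStyles();
  Application.Run(new MainForm()); // MainForm is a placeholder for your own form
}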

Note: MSDN suggests you don’t use SetProcessDPIAware directly, but instead add an entry to your application manifest. See here.

Graphics != Graphics

Finally, I found a lot of blog posts that suggest you could retrieve DPI information from instances of the .NET Graphics class. This is true, and while I haven’t tried it now to confirm, I expect it will return the same values as the Windows API functions above. But there’s one important thing that the blog posts I’m talking about don’t usually stress: make sure you know where that Graphics instance comes from! Think about it: why would the Graphics object have DPI information if it was always the same? No, the reason it has that information is because it depends on the graphics context the object represents. If you haven’t created a Graphics instance yourself, don’t assume what its resolution is. It might represent an image with whatever resolution, or perhaps a printer object with a very high DPI. If you want to use Graphics for the purpose of retrieving DPI info, make sure you hold an instance that refers to an on-screen drawing context. And perhaps consider not using it at all — yes, it’s convenient, but it’s got a lot of overhead compared to simple API calls like the above.
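If you do want to use Graphics for this, here’s a rough sketch, assuming the screen is what you’re interested in: Graphics.FromHwnd with a zero window handle refers to the entire screen, and DpiX/DpiY then report what the OS claims (subject to the same DPI awareness caveat as above):

// A Graphics for HWND zero represents the screen. Dispose it promptly,
// since it wraps a device context under the covers.
using (var g = Graphics.FromHwnd(IntPtr.Zero)) {
  Console.WriteLine("Graphics reports {0} x {1} DPI", g.DpiX, g.DpiY);
}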

As a final note

I’m not an expert in typography, and I’m describing everything in this post as logic seems to dictate, and according to my knowledge. I’m sure I’ve got at least one thing wrong, and if you know what it is, please let me know and I’ll fix it.