While this issue has been in the background ever since Android OEMs started releasing devices with display PPIs above the 300-400 “retina” range, recent events have sparked a broader discussion of the value of the PPI race between Android OEMs. Within this discussion, the key points of contention center upon the tradeoffs that come with increasing resolution, and whether an increase in pixels per inch (PPI) is actually perceivable.

If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”. This corresponds to around 300 PPI for a display held 10-12 inches from the eye, or about 60 pixels per degree (PPD). Pixels per degree accounts for both the distance from the display and the resolution of the display, which means the figures here are not limited to smartphone displays and apply universally to any type of display.

While 60 PPD is a generally reasonable value to work with, the complexity of the human eye and of the brain’s image processing makes any single number rather nebulous. For example, human vision can determine whether two lines are aligned extremely well, with a resolution of around two arcseconds. This translates into an effective 1800 PPD; for reference, a 5” display with a 2560x1440 resolution viewed at 12 inches would only reach 123 PPD. Further muddying the waters, the theoretical ideal resolution of the eye is somewhere around 0.4 arcminutes, or 150 PPD. Finally, the minimum separable acuity, the smallest separation at which two lines can be perceived as two distinct lines, is around 0.5 arcminutes under ideal laboratory conditions, or 120 PPD.

While all these resolution values seem to contradict one another, the explanation is that the brain is responsible for interpreting the received image. Even when the difference in angular size is far below what the eye can unambiguously resolve, the brain can interpolate to accurately determine the position of the object in question. The brain is constantly processing vision in this way, illustrated most strikingly by experiments such as using a flashlight to alter the shadowing of the blood vessels that sit on top of the retina. Such occlusions, along with the various optical aberrations and other defects present in the image formed on the retina, are processed away by the brain to present a clean image as the end result.
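The conversions above between PPI, viewing distance, and PPD can be sketched as a short calculation. This is a minimal sketch in Python using the small-angle approximation (accurate at these viewing angles); the 5-inch 2560x1440 example matches the 123 PPD figure quoted in the text:

```python
import math

def ppd(ppi: float, distance_in: float) -> float:
    """Pixels per degree: how many pixels one degree of visual angle
    covers at the given viewing distance (small-angle approximation)."""
    return ppi * distance_in * math.pi / 180

# 5" diagonal, 2560x1440 panel
diag_px = math.hypot(2560, 1440)   # pixels along the diagonal
ppi_5in = diag_px / 5.0

print(round(ppi_5in))              # 587 PPI
print(round(ppd(ppi_5in, 12)))     # 123 PPD at 12 inches
```

The same function also recovers the “retina” benchmark: a ~300 PPI display at 10-12 inches lands at roughly 60 PPD.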

Snellen chart used to test eyesight. The width of the lines determines the angle subtended.

While all of these resolution values are achievable by human vision, in practice they are highly unlikely. The Snellen eye test, seen above, is the well-known chart of lines of high-contrast letters in decreasing sizes; it gives a reasonable value of around 1 arcminute (60 PPD) for adults and around 0.8 arcminutes (75 PPD) for children. It's also well worth noting that these tests are conducted under ideal conditions: high contrast and well-lit rooms.

After going through these possible resolutions, the most reasonable upper bound for human vision is the 0.5 arcminute value: while there is a clear increase in detail going from ~300 PPI to ~400 PPI in mobile displays, it is highly unlikely that any display manufacturer can mass-produce a relatively large display with a resolution corresponding to 1800 PPD at 12 inches. The 0.5 arcminute value, at a distance of 12 inches from the eye, works out to a pixel density of around 600 PPI. Of course, there would be no debate if the answer were that easy. Realistically, humans seem to have a practical resolution of around 0.8 to 1 arcminute, so while reaching 600 PPI would mean near-zero noticeable pixelation for the vast majority of edge cases, the returns diminish after passing the 1 arcminute point. For smartphones around 4.7 to 5 inches in diagonal length, this effectively frames the argument around a few reasonable display resolutions, with PPI ranging from 300 to 600.

For both OLED and LCD displays, pushing higher pixel densities incurs a cost in the form of greater power consumption for a given luminance value. Going from around 330 PPI to 470 PPI on an IPS LCD incurs around a 20% increase in display power draw, which can be offset by a more efficient SoC, a larger battery, or improved RF subsystem power draw. Such increases can also be offset by improvements in the panel technology itself, as has consistently been the case with Samsung’s OLED development, but regardless of these improvements, power draw still rises compared to an equivalent-technology display with lower pixel density.
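The ~600 PPI figure for 0.5 arcminutes at 12 inches can be checked with the same viewing geometry. A minimal sketch (small-angle approximation; the exact result rounds to 573 PPI, which the text reasonably rounds to 600):

```python
import math

def required_ppi(acuity_arcmin: float, distance_in: float) -> float:
    """PPI needed so that one pixel subtends the given visual angle
    (in arcminutes) at the given viewing distance in inches."""
    target_ppd = 60.0 / acuity_arcmin           # pixels per degree
    return target_ppd / (distance_in * math.pi / 180)

print(round(required_ppi(1.0, 12)))   # 286 PPI -- the ~300 "retina" ballpark
print(round(required_ppi(0.5, 12)))   # 573 PPI -- i.e. roughly 600
```

Note that the required PPI scales inversely with distance: holding the phone at 6 inches instead of 12 doubles the density needed for the same acuity.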
In the case of LCDs, a stronger backlight must be used because at higher pixel densities the transistors around the liquid crystal occupy a larger proportion of each pixel's area. The same is true of OLED panels, but there the issue is that smaller areas of organic phosphor have to be driven at higher voltages in order to maintain the same level of luminance. An example of this can be seen in the photo below of an LCD's transistors, front-lit to illuminate the TFTs.

Example photo of the TFTs in a display, contrasted with the luminous area

Thus, multiple tradeoffs come with increased resolution. While getting closer to the 0.5 arcminute value means getting closer to nearly imperceptible pixels, there is a loss in power efficiency, and by the same token a loss in peak luminance for a given level of power consumption, which implies reduced outdoor visibility if an OEM clamps the upper bound of display brightness to control power draw. A focus on resolution also means the increased cost of producing higher resolution displays may come at the expense of other aspects of the panel, as it's much harder to market lower reflectance, higher color accuracy, and other aspects of display performance that require a more nuanced understanding of the underlying technology. Higher resolution also means a greater processing load on the SoC: UI fluidity can suffer with an insufficient GPU, and a greater need to lean on the GPU for drawing operations can also reduce battery life in the long run.

Of course, reaching 120 PPD may be completely doable with little sacrifice in any other aspect of a device, but the closer OEMs get to that value, the less likely it is that anyone will be able to distinguish between a higher and a lower pixel density display, and diminishing returns definitely set in past the 60 PPD point. The real question is where between 60 and 120 PPD is the right place to stop. Current 1080p smartphones sit at the 90-100 PPD mark, and it seems likely that staying there could be the right compromise to make.

An example of a display with RGB stripe, 468 PPI.

But all of this assumes that the display uses an RGB stripe, as seen above. Samsung’s various subpixel layouts exist to deal with the idiosyncrasies of the organic phosphors, chiefly the uneven aging of the red, green, and blue subpixels: blue ages the fastest, followed by green and red. This is most obvious on well-used demo units, where the extensive runtime shows how dramatically the white point drops if the display runs for the equivalent of a smartphone’s service lifetime. For an RGBG pixel layout as seen below, a theoretical 2560x1440 display with a 5-inch diagonal would only give 415.4 SPPI (subpixels per inch) for the red and blue subpixels; only the green subpixels would actually reach the 587 SPPI value. While the higher number of green subpixels helps hide the lower resolution, due to the human eye’s greater sensitivity to wavelengths corresponding to green, it is still possible to notice the different subpixel pattern, and the edges of high-contrast detail are where such issues are most visible. Therefore, in order to reach the 587 SPPI mark for the red and blue subpixels, a resolution of around 3616x2034 would be needed, which works out to around 830 PPI. Clearly, at such great resolutions, achieving the necessary SPPI with RGBG pixel layouts would be effectively untenable with any SoC launching in 2014, possibly even 2015.

Example of a non-RGB layout. Note the larger area blue pixels due to their square shape.
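The subpixel arithmetic above can be sketched as follows. This is a minimal sketch assuming that in an RGBG layout the red and blue subpixels each appear at half the pixel count, so their linear density falls by a factor of √2; exact figures differ slightly from the text's 3616x2034 due to rounding:

```python
import math

diag_in = 5.0
w, h = 2560, 1440
ppi = math.hypot(w, h) / diag_in        # ~587 PPI overall

green_sppi = ppi                        # one green subpixel per pixel
red_blue_sppi = ppi / math.sqrt(2)      # half as many red/blue subpixels

print(round(green_sppi, 1))             # 587.4
print(round(red_blue_sppi, 1))          # 415.4

# Resolution needed for red/blue to reach ~587 SPPI:
# scale both axes by sqrt(2)
w2, h2 = round(w * math.sqrt(2)), round(h * math.sqrt(2))
print(w2, h2)                           # 3620 2036
print(round(math.hypot(w2, h2) / diag_in))  # 831 PPI
```

The √2 factor is the key point: restoring full single-color resolution on an RGBG panel requires doubling the pixel count, not doubling the resolution in each axis.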

While pushing PPD as far as possible makes sense for applications where power is no object, the mobile space is strongly driven by power efficiency and the need to balance it against performance, and since the display is the single largest consumer of battery in any smartphone, it is the most obvious place to look for battery life gains. While 1440p will undoubtedly make sense for certain cases, it seems hard to justify such a high resolution within the confines of a phone, and that’s before 4K displays come into the equation. While no one can really say that reaching 600 PPI is purely for the sake of marketing, going any further is almost guaranteed to be.

Source: Capability of the Human Visual System




  • Solandri - Tuesday, February 11, 2014 - link

    Laser printers go past 300 dpi because toner is black. That means each "dot" is either black or white, nothing in between. You need 600-1200 dpi to do half-toning with black and white dots to simulate a greyscale "pixel" at 300 dpi.
  • Ktracho - Tuesday, February 11, 2014 - link

    Text is much easier to read at resolutions higher than 300 DPI, even though it is monochrome. 30 years ago, when I was trying to print the final copy of my thesis, the best printers could only do 300 DPI, whereas professional books were being printed at 2400 DPI. Guess which was easier to read, and by a huge margin? There is still a noticeable difference between a text document printed at 600 DPI and one printed at 2400 DPI, even though you can't see the individual pixels in either document.
  • JoshHo - Sunday, February 9, 2014 - link

    That's definitely true, but there are limits to how close the display can get based upon the size and the minimum distance that an object can be from the eye before it can't be focused upon.
  • fokka - Sunday, February 9, 2014 - link

    so you say we should implement even higher res screens, which probably aren't cheap and also use considerably more power, just so we don't have to pinch our screens to zoom in on scarlett johansson's nudes, and can instead bring them up to 10cm from our eyes?

    doesn't seem very straight forward to me to let all the additional resolution and processing power go to waste in every standard use case, just so we can zoom in like this again.
  • gg555 - Tuesday, February 18, 2014 - link

    I was thinking more or less the same thing. I don't know why 12 inches is treated as some sort of magic number.

    I often hold my phone closer to my eyes than 12 inches, especially if I have my glasses off, for any of many reasons. I imagine I am far from the only person who does this. My phone has about 320 ppi and I can easily see the pixelation. Even looking at some of the 450 ppi phones in stores, it's not hard for me to see the pixelation. I always thought Apple's "retina" claim was so demonstrably wrong, the first time I ever saw one of the screens, that it was just stupid. Higher resolution would be nice.

    If the question is what could make a difference in practical real world use, then the 12 inch assumption seems like a bad one.
  • npz - Sunday, February 9, 2014 - link

    very high pixel density == worse image quality

    Why? Scaling. Images. Bitmap fonts (terminal fonts and Asian fonts).

    The purpose of higher resolution should be for more screen real estate. Think about it, if your eyes are good enough to take advantage of the difference in higher pixel density i.e. you can resolve details down to the pixel level, then that means you should use it for efficiency, where the extra pixels provide more utility, as opposed to wasting it by scaling!

    My eyes are very good, and I utilize the higher resolution by utilizing my ability to resolve pixels better than most people. I run 10 pixel high bitmap terminal fonts (ahh... so sharp) on a 15.4" 1080p laptop with ZERO scaling and practically none of my colleagues can make it out without sticking the screen right up to their faces. However, even my eyes are not good enough for 1:1 for high pixel density phones.

    So, unless you have hawk-like eyes, the irony is those who expect higher quality visuals end up destroying image quality, blurring pixels into a mushy mess.
  • ZeDestructor - Sunday, February 9, 2014 - link

    JDI announced a 651ppi display back in 2012: http://www.j-display.com/english/news/2012/2012060...

    From their press release images, you can see the extreme improvement in quality for the Japanese characters. Asian typefaces aren't bitmapped by choice. They are bitmapped by necessity, since current font smoothing techniques and display density aren't high enough to render vector-based characters in a readable fashion.
  • npz - Sunday, February 9, 2014 - link

    The irony is that the bitmaps produce better and *sharper* results by hand placement of the pixels. Font smoothing, or anti-aliasing in general, by definition always blurs the images. At high enough DPI you don't need font smoothing, like print.

    In fact, font smoothing never helps. I still see rainbow color fringing on these webpages from sub-pixel font rendering. It's possible to have vector fonts look as sharp as bitmap fonts at small sizes by using a sophisticated bytecode interpreter and disabling anti-aliasing. This will align curved edges to the pixel grid, rather than force a curve along its original path. Because bytecode (essentially a small program) is already complex enough for Latin fonts, Asian fonts (in particular Chinese and JP Kanji) just use bitmaps.

    Of course you end up with exactly the same problem once you render the font out at a particular resolution, then need to scale it i.e. images with Asian characters.
  • Nick2000 - Sunday, February 9, 2014 - link

    This is why we want vector fonts: you skip the blurry mess. Scaling algorithms can also be smarter with edge detection etc... This alleviates the blurriness somewhat.
  • bsim500 - Sunday, February 9, 2014 - link

    npz - "very high pixel density == worse image quality"

    Personally, I'm left wondering what source material all these people demanding ultra-high res screens are going to view on them, given most pictures they take are with a built-in camera whose overall quality (15.5 mm2 camera sensor area on the iPhone 5) is "left wanting" compared to the typical 225 mm2 "Four Thirds" or 464 mm2 35mm full-frame sensors on proper cameras. I can understand a "prosumer" wanting to view his $1,000 camera RAW files on it, but let's be honest: typical use consists of sending each other blurred / grainy / noisy (tiny CMOS -> JPEG compression) snaps from the built-in camera's tiny lens, which is a bit like buying a 4K TV only to watch cheap and nasty webcam footage on it. Even if the resolution is higher, there's more to image quality than resolution alone, as is painfully obvious every time a new iPhone comes out with a higher megapixel rating and the Moron Brigade start churning out the same "Is the Digital Camera dead?" articles every two years :-D

    In fact, "Zoom in" to a full res iPhone 5 photo and what you mainly get is CMOS noise on every mid-dark area of the photo, which can actually look much worse on sharper screens, the sky in this photo being an obvious example:-

    And don't even get me started on the "quality" of low-light iPhone photos at 100%, viewed even at 96dpi:-
