=Many-element=
=Few-element=
==Backlit flat-panel displays==
==Lighting==
 
===Nixie tubes===
[[Image:Nixie2.gif|thumb|right|]]
<!--
Nixie tubes were some of the earliest outputs of computers


We may call them LCD, but that was an early generation
They have been an inefficient solution since the transistor or so.
LCD and TFT and various other acronyms are all the same idea, with different refinements on how the pixels work exactly.


There are roughly two parts of such monitors you can care about: How the backlight works, and how the pixels work.
But they're still pretty.


[[Lightbulb_notes#Nixie_tubes]]
-->
<br style="clear:both">


But almost all of them come down to
* pixels will block light, or less so.
* put a bright lights behind those
: in practice, they are on the side, and there is some trickery to try to reflect that as uniformly as possible


===Eggcrate display===


There are a lot of acronyms pointing at variations on this:
<!--
: TN and IPS is more about the crystals (and you mostly care about that if you care about viewing angle),
An eggcrate display is a number of often-incandescent, often-smallish lightbulbs in a grid (often 5 by 7),
: TFT is more about the electronics, but the two aren't really separable,
named for the pattern of round cutouts
: and then there are a lot of experiments (with their own acronyms) that


https://en.wikipedia.org/wiki/TFT_LCD


TFT, UFB, TFD, STN
These were bright, and primarily used in gameshows, presumably because they would show up fine even in bright studio lighting.
Note that when showing $0123456789, not all bulb positions are necessary.




-->


===CCFL or LED backlight===
==Mechanical==
 
===Mechanical counter===
 
https://en.wikipedia.org/wiki/Mechanical_counter
 
 


===Split-flap===
[[Image:Split-flap diagram.png|thumb|right]]
<!--
Both refer to a global backlight.
It's only things like OLED and QLED that do without.


If you're over thirty or so, you'll have seen these at airports. There's a few remaining now, but only a few.


CCFLs, Cold-Cathode Fluorescent Lamps, are a variant of [[fluorescent lighting]] that (surprise) runs a lot colder than some other designs.
They're somehow satisfying to many, and that rustling sound is actually nice feedback on when you may want to look at the board again.


CCFL backlights tend to pulse at 100+ Hz{{verify}}, though because they necessarily use phosphors, and those can easily be made slow, it may be a ''relatively'' steady pulsing.


They are also high voltage devices.
They are entirely mechanical, and only need to be moderately precise -- well, assuming they only need ~36 or so characters.




LED backlights are often either
https://www.youtube.com/watch?v=UAQJJAQSg_g
* [[PWM]]'d at kHz speeds{{verify}},
* current-limited{{verify}}, which are both smoother.
 




-->


https://nl.wikipedia.org/wiki/CCFL
https://en.wikipedia.org/wiki/Split-flap_display
<br style="clear:both"/>


==Self-lit==


===OLED===
{{stub}}


While OLED is also a thing in lighting, OLED ''usually'' comes up in the context of OLED displays.
===Vane display===


It is mainly contrasted with backlit displays (because it is hard to get those to block all light).
OLED pixels that are off just emit no light at all, so the blacks are blacker and you can go brighter at the same time.
There are some other technical details why they tend to look a little crisper.


Viewing angles are also better, ''roughly'' because the light source is closer to the surface.
===Flip-disc===


https://en.wikipedia.org/wiki/Flip-disc_display


OLEDs are organic LEDs, which is in itself partly just a practical production detail - they are really just LEDs.
{{comment|(...though you can get fancy in the production process, e.g. pricey see-through displays are often OLED with substrate trickery{{verify}})}}


===Other flipping types===


<!--


PMOLED versus AMOLED makes no difference to the light emission,
-->
just to the way we access them (Passive Matrix, Active Matrix).
AMOLED can do somewhat lower power, higher speed, and more options along that scale{{verify}},
all of which makes them interesting for mobile uses. It also scales better to larger monitors.


POLED (and confusingly, pOLED is a trademark) uses a polymer instead of the glass,
==LED segments==
so is less likely to break but has other potential issues


===7-segment and others===
{{stub}}
[[File:Segment displays.png|thumb|right|200px|7-segment, 9-segment, 14-segment, and 16-segment displays. If meant for numbers there will be a dot next to each digit (also common in general); if meant for time there will be a colon in one position.]]


<!--


'''Confusion'''
These are really just separate lights that happen to be arranged in a useful shape.


Very typically LEDs (with a common cathode or anode), though similar ideas are sometimes implemented in other display types - notably the electromechanical one, and also sometimes VFD.


"Isn't LED screen the same as OLED?"


No.  
Even the simplest 7-segment LED involves a bunch of connections, so they are
Marketers will be happy if you confuse "we used a LED backlight instead of a CCFL" (which we've been doing for ''ages'')
* often driven multiplexed, so only one of them is on at a time.
with "one of those new hip crisp OLED thingies", while not technically lying,
* often done via a controller that handles that multiplexing for you<!--
so they may be fuzzy about what they mean with "LED display".
: which one depends on context, e.g. is it a BCD-style calculator, a microcontroller; what interface is more convenient for you
:: if you're the DIY type who bought a board, you may be looking at things like the MAX7219 or MAX7221, TM1637 or TM1638, HT16K33, 74HC595 (shift register), HT16K33 
-->


You'll know when you have an OLED monitor, because it will cost ten times as much - a thousand USD/EUR, more at TV sizes.
The cost-benefit for people without a bunch of disposable income isn't really there.


Seven segments are the minimal and classical case,
good enough to display numbers and so e.g. times, but not really for characters.


More-than-7-segment displays are preferred for that.


"I heard al phones use OLED now?"


Fancier, pricier ones do, yes.


Cheaper ones do not, because the display alone might cost on the order of a hundred bucks.{{verify}}
https://en.wikipedia.org/wiki/Seven-segment_display
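

To make that multiplexing a little more concrete, here is a minimal Arduino-style sketch that scans a 4-digit common-cathode module directly from GPIO pins. The pin numbers and wiring are assumptions for illustration only; a driver IC like the MAX7219 or TM1637 mentioned above would do this scanning for you in hardware.

<syntaxhighlight lang="cpp">
// Minimal multiplexed drive of a 4-digit common-cathode 7-segment module.
// Pin choices are arbitrary examples - adjust to your wiring, and use
// per-segment resistors / mind pin current limits in a real build.
const uint8_t SEG_PINS[8]   = {2, 3, 4, 5, 6, 7, 8, 9};   // a b c d e f g dp
const uint8_t DIGIT_PINS[4] = {10, 11, 12, 13};           // one common per digit

// Segment patterns for 0-9; bit 0 = segment a ... bit 6 = segment g
const uint8_t FONT[10] = {0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F};

uint8_t digits[4] = {1, 2, 3, 4};   // what to show

void setup() {
  for (uint8_t i = 0; i < 8; i++) pinMode(SEG_PINS[i], OUTPUT);
  for (uint8_t i = 0; i < 4; i++) {
    pinMode(DIGIT_PINS[i], OUTPUT);
    digitalWrite(DIGIT_PINS[i], HIGH);   // HIGH = digit off (common cathode, sunk by the pin)
  }
}

void loop() {
  // Light one digit at a time; done fast enough, persistence of vision blends them.
  for (uint8_t d = 0; d < 4; d++) {
    uint8_t pattern = FONT[digits[d]];
    for (uint8_t s = 0; s < 8; s++)
      digitalWrite(SEG_PINS[s], (pattern >> s) & 1 ? HIGH : LOW);
    digitalWrite(DIGIT_PINS[d], LOW);    // enable this digit
    delay(2);                            // ~2 ms per digit, ~125 Hz full refresh
    digitalWrite(DIGIT_PINS[d], HIGH);   // disable before moving to the next
  }
}
</syntaxhighlight>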


==DIY==


===LCD character displays===


Character displays are basically those with predefined (and occasionally rewritable) fonts.


-->


===QLED===
====Classical interface====
<!--
It's quantum, so it's buzzword compatible. How is it quantum? Who knows!


The more barebones interface is often a 16 pin line with a pinout like


It may surprise you that this is LCD-style, not OLED-style,
* Ground
but is brighter than most LCD style,


they're still working on details like decent contrast.
* Vcc


* Contrast
: usually there's a (trim)pot from Vcc, or a resistor if it's fixed


Quantum Dot LCD  https://en.wikipedia.org/wiki/Quantum_dot_display


* RS: Register Select (character or instruction)
: in instruction mode, it receives commands like 'clear display', 'move cursor',
: in character mode,


-->
* RW: Read/Write
: tied to ground is write, which is usually the only thing you do


==On image persistence / burn-in==
* ENable / clk (for writing)
<!--
CRTs continuously illuminating the same pixels would somewhat-literally cook their phosphors a little,
leading to fairly-literal image burn-in.


* 8 data lines, but you can do most things over 4 of them


Other displays will have similar effects, but it may not be ''literal'' burn in, so we're calling it image persistence or image retention now.


* backlight Vcc
* Backlight gnd


'''LCD and TFT''' have no ''literal'' burn-in, but the crystals may still settle into a preferred state.
: there is limited alleviation for this


'''Plasma''' still has burn-in.


'''OLED''' seems to as well, though it's subtler.
The minimal, write-only setup is:
* tie RW to ground
* connect RS, EN, D7, D6, D5, and D4 to digital outs




Liquid crystals (LCD, TFT, etc.) have a persisting-image effect because
====I2C and other====
of the behaviour of liquid crystals when held at the same state ''almost always''.
<!--
Basically the above wrapped in a controller you can address via I2C or SPI (and usually they then speak that older parallel interface)


You can roughly describe this as having a preferred state they won't easily relax out of -- but there are a few distinct causes, different sensitivity to this from different types of panels, and different potential fixes.
Sometimes these are entirely separate ones bolted onto the classical interface.


Also, last time I checked this wasn't ''thoroughly'' studied.




Unplugging power (/ turning it off) for hours (or days, or sometimes even seconds) may help, and may not.
For DIY, you may prefer these just because it's less wiring hassle.
 
-->


A screensaver with white, or strong moving colors, or noise, may help.
===Matrix displays===


There are TVs that do something like this, like jostling the entire image over time, doing a blink at startup and/or periodically, or scanning a single dot with black and white (you probably won't notice).


===(near-)monochrome===




https://en.wikipedia.org/wiki/Image_persistence
====SSD1306====


http://www.jscreenfix.com/
OLED, 128x64@4 colors{{verify}}


http://gribble.org/lcdfix/
https://cdn-shop.adafruit.com/datasheets/SSD1306.pdf


{{search|statictv screensaver}}
====SH1107====


-->
OLED,


==VFD==
https://datasheetspdf.com/pdf-file/1481276/SINOWEALTH/SH1107/1
<gallery mode="packed" style="float:right" heights="200px">
VFD.jpg|larger segments
VFD-dots.jpg|dot matrix VFD
</gallery>


[[Vacuum Fluorescent Display]]s are vacuum tubes applied in a specific way - see [[Lightbulb_notes#VFDs]] for more details.
===Small LCD/TFTs / OLEDs===
{{stub}}


<br style="clear:both"/>
Small as in order of an inch or two (because the controllers are designed for a limited resolution?{{verify}}).


=Few-element=
==Lighting==


===Nixie tubes===
{{zzz|Note that, like with monitors, marketers really don't mind if you confuse LED-backlit LCD with OLED,
[[Image:Nixie2.gif|thumb|right|]]
and some of the ebays and aliexpresses sellers of the world will happily 'accidentally'
call any small screen OLED if it means they sell more.
 
This is further made more confusing by the fact that there are
* few-color OLEDs (2 to 8 colors or so, great for high contrast but ''only'' high contrast),
* [[high color]] OLEDs (65K),
...so you sometimes need to dig into the tech specs to see the difference between high color LCD and high color OLED.
}}
 
<!--
<!--
Nixie tubes were some of the earliest outputs of computers
[[Image:OLED.jpg|thumb|300px|right|Monochrome OLED]]
[[Image:OLED.jpg|thumb|300px|right|High color OLED]]
[[Image:Not OLED.jpg|thumb|400px|right|Not OLED (clearly backlit)]]
-->


They have been an inefficient solution since the transistor or so.


But they're still pretty.
When all pixels are off they give zero light pollution (unlike most LCDs) which might be nice in the dark.
These seem to appear in smaller sizes than small LCDs, so are great as compact indicators.


[[Lightbulb_notes#Nixie_tubes]]
-->
<br style="clear:both">


'''Can it do video or not?'''


===Eggcrate display===
If it ''does'' speak e.g. MIPI it's basically just a monitor, probably capable of decent-speed updates, but also the things you ''can'' connect to will (on the scale of microcontroller to mini-PC) be moderately powerful, e.g. a raspberry.


<!--
But the list below don't connect PC video cables.
An eggcrate display is a number of often-incandescent, often-smallish lightbulbs in a grid (often 5 by 7),
named for the pattern of round cutouts


Still, they have their own controller, and can hold their pixel state one way or the other, but connect something more command-like - so you can update a moderate amount of pixels via an interface that is much less speedy or complex.


These were bright, and primarily used in gameshows, presumably because they would show up fine even in bright studio lighting.
You might get reasonable results over SPI / I2C for a lot of e.g. basic interfaces and gauges.
By the time you try to display video you have to think about your design more.
Note that when showing $0123456789, not all bulb positions are necessary.


For a large part because the amount of pixels to update, times the frame rate, has to fit through the communication channel (...and also the display's capabilities).
There is a semi-standard parallel interface that might make video-speed things feasible.
This interface is faster than the SPI/I2C option, though not always ''that'' much, depending on hardware details.
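
As a rough back-of-the-envelope sketch of that constraint - the panel size, bit depth, and frame rate below are example assumptions, not a spec:

<syntaxhighlight lang="cpp">
#include <cstdio>

// Rough bandwidth estimate for full-frame updates over SPI (example numbers).
int main() {
    const long width = 160, height = 128;   // an ST7735-sized panel
    const long bits_per_pixel = 16;         // RGB565
    const long fps = 30;

    long bits_per_frame  = width * height * bits_per_pixel;   // 327,680 bits
    long bits_per_second = bits_per_frame * fps;              // ~9.8 Mbit/s

    printf("%.1f Mbit/s needed for full-frame updates\n", bits_per_second / 1e6);
    // A 20-40 MHz SPI clock covers that (ignoring overhead); a 320x480 panel at the
    // same depth and rate needs ~74 Mbit/s, which is where partial updates or
    // parallel interfaces start to matter.
    return 0;
}
</syntaxhighlight>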


-->


==Mechanical==
Even if the specs of the screen can do it in theory, you also have to have the video ready to send.
If you're running it from an RP2040 or ESP32, don't expect to run libav/ffmpeg.


===Mechanical counter===
Say, something like the {{imagesearch|tinycircuits tinytv|TinyTV}} runs a 216x135 65Kcolor display from a [[RP2040]].


https://en.wikipedia.org/wiki/Mechanical_counter
Also note that such hardware won't be doing decoding and rescaling arbitrary video files.
They will use specifically pre-converted video.




In your choices, also consider libraries.
Things like [https://github.com/Bodmer/TFT_eSPI TFT_eSPI] have a compatibility list you will care about.
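
As a minimal sketch of what using such a library looks like - assuming TFT_eSPI has already been configured for your particular controller and pins in its User_Setup.h, which is where that compatibility list comes in:

<syntaxhighlight lang="cpp">
#include <TFT_eSPI.h>   // controller type and pins are chosen in the library's User_Setup.h

TFT_eSPI tft = TFT_eSPI();

void setup() {
  tft.init();
  tft.setRotation(1);                        // landscape
  tft.fillScreen(TFT_BLACK);
  tft.setTextColor(TFT_WHITE, TFT_BLACK);
  tft.drawString("Hello from TFT_eSPI", 10, 10, 2);   // x, y, built-in font 2
}

void loop() {}
</syntaxhighlight>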


===Split-flap===
[[Image:Split-flap diagram.png|thumb|right]]
<!--


If you're over thirty or so, you'll have seen these at airports. There's a few remaining now, but only a few.


They're somehow satisfying to many, and that rustling sound is actually nice feedback on when you may want to look at the board again.


====Interfaces====
{{stub}}


They are entirely mechanical, and only need to be moderately precise -- well, assuming they only need ~36 or so characters.
<!--
 
* 4-line SPI
* 3-line SPI ([[half duplex]], basically)
* I2C
* 6800-series parallel
* 8080-series parallel interface


https://www.youtube.com/watch?v=UAQJJAQSg_g


The last two are 8-bit parallel interfaces. ''In theory'' these can be multiples faster,
though notice that in some practice you are instead limited by the display's controller,
your own ability to speak out data that fast, and the difference may not even be twice
(and note that [[bit-banging]] that parallel may take a lot more CPU than dedicated SPI would).


-->
The numbers aren't about capability; they seem to purely reference the Intel versus Motorola origins of their specs{{verify}}.
They are apparently very similar - the main differences being the read/write and enable signals, and some timing.
: If they support both, 8080 seems preferable, in part because some only support that?{{verify}}


https://en.wikipedia.org/wiki/Split-flap_display
<br style="clear:both"/>


There are others that aren't quite ''generic'' high speed monitor interfaces yet,
but are too fast for slower hardware (e.g. CSI, MDDI)


https://forum.arduino.cc/t/is-arduino-6800-series-or-8080-series/201241/2


===Vane display===
-->


====ST7735====


===Flip-disc===
LCD, 132x162@16bits RGB


https://en.wikipedia.org/wiki/Flip-disc_display
<!--
* SPI interface (or parallel)


* 396 source line (so 132*RGB) and 162 gate line
* display data RAM of 132 x 162 x 18 bits


===Other flipping types===
* 2.7~3.3V {{verify}}


<!--


-->
Boards that expose SPI will have roughly:
: GND: power supply
: VCC: 3.3V-5.0V


==LED segments==
: SCL: SPI clock line
: SDA: SPI data line


===7-segment and others===
: RES: reset
{{stub}}
[[File:Segment displays.png|thumb|right|200px|7-segment, 9-segment, 14-segment, and 16-segment displays. If meant for numbers there will be a dot next to each digit (also common in general); if meant for time there will be a colon in one position.]]


: D/C: data/command selection
: CS: chip Selection interface


These are really just separate lights that happen to be arranged in a useful shape.
: BLK: backlight control (often can be left floating, presumably pulled up/down)


Very typically LEDs (with a common cathode or anode), though similar ideas are sometimes implemented in other display types - notably the electromechanical one, and also sometimes VFD.


Lua / NodeMCU:
* [https://nodemcu.readthedocs.io/en/release/modules/ucg/ ucg]
* [https://nodemcu.readthedocs.io/en/release/modules/u8g2/ u8g2]
* https://github.com/AoiSaya/FlashAir-SlibST7735


Even the simplest 7-segment LED involves a bunch of connections, so they are
* often driven multiplexed, so only one of them is on at a time.
* often done via a controller that handles that multiplexing for you<!--
: which one depends on context, e.g. is it a BCD-style calculator, a microcontroller; what interface is more convenient for you
:: if you're the DIY type who bought a board, you may be looking at things like the MAX7219 or MAX7221, TM1637 or TM1638, HT16K33, 74HC595 (shift register), HT16K33 
-->


Arduino libraries
* https://github.com/adafruit/Adafruit-ST7735-Library
* https://github.com/adafruit/Adafruit-GFX-Library


Seven segments are the minimal and classical case,  
These libraries may hardcode some of the pins (particularly the SPI ones),
good enough to display numbers and so e.g. times, but not really for characters.
and this will vary between libraries.
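
For example, a minimal Adafruit_ST7735 sketch might look like the following; the CS/DC/RST pin numbers are assumptions for illustration, and the SPI clock and data pins come from the board's hardware SPI.

<syntaxhighlight lang="cpp">
#include <Adafruit_GFX.h>      // graphics primitives
#include <Adafruit_ST7735.h>   // ST7735 driver
#include <SPI.h>

// Pin choices are examples - match them to your wiring.
#define TFT_CS   10
#define TFT_DC    9
#define TFT_RST   8

Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);

void setup() {
  tft.initR(INITR_BLACKTAB);    // the init variant depends on your panel's tab color
  tft.fillScreen(ST7735_BLACK);
  tft.setTextColor(ST7735_WHITE);
  tft.setCursor(0, 0);
  tft.print("Hello ST7735");
}

void loop() {}
</syntaxhighlight>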


More-than-7-segment displays are preferred for that.




'''ucg notes'''


https://en.wikipedia.org/wiki/Seven-segment_display


=DIY=
Fonts that exist:    https://github.com/marcelstoer/nodemcu-custom-build/issues/22
fonts that you have:  for k,v in pairs(ucg) do print(k,v) end


===LCD character displays===


Character displays are basically those with predefined (and occasionally rewritable) fonts.
http://blog.unixbigot.id.au/2016/09/using-st7735-lcd-screen-with-nodemcu.html


-->


====Classical interface====
====ST7789====


The more barebones interface is often a 16 pin line with a pinout like
LCD, 240x320@16bits RGB


* Ground
https://www.waveshare.com/w/upload/a/ae/ST7789_Datasheet.pdf


* Vcc
====SSD1331====


* Contrast
OLED, 96x 64, 16bits RGB
: usually there's a (trim)pot from Vcc, or a resistor if it's fixed


https://cdn-shop.adafruit.com/datasheets/SSD1331_1.2.pdf


* RS: Register Select (character or instruction)
: in instruction mode, it receives commands like 'clear display', 'move cursor',
: in character mode,


* RW: Read/Write
====SSD1309====
: tied to ground is write, which is usually the only thing you do


* ENable / clk (for writing)
OLED, 128 x 64, single color?


* 8 data lines, but you can do most things over 4 of them
https://www.hpinfotech.ro/SSD1309.pdf


====SSD1351====


* backlight Vcc
OLED, 65K color
* Backlight gnd


https://newhavendisplay.com/content/app_notes/SSD1351.pdf


 
====HX8352C====
The minimal, write-only setup is:
LCD
* tie RW to ground
* connect RS, EN, D7, D6, D5, and D4 to digital outs
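
On an Arduino, that minimal 4-bit, write-only hookup maps directly onto the standard LiquidCrystal library (for HD44780-style controllers). The pin numbers here are assumptions; RW is tied to ground and contrast goes to its trimpot as described above.

<syntaxhighlight lang="cpp">
#include <LiquidCrystal.h>

// RS, EN, D4, D5, D6, D7 - example pin numbers; RW is hard-wired to GND
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);            // columns, rows of the character display
  lcd.print("hello, world");
}

void loop() {
  lcd.setCursor(0, 1);         // second line
  lcd.print(millis() / 1000);  // seconds since reset
}
</syntaxhighlight>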
 
 
====I2C and other====
<!--
<!--
Basically the above wrapped in a controller you can address via I2C or SPI (and usually they then speak that older parallel interface)  
240(RGB)x480, 16-bit
-->
https://www.ramtex.dk/display-controller-driver/rgb/hx8352.htm


Sometimes these are entirely separate ones bolted onto the classical interface.




====HX8357C====


For DIY, you may prefer these just because it's less wiring hassle.
====R61581====


<!--
240x320
-->
-->


===Matrix displays===
====ILI9163====
LCD, 162x132@16-bit RGB


http://www.hpinfotech.ro/ILI9163.pdf


===(near-)monochrome===
====ILI9341====


<!--
240RGBx320, 16-bit
-->
https://cdn-shop.adafruit.com/datasheets/ILI9341.pdf


====SSD1306====
====ILI9486====
LCD, 480x320@16-bit RGB


OLED, 128x64@4 colors{{verify}}
https://www.hpinfotech.ro/ILI9486.pdf


https://cdn-shop.adafruit.com/datasheets/SSD1306.pdf
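
These modules are commonly driven over I2C; a minimal Adafruit_SSD1306 sketch, assuming a 128x64 module at the common 0x3C address (verify both for your board), looks roughly like:

<syntaxhighlight lang="cpp">
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

// 128x64 module on I2C, no reset pin - common defaults, not guaranteed for your board.
Adafruit_SSD1306 display(128, 64, &Wire, -1);

void setup() {
  if (!display.begin(SSD1306_SWITCHCAPVCC, 0x3C)) {
    for (;;);                      // allocation or init failed; halt
  }
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println("Hello SSD1306");
  display.display();               // push the buffer to the panel
}

void loop() {}
</syntaxhighlight>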
====ILI9488====
LCD
<!--
320(RGB) x 480
-->


====SH1107====
https://www.hpinfotech.ro/ILI9488.pdf


OLED,  
====PCF8833====
LCD, 132×132 16-bit RGB


https://datasheetspdf.com/pdf-file/1481276/SINOWEALTH/SH1107/1
https://www.olimex.com/Products/Modules/LCD/MOD-LCD6610/resources/PCF8833.pdf


===Small LCD/TFTs / OLEDs===
====SEPS225====
{{stub}}
LCD


Small as in order of an inch or two (because the controllers are designed for a limited resolution?{{verify}}).
https://vfdclock.jimdofree.com/app/download/7279155568/SEPS225.pdf




{{zzz|Note that, like with monitors, marketers really don't mind if you confuse backlit LCD with OLED,
====RM68140====
and some of the ebays and aliexpresses sellers of the world will happily 'accidentally'
LCD
call any small screen OLED if it means they sell more.
 
This is further made more confusing by the fact that there are
* few-color OLEDs (2 to 8 colors or so, great for high contrast but ''only'' high contrast),
* [[high color]] OLEDs (65K),
...so you sometimes need to dig into the tech specs to see the difference between high color LCD and high color OLED.
}}
 
<!--
<!--
[[Image:OLED.jpg|thumb|300px|right|Monochrome OLED]]
320 RGB x 480
[[Image:OLED.jpg|thumb|300px|right|High color OLED]]
[[Image:Not OLED.jpg|thumb|400px|right|Not OLED (clearly backlit)]]
-->
-->


https://www.melt.com.ru/docs/RM68140_datasheet_V0.3_20120605.pdf
====GC9A01====


When all pixels are off they give zero light pollution (unlike most LCDs) which might be nice in the dark.
LCD, 65K colors, SPI
These seem to appear in smaller sizes than small LCDs, so are great as compact indicators.


Seem to often be used on round displays{{verify}}


'''Can it do video or not?'''
https://www.buydisplay.com/download/ic/GC9A01A.pdf


If it ''does'' speak e.g. MIPI it's basically just a monitor, probably capable of decent-speed updates, but also the things you ''can'' connect to will (on the scale of microcontroller to mini-PC) be moderately powerful, e.g. a raspberry.
[[Category:Computer‏‎]]
[[Category:Hardware]]


But the list below don't connect PC video cables.
===Epaper===


Still, they have their own controller, and can hold their pixel state one way or the other, but connect something more command-like - so you can update a moderate amount of pixels with via an interface that is much less speedy or complex.
====SSD1619====


You might get reasonable results over SPI / I2C for a lot of e.g. basic interfaces and gauges.
https://cursedhardware.github.io/epd-driver-ic/SSD1619A.pdf
By the time you try to display video you have to think about your design more.


For a large part because amount of pixels to update times the rate of frames per second has to fit through the communication (...also the display's capabilities).
<!--
There is a semi-standard parallel interface that might make video-speed things feasible.
====UC8151====
This interface is faster than the SPI/I2C option, though not always ''that'' much, depending on hardware details.


https://www.orientdisplay.com/wp-content/uploads/2022/09/UC8151C.pdf


Even if the specs of the screen can do it in theory, you also have to have the video ready to send.
If you're running it from an RP2040 or ESP32, don't expect to run libav/ffmpeg.


Say, something like the {{imagesearch|tinycircuits tinytv|TinyTV}} runs a 216x135 65Kcolor display from a [[RP2040]].
-->


Also note that such hardware won't be doing decoding and rescaling arbitrary video files.
=Many-element - TV and monitor notes (and a little film)=
They will use specifically pre-converted video.
==Backlit flat-panel displays==
<!--


We may call them LCD, but that was an early generation
LCD and TFT and various other acronyms are all the same idea, with different refinements on how the pixels work exactly.


In your choices, also consider libraries.
There are roughly two parts of such monitors you can care about: How the backlight works, and how the pixels work.
Things like [https://github.com/Bodmer/TFT_eSPI TFT_eSPI] have a compatibility list you will care about.




But almost all of them come down to
* pixels will block light, or less so.
* put a bright lights behind those
: in practice, they are on the side, and there is some trickery to try to reflect that as uniformly as possible




====Interfaces====
There are a lot of acronyms pointing at variations on this:
{{stub}}
: TN and IPS is more about the crystals (and you mostly care about that if you care about viewing angle),
: TFT is more about the electronics, but the two aren't really separable,
: and then there are a lot of experiments (with their own acronyms) that


<!--
https://en.wikipedia.org/wiki/TFT_LCD
* 4-line SPI
* 3-line SPI ([[half duplex]], basically)
* I2C
* 6800-series parallel
* 8080-series parallel interface


TFT, UFB, TFD, STN


The last two are 8-bit parallel interfaces. ''In theory'' these can be multiples faster,
though notice that in some practice you are instead limited by the display's controller,
your own ability to speak out data that fast, and the difference may not even be twice
(and note that [[bit-banging]] that parallel may take a lot more CPU than dedicated SPI would).


The numbers aren't about capability; they seem to purely reference the Intel versus Motorola origins of their specs{{verify}}.
-->
They are apparently very similar - the main differences being the read/write and enable, and in some timing.
: If they support both, 8080 seems preferable, in part because some only support that?{{verify}}


===CCFL or LED backlight===


There are others that aren't quite ''generic'' high speed monitor interfaces yet,
<!--
but too fast for slower hardware (e.g. CSI, MDDI)
Both refer to a global backlight.  
It's only things like OLED and QLED that do without.


https://forum.arduino.cc/t/is-arduino-6800-series-or-8080-series/201241/2


-->
CCFLs, Cold-Cathode Fluorescent Lamps, are a variant of [[fluorescent lighting]] that (surprise) runs a lot colder than some other designs.


====ST7735====
CCFL backlights tend to pulse at 100+ Hz{{verify}}, though because they necessarily use phosphors, and those can easily made to be slow, it may be a ''relatively'' steady pulsing.


LCD, 132x162@16bits RGB
They are also high voltage devices.


<!--
* SPI interface (or parallel)


* 396 source line (so 132*RGB) and 162 gate line
LED backlights are often either
* display data RAM of 132 x 162 x 18 bits
* [[PWM]]'d at kHz speeds{{verify}},
* current-limited{{verify}}, which are both smoother.


* 2.7~3.3V {{verify}}




Boards that expose SPI will have roughly:
-->
: GND: power supply
: VCC: 3.3V-5.0V


: SCL: SPI clock line
https://nl.wikipedia.org/wiki/CCFL
: SDA: SPI data line


: RES: reset
==Self-lit==


: D/C: data/command selection
===OLED===
: CS: chip Selection interface
{{stub}}


: BLK: backlight control (often can be left floating, presumably pulled up/down)
While OLED is also a thing in lighting, OLED ''usually'' comes up in the context of OLED displays.


It is mainly contrasted with backlit displays (because it is hard to get those to block all light).
OLEDs being off just emit no light at all. So the blacks are blacker, you could go brighter at the same time,
There are some other technical details why they tend to look a little crisper.


Lua / NodeMCU:
Viewing angles are also better, ''roughly'' because the light source is closer to the surface.
* [https://nodemcu.readthedocs.io/en/release/modules/ucg/ ucg]
* [https://nodemcu.readthedocs.io/en/release/modules/u8g2/ u8g2]
* https://github.com/AoiSaya/FlashAir-SlibST7735




Arduino libraries
OLEDs are organic LEDs, which is in itself partly just a practical production detail - they are really just LEDs.
* https://github.com/adafruit/Adafruit-ST7735-Library
{{comment|(...though you can get fancy in the production process, e.g. pricey see-through displays are often OLED with substrate trickery{{verify}})}}
* https://github.com/adafruit/Adafruit-GFX-Library


These libraries may hardcode some of the pins (particularly the SPI ones),
and this will vary between libraries.




PMOLED versus AMOLED makes no difference to the light emission,
just to the way we access them (Passive Matrix, Active Matrix).
AMOLED can do somewhat lower power, higher speed, and more options along that scale{{verify}},
all of which makes them interesting for mobile uses. It also scales better to larger monitors.


'''ucg notes'''
POLED (and confusingly, pOLED is a trademark) uses a polymer instead of the glass,
 
so is less likely to break but has other potential issues


Fonts that exist:    https://github.com/marcelstoer/nodemcu-custom-build/issues/22
fonts that you have:  for k,v in pairs(ucg) do print(k,v) end


<!--


http://blog.unixbigot.id.au/2016/09/using-st7735-lcd-screen-with-nodemcu.html
'''Confusion'''


-->


====ST7789====
"Isn't LED screen the same as OLED?"


LCD, 240x320@16bits RGB
No.
Marketers will be happy if you confuse "we used a LED backlight instead of a CCFL" (which we've been doing for ''ages'')
with "one of those new hip crisp OLED thingies", while not technically lying,
so they may be fuzzy about what they mean with "LED display".


https://www.waveshare.com/w/upload/a/ae/ST7789_Datasheet.pdf
You'll know when you have an OLED monitor, because it will cost ten times as much - a thousand USD/EUR, more at TV sizes.
The cost-benefit for people without a bunch of disposable income isn't really there.


====SSD1331====


OLED, 96x 64, 16bits RGB


https://cdn-shop.adafruit.com/datasheets/SSD1331_1.2.pdf
"I heard al phones use OLED now?"


Fancier, pricier ones do, yes.


====SSD1309====
Cheaper ones do not, because the display alone might cost on the order of a hundred bucks.{{verify}}


OLED, 128 x 64, single color?


https://www.hpinfotech.ro/SSD1309.pdf


====SSD1351====


OLED, 65K color
-->
 
===QLED===
<!--
It's quantum, so it's buzzword compatible. How is it quantum? Who knows!
 


https://newhavendisplay.com/content/app_notes/SSD1351.pdf
It may surprise you that this is LCD-style, not OLED-style,
but is brighter than most LCD style,


====HX8352C====
they're still working on details like decent contrast.
LCD
<!--
240(RGB)x480, 16-bit
-->
https://www.ramtex.dk/display-controller-driver/rgb/hx8352.htm




Quantum Dot LCD  https://en.wikipedia.org/wiki/Quantum_dot_display


====HX8357C====


====R61581====
-->


==On image persistence / burn-in==
<!--
<!--
240x320
CRTs continuously illuminating the same pixels would somewhat-literally cook their phosphors a little,
-->
leading to fairly-literal image burn-in.


====ILI9163====
LCD, 162x132@16-bit RGB


http://www.hpinfotech.ro/ILI9163.pdf
Other displays will have similar effects, but it may not be ''literal'' burn in, so we're calling it image persistence or image retention now.


====ILI9341====


<!--
'''LCD and TFT''' have no ''literal'' burn-in, but the crystals may still settle into a preferred state.
240RGBx320, 16-bit
: there is limited alleviation for this
-->
https://cdn-shop.adafruit.com/datasheets/ILI9341.pdf


====ILI9486====
'''Plasma''' still has burn-in.
LCD, 480x320@16-bit RGB


https://www.hpinfotech.ro/ILI9486.pdf
'''OLED''' seems to as well, though it's subtler.


====ILI9488====
LCD
<!--
320(RGB) x 480
-->


https://www.hpinfotech.ro/ILI9488.pdf
Liquid crystals (LCD, TFT, etc.) have a persisting-image effect because
of how liquid crystals behave when held in the same state ''almost always''.


====PCF8833====
You can roughly describe this as having a preferred state they won't easily relax out of -- but there are a few distinct causes, different sensitivity to this from different types of panels, and different potential fixes.
LCD, 132×132 16-bit RGB


https://www.olimex.com/Products/Modules/LCD/MOD-LCD6610/resources/PCF8833.pdf
Also, last time I checked this wasn't ''thoroughly'' studied.


====SEPS225====
LCD


https://vfdclock.jimdofree.com/app/download/7279155568/SEPS225.pdf
Unplugging power (/ turning it off) for hours (or days, or sometimes even seconds) may help, and may not.


A screensaver with white, or strong moving colors, or noise, may help.


====RM68140====
There are TVs that do something like this, like jostling the entire image over time, doing a blink at startup and/or periodically, or scanning a single dot with black and white (you probably won't notice).
LCD
<!--
320 RGB x 480
-->


https://www.melt.com.ru/docs/RM68140_datasheet_V0.3_20120605.pdf


====GC9A01 (round)====


LCD, 65K colors, SPI
https://en.wikipedia.org/wiki/Image_persistence


https://www.buydisplay.com/download/ic/GC9A01A.pdf
http://www.jscreenfix.com/


[[Category:Computer‏‎]]
http://gribble.org/lcdfix/
[[Category:Hardware]]


=TV and monitor notes=
{{search|statictv screensaver}}


==On reproduction==
-->


====Reproduction that flashes====
==VFD==
<gallery mode="packed" style="float:right" heights="200px">
VFD.jpg|larger segments
VFD-dots.jpg|dot matrix VFD
</gallery>


{{stub}}
[[Vacuum Fluorescent Display]]s are vacuum tubes applied in a specific way - see [[Lightbulb_notes#VFDs]] for more details.


'''Mechanical film projectors''' flash individual film frames while that film is being held entirely still, before advancing that film to the next (while no light is coming out) and repeating.
<br style="clear:both"/>


{{comment|(see e.g. [https://www.youtube.com/watch?v%3dCsounOrVR7Q this] and note that the film is advanced so quickly that you don't even see it move.  Separately, if you slow playback you can also see that it flashes ''twice'' before it advances the film - we'll get to why)}}




This requires a shutter, i.e. not letting through any light a moderate part of the time (specifically while it's advancing the film).
<!--
We are counting on our eyes to sort of ignore that.
==Capabilities==


One significant design concept very relevant to this type of reproduction is the '''flicker fusion threshold'''[https://en.wikipedia.org/wiki/Flicker_fusion_threshold], the "frequency at which intermittent light stimulus appears to be steady light" to our eyes.
===Resolution===
A TFT screen has a number of pixels, and therefore a natural resolution. Lower resolutions (and sometimes higher ones) can be displayed, but are interpolated so will not be as sharp. Most people use the natural resolution.
This may also be important for gamers, who may not want to be forced to a higher resolution for crispness than their graphics card can handle in terms of speed.


Research shows that this varies somewhat with conditions, but in most conditions practical around showing people images, that's somewhere between 50Hz and 90Hz.
For:
* 17": 1280x1024 is usual (1280x768 for widescreen)
* 19": 1280x1024 (1440x900 for widescreen)
* 20": 1600x1200 (1680x1050 for widescreen)
* 21": are likely to be 1600x1200 (1920x1200 for widescreen)




Since people are sensitive to flicker, some more than others, and this can lead to eyestrain and headaches,
Note that some screens are 4:3, some 5:4, some 16:9 or 16:10 (wide screen), but often not ''exactly'' that, pixelwise; many things opt for some multiple that is easier to handle digitally.
we aim towards the high end of that range - whenever that's not hard to do.


In fact, while film is 24fps and was initially shown at 24Hz flashes, movie projectors soon introduced two-blade and then three-blade shutters, showing each image two or three times before advancing, meaning that while they still only show 24 distinct images per second, they flash it twice or three times for a ''regular'' 48Hz or 72Hz flicker, for a lot less eyestrain.
===Refresh===
Refresh rates as they existed in CRT monitors do not directly apply; there is no line scanning going on anymore.




Pixels are continuously lit, which is why TFTs don't seem to flicker like CRTs do. Still, they react only so fast to the changes in the intensity they should display at, which limits the amount of pixel changes that you will actually see per second.


An arguably even more basic constraint to moving pictures is the point at which rate of images we accept animation as '''fluid movement'''.  
Longer refresh times mean moving images are blurred and you may see ghosting of bright images. Older TFT/LCDs did something on the order of 20ms (roughly 50fps), which is not really acceptable for gaming.
: Anything under 10fps looks jerky and stilted or at least like a ''choice'' (western and eastern animation were rarely higher than 12, or 8 or 6 for the simpler ones),
: around 20fps we start readily accepting it as continuous movement,
: above 30 or 40fps it looks smooth,
: and above that it keeps on looking a little better ''yet'' also becomes quickly diminishing returns.




Early movies on film were approximately 24fps, some faster, various slower.  
However, the millisecond measure is nontrivial. The direct meaning of the number has been slaughtered primarily by number-boast-happy PR departments.
The number 24 was a chosen balance between various things, like the fact that that's enough for fluid movement and relatively few scenes need higher,
and the fact that film stock is expensive, and a standard for projection (adaptable or even multiple projectors would be too expensive for most cinemas).


More exactly, there are various things you can be measuring. It's a little like the speaker rating (watt RMS, watt 'in regular use', PMPO) in that a rating may refer to unrealistic exaggerations as well as strict and real measures.


The reason we ''still'' use 24fps it now is more varied and doesn't really have a one-sentence answer.


But part of it is that making movies go faster is only sometimes well received.  
The argument is that even when the time for a pixel to be fully off to fully on may take 20ms, not everyone is using their monitor to induce epileptic attacks - usually the pixel is done faster, going from some grey to some grey. If you play the DOOM3 dark-room-fest, you may well see the change from that dark green to that dark blue happen in 8ms (not that that's in any way easy to measure).
But a game with sharp contrasts may see slower, somewhat blurry changes.


It seems that we associate 24fps with 'feels like movies', while 50/60fps feels like shaky-cam home movies made by dad's camcorder (when those were still a thing) or sports broadcasts (which we shot that way even though it reduced detail), with their tense, immediate, real-world associations.
So higher, while technically better, was also associated with a specific aesthetic. It may work well for action movies, yet less so for others.


There is an argument that 24fps's sluggishness puts us more at ease, reminds us that it isn't real, seems associated with storytelling, a dreamlike state, memory recall.
8ms is fairly usual these days. Pricier screens will do 4ms or even 2ms, which is nicer for gaming.


Even if we can't put our finger on why, such senses persist.


<!--
===Video noise===




'''when more is and isn't better'''
===Contrast===
The difference between the weakest and strongest brightness it can display. 350:1 is somewhat minimal, 400:1 and 500:1 are fairly usual, 600:1 and 800:1 are nice and crisp.
 
===Brightness===
The amount of light emitted - basically the strength of the backlight. Not horribly interesting unless you like it to be bright in even a well lit room.


And you can argue that cinematic language evolved not only with the technical limitations, but also the limitations of how much new information you can show at all.
300 cd/m2 is fairly usual.


In some ways, 24fps feels like a subtly stylized type of video,
from the framerate alone,
and most directors will like this because most movies benefit from that.
Exceptions include some fast-paced movies - but even they benefit from feeling more distinct from ''other'' movies still doing that stylized thing.


There are details like brightness uniformity - in some monitors, the edges are noticeably darker when the screen is bright, which may be annoying. Some monitors have stranger shapes for their lighting.
Only reviews will reveal this.


At the same time, a lot of it seems like a learned association that will blur and may go away over time.
===Color reproduction===
The range of colors a monitor can reproduce is interesting for photography buffs. The curve of how each color is reproduced is also a little different for every monitor, and for some may be noticeably different from others.  


This becomes relevant when you want a two-monitor deal; it may be hard to get a CRT and a TFT the same color, as much as it may be hard to get two different TFTs from the same manufacturer consistent. If you want perfection in that respect, get two of the same - though spending a while twiddling with per-channel gamma correction will usually get decent results.


Some changes will make things ''better'' (consider that the [[3-2 pulldown]] necessary to put 24fps movies on TV in 60Hz countries made pans look ''worse'').




You can get away with even less in certain aesthetic styles - simpler cartoons may update at 8fps or 6fps, and we're conditioned enough that in that cartoon context, 12fps looks ''fancy''.  But more is generally taken to be worse. It may be objectively smoother but there are so many animation choices (comparable to cinematic language) that you basically ''have'' to throw out at high framerates - or accept that it will look ''really'' jarring when it switches between them. It ''cannot'' look like classical animation/anime, for better ''and'' worse.  And that's assuming it was ''made'' this way -- automatic resolution upscaling usually implies filters, and those are effectively a stylistic change that was not intended and that you can barely control. Automatic frame interpolation generally does ''terribly'' on anime because it was trained on photographic images instead. The more stylistic the animation style, the worse it will look interpolated, never mind that it will deal poorly with intentional animation effects like [https://en.wikipedia.org/wiki/Squash_and_stretch stretch and squash] and in particular [https://en.wikipedia.org/wiki/Smear_frame smear frames].
==Convenience==
-->


====CRT screens====
===Viewing angle===
{{stub}}
The viewing angle is a slightly magical figure. It's probably well defined in a test, but its meaning is a little elusive.


Basically it indicates at which angle the discoloration starts being noticeable. Note that the brightness is almost immediately a little off, so no TFT is brilliant for showing photos to the whole room. The viewing angle is mostly interesting for those that have occasional over-the-shoulder watchers, or rather watchers from other chairs and such.


'''Also flashing'''
The angle, either from a perpendicular line (e.g. 75°) or as a total angle (e.g. 150°).
As noted, the figure is a little magical. If it says 178° the colors will be as good as they'll be from any angle, but frankly, for lone home use, even the smallest angle you can find tends to be perfectly fine.


CRT monitors do something ''vaguely'' similar to movie projectors, in that they light up an image so-many times a second.
===Reflectivity===
While there is no formal measure for this, you may want to look at getting something that isn't reflective. If you're in an office near a window, this is probably about as important to easily seeing your screen as its brightness is.


Where with film you spend at least some of the time with the entire image being lit up continuously (and some time with the shutter coming in and out),
with a CRT there is a constantly-flowing beam of electrons lighting whatever phosphor it hits,
and that beam that is being dragged line-by-line along the screen - look at [https://youtu.be/3BJU2drrtCM?t=137 slow motion footage like this].


The phosphor will have a softish onset and retain light for some while,
It seems that many glare filters will reduce your color fidelity, though.
and while slow motion tends to exaggerate that a little (looks like a single line),  
it's still visible for much less than 1/60th of a second.


The largest reason that these pulsing phosphors don't look like harsh blinking is our persistence of vision
(you could say our eyes' framerate sucks, though that is a poor description of our eyes' actual mechanics), combined with the fact that it's relatively bright.




<!--
-->
Analog TVs were almost always 50Hz or 60Hz (depending on country).
==Some theory - on reproduction==
(separately, broadcast would often only show 25 or 30 new image frames, depending on the type of content - read up on [[interlacing]] and [[three-two pull down]]).


Most CRT ''monitors'', unlike TVs, can be told to refresh at different rates.
====Reproduction that flashes====


There's a classical 60Hz mode that was easy to support, but people often preferred 72Hz or 75Hz or 85Hz or higher modes because they reduced eyestrain.
{{stub}}


'''Mechanical film projectors''' flash individual film frames while that film is being held entirely still, before advancing that film to the next (while no light is coming out) and repeating.


And yes, after working behind one of those faster-rate monitors, moving to a 60Hz monitor would be ''really'' noticeable.
(see e.g. [https://www.youtube.com/watch?v%3dCsounOrVR7Q this] and note that it moves so quickly that you see ''that'' the film is taken it happens so quickly that you don't even see it move.  Separately, if you slow playback you can also see that it flashes ''twice'' before it advances the film - we'll get to why)
Because even when we accept it as smooth enough, it still blinked, and we still perceive it as such.


This requires a shutter, i.e. not letting through ''any'' light a moderate part of the time (specifically while it's advancing the film).
We are counting on our eyes to sort of ignore that.


'''How do pixels get sent?'''


One significant design concept very relevant to this type of reproduction is the [https://en.wikipedia.org/wiki/Flicker_fusion_threshold '''flicker fusion threshold'''], the "frequency at which intermittent light stimulus appears to be steady light" to our eyes - because separately from the actual image it's showing, having it appear smooth is, you know, nice.


Research shows that this varies somewhat with conditions, but in most conditions practical around showing people images, that's somewhere between 50Hz and 90Hz.


-->


====Flatscreens====
Since people are sensitive to flicker to varying degrees, and this can lead to eyestrain and headaches,
{{stub}}
we aim towards the high end of that range whenever that is not hard to do.


Film projectors and CRTs don't matter directly to the discussion of 'what rate do you need',
In fact, we did so even with film. While film is 24fps and was initially shown at 24Hz flashes, movie projectors soon introduced two-blade and then three-blade shutters, showing each image two or three times before advancing, meaning that while they still only show 24 distinct images per second, they flash it twice or three times for a ''regular'' 48Hz or 72Hz flicker.
because the flatscreens that most of us use do not flicker like that {{comment|(except some do, and ''intentionally so'', but we'll put that aside)}}.
No more detail, but a bunch less eyestrain.


While in film, and in CRTs, the mechanism that lights it up and the mechanism that shows the image is the same mechanism, so inseparable,
in LCD-style flatscreens (including QLED) they are separate - there is a global CCFL or LED backlight, and the pixels are light ''blockers''.
{{comment|(In QLED there are the same again, though with some new footnotes)}}




And global backlights tend to be lit fairly continuously.  
As to what is actually being show, an arguably even more basic constraint is the rate of new images that we accept as '''fluid movement'''.
There is still variation in backlights, mind, and some will still give you a little more eye strain than others.
: Anything under 10fps looks jerky and stilted
:: or at least like a ''choice''.  
:: western ''and'' eastern animations were rarely higher than 12, or 8 or 6 for the simpler/cheaper ones
: around 20fps we start readily accepting it as continuous movement,  
: above 30 or 40fps it looks smooth,
: and above that it keeps on looking a little better yet, with quickly diminishing returns


When a backlight is CCFL, its phosphors are intentionally made to decay slowly, so even if the panel is a mere 100Hz,
that CCFL would look much less blinky than e.g. CRT at 100Hz.


LED backlights are often [[PWM]]'d at kHz speeds{{verify}}, or current-limited{{verify}}, which are both smoother.
Even with a high speed camera you may still not see it flicker - see [https://youtu.be/3BJU2drrtCM?t=267 this part of the same slow motion video] {{comment|(note how the backlight appears constant even when the pixel update is crawling by)}} - until you get really fast and specific.




So the difference between, say, a 60fps and 240fps monitor isn't in the lighting, it's how fast the light-blocking pixels in front of that constant backlight change.
'''So why 24?'''
A 60fps monitor changes its pixels every 16ms (1/60 sec), a 240fps the latter every 4ms (1/240 sec).


Film's 24 was not universal at the time, and has no strong significance then or now.
It's just that when a standard was needed, the number 24 was a chosen balance between various aspects, like the fact that that's enough for fluid movement and relatively few scenes need higher, and the fact that film stock is expensive, and a standard for projection (adaptable or even multiple projectors would be too expensive for most cinemas).


A CRT at 30Hz would look very blinky and be hard on the eyes. A flatscreen at 30fps updates looks choppy but the lighting would be constant.


So in one way, it's more about the fps, but at the same time, the Hz rating usually ''is'' its physical fps.
The reason we ''still'' use 24fps ''today'' is more faceted, and doesn't really have a one-sentence answer.


<!--
But part of it is that making movies go faster is not always well received.
'''Footnotes to backlight'''


Note also that dimming a PWM'd backlight screen will effectively change the flicker a little.
It seems that we associated 24fps to feels like movies, 50/60fps feels like shaky-cam home movies made by dad's camcorder (when those were still a thing) or sports broadcasts (which we did even though it reduced detail) with their tense, immediate, real-world associations.
At high speeds this should not matter perceptibly, though.  
So higher, while technically better, was also associated with a specific aesthetic. It may work well for action movies, yet less so for others.


On regular screens the dimming is usually fixed; on laptop screens it may vary based on battery status as well as ambient light.
There is an argument that 24fps's sluggishness puts us more at ease, reminds us that it isn't real, seems associated with storytelling, a dreamlike state, memory recall.


There are another few reasons why both can flicker a little more than that suggests, but only mildly so.
Even if we can't put our finger on why, such senses persist.


<!--
'''when more is and isn't better'''


And you can argue that cinematic language evolved not only with the technical limitations, but also the limitations of how much new information you can show at all.


People who are more sensitive to eyestrain and related headaches will want to know the details of the backlight, because the slowest LED PWM will still annoy you, and you're looking for faster PWM or current-limited.
In some ways, 24fps feels like a subtly stylized type of video,
from the framerate alone,
and most directors will like this because most movies benefit from that.
Exceptions include some fast paced - but even they benefit from feeling more distinct from ''other'' movies still doing that stylized thing.


But it's often not specced very well - so whether any particular monitor is better or worse for eyestrain is essentially not specced.


At the same time, a lot of seems like a learned association that will blur and may go away over time.


-->


Some changes will make things ''better'' (consider that the [[3-2 pulldown]] necessary to put 24fps movies on TV in 60Hz countries made pans look ''worse'').


=====On updating pixels=====


<!--
You can get away with even less in certain aesthetic styles - simpler cartoons may update at 8fps or 6fps, and we're conditioned enough that in that cartoon context, 12fps looks ''fancy''.  But more is generally taken to be worse. It may be objectively smoother but there are so many animation choices (comparable to cinematic language) that you basically ''have'' to throw out at high framerates - or accept that it will look ''really'' jarring when it switches between them. It ''cannot'' look like classical animation/anime, for better ''and'' worse.  And that's assuming it was ''made'' this way -- automatic resolution upscaling usually implies filters - they things, they are effectively a stylistic change that was not intended and you can barely control. Automatic frame interpolation generally does ''terribly'' on anime because it was trained on photographic images instead. The more stylistic the animation style, the worse it will look interpolated, never mind that it will deal poorly with intentional animation effects like [https://en.wikipedia.org/wiki/Squash_and_stretch stretch and squash] and in particular [https://en.wikipedia.org/wiki/Smear_frame smear frames].
-->


To recap, '''in TVs and CRT monitors''', there is a narrow stream of electrons steered across the screen, making small bits of phosphor glow, one line at a time {{comment|(the line nature is not the only way to use a CRT, see e.g. [https://en.wikipedia.org/wiki/Vector_monitor vector monitors], and CRT oscilloscopes are also interesting, but it's a good way to do a generic display)}}
====CRT screens====
{{stub}}


So yes, TVs and CRT monitors are essentially updated one pixel at a time, which happens so fast you would need a ''very'' fast camera
to notice this - see e.g. [https://youtu.be/3BJU2drrtCM?t=52].


This means that there needs to be something that controls the left-to-right and top-to-bottom steering[https://youtu.be/l4UgZBs7ZGo?t=307] - and because you're really just bending it back and forth, there are also times at which that bending shouldn't be visible, which is solved by just not emitting electrons, called the blanking intervals. If you didn't have horizontal blanking interval between lines, you would see nearly-horizontal lines as it gets dragged back for the next line; if you didn't have vertical blanking interval between frames you would see a diagonal line while it gets dragged back to the start of the next frame.
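
To put some approximate numbers on that, the classic 640x480@60Hz VGA mode makes a decent worked example (its standard figures: a 25.175 MHz pixel clock, 800 total clocks per line of which 640 are visible, 525 total lines of which 480 are visible - the rest is blanking):

<syntaxhighlight lang="cpp">
#include <cstdio>

// Classic VGA 640x480 @ 60 Hz timing, as a worked example of where the time goes.
int main() {
    const double pixel_clock = 25.175e6;
    const int total_per_line = 800,  visible_per_line = 640;
    const int total_lines    = 525,  visible_lines    = 480;

    double line_rate  = pixel_clock / total_per_line;                   // ~31.5 kHz hsync
    double frame_rate = line_rate   / total_lines;                      // ~60 Hz vsync
    double visible_fraction = (double)(visible_per_line * visible_lines)
                            / (total_per_line * total_lines);           // ~73% of the time is pixels

    printf("line rate %.1f kHz, frame rate %.2f Hz, %.0f%% of the time spent on visible pixels\n",
           line_rate / 1e3, frame_rate, visible_fraction * 100);
    return 0;
}
</syntaxhighlight>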
'''Also flashing'''


In CRT monitors, '''hsync''' and '''vsync''' name the signals that (while not controlling it directly) help that movement happen.
CRT monitors do something ''vaguely'' similar to movie projectors, in that they light up an image so-many times a second.




CRTs were driven relatively directly from the graphics card, in the sense that the value of the pixel being beamed onto the screen is whatever is on the line ''at that very moment''.
Where with film you light up the entire thing at once {{comment|(maybe with some time with the shutter coming in and out, ignore that for now)}}.
It would be hard and expensive to do any buffering, and there would be no reason to (the phosphor's short term persistence is a buffer of sorts).
a CRT lights up one spot at a time - there is a beam constantly being dragged line by line across the screen -- look at [https://youtu.be/3BJU2drrtCM?t=137 slow motion footage like this].


So there needs to be a precisely timed stream of pixel data that is passed to the phosphors,
The phosphor will have a softish onset and retain light for some while,
and you spend most of the interval drawing pixels (all of it minus the blanking parts).
and while slow motion tends to exaggerate that a little (looks like a single line),
it's still visible for much less than 1/60th of a second.


The largest reason that these pulsing phosphors don't look like harsh blinking is that our persistence of vision
(you could say our eyes framerate sucks, though actually this is a poor name for our eyes's actual mechanics), combined with the fact that it's relatively bright.




'''How are CRT monitors different from CRT TVs?'''
<!--
Analog TVs were almost always 50Hz or 60Hz (depending on country).
(separately, broadcast would often only show 25 or 30 new image frames, depending on the type of content - read up on [[interlacing]] and [[three-two pull down]]).


In TVs, redraw speeds were basically set in stone, as were some decoding details.
It was still synchronized from the signal, but the speed was basically fixed, as that made things easier.
On color TV there were some extra details, but a good deal worked the same way.


Early game consoles/computers just generated a TV signal, so that you could use the TV you already had.


After that, CRT monitors started out as adapted CRT TVs.
Yet we were not tied to the broadcast format,
so it didn't take long at all before the speed at which things are drawn was configurable.
Most CRT ''monitors'', unlike TVs, can be told to refresh at different rates.
By the nineties it wasn't too unusual to drive a CRT monitor at 56, 60, 72, 75, perhaps 85, and sometimes 90{{verify}}, 100, or 120Hz.


There's a classical 60Hz mode that was easy to support, but people often preferred 72Hz or 75Hz or 85Hz or higher modes because they reduced eyestrain.
And yes, after working behind one of those faster-rate monitors, moving back to a 60Hz monitor would be ''really'' noticeable.
Because even when we accept it as smooth enough, it still blinked, and we still perceive it as such.


We also grew an increasing amount of resolutions that the monitor should be capable of displaying.
Or rather, resolution-refresh combinations. Detecting and dealing with that is a topic in and of itself.


'''How do pixels get sent?'''


Yet at the CRT level, they were driven much the same way -
synchronization timing to let the monitor know when and how fast to sweep the beams around,
and a stream of pixels passed through as they arrive on the wires.


So a bunch of the TV mechanism lived on into CRT monitors - and even into the flatscreen era.


That means that at, say, 60fps, roughly 16.6 milliseconds per frame,
''most'' of that 16ms is spent moving values onto the wire and onto the screen.
-->


====Flatscreens====
{{stub}}


'''How are flatscreens different from CRTs?'''


The physical means of display is completely different.


Flatscreens do not reproduce images by blinking things at us.


While in film, and in CRTs, the mechanism that lights up the screen is the same mechanism as the one that shows you the image,
in LCD-style flatscreens, the image updates and the lighting are different mechanisms.


Basically, there's one overall light behind the pixely part of the screen, and each screen pixel blocks light.
There is a constant backlight, and from the point of view of a single LCD pixel,
the crystal's blocking-or-not state will sit around until asked to change.


That global backlight tends to be lit ''fairly'' continuously.
Sure, there is variation in backlights, and some will still give you a little more eye strain than others.


CCFL backlight phosphors seem intentionally made to decay slowly,
so even if the panel is a mere 100Hz, that CCFL ''ought'' to look much less blinky than e.g. a CRT at 100Hz.


LED backlights are often [[PWM]]'d at kHz speeds{{verify}}, or current-limited{{verify}}, which are both smoother.


If you take a high speed camera, you may still not see it flicker - see [https://youtu.be/3BJU2drrtCM?t=267 this part of the same slow motion video] {{comment|(note how the backlight appears constant even when the pixel update is crawling by)}} - until you get really fast and specific.


So the difference between, say, a 60fps and a 240fps monitor isn't in the lighting, it's how fast the light-blocking pixels in front of that constant backlight change.
A 60fps monitor changes its pixels every 16ms (1/60 sec), a 240fps monitor every 4ms (1/240 sec). The light just stays on.


As such, while a CRT at 30Hz would look very blinky and be hard on the eyes,
a flatscreen at 30fps looks choppy but not like a blinky eyestrain.


And yet the sending-pixels part is still much the same.
Consider that a ''single'' frame is millions of numbers (e.g. 1920 * 1080 * 3 colors ~= 6 million).
For PCs with color this was never much under a million.
Regardless of how many colors,
actually just transferring that many individual values will take some time.


The hsync and vsync signals still exist,
though LCDs are often a little more forgiving, apparently keeping a short memory and allowing some resyncing [https://youtu.be/muuhgrige5Q?t=425]


Does a monitor have a framebuffer and then update everything at once?
It ''could'' be designed that way if there was a point, but there rarely is.
It would only make things more expensive for no reason.
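To put rough numbers on that "transferring that many values takes time" point, here is a minimal back-of-the-envelope sketch. The 1920x1080 panel, 8 bits per color channel, and the chosen refresh rates are assumptions for illustration, not a spec of any particular monitor, and blanking/encoding overhead is ignored.

<syntaxhighlight lang="python">
# Rough arithmetic: how much data one frame is, and how little time there is to move it.
# Assumes a 1920x1080 panel and 3 color channels of 8 bits each.

width, height, channels = 1920, 1080, 3
values_per_frame = width * height * channels          # ~6.2 million numbers
bytes_per_frame = values_per_frame                    # one byte per value at 8 bits/channel

for hz in (60, 144, 240):
    frame_time_ms = 1000 / hz
    raw_rate_gbit = bytes_per_frame * 8 * hz / 1e9    # ignoring blanking and encoding overhead
    print(f"{hz:3d} Hz: {frame_time_ms:5.2f} ms per frame, "
          f"~{raw_rate_gbit:4.1f} Gbit/s of raw pixel data")
</syntaxhighlight>

Even at plain 60Hz that is on the order of 3 Gbit/s of raw pixel values, which is why the "stream pixels continuously, most of the frame interval" model stuck around.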


<!--
'''Footnotes to backlight'''


Note also that dimming a PWM'd backlight screen will effectively change the flicker a little.
At high speeds this should not matter perceptibly, though.


On regular screens the dimming is usually fixed; on laptop screens it may also change based on battery status as well as ambient light.


There are another few reasons why both can flicker a little more than that suggests, but only mildly so.


People who are more sensitive to eyestrain and related headaches will want to know the details of the backlight, because the slowest LED PWM will still annoy you, and you're looking for faster PWM or current-limited.


But it's often not specced very well - so whether any particular monitor is better or worse for eyestrain is essentially not specced.
-->


=====On updating pixels=====


When the only thing we are required to do is to finish drawing one image before the next starts,
then we can spend most of that time sending pixels,
and the screen, while it has more flexibility in ''how'' exactly, can spend most of the refresh interval updating pixels.
...basically as in the CRT days.


And when we were using VGA on flatscreens, that was what we were doing.


Panels even tend to update a line at a time.


Why?
: the ability to update specific pixels would require a lot more wiring - per-line addressing is already a good amount of wires (there is a similar tradeoff in camera image sensor readout, but in the other direction)
: LCDs need to be refreshed somewhat like DRAM (LCD doesn't like being held at a constant voltage, so monitors apply the pixel voltage in alternating polarities each frame. This is not something you really need to worry about. [http://www.techmind.org/lcd/index.html#inversion It needs an ''extremely'' specific image to see]).


Scanout itself basically refers to the readout and transfer of the framebuffer on the PC side.


'''Scanout lag''' can refer to
: the per-line update
: the lag in pixel change, GtG stuff


You can find some high speed footage of a monitor updating, e.g. [https://blurbusters.com/understanding-display-scanout-lag-with-high-speed-video here],
which illustrates terms like [[GtG]] (and why they both are and are not a sales trick):
Even if a line's values are updated in well under a millisecond, the pixels may need ~5ms to settle on their new color,
and seen at high speeds, this looks like a range of the screen being a blur between the old and new image.
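As a rough sketch of what that line-at-a-time pacing means in time terms: the 1080 visible lines, 60Hz refresh, and the ~8% share of the interval spent in blanking below are assumptions for illustration, not timings of any particular mode.

<syntaxhighlight lang="python">
# Rough per-line scanout timing, assuming 1080 visible lines at 60 Hz
# and assuming roughly 8% of each frame interval goes to blanking.

refresh_hz = 60
visible_lines = 1080
blanking_fraction = 0.08                      # assumption; real timings vary per mode

frame_time_us = 1_000_000 / refresh_hz        # ~16667 us per frame
active_time_us = frame_time_us * (1 - blanking_fraction)
line_time_us = active_time_us / visible_lines

print(f"frame interval : {frame_time_us:8.1f} us")
print(f"time per line  : {line_time_us:8.2f} us")        # on the order of ~14 us
print(f"top-to-bottom  : {active_time_us/1000:6.1f} ms")  # i.e. most of the 16.7 ms
</syntaxhighlight>

So a line gets its new values in ~14 microseconds, while the sweep from top to bottom takes most of the frame interval - which matches the sort of top-to-bottom times you can measure with a phototransistor.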


<!--


'''Would it be technically possible to update all this-many million pixels at the same time? '''


In theory yes, but even if that didn't imply an insane amount of wires (it does), it may not be worth it.


Transferring millions of numbers takes time, meaning that to update everything at once you need to wait until you've stored all of it,
and you've gained nothing in terms of speed.
You're arguably losing speed, because instead of updating lines as the data comes in, you're choosing to wait until you have it all.


You would also need communication that can go so much faster that it moves a frame multiple times faster, and then sits idle most of the time.
That too is possible - but it would only drive up cost.


The per-line tradeoff described above makes much more sense for multiple reasons.
Do things ''more or less'' live,
but do it roughly one line at a time - we can do it ''sooner'',
it requires less storage,
and we can do it over dozens to hundreds of lines to the actual panel.


'''If you can address lines at a time, could you do partial updates this way?'''


Yes. And that exists.
But it turns out there are few cases where this tradeoff is actually useful.


And now you also need some way to keep track of which parts (not) to update,
which tends to mean either a very simple UI, or an extra framebuffer.


e.g. Memory LCD is somewhat like e-paper, but it's 1-bit, and mostly gets used in status displays{{verify}}.
Around games and TVs it is rare that less than the whole screen changes.
Some browser/office things are mostly static - except when you scroll.


'''Does that mean it's updating pixels while you're watching rather than all at once?'''


Yes.


Due to the framerate alone it's still fast enough that you wouldn't notice.
As that slow mo video linked above points out, you don't even notice that your phone screen is probably updating sideways.


Note that while in VGA, the pixel update is fixed by the PC,
in the digital era we are a little less tied.


In theory, we could send the new image ''much'' faster than the speed at which the monitor can update itself,
but in practice, you don't really gain anything by doing so.


'''Does that mean it takes time from the first pixel change to the last pixel change within a frame? Like, over multiple microseconds?'''


Yes. Except it's over multiple ''milliseconds''.


Just how many milliseconds varies with the exact way the panel works, but AFAICT not a lot.


I just used some phototransistors to measure that one of my (HDMI) monitors at 60Hz takes approximately 14ms to get from the top to the bottom.


''' 'What to draw' influenced this development too'''


When in particular the earliest gaming consoles were made, RAM was expensive.
A graphics chip (and sometimes just the CPU) could draw something as RAM-light as sprites, if it was aware of where resolution-wise it was drawing right now.
It has to be said there were some ''very'' clever ways those were abused over time, but at the same time, this was also why the earliest gaming had limits like 'how many sprites could be on screen at once and maybe not on the same horizontal lines without things going weird'.


In particular PCs soon switched to framebuffers, meaning "a screenful of pixels on the PC side that you draw into", and the graphics card got dedicated hardware for sending that onto a wire (a [https://en.wikipedia.org/wiki/RAMDAC RAMDAC], basically short for 'hardware that you point at a framebuffer and that spits out voltages, one for each pixel color at a time'). This meant we could draw anything, meant we had more time to do the actual drawing, and made higher resolutions a little more practical (if initially still limited by RAM cost).  In fact, VGA as a connector carries very little more than hsync, vsync, and "the current pixel's r,g,b", mostly just handled by the RAMDAC.


On the monitor side, there would be little point to keeping a copy of an entire screen, since we're reading from a framebuffer just as before.
It would in fact be simpler and more immediate to just tell the LCD controller where that framebuffer is, and what format it is in.


Note that while this is a different mechanism, it has almost no visible effect on tearing.
(on-screen displays seem like they draw on top of a framebuffer, particularly when they are transparent, but this can be done on the fly)


https://www.nxp.com/docs/en/application-note/AN3606.pdf


'''Are flatscreen TVs any different?'''


Depends.


Frequently yes, because a lot are sold with some extra Fancy Processing™, like interpolation in resolution and/or time,
which by the nature of these features ''must'' have some amount of framebuffer.
This means the TV could be roughly seen as a PC with a capture card: it keeps image data around, then does some processing before it
gets sent to the display (which happens to physically be in the same bit of plastic).


And that tends to add a few to a few dozen milliseconds.
Which doesn't matter to roughly ''anything'' in TV broadcast. There was a time at which different houses would cheer at different times for the same sports goal, but ''both'' were much further away from the actual event than from each other. It just doesn't matter.


So if they have a "gaming mode", that usually means "disable that processing".


'''Are there other ways of updating?'''


Yes. Phones have been both gearhead country and battery-motivated territory for a while,
so have also started going 120Hz or whatnot.


At the same time, they may figure out that you're staring at static text,
so they may refresh the framebuffer and/or screen ''much'' less often.


'''Is OLED different?'''


No and yes.


And it seems it's not so much different ''because'' it's OLED specifically,
but because these new things correlated with a time of new design goals and new experiments{{verify}}


https://www.youtube.com/watch?v=n2Qj40zuZQ4


======Screen tearing and vsync======


'''When to draw'''


So, the graphics card has the job of sending 'start new frame' and the stream of pixels.


You ''could'' just draw into the framebuffer whenever,
but if that bears no relation to when the graphics card sends new frames,
it would happen ''very easily'' that you are drawing into the framebuffer while the video card is sending it to the screen,
and it would show a half-drawn image.


While in theory a program could learn the times at which ''not'' to draw, it turns out that's relatively little of all the time you have.


One of the simplest ways around that is '''double buffering''':
: have one image that the graphics card is currently showing,
: have one hidden next one that you are drawing to
: tell the graphics card to switch to the other when you are done


This also means the program doesn't really need to learn this hardware schedule at all:
whenever you are done drawing, you tell the graphics card to flip to the other one.


How do we keep it in step? Varies, actually, due to some other practical details.


Roughly speaking,
: vsync on means "wait to flip the buffers until you are between frames"
: vsync off means "show new content as soon as possible, I don't care about tearing"


In the latter case, you will often see part of two different images. They will now always be ''completely drawn'' images,
and they will usually resemble each other,
but on fast moving things you will just about see the fact that there was a cut point for a split second.


If the timing of drawing and monitor draw is unrelated, this will be at an unpredictable position.
This is arguably a feature, because if this happens regularly (and it would), it happens at different positions all the time,
which is less visible than if they are very near each other (it would seem to slowly move).


Gamers may see vsync as a framerate cap, but that's mainly just because it would be entirely pointless to render more frames than you can show
(unless it's winter and you are using your GPU as a heater).
-->
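As a toy illustration of the mid-scanout issue mentioned under "On updating pixels": the sketch below pretends a screen is eight lines, and compares changing the image while it is being scanned out with deferring the change to between frames (which is what a vsync-style flip amounts to). All names and numbers here are made up for illustration.

<syntaxhighlight lang="python">
# Toy model of scanout vs drawing, to show why unsynchronized updates can
# show a torn frame, and why deferring the flip to between frames avoids it.

LINES = 8                               # pretend the screen has 8 lines

def render(frame_no):
    # a "frame" is just the same value on every line
    return [frame_no] * LINES

def scanout_with_midframe_update():
    # the program overwrites the buffer while the monitor is reading it
    buffer = render(1)
    shown = []
    for line in range(LINES):
        if line == LINES // 2:          # new frame becomes ready mid-scanout
            buffer = render(2)
        shown.append(buffer[line])
    return shown                        # top half frame 1, bottom half frame 2: a tear

def scanout_with_deferred_flip():
    # drawing goes into a back buffer; the flip waits until this scanout is done
    front, back = render(1), render(2)
    shown = [front[line] for line in range(LINES)]  # this scanout only ever sees frame 1
    front, back = back, front                       # flip between frames
    return shown

print(scanout_with_midframe_update())   # [1, 1, 1, 1, 2, 2, 2, 2]
print(scanout_with_deferred_flip())     # [1, 1, 1, 1, 1, 1, 1, 1]
</syntaxhighlight>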


=====On pixel response time and blur=====
<!--
So, each pixel cannot change its state instantly. It's on the order of a few milliseconds.


There are various metrics, like
* GtG (Gray To Gray)
* BwB (black-white-black)
* WbW (white-black-white)


None of these are standardized, vendors measure them differently, and you can assume they measured as optimistically as possible for their specs page, so these are not very comparable.


This also implies that a given framerate will look slightly different on different monitors - because pixels having to change more than they can looks like blur.
: this matters more to high contrast (argument for BwB, WbW)
: but most of most images isn't high contrast (argument for GtG)


One way to work around this, on a technical level, is higher framerate.
''Display'' framerate, that is{{verify}}.


On paper this works, but it is much less studied, and less clear to which degree this is just the current sales trick versus to what degree it actually looks better.


In fact, GtG seems by nature a sales trick. Yes, there's a point to it because it's a curve, but at the same time it's a figure that is easily half of the other two, that you can slap on your box.


It's also mentioned because it's one possible factor in tracking.
-->
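A toy way to see why slow pixels turn into blur at high refresh rates: model a pixel transition as an exponential approach to its target value and see how far it gets within one frame interval. The 4ms time constant below is an assumption for illustration, not a measured figure for any panel.

<syntaxhighlight lang="python">
# Toy model: pixel transition as an exponential approach to the target value.
# tau_ms is an assumed response time constant, not a spec.
import math

tau_ms = 4.0

def progress(t_ms):
    # fraction of the transition completed after t milliseconds
    return 1 - math.exp(-t_ms / tau_ms)

for hz in (60, 144, 240):
    frame_ms = 1000 / hz
    print(f"{hz:3d} Hz: frame is {frame_ms:5.2f} ms, "
          f"pixel gets ~{progress(frame_ms)*100:5.1f}% of the way to its new value")
</syntaxhighlight>

With those assumed numbers the pixel essentially finishes its change within a 60Hz frame, but at 240Hz it is only partway there when the next value arrives - which is exactly the "range of the screen is a blur between the old and new image" effect.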


====What Vsync adds====
<!--
As the above notes, we push pixels to the framebuffer as fast as we can, and a monitor just shows what it gets (at what is almost always a fixed framerate = fixed schedule).


''Roughly'' speaking, every 16ms, the video card signals it's sending a new frame, and sends whatever data is in its framebuffer as it sends it.
And it has no inherent reason to care whatever else is happening.


So if you changed that framebuffer while it was being sent out, then from the perspective of entire frames, it didn't send full, individual ones.
: It is possible that it sent part of the previous and part of a next frame.
: It is possible that it sent something that looks wrong because it wasn't fully done drawing (this further depends a little on ''how'' you are drawing)


It's the lowest-latency way to get things on screen, but it can look incorrect.


There are two common ingredients to the solution:

* vsync in the sense of waiting until we are between frames
:: The above describes the situation without vsync, or with vsync off. This is the most immediate, "whenever you want" / "as fast as possible" way.
:: '''With vsync on''', the change of the image to send from is closely tied to when we tell/know the monitor is between frames.
:: It usually amounts to the software ''waiting'' until the graphics card is between frames. That doesn't necessarily solve all the issues, though. In theory you could do all your drawing in that in-between time, but it's not a lot of time.

* double buffering
:: means we have two framebuffers: one that is currently being sent to the screen, and one we are currently drawing to
:: and at some time we tell the GPU to switch between the two


Double buffering with vsync off means ''switch as soon as we're done drawing''.
It solves the incomplete-drawing problem, and is fast in the sense that it otherwise doesn't wait to get things on screen.
But we still allow that switch to be in the middle of the monitor pixels being sent out, so it can switch images in the middle of a frame.


If those software flips are at a time unrelated to hardware, and/or the images were very similar anyway (slow moving game, non-moving office document),
then this discontinuity is ''usually'' barely visible, because either the frames tend to look very similar,
or it happens at random positions on the screen, or both.


It is only when this split-of-sorts happens in a similar place (fast movement, regular and related update schedule, which can come from rendering a lot faster than the monitor) that it is consistent enough to notice, and we call it '''screen tearing''', because it looks something like a torn photo taped back together without care to align it. Screen tearing is mostly visible with video and with games, and mostly with horizontal motion (because of the way most of the update is by line).


It's still absolutely there in more boring PC use, but it's just less visible because what you are showing is a lot more static.
Also the direction helps - I put one of my monitors vertical, and the effect is visible when I scroll webpages, because this works out as a broadly visible skew, more noticeable than taller/shorter letters that I'm not reading at that moment anyway.


So the most proper choice is vsync on ''and'' '''double buffering'''. With this combination:
* we have two framebuffers that we flip between
* we flip them when the video card is between frames


'''Vsync is also rate limiting. And in an integer-division way.'''


Another benefit to vsync on is that we don't try to render faster than necessary.
60Hz screen? We will be sending out 60 images per second.


...well, ''up to'' 60. Because we wait until a new frame, that means a very regular schedule that we have no immediate control over.


At 60Hz, you need to finish a new frame within 16.67ms (1 second / 60).
Did you take 16.8ms? Then you missed it, and the previous frame will stay on screen for another 16.67ms.


For example, at 60Hz:
: on time means a new frame every 16.6ms (60fps)
: 1 frame late means it has to come 33.3ms after the start instead, which you could say is momentarily 30fps (60/2)
: 2 frames late is 50ms later, for 20fps (60/3)
: 3 frames late is 66ms later, for 15fps (60/4)
: ...and so on, but we're already in "looks terrible" territory.


In other words, vsync means updates only happen at multiples of that basic time interval,
meaning immediate framerates that are integer divisions of the native rate.


So assuming things are a little late, do we get locked into 30fps instead?


Well, no. Or rather, that depends on whether you are CPU-bound or GPU-bound.
If the GPU is a little too slow to get each frame in on time, then chances are it's closer to 30fps.
If the GPU is fast enough but the CPU is the one running late, it's less clear-cut.


The difference exists in part because the GPU not having enough time is a ''relatively'' fixed problem,
because it mostly just has one job (of relatively-slowly varying complexity)
and drawing the next frame is often timed since the last,
while the CPU being late is often more variable (and even if it weren't, it's on its own rate).


And even that varies with the way the API works.
: OpenGL and D3D11 - the GPU won't start rendering until vsync releases a buffer to render ''into''  {{verify}}
: Vulkan and D3D12 - such backpressure has to be done explicitly and differently {{verify}}


(side note: A game's FPS counter indicating only the interval of the last frame would update too quickly to see, so FPS counters tend to show a recent average)


To games, vsync is sort of bad, in that if you ''structurally'' take more than 16.8ms aiming for 60Hz,
you will be running at a solid 30fps, not something in between 30 and 60.
When you ''could'' be rendering at 50 and showing most of that information,
there are games where that immediacy is worth it regardless of the tearing.


For gamers, vsync is only worth it if you can keep the render solidly at the monitor rate.


Notes:
: vsync is not the same mechanism as double buffering, but it is certainly related
-->
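A quick sketch of how waiting for the next refresh quantizes the displayed rate: for an assumed 60Hz refresh, a frame that misses its deadline stays on screen for a whole extra interval, so the momentary rate snaps to 60, 30, 20, 15, ... fps. The render times below are made-up examples.

<syntaxhighlight lang="python">
# With vsync on, a frame that misses its deadline waits for the next refresh,
# so the on-screen rate snaps to refresh/1, refresh/2, refresh/3, ...

refresh_hz = 60
interval_ms = 1000 / refresh_hz

for render_ms in (15.0, 16.8, 25.0, 34.0):
    # number of whole refresh intervals this frame occupies before the flip can happen
    intervals_waited = -(-render_ms // interval_ms)      # ceiling division
    shown_fps = refresh_hz / intervals_waited
    print(f"render {render_ms:5.1f} ms -> frame stays up {intervals_waited:.0f} interval(s), "
          f"~{shown_fps:4.1f} fps while this keeps happening")
</syntaxhighlight>

So taking 16.8ms instead of 16.7ms is the difference between 60fps and 30fps, which is the "integer division" point above.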


====Vsync====


<!--
Intuitively, vsync tells the GPU to wait until refresh.


This is slightly imprecise, in a way that matters later.
What it's really doing is telling the GPU to wait to flip buffers until the next refresh.


In theory, we could still be rendering multiples more frames,
and only putting out some to the monitor.
But there is typically no good reason to do so,
so enabling vsync usually comes with reducing the render rate to the refresh rate.


Similarly, disabling vsync does not ''necessarily'' uncap your framerate, because developers may
have other reasons to throttle frames.


'''Vsync when the frames come too fast''' is a fairly easy decision,
because we only tell the GPU to go no faster than the screen's refresh rate.
The idea being that there's no point in trying to have things to draw faster than we can draw them.
It just means more heat drawing things we mostly don't see.


In this case, with vsync off you may see things ''slightly'' earlier,
but it can be somewhat visually distracting, and the difference in timing may not be enough to matter.
Unless you're doing competition, you may value aesthetics more than reaction.


'''Vsync when the frames come in slower than the monitor refresh rate''' is another matter.


Vsync still means the same thing -- "GPU, wait to display an image at monitor refresh time".
But note that slower-than-refresh rendering means we are typically too late for a refresh,
so this will make it wait even longer.


Say, if we have a 60Hz screen (16.7ms interval) and 50fps rendering (20ms interval),
* if you start a new frame render after the previous one got displayed, you will miss every second frame, and be displaying at 30fps.
* if you render independently, then we will only occasionally be late, and leave a frame on screen for two intervals rather than one


You can work this out on paper, but it comes down to that it will run at an integer divisor of the framerate.


Note that with vsync on, whenever you are late for a frame at the monitor's framerate, you really just leave the image on an extra interval. Or two.
This is why it divides the framerate by integer multiples. If on a 60Hz monitor it stays on for one frame it's 60fps, for two frames it's 30fps, for three frames it's 20fps, for four frames it's 15fps, etc.


So if rendering dips a little below 60fps, it's now displaying at 30fps (and usually only that. Yes, the next divisors are 20, 15, 12, 10, but below 30fps people will whine ''regardless'' of vsync. Also, the difference is lower). This is one decent reason gamers dislike vsync, and one decent reason to aim for 60fps and more - even if you don't get it, your life will be predictably okay.


And depending on how much lower, the sluggishness may be more visible than a little screen tearing.
This also seems like an unnecessary slowdown.
Say a game is aiming at 60fps and usually ''almost'' managing. You might prefer vsync off.


When with vsync on that FPS counter says something like 41fps, it really means "the average over the last bunch of frames was a mix of updates happening at 16ms/60fps intervals, but more happening at 32ms/30fps intervals" (and maybe occasionally lower).


'''Could it get any more complex?'''


Oh, absolutely!


'''I didn-'''


...you see, the GPU is great at independently drawing complex things from relatively simple descriptions, but something still needs to tell it to do that.
And a game will have a bunch of, well, game logic, and often physics that needs to be calculated before we're down to purely ''visual'' calculations.


Point is that with that involved, you can now also be CPU-limited. Maybe the GPU can ''draw'' what it gets at 200fps but the CPU can only figure out new things to draw at around 40fps. Which means a mix of 60 and 30fps as just mentioned. And your GPU won't get very warm because it's idling most of the time.
Games try to avoid this, but [some people take that as a challenge].


'''Adaptive VSync'''


If the fps is higher than the refresh, VSync is enabled;
if the fps is lower than the monitor refresh, it's disabled.
This avoids the integer-division drop when you can't keep up, while still avoiding tearing when you can.


'''NVidia Fast Sync''' / '''AMD Enhanced Sync'''


Adaptive VSync that adds triple buffering to pick from.
Useful when you have more than enough GPU power.


'''Related things'''


Pointing a camera at a screen gives the same issue, because that camera will often be running at a different rate.
Actually, the closer it is, the slower the tearing will seem to move - which is actually more visible.


This was more visible on CRTs because the falloff of the phosphor brightness (a bunch of it in 1-2ms) is more visible the shorter the camera's exposure time was. There are new reasons that make this visible in other situations, like [[rolling shutter]].


Screen capture, relevant in these game-streaming days, may be affected by vsync in the same way.
It's basically a ''third'' party looking at the same frame data, so another thing to ''independently'' have such tearing and/or slowdown issues.


TODO: read [https://hardforum.com/threads/how-vsync-works-and-why-people-loathe-it.928593/]
-->
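The 60Hz-screen-with-50fps-rendering case can be simulated in a few lines. This is a simplified sketch of the two strategies mentioned above (start the next render only once the previous frame was displayed, versus render on your own schedule); the numbers are the example's, nothing measured, and real drivers add more nuance.

<syntaxhighlight lang="python">
# Sketch: 60 Hz refresh, 20 ms (50 fps) render time, over one simulated second.
refresh_ms, render_ms, sim_ms = 1000 / 60, 20.0, 1000.0

def displayed_frames(wait_for_display):
    t_render_done, next_refresh, shown = render_ms, refresh_ms, 0
    while next_refresh <= sim_ms:
        if t_render_done <= next_refresh:         # a new frame was ready in time
            shown += 1
            start = next_refresh if wait_for_display else t_render_done
            t_render_done = start + render_ms     # begin the next render
        next_refresh += refresh_ms                # otherwise the old frame stays up
    return shown

print("wait for display :", displayed_frames(True), "new frames shown per second")   # ~30
print("render freely    :", displayed_frames(False), "new frames shown per second")  # ~50
</syntaxhighlight>

Which is the on-paper version of "miss every second refresh and sit at 30fps" versus "only occasionally leave a frame up for two intervals".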


====Adaptive sync====
{{stub}}
<!--
For context, most monitors keep a regular schedule (and set it themselves{{verify}}).
Nvidia G-Sync and AMD FreeSync treat frame lengths as varying.


Basically, the GPU sending a new frame will make the monitor start updating it as it comes in,
rather than on the next planned interval.
It seems to mean that the PC can instruct the monitor to start a new frame whenever it wants - the monitor's refresh rate is now effectively the ''maximum'', and it can show frames as soon as your GPU has them.


This in particular avoids the 'unnecessary further drop when frames come in slower than the monitor refresh rate'.


Note that doing this requires both a GPU and monitor that support this.
Which so far, for monitors, is a gamer niche.


And the two mentioned buzzwords are not compatible with each other,
but for some reason, most articles don't get much further than
"yeah G-Sync is fancier and more expensive and FreeSync is cheaper and good enough (unless you fall under 30fps, which you want to avoid anyway)" and tend to completely forego ''why'' (possibly related to the fact that the explanation behind stuttering is often a confused ''mess'').


G-Sync and FreeSync are neat, ''but'' the marketing and the typical complete lack of explanation is suspect.


Manufacturer marketing as well as reviewers seem very willing to imply
: vague "increases input lag dramatically", said in a way that is often just ''wrong'', or
: "do you want to solve tearing ''without'' the framerate cap that vsync implies" - and it's a pity that's pointing out the ''wrong'' issue about vsync,
: "G-Sync has better quality than FreeSync", whatever that means


A more honest way of putting it might be
"when frames come in slower than the monitor's maximum,
we go to the actual render rate, without tearing"


...rather than the integer-division ''below'' the actual render rate (if you have vsync on)
or sort of the actual render rate but with tearing (with vsync off).


There's absolutely value to that, but dramatic? Not very.


FreeSync uses DP(1.2a?)'s existing Adaptive-Sync instead of something proprietary like G-Sync.
It will also work over HDMI when it supports VRR, which was officially added in HDMI 2.1 (2017), though there were supporting devices before then.


It means that any monitor could choose to support it with the same components, which means both that FreeSync monitors tend to be cheaper than G-Sync,
but also that there is more variation in what a given FreeSync implementation actually gives you.
Say, these monitors may have a relatively narrow range in which they are adaptive
: below that minimum, ''both'' will have trouble (but apparently FreeSync is more visible?)


G-Sync apparently won't go below 30Hz, which seems to mean your typical vsync-style integer reduction


-->
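A small sketch of what the variable schedule changes: with a fixed 60Hz schedule a finished frame can only appear at the next 16.7ms boundary, while with adaptive sync it can be scanned out as soon as it is ready. The render times below are made-up examples, and the fixed-schedule model here is the simple render-then-wait-for-refresh case.

<syntaxhighlight lang="python">
# Compare when frames appear on a fixed 60 Hz schedule vs an adaptive one.
import math

refresh_ms = 1000 / 60
render_times_ms = [14, 18, 21, 17, 25, 16]    # assumed per-frame render times

t_fixed = t_adaptive = 0.0
for i, r in enumerate(render_times_ms, 1):
    # fixed schedule: wait for the next whole refresh boundary after the render is done
    t_fixed = math.ceil((t_fixed + r) / refresh_ms) * refresh_ms
    # adaptive sync: display as soon as the frame is ready
    t_adaptive += r
    print(f"frame {i}: fixed schedule shows it at {t_fixed:6.1f} ms, adaptive at {t_adaptive:6.1f} ms")
</syntaxhighlight>

The adaptive column simply tracks the render times, while the fixed column keeps rounding up to the next refresh - which is where the extra judder and the integer-division drops come from.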


===On perceiving===


====The framerate of our eyes====
<!--


This is actually one of the hardest things to start with, because biology makes this rather interesting.


It is hard to quantify, or even summarize at all - there's a ''lot'' of research out there.


The human flicker fusion threshold, often quoted as 'around 40Hz', actually varies with a bunch of things - the average brightness, the amount of difference in brightness, the place on your retina, fatigue, the color/frequency, the chemistry of your retina, and more.

It varies even with what you're trying to see, because it also differs between rods (brightness sensitivity, and more resolution), where it seems to ''start'' dropping above ~15Hz / ~60ms, and cones (color sensitivity, lower resolution), where it drops above ~60Hz / ~17ms. Which is roughly why peripheral vision reacts faster but less precisely.


Also, while each cone might be limited to something on the order of 75fps{{verify}} (which seems to be an overall biochemical limit),
the fact that we have an absolute ''ton'' of them amounts to more, but it's hard to say how much exactly.


Take all of those together, and you can estimate it with a curve that does a gradual drop somewhere, at a place and speed that varies with context.


For example, staring at a monitor:
: it is primarily the center of vision that applies (slower than peripheral)
: it's fairly bright, so you're on the faster end.


Can you perceive things that are shorter in time?

When you flash a momentary image at a person, they can tell you something about that image once it's shown for longer than 15ms or so[citation needed]. Not everyone sees that, but some do. In itself, that seems to argue that over ~70fps has little to no added benefit (to parsing frame-unique information).


In theory you can go much further, though not necessarily in ways that matter.

Like camera sensors, our eyes effectively collect energy. Unlike cameras, we do not have shutters, so all energy gets counted, no matter when it came in.
As such, an extremely bright nanosecond flash, a moderate 1ms flash, and a dim 30ms flash are perceived as more or less the same.[citation needed]

While interesting, this doesn't tell us much. You ''can't'' take this and say our eyes are 1Gfps or 1000fps or 30fps.
The only thing it really suggests is that the integration time is fairly slow - variable, actually, which is why light level is a factor.


So generally it seems that it's quickly diminishing returns between 30 and 60fps. It's not that we can't go higher, it's that the increase in everything you need to produce it is worth less and less.


-->
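One way to picture the 'eyes collect energy, cameras have shutters' point in the notes above: if perception roughly integrates light over a window of a few tens of milliseconds, very different flashes can deliver the same total energy. A toy illustration (the numbers are my own, chosen purely to make the products match):

<pre>
# Perceived response is roughly the light integrated over a (variable) window.
# Intensities are arbitrary units; the point is only that the products match.
flashes = [
    ("very bright, 0.000001 ms", 3.0e7, 0.000001),
    ("moderate,    1 ms",        3.0e1, 1.0),
    ("dim,         30 ms",       1.0e0, 30.0),
]
for name, intensity, duration_ms in flashes:
    print(f"{name}: integrated energy = {intensity * duration_ms:g}")
# All three come out at 30 (arbitrary units), so they can look much the same.
</pre>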


====Adaptive sync====
<!--
{{stub}}


For context, most monitors keep a regular schedule (and set it themselves{{verify}}).

Nvidia G-Sync and AMD FreeSync instead treat the frame length as varying.

Basically, the GPU sending a new frame will make the monitor start updating as it comes in,
rather than on the next planned interval.


Manufacturer marketing, as well as reviewers, seem very willing to imply things like:
: a vague "increases input lag dramatically", where the way they say it is often just ''wrong'', or
: "do you want to solve tearing ''without'' the framerate cap that vsync implies", where it's a pity that's pointing out the ''wrong'' issue about vsync,
: "G-Sync has better quality than FreeSync", whatever that means.


A more honest way of putting it might be
"when frames come in slower than the monitor's maximum,
we go to the actual render rate, without tearing"


...rather than the integer division ''below'' the actual render rate (if you have vsync on),
or sort of the actual render rate but with tearing (with vsync off).


There's absolutely value to that, but dramatic? Not very.


Note that doing this requires both a GPU and a monitor that support it.
Which so far, for monitors, is a gamer niche.

And the two mentioned buzzwords are not compatible with each other.


For some reason, most articles don't get much further than
"yeah, G-Sync is fancier and more expensive and FreeSync is cheaper and good enough (unless you fall under 30fps, which you want to avoid anyway)"

...and tend to skip ''why'' (possibly related to the fact that the explanation behind stuttering is often a confused ''mess'').


It seems to mean that the PC can instruct the monitor to start a new frame whenever it wants.
Note that while the monitor can show frames as soon as your GPU has them, it doesn't go beyond a maximum rate -
the monitor's refresh rate is now effectively the ''maximum'', and it will frequently go slower.

This in particular avoids the unnecessary further drop when frames come in slower than the monitor refresh rate.


FreeSync uses DisplayPort (1.2a?)'s existing Adaptive-Sync instead of something proprietary like G-Sync.
It will also work over HDMI when it supports VRR, which was officially added in HDMI 2.1 (2017), though there were supporting devices before then.

It means that any monitor could choose to support it with the same components, which means both that FreeSync monitors tend to be cheaper than G-Sync,
but also that there is more variation in what a given FreeSync implementation actually gives you.
Say, these monitors may have a relatively narrow range in which they are adaptive
: below that minimum, ''both'' will have trouble (though apparently it is more visible with FreeSync?)

G-Sync apparently won't go below 30Hz, which seems to mean you get the typical vsync-style integer reduction below that.


-->
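To contrast the two behaviours: with vsync, a frame that takes longer than one refresh interval is held for a whole extra interval, while with adaptive sync the monitor simply starts the new frame when it arrives. A rough sketch (my own simplification; the adaptive range minimum and tricks like low-framerate compensation are ignored):

<pre>
import math

REFRESH_HZ = 60.0
INTERVAL_MS = 1000.0 / REFRESH_HZ   # ~16.7ms

def shown_interval_vsync(render_ms):
    # Held for a whole number of refresh intervals.
    return math.ceil(render_ms / INTERVAL_MS) * INTERVAL_MS

def shown_interval_adaptive(render_ms):
    # Shown as soon as it is ready, but never faster than the panel's max rate.
    return max(render_ms, INTERVAL_MS)

for render_ms in (12, 18, 22, 30):
    v = shown_interval_vsync(render_ms)
    a = shown_interval_adaptive(render_ms)
    print(f"{render_ms}ms render: vsync shows it for {v:.1f}ms ({1000/v:.0f}fps), "
          f"adaptive for {a:.1f}ms ({1000/a:.0f}fps)")
</pre>

The 18ms case is the interesting one: vsync drops it to 30fps, adaptive sync shows it at roughly the actual 55fps render rate.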




===arguments for 60fps / 60Hz in gaming===
<!--


tl;dr:
* a smooth 60fps looks a ''little'' smoother than, say, 30fps
:: and if you can guarantee it, that means "looks good, no worries about it", which is worth some money


* yet a ''stable'' framerate may be more important than the exact figure
'''Are you saying 30fps is enough for everything?'''


* ...particularly with vsync on - a topic still misunderstood
It's enough information for most everyday tasks, and certainly dealing with slower motion, yes.
: and why people like me actually complain a little too much


* ...so if you get a stable 60+fps it will feel a little smoother
:: But there is a small bucket of footnotes to that.


: A solid 60 is going to look better than a solid 30
It's not hard to manufacture an example that looks smoother at 60fps, though, particularly with some fast-moving high-contrast things. Which certainly includes some game scenes.  
: yet a solid 30 is less annoying than an unpredictable 60.
 
:: more so with vsync on
So aiming for 60fps can't hurt.
::: because that's actually a mix of 30 and 60
It may not always let you play much better, but it still feels a bit smoother.
::: unless it's always enough and it's actually a solid 30


* the difference in how it plays is tiny for almost all cases
:: but if it's your career, you care.
:: and if it truly affects comfort, you care






'''Smoothness'''


When focusing in individual details, things look recognisably smoother when moving up, jumping to 20, 30, 40fps. Steps up to 50, 60, 70fps are subtler but still perceivable.




There's a bunch of conflation that actually hinders clear conclusions here. Let's try to minimize that.
Movies are often 24fps (historically down to half of that), and many new images movies ''still are'', even though sports broadcasts and home video had been doing 50/60fps for a long time (if interlaced).




To get some things out of the way:
In fact, for a while moviegoers preferred the calmness of the sluggish 24fps.
There's a theory that we came to associate 50/60fps with 'cheap',
roughly because of early home video,
which was also easily jerkier because motion stabilization wasn't a thing yet.


'''60fps and 60Hz are two different things entirely.'''


In the CRT days as well as in the flatscreen days, fps and Hz are not quite tied.


In the CRT days there was an extra reason you might would care separately about Hz - see [[#Reproduction that flashes]] above.


'''Reaction time'''
If you include your brain and actually parsing what you see. The total visual reaction timeof comprehending a new thing you see is ~250ms, down to ~200ms at the absolute best of times.
{{comment|(And that's not even counting a decision or moving your hands to click that mouse)}}




If on the other hand you are expecting something, the main limit is your retina's signalling.


And if, say, someone is moving in a straight line, you can even estimate ahead of time when they hit a spot, rather than wait for it to happen.




'''why do 24fps movies look okay, and 24fps game look terrible?'''


A number of different things help 24fps movies look okay.


Including, but not limited to,
'''Tracking'''
: a fully fixed framerate
: optical expectations
: expectation in general
: typical amount of movement - which is ''lower'' in most movies
: the typical amount of visual information


Vision research does tell us that more complex tasks, even just tracking a single object, the use of extra frames starts dropping above 40fps.


The last two are guided by cinematic language - it suits most storytelling to not have too much happening most of the time,
Also, the more complex a scene is, the less of it you will be able to parse, and we effectively start filling in assumptions instead.
and to have ''mostly'' slow movement.
It helps us parse most of what's happening, which is often the point, at least in most movies.


Sure, this is partly because movies have adapted to this restriction {{comment|(and an old one at that - 24fps was settled on in the 1930s or so, just as an average of the varying speeds from very early film that made it harder to show film)}}, but it is partly true regardless.


{{comment|(Frame rate was also once directly proportional to cost - faster speeds meant proportionally more money spent on film stock. In the digital era this is much less relevant)}}
And even for simple things we mostly just assume that they're placed in the intermediate positions (which as assumptions go goes a long way), but the amount of jumping between them is noticeable up to a point, because e.g. moving 20cm/s of screen space at 24fps is ~1cm each frame, and at 60fps is ~0.3cm.


Motion blur helps guides us and makes that jump much less noticeable, so that 24fps+blur can look like twice that. It's probably more useful at lower framerates than higher, though.


In particular, a moderately fast pan doesn't look great in 24fps, so it's often either a slow cinematic one, or a fast blur meant to signify a scene change. {{comment|(side note: Americans watching movies on analog TV had it worse - the way 24fps was turned into the fixed 30fps of TV (see [[telecining]]), which makes pans look more stuttery than in the cinema)}}




And no, movies at 24fps don't always look great - a 24fps action scene does ''not'' look quite as good as a higher-speed one.
'''What is your tracking task, really?'''
So action movies may well have faster framerates now.


That 20cm/s example is not what many games show.


As further indication of "if you want to notice everything": when speed-reading you move your eyes more often to see more words, but above ten times (or so) per second you can no longer process a fairly full eyeful of information fast enough.




But FPS gaming is usually either about
noticing a whole new scene (e.g. after a fast, twitchy 180), or
tracking a slow-moving object for a precisely timed shot - which is roughly the opposite of looking at everything.


'''Optics''' 


Blur, depth of field, and other optical imperfections that happen in film sound bad,
The first is largely limited by your brain - you don't really interpret a whole new scene in under 17ms (one frame at 60fps).
but we're so used to them that they help us parse what's happening.
: Depth of field helps us focus on the important bits.
: Motion blur tells us where we're going.
Both can be used well to help us focus mostly on important things without us ever noticing.


From another angle, we seem to see an image for at least ~15ms or so before we can parse what is in it.
While that's for an isolated scene flashed at you (and in games you are usually staring at something smaller) it's still a decent indicator that more than 65fps or so probably doesn't help much.




Also, the absence of such optical imperfections looks artificial.
This too is fine in (most) games.


In a scene that is largely still, or that you are tracking overall, it is likely you could focus on the important thing, and push the limits of what extra information extra frames might give.


Motion blur also helps things look less jumpy ''than they actually are''.
You can make arguments for not only 60fps but a few multiples higher -- but only for things so specifically engineered to maximize that figure that it really doesn't apply to games.


There's a GIF out there showing a ball moving at 60fps (or up to that - the browser ''might'' show it slower, and it ''must'' be approximate because of how GIF frame timing works - TODO: figure out that GIF), at 24fps, and at 24fps with motion blur.
: the 60fps looks smooth
: the 24fps looks jumpy
: the 24fps with motion blur looks inbetween, but ''significantly''  better than basic 24fps


https://www.reddit.com/r/pcmasterrace/comments/257suq/nice_little_gif_showing_the_difference_between_24/
You don't really use them to track (you don't need them to track),
but it looks more pleasing when it seems to be a little more regular.




"You're saying 30fps with motion blur would be good?"


It might. And it would be a real consideration, were it not that that motion blur must be simulated,
and doing that ''well'' is more costly to render than aiming for 60fps without motion blur.


So it's typically not worth it, ''especially'' not in games that go for twitchy shooting,
rather than cinematics (for a few different reasons, not just the amount of motion).


(It seems that some people get queasy from motion blur - possibly often because it tends to come combined with inconsistent framerates)






'''Fast movement'''
Notes:
* It seems that around eye [https://en.wikipedia.org/wiki/Saccade saccades] you can perceive ''irregularity'' in flicker a few multiples faster than you would otherwise}}


The faster the movement is, the more noticeable a low framerate is.
-->


Say, if that ball in the gif was moving at half the speed, the jumpiness would be much less noticeable.
===arguments for 60fps / 60Hz in gaming===
<!--


tl;dr:
* a smooth 60fps looks a ''little'' smoother than, say, 30fps
:: and if you can guarantee means "looks good, no worries about it" which is worth some money


This is actually a more complex topic. The fact that the ball is against a plain, static background lets us focus on it.
* yet a ''stable'' framerate may be more important than the exact figure
 
Games almost never have a static background.
: Much less a static background ''and'' a fast-moving object, at the same time.


In games, things move relative to each other, and  
* ...particularly with vsync on - a topic still misunderstood
: and why people like me actually complain a little too much


* ...so if you get a stable 60+fps it will feel a little smoother
:: But there is a small bucket of footnotes to that.


: A solid 60 is going to look better than a solid 30
: yet a solid 30 is less annoying than an unpredictable 60.
:: more so with vsync on
::: because that's actually a mix of 30 and 60
::: unless it's always enough and it's actually a solid 30


* the difference in how it plays is tiny for almost all cases
:: but if it's your career, you care.
:: and if it truly affects comfort, you care






'''How many fps do you need, for what?'''




'''Isn't more always better?'''


There's a bunch of conflation that actually hinders clear conclusions here. Let's try to minimize that.


There's a few camps here.


Some people don't care as long as it's not too choppy. 30fps is pretty decent for them.
To get some things out of the way:


'''60fps and 60Hz are two different things entirely.'''


Then there's the first-person-shooter crew, figuring that it'll lower their reaction time and so make them play better.
In the CRT days as well as in the flatscreen days, fps and Hz are not quite tied.
Which is actually an interesting discussion on its own.
 
In the CRT days there was an extra reason you might would care separately about Hz - see [[#Reproduction that flashes]] above.




There are also the more generic gamers, who don't argue 'makes me play better' but do argue if looks smoother it looks better.


This one is perhaps most interesting - and also simpler to answer.
Regardless of what else it does or doesn't do, if it looking a little smoother makes you happier, then it makes you happier.






'''For a cinematic sense'''
'''why do 24fps movies look okay, and 24fps game look terrible?'''


For at least the generation that grew up on camcorders, 50 or 60fps was associated with homemade and cheap.  
A number of different things help 24fps movies look okay.


Similarly, SDTV broadcasts would use 25 / 30fps for movies and 50 / 60fps for sports broadcasts,  
Including, but not limited to,  
which made for some association with slower=cinematic and faster=pay attention.
: a fully fixed framerate
: optical expectations
: expectation in general
: typical amount of movement - which is ''lower'' in most movies
: the typical amount of visual information


Both of these are less true today, but we have ''different'' associations of this kind,
like webcams and phone cameras usually getting too little light for quality high frame rates.


Also, because people making cinematic content know how slower movement helps draw attention, regardless of the framerate it's being rendered at, it's arguably a separate concept anyway.

The last two are guided by cinematic language - it suits most storytelling to not have too much happening most of the time,
and to have ''mostly'' slow movement.
It helps us parse most of what's happening, which is often the point, at least in most movies.


Sure, this partly because movies have adapted to this restriction {{comment|(and an old one at that - 24fps was settled on in the 1930s or so, just as an average of the varying speeds from very early film that made it harder to show film)}}, but is partly true regardless.


In fact, it seems that just the fact that this slowness feels different ''somehow'' helps our suspension of disbelief even if we don't understand where it's coming from.
{{comment}(Frame rate was also once directly proportional to cost - faster speeds meant proportionally more money spent on film stock. In the digital era this is much less relevant)}}




In particular, a moderately fast pan doesn't look great in 24fps, so it's often either a slow cinematic one, or a fast blur meant to signify a scene change. {{comment|(side note: Americans watching movies on analog TV had it worse - the way 24fps was turned into the fixed 30fps of TV (see [[telecining]]), which makes pans look more stuttery than in the cinema)}}


'''for a sense of smooth motion'''


higher is perceptibly smoother -- to diminishing amounts
And no, movies at 24fps don't always look great - a 24fps action scene does ''not'' look quite as good as good as a higher speed one.
: 30ish is a sensible minimum
So action movies may well have faster framerates now.
: in that
:: 30 is fairly clearly better than 20
:: 40 is still better than 30 in a way you can notice without looking too hard,
:: 50 over 40 is less clear, but can still be seen
:: 60 over 50 is




There's also the fact that monitors and such have been 60Hz.
When you show a lower framerate, that difference in rate interacts in a way that means you occasionally have to not show a frame (or show two at once),
and for this reason alone it makes sense to shoot for either
: one of the rates your display goes at, exactly
: or a clear divisor.


In a lot of practice, that means 60 or 30.


Or rates like 75 or 90 - but, from the above,
the diminishing returns mean that at this point the improvements are getting small, while still requiring linearly faster (and more-than-linearly more expensive) hardware.


So arguing for more than 60fps is on the verge of overkill, and 144 and 240 much more so.
'''Optics''' 


Blur, depth of field, and other optical imperfections that happen in film sound bad,
but we're so used to them that they help us parse what's happening.
: Depth of field helps us focus on the important bits.
: Motion blur tells us where we're going.
Both can be used well to help us focus mostly on important things without us ever noticing.






'''regularity and frametimes'''
Also, the absence of such optical imperfections looks artificial.
This too is fine in (most) games.


Perhaps one of the best arguments for 60fps is then that, assuming that rendering rate will almost always vary, you want your dip to not be so visible.


Motion blur also helps things look less jumpy ''than they actually are''.


You're treating 60fps as a sort of headroom:
There's a GIF out there showing a ball moving at 60fps (up to; browser ''might'' go slower. And it ''must'' be approximate, because of how GIF works - TODO: figure out that GIF), 24fps, and 24fps with motion blur.
If it dips from 60, it's probably not going to dip below 30, and while you may well ''notice'', it'll still be quite acceptable.
: the 60fps looks smooth
: the 24fps looks jumpy
: the 24fps with motion blur looks inbetween, but ''significantly'' better than basic 24fps


If it dips from 30, you're immediately in clunky territory.
https://www.reddit.com/r/pcmasterrace/comments/257suq/nice_little_gif_showing_the_difference_between_24/




Who hasn't tweaked their graphics settings to avoid that clunky?
"You're saying 30fps with motion blur would be good?"


It might. And it would be a real consideration, were it not that that motion blur must be simulated,
and doing that ''well'' is more costly to render than aiming for 60fps without motion blur.


So it's typically not worth it, ''especially'' not in games that go for twitchy shooting,
rather than cinematics (for a few different reasons, not just the amount of motion).


This ends up having some weird edge cases.
(It seems that some people get queasy from motion blur - possinly often because it tends to be inconsistent framerates)
For example, if you want to ensure a game runs at &ge;60fps ''throughout'', this is probably a waste, because the best way to do that is at the cost of detail. On consoles with more predictable hardware this is fine tuning, in general (and on the more varied modern consoles) it's more dynamic and arbitrary.
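That 'trade detail for a steady framerate' idea is essentially a little feedback loop: measure the frame time, and nudge the render resolution (or another quality knob) down when you are over budget and back up when you have headroom. A hypothetical sketch (class name, thresholds and factors are mine, not any particular engine's API):

<pre>
class DynamicResolution:
    """Toy controller: keep frame time under budget by scaling render resolution."""
    def __init__(self, target_fps=60, scale=1.0):
        self.budget_ms = 1000.0 / target_fps
        self.scale = scale                          # fraction of native resolution

    def update(self, last_frame_ms):
        if last_frame_ms > self.budget_ms:          # over budget: drop detail quickly
            self.scale = max(0.5, self.scale * 0.92)
        elif last_frame_ms < 0.8 * self.budget_ms:  # lots of headroom: creep back up
            self.scale = min(1.0, self.scale * 1.02)
        return self.scale

ctrl = DynamicResolution()
for ms in (14, 15, 19, 21, 18, 15, 13, 12):
    print(f"frame {ms}ms: render scale now {ctrl.update(ms):.2f}")
</pre>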






If you have
'''Fast movement'''
* one system that draws most frames 10ms but sometimes jump up to 100ms (maybe a poorly optimized texture load or whatnot),
* another system that draws frames 15ms with little variation


but the latter is going to look smoother on 60fps.
The faster the movement is, the more noticeable a low framerate is.  


Say, if that ball in the gif was not moving at half the speed, the jumpiness would be much less noticeable.




Additionally, for any monitor that itself has a schedule (TODO: figure out how common that is),
This is actually more complex topic. The fact that that ball is against a plain, static background lets us focus on it.
getting a frame late basically means we show the last frame duplicated instead of this one.


For example, say, 60fps means you have to have a new frame every &lt;16.6ms.
Games almost never have a static background.
: Much less have a static background  ''and'' have a fast moving object'', at the same time.


In games, things move relative to each other, and






And regularity also makes lower fps look better to most people.




As such, aiming for a consistent 30fps might look just as good, or better, than wavering below 60fps.
Of course, if you can get a ''solid'' 60fps, that's better.


'''How many fps do you need, for what?'''




'''Isn't more always better?'''




There's a few camps here.
Some people don't care as long as it's not too choppy. 30fps is pretty decent for them.


'''The playing-better angle'''


Then there's the first person shooter crew, figuring that it'll lower their reaction time so make them play better.
Which is actually an interesting discussion on its own.




'''Input latency'''
There are also the more generic gamers, who don't argue 'makes me play better' but do argue if looks smoother it looks better.
''How'' important it is in general depends on the task, and is probably quickly diminishing returns under a few dozen ms for a lot of things.


If your argument is shootybang responsiveness, input latency is at least as important as framerate,
This one is perhaps most interesting - and also simpler to answer.
in that you just want your input lag to be lower than the other guys.  
Regardless of what else it does or doesn't do, if it looking a little smoother makes you happier, then it makes you happier.


That said
* you can't react to anything you haven't seen yet.
: Even on 240fps (4ms frame interval) there's little point to having a 2ms keyboard;


* there's no point in seeing something when your input is a multiple of frames
: if your keyboard has 15ms delay, there's no point to more than 60fps


'''For a cinematic sense'''


For at least the generation that grew up on camcorders, 50 or 60fps was associated with homemade and cheap.


Similarly, SDTV broadcasts would use 25 / 30fps for movies and 50 / 60fps for sports broadcasts,
which made for some association with slower=cinematic and faster=pay attention.


'''for reaction time'''
Both of these are less true today, but we have ''different'' associations of this kind,
: showing things earlier means we can react to them earlier
like webcams and phone cameras usually getting too little light for quality high frame rates.


: ''regardless'' of whether framerate still helps at a point, we also require input latency not much higher than 15ms, or it negates the point of the framerate
Also, because people making cinematic content know how slower movement helps draws attention,
regardless of the framerate it's being rendered at, it's arguably separate concepts anyway.


: ''regardless'' of whether framerate or input latency help, human reaction time is 200ms for most people, maybe half that for professional twitchy-FPS players
::  basically meaning that the difference isn't between e.g. 10ms or 20ms, comparison is 200+10ms and 200+20ms.


: this still argues for 30..60fps.
In fact, it seems that just the fact that this slowness feels different ''somehow'' helps our suspension of disbelief even if we don't understand where it's coming from.
:: say, the difference between 20fps and 60fps means you may see things ''maybe'' 20ms earlier
::: ...''if'' it only just became visible this frame (anything that already was visible, you can track). Or you can spin 180 degrees in a frame or two (most games don't let you).
::: which for unanticipated things is ~10% of your reaction time, so barely matters
::: yet assuming you can use all visual information at 60fps (depends a lot on the amount and type of information), this is a frame or two, so could matter
:: the same calculation for higher than 60fps falls away to a handful of milliseconds, and dubious ability to even see that.
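Putting the arithmetic from the list above in one place: the frame interval is only one slice of the time between something happening and you reacting to it, next to a roughly fixed human reaction time and the input chain. A back-of-the-envelope sketch (my own toy breakdown; the 220ms reaction and 15ms input figures are just the ballpark values these notes mention):

<pre>
REACTION_MS = 220.0   # order-of-magnitude human visual reaction time
INPUT_MS    = 15.0    # typical keyboard/mouse/controller latency

for fps in (20, 30, 60, 144, 240):
    frame_ms = 1000.0 / fps
    # On average, a brand-new event waits about half a frame before it is drawn.
    total = REACTION_MS + INPUT_MS + frame_ms / 2
    print(f"{fps:>3}fps: ~{frame_ms/2:4.1f}ms average display wait, ~{total:.0f}ms total")
</pre>

The step from 20fps to 60fps moves the total by roughly 17ms; the step from 60fps to 240fps moves it by only about 6ms more.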






'''for a sense of smooth motion'''


"But I can clearly see the difference between 30fps and 60fps"
higher is perceptibly smoother -- to diminishing amounts
: 30ish is a sensible minimum
: in that
:: 30 is fairly clearly better than 20
:: 40 is still better than 30 in a way you can notice without looking too hard,
:: 50 over 40 is less clear, but can still be seen
:: 60 over 50 is


Oh yeah, so can I.


And you are probably objectively better at seeing it than I am, having trained yourself.
There's also the fact that monitors and such have been 60Hz.
When you show a lower framerate, that difference in rate interacts n a way that means you occasionally have to not show a frame (or show two at once),
and for this reason alone it makes sense to shoot for either
: one of the rates your display goes at, exactly
: or a clear divisor.


In a lot of practice, that means 60 or 30.


More so with vsync on - a topic still misunderstood, but which means dips are more severe than strictly necessary.
Or some rates like 75, 90 but, from the above,
the diminishing returns mean that at this point the improvements are getting small, but still require linearly faster (and more-than-linearly more expensive) hardware.


So arguing for than 60fps is on the verge of overkill, 144 and 240 much more so.


Yet there are several possible reasons that you see a difference that do not actually help in any real way.




Choppiness is annoying, and if we can throw money at it, fine.


I might too, but I'm not bothered until a lower fps rate ''or'' lowering the render quality to avoid it.
'''regularity and frametimes'''


I'm just not as picky, but I understand that other people are.
Perhaps one of the best arguments for 60fps is then that, assuming that rendering rate will almost always vary, you want your dip to not be so visible.




You're treating 60fps as a sort of headroom:
If it dips from 60, it's probably not going to dip below 30, and while you may well ''notice'', it'll still be quite acceptable.


If it dips from 30, you're immediately in clunky territory.


Monitors tend to run at a constant refresh rate - a very sensible design choice.


If you don't have a new frame when the monitor wants to replace the image it's showing,
Who hasn't tweaked their graphics settings to avoid that clunky?
this amounts to duplicating the previous frame and probably soon dropping another frame too.
If what you were drawing is smooth movement, this will be perceived as a slight stutter.




{{skipme|
If you grew up before or around DVDs, and in America or another 60Hz country, you will be much more familiar with that stutter than I am.
I've lived mostly in 50Hz countries, which just double the frames and play the audio a few percent faster,
but 60Hz SDTV, and DVDs showing 24fps movies, use telecining (more specifically three-two pulldown), which amounts to some frames being on screen longer than others.  In particular large pans make it clear that this is a little irregular.


This argues that you're better off at media-native rates if you can do it - but this doesn't apply to gaming directly,
This ends up having some weird edge cases.
because we can ask our engine to render at a rate we want}}
For example, if you want to ensure a game runs at &ge;60fps ''throughout'', this is probably a waste, because the best way to do that is at the cost of detail. On consoles with more predictable hardware this is fine tuning, in general (and on the more varied modern consoles) it's more dynamic and arbitrary.




Unlike video, which has a framerate itself, anything that is rendered on the fly can be done at any speed we choose,
up until the point that rendering isn't fast enough to be done in time.


If you have
* one system that draws most frames 10ms but sometimes jump up to 100ms (maybe a poorly optimized texture load or whatnot),
* another system that draws frames 15ms with little variation


but the latter is going to look smoother on 60fps.






Additionally, for any monitor that itself has a schedule (TODO: figure out how common that is),
getting a frame late basically means we show the last frame duplicated instead of this one.


For example, say, 60fps means you have to have a new frame every &lt;16.6ms.






Another argument is the object tracking thing.
Looking smoother might help a little with movement cues, but it doesn't help you judge any better.
It isn't in your head.


And regularity also makes lower fps look better to most people.


https://www.pcgamer.com/how-many-frames-per-second-can-the-human-eye-really-see/


Of course, I'm way behind, with gamers going for 120Hz and even 240Hz
As such, aiming for a consistent 30fps might look just as good, or better, than wavering below 60fps.
Of course, if you can get a ''solid'' 60fps, that's better.


Where ''most'' of these arguments fall away.




Line 1,856: Line 1,879:




'''What about e-sports?'''
'''The playing-better angle'''


Yes. Fair.


If you are playing FPSes at a professional level,
then even if it maybe doesn't matter much half of the time,
you have a strong interest in not wanting even the potential degradation to happen even sometimes maybe.


I heartily recommend you get the ridiculous rated stuff (within reason).
'''Input latency'''
I would do the same myself.
''How'' important it is in general depends on the task, and is probably quickly diminishing returns under a few dozen ms for a lot of things.


This is also a tiny exception case.
If your argument is shootybang responsiveness, input latency is at least as important as framerate,
in that you just want your input lag to be lower than the other guys.  


That said
* you can't react to anything you haven't seen yet.
: Even on 240fps (4ms frame interval) there's little point to having a 2ms keyboard;


* there's no point in seeing something when your input is a multiple of frames
: if your keyboard has 15ms delay, there's no point to more than 60fps


-->




====On reaction time====
<!--


Human reaction time has a curve of diminishing returns, for a handful of different reasons.
'''for reaction time'''
: showing things earlier means we can react to them earlier


: ''regardless'' of whether framerate still helps at a point, we also require input latency not much higher than 15ms, or it negates the point of the framerate


'''For things you do not or cannot anticipate''', reaction time is easily 300ms and rarely below 200ms.
: ''regardless'' of whether framerate or input latency help, human reaction time is 200ms for most people, maybe half that for professional twitchy-FPS players
: ...for visuals - it can be a few dozen ms lower for audio and touch, presumably because it's less information to process{{verify}}
::  basically meaning that the difference isn't between e.g. 10ms or 20ms, comparison is 200+10ms and 200+20ms.


When designing things with personal safety in mind, such as "braking distance for drivers", this guides the ''absolute minimum'' figure, usually with a lot of safety margin on top of this. {{comment|(Consider that at highway speed, ~100km/h, that's roughly 8 meters traveled before you've even processed enough to ''start'' moving your foot towards the brake pedal - it'll be a bunch more before your foot is pressing the brake hard enough, and the friction to slow down is actually happening. And even then the typical calculations are for cars, not for trucks that have many times more momentum to get rid of)}}.
: this still argues for 30..60fps.
:: say, the difference between 20fps and 60fps means you may see things ''maybe'' 20ms earlier
::: ...'''if'' it only just became visible this frame (anything that already was you can track). Or you can spin 180 degrees in a frame or two (most games don't let you).
::: which for unanticipated things is ~10% of your reaction time, so barely matters
::: yet assuming you can use all visual information at 60fps (depends a lot on the amount and type of information), this is a frame or two, so could matter
:: the same calculation for higher than 60fps falls away to a handful of milliseconds, and dubious ability to even see that.




'''fully anticipated reactions''' can be much faster
: e.g. when they are purely about motor control, which you can start early
: for example, rhythm games are on a fully regular pace, and you can work from memory,
: and you can negate some of the controller latency by hitting it early - you'll probably learn to do this intuitively
: These games judge you at timing within a much smaller window, where more than 30ms or so[https://gaming.stackexchange.com/questions/294060/which-bemani-rhythm-game-in-the-arcades-has-the-strictest-timing-window] may just be considered 'bad', and expert players can get timing within 15ms pretty consistently.




"But I can clearly see the difference between 30fps and 60fps"


Most games are somewhere inbetween these two extreme.  
Oh yeah, so can I.


Yes, if someone is walking in a straight line, then yeah, you can figure out precisely where they'll be.
And you are probably objectively better at seeing it than I am, having trained yourself.


In most twitch shooters, though, one of the first things you learn is to ''not'' walk in a straight line all the time, though, sooo....


More so with vsync on - a topic still misunderstood, but which means dips are more severe than strictly necessary.




Yet there are several possible reasons that you see a difference that do not actually help in any real way.


Also consider that the input lag of most devices out there is on the order of 10-25ms.


Wireless controllers should be assumed to have 10-25ms latency (and may spike higher, which is mostly out of their control).
Choppiness is annoying, and if we can throw money at it, fine.  


Wired keyboards and mice have at least 8ms latency (often more like 15ms overall), and so do a lot of game controllers.
I might too, but I'm not bothered until a lower fps rate ''or'' lowering the render quality to avoid it.


-->
I'm just not as picky, but I understand that other people are.


====On end-to-end latency====


<!--
End-to-end, from click to screen, expect 30  to 50ms (normally) to maybe 15ms at expensive-best (with stuff like gsync and reflex)




It's not that higher framerate won't reduce that, it's that it will only reduce one aspect
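A rough way to see why: end-to-end latency is a sum of stages, and the frame interval is only one of them. The stage numbers below are my own illustrative guesses in the ballpark these notes mention, not measurements:

<pre>
def end_to_end_ms(fps, input_ms=10.0, game_logic_ms=5.0, display_ms=5.0):
    frame_ms = 1000.0 / fps
    # input chain + simulation + roughly one frame of render/queueing + panel response
    return input_ms + game_logic_ms + frame_ms + display_ms

for fps in (30, 60, 144, 240):
    print(f"{fps:>3}fps: ~{end_to_end_ms(fps):.0f}ms click-to-screen")
# Going from 60 to 240fps only shaves the frame-interval slice (~17ms down to ~4ms).
</pre>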
Monitors tend to run at a constant refresh rate - a very sensible design choice.


If you don't have a new frame when the monitor wants to replace the image it's showing,
this amounts to duplicating the previous frame and probably soon dropping another frame too.
If what you were drawing is smooth movement, this will be perceived as a slight stutter.


-->


====Tracking objects?====
{{skipme|
<!--
If you grew up before or around DVDs, and in America or another 60Hz country, you will be much more familiar with that stutter this than I am.
I've lived mostly in 50Hz countries, which just double the frames and play the audio a few percent slower,
but 60Hz SDTV, and DVD showing 24fps movies, use telecining (more specifically three-two pulldown), which amounts to some information being on screen longer than others.  In particular large pans make it clear that this is a little irregular.


While reaction time to something new is order of 250ms, what about tracking someone?
This argues that you're better off at media-native rates if you can do it - but this doesn't apply to gaming directly,
because we can ask our engine to render at a rate we want}}


Because as long as they are perfectly predictable, e.g. under "assuming something keeps going at that speed",
this isn't about reaction ''at all''.


Unlike video, which has a framerate itself, anything that is rendered on the fly can be done at any speed we choose,
up until the point that rendering isn't fast enough to be done in time.


The funny thing is that because we're a state observer with assumptions anyway:
* ''while'' an object is predictable, we actually do very well with very little
: The question becomes - how much does framerate help tracking, and how fast do the returns diminish? (see the sketch below)


* ''if'' they do something unpredictable, we're back to order-of-250ms reaction speed
: ...mostly.
:: If we're talking real physics, people can only change direction so fast without assistance and/or damage. Games, however, allow physically impossible things for fun and challenge.
:: if we can anticipate what they do (e.g. turn back from a ravine) we have a ''chance'' of that path being correct.
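A tiny sketch of that 'state observer' idea (my own): as long as the target keeps doing what it was doing, a constant-velocity guess needs very few observations, and the framerate mostly just limits how quickly you notice when the guess stops being right.

<pre>
def predict(last_pos, velocity, dt):
    # Dead reckoning: assume the target keeps its current velocity.
    return last_pos + velocity * dt

# Observed at two moments, 100ms apart:
p0, p1 = 4.0, 5.0
velocity = (p1 - p0) / 0.1          # 10 units/s

# Predict 250ms ahead (about one reaction time):
print(predict(p1, velocity, 0.25))  # 7.5, fine while they move predictably
# The moment they dodge, this is wrong, and you are back to ~250ms to re-acquire.
</pre>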




-->


====On intentional motion blur====
<!--


Film will often have motion blur. Exposure time and movement do that.


This feels natural to our brains, in part because we've seen this in everything camera-based,
and probably always will.




Another argument is the object tracking thing.
Looking smoother might help a little with movement cues, but it doesn't help you judge any better.
It isn't in your head.


Rendered frames are perfectly sharp.


That makes it look a little ''worse'' at the same frame rate, in part because without blur it's more noticeable that things jump from one place to the other, with no hint as to what direction.
https://www.pcgamer.com/how-many-frames-per-second-can-the-human-eye-really-see/


Of course, I'm way behind, with gamers going for 120Hz and even 240Hz


Real camera motion blur guides our brains to realize which direction movement is happening in.
Where ''most'' of these arguments fall away.


And interestingly, it's fairly simple to show with an example that
30fps with motion blur in the appropriate direction looks noticeably smoother than
30fps with no motion blur,
even though we have the exact same number of frames,
probably because it helps guide your eyes to where things are going.


It's more like ~30fps film, or like ~60fps rendered.
It's not easy/accurate to quantify like that, though.


It's one reason that screenshots may be rendered with extra blur ''to look better'',
and games may add motion blur.




There are two 'but's to this, though.
'''What about e-sports?'''


One, games tend to overdo motion blur, making things harder to see,
Yes. Fair.  
and giving a portion of people motion sickness.


If you are playing FPSes at a professional level,
then even if it maybe doesn't matter much half of the time,
you have a strong interest in not wanting even the potential degradation to happen even sometimes maybe.


But motion blur (particularly the better-quality blur) can be pretty expensive to render,
I heartily recommend you get the ridiculous rated stuff (within reason).
to the point you might be able to run twice the framerate without,
I would do the same myself.
which will be comparable and lets the eye/brain
 
sort it out from whole-screen cues (...not detail).
This is also a tiny exception case.




There is an argument that motion blur is only worth it when you can render
it at maybe twice the intended framerate -- but this is prohibitively expensive.
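For what 'rendering at a multiple of the target rate' buys you: one simple way to get decent motion blur is temporal supersampling - render several sub-frames per output frame and average them, so fast-moving things leave a smear in the movement direction. A minimal sketch of the idea (my own, with a 1D 'object position' standing in for a rendered image):

<pre>
def blurred_frames(samples_240hz, sub_per_frame=8):
    """Average groups of 240Hz sub-frame samples into 30fps output frames.
    The spread within each group is what shows up as motion blur."""
    frames = []
    for i in range(0, len(samples_240hz) - sub_per_frame + 1, sub_per_frame):
        group = samples_240hz[i:i + sub_per_frame]
        frames.append((sum(group) / len(group), min(group), max(group)))
    return frames

# An object moving at a constant 2 units per 240Hz sub-frame:
samples = [2 * t for t in range(48)]
for center, lo, hi in blurred_frames(samples):
    print(f"30fps frame: drawn around {center:5.1f}, smeared from {lo} to {hi}")
</pre>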


-->
-->


===On resolution===
 
====On reaction time====
<!--
<!--


Human reaction time has a curve of diminishing returns, for a handful of different reasons.


The eye means we see a maximum size/angle of detail, meaning useful resolution in things we see is a thing about size ''and'' distance, and to a lesser degree what you're viewing.


'''For things you do not or cannot anticipate''', reaction time is easily 300ms and rarely below 200ms.
: ...for visuals - it can be a few dozen ms lower for audio and touch, presumably because it's less information to process{{verify}}


When designing things with personal safety in mind, such as "braking distance for drivers", this guides the ''absolute minimum'' figure, usually with a lot of safety margin on top of this.  {{comment|(Consider that's 8 meters traveled before you've even processed enough to ''start'' moving your foot towards the brake pedal - it'll be a bunch more before your foot is pressing the brake hard enough, and the friction to slow down is actually happening. And even then the typical calculations are for cars, not for trucks that have many times more momentum to get rid of)}}.


You can think of our eye's resolution as an ''angular'' resolution, as a cone that becomes wider at distance, as a little area that that cone has at a particular distance.


You will see the ''average'' of what it happening in there, so, say, at three meters, you will barely see whether whether a 20" monitor is 4K or 480p.
'''fully anticipated reactions''' can be much faster
: e.g. when they are purely about motor control, which you can start early
: for example, rhythm games are on a fully regular pace, and you can work from memory,  
: and you can negate some of the controller latency by hitting it early - you'll probably learn to do this intuitively
: These games judge you at timing within a much smaller window, where more than 30ms or so[https://gaming.stackexchange.com/questions/294060/which-bemani-rhythm-game-in-the-arcades-has-the-strictest-timing-window] may just be considered 'bad', and expert players can get timing within 15ms pretty consistently.




You ''could'' plot this out as viewing distance versus screen size with regions of where what resolutions become noticeable - three variables, like "for a 23" monitor, you must sit at 1 meter or closer to see detail in 1080p".


And that's arguably useful in purchasing decisions, where you are constrained by budget and room
Most games are somewhere inbetween these two extreme.  
"I sit at 2.5 meters and can afford maybe 50 inch, does better than 720p matter? (yes), does better than 1080p matter? (no)"


Yes, if someone is walking in a straight line, then yeah, you can figure out precisely where they'll be.


In some cases you can simplify it further.  
In most twitch shooters, though, one of the first things you learn is to ''not'' walk in a straight line all the time, though, sooo....


If you can assume you automatically sit close enough for whatever screen size to fill most of your field of vision, that basically fixes distance ''to'' size - and reduces the questions to how many pixels per your viewing angle. The answer to that is as constant as your eye is.




Say, I'm fairly consistent at sitting around 70cm away from monitors,
and monitors I use tend to be around 20 inch.


Roughly speaking, the benefits drop quickly above 1080p.  
Also consider that the input lag of most devices out there is on the order of 10-25ms.


The only reason for more is when you actually sometimes sit closer focusing on a smaller area.
Wireless controllers should be assumed to have 10-25ms latency (and may spike higher, which is mostly out of their control).
Which can be true for games, like shooters.


Wired keyboards and mice have at least 8ms latency (often more like 15ms overall), and so do a lot of game controllers.


-->


Behind your computer, you're probably a meter or less away from a 20-30" monitor.
====On end-to-end latency====


A living room with a TV is usually further away - but also much larger.
<!--
End-to-end, from click to screen, expect 30  to 50ms (normally) to maybe 15ms at expensive-best (with stuff like gsync and reflex)




 
It's not that higher framerate won't reduce that, it's that it will only reduce one aspect
 
http://carltonbale.com/1080p-does-matter/




-->
-->


 
====Tracking objects?====
===On contrast ratio / dynamic range===
 
<!--
<!--


Both of these terms describe the idea of 'how much brighter is the brightest than the darkest part?'
While reaction time to something new is order of 250ms, what about tracking someone?


Because as long as they are perfectly predictable, e.g. under "assuming something keeps going at that speed",
this isn't about reaction ''at all''.


But there's no standard, you see, which means you can play with the wording,
so monitors (and projectors) have started to bend the rules when it comes to dynamic range.


The funny thing is that because we're a state observer with assumptions anyway, and
* ''while'' an object is predictable, we actually do very well with very little
: The question becomes - how much does framerate help tracking, and how fast doe the returns diminish


Short story:
* ''if'' they do something unpredictable, we're back to order-of-250ms reaction speed
* "native"  is the maximum you'll see in an image.
: ...mostly.
* "dynamic" is the maximum possible difference over time when you play with the backlight intensity and also get creative with the measurements
:: If we're talking real physics, people can only change direction so fast without assistance and/or damage. Games, however, allow physically impossible things for fun and challenge.
:: if we can anticipate what they do (turn back from ravine) we have a ''change'' of that path being correct.




Native basically means put an image on there (say, something black and white),
-->
look at the difference between the brightest and darkest pixel.
The factor will be on the order of 5000.


====On intentional motion blur====
<!--


Dynamic can mean anything, including
Film will often have motion blur. Exposure time and movement do that.
"adjust the backlight to max and display white. Now turn the backlight off and display black".
The factor can be hundreds of thousands.
But you will ''never'' see that in an image, or any realistic video, or gaming, or much of anything.


So is it a lie? Technically no.  
This feels natural to our brains, in part because we've seen this in everything camera-based,
and probably always will.  


Is it misleading? Absolutely.


Is it completely useless?  Not entirely.
Take a daylight scene. The darkest pixel in a shadow somewhere present does not need to be a nearly-devoid-of-light dark - you won't notice it.
Now take the same scene at night.
Probably the same dynamic range within the image,
but the fact that the backlight can know to dial things down so that black looks black is probably going to look better than not doing that.
The fact that the display can do both is ''useful''.
The fact that you wouldn't want to see those two images alternating at 60FPS, and the fact that the display probably wouldn't deal with that, are the interesting things to know.
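Toy numbers showing why the 'dynamic' figure looks so much larger than anything you see within a single image (the luminance values are made up by me, but in a realistic ballpark for an LCD with an adjustable backlight):

<pre>
white_full_backlight = 250.0    # cd/m², backlight at maximum, white pixel
black_full_backlight = 0.05     # LCD pixels never block all of that light
black_backlight_off  = 0.0005   # backlight dimmed/off for an all-black frame

native  = white_full_backlight / black_full_backlight
dynamic = white_full_backlight / black_backlight_off

print(f"native contrast  ~ {native:,.0f}:1")   # ~5,000:1, what one image can show
print(f"dynamic contrast ~ {dynamic:,.0f}:1")  # huge, but never within one image
</pre>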


Rendered frames are perfectly sharp.
That makes it look a little ''worse'' at the same frame rate, in part because without blur it's more noticeable that things jump from one place to the other, with no hint as to what direction.




http://www.cnet.com/news/contrast-ratio-or-how-every-tv-manufacturer-lies-to-you/
Real camera motion blur guides our brains to realize which direction movement is happening in.


And interestingly, it's fairly simple to show with an example that
30fps with motion blur in the appropriate direction looks noticeably smoother than
30fps with no motion blur,
even though we have the exact amount of frames,
probably because it helps guides your eyes to where things are going.


-->
It's more like ~30fps film, or like ~60fps rendered.
It's not easy/accurate to quantify like that, though.


===see also===
It's one reason that screenshots may be rendered with extra blur ''to look better'',
and games may add motion blur.


[[Visuals_DIY#Analog_video_notes]]


There are two 'but's to this, though.


One, games tend to overdo motion blur, making things harder to see,
and giving a portion of people motion sickness.


=Flatscreen monitors=
<!--


==Capabilities==
But motion blur (particularly the better-quality blur) can be pretty expensive to render,
to the point you might be able to run twice the framerate without,
which will be comparable and lets the eye/brain
sort it out from whole-screen cues (...not detail).


===Resolution===
A TFT screen has a fixed number of pixels, and therefore a natural (native) resolution. Lower resolutions (and sometimes higher ones) can be displayed, but are interpolated, so will not be as sharp. Most people use the natural resolution.
This may also be important for gamers, who may not want to be forced to a higher resolution for crispness than their graphics card can handle in terms of speed.


For:
There is an argument that motion blur is only worth it when you can render
* 17": 1280x1024 is usual (1280x768 for widescreen)
it at maybe twice the intended framerate -- but this is prohibitively expensive.
* 19": 1280x1024 (1440x900 for widescreen)
* 20": 1600x1200 (1680x1050 for widescreen)
* 21": are likely to be 1600x1200 (1920x1200 for widescreen)


-->


Note that some screens are 4:3 (the classic computer/TV ratio), some 5:4 (e.g. 1280x1024 monitors), some 16:9 or 16:10 (widescreen), but often not ''exactly'' that, pixelwise; many things opt for some multiple that is easier to handle digitally.
===On resolution===
<!--


The eye sees a maximum size/angle of detail, meaning the useful resolution of the things we look at is a matter of size ''and'' distance, and to a lesser degree of what you're viewing.

You can think of our eye's resolution as an ''angular'' resolution: as a cone that becomes wider with distance, as a little area that that cone covers at a particular distance.

You will see the ''average'' of what is happening within that area. So, say, at three meters, you will barely see whether a 20" monitor is 4K or 480p.


You ''could'' plot this out as viewing distance versus screen size, with regions of where which resolutions become noticeable - three variables, like "for a 23" monitor, you must sit at 1 meter or closer to see the detail in 1080p".

That's arguably useful in purchasing decisions, where you are constrained by budget and room:
"I sit at 2.5 meters and can afford maybe 50 inch. Does better than 720p matter? (yes) Does better than 1080p matter? (no)"


In some cases you can simplify it further.

If you can assume you automatically sit close enough for whatever screen size to fill most of your field of vision, that basically ties distance ''to'' size - and reduces the question to how many pixels per viewing angle. The answer to that is as constant as your eye is.

Say, I'm fairly consistent at sitting around 70cm away from monitors,
and the monitors I use tend to be around 20 inch.
Roughly speaking, the benefits drop quickly above 1080p.
The only reason for more is when you actually sometimes sit closer, focusing on a smaller area -
which can be true for games, like shooters.


Behind your computer, you're probably a meter or less away from a 20-30" monitor.
A living room with a TV is usually further away - but the screen is also much larger.


http://carltonbale.com/1080p-does-matter/
-->
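A quick way to put numbers on this: assuming ~1 arcminute of visual acuity (a common 20/20 rule of thumb), you can estimate the distance beyond which a given resolution stops adding visible detail. This is only a minimal sketch; the acuity figure and the example sizes below are assumptions, not measurements.

 # at what distance do individual pixels stop being resolvable,
 # assuming ~1 arcminute of visual acuity?
 import math
 
 ARCMIN = math.radians(1 / 60)          # ~0.00029 rad
 
 def max_useful_distance_m(diagonal_inch, h_pixels, aspect=(16, 9)):
     """Distance (meters) beyond which this resolution stops adding visible detail."""
     aw, ah = aspect
     width_m = diagonal_inch * 0.0254 * aw / math.hypot(aw, ah)
     pixel_m = width_m / h_pixels       # physical pixel pitch
     return pixel_m / math.tan(ARCMIN)
 
 print(max_useful_distance_m(23, 1920))   # ~0.9 m  - "sit at 1 meter or closer" for 1080p detail on 23"
 print(max_useful_distance_m(50, 1280))   # ~3.0 m  - 720p detail still resolvable from a 2.5 m couch
 print(max_useful_distance_m(50, 1920))   # ~2.0 m  - 1080p detail already is not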


===Refresh===
Refresh rates as they existed in CRT monitors do not directly apply; there is no line scanning going on anymore.

Pixels are continuously lit, which is why TFTs don't seem to flicker the way CRTs do. Still, pixels react only so fast to changes in the intensity they should display at, which limits the amount of pixel change you will actually see per second.

Longer response times mean moving images are blurred and you may see ghosting of bright images. Older TFT/LCDs did something on the order of 20ms (roughly 50fps), which was not really acceptable for gaming.

However, the millisecond measure is nontrivial. The direct meaning of the number has been slaughtered primarily by number-boast-happy PR departments.

More exactly, there are various things you can be measuring. It's a little like speaker ratings (watt RMS, watt 'in regular use', PMPO) in that a rating may refer to unrealistic exaggerations as well as to strict and real measures.

The argument is that even when the time for a pixel to go from fully off to fully on may take 20ms, not everyone is using their monitor to induce epileptic attacks - usually the pixel is done faster, going from some grey to some grey. If you play a dark-room-fest like DOOM 3, you may well see the change from that dark green to that dark blue happen in 8ms (not that that's in any way easy to measure).
A game with sharp contrasts, though, may see slower, somewhat blurry changes.

8ms is fairly usual these days. Pricier screens will do 4ms or even 2ms, which is nicer for gaming.


===Video noise===


===Contrast===
The difference between the weakest and strongest brightness the screen can display. 350:1 is somewhat minimal, 400:1 and 500:1 are fairly usual, 600:1 and 800:1 are nice and crisp.


===Brightness===
The amount of light emitted - basically the strength of the backlight. Not terribly interesting unless you like it to be bright even in a well-lit room.

300 cd/m2 is fairly usual.

There are details like brightness uniformity - in some monitors, the edges are noticeably darker when the screen is bright, which may be annoying. Some monitors have stranger shapes to their lighting.

Only reviews will reveal this.


===Color reproduction===
The range of colors a monitor can reproduce is interesting for photography buffs. The curve of how each color is reproduced is also a little different for every monitor, and for some may be noticeably different from others.

This becomes relevant when you want a two-monitor setup; it may be hard to get a CRT and a TFT to show the same color, much as it may be hard to get two different TFTs from the same manufacturer consistent. If you want perfection in that respect, get two of the same - though spending a while twiddling with per-channel gamma correction will usually get decent results.


==Convenience==


===Viewing angle===
The viewing angle is a slightly magical figure. It's probably well defined in a test, but its meaning is a little elusive.

Basically it indicates at which angle the discoloration starts being noticeable. Note that the brightness is almost immediately a little off, so no TFT is brilliant for showing photos to a whole room. The viewing angle is mostly interesting for those who have occasional over-the-shoulder watchers, or rather watchers from other chairs and such.

The angle is given either from a perpendicular line (e.g. 75°) or as a total angle (e.g. 150°).
As noted, the figure is a little magical. If it says 178°, the colors will be as good as they'll be from any angle, but frankly, for lone home use, even the smallest angle you can find tends to be perfectly fine.


===Reflectivity===
While there is no formal measure for this, you may want to look at getting something that isn't reflective. If you're in an office near a window, this is probably about as important to easily seeing your screen as its brightness is.

It seems that many glare filters will reduce your color fidelity, though.


===On contrast ratio / dynamic range===
<!--
Both of these terms describe the idea of 'how much brighter is the brightest part than the darkest part?'

But there's no standard, you see, which means you can play with the wording,
so monitors (and projectors) have started to bend the rules when it comes to dynamic range.

Short story:
* "native"  is the maximum difference you'll see within an image.
* "dynamic" is the maximum possible difference over time, when you play with the backlight intensity and also get creative with the measurements.

Native basically means: put an image on there (say, something black and white),
and look at the difference between the brightest and darkest pixel.
The factor will be on the order of 5000.

Dynamic can mean anything, including
"adjust the backlight to max and display white. Now turn the backlight off and display black".
The factor can be hundreds of thousands.
But you will ''never'' see that in an image, or in any realistic video, or gaming, or much of anything.

So is it a lie? Technically no.

Is it misleading? Absolutely.

Is it completely useless? Not entirely.
Take a daylight scene. The darkest pixel in some shadow does not need to be nearly devoid of light - you won't notice it.
Now take the same scene at night.
It probably has the same dynamic range within the image,
but the fact that the backlight can know to dial things down so that black looks black is probably going to look better than not doing that.
The fact that the display can do both is ''useful''.
The facts that you wouldn't want to see those two images alternating at 60fps, and that the display probably wouldn't deal well with that, are the interesting things to know.


http://www.cnet.com/news/contrast-ratio-or-how-every-tv-manufacturer-lies-to-you/
-->
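To make that native-versus-dynamic distinction concrete, here is a minimal sketch; the luminance values are made up for illustration, not measurements of any real panel:

 # contrast ratio is just white luminance over black luminance (cd/m2)
 def contrast_ratio(white, black):
     return white / black
 
 # 'native'/static: white and black measured at the same backlight setting
 print(contrast_ratio(350.0, 0.07))    # ~5000:1
 
 # 'dynamic': white at full backlight, black with the backlight turned way down,
 # measured at different moments - a difference you never see within one image
 print(contrast_ratio(400.0, 0.002))   # ~200000:1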
===see also===
[[Visuals_DIY#Analog_video_notes]]
==Monitor mounts==
===VESA mounts===
The sizes are often one of:
* 7.5 cm x 7.5 cm  (2.95 inches), 8kg max
* 10 cm x 10 cm (3.94 inches), 12kg max
* 20 cm x 20 cm (7.87 inches), 50kg+
<!--
* 40 cm x 40 cm (15.7 inches)
-->
...though there are smaller and larger variants, and also non-square ones.
Most products will have holes to fit more than one.
10cm was apparently the original, 7.5cm was added for smaller displays, though note that lightish displays could use either.
See also:
* https://en.wikipedia.org/wiki/Flat_Display_Mounting_Interface
==Monitor faults==
===Permanent lines on monitor===
<!--
For laptops, this is likely some cable that has seen a lot of movement, or a connector that got oxidized.
For one of my laptops, the hinge had broken and the video cable that ran through that hinge broke soon after, causing the entire screen to go wonky. Replacing that video cable solved everything.

For most monitors, there is a PCB near the actual screen that decodes the video signal,
and sends it to the actual screen over a few hundred electrical lines.
Bad contacts here tend to show up as either a few misbehaving lines, or a portion of your screen being wonky.
In larger models (monitors, TVs), these are likely to be detachable connectors.
On laptops, there is a video cable to this PCB.

Clean the connector contacts - if that's the problem, this is a simple and good fix.
In laptops, aside from the video cable from the video card to the PCB (try that first), there are also non-detachable connections from the PCB to the actual panel. This is fragile stuff that needs exact positioning, and most of us won't be in a position to work with it well.

If pressure works, then a piece of tape and/or cardboard may be a fix that holds long enough.
If it's on the edge (e.g. horizontal lines, meaning the connections on the right edge), then taping it down may work.
If it's on the bottom, you may find that pushing in a specific area makes the lines disappear; try that.
-->
<!--
==3D monitor notes==
===Keep in mind===
{{notes}}
'''Monitor viewing angle''' on 3D monitors will often be smaller (varying with 3D technique) compared to typical monitors, particularly the vertical (up-down) angle{{verify}}.
On the stereoscopic effect:
* Your mind only likes processing it in a small cone in the center of your vision (indication of the size of that cone: a ball at ~1 meter). A large monitor (or a huge wall of 3D) may not have much point.
* We are used to shifting our focus a lot {{comment|(it helps us register everything around us -- even just staring at one thing without saccades feels a little odd after a while)}}, which means you'll be drifting around and sometimes out of the stereoscopic zone
: Looking at datasets can work well enough, assuming there is a typical thing you look at
: This is hard to manage for large crowds. Movies and interactive experiences tend to draw your attention to a good point to look at, and may vary the depth of the 3D effect, for similar reasons
* the ability to see the 3D varies between people, and it may be tiresome to different degrees, so think about how typical end users will benefit
==With glasses==
For more than 5 people or so, active glasses become impractical.
You'll want passive (polarized) glasses.
They are likely to be cheaper, more ergonomic (also consider work purposes).
====Sequential stereo====
* mechanics:
** the whole screen quickly alternates between left and right images
** An IR pulse is sent that (liquid crystal shutter) glasses listen to, which black out the left and right
* upsides
** does not give up spatial resolution (as row-interleaved does), e.g. meaning text does not become less readable.
** Monitor isn't particularly special
* requirements / limitation
** means a high-frame-rate monitor (e.g. 120Hz) to give each eye half of that
** card must have connector for the transmitter
** transmitter to the glasses must work (driver stuff, usually easy enough) {{verify}}
** support for doing this in a window (rather than fullscreen-only) varies (with graphics card and their drivers)
** may not combine with non-3D monitors {{verify}}
====Row interleaved stereo====
* mechanics:
** alternate lines of the display are right- and left-circularly polarized
** the glasses have correspondingly polarized left and right lenses
* upsides
** no special graphics card requirements
** cheap glasses, because they're passive (same as 3D glasses used in currently-typical cinemas)
** less potentially finicky hardware
* limitations/downsides
** half the vertical resolution, which has an effect on readability of text
(Compare sequential stereo above: rapid flickering between left-eye and right-eye views, viewed with synchronised shutter glasses - which is not always available.)
===Glasses-free===
====DTI====
===Technical notes===
====Cards and drivers====
GeForce may only work with NVIDIA 3D Vision?
====NVIDIA 3D Vision====
Designed for monitors, projectors at 120Hz.
Has an IR emitter (typically USB?) that sends to liquid crystal shutter glasses.
NVIDIA 3D Vision PRO glasses use RF instead of IR, for more range
(but are more costly)
-->


[[Category:Computer]]
[[Category:Hardware]]

Latest revision as of 00:48, 29 April 2024

Few-element

Lighting

Nixie tubes



Eggcrate display

Mechanical

Mechanical counter

https://en.wikipedia.org/wiki/Mechanical_counter


Split-flap

https://en.wikipedia.org/wiki/Split-flap_display


Vane display

Flip-disc

https://en.wikipedia.org/wiki/Flip-disc_display


Other flipping types

LED segments

7-segment and others

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.
There are 7-segment, 9-segment, 14-segment, and 16-segment displays. If meant for numbers there will be a dot next to each digit (also common in general); if meant for time there will be a colon in one position.


These are really just separate lights that happen to be arranged in a useful shape.

Very typically LEDs (with a common cathode or anode), though similar ideas are sometimes implemented in other display types - notably the electromechanical one, and also sometimes VFD.


Even the simplest, the 7-segment LED, involves a bunch of connections, so they are

  • often driven multiplexed, so only one digit is on at a time (see the sketch at the end of this section), and
  • often driven via a controller that handles that multiplexing for you.


Seven segments are the minimal and classical case, good enough to display numbers and so e.g. times, but not really for characters.

More-than-7-segment displays are preferred for that.


https://en.wikipedia.org/wiki/Seven-segment_display
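A minimal sketch of what 'driven multiplexed' means in practice, here in MicroPython; the pin numbers are made up, the digits are assumed to be common-cathode with segment resistors in hardware, and in practice you would usually switch each common cathode through a transistor rather than directly from a GPIO:

 # two common-cathode 7-segment digits share their segment pins; we enable
 # one digit at a time and switch fast enough that both appear lit
 from machine import Pin
 import time
 
 SEGMENTS = [Pin(n, Pin.OUT) for n in (2, 3, 4, 5, 6, 7, 8)]   # segments a..g (made-up pins)
 DIGITS   = [Pin(n, Pin.OUT) for n in (9, 10)]                 # common cathodes, active low
 
 # segment patterns for 0..9, bit 0 = segment 'a'
 FONT = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F]
 
 def show(number, ms=1000):
     digits = [number // 10 % 10, number % 10]
     t_end = time.ticks_add(time.ticks_ms(), ms)
     while time.ticks_diff(t_end, time.ticks_ms()) > 0:
         for select, value in zip(DIGITS, digits):
             pattern = FONT[value]
             for bit, seg in enumerate(SEGMENTS):
                 seg.value((pattern >> bit) & 1)
             select.value(0)          # sink current: this digit is on
             time.sleep_ms(2)         # ~2 ms per digit, well above flicker fusion
             select.value(1)          # digit off before moving to the next
 
 show(42)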

DIY

LCD character displays

Character displays are basically those with predefined (and occasionally rewritable) fonts.


Classical interface

The more barebones interface is often a 16-pin line with a pinout like

  • Ground
  • Vcc
  • Contrast
usually there's a (trim)pot from Vcc, or a resistor if it's fixed


  • RS: Register Select (character or instruction)
in instruction mode, it receives commands like 'clear display' and 'move cursor';
in character mode, it receives characters to display
  • RW: Read/Write
tied to ground is write, which is usually the only thing you do
  • EN: enable / clock (for writing)
  • 8 data lines, though you can do most things over 4 of them


  • Backlight Vcc
  • Backlight gnd


The minimal, write-only setup is:

  • tie RW to ground
  • connect RS, EN, D7, D6, D5, and D4 to digital outs


I2C and other

Matrix displays

(near-)monochrome

SSD1306

OLED, 128x64, 4 colors (verify)

https://cdn-shop.adafruit.com/datasheets/SSD1306.pdf
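For a quick sanity check of such a module, the widely used MicroPython ssd1306 driver is enough; the I2C pins below are just an example and depend on your board:

 from machine import Pin, SoftI2C
 import ssd1306
 
 i2c = SoftI2C(scl=Pin(22), sda=Pin(21))    # board-specific pins
 oled = ssd1306.SSD1306_I2C(128, 64, i2c)   # width, height, bus
 
 oled.fill(0)                               # clear the framebuffer
 oled.text("hello", 0, 0)                   # built-in 8x8 font, top-left corner
 oled.show()                                # push the framebuffer to the panel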

SH1107

OLED,

https://datasheetspdf.com/pdf-file/1481276/SINOWEALTH/SH1107/1

Small LCD/TFTs / OLEDs

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Small as in on the order of an inch or two (because the controllers are designed for a limited resolution?(verify)).


💤 Note that, like with monitors, marketers really don't mind if you confuse LED-backlit LCD with OLED,

and some of the ebay and aliexpress sellers of the world will happily 'accidentally' call any small screen OLED if it means they sell more.

This is further made more confusing by the fact that there are

  • few-color OLEDs (2 to 8 colors or so, great for high contrast but only high contrast),
  • high color OLEDs (65K),

...so you sometimes need to dig into the tech specs to see the difference between high color LCD and high color OLED.



When all pixels are off they give zero light pollution (unlike most LCDs) which might be nice in the dark. These seem to appear in smaller sizes than small LCDs, so are great as compact indicators.


Can it do video or not?

If it speaks e.g. MIPI it's basically just a monitor, probably capable of decent-speed updates - but the things you can connect it to will (on the scale of microcontroller to mini-PC) need to be moderately powerful, e.g. a raspberry.

The controllers listed below, however, don't connect to PC video cables.

Still, they have their own controller and can hold their pixel state one way or another, and instead accept something more command-like - so you can update a moderate amount of pixels via an interface that is much less speedy or complex.

You might get reasonable results over SPI / I2C for a lot of e.g. basic interfaces and gauges. By the time you try to display video, you have to think about your design more.

That is largely because the amount of pixels to update, times the frames per second, has to fit through the communication (...and through the display's capabilities). There is a semi-standard parallel interface that might make video-speed things feasible; it is faster than the SPI/I2C option, though not always by much, depending on hardware details.
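A back-of-the-envelope example of that arithmetic, with made-up but plausible numbers:

 # does the link have room for full-frame video updates?
 width, height   = 240, 320          # e.g. an ST7789-class panel
 bits_per_pixel  = 16                # RGB565
 fps             = 30
 
 needed = width * height * bits_per_pixel * fps           # bits per second
 spi    = 40_000_000                                       # a 40 MHz SPI clock, best case
 
 print(needed / 1e6, "Mbit/s needed")                      # ~36.9 Mbit/s
 print(spi / 1e6, "Mbit/s available (ignoring overhead)")  # tight, before any overhead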


Even if the specs of the screen can do it in theory, you also have to have the video ready to send. If you're running it from an RP2040 or ESP32, don't expect to run libav/ffmpeg.

Say, something like the TinyTV runs a 216x135 65K-color display from an RP2040.

Also note that such hardware won't be decoding and rescaling arbitrary video files; it will use specifically pre-converted video.


In your choices, also consider libraries. Things like TFT_eSPI have a compatibility list you will care about.



Interfaces

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


ST7735

LCD, 132x162@16bits RGB


ST7789

LCD, 240x320@16bits RGB

https://www.waveshare.com/w/upload/a/ae/ST7789_Datasheet.pdf

SSD1331

OLED, 96x 64, 16bits RGB

https://cdn-shop.adafruit.com/datasheets/SSD1331_1.2.pdf


SSD1309

OLED, 128 x 64, single color?

https://www.hpinfotech.ro/SSD1309.pdf

SSD1351

OLED, 65K color

https://newhavendisplay.com/content/app_notes/SSD1351.pdf

HX8352C

LCD https://www.ramtex.dk/display-controller-driver/rgb/hx8352.htm


HX8357C

R61581

ILI9163

LCD, 162x132@16-bit RGB

http://www.hpinfotech.ro/ILI9163.pdf

ILI9341

https://cdn-shop.adafruit.com/datasheets/ILI9341.pdf

ILI9486

LCD, 480x320@16-bit RGB

https://www.hpinfotech.ro/ILI9486.pdf

ILI9488

LCD

https://www.hpinfotech.ro/ILI9488.pdf

PCF8833

LCD, 132×132 16-bit RGB

https://www.olimex.com/Products/Modules/LCD/MOD-LCD6610/resources/PCF8833.pdf

SEPS225

LCD

https://vfdclock.jimdofree.com/app/download/7279155568/SEPS225.pdf


RM68140

LCD

https://www.melt.com.ru/docs/RM68140_datasheet_V0.3_20120605.pdf

GC9A01

LCD, 65K colors, SPI

Seem to often be used on round displays(verify)

https://www.buydisplay.com/download/ic/GC9A01A.pdf

Epaper

SSD1619

https://cursedhardware.github.io/epd-driver-ic/SSD1619A.pdf


Many-element - TV and monitor notes (and a little film)

Backlit flat-panel displays

CCFL or LED backlight

https://nl.wikipedia.org/wiki/CCFL

Self-lit

OLED

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

While OLED is also a thing in lighting, OLED usually comes up in the context of OLED displays.

It is mainly contrasted with backlit displays (because it is hard to get those to block all light). OLEDs being off just emit no light at all. So the blacks are blacker, you could go brighter at the same time, There are some other technical details why they tend to look a little crisper.

Viewing angles are also better, roughly because the light source is closer to the surface.


OLEDs are organic LEDs, which in itself is partly just a practical production detail - they are really just LEDs. (...though you can get fancy in the production process, e.g. pricy see-through displays are often OLED with substrate trickery(verify))


PMOLED versus AMOLED makes no difference to the light emission, just to the way we access the pixels (Passive Matrix, Active Matrix). AMOLED can do somewhat lower power, higher speed, and more options along that scale(verify), all of which makes it interesting for mobile uses. It also scales better to larger monitors.

POLED (and confusingly, pOLED is a trademark) uses a polymer instead of the glass, so is less likely to break, but has other potential issues.


QLED

On image persistence / burn-in

VFD

Vacuum Fluorescent Displays are vacuum tubes applied in a specific way - see Lightbulb_notes#VFDs for more details.



Some theory - on reproduction

Reproduction that flashes

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Mechanical film projectors flash individual film frames while that film is being held entirely still, before advancing that film to the next (while no light is coming out) and repeating.

(see e.g. this, and note that the film is advanced so quickly that you don't even see it move. Separately, if you slow playback you can also see that it flashes twice before advancing the film - we'll get to why)

This requires a shutter, i.e. not letting through any light a moderate part of the time (specifically while it's advancing the film). We are counting on our eyes to sort of ignore that.


One significant design concept very relevant to this type of reproduction is the flicker fusion threshold, the "frequency at which an intermittent light stimulus appears to be steady light" to our eyes - because, separately from the actual image being shown, it appearing smooth is, you know, nice.

Research shows that this varies somewhat with conditions, but in most conditions practical for showing people images, it's somewhere between 50Hz and 90Hz.


Since people are sensitive to flicker to varying degrees, and this can lead to eyestrain and headaches, we aim towards the high end of that range whenever that is not hard to do.

In fact, we did so even with film. While film is 24fps and was initially shown at 24Hz flashes, movie projectors soon introduced two-blade and then three-blade shutters, showing each image two or three times before advancing, meaning that while they still only show 24 distinct images per second, they flash it twice or three times for a regular 48Hz or 72Hz flicker. No more detail, but a bunch less eyestrain.


As to what is actually being shown, an arguably even more basic constraint is the rate of new images that we accept as fluid movement.

  • Anything under 10fps looks jerky and stilted - or at least like a stylistic choice; western and eastern animations were rarely higher than 12fps, or 8 or 6 for the simpler/cheaper ones.
  • Around 20fps we start readily accepting it as continuous movement.
  • Above 30 or 40fps it looks smooth.
  • Above that it keeps on looking a little better yet, with quickly diminishing returns.



So why 24?

Film's 24 was not universal at the time, and had no strong significance then or now. It's just that when a standard was needed, 24 was a chosen balance between various aspects: it's enough for fluid movement and relatively few scenes need more, film stock is expensive, and cinemas needed a single projection standard (adaptable or multiple projectors would be too expensive for most of them).


The reason we still use 24fps today is more faceted, and doesn't really have a one-sentence answer.

But part of it is that making movies go faster is not always well received.

It seems that we associate 24fps with the feel of movies, while 50/60fps feels like shaky-cam home movies made by dad's camcorder (when those were still a thing), or like sports broadcasts, with their tense, immediate, real-world associations. So higher, while technically better, also became associated with a specific aesthetic. It may work well for action movies, yet less so for others.

There is an argument that 24fps's sluggishness puts us more at ease, reminds us that it isn't real, seems associated with storytelling, a dreamlike state, memory recall.

Even if we can't put our finger on why, such senses persist.


CRT screens

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


Also flashing

CRT monitors do something vaguely similar to movie projectors, in that they light up an image so-many times a second.


Where film lights up the entire frame at once (with some time for the shutter coming in and out - ignore that for now), a CRT lights up one spot at a time: there is a beam constantly being dragged line by line across the screen -- look at slow-motion footage like this.

The phosphor has a softish onset and retains light for a short while, and while slow motion tends to exaggerate that a little (it looks like a single bright band), it's still visible for much less than 1/60th of a second.

The largest reason these pulsing phosphors don't look like harsh blinking is our persistence of vision (you could say our eyes' framerate sucks, though 'framerate' is a poor name for our eyes' actual mechanics), combined with the fact that the image is relatively bright.


Flatscreens

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Flatscreens do not reproduce by blinking things at us.

While in film, and in CRTs, the mechanism that lights up the screen is the same mechanism as the one that shows you the image, in LCD-style flatscreens the image updates and the lighting are different mechanisms.

Basically, there's one overall light behind the pixely part of the screen, and each screen pixel blocks light.


That global backlight tends to be lit fairly continuously. Sure, there is variation between backlights, and some will still give you a little more eye strain than others.

CCFL backlight phosphors seem intentionally made to decay slowly, so even if the panel is a mere 100Hz, that CCFL ought to look much less blinky than e.g. a CRT at 100Hz.


LED backlights are often PWM'd at kHz speeds(verify), or current-limited(verify), which are both smoother.

If you take a high-speed camera, you may still not see it flicker - see this part of the same slow-motion video (note how the backlight appears constant even while the pixel update is crawling by) - until you get really fast and specific.


So the difference between, say, a 60Hz and a 240Hz monitor isn't in the lighting, it's how fast the light-blocking pixels in front of that constant backlight change. A 60Hz monitor changes its pixels every 16ms (1/60 sec), a 240Hz one every 4ms (1/240 sec). The light just stays on.

As such, while a CRT at 30Hz would look very blinky and be hard on the eyes, a flatscreen updating at 30fps looks choppy but not like a blinky eyestrain.



On updating pixels
On pixel response time and blur

Vsync

Adaptive sync

On perceiving

The framerate of our eyes

arguments for 60fps / 60Hz in gaming

On reaction time

On end-to-end latency

Tracking objects?

On intentional motion blur

On resolution

On contrast ratio / dynamic range

see also

Visuals_DIY#Analog_video_notes

Monitor mounts

VESA mounts

The sizes are often one of:

  • 7.5 cm x 7.5 cm (2.95 inches), 8kg max
  • 10 cm x 10 cm (3.94 inches), 12kg max
  • 20 cm x 20 cm (7.87 inches), 50kg+

...though there are smaller and larger variants, and also non-square ones.

Most products will have holes to fit more than one.

10cm was apparently the original, 7.5cm was added for smaller displays, though note that lightish displays could use either.


See also:

Monitor faults

Permanent lines on monitor