FPS and graphics settings

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
More important, with gas prices at $4.80 (and rising), what sort of mileage will we get with this ship?
 

Izack

Non sequitur
Addon Developer
Joined
Feb 4, 2010
Messages
6,665
Reaction score
13
Points
113
Location
The Wilderness, N.B.
Yeah, that's me too, except with my hand-me-down laptop. At home.
That's me with my not-so-old $1100 laptop. Protip: don't get a laptop if you're counting on stylish suborbital flights in the near future. :lol:

Still, I know I'll fly it anyway even at 8FPS, if only for the glorious screenshots.
 

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
CH said he was baking in a lot of the lighting and other effects, so that should cut the work required by the GPU. Though that would increase the work required to transfer textures: more detail means less compression and more memory usage.

I tend to believe that Orbiter rendering/FPS slowdowns on older hardware are related to the amount of memory the graphics chip can access directly, without having to ride the bus. Texturing across the bus can slow anything down!!

Over the years I've upgraded some pretty old hardware, and Orbiter seems to respond well to graphics memory increases. I've seen some performance increase going from 1GB to 2GB, or 2GB to 4GB, or 1GB to 4GB of system RAM. I've also seen some improvement going from a 1.7GHz CPU to a 3GHz CPU.

NOW, I've seen astonishing gains going from 32MB of video RAM to 64MB, or 128MB. I continued to see measurable and substantial gains going to 256MB and 512MB, all the way to 1GB VRAM. The wow factor, I believe, is in the VRAM. Of course, I considered the advancements in the GPU and the speed at which the GPU can get at the textures. It still comes down to HOW MUCH RAM YOU HAVE ON THE CARD.
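To put some rough numbers on that (my own back-of-the-envelope figures, not measurements from Orbiter or any particular card), here's how quickly an uncompressed texture set can outgrow a small VRAM pool and spill onto the bus:

```python
# Rough VRAM footprint estimate for uncompressed 32-bit textures.
# Illustrative only -- the scene mix below is invented, and real cards
# use compression and padding.

def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Approximate size of one texture, with ~1/3 extra for a mipmap chain."""
    base = width * height * bytes_per_pixel
    return int(base * 4 / 3) if mipmaps else base

# A hypothetical scene mixing a few texture sizes (count of each):
scene = {
    "2048x2048": (2048, 2048, 12),
    "1024x1024": (1024, 1024, 40),
    "512x512":   (512, 512, 120),
}

total = sum(texture_bytes(w, h) * count for (w, h, count) in scene.values())
print(f"rough texture set size: {total / 2**20:.0f} MiB")
# Anything beyond the card's VRAM has to be fetched over the bus mid-frame,
# which is exactly the off-card stall described above.
```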

If anyone wants to disagree with me that's fine. Every system's bottleneck is different.

My observation of consumer-level graphics cards from around 1997 to about 2005 is that the framerate suffered noticeably whenever the graphics chip had to go off-card and onto the bus for additional data. Oftentimes I'd be cruising along at 80fps, for example, and when the graphics chip (GPUs as we think of them hadn't been invented yet) needed to go off-card, mind you, OFF CARD, I'd see the framerate drop to under 30fps. Once the textures made it from the main memory SIMMs and DIMMs to the graphics chip, everything was fine again.

So for many games that did a lot of texturing and went back and forth between complex and simple scenes, the framerate would be all over town!! Only when ***ALL*** the texture data could stay in the card's memory would I see a steady framerate that didn't jump around from 10fps to 130fps.

This problem has been prevalent throughout PC gaming history (and still is). It is horribly annoying too! I would by far prefer a framerate that was slower, say 25fps, and consistent *AT* that 25fps, as opposed to a rate that goes from 30 to 100, then 120, then 50, then 10, then back up to 75. What a pain in the :censored:! The only way to really eliminate that today is to have enough onboard memory on the card. That is one of the consoles' aces in the hole: their GPU is scheduled to go "off-card" carefully, and they maintain a steady framerate. Very pleasant to watch and play. So your graphics card needs a *STEADY* fps output, and in order to do that it needs a *STEADY* speed of data access, so to speak. There can't be any waiting, or bus contention, or :censored: like that. Something PCs do not do correctly. But the VCS' TIA got it right! (for you Atari fans out there)

Once the GPU-on-CPU concept gets rolling, they should be able to update and change the L3 caches fast enough that it appears as ONE HONK'N HUGE bank of memory. Ah-hah!! Bank-switching. By the time the GPU needs another set of textures, the L3 cache will be ready and waiting. OH MAN OHH MAN! Exciting times ahead indeed.

I've observed this "phenomenon", if you will, on all sorts of bus structures from ISA, EISA, VLB, PCI, AGP, PCI-E, :censored:, you name it!

Also note that your graphics card needs to produce a framerate equivalent to your display's refresh rate. Otherwise you get tearing and flickering. Not only that - All that bull:censored: about anything above 60fps is excess, IT IS NOT!! It's a little beyond the scope of this post; you can research it yourself *IF* you can find a source written by a knowledgeable engineer. But to achieve ultimate smoothness, you will need to update your display for every horizontal dot change, and that could be 1024 times a second or more. Otherwise you will see a slight stretching and double ghosting. Yes, the image will appear to move smoothly without tearing as long as the update is a multiple of the horizontal clock. Yes, look, you will see it! The TIA from the VCS got it right. Perhaps this is best explained by a YouTube video; I'll put one together. Don't bother reading about it from an overclocking site or a nvidia/amd fanboi page, they don't understand.

Side note:
Texture data has to take a tortuous path! It goes from main memory, over the motherboard traces, through the Northbridge and its memory controller. Probably buffers in between too. Then off the Northbridge, onto the motherboard. Then onto the CPU, through its preliminary "front-end" units, then through the pipeline for compression/decompression and scaling. Now it goes off the CPU, back to the Northbridge, more buffers and formatting, then onto yet ANOTHER bus to the graphics card. Once it gets there, it needs to be operated on, or more often, put into memory and *then* operated on. :censored: ridiculous!

Having the GPU on die is a good step toward eliminating a lot of that latency.
Let me phrase it this way: the industry should evolve (and slowly is, though slower than onboard sound did) to where graphics ARE integrated. In the design of the PC, the graphics card is an outsider and a cancer that consumes far too much energy. Still, I'm mostly ragging on the 300+ watt chip designs; put two of them together and you are drawing 600 watts. The 590 from nvidia runs at 110+ C, which is just plain ridiculous.

I think there are too many inefficiencies in that you have a CPU that lays out and builds up a scene. Then that CPU has to take the data from main memory, zoom it through the caches, the FSB, and the memory controller, then send it over a slow-:censored: bus.

NOW, the graphics chip has to take all that data, put it in memory, do billions of ops on it. Save it to the framebuffer. Communicate back to the CPU (tell it what it just did), then and only then can it pull the finalized data through the RAMDAC and out to the VGA connector. OR perhaps skip a step and feed it through HDMI or the DVI connector.

Folks, that is just way, way too :censored:ine and inefficient! Terrible! And to this very day, all integrated graphics solutions for the common PC are of that design: the integrated graphics sit on the northbridge and the bus, just like a real physical card. NOW, with Sandy Bridge (and it's only a first iteration) the graphics subsystem has direct access to the CPU caches. We are talking bandwidth, baby, BANDWIDTH! A great deal of the modern GPU is based on bandwidth and simple operations done in parallel.

Sure, intel's hardware is pretty bad and intel doesn't know what to do with a good concept.
But as this evolves, the memory bandwidth available to an on-die GPU will far exceed what a discrete graphics card can achieve. Unless of course, you want to build a complete system on a card, which is what today's graphics cards are.

And good graphics is all about pushing pixels around. The graphics integrated into Sandy Bridge are about Voodoo 1 or Voodoo 2 level. Perhaps a little more. But for the first time, now, they have the bandwidth. The first sets of IGPs, like the 82855 and the 945GM or the 3000, still sat on a slow-:censored: AGP and PCI style bus.

This next step in the evolution of PC graphics is like shifting gears, and you need to rebuild your torque curve all over again at every gear change. Just that this is a 10-speed gearbox with a really low or perhaps negative (!) drive ratio at the last gear. Nvidia and ATI are spinning their wheels and screaming because their tranny has maxed out at 5th gear; they only have one more shift to make because it is a 6-speed gearbox. All they can do is increase RPMs (read that as heat). Whereas getting the bandwidth right in with the CPU ring bus, well, that's a KILLER! Nothing can beat it! Absolutely nothing! And intel is only in 1st gear now.

So this is where we're at: when the XR-2 MkII rolls out of the hangar and you ain't getting the FPS you thought you should, I'd first evaluate your graphics subsystem. Pay attention to your video bus and video RAM. Yes, you may need to buy a new motherboard, new RAM, a new power supply, AND a new CPU to go along with a graphics card upgrade.
 

Eli13

Fish Dreamer
Joined
Mar 5, 2011
Messages
1,562
Reaction score
0
Points
0
Location
Somewhere, TN
That's me with my not-so-old $1100 laptop. Protip: don't get a laptop if you're counting on stylish suborbital flights in the near future. :lol:

Still, I know I'll fly it anyway even at 8FPS, if only for the glorious screenshots.

The way I see it, my laptop was free (my parents got a new computer) and it does well with UCGO and most add-ons, except that the XR-2 and XR-5 kill it. Also, it's hard to get a nice computer when you mow lawns for a living :p
 

squeaky024

New member
Joined
May 26, 2010
Messages
128
Reaction score
0
Points
0
Location
Here.
Website
www.google.com
I'm just glad that this computer I got 3 years ago can run anything I've seen in Orbiter just fine.

FSX however... :(

edit: I refuse to lower my graphics settings on there; it looks too good, even at a constant 10 fps.
 
Last edited:

Eli13

Fish Dreamer
Joined
Mar 5, 2011
Messages
1,562
Reaction score
0
Points
0
Location
Somewhere, TN
I can only run aircraft at the highest graphics settings without problems; other than that, I might as well shoot the computer.
 

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
I've got a system with a 100MHz bus, a 1.4GHz Slot 1 processor, 1GB RAM, and an AGP card with 128MB VRAM. It's the VRAM that gets the performance. I get like 40-50 fps with the new XR-5 at Wideawake. And that's good enough for me on a 12-year-old computer! Not to mention it's at 1920x1200x32.

Yeah, this is on an AGP 2x bus and there are still ISA slots in the system!
 
Last edited:

Hielor

Defender of Truth
Donator
Beta Tester
Joined
May 30, 2008
Messages
5,580
Reaction score
2
Points
0
Also note that your graphics card needs to produce a framerate equivalent to your display's refresh rate. Otherwise you get tearing and flickering. Not only that - All that bull:censored: about anything above 60fps is excess, IT IS NOT!! It's a little beyond the scope of this post; you can research it yourself *IF* you can find a source written by a knowledgeable engineer. But to achieve ultimate smoothness, you will need to update your display for every horizontal dot change, and that could be 1024 times a second or more. Otherwise you will see a slight stretching and double ghosting. Yes, the image will appear to move smoothly without tearing as long as the update is a multiple of the horizontal clock. Yes, look, you will see it! The TIA from the VCS got it right. Perhaps this is best explained by a YouTube video; I'll put one together. Don't bother reading about it from an overclocking site or a nvidia/amd fanboi page, they don't understand.
:facepalm:
I'd like a source for this ridiculous claim. If your computer is capable of producing more than 60fps in a given game, it should be limited to the refresh rate of the monitor when it's output to you; otherwise you'll get noticeable screen tearing.

Ignoring the technical impossibilities of your idea, let's give a simple example of something running at 120fps on a 60Hz monitor in your ideal little world. In this case, the top half of the screen will show one frame and the bottom half of the screen will show the next. There will be a very noticeable tear in the middle of the screen. This is a Bad Thing. Even if you were hypothetically able to reach your "frame for every horizontal line" nonsense, while tearing wouldn't be obvious, in scenes of very fast motion there would be very strange effects. Ever seen a "melting propeller" in a picture or video from a digital camera? That's a result of the camera scanning line by line, and it's the sort of thing you'd see if your monitor displayed a different frame for every line.
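To put numbers on that example (a deliberately simplified model: no vsync, the scanout just picks up whichever frame is newest, and swap timing is idealized):

```python
# Toy model of the tearing example above: rendering at 120 fps while the
# monitor scans out at 60 Hz. Simplified -- real tear positions drift
# depending on exactly when the buffer swap lands.

refresh_hz = 60
render_fps = 120

scanout_ms    = 1000 / refresh_hz       # time to scan one refresh, top to bottom
frame_ms      = 1000 / render_fps       # a new frame is ready this often
tear_fraction = frame_ms / scanout_ms   # how far down the screen the new frame cuts in

print(f"new frame arrives {frame_ms:.1f} ms into a {scanout_ms:.1f} ms scanout")
print(f"-> tear roughly {tear_fraction:.0%} of the way down the screen")
```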
 

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
Heh, that figures. Prove it to yourself and conduct some experiments. Let me lay the groundwork for you. This is so simple, try this: read the number on a taxicab as it zooms by in a car chase on the movie screen. If you can eye-track it you can see the number, but it's all smeared out. You won't be able to read it till the fps of the capturing camera and the presenting projector is increased. PC gaming is no different.

If you don't eye-track it, it naturally appears blurry, and that is totally acceptable. It's when you try to follow an object that is being presented to you at too few 'fps' that the problems of image quality come into question.

Since I am in the process of publishing a paper on this phenomenon, I thought that, in the interim, I could refer to the frame rate wiki article. I had high hopes it would cover the basics, but they left out a "framerate fusion" section. They didn't even mention spatial fusion.

The closest thing the article describes is judder, more specifically 2:3 pulldown telecine judder. For the gamer, to eliminate all tearing and judder and "jerkiness" and stutter, you MUST update the entire image for every horizontal dot change. Might as well throw in vertical movements and dot changes as well. And this is assuming you need to stay with a raster-scan type of display.

The ideal, though not cost-effective right now, is to have each pixel on the LCD directly connected to its own memory location. A massively parallel connection, if you will. Taking that further, each pixel should have its own shader unit, one per pixel. That will happen eventually, but not today.

Furthermore, only pixels that change should be updated. With today's graphics technology, we update the entire frame, even when only a few pixels have changed in the frame buffer.

OK, fantasy hardware and theorizing aside, let us take this simple approach and cut through the marketing fluff. Throw away the conceptual and nebulous in-depth explanations. We will dispense, forever, right here and now, with the notion that 60fps is sufficient for correct motion depiction. Forget everything else, let us take it from the top. You will forgive any condescending language, but it must be used.

As long as 60 fps is the accepted standard, then in order to achieve truly fluid motion you are limited to moving an on-screen object (a missile, perhaps) a distance of only 60 pixels within that one-second time frame. For the sake of this argument we will assume you are using a standard resolution of 1024x768 and sitting about 1 meter from the display. You should not be able to discern any individual pixels on the monitor, and you should not see any jaggies or stair-step artifacts. If you can, give me your eyes! I want them!

Now - if you try to move the missile across a high-resolution screen in one second, then to achieve perfectly fluid motion you will need to update the image 1024 times in that one second. Or 512 times a second if you give it two seconds. One update per pixel.

Doing that ensures the missile is represented in the highest detail possible as it traverses your screen. There is no spatial gap with that timing. No loss of detail, no skipped frames (read as no skipped pixels, since "frame" is a holdover from the days of film projection). Each pixel change is a frame; anything less and you are being cheated. Each pixel in turn switches on and then off in the allotted time; each pixel stays on for 1/60th of a second. There is no "gap": the missile is always perfectly rendered to the best possible detail at all points during its 1-second traverse across a landscape of 60 pixels.

For the missile to cross a 1024x768 landscape of pixels from left to right while staying perfectly visible with all detail present, no jumpiness, no pulldown artifacts, no edge fuzzing, going pixel to pixel, it would take 17.067 seconds to make the trip at 60fps.
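Here's that arithmetic laid out plainly, on the post's own premise that "fluid" means one pixel of movement per displayed frame (a quick Python sketch, nothing more):

```python
# The arithmetic from the post, restated: if an object may only advance one
# pixel per displayed frame, traverse time = width / fps, and the update rate
# needed to cross `width` pixels in `seconds` is width / seconds.

def traverse_seconds(width_px, fps):
    return width_px / fps

def updates_needed(width_px, seconds):
    return width_px / seconds

print(traverse_seconds(1024, 60))   # ~17.07 s to cross a 1024-wide screen at 60 fps
print(updates_needed(1024, 1.0))    # 1024 updates/s for a 1-second crossing
print(updates_needed(1024, 0.5))    # 2048 updates/s for a half-second crossing
print(1024 / 60)                    # ~17 px jumped per frame if it crosses in 1 s at 60 fps
```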

Well I'll be damned!! It's gonna take me 17 seconds to blow up the guy on the right side of the screen?? Good god!! I can't get behind that. I gotta blow him up NOW.

I have two choices: reduce the spatial-temporal detail or get more GPU horsepower.

Let us look at option #1, reducing spatial-temporal detail, as it is the most commonly used method and by far the cheapest. It is a natural side effect anyway, so let us use it. By the way, this doesn't mean making a simpler texture, or a bigger or smaller object, or anything like using a different mipmap. Or texture compression, or T-buffer tricks with motion blur. No. No. No.

For me to effectively play the game (whatever it may be), I'm going to take the liberty of assuming my missile will impact the target on the right side of the screen one second after being launched from the left side. (I think we're playing a side-scrolling shoot-em-up.) I've got a decent video card that can jam at 60fps too. And my resolution is set to 1024x768.

To show me the missile flying that left-to-right distance in the allotted one second, my spiffy spank'n GPU **AND** LCD will give me 60 pictures a second. Not too shabby, eh? This looks awesome on paper, and on the gold-foil-lined box my card came in. Consider this: you must divide those 60 pictures up into 60 evenly spaced positions on the screen. The missile will, over the course of 1 second, occupy 60 discrete positions as it makes its way across the playfield. Folks, that is only **60**. My graphics card is positioning the missile only 60 times: drawing it 60 times, updating it 60 times on the way to the target. That's it!

Ladies and gentlemen, this missile is *MAGIC*: it can jump 17 pixels in 1/60th of a second, imagine that! 17 pixels! Or perhaps a few millimeters, depending on the specifications of your LCD panel, if you want to visualize it that way. My graphics card is big, it's strong, it's powerful, and it blasts this missile from left to right in 60 huge leaps of 17 pixels at a time. *INCREDIBLE*. What is going on between those 17-pixel jumps? Where's the missile? It must teleport its way from jump to jump. It appears at the first 17-pixel "marker", disappears, then reappears like magic 17 pixels further to the right. This has to happen 60 times till it gets to the target!

But if I follow the missile very carefully with my eye, focusing on it, tracking it, it will *appear* to jitter; parts of it may overlap, parts may fade in and out, perhaps smear, stretch a little (don't bring relativity into this, it doesn't practically apply here), disappear and reappear. The left and right edges of the nosecone and flaming nozzle may widen and blur. Definitely not a smooth, real-life presentation by any means. All those "artifacts" are being generated by you; the missile is only occupying 60 discrete positions across the screen.

However, if I pay attention to the target and focus on that, the missile will appear to fly into view and cross the screen somewhat smoothly. But there will be an apparent motion blur generated by your optic nerve and the visual perception center in your head! ***YOU*** are filling in that gap. There is no real motion blur on the screen; you are filling it in! It is often this that causes headaches among some gamers and TV watchers.

The "more-in" your peripheral vision an object is, the more you fill-in the gaps.

(3D only makes this worse, because you have to do the same work trying to figure out what is going on in yet another direction. As if two weren't enough! Something the MPAA and the gaming industry do not explain to you. They will make a big stink about it later on, though, and if I have time I'll tell you why.)

Mmm, I don't like the first option. Let us examine #2 and see if we have a better choice.
This is where it gets more realistic and more interesting, and it all falls into place here. Once again, we're gonna blow up a target: same launch position, same target area. We launch our missile; it's an old-time one, one from the '50s. It doesn't have all the whizbang teleportation and pixel-jumping capabilities. No. This is the real deal, folks. As real as it gets! This missile obeys all the known (and unknown) laws of physics. This missile transitions smoothly from pixel to pixel. At every one of the 1024 steps along the flight from left to right, our missile stays right here in the known universe; it does not enter hyperspace or drop through subspace. No sir. It does not distort its appearance or change its looks on you when you look away. No funny stuff. During the one-second flight, our missile stays visible on the playfield. At each of the 1024 steps along the way we can see all the detail. You can track it with your eye 100 percent. It doesn't hide, it doesn't jump 17 pixels at a time. Ahhh, **REALISM**!! Just smooth pixel-to-pixel velocity. We draw one image, move it one pixel, draw it again, all the tedious way from left to right. Never missing a dot clock! Imagine that!

We updated our moving object 1024 times; yes folks, yes, over a thousand times. If we crank up the turbopump, this missile moves faster and hits the target in 0.5 seconds. For it to be represented fully, completely, accurately, with no hyperspace pixel-jumping, we must **ABSOLUTELY** update the screen 2048 times per second. This ensures the missile is presented to you fully at every step of the way. If we do not, then there is misrepresentation of the data; something is now missing. I don't know of a graphics card or monitor that refreshes at 2kHz yet. But you can be assured the industry is hard at work trying to get there!

Please don't complain that a single pixel-to-pixel transition is itself a jump. If you do, then I'm going to make you factor in a 256*2*1024 refresh rate! Because you could vary the brightness of each pixel, starting at a "brightness" of 256 and ramping it down to 0, then taking the next adjacent pixel and doing the same in the opposite direction. INSANE!

And there you have it!! It can't get any simpler than that. Believe it or not, your gaming experience is ULTIMATELY limited by your refresh rate and not by how fast your graphics card is. In the interim we can live with some artifacting and jumping jitters. But let's now look at the big-picture (no pun intended) relationship between the monitor's refresh rate and the graphics card's framerate. I won't go into a discussion of CRTs, as no one seems to use them anymore except for die-hard classic gamers into the likes of systems from the Atari 2600, Intellivision, and Commodore 64 era.

First, let us establish that the "perfect GPU" should update each pixel on the display device instantly as it changes. Alas, this is not the case.
The absolute best we can hope for today is about a 100Hz refresh rate. That means for your missile to be perfectly represented on a 1024x768 monitor, it must take no less than about 10 seconds to cover the left-to-right distance. Anything faster and you will get artifacting like I've described above: tearing, smearing, flickering, and especially micro-stuttering.

In the end, you can easily see imperfect renderings of fast-moving objects by tracking them as they move across the screen. Once the object gets going fast enough, the leading and trailing edges get a little fuzzy; the whole missile seems to "buzz" and shimmer as it covers 2, then 3, pixels per refresh cycle. Only when the image is updated and refreshed for every pixel do we not see this effect.

Gaming consoles have an apparent advantage: they work at much lower resolutions, like 320x200 and 640x480, with the newer ones going to the standard HD resolutions. If you look at the lowest-resolution ones and consider updating the screen image at 60Hz interlaced NTSC (now introducing a sliding, swimming artifact), you'll find that you can get away with the missile jumping only about 5 pixels per frame in a 1-second traverse. Granted, you'd need to step back a little to alleviate the pixelation. But you get the idea.

What is even worse is an uneven presentation of frames. Say you get 100 fps, then 30 fps, then back to 63, now down to 40, then up to 75. Sadly, that is the current state of affairs with video cards; there is no forced, steady frame output. Sure, you can lock the output to the refresh rate, but you still can't force the GPU to have the image ready to display in 16ms if it is not ready! You've just dumped an incompletely updated frame buffer, and now you get tearing.
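To make that concrete, here's a toy comparison with made-up frame times (nothing measured): two runs can report nearly the same average fps while one of them hitches badly.

```python
# Toy illustration: two runs with similar average frame rates,
# one steady and one stuttering. Frame times in milliseconds (invented).

steady  = [16.7] * 60
stutter = [7, 7, 7, 45, 7, 7, 7, 45] * 8  # three fast frames, then a 45 ms hitch, repeated

def summarize(times_ms):
    avg_fps = 1000 * len(times_ms) / sum(times_ms)
    worst   = max(times_ms)
    return f"avg {avg_fps:5.1f} fps, worst frame {worst:.0f} ms"

print("steady :", summarize(steady))
print("stutter:", summarize(stutter))
# Both report roughly 60 fps on average, but the second run's 45 ms hitches
# are what you actually perceive -- the uneven presentation complained about above.
```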

Till we can update on a per-pixel basis (perhaps 10 years from now), we need to at least keep a steady framerate; let's settle on 240Hz or thereabouts.

Till more powerful GPUs come to pass, we're going to have to live with imperfections like this! We will need to live with micro-stuttering and blurry objects. If James Cameron is dissatisfied with current framerates and wants to shoot the next Avatar sequel with a new system, then that's good enough for me! If you want to continue your own research, go right on ahead; I invite you to prove me wrong. In the meantime, immerse yourself here. The last article, HDTV blur, just barely mentions the eye-tracking issue, which *IS* the root cause of today's :censored:-poor display technology. All these newfangled features are patchwork in an attempt to hide the real problem: slow refresh rate.

You may continue your research here, and you will come to the conclusion that the slow refresh rate is the major hangup.

http://www.pcgameshardware.com/aid,...from-current-multi-GPU-technologies/Practice/
http://www.overclockers.com/micro-stutter-the-dark-secret-of-sli-and-crossfire/
http://hardforum.com/showpost.php?p=1032646751&postcount=1
http://www.100fps.com/how_many_frames_can_humans_see.htm
http://msdn.microsoft.com/en-us/windows/hardware/gg463407.aspx
http://www.pcgameshardware.com/aid,...force-GTX-285-SLI-Multi-GPU-Shootout/Reviews/
http://en.wikipedia.org/wiki/Graphic_display_resolutions
http://en.wikipedia.org/wiki/Frame_rate
http://en.wikipedia.org/wiki/Motion_compensation
http://en.wikipedia.org/wiki/Refresh_rate#Computer_displays
http://en.wikipedia.org/wiki/Micro_stuttering
http://en.wikipedia.org/wiki/HDTV_blur
 
Last edited:

Hielor

Defender of Truth
Donator
Beta Tester
Joined
May 30, 2008
Messages
5,580
Reaction score
2
Points
0
Yes, monitors that update at 1024Hz would be nice.

They don't exist. Even if you had a GPU that could pump out 1000+ fps, your monitor isn't going to show you 1000+ Hz. Or 240Hz for that matter. It's going to show you 60Hz. It doesn't matter what your video card can do.

Nice theory though. Sign me up when those 1000+ Hz monitors come out.
 

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
Yes, monitors that update at 1024Hz would be nice.

They don't exist. Even if you had a GPU that could pump out 1000+ fps, your monitor isn't going to show you 1000+ Hz. Or 240Hz for that matter. It's going to show you 60Hz. It doesn't matter what your video card can do.

Nice theory though. Sign me up when those 1000+ Hz monitors come out.

Why only 60Hz? Where is that limit coming from?

Samsung and Sony are experimenting with updating just the pixels that need updating and leaving the rest alone, in whatever state they are, till a change is needed.

I've seen many small graphics demos that push 300 or 400 fps, or so says the built-in counter. Sure, the graphics looked simple enough, so 400fps is believable. There are ongoing experiments with putting the final framebuffer on the LCD panel glass itself and allowing each memory "location", consisting of "11111111" "11111111" "11111111", to be tied to its own D/A converter, onboard the glass substrate as well. This gets rid of the slow x-y scan process and theoretically allows for thousands of updates a second. But here, refresh rate as we refer to it today does not exist.

The best LCD TVs are running 240Hz, and some new ones coming this year should be 320 or 380, depending on just how they want to work the math.
A 240Hz monitor needs about a 4ms response time from an LCD crystal element. Doable, but still expensive. So they augment it with other tricks, such as flashing the backlight, or overvolting the crystals to force them from one state to the other and blanking the LED backlight while a transition is taking place. LED 'bulbs' can flash as fast as you like, all the way into the MHz range, but that is not necessary.

Sony has a prototype of a 4800x3600 display that achieves a 600Hz refresh rate from 5ms LCD panels! Now how do they do that? :shrug:

Let me tell you. The display that YOU SEE is really 1600x1200. There are 3 sets of pixels (actually 9, if you figure in the sub-pixels for RGB color), and there are 3 backlights, each polarized 120 degrees apart.

Set #1 of pixel data is prepared and presented to the panel. Backlight #1 is flashed on for 1.67ms, then turned off. Being an LED backlight, you get bang-bang operation with tiny rise and fall times. While that is happening, set #2 of the pixels is being prepared, and soon #2's backlight will fire, affecting only set #2 pixels because of the polarization. Next up is #3, same process. Just moving right along.

So every 1.67 ms you see a new image, on a display that uses 5ms elements! You are not a slave to 5ms updates. That's one way. And it's a lot of trickery too; too much for my tastes, but it's there.
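Taking the numbers above at face value (the prototype claim is the post's, not something I can verify), the interleaving arithmetic works out like this:

```python
# Arithmetic implied by the description above (numbers from the post, not a
# verified spec): N interleaved pixel sets, each limited by a 5 ms element
# response, can present N times as many distinct images per second.

panel_response_ms = 5.0     # quoted per-element response time
interleaved_sets  = 3       # three polarized pixel sets / backlights

per_set_rate   = 1000 / panel_response_ms          # 200 presentations/s per set
combined_rate  = per_set_rate * interleaved_sets   # 600 presentations/s overall
flash_interval = 1000 / combined_rate              # ~1.67 ms between backlight flashes

print(per_set_rate, combined_rate, round(flash_interval, 2))
# -> 200.0 600.0 1.67
```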

Not only that, but figure that the quoted 5ms per-pixel time is for going from black to white. The fastest an LCD element can change is going from full on to full off, or the other way around. LCDs have slowdowns when you have subtle changes, say from R,G,B 220,145,175 to 220,150,178. Small value changes can take 3x or 4x the quoted 5ms response time! Good god! Don't you love specmanship? :idk:

Another way of achieving a fast response time is to use individual LEDs themselves as the basis for each pixel element. Their response time can be amazing, easily down into the ns range. But since LEDs have a short life of 50,000 hours when run at maximum rated current, you can see some degradation of color after only 2 or 3 years, and the life expectancy of a TV set based on these wouldn't be more than 5 or 6 years! Yes, LEDs have a surprisingly short life when you look at their initial brightness and compare it 2 years later.

LCDs are long-lived because their color comes from varying light through a valve and a dye, and the RGB color dye for each pixel doesn't really fade over the years. Remarkable accuracy. And low power; we're talking down near picowatts to activate each element. An LCD pixel element is little more than a capacitor, an open circuit really. The power drain comes from the x-y scanning AND, most prominently, the backlight.

An LCD array displaying white is drawing zero current, but your backlight is sucking down perhaps 4 watts in a laptop, and 25 or more in a full-size living room set.

An LED array of equivalent brightness has a power draw that far exceeds that of a plasma display, not to mention an LCD!! Good GOD!! LEDs generate heat too!

IMHO, once the industry dumps the x-y scanning method (or improves its speed by about 400x) and learns to address the individual pixels themselves at any time, without having to wait for processing, x-y array scanning, interpolation, or GPU motion blur effects, only then will we come close to achieving the perfect display.

Light Peak/Thunderbolt is a step in the right direction as far as interfaces go. It has the bandwidth to begin to address individual pixels, albeit at low resolutions, while getting close to 1000+Hz refresh rates.

VGA, DVI, HDMI, DISPLAYPORT - frak me - You might as well be playing with the TRS-80 Pocket Computer..

[ame="http://en.wikipedia.org/wiki/LED_display"]LED display - Wikipedia, the free encyclopedia[/ame]
http://en.wikipedia.org/wiki/LCD#Specifications
[ame="http://en.wikipedia.org/wiki/DisplayPort"]DisplayPort - Wikipedia, the free encyclopedia[/ame]
http://en.wikipedia.org/wiki/Thunderbolt_(interface)
[ame="http://en.wikipedia.org/wiki/Hdmi"]HDMI - Wikipedia, the free encyclopedia[/ame]
http://en.wikipedia.org/wiki/TRS-80_Pocket_Computer
 

Hielor

Defender of Truth
Donator
Beta Tester
Joined
May 30, 2008
Messages
5,580
Reaction score
2
Points
0
I think you missed the point of my statement. Your post is all fine and dandy and full of data.

That doesn't change the fact that I'll need new monitors to get this goodness you're talking about, not just a new graphics card. Going back to your statement earlier:
All that bull about anything above 60fps is excess, IT IS NOT!!
Given current monitors, yes anything above 60fps is excess (disregarding physics advantages of higher framerates in things like Orbiter).
 

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
I think you missed the point of my statement. Your post is all fine and dandy and full of data.

That doesn't change the fact that I'll need new monitors to get this goodness you're talking about, not just a new graphics card. Going back to your statement earlier:

Given current monitors, yes anything above 60fps is excess (disregarding physics advantages of higher framerates in things like Orbiter).

Yes, of course you will need new equipment, monitors, cables, cards, mobos, that sort of thing.:thumbup:

A card that can produce, say, a reasonable 150fps in Orbiter today is doing that in the framebuffer and circuitry on the card. Once it comes down the pipe, it's 60fps, 75fps, 85fps, depending on your monitor. Whatever.

We could get into figuring out whether that 150fps is maintainable. *I* would say that a card that reports a framerate of 150 is "more likely" to be able to maintain a smooth 60Hz come hell or high water. It has the overhead to punch through times of complex rendering and still complete the task in 16ms.
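To put that "overhead" point in numbers (my own framing of the same idea, not a benchmark):

```python
# Headroom framing of the point above: an "uncapped 150 fps" card averages
# ~6.7 ms per frame, so it can absorb a scene that takes over twice as long
# as average and still land inside a 60 Hz (16.7 ms) budget. Illustrative only.

budget_ms    = 1000 / 60        # ~16.7 ms per refresh at 60 Hz
uncapped_fps = 150              # what the fps counter claims with vsync off
avg_frame_ms = 1000 / uncapped_fps
headroom_ms  = budget_ms - avg_frame_ms

print(f"average frame {avg_frame_ms:.1f} ms, headroom {headroom_ms:.1f} ms per refresh")
# By contrast, a card averaging ~15 ms per frame (about 67 fps uncapped) has
# almost no margin, so any complex scene pushes it past the 16.7 ms deadline.
```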

A major problem, still, in PC gaming is maintaining the 60Hz update. There are times when a scene may take 18ms or 25ms or 34ms to render. What if your graphics card just has to draw a starfield with the stock DG in the distance? It will whip through that like nobody's business, perhaps in 4ms or so. So what does your nice 60Hz display do? Does it tear the image a little? Does it miss a frame from time to time? Read up on the topic of micro-stuttering.

What does the vsync option in Orbiter (and other games) really, really do?
Does it ensure that 60fps is going to your LCD? Does it ensure that your video card is drawing 60 pictures, and only 60, to the framebuffer? Is it making the framebuffer dump at 60fps regardless of whether an image is ready? What happens if an image takes 34ms to draw? Surely the framebuffer is going to be commanded to send out 2 images in that time... right? Are they the same image back to back? An old one from double-buffering? Did it pull one out of its :censored:ole? And how do you calculate fps now? We have to average it over a few seconds! What if your CPU gets tied up with something else for a millisecond?

I strongly prefer a setup that gets you a constant, absolute 60fps. Rock solid. And despite some of the artifacting I described earlier, you quickly get used to the edge blurring and such. Orbiter is a slow simulation; things aren't whipping past you like lightning, so many of the effects of fast-moving objects being portrayed on slow-refresh-rate displays are non-issues. They simply don't exist. Try panning around quickly, though, and pay attention to high-contrast edges, a white wing edge moving against a black backdrop; you might see something then.

But that's not what I'm getting at. When running at the natural refresh rate of the display, a smooth-looking 75Hz for me, any, AND I MEAN ANY, drop in fps to even 58 or 62 for just **ONE** frame, JUST ONE, is visible and annoying if there is a panning motion, like when using the arrow keys and the F1 and F2 views. You know.

As far as I'm concerned, vsync means: draw 60 frames in equal 16ms timeslices and output them to the display at 60Hz. It doesn't mean draw me 60 frames and just be sure 60 are displayed within one second's time. That is a cop-out; it doesn't cut it.
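As a sketch of what "equal 16ms timeslices" would look like in code, here's a minimal toy pacing loop. This is just an illustration of the idea, not how Orbiter, Direct3D, or any driver actually implements vsync; render_frame() is a stand-in for real work.

```python
# Minimal sketch of a fixed-cadence frame loop: render, then sleep out the
# rest of the 16.67 ms slice so frames are presented at an even pace.
import time

FRAME_BUDGET = 1 / 60  # seconds per frame at 60 Hz

def render_frame(i):
    # Stand-in for real rendering work with a variable cost per frame.
    time.sleep(0.012 if i % 10 == 0 else 0.004)

def run(frames=30):
    next_deadline = time.perf_counter() + FRAME_BUDGET
    for i in range(frames):
        render_frame(i)
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)   # pad short frames out to a full slice
        # else: the frame missed its slice -- the stutter case discussed above
        next_deadline += FRAME_BUDGET

run()
```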

And that's where consoles shine. They do the framebuffer thing differently. You get a guaranteed fps: you start at the top of the screen (in the CRT days) and draw the frame as the electron beam scans across, pixel by pixel. This didn't vary; it was always a new game frame with new activity. A programmer had X amount of time and a fixed number of instructions to work with, and that was it. When the beam got to the bottom of the raster, there was some processing and housekeeping, and then the start of a new render. Nowhere is this more evident than in the old Atari 2600/VCS with its genlock-like stable timing.

The Apple II was similar, but it had a full 48k framebuffer. That framebuffer, despite outputting at a hardware-fixed rate, could still display jerky, juddered, jittered images, because the 6502 CPU could take varying times to form the image in the buffer. I built a "mod" that would let the CPU read where the framebuffer was in the process of displaying the image, so the programmer could ensure the image was built in a fixed time, in time for a smoothly moving picture. No tearing, no slowdowns.

Hell, the PC is all over the place compared to that old console. One tiny 4k text file access can trip up the PC if it happens at the wrong time.

My point is that in the race to quote high AVERAGE frame rates across scenes that vary in complexity from 5k polygons to 50k polygons (for example), the PC CPU/GPU architecture seemingly speeds up and slows down. There are no intelligent or forced output rates, VSYNC BE DAMNED!

PC game designers are so obsessed with getting the most detail out of their creations that they disregard the cardinal rule of constant fps. They immediately use up any overhead the hardware provides. This is mostly laziness and bloated programming, and having all this varied hardware doesn't help much either. With consoles, there's a nice, strict set of rules a designer must stick to, and in return the hardware guarantees a certain level of performance. Consoles, in general, promise you X amount of FPS.

There are some "big-picture" items in the pc-sphere that are that cause these problems.

1 - The CPU and GPU can take varying times to complete scenes of varying complexity.
2 - The CPU can get called away to do some housekeeping at any time.
3 - Any overhead is immediately used up. This manifests itself when the CPU/GPU/bus "subsystems" are maxed out. You must complete your render in 16ms, no questions asked.
4 - Extra credit doesn't count. I don't care if you take 45ms to draw a frame and then try to make up for it by doing the next three frames in 7ms. Do it right the first time.

Nvidia and AMD/ATI are quietly working to correct the micro-stutter issue, btw.

http://en.wikipedia.org/wiki/Television_Interface_Adapter#RAM-less_design
[ame="http://en.wikipedia.org/wiki/Atari_2600"]Atari 2600 - Wikipedia, the free encyclopedia[/ame]
[ame="http://en.wikipedia.org/wiki/Micro_stuttering"]Micro stuttering - Wikipedia, the free encyclopedia[/ame]


P.S. I'd like to see the Good Doctor try writing a program on the Atari 2600; it is one of the most difficult machines to work with in all of computing history. Puts anyone's panties in a bunch. And yet it does so much, so much that new hardware tricks are still being invented today.
 
Last edited: