Who cares? (poll)

Submitted by Maitrek on

I'm going to indulge in the most common type of server space wastage: I'm going to poll the populace of this thread.
Three questions, with a few minor sub-queries so that people might think about elaborating their posts - although simple yea/nay responses would also be appreciated, given the general inactivity in this thread. Even if you don't know what the crap I'm asking of ya, at least pipe in to say that you think I'm a tool :)

1. x86-64 vs IA-64 - not which one is better, but whether you actually care or think it affects you or your programming in any way. Has assembly for all intents and purposes finally been slain by high level languages?

2. Floating point colour representation vs 32bit RGBA integer colour representation - do you actually care, and which do you prefer?

3. Visibility determination - is it soon to be a thing of the past, or still a valid part of any 3D environment parsing software? Octrees, quadtrees (and derivatives), portals - what do people think the future of vis-det is?

Submitted by Maitrek on Tue, 18/03/03 - 1:33 AM

Okay, obviously I haven't managed to create any controversial flame war - that's a good start. But perhaps I'll try to kick-start some debate by stating my opinion on the first question.

x86-64 vs IA-64 - does it affect me?

Personally, I think it does, but only to a certain extent. I have a bit of a problem myself with wanting to stay close to the machine level. A lot of coders don't like this because, admittedly, it's very hard to keep code in a reasonably manageable state whilst sticking close to the machine. x86-64 is a very good extension to the current IA-32 (x86) instruction set. It's not a RISC architecture, which appears to be the way everyone is going nowadays, but it's still very workable, very backwards compatible, and friendly and familiar to me. It's also very fast, and it addresses all the problems I have with the current architecture. IA-64 is a pretty radical change: it supports all the old instructions, but it treats them quite separately from the new instruction set. It too solves a lot of problems with the old architecture, but the fact is it makes some of my old code very slow, because that code was written for the old-style dual-pipelined Pentium processor (and sometimes a multi-pipelined floating point unit), and the data structures were optimised for and reflected that style of architecture, even in a high level language.

As a coder, I sometimes have a tendency to drop to assembly simply because the compiler doesn't compile an inner loop the way I want it, or it emits some floating point instructions in an inefficient manner. I can't say I like having to do this all that much - compilers overall do a very good job of optimising code, and it's a bit of a pain to use assembly. Changing the ISA radically means I have to change all that code. It also means changing the way I code, to better suit the many registers we now have, the up-to-three levels of cache, and the fact that code size now matters far more - this mainly applies to the IA-64 architecture.

That is (in short) how I think the changes will affect me if we radically change the architecture. The same concerns apply whether we go with x86-64 or IA-64, though they're probably less of an issue with x86-64. Basically I have to learn the best way to code for these processors, which isn't that bad, but I was just starting to get good at the old ones. And if we end up with two competing standards, then I'm in an extra bind, because supporting both styles of processor will bloat the executable size.

Submitted by Soul on Tue, 18/03/03 - 3:08 AM

My thoughts...

1. Any change in architecture will affect performance code - just look at the multitude of common, high-level tricks employed to take advantage of modern processors (explicit parallelism, padded data types, loop invariance etc.)

While the established techniques will carry through to x86-64, the IA-64 architecture will introduce new problems. Much of its primary focus is on pushing the limits of compiler technology to produce fast code - this means we will have to find the best way to write our programs to help the compiler take advantage of these optimisations.

Having said this, compilers still have a long way to go before totally eclipsing the power of low-level languages.

2. Floating point is, without doubt, the way of the future. With the advent of shaders, and other hardware tech, it's becoming more difficult to justify using a discrete representation when most of our calculations require more accuracy.

3. From my limited knowledge, it depends on how powerful hardware gets. I get the feeling vis-det will soon become an unnecessary load on the CPU, and should probably be handballed off to the gfx hardware.

Submitted by Maitrek on Tue, 18/03/03 - 9:27 AM

My response to the second question:
Hardware support for floating point colour representation should come sooner rather than later. Although it takes more space to store colours this way in memory, and permanently storing floating point colours in our image formats will raise hard disk space issues, it's still a far more accurate way of representing colour.

It also brings up colour mode independence: it wouldn't matter what video mode you're in, we'd only have to handball off the one format (floating point).

The only drawback I see is the sheer number of floating point operations the GPU will have to cope with. I think, however, that a consistent colour format will greatly reduce conversions between integer and floating point - it cuts out the pixel format conversions in alpha mapping and bilinear filtering, and all sorts of other time could be saved. From a performance point of view it'll definitely break even and at least look a lot smoother, and I'd imagine decent implementations will run faster.

Texture memory, hard disk space, and even removable storage capacity have all increased enormously, and I don't think there's any real issue with storing floating point colour in image formats either. Sure, in the short term we might be restricted in the number of textures we can fit into the graphics card's memory, but it'll only be a matter of time before we have more than enough again...

Submitted by redwyre on Wed, 19/03/03 - 2:01 AM

1. x86-64 vs IA-64 - These days it's not about speed, but productivity. The only way this affects me is if the compiler is immature and doesn't know what it's doing.

2. Floating point colour representation vs 32bit RGBA integer colour representation - This is great. Floating point buffers allow so much more precision. You would only use FP buffers with pixel shaders and as the back buffer; you don't need any more precision in your textures. FP buffers let you do multi-pass rendering that doesn't lose accuracy, which will improve image quality. You won't see more colours, but the ones you do see will be smoother and computed more correctly.

3. Visibility determination - well, since DirectX9, your hardware can do this now :) Occlusion and culling will always be a part of game dev, because the number of triangles keeps increasing.

Submitted by Daemin on Wed, 19/03/03 - 8:56 AM

I'd say that we only really need floating point color representation for the intermediate stages of the pipeline, since textures would take too much space as 32-bit floating point RGBA (that'd be 16 bytes per pixel - far too large). And we won't need to store the final image (the one that goes out to the monitor) in 32-bit floating point either.

One final point: 32-bit floating point is slow, and 24-bit is much faster - as witnessed in the Radeon 9500+ vs GeForce FX battle.

Submitted by Maitrek on Wed, 19/03/03 - 10:28 AM

I think it's similar to 3dfx's problems with 16bit back in the day - floating point is a far more robust pixel format for rendering purposes. Another question you have to ask is: is floating point slow on the GeForce FX because it's having to convert lots of 32bit values to floating point and back again? It's part of that self-fulfilling prophecy type garbage - current software is designed with 32bit RGBA in mind, so does that hold back the performance of floating point colour hardware? Converting integer to floating point is a rather slow operation, and obviously floating point math is slow(ish) compared to integer math, but there are obvious benefits in the rendering pipeline (multi-pass, blending) and in image quality that floating point handles far better than integer math.
