[url="http://www.shacknews.com/ja.zz?comments=36039"]Hardware accelerated physics[/url]!! Good thing or bad thing?
Slashdot has a [url="http://slashdot.org/articles/05/03/08/1827239.shtml?tid=137"]news story on it[/url]. It sounds like the beginning of something really good! Just hoping game developers use the potential of this to make better games rather than trivial stuff!
On the downside - it's another card to upgrade [:(]
I question the point of it, actually: if the Cell lives up to the hype and a single one delivers 256 gigaflops (that's 256,000,000,000 calculations per second involving numbers with decimal points, for you non-programmer types), and people are using 4 of them in a single machine (a teraflop), then there is PLENTY of horsepower for quite a while.
An Athlon XP 2500 does around 2.5 gigaflops, btw.
I'll be much more enthusiastic about this chip when I find out if it is programmable, how easy it is to program, whether or not it is possible to program it without signing an Evil NDA and similar considerations.
Something that pissed me off straight away was some marketing linked from Slashdot that talked about "completing the triangle: CPU->GPU->PPU". They've forgotten about the need for an APU (audio processing unit), and anyone who calls the chips on mobos APUs hasn't looked at them properly. Even the chips on Creative's Audigy 2s are really crap compared to what's happening in the GPU and CPU worlds.
Something that I'd be interested in playing with on this PPU is using it for physical modelling audio synthesis.
I remember Maitrek & Co talking about something like this a long long time ago, I think they also mentioned something about a dedicated 3d graphics card even before 3DFx.
But yeah, any decent SIMD processor nowadays, as well as MIMD and multicore systems, would be able to do physics more easily anyway.
I still think the games industry needs to hold up a bit and wait for the content development people and tools to catch up to where the technology has gotten to. I hate seeing incredibly detailed but short games. Better development tools and pipelines need to be created and invented for that to occur.
I'm with Daemin on the pipelines & tools development notion.
Lorien: Good call on the audio side of things, although strangely enough in the paper a couple of weeks back I read a review of a new soundcard up in the $800-$900 price bracket with extremely impressive signal-to-noise ratios and other bits and pieces.
[begin off topic audio freak rant]
It's not the S/N ratio, it's the whole idea of simply playing back samples with some very fake 3D positioning algorithms and a bit of listener-position-dependent reverb slapped on.
Samples are like sprites (a sample is simply an audio bitmap), and as long as they are used game audio is going to continue to suck.
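To show what I mean by "very fake", here's roughly the level of sophistication we're talking about: distance rolloff plus an equal-power pan, applied per sample frame. This is just a minimal Python sketch with made-up names, not code from any actual SDK, but it really is about the whole "simulation":
[code]
import math

def position_sample(sample_frame, src, listener):
    """Crude 3D 'positioning' of one mono sample frame: distance
    attenuation plus an equal-power stereo pan. No occlusion, no
    propagation delay, no room response."""
    dx = src[0] - listener[0]
    dy = src[1] - listener[1]
    dz = src[2] - listener[2]
    dist = max(math.sqrt(dx * dx + dy * dy + dz * dz), 1e-6)
    gain = 1.0 / dist                     # inverse-distance rolloff
    # Pan from the horizontal angle to the source (listener facing +z)
    angle = math.atan2(dx, dz)            # -pi..pi, 0 = straight ahead
    pan = (angle / math.pi + 1.0) / 2.0   # 0 = hard left, 1 = hard right
    left = sample_frame * gain * math.cos(pan * math.pi / 2.0)
    right = sample_frame * gain * math.sin(pan * math.pi / 2.0)
    return left, right

# e.g. a source two units ahead and one to the right of the listener:
l, r = position_sample(0.5, (1.0, 0.0, 2.0), (0.0, 0.0, 0.0))
[/code]
Compare that with what a GPU does per pixel and you can see why I call samples audio sprites.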
I suspect the only way the audio people are going to get the required cycles for doing realtime synthesis (when they have to compete with AI and physics) is to have a dedicated APU.
Creative really don't help things at all: not only do they buy out and close down anyone with better tech, they also put crazy conditions on using the EAX SDKs past version 2, e.g. people aren't allowed to discuss anything technical about EAX 3 or 4 as a condition of using the SDK. At the same time they love saying "EAX is an open standard".
EAX is a dodgy hack: it's impossible to do a decent acoustic environment simulation with simple reverbs and filters. What's needed is wave propagation, or wave tracing, where the level geometry and material descriptions are sent to the soundcard, which then calculates how the sound waves bounce around.
EAX material presets are just filter characteristics (tone control parameters), and EAX has no geometry support: you are supposed to use collision detection to figure out how to muffle sounds.
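To make that concrete, here's a sketch of the kind of thing EAX leaves to the game. The ray_blocked collision query below is a stand-in for your engine's own, nothing from the EAX SDK, and the filter is just an assumed one-pole lowpass: raycast from source to listener, and if a wall is in the way, darken the sound.
[code]
class OnePoleLowpass:
    """One-pole lowpass used to 'muffle' an occluded source."""
    def __init__(self):
        self.state = 0.0

    def process(self, x, coeff):
        # coeff near 0.0 = filter wide open, near 1.0 = heavily muffled
        self.state += (1.0 - coeff) * (x - self.state)
        return self.state

def occlusion_coeff(source_pos, listener_pos, ray_blocked):
    """The entire 'acoustic simulation': if level geometry blocks the
    straight line between source and listener, pick a darker filter."""
    if ray_blocked(source_pos, listener_pos):
        return 0.9   # behind a wall: heavily muffled
    return 0.0       # direct path: leave the sample alone

# e.g. with a collision query that always reports a blocking wall:
lp = OnePoleLowpass()
coeff = occlusion_coeff((5, 0, 0), (0, 0, 0), lambda a, b: True)
muffled = [lp.process(s, coeff) for s in (0.5, -0.25, 0.8)]
[/code]
No reflections, no diffraction, no materials, just "is there a wall between us". That's the level we're at.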
IMHO it's the worst designed system I've ever seen.
Added to this their cards only do 64 3d voices, when for decent realism you really need 512.
Ever wondered why soundcards don't have onboard RAM or huge heatsinks like graphics cards? It's not that they wouldn't need them to do the job properly; it's that game audio is still in the dark ages, when there is no need for it to be. What's more, studios and publishers either don't realise or don't care.
Brett Patterson (the main FMOD guy) agrees with me wholeheartedly, btw. That's one of the reasons Firelight are putting some basic synthesis and geometry support into the software side of FMOD 4.
In short, current game audio hardware is so bad it makes more sense to use software. Or to do audio processing on a GPU.
[end off topic audio freak rant]
It seems a little strange to me personally, especially with the hardware industry apparently moving towards multiprocessor systems. The big examples at the moment are the PS3 with its Cell architecture and the Xbox 2, which will apparently have 3 CPUs, all faster than my current PC.
Is it worth continuing to confuse developers with more and more hardware standards or should we be looking to see how we can best utilise the extra general processing power being made available?
disclaimer : I did Electronic Engineering at uni, so I have torn out my hair on audio and image signal processing algorithms for many a sleepless night...
I would have thought that by its very nature, true audio occlusion and propagation calculations would be more complex than graphics calculations, on the order of O(N) vs O(N^4) at best. Light at least propagates roughly linearly and instantaneously, while sound waves propagate spherically and with measurable time delay. Add to this the possibility of multipath effects and you're talking about an almost astronomically complex multivariate problem to solve in realtime. My brain's bleeding just thinking about it.
Now, I can certainly see the possibility of static precomputation of audio propagation, but using that in-game would actually only require large numbers of voices with plein-air 3D modelling... However, this would be like exporting all the BSP and collision detection logic to the GPU...
Add to this that when a large chunk of your gamers are happy to use any old headphones they find lying around, while most of the rest have the low-range turned way up high for kickarse bass, there'll be quite a struggle to get a real commercial driver for "true" audio hardware.
The secret to realtime audio propagation independent of the number of voices is to use cellular automata. An honours student at La Trobe did it last semester. It's still very processor intensive, but easily made parallel (he's used some of my SIMD code already).
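For anyone curious, the cellular automata approach boils down to a grid of cells that each update from their four neighbours; it reduces to a finite-difference form of the wave equation. Here's a rough Python sketch of the idea (my own toy illustration, not the student's code); the neighbour-only update rule is exactly what makes it SIMD/parallel friendly:
[code]
N = 64     # grid cells per side (a tiny 2D "room")
C2 = 0.25  # (c * dt / dx)^2; must be <= 0.5 for stability in 2D

prev = [[0.0] * N for _ in range(N)]
curr = [[0.0] * N for _ in range(N)]
curr[N // 2][N // 2] = 1.0   # impulse: a sound source mid-room

def step(prev, curr):
    """Advance the pressure field one time step. Edge cells are held
    at zero, so waves reflect back off the 'walls'."""
    nxt = [[0.0] * N for _ in range(N)]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            lap = (curr[i + 1][j] + curr[i - 1][j] +
                   curr[i][j + 1] + curr[i][j - 1] - 4.0 * curr[i][j])
            nxt[i][j] = 2.0 * curr[i][j] - prev[i][j] + C2 * lap
    return nxt

ear = []                      # pressure over time at a "listener" cell
for _ in range(200):
    prev, curr = curr, step(prev, curr)
    ear.append(curr[N // 2][N // 4])   # this signal IS the audio output
[/code]
Occlusion, diffraction and multipath all fall out for free because the waves genuinely travel through the grid, and the cost is fixed by the grid size, not the number of voices.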
Also, Aureal's A3D featured simple wave tracing in hardware: you did send low-poly geometry to the soundcard.
I think expectations in listening devices (speakers/headphones) are improving; that's why Creative went 24-bit linear PCM and Intel are hyping 32-bit for future systems. Also, sampling rates are already up to 192 kHz on Audigy 2 cards.
I think increasing bit-depth and sampling rate is the Wrong Way (tm) to improve game audio: the biggest change will come from using synthesis rather than simple sample playback.
And I warned that I'm an audio freak [:D]
[url="http://science.slashdot.org/science/05/03/15/0124218.shtml?tid=152&tid=…"]3D Raytracing Chip Shown at CeBIT[/url] [:0]
So, to speculate about realtime dynamic audio that doesn't use samples anymore:
Is that to say I could create a cymbal, define its material properties... then the cymbal would either 'ping' or really 'crash!!' depending on how hard the cymbal is hit in game? And this sound would vary depending on the definition of the striking object? (wood, metal, felt)
...and a stereo in a room, say, would be muffled in real-time as the door opened/closed?
similar to the way physics has been introduced to realtime games... where initially breaking walls were pre-animated sequences and you could maybe 'slide' a crate around, whereas now objects react naturally in a scene.
urgrund, you've just described "physical modelling synthesis", which is what many, many synthesisers do these days.
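The classic cheap example is Karplus-Strong plucked-string synthesis: a delay line full of noise with a two-point average in the feedback loop behaves like a decaying vibrating string. A minimal sketch of the textbook algorithm (not from any particular library):
[code]
import random

def pluck_string(frequency, duration, sample_rate=44100):
    """Karplus-Strong: noise circulating through a delay line with a
    two-point average acts like a plucked string. The delay length
    sets the pitch; the averaging is the string's energy loss."""
    period = int(sample_rate / frequency)   # delay line length
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the pluck
    out = []
    for _ in range(int(duration * sample_rate)):
        first = buf.pop(0)
        avg = 0.5 * (first + buf[0])        # lowpass = string damping
        buf.append(avg)
        out.append(first)
    return out

samples = pluck_string(440.0, 1.0)   # one second of an A4 "string"
[/code]
Every pluck sounds slightly different because the noise burst is different each time; that's the whole point of synthesis over playback. Real instrument models (like urgrund's cymbal) use much heavier techniques, but the principle is the same.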
There are many other ways to synthesise audio that aren't quite as cool, but much less CPU intensive.
There are also programming languages specifically for synthesising audio, like CSound http://www.csounds.com/ , JMax http://freesoftware.ircam.fr/rubrique.php3?id_rubrique=14 , Supercollider http://www.audiosynth.com/ and loads more, and the field is far more developed than computer graphics: Max Matthews wrote the first audio synthesis language (Music 1) in the late 1950s and research has been going on ever since.
And yes, if people want to talk more imho it should be in the programmers section.
Why don't we have hardware synthesized sound in games?
1) Current methods are "good enough"
2) Content creation for synthesized sound is going to be much more time consuming. This will most likely change over time as the models become more mature, but initially I expect it to be too complex to be viable in games.
Why don't we have hardware synthesized sound in games?
That sort of attitude is a big reason... No offence, but have you really looked into it? And so what if the content creation takes longer; making a game with inverse kinematic characters takes longer too.
Current methods are not "good enough", they are completely lame (don't just take my word for it, ask anyone who has really studied digital audio). This push for "cinematic" games has mostly forgotten cinematic sound and music.
Synthesis in games would take a different kind of sound designer, mostly the people who use these custom synthesis languages are classically trained composers, not audio engineers.
I bet no one on Sumea spends much time playing games with the sound turned off...
Better make this clear: I've been synthesising audio since I was 11 years old (and I'm 32 now).
I was thinking about the possibility of these things ages ago. They would be quite cool in that they're going to give you a much more realistic, interactive environment. I'd definitely love to see proper fluid simulation.
I can see this card being used for two purposes: to add pretty extra effects to games, and to really improve the gameplay in others. The first will not necessarily require the card to play, as it's just improving the immersion of the world. The second is definitely going to need the card, otherwise there would be too much trouble trying to implement both a hardware and a software side of the physics. If the card is going to be relatively inexpensive (~$150) then I think it could take off quite well.
Another option would be to integrate it into new motherboard chipsets.
I can also see this one being used heavily in later gen consoles.
Time will tell...