C# and .NET - but Aus studios will stay behind
XNA Framework Simplifies Cross-Platform Game Development
The XNA Framework contains a custom implementation of the Microsoft® .NET Framework and new game-development-specific libraries designed to help game developers more easily create cross-platform games on Windows® and Xbox 360 using the highly productive C# programming language. Using the XNA Framework, game developers will benefit from the ability to re-use code and game assets in developing multiplatform titles, without sacrificing performance or flexibility.
Pigs fly backwards in wintertime as well btw. And you left out the quotes around "cross platform". Reminds me of a certain Aus company (not MF in case anyone is wondering!) claiming their engine was "cross platform". It worked on win95, win98, winME and win2k...
Nothing personal poser, but that's just Microsoft's hype you've posted.
XNA Framework Requires entirely new build processes and expertise for Microsoft-only Game Development
The XNA Framework is a set of wrappers around the same old Microsoft® .NET Framework with some new game-development-specific libraries designed to make game developers entirely recode all libraries, core tools and dev systems for their Windows® and Xbox 360 development using the C# programming language. Using the XNA Framework, game developers will need to buy new source-code-control integration, workflow tools, server and shared-build infrastructure in order to build the same games they've been building for the last few years.
quote:Originally posted by mcdrewski
XNA Framework Requires entirely new build processes and expertise for Microsoft-only Game Development
The XNA Framework is a set of wrappers around the same old Microsoft® .NET Framework with some new game-development-specific libraries designed to make game developers entirely recode all libraries, core tools and dev systems for their Windows® and Xbox 360 development using the C# programming language. Using the XNA Framework, game developers will need to buy new source-code-control integration, workflow tools, server and shared-build infrastructure in order to build the same games they've been building for the last few years.
If this is true, and this type of framework is required for Xbox 360 development, then that would actually be bad for Microsoft, given that the PS3 and Revolution promise to work with your standard, proven frameworks.
This introduces a form of elitism whereby only the bigger developers would be able to implement the framework, due to cost and time, killing off a whole lot of possible titles for the 360. The smaller companies would look to make games on other systems. In the end, Microsoft would be making a classic Nintendo-style development screw-up.
This is all based, of course, on the idea that Nintendo and Sony will allow developers not to implement the new system...
I'll say it again though. Sony's success lies in the fact that they let any poor bastard make whatever the hell they want on a PS console. I've seen some really low budget titles.
quote:Originally posted by CombatWombat
nice job I might add - had me going there on first read too :)
It IS a nice job, what with the trouble MS seem to constantly get themselves into over misuse of monopoly powers and their business tactics. Caroo, I suggest you read http://www.catb.org/~esr/halloween/ - it's a collection of leaked internal MS documents.
from the site quote:
Over time, these memoranda have grown into quite a series. The Halloween Documents I, II, III, VII, VIII and X are leaked Microsoft documents with annotations. IV is a satire based on an idiotic lead-with-the-chin remark by the person who was at the time Microsoft's anti-Linux point person; V is serious comment on a statement by the same fool. VI is a takedown of one of the bought-and-paid-for "independent studies" Microsoft marketing leans on so heavily, IX refutes the Amended Complaint by Microsoft's sock puppets at SCO, and XI is a field report from one of Microsoft's marketing road shows. The common theme is that the Halloween Documents reveal, from Microsoft's own words, the things Microsoft doesn't want you to know.
The page is maintained by Eric S. Raymond, who I've mentioned on sumea before: an MS recruiter was foolish enough to offer him a job and he responded with public FLAMES [;)] http://sumea.com.au/forum/topic.asp?TOPIC_ID=3240&SearchTerms=how+to+tu…
Hard to say - but I think MS may be about to win the console war, and probably will win the home entertainment / media centre one as well, just as they won the desktop, the office and the browser. If so, then C# and .NET is their preferred platform, and that's really all that can be said of it. Anyway - let's see if Sony can make their comeback later in the year; they are the only one left that can stop MS taking over practically all end user computing.
I'm a firm believer that Sony will dominate the console sector as it has for the last two generations. Microsoft have the advantage of the early bird tactic, but that's only a short term advantage. The Xbox 360 may have made record sales, but I predict the PS3 will at minimum have one and a half times as many initial sales as the 360, most of those buyers being dedicated PlayStation players. Remember, Sony's been in the game for 10 years now. The 8-15 year olds back then who grew up on the old systems will be Sony's sales boost for the new system.
People like the new 360, but they're not captivated by it from what I've seen. When a new console comes out and it makes many people buy it regardless of cost, like the PS2 did back then, you know you dominate the industry.
quote:Originally posted by poser
Hard to say - but I think MS may be about to win the console war, and probably will win the home entertainment / media centre one as well, just as they won the desktop, the office and the browser. If so, then C# and .NET is their preferred platform, and that's really all that can be said of it. Anyway - let's see if Sony can make their comeback later in the year; they are the only one left that can stop MS taking over practically all end user computing.
C# & .NET may be their preferred platform, but I don't see Microsoft themselves making too much use of it ;-) They also have large legacy codebases in C++, MFC and the Win32 API, so they will support those for a long time to come.
The Xbox 360 has a few things not in its favour, a next generation media drive being one. The PS3 will absolutely dominate the 360 in Japan (and likely around that region), which is quite a sizeable market, so Sony won't be going away soon. Sony is also making a smart move away from proprietary APIs to one right in line with OpenGL; Microsoft requiring C# and .NET would be going in the opposite direction, and a big mistake.
In terms of the home media center, my guess is a company with a device like TiVo will be more successful than both Sony and Microsoft. They both have visions for the "lounge room" that are too far fetched, at least in the near future - which is when I think it will make a difference.
quote:Originally posted by urgrund
In the topic name "...but Aus studios will stay behind" <- what do you mean by that?
Perhaps he means that where a lot of "Microsoft" business IT shops are eating up the "latest & greatest" PR from MS and switching to C# and .NET for development, games companies aren't?
There seems to be a bit of confusion here. From the xna website (http://www.microsoft.com/xna/):
quote:XNA Framework
Microsoft unveiled the XNA Framework at the Game Developers Conference 2006. The XNA Framework is an exciting new development and execution environment which will allow game developers to more easily create games which run on the Microsoft Windows and Xbox 360 platforms. It is being designed with a unified set of class libraries which will allow for maximal re-use of code and assets across target platforms. A custom version of the Common Language Runtime is being built to enable the execution of managed code on an Xbox 360, and at GDC the XNA team showcased some exciting demonstrations of games which were built on an early version of this technology.
So it's just a .NET stack that supports the set of XNA common libraries and runs on the Xbox 360. I know from the Xbox newsgroups that people have been asking for this, so I think it's great.
lorien: running on win32/win64 and Xbox 360 sounds quite cross platform to me.
bp: What about C# isn't cross platform? You can write an app on windows and have it run on a number of different systems using mono.
Caroo: Microsoft is implementing the framework; you just have the option of using it with C# or C++ (probably both). I see this as being pretty useful for smaller developers, as it is my understanding the XNA Framework will allow developers to get up and running much quicker. And the Xbox Live Arcade is a great way for smaller developers to get their games out there and available to many people.
CombatWombat: No one is stupid enough to remove support for C++ :)
lorien: How is providing their developers a framework a misuse of monopoly powers? A monopoly of making it easier to work with their own console? And I don't see what this has to do with open source and Eric S. Raymond.
Dragoon: In a way DirectX is a standard of sorts; a lot of people use and know it. Sony is playing catch-up by moving to OpenGL ES. Again, you will not be required to use C#, it's just another option.
quote:Dragoon: In a way DirectX is a standard of sorts; a lot of people use and know it. Sony is playing catch-up by moving to OpenGL ES. Again, you will not be required to use C#, it's just another option.
Yes it is, but I was referring to the implication that MS would hypothetically make C# and .NET a requirement or strong preference in the future. It's definitely not a standard for game development, or even common in game dev companies.
Redwyre: it doesn't seem cross platform to me. It seems almost like claiming something is cross platform because it runs on x86 Linux, AMD64 Linux, MIPS Linux, PowerPC Linux and ARM Linux. I think it's a minor step up from "cross platform" across the different versions of windows.
I didn't say providing developers with a framework was an abuse of monopoly powers at all, but that MS have a history of abusing those powers (and have been convicted and hit with massive fines for it). Also, MS try to lock developers into their products alone - I think that's one of the main points of DirectX. It's something to keep an eye on, I think.
Mono is as doomed as Wine, I think: MS products are constantly changing, and Mono and Wine play catch-up the whole time. That's not to say they aren't amazing efforts - the Wine team have boasted about aiming at "bug for bug compatibility" with Windows...
quote:lorien: running on win32/win64 and xbox360 sound quite cross platform to me.
No, see if it ran on MIPS, PS3, Revolution, ARM etc it'd be cross platform. But it doesn't so it isn't.
quote:bp: What about C# isn't cross platform? You can write an app on windows and have it run on a number of different systems using mono.
The fact that there isn't a version on most platforms?
quote:lorien: How is providing their developers a framework a misuse of monopoly powers? A monopoly of making it easier to work with their own console? And I don't see what this has to do with open source and Eric S. Raymond.
Unless by "monopoly" you mean "half of Sony's market share", then its pretty obvious they don't have anything even close to it.
As for open source, I think MS has a lot to do with it... Hear me out: Open source exists and thrives because there is a critical mass of people making it. The only way that can happen is if powerful hardware is affordable and available to all. And MS has driven the demand for powerful hardware sky high because their software requires it.
That lets the Linux guys build powerful boxes at affordable prices. Of course, not only are they completely ungrateful, they actually have the audacity to complain that MS software is bloated. Personally, I'm glad it is, otherwise economies of scale wouldn't work to let me buy HW at these amazingly low prices.
quote:Dragoon: In a way DirectX is a standard of sorts; a lot of people use and know it. Sony is playing catch-up by moving to OpenGL ES. Again, you will not be required to use C#, it's just another option.
Sony isn't playing "catch up", they're easily the market leader, whether you like it or not. And they got there without any GL or DX nonsense. Hello, it's a console, you know, a fixed architecture; you don't need to abstract the hardware, that's the whole damned point!
pb
pb said: quote:"Sony isn't playing 'catch up', they're easily the market leader, whether you like it or not."
I don't think it was really meant as a reference to market share, but to developer ease-of-use when it comes to developing for the platform. However, if any developer believes that this in itself justifies claims that the X360 will win the next-gen race... I would say that they are delusional, and such claims are nothing more than wishful thinking on the part of developers and hype from Microsoft.
Games are a consumer-driven market, not a developer-driven market nor a platform-driven market.
If the majority of consumers end up with a PS3 - and there is no evidence to suggest otherwise, with current-gen market shares at around PS2 91.6 million, XBOX 21.9 million and GCUBE 19.4 million units worldwide, based on the last set of figures I could scrape together; note that the PS2 has twice as many as the XBOX and GCUBE combined - then it does not matter if developers or even publishers prefer developing for the X360 because it is perceived as less "alien" to their old-school PC development sentiments and therefore has a higher "ease-of-use" due to familiarity. I think a lot of PC developers found the transition to console development easier because of the XBOX and its similar hardware and OS combo to a PC, so the learning curve was not as severe as the PS2's, and therefore they became XBOX fan-boy converts, as did perhaps many PC gamers, who may make up a majority of their consumer base.
What does not matter is whether developers find the X360's ease-of-use better development-wise than the PS3's. What matters is the consumer base, and what consumers care about in the way of new products is their ease-of-use in utilising them - speaking only within the context of ease-of-use, as there are other factors to success, like brand image / reputation. Consumers do not care how hard or easy it is to make a game for a platform; all they care about is how easy or hard the platform is to use, from the "ease-of-use" point of view.
With existing brands, what also matters is your established consumer base, their loyalty to your brand based on the quality of the product, titles and service, your ability to maintain this "reputation" with good marketing, and the size of your market share.
If you think that all those who bought a PlayStation, then bought a PlayStation 2, are all of a sudden going to buy an X360 - when it doesn't have hardware as impressively hyped-up as the PS3's, and when its early release severely affects the life-cycle of existing platforms and their revenue streams, not to mention catching a lot of developers with their pants down by pre-empting next-gen a year or two before it was due - then I think you should try to face reality, as you are only deluding yourself into yet again believing what you want to believe rather than what you need to believe.
Now, all that shit said, I really do not favour either one of these platforms. If the market research supported the X360 - which mine does not - then I would focus development on that platform; hell, I own an XBOX, not a PS2 ;). Hell, if the market research suggested that the Revolution would dominate this generation of gaming consoles, I would back the bitch and develop for it.
In the case of the Revolution, the reality is that it is a long-shot, a wild-card, that may not be an iPod but rather another Nintendo Virtual Boy - a gimmicky and painful failure, and when I say painful, I mean eye and neck strain.
The iPod was successful as it allowed people a relatively simple way to use and play MP3s in a portable device; its innovation was ease-of-use - along with style and good marketing - targeting the average consumer, creating a product that clearly would sell because there was demand for such a device. Ease-of-use along with style is something Apple has a reputation for with their Macintosh systems. I fail to see how the Revolution is analogous to this and will recreate such success, and I see the "innovative" controller as being rather gimmicky. Nor do I see it as being that innovative compared to something like the PS2 EyeToy or other peripherals - SingStar, Buzz, etc - for innovative gameplay, as these added to the wheel with something easy to use, rather than reinventing it.
The Nintendo controller to me does not look to have such ease-of-use, with its wand thing and VR spin-off controller thing. To me that reads as: takes a while to get used to, and will lead to cases of RSI and lawsuits for Nintendo - I'm actually surprised no one has sued already over the hideously deforming qualities of the GBA on the average person's hands :/
I think this is why, when people speak of next-gen, they refer to the X360 and PS3 but don't even bother to mention the Revolution - but hey, it is a wild-card and may prove many wrong, including me, and make a lot of Nintendo fan-boys rather happy :p.
And so my rant on next-gen comes to an end.
I would love to see the Revolution make a successful name for itself. Truly I do. And while each of us has six different theories on why Nintendo is falling behind in the home console market [and frankly all those theories are true most of the time], Nintendo's death will be assured unless it starts to take its market seriously game-wise.
They've pretty much given up on the GameCube. And their main "die-in-the-ass" point with that system was simply that they were making gimmicks, not games. Every third game to come out on the GameCube and DS lately has been Mario based, and frankly that's what's killing them. Sony and Microsoft are relatively new competitors and don't rely on a character as deeply as Nintendo does. Xbox has Halo, but they're only up to game number three. How many Mario games are out there? He's a cool character and all, but he lost his charm after Mario 64.
This is not to say they're not trying. With the Revolution sporting the ability to emulate old Nintendo consoles, they will need to rely less and less on a mascot. Although the controller might become the new system's Frankenstein's monster if they make it the default controller. As I said: gimmicks, not games.
You can tell Nintendo is trying to cling to what they think will be innovation and something fresh for buyers. But in a business where so many games are made cross-console, and market share is in the end really determined by the number of games you can get developed on your system, you don't get that luxury. Innovation is great for side projects and gimmicky products [EyeToy, dance mat].
But you want solid fundamentals to support those gimmicks. That means standard controllers, machine specs that don't fall below the competitors', and NOT being dependent on a mascot that isn't all that popular anymore.
We hope and pray Nintendo will conform and get back that market share. But as for me, I'd say they're doomed to go the way of Sega.
I think I recall that the Revolution will be able to use GCUBE controllers as well as play GCUBE titles. That, plus being able to play older classics, and a low price point, like $200 AUD, may see me buying one eventually so that I can play the old GCUBE titles and Nintendo classics which I have heard about but not played, along with the half-decent titles released using the new controller - most will probably be third-party mutton dressed up as lamb under the guise of "innovation", or at least that is my impression.
But it will not be my main console at all; it will sit alongside it - and it may lose out to the X360, to be honest.
Perhaps for that reason it may do reasonably well, perhaps better than the GCUBE. But being able to play the odd title that came out only on the GCUBE, especially compared to next-gen titles on the PS3 and X360, isn't going to be incentive enough for consumers - beyond Nintendo fan-boys - as the graphics have aged, and with them the gameplay has diminished. I mean, let's face it, a lot of titles without their graphics would not be anywhere near as interesting or "innovative" as first thought compared to a real next-gen quality title, as it was the wow-factor of "graphics" that sold the title in the first place.
This is made even more glaring with the older back-catalogue titles. Anyone who has played a classic will know that how they remember the game is not what they end up playing when they take a trip down memory lane, as their perceptions of the game have been "updated" to today's standards in game mechanics, graphics and tech in general - and what probably made more than one title "cool" back then wasn't its "gameplay" but rather its graphics, which are quite frankly utterly shit compared to today's standards, not to mention the questionable "gameplay" :/.
I am somewhat in agreement with Caroo. I think the whole Mario thing is the problem, as it represents their "kiddy" approach to the gaming market - not to mention a tired one. Anyone who disagrees should ask themselves why the original PlayStation has done so well and why the GCUBE has done so badly ;)
But personally, I couldn't care less whether Nintendo goes under or not.
I think Sega has more right to still be kicking and releasing consoles than Nintendo - the Dreamcast, for instance. Personally I have always hated Nintendo games for the kiddy factor, and have found the quality of the gaming experience in general lacking compared to other platforms. At the end of the day, if a company like Nintendo - who managed to sell an inferior product in the Game Boy over a superior one in the Atari Lynx in the initial handheld wars, killing the Lynx - can't evolve with the times and ends up going extinct because of it, well... they have only themselves to blame, and I cannot think of it happening to anyone more deserving :)
But hey, that is my cynical opinion ;)
Just have to point out one more thing about .NET and C# - it is very end-user cross platform when you consider you can program content for all mobile devices and internet browsers using it. I think there will be a convergence of home media centres, pay-on-demand, games and virtual-reality-style stuff over the next 10 years that may see consoles either become integrated with, or the platform for, future home media centres. MS knows this, and they will fight one hell of a fight for this world, as it will probably also include streaming internet TV / phone, i.e. totally integrated. In the long run I'm not sure Sony are really up for this fight - the mother of all end-user computing fights.
quote:Originally posted by poser
Just have to point out one more thing about .NET and C# - it is very end-user cross platform when you consider you can program content for all mobile devices and internet browsers using it. I think there will be a convergence of home media centres, pay-on-demand, games and virtual-reality-style stuff over the next 10 years that may see consoles either become integrated with, or the platform for, future home media centres. MS knows this, and they will fight one hell of a fight for this world, as it will probably also include streaming internet TV / phone, i.e. totally integrated. In the long run I'm not sure Sony are really up for this fight - the mother of all end-user computing fights.
To call C# and .NET cross platform is choosing a *very specific* definition of cross platform. It's called smoke and mirrors.
J2ME is a standard for mobile devices (they number in the hundreds of millions); even C++ works on at least an order of magnitude more devices than C# (a far cry from "all"). Not one mobile phone from the various manufacturers' devices I've programmed on supported C#.
An assembly program written to run on only one computer and no other can also serve up any content to any web browser. What's your point?
I agree convergence will happen, but only if DRM doesn't take too much of a hold. See http://arstechnica.com/news.ars/post/20060322-6434.html
Apple are spitting chips at being told to cross license their DRM, they call it "state-sponsored piracy". So no iTunes for your PSP, or likely your media center device (unless it's an Apple) in the future.
The real reason DRM is so big is not piracy (which it is very poor at stopping), it's vendor lock in and potential monopoly status and profits.
MS may win, but if they do I think it will be because they bought someone who did, not because they did it on their own.
:-)
First it was Jacana and I agreeing about something, now it's Dragoon and I agreeing about something... This site is getting strange...
Poser, have a look at http://parrotcode.org "one bytecode to rule them all". Parrot is a multi-language VM and series of Just Too Late (tm) compilers for different CPUs that seems to be shaping up to be the open source world's answer to .Net
When Microsoft gets someone to say that something is the "preferred platform" for anything, it's not worth a grain of salt.
Let's just remember back when they said that Visual Studio 2005 was the ideal environment for making HL2 mods... 6 months later you STILL couldn't compile the source code on it.
[QUOTE=lorien]Poser, have a look at http://parrotcode.org "one bytecode to rule them all". Parrot is a multi-language VM and series of Just Too Late (tm) compilers for different CPUs that seems to be shaping up to be the open source world's answer to .Net[/QUOTE]
Parrot's been a promise that's been a long time coming and ... well, it's still not here.
I believe Mono is the open source community's response to .NET :)
Microsoft or Borland or GCC
All 3 have free C++ compilers. Which is the best for games? Has anyone used DigitalMars C++ or D?
Stay away from Borland is my advice. It's highly non-standard C++ and, in my own experience, far more trouble than it's worth.
Lots of people like MSVC. I prefer GCC (though I admit MS have a better optimiser atm).
Stay away from Cygwin if you're interested in GCC for games, use mingw32.
D is not compatible with C++, which could make using DirectX interesting.
On the GCC subject, version 4 has an entirely new optimisation framework - atm it isn't as good as GCC 3.4's, but it's a much cleaner and more hackable optimiser. It does some basic auto-vectorisation now, and the optimiser is only going to get better.
GCC 3.x and 4.x are the most standards-compliant and trouble-free compilers I've used, which for me is what counts. They take their time to compile C++ though - particularly on Windows. No idea why it's slower on Windows...
"fortune" is a unix prog that dispenses randomly selected gems of wisdom from a database btw [:)]
I'd go with Visual Studio purely for the debugger, but that's me :)
GCC can be very anal, but it is the most standards compliant.
Oh, and VS2005 is also free... well, not quite. I don't think you can release software legally using the free compiler, but it's free to use until you get to that point. Plus the MSDN docs are fantastic.
I assume you can still use PIX with GCC-generated DirectX code, but it's probably something worth checking.
MattD
[url="http://msdn.microsoft.com/vstudio/express/default.aspx"]vs2k5 is free in the Express edition for any purpose your heart desires including full commercial use.[/url]
However, it ships as a .NET 2 compiler only. You need the Platform SDK and some arcane magic to set it up for native apps. I can't find the link now, but it's somewhere on the Visual Studio site.
In terms of every day usage and features and robustness I order the compilers so:
1st choice: MSVC - Hate it or not, this is the de facto standard, and by far and away the easiest to get going on a Win platform. I don't consider Linux a valid platform for development of common apps, because 90% of the market says so. Soz. Most of the applications I have made have been Win based as well (even tools for Linux systems and compilers).
2nd choice: gcc - Only used because of cross platform compatibility (for the few apps I need it for).
3rd choice: MinGW - I have _had_ to use this for building some apps (ffmpeg and so forth) and I simply don't like it. Far too much system modification, and little integration.
I've also used various other compilers like ARM and MIPS ones. These are often quite an experience to get working on Windows, although the Metrowerks ARM compiler is pretty decent, pity about the IDE.
For making games on PC - imho there is simply only one choice: MSVC. All other solutions will simply be full of difficulties in setup and usage. The only time you might end up wanting to go GCC is if you need to do console or handheld work. Even then, you will end up with SN Systems tools (for Sony products), Metrowerks (for Nintendo products) and MSVC for MS consoles.
quote:Originally posted by Grover
In terms of every day usage and features and robustness I order the compilers so:
1st choice: MSVC - Hate it or not, this is the de facto standard, and by far and away the easiest to get going on a Win platform. I don't consider Linux a valid platform for development of common apps, because 90% of the market says so. Soz. Most of the applications I have made have been Win based as well (even tools for Linux systems and compilers).
Hmm, VS2005 is the most annoying piece of cruddy bloatware I've used in years. It refuses to compile some perfectly legal C++, MS have screwed up their crazy template DLL export system, and they've forgotten that people need to build the odd DLL with the 'free' version (which installs that slimy Windows 'Genuine Advantage' spyware): it's missing required libraries that don't come with the SDK and are impossible to download. I ended up copying them from a full install. The list goes on.
Then there's the insanity of the manifest stuff and the msvcrt versioning stupidity... And the fact that they haven't bothered to implement the C99 standard by 2006!
MSVC 2005 is still a pretty broken compiler. Much better than MSVC 6 admittedly, but it's still broken, particularly with DLLs and templates, which are two things I use all the time. It's a POS imho.
Just as a quick side note, anyone who's used MSVC for any length of time and found themselves swearing at the computer because Intellisense is just so **#^ing useless, should check out [url="http://www.wholetomato.com"]Visual Assist X[/url].
Some of my utterances within the first hour of use:
- "Holy crap! Intellisense actually works now!"
- "I clicked the "Go" button, and it when straight to the function definition!"
- "It's underlining misspelled functions and variables that haven't been declared yet!"
It's not perfect (it has some trouble dereferencing an iterator into a vector of Boost smart pointers, for example) but it is just so far superior to the MSVC offering that you'll easily forgive the minor shortcomings.
Fortunately, work is going to pay for it for me because there is absolutely no going back.
MSVC undoubtedly has the best debugger and the best optimiser. I prefer the VC6 IDE, but a lot of people love the new one.
It is basically what all game developers are using these days. They may use Intel C++, but usually as a plugin to the Visual Studio IDE, which they still use. By the way, the majority of Win32 binaries we put out ARE DLLs.
Honestly Lorien, you cannot compare the amount of setup problems on VS2005 against GCC or MinGW. For the game developer, you usually buy the optimised compiler and studio. Having projects with ARM compiler setups (for GBA/DS), console projects and PC projects, the VS compiler is still by far the easiest to use, simply due to the IDE alone. Whatever nuances you have with templates are just as problematic on other platforms and other compilers, if not more so. For example, try using advanced templates on an SGI machine like an Origin 4000... and these guys are supposedly the standards makers.
Btw I work with quite advanced template packages and dlls every day, and I have found VS2005 to be the best effort yet. 2003 was pretty solid too, although the 2002/2003 workspace changes were annoying - but hey, nothing is perfect. Name me a workspace in another application on Win32 with similar features? There are plenty of instances with gcc and MinGW that show similar template problems - and for dll development, MinGW sucks bigtime. And gcc - well.. yeah.. you can make dlls... not that getting there is headache-free though, is it.. . Next you'll be telling me makefiles are awesome.. and Ant is brilliant.. :)
But we are talking about game development. And any serious game developer needs a good integrated IDE for development, debugging and versioning. Most compiler makers have recognised the dominance and features of the VS toolset, so you will often find most compilers can be used within the VS environment in any case. For making games, you need something that supports DX and MS Windows - like it or hate it, this is the most common platform around, and simply the biggest gaming platform around.
If I were making cross platform tools I might recommend gcc, or if I were making IP to be released free that needed to be compilable with free compilers I might choose gcc. But for games, it makes no sense to use anything else; it's an industry standard. Ask any programmer in any game development studio. And please.. don't refer to the Linux and OSX markets - sure, there are games on these machines, but the Win32 platform outnumbers them 100 to 1.
What I think people should ask is: can I do similar things on a Win32 platform with any other toolset - ie compiling, debugging, editing, and versioning within a complete package with a fairly consistent interface? That's the question - and honestly there is simply no comparison. You have things like emacs, Code::Blocks, Dev-C++ and such, but they are so far away from a game development toolset it's not funny.
I have spoken to a games programmer from a reputable games company making games for PS2 who does all his coding using GCC as compiler and Vi as editor. While I use MSVC now, at university I did all my coding the same way. To an extent I think I wrote better code back then simply because an IDE that does too much for you can promote lazy programming practices.
The question as far as I can tell is "How do I like to work and what toolset will support that?". Sometimes that choice is severely limited by the platform you might be targetting: if you're targetting Xbox then you are pretty well limited to the obvious. But you don't _need_ MSVC and DX. OpenGL is still the superior API, in my opinion, and there are a lot of games guys out there that use it...particularly if you're targetting Playstation.
My point is that whatever your chosen toolset, you actually have to work with the thing, and if you find yourself constantly fighting with it instead then you've probably made the wrong choice.
Im purely speaking from a work experience point of view. I too know emacs and gcc fans that stayed on that on PS2 dev - however if you have ever used SNSystems compiler, ee & vu debugger youd soon realise how poorly productive you are. And thats not based on peoples preferences, or otherwise, its purely from a feature set, and capability standpoint. I remember at a certain games co, they simply didnt warrant spending the cash on such tools, and thus always languished behind other studios that did.
Lazy programming practices dont come from IDEs at all, in fact they originate from poor design practices. A half decent IDE will save you _hours_ over a non-integrated one for example - in debugging time alone. This again, isnt from simple ad hoc commentary, this is from some 10 yrs in the business of building apps, games, sims etc.. I have used quite a few of the tools you speak about for professional work, from gcc, to msvc, to arm, to sgi, to console (PS2 snsystems and gcc), with varying IDEs, from Metrowerks, to VS, to nedit, to emacs, to vi, borland, devc++, and code blocks to name a few (even a couple of my own tcl based toolsets). The fact is its a job - your aim is to do the work efficiently, effectively and in the smallest possible timeslot you can with no bugs :) ... MSVC goes far further in achieving this, than any other setup on Win32.
Whether you 'like' a toolset is almost utterly ignored these days. You need to use the tools that dev studios buy. Game studios (or ones with some form of cash) buy decent toolsets. Unless you are a core engine programmer on PS3, these days, MSVC is simply what you will NEED to know, understand and use. Im not promoting MS products, and as lorien knows I personally abhor MS as a company. But I am simply talking facts about making games. I even personally still use Fedora Core 4 and various Linux based tools to build many apps, but not games. I cannot see that any game coder would put gcc in front of the MSVC tools (and attached SN Tools and compilers for PS2). If you are still using Vi and gcc on PS2 - you need to get 4K.. and buy some decent tools.
As for OpenGL still being the superior API - that's sadly very, very, very mistaken. OpenGL is in the throes of being reinvented, and may see the light of day with some improvements (hopefully), but it is currently so far behind DX in functionality and usability alone that it's horrible for use with games on PC (even though many people will wave the GL flag endlessly). The main reason for this is extensions, and shaders. GL has no serious comparative shader language to match HLSL. Again, don't quote GLslang and GLSL, because that will just show how little you know about HLSL. It's a sad fact - and very sad, because the GL API used to be the best around by far. Now, with the horrors you need to go through to get a wide range of PC cards running shaders... it's a waste of time. While you spend months writing fallbacks to cover your shaders, you could have written a single HLSL file, installed DX9.. and you're done.
PSP and PS3 will use OpenGL, but not true OpenGL - it's OpenGL ES (PSP really has a hacked-up API of its own). PS3's OpenGL ES shader system uses NVIDIA's Cg underneath - which is sensible for a specific platform, but it ain't pretty to use. Hopefully OpenGL can make a transition back to something like its original greatness.. but as it stands at the moment, it's a pretty sad mess.
Finally, if you _don't_ go and learn to use MSVC, you will simply be doing yourself a disservice for two main reasons:
1. You will fall behind in use of the most commonly used game development tools - esp debugger, versioning integration and so on.
2. You will find it hard to work with other people/studios that predominantly use MSVC for game development.
What I find most bizarre about gcc lovers, though, is that anyone can even remotely call makefiles and/or Ant a good choice against workspaces for building large game projects. If you thought MSVC had probs.. my god.. try a large scale project with make and ant...
Point certainly very well taken and understood. However, I interpreted the original post in the context of homebrew development, in which case, as I've already said, there are a few more things to consider.
As for OpenGL, I fully realise it is languishing...and no small part of that is probably due to SGI handing over all the IP to Microsoft for cash in order to save their sinking ship. In some ways it's difficult to compare OGL and DX because their design philosophies are quite different, and one should note that DX can make some obvious assumptions about the platform it is running on where OGL can't.
What I liked about OGL was that, for one, its API is elegant and clean. It's a C API and doesn't try to be anything else. DX doesn't know if it's C or C++, so we've got classes but every name has a prefix of some sort tacked on (which is what namespaces are for!), plus there's Microsoft's COM paradigm to contend with. The other thing is that it's pretty easy to get something going quickly in OGL with only a few lines of code, which is important to me because I do a lot of prototyping and proof-of-concept work. In DX, I groan at the thought of yet again setting up all the vertex and index buffers just so I can test an idea. Performance at this stage is irrelevant. (Some of the DX extensions might help here but most of the time they aren't an option.) No doubt OGL vertex buffers involve the same pain, but you don't have to use them if you don't want to, whereas you have no choice in DX.
Having said all that, I've not had a lot of experience using OpenGL...what little I do have was actually quite pleasant. I haven't used GLSL at all so I can't comment on it but I'm not sure what you're getting at with HLSL. You still have to write fallbacks if the GPU doesn't support the shader version you are using. An HLSL shader can't magically rewrite itself depending on the capabilities of the GPU. If you're talking about the DX Effect system, it might make writing fallbacks easier but it still doesn't do it for you.
Hrm. Even as a homebrew developer on Win32 - the points still hold. By not using many of the industry standard tools you are simply alienating yourself and as I mentioned, making things harder for yourself down the track. Again Im not talking about my own personal preferences, since that should not be accounted for when trying to determine the most useful toolset for developing games on Win32.
The simple fact is in OGL you cannot use advanced HW GPU techniques as easily - in HLSL I am referring to techniques. This method of writing multiple platform compatible shaders is not available in OGL. Yes, DX WILL do many things for you to abstract the platform (especially in relation to shaders) - thats the whole reason OGL is so far behind. My guess is you probably haven't done a lot of shader work in GL or DX. I can write a single fallback in a technique in HLSL that will work on both ATI, NV and Matrox and others. Try doing that in OGL - you cannot. So what you say is untrue - HLSL is a high level language that is built to be able to be used on multiple types of GPUs - this is its utterly huge benefit. You try and support multiple GPUs in OGL - even if you use say ARB extensions (which are supposed to be a GPU standard) you'll find so many differences in how the cards implement the ARB standard its crazy. In fact, sadly even OpenGL itself isnt implemented properly on all platforms - for instance fog is implemented differently between ATI and NV, and in OGL with shaders you can easily become caught by this, and in DX its all nicely abstracted for you. On DX that problem is reduced massively. Again this is much more important when writing a PC game, where you want as wide exposure as possible with minimal effort (especially if you are working from a homebrew perspective).
Overall, I like the OpenGL API and I even use it in my own apps/tools/homebrew (see LuaEng for example), but again, it's not correct to promote an API that is severely flawed, or ignore a toolset that is an industry standard. Personally I don't _want_ to agree with what I have pointed out - but these are _glaringly_ obvious facts, not some rant of an old grumpy coder (although.. hehe they look like it). But I don't believe in pushing information that doesn't hold true. With relation to development, I think personal favouritism should take a back seat - I prefer a balanced critique of the realities of these tools and systems, especially when you relate it to game development and Win32. Things change dramatically if you are restricting your platform, or your OS, or your application type (and associated market). But this is games...
quote:Originally posted by Grover
Im purely speaking from a work experience point of view. I too know emacs and GCC fans that stayed on that on PS2 dev - however if you have ever used SNSystems compiler, ee & vu debugger youd soon realise how poorly productive you are. And thats not based on peoples preferences, or otherwise, its purely from a feature set, and capability standpoint. I remember at a certain games co, they simply didnt warrant spending the cash on such tools, and thus always languished behind other studios that did.
Using GCC doesn't mean the whole emacs/vim deal, there are plenty of IDEs with all the features of MSVC (ok, except intellisense). Try Eclipse with the CDT and subeclipse and scons extras or KDevelop (Unix only) for example. I have no arguments about makefiles being nutty, and the gnu automake/autoconf is even more cryptic. Likewise I don't use GDB directly, but through a gui.
quote:
The fact is its a job - your aim is to do the work efficiently, effectively and in the smallest possible timeslot you can with no bugs :) ... MSVC goes far further in achieving this, than any other setup on Win32.
My main problems are with the compiler and linker, not so much the IDE. It is bloated though. Also by being so tailored to win32 MSVC gets in the way of cross platform software somewhat.
quote:
Whether you 'like' a toolset is almost utterly ignored these days. You need to use the tools that dev studios buy.
Depends what for I think. I don't make commercial games and have no intention of doing so. I have to teach MSVC at work though- even Managed "C++" this semester [:(]
For linux dev it's expected to be a GCC wizard, in the same way as for game dev it's expected to be an MSVC wizard. It's a toolset you find everywhere, so is a very handy thing to be experienced with.
I think for non-commercial games built using open engines and libraries MSVC isn't the automatic choice simply because open code gets compiled far more often with GCC than MSVC, so is far more tested with that toolset. Open libraries are sometimes badly crippled when you build them with MSVC due sometimes to broken parts of MSVC and sometimes due to use of GCC language extensions.
Anywhere that stability of generated code and cross-platform compatibility is desired over optimiser speed GCC is the way to go imho.
quote:Originally posted by lorien
quote:Originally posted by Grover
Im purely speaking from a work experience point of view. I too know emacs and GCC fans that stayed on that on PS2 dev - however if you have ever used SNSystems compiler, ee & vu debugger youd soon realise how poorly productive you are. And thats not based on peoples preferences, or otherwise, its purely from a feature set, and capability standpoint. I remember at a certain games co, they simply didnt warrant spending the cash on such tools, and thus always languished behind other studios that did.
Using GCC doesn't mean the whole emacs/vim deal, there are plenty of IDEs with all the features of MSVC (ok, except intellisense). Try Eclipse with the CDT and subeclipse and scons extras or KDevelop (Unix only) for example. I have no arguments about makefiles being nutty, and the gnu automake/autoconf is even more cryptic. Likewise I don't use GDB directly, but through a gui.
Hrm - I think people might slap you for the Eclipse comment (at least most of the Java coders I know hehe) :) Again, for Win32, and games.. Id be very hesitant for that combo at all - maybe a Java game? Hrm.. even Netbeans is nice.. but it aint no VS.
What I was referring to was the compiler though. SNSystems' compiler, linker and debugging toolset for PS2 vs the gcc-based one that comes with the PS2 devkit is like chalk and cheese - gcc being horribly cheesy. Again, you need to use them both to see what I mean; the difference in productivity is quite obvious. And a person who uses gcc on PS2 is simply missing out - ie it's not a good example in the pros for gcc. The SN compiler was originally gcc-based even - but has been improved.. a touch ;)
quote:
quote:
The fact is its a job - your aim is to do the work efficiently, effectively and in the smallest possible timeslot you can with no bugs :) ... MSVC goes far further in achieving this, than any other setup on Win32.
My main problems are with the compiler and linker, not so much the IDE. It is bloated though. Also by being so tailored to win32 MSVC gets in the way of cross platform software somewhat.
For me its the whole working package. And for most people. Once you start splitting the compiler, linker, debuggers up - you just end up with heartache. From my exp, there have been many times where the three tools have been entirely separate. This decreases your development capabilities bigtime. Coders simply dont have the time to learn different toolsets and nuances of each compiler and system to be developing with. Gcc, and its linker is a classic example here, where knowing options is core to getting the best from the compiler/linker (and sometimes just getting it to work - homebrew psp gcc for instance) :) I know.. it's horrible to think that programmers should not have to worry about these things, but I believe in this day and age, they shouldn't. Unless you are a compiler writer, or maybe a performance tuner, you should be able to just get in and code asap - in 20 years time, Im sure we wont even want to know how C++ works :) (in reference to how few people even bother with gas and such these days).
Again, not something I prefer, but something I believe is simply logical - Would you spend time learning how to code in Forth? Would you spend time learning how to get the best out of an Origin 4000 (geez.. that was a waste)... and so on. By focussing your efforts to the optimum, you then have the best chance for jobs, for support, and for future development options.
quote:
quote:
Whether you 'like' a toolset is almost utterly ignored these days. You need to use the tools that dev studios buy.
Depends what for I think. I don't make commercial games and have no intention of doing so. I have to teach MSVC at work though- even Managed "C++" this semester [:(]
Yep. Agreed. Its very application specific. However, gaming is 90% Win32 these days. Again, this is a matter of fact and making best use of energy expended ;) Do you target a niche market (btw Im not suggesting not to - but make sure you do your homework on the market first).. or do you target a much bigger audience with better likelihood of success or exposure?
quote:
For linux dev it's expected to be a GCC wizard, in the same way as for game dev it's expected to be an MSVC wizard. It's a toolset you find everywhere, so is a very handy thing to be experienced with.
I think for non-commercial games built using open engines and libraries MSVC isn't the automatic choice simply because open code gets compiled far more often with GCC than MSVC, so is far more tested with that toolset. Open libraries are sometimes badly crippled when you build them with MSVC due sometimes to broken parts of MSVC and sometimes due to use of GCC language extensions.
Yep. For non-commercial, open sourced libs are ideal, and very often gcc - if you are looking for gcc solutions :) There is also a fairly large amount of VS-only based libs and such too. All depends really where you shop :)
Also, I hate to say it, but open sourced gaming isnt exactly burgeoning with numerous hot game titles :) Although... I do like that TA clone :) But Im sure there are plenty of good titles around, its just that like most things in this world, cash gets you resources.. and in making games.. thats pretty much a necessity. Ask any indie here :)
quote:
Anywhere that stability of generated code and cross-platform compatibility is desired over optimiser speed GCC is the way to go imho.
Hrm. Stability is debatable :) - there are plenty of issues in gcc too. It's not immune from problems creating quality code. It certainly has a good turnover of people trying to improve it - albeit introducing issues as well sometimes ;) It is definitely the only choice for cross-platform however, and that is important if you are looking to make a cross platform game. Especially people interested in homebrew.
Actually I'll insert a fat disclaimer here:
For homebrew lorien is dead right - gcc is prolly the best choice by a mile for DS, GBA, PSP, PS2 etc .. if not the only choice. Getting VS to help building with those can be a pain (it is doable in 2005 though I should point out - with a pretty simple plugin).
Mind you, gcc is a mammoth effort - many languages supported, many platforms, binary types, and so on - it really is amazing it compiles anything :) But it is impressive, no doubt about it. In that light alone gcc stands very tall in my book, but again I'll note Im talking Win32 and game development, and I'd love to use gcc for game dev (actually I do in a couple of instances :) .. Code::Blocks + gcc for homebrew PSP) but it's not a serious answer for PC Win32 DX development.
On the flipside to my argument: if you extrapolate what Im saying about generic development systems and programmers not needing to be at the low level - Im not sure I'd actually want that to be the case. But I do recognise that systems, OSes and libraries are all becoming increasingly complex, and that programmer roles are becoming more and more specialised. I'd very much personally HATE MS to end up with a stranglehold on the tools we use to develop - but unless someone comes along with a serious competitor (with a good integrated debugger).. we are stuck.
Anyone got a comparable IDE to VS for game dev? Im keen to hear if there is something new around I havent heard of that can flush this argument of MS away...
quote:Originally posted by Grover
Hrm - I think people might slap you for the Eclipse comment (at least most of the Java coders I know hehe) :) Again, for Win32, and games.. Id be very hesitant for that combo at all - maybe a Java game? Hrm.. even Netbeans is nice.. but it aint no VS.
Eclipse 3.1 with the CDT is very cool indeed once it's set up properly. I know quite a few java coders for whom it's the only IDE worth using.
quote:
What I was referring to was the compiler though. SNSystems' compiler, linker and debugging toolset for PS2 vs the gcc-based one that comes with the PS2 devkit is like chalk and cheese - gcc being horribly cheesy. Again, you need to use them both to see what I mean; the difference in productivity is quite obvious. And a person who uses gcc on PS2 is simply missing out - ie it's not a good example in the pros for gcc. The SN compiler was originally gcc-based even - but has been improved.. a touch ;)
If it's based on gcc they are likely in breach of the GPL: I haven't seen the compiler sourcecode around anywhere... Correct me if wrong of course. I have nothing to do with ps2-dev.
quote:
For me its the whole working package. And for most people. Once you start splitting the compiler, linker, debuggers up - you just end up with heartache. From my exp, there have been many times where the three tools have been entirely separate. This decreases your development capabilities bigtime. Coders simply dont have the time to learn different toolsets and nuances of each compiler and system to be developing with. Gcc, and its linker is a classic example here, where knowing options is core to getting the best from the compiler/linker (and sometimes just getting it to work - homebrew psp gcc for instance) :)
Umm, MSVC is exactly the same. If anything its options are more cryptic and make it even easier to break binary compatibility...
quote:
Again, not something I prefer, but something I believe is simply logical - Would you spend time learning how to code in Forth? Would you spend time learning how to get the best out of an Origin 4000 (geez.. that was a waste)... and so on. By focussing your efforts to the optimum, you then have the best chance for jobs, for support, and for future development options.
Personally, I think the game dev crowd should try and get over making a broken compiler and linker their standard tool... I find I simply can't do things with MSVC that are no problem at all with GCC, and they are useful things (ever been bitten by MSVC's template/dllexport issues?). I argue that MSVC isn't the optimum at all and never has been.
quote:
Yep. Agreed. Its very application specific. However, gaming is 90% Win32 these days.
And game-dev is a tiny niche for programmers.
quote:
Also, I hate to say it, but open sourced gaming isnt exactly burgeoning with numerous hot game titles :)
Nor is commercial game dev :)
quote:
Hrm. Stability is debatable :) - there are plenty of issues in gcc too. It's not immune from problems creating quality code. It certainly has a good turnover of people trying to improve it - albeit introducing issues as well sometimes ;) It is definitely the only choice for cross-platform however, and that is important if you are looking to make a cross platform game. Especially people interested in homebrew.
GCC isn't perfect by any means, it's just much better than MSVC... Sometimes I wonder how many of the problems in Windows are actually caused by MSVC bugs.
quote:Originally posted by Grover
The simple fact is in OGL you cannot use advanced HW GPU techniques as easily - in HLSL I am referring to techniques. This method of writing multiple platform compatible shaders is not available in OGL. Yes, DX WILL do many things for you to abstract the platform (especially in relation to shaders) - thats the whole reason OGL is so far behind. My guess is you probably haven't done a lot of shader work in GL or DX. I can write a single fallback in a technique in HLSL that will work on both ATI, NV and Matrox and others. Try doing that in OGL - you cannot. So what you say is untrue - HLSL is a high level language that is built to be able to be used on multiple types of GPUs - this is its utterly huge benefit.
At the risk of sounding petulant, I've been working with HLSL now for around 12 months. HLSL is not able to be used on multiple GPU types simply because it is a high level language. HLSL code is still compiled to an object code that will only work on GPUs that support the instruction set, i.e. shader version, for which the code was compiled. So if you compile an HLSL shader targeting shader version 2.0, that shader will run on any GPU that implements the shader version 2.0 instruction set. I have had trouble lately because I've been using Shader 3.0 for everything, and up until very recently, ATI did not produce any GPUs that supported it; only NVidia.
So what you say is not strictly true - you can write shaders (as well as techniques but you don't have to use techniques if you don't want to) that will run on anything but the reason is not because you are using HLSL. You can go back to the pre-Cg days and write assembly code that will do that. It's because the GPUs support it.
And I've said before, I've not had any experience with OGL shaders or GLSL, so I'll just have to take your word for it, as far as that goes.
...and to sum up:
MSVC is "Better" than GCC because...
Mac is "Better" than PC because...
OGL/DX9, ATI/Nvidia, Ford/Holden, XXXX/VB, apples/oranges, chalk/cheese...
My experience is that if you're used to working with GCC, MSVC is a pain and vice versa. However right or wrong it is, MSVC is the 800-pound gorilla in the gamedev industry and you ignore it at your commercial peril. Those in research or at lower levels of abstraction will probably be at the point where they can make an educated choice of one over the other based on their own needs, but the rest of us peons work with what we're given to focus on the job, not the tools.
-d
Possible Contract Job
Hi Guys,
I thought I'd post this in here not in the jobs section because firstly I'm looking for some solid programming advice.
I have a tool in development which I'm looking to get finished for Kalescent Studios.
Some good aggressive progress has been made on it - but I need to discuss the possibilities of extra features and options with some fresh minds.
Please email info(at)kalescentstudios(dot)com or answer in this thread if you're interested in taking a closer look or offering your input.
To summarise how things would proceed from that point onward:
1) Organise NDA.
2) Supply Source and Current Design Documentation.
3) Discuss Additional Features and Implementation possibilities / avenues.
4) Agree on Quotation / Schedule for completion.
5) Contracts.
6) Conquer!
Cheers!
New toys coming in the next C++ revision
http://www.artima.com/cppsource/cpp0x.html
Look like they are tackling some of the things that make me swear a lot. That can only be a good thing [:)]
quote:Originally posted by pb
http://members.safe-t.net/jwalker/programming/interview.html
pb
Rather cheeky putting that old joke up pb [:)] Don't take it seriously anyone, it's a well known (and very well done) fake.
quote:Originally posted by pb
I've grown to hate C++ over the last 6 months. It seems to mess with people's minds. For a particular task, if they were coding C, they wouldn't use a table of function pointers, but give them C++ and they start using virtual functions (which does the same thing)... Why!?
Hmm... a couple of possibilities I can think of:
1) the C design was made better, more readable/maintainable, more powerful/flexible or more future proof by abstraction, inheritance and virtual functions.
2) they don't really understand virtual functions and just realise that "bad things" can happen if they don't make a function virtual.
:-P
Maybe try Objective C pb, the language of NextStep/MacOSX. It's slower than C++ but doesn't have most of the gotchas. C made object oriented in a smalltalkish kind of way.
COM is the worst culprit for virtual functions of course.
I don't like plain old C much (I've written some sizeable apps in it), and I really detest system level programming languages running inside a virtual machine (like Java and C#), hence I'll be sticking to C++ in combination with a scripting language for a while yet.
Bjarne has had some good ideas in the past: http://public.research.att.com/~bs/whitespace98.pdf ;)
Here is a nice presentation by Herb Sutter on sexy future features (from the IDE perspective) as well, especially wrt multithreading: http://microsoft.sitestream.com/PDC05/TLN/TLN309_files/Default.htm
Just remember in C++ you only pay for features you use, and some people should just not be allowed near C++ compilers.
(btw lorien C# is JIT compiled)
quote:Originally posted by Dragoon
Hmm... a couple of possibilities I can think of:
1) the C design was made better, more readable/maintainable, more powerful/flexible or more future proof by abstraction, inheritance and virtual functions.
2) they don't really understand virtual functions and just realise that "bad things" can happen if they don't make a function virtual.
:-P
The "C design"? You can do abstraction with function pointers in C. This idea that if you were implementing an OOD project in C you'd have that silly SQUARE,TRIANGLE,CIRCLE switch statement that you constantly see in C++ literature is just crap.
I reckon that when people are forced to explicitly put that function pointer table in they seem to make more sensible decisions about abstraction and inheritance.
Mindlessly making functions virtual can also cause bad things to happen. But I guess you're right on this point, if you're playing the odds you'd make it virtual (although it will cost you in speed and memory).
pb
Redwyre - You're asking for trouble posting an IE-only link in response to one of lorien's posts [:P]. Also, believe me, java is also largely JIT compiled (mutter-debugging-sun's-thread-firkin-unsafe-JIT-compiler-on-sparc-mutter) :)
will watch young herb's presentation with interest...
edited to make it clear I was responding to redwyre's post, not pb's
quote:Originally posted by pb
...give them C++ and they start using virtual functions... Why!?
Because of the [url="http://www.joelonsoftware.com/articles/LeakyAbstractions.html"]Law of leaky Abstractions[/url], programmers either don't know or don't care that these are, in effect, the same thing.
So, it comes down to whether you'd prefer to employ (or work with) a programmer that doesn't know, or one that doesn't care. Personally, I choose the 'don't care' every time. After all, you know what they say about premature optimisation.
quote:Originally posted by pb
The "C design"? You can do abstraction with function pointers in C. This idea that if you were implementing an OOD project in C you'd have that silly SQUARE,TRIANGLE,CIRCLE switch statement that you constantly see in C++ literature is just crap.
I reckon that when people are forced to explicitly put that function pointer table in, they seem to make more sensible decisions about abstraction and inheritance.
Mindlessly making functions virtual can also cause bad things to happen. But I guess you're right on this point, if you're playing the odds you'd make it virtual (although it will cost you in speed and memory).
pb
Yes, you can do object-oriented programming in C - the cleanest example I've seen of it is the GTK windowing toolkit for Linux. You wouldn't have the switch statement for it under GTK in C, but something like:
[code]
if( GTK_FRAME_CAST( obj ) )
{
    // stuff
}
else if( GTK_CANVAS_CAST( obj ) )
{
    // other stuff
}

// or:

GtkWindow* window = GTK_WINDOW_CAST( obj );
if( window )
{
    // stuff with window
    // eg:
    gtk_window_show( window, 1 );
}
[/code]
which is not too different from C++, but requires a lot more effort to implement in C.
(the specific macro names probably aren't accurate - but they basically cast obj to the type pointer or return NULL if it isn't possible through the ?: operator)
I guess what I am trying to say is that C++ does "virtual functions" much more cleanly and with less room for error, because they are part of the language and automatic with the keyword. If you use function tables in C as a replacement, odds are that, thanks to compiler optimisations, they would be slower than a virtual function in C++.
Still, back to the original question: sure, you could move code from C to C++ without using virtuals, but as in case 1, using them could improve the code quality considerably through a better design, without the pain of trying to implement object-oriented inheritance in C. This largely depends on what you're moving to C++ though; certainly not all code would warrant virtual functions or be improved by them.
I guess I ask myself: if I were programming in C, I would program it the C way, which is unlikely to include object-oriented inheritance, as it's a real pain and works against the grain of the language. If I were to program it in C++, my thinking and design would default to the C++ way, including thoughts on future-proofing against others using it for inheritance when picking functions to be virtual.
quote:Originally posted by Dragoon
I guess what I am trying to say is that C++ does "virtual functions" much more cleanly and with less room for error, because they are part of the language and automatic with the keyword. If you use function tables in C as a replacement, odds are that, thanks to compiler optimisations, they would be slower than a virtual function in C++.
Really? Do you have some example code that shows superior compiler output using a virtual over a function pointer table? Or is this just more C++ dogma? I've even heard C++ evangelists claim that const makes your code go faster (!?)...
On the premature optimisation point - this often gets cited as an excuse not to do any optimisation at all, or to leave it all to the end, but really it's talking about it in context - obviously you optimise your design quite early, otherwise you go down a dead end and will never be able to optimise without a massive re-write.
As for doing this or that being a pain in C - that's because C is a very literal language - it doesn't do much for you. If it's a pain to code, it's probably slow to execute.
pb
quote:Originally posted by pb
On the premature optimisation point - this often gets cited as an excuse not to do any optimisation at all, or to leave it all to the end, but really it's talking about it in context - obviously you optimise your design quite early, otherwise you go down a dead end and will never be able to optimise without a massive re-write.
Agreed 100% - making sure the obvious parts are designed with speed/memory tradeoffs in mind is key. However, when I see [url="http://www.gamasutra.com/features/20051220/thomason_01.shtml"]articles about next-gen optimisation[/url] which recommend:
quote:
We can eliminate branches entirely by changing code of this kind:
if( x == 0 ) y = z;
to
y = x == 0 ? z : y;
and
if( ptr != 0 ) ptr->next = prev;
to
*( ptr != 0 ? &ptr->next : &dummy ) = prev;
it just blows my mind. Most people will miss or ignore his comment about this being applicable to "critical methods" and will start thinking that the perf-tuned code is "better" than the maintainable code above.
I'd shoot any programmer that tried this on with me unless appropriate code comments made it maintainable, and proof of them using a frickin' profiler was forthcoming. There's a lot of layers in most pieces of software, and tuning one in isolation "just on principle" normally doesn't help much.
mcdrewski is right - if you want me to read a link it's got to work in Konqueror or Firefox.
IMHO "Just In Time Compiler" should be changed to "Just Too Late Compiler" [:)] And IMHO Sun's hotspot optimisers guarantee sluggish progs because the bulk of the prog remains fairly unoptimised.
I agree with mcdrewski about optimising for next gen - that's one of the reasons I've been looking into templates. But you DON'T want that code all over the place... The way I look at it is if there is low-level code that gets used everywhere or is known to be time-critical, it's probably worth optimising - but find the bottlenecks with a profiler.
quote:Originally posted by pb
Really? Do you have some example code that shows superior compiler output using a virtual over a function pointer table? Or is this just more C++ dogma? I've even heard C++ evangelists claim that const makes your code go faster (!?)...
No, I've never looked at the compiler output, but C++ will optimise every call to a virtual for which it can deduce the type at compile time by not requiring a lookup in the function table. You have to remember to do this manually in C, and in such respects people get lazy or forget. I'm not an evangelist for any language. I take each on its merits - and not only the language's merits, but library support, maturity, community, etc. Is it that you are a C evangelist, or hate C++, or both? (Maybe you're not, but you seem not to like C++.) I can't think why else anyone would prefer C to C++ for object-oriented programming.
Const does not increase your "code speed" significantly as far as I'm aware, but it does improve your "coding speed" ;-) It traps a lot of improper function usage and errors before they happen, saving you debugging time.
quote:
As for doing this or that being a pain in C - thats because C is a very literal language - it doesn't do much for you. If its a pain to code its probably slow to execute.
Wrong on the speed point... I can see what you're getting at, but you've over-generalised. Optimisation, for example (of code, not design), almost always complicates the code (and is a real pain to do for some algorithms) and results in many more lines of code, but it produces faster code.
Object-oriented coding and inheritance in C is a pain, which I don't like because it takes significantly longer to code (many more lines), is far more prone to error than having it done for you with a keyword, is much harder to maintain, and makes it much harder for another programmer to learn your code. These are all bad things when coding.
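Dragoon's two claims here can be sketched in a few lines: a call through the concrete type is a candidate for devirtualisation, and const turns misuse into a compile error rather than a runtime bug. A minimal illustration, with made-up names:

```cpp
#include <cassert>

struct Base {
    virtual ~Base() {}
    virtual int cost() const { return 1; }
};

struct Derived : Base {
    int cost() const override { return 2; }
};

// Through a Base*, the call generally goes via the vtable; when the
// compiler can see the concrete type, it is free to skip the lookup.
int through_base(const Base* b) { return b->cost(); }

// const rejects accidental mutation at compile time: uncommenting the
// assignment below is a compile error, not a runtime bug.
int read_only(const int& x) {
    // x = 0;   // error: assignment of read-only reference
    return x + 1;
}
```

Whether a given compiler actually devirtualises a particular call is implementation-dependent, which is pb's complaint about claims made without looking at the output.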
quote:Originally posted by lorien
mcdrewski is right - if you want me to read a link it's got to work in Konqueror or Firefox.
Not even Lynx ;-(
quote:
IMHO "Just In Time Compiler" should be changed to "Just Too Late Compiler" [:)] And IMHO Sun's hotspot optimisers guarantee sluggish progs because the bulk of the prog remains fairly unoptimised.
Additionally, a separate virtual machine with its own memory confines can be good for some applications, but for the general case it's just a little strange. The JIT compilers can get algorithmic speed (in a mathematical sense) close to C, but in Java you have to work hard to write fast code, or else the garbage collector creates and destroys far too many objects - and who wants to use object pooling just to get code to run quickly? Yucky!
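The "object pooling" workaround mentioned above - recycling objects through a free list rather than allocating each time - can be sketched in C++ for brevity (a toy pool with invented names, not a production allocator):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Particle { float x = 0, y = 0; };

// Recycle Particles through a free list so steady-state operation does
// no allocation at all -- the trick Java coders use to dodge the GC.
class ParticlePool {
    std::vector<Particle*> free_;
public:
    ~ParticlePool() { for (Particle* p : free_) delete p; }
    Particle* acquire() {
        if (free_.empty()) return new Particle();  // only grows when empty
        Particle* p = free_.back();
        free_.pop_back();
        return p;
    }
    void release(Particle* p) { free_.push_back(p); }  // recycle, not delete
    std::size_t idle() const { return free_.size(); }
};
```

In C++ the same pattern is a deliberate optimisation rather than a workaround, which is the contrast being drawn here.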
quote:
I agree with mcdrewski about optimising for next gen- that's one of the reasons I've been looking into templates. But you DON'T want that code all over the place... The way I look at it is if there is low level code that gets used everywhere or is known to be time critical it's probably worth optimising- but watch out for bottlenecks with a profiling.
Yep, in general the first versions are much better. There aren't too many places in a game engine where the second optimisation would make a noticeable difference.
quote:Originally posted by mcdrewski
Agreed 100% - making sure the obvious parts are designed with speed/memory tradeoffs in mind is key. However, when I see [url="http://www.gamasutra.com/features/20051220/thomason_01.shtml"]articles about next-gen optimisation[/url] which recommend:
it just blows my mind. Most people will miss or ignore his comment about this being applicable to "critical methods" and will start thinking that the perf-tuned code is "better" than the maintainable code above.
I'd shoot any programmer that tried this on with me unless appropriate code comments made it maintainable, and proof of them using a frickin' profiler was forthcoming. There's a lot of layers in most pieces of software, and tuning one in isolation "just on principle" normally doesn't help much.
Plus, those sorts of optimizations are stupid no matter where and when you use them. If you're trying to control individual instructions, it's time to break out the assembler. Reverse-engineering the compiler to understand how it generates its instructions is a good idea for determining when/where the compiler might fail you, but that knowledge shouldn't be used to finesse the compiler into producing particular instructions.
There's a notorious example from years ago where the sign of a float is determined by casting the address of the float to the address of an int, dereferencing and checking the top bit - on modern architectures this actually runs slower because the value needs to be moved to an int register.
pb
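The trick pb refers to can be sketched like so. The classic pointer-cast version breaks C++ strict-aliasing rules, so this sketch goes through memcpy instead (which compilers reduce to a plain move); names are invented:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Inspect the IEEE-754 sign bit directly instead of comparing against
// 0.0f. memcpy is the well-defined way to type-pun in C++; the old
// *(int*)&f form violates strict aliasing.
bool is_negative(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // defined type punning
    return (bits >> 31) != 0;             // top bit = sign bit
}
```

Note it also treats -0.0f as negative, unlike `f < 0.0f` - one more way bit-level tricks can subtly change behaviour, quite apart from whether they are still faster on current hardware.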
Anybody can create crap in any language; it's all really a matter of knowing your tools, knowing your task, and then being intelligent about the design. It all comes down to design - or engineering, as it is sometimes called. To prove that anyone can create crap in any language, just take a look at The Daily WTF (http://www.thedailywtf.com/). Anyway, being a programmer or an engineer is about designing solutions to problems, and if you can't do that well then you shouldn't be one.
quote:Originally posted by Dragoon
No, I've never looked at the compiler output, but C++ will optimise every call to a virtual for which it can deduce the type at compile time by not requiring a lookup in the function table. You have to remember to do this manually in C, and in such respects people get lazy or forget. I'm not an evangelist for any language. I take each on its merits - and not only the language's merits, but library support, maturity, community, etc. Is it that you are a C evangelist, or hate C++, or both? (Maybe you're not, but you seem not to like C++.) I can't think why else anyone would prefer C to C++ for object-oriented programming.
As I mentioned before, I've grown to dislike C++ over the past few months. Up until that point I saw it as a superset of C (although C is going in its own direction now - it has restricted pointers, for example) and even though I'm of the view that some of the new features are silly or poorly implemented, I went with mcdrewski's view that you only pay for the features you use.
What changed my mind was observation of the human factor. Mark Twain put it like this: "To a man with a hammer, everything looks like a nail."
One of the first things people learn about operator overloading is that there's a time and a place for it and you shouldn't go nuts putting it into everything. The reason this is such an important lesson is that without it (and even with it) people tend to overuse it. Templates are the same - massively overused for no apparent purpose (unless thrashing the instruction cache is the objective).
You mention people getting lazy and forgetting. But this fact produces far more unnecessary virtual calls in C++ than suboptimal virtual calls in C. Not that the language should even be dictating a design decision.
Then there's the C++ community. In one well-known critique of C++ (I don't have it here, I'll cite it later) the author points out that in a C++ book or paper the number of references to other C++ literature vastly outnumbers those that refer to work outside the C++ community, indicating that it's very closed off.
I tend to agree with this conclusion because I see the same old unsubstantiated claims going around the community. The numerous claims about how sticking virtual everywhere future-proofs your code, for example. Or the claim that the design of your software should be influenced by the language itself (just in this thread we've had terms that indicate this form of thinking: "C design", the "C++ way", "works against the grain of the language").
quote:Const does not increase your "code speed" significantly as far as I'm aware, but it does improve your "coding speed" ;-) It traps a lot of improper function usage and errors before they happen, saving you debugging time.
Type safety, const included, is an investment. You put in the extra effort to write type-safe code and you'll be rewarded by the compiler finding bugs for you. So it's a balance. You need to ask yourself - have I done more work for const, or has const done more work for me?
Personally I think the idea of read-only/const for type safety is sound. The C++ implementation is just bad. Is it bitwise const, or is it conceptual const? Now, thanks to mutable, it can be either, both or even neither. The propagation of constness into references, args etc. is quite clumsy too.
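The bitwise-versus-conceptual distinction in miniature: with mutable, a const member function can still mutate state, so the object is logically const but not bitwise const. A small invented example:

```cpp
#include <cassert>

// 'lookups' is mutable, so get() can change it even though get() is
// declared const -- a hole deliberately punched through bitwise const.
class Table {
    mutable int lookups = 0;
    int value = 42;
public:
    int get() const {
        ++lookups;             // legal despite the const qualifier
        return value;
    }
    int times_read() const { return lookups; }
};
```

Whether that is a sensible feature (caching, lazy evaluation) or proof that C++ const is ill-defined is exactly the disagreement in this thread.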
quote:Wrong on the speed point... I can see what you're getting at, but you've over-generalised. Optimisation, for example (of code, not design), almost always complicates the code (and is a real pain to do for some algorithms) and results in many more lines of code, but it produces faster code.
No, I deliberately avoided over-generalising by qualifying my statement with the word "probably".
quote:Object-oriented coding and inheritance in C is a pain, which I don't like because it takes significantly longer to code (many more lines), is far more prone to error than having it done for you with a keyword, is much harder to maintain, and makes it much harder for another programmer to learn your code. These are all bad things when coding.
More claims without any evidence: "not future proof", "more prone to error", "much harder for another programmer to learn", "much harder to maintain". Pick up any C++ book and you'll see it all there, repeated over and over, complete with claims about how good the compiler output is (always written by people who don't study the output).
Object-oriented programming and inheritance is a pain in C++ too. Just because they make it a feature of the language doesn't mean it works well. I've had to deal with lots of bugs that were caused by a tangled mess of inherited classes, and it takes ages for another programmer to peel back all the layers, follow a function call from the most derived class up the inheritance chain, etc. Now I know - you can write crap code in any language - that's true. But my view is that C++ gives you some awesome tools to make a convoluted web of inheritance, people tend to use the tools they're given just because they can, it gets combined with a community that seems to operate on faith, and in the end you get a bloated, slow, unmaintainable, unoptimisable mess.
And what about some of the really poor aspects of the C++ standard? The fact that a constructor is permitted to produce side effects, that temporaries must be constructed with said constructor, but that the compiler implementation determines when to create temporaries, basically means that the implementation is allowed to output code that generates side effects without any specific guidance from the standard. If the standard can't even pin down when and how side effects are generated, I'd say you're dealing with a very ill-defined language. Hardly surprising - the standard is only about 200 pages longer than C99's.
pb
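pb's point about constructor side effects and temporaries can be shown with a copy constructor that counts its own calls - how many times it actually runs depends on whether the implementation elides the copies, which the standard permits. A hedged sketch with invented names:

```cpp
#include <cassert>

// A copy constructor with an observable side effect: it counts itself.
struct Noisy {
    static int copies;
    Noisy() {}
    Noisy(const Noisy&) { ++copies; }
};
int Noisy::copies = 0;

// Returning by value may or may not create temporaries; the standard
// allows the copies (and their side effects) to be elided.
Noisy make() { return Noisy(); }
```

On one compiler `Noisy::copies` ends up 0 after `Noisy n = make();`, on another it could be 1 or 2 - the observable behaviour of a conforming program genuinely varies, which is the complaint above.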
quote:Originally posted by pb
...I see the same old unsubstantiated claims going around the community. The numerous claims about how sticking virtual everywhere future proofs your code for example. The claim that the design of your software should be influenced by the language itself.
I find this a very interesting point. In my commercial experience I had a lot of freedom to select languages and technologies which fitted in the right spot in the solution. Thus the design influenced the language first. However once that decision is made, I see no reason why the lower-level design should not be influenced in turn by the language. After all, in an ideal world that language was chosen for a reason.
quote:
Type safety, const included, is an investment. You put in the extra effort to write type-safe code and you'll be rewarded by the compiler finding bugs for you. So it's a balance. You need to ask yourself - have I done more work for const, or has const done more work for me?
I think I understand your position here pb, I sense the soul of an old-school hacker who wants to make the system under his power dance and sing using the best way possible. In your case const probably does just make you waste valuable coding time, in the same way that passing lots of void*'s around to save messy and confusing casting can save coding time.
The problem there comes when working with other programmers. If the const declaration saves someone ELSE time when maintaining, debugging or on the day after you're hit by a bus then the investment pays off. If nobody ever looks at your code again, it'll never pay off.
Fundamentally I agree with you, since after all the aim of a SW company is [url="http://www.joelonsoftware.com/articles/fog0000000074.html"]"to convert money to code through programmers"[/url], there's no need to waste that money. However, people [url="http://www.sei.cmu.edu/cmmi/"]teach these practices for a reason[/url]. Mostly because people change companies, and one day someone else not as good as you is going to have to work on that code. Help 'em out. :P
quote:Originally posted by mcdrewski
[url="http://www.joelonsoftware.com/articles/fog0000000074.html"]"to convert money to code through programmers"[/url], there's no need to waste that money.
And here I was thinking that programmers converted caffeine into code... [;)]
Pb, perhaps you should join the boost mailing list - I haven't been on it for years, but there are plenty of people who have a lot of say about the standard subscribed to it. With a new standard due in 2009, now could be a good time to make a stink.
quote:Originally posted by mcdrewski
I think I understand your position here pb, I sense the soul of an old-school hacker who wants to make the system under his power dance and sing using the best way possible. In your case const probably does just make you waste valuable coding time, in the same way that passing lots of void*'s around to save messy and confusing casting can save coding time.
The problem there comes when working with other programmers. If the const declaration saves someone ELSE time when maintaining, debugging or on the day after you're hit by a bus then the investment pays off. If nobody ever looks at your code again, it'll never pay off.
You've summed it up perfectly. The old-skool hacker way is fine and dandy if you're the only person working on it, you'll never touch it again and you're only targetting one platform. For the rest of us who work in TEAMS, we trade raw performance for readability and maintainability and cross-platformability. (Yes, they're all perfectly cromulent words!) A little bit of extra effort on your part is pretty much guaranteed to more than pay for itself down the line when you quit, die or someone else has to look at the code.
Of course, if your C++ programmers are doing brain dead things with inheritance, overloading, templates, etc. then you need to either not hire people like that, help teach them how to do things properly or fire them.
Profiling will point you in the direction of the places that need your hardcore low-level optimisations, but more often than not you won't need to drop to assembly, just change something around - it could be something as simple as making sure data is aligned. I used to think that every cycle was valuable and that code should be as tight as possible, but all that matters is that you don't drop any frames. =]
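Shplorb's alignment remark can be made concrete with alignas - here assuming a 64-byte cache line, which is common on current hardware but not a portable constant (the struct name is invented):

```cpp
#include <cassert>
#include <cstdint>

// Pin the hot data to a 64-byte boundary so the fields used together
// never straddle two cache lines. 64 is an assumption about the target,
// not something the language guarantees.
struct alignas(64) HotData {
    float pos[4];
    float vel[4];
};
```

Often a change this small - no assembly, no restructuring - is exactly the kind of fix profiling points you at.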
quote:Originally posted by Shplorb
You've summed it up perfectly. The old-skool hacker way is fine and dandy if you're the only person working on it, you'll never touch it again and you're only targetting one platform. For the rest of us who work in TEAMS, we trade raw performance for readability and maintainability and cross-platformability. (Yes, they're all perfectly cromulent words!) A little bit of extra effort on your part is pretty much guaranteed to more than pay for itself down the line when you quit, die or someone else has to look at the code.
Of course, if your C++ programmers are doing brain dead things with inheritance, overloading, templates, etc. then you need to either not hire people like that, help teach them how to do things properly or fire them.
Profiling will point you in the direction of the places that need your hardcore low-level optimisations, but more often than not you won't need to drop to assembly, just change something around - it could be something as simple as making sure data is aligned. I used to think that every cycle was valuable and that code should be as tight as possible, but all that matters is that you don't drop any frames. =]
Gawd, don't you people ever come up with an argument you didn't read out of a religious/C++ book?
But hey, don't believe me - you can see for yourself that every part of your argument is nonsense by taking a look at FMOD, which is not only worked on by a team but used by lots of other developers, massively multi-platform, squeezed down to every cycle, and written in C and assembler. Oh, it's also highly readable and easy to understand - far more so than your typical C++ project.
By all means blame the programmer, but don't you find it odd that in just about every game studio that uses C++ it's always basically the same? When Sony surveyed and profiled PS2 games to see how their hardware was being used and developed for, they found most titles were coded in C++ and spent the bulk of their cycles thrashing the instruction cache. Gee, I wonder why.
Yes, you can write good code in C++, but I think it's ironic that the language of "abstraction" and "data hiding" is the one that requires you to understand every detail of how the compiler works if you want it to generate half-reasonable code - and it's usually only the low-level guys who are interested in looking at compiler disassembly, and they don't use C++ in the first place.
Of course, since this is a matter of faith, I'm sure that pointing to FMOD as evidence of the fallacy of this C++ malarkey won't actually change anyone's mind. I can hear it now... if they had done it in C++, it would be even better, more readable, more maintainable, it would magically acquire the property of being better in a team environment...
As for not dropping frames, that's easy: just don't render very much or have much gameplay. Then you can make your code as sloppy as you like... and still achieve the most important thing...
pb
quote:Originally posted by pb
Gawd, don't you people ever come up with an argument you didn't read out of a religious/C++ book?
Heh, the only "religious" C++ book I've read was "Teach Yourself C++" or something back in 1994 when I was 13. C++ has changed a lot since then and compilers all mostly work like they should.
quote:But hey, don't believe me - you can see for yourself that every part of your argument is nonsense by taking a look at FMOD, which is not only worked on by a team but used by lots of other developers, massively multi-platform, squeezed down to every cycle, and written in C and assembler. Oh, it's also highly readable and easy to understand - far more so than your typical C++ project.
...
Of course, since this is a matter of faith, I'm sure that pointing to FMOD as evidence of the fallacy of this C++ malarkey won't actually change anyone's mind. I can hear it now... if they had done it in C++, it would be even better, more readable, more maintainable, it would magically acquire the property of being better in a team environment...
Having spent last year working intimately with FMOD (to the point of modifying it... your CTO was alarmed when I told him about it) I have to say that you sir, are smoking crack if you think it is clean and easy to follow code! (No offense to Firelight!) FMOD3 is also like 10 years old and I believe that the current version was completely re-written in C++ as well. Only the core mixing routines are done in asm... the parts that take the most time!
The use of C++ doesn't imply that something will be more readable, etc. just for using it, just like using C doesn't imply that something will be fast just for using it. Generally, though, I'd say that C++ code on average tends to be more readable, in that you can come to grips with a new codebase faster, and its features allow complex things to be expressed in a more concise manner. In a team environment I think that is more important than raw speed.
quote:As for not dropping frames, thats easy, just don't render very much or have much gameplay. Then you can make your code as sloppy as you like... and still achieve the most important thing..
They're both valid options, but I think I'd rather profile to identify hotspots and try to fix them instead of bitching about us damn kids and our newfangled C++. =]
quote:Having spent last year working intimately with FMOD (to the point of modifying it... your CTO was alarmed when I told him about it) I have to say that you sir, are smoking crack if you think it is clean and easy to follow code! (No offense to Firelight!) FMOD3 is also like 10 years old and I believe that the current version was completely re-written in C++ as well. Only the core mixing routines are done in asm... the parts that take the most time!
Smoking crack? I'm not the one claiming C++ is readable. If you think FMOD3 is so bad, maybe you'd like to point me towards some more readable middleware in C++... Unreal perhaps... or what about Havok??
quote:The use of C++ doesn't imply that something will be more readable, etc. just for using it, just like using C doesn't imply that something will be fast just for using it. Generally, though, I'd say that C++ code on average tends to be more readable, in that you can come to grips with a new codebase faster, and its features allow complex things to be expressed in a more concise manner. In a team environment I think that is more important than raw speed.
It's interesting that we both seem to agree on what's important - readability and being good in a team environment.
In my view C++ is fine when you're coding by yourself, but as soon as you put it in a team environment, where you have to read and work with other people's code, it's a ruinous disaster area. What the hell does the expression "a+b" mean? What will the compiler do? It could call constructors (with side effects), it could be calling "operator+", it could be doing implicit type conversions, it might be copying memory around - you have *no* idea without spending all day looking through code.
In C, I don't need to; all I need to know is whether + is int or float and whether a or b get promoted. No matter how messy and poorly designed the C code is, that's always true. In C++ I can't assume anything. I suspect if you took away Visual Assist and other similar features, most C++ coders would be utterly lost.
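pb's "a+b" complaint made concrete: the same innocent-looking expression can invoke a user-defined operator and an implicit conversion, so arbitrary user code runs behind a single plus sign. Hypothetical types for illustration:

```cpp
#include <cassert>

struct Vec2 { double x, y; };

// Behind "a + b" the compiler may run arbitrary user code...
Vec2 operator+(Vec2 a, Vec2 b) { return Vec2{a.x + b.x, a.y + b.y}; }

// ...and implicit conversions: Meters(double) is not 'explicit', so
// "m + 2.0" silently constructs a temporary Meters from the double.
struct Meters {
    double v;
    Meters(double d) : v(d) {}
};
Meters operator+(Meters a, Meters b) { return Meters(a.v + b.v); }
```

Both calls below look like plain arithmetic at the call site, which is exactly the readability argument being made either way in this exchange.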
quote:They're both valid options, but I think I'd rather profile to identify hotspots and try to fix them instead of bitching about us damn kids and our newfangled C++. =]
I spend a lot of time doing exactly that. In fact, it's been precisely this process that has turned me right off C++.
quote:The FMOD3 C source code is a mess really, not very nice at all.
FMOD Ex, on the other hand, I am quite proud of. It is extremely neat and well thought out, written in C++ but doesn't do anything fancy. Just inheritance - no STL or templates or any of that stuff.
Well, if that's as bad and messy as plain C gets, then I think it proves my point: it's much easier to find your way around than even a slightly messy C++ library.
pb
I guess it could be worse. You could be working on enterprise software written in (dare I say it) Java. At work I have to deal with several different code streams that we maintain, and it is a major headache porting a fix across all affected versions. The syntax is a nightmare and the Java IO streams are all over the place; there's no joy or satisfaction in working with that technology.
Who are the programmers you look up to?
In the digital art field, there are an incredible number of really talented artists (or "freaks", to be honest) who can really blow you away with what they're capable of. In particular, the concept art field has many, many talented people, including the likes of Craig Mullins (this guy is absolutely amazing) and Feng Zhu. Modellers don't seem to get the same kind of fanfare as concept artists, but there are still quite a lot of talented folk doing fantastic stuff with ZBrush. Paul Steed is probably the most famous game modeller - he's great with low-polygon modelling, but seems to have disappeared from the limelight for quite a while now.
I'm really curious though, who are the influential or absurdly gifted programmers in the games industry that you admire, look up to, or gain inspiration from? Is John Carmack still the poster boy for a lot of programmers? Tim Sweeney? Anyone else, and why?
Apart from being constantly blown away by the resident gurus where I work, I'm always inspired by people that take simple ideas and techniques and make magic with them, such as [url="http://www.massivesoftware.com/stephen.html"]Stephen Regelous (Massive)[/url]. I also don't think I could let a thread like this go by without pointing again at [url="http://www.joelonsoftware.com"]Joel Spolsky[/url]'s excellent insights into how software gets made (people just expect it of me now).
That said, I can't help but feel that master programmers don't get the same glory and publicity as artists, designers etc. Perhaps because it's hard for any programmer to do anything by themselves, to get the true [url="http://www.selu.edu/kslu/auteurtheory.html"]auteur[/url] glow that Molyneux, Wright etc. have.
For me it's Peter Molyneux, though I suppose he's probably more of a game designer than programmer. Back in his Bullfrog days, games like Syndicate, Theme Park, Magic Carpet and Theme Hospital were almost all original concepts in their day and remain some of my favourite games. I still pull them out on the odd occasion I feel like a bit of retro gaming. More recently, I thought Black & White was brilliant, and I haven't played The Movies yet but from all accounts it is living up to the hype. He's the sort of person I look up to simply because he's able to pull out these original ideas and make them work.
I'd still have to say John Carmack, because he manages to do all sorts of programming with a lot of different tech. I've come to realise that he's pretty much a good hacker in the sense that he can use the best tool for any job and create something good. However, personally I'm not a fan of his coding style.
There's also Michael Abrash, though I haven't heard much from him recently. I did enjoy watching his keynotes on Gamasutra back when he was still into games programming. He was on the Microsoft DirectX team at some point, and I do wonder what he's up to now.
You also have to admire people like Justin Frankel (Winamp guy), mainly because he created something sleek and utterly useful (nevermind what AOL are doing to it now), and he also created lots of useful little utilities that he released for free (NSIS anyone?).
Andrei Alexandrescu- author of Modern C++ Design.
Miller Puckette- father of the Max family of visual audio programming languages.
Paul Davis- the god of high end Linux audio programming and author of Ardour. Also one of the first programmers to join Amazon.
Barry Vercoe- author of Music11, CSound and SAOL audio synthesis languages.
Roger Dannenberg- head of the computer music research group at Carnegie Mellon uni. Author of Serpent (realtime scripting language), Audacity, Nyquist (lisp based audio synth language) and tons of other stuff.
No game developers in my list. Sorry.
Edit: how on earth did I forget Linus Torvalds and Richard Stallman!!
Obviously, Linus Torvalds and Richard Stallman :)
Michael Abrash is a good pick, Daemin. It was some of Abrash's articles that got me fascinated in graphics programming in the first place. I believe Abrash wrote the renderer for Quake, although I'd say the fame was mostly enjoyed by John Carmack.
John Carmack has my respect as a programmer, but I wouldn't put him at the top of any programmer kudos list. Aside from his awesome capacity for finding performance optimisations, I have found his most recent projects (yeah, that one) to be rather unimpressive. There are many programmers building flexible and scalable engines for a variety of games, while Carmack's scope is largely limited to the kind of games he likes to make - which is fine by its own merit, but isn't enough to keep me paying attention anymore.
If there's a crown for FPS engine technology, I'd say it was stolen by Tim Sweeney a long time ago. IMHO, Sweeney's commitment to building a flexible engine for a wide variety of applications is hugely impressive. UE3's popularity as middleware for next-generation titles speaks for itself.
Ultimately though, I'm not so preoccupied with graphics/game engine programmers. IMHO, some of the most impressive programmers are those working in R&D roles in fields such as Artificial Intelligence and Interaction Design. Programmers in these fields come up with incredibly creative solutions to all sorts of problems, yet seldom enjoy the fame that programmers like Carmack and Sweeney do.
Ooh, one more: The illustrious Yann Lombard, GameDev.net forum graphics/engine guru.
A thoughtful fanboy has erected a site linking to some of his more insightful posts: [url="http://triplebuffer.devmaster.net/misc/yannl.php"]clicky[/url].
Alfred Reynolds (VALVe) - good community mind, and wonderful at fixing bad bugs in the steam client, and the source engine - even if making some minor ones along the way...
Many of the app programmers at experts-exchange.com (they have given me wonderful advice over the years).
Tim Holt, who many of you won't know unless you're his student, is very well versed in his ideas and concepts, and good at returning code for both. I only know him through the [hlcoders] mailing list, but I look up to him in a peer (not god) way.
Knowledge Management in Game Development
I'm starting this thread to really examine what our ideas of knowledge management actually are, both in game development and, on a broader level, as professionals.
This is actually a hot topic at uni amongst biz/information management masters students. It is however broadly generalised and in a lot of instances sapped by profiteering consultants. Despite this I believe that a solid understanding by designers and developers of what Tacit and Explicit knowledge is, what knowledge buy-in is, and what sort of systems exist for capturing Tacit knowledge actually exists.
To start the hunt I've included a few links to start the ranting:
http://www.systems-thinking.org/kmgmt/kmgmt.htm (A brief overview)
http://www.kmworld.com/
http://www.systems-thinking.org/tkco/tkco.htm
http://www.amazon.com/gp/product/0684844745/002-3638931-8880044?v=glanc… (if you want a text)
And for a little knowledge before you start attacking your pointy haired boss:
quote:Originally posted by matt.davo
...I believe that a solid understanding by designers and developers of what Tacit and Explicit knowledge is, what knowledge buy-in is, and what sort of systems exist for capturing Tacit knowledge actually exists.
My entire understanding of these terms is pretty much based on [url="http://en.wikipedia.org/wiki/Knowledge_management"]this wikipedia entry[/url], so I'm probably talking way outside my station - but...
One of the most useful tools I've ever used was an integrated bug tracking, source code control and release management system. It wasn't specifically designed to track tacit knowledge, but as a side-effect it certainly managed to.
The way that worked was that to get anything into a release it could only be checked in against a bug* in the bug database. Each release was thus a collection of bugs*.
That meant that if you ever wanted to know what was done to fix a bug* you could immediately see it. The reverse was even more useful - if you wanted to know what a line of code was doing, or why it was there, you could see which bug* it was checked in against.
We hooked it up to CVSweb, and using the 'annotate' feature and some custom hacks you could literally see a bug* number against every single line of code, or a collection of changes across the entire project on a single summary page. And, of course, by looking at the bug* you could see the symptoms which caused the issue in the first place, with as much detail as you wanted (client reports, screenshots, etc).
So hidden between the lines you could see how and more importantly why things were done.
The key reason that this worked was that it was made relatively easy, but also mandatory for the users of the system. Every time you made a checkin, you had to say which bug* it was against.
I've seen any number of knowledge-bases, wikis, documents etc in projects come and go - most of them seemed to just fall into disuse. The combination of simple and mandatory as part of workflow seemed to be the magic bullet.
...and congratulations Matt!
*a bug is used as shorthand for a defect, or feature request, or task
quote:Originally posted by mcdrewski
The way that worked was that to get anything into a release it could only be checked in against a bug* in the bug database. Each release was thus a collection of bugs*.
Somehow reminds me of a certain very large software company's products [;)]
edit: Sorry I couldn't be there btw Matt.
quote:Originally posted by matt.davo
This is actually a hot topic at uni amongst biz/information management students in masters.
Also in comp-sci. Would you/people like me to ask a research postgrad whose topic is in the area if she'd be interested in joining?
Management Info Systems (subject I've been head tutor of) is part of this area- it's about the software systems used to acquire, store and analyse data, information and knowledge for company management.
Version control systems and bug tracking databases are imho the most obvious for game dev studios explicit knowledge storage.
I suspect some huge publishers would be using some very fancy systems indeed for trying to predict what is going to sell based on past data. I would be, in their position...
Good morning ladies and gents. I'll be back in Australia from the 8th of January. Looking forward to continuing this thread.
Really, I'm more interested in the way game developers as artists and programmers share knowledge with other co workers (say for an artist an interesting modelling technique) or for a shader coder different ways of passing state variables to a shader to make it run faster...
This is the sort of data that a successful game development project can live or die by...
I think knowledge hoarding *MAY* be prevalent in the Australian industry because of the amount of turnover induced by the companies themselves. Keeping knowledge to oneself makes you indispensable.
2D Point-and-click Adventure game engine & game
Hi,
I've been developing a 2D point + click adventure game engine (similar in style to the classic Lucasarts SCUMM games) since April this year, and have been maintaining a blog regarding its development at:
[url]http://www.bigbucketblog.com/?cat=2[/url]
The engine is called MAGE, and the game I'm developing in parallel is called Fishink. I'm looking to find people eager to try out the engine and provide feedback as it progresses and also to critique my game.
The engine is cross-platform, (Windows, Linux, Mac and even PocketPC!) and I update the version on the website as I achieve milestones.
Any feedback would be appreciated,
Thanks,
Matt Comi
Just a quick update for anyone who's interested:
MAGE development is still going strong!
I've recently begun to migrate all of the game logic from my own configuration format to Lua. This has given me a lot more flexibility in terms of the ways in which I can respond to an event. It also reduces the turnaround time involved in changing something and seeing its effect.
I'm blogging about my experiences with Lua at the moment with code snippets here and there.
Cheerio,
Matt.
Hi everyone, I've just finished uploading a new version of MAGE and Fishink for Windows. Main features in the new version include:
* Fully integrated Lua scripting
* Pathfinding that actually works!
You can download the engine/game (in one zip) from [url]http://www.bigbucketblog.com/[/url]
And any feedback is MOST appreciated. In fact, if anyone is keen to give the engine a whirl, I'm going to attempt to put together some tutorials soon.
I'm looking into putting up a Mac release... not sure how much work is involved in that though.
Cheers,
Matt Comi
I've been meaning to comment on this for a while.. I had a go, it's pretty nice - but I couldn't figure out what to do after climbing down :(
I would suggest you put in a title screen and menu, even if it's just "new game" and "quit" - it will make your game feel more complete, which is important if you plan on using this as part of your portfolio.
Also, if you can on Windows, I suggest against resetting the screen resolution - since the game is graphically simple you should be fine with stretching your image to whatever resolution the user has set. This will help with mouse sensitivity and won't mess with the placement of any windows in the background (very annoying).
And last of all, add some sound! Even if it's just some ambient sounds and action sounds (eg. opening a door or drawer) it would add a lot to the game.
I'm looking forward to the next version :)
This is cool, though have you checked out Adventure Game Studio?
It's been in development for about 10 years, and is a pretty mature indie adventure game development tool. Some people have even been able to make money off the games they've developed using it. see http://www.wadjeteyegames.com/
Visual Studio 2k5 EXPRESS free for 1yr
Microsoft are obviously scared by open source and the very good free dev platforms available for open OSes...
They've decided to release their [url="http://msdn.microsoft.com/vstudio/express/"]Visual Studio Express Editions free to download for one year. This includes C++, C#, J#, VB and 'web' developer[/url]
According to their FAQ, this means that if you download before November 6th next year, you can use what you've downloaded free forever. After Nov 6th next year, the software will become available at low cost.
For anyone out there that's ever been dissuaded from developing for Windows because they refuse to pay for Visual Studio and also refuse to pirate it, this is great news.
So, broadbanders - download them now in case you ever need them later!
How do you use templates?
Over the course of this year, my use of templates has completely changed: before this year I mostly just used the STL, and had written some specialised Ruby->C++ and C++->Ruby wrapping templates.
Late last year I started really studying templates. Turns out you have a whole new and different language inside of C++. The template system is actually a compile time scripting language. You can use it to calculate things while code is being compiled, and generate highly specialised code for particular variables whose value is known (or can be calculated) at compile time.
You can (in some compilers) write c++ code that throws the compiler into an infinite loop instantiating templates!
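For anyone who hasn't played with this, the classic demonstration is a factorial the compiler evaluates while instantiating templates - and if you delete the terminating specialisation, you get exactly the runaway instantiation I mentioned:

```cpp
// The compiler computes N! during template instantiation -
// Factorial<5>::value is already the constant 120 by the time any
// code is generated, with no runtime loop at all.
template <unsigned N>
struct Factorial
{
    enum { value = N * Factorial<N - 1>::value };
};

// The explicit specialisation terminates the recursion. Remove it
// and the compiler recurses until it hits its instantiation depth
// limit (or, on some compilers, just spins).
template <>
struct Factorial<0>
{
    enum { value = 1 };
};
```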
Something that really showed me how the template system actually works was in writing templatised replacements for the new and delete operators (create() and destroy()) to take advantage of the metaprogrammed memory allocator I posted here earlier this year.
I had to use the placement new operator, which you pass the memory you want to use for the class instance you are creating.
[code]
void* mem = malloc(sizeof(MyClass));
MyClass* myObj = new(mem) MyClass();
[/code]
Placement new requires you to manually call destructors. You cannot use the normal delete operator.
[code]
myObj->~MyClass();
[/code]
My destroy function template looks something like this
[code]
template <class T>
void destroy(T* obj)
{
    if (obj)
    {
        obj->~T();
        free(obj);
    }
}
[/code]
Though of course I use metaprogrammed allocation functions rather than malloc and free, so this exact code here is completely insane [:)]
The point of this code is one line:
[code] obj->~T() [/code]
The only way it can work is if the template scripting language works rather like the C preprocessor in that inside a template definition it goes around replacing "T" with the actual type you've placed in the <>'s.
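Putting the pieces together, a minimal create()/destroy() pair over plain malloc/free looks like this (a sketch only - the real versions sit on the metaprogrammed allocator, as above):

```cpp
#include <cstdlib>   // malloc / free
#include <new>       // placement new

// Minimal sketch: construct and destroy an object in manually
// allocated memory, pairing placement new with an explicit
// destructor call.
template <class T>
T* create()
{
    void* mem = std::malloc(sizeof(T));
    return new (mem) T();   // placement new constructs in our memory
}

template <class T>
void destroy(T* obj)
{
    if (obj)
    {
        obj->~T();          // manual destructor call, as above
        std::free(obj);
    }
}
```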
I'd like to hear about interesting template stuff people are doing.
Templates should be used only when they make sense (not often - they make sense for generic containers like lists, vectors, etc) or are a must (no choice), and avoided like the plague at all other times. The STL covers a good deal of what you need, and many features can be done through proper abstraction and design (cf design patterns).
They aren't very portable (yay for overly complex language features that are hard to implement), make debugging code harder, and increase compile time significantly. Maintenance (people unfamiliar with your code) is an order of magnitude worse with custom templates used throughout your code.
Generally they should be as short as possible, with as much functionality of what you are trying to achieve placed into normal classes as possible.
I'd be careful modifying your allocation / deallocation functions as it may make it harder to use 3rd party memory libraries and other debugging tools to trace memory issues and memory leaks.
The only time I've had a need to write some is for weak and strong smart pointers, counted pointers and auto pointers (and you can just use Boost for them). I also used some on mobile devices to implement math using arbitrary fixed point (eg. 12.4, 8.8, 4.12, 16.16, 12.20) base number classes without having to write vectors, matrices, math functions, etc for all variations on fixed we used.
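For the curious, the skeleton of such a fixed-point class might look something like this (a bare-bones sketch of the templated fractional-bit idea only, not the actual classes):

```cpp
// Bare-bones fixed-point value, templated on fractional bits:
// Fixed<16> behaves as 16.16, Fixed<8> as 24.8, and so on. Vectors,
// matrices and maths functions can then be templated on Fixed<N>
// without rewriting them per format.
template <int FRAC_BITS>
class Fixed
{
public:
    explicit Fixed(int whole) : raw(whole << FRAC_BITS) {}

    Fixed operator+(const Fixed& rhs) const
    {
        return fromRaw(raw + rhs.raw);
    }

    // Multiply widens to 64 bits, then shifts back down.
    Fixed operator*(const Fixed& rhs) const
    {
        return fromRaw(static_cast<int>(
            (static_cast<long long>(raw) * rhs.raw) >> FRAC_BITS));
    }

    int toInt() const { return raw >> FRAC_BITS; }

private:
    static Fixed fromRaw(int r) { Fixed f(0); f.raw = r; return f; }
    int raw;
};
```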
Thanks.
I saw some comments a while ago on flipcode about people thinking metaprogramming is likely to make its way into more and more engines. Not that that was what started me.
I came across the technique reading about ways scientific programmers use C++ for really heavy numeric computation. Then I found Blitz++ http://www.oonumerics.org/blitz/ and things started to make sense about why you might want to do some metaprogramming.
It's also part of boost (the boost MPL).
IMHO it is likely to be very useful for extracting full performance in low level routines on the Xbox 360 and PS3 because branches are likely to be nasty and slow, and there is no branch prediction anymore. You can shift branches over to compile time.
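A toy example of what I mean by shifting a branch to compile time - make the condition a template parameter, and each instantiation contains only one path (the names here are invented for illustration):

```cpp
// The wrap-vs-clamp decision becomes a template parameter, so each
// instantiation is branch-free on that condition - the "branch"
// happened at compile time, when the template was selected.
template <bool WRAP>
struct IndexPolicy;

template <>
struct IndexPolicy<true>     // wrapping behaviour
{
    static int apply(int i, int size) { return i % size; }
};

template <>
struct IndexPolicy<false>    // clamping behaviour
{
    static int apply(int i, int size) { return i < size ? i : size - 1; }
};
```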
I've been very careful- that's why I haven't overloaded new and delete globally, and I have my own leak tracking system built into the allocator now. Works rather like the one in MFC.
I have no argument with a lot of what you said, I just think using it in low level details that get used over and over, and where the feature set isn't likely to change much, isn't that big a deal.
Things like type lists can be pretty cool if you're trying to make a scripting language (I'm not anymore, I'm hacking one, but I spent a good 6 months of this year designing and coding one).
I'd say DON'T use anything like metaprogramming in game code. Use it (if you are going to) in low level engine code.
Templates aren't something to be afraid of, but - like just about every feature in C++ - they should be used when the situation calls for it. All the coders (interesting to note: especially the younger ones) I work with are fairly proficient with templates and metaprogramming concepts so I don't feel like I have to hold back in any metaprogramming. Though that said, I am not often presented with a situation in game code to use it.
The last thing I wrote was some templatised max functions that take 8/4/2 values as template params and return the biggest value at compile time. Though I do sometimes add templatised helper functions that handle casting internally, which results in much simpler and cleaner code.
If you don't understand templates then I suggest you grab a book on the subject and learn, it's well worth it.
There are lots of little and handy uses for templates... don't just think that they're used only for overkill metaprogrammingmagic and container classes.
One really nice use for templates is templated abstract base classes/interfaces. Where applicable they can give you a base class that adheres to a fixed naming scheme but returns hard types, which saves a lot of dynamic casting. It's not a technique without its flaws but it's something that really grew on me.
quote:Originally posted by Kezza
One really nice use for templates is templated abstract base classes/interfaces. Where applicable they can give you a base class that adheres to a fixed naming scheme but returns hard types, which saves a lot of dynamic casting. It's not a technique without its flaws but it's something that really grew on me.
While I get the general idea of what you're talking about could you perhaps post a little code? I don't think I've done that one before.
Edit: do you mean the Curiously Recurring Template Pattern? It lets you do compile-time polymorphism of a sort.
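In case it helps the discussion, here's the basic shape of it as I understand it (my own minimal sketch, with invented names) - the base is templated on the class deriving from it, so it can use the hard type with no dynamic_cast:

```cpp
// CRTP: the base class is parameterised on the class that derives
// from it, so its interface can call into the hard derived type with
// no virtual dispatch and no casting at the call site.
template <class Derived>
class Shape
{
public:
    int area() const
    {
        // Resolved entirely at compile time - the "compile-time
        // polymorphism of a sort".
        return static_cast<const Derived*>(this)->areaImpl();
    }
};

class Square : public Shape<Square>
{
public:
    explicit Square(int side) : side(side) {}
    int areaImpl() const { return side * side; }
private:
    int side;
};
```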
re: I'd like to hear about interesting template stuff people are doing.
the following is a rather old sample of some of my core vector/matrix library. I was surprised by what it takes to implement friend template functions outside the class definition (I like being able to read a class definition all on screen at once), and the compiler was getting upset at template parameter reuse, so I resorted to an extra function parameter (see the multiply function).
yes i'm being bad, working towards ease of code reuse rather than speed....
[code]
template <typename TYPE, s32 WIDTH, s32 HEIGHT> class TMathMatrix;
template <typename TYPE, s32 WIDTH, s32 HEIGHT> const TMathMatrix<TYPE, WIDTH, HEIGHT> operator + (const TMathMatrix<TYPE, WIDTH, HEIGHT> & in_lhs, const TYPE & in_rhs);
template <typename TYPE, s32 WIDTH, s32 HEIGHT, s32 IN_OTHER> const TMathMatrix<TYPE, WIDTH, HEIGHT> operator * (const TMathMatrix<TYPE, WIDTH, IN_OTHER> & in_lhs, const TMathMatrix<TYPE, IN_OTHER, HEIGHT> & in_rhs);
/**/
template <typename TYPE, s32 WIDTH, s32 HEIGHT> class TMathMatrix
{
public:
    TMathMatrix(const TMathMatrix & in_matrix);
    TMathMatrix(const TYPE * in_data);
    TMathMatrix(void);
    friend const TMathMatrix operator + <> (const TMathMatrix & in_lhs, const TYPE & in_rhs);
    template <typename T, s32 W, s32 H, s32 O> friend const TMathMatrix<T, W, H> operator * (const TMathMatrix<T, W, O> & in_lhs, const TMathMatrix<T, O, H> & in_rhs);
};
/**/
template <typename TYPE, s32 WIDTH, s32 HEIGHT> const TMathMatrix<TYPE, WIDTH, HEIGHT> operator + (const TMathMatrix<TYPE, WIDTH, HEIGHT> & in_lhs, const TYPE & in_rhs)
{
    COMMON_CONDITION_PRE(COMMON_TRACE_PRIORITY_MATH_MATRIX)
    {
    }
    TYPE data[WIDTH * HEIGHT];
    s32 index;
    s32 count;
    count = WIDTH * HEIGHT;
    for (index = 0; index < count; ++index)
    {
        data[index] = in_lhs.m_data[index] + in_rhs;
    }
    return TMathMatrix<TYPE, WIDTH, HEIGHT>(data);
}
/**/
template <typename TYPE, s32 WIDTH, s32 HEIGHT, s32 IN_OTHER> const TMathMatrix<TYPE, WIDTH, HEIGHT> operator * (const TMathMatrix<TYPE, WIDTH, IN_OTHER> & in_lhs, const TMathMatrix<TYPE, IN_OTHER, HEIGHT> & in_rhs)
{
    COMMON_CONDITION_PRE(COMMON_TRACE_PRIORITY_MATH_MATRIX)
    {
    }
    TMathMatrix<TYPE, WIDTH, HEIGHT> matrix;
    const TYPE * lhs_data;
    const TYPE * rhs_data;
    s32 lhs_data_index;
    s32 rhs_data_index;
    s32 height_index;
    s32 width_index;
    s32 other_index;
    s32 data_index;
    TYPE value;
    {
        lhs_data = in_lhs.GetDataConst();
        rhs_data = in_rhs.GetDataConst();
        data_index = 0;
    }
    for (height_index = 0; height_index < HEIGHT; ++height_index)
    {
        for (width_index = 0; width_index < WIDTH; ++width_index)
        {
            value = 0;
            lhs_data_index = width_index;
            rhs_data_index = (height_index * IN_OTHER);
            for (other_index = 0; other_index < IN_OTHER; ++other_index)
            {
                value += lhs_data[lhs_data_index] * rhs_data[rhs_data_index];
                lhs_data_index += WIDTH;
                rhs_data_index += 1;
            }
            matrix.m_data[data_index] = value;
            ++data_index;
        }
    }
    return matrix;
}
[/code]
hmm, not sure I see your point, to me that syntax seems plain though a little annoying- perhaps I've been reading too much perverted meta-code though [;)]
That's how templates are likely to be written more in future- as compilers begin to support the export keyword. The template bodies will be in a separate file, or compiled to some kind of byte-code. Afaik the language designers didn't realise how complicated the export keyword would be to implement.
Nice use of templates to make a really flexible matrix btw. With a good compiler the for loops would likely get completely unrolled because you've used compile time constants.
I've started using a more class-like syntax for template declarations, this is my replacement for the new operator for types that take 4 constructor arguments for example.
[code]
template
<
class T,
typename C1,
typename C2,
typename C3,
typename C4
>
inline T* create(const char* file, int line, C1 c1, C2 c2, C3 c3, C4 c4)
{
void* mem = LDK::alloc(file,line);
try
{
return new (mem) T(c1,c2,c3,c4);
}
catch(std::exception& e)
{
LDK::dealloc(mem);
throw e;
}
catch(...)
{
LDK::dealloc(mem);
throw;
}
}
//////////////////////////////////////////////////////////////////////////////////////
/// \def LDK_CREATE4(Type, arg1, arg2, arg3, arg4)
/// \ingroup ObjectAlloc
/// \brief create an object that takes 4 constructor arguments
/// \throw BadAlloc in an out of memory condition.
//////////////////////////////////////////////////////////////////////////////////////
#define LDK_CREATE4(Type,c1,c2,c3,c4) LDK::create<Type>(__FILE__,__LINE__,c1,c2,c3,c4)
[/code]
I use the preprocessor macros LDK_CREATEX because that's the only way I can think of to get the __FILE__ and __LINE__ macros to work for the site of the allocation. Anyone have any neat ideas on how to improve this? You can't overload preprocessor macros [:(]
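Edit: one idea that might work - if your compiler supports C99-style variadic macros (many do as an extension), a single macro can forward any argument count. A sketch with plain malloc and free-standing create overloads standing in for the LDK ones:

```cpp
#include <cstdlib>   // malloc
#include <new>       // placement new

// Hypothetical stand-ins for the LDK::create overloads - one per
// constructor arity, as in the original scheme.
template <class T, typename C1>
T* create(const char* /*file*/, int /*line*/, C1 c1)
{
    void* mem = std::malloc(sizeof(T));
    return new (mem) T(c1);
}

template <class T, typename C1, typename C2>
T* create(const char* /*file*/, int /*line*/, C1 c1, C2 c2)
{
    void* mem = std::malloc(sizeof(T));
    return new (mem) T(c1, c2);
}

// One variadic macro replaces the whole LDK_CREATE1..LDK_CREATEN
// family: the call site's __FILE__/__LINE__ are still captured, and
// __VA_ARGS__ forwards however many constructor arguments were given.
// (The zero-argument case still wants its own macro, since the
// trailing comma would be left behind.)
#define LDK_CREATE(Type, ...) create<Type>(__FILE__, __LINE__, __VA_ARGS__)
```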
Pointers and References - oh no!
I've taken the [url="http://www.sumea.com.au/forum/topic.asp?TOPIC_ID=3304"]side discussion on references and pointers from lorien's post here[/url] off to its own thread, since I actually feel strongly about it.
The thread so far:
quote:
rezn0r
Java doesn't have pointers. :P
quote:
mcdrewski
I know this was a joke, but every damn procedural language has pointers, they just call them references, aliases or whatever.
quote:
Dragoon
Actually no they aren't the same.
What's a null reference? What's a null pointer?
That's right the first doesn't exist.
quote:
mcdrewski
In C++, Dragoon is right, the term 'null reference' has no meaning. However there absolutely is a concept which can be described as 'null references'. For example, in Java, think about the following chunk of code.
Player p[];
p = new Player[10];
Until each Player object in the array is instantiated, we have an array of references to uninstantiated Player objects. 'null references' for want of a better term.
In short, as soon as you have a way of describing one object by two different names, you have aliases, or references, or pointers etc. Dig deep enough in virtually any procedural language and you'll start to find traps, tricks and caveats that only make sense if you think in terms of pointers. The language might do everything in its power to hide those tricks from you so you never try any form of *i++ malarky, but under the covers it's all the same thing (since at CPU level the computer can only think in terms of data or the address where that data is stored).
quote:
dragoon
Yes the underlying implementation of references would indeed use pointers at some time, however consider:
*p = 10;
Some languages may have specific situations where you might call something similar to a null reference, but they can't be used like a null pointer. (Comparison-wise, can you use them in logical operations? Maybe you can? I would imagine that in Java, in your array situation, arr[0] != arr[1] even though they are not initialised - more likely it would throw a not-initialised exception or something, as it can't get the hash code of the objects.) NB: Java isn't my strongest language.
The concept of a pointer is also different from the concept of a reference. Pointers are a location in memory, which can be used as a reference, but allow you to do a whole lot more (and shoot yourself in the foot in a myriad of additional ways). A reference is just that, a reference to an object that allows the compiler or interpreter to match up the code using it to the original object.
I'll respond at lunchtime since it's time to go to work now :)
Sorry mcdrewski, but I can't help pointing out the origin of the phrase that's got this rant started "graduates who don't understand the difference between a pointer to an array and an array of pointers" (or something like it).
I pinched it. The guilty party was actually Entr0py- one of my classmates who was having a bloody good rant about Certain People and the course we did at the time on that same blog of Yusuf's [:)]
It's offline now :(
Yeas indeedy. And while Dragoon is 100% right, a pointer is not identical to a reference, my point is kind of that all these languages have concepts which have the same mental gymnastics required as understanding pointers.
So back to your original phrase, it works just as well with references. "reference to an array and an array of references" is the same conceptual thing.
So I posit that to solve the problem we just say that a pointer is a special kind of reference and that they trip up n00bs. [:)]
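In code terms, the difference that trips people up fits in a few lines - a pointer is itself a value you can reseat, while a reference is a permanently bound alias:

```cpp
// A pointer is itself a value: it can be null, compared, reseated,
// offset. A reference is just a second name for an existing object -
// once bound it can never be made to refer to anything else.
int demoPointerVsReference()
{
    int a = 1, b = 2;

    int* p = &a;    // points at a
    p = &b;         // reseated: now points at b
    *p = 20;        // writes through to b

    int& r = a;     // bound to a, permanently
    r = 10;         // assigns to a; does NOT rebind r
    // there is no syntax that makes r refer to b from here on

    return a + b;   // 10 + 20
}
```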
Even though it's not common knowledge, null references exist in C++. It's only a bit hard to make that mistake.
First time I saw it I was pretty amazed. How come this is null now? I'm in this method from a reference! Not possible!
Here's proof of the elusive bastard:
[code]
#include <iostream>
#include <string>
using namespace std;
int main()
{
    string *p = NULL;
    string &r = (string &) (*p);
    cout << r.size() << "\n";
}
[/code]
quote:Originally posted by wicked
Even though it's not common knowledge, null references exist in C++. It's only a bit hard to make that mistake.
First time I saw it I was pretty amazed. How come this is null now? I'm in this method from a reference! Not possible!
Here's proof of the elusive bastard:
[code]
#include <iostream>
#include <string>
using namespace std;
int main()
{
    string *p = NULL;
    string &r = (string &) (*p);
    cout << r.size() << "\n";
}
[/code]
10 points for being guilty of that code :) I've been guilty of it too :(
Dragoon posted above about the crazy power of pointers. They are really dangerous, but they're just so cool because it's through pointers that you can do things you'd otherwise have to do in asm.
Plain old K&R C was designed to be a kind of portable, structured assembly language for making unix in, and you do need that sort of power when developing an OS.
quote:Originally posted by matt.davo
actually this should get a laugh from lorien too...
and possibly the bullant team.
[code]
inline Vector3* Vector3::operator-()
{
    return new Vector3(-x, -y, -z);
}
[/code]
maybe... sorry, but you guys hired him.
Poor Bullant [:(] You guys must be a lot more careful about the AIE grads you hire after that one...
PS2 homebrew dev
Hey, is anyone here doing any PS2 homebrew coding? Any engines currently in the pipes that are running on the PS2? I'm currently putting together an engine on the PC which I want to get onto the PS2 platform ASAP. Anyone know of any tech references, particularly for Emotion Engine core, besides the ps2dev.org?
Does Visual Studio rot the mind?
From slashdot (quite long and imho interesting) http://charlespetzold.com/etc/DoesVisualStudioRotTheMind.html
What do people think? I stopped using Visual Studio for my own projects in late '99, when I switched to Linux; for my own work, completely non-portable project files are far more hassle than they are worth.
I have a bit of a love/hate relationship with IDEs in general, I find they make some things I do all the time much easier, but other things I do all the time much harder.
These days I've now switched to using scons for building my software (http://scons.org , I highly suggest you check it out), and syntax highlighting text editors for code writing (like SciTE and Kate).
I know all the coding I've done using simple tools (and holding loads of stuff about my code in short-term memory) has really improved my IQ. Do people think tools like Visual Studio lessen the personal benefits of coding?
Quite an interesting article, and to be honest I haven't used the latest ones enough to judge. At work we're using the embedded version (which is equivalent to Visual Studio 6), and that only has a haphazard stab in the dark with intellisense.
Sometimes I don't mind the whole IDE thing, but I tend to think that the more recent incarnations of IDEs do too much of the variable stuff for the developer. A good IDE should only do the mundane things so that a programmer doesn't have to. By mundane I mean syntax colouring, building/rebuilding/cleaning the code, and a debugger.
Generating code should be left for specialist tools like flex/bison or some other ones like that.
Though any programmer worth their salt should be able to hack out code using something like notepad, and then compile it on the command line. If they can't do that then they should learn!
Anyways, enough rambling incoherence, back to work.
I always post links to this article by Joel Spolsky since I think it's fantastic.
http://www.joelonsoftware.com/articles/LeakyAbstractions.html
The brain-rot article seems to be basically ranting on the same subject with a different focus.
I have to use Visual Studio at work, which is tolerable most of the time, but I'm just glad to see I'm not the only one who finds it more than a little annoying some of the time. Intellisense (and even the build system at times) can't make sense of the design idioms I'm using at the moment, and doesn't seem to pay much attention to namespaces, so it's all but useless. Even the seemingly simple things fail: highlight function name, right-click, select "Go To Definition...", and Visual Studio opens a header file and points to the function declaration. I said *definition*, you useless *##*%#!!!!!
The overall effect is to, 1) reinforce the very low regard I have for Microsoft software, and 2) dramatically increase the desire to perform a few physics experiments - specifically, to study the fragmentation of a glass window as a computer is passed through it at high velocity.
Back when I was at uni, all my coding was done on Solaris workstations using Vi and GCC. I could get by quite happily with 2 or 3 Vi windows open and a command prompt. (I'm not a Vi fanboy, it's just what I got used to.) By having to remember more about what code I'd already written I think I wrote better code; better designed, more concise, more efficient, better documented. Visual Studio has made me lazy, but I'm as much an addict as the guy who wrote that article.
Good post Leto... Apart from Vi I'm with you [:)]
Try automating the generation of a regression test framework from unit test header files, building the code, running the tests, and reporting all failures using the MSVC build system. Then try doing it across multiple OSs and different hardware architectures. It can probably be done, but for complex build processes imho things are much easier with the power of a scripting language.
The best thing about a good IDE is a good integrated debugger. edit/compile/run tools thrown together are typically quite poor at that (although you can of course work out alternatives).
That said, understanding in tooth-wrenching detail the steps your code goes through (parse/compile/link) can certainly make you a better programmer. Or at least a paranoid, grizzled wreck of a programmer who doesn't trust anything (s)he can't see the source for :)
At work a guy here has integrated a build system based on Ant to do our builds, and we can trigger it from within Visual Studio too. There are some small irregularities and limitations still but we can build libraries, dll's, unit tests, applications etc from the one "script" file.
I guess if you try to go too far into the multiplatform with Visual Studio on one side you're just asking for hurt.
quote:Originally posted by Daemin
At work a guy here has integrated a build system based on Ant to do our builds, and we can trigger it from within Visual Studio too. There are some small irregularities and limitations still but we can build libraries, dll's, unit tests, applications etc from the one "script" file.
I guess if you try to go too far into the multiplatform with Visual Studio on one side you're just asking for hurt.
Visual Studio .NET is a PoS - buggy and slow. It does 2 things well: organising settings for your projects, and compiling fast (particularly if you use precompiled headers properly).
However Visual Studio 6 is better at both of those than .NET - no shader debugging though, and a shit STL implementation :-(.
I use SciTE plus a project organiser that I wrote in wxPython to edit the files.
Does it rot your brain? I don't know why it would. I don't know anyone who would seriously use the template generators for classes, functions, etc. anyway.
quote:Originally posted by Dragoon
Does it rot your brain? I don't know why it would. I don't know anyone who would seriously use the template generators for classes, functions, etc. anyway.
While the rant does talk at length about the shittiness of the code MSVC generates, I think the mind-rotting stuff is more about Intellisense. If the IDE remembers pretty much everything about the APIs you're using and your own classes, you can become accustomed to not doing it yourself. It's not really an issue in MSVC 6, where the Intellisense is quite simple, but as it gets more sophisticated it becomes more addictive.
It's not just the STL in MSVC 6 that is bad, the compiler itself has pretty crap template support. Don't even try template meta-programming with MSVC 6...
If you've ever programmed in Java with the Eclipse IDE you'd know how great all the "Intelli" type stuff can be. It helps productivity no end.
It's bad from the point of view that you won't want to use anything else to program Java once you've used it, not because it makes you forget things.
Good thing it's cross-platform.
quote:Originally posted by lorien
It's not just the STL in MSVC 6 that is bad, the compiler itself has pretty crap template support. Don't even try template meta-programming with MSVC 6...
The question being: why on Earth would you want to do metaprogramming with templates, aside from research purposes?
How does it solve any real-world problem (which cannot be solved in a simpler, more maintainable way)?
How about an RTOS memory allocation system?
Seriously though, metaprogramming does seem useless. Then, as with any coding technique, you start to find applications.
Metaprogramming doesn't require anything really fancy, just a reasonably standard-compliant compiler, which MSVC 6 most definitely isn't. That's all I meant.
I've had MSVC's non-standard compliance bite me in other areas too.
But yes, I'm a researcher, and much tech stuff I talk about does not come from making games.
quote:Originally posted by Dragoon
If you've ever programmed in Java with the Eclipse IDE you'd know how great all the "Intelli" type stuff can be. It helps productivity no end.
It's bad from the point of view that you won't want to use anything else to program Java once you've used it, not because it makes you forget things.
Good thing it's cross-platform.
I said exactly the same thing until I started using RAD (Rational Application Developer). Once you start using RAD there's no going back to Eclipse.
We just have to keep in mind that all the Intellisense stuff is only good when it is correct. When using the Visual Studio 6 IDE (or equivalents) and trying to use ATL/WTL it is actually infuriating, since it gives you the MFC method signatures and the like.
For languages such as Java and C# I can see that the Intellisense stuff could be quite useful, since they have a relatively fixed set of main language libraries to use, whereas C++ has quite a lot of varying libraries, few of which are "standard". When using C++ you're far better off looking either at the library code directly or at the documentation.
And as for templates I use them a fair bit, they're useful not only for container types like in the STL, but also useful for certain base classes - that share functionality with each other but not certain data types, and also for mixin classes such as in ATL/WTL.
SQL Question
Just a question for the SQL users out there. Is it possible to find the count of how many tables are in a database?
If I can get this question answered, then my Software Design assignment will be finished! =D
Yes, it's possible. It depends (however) on the database you're using.
Most databases (AFAIK all, but I'm sure there's an exception somewhere) use meta-tables to hold information about their tables and columns.
In Oracle, it's SYS.ALL_TABLES (or its kin SYS.USER_TABLES / SYS.DBA_ALL_TABLES - refer to the Oracle Table Reference for more versions)
In SQLServer, it's DBO.SYSOBJECTS
There is *no* standard for this though, so every database is different.
However, through ODBC, JDBC, DAO etc. there are calls in the API to enumerate the tables. Each driver implements this in a DB-specific fashion to hide the underlying implementation.
Without wanting to sound facetious though, asking an "is it possible" question in a software engineering assignment is just asking for trouble. i.e.: Is it possible to access the "hot coffee" scenes in GTA:SA? Easy, intended or supported? No. Possible? Well, obviously.
Aaaaaaaaaa-[url="http://java.sun.com/j2se/1.4.2/docs/api/java/sql/DatabaseMetaData.html"]choo![/url]
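To make the meta-table idea concrete, here's a sketch using SQLite (chosen only because it's self-contained to run; its catalog is the sqlite_master table, playing the same role as SYS.ALL_TABLES in Oracle or DBO.SYSOBJECTS in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE courses (id INTEGER, title TEXT)")

# The database describes itself through its catalog meta-table.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table'").fetchone()
print(count)  # 2
```

The COUNT(*) query is the same everywhere; only the name of the meta-table changes per vendor.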
Programming a homebrew game
This thread is for those who would like to discuss game projects they have created/completed in their spare time (from a simple PacMan clone to something more complex) and share experiences (e.g. unforeseen difficulties that you overcame, staying motivated etc.)
As for me, I've been working on a 3D/FP engine for some time though I've not yet written a full game on a PC.
I've posted screenshots and a desc of this project here previously and have been grateful for the constructive criticism. In contrast I also tried posting about it once on a generic gaming forum - the response was pretty viperous! (the demo was made using simple placeholders for graphics). A lesson there being to confine such discussion to game-development boards, where everyone understands what you've done.
Since the project isn't yet complete, I can't really comment on the entire process (at least for games). What I generally do is take an informal SD approach; first read up on common algorithms for solving a particular problem, or take apart demo code if it's an interface I'm trying to understand. Then comes planning the classes, their responsibilities, how they interact etc. As it's a hobby project, I'm a bit lax with testing, though I generally throw input at new classes in isolation until I'm satisfied that they work (a 'test harness'), before adding them in.
In terms of difficulties, the constant upgrading of DirectX has been a pain in the butt, which ended up with me buying MSVS .net. To stay motivated I try to add a single bit of functionality (or similar) improvement in each session at least.
I was working on a Street Fighter style game in java for quite a while, but lost interest because of my crappy programmer art. As well as that, I didn't really do much planning, so the code was beginning to become messy.
It was far from finished, you could only punch to attack, and there was a shite load of bugs in it, but it is the biggest hobby project I've attempted.
I'll try to post a link to the game so people can download, try it out, and laugh at how simple and buggy it is. =D
I'm always working on some sort of game or game tech project in my spare time. Unfortunately, I don't have a whole lot of spare time these days as I work full-time and study part-time at uni.
I've been working on a new game engine for most of this year, which is coming along quite nicely. I've probably only averaged two or three hours of coding time a week, but it already has some nice features. Hopefully in a few months it'll be advanced enough for me to write some sort of basic networked-multiplayer FPS demo.
quote:Originally posted by Kane
I was working on a Street Fighter style game in java for quite a while, but lost interest because of my crappy programmer art.
Replacing the phrase 'crappy programmer art' with the euphemism 'placeholders' tends to make a project seem more worthwhile.
quote:
As well as that, I didn't really do much planning, so the code was beginning to become messy.
It's true that five minutes on paper can save you a couple of hours at the keyboard fixing bugs. I tend now to do 'tidying passes' through code as much as I add stuff, in order to keep it maintainable/reusable.
quote:
It was far from finished, you could only punch to attack..
Could call it 'Joe Bugner's Street Pugilism' instead. [:p]
quote:
I'll try to post a link to the game so people can download
Cool [8D]
TheBigJ: Sounds ambitious. Multiplayer is way down on my list, since I'm trying to keep the project to a manageable size, and there are enough things I need to consider to make a decent single-player game first. [}:)]
I think good programmers always have spare time projects on the boil. It's part of loving what you do.
If a spare time project becomes an obsession you might be able to pursue it as a research degree somewhere, and if it (and you) are good, end up being paid (scholarships) to work on your own project, get input from others, and get a higher degree too. Of course there's a big thesis as well, but writing about your own work and ideas (and similar work and ideas) isn't so bad.
In my case (being qualified in computer music more than anything else), a comp-sci research scholarship was out of the question. There is no government support for research degree study other than scholarships, but there are no fees (other than GSF) either. After a trial as a casual tutor I was given a teaching fellowship.
While my research work isn't a spare time project, that's how it started.
I've always got projects underway. It's fun a lot of the time and a pain at other times :P
1. Big major project - Top secret stuff... very crazy idea. Not likely to be done anytime soon
2. 3D RPG (Current Project) - A bit like Morrowind in a way... ie. FP or 3P and realtime combat - Will mix real life weapons in a fantasy setting
3. Multiplayer 3D Shooter - Annoying Aliens and Maniac Marines
4. Numerous other bits and pieces such as experimental AI, maps, q3 mods etc...
Annoying Aliens and Maniac Marines is quite playable:
http://darkreign.buysmartpc.com/temp/AAMM-Alpha.avi (11mb or so WMV9 AVI)
Although, as you can probably tell, I'm not an artist - seeing as it's a private project I've been borrowing art from other sources
It's got full MP support, AI, melee and ranged combat, plus lots of other stuff.
IMHO if I didn't have projects to work on I dunno what I'd be doing :P
quote:Originally posted by WiffleCube
In terms of difficulties, the constant upgrading of DirectX has been a pain in the butt, which ended up with me buying MSVS .net. To stay motivated I try to add a single bit of functionality (or similar) improvement in each session at least.
Same here, had to keep rewriting DirectX code - think I'll leave it at DirectX 9 - Windows Vista and DirectX 10 will take a while before everyone jumps on the bandwagon. I'm also doing a multimedia course to keep me motivated. Another thing that keeps me motivated is writing a website on how I created the game (like a blog of sorts) - helps to keep my head clear as well. I wrote a simple 2D game and still need to finish it off (see [url]http://users.tpg.com.au/ruzbacky/chapter400.html[/url]). I'm also working on a 3D FPS from scratch.
I found the 2D game not too hard to program - it took around a year part-time to get the hang of it. For the 3D game, some of the hard parts seem to be learning how to use a 3D modeller to design a level and make characters/objects and texture them. It took a while to get my head around how the whole 3D graphics pipeline works (model, world, camera, projection space to screen space). Playing with existing graphics engines and game mods for FPS games seemed to help. It's going to take a while! :)
Web-based games look very promising. I was playing urbandead and I have thoughts of developing a game with this philosophy.
There are many limitations, since the browser does not give you many resources. I am referring to audio and graphics - there's no real support for video or for animating characters.
I will give more information in a few days, since right now it's a testing and debugging period.
Ditto with lorien on this. It's a little bit like an addiction. You'd think I'd get my fill from writing game code for a living.. bzzzt. Need more.. NEED MORE!! :)
Like most people I have fingers in all sorts of things outside work at the moment. From simple script-based app/game development tools like LuaEng http://luaforge.net/projects/luaeng/ which was actually made for the ARage guys, and ended up being my little 'testing sandpit' for all sorts of things. Also writing an embedded HTML widget (target-surface independent). As well as a mil-strat sim.. and a little game that I have been meaning to finish for my kids (since they did a ton of art for it :)). Added to this are various side things going on... most of which are under NDAs, so that's all I can say - although one I can talk about is a pretty interesting way to make apps, via a completely visual tool :) That's turning out to be quite a lot of fun - imagine developing a game in a tenth of the time with no possible bugs ;) .. 'tis the future of dev imho.. no more coding.. YAY! :)
grover, I would have thought writing a game without code might result in a rather slow game?
I have the usual olde heap o' projects, but most time has been spent working and reworking application and code architecture.
Currently there are about 3 games I want to make, and 2 tech demos, though I made myself promise to 'only work on one project at a time and finish it', which helps a little with my habit of constantly starting a new project and moving on in 2 weeks (which had just meant I would start a new architecture to make these things with, and abandon that in 2 weeks).
Yeah I had thought so.. but.. :) I made a little discovery (or many people have, I guess): if you use an optimal, small and lightweight scripting system, the amount of code it generates and runs is comparable to writing complex object-management systems in code, with all the controllers and handlers to suit. While there are performance benefits in C/C++/asm for specific things, that's where they should be anyway (i.e. I/O systems like gfx, audio, files etc). Also, many high-level systems have been around for decades for managing large volumes of complex data - like relational databases, which are also ideal for game systems, and a whole lot more efficient than anyone could code for object management (i.e. don't reinvent the wheel :)). Coding these days is fraught with complexity and thus needs extremely large debugging and error-chasing resources, but moving up an API or three you can gain a lot of safety and rigidity (regression testing is usually a whole lot easier too), and the best bit: you can develop rapidly - I've found around 5-10 times faster, with more robust results. Take that a step further.. and I think you can even eliminate programming (as we know it) altogether :)
Programmer / Consultant Drag Racing Game
Hiyas,
Didn't wanna put this under jobs because it's not a real job as such.
But I am looking for a programmer, or someone who has experience in taking suspension and engine data from cars and applying it in a game.
At this point I'd appreciate hearing from anyone who even has an idea of how to go about it, although if there are any other drag racing buffs who can program and would like to come on board, that would be a bonus.
The game itself won't have top fuel dragsters and whatnot, but is more of a sportsman racer (like Bethesda's Burnout from 1998) involving modified street cars on the 1/4 mile. I think this would make it a little easier to handle the physics and whatnot needed to get a realistic feel to the game.
Currently it's just me as an artist, and a friend whose engine we're using. It's open source so anything can be added to it, although it already has Newton physics in it.
A little more about the engine can be found at http://homepage.ntlworld.com/matibee/MEng/
Currently working on getting a generic car base started and going from there.
Other than that, if anyone else is interested in making a drag racing game (e.g. artists etc. etc.), drop me a line.
Cheers
Adam
While I don't have time for this project (sorry), the soft-body deformation of car tyres is actually of some interest to me as both a race fan and gamer.
In particular, the shape of a drag racing wheel changes dramatically at different speeds, and while a collidable, friction-aware soft-body simulation might be overkill for your available resources, a few 2D deformable 'bands' cross-sectioning the wheel might be an acceptable and simple-to-implement solution for you..... sounds fun :)
best of luck
Hiya,
I'm not a programmer myself, but I know a little...
Anyway, yeah, tyre expansion in drag racing is an interesting thing. A lot of the latest drag racing games feature it, but from memory you don't actually feel the car rise like you do in real life. You notice it a lot more on the top fuel dragsters and whatnot, but in a car like mine, which I do drive, even though the expansion is probably only about 2 inches, you do feel the effect while doing a warm-up burnout.
It would be interesting to implement for viewing, and interesting to make it actually have an effect on the suspension. I hope to have a fairly simple-to-use but realistic working suspension in the game, with changes actually being noticeable in game, unlike some others.
I'll definitely work on that one.
As for a 3D engine, I was looking at http://www.kjapi.com - has anyone else played with it?
I was thinking of maybe using it, as my mate's engine, while quite advanced, lacks features, and this one is pretty complete, with the same Newton physics already implemented and a lot of other features and tools too.
Thanks
Adam
Inspiration!
Sometimes (especially when you do this stuff for a living) it is hard to get inspired to code something. So to get inspired I have a few things I like to do - everyone post their own and let's all get inspired!
For quick inspiration I checkout the current [url="http://pouet.net/toplist.php?type=prods&platform=Windows&x=19&y=12&prod…"]top productions[/url] on [url="http://pouet.net/"]pouet.net[/url].
Also, [url="http://www.flipcode.com/"]flipcode[/url]'s [url="http://www.flipcode.com/iotd/"]Image of the Day[/url] gallery is very nice to see what other people have come up with.
[url="http://www.halfbakery.com/"]HalfBakery[/url] has a lot of useless ideas - I sometimes just spin through there to get brainstorms.
Now that flipcode has sadly bitten the dust, gamedev.net has just started their own [url="http://www.gamedev.net/community/forums/gallery.asp?forum_id=62"]image of the day gallery[/url], which already has a few inspiring images.
While I find IOTD things interesting, I can't say I've ever been inspired by them. It's more depressing usually because I'll see something I'd like to do, but it doesn't fit within my own project in any way.
Actually, my source of inspiration is usually spending several hours doodling something that I thought up as a meaningless phrase or similar idea, and trying to draw something nice or substantial as the end result. Though I've found it's often quite tiring to spend most of your waking hours being productive; work + projects + drawing doesn't leave much time for anything except a little bit of playing games and some eating and sleeping.
Listening to some heavy metal is nice for me - especially if it's a new discovery or a favourite band of mine. I also sometimes just sit around and think of ideas. I tend to think WAY too big (NO, not MMO games [:P] I don't like 'em...) so the challenge for me is to shrink the ideas down to manageable sizes. Now if only it were possible to squeeze MORE hours into the normal day...
Oh... can't forget this suggestion - comedy [:D] Watching funny stuff can inspire cool ideas... Billy Connolly for example. I tend to have a whacked sense of humour, so coming up with funny ideas can sometimes inspire new game ideas and all sorts of random stuff.
Quake 3 source (engine) has just been released!
You can get it from here: ftp://ftp.idsoftware.com/idstuff/source/quake3-1.32b-source.zip
I've been trying all day and haven't been able to get on yet.
Comparison
I am not in the business at all, but over the past year I took up modifying a game again, first alone and then with a friend. He had extensive C++ schooling and learned the Python scripting code through trial and error; I started out in Python with no past knowledge of code. I learned how to work around in the code, find out what code was doing, alter it to trigger new events within the script, and add things to it. What I am curious to find out (I am not a tech wizard :( ) is how different the two languages are, and how hard it is for a non-college person to learn programming as a whole. I ask because I want to dive into the Unreal engine for the Matinee features, and I know you need a bit of knowledge of UnrealEd and UnrealScript to do this. Will trying to learn C help in learning this stuff also?
If I sound goofy or am asking the wrong group, just slap a 'closed' on the thread, lol. I can be annoying to experienced folk.
The problem I have is starting from nothing, lol - just basic Python code structure and such. Funny that I can make the game do some pretty nice events that I script in, but I can't get the Python tutorials' example programs to work correctly, lol. I was going to wait till Stargate Alliance came out to dive into learning the script, but I may go get the Unreal 2004 DVD and start, since the game has been delayed.
In regards to learning from scratch - we've all been there at some stage ;)
UnrealScript - if you don't want to buy anything, you can download the Unreal Engine 2 Runtime demo. This is a feature-complete build of the Unreal engine (version 2; not sure how recent the build is compared with the build used for Unreal Tournament), and it also has a rather basic game framework on top.
It will allow you to fiddle with UnrealScript in a more basic form (without all the Unreal Tournament additions and overrides).
Yeah, the Unreal Engine 2 Runtime is what you'd want for making mods for the Unreal engine. Basically version 2 (or 2.5 in some cases) of the Unreal Engine is the latest, with version 3 being held back for the PS3 and Xbox 360, IIRC.
But keep plugging away at it and soon you'll get somewhere. Also, ask someone who's already done a mod for feedback on the code - the more eyes looking at it the better.
Right, on the PC version. I'm curious what updates there will be to the Doom engine after they release the Unreal 3 engine; you know John Carmack has more advanced features programmed but not shown, due to competition and the limits of hardware. I don't know where to get enough texture packages, models and so on with the Unreal demo engine, so I may just re-buy the Unreal 2004 game. I'm just not a fan of the game, but I guess you can't beat what you get for editing for the money.
Looking for a good navmesh tutorial
At the moment I've been scribbling out an AI pathfinding manager, and I've been looking for a good tutorial online for navmeshes.
EDIT:
Found a good section on navmeshes in 'AI Game Programming Wisdom'[8D].
On a side note, one thing you could do is break the traversable mesh up into disjoint nodes and make a graph out of them. Then pathfinding is just as simple as finding the shortest route through a graph, and there are many algorithms and much work that has been done on that. Going further, you could subdivide the traversable mesh into a hierarchy and use different algorithms (at different rates) for navigating the world.
Ah, all those graph algorithms like Dijkstra and A*.
This is the first time I've tried something like this (aside from graphs for a couple of uni algorithms courses). The navigation mesh idea sounds good because it reduces the search space to a minimal set of large convex polygons (the walkable floor is divided into the least number of rooms possible, or thereabouts). It's a bit complicated so I might look at simpler methods first.
I downloaded an interesting paper about the algorithm used in Baldur's Gate pathfinding, HPA*. HP there stands for 'hierarchical path' - as in the hierarchies you mentioned.
Progress is slow, but I'll get there in the end [}:)].
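For what it's worth, once the walkable area is reduced to a graph, the search itself is short. Here's a sketch of Dijkstra (i.e. A* with a zero heuristic) over a toy adjacency list - the room names and costs are made up for illustration; a navmesh version would use straight-line distance between polygon centres as the A* heuristic:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over an adjacency dict {node: [(neighbour, cost), ...]}."""
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + step, neighbour, path + [neighbour]))
    return None  # goal unreachable

rooms = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(shortest_path(rooms, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

The hierarchical (HPA*) trick is then just running this twice: once over a coarse graph of clusters, then again within each cluster along the coarse route.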
Yeah, I just came up with stuff off the top of my head from different articles that I've read. You could perhaps try www.generation5.org, that is a decent AI site with links to articles. Gamasutra should have something along these lines also (but I figure you would have tried that out already).
quote:Originally posted by Daemin
Yeah, I just came up with stuff off the top of my head from different articles that I've read. You could perhaps try www.generation5.org, that is a decent AI site with links to articles. Gamasutra should have something along these lines also (but I figure you would have tried that out already).
That generation5 site appears to be an excellent resource- thanks.
HLSL shaders
Well, I think this is the first time I've posted anything under the Programmer Discussion forum, but hey, it really doesn't belong elsewhere!
Ok, starting to dabble in the .fx High Level Shader Language, since I think it's a really good skill for an artist to have; if not to make 100% of their own shaders, then to be able to collaborate on more advanced ones with programmers more directly, and to understand what numbers to plug in to make things look good in an artist's eye.
So where I'm up to: I'm picking apart basic Blinn and Lambert shaders, reverse engineering a few more advanced effects, and starting to slowly get a grasp on everything. I can write a basic lighting and texturing shader, and I'm playing around with the per-pixel specular component at the moment.
My question is, does anyone else have shader experience, and if so, how have you found it? What have you done so far?
My current aim is to make myself 3 pluggable shaders:
1)A base shader supporting diffuse, normal, specular, opacity, gloss, offset mapping (read: virtual displacement) and hopefully attenuation
2)A skin shader as above, but also a faked translucency with colour component, intensity, and falloff, plus an added fresnel specular.
3)A hair shader: anisotropic highlights, normal, and light transmission (e.g. another spin on a fake translucency for backlighting).
Well I've been working with shaders for a little while now. Like you're probably finding, initially the hard part was getting my head around the myriad coordinate spaces (model-space, view-space, texture-space, etc.) that you have to deal with. After that, it was getting a grasp on what you can and can't do with shaders. I mean, you can probably do anything, but it might take 17 render passes, in which case there's no way it's going to happen in real time. This is where I think HLSL is just too high-level. It's nice that your shaders are much more human-readable than assembly code, but it's too powerful: there's support for if-statements but everyone will tell you not to use them because they can badly affect performance.
I've really enjoyed taking the time to work it all out because you can see the results of your work almost immediately, and coming up with a cool effect ("Hey those clouds look almost real!") is really satisfying. On the other hand the limitations can be really frustrating. I've managed a pretty good looking water effect but I've not been able to get it to reflect the environment properly yet simply because I'm constrained to doing everything in one pass.
If you are able to work out your base shader and fully understand everything that's going on, then I think you're probably about 80% of the way to doing just about anything you want.
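On the coordinate-space maze mentioned above: each space change is just a matrix multiply, and it can help to play with the chain outside a shader. A toy sketch in plain Python (the translation values are invented for illustration; a real pipeline usually concatenates model→world→view→projection into a single matrix on the CPU and hands that to the shader):

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translate(tx, ty, tz):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

point_model = [1.0, 0.0, 0.0, 1.0]   # position in model space (w = 1)
world = translate(10, 0, 0)          # model -> world: object sits at x = 10
view = translate(0, 0, -5)           # world -> view: camera 5 units back
point_view = mat_vec(view, mat_vec(world, point_model))
print(point_view)  # [11.0, 0.0, -5.0, 1.0]
```

Keeping straight which space each vector lives in (and which matrix moves it to the next) is most of the battle; the shader maths itself is the easy part.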
Quick update before I run off home; so far:
ambient controller (base light multiplier)
diffuse tint
diffuse multiplier
translucent colour (due to be a texture)
translucent falloff
diffuse texture
specular texture
[img]http://server2.uploadit.org/files/jistyles-hlslskinwip01.jpg[/img]
very pleased with the progression so far.
My barely passable coding abilities, however, mean I've been stuck on a few basic code structure problems. I've found passing data through different states is a pain in the butt though [:(] - it took me way too long to figure out how to pass the UVs into the shared segment... anyways:
[img]http://server2.uploadit.org/files/jistyles-hlslskinwip03.jpg[/img]
edit: Pretty bad head shot there of my test asset - it only has a very quick translucency mask and ramp off texture, so I really should spend a little time seeing how far I can push the visuals before going too far on. Just an update on specs:
inputs:
light
ambient
diffuse colour (tints texture)
diffuse texture
diffuse power
translucency colour
translucency power
translucency ramp in
translucency ramp out
translucency mask texture
translucency ramp bias texture
basic specular is broken since I borked the shader structure and it got way too messy... hopefully I'll have something a bit more elegant sometime this decade.
soooo... now I have full per pixel texture masked ramping controls, so I can bleed the light around into the shadow per area. What that means in a practical sense is I can effectively make the difference between the thin minimal light occlusion of the ear and the thick near total occlusion forehead skin being so close to the skull.
One thing that seems rather cool is that it's pretty cheap even in this unoptimised state... you can even bump the translucency maps down to half or even quarter the resolution of your diffuse, since the sampling is hardly noticeable and it's a very smooth gradient effect.
anyways, more to come soon I suppose, and I should also be posting something done with this in the exhibition soon enough.
enough blab!
ok, quick update - required a new test asset, so I made what I'm calling a "Squishy". They go BOING if you drop them.
Work in progress, also showing the UI:
[url]http://www.jistyles.com/content/resources/hlsl/hlsl-skin_wip03.jpg[/url]
I've also set up a mini diary/log, documenting my progress on my website, so check that out for more info :)
[url]http://www.jistyles.com/main.php?id=doc&page=hlsl[/url]
(man... I need to put in a lighter backing for those text boxes huh?)
I don't know if this will be of any use, but there is a free plugin for 3DSMAX 5+ exporting .x files that also generates .fx shader files; it's called "Panda". I found it on the web as "PandaExporter.dle", but can't remember where I got it [}:)].
There is also apparently one in the DirectX SDK but it was missing in my copy. I should probably look for a better one.
wifflecube: I don't really need to export anything since it's written directly in .fx form :) good to know there's a replacement to the standard ms .x exporter though - I needed to export some assets for a client recently and didn't like the ms exporter very much.
redwyre: Heh, you'll probably laugh... I'm writing it in notepad (couldn't stand fx composer or rendermonkey... felt like a loss of control) and using 3dsmax 6 and 7's direct x 9 viewport as a test bed. The shader itself is a .fx file, so you can just plug it straight into the max viewport, as well as any engine that supports .fx / .hlsl shaders. I think I'll start posting source once I've got the fresnel specular component in there and the lighting model debugged; I've got a seam error on my normal map which I can't figure out for the life of me.
I doubt any programmer will laugh if you tell them you're coding in a plain text editor with a compiler to do your learning. It might not be pretty, or quick, but actually knowing things from the bottom up gives you an insight you just can't get with yer fancy IDEs.
Of course, they'll laugh if you *stay* using it for long after your first few months [:)]
A bit off-topic, but I disagree mcdrewski. I do most of my coding with a text editor and commandline. The text editor is SciTE http://scintilla.org/SciTE.html and I've got my own build system written in ruby. IDEs slow me down because I work across many platforms and compilers and I have no desire to maintain a project file for each IDE/platform.
I used to use IDEs, and they can be nice. I just refuse to use text editors with UIs designed by a dyslexic gibbon (I'm talking about VI and Emacs).
JI, you will get far more out of SciTE if you spend 1/2 an hour or so going through the global config file and set it up the way you like. I turn on multiple tabs (10 files open at once), line numbers, default to monospace font, and have it check if it is already open before launching another instance. It's way cooler than what you see from the basic setup you get from a fresh download.
Also if you edit the config file in SciTE itself, as soon as you save it the changes take effect i.e. no need to restart it.
Ok, been going pretty steady, but I've got a quick question:
is there any cleaner way to invert a 0-1 range to 1-0? As in, white/black inverse? Currently, my fresnel masking code looks something like this:
...
fresnel = dot(lightvector,normals);
return saturate(fresnel - 1) * -1;
...
I think that's it... don't have it here to directly copy/paste though [:)] anyways, you get the drift... saturate to 0-1, subtract 1, multiply by -1; that seems really bulky and crap to me, and it seems like I should just be able to saturate -fresnel or something along those lines, but that obviously doesn't work [:D]
anyways, if any programming types can point me in the right direction there, it'd be much appreciated :)
Erm...that code doesn't look like it even does what you are wanting it to, unless I'm reading it wrong.
Anyways, as long as you can guarantee that your range is [0,1] (which "saturate" does nicely), then inverting the range is just "1 - n". So, if I'm guessing the intent of your code correctly:
[code]fresnel = dot( lightvector, normal );
return 1 - saturate( fresnel );
[/code]
huh, odd... sorry, I didn't think I explained it very clearly, but trying out the 1-n seems to work even if the numbers aren't the same... what I need to do is invert the range 0-1, to effectively change 1 to 0 and 0 to 1.
so what I'm doing is (n - 1) * -1, so:
(0 - 1) * -1 = 1
(1 - 1) * -1 = 0
the 1-n I thought would just put 0 to -1 and 1 to 0, but it doesn't seem to have any visible difference and is computationally lighter. Since I'm using it as a mask to ramp in glancing glare on the specular, I thought the -1 would make a negative addition (burning the specular instead of additively pushing up the value). Well... that's my crappy maths for ya! cheers :)
Compiling A Plug-in for Maya Mac OS
Hello,
I've got a plug-in that I desperately need compiled for Mac OS X. Here's the link:
http://www.comet-cartoons.com/toons/melscript.cfm
I'm looking at:
poseDeformer 1.17 - Requires: Maya 6.01 - (Windows PLUG-IN) - CATEGORY: Rigging
I e-mailed Mr. Comet and he said that he is planning on releasing a version for Mac, just not at this time. However, his source code is included in the download and if I was able to find someone to do it, it could be done fairly easily. If there is a kind soul out there with a little extra time and the ability to compile for Mac, I would be very grateful. I'm running Maya 6.0.1 on Mac OS 10.3.9.
Thank you so much!
Travis
quote:Originally posted by wavescorx
Hello,
I've got a plug-in that I desperately need compiled for Mac OS X. Here's the link:
http://www.comet-cartoons.com/toons/melscript.cfm
I'm looking at:
poseDeformer 1.17 - Requires: Maya 6.01 - (Windows PLUG-IN) - CATEGORY: Rigging
I e-mailed Mr. Comet and he said that he is planning on releasing a version for Mac, just not at this time. However, his source code is included in the download and if I was able to find someone to do it, it could be done fairly easily. If there is a kind soul out there with a little extra time and the ability to compile for Mac, I would be very grateful. I'm running Maya 6.0.1 on Mac OS 10.3.9.
Thank you so much!
Travis
I also e-mailed Mr. Comet and it seems he has no urgent plans to make a OSX port, if at all.
But, some progress?
There is a Maya user over at the Autodesk Maya Mac OSX forum who has been working on a Makefile for PoseDeformer. However he is still getting 1 error. Can ANYONE help with this?
I have included the text from his post, plus the URL (you can access and download the makefile there). I have attached a ZIP file of the makefile as well, but the extension needs to be changed from .txt to .zip before it can be opened. Once there is a valid makefile for OSX then there is a chance to recompile this plug-in for OSX.
quote:[i]Tom C[/i] I am having problems with constructing the Makefile...
I have attached my current version, went from 3 errors down to now just 1 error.
I think maybe I am linking the obj files incorrectly but I am not sure.
This is the remaining error, no matter what I do:
ld: Undefined symbols:
_main
__ZN10mirrorData10initializeEv
__ZN10mirrorData2idE
__ZN10mirrorData7creatorEv
__ZN12poseDeformer10initializeEv
__ZN12poseDeformer2idE
__ZN12poseDeformer7creatorEv
__ZN16poseDeformerEdit7creatorEv
__ZN16poseDeformerEdit9newSyntaxEv
make: *** [plugin] Error 1
The URL
http://forums.alias.com/WebX?13@305.DQMFaAtTeSu.1@.3bb1a31e/1
I am not able to construct a makefile myself, but with help from a co-worker, I might be able to use a working makefile to compile an OSX plug-in.
Anyone?
[img]icon_paperclip.gif[/img] Download Attachment: [url="http://www.sumea.com.au/forum/attached/fever/20063102123_Makefile.txt"]Makefile.txt[/url]
1.97 KB
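Not a Mac person myself, but an undefined _main from ld is usually a sign that the plug-in is being linked as a standalone executable rather than a loadable bundle; on Mac OS X a Maya plug-in is linked with the -bundle flag. Here's a hedged sketch of what the link step might look like - the object file names and library path are guesses inferred from the error output, not Tom C's actual makefile:

```shell
# Hypothetical link step for an OS X Maya plug-in. The crucial flag is
# -bundle: without it, ld links a standalone executable and demands a
# _main symbol. The undefined mangled poseDeformer/mirrorData symbols
# also suggest those classes' object files are missing from the link line.
c++ -bundle -o poseDeformer.bundle \
    pluginMain.o poseDeformer.o poseDeformerEdit.o mirrorData.o \
    -L/Applications/Maya6.0.1/Maya.app/Contents/MacOS \
    -lOpenMaya -lOpenMayaAnim -lFoundation
```

If the mangled symbols are still undefined after adding -bundle, double-check that every .cpp in the plug-in is compiled and its .o listed in the link rule.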
A question about DX textures and materials
In DirectX, is CreateTextureFromFile smart enough to know if it has already loaded a particular texture? The reason I ask is that many examples load a mesh's textures by looping through the materials, using the material's pTextureFilename field to load the textures - so that may mean a lot of needlessly duplicated textures in memory if DirectX doesn't recognise textures as being previously loaded (unless you keep track yourself).
quote:In DirectX is CreateTextureFromFile smart enough to know
if it has already loaded a particular texture?
Short answer: It doesn't as far as I can tell, and I'd be surprised if it did. One way to check would be just to try and load a texture from the same file and see if the function returns the same pointer.
Longer answer: I can see that sort of functionality causing weird and confusing problems for us users. It would mean DirectX would have to maintain some sort of internal texture library - do you bother trying to work out if two different directory paths point to the same file? What if some external program overwrites the file or renames it? What about textures modified, or even created procedurally, at runtime? It's quite possible to sort through the issues, but the seemingly simple CreateTextureFromFile function suddenly has different semantics and side-effects from what is documented.
CHM Authoring tool.
Does anyone know of a good CHM authoring tool that isn't HTML Help? It's driving me absolutely berserk as it's falling to pieces and working with mixed results on other machines.
A free monkey for anyone that knows of one. [B)]
Scott.
Game Designer : Understanding Programming.
G'day guys. Well I'm currently in the process of making a game design document and folio to get into the game industry. However, I come from a more artistic background.
To be a better game designer I think it's right that I at least read up on the concept of programming. Basically I want to understand what you guys do, even if only at a basic level. Some knowledge is better than no knowledge.
So, what books could anyone recommend for me to read? I'm looking for a book that is both relevant to game programming and is understandable by someone who doesn't have any previous knowledge of programming. It doesn't have to be in-depth though.
I'm sceptical on whether a book like this even exists. But it can't hurt to ask.
(( I'm guessing what I'm looking for is a basic book on C++ and DirectX ))
When you say simple I think Idiots Guide to C++, when you say in-depth I think C++ Primer Plus. And those books may give you an idea of what programming is but they still will not give you an idea of what games programming is. I think you are looking for two very different things when you say you want a book that is in-depth but can be understood by a non-programmer.
Maybe you could clarify what it is you want to know about what programmers do.
quote:Originally posted by Leto
Maybe you could clarify what it is you want to know about what programmers do.
Basically, what a designer needs to know: the limitations and constraints he has to work with to make his game design feasible to produce. Coming up with a game idea is easy. Coming up with a game idea that can feasibly be made with the technology you have available is a more complicated matter.
So I'm more or less looking for content that will teach me the limitations of programming.
I'm only guessing, but I think that encompasses current game engines and such.
I know this all sounds contradictory. But it could be worse. I could be sticking my head in the sand on the topic of programming.
See, the real difficulty with your query is that I really can't conceive of a way to be taught the limitations of programming. It has taken a few years of experience in the real world to gain any sort of intuition in that regard. And what you can and can't do with today's technology might be overcome by a major breakthrough tomorrow.
I would suggest you go ahead and document your concept from beginning to end anyway, without thought to what you might and might not be able to do. That way the vision in your head is clear and you'll find it much easier to answer questions about it. Then talk to some people, whose technical expertise you trust, about your ideas to see how feasible they are.
Don't expect that the first draft of your design document will be the final one. Expect that you'll have to revise it several times before it's in a state where it can be turned into a real game.
By far - the biggest limitation of programming is in getting hazy requirements.
Basically, you need to explain what you want. If you say "auto-following camera", you need to explain how you want it to work if there's an enemy behind and one ahead. What if the camera's behind a tree? What if it's inside a secret room behind a wall?
Think about what you want in detail and your team can tell you which bits aren't possible. Think in hazy terms and your team may make something that's either not very good (but easy to write) or not what you want (and a waste of time).
Leto and Mcdrewski:
Thanks for your replys. that helps a good deal. so better to drill forward in the design (and to explain in detail) and to let you programmers tell me what i can and can't do rather then myself telling me so.
One last thing. Your a programmer right and a designer is talking to you, trying to communicate ideas. If given the chance. would you like the designer to have knowledge of all the programming terms and jargon. And if so. Be pacific. (like C++ or visual basic )
I ask these questions becouse i want to be able to communicate best with programmers and basicly all people in a stuido team. i beleave correct and clear communication is the vital part of any project or business.
I'd prefer that a designer could program a little bit, or had programmed a little bit, but if not I wouldn't expect them to use the terminology appropriately. ie: If you can't program, don't try to "talk the talk" or it'll probably just confuse things.
Secondly - and this is overly picky on my part, so please no offense meant - if you're looking for excellent communication, then spend a bit of time on your English skills. Your QA department and your publishers will love you for it. Learn which "your/you're" to use. Learn that the word 'Pacific' refers to an ocean, whereas 'specific' means to be precise. Spell 'replies', 'because', 'believe', 'basically'...
Disclaimer : I'm a programmer, but I'm working in QA at the moment.
For the most part I agree with what the others have said above me. You can't get to know the other side of the fence that quickly, however using simple languages, python, lua, ruby (I'd recommend it) would give you a neat little foundation in programming. Plus if need be you could tell your programmers to add one of those scripting engines to the game and do some of the work yourself.
As far as books go I would suggest reading the free Ruby book online (you'll find links to it), and reading through a copy of "Game Architecture and Design" wouldn't hurt. For an artist I would try to stay away from the C++ books, because you'll either read a very basic book designed for absolute beginners and then you'll sound like one, or you'll be bored stiff with one of the more advanced books.
Also take a look at BlitzBasic (BlitzMax); a few people have recommended it as a great prototyping tool, and since it's BASIC, an artist or designer should be able to pick it up and create simple prototypes with ease.
quote:Originally posted by mcdrewski
Secondly - and this is overly picky on my part, so please no offense meant - if you're looking for excellent communication, then spend a bit of time on your English skills. Your QA department and your publishers will love you for it. Learn which "your/you're" to use. Learn that the word 'Pacific' refers to an ocean, whereas 'specific' means to be precise. Spell 'replies', 'because', 'believe', 'basically'...
Disclaimer : I'm a programmer, but I'm working in QA at the moment.
It's not overly picky really. And it's quite true my grammar and spelling are indeed shitty. Now, like the rest of the world, you would say, "Then just fix it and get better at it"... And I really wish it were that simple.
You know how there's always one thing in your life that, no matter how much you try at it, you can only improve at a very slow rate, if at all. For me it's grammar and spelling, which is quite ironic because dozens of people have told me that I'm quite a good story/screenplay writer. And without trying to boast, I myself know that I am.
It's just one of those things. I guarantee you it's not due to laziness. I look through my posts twice over before posting and still people can find mistakes.
So yep. It is my undoing, so to speak. Half the reason why I draw: so I can express my ideas in other forms.
If I didn't have that, I wouldn't expect anyone to hire me.
[img]icon_paperclip.gif[/img] Download Attachment: [url="http://www.sumea.com.au/forum/attached/caroo/200575212051_artefact.jpg"]artefact.jpg[/url]
118.23 KB
[img]icon_paperclip.gif[/img] Download Attachment: [url="http://www.sumea.com.au/forum/attached/caroo/200575204220_thebigboss.jpg"]thebigboss.jpg[/url]
124.11 KB
That's my drawing of the final boss that you have to fight in my concept. It's all in the game design document I'm writing. I'm so far up to 40 pages.
Just a note - spelling can be fixed by spell check. There is little reason for written documents or communication to be incorrect.
Just to add to what Leto said about programming - I have heard people say that as a designer you are not supposed to worry about the limitations of the teams around you. You are supposed to worry about the limitations of your own imagination. If you can dream something up, more than likely an artist or programmer can find a way to make it work.
What Jacana says about not thinking about the limitations of the team around you is bang on. It's exactly the way I would encourage a gun designer to approach things.
Leave the programming and art to the artists and coders, although some basic understanding would help in expressing your idea - but only as far as being able to present the problem clearly through meetings and discussion and via your documentation.
A rule would be not to impose formulas or methods of working out a problem. You supply the vision; let the professionals in each respective area provide the feedback on whether or not it's feasible. It's really all about having the ability to rely on and use the power of others in the best way you can.
HararD, Jacana: It's more so he'd be able to better express his ideas and concept to programmers than to understand what limitations are present in his designs. If the programmers can understand him better because he can speak their language then they are more likely to implement what he was envisaging initially.
Daemin: Bingo mate. Couldn't say it better myself.
Programmers have spent the time to sit down and learn these complex programs. They have my respect and I trust in their ability.
Artists have slaved away at their work until they finally reached that ever-intangible industry standard. They also have my respect.
I don't want to know these things to limit myself. I want to know these things to better communicate and become more versatile to the studio, therefore pulling my fair weight and being the best I can be.
With the want to become a creative and respected designer, you kinda want that skill.
Compilation problems
Many DirectX projects bring up errors when I try and compile them; the errors seem to be of this form or similar:
"c:AdvAniBookCodeBookCodeCommonDirect3D.cpp(314) : error C2664: 'D3DXLoadSkinMeshFromXof' : cannot convert parameter 1 from 'IDirectXFileData *' to 'LPD3DXFILEDATA'
Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast"
i.e. for some reason the compiler recognises these pointers as being of different types.
I'm using Windows XP, DirectX 9 and MS Visual Studio .NET 2003. These errors did not occur when I was using MS Visual C++ v6.
I'd be grateful for any input on how to fix this.
WiffleCube.
Thanks. That type's deprecated already? Ho hum. Another thing that appeared during compilation is that MSVS .NET throws an error if you use aliases for classes, e.g.
typedef class2 class1;
If you pass an object of type class2 to a function expecting an object of type class1 it throws an error. Overly strict type-checking perhaps.
...well yes strict, but no, not overly strict.
Those two classes ARE different in every way that matters to the compiler. Only in a human programmer's mind could they be considered the same thing, and we all know that computers don't do well when they have to do what you mean, not what you said. :)
I struggle to think of a time in which aliasing in this way would be very good programming practice - could you elaborate on why you need to do it?
quote:
I struggle to think of a time in which aliasing in this way would be very good programming practice - could you elaborate on why you need to do it?
For example, in the above, where you're using DirectX library functions, and it generates an error because you use LPD3DXFILEDATA as a parameter instead of IDirectXFileData * and vice versa. MSVC++ 6 recognises these as being the same, whereas MSVS .NET throws an error.
Yes, there are two distinct objects in question:
typedef IDirectXFileData *LPDIRECTXFILEDATA; (deprecated)
typedef ID3DXFileData *LPD3DXFILEDATA;
D3DXLoadSkinMeshFromXof() takes a LPD3DXFILEDATA, which is a typedef of ID3DXFileData *, not IDirectXFileData *. I assume that the function took a LPDIRECTXFILEDATA before IDirectXFileData was deprecated.
I'm at a loss to explain why MSVC6 allows casting between these objects. It's not something you want to do (mainly for mcdrewski's reason). I'd say the best thing to do would be to declare the object as a LPD3DXFILEDATA in the first place and rewrite the functionality to conform with the new interface. I'm not sure how heavily it's used in your code but I imagine they're more or less functionally equivalent. Check the DX9 docs to see what's been replaced with what (The documenation should still reference both interfaces).
Oh man... M$ DX9/VS6 blues
Just ported a 3yr old DX8/VC6 demo to DX9.0c/VC6, which should have just been a recompile...
ok, so after rewriting chunks of code because of deprecated APIs, I get this strange link error: undefined symbol "security_cookie" in dxerr9.dll. Many net postings about this boil down to the fact that M$ don't support VC6 with DX9 after October 2004, grrr.
Anyway, I just removed the DXTrace() calls, which were the only reason for linking with DXerr9.dll, and it builds and runs fine now on DX9.0c [the Feb 2005 SDK update].
This is all within the bounds of simple M$ silliness, but I had a similar type of problem when the DX8/VS5 combo was unworkable because the exe format had been upgraded. Which meant I was forced to upgrade to VC6.
They just seem a bit arrogant... you can never get a straight answer or a simple DLL to fix the problem.
On the plus side, I cut out a lot of code by using DrawIndexedPrimitiveUP().
What's the consensus on the current best target DX version? Is DX8 well supported by most PC gamers?
Do you realise that vc6 is something like 8 years old, and is nowhere near as conformant as vc7.x? If you want to use vc6, then stay with the older SDKs.
And why would you think a major version upgrade is just a recompile...?
If you want to support the largest amount of hardware, I think targeting DX7 is a good idea. DX9 should work on DX8-class hardware so I don't see any point in targeting just DX8.
[?][:0][:(!][V][:(]