I have been doing a decent bit of research on AI. I am very fascinated by this area and think I would like to develop my skills here.
Just wondering if there's anyone lurking around here with some AI experience who would like to answer some questions, or who knows someone working with AI that they could put me in contact with.
As I have found through my reading, I have so many questions to ask! I realise there are some great AI forums around the net, but I really like the idea of talking with someone locally...
Pathfinding is cool and easy to explain, and a simple AI like a basic chess AI isn't far off pathfinding; it uses the same basic idea.
You just have to sort the paths you've found, then look at the possible moves your enemy could make X turns ahead, and you've got yourself some cool AI.
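The "look X turns ahead" idea above is essentially minimax search. A minimal sketch of it, using single-pile Nim instead of chess purely to keep the game rules tiny (the game choice and function names here are my own illustration, not from the post):

```python
# Minimax sketch: score a position by searching a fixed number of
# plies ahead, assuming both players play their best move.
# Toy game: one pile of stones, each player removes 1-3 stones,
# whoever takes the last stone wins.

def moves(stones):
    """All legal moves (number of stones to take) from this state."""
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, depth, maximizing):
    """+1 = win for the maximizing player, -1 = loss, 0 = horizon reached."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # search horizon: call it even
    scores = [minimax(stones - m, depth - 1, not maximizing)
              for m in moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth=6):
    """Pick the move whose resulting position scores best for us."""
    return max(moves(stones),
               key=lambda m: minimax(stones - m, depth - 1, False))
```

From 5 stones the right move is to take 1 (leaving the opponent on a losing 4), and the same lookahead finds it automatically; a real chess AI swaps in board states, legal-move generation, and a heuristic evaluation at the horizon instead of 0.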
All that advanced AI, like neural networks and the rest, don't ask me.
I sent off an email to BioWare and DICE to see what advice they might be able to give, as well as ideas on what they would want to see in AI demos. I got an email back from Ray saying he'd forwarded my email to one of the senior BioWare programmers, who has a PhD in AI.
I will post what I find out here :)
Ok here's the full email I received from BioWare about what to do for an AI Demo:
An AI demo is a somewhat tricky proposition. Personally, I'd say artificial intelligence has, at its basis, a lack of artificial stupidity. Depending on what you code in the rest of the demo, it can be complicated or rather simplistic. Either the demo is complete (fun and playable), or the demo should be able to illustrate excellence in the field of artificial intelligence.
To elaborate, if you are considering modifying a game such as Unreal Tournament, the AI would have to be demonstrably better than the packaged ones that come with the game. If you are implementing a simple X and O type pattern to illustrate a squad's tactics over a specific terrain, people would expect more from the AI than they would if you had some 2000-polygon characters running around and making decisions, complete with your own sound effects and flare effects.
What sort of design approach to AI do you want to illustrate: top-down design (i.e. with a scripting language and a ruleset/environment that leads to complicated decision making) or bottom-up design (taking fundamentally sane principles and generating complicated decision making through planning/optimization of states)? Top-down design is probably the better approach for a demo ... being able to demonstrate a framework (of your own design) that allows designers to input their rules interests more than just the programmers in a company. Take Lilac Soul's NWScript Generator, for example. It's great at what it does ... avoiding the tedium of writing scripts. However, are there better approaches for designing GOOD AIs? How much control/assistance do you expose to the designer?

Give us a toolkit with an artificial character in an environment, and allow us to tweak the AI so that the character dies in elaborate and interesting ways ... eventually learning what to do to get out of the room. If you're going to use learning techniques, illustrate how you can show other people what the AI is actually learning. An AI that uses neural networks or genetic algorithms isn't that useful unless you can extract the relevant information from the genes or network.
Don't underestimate the fun factor in the demo. Every time the person laughs out loud at what they are seeing on the screen, or is reminded about one of their favourite games, is an interview that you've already landed.
Be prepared to submit full source code of your demo along with the executables, and include a README illustrating what the demo is supposed to show (and what you would do if you had another two months to work on it!)
I'm an odd case, because my background is in traditional AI "games" such as chess or checkers.
Take Minesweeper as an example, and the "hard" level, where the board is 30x16 squares with 99 bombs. Can you write an AI that, with the board generated after the first square is chosen (to guarantee that the first square is never a bomb), succeeds in solving the board more than 20% of the time? It's an interesting example of bottom-up design that might impress some people if it were a "visual" test. (For beginner level (8x8, 10 bombs) you should be able to get a 75% success rate without too much effort, and intermediate level (16x16, 40 bombs) isn't much harder, at about a 70% success rate.) If you're allowed to make at most ONE mistake, how high can you push the success rates? Can you get the success rate over 50% on "expert" mode if you're allowed to land on at most one bomb? (Most of the time, you find the bomb on the second move of the game ... ;-] )
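For anyone wondering where a bottom-up Minesweeper solver like that starts, the base layer is just two local deductions applied over and over. A minimal sketch (the grid representation and function names are my own, not from the email; a real solver adds constraint propagation and probability estimates on top of this):

```python
# The two basic Minesweeper deductions:
#   1. If a revealed number equals its count of flagged neighbours,
#      every remaining hidden neighbour is safe.
#   2. If flags + hidden neighbours equals the number, every hidden
#      neighbour must be a bomb.
# Cell values: an int 0-8 if revealed, 'H' if hidden, 'F' if flagged.

def neighbours(r, c, rows, cols):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

def deduce(grid):
    """Return (safe, bombs): hidden cells provably safe / provably bombs."""
    rows, cols = len(grid), len(grid[0])
    safe, bombs = set(), set()
    for r in range(rows):
        for c in range(cols):
            if not isinstance(grid[r][c], int):
                continue  # only revealed numbers give information
            hidden = [(nr, nc) for nr, nc in neighbours(r, c, rows, cols)
                      if grid[nr][nc] == 'H']
            flags = sum(1 for nr, nc in neighbours(r, c, rows, cols)
                        if grid[nr][nc] == 'F')
            if flags == grid[r][c]:
                safe.update(hidden)       # all bombs already accounted for
            elif flags + len(hidden) == grid[r][c]:
                bombs.update(hidden)      # every hidden neighbour is a bomb
    return safe, bombs
```

On a row like `[0, 1, 'H']` the `1` has exactly one hidden neighbour, so that cell is provably a bomb; the hard 30x16 board is about what you do when these rules alone run dry.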
That's a fairly poorly worded challenge there, kezza :P
By that logic, the second-shortest path WOULD be the path that just goes one extra tile and otherwise follows exactly the same route. It seems you want an entirely different path that is also short, but it is doubtful that such a completely alternate route would be the second-best possible path. :)
CYer, Blitz
Yeah, it's something a friend asked me... but it completely stumped me. I think one way to do it is to use a tree-based space representation for pathfinding and remove some of the upper-level nodes that represent the area near the first path.
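The "remove the area near the first path and search again" idea can be shown on a flat grid without the tree representation: find the shortest path, block its interior cells, and run the search a second time. A minimal sketch, with my own function names; note this gives a genuinely different route, not the true second-shortest path (for that you'd want something like Yen's k-shortest-paths algorithm):

```python
# BFS shortest path on a grid (0 = open, 1 = wall), then a second
# search with the first path's interior cells blocked, forcing a
# completely different route if one exists.

from collections import deque

def bfs_path(grid, start, goal, blocked=frozenset()):
    """Shortest path by BFS, ignoring cells in `blocked`; None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:       # walk the parent links back
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and (nr, nc) not in prev and (nr, nc) not in blocked):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

def alternate_path(grid, start, goal):
    """A short route sharing no interior cells with the shortest one."""
    first = bfs_path(grid, start, goal)
    if first is None:
        return None
    return bfs_path(grid, start, goal, blocked=frozenset(first[1:-1]))
```

Blocking only the interior (not the endpoints) keeps start and goal reachable; in a narrow corridor the second search simply returns None, which is itself useful information.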
However, it's pretty hard for an AI to appear intelligent if it is so predictable :)
... sorry about the wording
-- edit "just fixing some spelling :)"
I'm coding the A* algorithm for my game, and currently I'm looking at how to modify the cost function to take such factors into account, if we consider that the A* algorithm works around the basic formula: total = heuristic to goal + cost from origin.
If the heuristic cannot be modified, then we can only really modify the cost side of the function. In the simplest sense, the cost for a single node in the map could be either 1 or 0 (either passable or not). If the cost of each node were instead a float value, adjustable based on a number of parameters, A* would favour paths with lower traversal costs.

In the case of avoiding enemy units and specific areas, the nodes within the vicinity of those areas can have their traversal costs increased. The decay of cost as we move further from the zone can follow a number of functions, such as squared, logarithmic or constant. One should find that A* paths away from undesired areas, and avoids walking too close to enemy units if we want to avoid them!
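A minimal sketch of that cost-field idea, using squared decay near enemy positions (the function names, the `radius`/`weight` parameters, and the tuning values are my own assumptions, not from the post):

```python
# A* on a grid where each step costs 1.0 plus a penalty that decays
# with (squared) distance from enemy positions, so the search routes
# around dangerous areas instead of through them.

import heapq

def danger(cell, enemies, radius=3.0, weight=10.0):
    """Extra traversal cost near enemies, falling off with squared distance."""
    penalty = 0.0
    for er, ec in enemies:
        d = abs(cell[0] - er) + abs(cell[1] - ec)
        if d < radius:
            penalty += weight / (1.0 + d * d)
    return penalty

def astar(grid, start, goal, enemies=()):
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible, since every step costs at least 1.
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_set = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nr, nc in ((node[0] + 1, node[1]), (node[0] - 1, node[1]),
                       (node[0], node[1] + 1), (node[0], node[1] - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1.0 + danger((nr, nc), enemies)
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nr, nc)), ng, (nr, nc),
                                    path + [(nr, nc)]))
    return None
```

On an open 5x5 grid with an enemy sitting on the straight line between start and goal, the returned path detours around the penalty zone; with no enemies it degenerates to the plain shortest path, since every step cost is just 1.0 again.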
I'm speaking on Cheryl's Student Panel this year at the AGDC, so if my theory is fundamentally flawed, you can throw rotten vegetables at me then! From initial experiments, this kind of method seems to work, although I need to find out the exact limits of operation for the cost function.
cheers
Matt
I am currently doing AI at uni, and it is quite an interesting area (although at the moment compiler construction has got my main interest).
Couldn't you just post the questions on the forums here and let everyone chip in with an answer or two?