Dave Mark’s post on Why Not More Simulation in Game AI? and the comments it elicited are right on the money about the correlation between lifespan and intelligence of supposedly intelligent adversaries in first-person shooter (FPS) games. It is extremely refreshing to hear advanced gamers agreeing that more intelligent, longer-lived characters would keep a game more interesting and engaging than current FPS titles do. This is exactly consistent with my experience at one of my employers, which delivers intelligent agents for the military. The military calls them “computer generated forces” (CGFs). The idea is that these things need to be smart and human enough to constitute a meaningful adversary for training purposes (i.e., “serious games”). Our agents fly fixed-wing and rotary-wing aircraft or animate special operations forces (SOFs) on the ground. (They even talk with humans over the radio. I love that part. It makes them seem so human.)
Of course, they get to demonstrate this intelligence only because they are not shot up or blown to bits within seconds!
Here’s where AI in games is headed (from the perspective of a behavior leader in serious games):
- Real AI is being embedded in game middleware to animate non-player characters (NPCs) with much more human-like fidelity.
- Human performance modeling, including emotion, is being enhanced within the cognitive models of CGFs and will show up next in NPCs.
- Crowd animation is becoming much more sophisticated and moving to center stage in role-playing games (RPGs).
The first step involves embedding a rule-based or similar cognitive architecture in game AI middleware. These days, game AI middleware is pretty much about path planning and finite-state machine (FSM) modeling of agent behavior. That is, it is not knowledge based, and it has all the limitations on context sensitivity and complexity that you would expect if you were hard-coding in C++. (Java is out since the middleware has to support the Xbox and the PlayStation, as well as Microsoft Windows and Linux, in some cases.) The FSM approach has been played out to its limit with hierarchical task networks (HTNs).
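To make that limitation concrete, here is a minimal sketch of the hard-coded FSM style I am describing. The states, events, and transitions are my own illustration, not taken from any particular middleware product:

```cpp
#include <iostream>

// A toy hard-coded FSM of the kind game AI middleware typically offers.
// States, events, and transitions are illustrative only.
enum class State { Patrol, Attack, Flee };
enum class Event { EnemySeen, EnemyLost, HealthLow };

State transition(State s, Event e) {
    switch (s) {
    case State::Patrol:
        if (e == Event::EnemySeen) return State::Attack;
        break;
    case State::Attack:
        if (e == Event::HealthLow) return State::Flee;
        if (e == Event::EnemyLost) return State::Patrol;
        break;
    case State::Flee:
        if (e == Event::EnemyLost) return State::Patrol;
        break;
    }
    return s; // no transition fires; stay in the current state
}

int main() {
    State s = State::Patrol;
    s = transition(s, Event::EnemySeen); // Patrol -> Attack
    s = transition(s, Event::HealthLow); // Attack -> Flee
    std::cout << static_cast<int>(s) << '\n'; // prints 2 (Flee)
}
```

Every consideration the designer did not anticipate is simply invisible to this NPC, and each new piece of context multiplies the state/event combinations that have to be wired by hand. HTNs organize that wiring hierarchically, but they do not escape it.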
For a look at some of this software, see:
- AI-Implant from Presagis
- Gamebryo from Emergent Game Technologies
- Kynapse from Kynogon (acquired by Autodesk)
- Jupiter from Touchdown Entertainment
- DI-Guy from Boston Dynamics
From our standpoint, Presagis is particularly well positioned to transition from serious games to entertainment. We also think their simulation, 3D terrain, and flight dynamics capabilities are by far the best for our autonomous pilots in our military markets. (We might have to plug in to Microsoft Flight Simulator in the consumer market, though!)
Autonomy in gaming and robotics
Parenthetically, our agents can also fly unmanned aerial vehicles (UAVs), but for the most part those are teleoperated rather than autonomous. Boston Dynamics’ BigDog is one example of an autonomous unmanned ground vehicle (UGV) that will have our kind of cognition on board in the surprisingly near future. Boston Dynamics apparently sees the same synergy in autonomy across gaming and robotics that we do. They have also teamed up with Presagis, using DI-Guy with AI-Implant.
If you are interested in this kind of synergy, you might also want to take a look at the following robotics middleware:
- URBI from Gostai
- Microsoft Robotics Studio
If you take a close look at the “behavior modeling” in these and the game AI packages above, you will see many conceptual similarities.
Short-term AI for NPCs
Bringing rule-based cognitive capabilities to NPCs by taking our AI technology and replacing or augmenting the FSM/HTN approaches of existing middleware is the critical step forward for AI in games. This means C++ in order to support the Xbox and the PlayStation, if not Microsoft Windows and Linux, too. Our technology tends to be a little too heavy for the massive crowds that Kynogon addresses (check out these videos). Our technology can handle moderate crowds, but it is really targeted at teams of agents at high fidelity. And our technology is perfect for high-fidelity human animation. Check out this video (and others) from DI-Guy, for example.
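To give a feel for what replacing or augmenting the FSM means, here is a toy forward-chaining rule layer (my illustration, not our shipping engine). Facts in working memory describe the NPC’s situation, and rules fire on whatever combination of facts is present, so new context is handled by writing another rule rather than rewiring a state machine:

```cpp
#include <functional>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Working memory is just a set of facts describing the NPC's situation.
using WorkingMemory = std::set<std::string>;

struct Rule {
    std::string name;
    std::function<bool(const WorkingMemory&)> matches;
    std::function<void(WorkingMemory&)> fire;
};

// The recognize-act cycle: fire matching rules until quiescence.
void runCycle(WorkingMemory& wm, const std::vector<Rule>& rules) {
    bool fired = true;
    while (fired) {
        fired = false;
        for (const auto& r : rules) {
            if (r.matches(wm)) {
                std::cout << "firing: " << r.name << '\n';
                r.fire(wm);
                fired = true;
                break; // naive conflict resolution: first match wins
            }
        }
    }
}

int main() {
    WorkingMemory wm = {"enemy-visible", "ally-down", "ammo-low"};
    std::vector<Rule> rules = {
        {"cover-ally", // context an FSM would need a whole new state for
         [](const WorkingMemory& w) { return w.count("ally-down") && w.count("enemy-visible"); },
         [](WorkingMemory& w) { w.erase("ally-down"); w.insert("goal:suppressive-fire"); }},
        {"conserve-ammo",
         [](const WorkingMemory& w) { return w.count("ammo-low") && w.count("goal:suppressive-fire"); },
         [](WorkingMemory& w) { w.erase("goal:suppressive-fire"); w.insert("goal:fall-back"); }},
    };
    runCycle(wm, rules);
}
```

A production engine adds efficient pattern matching (e.g., Rete) and smarter conflict resolution, but this recognize-act cycle is the heart of it.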
Aside from the post I referenced above, these audio/video clips can give you some insight into how committed game developers are to AI and how hard they are working to push the limits of their current approaches:
- A podcast by a developer of the AI in Halo 3. (I too like the NPCs fighting to drive the Humvee.)
- A video with the developer of the AI in Assassin’s Creed.
The Assassin’s Creed scenario is close to ideal for today’s cognitive AI. You have a small crowd of long-lived NPCs where the quality of the game emerges from the reality and complexity of interactions with them. It has a long way to go, but this game clearly broke ground.
The basic approach is to take the lessons from military CGFs, especially concerning cognitive and human performance modeling, including emotion, and raise the level of everything from path planning to behavior up to the knowledge level. If you take a close look at some of the game AI middleware, you will see that NPCs actually throw out rays to “see”, as in the sketch below. In a few years (a game cycle or two), NPCs will just drop into games and perceive their environment with the same vision that camera- and laser-equipped robots use. And they will figure out how to get from point A to point B the same way you and I do.
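Here is roughly what that ray-based “seeing” looks like in code. The raycast callback stands in for whatever line-of-sight query a given engine actually exposes; the names and numbers are illustrative, not from any particular product:

```cpp
#include <cmath>
#include <functional>
#include <iostream>
#include <optional>
#include <vector>

struct Vec2 { float x, y; };
struct Hit  { Vec2 point; int entityId; };

// Stand-in for the engine's line-of-sight query: first thing a ray hits.
using Raycast = std::function<std::optional<Hit>(Vec2 origin, Vec2 dir, float maxDist)>;

// Sweep a fan of rays across the NPC's field of view, recording hits.
std::vector<Hit> sweepVision(Vec2 origin, float facing, float fovRadians,
                             int rayCount, float range, const Raycast& cast) {
    std::vector<Hit> seen;
    for (int i = 0; i < rayCount; ++i) {
        // Spread rays evenly across the field of view.
        float t = rayCount > 1 ? static_cast<float>(i) / (rayCount - 1) : 0.5f;
        float angle = facing - fovRadians / 2 + t * fovRadians;
        Vec2 dir{std::cos(angle), std::sin(angle)};
        if (auto hit = cast(origin, dir, range))
            seen.push_back(*hit); // first obstruction along this ray
    }
    return seen;                  // everything the NPC can "see" this tick
}

int main() {
    // Stub raycast: pretend a wall (entity 42) sits straight ahead.
    Raycast stub = [](Vec2 o, Vec2 d, float) -> std::optional<Hit> {
        if (d.x > 0.9f) return Hit{{o.x + 5, o.y}, 42};
        return std::nullopt;
    };
    auto seen = sweepVision({0, 0}, /*facing=*/0, /*fov=*/3.14159f / 2, 9, 20.0f, stub);
    std::cout << seen.size() << " rays hit something\n";
}
```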
Going into all the details of how this is unfolding is too much for me at the moment, but here are a few more references on where the military folks are coming from:
- Stottler Henke tried (apparently unsuccessfully) with the FSM/HTN approach in SimBionic.
- Chi Systems tried (apparently unsuccessfully) with a proprietary cognitive architecture in iGEN.
- Micro Analysis & Design remains strictly military CGF emphasizing stress/fatique/hunger/training more than cognition or emotion.
- Soar Technology also remains strictly military CGF with the strongest emphasis on cognition and some emotional modeling.
Note that each of these companies does more than “intelligent” agent work; I am only commenting on those aspects here.
Realistic Avatars
As more human behavior (including performance and emotion) is modeled, facial animation, including speech, will become key. I enjoy CrazyTalk from Reallusion, but Image Metrics seems to be the facial animation leader for current games. Also, check out Volker Blanz.
If you find this interesting, nothing that I have seen compares to the work of Stephen Stahlberg. It will take a lot of horsepower before the agents that we interact with are as personified as the video behind the face shown above, but it will happen.
Intelligent Agents
As compelling as synthetic humans may become, their ability to engage us and be of use to us will require more than complex behavior; it will require learning and problem solving that current rule technology does not address. Some of the cognitive architectures, such as Soar and ACT-R, provide these capabilities, the former on a rule-based platform. Soar is an interesting technology with a good rule engine. It may not be the exact approach to embed in AI middleware, but it is certainly a guidepost on the way towards intelligent autonomous agents, whether they be on screen or robots.
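To show the flavor of learning I mean, here is a toy version of the “chunking” idea that sets an architecture like Soar apart from a plain rule engine (a caricature of mine, not Soar itself): when no stored rule covers a situation, the agent falls back to deliberate problem solving and then caches the result as a new rule, so the deliberation never has to happen again.

```cpp
#include <iostream>
#include <map>
#include <string>

struct Agent {
    std::map<std::string, std::string> rules; // situation -> action, learned over time

    // Stand-in for expensive search/lookahead; hard-coded for the sketch.
    std::string deliberate(const std::string& situation) {
        std::cout << "impasse: reasoning about '" << situation << "'\n";
        return situation == "door-locked" ? "find-key" : "explore";
    }

    std::string decide(const std::string& situation) {
        auto it = rules.find(situation);
        if (it != rules.end()) return it->second; // a learned rule fires directly
        std::string action = deliberate(situation);
        rules[situation] = action;                // "chunk" the result as a new rule
        return action;
    }
};

int main() {
    Agent a;
    a.decide("door-locked"); // slow path: deliberation, then learning
    a.decide("door-locked"); // fast path: the learned rule fires silently
}
```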