A simple and quick recipe I rely on 🍰
[C]ONTROLLER - this is the layer that handles (frame-by-frame) decisions. I usually implement it with a behavior tree, so I end up calling it "the behavior tree" or "the BT".
For class names? Well, depending on the case it could be:
Controller - not bad, except it will confuse everyone else.
Planner - a good choice for smarter AIs; planners aren't stateless, but it's well understood when you write one (path-finding, GOAP, ...) that plans need to be dropped and re-evaluated often.
↗️ Stateless. Flags create moving parts and make debugging harder.
⚠️ Keep it concise, clear and speedy; if two agents do fairly different things, provide separate controllers. Rely on memory for state, and on apperception for fast processing and "getting answers" (also: your agents should have a clean API for doing things; if that's not the case, write one first).
⚠️ There are any number of AI paradigms/libraries/solutions out there, many of which are not stateless: many BT implementations aren't, and... state machines? Right, not stateless.
✅ You got it right if your designer can read the source file.
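To make the stateless-controller idea concrete, here is a minimal sketch (all names are illustrative, not from the article): the controller is just a function called every frame; it reads predigested answers from apperception and control state from memory, and keeps nothing of its own between ticks.

```python
# Sketch of a stateless controller: one decision function, called every
# frame. All persistent state lives in `memory`; all world knowledge
# arrives predigested through `ap`. Names are hypothetical.

def decide(ap, memory):
    """One frame of decision logic; returns an action name."""
    if ap["enemy_near"]:                    # predigested boolean, not meters
        memory["last_enemy_pos"] = ap["enemy_pos"]
        return "attack"
    if "last_enemy_pos" in memory:          # continuity comes from memory, not flags
        return "investigate"
    return "patrol"

# Usage: the same function every frame, no hidden state in between.
memory = {}
print(decide({"enemy_near": True, "enemy_pos": (3, 4)}, memory))   # attack
print(decide({"enemy_near": False, "enemy_pos": None}, memory))    # investigate
```

Because `decide` holds no state, you can unit-test it (and hot-swap it) by just feeding it different `ap`/`memory` snapshots.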
[AP]PERCEPTION - feeds predigested perception and self-perception data into the controller. If the sensors are simple I implement them here. If apperception is costly, run it at a lower frame rate (down to 10 fps or less); frame-rate differentials may increase code complexity, but that's not the case here, so this is low-hanging fruit for optimization.
I just name it Apperception.
↗️ "Predigested" is the buzzword: the API exposed to controllers is dead simple and gives the answers the decision logic is asking for. An example? Your controller wants to know if the enemy is near, not how many meters away it is. Five meters, by itself, isn't near or far.
⚠️ Multiple AIs with differentiated behaviors: perception/sensor classes are reusable, apperception models not so much.
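A tiny sketch of what "predigested" can look like in code (the class, method names and the `NEAR_RADIUS` threshold are all assumptions for illustration): raw sensor data goes in, and the controller only ever sees the answer.

```python
import math

NEAR_RADIUS = 5.0  # assumed tuning value: "near" is defined here, not in the BT

class Apperception:
    """Turns raw sensor data into the simple answers controllers ask for."""

    def __init__(self, agent_pos, enemy_pos):
        self.agent_pos = agent_pos
        self.enemy_pos = enemy_pos

    def enemy_distance(self):
        ax, ay = self.agent_pos
        ex, ey = self.enemy_pos
        return math.hypot(ex - ax, ey - ay)

    def enemy_is_near(self):
        # The controller never sees meters, only the verdict.
        return self.enemy_distance() < NEAR_RADIUS

ap = Apperception((0.0, 0.0), (2.0, 2.0))
print(ap.enemy_is_near())  # True
```

Moving the threshold into apperception also means one tuning point per behavior model, instead of magic numbers scattered across the decision logic.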
[M]EMORY - keeping control state outside the controller is great, but where does it go? And do we really need all that control state (puny flags, cooldowns, timers, action queues, coroutines and event management)?
Even with simple AIs an explicit memory model can be very helpful, roughly matching short-term memory and easily implemented with (just a few!) key-value pairs.
Surely, with AIs that actually remember stuff, memory will break down into short-term, mid-term, semantic/scalar knowledge and so forth, but let's focus on the STM today.
↗️ Small! The primary purpose of STM is to ensure behavioral continuity. This has to do with solving technical problems, but continuity, mostly, is a by-product of "the environment doesn't change too fast". Here the STM concept is not an analogy: it's a memory structure used to direct your AI "right now", so if you cannot count the memorized items on the fingers of two hands (or display them in a tiny window at runtime) you're storing too much.
⚠️ Forgetting (aka clearing old state) keeps your AI up to date. Bots hold onto stale state and develop lame bugs, whereas animals and humans just forget stuff.
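The key-value STM with forgetting can be sketched in a few lines (class name, TTL and item budget are my assumptions, not the article's): entries are time-stamped, recalling a stale entry clears it, and the store is capped so it stays "countable on two hands".

```python
class ShortTermMemory:
    """Tiny key-value STM; entries expire so the AI forgets stale state."""

    def __init__(self, ttl=5.0, max_items=10):
        self.ttl = ttl                # seconds before an entry is forgotten
        self.max_items = max_items    # keep the store small and inspectable
        self._items = {}              # key -> (value, timestamp)

    def remember(self, key, value, now):
        self._items[key] = (value, now)
        if len(self._items) > self.max_items:
            # over budget: drop the oldest memory
            oldest = min(self._items, key=lambda k: self._items[k][1])
            del self._items[oldest]

    def recall(self, key, now):
        item = self._items.get(key)
        if item is None:
            return None
        value, stamp = item
        if now - stamp > self.ttl:    # forgotten: stale state clears itself
            del self._items[key]
            return None
        return value

stm = ShortTermMemory(ttl=5.0)
stm.remember("last_enemy_pos", (3, 4), now=0.0)
print(stm.recall("last_enemy_pos", now=2.0))   # (3, 4)
print(stm.recall("last_enemy_pos", now=10.0))  # None
```

Passing `now` explicitly (a frame counter works just as well as seconds) keeps the class deterministic and trivial to test.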
If the above is not too complex, each component maps to a class. Otherwise I break my AIs into roles (orthogonal to the above, so each role has its own C, Ap, M components) or break down responsibilities within each module.
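The "roles" breakdown might be sketched like this (the `Role` class and all field names are hypothetical): each role bundles its own controller, apperception and memory, and the agent just ticks whichever role is active.

```python
class Role:
    """One role = its own C, Ap and M, bundled together."""

    def __init__(self, name, controller, apperception, memory):
        self.name = name
        self.controller = controller      # C: the decision function
        self.apperception = apperception  # Ap: predigested answers
        self.memory = memory              # M: short-term control state

    def tick(self):
        # One frame: the role's own controller reads its own Ap and M.
        return self.controller(self.apperception, self.memory)

guard = Role(
    "guard",
    controller=lambda ap, mem: "attack" if ap["enemy_near"] else "patrol",
    apperception={"enemy_near": False},
    memory={},
)
print(guard.tick())  # patrol
```

Because roles are orthogonal, switching an agent from "guard" to "merchant" is just a matter of ticking a different Role instance; nothing leaks between them.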
Photo - Maximalfocus on Unsplash