TIGSource Forums > Community > DevLogs > Project Rain World
Author Topic: Project Rain World  (Read 1446408 times)
jamesprimate
Level 10

wave emoji
« Reply #1880 on: July 29, 2014, 03:02:04 PM »

this looks like a really elegant solution! easy to visualize and seems like it has deep application.

JLJac
Level 10
« Reply #1881 on: July 30, 2014, 11:43:52 AM »

Update 279
Big progress today! Finally a lot of loose ends came together. Now we have abstracted creatures working like I intended them to. The best way to explain this is probably a list.

* When an abstract room is realized and there are creatures in it, the creatures will appear in the room in positions suitable to where they are headed. So say that a creature is moving from one exit to another in the room, and has been doing so for 40 ticks when you get in there and see it. A path will be calculated between the exits, and the creature placed at a corresponding distance along the path. This path will also be fed to the path finder, selecting all the cells so that the creature will follow the path immediately from frame 1 as the room is realized.

* If a creature is abstracted in a room, and doesn't leave it, its actual tile position will be saved and used as a starting point for a path as described above. This means that you can abstractize a creature halfway through following a path, and then realize it a little while later to see that it has traversed some of that distance.

* If a creature is not following a path, but is realized in a room, it will walk around randomly between tiles accessible to that creature for a corresponding number of repeats, using the creature's last known position as a starting point. This is meant to place the creature somewhere in an area connected to where it was last seen - further away the longer ago you saw it.

* If a creature is abstracted while outside of walkable terrain, it will be marked as stuck and unable to move in abstracted mode.

* On abstraction, creatures save a simple Manhattan distance to the closest exit, the one which will be their node in node space. The creature won't be able to move away from that node until it has been in it for a time that corresponds to that distance. This is so that a creature that was half a screen away from an exit can't pop through it the very next frame after abstraction just because the engine thought it was "in that node".

* Creatures can be assigned destinations in other rooms, and will move towards them. Every combination of destination in abstract/realized space and creature in abstract/realized space is handled, so a creature is able to pursue a path that goes through both realized and abstracted space. Theoretically it should even be able to traverse a room that's flickering between realized and abstract, though that's an extreme case that will most likely never occur in the game.
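The first few bullets can be sketched roughly like this (illustrative Python with hypothetical names like `realize_creature` - the actual game code differs): the creature is placed along a pre-computed exit-to-exit path at a distance proportional to how long it has been travelling in abstract space, and the remaining path is handed to the path finder so it moves correctly from frame 1.

```python
def realize_creature(path, ticks_travelled, ticks_per_tile=1):
    """Place an abstracted creature along its exit-to-exit path.

    path: list of (x, y) tiles from the entry exit to the destination exit.
    ticks_travelled: how long the creature has been moving in abstract space.
    Returns (position, remaining_path); the remaining path can pre-seed the
    path finder so the creature follows it immediately on realization.
    """
    tiles_covered = ticks_travelled // ticks_per_tile
    index = min(tiles_covered, len(path) - 1)  # clamp: can't overshoot the exit
    return path[index], path[index:]

# Example: a creature 40 ticks into a 100-tile path appears 40 tiles along it.
path = [(x, 0) for x in range(100)]
pos, remaining = realize_creature(path, ticks_travelled=40)
```

The same helper covers the "saved tile position" case: the saved position is simply the first tile of the remaining path.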

I have also been able to do just a tiny bit of performance testing, and it looks very promising. This is, for example, ~20 creatures passing through the room, and the framerate seems to handle it well.


(The gif though is captured at 10 fps, so you'll have to take my word for it  Tongue )

The whole solution is still throwing a lot of NullReferenceExceptions and the like, but the general structure seems to be working and I'm excited to squash the last bugs, pack up and move on. Maybe soon I can finally get to actual AI!
Fuzzy
Level 0
« Reply #1882 on: July 30, 2014, 05:07:07 PM »

Quote from: JLJac on July 30, 2014, 11:43:52 AM
Wow, congrats on making so much progress! Gentleman What do you think you will be working on after you finish the AI?
Savick
Guest
« Reply #1883 on: July 30, 2014, 05:20:25 PM »

Slug cat forever! Waaagh!
JLJac
Level 10
« Reply #1884 on: July 31, 2014, 01:29:23 AM »

Update 280
Hehe wow - when you've paved the road with robust solutions it's amazing how quickly everything comes together in the end. Now I have the path finding and creature abstraction done, it seems! It isn't throwing errors at me any more! And way too soon, because I haven't had any time to think about which item to move on to  Shocked

What I should do is adapt this whole path finding/abstract space system to accommodate creature dens as well, so that creatures will be able to go hibernate and the like. But frankly I'm a bit bored by this stuff now, and the dens can wait a little while.

The next thing that seems reasonable to get to is some kind of generic AI system. That would be the very basics of an AI, the core that's shared between all creatures. One thing I know this system should incorporate is the "brain ghost" system, where creatures upon seeing other creatures will create a symbol for that creature that can move around with slight autonomy even after the creature is no longer seen.

This is the core mechanic that makes it possible to trick the enemies in Rain World. If a lizard sees you, it will create a ghost of you, and as long as you're in its field of vision that ghost will always be fixed at your position. But the moment you get behind a corner, the ghost will start to move based on the lizard's assumptions of how you'd move (based on things such as your movement direction on last visual), and this is your chance. The lizard AI is only ever able to ask for the ghost's position, never the actual player's. This means that if you diverge from the path the lizard assumes you'd take, you have a chance to trick it.

This system is going to be common to all creatures, so that's something I could get at right away. The problem is that I have a slightly more complex environment this time compared to the Lingo build - for example, it's not possible to know how many creatures are going to be in the room and needing ghosts this time around, as creatures are allowed to move freely. Nothing that can't be helped by an hour or two in the thinking chair.
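The ghost mechanic described above could look something like this minimal sketch (hypothetical Python; class and parameter names are my own): while the prey is visible the ghost is pinned to its true position, and once sight is lost it drifts along the last observed velocity. The AI only ever queries the ghost.

```python
class Ghost:
    """A predator's belief about a prey's position (a "brain ghost").

    While the prey is visible, the ghost snaps to the real position;
    once sight is lost, it extrapolates along the last observed velocity.
    """
    def __init__(self, pos, vel=(0, 0)):
        self.pos = pos
        self.vel = vel

    def update(self, real_pos=None, real_vel=(0, 0)):
        if real_pos is not None:           # visual contact: pin to reality
            self.pos, self.vel = real_pos, real_vel
        else:                              # no contact: move on assumptions
            self.pos = (self.pos[0] + self.vel[0],
                        self.pos[1] + self.vel[1])

g = Ghost(pos=(0, 0))
g.update(real_pos=(5, 5), real_vel=(1, 0))  # seen, moving right
g.update()  # sight lost: ghost keeps drifting right, diverging from reality
```

Tricking the lizard then amounts to making sure your real path diverges from the ghost's extrapolated one.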



All of this is just "data collection" though, the system that will provide the AI with the information on which it'll base its decisions. The decision making process itself is an entirely different matter.

A Tree solution was posted a page or two back, and I really liked it. But I've also been thinking about a pushdown automaton/finite state machine kind of solution, where different modes are defined as behavior modules that can be stacked on top of each other. This would basically be a list of tasks, where the topmost item would be handled until it reached a success or failure, in which case the next one would take over. The difference from an ordinary to-do list is that the modules would be self-sufficient classes actually executing the behavior themselves, and also saving their state in case they got postponed, allowing them to pick up where they left off.
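That stack-of-tasks idea could be sketched like this (illustrative Python, hypothetical names; real behavior modules would do much more than count ticks): only the top task runs, and an interrupted task keeps its state so it resumes where it left off once the interrupting task is popped.

```python
class Task:
    """A self-sufficient behavior module that saves its own state."""
    def __init__(self, name, ticks_needed):
        self.name, self.progress, self.ticks_needed = name, 0, ticks_needed

    def tick(self):
        self.progress += 1
        return self.progress >= self.ticks_needed   # True = finished

class BehaviorStack:
    """Pushdown automaton: only the topmost task is executed each tick;
    postponed tasks below it resume with their state intact."""
    def __init__(self):
        self.stack = []

    def push(self, task):
        self.stack.append(task)

    def tick(self):
        if self.stack and self.stack[-1].tick():
            self.stack.pop()   # finished: the task below takes over

stack = BehaviorStack()
stack.push(Task("go to den", ticks_needed=5))
stack.tick(); stack.tick()                    # 2 ticks of progress
stack.push(Task("flee predator", ticks_needed=1))
stack.tick()                                  # flee completes and is popped
# "go to den" is on top again, with its 2 ticks of progress preserved
```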

In a way, you could say that the pushdown automaton is a subsystem of the Tree, as it could be created within the tree if you used a Sequence node and allowed it to dynamically change its "playlist" during play...

While I'm rambling, one thing that I don't quite like with the Tree is that it has the AI character actively try every solution before moving on to the next one. If there's a lot of walking involved, this could mean that the character could spend several minutes executing some chain of actions where it's very obvious that the last one won't carry through.


(Images nicked from here)

Say that the "open door" node returns a fail. The NPC will still have walked to the door, which might not be a big problem in a small world with fast-paced gameplay. But in a large world, Frodo and Sam might set out on their epic pilgrimage for three movies just to arrive at Mt. Doom and face the fact that they left their mountain climbing shoes back home, and that's pretty stupid. Potentially you could circumvent this problem by first running through the tree with an "assumption" cursor, which would make an educated guess on whether or not each action in the sequence will succeed, and only do the actual run-through if that's the case. This sort of defeats the purpose of the architecture though...

Another thing to take into consideration is that the tasks the RW AI will handle look very little like this:



Rain World contains extremely few puzzle-like situations where items are needed to traverse obstacles and the like. This layout is perfect for complex puzzles where several interactions each unlock the next until a final goal is reached.

The problems in Rain World are much more... fluid than this. Each creature is pretty much always free to move wherever it wants without having to collect any keys or hit any switches. The only things that can restrain movement are being speared to a wall or held by another creature, neither of which there is really anything to do about except wiggle and squirm.

The Tree solution seems ideal for overcoming geographical obstacles in order to obtain items that can help overcome further geographical obstacles. RW has very little of both these elements, and for NPCs, almost none.

So what does a Rain World creature need to think about? It needs to weigh many options, none of which are simply "possible" or "impossible". It needs to do this with limited information as well, being able to account for uncertainty. I imagine a typical Rain World lizard problem something like this:

"I currently see no slugcats. I have seen two on this level though (Ghosts are still in memory). One I saw 240 ticks ago in a position 30 tiles removed from me. The other one I saw 20 ticks ago just 10 tiles away, but it was holding a spear. Which one do I go after?"

Another might be:

"Room A has three edible creatures in it, but also a creature that considers me edible. Room B has just one edible creature in it, but my own skin would be safe. Which one do I go in?"

My immediate idea is to somehow construct "plans" for what to do, and for each plan calculate a "good idea" value based on the known information.

Quote
Plan: Follow slugcat A! [Slugcat deliciousness: +40pts] [Distance to target: 20 tiles -> -20pts] [Time since I saw this slugcat: Currently looking at it! -> +30pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts] Total points: 40pts

Plan: Follow slugcat B! [Slugcat deliciousness: +40pts] [Distance to target: 10 tiles -> -10pts] [Time since I saw this slugcat: 100 ticks -> -10pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts]  Total points: 10pts

Plan: Go home to den! [Time till rain: Starting to feel a little uneasy about the rain -> +10pts]  Total points: 10pts

Decision: Follow slugcat A!

As the rain got closer, the "go home" option would appear more and more attractive until it became the winner of the evaluation. The main problem I can see with a system like this is that it could lead to flickering back and forth between behaviors without actually carrying any of them through. This could perhaps be circumvented by having the evaluation be delayed for a little while after a decision has been made, but that in turn would make the creature look slow to react in some situations. Maybe the evaluation could be tied to some event...
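The point tally above, including one possible anti-flicker fix, can be sketched like this (illustrative Python; the point values come from the example above, while the `inertia` bonus for the currently running plan is one hypothetical way to stop the back-and-forth flickering):

```python
def score_plan(factors):
    # Sum the signed point contributions, as in the tally above.
    return sum(factors.values())

def choose_plan(plans, current=None, inertia=15):
    """Pick the highest-scoring plan. The currently active plan gets an
    inertia bonus, so a new plan must beat it by a margin before the
    creature switches behavior (damps flickering)."""
    def total(name):
        bonus = inertia if name == current else 0
        return score_plan(plans[name]) + bonus
    return max(plans, key=total)

plans = {
    "follow slugcat A": {"deliciousness": 40, "distance": -20,
                         "last seen": 30, "rain": -10},   # 40 pts
    "follow slugcat B": {"deliciousness": 40, "distance": -10,
                         "last seen": -10, "rain": -10},  # 10 pts
    "go home to den":   {"rain": 10},                     # 10 pts
}
decision = choose_plan(plans)   # "follow slugcat A" wins at 40 pts
```

As the rain factor grows, "go home to den" eventually overtakes the chase even through the inertia bonus, which gives exactly the gradual hand-over described above.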

What do you think? If you guys want to give me some reading on AI that would be much appreciated! (Looking at you, Gimym JIMBERT)
gimymblert
Level 10

The archivest master, leader of all documents
« Reply #1885 on: July 31, 2014, 05:14:16 AM »

If you are looking at weighting solutions, you might look at "utility AI":
http://gdcvault.com/play/1012410/Improving-AI-Decision-Modeling-Through

However, I want to discuss behavior trees a bit more. I'll come back once I've re-read some things and selected new sources for both behavior trees (BT) and utility.

For now I will just point out that generally a BT is advised for selecting "goals" (go through the door) rather than actions (move to the door, open the door, pass through, close the door), but it works with both, and it can be practical to go this low. Leaves can be as granular as you want, since the implementation is left to you.

Also, the example is likely to be a subtree - you never tick the whole tree, and you are likely to have branches that prune whole subtrees without evaluating them. This go-through-doors subtree will probably never be ticked in the general case, because a high-level decision (generally the top node) would never fire it.

But in industry practice we use the "AI sandwich", i.e. we mix and match different AI models. For example, a utility "tree" to weight "concepts" from sensory info, then use these concepts in the "decision sandwich" system: the high-level layer could be a state machine where each state is a BT, and BT nodes can still be implemented as other state machines or even planners.
« Last Edit: July 31, 2014, 05:40:58 AM by Gimym JIMBERT »

dancing_dead
Level 2
« Reply #1886 on: July 31, 2014, 05:54:39 AM »

Don't underestimate behavior trees. Like Jimbert said, they're flexible and can be as low- or high-level as you want. The very, very big plus of behavior trees is their modularity - once you create a couple of complete AI trees, you will find yourself with a very nice collection of leaf nodes and subtrees, and every subsequent AI will most of the time just be a re-arrangement of what you've already got. This makes AI development and debugging, after the basic framework is done, relatively fast and easy compared to most other solutions.

Typically, the tree is used for granular decision making, but you'll entrust things like threat evaluation, focus and attention to a dedicated subsystem, either completely independent from the tree itself or merely influenced by certain leaf nodes. This means the AI architecture looks somewhat like this: Sensors/Memory -> attention/threat selection -> Behavior Tree/Decision Making -> subsystems that carry out what the leaf nodes tell them to, such as navigation, combat, etc.

The plans you mentioned can thus be made into a special kind of selector node, which counts what the sensors have accumulated and then picks the plan with the appropriate value. You also don't have to worry about the tree going through all the actions every time - in a good framework this doesn't happen. Nodes can have several return states, including Success, Failure and Running; if a node returns Running, the tree will resume at that node the next time it gets updated.
The "obviously going to fail" chain of actions is also not a problem, since nothing stops you from putting a simple condition node at the start of a sequence that will knock out the sequence early instead of late. So if an "obviously going to fail" sequence still happens, blame the designer for not accounting for it, instead of the framework Tongue
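The Running return state works roughly like this minimal sketch (illustrative Python, not any particular BT framework's API): a sequence remembers which child returned Running and resumes there on the next tick instead of restarting from the first child.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    """Runs children in order. If a child returns RUNNING, the sequence
    remembers it and resumes at that child on the next tick."""
    def __init__(self, children):
        self.children, self.current = children, 0

    def tick(self):
        while self.current < len(self.children):
            status = self.children[self.current]()
            if status == RUNNING:
                return RUNNING          # resume here next tick
            if status == FAILURE:
                self.current = 0        # reset for the next attempt
                return FAILURE
            self.current += 1           # SUCCESS: advance to the next child
        self.current = 0
        return SUCCESS

# A "walk" leaf that needs two ticks before it succeeds.
steps = {"left": 2}
def walk():
    steps["left"] -= 1
    return SUCCESS if steps["left"] == 0 else RUNNING

seq = Sequence([walk, lambda: SUCCESS])
first = seq.tick()    # walk is still going: RUNNING
second = seq.tick()   # walk finishes, next leaf succeeds: SUCCESS
```

A condition leaf placed first in the children list gives exactly the early knock-out described above.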
Sebioff
Level 1
« Reply #1887 on: July 31, 2014, 07:40:45 AM »

While I can't tell if it works for your game, I just wanted to say that I've also had good experiences with a state machine/BT mix - state machines at the higher level for deciding which set of actions is important right now, then a BT for each state for actually executing those actions. Even if you don't want different sets of actions as in your second example, you'll likely want to share some code between your different behaviours, and BTs are really, really good at this. Wire up the nodes slightly differently and you've got a completely different AI for another creature without having to write a single additional line of code.

In your first "door open" example, if you want an omniscient AI you'd simply put a "check if door is unlocked" node at the front of the sequence so the entire thing fails if it's not. Then your AI could re-evaluate its options ("the BT for chasing prey failed because there's no path due to a locked door? Alright, switch to idle state instead and execute the BT for idling").
« Last Edit: July 31, 2014, 07:51:00 AM by Sebioff »

Current devlog: Parkitect, a theme park simulation, currently on Kickstarter | Twitter
Whiteclaws
Level 10

#include <funny.h>
« Reply #1888 on: July 31, 2014, 07:51:53 AM »

Well, first and foremost, you've got to consider what a lizard does in its day: it wakes up and goes to hunt; if there's no food and the rain is near, lizards will start hunting each other and become more aggressive; otherwise they come back to the den with food. With these goals you can make an ecosystem where a mother will feed her children, some of them will die, and others will become adults, and so on. Savagery can also depend on the type of lizard and how hungry it is. In a nutshell, you feed those goals to the smarter AI, which works out how to achieve them.

Your goal would be to survive as rain gets more and more frequent, food less frequent, and more lizards try to get food and not be killed - kind of like a simulation of survival of the fittest, as stronger lizards emerge and try to get food. Maybe a ""roguelike"".

Also, why are they scared of the rain?
« Last Edit: July 31, 2014, 08:17:16 AM by Whiteclaws »
spinaljack
Level 0
« Reply #1889 on: July 31, 2014, 08:05:05 AM »

This game looks amazing on the surface, and it looks like there are some pretty complex systems underlying the graphics, which is even more awesome.
Keep up the good work, can't wait to play it!

jamesprimate
Level 10

wave emoji
« Reply #1890 on: July 31, 2014, 09:15:09 AM »

Quote
Plan: Follow slugcat A! [Slugcat deliciousness: +40pts] [Distance to target: 20 tiles -> -20pts] [Time since I saw this slugcat: Currently looking at it! -> +30pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts] Total points: 40pts

Plan: Follow slugcat B! [Slugcat deliciousness: +40pts] [Distance to target: 10 tiles -> -10pts] [Time since I saw this slugcat: 100 ticks -> -10pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts]  Total points: 10pts

Plan: Go home to den! [Time till rain: Starting to feel a little uneasy about the rain -> +10pts]  Total points: 10pts

Decision: Follow slugcat A!

^^^^ I love this so much. THAT is how AI should work, imho.

From an outsider's perspective, it's hard not to think of bumbling FPS enemies when talking about Behavior Tree stuff, where you can almost hear the CLUNK as it changes gears, mechanically going through its list of activities. Not terribly life-like, and super easy to predict. But perhaps, as dancing_dead says, that could be the fault of the designer, not the framework.

saluk
Level 2
« Reply #1891 on: July 31, 2014, 10:20:34 AM »

I don't know, I think a real player would not know the door was locked, and would go try it out, hehe. The Fellowship set out for Mount Doom without knowing they would arrive safely Wink But their tree would probably have a step that continuously checks for gear they might need along the way - what if they took their shoes but lost them halfway through?

As mentioned though, you can get really creative with behavior trees, and they are surprisingly resilient to what obstacles come up. When you see the ai do something that looks dumb or that you want to weed out of the system, sometimes all it takes is one well placed node (of a type which you probably already have in your node library) and the issue magically goes away.

For instance, if you have a hunting scenario where the prey is faster than the predator and runs away when it sees them, you don't want the predator to just keep chasing the little guy forever. One timeout node that triggers a fail after a certain amount of time without success from its children, and that won't happen. You can tune the timeout variable based on what type of prey it is: some prey the predator will chase a bit longer because it knows they will tire; other prey it knows it can never reach, so the timeout is shorter.
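That timeout idea is a classic decorator node; a minimal sketch (illustrative Python, hypothetical names, not a specific framework):

```python
RUNNING, FAILURE, SUCCESS = "running", "failure", "success"

class Timeout:
    """Decorator node: fails its child after max_ticks without resolution,
    so a predator eventually gives up on prey it can't catch.
    max_ticks can be chosen per prey type (longer for prey that tires)."""
    def __init__(self, child, max_ticks):
        self.child, self.max_ticks, self.elapsed = child, max_ticks, 0

    def tick(self):
        self.elapsed += 1
        if self.elapsed > self.max_ticks:
            self.elapsed = 0
            return FAILURE          # give up the chase
        status = self.child()
        if status != RUNNING:
            self.elapsed = 0        # child resolved: reset for next time
        return status

# Chasing prey that is never caught: the child node always reports RUNNING.
chase = Timeout(lambda: RUNNING, max_ticks=3)
results = [chase.tick() for _ in range(4)]  # runs 3 ticks, then fails
```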

Then in your hunt prey subtree, you can have a find_prey node that does the actual calculations of which prey we should try to go after next, to prioritize easy prey, close prey etc.

Think of a behavior tree less like a rigid box for building a special-purpose puzzle-solving brain, and more like an AI-oriented design system for containing logic that evaluates over time. The selectors are like for loops; the leaves are like if statements and function calls.

Your plans sound cool too, though! To eliminate the toggling you would definitely want some stickiness. When selecting plans, you weight them as normal, but you also add a high weight to the current plan. The other plans would have to be more important than even sticking with the current plan, at which point it would switch over. Saw this on Hacker News the other day; they present the kind of behavior you want (although you don't necessarily want to implement the behavior with Markov chains): http://setosa.io/blog/2014/07/26/markov-chains/index.html

I could still see the plans being placed under a PrioritySelector in a behavior tree Wink I'm sorry, ever since I started playing with behavior trees they are my hammer that makes everything look like a nail lol.

But you could look into goal-oriented action planning (GOAP), where you give the AI a goal, and it kind of does "pathfinding" over a series of states that it hopes will be the best sequence to lead it to a result. They kind of resemble your plans.

Lee
Level 1
« Reply #1892 on: July 31, 2014, 10:52:11 AM »

The problem is that the method you have put forth there is a decision-making scheme, and like you say, constantly re-evaluating what to do (at least that broadly) is impractical.

I think it's best to then stick with the behaviour until it's interrupted or complete:

start->NoBehaviourSet

HighLevel_Decision(){
  Plan: Follow slugcat A! [Slugcat deliciousness: +40pts] [Distance to target: 20 tiles -> -20pts] [Time since I saw this slugcat: Currently looking at it! -> +30pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts] Total points: 40pts

  Plan: Follow slugcat B! [Slugcat deliciousness: +40pts] [Distance to target: 10 tiles -> -10pts] [Time since I saw this slugcat: 100 ticks -> -10pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts]  Total points: 10pts

  Plan: Go home to den! [Time till rain: Starting to feel a little uneasy about the rain -> +10pts]  Total points: 10pts

  Decision: Follow slugcat A!
}

LowLevel_ChasingAI(Follow slugcat){
  is it getting too late?
   \-(yes) how high is my blood lust (hunger, distance to target...) -> weighted decision: continue or back to high level decision (in which go home would be chosen)?

  is another target visible?
   \-(yes) is target reachable?
     \-(yes) evaluate target (how far, delicious, deadly... are they) -> weighted decision: swap targets?

  is target visible?
   \-(no) when was target last seen (too long ago) -> back to high level decision
   \-(yes) is target reachable?
      \-(no) is target trapped?
      |  \-(yes) keep target cornered or evaluate new targets? -> weighted decision: continue or back to high level?
      \-(yes)  blah blah (attack, flee, whatever)
}

You get the point, basically you have your high-level planning and decision making which is your weighted options, and then you settle on a low-level or "focussed" AI routine which then gets called repeatedly every tick until the routine is ended. The low-level routine has a series of in-order checks with branching decisions and weighing up of options at the extremes.

I think eventually you would have found that you'd need a deeper level of AI anyway, because while you might want to check a lot of possible options when there is no clear aim, you want to ignore a lot of those options when carrying out a task.
dancing_dead
Level 2
« Reply #1893 on: July 31, 2014, 11:41:23 AM »

Quote
Think of a behavior tree less like a rigid box for building a special-purpose puzzle-solving brain, and more like an AI-oriented design system for containing logic that evaluates over time. The selectors are like for loops; the leaves are like if statements and function calls.

This. BTs are more of a structural pattern than anything. They scale really damn well, and so just how smart the creatures using BTs appear rests entirely on how much work you put into the behaviors and how clever you can get with the behavior logic flow.

The mentioned GOAP is another funky AI architecture, but in my experience (not much, admittedly) it takes a looooot more work to get GOAP running at all, and then a whole looooot more to make it produce reasonable behaviors, compared to BTs, where you could be up & running in a day or two. Not to mention that for the few cases where you do want a very accurate behavior sequence, you don't have to go mad endlessly adjusting the planner and action symbol requirements until it finally produces the sequence you want (and then does something entirely different and, often, stupid when one little condition changes, haha) - you can just create that sequence in a BT on the first try, plus some debugging.

then again, I'm a fairly recent BT convert, and so my love for BTs still burns bright, perhaps blindingly so.
agersant
Level 4
« Reply #1894 on: July 31, 2014, 05:53:59 PM »

Quote
The main problem I can see with a system like this is that it could lead to flickering back and forth between behaviors without actually carrying any of them through. This could perhaps be circumvented by having the evaluation be delayed for a little while after a decision has been made, but that in turn would make the creature look slow to react in some situations. Maybe the evaluation could be tied to some event...

"[This is not what I'm currently doing] -50pts." Giving a cost to activity change should give some inertia to the behaviors.
JLJac
Level 10
« Reply #1895 on: August 01, 2014, 02:52:21 AM »

Thanks guys, awesome input! All of your takes on the issue are really valuable to me, and it's good to see that you have some different angles on the problem. I'm taking it all in.

Update 281
I took a little step back and looked at the big picture. After spending some time with my notebook and a pen rather than staring at these pixels I managed to pin a few key points down.

First of all, the core question and its (as of now) three answers:

What is the purpose of the AI in Rain World?
 * Challenge
 * Flavor
 * Trickability


Challenge - The AI should add to the challenge, keeping you on the edge of your seat while playing. However, I'm going to dismiss this point entirely, because I have a million other parameters that affect challenge and are much easier to tune. The movement speed of an enemy, the range of its senses and the number of enemies are all such parameters.

Flavor - The whole purpose of the project is to simulate a world that feels exotic and alive. Creatures' behaviors are a huge part of that. I'm not too concerned about this one either, though - I trust my gut feeling when it comes to this aspect, and I think most of it is in the details (animation etc.) rather than the overarching architecture. In short, I'm not worried about this aspect; it's going to be cool no matter what system I use.

Trickability - This is the thing - the problem that needs to be solved. The idea is that you want the AI to be smart enough that the player can trick it and get satisfaction out of having outsmarted it. When it comes to Rain World AI, this is the holy grail I'm pursuing. Every bit of complexity on the AI's part should generally fall back on this; this is why the AI is complex. An NPC that just moves towards a target on visual contact isn't smart enough to be tricked. RW AI needs to be smart enough to come up with a simple plan and carry it through, so that you can anticipate that simple plan and act accordingly.



I've identified a few ways to achieve trickability:

Communication/Clarity - It needs to be clear what the intent and purpose of the AI character is. You need to know what it's currently up to, and if it changes its behavior, that needs to be clear too. Some of this can be communicated by animation and sound, and is thus not really an AI concern. The simplest, most solid solution to this problem would be to visualize the state of the AI with a HUD somehow. For example, I've seen some Splinter Cell game where a ghost of the player lingers where the enemy believes you to be. You could also have eye beams representing visual contact, etc. I don't like these solutions, as I try to minimize HUD in general, and because it feels like cheating to see what's going on inside the enemy's head. What you look at on the screen should just be the physical reality of the game world - you are in fact already cheating by being able to see through walls, which no other creature can. So this one is going to be an issue. One thing I've been thinking about is that flickering between behaviors should really be avoided in order to achieve this. The creature needs to commit to an action in order for the player to be able to see what it's up to.

Predictability - It's crucial that the behavior is predictable in order for your cunning plans to follow through. You should be able to know what each enemy is going to do in each situation once you've gotten to know them. If they are too predictable they might appear as soulless machines, but I think I know of a way to fix that. If their idle behaviors, when they are not pursuing or being pursued, are more random, that will bring a little randomization into the mix without hurting the predictability of the actual gameplay-relevant behaviors. In their idle states they will also move more slowly, with less urgency, giving the player some time to react to an unpredictable move.

Sufficient complexity - As stated, a creature that is too dumb can't be tricked. A box that just moves towards you can be made to fall down a pit, but you don't really feel like you've outsmarted anything. If the box tries to move around the pit, but you're one step ahead and have already prepared an ambush or something along that route, you suddenly have a war of minds (albeit a very uneven one) and that's much more fun. However, if the AI is too smart, you will just spot it as it's traversing your room on its two hour quest to collect parts for building a rocket or something, and when you don't have the ability to ask it what its ultimate goal is, it's just going to appear as if it's moving about at random. So, more complex isn't by definition better - especially if the complexity gets way ahead of item #1, the communication aspect.

Interactions - There need to be tools at hand when tricking the enemy. In the old build, a creature could pretty much only move about and eat or get eaten. There need to be a few more interactions to achieve trickability. Something to lure creatures to a specific location. Something to stun them. Some ability, such as armor or the like, that they can be robbed of through a cunning plan. These are not strictly AI issues, as they encompass larger game design choices, but they will have corresponding AI behaviors and might deserve to be mentioned here.



So that's how far I've gotten on my thoughts of why Trickability is my main goal, and how to achieve it. The next page in my notebook has a simple little observation about the kind of environment this AI will live in.

Unlike in many games, Rain World creatures will encounter extremely few of what I call "Key/Door Puzzles". That is, actions that are locked behind other actions, such as "You can't get behind the door until you've unlocked the door. You can't unlock the door until you have the key."

The Rain World NPCs are generally animal-type creatures that don't use many tools or items. The Rain World maps are generally open - they don't have locked doors that can be opened with items or switches. This means that there are extremely few situations where something is possible to do only if something else is done first. Categorically, every action is either possible or not possible, always. Either I can get up on that ledge, or I can't. Either I can hunt that prey, or I'm speared to a wall. Among the possible actions, however, there is wild variation in desirability.

This means that Rain World AI is about

Decision Making, not Problem Solving

This notion has me currently leaning in this direction. But the behavior trees seem pretty awesome too haha! I seriously don't know if anyone made it all the way through this monster of a post, but either way it was good to get my mind sorted out. As always, all your input is very welcome!  Smiley
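That "decision making, not problem solving" framing maps naturally onto a utility-style selector: filter to the actions that are possible at all, then rank the survivors by desirability — no planning chain of "do A to unlock B". A toy sketch, with every action name and number made up for illustration:

```python
# Utility-style decision making: every action is always either possible
# or not, and among the possible ones the AI simply picks the most
# desirable. There is no key/door chain, just a ranking of what is
# doable right now.

def choose_action(actions, world):
    """actions: list of (name, possible_fn, desirability_fn) tuples."""
    possible = [(name, desire(world))
                for name, can_do, desire in actions
                if can_do(world)]
    if not possible:
        return None
    return max(possible, key=lambda pair: pair[1])[0]

# Illustrative creature state and actions (not from the real game):
world = {"hunger": 0.8, "threat": 0.9, "prey_visible": True}

actions = [
    ("flee", lambda w: True,              lambda w: w["threat"]),
    ("hunt", lambda w: w["prey_visible"], lambda w: w["hunger"] * (1 - w["threat"])),
    ("idle", lambda w: True,              lambda w: 0.1),
]

print(choose_action(actions, world))  # high threat wins: "flee"
```

The possibility checks stay binary ("either I can get up on that ledge, or I can't"), while all the nuance lives in the desirability functions.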
gimymblert
« Reply #1896 on: August 01, 2014, 05:58:43 AM »

First, an overview of AI architectures:
http://intrinsicalgorithm.com/IAonAI/2012/11/ai-architectures-a-culinary-guide-gdmag-article/

I don't think the two (BT and utility) are mutually exclusive; BT is just a convenient way to structure AI code, even if you don't use its full power. It's mostly about structure, scalability, and flexibility.

The article on BT focused on implementation, so I hunted down the overall explanation to give more context.

Here is the real intro to BT; if nothing else, at least look at this one!
http://aigamedev.com/insider/presentation/behavior-trees/
You need to register (it's free) to view it, however.

More
http://www.altdev.co/2011/02/24/introduction-to-behavior-trees/
http://chrishecker.com/My_liner_notes_for_spore/Spore_Behavior_Tree_Docs
http://takinginitiative.wordpress.com/2014/02/17/synchronized-behavior-trees/ (read the comments there too!)

There are free, ready-to-use behavior tree implementations for Unity:
http://angryant.com/behave/
http://www.gamasutra.com/view/news/190615/Is_this_AI_tool_right_for_you_A_RAINindie_review.php

Another tool is the smart object, where the intelligence is coded into the environment and characters query objects for actions/decisions (see The Sims). I don't think it's entirely relevant to this project, but it might help in some cases. In BT it's common to have a subtree in the object and just have a "use" node that appends the object's subtree (a contextual action). So this idea of appending actions/evaluations based on objects in the environment is a smart way to bring in context without bloating the character AI.
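The smart-object idea can be sketched in a few lines: the object owns its own behavior, and the creature's AI only has a generic "use" step that defers to it. A toy sketch of the pattern (not any particular library's API, and the object types are invented):

```python
# Smart objects: each environment object carries its own behavior
# "subtree", and the creature AI only has a generic use node that defers
# to whatever object is at hand. Adding a new object type then adds new
# behavior without touching any creature's AI.

class Lever:
    def behavior(self, creature):
        return f"{creature} pulls the lever"

class FoodBowl:
    def behavior(self, creature):
        return f"{creature} eats from the bowl"

def use_nearest(creature, objects):
    """Generic 'use' node: run the subtree the nearest object provides."""
    if not objects:
        return f"{creature} wanders"
    return objects[0].behavior(creature)

print(use_nearest("lizard", [FoodBowl(), Lever()]))  # "lizard eats from the bowl"
```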

Just putting that there in case this tool comes in handy at some point.


About utility: for the scope of this project, I think that's enough. Here is more:
http://gdcvault.com/play/1015683/Embracing-the-Dark-Art-of
It extends the previous talk and addresses more clearly the kinds of situations found in Rain World.
Also, you don't have to use math to draw the curves (especially in Unity); I just use a regular hand-drawn curve.
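A "hand-drawn" response curve boils down to piecewise-linear interpolation over a few control points (much like sampling Unity's AnimationCurve). A minimal sketch; the control points below are invented for illustration:

```python
# A utility response curve as piecewise-linear interpolation over
# hand-placed control points, instead of an explicit formula.

def make_curve(points):
    """points: list of (x, y) control points, sorted by x."""
    def evaluate(x):
        # Clamp outside the drawn range.
        if x <= points[0][0]:
            return points[0][1]
        if x >= points[-1][0]:
            return points[-1][1]
        # Interpolate linearly inside the matching segment.
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
    return evaluate

# Threat falls off quickly with distance, then levels out:
threat = make_curve([(0.0, 1.0), (5.0, 0.3), (20.0, 0.0)])
print(threat(2.5))  # 0.65: halfway down the steep first segment
```

Designers tweak the control points; the code that consumes the weight never changes.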
« Last Edit: August 01, 2014, 06:15:10 AM by Gimym JIMBERT »

Fuzzy
« Reply #1897 on: August 01, 2014, 05:59:54 AM »

Thanks guys, awesome input! All of your takes on the issue are really valuable to me, and it's good to see that you have some different angles on the problem. I'm taking it all in.

Update 281
I took a little step back and looked at the big picture. After spending some time with my notebook and a pen rather than staring at these pixels I managed to pin a few key points down.

First of all, the core question and its (as of now) three answers:

What is the purpose of the AI in Rain World?
 * Challenge
 * Flavor
 * Trickability


Challenge - The AI should add to the challenge to keep you at the edge of your seat while playing. However, I'm going to dismiss this point entirely, because I have a million other parameters that affect challenge and are much easier to tune. The movement speed of the enemy, the range of its senses, the amount of enemies are all such parameters.


Like what you were saying: if there were some type of shiny armor, for example, and a lizard saw it (even while abstracted), it would suddenly change direction and go towards the shiny armor, and maybe stay there or take it or something. This would add some Communication/Clarity, because the player knows that if a lizard sees the armor, it will go towards it. It is also Predictable: once the lizard starts moving it will act like it is hunting something, and you could ambush it on the way by hiding and throwing a stick. The armor would be a substitute for another player in the ambush ("the bait"). And if it wasn't armor, maybe you could pick it up and place it somewhere to set up the ambush.
This is an example of how one interaction can also help with achieving the rest of the ways to trickability.
gimymblert
« Reply #1898 on: August 01, 2014, 06:31:23 AM »

Just one more thing: I mentioned "concepts" in the context of utility.
Consider that you code something like this:
threat = function(distance)

threat is the concept; what you should read there is precisely:
Concept = perception(sense)

Concepts can be compounded, like:
Fear = function (threat, health, allies)

Generally, all the concepts are stored in a kind of struct called the "blackboard". The blackboard is then used to evaluate decisions, either by looking at their weights or with checks, as in a BT or FSM.

edit:
Perception is where the curve evaluation happens, producing the weight.
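That layering (raw senses → concepts → compound concepts on a blackboard) might look like the following toy sketch; the curve shapes, field names, and numbers are all made up for illustration:

```python
# A tiny blackboard: perceptions turn raw senses into weighted concepts,
# and compound concepts (like fear) are built from simpler ones.

class Blackboard(dict):
    """Concept name -> weight in [0, 1]."""

def perceive(board, senses):
    # Concept = perception(sense): the curve evaluation gives the weight.
    board["threat"] = max(0.0, 1.0 - senses["predator_distance"] / 10.0)
    board["health"] = senses["health"]
    board["allies"] = min(1.0, senses["ally_count"] / 3.0)
    # Compound concept: Fear = function(threat, health, allies).
    board["fear"] = board["threat"] * (1.0 - board["health"]) * (1.0 - board["allies"])
    return board

board = perceive(Blackboard(),
                 {"predator_distance": 2.0, "health": 0.25, "ally_count": 0})
print(round(board["fear"], 2))  # 0.6: close predator, low health, no allies
```

Decision logic then reads only the blackboard, never the raw senses, which keeps the perception curves swappable.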
« Last Edit: August 01, 2014, 06:53:49 AM by Gimym JIMBERT »

Whiteclaws
« Reply #1899 on: August 01, 2014, 08:16:21 AM »

The lizard is a being, not an AI that can see the whole level and pathfind perfectly, so you've got to give it a first-person view of things, like in stealth games. A lizard has a FOV. When he hears a sound he'll react, and if he can't see you he'll go somewhere safer, or to the nearest place from which he heard the sound. To flesh it out, he can only know the direction the sound comes from, so he'll try to get to a place where he can see in that direction; otherwise he'll go into attention mode, where he'll watch that spot for a moment and try to get to a safer place. If he sees you, he won't take the path that takes the least time, but the path that stays nearest to the slugcat, until he hits a block; then he'll reevaluate the situation and see if you are really worth it. That's how a real being would act, and I think that tricking a real being is fun.
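The first-person senses described above (a view cone plus direction-only hearing) come down to a couple of vector checks. A minimal sketch, with the FOV angle invented and wall occlusion deliberately left out:

```python
import math

# First-person senses: the lizard only "sees" within a view cone, and a
# heard sound yields only a direction, never a position. Occlusion by
# walls is omitted here for brevity.

FOV_DEGREES = 120.0  # illustrative cone width

def can_see(lizard_pos, facing, target_pos):
    """facing: unit vector of the lizard's view direction."""
    dx, dy = target_pos[0] - lizard_pos[0], target_pos[1] - lizard_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True
    # Compare the angle to the target against half the cone width.
    cos_angle = (dx * facing[0] + dy * facing[1]) / dist
    return cos_angle >= math.cos(math.radians(FOV_DEGREES / 2))

def heard_direction(lizard_pos, sound_pos):
    """Hearing gives only a unit direction toward the sound."""
    dx, dy = sound_pos[0] - lizard_pos[0], sound_pos[1] - lizard_pos[1]
    dist = math.hypot(dx, dy) or 1.0
    return (dx / dist, dy / dist)

print(can_see((0, 0), (1, 0), (5, 1)))   # True: slightly off-axis, inside cone
print(can_see((0, 0), (1, 0), (-5, 0)))  # False: directly behind
```

The attention-mode behavior then only needs the direction from heard_direction: move somewhere with a clear view along it, rather than teleport-knowing the sound's source.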