*quick:agents, revised 2019/03/01 11:22 (victorzamora) and 2020/09/03 09:39 (pedro)*
# ML-Agents Integration Guide
This tutorial continues the small example created in the BT tutorials, where the player moves their avatar around the "environment" (a mere plane) using mouse clicks, and the enemy wanders around and pursues the player when it is near enough. We encourage you to follow that tutorial first but, if you are impatient, its final version (and the starting point for this guide) is available in the Behavior Bricks package, under the `Samples\ProgrammersQuickStartGuide\Done` folder. You are expected to have already loaded Behavior Bricks into a new Unity project; otherwise, refer to the download instructions.
The final scene and all the contents of this ML-Agents integration guide are also available in the Behavior Bricks package, under the `Samples\MLAgentsIntegrationGuide\Done` folder.
## Setting-up the environment
To start creating a new tree with a behavior trained in ML-Agents in a project that uses Behavior Bricks, you first need a regular installation of ML-Agents. More information is available in [the ML-Agents documentation](https://…).
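For reference, a typical installation (exact steps vary by release; check the documentation linked above) combines the Python trainers, installed with pip, with the Unity package added through the Package Manager:

```shell
# Python side: install the trainers (a virtual environment is recommended).
pip install mlagents
# Unity side: add the "com.unity.ml-agents" package via Window > Package Manager.
```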
The first thing we are going to do is to prepare the scene.
Before creating the C# script for our agent, we have to modify the player and the enemy:
The script `FiredBullet` gives the bullet enough intelligence to tell whether it has impacted the player, and makes it destroy itself after 2 seconds (or whatever time is indicated by its parameter).
```csharp
using UnityEngine;

// NOTE: the full original listing is elided in this revision view. This is a
// minimal sketch of the behaviour described above; the member names
// (lifeTime, hasImpacted, the "Player" tag) are assumptions.
public class FiredBullet : MonoBehaviour
{
    public float lifeTime = 2.0f;          // Seconds before the bullet autodestroys.
    public bool hasImpacted { get; private set; }

    void Start()
    {
        // Autodestroy after the configured time.
        Destroy(gameObject, lifeTime);
    }

    void OnCollisionEnter(Collision collision)
    {
        // Remember whether the bullet hit the player before disappearing.
        if (collision.gameObject.CompareTag("Player"))
            hasImpacted = true;
        Destroy(gameObject);
    }
}
```
The code should be self-explanatory.
The script `EnemyShoot` implements the shooting capacity of the enemy agent. We create a C# script that extends `MonoBehaviour`, based on the previous script `ShootOnce`.
```csharp
using UnityEngine;

// Sketch of the listing described above (the full code is elided in this
// revision view). Field names and the bullet handling are assumptions,
// modelled on the ShootOnce script from the quick-start guide.
public class EnemyShoot : MonoBehaviour
{
    public GameObject bulletPrefab;   // Prefab with a Rigidbody and a FiredBullet component.
    public Transform shootPoint;      // Where bullets are spawned.
    public float bulletSpeed = 10.0f;

    public void Shoot()
    {
        // Spawn a bullet and propel it forward.
        GameObject bullet = Instantiate(bulletPrefab, shootPoint.position, shootPoint.rotation);
        Rigidbody rb = bullet.GetComponent<Rigidbody>();
        if (rb != null)
            rb.velocity = shootPoint.forward * bulletSpeed;
    }
}
```
We have to note the following points:
Finally, we create the C# script `EnemyAgent`, which extends the `Agent` class.
```csharp
using UnityEngine;
using MLAgents;
using MLAgents.Sensors;

// Sketch: only the fragments shown in this revision are certain (the
// AgentAction signature and the "Actions, size = 2" layout). The remaining
// members and method bodies are assumptions, written against the 0.15-era
// ML-Agents API this guide appears to target; newer releases rename these
// methods (OnEpisodeBegin, OnActionReceived).
public class EnemyAgent : Agent
{
    public Transform target;            // The player to aim at.
    public float rotationSpeed = 5.0f;
    EnemyShoot shooter;

    void Start()
    {
        shooter = GetComponent<EnemyShoot>();
    }

    public override void AgentReset()
    {
        // Restore the episode's initial state here.
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Observe the target position relative to the agent.
        sensor.AddObservation(transform.InverseTransformPoint(target.position));
    }

    public override void AgentAction(float[] vectorAction)
    {
        // Actions, size = 2
        // vectorAction[0]: continuous rotation; vectorAction[1]: shoot trigger.
        float rotation = Mathf.Clamp(vectorAction[0], -1f, 1f);
        transform.Rotate(0f, rotation * rotationSpeed, 0f);

        if (vectorAction[1] > 0.5f && shooter != null)
            shooter.Shoot();
    }
}
```
This class has three main methods that are overridden from the `Agent` class. Explaining why these methods have to be overridden and how the `Agent` class works is out of the scope of this guide. The concrete implementation of these methods is described below.
## Setting-Up the execution with Behavior Bricks
Start by creating a new behavior called `mlagentBehavior` in the Behavior Bricks editor (Window > Behavior Bricks).
This behavior will be used by the enemy to wander around; when it is close to the player it follows him and, when it is even closer, it shoots at him. The behavior made in previous tutorials is similar, but that one shoots in a straight line while standing still, whereas our behavior will rotate to aim using ML-Agents.
- The first node will be a `Repeat`, linked to a `Priority Selector`.
- The first branch of our `Priority Selector` will be a node called `AgentML`, which uses ML-Agents, with an `IsTargetClose` decorator.
- In `IsTargetClose` set 7 as the close distance. For the `target` we will create a blackboard input parameter called `target`.
- `AgentML` has one input parameter that we will create in the Blackboard: `ML-Agent`.
- The second branch will be a `MoveToGameObject` node with an `IsTargetClose` decorator.
- In `IsTargetClose` set 15 as the close distance and reuse the blackboard input parameter `target`.
- In `MoveToGameObject` set `target` as the `target`.
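Conceptually, the `AgentML` node hands control of the enemy over to the trained ML-Agents policy while its branch is active. The following is a hypothetical sketch of such an action node, not the shipped implementation; it assumes Behavior Bricks' `GOAction` base class and the `Agent.RequestDecision()` API:

```csharp
using Pada1.BBCore;          // [Action] and [InParam] attributes.
using Pada1.BBCore.Tasks;    // TaskStatus.
using BBUnity.Actions;       // GOAction base class.
using MLAgents;

// Hypothetical AgentML-style node: each tick it asks the agent's policy for
// a decision (which ends up invoking EnemyAgent.AgentAction) and stays RUNNING.
[Action("MLAgents/AgentMLSketch")]
public class AgentMLSketch : GOAction
{
    [InParam("ML-Agent")]
    public Agent agent;      // The EnemyAgent from the scene.

    public override TaskStatus OnUpdate()
    {
        agent.RequestDecision();
        return TaskStatus.RUNNING;  // Keep the branch alive while the agent acts.
    }
}
```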
The behavior is prepared, so we have to add a `Behavior Executor` component to our `Enemy` GameObject and set every parameter:
- `Player` from the scene for `target`.
- `Enemy` from the scene for `ML-Agent`.
- `Floor` from the scene for `wanderArea`.
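If you prefer wiring these parameters from code instead of the inspector, Behavior Bricks' `BehaviorExecutor` exposes `SetBehaviorParam`. A sketch, assuming the scene object names used in this guide:

```csharp
using UnityEngine;
using BBUnity;   // BehaviorExecutor component.

// Optional: assign the executor's blackboard parameters from code, attached
// to the Enemy GameObject. The GameObject.Find names are assumptions.
public class EnemyBehaviorSetup : MonoBehaviour
{
    void Start()
    {
        BehaviorExecutor executor = GetComponent<BehaviorExecutor>();
        executor.SetBehaviorParam("target", GameObject.Find("Player"));
        executor.SetBehaviorParam("ML-Agent", GetComponent<EnemyAgent>());
        executor.SetBehaviorParam("wanderArea", GameObject.Find("Floor"));
    }
}
```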
Before executing, set up the enemy's `Behavior Parameters` component.
- We have 2 continuous actions, one to rotate and one to shoot; that is the action space size.
You need to train the Enemy to obtain a proper behavior but, for now, we provide a trained model that you can set in the `Behavior Parameters` component.
## Training
The node used to execute `ML-Agents` in `Behavior Bricks` also lets you train a behavior inside a behavior tree. To do so, we have to follow the same procedure as indicated in the [ML-Agents guides](https://…).
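As an illustration only, that procedure boils down to launching the external trainer and then playing the scene; the config path, run id, and flags below are assumptions that depend on your ML-Agents release:

```shell
# Start the Python trainer; 0.15-era releases require the --train flag,
# newer ones train by default.
mlagents-learn config/trainer_config.yaml --run-id=EnemyML --train
# When the trainer reports it is listening, press Play in the Unity editor:
# the AgentML branch of the tree now feeds experiences to the trainer.
```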