# ML-Agents Integration Guide
  
This tutorial continues the small example created in the BT tutorials, where the player moves his avatar in the “environment” (a mere plane) using mouse clicks, and the enemy wanders around and pursues the player when he is near enough. We encourage you to follow that tutorial first but, if you are impatient, its final version (and the starting point for this guide) is available in the Behavior Bricks package, under the `Samples\ProgrammersQuickStartGuide\Done` folder. You are expected to have already loaded Behavior Bricks into a new Unity project; otherwise, refer to the download instructions.
  
  
## Setting-up the environment
To start creating a new tree with a behavior trained in ML-Agents in a project that is using Behavior Bricks, you first need a regular installation of ML-Agents. More information can be found in [the ML-Agents documentation](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md). Once ML-Agents is installed, it is enough to drag the *ML-Agents* and *Gizmos* folders from `~/UnitySDK/Assets` to the Unity Project tab in order to import them into the project.
  
The first thing we are going to do is to prepare the `GameObject` that will allow us to execute a trained model using ML-Agents: an `Agent`.
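
In code terms, this boils down to attaching a component that derives from `Agent` to the enemy. A minimal skeleton (assuming the `MLAgents` namespace of the ML-Agents version used here) would be something like:

```csharp
using MLAgents;

// Minimal skeleton attached to the Enemy GameObject; its observations,
// actions and rewards are developed later in this guide as EnemyAgent.
public class EnemyAgent : Agent {}
```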
  
Before creating the C# script for our agent, we have to modify the player and the enemy:
- Add a cube to the enemy above the `shootpoint`, scaled to (0.1, 0.1, 0.3) at the relative position (0, 0.5, 0.5). This will tell us where the enemy is aiming.
  
![](:images:agents:Enemy_Aim_adapted.png)
  
In addition, we have to modify the way the enemy shoots in order to fit the training and the execution of the agent. Specifically, we create two new C# scripts: `FiredBullet` and `EnemyShoot`.
The script `FiredBullet` gives intelligence to the bullet, so it can tell whether it has impacted the player, and it also destroys itself after 2 seconds (or whatever time is indicated by its parameter). Additionally, the bullet keeps information about its creator, which will be used to know who to inform of the impact. Therefore, we have to add the following C# script to the Bullet prefab, after removing the script used in previous tutorials.
  
```csharp
using UnityEngine;

// ...

    }
}
```
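
As a rough, non-authoritative sketch of a bullet script with the behaviour just described, it could look like the following (the `Player` tag check, the field names and the `OnBulletImpact` message used to notify the creator are illustrative assumptions, not necessarily the guide's exact code):

```csharp
using UnityEngine;

// Sketch of a bullet that self-destructs after `lifeTime` seconds and
// notifies its creator when it hits the player. The field names and the
// OnBulletImpact message are illustrative assumptions.
public class FiredBullet : MonoBehaviour
{
    public float lifeTime = 2.0f;                 // seconds before the bullet disappears
    [HideInInspector] public GameObject creator;  // set by whoever fires the bullet

    void Start()
    {
        // Auto-destroy after the configured time, even if nothing is hit.
        Destroy(gameObject, lifeTime);
    }

    void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.CompareTag("Player") && creator != null)
        {
            // Tell the creator that the player was hit.
            creator.SendMessage("OnBulletImpact", SendMessageOptions.DontRequireReceiver);
        }
        Destroy(gameObject);
    }
}
```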
  
The code should be self-explanatory, but we have to note several things:
The script `EnemyShoot` implements the shooting capacity of the enemy agent. We create a C# script that extends `MonoBehaviour`, based on the previous script `ShootOnce`. This code should be self-explanatory too, and it has to be added to the enemy, binding the shootpoint and the bullet prefab in the editor.
  
```csharp
using UnityEngine;

// ...

    }
}
```
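
Again as a non-authoritative sketch, and reusing the assumed `creator` field from the bullet sketch above, such a component could look like this (the public `Shoot()` method and the `Rigidbody`-based launch are assumptions):

```csharp
using UnityEngine;

// Sketch of the enemy's shooting component. shootPoint and bullet are bound
// in the editor; the Shoot() method and the Rigidbody launch are assumptions.
public class EnemyShoot : MonoBehaviour
{
    public Transform shootPoint;    // the "shootpoint" child of the enemy
    public GameObject bullet;       // the Bullet prefab
    public float velocity = 30.0f;  // assumed launch speed

    // Fires a single bullet; intended to be called from the agent's action handler.
    public void Shoot()
    {
        GameObject newBullet = Instantiate(bullet, shootPoint.position, shootPoint.rotation);

        // Let the bullet know who fired it, so impacts can be reported back.
        FiredBullet firedBullet = newBullet.GetComponent<FiredBullet>();
        if (firedBullet != null)
            firedBullet.creator = gameObject;

        // Launch the bullet forward from the shoot point.
        Rigidbody rb = newBullet.GetComponent<Rigidbody>();
        if (rb != null)
            rb.velocity = shootPoint.forward * velocity;
    }
}
```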
  
We have to note the following points:
Finally, we create the C# script `EnemyAgent`, extending the `Agent` class of `MLAgents`. We add the following code, deleting the code included by default (the `Start` and `Update` methods).
  
```csharp
using UnityEngine;

// ...

    }

    public override void AgentAction(float[] vectorAction)
    {
        // Actions, size = 2
        // ...
    }
}
```
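
For orientation, here is a heavily simplified sketch of what such an agent could look like under the ML-Agents API used in this guide (where `AgentAction` receives a `float[]`). The observations chosen, the reward value and the `player`/`OnBulletImpact` names are assumptions for illustration, not the guide's actual implementation:

```csharp
using UnityEngine;
using MLAgents;

// Simplified sketch of an aiming/shooting agent with 2 continuous actions.
// The observations, reward value and helper names (player, OnBulletImpact)
// are illustrative assumptions, not the guide's exact implementation.
public class EnemyAgent : Agent
{
    public GameObject player;      // assumed reference to the player, bound in the editor
    private EnemyShoot enemyShoot; // the shooting component sketched above

    public override void InitializeAgent()
    {
        enemyShoot = GetComponent<EnemyShoot>();
    }

    public override void CollectObservations()
    {
        // Example observations: where the player is relative to the enemy,
        // and which way the enemy is currently facing.
        Vector3 toPlayer = player.transform.position - transform.position;
        AddVectorObs(toPlayer.normalized);
        AddVectorObs(transform.forward);
    }

    public override void AgentAction(float[] vectorAction)
    {
        // Actions, size = 2: vectorAction[0] rotates the enemy around the Y axis,
        // vectorAction[1] decides whether to shoot.
        float rotation = Mathf.Clamp(vectorAction[0], -1f, 1f);
        transform.Rotate(0f, rotation * 180f * Time.deltaTime, 0f);

        if (vectorAction[1] > 0.5f)
            enemyShoot.Shoot();
    }

    public override void AgentReset()
    {
        // A training scene would typically reposition the enemy and the player here.
    }

    // Assumed callback sent by the bullet (see the FiredBullet sketch) when it
    // hits the player; rewards the agent for a successful shot.
    public void OnBulletImpact()
    {
        AddReward(1.0f);
    }
}
```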
  
This class has three main methods that are overridden from the `Agent` class. Explaining why these methods have to be overridden and how the `Agent` class works is out of the scope of this guide. The concrete implementation of these methods is described below.
## Setting-Up the execution with Behavior Bricks
  
Start by creating a new behavior in the Behavior Bricks editor (Window-Behavior Bricks) called `mlagentBehavior`.
This behavior will be used by the enemy to wander around; when he is close to the player he follows him and, when he is even closer, he shoots aiming at him. The behavior made in previous tutorials is similar, but that one shoots in a straight line while standing still, whereas our behavior will rotate to aim using ML-Agents.
  
- The first node will be a `Repeat`, linked to a `Priority Selector`.
- The first branch of our `Priority Selector` will be a node called `AgentML`, which uses ML-Agents, with an `IsTargetClose` decorator.
  - In `IsTargetClose` set 7 as the close distance. For the `target` we will create a blackboard input parameter called `target`.
  - `AgentML` has one input parameter that we will create in the Blackboard: `ML-Agent GameObject`.
  
  
![](:images:agents:arbol1editadov2.png)
  
  
  
- The second branch will be a `MoveToGameObject` node with an `IsTargetClose` decorator.
  - In `IsTargetClose` set 15 as the close distance. For the `target`, use the blackboard input parameter `target` created before.
  - In `MoveToGameObject`, set the blackboard parameter `target` as its `target`.
  
  
![](:images:agents:arbol2editado.png)
  
  
  
  
![](:images:agents:enemytreeeditado.png)
  
  
The behavior is prepared, so we have to add a `Behavior Executor` component to our `Enemy` GameObject and set every parameter:
  
- `Player` from the scene for `target`.
- `Enemy` from the scene for `ML-Agent GameObject`.
- `Floor` from the scene for `wanderArea`.
  
  
![](:images:agents:behaviorexecutorv1.4.png)
  
  
Before executing, set up the `Behavior Parameters` component as in the following image.
  
  
![](:images:agents:behaviorparamswithouttrainedmodel.png)
  
  
- We have 2 continuous actions, one to rotate and one to shoot; that is the vector action space size.
  
You need to train the Enemy to obtain a proper behavior but, for now, we give you a trained model that you can set in the `Behavior Parameters` component: {{:wiki:agents:cubeagentlearningmodel.zip}}. It would be a good challenge to try to get a better model than this.
  
  
![](:images:agents:BehaviorParams.png)
  
  
## Training
  
The node used to execute `ML-Agents` in `Behavior Bricks` also allows training a behavior inside a behavior tree. To do so, we have to follow the same procedure as indicated in [the ML-Agents training guide](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-ML-Agents.md): typically, launching `mlagents-learn` with a trainer configuration file from the ML-Agents Python environment and then pressing Play in the editor.
  