`quick:agents` — revised 2019/02/27 14:30 by rodrigobravo; 2020/02/18 10:29 by jesusmayoral.
# ML-Agents Integration Guide
  
Behavior Bricks integrates ML-Agents into the behaviors created through its behavior trees. In this tutorial we will show how to integrate ML-Agents with Behavior Bricks, using an existing node in our project. This node will, in this case, execute a pre-trained reinforcement-learning shooting behavior as part of our behavior tree. As we will see at the end of this guide, the same node can also be used to train a model.
  
This tutorial continues the small example created in the BT tutorials, where the player moves their avatar around the "environment" (a mere plane) using mouse clicks, and the enemy wanders around and pursues the player when they come near enough. We encourage you to follow that tutorial first but, if you are impatient, its final version (and the starting point for this guide) is available in the Behavior Bricks package, under the `Samples\ProgrammersQuickStartGuide\Done` folder. You are, of course, expected to have already loaded Behavior Bricks into a new Unity project; otherwise, refer to the download instructions.
  
## Setting-up the environment
To start creating a new tree with a behavior trained in ML-Agents in a project that is already using Behavior Bricks, you first need to perform the standard installation of ML-Agents. More information is available in [the ML-Agents documentation](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md). Once ML-Agents is installed, it is enough to drag the *ML-Agents* and *Gizmos* folders from `~/UnitySDK/Assets` to the Unity Project tab in order to import them into the project.
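Following the linked instructions, the Python side of ML-Agents is typically installed with `pip`. The snippet below is only a setup sketch: the exact package version is an assumption and should match the ML-Agents release you cloned.

<code bash>
# Install the ML-Agents Python package (version shown is an assumed
# example; use the one matching your cloned repository).
pip install mlagents==0.13.1

# Check that the trainer entry point is available.
mlagents-learn --help
</code>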
The first thing we are going to do is prepare the necessary `gameObject` that allows us to execute a trained model using ML-Agents: an `Agent`.
  
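In ML-Agents this means adding a C# component that extends the `Agent` class. As a rough, hypothetical sketch (the class name `EnemyAgent` and the observation passed in are placeholder assumptions, not part of this guide; the real script is built step by step in this tutorial), such a component looks like this:

<code csharp>
using MLAgents;
using UnityEngine;

// Hypothetical skeleton of an ML-Agents agent; the actual script for
// this guide is developed later in the tutorial.
public class EnemyAgent : Agent
{
    public override void CollectObservations()
    {
        // Feed the model the data it was trained on, e.g. a position.
        AddVectorObs(transform.position);
    }

    public override void AgentAction(float[] vectorAction)
    {
        // Translate the model's outputs into in-game actions here.
    }
}
</code>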
Before creating the C# script for our agent, we have to modify the player and the enemy:
      }

      public override void AgentAction(float[] vectorAction)
      {
          // Actions, size = 2