Create training samples

Inventorying and assessing the health of each palm tree on the Kolovai, Tonga, plantation would take a lot of time and a large workforce. To simplify the process, you'll use a deep learning model to identify trees, then calculate their health based on a measure of vegetation greenness. The first step is to find imagery that shows Kolovai, Tonga, and has a fine enough spatial and spectral resolution to identify trees. Once you have the imagery, you'll create training samples and convert them to a format that can be used by a deep learning model. For the model to recognize what it's tasked with finding, you need to provide it with example images of palm trees so that it can identify pixels with similar characteristics.

Download the data

Accurate and high-resolution imagery is essential when extracting features. The model will only be able to identify the palm trees if the pixel size is small enough to distinguish palm canopies. Additionally, to calculate tree health, you'll need an image with spectral bands that will enable you to generate a vegetation health index. You'll find and download the imagery for this study from OpenAerialMap, an open-source repository of high-resolution, multispectral imagery.

  1. Download the Deep Learning file and unzip it to your C: drive.

    The path must be C:\DeepLearning\Data, or the files that reference this path later will not work.

  2. Go to the OpenAerialMap website.
  3. Click Start Exploring.

    In the interactive map view, you can zoom, pan, and search for imagery available anywhere on the planet. The map is broken up into grids. When you point to a grid box, a number appears. This number indicates the number of available images for that box.

  4. In the search box, type Kolovai. In the list of results, click Kolovai.

    This is a town on the main island of Tongatapu with a coconut plantation.

  5. If necessary, zoom out until you see the label for Kolovai on the map. Click the grid box directly over Kolovai and click Kolovai UAV4R Subset (OSM-Fit) by Cristiano Giovando.

    Choose Kolovai image tile

  6. Click the download button to download the raw .tif file. Save the image to a location of your choice.

    Download imagery

    Because of the file size, the download may take a few minutes.

Take a look at the data

To begin the classification process, you'll create an ArcGIS Pro project with the imagery you downloaded and save a few bookmarks to use while creating training samples.

  1. Start ArcGIS Pro. If prompted, sign in using your licensed ArcGIS account.

    If you don't have ArcGIS Pro or an ArcGIS account, you can sign up for an ArcGIS free trial.

  2. Under New, click Map.

    The Map template creates a project with a 2D map.

  3. In the Create a New Project window, name the project CoconutHealth. Save the project to the location of your choice and click OK.

    The project opens and displays the Topographic basemap.

  4. On the ribbon, on the Map tab, in the Layer group, click Add Data.

    Add data to the map

    The Add Data window appears.

  5. In the Add Data window, under Computer, browse to the Kolovai image you downloaded from OpenAerialMap. Select the .tif file and click OK.

    The Kolovai image is added to your map. The layer is listed in the Contents pane by its unique identifier, which isn't meaningful. It's best practice to rename the layer to something you understand.

  6. In the Contents pane, click the imagery layer two times and type Kolovai Palms. Press Enter.

    Rename the layer

  7. Pan and zoom around the map to get an idea of what the palm farm looks like.

    A large number of coconut palm trees are in this image. Counting them individually, on the field or by visually inspecting the image, would take days. To enable a deep learning model to do this work for you, you'll create a small sample of palm trees to use for training your model. First, you'll create a custom map display, so you can quickly zoom in on different areas of the image.

  8. At the bottom of the map window, click the map scale arrow and choose Customize.

    Create a custom scale extent

    The Scale Properties window appears.

  9. In the Scale Properties window, make sure the Standard Scales tab is selected. In the Scale box, type 1:500.

    Create custom scale

  10. Click Add and click OK.

    The custom scale option has been added to the list of map scales in your project. You'll use this scale each time you create a bookmark.

  11. On the ribbon, on the Map tab, in the Inquiry group, click Locate.

    Locate button

    The Locate pane appears.

  12. In the Locate search box, paste the following coordinates and press Enter: 175.3458501°W 21.0901350°S.

    The letter A appears on your map to mark the location of the coordinates. You'll bookmark this location using the custom scale so you can refer to it in the next section.

  13. Click the map scale list and choose 1:500.

    You are looking at a zoomed-in display of the coordinate location in your map.

  14. On the ribbon, on the Map tab, in the Navigate group, click Bookmarks. In the menu, click New Bookmark.

    Create bookmark

    The Create Bookmark window appears.

  15. In the Create Bookmark window, type Northwest palms and click OK.
  16. Create bookmarks for the following coordinates at a scale of 1:500:

    Coordinates                    Bookmark name

    175.3413074°W 21.0949798°S     Central east palms
    175.3479054°W 21.1018014°S     Southwest palms
    175.3409475°W 21.1035265°S     Southeast palms
    175.3479457°W 21.0959058°S     Central west palms

  17. Close the Locate pane and save the project.

Create training schema

Creating good training samples is essential when training a deep learning model, or any image classification model. It is also often the most time-consuming step in the process. To provide your deep learning model with the information it needs to extract all the palm trees in the image, you'll create features for a number of palm trees to teach the model what the size, shape, and spectral signature of coconut palms may be. These training samples are managed through the Training Samples Manager. Before digitizing training samples, you'll set up a new schema within the Training Samples Manager.

  1. On the ribbon, click the Imagery tab.

    ArcGIS Pro works on a contextual basis, so certain tools and tabs are only available when the associated data is selected in the Contents pane. To activate the imagery analysis tools, a raster layer must be selected.

  2. In the Contents pane, make sure Kolovai Palms is selected.

    Tools in the Image Classification, Mensuration, and Tools groups are now available to you. A new contextual tab, Raster Layer, with Appearance and Data tabs, is also activated.

  3. In the Image Classification group, click Classification Tools and choose Training Samples Manager.

    Image classification tool

    The Training Samples Manager pane appears with the default classification schema from the National Land Cover Database 2011 (NLCD2011). You'll create a schema with only one class because you're only interested in extracting coconut palm trees from the imagery.

  4. In the Training Samples Manager pane, click the Create New Schema button.

    New training schema

    The NLCD2011 schema is removed from the Training Samples Manager pane. You'll rename the schema and add one class to the schema.

  5. Right-click New Schema and choose Edit Properties. For Name, type Coconut Palms. For Description, add a short explanation and click Save.

    The schema is renamed in the Training Samples Manager pane. You can now add samples to it.

  6. With the Coconut Palms schema selected, click the Add New Class button.

    Add a new class to the schema


    If you don't see the button, try expanding the pane or clicking the drop-down arrow to see more options.

  7. In the Add New Class pane, set the following parameters:

    • For Name, type Palm.
    • For Value, type 1.
    • For Color, choose Mars Red.

    Class properties

  8. Click OK.

    The Palm class is added to the Coconut Palms schema in the Training Samples Manager pane. You'll create features with the Palm class to train the deep learning model in each bookmark you created.

Create training samples

To make sure you're capturing a representative sample of trees in the area, you'll digitize features throughout the image. These features are read into the deep learning model in a specific format called image chips. Image chips are small blocks of imagery cut from the source image. Once you've created a sufficient number of features in the Training Samples Manager, you'll export them as image chips with metadata using a geoprocessing tool.
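The tiling idea can be sketched in a few lines of plain Python. The sizes and stride here are illustrative only; the actual geoprocessing tool also writes georeferencing and label metadata alongside each chip:

```python
# Sketch: cut a source image into fixed-size chips (tiles), the format
# deep learning tools train on. Sizes are illustrative, not from the lesson.
def make_chips(width, height, tile=448, stride=448):
    """Return (x, y, w, h) windows covering the image."""
    chips = []
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            w = min(tile, width - x)   # edge chips may be smaller
            h = min(tile, height - y)
            chips.append((x, y, w, h))
    return chips

# A 1000 x 900 pixel image with 448-pixel tiles yields a 3 x 3 grid.
print(len(make_chips(1000, 900)))  # 9
```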

  1. On the ribbon, click the Map tab. In the Navigate group, click the Bookmarks drop-down button and choose the Northwest palms bookmark.
  2. In the Training Samples Manager pane, select the Palm class and click the Circle tool.

    Circle tool

    You'll use this tool to draw circles around each palm tree in your current display.

  3. On the map, click the center of a palm tree and draw a circle around a single tree.

    Make training samples

    A new palm record is added in the bottom pane of the Training Samples Manager pane. When training a deep learning model, each image chip must have all the palm trees within it labeled as a palm. The image chips will be much smaller than your current map display, but you'll create a palm record for every tree you can to ensure there are many image chips with all the palm trees marked.

  4. Draw circles around each tree in the map display.

    Palm tree training samples

    When you're finished, you'll have approximately 100 samples recorded in the Training Samples Manager pane.

  5. Create training samples for every palm tree on each bookmark.

    Overview of training data


    Digitizing training samples can be a time-consuming process, but it pays off: the more samples you provide the model with as training data, the more accurate the results it returns.

  6. When you're done creating samples, in the Training Samples Manager pane, click Save.

    Save the training samples

  7. In the Save current training samples window, under Project, click Databases and double-click the default geodatabase, CoconutHealth.gdb.
  8. Name the feature class PalmTraining and click Save.

    The last step before training the model is exporting your training samples to the correct format, as image chips.

  9. On the ribbon, click the Analysis tab. In the Geoprocessing group, click Tools.

    The Geoprocessing pane appears.

  10. In the Geoprocessing pane, search for and open the Export Training Data for Deep Learning tool.
  11. In the Export Training Data for Deep Learning tool, enter the following parameters:

    • For Input Raster, choose Kolovai Palms.
    • For Output Folder, browse to the CoconutHealth folder, create a folder called ImageChips, and click OK.
    • For Input Feature Class Or Classified Raster, browse to CoconutHealth.gdb, click the Refresh button, and choose PalmTraining.
    • For Class Value Field, choose Classvalue.
    • For Image Format, choose JPEG format.
    • For Tile Size X and Tile Size Y, type 448.
    • For Meta Data Format, choose PASCAL Visual Object Classes.

    Export image chips for training

  12. Click Run.

    The tool runs. It may take several minutes to finish.

  13. In the Catalog pane, expand Folders and CoconutHealth. Right-click the ImageChips folder and choose Refresh.

    The folder is now populated with image chip samples and metadata.

  14. Save the project.

In this lesson, you downloaded and added open-source imagery to a project, created training samples using the Training Samples Manager pane, and exported them to a format compatible with a deep learning model for training. Next, you'll identify all the trees on the plantation.

Detect palm trees with a deep learning model

Clone the default conda environment

Earlier, you created training samples of coconut palm trees and exported them as image chips. These training samples can be used to train a model using a deep learning framework such as TensorFlow, Keras, or CNTK. Optionally, you can use these samples to train your own deep learning model using the arcgis.learn module. Whether you plan to train the model or use the pre-trained model, you'll need to clone the default conda environment in ArcGIS Pro to install the deep learning libraries that the geoprocessing tools rely on.

  1. On your desktop, search for and run the Python Command Prompt as an administrator.

    The Python Command Prompt was installed with ArcGIS Pro, so it automatically runs the propy.bat initialization file. This file, which runs in place of python.exe, recognizes your application's active conda environment and allows you to run stand-alone scripts in that environment.

    For this project, you want to create a new environment named palm-detection. The default environment in ArcGIS Pro is read only, so you'll clone it to make changes.

  2. Run the following command to create a new conda environment by cloning the default ArcGIS Pro environment:
    conda create -n palm-detection --clone arcgispro-py3

    The cloning process may take a few minutes. Cloned environments are stored in the envs folder at %LOCALAPPDATA%\Esri\conda\envs\. The system variable %LOCALAPPDATA% is a substitute for the C:\Users\YourUserFolderName\AppData\Local\.

  3. After the new environment is created, run the following command to change your directory to the environment's folder:
    cd C:\Program Files\ArcGIS\Pro\bin\Python\envs\palm-detection

    If your ArcGIS Pro installation is not in the Program Files folder, use your installation's path instead. There are several default folders that Python environments can be cloned to, including \AppData\Local\ESRI\conda\envs (as shown in the screenshot) and \ArcGIS\Pro\bin\Python\envs; the location may vary depending on the previously active environment.

  4. Run the following command to activate the new environment:
    activate palm-detection

    This may take a few minutes. When the activation process is finished, the active environment name appears in parentheses at the beginning of the command prompt. Activating the new environment ensures that any changes you make occur only in the selected environment, which is important when working on multiple projects that need different packages or different versions of packages. Next, you'll install the packages that the arcgis.learn module needs to run.

    Activate new environment

  5. Run the following command to make sure you have the latest arcgis version installed as well as the deep learning dependencies:
    conda install arcgis --no-pin

    The arcgis.learn module is available in version 1.6 and later of the ArcGIS API for Python. ArcGIS Pro 2.3 was pinned to version 1.5.1, so if you encounter errors with the arcgis.learn module later, you may need to update your version of the arcgis package, your version of ArcGIS Pro, or both.

  6. Run the following command to install the deep learning package dependencies:
    conda install -c fastai -c pytorch fastai=1.0.39 pytorch=1.0.0 torchvision

    You now have a conda environment set up with all the libraries needed for the deep learning tools to run. In the next section, you'll use this environment to train your own model. Depending on your familiarity with Python, this process may add an hour to the lesson time, and may be challenging. If you want to use a model that's already been trained to continue working within ArcGIS Pro, skip to the Review the model section.

Train a deep learning model

Using the image chips you exported, you can train a model to recognize palm trees. One of the easiest ways to do this in the ArcGIS platform is to use the ArcGIS API for Python's arcgis.learn module. Using Jupyter Lab, you'll run a .ipynb file that trains a model using the SingleShotDetector method from the arcgis.learn module. Once you're finished training the model, you'll save it with an .emd file, or Esri Model Definition, that you'll use in ArcGIS Pro to detect palm trees in the imagery.
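As a rough outline, the training workflow in the notebook might look like the following sketch. It assumes the arcgis package (version 1.6 or later) with its deep learning dependencies is installed; the paths, batch size, and epoch count are placeholders rather than values from the lesson:

```python
def train_palm_detector(chips_path, output_path, epochs=10):
    """Sketch of training a SingleShotDetector on exported image chips.
    Requires the arcgis.learn module (arcgis 1.6+); paths are placeholders."""
    from arcgis.learn import prepare_data, SingleShotDetector

    # Load the PASCAL VOC image chips exported from ArcGIS Pro.
    data = prepare_data(chips_path, batch_size=4)

    # Create and train a single-shot object detector on the chips.
    ssd = SingleShotDetector(data)
    ssd.fit(epochs)

    # Save the trained model; arcgis.learn also writes the Esri model
    # definition (.emd) file used later by the geoprocessing tools.
    ssd.save(output_path)
```

This is a sketch of the approach, not the lesson's notebook; consult PalmDetectionModel.ipynb for the actual steps and hyperparameters.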

  1. Run the following command to change the directory to C:\DeepLearning\Data, where you downloaded the project data:
    cd C:\DeepLearning\Data

  2. Run the following command to open a Jupyter Lab instance:
    jupyter lab

    The project data file included a Jupyter Notebook with instructions on training the model.

  3. In the file browser pane, double-click PalmDetectionModel.ipynb to open the notebook in a new tab.

    Open the Jupyter Notebook


    For best results, run Jupyter Lab in Google Chrome. Firefox and Internet Explorer can work, but are less reliable.

  4. Run through the steps in the Jupyter notebook to produce the output model.

Review the model

The Detect Objects Using Deep Learning geoprocessing tool uses a trained deep learning model to extract features from a given input image. The training process produces an Esri model definition (.emd) file that is formatted to be read by ArcGIS geoprocessing tools. Based on the method used to train the model, the .emd file tells the tool which third-party deep learning Python API to use. You'll populate a .emd file, then use the Detect Objects Using Deep Learning tool to identify palm trees in the image.

If you trained your own model, skip to the Palm tree detection section.

  1. If necessary, open your CoconutHealth project in ArcGIS Pro.
  2. On the ribbon, click the View tab and choose Catalog Pane. Browse to the ImageChips folder you created in the CoconutHealth folder. Right-click the folder and choose Copy Path.

    Copy folder path

  3. Open File Explorer or any other file management system you use and paste the path to navigate to the ImageChips folder.

    ImageChips folder

    There are two folders, two text files, a .json file, and an .emd file, all created by the Export Training Data for Deep Learning tool. The esri_model_definition.emd file is a template that is filled in by the data scientist who trained the model, with information such as the deep learning framework, the file path to the trained model, class names, model type, and image specifications of the image used for training. The .emd file is the bridge between the trained model and ArcGIS Pro.

  4. Open the .emd file in a text editor to explore the information it requires.

    The framework, ObjectDetection model configuration and type, and image specifications are listed.

  5. Close the esri_model_definition.emd file.

    Because this is only a template, you'll use the .emd provided with the lesson data. The esri_model_definition file provided in the ImageChips output folder is what you'd provide your data scientists with for training purposes.

  6. Open the CoconutTrees.emd file that was included in the lesson data. It is saved at C:\DeepLearning\Data.

    EMD file

    The Framework entry specifies that the model runs within the arcgis.learn module. The ModelFile type is a .pth file, or PyTorch model. These two lines tell you that the model relies on PyTorch to run. For example, if the Framework were TensorFlow and the ModelFile had a .pb extension instead, you'd know that the model relied on TensorFlow. You installed these packages earlier in the new environment, so you'll make sure that environment is active.

Palm tree detection

The bulk of the work in extracting features from imagery is preparing the data, creating training samples, and training the model. Now that these steps have been completed, you'll use a trained model to detect palm trees throughout your imagery.

  1. In ArcGIS Pro, on the ribbon, click the Project tab and choose Python.

    The Python Package Manager opens. Unless you've previously used different environments within ArcGIS Pro, the default environment, arcgispro-py3, is active. Environments created and activated at the command prompt persist only in that instance of the prompt unless you also set them as active in ArcGIS Pro.

  2. Under Project Environment, click Manage Environments.

    Manage environments in ArcGIS Pro

    All the environments you've created are listed in the Manage Environments window.

  3. Click palm-detection and click OK.

    In order for the new environment to be activated, you need to restart ArcGIS Pro.

  4. Save the project. Restart ArcGIS Pro and reopen your project.
  5. In the Geoprocessing pane, search for and open the Detect Objects Using Deep Learning tool.

    This tool calls a third-party deep learning Python API and uses the specified Python raster function to process the image.

  6. For the Detect Objects Using Deep Learning tool, enter the following parameters:

    • For Input raster, choose Kolovai Palms.
    • For Output Detected Objects, type CoconutTrees.
    • For Model Definition, navigate to CoconutTrees.emd (downloaded with the lesson data located in C:\DeepLearning\Data) or the .emd file that was created with the model you saved from the Jupyter Notebook.
    • For Batch Size, type 1.
    • Check the box for Non Maximum Suppression.
    • For Max Overlap Ratio, type 0.4.

    Additional arguments will appear in the tool window because of the information in the trained model. The data scientist that supplied you with the model should provide suggestions for each argument. More information about the arguments is provided if you want to experiment with different values. Otherwise, keep the default settings.

    The score_threshold argument is the confidence threshold—how much confidence is acceptable to label an object a palm tree? This number can be tweaked to achieve desired accuracy.
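Conceptually, the threshold is a simple filter over detection confidences, as in this sketch (the scores and record layout are made up for illustration):

```python
# Sketch: keep only detections whose confidence meets the threshold.
detections = [
    {"id": 1, "score": 0.92},
    {"id": 2, "score": 0.55},   # dropped at a 0.6 threshold
    {"id": 3, "score": 0.78},
]

def filter_by_score(dets, score_threshold=0.6):
    return [d for d in dets if d["score"] >= score_threshold]

print([d["id"] for d in filter_by_score(detections)])  # [1, 3]
```

Raising the threshold trades missed trees for fewer false positives; lowering it does the reverse.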

    When performing convolution of imagery in convolutional neural network modeling, you are essentially shrinking the data, and the pixels at the edge of the image are used much less during the analysis than inner pixels. By default, the padding parameter is 0, but a padding parameter of 1 means an additional boundary of pixels, all with a value of 0, is added to the outside edges of the image. This reduces the loss of information from valid edge pixels caused by shrinking. You can change the parameter to 1 or 2 to see the effects.
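A minimal sketch of zero padding, assuming the image is a plain 2D list of pixel values:

```python
# Sketch: zero-pad a 2D pixel grid so edge pixels take part in as many
# convolution windows as interior pixels. Pure Python for illustration.
def zero_pad(grid, pad=1):
    width = len(grid[0]) + 2 * pad
    out = [[0] * width for _ in range(pad)]            # top border rows
    for row in grid:
        out.append([0] * pad + list(row) + [0] * pad)  # pad each side
    out += [[0] * width for _ in range(pad)]           # bottom border rows
    return out

# A 2 x 2 grid becomes 4 x 4, ringed with zeros.
print(zero_pad([[5, 6], [7, 8]], pad=1))
# [[0, 0, 0, 0], [0, 5, 6, 0], [0, 7, 8, 0], [0, 0, 0, 0]]
```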

    Padding pixels

    The batch_size parameter defines the number of samples that will be used to train the network in each iteration of training. For example, if you have 1,000 training samples (image chips) and a batch size of 100, the first 100 training samples will train the neural network. On the next iteration, the next 100 samples will be used, and so on. Depending on the memory your machine has available, you can increase this parameter, though training should only be done in perfect square batches, for example, a batch size of 4, 9, or 16.
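The batching described above amounts to slicing the sample list into fixed-size chunks, as in this sketch:

```python
# Sketch: feed training samples to the network in fixed-size batches.
def batches(samples, batch_size):
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

chips = list(range(1000))          # stand-ins for 1,000 image chips
sizes = [len(b) for b in batches(chips, 100)]
print(len(sizes), sizes[0])        # 10 batches of 100 samples each
```

When the sample count is not a multiple of the batch size, the final batch is simply smaller.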

    The tool could take some time to run for the full image, so if you want to experiment with other parameters, you'll change the processing extent to a smaller area.

  7. Zoom in to a scale of 1:500 somewhere in the image.
  8. Click the Environments tab on the Detect Objects Using Deep Learning tool. Change the Extent to Current Display Extent.
  9. Click Run. Once you see the results on a smaller scale, change the processing extent back to Default to process the entire image.
  10. If necessary, change the score_threshold, padding, and batch_size parameters to 0.6, 0, and 1, respectively.
  11. Click Run.

    The tool may take a few minutes to run, depending on your hardware, your available memory, and whether you are running on the CPU or a GPU.

    Objects detected by deep learning tools

  12. Save the project.

The deep learning tools in ArcGIS Pro depend on a trained model from a data scientist and the inference functions that come with the Python package for third-party deep learning modeling software. In the next lesson, you'll use raster functions to obtain an estimate of vegetation health for each tree in your study area.


In the previous lesson, you created a deep learning model to extract coconut palm trees from imagery. In this lesson, you'll use the same imagery to estimate vegetation health by calculating a vegetation health index.

To assess vegetation health, you'll calculate the Visible Atmospherically Resistant Index (VARI), which was developed as an indirect measure of leaf area index (LAI) and vegetation fraction (VF) using only reflectance values from visible wavelengths:

VARI = (Rg - Rr) / (Rg + Rr - Rb)

where Rr, Rg, and Rb are the reflectance values for the red, green, and blue bands, respectively (Gitelson et al. 2002).

Typically, vegetation health is estimated using reflectance values from both the visible and near-infrared (NIR) wavelength bands, as in the Normalized Difference Vegetation Index (NDVI). Because the Kolovai Palms raster you downloaded from OpenAerialMap is a multiband image with three bands, all within the visible electromagnetic spectrum, you'll use VARI instead.
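As a sanity check, the VARI formula can be evaluated directly. Here is a minimal sketch in plain Python with made-up reflectance values (note that the denominator can approach zero for some pixel combinations, which raster software handles as NoData):

```python
# Sketch: VARI = (Rg - Rr) / (Rg + Rr - Rb), computed per pixel from
# red, green, and blue reflectance values (Gitelson et al. 2002).
def vari(red, green, blue):
    return (green - red) / (green + red - blue)

# Healthy vegetation reflects more green than red, giving a positive VARI.
print(round(vari(red=0.2, green=0.4, blue=0.1), 2))  # 0.4
```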

Calculate VARI

Calculating VARI requires input from the three bands in the Kolovai Palms raster. To calculate it, you'll use the Band Arithmetic raster function. Raster functions are faster than geoprocessing tools because they don't create a new raster dataset; instead, they perform real-time analysis of the pixels in the display as you pan and zoom.

  1. If necessary, open your CoconutHealth project in ArcGIS Pro.
  2. On the ribbon, click the Imagery tab. In the Analysis group, click Raster Functions.
  3. In the Raster Functions pane, search for and select the Band Arithmetic raster function.


  4. In the Band Arithmetic Properties pane, set the following parameters and click Create new layer:

    • For Raster, choose the Kolovai Palms raster layer.
    • For Method, choose VARI. This function requires the band indexes that correspond to the input bands in the expression. The hint under the Band Indexes parameter reads Red Green Blue, so you'll list the band indexes for the red, green, and blue bands in that order, separated by single spaces.
    • For Band Indexes, type 1 2 3.


    The VARI layer is added to the Contents pane as Band Arithmetic_Kolovai Palms. You can zoom and pan around the area to see features such as the coastline, roads, buildings, and fields.

    VARI raster results

  5. In the Contents pane, make sure the Band Arithmetic_Kolovai Palms layer is selected.
  6. On the ribbon, click the Appearance contextual tab.
  7. In the Rendering group, click the Stretch Type drop-down menu and choose Standard Deviation.

    Change the raster stretch type

  8. In the Contents pane, click the Band Arithmetic_Kolovai Palms layer two times and rename it VARI.

Extract VARI for the coconut palms

A raster layer displaying VARI is informative, but not necessarily actionable. To know which trees need attention, you need the average VARI for each individual tree. To determine each tree's VARI value, you'll extract the underlying average VARI and symbolize the trees to show which are healthy and which need maintenance.

First, you'll convert the polygon features into circles that represent the palm trees.

  1. In the Geoprocessing pane, search for and open the Feature To Point tool. Enter the following parameters and click Run:

    • For Input Features, choose the CoconutTrees layer.
    • For Output Feature Class, type CoconutTrees_Points.

    Feature To Point tool

    A point feature class is created with a point at the centroid of each detected polygon. If you zoom to various locations and use the Measure tool, you'll find that the palm trees have an average radius of approximately 3 meters.

  2. In the Geoprocessing pane, search for and open the Buffer tool.
  3. Enter the following parameters and click Run:

    • For Input Features, choose CoconutTrees_Points.
    • For Output Feature Class, type PalmTreesBuffer.
    • For Distance, type 3 and choose Meters (make sure the value is set to Linear Unit).

    The polygon feature class shows the location and shape of each coconut palm crown.

  4. In the Contents pane, turn off the VARI, CoconutTrees, and CoconutTrees_Points layers.


    Next, you'll extract the average VARI value for each polygon.

  5. In the Geoprocessing pane, search for and open the Zonal Statistics as Table tool.
  6. In the Zonal Statistics as Table tool, enter the following parameters and click Run:

    • For Input Raster or Feature Zone Data, choose PalmTreesBuffer.
    • For Zone Field, choose ORIG_FID.
    • For Input Value Raster, choose VARI.
    • For Output Table, type MeanVARI_per_Palm.
    • Check Ignore NoData in Calculations.
    • For Statistics Type, choose Mean.

    Setting the Zone Field to ORIG_FID computes statistics for each tree individually. This attribute is the unique ID from the original CoconutTrees layer.

    Zonal Statistics as Table parameters

    The output table is added to the bottom of the Contents pane. If you open it, you'll see the original FID values and a column named MEAN that contains the average VARI values. You'll join this table to the PalmTreesBuffer layer to get a feature class with both a confidence score and an average VARI value for each detected palm tree.
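Conceptually, the zonal mean reduces to grouping pixel values by zone ID and averaging, as in this sketch with made-up values:

```python
# Sketch: Zonal Statistics as Table, reduced to its core idea -- group
# raster values by zone ID (one zone per tree buffer) and take the mean.
from collections import defaultdict

def zonal_mean(pixels):
    """pixels: iterable of (zone_id, value) pairs; NoData already removed."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for zone, value in pixels:
        sums[zone] += value
        counts[zone] += 1
    return {zone: round(sums[zone] / counts[zone], 3) for zone in sums}

samples = [(1, 0.30), (1, 0.50), (2, 0.10), (2, 0.20), (2, 0.30)]
print(zonal_mean(samples))  # {1: 0.4, 2: 0.2}
```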

  7. In the Geoprocessing pane, search for and open the Join Field tool.
  8. In the Join Field tool, enter the following parameters and click Run:

    • For Input Table, choose PalmTreesBuffer.
    • For Input Join Field, choose ORIG_FID.
    • For Join Table, choose MeanVARI_per_Palm.
    • For Join Table Field, choose ORIG_FID.
    • For Transfer Fields, choose MEAN.

    A field named MEAN is joined to the PalmTreesBuffer layer. To make the data easier to understand, you'll rename the layer and symbolize it.

  9. In the Contents pane, click PalmTreesBuffer two times and rename it PalmTreesVARI.
  10. On the ribbon, on the Appearance tab, in the Drawing group, click Symbology.
  11. For Primary symbology, choose Graduated Colors.


  12. For Field, choose MEAN.
  13. If necessary, for Method, choose Natural Breaks (Jenks) and set Classes to 4.
  14. For Color scheme, click the drop-down menu and check the Show all and Show names check boxes. Scroll through the list and choose the Red-Yellow-Green (4 Classes) scheme.


  15. Under Classes, click each label and rename the classes, from top to bottom: Needs Inspection, Declining Health, Moderate, and Healthy.

    Category labels



  16. Save the project.
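The graduated-color classification amounts to binning each tree's mean VARI into a labeled class. Here is a sketch with hypothetical break values; Natural Breaks (Jenks) derives the real breaks from your data, so they will differ:

```python
# Sketch: assign each tree a health label from its mean VARI. The break
# values are hypothetical; Natural Breaks (Jenks) computes real ones
# from the data itself.
BREAKS = [
    (0.00, "Needs Inspection"),
    (0.05, "Declining Health"),
    (0.10, "Moderate"),
    (0.15, "Healthy"),
]

def health_class(mean_vari):
    label = BREAKS[0][1]            # lowest class catches all small values
    for lower, name in BREAKS:
        if mean_vari >= lower:
            label = name
    return label

print(health_class(0.02), "|", health_class(0.2))
# Needs Inspection | Healthy
```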

Bonus: Assign field tasks and monitor project progress

One of the biggest benefits of using ArcGIS Pro for feature extraction and image analysis is its integration with the rest of the ArcGIS platform. In the last lesson, you used deep learning tools in ArcGIS Pro to identify coconut palm trees from imagery. The trees are stored as features in a feature class, ready for use throughout the GIS. As an extended workflow, you can publish the results to the cloud, configure a web app template for quality assurance, assign tree inspection tasks to workers in the field, and monitor the project's progress with a dashboard.

Publish to ArcGIS Online

To work with your data in a configurable app, you need to publish the palm trees as a feature service on ArcGIS Online or ArcGIS Enterprise. In ArcGIS Pro, right-click the PalmTreesVARI layer in the Contents pane, point to Sharing, and choose Share As Web Layer to publish the layer to your ArcGIS Online account.

Learn more about publishing feature services

Check deep learning accuracy with an app template

The accuracy of the results that the deep learning tools provide is proportional to the accuracy of your training samples and the quality of the trained model. In other words, the results won't always be perfect. To assess the quality of the model's results, check the trees whose Confidence score, stored in the deep learning results, falls below a certain value. Instead of using attribute filters in ArcGIS Pro to zoom to each record, you can use the Image Visit configurable web app template to verify the accuracy of the results in a web application.

Learn more about the Image Visit app

Perform field checks with Workforce for ArcGIS

Workforce for ArcGIS is a mobile app solution that uses feature locations to coordinate field workers. Using the Workforce app, you can assign tasks to members of your organization: for example, assign every tree whose VARI score is labeled Needs Inspection to field workers, who can then inspect the trees and mark them with a recommended treatment.

Learn more about Workforce for ArcGIS

Monitor project progress with Operations Dashboard

With Workforce for ArcGIS, you can monitor the progress of assigned work in an Operations Dashboard for ArcGIS project. Operations Dashboard for ArcGIS is a configurable web app that provides visualizations and analytics showing the real-time status of workers, services, and tasks.

Learn more about getting started with Operations Dashboard

In this lesson, you acquired open-source drone imagery and created training samples of the palm trees in the image. These samples were provided as image chips to a data scientist and used with a trained deep learning model to extract more than 11,000 palm trees from the image.

You learned about deep learning and image analysis, as well as the configurable apps of the ArcGIS platform. With imagery and a knowledge of deep learning models, you can apply this workflow to many other tasks. For example, you can use these tools to assess structural damage after natural disasters, count vehicles in urban areas, or find human-made structures near geologic hazard zones.

You can find more lessons in the Learn ArcGIS Lesson Gallery.