Map the most recent data

Coral depends on algae to survive. Oceanic temperatures that are too hot or cold reduce algae, bleaching corals white and increasing mortality rates. The NOAA Coral Reef Watch program provides global data on coral bleaching risk. This data is updated frequently.

In this lesson, you'll use ArcGIS Pro and Python to retrieve the most recent coral bleaching data as a JSON file. Then, you'll create two feature classes based on the data, change their symbology, and publish them. In later lessons, you'll develop a feed routine so that these layers and services are automatically updated when new data becomes available.

Import ArcPy and ArcGIS API for Python

First, you'll create a new project in ArcGIS Pro and change its basemap. Then, you'll use Python to import ArcPy and ArcGIS API for Python.

ArcPy is a Python site package. With it, you can use Python to run geoprocessing tools and other ArcGIS functions. ArcGIS API for Python is a Python library that also enables Python to perform GIS tasks. Later, you'll use it to connect to ArcGIS Online or ArcGIS Enterprise.

  1. Start ArcGIS Pro. If prompted, sign in using your licensed ArcGIS account (or ArcGIS Enterprise portal using a named user account).
    Note:

    If you don't have ArcGIS Pro, you can sign up for an ArcGIS free trial. If you're signing in to an Enterprise account, ensure that ArcGIS Pro is configured to use your organization's portal.

  2. Under Blank Templates, click Map.

    Map template

  3. In the Create a New Project window, for Name, type Coral Bleaching. Click OK.

    A blank map project opens in ArcGIS Pro. Depending on your organization's settings, the default extent may vary. First, you'll change the basemap to one that will emphasize your data.

  4. On the ribbon, click the Map tab. In the Layer group, click Basemap and choose Light Gray Canvas.

    Light Gray Canvas basemap option

    The basemap is added to the map and the Contents pane. The basemap also includes a reference layer that contains place-names. You won't need this reference layer, so you'll turn it off.

  5. In the Contents pane, uncheck the World Light Gray Reference layer.

    World Light Gray Reference layer turned off in Contents pane

    Next, you'll open the Python window.

    ArcGIS Pro comes with Python 3 through the Anaconda Distribution. The Anaconda Distribution includes many common Python modules used in data science applications.

  6. On the ribbon, click the Analysis tab. In the Geoprocessing group, click Python.

    Python button on Analysis tab

    The Python window appears. The window contains two parts, the Python prompt and the transcript. The Python prompt is where you enter Python code. The transcript provides a record of the Python code that you've entered.

    First, you'll run a simple line of Python code to become familiar with basic Python syntax.

    Tip:

    You can reposition and resize the Python window any way you like. Drag the title bar to reposition the pane and drag the pane's edges to resize it. You can also dock it to several areas in ArcGIS Pro.

  7. Click the Python prompt (where it says Enter Python code here), type print('Hello World!'), and press Enter.

    Transcript containing the print function

    The print() function displays the text within the parentheses in the transcript. In this case, print() is the function and 'Hello World!' is the argument (a variable or input for the function).

    You can run many Python functions by typing the function's name and including an argument inside the parentheses. However, some Python functions require no argument, while others require multiple arguments (separated by commas).

    Functions return a value, which appears in the transcript. Statements, on the other hand, execute a process without returning a value. To import modules (such as ArcPy), you'll use a statement instead of a function. First, you'll import the sys module. This module provides several functions and attributes specific to your system.
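
    For example, the difference can be sketched with any built-in function and statement (round() is used here only for illustration; it isn't part of the lesson workflow):

```python
# A function call returns a value, which the transcript displays.
# round() takes two arguments: a number and a count of decimal places.
round(3.14159, 2)   # returns 3.14

# A statement, such as an import, runs without returning a value.
import math
```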

  8. In the Python prompt, type import sys and press Enter.

    The sys module is imported. Next, you'll use the version attribute to show your version of Python. Attributes are properties of Python objects (such as modules, functions, and variables) and are accessed by appending them to the object with a dot.

  9. Type sys.version and press Enter.

    Transcript containing import sys and sys.version lines

    In this case, the sys module is the object and version is its attribute.

    The transcript returns information about your version of Python. In the example image, the version is 3.6.8. Your version may differ depending on your version of ArcGIS Pro.

    With ArcPy, you can run ArcGIS geoprocessing tools using the Python prompt. Next, you'll import the arcpy module the same way you imported the sys module.

  10. In the Python prompt, type import arcpy and press Enter.

    The arcpy module is imported.

  11. Type help(arcpy) and press Enter.

    Transcript containing import arcpy and help(arcpy) lines

    The help() function provides information about the given argument (arcpy). In this case, a large amount of information was added. You may want to resize the Python window to look through it all.

    Next, you'll import ArcGIS API for Python using the arcgis module.

  12. In the Python prompt, type import arcgis and press Enter.

    ArcGIS API for Python is imported.

    Note:

    If you receive a ModuleNotFoundError message while importing the arcgis module, ArcGIS API for Python may not be installed on your instance of ArcGIS Pro. To install it, click the Project tab and click the Python tab. Click Add Packages and search for arcgis. In the list of search results, click the arcgis package and click Install. The Install and set up guide explains how to install ArcGIS API for Python in more detail.

  13. Type help(arcgis) and press Enter.

    Transcript containing import arcgis and help(arcgis) lines

    Information about the arcgis module is displayed.

Download a file

Next, you'll download spatial data in JSON format from the NOAA Coral Reef Watch program. This data will contain the most recent information on the risk of coral bleaching.

The data is hosted on the Coral Reef Watch website. To retrieve it, you'll use several functions. These functions require you to first import the appropriate modules.

  1. In the Python prompt, run the following line:
    import os, tempfile

    The os and tempfile modules are imported. You'll also import the request submodule of the urllib module.

  2. Run the following line:
    from urllib import request

    The request submodule includes the urlretrieve function, which you'll use to retrieve the data. The urlretrieve function requires two arguments: the URL of the online file (url) and the location on your machine where the file will be saved (filename).

    In Python, variables are defined and assigned using an equal sign (=). To define the filename variable, you'll run the os.path.join() function to join a temporary directory (temp_dir) with the intended name of the file. To define the temporary directory, you'll run the tempfile.mkdtemp() function.

    Note:

    The filename variable can be any path pointing to a location on your computer. However, it's recommended to save the file in a temporary directory. The os.path.join() function is used because it works on any operating system regardless of the character used to separate folders in the file path.
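
    As a quick sketch of why this matters, os.path.join() picks the correct separator for you (the file name here matches the one used in the next step):

```python
import os
import tempfile

# mkdtemp() creates a new temporary directory and returns its path.
temp_dir = tempfile.mkdtemp()

# os.path.join() inserts the separator for the current operating
# system (a backslash on Windows, a forward slash elsewhere).
path = os.path.join(temp_dir, 'latest_data.json')
print(path.endswith('latest_data.json'))   # prints True
```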

  3. Run the following lines (copy and paste them to run them all at once):
    url = 'https://coralreefwatch.noaa.gov/vs/vs_polygons.json'
    temp_dir = tempfile.mkdtemp()
    filename = os.path.join(temp_dir, 'latest_data.json')
    response = request.urlretrieve(url, filename)
    Note:

    You may need to press Enter twice to run multiple lines of code that you copy and paste.

    These lines define all the necessary variables. To check the path of the retrieved JSON file, you'll run the print function with filename as the argument.

  4. Run the following line:
    print(filename)

    Transcript containing the path to the latest data

    In the example image, the final line is the path to the latest_data.json file you retrieved (your path will be different). You can copy the file name and paste it into your computer's file explorer (such as Windows Explorer) to open the file. The file can be opened in any text editor.

    Next, you'll create a data_raw variable to represent the data in JSON format. You'll use this variable whenever referring to the JSON data in a line of code. To create the variable, you'll need to import the json module so you can run the json.load() function. You'll also need to create an intermediate variable named json_file that opens your file.

  5. Run the following lines:
    import json
    json_file = open(filename)
    data_raw = json.load(json_file)

    Although nothing appears in the transcript, the file is opened (as the json_file variable) and its contents are loaded into the data_raw variable.

    Note:

    The json_file variable will remain open until the json_file.close() command is run. An alternative way of opening the file is to use a with statement, which automatically closes the file when its code block ends. The following lines of code show how the json_file variable can be opened with a with statement:

    with open(filename) as json_file:
        data_raw = json.load(json_file)
        # Do something with the 'data_raw' variable
    # Do something else outside the 'with' section. The json_file variable is now closed.

    The lines that begin with number signs (#) are comments that do not affect the code but provide information to the user.
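
    A minimal, self-contained sketch (writing a small throwaway JSON file first) shows the automatic closing in action:

```python
import json
import os
import tempfile

# Write a small JSON file so the example has something to open.
filename = os.path.join(tempfile.mkdtemp(), 'example.json')
with open(filename, 'w') as f:
    json.dump({'type': 'FeatureCollection', 'features': []}, f)

# Inside the with block, the file is open; afterward, it is closed.
with open(filename) as json_file:
    data_raw = json.load(json_file)
    print(json_file.closed)   # prints False

print(json_file.closed)       # prints True
print(data_raw['type'])       # prints FeatureCollection
```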

Create layers from the file

To visualize the data you downloaded (and become more familiar with how you can use Python to interact with spatial data), you'll create two feature classes based on the JSON file.

First, you'll create a new file geodatabase to contain the feature classes. The file contains data for point and polygon features, so you'll separate it into two JSON files, one for each feature type. Then, you'll create a feature class for each JSON file and save them in the geodatabase.

Like the urlretrieve function, the function to create the geodatabase (arcpy.management.CreateFileGDB) requires a path and a name. You'll also set the arcpy.env.workspace environment to make the geodatabase your default workspace.

  1. In Windows Explorer, in your computer's drive C, create a folder named Temp.
  2. In the Python prompt, run the following lines:
    arcpy.management.CreateFileGDB(r'C:\Temp', 'Live.gdb')
    arcpy.env.workspace = os.path.join(r'C:\Temp', 'Live.gdb')

    The Live geodatabase is created in the Temp folder (you can open the folder to check). Next, you'll create two dictionaries, one for point features (stations) and one for polygon features (areas of interest determined by NOAA).

    In Python, dictionaries are collections of items stored as key-value pairs. The dictionaries that you create will have two keys, matching the GeoJSON file format. The type key will copy the type value from the data_raw variable (your JSON). The features key will list the features. For now, the list will be empty, as indicated by a pair of brackets.
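
    As an aside, the two-element structure can be sketched with plain Python (the 'FeatureCollection' value here is a placeholder; in the next step, data_raw['type'] supplies the real one):

```python
# dict(type=..., features=[]) builds the same object as this literal:
data_example = {'type': 'FeatureCollection', 'features': []}

# Values are read back by key; the features list starts empty.
print(data_example['type'])            # prints FeatureCollection
print(len(data_example['features']))   # prints 0
```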

  3. Run the following lines:
    data_stations = dict(type=data_raw['type'], features=[])
    data_areas = dict(type=data_raw['type'], features=[])

    Transcript containing lines to create dictionaries

    The dictionaries are created. Next, you'll load the features in the data_raw variable into either the data_stations or data_areas dictionary, depending on each feature's geometry type.

    First, you'll create a for loop. A for loop executes a function or statement for each item in a list. You'll create one that loops through all features in the data_raw variable. With a conditional if statement, you'll determine each feature's geometry type. Then, using the append() method, you'll append each feature to the features list of the appropriate dictionary.

  4. Run the following lines:
    for feat in data_raw['features']:
        if feat['geometry']['type'] == 'Point':
            data_stations['features'].append(feat)
        else: # elif feat['geometry']['type'] in ['MultiPolygon', 'Polygon']:        
            data_areas['features'].append(feat)
    Caution:

    The Python Software Foundation recommends using spaces instead of tabs to indent lines of code. Do not mix the use of tabs and spaces for indentation, or your code will not run correctly.

    For each feature listed in the JSON data, if the geometry type is point, the feature is appended to the data_stations dictionary. If the geometry type is polygon or multipolygon, the feature is appended to the data_areas dictionary.

    You'll run the len() function using the data_stations['features'] list to check how many features were loaded into the list.

  5. Run the following line:
    len(data_stations['features'])

    Transcript showing len() function to count stations

    The line returns the number 213, indicating that there are 213 point features. The polygon features in the JSON correspond to areas of interest determined by NOAA. These areas change periodically, so you can't validate them as easily as the stations.

    Instead, you'll access the name and coordinates of the tenth feature in the list. In Python, list indexing starts at 0 instead of 1, so the tenth feature is at index 9.
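
    A quick illustration with a plain list (not the coral data):

```python
features = ['first', 'second', 'third']

# Index 0 is the first item, so the third item is at index 2.
print(features[0])   # prints first
print(features[2])   # prints third
```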

  6. Run the following line:
    data_areas['features'][9]['properties']['name']

    The transcript returns the name of the tenth feature.

  7. Run the following line:
    data_areas['features'][9]['geometry']['coordinates']

    The transcript returns a long list of coordinates, corresponding to each of the polygon's vertices.

    Note:

    Because this data is updated periodically, your results may differ from the example image.

    Transcript showing name and coordinates of Beagle Gulf feature

    Next, you'll save the dictionaries as JSON files in the same temporary directory where you saved the original JSON (temp_dir). You'll run the os.path.join() function to create file paths for each new JSON. Then, you'll create with statements and use the json.dump() function to save the dictionaries as JSON files.

  8. Run the following lines (the comments are optional):
    # Filenames of temp json files 
    stations_json_path = os.path.join(temp_dir, 'points.json')
    areas_json_path = os.path.join(temp_dir, 'polygons.json')
    # Save dictionaries into json files
    with open(stations_json_path, 'w') as point_json_file:
        json.dump(data_stations, point_json_file, indent=4)
    with open(areas_json_path, 'w') as poly_json_file:
        json.dump(data_areas, poly_json_file, indent=4)

    You'll run the print() function on each path to confirm that the files were saved correctly.

  9. Run the following line:
    print(stations_json_path)
  10. Run the following line:
    print(areas_json_path)
    Note:

    Your file path will differ from those shown in the example image.

    Transcript with print() functions for each JSON file

    Each path points to the temporary directory you created earlier. The file names are points.json and polygons.json, as you specified when you ran the os.path.join() function. If you want, you can copy each path and open the files in any text editor.

    Now that the JSON files are created, you'll run the arcpy.conversion.JSONToFeatures() function to convert them into feature classes. This function requires the path to the JSON file and the name of the feature class to be created.

  11. Run the following lines:
    # Convert JSON files to features
    arcpy.conversion.JSONToFeatures(stations_json_path, 'alert_stations') 
    arcpy.conversion.JSONToFeatures(areas_json_path, 'alert_areas')

    The feature classes are saved in the default workspace, which you previously specified to be the Live geodatabase. Two layers, alert_stations and alert_areas, are added to the map.

    Map showing stations and areas

Change the symbology

You can also update layer symbology using Python. Although doing so is not required to create a feed routine, it's best practice to display data in a visually appealing and meaningful way. Additionally, if you later decide to change the symbology, you can quickly adjust the code instead of repeating manual steps.

You'll symbolize the layers based on the alert level, which ranges from 0 to 4 depending on recorded heat stress. Lower alert levels will be blue, while higher alert levels will be red. The layers contain text fields for alert level, but the fields must be numeric in order to give them a graduated symbology.

First, you'll create a new numeric field for alert level using the arcpy.management.AddField function. This function's arguments are, in order: the layer to which you want to add the field, the name of the field, the data type, and the field alias. You can also specify other settings, but these are the only ones you need for these fields.

  1. In the Python prompt, run the following lines:
    # Add alert_level field
    arcpy.management.AddField('alert_stations', 'alert_level', 'SHORT', field_alias='Alert Level')
    arcpy.management.AddField('alert_areas', 'alert_level', 'SHORT', field_alias='Alert Level')

    Both the alert_stations and alert_areas layers have an alert_level field added. Both fields have a data type of short integer (a numeric data type) and an alias of Alert Level.

    Next, you'll calculate the new fields with the arcpy.management.CalculateField function. This function also takes the layer name and field name as arguments, as well as an expression to calculate the field. Your expression will use the int() function to convert the alert text field's values to integers.

  2. Run the following lines:
    # Calculate alert_level field
    arcpy.management.CalculateField('alert_stations', 'alert_level', "int(!alert!)")
    arcpy.management.CalculateField('alert_areas', 'alert_level', "int(!alert!)")

    The fields are calculated.
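
    The int() conversion at the heart of the expression works like this in plain Python (the '2' here is a hypothetical value from the alert text field):

```python
# int() converts a numeric string to an integer.
alert_text = '2'
alert_level = int(alert_text)
print(alert_level)       # prints 2
print(alert_level + 1)   # prints 3 (arithmetic now works)
```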

    提示:

    If you want to confirm that the fields were calculated correctly, right-click either the alert_stations or alert_areas layer and choose Attribute Table. Scroll to the end of the table and confirm that an Alert Level field has been added with values ranging between 0 and 4.

    Next, you'll change the symbology for the alert_stations layer based on the new field. First, you'll create variables to represent the current project (p) and map (m). You'll also create variables for the alert_stations layer (points_lyr) and its symbology (points_sym).

  3. Run the following lines:
    # Symbology
    p = arcpy.mp.ArcGISProject("CURRENT")
    m = p.listMaps('Map')[0]
    
    # Points
    points_lyr = m.listLayers('alert_*')[0]
    points_sym = points_lyr.symbology
    Note:

    If you are using a localized version of ArcGIS Pro, your map will have a localized name. You'll need to adjust the code to replace Map with your map's name.

    Next, you'll update the renderer. Currently, the symbols are rendered with the SimpleRenderer type (single symbol). To symbolize features differently based on a field, you'll change the renderer to GraduatedSymbolsRenderer (graduated symbols).

    You'll also set the renderer to use the alert_level field as the classification field and sort the data into four classes (0 to 1, 1 to 2, 2 to 3, and 3 to 4). For each class, you'll set the size and color so that the sizes increase with higher alert levels and the colors change from blue to red.

  4. Run the following lines:
    # Always change to the GraduatedSymbolsRenderer from the SimpleRenderer
    if points_sym.renderer.type != 'SimpleRenderer':
        points_sym.updateRenderer('SimpleRenderer')
    points_sym.updateRenderer('GraduatedSymbolsRenderer')
    points_sym.renderer.classificationField = 'alert_level'
    points_sym.renderer.breakCount = 4
    
    points_labels = ['0 - 1', '> 1 - 2', '> 2 - 3', '> 3 - 4']
    points_upperBounds = [1, 2, 3, 4]
    points_sizes = [6, 16.50, 27, 37.50] 
    layers_colors = [{'RGB': [5, 113, 176, 40]}, {'RGB': [146, 197, 222, 50]},
                       {'RGB': [244, 165, 130, 50]}, {'RGB': [202, 0, 32, 30]}]

    Next, you'll create a for loop to apply the sizes and colors you specified to each class. The range(4) function generates the indices 0 through 3, one for each class. You'll use i as the loop variable representing the class index.

  5. Run the following lines:
    for i in range(4):
        item = points_sym.renderer.classBreaks[i]
        item.symbol.applySymbolFromGallery('Circle', 1)
        item.label = points_labels[i]
        item.upperBound = points_upperBounds[i]
        item.symbol.size = points_sizes[i]
        item.symbol.color = layers_colors[i]
    Note:

    If you are using a localized version of ArcGIS Pro, you'll need to adjust the code to use the localized version of the word Circle instead. You may need to make similar adjustments in later steps.

    This loop cycles through all integers in the specified range (0 through 3). For the first index, the first label, upper bound, size, and color in the lists you previously created are used. For the second index, the second series of values is used, and so on until the end of the range.
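
    The pattern can be sketched with two of the lists above (the labels and sizes are the same values used in the lesson):

```python
points_labels = ['0 - 1', '> 1 - 2', '> 2 - 3', '> 3 - 4']
points_sizes = [6, 16.5, 27, 37.5]

# range(4) yields 0, 1, 2, 3; each index selects the matching
# entry from every parallel list.
for i in range(4):
    print(points_labels[i], points_sizes[i])

# zip() is an equivalent, index-free way to pair the lists.
for label, size in zip(points_labels, points_sizes):
    print(label, size)
```

Either form pairs the values the same way; the lesson uses the index form because each index also selects a class break from the renderer.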

    Finally, you'll run a line to update the layer's symbology based on the lines you ran previously.

  6. Run the following lines:
    # Update
    points_lyr.symbology = points_sym

    The symbology is updated. (Your map may differ from the example image due to the source data being more recent).

    Map with updated point symbology

    Using similar code, you'll change the symbology of the alert_areas layer. First, you'll create variables for the layer (polygons_lyr) and its symbology (polygons_sym).

  7. Run the following lines:
    # Polygons
    polygons_lyr = m.listLayers('alert_*')[1]
    polygons_sym = polygons_lyr.symbology

    Next, you'll update the renderer and create four classes that match those you created for the points layer.

  8. Run the following lines:
    # Always change to the GraduatedSymbolsRenderer from the SimpleRenderer
    if polygons_sym.renderer.type != 'SimpleRenderer':
        polygons_sym.updateRenderer('SimpleRenderer') 
    polygons_sym.updateRenderer('GraduatedColorsRenderer')
    polygons_sym.renderer.classificationField = 'alert_level'
    polygons_sym.renderer.breakCount = 4
    
    polygons_labels = ['0 - 1', '> 1 - 2', '> 2 - 3', '> 3 - 4']
    polygons_upperBounds = [1, 2, 3, 4]
    layers_colors = [{'RGB': [5, 113, 176, 40]}, {'RGB': [146, 197, 222, 50]},
                       {'RGB': [244, 165, 130, 50]}, {'RGB': [202, 0, 32, 30]}]

    Last, you'll create a loop to change the labels, size, and color of each class and update the layer symbology.

  9. Run the following lines:
    for i in range(4):
        item = polygons_sym.renderer.classBreaks[i]
        item.label = polygons_labels[i]
        item.upperBound = polygons_upperBounds[i]
        item.symbol.color = layers_colors[i]
    
    # Update
    polygons_lyr.symbology = polygons_sym

    The symbology is updated. (You may need to zoom in to see it more clearly.)

    Map with updated polygon symbology

  10. Save the project.

Publish the layers

Next, you'll publish the layers to ArcGIS Online or an ArcGIS Enterprise portal.

  1. Confirm that you are signed in to an ArcGIS Online account (or ArcGIS Enterprise portal using a named user account).
    Note:

    If you are signed in, the name of your account appears in the upper right corner of ArcGIS Pro.

  2. On the ribbon, click the Share tab. In the Share As group, click Web Layer.

    Web Layer button in Share As group

  3. In the Share As Web Layer pane, set the following parameters:
    • For Name, type Coral Reef Watch and add your name or initials to the end.
    • For Summary, type NOAA's latest data for risk of coral bleaching.
    • For Tags, type NOAA, Coral Bleaching, and Alert Levels, pressing Enter between each tag.
    • For Layer Type, confirm that Feature is chosen.

    Item Details and Layer Type parameters

  4. Click Analyze.

    The layer is analyzed to ensure there are no errors that would prevent publishing. The analysis returns zero errors but a few warnings related to the feature template and data source. For the purposes of this exercise, you can ignore these warnings.

  5. Click Publish.

    The tool runs. A message appears at the bottom of the pane, confirming that the web layer was published.

  6. Save the project.

    Next, you'll confirm that the layer was published successfully by adding it to a web map.

  7. Sign in to your ArcGIS organizational account or ArcGIS Enterprise portal.
  8. On the ribbon, click Content.

    Content option on ribbon

    Two new items have been added to your Content page: the Coral Reef Watch feature layer and service definition.

  9. For the Coral Reef Watch feature layer, click the options button (the three horizontal dots) and choose Open in Map Viewer.

    Open in Map Viewer option

    A web map opens with the layers you created.

In this lesson, you used Python to retrieve the most recent data from the Coral Reef Watch program, convert it into two feature classes, and symbolize it. Then, you published the layers as a feature service. In the next lesson, you'll develop a feed routine to automatically download the latest Coral Reef Watch data once it becomes available.


Create a feed routine

In the previous lesson, you retrieved, mapped, and published the most recent data from the NOAA Coral Reef Watch program. The data shows coral bleaching information for a fixed moment in time, but the data is updated frequently. How can you quickly update your map whenever NOAA updates its data?

In this lesson, you'll develop a feed routine. The feed routine will automatically download content, process it, and publish the dataset. Your feed routine will incorporate the Python workflow you used in the previous lesson.

Create a stand-alone script

Unlike in the previous lesson, you won't run the script in ArcGIS Pro. Instead, you'll use a text editor to create a stand-alone script that you can save and run with a single command. This script will contain your feed routine and follow the Aggregated Live Feed (ALF) methodology guidelines.

The ALF methodology is a set of guidelines, tools, and functions for deploying live feed routines in a production environment. With it, you can automate the deployment of a routine with minimal supervision. The ALF methodology covers the steps of a live feed routine and adds components that optimize the general workflow. These components add capabilities such as logging each step of a run, unzipping files efficiently, and automatically sending an email when a failure occurs.

Note:

To learn more about the ALF methodology, see the Aggregated Live Feed Community group. You can also download and review the Aggregated Live Feed Methodologies document.

  1. Open a file in a plain text editor.
    Note:

    If you don't have a plain text editor, you can download Notepad++ for free. You can also use a Python integrated development environment (IDE), such as PyCharm, PyScripter, or Spyder. The example images in this lesson use Notepad++.

    First, you'll save the file as a Python script file. Some text editors (including Notepad++) highlight syntax based on the programming language being used, which can be helpful when writing code.

  2. On the text editor's ribbon, click File and choose Save As. Save the file to a location on your computer as coral_reef_exercise.py.
    Note:

    Depending on the program you're using, the process for saving a file as a Python file (.py) may differ.

    In the previous lesson, the first thing you did when mapping the latest data was to import the sys module, so you'll add a statement that does the same at the start of your feed routine.

  3. In the text editor, create the line import sys.

    You'll also define a function named feedRoutine, which takes two parameters: url (the URL of the data) and workGDB (the path to the file geodatabase you created earlier). Later, you'll define this function so that it runs the steps that retrieve and map the data, but for now you'll hold their place with the pass placeholder.

    You'll also add an if __name__ == "__main__" statement, which runs when the script is executed as a stand-alone routine (for example, from a command prompt). In that case, the script reads the url and workGDB arguments from the command prompt through sys.argv[1:] and passes them to the feedRoutine function.

  4. Press Enter twice. Starting on line 3, copy and paste the following lines:
    def feedRoutine (url, workGDB):
        pass
    if __name__ == "__main__":
        [url, workGDB] = sys.argv[1:]
        feedRoutine (url, workGDB)
    Note:

    Indentation is important for the script to run correctly. Be sure to keep all the indentation in the script, and do not add unnecessary indentation.

    Feed routine with initial statements

    You imported several other modules in your workflow. You'll have the script import all the necessary modules so that you won't need to run an import statement again for each one.

  5. Modify line 1 (import sys) to read import sys, arcpy, os, tempfile, json. Press Enter and, on line 2, write from urllib import request.

    The full lines of code read as follows:

    import sys, arcpy, os, tempfile, json
    from urllib import request

    Feed routine with all modules

    Next, you'll replace the pass placeholder with the steps needed to retrieve the data and create the layers. First, you'll define workGDB as the default geodatabase (arcpy.env.workspace) and then create the geodatabase with the arcpy.management.CreateFileGDB function.

  6. Replace line 5 (pass) with the following lines:
    # workGDB and default workspace
        arcpy.env.workspace = workGDB 
        arcpy.management.CreateFileGDB(os.path.dirname(workGDB), os.path.basename(workGDB))
    Note:

    Depending on the size of your window, some longer lines may wrap, as shown in the example image:

    Feed routine with the default geodatabase defined

    Next, you'll add placeholder code for a deployment logic function. You don't need this function yet, but it's best to give it a place in your script for use in a later lesson. You'll also add a placeholder for the code you used in the previous lesson.

  7. After line 7, press Enter twice and remove all indentation. Starting on line 9, copy and paste the following lines (including the indentation).
    ### Placeholder for retrieving and mapping data ###
        # Deployment Logic
        deployLogic()
        # Return
        return True
    def deployLogic():
        pass

    Feed routine with a placeholder for deployment logic

    Note:

    It's important to use spaces rather than tabs for indentation.

    Next, you'll replace the placeholder with the code you ran in the previous lesson to retrieve and map the data.

    This code won't include the code used to change the symbology. The layers' symbology is already saved in your ArcGIS Pro project and web map, so you don't need to change it again. (The code that adds and calculates the alert_level fields will remain, because the existing symbology requires those fields.)

    The code will also include several print() functions with statements that keep you informed about which lines of code are running.

  8. Replace line 9 (### Placeholder for retrieving and mapping data ###) with the following lines:
    # Download and split json file
        print("Downloading data...")
        temp_dir = tempfile.mkdtemp()
        filename = os.path.join(temp_dir, 'latest_data.json')
        response = request.urlretrieve(url, filename)
        with open(filename) as json_file:
            data_raw = json.load(json_file)
            data_stations = dict(type=data_raw['type'], features=[])
            data_areas = dict(type=data_raw['type'], features=[])
        for feat in data_raw['features']:
            if feat['geometry']['type'] == 'Point':
                data_stations['features'].append(feat)
            else:
                data_areas['features'].append(feat)
        # Filenames of temp json files
        stations_json_path = os.path.join(temp_dir, 'points.json')
        areas_json_path = os.path.join(temp_dir, 'polygons.json')
        # Save dictionaries into json files
        with open(stations_json_path, 'w') as point_json_file:
            json.dump(data_stations, point_json_file, indent=4)
        with open(areas_json_path, 'w') as poly_json_file:
            json.dump(data_areas, poly_json_file, indent=4)
        # Convert json files to features
        print("Creating feature classes...")
        arcpy.conversion.JSONToFeatures(stations_json_path, 'alert_stations') 
        arcpy.conversion.JSONToFeatures(areas_json_path, 'alert_areas')
        # Add 'alert_level' field
        arcpy.management.AddField('alert_stations', 'alert_level', 'SHORT', field_alias='Alert Level')
        arcpy.management.AddField('alert_areas', 'alert_level', 'SHORT', field_alias='Alert Level')
        # Calculate 'alert_level' field
        arcpy.management.CalculateField('alert_stations', 'alert_level', "int(!alert!)")
        arcpy.management.CalculateField('alert_areas', 'alert_level', "int(!alert!)")

    Next, you'll add three more print() functions to other parts of the script, including one that notifies you when the script finishes executing.

  9. After line 5 (# workGDB and default workspace), press Enter. On line 6, create the following line with indentation:
    print("Creating workGDB...")

    Feed routine with a print() function added to the workGDB code block

  10. After line 43 (# Deployment Logic), press Enter. On line 44, create the following line with indentation:
    print("Deploying...")
  11. After line 47 (# Return), press Enter. On line 48, create the following line with indentation:
    print("Done!")

    Depending on how you created the script, some indentation may use tabs.

  12. Check your entire script to ensure that all indentation uses spaces rather than tabs.
    Tip:

    If you're using Notepad++, you can convert tabs to spaces automatically. On the ribbon, click Edit, point to Blank Operations, and choose TAB to Space.

  13. Save the script.
    Note:

    If you want to confirm that your script was created correctly, you can compare it to an example script.

    Next, you'll test the script.

  14. Open the Windows Start menu. Search for and open the Python Command Prompt.

    First, you'll browse to the directory where you saved the coral_reef_exercise.py file.

    Note:

    In the example images, coral_reef_exercise.py is saved in the Documents folder (C:\Users\Documents). You can browse to the file in a file explorer (such as Windows Explorer) to obtain its path and copy it.

  15. In the Python Command Prompt, type cd and press the spacebar. Paste the path to the directory where you saved the Python file. If the path contains spaces, add quotation marks around it.

    Python Command Prompt with the change directory command

  16. Press Enter.

    The directory is changed. Next, you'll run the script. You'll use the python command, which includes the file name, the URL of the website containing the coral reef data, and the path and name of the geodatabase.

  17. Run the following command:

    python coral_reef_exercise.py https://coralreefwatch.noaa.gov/vs/vs_polygons.json C:\Temp\Work.gdb

    The command takes some time to run. While it runs, the print() functions keep you informed of its progress.

    Python Command Prompt with the stand-alone script

    The layers in the work geodatabase are updated with the most recent data (which may be the same data you used in the previous lesson). For now, the script doesn't update the maps displayed in your ArcGIS Pro project and web map. You'll add that capability later with the deployLogic() function.

Add advanced script functions

Your script works, but it can be improved. First, you'll adjust the script so that it creates the geodatabase only when one doesn't already exist.

You'll use the arcpy.Exists function in an if statement to check whether the workspace exists. If it does, you'll delete all the feature classes whose names begin with alert_ so that they can be replaced with new ones. You can represent both alert layers with the string alert_*, where the asterisk stands for any text.

  1. If necessary, open the coral_reef_exercise script in your text editor.
  2. Replace lines 5 through 8 (the lines that create the default workspace) with the following lines:
    # Create workGDB and default workspace
        print("Starting workGDB...")
        arcpy.env.workspace = workGDB
        if arcpy.Exists(arcpy.env.workspace):
            for feat in arcpy.ListFeatureClasses("alert_*"):
                arcpy.management.Delete(feat)
        else:
            arcpy.management.CreateFileGDB(os.path.dirname(workGDB), os.path.basename(workGDB))

    Advanced script that checks for an existing workGDB

    If you run the script on a computer without internet access, the online JSON file can't be downloaded. To catch this common error, you'll raise the standard exception URLError, notifying the user that the URL is unavailable.

  3. Replace line 18 (response = request.urlretrieve(url, filename)) with the following lines (ensure that the first line is indented):
    try:
            response = request.urlretrieve(url, filename)
        except URLError:
            raise Exception("{0} not available. Check internet connection or url address".format(url))

    Advanced script that raises a URLError

    Next, you'll record the script's progress in a log file. You're already printing messages about each part of the script in the command prompt, so you'll add a logging.info() function after each print() function. If the script encounters an error, these functions record messages that may be useful for debugging. Each logged message will include the date and time at which it was logged.

    First, you'll modify the script to import the logging and datetime modules. You'll also import the URLError exception you used earlier.

  4. Replace lines 1 and 2 (the import lines) with the following lines:
    import sys, os, tempfile, json, logging, arcpy
    import datetime as dt
    from urllib import request
    from urllib.error import URLError

    Next, you'll use the logging.basicConfig() function to configure the log file to which messages will be written. You'll also set a log_format variable that defines the display format for dates and times.

  5. After line 6 (def feedRoutine), add the following lines:
    # Log file
        logging.basicConfig(filename="coral_reef_exercise.log", level=logging.INFO)
        log_format = "%Y-%m-%d %H:%M:%S"

    Advanced script that configures logging
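As a quick illustration (not part of the tutorial script), this is how log_format renders a timestamp; a fixed datetime is used here so the output is predictable:

```python
import datetime as dt

log_format = "%Y-%m-%d %H:%M:%S"
# Format a fixed datetime the same way the script formats dt.datetime.now()
stamp = dt.datetime(2019, 2, 19, 14, 30, 5).strftime(log_format)
print(stamp)  # 2019-02-19 14:30:05
```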

    Next, you'll add the logging.info() functions that write messages to the log. These functions record the date and time with the dt.datetime.now() function and format them using the log_format variable you created earlier. You'll add one of these functions after each print() function.

  6. After line 11 (print("Starting workGDB...")), add the following line (ensure that it is indented):
    logging.info("Starting workGDB... {0}".format(dt.datetime.now().strftime(log_format)))
  7. After line 21 (print("Downloading data...")), add the following line (ensure that it is indented):
    logging.info("Downloading data... {0}".format(dt.datetime.now().strftime(log_format)))
  8. After line 47 (print("Creating feature classes...")), add the following line (ensure that it is indented):
    logging.info("Creating feature classes... {0}".format(dt.datetime.now().strftime(log_format)))
  9. After line 59 (print("Deploying...")), add the following line (ensure that it is indented):
    logging.info("Deploying... {0}".format(dt.datetime.now().strftime(log_format)))
  10. After line 64 (print("Done!")), add the following line (ensure that it is indented):
    logging.info("Done! {0}".format(dt.datetime.now().strftime(log_format)))

    You'll also log the URLError exception with the logging.exception() function.

  11. After line 27 (except URLError), add the following lines (ensure that the first line is indented):
    logging.exception("Failed on: request.urlretrieve(url, filename) {0}".format(
                              dt.datetime.now().strftime(log_format)))

    Finally, you'll close the log file at the end of the script with the logging.shutdown() function.

  12. At the end of line 63, press Enter twice. On line 65, add the following lines:
    # Close Log File
        logging.shutdown()

    Advanced script that closes the log file

  13. Ensure that your script does not include any tab indentations. Save your script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

In this lesson, you used the command prompt to create a feed routine that can run as a stand-alone script. You also added advanced functionality. In the next lesson, you'll adapt the feed routine to update local feature classes in an ArcGIS Pro project.


Update local feature classes

In the previous lesson, you created a feed routine. When this feed routine is run, it downloads the latest NOAA coral bleaching data and creates two feature classes, one for point data and one for polygon data. These feature classes are located in the Work geodatabase. However, the script does not update the layers in your ArcGIS Pro project, which are located in the Live geodatabase.

In this lesson, you'll adapt your feed routine so that it automatically updates the alert_stations and alert_areas layers in your ArcGIS Pro project.

Define the deployment logic function

Your feed routine contains placeholder script for a deployment logic function named deployLogic(). In the ALF methodology, the deployment logic process is the part of the feed routine that takes the latest information retrieved from the internet (in your case, the data located in the Work geodatabase) and overwrites the live data (the feature classes located in the Live geodatabase).

  1. Create a copy of your coral_reef_exercise.py script named coral_reef_exercise_local.py. Open the copy in a text editor or Python IDE.

    Next, you'll modify the script so that the shutil module is imported. This module contains several advanced file operations that will be necessary for defining the deployment logic function.

  2. In the coral_reef_exercise_local.py script, on line 1, add shutil to the list of modules.

    Import function with shutil module

    Next, you'll add a third argument to the feed routine named liveGDB, which will represent the Live geodatabase where the layers in your project are located.

  3. On line 6, add liveGDB to the list of feedRoutine arguments.

    Feed routine with liveGDB argument

    You'll also add this argument to the sys.argv[1:] command at the end of the script. That way, users can supply the path and name of the Live geodatabase when running the script in the command prompt.

  4. On line 77, add liveGDB to the bracketed list. On line 78, add liveGDB to the list of feedRoutine arguments.

    Feed routine with additional liveGDB arguments

    The deployLogic() function will take two arguments, workGDB and liveGDB, so you'll add these arguments where the function is called and where it is defined.

  5. On lines 63 and 73, replace deployLogic() with deployLogic(workGDB, liveGDB).

    Feed routine with deployLogic() arguments

    Next, you'll define the deployLogic() function. You'll use a for loop to copy all of the elements in the workGDB argument and use them to replace the elements in the liveGDB argument. You'll use the os.walk() function to list the files and the shutil.copy2() function to copy and replace them. In accordance with ALF methodology, you'll also have the script ignore any .lock files.

  6. On line 74, replace pass (the placeholder) with the following lines (ensure that the first line is indented):
    for root, dirs, files in os.walk(workGDB, topdown=False):
        files = [f for f in files if '.lock' not in f]
        for f in files:
            shutil.copy2(os.path.join(workGDB, f), os.path.join(liveGDB, f))

    Script to define the deployLogic() function
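The '.lock' filter in the loop above can be exercised on its own; geodatabase lock files are skipped so that only data files are copied (the sample file names below are hypothetical):

```python
# Sample geodatabase folder contents (hypothetical names)
files = ["a00000001.gdbtable", "a00000001.gdbtable.sarl.lock", "gdb", "timestamps"]

# Keep only files whose names do not contain '.lock'
files = [f for f in files if '.lock' not in f]
print(files)
```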

    Note:

    This feed routine will replace the feature classes in your ArcGIS Pro project automatically, but some precautions are necessary. The new feature classes must be consistent with the previous data structure; missing fields or different names can break the map project.

  7. Ensure that your script does not include any tab indentations. Save your script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

Run the stand-alone script

Next, you'll run your adapted feed routine and update your ArcGIS Pro project.

It's likely that NOAA has not updated its data since you mapped the latest data in your project. If you ran the script now, you probably would not see any changes in the data. To test that your script functions correctly, you'll update the data using a historic file hosted by Learn ArcGIS.

  1. If necessary, open your Coral Bleaching project in ArcGIS Pro.
  2. If necessary, open the Python Command Prompt and use the cd command to browse to the directory where your coral_reef_exercise_local.py file is saved.
  3. In the command prompt, run the following command:

    python coral_reef_exercise_local.py https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json C:\Temp\Work.gdb C:\Temp\Live.gdb

    The command runs.

    Python Command Prompt with command run

    Note:

    Once the NOAA Coral Reef Watch program updates its data, you can replace the URL to the historic file (https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json) with the URL to NOAA's data (https://coralreefwatch.noaa.gov/vs/vs_polygons.json).

    Although the prompt states that the command was run successfully, no changes have occurred to the map in ArcGIS Pro. You'll restart ArcGIS Pro and refresh the map display to visualize the changes.

  4. Close ArcGIS Pro (save if prompted). Reopen your Coral Bleaching project.

    Restarting ArcGIS Pro may have also refreshed the map display. If it didn't, you can refresh the display manually.

  5. If necessary, in the lower right corner of the Map view, click the Refresh button.

    Refresh button

    The map refreshes and displays the historic data (from February 19, 2019).

    Map with historic data

    You can also verify that the attribute data was updated.

  6. On the map, click any station or area feature.

    The feature's pop-up opens. The date field reads 2019-02-19, the date of the historic data.

    Pop-up with historic date

  7. Close the pop-up.

In this lesson, you developed and ran a feed routine to update a local dataset. You tested the feed routine by updating the map with historic data, but by replacing the URL used in the Python command, you can apply the same process to NOAA's most recent data. In the next lesson, you'll develop and run a similar feed routine to update an online feature service.


Update an online feature service

In the previous lesson, you developed and ran a feed routine to update local feature classes in your ArcGIS Pro project. In this lesson, you'll follow a similar process to develop and run a feed routine to update an online feature service in a web map.

Acquire the service definition and ID

When updating a local feature class, you used the Live geodatabase as the location to which the new data would be copied. Online feature services aren't stored in geodatabases, so instead you'll need the service definition file and item ID to query and replace the feature service using ArcGIS API for Python.

In the first lesson, you published a feature service with the most recent data. You'll use this feature service as the target for updates. First, you'll download its service definition file.

  1. If necessary, sign in to your ArcGIS organizational account or ArcGIS Enterprise portal.
  2. Go to your Content page and access the folder where your Coral Reef Watch feature layer and service definition file are located.
    Note:

    The service definition file was created when you published the layer. Because you'll use this file to update the feature service, any changes made to the feature service after it was published (such as metadata or symbology changes) will be lost when the feature service is updated. Any web map or app that includes the feature service will also be affected.

    The recommended practice to avoid losing changes to the feature service is to create a view layer. Any map or web app using the feature layer can point to the view layer instead of the feature service, and any changes to the view layer are preserved when the feature service is updated. To create a view layer, open the details page for the feature layer and click Create View Layer. You can find more information on the documentation page.

  3. For the Coral Reef Watch service definition file, click the options button (the three dots) and choose Download.

    Download service definition file

    The service definition file (.sd) is downloaded to your computer.

  4. Copy the service definition file into the Temp folder on your drive C.

    Next, you'll copy the layer's item ID.

  5. For the Coral Reef Watch feature layer, click the options button and choose View item details.

    The details page for the feature layer opens. The ID is located in the URL.

  6. In the URL for the details page, copy the string of numbers and letters that follows id= (in the example image, this string is 2dcf249f5cd54a609d51acba6e0ba029).
    Note:

    Your feature layer's item ID will be different.

    Item ID for feature layer

  7. Paste the ID in an empty text file or somewhere that you will be able to easily access.

Define the deployment logic function

Next, you'll make another copy of your original feed routine. Then, you'll define the deployLogic() function using a script that replaces an online feature service with downloaded data from the Work geodatabase.

  1. Create a copy of your original coral_reef_exercise.py script named coral_reef_exercise_online.py. Open the copy in a text editor or Python IDE.

    For the script you create, you'll need the fnmatch, shutil, subprocess, and arcgis modules, as well as the GIS submodule.

  2. In the coral_reef_exercise_online.py script, on line 1, add fnmatch, shutil, subprocess, and arcgis to the list of modules. Press Enter and add the line from arcgis.gis import GIS to line 2.

    The full lines read as follows:

    import sys, os, tempfile, json, logging, arcpy, fnmatch, shutil, subprocess, arcgis
    from arcgis.gis import GIS

    Feed routine with lines to import modules and submodules

    When you defined the deployment logic function to update a local feature class, you added an argument to the feed routine named liveGDB, which you could use to supply the path to the necessary geodatabase when you ran the script. For this feed routine, you'll add parameters for the item ID of the feature service, the service definition file, and the service name.

  3. On line 7, add the arguments itemid, original_sd_file, and service_name to the list of feedRoutine arguments.

    The full line reads as follows:

    def feedRoutine (url, workGDB, itemid, original_sd_file, service_name):

    Arguments added to the feedRoutine function

    You'll also add these parameters to the sys.argv[1:] command at the end of the script.

  4. On lines 78 and 79, add itemid, original_sd_file, and service_name to both lists of arguments.

    The full lines read as follows:

    [url, workGDB, itemid, original_sd_file, service_name] = sys.argv[1:]
        feedRoutine (url, workGDB, itemid, original_sd_file, service_name)

    Arguments added to the sys.argv[1:] command

    You'll also add these arguments (and the workGDB argument) to the deployLogic() function where it is defined and where it is called.

  5. On lines 64 and 74, replace deployLogic() with deployLogic(workGDB, itemid, original_sd_file, service_name).

    Arguments for the deployLogic function where it is defined and called

    Next, you'll define the deployLogic() function. This function will accomplish several tasks. First, it'll get the feature service item from ArcGIS Online or your ArcGIS Enterprise portal (represented by the itemid argument) using the gis.content.get() function.

    You'll also create a gis variable defined by the GIS() function. This function's arguments will include the URL of your organization's portal (for ArcGIS Online, the URL will be https://arcgis.com), your account user name, and your account password.

  6. On line 75, replace the placeholder script (pass) with the following lines:
    # Get item from ArcGIS Online
        gis = GIS(url='https://arcgis.com', username='your_username', password='your_password')
        item = gis.content.get(itemid)
        sd_file_name = os.path.basename(original_sd_file)
        if sd_file_name != item.related_items("Service2Data")[0].name:
            raise Exception('Erroneous itemid ({0}), service name, or original sd file'.format(itemid))
    Note:

    If you're using an ArcGIS Online account, the url parameter is optional.

  7. On line 76, replace the username and password parameters with your ArcGIS account user name and password. If your account is for an ArcGIS Enterprise portal, replace the url parameter with the URL to your portal.
    Warning:

    Because this script contains your password, be careful about sharing it with others. There is advanced script functionality to help you protect your credentials.
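One common way to keep credentials out of the script (an illustration, not the tutorial's method) is to read them from environment variables set outside the script. The AGOL_USERNAME and AGOL_PASSWORD variable names below are hypothetical:

```python
import os

def get_credentials(env=None):
    # Read credentials from environment variables (hypothetical names).
    # Falls back to empty strings if the variables are not set.
    env = os.environ if env is None else env
    return env.get("AGOL_USERNAME", ""), env.get("AGOL_PASSWORD", "")

# Demonstration with an explicit dictionary standing in for os.environ
user, pwd = get_credentials({"AGOL_USERNAME": "demo", "AGOL_PASSWORD": "secret"})
print(user)  # demo
```

The GIS() call would then become GIS(url='https://arcgis.com', username=user, password=pwd), with the variables set in the shell or in a scheduled task definition rather than in the file itself.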

    Next, the deployLogic() function will unpack the contents of the service definition file (original_sd_file). It will do this using 7-Zip, a program that comes default with many Windows computers. Before you add the script that will use 7-Zip to unpack the file, you'll ensure you have it on your computer and add it to the Path environment variable on Windows. By adding it to this environment variable, you'll be able to call the program using your script.

  8. If necessary, download the appropriate version of 7-Zip for your computer and install it.

    The process of editing an environment variable differs depending on your operating system, but it usually can be done through the Environment Variables window.

  9. On your computer, open Control Panel. Click System and Security, click System, and click Advanced system settings. In the System Properties window, click Environment Variables. (The exact path may differ depending on your operating system.)
  10. In the Environment Variables window, for System variables, select the Path variable and click Edit.

    Depending on your operating system, the Edit environment variable window or the Edit System Variable window opens. The process for adding a new variable differs for each window.

  11. If the Edit environment variable window opens, click New and add C:\Program Files\7-Zip as a new variable. If the Edit System Variable window opens, scroll to the end of the Variable value parameter and paste ;C:\Program Files\7-Zip to the end (making sure not to delete any existing text).
  12. Click OK. In the Environment Variables window, click OK.

    Next, you'll create the script. It will include an error message that is raised if 7-Zip cannot be found in the Path environment variable.

  13. In the coral_reef_exercise_online.py script, after line 80, add the following lines:
    # Unpack original_sd_file using 7-zip
        path_7z = fnmatch.filter(os.environ['path'].split(';'), '*7-Zip')
        temp_dir = tempfile.mkdtemp()
        if len(path_7z):
            exe_7z = os.path.join(path_7z[0], '7z.exe')
            call_unzip = '{0} x {1} -o{2}'.format(exe_7z, original_sd_file, temp_dir)
        else:
            raise Exception('7-Zip could not be found in the PATH environment variable')
        subprocess.call(call_unzip)
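To see how the fnmatch.filter() call locates the 7-Zip folder, here is an isolated sketch using a hypothetical PATH-style list:

```python
import fnmatch

# Hypothetical PATH entries, already split on ';'
path_entries = [r"C:\Windows\System32", r"C:\Program Files\7-Zip", r"C:\Python39"]

# Keep only entries that end with '7-Zip'
path_7z = fnmatch.filter(path_entries, "*7-Zip")
print(path_7z)
```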

    After your script acquires the feature service and unzips its service definition file, you'll replace its data with the data downloaded to Work.gdb. You'll use Live.gdb as an intermediary folder that can be zipped into a new service definition file.

    Your script will use the shutil.rmtree() function to delete Live.gdb and the os.mkdir() function to create it again, removing any existing contents in the geodatabase.

  14. After line 89, add the following lines:
    # Replace Live.gdb content
        liveGDB = os.path.join(temp_dir, 'p20', 'live.gdb')
        shutil.rmtree(liveGDB)
        os.mkdir(liveGDB)
        for root, dirs, files in os.walk(workGDB):
            files = [f for f in files if '.lock' not in f]
            for f in files:
                shutil.copy2(os.path.join(workGDB, f), os.path.join(liveGDB, f))

    Next, the script will zip Live.gdb into a new service definition file, referred to as updated_sd.

  15. After line 97, add the following lines:
    # Zip file
        os.chdir(temp_dir)
        updated_sd = os.path.join(temp_dir, sd_file_name)
        call_zip = '{0} a {1} -m1=LZMA'.format(exe_7z, updated_sd)
        subprocess.call(call_zip)

    Last, the script will use the item's manager method (arcgis.features.FeatureLayerCollection.fromitem(item).manager) to overwrite the feature service with the new data.

  16. After line 102, add the following lines:
    # Replace file
        manager = arcgis.features.FeatureLayerCollection.fromitem(item).manager
        status = manager.overwrite(updated_sd)
        # Return
        return True
  17. Ensure that your script does not include any tab indentations. Save your script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

Run the stand-alone script

Next, you'll run the feed routine in the Python Command Prompt and update the Coral Reef Watch online feature service. As in the previous lesson, for the purposes of this exercise you will update the feature service using historic data.

  1. If necessary, in ArcGIS Online or your ArcGIS Enterprise portal, open the Coral Reef Watch feature layer in Map Viewer.
  2. If necessary, open the Python Command Prompt and use the cd command to browse to the directory where your coral_reef_exercise_online.py file is saved.
    Note:

    If you added 7-Zip to the Path environment variable, you may need to restart the Python Command Prompt.

  3. Paste the following command (do not run it yet):

    python coral_reef_exercise_online.py https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json C:\Temp\Work.gdb

    Before you run the command, you'll add the remaining arguments it requires: the item ID, the original service definition file, and the service name.

  4. At the end of the command, press the spacebar and paste the item ID for the feature service.
    Note:

    The item ID is a string of letters and numbers located in the feature service's URL, such as 2dcf249f5cd54a609d51acba6e0ba029.

    You previously downloaded the original service definition file and copied it to your Temp folder. The service name is Coral_Reef_Watch, with your name or initials at the end.

  5. Press the spacebar and paste C:\Temp\Coral_Reef_Watch.sd Coral_Reef_Watch. Change the service definition file name and the service name so that they match your own.

    Your completed command is formatted similarly to the following command (but with a different item ID, service definition file name, and service name):

    python coral_reef_exercise_online.py https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json C:\Temp\Work.gdb 2dcf249f5cd54a609d51acba6e0ba029 C:\Temp\Coral_Reef_Watch.sd Coral_Reef_Watch

    Example command in Python Command Prompt

  6. Run the command.

    The command runs. The command prompt returns a lot of information about 7-Zip and the data being extracted. When the command finishes, the command prompt returns the line Done!

    Command prompt returning Done

    Next, you'll visualize the changes on your web map.

  7. Refresh your web map using your browser's Refresh or Reload button.

    The map is updated with data from February 19, 2019.

    Final map with February 19, 2019, data

  8. Click any station or area to open its pop-up.

    The date field reads 2019-02-19, confirming that the data has been updated correctly.

    Date field in pop-up

  9. Close the pop-up. Close the web map without saving. (Alternatively, save the web map if you like.)

In this lesson, you created feed routines to automatically update local and web layers with the latest NOAA data. The lesson also introduced basic and advanced Python topics and used the ALF methodology to develop and implement a feed routine. For the purposes of the lesson, you used historic data to update the layers, but you can run the same script using the URL to the NOAA data.

You can run this script frequently to keep your layers up to date. By setting up a task in Windows, you can have the script run automatically at a specified interval, making it deployable with real-time data.
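For example, the Windows Task Scheduler command-line tool schtasks can register a daily run (a sketch only; the task name, script path, and schedule below are placeholders to adjust for your setup):

```shell
schtasks /Create /SC DAILY /ST 06:00 /TN "CoralReefFeed" /TR "python C:\Scripts\coral_reef_exercise_local.py https://coralreefwatch.noaa.gov/vs/vs_polygons.json C:\Temp\Work.gdb C:\Temp\Live.gdb"
```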

You can find more lessons in the Learn ArcGIS Lesson Gallery.