Map the most recent data

Coral depends on algae to survive. Ocean temperatures that are too hot or too cold reduce the algae, bleaching corals white and increasing mortality rates. The NOAA Coral Reef Watch program provides frequently updated global data on coral bleaching risk.

In this lesson, you'll use ArcGIS Pro and Python to retrieve the most recent coral bleaching data as a JSON file. Then, you'll create two feature classes based on the data, change their symbology, and publish them. In later lessons, you'll develop a feed routine so that these layers and services are automatically updated when new data becomes available.

Import ArcPy and ArcGIS API for Python

First, you'll create a new project in ArcGIS Pro and change its basemap. Then, you'll use Python to import ArcPy and ArcGIS API for Python.

ArcPy is a Python site package. With it, you can use Python to run geoprocessing tools and other ArcGIS functions. ArcGIS API for Python is a Python library that also enables Python to perform GIS tasks. Later, you'll use it to connect to ArcGIS Online or ArcGIS Enterprise.

  1. Start ArcGIS Pro. If prompted, sign in using your licensed ArcGIS account (or ArcGIS Enterprise portal using a named user account).
    Note:

    If you don't have ArcGIS Pro, you can sign up for an ArcGIS free trial. If you're signing in to an Enterprise account, ensure that ArcGIS Pro is configured to use your organization's portal.

  2. Under Blank Templates, click Map.

    Map template

  3. In the Create a New Project window, for Name, type Coral Bleaching. Click OK.

    A blank map project opens in ArcGIS Pro. Depending on your organization's settings, the default extent may vary. First, you'll change the basemap to one that will emphasize your data.

  4. On the ribbon, click the Map tab. In the Layer group, click Basemap and choose Light Gray Canvas.

    Light Gray Canvas basemap option

    The basemap is added to the map and the Contents pane. The basemap also includes a reference layer that contains place-names. You won't need this reference layer, so you'll turn it off.

  5. In the Contents pane, uncheck the World Light Gray Reference layer.

    World Light Gray Reference layer turned off in Contents pane

    Next, you'll open the Python window.

    ArcGIS Pro comes with Python 3 through the Anaconda Distribution. The Anaconda Distribution includes many common Python modules used in data science applications.

  6. On the ribbon, click the Analysis tab. In the Geoprocessing group, click Python.

    Python button on Analysis tab

    The Python window appears. The window contains two parts: the Python prompt and the transcript. The Python prompt is where you enter Python code. The transcript provides a record of the Python code that you've entered.

    First, you'll run a simple line of Python code to become familiar with basic Python syntax.

    Tip:

    You can reposition and resize the Python window any way you like. Drag the title bar to reposition the pane and drag the pane's edges to resize it. You can also dock it to several areas in ArcGIS Pro.

  7. Click the Python prompt (where it says Enter Python code here), type print('Hello World!'), and press Enter.

    Transcript containing the print function

    The print() function displays the text inside the parentheses in the transcript. In this case, print() is the function and 'Hello World!' is the argument (an input value for the function).

    You can run many Python functions by typing the function's name and including an argument inside the parentheses. Not all Python functions require an argument, and some require multiple arguments (separated by commas).

    Functions return a value, which appears in the transcript. Statements, on the other hand, execute a process without returning a value. To import modules (such as ArcPy), you'll use a statement instead of a function. First, you'll import the sys module. This module provides several functions specific to your system.
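    For example, the following lines (not part of the lesson workflow) contrast function calls, which return values, with a statement, which doesn't:

```python
# A function call: len() takes one argument and returns a value.
length = len('coral')        # 5

# A function with two arguments, separated by a comma.
rounded = round(3.14159, 2)  # 3.14

# A statement: importing a module executes a process
# but does not return a value.
import math
```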

  8. In the Python prompt, type import sys and press Enter.

    The sys module is imported. Next, you'll use the version attribute to show your version of Python. Attributes are properties of Python objects; you access them by appending the attribute name to the object with a dot.

  9. Type sys.version and press Enter.

    Transcript containing import sys and sys.version lines

    In this case, the sys module is the object and version is its attribute.

    The transcript returns information about your version of Python. In the example image, the version is 3.6.8. Your version may differ depending on your version of ArcGIS Pro.

    With ArcPy, you can run ArcGIS geoprocessing tools using the Python prompt. Next, you'll import the arcpy module the same way you imported the sys module.

  10. In the Python prompt, type import arcpy and press Enter.

    The arcpy module is imported.

  11. Type help(arcpy) and press Enter.

    Transcript containing import arcpy and help(arcpy) lines

    The help() function displays information about the given argument (arcpy). In this case, a large amount of information is added to the transcript. You may want to resize the Python window to look through it all.

    Next, you'll import ArcGIS API for Python using the arcgis module.

  12. In the Python prompt, type import arcgis and press Enter.

    ArcGIS API for Python is imported.

    Note:

    If you receive a ModuleNotFoundError message while importing the arcgis module, ArcGIS API for Python may not be installed on your instance of ArcGIS Pro. To install it, click the Project tab and click the Python tab. Click Add Packages and search for arcgis. In the list of search results, click the arcgis package and click Install. The Install and set up guide explains how to install ArcGIS API for Python in more detail.

  13. Type help(arcgis) and press Enter.

    Transcript containing import arcgis and help(arcgis) lines

    Information about the arcgis module is displayed.

Download a file

Next, you'll download spatial data in JSON format from the NOAA Coral Reef Watch program. This data will contain the most recent information on the risk of coral bleaching.

The data is hosted on the Coral Reef Watch website. To retrieve it, you'll use several functions. These functions require you to first import the appropriate modules.

  1. In the Python prompt, run the following line:
    import os, tempfile

    The os and tempfile modules are imported. You'll also import the request submodule of the urllib module.

  2. Run the following line:
    from urllib import request

    The request submodule includes the urlretrieve function, which you'll use to retrieve the data. The urlretrieve function requires two arguments: the URL of the online file (url) and the location on your machine where the file will be saved (filename).

    In Python, variables are defined and assigned using an equal sign (=). To define the filename variable, you'll run the os.path.join() function to join a temporary directory (temp_dir) with the intended name of the file. To define the temporary directory, you'll run the tempfile.mkdtemp() function.

    Note:

    The filename variable can be any path pointing to a location on your computer. However, it's recommended to save the file in a temporary directory. The os.path.join() function is used because it works on any operating system regardless of the character used to separate folders in the file path.
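    As a quick sketch of this behavior (the temporary directory created below is only for illustration):

```python
import os
import tempfile

# os.path.join() inserts the separator appropriate for the current
# operating system ('\' on Windows, '/' elsewhere), so the same code
# runs anywhere.
temp_dir = tempfile.mkdtemp()
path = os.path.join(temp_dir, 'latest_data.json')

# The resulting path always ends with the file name.
print(path)
```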

  3. Run the following lines (copy and paste them to run them all at once):
    url = 'https://coralreefwatch.noaa.gov/product/vs/vs_polygons.json'
    temp_dir = tempfile.mkdtemp()
    filename = os.path.join(temp_dir, 'latest_data.json')
    response = request.urlretrieve(url, filename)
    Note:

    You may need to press Enter twice to run multiple lines of code that you copy and paste.

    These lines download the file and define all the necessary variables. To check the path of the retrieved JSON file, you'll run the print() function with filename as the argument.

  4. Run the following line:
    print(filename)

    Transcript containing the path to the latest data

    In the example image, the final line is the path to the latest_data.json file you retrieved (your path will be different). You can copy the file name and paste it into your computer's file explorer (such as Windows Explorer) to open the file. The file can be opened in any text editor.

    Next, you'll create a data_raw variable to represent the data in JSON format. You'll use this variable whenever referring to the JSON data in a line of code. To create the variable, you'll need to import the json module so you can run the json.load() function. You'll also need to create an intermediate variable named json_file that opens your file.

  5. Run the following lines:
    import json
    json_file = open(filename)
    data_raw = json.load(json_file)

    Although nothing happens in the transcript, the json_file variable (your JSON file) is opened and the data_raw variable is loaded with it.

    Note:

    The json_file variable will remain open until the json_file.close() command is run. An alternative way of opening the file is to use a with statement, which automatically closes the file when its code block ends. The following lines show how the json_file variable can be opened with a with statement:

    with open(filename) as json_file:
        data_raw = json.load(json_file)
        # Do something with the 'data_raw' variable
    # Do something else outside the 'with' section. The json_file variable is now closed.

    The lines that begin with number signs (#) are comments that do not affect the code but provide information to the user.

Create layers from the file

To visualize the data you downloaded (and become more familiar with how you can use Python to interact with spatial data), you'll create two feature classes based on the JSON file.

First, you'll create a new file geodatabase to contain the feature classes. The file contains data for point and polygon features, so you'll separate it into two JSON files, one for each feature type. Then, you'll create a feature class for each JSON file and save them in the geodatabase.

Like the urlretrieve function, the function that creates the geodatabase (arcpy.management.CreateFileGDB) requires a path and a name. You'll also set the arcpy.env.workspace environment property to make the geodatabase your default workspace.

  1. In Windows Explorer, in your computer's drive C, create a folder named Temp.
  2. In the Python prompt, run the following lines:
    arcpy.management.CreateFileGDB(r'C:\Temp', 'Live.gdb')
    arcpy.env.workspace = os.path.join(r'C:\Temp', 'Live.gdb')

    The Live geodatabase is created in the Temp folder (you can open the folder to check). Next, you'll create two dictionaries, one for point features (stations) and one for polygon features (areas of interest determined by NOAA).

    In Python, dictionaries are collections of items indexed by keys. The dictionaries that you create will have two elements, as required by the GeoJSON file format. The type element will copy its value from the type element of the data_raw variable (your JSON). The features element will list the features. For now, the list will be empty, as indicated by a pair of square brackets.
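    As a sketch of the structure you're about to build (the values below are illustrative, not taken from the NOAA data):

```python
# A minimal GeoJSON-style dictionary: 'type' names the GeoJSON type
# and 'features' starts as an empty list (a pair of square brackets).
data_example = dict(type='FeatureCollection', features=[])

# Features can later be appended to the list.
data_example['features'].append({'geometry': {'type': 'Point'}})
```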

  3. Run the following lines:
    data_stations = dict(type=data_raw['type'], features=[])
    data_areas = dict(type=data_raw['type'], features=[])

    Transcript containing lines to create dictionaries

    The dictionaries are created. Next, you'll load the features in the data_raw variable into either the data_stations or data_areas dictionary, depending on feature geometry type.

    First, you'll create a for loop. A for loop executes a block of code for each item in a sequence. You'll create one that loops through all features in the data_raw variable. A conditional if statement will test each feature's geometry type. Then, using the append() method, you'll add each feature to the features list of the appropriate dictionary.

  4. Run the following lines:
    for feat in data_raw['features']:
        if feat['geometry']['type'] == 'Point':
            data_stations['features'].append(feat)
        else: # elif feat['geometry']['type'] in ['MultiPolygon', 'Polygon']:        
            data_areas['features'].append(feat)
    Caution:

    The Python Software Foundation recommends using spaces instead of tabs to indent lines of code. Do not mix the use of tabs and spaces for indentation, or your code will not run correctly.

    For each feature listed in the JSON data, if the geometry type is point, the feature will be appended to the data_stations dictionary. If the geometry type is polygon or multipolygon, the feature will be appended to the data_areas dictionary.

    You'll run the len() function using the data_stations['features'] list to check how many features were loaded into the list.

  5. Run the following line:
    len(data_stations['features'])

    Transcript showing len() function to count stations

    The line returns the number 213, indicating that there are 213 point features. The polygon features in the JSON correspond to areas of interest determined by NOAA. These areas change periodically, so you can't validate them as easily as the stations.

    Instead, you'll access the name and coordinates of the tenth feature in the list. In Python, list indices start at 0 instead of 1, so the tenth feature is feature 9.
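    A quick illustration of zero-based indexing, using a short hypothetical list:

```python
# Index 0 is the first item, so the tenth item has index 9.
letters = ['a', 'b', 'c', 'd']
first = letters[0]   # 'a'
fourth = letters[3]  # 'd'
```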

  6. Run the following line:
    data_areas['features'][9]['properties']['name']

    The transcript returns the name of the tenth feature.

  7. Run the following line:
    data_areas['features'][9]['geometry']['coordinates']

    The transcript returns a long list of coordinates, corresponding to each of the polygon's vertices.

    Note:

    Because this data is updated periodically, your results may differ from the example image.

    Transcript showing name and coordinates of Beagle Gulf feature

    Next, you'll save the dictionaries as JSON files in the same temporary directory where you saved the original JSON (temp_dir). You'll run the os.path.join() function to create file paths for each new JSON. Then, you'll create with statements and use the json.dump() function to save the dictionaries as JSON files.

  8. Run the following lines (the comments are optional):
    # Filenames of temp json files 
    stations_json_path = os.path.join(temp_dir, 'points.json')
    areas_json_path = os.path.join(temp_dir, 'polygons.json')
    # Save dictionaries into json files
    with open(stations_json_path, 'w') as point_json_file:
        json.dump(data_stations, point_json_file, indent=4)
    with open(areas_json_path, 'w') as poly_json_file:
        json.dump(data_areas, poly_json_file, indent=4)

    You'll run the print() function on each path to confirm that the files were saved correctly.

  9. Run the following line:
    print(stations_json_path)
  10. Run the following line:
    print(areas_json_path)
    Note:

    Your file path will differ from those shown in the example image.

    Transcript with print() functions for each JSON file

    Each path goes to your Temp folder. The file names are points.json and polygons.json, as you specified when you ran the os.path.join() function. If you want, you can copy each path and open the files in any text editor.

    Now that the JSON files are created, you'll run the arcpy.conversion.JSONToFeatures() function to convert them into feature classes. This function requires the path to the JSON file and the name of the feature class to be created.

  11. Run the following lines:
    # Convert JSON files to features
    arcpy.conversion.JSONToFeatures(stations_json_path, 'alert_stations') 
    arcpy.conversion.JSONToFeatures(areas_json_path, 'alert_areas')

    The feature classes are saved in the default workspace, which you previously specified to be the Live geodatabase. Two layers, alert_stations and alert_areas, are added to the map.

    Map showing stations and areas

Change the symbology

You can also update layer symbology using Python. Although doing so is not required to create a feed routine, it's best practice to display data in a visually appealing and meaningful way. Additionally, if you later decide to change the symbology, you can quickly adjust the code.

You'll symbolize the layers based on the alert level, which ranges from 0 to 4 depending on recorded heat stress. Lower alert levels will be blue, while higher alert levels will be red. The layers contain text fields for alert level, but the fields must be numeric in order to give them a graduated symbology.

First, you'll create a new numeric field for alert level using the arcpy.management.AddField function. This function's arguments are, in order, the layer to which you want to add the field, the name of the field, the data type, and the field alias. You can also specify other settings, but these are the only ones you need for these fields.

  1. In the Python prompt, run the following lines:
    # Add alert_level field
    arcpy.management.AddField('alert_stations', 'alert_level', 'SHORT', field_alias='Alert Level')
    arcpy.management.AddField('alert_areas', 'alert_level', 'SHORT', field_alias='Alert Level')

    Both the alert_stations and alert_areas layers have an alert_level field added. Both fields have a data type of short integer (a numeric data type) and an alias of Alert Level.

    Next, you'll calculate the new fields with the arcpy.management.CalculateField function. This function also takes the layer name and field name as arguments, as well as an expression to calculate the field. Your expression will use the int() function to convert the alert text field's values to integers.
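    To see what the int() part of the expression does, here's a minimal example with a made-up value (the !alert! syntax itself is specific to the field calculator, which applies the expression row by row):

```python
# int() converts a numeric string to an integer, which is what the
# expression "int(!alert!)" does for each row's alert text value.
alert_text = '3'              # a value like those in the text field
alert_level = int(alert_text)
```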

  2. Run the following lines:
    # Calculate alert_level field
    arcpy.management.CalculateField('alert_stations', 'alert_level', "int(!alert!)")
    arcpy.management.CalculateField('alert_areas', 'alert_level', "int(!alert!)")

    The fields are calculated.

    Tip:

    If you want to confirm that the fields were calculated correctly, right-click either the alert_stations or alert_areas layer and choose Attribute Table. Scroll to the end of the table and confirm that an Alert Level field has been added with values ranging between 0 and 4.

    Next, you'll change the symbology for the alert_stations layer based on the new field. First, you'll create variables to represent the current project (p) and map (m). You'll also create variables for the alert_stations layer (points_lyr) and its symbology (points_sym).

  3. Run the following lines:
    # Symbology
    p = arcpy.mp.ArcGISProject("CURRENT")
    m = p.listMaps('Map')[0]
    
    # Points
    points_lyr = m.listLayers('alert_*')[0]
    points_sym = points_lyr.symbology
    Note:

    If you are using a localized version of ArcGIS Pro, your map will have a localized name. You'll need to adjust the code to replace Map with your map's name.

    Next, you'll update the renderer. Currently, the symbols are rendered with the SimpleRenderer type (single symbol). To symbolize features differently based on a field, you'll change the renderer to GraduatedSymbolsRenderer (graduated symbols).

    You'll also set the renderer to use the alert_level field as the classification field and sort the data into four classes (0 to 1, 1 to 2, 2 to 3, and 3 to 4). For each class, you'll set the size and color so that the sizes increase with higher alert levels and the colors change from blue to red.

  4. Run the following lines:
    # Always change to the GraduatedSymbolsRenderer from the SimpleRenderer
    if points_sym.renderer.type != 'SimpleRenderer':
        points_sym.updateRenderer('SimpleRenderer')
    points_sym.updateRenderer('GraduatedSymbolsRenderer')
    points_sym.renderer.classificationField = 'alert_level'
    points_sym.renderer.breakCount = 4
    
    points_labels = ['0 - 1', '> 1 - 2', '> 2 - 3', '> 3 - 4']
    points_upperBounds = [1, 2, 3, 4]
    points_sizes = [6, 16.50, 27, 37.50] 
    layers_colors = [{'RGB': [5, 113, 176, 40]}, {'RGB': [146, 197, 222, 50]},
                       {'RGB': [244, 165, 130, 50]}, {'RGB': [202, 0, 32, 30]}]

    Next, you'll create a for loop to apply the sizes and colors you specified to each class. Your classes are numbered 0 through 3. You'll use i (a common name for a loop index) as a placeholder variable to represent the numbered classes.

  5. Run the following lines:
    for i in range(4):
        item = points_sym.renderer.classBreaks[i]
        item.symbol.applySymbolFromGallery('Circle', 1)
        item.label = points_labels[i]
        item.upperBound = points_upperBounds[i]
        item.symbol.size = points_sizes[i]
        item.symbol.color = layers_colors[i]
    Note:

    If you are using a localized version of ArcGIS Pro, you'll need to adjust the code to use the localized version of the word Circle instead. You may need to make similar adjustments in later steps.

    This loop will cycle through the integers 0 through 3 (a range of 4). For the first integer, the first label, upper bound, size, and color in the lists you previously created will be used. For the second integer, the second series of values will be used, and so on until the end of the range.
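    The indexing pattern can be seen in this minimal sketch, which uses shortened, hypothetical versions of the lists:

```python
# Shortened, hypothetical versions of the label and size lists.
labels = ['0 - 1', '> 1 - 2']
sizes = [6, 16.5]

pairs = []
for i in range(2):
    # Index i selects the matching entry from each list.
    pairs.append((labels[i], sizes[i]))
```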

    Finally, you'll run a line to update the layer's symbology based on the lines you ran previously.

  6. Run the following lines:
    # Update
    points_lyr.symbology = points_sym

    The symbology is updated. (Your map may differ from the example image because the source data is more recent.)

    Map with updated point symbology

    Using similar code, you'll change the symbology of the alert_areas layer. First, you'll create variables for the layer (polygons_lyr) and its symbology (polygons_sym).

  7. Run the following lines:
    # Polygons
    polygons_lyr = m.listLayers('alert_*')[1]
    polygons_sym = polygons_lyr.symbology

    Next, you'll update the renderer and create four classes that match those you created for the points layer.

  8. Run the following lines:
    # Always change to the GraduatedSymbolsRenderer from the SimpleRenderer
    if polygons_sym.renderer.type != 'SimpleRenderer':
        polygons_sym.updateRenderer('SimpleRenderer') 
    polygons_sym.updateRenderer('GraduatedColorsRenderer')
    polygons_sym.renderer.classificationField = 'alert_level'
    polygons_sym.renderer.breakCount = 4
    
    polygons_labels = ['0 - 1', '> 1 - 2', '> 2 - 3', '> 3 - 4']
    polygons_upperBounds = [1, 2, 3, 4]
    layers_colors = [{'RGB': [5, 113, 176, 40]}, {'RGB': [146, 197, 222, 50]},
                       {'RGB': [244, 165, 130, 50]}, {'RGB': [202, 0, 32, 30]}]

    Last, you'll create a loop to change the labels, size, and color of each class and update the layer symbology.

  9. Run the following lines:
    for i in range(4):
        item = polygons_sym.renderer.classBreaks[i]
        item.label = polygons_labels[i]
        item.upperBound = polygons_upperBounds[i]
        item.symbol.color = layers_colors[i]
    
    # Update
    polygons_lyr.symbology = polygons_sym

    The symbology is updated. (You may need to zoom in to see it more clearly.)

    Map with updated polygon symbology

  10. Save the project.

Publish the layers

Next, you'll publish the layers to ArcGIS Online or an ArcGIS Enterprise portal.

  1. Confirm that you are signed in to an ArcGIS Online account (or ArcGIS Enterprise portal using a named user account).
    Note:

    If you are signed in, the name of your account appears in the upper right corner of ArcGIS Pro.

  2. On the ribbon, click the Share tab. In the Share As group, click Web Layer.

    Web Layer button in Share As group

  3. In the Share As Web Layer pane, set the following parameters:
    • For Name, type Coral Reef Watch and add your name or initials to the end.
    • For Summary, type NOAA's latest data for risk of coral bleaching.
    • For Tags, type NOAA, Coral Bleaching, and Alert Levels, pressing Enter between each tag.
    • For Layer Type, confirm that Feature is chosen.

    Item Details and Layer Type parameters

  4. Click Analyze.

    The layer is analyzed to ensure there are no errors that would prevent publishing. The analysis returns zero errors but a few warnings related to the feature template and data source. For the purposes of this exercise, you can ignore these warnings.

  5. Click Publish.

    The tool runs. A message appears at the bottom of the pane, confirming that the web layer was published.

  6. Save the project.

    Next, you'll confirm that the layer was published successfully by adding it to a web map.

  7. Sign in to your ArcGIS organizational account or ArcGIS Enterprise portal.
  8. On the ribbon, click Content.

    Content option on ribbon

    Two new items have been added to your Content page: the Coral Reef Watch feature layer and service definition.

  9. For the Coral Reef Watch feature layer, click the options button (the three horizontal dots) and choose Open in Map Viewer.

    Open in Map Viewer option

    A web map opens with the layers you created.

In this lesson, you used Python to retrieve the most recent data from the Coral Reef Watch program, convert it into two feature classes, and symbolize it. Then, you published the layers as a feature service. In the next lesson, you'll develop a feed routine to automatically download the latest Coral Reef Watch data once it becomes available.


Create a feed routine

In the previous lesson, you retrieved, mapped, and published the latest data from NOAA's Coral Reef Watch program. This data shows the coral bleaching information at a fixed moment in time, but the data is frequently updated. How can you quickly update your map every time NOAA updates their data?

In this lesson, you'll develop a feed routine. Feed routines automatically download content, process it, and publish a dataset. Your feed routine will include the Python workflow you used in the previous lesson.

Create a stand-alone script

Rather than run your script in ArcGIS Pro like you did in the previous lesson, you'll create a stand-alone script in a text editor that you can save and run with a single command. This script will contain your feed routine and follow the Aggregated Live Feed (ALF) methodology guidelines.

The ALF methodology is a set of guidelines, tools, and functions for deploying a Live Feed routine in a production environment. Its tools automate deployment of the routine with minimal supervision. The ALF methodology encapsulates the steps of a Live Feed routine and adds components that improve the general workflow: they record every step in a log, unzip files efficiently, and send automatic emails if something fails.

Note:

To learn more about the ALF methodology, you can join the Aggregated Live Feed Community group. You can also download and review the Aggregated Live Feed Methodologies document.

  1. Open a new file in a plain text editor.
    Note:

    If you don't have a plain text editor, you can download Notepad++ for free. You can also use Python integrated development environments (IDEs) such as PyCharm, PyScripter, or Spyder. The example images in this lesson will use Notepad++.

    First, you'll save the file as a Python script file. Some text editors, including Notepad++, highlight syntax depending on the coding language being used, which can be helpful when writing code.

  2. On the ribbon of your text editor, click File and choose Save or Save As. Save the file as coral_reef_exercise.py in a location of your choice.
    Note:

    The process for saving the file as a Python file (.py) may differ depending on what program you're using.

    The first thing you did when mapping the most recent data in the previous lesson was import the sys module, so you'll add a statement to do that to the beginning of your feed routine.

  3. In the text editor, create the line import sys.

    You'll also define a new function named feedRoutine that takes the arguments url (the URL to the data) and workGDB (the path to the file geodatabase). Later, you'll fill in this function so that it runs the steps to retrieve and map the data, but for now you'll use the placeholder statement pass to stand in for those steps.

    Also, you'll add the if __name__ == "__main__": statement, which allows the script to be run as a stand-alone routine (for instance, by running it in the command prompt). In this case, the stand-alone script will run the feedRoutine function, passing the url and workGDB arguments to it from the command prompt via the sys.argv[1:] statement.
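    To see how sys.argv[1:] supplies the arguments, here is a stand-alone sketch that simulates a command-line call (the URL and geodatabase path are placeholders, not the lesson's actual values):

```python
import sys

# Simulate running: python coral_reef_exercise.py <url> <workGDB>
# sys.argv[0] is the script name; sys.argv[1:] holds the arguments.
sys.argv = ['coral_reef_exercise.py',
            'https://example.com/data.json',  # placeholder URL
            r'C:\Temp\Live.gdb']              # placeholder geodatabase

[url, workGDB] = sys.argv[1:]
```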

  4. Press Enter twice. Copy and paste the following lines starting on line 3:
    def feedRoutine (url, workGDB):
        pass
    
    if __name__ == "__main__":
        [url, workGDB] = sys.argv[1:]
        feedRoutine (url, workGDB)
    Note:

    Indentations are important for ensuring the script runs correctly. Be sure to maintain any indentations in the script. Do not add unnecessary indentations.

    Feed routine with initial statements

    You imported several other modules during the workflow. Rather than run a separate import statement for each module, you'll have the script import all the necessary modules in a single statement.

  5. Modify line 1 (import sys) to read import sys, arcpy, os, tempfile, json. Press Enter and create the line from urllib import request on line 2.

    The full lines read as follows:

    import sys, arcpy, os, tempfile, json
    from urllib import request

    Feed routine with all modules

    Next, you'll begin to replace the pass placeholder with the steps necessary to retrieve data and create layers. First, you'll set workGDB as the default workspace (arcpy.env.workspace) and create the geodatabase using the arcpy.management.CreateFileGDB function.

  6. Replace line 5 (pass) with the following lines:
    # workGDB and default workspace
        arcpy.env.workspace = workGDB 
        arcpy.management.CreateFileGDB(os.path.dirname(workGDB), os.path.basename(workGDB))
    Note:

    Depending on your window size, some of the longer lines may wrap around, as shown in the following image.

    Feed routine with default geodatabase defined

    Next, you'll add placeholder code for a deployment logic function. You don't need this function right now, but you will use it in later lessons, so it's a good idea to add a place for it in your script. You'll also add a placeholder for the code from the previous lesson.

  7. After line 7, press Enter twice and remove any indentations. Starting on line 9, copy and paste the following lines (including the indentations):
    ### Placeholder for retrieving and mapping data ###
    
        # Deployment Logic
        deployLogic()
    
        # Return
        return True
    
    def deployLogic():
        pass

    Feed routine with placeholders for deployment logic

    Note:

    It is important that the indentations use spaces instead of tabs.

    Next, you'll replace the placeholder for retrieving and mapping data with the code you ran in the previous lesson.

    This code will not include the code for changing symbology. The symbology for the layers is already saved in both your ArcGIS Pro project and the web map, so you don't need to change the symbology again. The code for adding and calculating the alert_level fields will remain, because these fields are needed for the existing symbology.

    Also, this code will include several print() functions that include statements to keep you updated on which lines are being run.

  8. Replace line 9 (### Placeholder for retrieving and mapping data ###) with the following lines:
    # Download and split json file
        print("Downloading data...")
        temp_dir = tempfile.mkdtemp()
        filename = os.path.join(temp_dir, 'latest_data.json')
        response = request.urlretrieve(url, filename)
        with open(filename) as json_file:
            data_raw = json.load(json_file)
            data_stations = dict(type=data_raw['type'], features=[])
            data_areas = dict(type=data_raw['type'], features=[])
        for feat in data_raw['features']:
            if feat['geometry']['type'] == 'Point':
                data_stations['features'].append(feat)
            else:
                data_areas['features'].append(feat)
        # Filenames of temp json files
        stations_json_path = os.path.join(temp_dir, 'points.json')
        areas_json_path = os.path.join(temp_dir, 'polygons.json')
        # Save dictionaries into json files
        with open(stations_json_path, 'w') as point_json_file:
            json.dump(data_stations, point_json_file, indent=4)
        with open(areas_json_path, 'w') as poly_json_file:
            json.dump(data_areas, poly_json_file, indent=4)
        # Convert json files to features
        print("Creating feature classes...")
        arcpy.conversion.JSONToFeatures(stations_json_path, 'alert_stations') 
        arcpy.conversion.JSONToFeatures(areas_json_path, 'alert_areas')
        # Add 'alert_level' field
        arcpy.management.AddField('alert_stations', 'alert_level', 'SHORT', field_alias='Alert Level')
        arcpy.management.AddField('alert_areas', 'alert_level', 'SHORT', field_alias='Alert Level')
        # Calculate 'alert_level' field
        arcpy.management.CalculateField('alert_stations', 'alert_level', "int(!alert!)")
        arcpy.management.CalculateField('alert_areas', 'alert_level', "int(!alert!)")

    Next, you'll add three more print() functions to other parts of the script, including one that informs you when the script is completed.

  9. After line 5 (# workGDB and default workspace), press Enter. On line 6, create the following line with an indentation of four spaces (to align with the next line):
    print("Creating workGDB...")

    Feed routine with print() function added to workGDB block

  10. After line 43 (# Deployment Logic), press Enter. On line 44, create the following line with an indentation:
    print("Deploying...")
  11. After line 47 (# Return), press Enter. On line 48, create the following line with an indentation:
    print("Done!")

    Depending on how you created the script, it's possible that you still have tab indentations.

  12. Check your entire script to ensure that all indentations are made with spaces instead of tabs.
    Tip:

    If you're using Notepad++, you can automatically convert tabs to spaces. On the ribbon, click Edit, point to Blank Operations, and choose TAB to Space.
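
If you prefer to check from Python instead of scanning by eye, a short sketch like the following can report which lines of a file contain tab characters. The function name and the demo file are illustrative, not part of the lesson's script; in practice you would pass the path to your own coral_reef_exercise.py.

```python
import tempfile, os

def find_tab_lines(path):
    """Return the 1-based line numbers that contain a tab character."""
    with open(path) as f:
        return [i for i, line in enumerate(f, start=1) if "\t" in line]

# Quick self-check with a throwaway file; in practice, pass the
# path to your coral_reef_exercise.py instead.
demo = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(demo, "w") as f:
    f.write("def feedRoutine(url, workGDB):\n\tpass\n")
print(find_tab_lines(demo))  # [2] -- the tab is on line 2
```

An empty list means the file is free of tabs and safe to run.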

  13. Save the script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

    Next, you'll test the script.

  14. Open the Windows Start menu. Search for and open the Python Command Prompt.

    First, you'll browse to the directory where you saved your coral_reef_exercise.py file.

    Note:

    For the example images, the coral_reef_exercise.py file was saved in the Documents folder (C:\Users\Documents). You can copy the path to your file by browsing to it in a file browser, such as Windows Explorer, and copying the path from the address bar at the top of the window.

  15. In the Python Command Prompt, type cd and press the spacebar. Paste the path to the directory where you saved your Python file. If the path contains spaces, add quotation marks around the path.

    Python Command Prompt with command to change directory

  16. Press Enter.

    The directory is changed. Next, you'll run the script. You'll use the python command and include the name of the file, the URL to the website with the coral reef data, and the path and name of the geodatabase.

  17. Run the following command:

    python coral_reef_exercise.py https://coralreefwatch.noaa.gov/product/vs/vs_polygons.json C:\Temp\Work.gdb

    The command takes a few seconds to run. As it runs, the print() functions keep you updated on its progress.

    Python Command Prompt with stand-alone script

    The layers in the Work geodatabase are updated with the latest data (which is likely the same as the data you used in the previous lesson). For now, the script does not update the layers that appear on either the map in your ArcGIS Pro project or your web map. You'll add that functionality later using the deployLogic() function.

Add advanced script functionality

Your script works, but you can improve it. First, you'll adjust the script so that it creates a geodatabase only if one does not already exist.

You'll create an if statement using the arcpy.Exists function to check if the workspace exists. If so, you'll delete any existing feature classes that begin with alert_ so that they can be replaced with new feature classes. You can represent both alert layers with the string alert_*, with the asterisk standing in for any text.

  1. If necessary, open your coral_reef_exercise script in a text editor.
  2. Replace lines 5 through 8 (the lines for creating the default workspace) with the following lines:
    # Create workGDB and default workspace
        print("Starting workGDB...")
        arcpy.env.workspace = workGDB
        if arcpy.Exists(arcpy.env.workspace):
            for feat in arcpy.ListFeatureClasses("alert_*"):
                arcpy.management.Delete(feat)
        else:
            arcpy.management.CreateFileGDB(os.path.dirname(workGDB), os.path.basename(workGDB))

    Advanced script to check for existing workGDB

    If the script is run on a computer without internet access, it will fail to download the online JSON file. To catch this common error, you'll handle the standard URLError exception and raise a new exception that informs the user that the URL is not available.

  3. Replace line 18 (response = request.urlretrieve(url, filename)) with the following lines (ensure that the first line is indented):
    try:
            response = request.urlretrieve(url, filename)
        except URLError:
            raise Exception("{0} not available. Check internet connection or url address".format(url))

    Advanced script to raise URLError

    Next, you'll record the running steps in a log file. You already print messages about each part of the script in the command prompt, so you'll add a logging.info() function after each print() function. This function logs messages that might be useful for debugging if errors arise with the script. The messages you log will include the date and time the message was logged.

    First, you'll modify your script to import the logging and datetime modules. You'll also import the URLError exception you used previously.

  4. Replace lines 1 and 2 (the import lines) with the following lines:
    import sys, os, tempfile, json, logging, arcpy
    import datetime as dt
    from urllib import request
    from urllib.error import URLError

    Next, you'll use the logging.basicConfig() function to configure the log file where the messages will be logged. You'll also set a log_format variable that defines how date and time will be displayed.

  5. After line 6 (def feedRoutine), add the following lines:
    # Log file
        logging.basicConfig(filename="coral_reef_exercise.log", level=logging.INFO)
        log_format = "%Y-%m-%d %H:%M:%S"

    Advanced script for configuring the log

    Next, you'll add logging.info() functions that add messages to the log. These functions will record the date and time with the dt.datetime.now() function and format the date and time using the log_format variable you previously created. You'll add these functions after every print() function.

  6. After line 11 (print("Starting workGDB...")), add the following line (ensure that it is indented):
    logging.info("Starting workGDB... {0}".format(dt.datetime.now().strftime(log_format)))
  7. After line 21 (print("Downloading data...")), add the following line (ensure that it is indented):
    logging.info("Downloading data... {0}".format(dt.datetime.now().strftime(log_format)))
  8. After line 47 (print("Creating feature classes...")), add the following line (ensure that it is indented):
    logging.info("Creating feature classes... {0}".format(dt.datetime.now().strftime(log_format)))
  9. After line 59 (print("Deploying...")), add the following line (ensure that it is indented):
    logging.info("Deploying... {0}".format(dt.datetime.now().strftime(log_format)))
  10. After line 64 (print("Done!")), add the following line (ensure that it is indented):
    logging.info("Done! {0}".format(dt.datetime.now().strftime(log_format)))
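
The logging pattern used in these steps can be exercised in isolation with the following sketch. It uses an explicit FileHandler on a named logger (so it can run repeatedly in one session) rather than the script's logging.basicConfig(), but the messages it writes have the same form; the logger name and file location are illustrative.

```python
import logging, os, tempfile
import datetime as dt

log_format = "%Y-%m-%d %H:%M:%S"
log_file = os.path.join(tempfile.mkdtemp(), "coral_reef_exercise.log")

# A dedicated logger with its own file handler; in the feed routine,
# logging.basicConfig(filename=..., level=logging.INFO) configures the
# root logger the same way.
logger = logging.getLogger("coral_reef_demo")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_file)
logger.addHandler(handler)

logger.info("Starting workGDB... {0}".format(dt.datetime.now().strftime(log_format)))
logger.info("Done! {0}".format(dt.datetime.now().strftime(log_format)))
handler.close()

with open(log_file) as f:
    print(f.read())  # two timestamped messages
```

Each log line carries the formatted date and time, which is what makes the log useful when tracing a failed scheduled run after the fact.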

    You'll also log the URLError exception using the logging.exception() function.

  11. After line 27 (except URLError:), add the following lines (ensure that the first line is indented four spaces relative to except on the line above):
    logging.exception("Failed on: request.urlretrieve(url, filename) {0}".format(
                              dt.datetime.now().strftime(log_format)))

    Last, you'll close the log file at the end of the script with the logging.shutdown() function.

  12. At the end of line 63, press Enter twice. On line 65, add the following lines:
    # Close Log File
        logging.shutdown()

    Advanced script to close log file

  13. Ensure that your script does not include any tab indentations. Save your script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

In this lesson, you created a feed routine that can be run as a stand-alone script using the command prompt. You also added some advanced functionality. In the next lesson, you'll adapt the feed routine to update the local feature classes in your ArcGIS Pro project.


Update local feature classes

In the previous lesson, you created a feed routine. When this feed routine is run, it downloads the latest NOAA coral bleaching data and creates two feature classes, one for point data and one for polygon data. These feature classes are located in the Work geodatabase. However, the script does not update the layers in your ArcGIS Pro project, which are located in the Live geodatabase.

In this lesson, you'll adapt your feed routine so that it automatically updates the alert_stations and alert_areas layers in your ArcGIS Pro project.

Define the deployment logic function

Your feed routine contains placeholder script for a deployment logic function named deployLogic(). In the ALF methodology, the deployment logic process is the part of the feed routine that takes the latest information retrieved from the internet (in your case, the data located in the Work geodatabase) and overwrites the live data (the feature classes located in the Live geodatabase).

  1. Create a copy of your coral_reef_exercise.py script named coral_reef_exercise_local.py. Open the copy in a text editor or Python IDE.

    Next, you'll modify the script so that the shutil module is imported. This module contains several advanced file operations that will be necessary for defining the deployment logic function.

  2. In the coral_reef_exercise_local.py script, on line 1, add shutil to the list of modules.

    Import function with shutil module

    Next, you'll add a third argument to the feed routine named liveGDB, which will represent the Live geodatabase where the layers in your project are located.

  3. On line 6, add liveGDB to the list of feedRoutine arguments.

    Feed routine with liveGDB argument

    You'll also add this argument to the sys.argv[1:] command at the end of the script. That way, users can supply the path and name of the Live geodatabase when running the script in the command prompt.

  4. On line 77, add liveGDB to the bracketed list. On line 78, add liveGDB to the list of feedRoutine arguments.

    Feed routine with additional liveGDB arguments
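
With the extra argument in place, the argument wiring at the tail of the script behaves like the following sketch. Here feedRoutine is a stub that just returns its inputs, and a sample list stands in for sys.argv[1:], which supplies the real values at the command prompt.

```python
def feedRoutine(url, workGDB, liveGDB):
    # Stub standing in for the real feed routine; it returns its inputs
    # so the argument wiring can be checked.
    return url, workGDB, liveGDB

# At the end of the real script these values come from sys.argv[1:],
# e.g.: python coral_reef_exercise_local.py <url> C:\Temp\Work.gdb C:\Temp\Live.gdb
argv_tail = ["https://coralreefwatch.noaa.gov/product/vs/vs_polygons.json",
             r"C:\Temp\Work.gdb", r"C:\Temp\Live.gdb"]
[url, workGDB, liveGDB] = argv_tail
print(feedRoutine(url, workGDB, liveGDB))
```

Because the unpacking uses exactly three names, running the script with a missing or extra argument raises a ValueError, which is a quick way to catch a malformed command.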

    The deployLogic() function will take two arguments, workGDB and liveGDB, so you'll add these arguments where the function is called and where it is defined.

  5. On lines 63 and 73, replace deployLogic() with deployLogic(workGDB, liveGDB).

    Feed routine with deployLogic() arguments

    Next, you'll define the deployLogic() function. You'll use a for loop to copy all of the elements in the workGDB argument and use them to replace the elements in the liveGDB argument. You'll use the os.walk() function to list the files and the shutil.copy2() function to copy and replace them. In accordance with ALF methodology, you'll also have the script ignore any .lock files.

  6. On line 74, replace pass (the placeholder) with the following lines (ensure that the first line is indented):
    for root, dirs, files in os.walk(workGDB, topdown=False):
        files = [f for f in files if '.lock' not in f]
        for f in files:
            shutil.copy2(os.path.join(workGDB, f), os.path.join(liveGDB, f))

    Script to define the deployLogic() function

    Note:

    This feed routine will replace the feature classes in your ArcGIS Pro project automatically, but some precautions are necessary. The new feature classes must be consistent with the previous data structure; missing fields or different names can break the map project.

  7. Ensure that your script does not include any tab indentations. Save your script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

Run the stand-alone script

Next, you'll run your adapted feed routine and update your ArcGIS Pro project.

It's likely that NOAA has not updated its data since you mapped the latest data in your project. If you ran the script now, you probably would not see any changes in the data. To test that your script functions correctly, you'll update the data using a historic file hosted by Learn ArcGIS.

  1. If necessary, open your Coral Bleaching project in ArcGIS Pro.
  2. If necessary, open the Python Command Prompt and use the cd command to browse to the directory where your coral_reef_exercise_local.py file is saved.
  3. In the command prompt, run the following command:

    python coral_reef_exercise_local.py https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json C:\Temp\Work.gdb C:\Temp\Live.gdb

    The command runs.

    Python Command Prompt with command run

    Note:

    Once the NOAA Coral Reef Watch program updates its data, you can replace the URL to the historic file (https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json) with the URL to NOAA's data (https://coralreefwatch.noaa.gov/product/vs/vs_polygons.json).

    Although the prompt states that the command was run successfully, no changes have occurred to the map in ArcGIS Pro. You'll restart ArcGIS Pro and refresh the map display to visualize the changes.

  4. Close ArcGIS Pro (save if prompted). Reopen your Coral Bleaching project.

    Restarting ArcGIS Pro may have also refreshed the map display. If it didn't, you can refresh the display manually.

  5. If necessary, in the lower right corner of the Map view, click the Refresh button.

    Refresh button

    The map refreshes and displays the historic data (from February 19, 2019).

    Map with historic data

    You can also verify that the attribute data was updated.

  6. On the map, click any station or area feature.

    The feature's pop-up opens. The date field reads 2019-02-19, the date of the historic data.

    Pop-up with historic date

  7. Close the pop-up.

In this lesson, you developed and ran a feed routine to update a local dataset. You tested the feed routine by updating the map with historic data, but by replacing the URL used in the Python command, you can apply the same process to NOAA's most recent data. In the next lesson, you'll develop and run a similar feed routine to update an online feature service.


Update an online feature service

In the previous lesson, you developed and ran a feed routine to update local feature classes in your ArcGIS Pro project. In this lesson, you'll follow a similar process to develop and run a feed routine to update an online feature service in a web map.

Acquire the service definition and ID

When updating a local feature class, you used the Live geodatabase as the location to which the new data would be copied. Online feature services aren't stored in geodatabases, so instead you'll need the service definition file and item ID to query and replace the feature service using ArcGIS API for Python.

In the first lesson, you published a feature service with the most recent data. You'll use this feature service as the target for updates. First, you'll download its service definition file.

  1. If necessary, sign in to your ArcGIS organizational account or ArcGIS Enterprise portal.
  2. Go to your Content page and access the folder where your Coral Reef Watch feature layer and service definition file are located.
    Note:

    The service definition file was created when you published the layer. Because you'll use this file to update the feature service, any changes made to the feature service after it was published (such as metadata or symbology changes) will be lost when the feature service is updated. Any web map or app that includes the feature service will also be affected.

    The recommended practice to avoid losing changes to the feature service is to create a view layer. Any map or web app using the feature layer can point to the view layer instead of the feature service, and any changes to the view layer are preserved when the feature service is updated. To create a view layer, open the details page for the feature layer and click Create View Layer. You can find more information on the documentation page.

  3. For the Coral Reef Watch service definition file, click the options button (the three dots) and choose Download.

    Download service definition file

    The service definition file (.sd) is downloaded to your computer.

  4. Copy the service definition file into the Temp folder on your drive C.

    Next, you'll copy the layer's item ID.

  5. For the Coral Reef Watch feature layer, click the options button and choose View item details.

    The details page for the feature layer opens. The ID is located in the URL.

  6. In the URL for the details page, copy the string of numbers and letters that follows id= (in the example image, this string is 2dcf249f5cd54a609d51acba6e0ba029).
    Note:

    Your feature layer's item ID will be different.

    Item ID for feature layer

  7. Paste the ID in an empty text file or somewhere that you will be able to easily access.
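
If you prefer, the item ID can also be pulled out of the details-page URL programmatically with the standard urllib.parse module. The sketch below is illustrative; the URL is an example built around the ID from the example image, and your own URL and ID will differ.

```python
from urllib.parse import urlparse, parse_qs

def item_id_from_url(url):
    """Return the value of the id= query parameter in a details-page URL."""
    return parse_qs(urlparse(url).query)["id"][0]

# Example details-page URL (the ID matches the example image).
url = "https://www.arcgis.com/home/item.html?id=2dcf249f5cd54a609d51acba6e0ba029"
print(item_id_from_url(url))  # 2dcf249f5cd54a609d51acba6e0ba029
```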

Define the deployment logic function

Next, you'll make another copy of your original feed routine. Then, you'll define the deployLogic() function using a script that replaces an online feature service with downloaded data from the Work geodatabase.

  1. Create a copy of your original coral_reef_exercise.py script named coral_reef_exercise_online.py. Open the copy in a text editor or Python IDE.

    For the script you create, you'll need the fnmatch, shutil, subprocess, and arcgis modules, as well as the GIS submodule.

  2. In the coral_reef_exercise_online.py script, on line 1, add fnmatch, shutil, subprocess, and arcgis to the list of modules. Press Enter and add the line from arcgis.gis import GIS to line 2.

    The full lines read as follows:

    import sys, os, tempfile, json, logging, arcpy, fnmatch, shutil, subprocess, arcgis
    from arcgis.gis import GIS

    Feed routine with lines to import modules and submodules

    When you defined the deployment logic function to update a local feature class, you added an argument to the feed routine named liveGDB, which you could use to supply the path to the necessary geodatabase when you ran the script. For this feed routine, you'll add parameters for the item ID of the feature service, the service definition file, and the service name.

  3. On line 7, add the arguments itemid, original_sd_file, and service_name to the list of feedRoutine arguments.

    The full line reads as follows:

    def feedRoutine (url, workGDB, itemid, original_sd_file, service_name):

    Arguments added to the feedRoutine function

    You'll also add these parameters to the sys.argv[1:] command at the end of the script.

  4. On lines 78 and 79, add itemid, original_sd_file, and service_name to both lists of arguments.

    The full lines read as follows:

    [url, workGDB, itemid, original_sd_file, service_name] = sys.argv[1:]
        feedRoutine (url, workGDB, itemid, original_sd_file, service_name)

    Arguments added to the sys.argv[1:] command

    You'll also add these arguments (and the workGDB argument) to the deployLogic() function where it is defined and where it is called.

  5. On lines 64 and 74, replace deployLogic() with deployLogic(workGDB, itemid, original_sd_file, service_name).

    Arguments for the deployLogic function where it is defined and called

    Next, you'll define the deployLogic() function. This function will accomplish several tasks. First, it'll get the feature service item from ArcGIS Online or your ArcGIS Enterprise portal (represented by the itemid argument) using the gis.content.get() function.

    You'll also create a gis variable defined by the GIS() function. This function's arguments will include the URL of your organization's portal (for ArcGIS Online, the URL will be https://arcgis.com), your account user name, and your account password.

  6. On line 75, replace the placeholder script (pass) with the following lines:
    # Get item from ArcGIS Online
        gis = GIS(url='https://arcgis.com', username='your_username', password='your_password')
        item = gis.content.get(itemid)
        sd_file_name = os.path.basename(original_sd_file)
        if sd_file_name != item.related_items("Service2Data")[0].name:
            raise Exception('Erroneous itemid ({0}), service name, or original sd file'.format(itemid))
    Note:

    If you're using an ArcGIS Online account, the url parameter is optional.

  7. On line 76, replace the username and password parameters with your ArcGIS account user name and password. If your account is for an ArcGIS Enterprise portal, replace the url parameter with the URL to your portal.
    Caution:

    Because this script contains your password, be careful about sharing it with others. To avoid storing a plain-text password, you can use more advanced functionality, such as a stored credentials profile in ArcGIS API for Python (for example, GIS(profile='my_profile')).

    Next, the deployLogic() function will unpack the contents of the service definition file (original_sd_file). It will do this using 7-Zip, a free file archiver for Windows. Before you add the script that will use 7-Zip to unpack the file, you'll ensure that it's installed on your computer and add it to the Path environment variable on Windows. By adding it to this environment variable, you'll be able to call the program from your script.

  8. If necessary, download the appropriate version of 7-Zip for your computer and install it.

    The process of editing an environment variable differs depending on your operating system, but it usually can be done through the Environment Variables window.

  9. On your computer, open Control Panel. Click System and Security, click System, and click Advanced system settings. In the System Properties window, click Environment Variables. (The exact path may differ depending on your operating system.)
  10. In the Environment Variables window, for System variables, select the Path variable and click Edit.

    Depending on your operating system, the Edit environment variable window or the Edit System Variable window opens. The process for adding a new variable differs for each window.

  11. If the Edit environment variable window opens, click New and add C:\Program Files\7-Zip as a new variable. If the Edit System Variable window opens, scroll to the end of the Variable value parameter and paste ;C:\Program Files\7-Zip to the end (making sure not to delete any existing text).
  12. Click OK. In the Environment Variables window, click OK.
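
To confirm the change worked, you can check from Python whether an executable is discoverable; shutil.which searches the same Path variable your script will rely on. (You'll need to open a new Python session after editing the variable, because a running process doesn't pick up environment changes.) The helper name below is illustrative.

```python
import shutil

def on_path(executable):
    """True if the executable can be found via the PATH environment variable."""
    return shutil.which(executable) is not None

# Prints True once C:\Program Files\7-Zip is on PATH and a new
# session has been started; False otherwise.
print(on_path("7z"))
```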

    Next, you'll create the script. It will include an error message that is raised if 7-Zip cannot be found in the Path environment variable.

  13. In the coral_reef_exercise_online.py script, after line 80, add the following lines:
    # Unpack original_sd_file using 7-zip
        path_7z = fnmatch.filter(os.environ['path'].split(';'), '*7-Zip')
        temp_dir = tempfile.mkdtemp()
        if len(path_7z):
            exe_7z = os.path.join(path_7z[0], '7z.exe')
            call_unzip = '{0} x {1} -o{2}'.format(exe_7z, original_sd_file, temp_dir)
        else:
            raise Exception('7-Zip could not be found in the PATH environment variable')
        subprocess.call(call_unzip)

    After your script acquires the feature service and unzips its service definition file, you'll replace its data with the data downloaded to Work.gdb. You'll use Live.gdb as an intermediary folder that can be zipped into a new service definition file.

    Your script will use the shutil.rmtree() function to delete Live.gdb and the os.mkdir() function to create it again, removing any existing contents in the geodatabase.

  14. After line 89, add the following lines:
    # Replace Live.gdb content
        liveGDB = os.path.join(temp_dir, 'p20', 'live.gdb')
        shutil.rmtree(liveGDB)
        os.mkdir(liveGDB)
        for root, dirs, files in os.walk(workGDB):
            files = [f for f in files if '.lock' not in f]
            for f in files:
                shutil.copy2(os.path.join(workGDB, f), os.path.join(liveGDB, f))

    Next, the script will zip Live.gdb into a new service definition file, referred to as updated_sd.

  15. After line 97, add the following lines:
    # Zip file
        os.chdir(temp_dir)
        updated_sd = os.path.join(temp_dir, sd_file_name)
        call_zip = '{0} a {1} -m1=LZMA'.format(exe_7z, updated_sd)
        subprocess.call(call_zip)

    Last, the script will use the item's manager method (arcgis.features.FeatureLayerCollection.fromitem(item).manager) to overwrite the feature service with the new data.

  16. After line 102, add the following lines:
    # Replace file
        manager = arcgis.features.FeatureLayerCollection.fromitem(item).manager
        status = manager.overwrite(updated_sd)
        # Return
        return True
  17. Ensure that your script does not include any tab indentations. Save your script.
    Note:

    If you want to confirm that you created your script correctly, you can compare it to an example script.

Run the stand-alone script

Next, you'll run the feed routine in the Python Command Prompt and update the Coral Reef Watch online feature service. Like in the previous lesson, for the purposes of this exercise you will update the feature service using historic data.

  1. If necessary, in ArcGIS Online or your ArcGIS Enterprise portal, open the Coral Reef Watch feature layer in Map Viewer.
  2. If necessary, open the Python Command Prompt and use the cd command to browse to the directory where your coral_reef_exercise_online.py file is saved.
    Note:

    If you added 7-Zip to the Path environment variable, you may need to restart the Python Command Prompt.

  3. Paste the following command (do not run it yet):

    python coral_reef_exercise_online.py https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json C:\Temp\Work.gdb

    Before you run the command, you'll add the remaining arguments it requires: the item ID, the original service definition file, and the service name.

  4. At the end of the command, press the spacebar and paste the item ID for the feature service.
    Note:

    The item ID is a string of letters and numbers located in the feature service's URL, such as 2dcf249f5cd54a609d51acba6e0ba029.

    You previously downloaded the original service definition file and copied it to your Temp folder. The service name is Coral_Reef_Watch, with your name or initials at the end.

  5. Press the spacebar and paste C:\Temp\Coral_Reef_Watch.sd Coral_Reef_Watch. Change the service definition file name and service name so that they match your own service definition.

    Your completed command is formatted similarly to the following command (but with a different item ID, service definition file name, and service name):

    python coral_reef_exercise_online.py https://downloads.esri.com/LearnArcGIS/update-real-time-data-with-python/vs_polygons.json C:\Temp\Work.gdb 2dcf249f5cd54a609d51acba6e0ba029 C:\Temp\Coral_Reef_Watch.sd Coral_Reef_Watch

    Example command in Python Command Prompt

  6. Run the command.

    The command runs. The command prompt returns a lot of information about 7-Zip and the data being extracted. When the command finishes, the command prompt returns the line Done!

    Command prompt returning Done

    Next, you'll visualize the changes on your web map.

  7. Refresh your web map using your browser's Refresh or Reload button.

    The map is updated with data from February 19, 2019.

    Final map with February 19, 2019, data

  8. Click any station or area to open its pop-up.

    The date field reads 2019-02-19, confirming that the data has been updated correctly.

    Date field in pop-up

  9. Close the pop-up. Close the web map without saving. (Alternatively, save the web map if you like.)

In this lesson, you created feed routines to automatically update local and web layers with the latest NOAA data. The lesson also introduced basic and advanced Python topics and used the ALF methodology to develop and implement a feed routine. For the purposes of the lesson, you used historic data to update the layers, but you can run the same script using the URL to the NOAA data.

You can run this script frequently to keep your layers up to date. By setting up a task in Windows, you can have the script run automatically at a specified interval, making it deployable with real-time data.
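
One simple alternative to a Windows scheduled task is a small loop around the feed routine. Everything below is a hedged sketch: feedRoutine is stubbed out, the function name is illustrative, and the interval is shortened so the example finishes instantly.

```python
import time

def feedRoutine(url, workGDB):
    # Stub standing in for the real feed routine.
    print("Updating {0}...".format(workGDB))

def run_on_interval(interval_seconds, max_runs):
    """Call feedRoutine every interval_seconds, max_runs times."""
    runs = 0
    while runs < max_runs:
        feedRoutine("https://coralreefwatch.noaa.gov/product/vs/vs_polygons.json",
                    r"C:\Temp\Work.gdb")
        runs += 1
        if runs < max_runs:
            time.sleep(interval_seconds)
    return runs

print(run_on_interval(0, 2))  # 2
```

For unattended deployment, a Windows scheduled task that invokes the python command shown earlier is more robust than a long-running loop, because it survives reboots and doesn't depend on a console staying open.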

You can find more lessons in the Learn ArcGIS Lesson Gallery.