Create a capture session

A capture session combines all the information captured in a single photo flight that is required for the alignment and reconstruction steps. Capture sessions can be built for imagery captured with nadir-only sensors or with multihead sensor systems, along with the corresponding positioning information for each image.

In a nadir sensor system, the sensor points straight down and captures imagery of the surface under it. The images collected this way are referred to as nadir images.

The following image is an example of nadir imagery:

Example of nadir imagery

The following image is a diagram of a nadir camera cone and image footprint:

Nadir camera cone and image footprint diagram

In a multihead sensor system, sensors point in multiple directions, at angles forward and backward and to the sides. The images collected at an angle are referred to as oblique images. Multihead systems may also include a sensor to collect nadir images.

The following image is an example of oblique imagery:

Example of oblique imagery

The following image is a diagram of a multihead sensor, showing the camera cones and image footprints:

Multihead sensor camera cone and image footprint diagram

Positioning information may be based on navigation data or on high-accuracy positions derived from an external aerotriangulation process.

The data you'll add to your capture session consists of the following:

  • 873 images captured with a multihead sensor system (IGI UrbanMapper)
  • An ASCII file including the positioning information per image (GNSS_IMU_whole_Area.csv)
  • A file including the necessary sensor specifications (Camera_template_Frankfurt_UM1.json)
  • A file geodatabase containing the geometry for the region of interest and a water body (AOI_and_Waterbody.gdb)

Download the data

The data for this tutorial takes up about 26 GB of disk space.

  1. Download the Frankfurt_City_Collection.zip file.
    Note:

    Depending on your connection speed, this 26 GB file may take a long time to download.

  2. Extract the .zip file to a folder on your local machine, for example, D:\Datasets\Frankfurt_City_Collection.

Start a capture session

Next, you'll create the capture session.

  1. Start ArcGIS Reality Studio.
  2. On the Welcome screen, click New Capture Session.

    New Capture Session option

  3. In the Capture Session pane, for Capture Session Name, type Frankfurt_Flight_RS.
  4. For Orientation File Format, click ASCII text file (.txt, .csv, etc).

    ASCII text file orientation file format

    A notice appears indicating that the data must be in a supported orientation data format convention.

  5. For Orientation File Path, browse to the Frankfurt_City_Collection folder that you extracted. Select GNSS_IMU_whole_Area.csv and click OK.

    GNSS_IMU_whole_Area.csv file

  6. For Spatial Reference, click the browse button.

    Spatial Reference browse button

  7. In the Spatial Reference window, for Current XY, in the search box, type 25832 and press Enter.

    25832 in the search box

    The search for this well-known ID (WKID) code returns the ETRS 1989 UTM Zone 32N coordinate system. This is the XY coordinate system used in the position file.

  8. In the list of results, click ETRS 1989 UTM Zone 32N.

    You've set the XY coordinate system. Next, you'll set the Z coordinate system.

  9. Click Current Z.

    Current Z option

  10. For Current Z, in the search box, type 7837 and press Enter.

    7837 in the search box

  11. In the list of results, click DHHN2016 height.

    You've set the Z coordinate system.

  12. In the Spatial Reference window, click OK.
  13. In the Data Parsing section, for Parse from row, type 22 and press Enter.

    Parse from row parameter

    The GNSS_IMU_whole_Area.csv orientation file that you imported is a comma-delimited text file. It includes a header section of 21 lines, while the data that ArcGIS Reality Studio will use to process the images begins at line 22. Entering 22 in this box skips the header rows.

    Note:

    Another way to skip the header is to specify the character that begins comment rows. In this file, the # symbol is the comment character, so you could also skip the header by typing # in the Symbols used to ignore rows box.

    Once ArcGIS Reality Studio can read the file correctly, the number of detected orientations is listed in a green highlight box. In this case, 7,775 orientations are detected. These are the orientations collected during the flight. This is greater than the 873 images used in the tutorial because the tutorial images are a subset of a larger collection.

  14. Click Next.

    Next button

Define the parameters of the orientation file

There are multiple image orientation systems, and they label the collected parameter data in different ways. In this case, the GNSS_IMU_whole_Area.csv file you imported contains the image name, X, Y, Z, Omega, Phi, and Kappa values in the same order as they appear in the Data Labeling table. You'll match the fields to the data positions in the file.
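
If it helps to see what this configuration amounts to, the following Python sketch reads the same orientation file outside of ArcGIS Reality Studio: it skips the header rows (comment lines beginning with #) and maps the seven data positions to named fields in the order described above. The file path is an example based on the extraction folder used earlier, and the sketch is for illustration only; it is not how the software itself parses the file.

```python
import csv

# Illustrative only: mirrors the Data Parsing and Data Labeling configuration.
# The path assumes the extraction folder suggested earlier in this tutorial.
orientation_file = r"D:\Datasets\Frankfurt_City_Collection\GNSS_IMU_whole_Area.csv"

orientations = []
with open(orientation_file, newline="") as f:
    for row in csv.reader(f):
        if not row or row[0].lstrip().startswith("#"):
            continue  # skip the 21 header rows, which begin with the # comment character
        image_name, x, y, z, omega, phi, kappa = row[:7]
        orientations.append({
            "image": image_name,                          # place 1: image name
            "x": float(x), "y": float(y), "z": float(z),  # places 2-4: position (ETRS 1989 UTM 32N, DHHN2016 height)
            "omega": float(omega), "phi": float(phi), "kappa": float(kappa),  # places 5-7: rotation angles
        })

print(f"{len(orientations)} orientations detected")  # 7,775 for the full file
```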

  1. In the Data Labeling section, for Image Name, choose the first item in the list.

    First item in the list of images

    Place 1 in the file contains the image name, a code value with segments separated by underscore characters.

  2. For X, choose the second item in the list.

    Second item in the list of images

    Place 2 in the file contains floating-point values.

    You'll continue mapping the field names to places in the data file.

  3. For Y, choose the third item in the list.
  4. For Z, choose the fourth item in the list.
  5. For Omega, choose the fifth item in the list.
  6. For Phi, choose the sixth item in the list.
  7. For Kappa, choose the seventh item in the list.

    Data labeling completed

    When you have set the Kappa value, a green box appears in the Camera System Assignment section showing the number of orientations assigned from the file.

  8. Skip the Camera Name field.

Relate the orientation data to the images

The orientation data file contains information that ArcGIS Reality Studio will use to reconstruct the scene. There are multiple camera and orientation tracking systems, and the relationship between the position data and the cameras is established in different ways, depending on the convention of the system that collected your images. The following are the two main ways:

  • The ASCII orientation file may include a column with the camera names.
  • The image file name includes a string that identifies the camera.

In this tutorial, the image file names contain a string to identify the camera.
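
Conceptually, the Camera ID codes you'll enter in the next steps are substrings that identify the camera within each image file name. The short sketch below illustrates that matching idea using the codes from this tutorial; it is a simplification for illustration, not the application's actual logic, and the example file name is hypothetical.

```python
# Simplified illustration of assigning images to cameras by the Camera ID
# substrings used in this tutorial. The example file name is hypothetical.
CAMERA_IDS = {
    "Left": "_11000",
    "Forward": "_11900",
    "Nadir": "_NAD",
    "Backward": "_11600",
    "Right": "_11100",
}

def camera_for_image(file_name):
    """Return the camera whose ID code appears in the image file name, or None."""
    for camera, code in CAMERA_IDS.items():
        if code in file_name:
            return camera
    return None

print(camera_for_image("0123_11900_example.jpg"))  # -> Forward (hypothetical file name)
```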

  1. In the Camera System Assignment section, click the options button and choose Import Template.

    Import Template option

  2. Browse to the Frankfurt_City_Collection folder, select Camera_template_Frankfurt_UM1.json, and click OK.

    The template file

    The Camera System Assignment section updates to include a table for the camera names and ID values.

    Table added with camera names and IDs

    Next, you'll enter the codes that correspond to the cameras in the image file names.

  3. For Left, in the Camera ID column, type the code _11000.
  4. For Forward, in the Camera ID column, type the code _11900.
  5. For Nadir, in the Camera ID column, type the code _NAD.
  6. For Backward, in the Camera ID column, type the code _11600.
  7. For Right, in the Camera ID column, type the code _11100.

    The Camera System Assignment table

    The Camera System Assignment table now matches the camera names to the Camera ID codes embedded in the image file names.

    The Capture Session Selection section appears below the Camera System Assignment table.

    This section allows you to choose to process specific camera sessions or all camera sessions. In this tutorial, you'll process all of the camera sessions.

  8. Click the button for Frankfurt_Flight_RS to select the complete capture session, including all five camera sessions.

    Capture Session Selection option

    The capture sessions are checked.

    Capture sessions are checked.

  9. Click Next.

    Click Next.

Review the camera sessions

The Camera Sessions section allows you to review the parameters of the cameras used to capture the images.

  1. In the Camera Sessions section, click Forward_Frankfurt_Flight_RS.

    Forward camera parameters

    The next sections contain information about the camera used to collect the forward-looking images. This information was included in the Camera_template_Frankfurt_UM1.json file that you imported earlier.

  2. Scroll down to see the data in the Sensor Definition section.

    Sensor definition data

    Each of the camera sessions listed has a corresponding table of data documenting the physical properties of the camera and lens system used to capture that set of images.

    Note:

    If the camera data had not been imported from the Camera_template_Frankfurt_UM1.json file, you could manually enter the data from your imagery provider.

  3. Optionally, click the other camera sessions and review their parameters.
  4. Click Finish.

    The capture session is constructed. This process will take a minute or so. The Project Tree pane appears.

    Project Tree pane

    The Process Manager pane also appears. It shows the status of the current process.

    Process Manager pane

    The globe view appears, showing the locations of the camera captures.

    Globe view

Link capture sessions to the image files

Next, you'll connect the capture sessions you've selected to the image file data location. You'll perform this step for each camera session.

  1. In the Project Tree pane, expand the entry for Forward_Frankfurt_Flight_RS.

    Forward_Frankfurt_Flight_RS in the Project Tree pane

    The current number of images is 0.

    You'll connect the image data to the forward-looking images.

  2. In the Project Tree pane, in the Forward_Frankfurt_Flight_RS section, click Add images.

    Add images option

  3. Browse to the Frankfurt_City_Collection folder, select the jpg folder, and click OK.

    The jpg folder where the images are stored

    The Process Manager pane shows the progress as the images are linked to their collection data.

    Process manager as images are linked

    When the process is complete, Forward_Frankfurt_Flight_RS shows 160 images.

    Now, you'll add the images to the next camera session.

  4. In the Project Tree pane, in the Nadir_Frankfurt_Flight_RS section, click Add images.

    Nadir capture session images

  5. In the Select images, folders or list files window, select the jpg folder and click OK.

    The jpg folder in the Select images, folders or list files window

    You'll repeat this process for each of the camera sessions.

  6. In the Project Tree pane, in the Backward_Frankfurt_Flight_RS section, click Add images.
  7. In the Select images, folders or list files window, select the jpg folder and click OK.
  8. In the Project Tree pane, in the Right_Frankfurt_Flight_RS section, click Add images.
  9. In the Select images, folders or list files window, select the jpg folder and click OK.
  10. In the Project Tree pane, in the Left_Frankfurt_Flight_RS section, click Add images.
  11. In the Select images, folders or list files window, select the jpg folder and click OK.

    After the capture sessions have been linked to their images, you can visualize the image footprints.

  12. In the Project Tree pane, click Visualization.

    Visualization tab

  13. In the Forward_Frankfurt_Flight_RS section, check Image Footprints.

    Image Footprints checked for the Forward_Frankfurt_Flight_RS capture session

    The image footprints are shown in the globe view.

    Image footprints shown in globe view

  14. Uncheck Image Footprints.

Define the region of interest and add water bodies

The last two steps before aligning the images are to define the region of interest for the project and to identify where water bodies are located.

  1. On the ribbon, on the Home tab, in the Import section, click Geometries and choose Region of Interest.

    Region of Interest option

  2. In the Select a region of interest geometry window, in the Computer section, browse to the Frankfurt_City_Collection folder.

    The geodatabase in the folder

  3. Double-click the AOI_and_Waterbody.gdb geodatabase to expand it. Click the Frankfurt_AOI feature class and click OK.

    The AOI feature class

    The Frankfurt_AOI polygon feature class is added to the globe view.

    The AOI on the globe view

    Specifying a region of interest geometry prevents unnecessary data from being processed, minimizing total processing time and storage requirements.

  4. On the ribbon, on the Home tab, in the Import section, click Geometries and click Water Body.

    Water Body option

  5. In the Select a water body geometry window, in AOI_and_Waterbody.gdb, click Frankfurt_waterbody and click OK.

    The Frankfurt_waterbody feature class

    The Frankfurt_waterbody polygon feature class is added to the globe view.

    The water body polygons on the globe view

    Specifying water body geometries flattens and simplifies areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

    The capture session has been fully defined. You can now save the project.

  6. On the ribbon, click Save Project.

    Save Project button

  7. In the Save Project As window, browse to a location with plenty of free disk space, type 2023-Frankfurt_Reality_Studio_Tutorial, and click Save.

You have defined the capture sessions, set the coordinate system and camera properties, linked the position and orientation data to the captured images, and saved the project. You are now ready to begin adjusting the images so you can start creating products from them.


Perform an alignment

The capture session was built from GNSS navigation data recorded during the photo flight. This exterior orientation information is typically not accurate enough to create products such as true orthos or 3D meshes of high geometric quality. To optimize the navigation data, you'll run an alignment. During alignment, also called aerotriangulation, individual images are connected by determining homologous points (tie points) between overlapping images. With many of these image measurements, the image block can be mathematically adjusted to refine the orientation parameters for each image. Additional accuracy can be obtained by manually measuring ground control points.
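
To make the adjustment idea concrete, the sketch below shows the quantity a bundle block adjustment works to minimize: the reprojection error, which is the difference between where a point was measured in an image and where the current estimates of camera position, rotation angles (omega, phi, kappa), and the point's 3D location predict it should appear. This is a minimal collinearity-style sketch with invented numbers and one common rotation convention; it is not the software's internal implementation.

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Rotation matrix from omega, phi, kappa angles in radians (one common convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_x = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_z = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_x @ r_y @ r_z

def project(ground_xyz, camera_xyz, omega, phi, kappa, focal_mm):
    """Project a ground point into image-plane coordinates (collinearity-style model)."""
    d = rotation_opk(omega, phi, kappa).T @ (np.asarray(ground_xyz) - np.asarray(camera_xyz))
    return np.array([-focal_mm * d[0] / d[2], -focal_mm * d[1] / d[2]])

# Invented example values, for illustration only.
measured_xy = np.array([1.204, -0.873])  # where the point was measured in the image (mm)
predicted_xy = project(
    ground_xyz=[476500.0, 5552300.0, 110.0],   # current estimate of the tie point's 3D position
    camera_xyz=[476450.0, 5552250.0, 1150.0],  # current estimate of the camera position
    omega=0.01, phi=-0.02, kappa=1.57, focal_mm=82.0,
)
residual = measured_xy - predicted_xy
print(np.linalg.norm(residual))  # the adjustment minimizes squared residuals over all measurements
```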

Create an alignment

To align the images, you must add an alignment to the project.

  1. On the ribbon, on the Home tab, in the Processing section, click New Alignment.
    New Alignment button
  2. In the Alignment pane, for Alignment Name, type Frankfurt_AT.

    Alignment Name parameter

  3. In the Camera Sessions section, check Dataset.

    This alignment will use all the capture sessions, so they should all be checked.

    All the capture sessions turned on

  4. In the Control Points section, click Import Control Points.

    Import Control Points option

  5. In the Select input file window, browse to the Frankfurt_City_Collection folder and open the GroundControlPoints folder. Select Ground_Control_Points.txt and click OK.

    Ground_Control_Points.txt file

  6. In the Control Points Import window, click the Spatial Reference browse button.
  7. In the XY Coordinate Systems Available box, type 25832 and press Enter.

    XY coordinate system

  8. Click ETRS 1989 UTM Zone 32N.
  9. Click the Current Z box. In the Z Coordinate Systems Available box, type 7837 and press Enter.

    Z coordinate system

  10. Click DHHN2016 height and click OK.
  11. For Choose a delimiter, accept the default delimiter, comma.
  12. Click Next.

    Next button

  13. Review the column labels.

    The column labels

    The default values are correct.

  14. Click Import.

    The control points are added to the globe view.

    Control points on the globe view

  15. In the Alignment pane, in the Control Points section, check Dataset. Expand Dataset to see the new Ground_Control_Points data.

    The new Ground_Control_Points item

    The Standard Deviations section allows you to modify the given accuracy (a priori standard deviations) of the image positions (XYZ position and rotation angles) and of the imported ground control points. For this tutorial, the default values are correct.

    The Region of Interest parameter allows you to specify a region to adjust. For this tutorial, you'll perform alignment on the entire dataset, so there is no need to set a region of interest.

  16. Click Create.

    Click the Create button

    Clicking Create adds the Alignment tab to the ribbon. The alignment is ready to run.

    Running the alignment will start the automatic tie point matching and bundle block adjustment process. This is a computationally intensive process, and the duration of the processing will depend on your computer hardware.

    On a computer with 128 GB of RAM, a 24-core AMD Ryzen CPU at 3.8 GHz, and an NVIDIA GeForce RTX 4090 GPU, the process takes approximately 2 hours.

  17. Click Run.

    Run button

    In the Process Manager pane, the Alignment process status appears.

    Alignment status in Process Manager

  18. Expand the Alignment process to see the steps.

    The Alignment process

    The Process Manager allows you to keep track of the stages of the Alignment process and their status.

    This might be a good time to take a break or work on something else while the process runs.

    When the process finishes, you can see it listed in the Process Manager pane.

    The process is complete message

    Once the alignment finishes, the QA window appears. This window shows the key statistics of the bundle block adjustment.

    QA window when the alignment is complete

Measure ground control points

You can measure ground control points before or after the initial alignment. Doing it after the alignment has the benefit that the software has already refined the image positions and can provide a better indication of where to measure.

  1. In the QA window, on the Overview tab, scroll down and expand Count.

    Image Measurements with a value of 0

    The Image Measurements column for the Ground Control Points row indicates that no image measurements have been done for the ground control points. You will add some now.

  2. Optionally, close the QA window.
  3. On the ribbon, on the Alignment tab, in the Tools section, click Image Measurements.

    Image Measurements button

    The measurement window appears. The left pane shows a globe view of the project area and a Control Points table with the available ground control points.

    Image Measurements overview

    Note:

    If the Alignment tab is not visible, in the Project Tree pane, scroll down to the Alignments section and click Frankfurt_AT.

    The Image pane shows a set of image measuring tool instructions.

    Image control point measuring instructions

  4. Review the information. When finished, close the information window.
  5. In the Control Points table, click the row number for the second row, point 990004.

    Select the second ground control point.

    When you click the row number for the second row, the Image List section updates to show all of the images that contain point 990004, and the first image is shown, along with a pink circle indicating the projected point location.

    Result of selecting the second of the GCPs, number 990004

    A ground control point may or may not be visible in any given image, because each was taken from a different location and angle. Trees, buildings, cars, or pedestrians may block the view of the point in some images. Glare or shadow may make a ground control point blend into the background of the image. When measuring, you can skip the images where the point is not visible.

    The images you see may not appear in the order they are shown in the tutorial. For each image, you should determine whether the ground control point is visible. If it is not visible, you can skip the image by pressing the F key. If the point is visible, you will zoom in to it using the scroll wheel on your mouse, and click the center of the point in the image to measure the difference between its calculated location and its location in the image.

  6. Move the pointer over the image and use the scroll wheel of your mouse to zoom in to the pink circled point that represents the projected point location for this image.

    Location of first projected point

    This point is in an intersection, and some cars were in the intersection when the image was captured. Fortunately, in this image, the ground control point is visible as a light spot with a darker circle surrounding it.

    Zoomed in view of the ground control point in the image

  7. Click the center of the ground control point in the image.

    The location where you clicked is now marked as a measured point.

    The first measured point appears on image and in table.

    The Status column of the table for this image updates with a green Measured Point symbol.

  8. Press the F key to move to the next image.
  9. Click the center of the ground control point in the image.

    Click the second measured point.

    The second measured point is added.

    After this point is added, the Find Suggestions button is enabled. This tool is designed to assist you in measuring the ground control points. It uses the projected points and inspects the imagery to find locations that look like the point that you have marked in the current image.

  10. Click Find Suggestions.

    Click the Find Suggestions button.

    The tool scans the images for this ground control point. This may take a minute or so.

  11. In the table, click the first suggestion.

    Click the first suggestion.

    The image is displayed with a red square box indicating the location of the suggested point.

    The suggestion is good, so you will accept it.

  12. Click Accept to accept the suggestion.

    Click Accept.

    The measured point is added and the next image is displayed. It also has a suggested point.

  13. Click Accept to accept the suggestion.

    The next point does not have a suggestion, so you will manually add it by clicking the ground control point in the image, the way you did for the first two.

  14. Click the center of the ground control point in the image.

    Click the point.

    The measured point is added and the Find Suggestions tool becomes active again.

    Find Suggestions is active.

  15. Click Find Suggestions.

    The tool scans the images for this ground control point. This may take a minute or so.

    The tool makes suggestions for more of the images. You can use the table to navigate through these and manually accept each one by clicking the Accept button, or you can accept all of the suggestions by clicking the Accept All button.

  16. Click Accept All.

    Click Accept All.

    Now 46 of the 128 images showing ground control point 990004 have measurements.

Collect points for the Forward camera

You can scroll through the table to look at the distribution of measured points. The Camera column indicates which camera captured each image. You want to ensure that you have about five measurements for each camera. To do this, you will sort the table by the Camera column.
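
The check you're doing by sorting the table is essentially a count of measured images per camera. The sketch below shows that bookkeeping with a hypothetical measurement list; the target of about five per camera follows the guidance above.

```python
from collections import Counter

# Hypothetical (camera, status) rows standing in for the Image List table.
rows = [
    ("Backward", "Measured"), ("Backward", "Measured"), ("Backward", "Measured"),
    ("Forward", "Measured"),
    ("Left", "Measured"), ("Left", "Measured"),
    ("Nadir", "Measured"), ("Nadir", "Measured"), ("Nadir", "Measured"),
    ("Right", "Measured"), ("Right", "Measured"), ("Right", "Measured"),
]

per_camera = Counter(camera for camera, status in rows if status == "Measured")
for camera in ("Forward", "Nadir", "Backward", "Left", "Right"):
    count = per_camera.get(camera, 0)
    note = "" if count >= 5 else "  <- collect more measurements for this camera"
    print(f"{camera:9s}{count}{note}")
```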

  1. Click the Camera column header.

    Click Camera to sort on that column.

    Now the table of images is sorted by camera.

  2. Scroll down the table to see whether each camera is well represented.

    Backward has several measured points. Forward has only one. Left has a few, but could use more.

  3. Click one of the Forward camera images to measure the control point position.

    Click a Forward camera image.

  4. Click the center of the ground control point in the image.

    Click to measure a forward point.

  5. Click Find Suggestions.

    Several more images have suggested points.

    Forward camera images with suggested points

  6. Click each of the images with suggested points, verify the suggested point matches a good location for a measured point, and click Accept.

    You can also review the points and click Accept All.

Collect measurements for the Left camera

Next, you'll collect some measurements for the left camera.

  1. Scroll down to the Left camera images.
  2. Click one of the Left camera images to measure the control point position.

    Click a Left camera image.

  3. If the image shows the ground control point, click it to add a measured point. If it does not, press the F key to move to the next image.
  4. After you've collected a measured point for a Left camera image, click Find Suggestions.
  5. Click each of the images with suggested points, verify the suggested point matches a good location for a measured point, and click Accept.

    You can also review the points and click Accept All.

    Now 111 of the images showing ground control point 990004 have measurements. Each camera is well represented. This is enough.

Collect measurements for another ground control point

You've collected measurements for ground control point 990004. The next step is to continue collecting measurements for the other ground control points. You should collect representative measurements for each of the cameras for at least five of the other ground control points. Use the same techniques you learned on the first ground control point.

  • If the ground control point is not visible in the image (for example, if it is hidden by a car, building, or tree), press F to skip the image.
  • If a suggested point location appears correct, click Accept.
  • If the suggested point location does not appear correct, click the location of the ground control point.

  1. In the Control Points table, click the row number for the third row, point 990007.

    Select the next ground control point.

    When you click the row number for this row, the Image List section updates to show all of the images that contain point 990007, and the first image is shown, along with a pink circle indicating the projected point location.

    Result of selecting ground control point 990007

    Note:

    Some ground control points, such as point 990007, were not clearly marked on the ground by a point but were collected at a visually distinguishable location, such as a corner of a crosswalk.

    In the Frankfurt_City_Collection folder, the GroundControlPoints folder contains a set of images showing a green Measured Point marker at the location of the ground control point.

    Ground control point location reference images

    If you open the 990007 image file in this folder, you'll see that this ground control point was collected at the corner of a crosswalk. For each ground control point, view the corresponding image in this folder to verify the location before measuring.

    Ground control point at corner of crosswalk

    When conspicuous existing locations are used as ground control points, the surveyor usually notes the location in a set of field notes and takes a picture showing the GPS antenna at that location. The images in this folder simulate that sort of field data.

  2. Use the scroll wheel on your mouse to zoom closer to the ground control point, then click the corner of the crosswalk.

    Add measure point at corner.

  3. Continue measuring points.

    Keep working until you have collected about five measurements for each of the five cameras for each of the ground control points.

    Remember that a ground control point may not be visible in every image: each image was taken from a different location and angle, and trees, buildings, cars, pedestrians, glare, or shadow can obscure the point. The images may also not appear in the order shown in the tutorial. Press the F key to skip any image where the point is not visible; where it is visible, zoom in with the scroll wheel and click the center of the point to measure it.

    All GCPs have measurements.

    The Control Points table shows statistics for the reprojection errors for each control point. If some control points have higher reprojection error statistics than others, you can click the row number for that point in the Control Points table and then, in the Image List, search for and remeasure or remove the measurements with high Reprojection Error values.

Remove a measurement

If a control point has a high reprojection error, you may need to remove a measurement or remeasure it.

You can examine the statistics in the Control Points table.

In this example, 990002 has the highest Maximum Reprojection Error value.

Point 990002 has the highest Maximum Reprojection Error value.

The values you see in your table will depend on the measurements you make and will not match these sample images.
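
The inspection you'll do in the next steps, finding the control point with the worst statistic and then the worst measurement within it, is a simple sort over the measurement table. The sketch below illustrates that with invented reprojection error values; your own values will differ.

```python
# Invented reprojection errors (pixels) per image measurement, keyed by control point.
errors = {
    "990002": {"image_0041_11900.jpg": 0.6, "image_0113_11000.jpg": 2.9, "image_0088_NAD.jpg": 0.8},
    "990004": {"image_0052_11100.jpg": 0.7, "image_0097_11600.jpg": 1.1},
}

# Control point with the highest maximum reprojection error, then its worst image.
worst_point = max(errors, key=lambda point: max(errors[point].values()))
worst_image, worst_value = max(errors[worst_point].items(), key=lambda item: item[1])

print(f"Inspect point {worst_point}: {worst_image} has a reprojection error of {worst_value} px")
# A measurement like this is a candidate to remeasure or remove.
```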

  1. Click the row header for point 990002.
  2. In the Image table, click the column header for Reprojection Error (xy).

    The table is sorted by Reprojection Error (xy) value.

  3. Scroll to see the image with the highest Reprojection Error (xy) value.
  4. Click the high Reprojection Error (xy) value.

    Click the high value.

  5. Click Remove.

    Remove measurement.

    In the Control Points table, check that the values improved.

    Values improved in Control Points table.

Change a ground control point to a check point

Check points are used to evaluate and report on the accuracy of the alignment. Their 3D positions and image residuals are estimated using the output image orientations for quality assurance purposes. You'll convert one of the ground control points to a check point.
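
In terms of the numbers that get reported, the difference is that a check point's surveyed coordinates are withheld from the adjustment, and the offsets between its surveyed position and the position estimated from the adjusted orientations are reported as an independent accuracy measure. A minimal sketch with invented coordinates:

```python
import math

# Invented example: surveyed coordinates vs. the position estimated after adjustment (meters).
surveyed = (476512.431, 5552388.207, 102.514)
estimated = (476512.455, 5552388.180, 102.461)

dx, dy, dz = (e - s for e, s in zip(estimated, surveyed))
horizontal = math.hypot(dx, dy)
print(f"dX={dx:+.3f} m  dY={dy:+.3f} m  dZ={dz:+.3f} m  horizontal={horizontal:.3f} m")
# Because the check point is excluded from the adjustment, these residuals give an
# independent indication of accuracy, unlike control point residuals.
```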

  1. In the Control Points table, click the header for the fourth row (ground control point 990006).

    Fourth row

    The row for this ground control point is highlighted.

  2. On the toolbar at the top of the Control Points table, click Set Role and choose CP.

    Set role as Check Point.

    In the table, the role changes to CP, indicating this is a Check Point.

    Role is now Check Point.

Refine the alignment

After adding and measuring control points, or changing other settings of the alignment, you will run the alignment again to refine the positions based on the new information. This reruns the bundle block adjustment, but it will be much faster than the initial alignment process.

  1. On the ribbon, on the Alignment tab, in the Process section, click Run.

    Run button

    The Process Manager opens and shows the progress on the alignment process. After one or two minutes, the process completes.

    The QA tool pane appears.

    The QA tool pane

    To check the quality of the alignment results, examine the statistics on the QA tool pane.

    For best results with this data, keep the following in mind:

    • The overall Sigma 0 value should be less than 1 px for a well-calibrated photogrammetric camera.
    • The RMS of the tie point reprojection errors should also be less than 1 px.
    • The RMS of the horizontal and vertical object residuals for control points should be less than 1.5 GSD (12 cm, given this dataset's ground sample distance of about 8 cm).

    Also check the count data, such as the number of automatic tie points per image and image measurements per tie point, which indicate how well tie points are distributed in the project area and how well adjacent images are connected by a common measurement. You can also review the tie point visualization in the globe view.

    Note:

    These steps are meant to give you basic guidance for analyzing the alignment results. Doing an in-depth analysis of the quality requires knowledge about project requirements and specifications as well as knowledge about the quality of the input data.

  2. In the QA pane, click General Information and view the Sigma 0 value.

    The Sigma 0 value

    The value in this example is 0.7616, which is a good value for this dataset.

  3. On the right side of the QA pane, scroll down to the Reprojection Errors section, view the Automatic Tie Points Reprojection Errors chart, and click the View table button.

    Tie point reprojection errors chart

    Tie point reprojection errors table

    The RMS value for the tie point reprojection errors in this example is 0.762, which is a good value for this dataset.

  4. On the right side of the QA pane, scroll up to the 3D Residuals section. View the Ground Control Points Residuals section.

    Ground Control Points Residuals section

    Ground Control Points Residuals values

    The RMS value for the Ground Control Points Residuals in this example is 0.079 meters, which is acceptable for this exercise.

  5. On the left side of the QA pane, scroll down and expand the Count section.

    Count statistics

    In this example, there are six ground control points with 463 image measurements and one check point with 98 image measurements.

  6. Optionally, review the other QA statistics and measurements.
  7. On the QA tool, click the Control Points tab.

    Control Points tab

    The Control Points table appears.

    Control Points table open in the QA tool

    You can use this table to check the X, Y, and Z residuals for each control point. Unexpectedly large Delta XYZ values may indicate that points need to be remeasured.

    You can also review the geography of the actual project data (ground control points, automatic tie points, image positions).

  8. On the ribbon, on the Alignment tab, in the Display section, click Automatic Tie Points.

    Automatic Tie Points button

    The automatic tie points are drawn in the Display pane.

    The automatic tie points in the Display pane

    If other elements, such as Camera Positions, are drawing over the Automatic Tie Points, you can open the Project Tree pane, click Visualization, expand Capture Sessions, expand Frankfurt_Flight_RS, and turn off Camera Positions for each camera.

  9. Click the Automatic Tie Points drop-down arrow and choose RMS of Reprojection Errors.

    RMS of Reprojection Errors option

    The Display pane updates to show the automatic tie points symbolized by the RMS of their reprojection errors.

    Updated symbology

  10. On the QA tool, click the Automatic Tie Points tab.

    The table shows the automatic tie points.

    Automatic tie points

    You can view and sort the data in this table to identify the automatic tie points with the highest error values.

  11. On the QA tool, click the Overview tab. On the right side, scroll to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors histogram.

    Automatic Tie Points Reprojection Errors histogram

    The symbology of the histogram matches the symbology of the globe view.

  12. On the ribbon, on the Alignment tab, in the Results section, click Report.

    Report button

  13. In the Create Alignment Report window, browse to a location to save the report. For Name, type Frankfurt_AT_report.

    Create Alignment Report window

  14. Click Save.

    The PDF report is saved on your computer. It provides a way to share the QA statistics of the alignment.

    Sample report

  15. Close the QA tool and save the project.

You've performed an initial alignment, added control points, refined the alignment, and examined the alignment statistics. You also exported a PDF copy of the alignment statistics to document your work and share it with your stakeholders.

Next, you'll use the aligned data to create a reconstruction.


Perform a reconstruction

Now that the alignment process is complete and the results have been examined and determined to be high quality, you are ready to create output products. For this tutorial, you'll create a 3D point cloud and a 3D mesh.

Create a reconstruction

The first step to generate the products is to create a reconstruction.

  1. On the ribbon, click the Home tab. In the Processing section, click New Reconstruction.

    New Reconstruction button

  2. In the Reconstruction pane, for Reconstruction Name, type Frankfurt_RS_3D.

    Reconstruction Name parameter

    This reconstruction session will be used to create two 3D outputs.

  3. For Capture Scenario, click the drop-down list and choose Aerial Oblique.

    Aerial Oblique option

    Choosing a scenario sets some output products and processing settings.

    The Aerial Oblique setting is useful now because the sample data is a multihead capture session, and all the available imagery will be used to create the output 3D products. The Aerial Nadir setting is more useful when you are creating 2D products. For optimal quality, 2D products should be produced using only Nadir images.

  4. In the Camera Sessions section, check the Frankfurt_AT alignment session that you created.

    Frankfurt_AT alignment session

    The alignment is selected.

    Selected alignment

  5. In the Products section, review the output products.

    The Point Cloud and Mesh products are highlighted.

    Point Cloud and Mesh output products

    The SLPK mesh format will be exported by default. You can check other formats for the output mesh if you choose.

  6. In the Workspace section, specify a local folder for the output of the reconstruction.

    The results of the reconstruction process will be stored there. Ensure that there is enough disk space for the output.

  7. In the Optional section, for Quality, click Ultra.

    Ultra output quality

    The Ultra setting will run the 3D reconstruction at the native image resolution. This will take longer to process than the High quality option, but the results will look better. On a single computer with 128 GB of RAM, a 24-core AMD Ryzen CPU at 3.8 GHz, and an NVIDIA GeForce RTX 4090 GPU, the process takes approximately 8 hours.

    You can choose the High quality option if you want the output to have slightly reduced detail and lower texture resolution.

    Note:
    The reconstruction process is designed to support processing in a distributed environment, with a local network of workstations running ArcGIS Reality Studio serving as processing nodes. To run efficiently in such an environment, the process is split into individual tasks and the project is divided into manageable subprojects.

    To run a reconstruction on multiple nodes, you need to specify the following:

    • A workspace, where the results of the reconstruction run are collected. The workspace needs to be accessible for each processing node.
    • A temporary processing folder, which is used to store intermediate processing results for the automatically defined subprojects.

  8. For Region of Interest, choose Frankfurt_AOI.

    Frankfurt_AOI option

    Region of Interest allows you to limit processing for your output products to the images relevant to your project.

  9. For Water Body Geometries, choose Frankfurt_waterbody.

    Frankfurt_waterbody option

    The Water Body Geometries parameter is used to flatten and simplify areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

  10. For Correction Geometries, accept the default value of None.
  11. Click Create.

    Create button

    This finishes the reconstruction setup. The reconstruction is added to the Project Tree pane.

    The Reconstructions session in the Project Tree pane

Run the reconstruction

Now that the reconstruction has been set up, the next step is to run it. This will take some time, depending on your computer resources. On a single computer with 128 GB of RAM, a 24-core AMD Ryzen CPU at 3.8 GHz, and an NVIDIA GeForce RTX 4090 GPU, the process takes approximately 8 hours. Adding more processing nodes will make the process faster.

  1. On the ribbon, on the Reconstruction tab, in the Processing section, click Submit.

    Click Submit.

    After you click Submit, the Process Manager will show the reconstruction process as pending.

    Process is pending.

  2. On the ribbon, on the Reconstruction tab, in the Workspace section, click Start Contribution.

    Start contributing to the reconstruction process.

    The Process Manager now shows the status of the reconstruction process.

    The Process Manager pane for the reconstruction session

    You can use the Workspace Monitor to get an overview of which machine is contributing to a reconstruction job. In this example, there is only one machine, but you can use multiple machines to process a reconstruction.

    Workspace Monitor

    You can use the Job Monitor to get an overview of which task of a reconstruction job is running.

    Job Monitor

    After the analysis step is finished, the globe view will show the processing progress as well. You can observe the individual stereo models being processed in dense matching. Later in the process, you'll see individual tiles of the point cloud and the mesh added to the globe view.

    Once the process has finished, the products are added to the Project Tree pane. You can use the Visualization tab to show or hide these products.

  3. Wait for the reconstruction process to run.

    The Process Manager will indicate when each process is done.

    Process Manager showing process complete.

    For this example, it takes approximately 8 hours on the single example machine.

  4. In the Project Tree pane, click Visualization.

    Click Visualization.

  5. In the Project Tree pane, scroll down to the Reconstructions section, expand Frankfurt_RS_3D, expand Products, and check Point Cloud.

    Check Point Cloud.

  6. Click the Globe tab to view your output.

    If necessary, in the Project Tree pane, on the Visualization tab, uncheck other layers.

    View the resulting point cloud.

    Optionally, turn off the point cloud layer and turn on the mesh layer, and explore the results.

  7. On the ribbon, on the Reconstruction tab, click Open Results Folder.

    This opens the Results folder in Microsoft File Explorer. It contains both the 3D point cloud and the 3D mesh in I3S (SLPK) format. Use the .slpk files to add the products to ArcGIS Online.

    If you need to deliver your reconstruction products in a different tiling scheme or projection, on the Reconstruction tab, click the Export button and choose an alternative export option.

  8. If you did not run the process yourself, review the example results shown in this section.

In this tutorial, you have created an ArcGIS Reality Studio project, added a capture session, performed an initial alignment, measured ground control points, and refined the alignment. You evaluated the quality of the alignment and determined that it was acceptable. You used the alignment to create a reconstruction, and you used that reconstruction to create point cloud and 3D mesh outputs. These can be shared to ArcGIS Online or used with local applications on your computer. You can use a similar process in the reconstruction stage to create 2D products such as true orthophotos and digital surface models. The main difference for creating 2D outputs is that you would use the Aerial Nadir scenario and limit the camera sessions to the nadir camera captures.

You can find more tutorials in the tutorial gallery.