Create a capture session

A capture session combines all the information collected in a single photo flight that is required for the alignment and reconstruction steps. Capture sessions can be built for imagery captured with nadir-only sensors or with multihead sensor systems, together with the corresponding positioning information for each image.

In a nadir sensor system, the sensor points straight down and captures imagery of the surface under it. The images collected this way are referred to as nadir images.

The following image is an example of nadir imagery:

Example of nadir imagery

The following image is a diagram of a nadir camera cone and image footprint:

Nadir camera cone and image footprint diagram

In a multihead sensor system, sensors point in multiple directions, at angles forward and backward and to the sides. The images collected at an angle are referred to as oblique images. Multihead systems may also include a sensor to collect nadir images.

The following image is an example of oblique imagery:

Example of oblique imagery

The following image is a diagram of a multihead sensor, showing the camera cones and image footprints:

Multihead sensor camera cone and image footprint diagram

Positioning information may be based on navigation information or high-accuracy positions derived in an external aerotriangulation process.

The data you'll add to your capture session consists of the following:

  • 873 images captured with a multihead sensor system (IGI UrbanMapper)
  • An ASCII file including the positioning information per image (GNSS_IMU_whole_Area.csv)
  • A file including the necessary sensor specifications (Camera_template_Frankfurt_UM1.json)
  • A file geodatabase containing the geometry for the region of interest and a water body (AOI_and_Waterbody.gdb)

Download the data

The data for this tutorial takes up about 26 GB of disk space.

  1. Download the Frankfurt_City_Collection.zip file.
    Note:

    Depending on your connection speed, this 26 GB file may take a long time to download.

  2. Extract the .zip file to a folder on your local machine, for example, D:\Datasets\Frankfurt_City_Collection.

Start a capture session

Next, you'll create the capture session.

  1. Start ArcGIS Reality Studio.
  2. On the Welcome screen, click New Capture Session.

    New Capture Session option

  3. In the Capture Session pane, for Capture Session Name, type Frankfurt_Flight_RS.
  4. For Orientation File Format, click ASCII text file (.txt, .csv, etc.).

    ASCII text file orientation file format

    A notice appears indicating that the data must be in a supported orientation data format convention.

  5. For Orientation File Path, browse to the Frankfurt_City_Collection folder that you extracted. Select GNSS_IMU_whole_Area.csv and click OK.

    GNSS_IMU_whole_Area.csv file

  6. For Spatial Reference, click the Select coordinate system button.

    Select coordinate system button

  7. In the Spatial Reference window, for Current XY, in the search box, type 25832 and press Enter.

    25832 in the search box

    The search for this well-known ID (WKID) code returns the ETRS 1989 UTM Zone 32N coordinate system. This is the XY coordinate system used in the position file.

  8. In the list of results, click ETRS 1989 UTM Zone 32N.

    You've set the XY coordinate system. Next, you'll set the Z coordinate system.

  9. Click Current Z.

    Current Z option

  10. For Current Z, in the search box, type 7837 and press Enter.

    7837 in the search box

  11. In the list of results, click DHHN2016 height.

    You've set the Z coordinate system.

  12. In the Spatial Reference window, click OK.
  13. In the Data Parsing section, for Parse from row, type 22 and press Enter.

    Parse from row parameter

    The GNSS_IMU_whole_Area.csv orientation file that you imported is a comma-delimited text file. It includes a header section of 21 lines, while the data that ArcGIS Reality Studio will use to process the images begins at line 22. Entering 22 in this box skips the header rows.

    Note:

    Another way to skip the header is to specify the character that begins comment rows. In this file, the # symbol is the comment character, so you could also skip the header by typing # in the Symbols used to ignore rows box.

    Once ArcGIS Reality Studio can read the file correctly, the number of detected orientations is listed in a green highlight box. In this case, 7,775 orientations are detected. These are the orientations collected during the flight. This is greater than the 873 images used in the tutorial because the tutorial images are a subset of a larger collection. (A short parsing sketch after these steps illustrates how such a file can be read.)

  14. Click Next.

    Next button
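As an aside, the Data Parsing settings you just entered can be reproduced outside the application. The following Python sketch is only an illustration of reading such an orientation file; it assumes the file name and layout described in this tutorial and is not part of ArcGIS Reality Studio.

```python
import csv

ORIENTATION_FILE = "GNSS_IMU_whole_Area.csv"  # path in the extracted tutorial data
PARSE_FROM_ROW = 22                            # data begins at line 22
COMMENT_SYMBOL = "#"                           # alternative: skip rows starting with #

orientations = []
with open(ORIENTATION_FILE, newline="") as f:
    for line_number, row in enumerate(csv.reader(f), start=1):
        if line_number < PARSE_FROM_ROW:       # skip the 21 header lines
            continue
        if not row or row[0].lstrip().startswith(COMMENT_SYMBOL):
            continue                           # skip comment or empty rows
        orientations.append(row)

print(f"Detected {len(orientations)} orientations")  # the tutorial file reports 7,775
```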

Define the parameters of the orientation file

There are multiple image orientation systems, and they label the collected parameter data in different ways. In this case, the GNSS_IMU_whole_Area.csv file you imported contains the image name, X, Y, Z, Omega, Phi, and Kappa values in the same order as they appear in the Data Labeling table. You'll match the fields to the data positions in the file. (A short sketch after these steps shows one common way the Omega, Phi, and Kappa angles are interpreted.)

  1. In the Data Labeling section, for Image Name, choose the first item in the list.

    First item in the list of images

    Place 1 in the file contains the image name, a code made up of values separated by underscore characters.

  2. For X, choose the second item in the list.

    Second item in the list of images

    Place 2 in the file contains floating-point data.

    You'll continue mapping the field names to places in the data file.

  3. For Y, choose the third item in the list.
  4. For Z, choose the fourth item in the list.
  5. For Omega, choose the fifth item in the list.
  6. For Phi, choose the sixth item in the list.
  7. For Kappa, choose the seventh item in the list.

    Data labeling completed

    When you have set the Kappa value, in the Camera System Assignment section, a green box appears with the number of assigned orientations from the file.

  8. Skip the Camera Name field and leave Angular Unit set to degrees.
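The Omega, Phi, and Kappa values you just mapped are the rotation angles of each exposure. The sketch below shows one common photogrammetric way to turn them into a rotation matrix; the convention (rotation about X, then Y, then Z) and the sample angles are assumptions for illustration, not necessarily what ArcGIS Reality Studio or the IGI UrbanMapper system uses.

```python
import numpy as np

def opk_to_rotation(omega_deg, phi_deg, kappa_deg):
    """Build a rotation matrix from omega/phi/kappa angles given in degrees.

    Assumes the common R = Rx(omega) @ Ry(phi) @ Rz(kappa) convention;
    verify the convention used by your sensor system before relying on it.
    """
    o, p, k = np.radians([omega_deg, phi_deg, kappa_deg])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(o), -np.sin(o)],
                   [0, np.sin(o),  np.cos(o)]])
    ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [ 0,         1, 0        ],
                   [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k),  np.cos(k), 0],
                   [0,          0,         1]])
    return rx @ ry @ rz

# Example: a near-nadir exposure with small attitude angles (invented values)
print(opk_to_rotation(0.5, -0.3, 92.1))
```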

Relate the orientation data to the images

The orientation data file contains information that ArcGIS Reality Studio will use in reconstructing the scene. There are multiple camera and orientation tracking systems, and the relationship between the position data and the cameras is established in different ways, depending on the convention of the system that collected your images. The following are the two main ways:

  • The ASCII orientation file may include a column with the camera names.
  • The image file name includes a string that identifies the camera.

In this tutorial, the image file names contain a string to identify the camera.
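To illustrate the file-name convention, the following sketch assigns images to cameras by searching for a camera ID substring in each file name. The ID codes are the ones you will enter in the steps below; the sample file names are invented, and this logic only mimics what ArcGIS Reality Studio does for you.

```python
CAMERA_IDS = {          # codes from this tutorial's Camera System Assignment steps
    "Left": "_11000",
    "Forward": "_11900",
    "Nadir": "_NAD",
    "Backward": "_11600",
    "Right": "_11100",
}

def camera_for_image(file_name):
    """Return the camera name whose ID code appears in the image file name."""
    for camera, code in CAMERA_IDS.items():
        if code in file_name:
            return camera
    return None  # no camera ID found in the file name

# Hypothetical file names, just to show the matching logic
for name in ["0123_11900_000457.jpg", "0123_NAD_000457.jpg"]:
    print(name, "->", camera_for_image(name))
```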

  1. In the Camera System Assignment section, click the options button and choose Import Template.

    Import Template option

  2. Browse to the Frankfurt_City_Collection folder, select Camera_template_Frankfurt_UM1.json, and click OK.

    The template file

    The Camera System Assignment section updates to include a table for the camera names and ID values.

    Table added with camera names and IDs

    Next, you'll enter the codes that correspond to the cameras in the image file names.

  3. For Left, in the Camera ID column, type the code _11000.
  4. For Forward, in the Camera ID column, type the code _11900.
  5. For Nadir, in the Camera ID column, type the code _NAD.
  6. For Backward, in the Camera ID column, type the code _11600.
  7. For Right, in the Camera ID column, type the code _11100.

    The Camera System Assignment table

    The Camera System Assignment table now matches the camera names to the Camera ID codes embedded in the image file names.

    The Capture Session Selection section appears below the Camera System Assignment table.

    This section allows you to choose to process specific camera sessions or all camera sessions. In this tutorial, you'll process all of the camera sessions.

  8. Click the button for Frankfurt_Flight_RS to select the complete capture session, including all five camera sessions.

    Capture Session Selection option

    The capture sessions are checked.

    Capture sessions are checked.

  9. Click Next.

    Click Next.

Review the camera sessions

The Camera Sessions section allows you to review the parameters of the cameras used to capture the images.

  1. In the Camera Sessions section, click Forward_Frankfurt_Flight_RS.

    Forward camera parameters

    The next sections contain information about the camera used to collect the forward-looking images. This information was included in the Camera_template_Frankfurt_UM1.json file that you imported earlier.

  2. Scroll down to see the data in the Sensor Definition section.

    Sensor definition data

    Each of the camera sessions listed has a corresponding table of data documenting the physical properties of the camera and lens system used to capture that set of images. (A short sketch after these steps shows how such parameters relate to the ground sample distance of the imagery.)

    Note:

    If the camera data had not been imported from the Camera_template_Frankfurt_UM1.json file, you could manually enter the data from your imagery provider.

  3. Optionally, click the other camera sessions and review their parameters.
  4. Click Finish.

    The capture session is constructed. This process will take a minute or so. The Project Tree pane appears.

    Project Tree pane

    The Process Manager pane also appears. It shows the status of the current process.

    Process Manager pane

    The globe view appears, showing the locations of the camera captures.

    Globe view
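The sensor definition values matter because, together with the flying height, they determine the ground sample distance (GSD) of the imagery. The alignment guidance later in this tutorial equates 1.5 GSD with 12 cm, which implies a nominal GSD of about 8 cm for this dataset. The sketch below shows the standard relationship; the numeric inputs are placeholders, not the UrbanMapper specification.

```python
def ground_sample_distance(flying_height_m, focal_length_mm, pixel_size_um):
    """Approximate GSD in metres for a nadir frame camera.

    GSD = flying height * physical pixel size / focal length.
    """
    pixel_size_m = pixel_size_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return flying_height_m * pixel_size_m / focal_length_m

# Placeholder values for illustration only (not the tutorial camera's specification):
# 1,000 m flying height, 100 mm focal length, 4.6 micron pixels -> about 4.6 cm GSD
print(round(ground_sample_distance(1000, 100, 4.6), 3), "m")
```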

Link capture sessions to the image files

Next, you'll connect the capture sessions you've selected to the image file data location. You'll perform this step for each camera session.

  1. In the Project Tree pane, expand the entry for Forward_Frankfurt_Flight_RS.

    Forward_Frankfurt_Flight_RS in the Project Tree pane

    The current number of images is 0.

    You'll connect the image data to the forward-looking images.

  2. In the Project Tree pane, next to Forward_Frankfurt_Flight_RS, click Add images.

    Add images option

  3. Browse to the Frankfurt_City_Collection folder, select the jpg folder, and click OK.

    The jpg folder where the images are stored

  4. In the One or more images are not optimized window, click No.

    Process manager as images are linked

    Optimized images use a tiled format including image pyramids, allowing them to display faster. However, optimization is not required for the upcoming workflow.

    When the process is complete, Forward_Frankfurt_Flight_RS shows 160 images.

    Now, you'll add the images to the next camera session.

  5. In the Project Tree pane, next to Right_Frankfurt_Flight_RS, click Add images.

    Right capture session images

  6. In the Select images, folders or list files window, select the jpg folder and click OK.

    The jpg folder in the Select images, folders or list files window

    You'll repeat this process for each of the camera sessions.

  7. In the Project Tree pane, next to Nadir_Frankfurt_Flight_RS, click Add images.
  8. In the Select images, folders or list files window, select the jpg folder and click OK.
  9. In the One or more images are not optimized window, click No.
  10. Next to Left_Frankfurt_Flight_RS, click Add images. Choose the jpg folder and choose not to optimize the images.
  11. Next to Backward_Frankfurt_Flight_RS, click Add images. Choose the jpg folder and choose not to optimize the images.

    After the capture sessions have been linked to their images, you can visualize the image footprints.

  12. In the Project Tree pane, click the Visualization tab.

    Visualization tab

    For a better understanding of the dataset, you can visualize it in different ways. In this example, you will turn off visibility for all capture session items and then display the image footprints for the forward camera.

  13. Next to Frankfurt_Flight_RS, click the Toggle visibility button.

    Toggle visibility button

    Visibility is turned off for all capture session items at once.

  14. Under Forward_Frankfurt_Flight_RS, turn on visibility for Image Footprints.

    Footprints turned on for the Forward_Frankfurt_Flight_RS capture session

    The image footprints are shown in the globe view.

    Image footprints shown in globe view

  15. Turn off visibility for Image Footprints.

Define the region of interest and add water bodies

The last two steps before aligning the images are to define the region of interest for the project and to identify where water bodies are located.

  1. On the ribbon, click the Home tab. In the Input section, click Geometries, Import Geometry, and Region of Interest.

    Region of Interest option

  2. In the Select a region of interest geometry window, in the Computer section, browse to the Frankfurt_City_Collection folder.

    The geodatabase in the folder

  3. Double-click the AOI_and_Waterbody.gdb geodatabase to expand it. Click the Frankfurt_AOI feature class and click OK.

    The AOI feature class

    The Frankfurt_AOI polygon feature class is added to the globe view. It appears with a dashed orange outline.

    The AOI on the globe view

    Specifying a region of interest geometry prevents unnecessary data from being processed, minimizing total processing time and storage requirements. (A short sketch after these steps illustrates the idea.)

  4. On the ribbon, on the Home tab, in the Input section, click Geometries, Import Geometry, and Water Body.

    Water Body option

  5. In the Select a water body geometry window, in AOI_and_Waterbody.gdb, click Frankfurt_waterbody and click OK.

    The Frankfurt_waterbody feature class

    The Frankfurt_waterbody polygon feature class is added to the globe view.

    The water body polygons on the globe view

    Specifying water body geometries flattens and simplifies areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

    The capture session has been fully defined. You can now save the project.

  6. On the ribbon, click Save Project.

    Save Project button

  7. In the Save Project As window, browse to a location with plenty of free disk space, type 2023-Frankfurt_Reality_Studio_Tutorial, and click Save.
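As noted in the region of interest step, processing can conceptually be limited to images whose footprints intersect the project area. The sketch below illustrates that idea with the shapely library; the geometries are invented, and this is not how ArcGIS Reality Studio performs the selection internally.

```python
from shapely.geometry import box

# Hypothetical region of interest and image footprints in the project's
# projected coordinate system (ETRS 1989 UTM Zone 32N); the values are made up.
region_of_interest = box(476000, 5554000, 480000, 5558000)

footprints = {
    "img_0001.jpg": box(475500, 5553500, 476800, 5554800),  # overlaps the ROI
    "img_0002.jpg": box(490000, 5560000, 491000, 5561000),  # far outside the ROI
}

relevant = [name for name, fp in footprints.items()
            if fp.intersects(region_of_interest)]
print("Images contributing to processing:", relevant)
```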

You have defined the capture sessions, set the coordinate system and camera properties, linked the position and orientation data to the captured images, and saved the project. You are now ready to begin adjusting the images to start creating products from them.


Perform an alignment

The capture session was built from GNSS navigation data recorded during the photo flight. This exterior orientation information is typically not accurate enough to create products such as true orthos or 3D meshes of high geometric quality. To optimize the navigation data, you'll run an alignment. During alignment, also called aerotriangulation, individual images are connected by determining homologous points (tie points) between overlapping images. With many of these image measurements, the image block can be mathematically adjusted to refine the orientation parameters for each image. Additional accuracy can be obtained by manually measuring ground control points.
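Conceptually, the adjustment minimizes reprojection error: the pixel distance between where a tie point is measured in an image and where the current orientation parameters predict it should appear. The sketch below only illustrates that error measure on invented numbers; it is not the solver that ArcGIS Reality Studio runs.

```python
import numpy as np

# Measured tie point positions (pixels) and the positions predicted by the
# current exterior/interior orientation; both arrays are invented examples.
measured = np.array([[1024.3, 768.1], [2010.7, 455.9], [512.0, 1300.4]])
predicted = np.array([[1024.9, 767.6], [2009.8, 456.5], [513.1, 1299.7]])

residuals = measured - predicted                   # per-measurement error in x and y
errors = np.linalg.norm(residuals, axis=1)         # reprojection error per point (px)
rms = np.sqrt(np.mean(errors ** 2))                # RMS reprojection error (px)

print("per-point errors:", np.round(errors, 3))
print("RMS:", round(float(rms), 3), "px")
# Bundle block adjustment iteratively updates the camera orientations (and the
# tie point coordinates) to minimize the sum of these squared residuals.
```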

Create an alignment

To align the images, you must add an alignment to the project.

  1. On the ribbon, on the Home tab, in the Processing section, click New Alignment.
    New Alignment button
  2. In the Alignment pane, for Alignment Name, type Frankfurt_AT.

    Alignment Name parameter

  3. In the Camera Sessions section, check Dataset.

    This alignment will use all the camera sessions, so they should all be checked.

    All the capture sessions turned on

  4. In the Control Points section, click Import Control Points.

    Import Control Points option

  5. In the Select input file window, browse to the Frankfurt_City_Collection folder and open the GroundControlPoints folder. Select Ground_Control_Points.txt and click OK.

    Ground_Control_Points.txt file

  6. In the Control Points Import window, for Spatial Reference, click the Select coordinate system button.

    Select coordinate system button

  7. In the search box, type 25832 and press Enter.

    XY coordinate system

  8. Click ETRS 1989 UTM Zone 32N.
  9. Click the Current Z box. In the Z Coordinate Systems Available box, type 7837 and press Enter.

    Z coordinate system

  10. Click DHHN2016 height and click OK.
  11. For Choose a delimiter, accept the default delimiter, comma.

    Comma delimiter chosen

  12. Click Next.
  13. Review the column labels.

    The column labels

    The default values are correct.

  14. Click Import.

    The control points are added to the globe view.

    Control points on the globe view

  15. In the Alignment pane, in the Control Points section, check Dataset. Expand Dataset to see the new Ground_Control_Points data.

    The new Ground_Control_Points item

    The Standard Deviations section allows you to modify the given accuracy (a priori standard deviations) of the image positions (XYZ position and rotation angles) and of the imported ground control points. For this tutorial, the default values are correct.

    The Region of Interest parameter allows you to specify a region to adjust. For this tutorial, you'll perform alignment on the entire dataset, so there is no need to set a region of interest.

  16. Click Create.

    Click the Create button.

    Clicking Create adds the Alignment tab to the ribbon. The alignment is ready to run.

    Running the alignment will start the automatic tie point matching and bundle block adjustment process. This is a computationally intensive process, and the duration of the processing will depend on your computer hardware.

    On a computer with 128 GB of RAM, an AMD Ryzen 24-core CPU at 3.8 GHz, and an NVIDIA GeForce RTX 4090 GPU, the process takes approximately 30 minutes.

  17. Click Run.

    Run button

    In the Process Manager pane, the Alignment process status appears.

    Alignment status in Process Manager

  18. Expand the Alignment process to see the steps.

    The Alignment process

    The Process Manager allows you to keep track of the stages of the Alignment process and their status.

    This might be a good time to take a break or work on something else while the process runs.

    When the process finishes, you can see it listed in the Process Manager pane.

    This process is complete message

    Once the alignment finishes, the Quality Assurance view opens. This window shows the key statistics of the bundle block adjustment.

    Quality Assurance view when the alignment is complete

    The globe view is also updated with the green shapes representing camera poses, now limited to the set of available images.

    Camera poses on the globe view

Measure ground control points

Ground control points are clearly recognizable points on the ground whose exact positions are known. They can be established by field teams in the form of prepared markings on the ground, but it is also common practice to use distinct existing features, such as manholes or road markings, for this purpose.

You can measure ground control points before or after the initial alignment. Doing it after the initial alignment has the benefit that the software has already refined the image positions and can provide a better indication of where to measure.

  1. In the Quality Assurance view, on the Overview tab, expand the Count section.

    The Image Measurements column for the Ground Control Points row indicates that no image measurements have been done for the ground control points. You will add some now.

    Image Measurements with a value of 0

  2. Close the Quality Assurance view.
  3. In the Project Tree pane, scroll down to the Alignments section and click Frankfurt_AT.

    The Alignment tab reappears on the ribbon.

  4. On the ribbon, on the Alignment tab, in the Tools section, click Image Measurements.

    Image Measurements button

    The measurement window appears. The Globe pane shows a globe view of the project area and a Control Points table with the available ground control points. The Image pane shows an image, a table, and a set of image measuring tool instructions.

    Image Measurements overview

  5. Review the Image Measurements information. When finished, close the Image Measurements panel.

    Image control point measuring instructions

  6. In the Control Points table, click the row number for point 990004.

    Ground control point 990004.

    When you click the row number, the Image section updates to show all of the images that contain point 990004. The first image is shown, along with a pink circle indicating the projected point location.

    Result of selecting ground control point 990004

  7. On the image, zoom to the pink circle.

    Location of first projected point

    First, you will make sure that the ground control point is visible. A ground control point will not be visible in every image, because each was taken from a different location and angle. Trees, buildings, cars, or pedestrians may block the view, and glare or shadow may make a ground control point blend into the background of the image. When measuring, you can skip images where the point is not visible.

    Fortunately, in this image, the ground control point is visible as a light spot with a darker circle surrounding it.

  8. Click the center of the ground control point in the image.

    The location where you clicked is now marked as a measured point. The Status column of the table for this image also updates with a green Measured point symbol.

    The first measured point appears on image and in table.

  9. Press the F key to move to the next image.
  10. Click the center of the ground control point in the image.

    The second measured point is added.

    After this point is added, the Find Suggestions button is enabled. This tool is designed to assist you in measuring the ground control points. It uses the projected points and inspects the imagery to find locations that look like the point that you have marked in the current image.

  11. Click Find Suggestions.

    Click the Find Suggestions button.

    Based on this latest measurement, the tool scans the remaining images for this ground control point. This may take a minute or so.

  12. In the table, click the first suggestion, indicated by a red symbol in the Status column.

    Click the first suggestion.

    The suggestion is good, so you will accept it.

  13. Click Accept to accept the suggestion.

    Click Accept.

    The measured point is added and the next image is displayed. It also has a suggested point.

  14. Click Accept to accept the suggestion.

    The next point does not have a suggestion, so you will manually add it by clicking the ground control point in the image, the way you did for the first two.

  15. Click the center of the ground control point in the image.

    The measured point is added and the Find Suggestions tool becomes active again.

    Find Suggestions is active.

  16. Click Find Suggestions.

    The tool scans the images for this ground control point. This may take a minute or so.

    The tool makes suggestions for more of the images. You can use the table to navigate through these and manually accept each one by clicking the Accept button, or you can accept all of the suggestions by clicking the Accept All button.

  17. Click Accept All.

    Click Accept All.

    Now more than 40 of the 128 images showing ground control point 990004 have measurements.

Collect points for the Forward camera

You can scroll through the table to look at the distribution of measured points. The Camera column indicates which camera (Left, Right, Forward, Backward, or Nadir) captured each image. You want to ensure that you have at least five measurements for each camera. To check this, you will sort the table by the Camera column.
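The per-camera check described here amounts to counting measured images grouped by the Camera column, as in this small sketch; the camera assignments are invented to mirror the situation in this tutorial, where only the Forward camera is underrepresented.

```python
from collections import Counter

# Camera that captured each image where the control point has been measured;
# invented example data, not measurements from the Frankfurt flight.
measured_cameras = (["Left"] * 5 + ["Right"] * 5 + ["Backward"] * 5 +
                    ["Nadir"] * 6 + ["Forward"] * 1)

counts = Counter(measured_cameras)
for camera in ["Left", "Right", "Forward", "Backward", "Nadir"]:
    status = "ok" if counts[camera] >= 5 else "needs more measurements"
    print(f"{camera:9s} {counts[camera]:2d}  {status}")
```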

  1. Scroll to the end of the table and click the Camera column header.

    Click Camera to sort on that column.

    Now the table of images is sorted by camera.

  2. Scroll down the table to see whether each camera is well represented.

    Rows with values in the Reprojection Error columns have measured points. All of the cameras have at least five measured points, except for the Forward camera, which has only one.

  3. Click one of the Forward camera images to measure the control point position.

    Click a Forward camera image.

  4. If no ground control point is visible, press the F key to move to the next image.
  5. Click the center of the ground control point in the image.

    Click to measure a forward point.

  6. Click Find Suggestions.

    Several more images have suggested points.

    Forward camera images with suggested points

  7. Click each of the images with suggested points, verify the suggested point matches a good location for a measured point, and click Accept.

    You can also review the points and click Accept All.

    Now more than 50 of the images showing ground control point 990004 have measurements. Each camera is well represented. This is enough.

Collect measurements for other ground control points

You've collected measurements for ground control point 990004. The next step is to continue collecting measurements for the other ground control points. You should collect representative measurements for each of the cameras for at least five of the other ground control points. Use the same techniques you learned on the first ground control point.

  • If the ground control point is not visible in the image (for example, if it is hidden by a car, building, or tree), press F to skip the image.
  • If a suggested point location appears correct, click Accept.
  • If the suggested point location does not appear correct, click the location of the ground control point.
  • Use the Camera column to ensure that you collect measurements for each available camera.

  1. In the Control Points table, click the row number for point 990007.

    Select the next ground control point.

    The Image section updates to show images containing point 990007.

    Result of selecting the second of the GCPs, number 990007

    No ground control point is visible in the first image.

    Sometimes, ground control points are not clearly marked on the ground, but instead make use of conspicuous existing locations. When this is the case, the surveyor usually notes the location in a set of field notes and takes a picture showing the GPS antenna at that location. For this project, the location of each ground control point was documented in a series of images to simulate this sort of field data. You’ll consult the images to learn the location of ground control point 990007.

  2. Open the Frankfurt_City_Collection folder that you downloaded and unzipped at the start of this tutorial. Open the GroundControlPoints subfolder.
  3. In the GroundControlPoints folder, double-click 990007.JPG to view the image.

    Ground control point location reference images

    The green marker in the image indicates the location of ground control point 990007, at the corner of the crosswalk.

    Ground control point at corner of crosswalk

  4. Return to ArcGIS Reality Studio. Zoom in on the image and click the corner of the crosswalk.

    Add measure point at corner.

  5. Continue measuring points. Keep working until you have collected about five measurements for each of the five cameras for control point 990007.

    You'll choose the next control point to assess from the globe view, instead of the table.

  6. On the globe view, zoom and pan until you can see the triangle control point symbols.

    Any control point that has no image measurements appears on the map as a yellow triangle.

    Control points without measurements shown as yellow triangles

  7. At the top of the globe view, click the Rectangle selection tool. Drag a box around one of the yellow triangles to select it.

    Rectangle selection tool

    The corresponding row in the Control Points table is selected.

    Control point 990005 selected on the globe and in the table

    The Image section also updates to show all of the images containing the selected control point.

    Tip:

    If no ground control point is visible in the image, consult the images in the Frankfurt_City_Collection folder.

  8. In the Image section, add measured points until you have collected at least five measurements for each of the five cameras.
  9. Select each of the remaining control points in turn and for each one, collect at least five measurements for each camera.

    All GCPs have measurements.

    The Control Points table shows statistics for the reprojection errors for each control point. If some control points have higher reprojection error statistics than others, you can click the header for the row in the Control Points table, and then in the Image section, search for and remeasure or remove images with high Reprojection Error values.

Remove a measurement

If a control point has a high reprojection error, you may need to remove a measurement or remeasure it.

You can examine the statistics in the Control Points table.

In this example, 990002 has the highest Maximum Reprojection Error value.

Point 990002 has the highest Maximum Reprojection Error value.

The values you see in your table will depend on the measurements you make and will not match these sample images.
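Finding the measurement to remove is essentially a sort on the reprojection error, as in the following sketch; the image names and error values are invented.

```python
# Reprojection error in pixels for each image measurement of control point
# 990002; the values are invented for illustration.
measurements = {
    "img_0107.jpg": 0.41,
    "img_0233.jpg": 0.58,
    "img_0310.jpg": 2.73,   # suspiciously large compared to the rest
    "img_0412.jpg": 0.49,
}

worst_image, worst_error = max(measurements.items(), key=lambda kv: kv[1])
print(f"Candidate for remeasurement or removal: {worst_image} ({worst_error} px)")
```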

  1. Click the row header for point 990002.
  2. In the Image table, click the column header for Reprojection Error XY [px].

    The table is sorted by Reprojection Error XY [px] value.

  3. Scroll to see the image with the highest Reprojection Error XY [px] value.
  4. Click the highest Reprojection Error XY [px] value.

    Click the high value.

  5. Click Remove.

    Remove measurement.

    In the Control Points table, check that the values improved.

    Values improved in Control Points table.

Change a ground control point to a check point

Check points are used for evaluating and reporting the accuracy of the alignment. Their 3D positions and image residuals are estimated from the output image orientations for quality assurance purposes. You'll convert one of the ground control points to a check point.

  1. In the Control Points table, click the row number for ground control point 990006.

    Fourth row

    The row for this ground control point is highlighted.

  2. On the toolbar at the top of the Control Points table, click Set Role and choose CP.

    Set role as check point.

    In the table, the role changes to CP, indicating this is a check point.

    Role is now check point.

Refine the alignment

After adding and measuring control points, or changing other settings of the alignment, you will run the alignment again to refine the positions based on the new information. This reruns the bundle-block adjustment, but it will be much faster than the initial alignment process.

  1. On the ribbon, on the Alignment tab, in the Process section, click Run.

    Run button

    The Process Manager opens and shows the progress on the alignment process. After one or two minutes, the process completes.

    The Quality Assurance view opens.

    The Quality Assurance view

    To check the quality of the alignment results, examine the statistics on the Quality Assurance view.

    For best results with this data, keep the following in mind:

    • The overall Sigma 0 value should be less than 1 px for a well-calibrated photogrammetric camera.
    • The RMS of the tie point reprojection error is also expected to be less than 1 px.
    • The RMS for the horizontal and vertical object residuals for control points should be less than 1.5 GSD (12 cm).

    Also check the count data, such as the number of automatic tie points per image and image measurements per tie point, which indicate how well tie points are distributed in the project area and how well adjacent images are connected by a common measurement. You can also review the tie point visualization in the globe view.

    Note:

    These steps are meant to give you basic guidance for analyzing the alignment results. Doing an in-depth analysis of the quality requires knowledge about project requirements and specifications as well as knowledge about the quality of the input data.

  2. In the Quality Assurance view, click General Information and view the Sigma 0 value.

    The Sigma 0 value

    The value in this example is 0.7616, which is a good value for this dataset.

  3. On the right side of the Quality Assurance view, scroll down to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors chart. Click the View table button.

    Tie point reprojection errors chart

    Tie point reprojection errors table

    The RMS value for the tie point reprojection errors in this example is 0.762, which is a good value for this dataset.

  4. On the right side of the Quality Assurance view, scroll up to the 3D Residuals section. View the Ground Control Points Residuals section.

    Ground Control Points Residuals section

    Ground Control Points Residuals values

    The RMS value for the Ground Control Points Residuals in this example is 0.079 meters, which is acceptable for this exercise.

  5. On the left side of the Quality Assurance view, scroll down and expand the Count section.

    Count statistics

    In this example, there are six ground control points with 463 image measurements and one check point with 98 image measurements.

  6. Optionally, review the other quality assurance statistics and measurements.
  7. On the Quality Assurance view, click the Control Points tab.

    Control Points tab

    The Control Points table appears.

    Control Points table open in the Quality Assurance view

    You can use this table to check the X, Y, and Z residuals for each control point. Unexpectedly large Delta XYZ values may indicate that points need to be remeasured.

    You can also review the geographic distribution of the project data (ground control points, automatic tie points, and image positions).

  8. On the ribbon, on the Alignment tab, in the Display section, click Automatic Tie Points.

    Automatic Tie Points button

    The automatic tie points are drawn in the globe view.

    The automatic tie points in the Display pane

    Note:

    If you want to see the automatic tie points unobstructed by the green camera symbols, in the Project Tree pane, click the Visualization tab. Under Alignments and Results, expand each group and turn off visibility for each Camera Poses layer.

  9. Click the Automatic Tie Points drop-down arrow and choose RMS of Reprojection Errors.

    RMS of Reprojection Errors option

    The globe view updates to show the automatic tie points symbolized by the RMS of the reprojection errors.

    Updated symbology

  10. On the Quality Assurance view, click the Automatic Tie Points tab.

    The table shows the automatic tie points.

    Automatic tie points

    You can view and sort the data in this table to identify the automatic tie points with the highest error values.

  11. On the Quality Assurance view, click the Overview tab. On the right side, scroll to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors histogram.

    Automatic Tie Points Reprojection Errors histogram

    The symbology of the histogram matches the symbology of the globe view.

  12. On the ribbon, on the Alignment tab, in the Results section, click Report.

    Report button

  13. In the Create Alignment Report window, browse to a location to save the report. For Name, type Frankfurt_AT_report.

    Create Alignment Report window

  14. Click Save.

    The PDF is saved on your computer. It is a way to share the quality assurance statistics of the alignment.

    Sample report

  15. Close the Quality Assurance view and save the project.

You've performed an initial alignment, added control points, refined the alignment, and examined the alignment statistics. You also exported a PDF copy of the alignment statistics to document your work and share it with your stakeholders.

Next, you'll use the aligned data to create a reconstruction.


Perform a reconstruction

Now that the alignment process is complete and the results have been examined and determined to be high quality, you are ready to create output products. For this tutorial, you'll create a 3D point cloud and a 3D mesh.

Create a reconstruction

The first step to generate the products is to create a reconstruction.

  1. On the ribbon, click the Home tab. In the Processing section, click New Reconstruction.

    New Reconstruction button

  2. In the Reconstruction pane, for Reconstruction Name, type Frankfurt_RS_3D.

    Reconstruction Name parameter

    This reconstruction session will be used to create two 3D outputs.

  3. For Capture Scenario, click the drop-down list and choose Aerial Oblique.

    Aerial Oblique option

    Choosing a scenario sets some output products and processing settings.

    The Aerial Oblique setting is useful now because the sample data is a multihead capture session, and all the available imagery will be used to create the output 3D products. The Aerial Nadir setting is more useful when you are creating 2D products. For optimal quality, 2D products should be produced using only nadir images.

  4. In the Camera Sessions section, check the Frankfurt_AT alignment session that you created.

    Frankfurt_AT alignment session

    The alignment is selected.

    Selected alignment

  5. In the Products section, review the output products.

    The Point Cloud and Mesh products are highlighted.

    Point Cloud and Mesh output products

    The SLPK mesh format will be exported by default. You can check other formats for the output mesh if you choose.

  6. In the Workspace section, specify a local folder for the output of the reconstruction.

    The results of the reconstruction process will be stored there. Ensure that there is enough disk space for the output.

    Note:
    The reconstruction process is designed to support processing in a distributed environment, with a local network of workstations running ArcGIS Reality Studio serving as processing nodes. To run efficiently in such an environment, the process is split into individual tasks and the project is divided into manageable subprojects.

    To run a reconstruction, you need to specify the following:

    • A workspace, where the results of the reconstruction run are collected. The workspace must be accessible for each processing node.
    • A temporary processing folder, which is used to store intermediate processing results for the automatically defined subprojects.

  7. In the Optional section, for Quality, click Ultra.

    Ultra output quality

    The Ultra setting will run the 3D reconstruction at the native image resolution. This will take a longer time to process than the High quality option, but the results will look better.

    You can choose the High quality option if you want the output to have slightly reduced detail and lower texture resolution.

  8. For Region of Interest, choose Frankfurt_AOI.

    Frankfurt_AOI option

    Region of Interest allows you to limit processing for your output products to the images relevant to your project.

  9. For Water Body Geometries, choose Frankfurt_waterbody.

    The Water Body Geometries parameter is used to flatten and simplify areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

  10. For Type, choose Precise.

    Frankfurt_waterbody option

    You should choose Precise when the water body polygons represent the exact outline of the water body, excluding nonwater content such as bridges and boat jetties. You should choose Coarse when the polygons represent a rough outline of the water area. In both cases, the Water Body Geometries need to represent the elevation of the water surface.

  11. For Correction Geometries, accept the default value of None.
  12. For Tile Size, accept the default value of Auto.
  13. Click Create.

    Create button

    This finishes the reconstruction setup. A Reconstructions section is added to the Project Tree pane.

Run the reconstruction

Now that the reconstruction has been set up, the next step is to run it. This will take some time, depending on your computer resources. On a single computer with 128 GB of RAM, an AMD Ryzen 24-core CPU at 3.8 GHz, and an NVIDIA GeForce RTX 4090 GPU, the process takes approximately 6 hours. Adding more processing nodes makes the process faster.

  1. On the ribbon, on the Reconstruction tab, in the Processing section, click Submit.

    Click Submit.

    After you click Submit, the Process Manager pane will show the reconstruction process as pending.

    Process is pending.

  2. On the ribbon, on the Reconstruction tab, in the Workspace section, click Start Contribution.

    Start contributing to the reconstruction process.

    Note:

    If a warning appears, prompting you to select a temporary processing location, click OK to open the Options window. On the General tab, choose a Temporary Processing Location on your local disk with at least 1 terabyte of available space. It should be separate from the Results folder that you specified earlier.

  3. Click OK.

    The Process Manager pane now shows the status of the reconstruction process.

  4. In the Process Manager pane, click the Open the job monitor pane and Open the workspace manager pane buttons.

    The Process Manager pane for the reconstruction session

    You can use the Job Monitor to get an overview of which task of a reconstruction job is running.

    Job Monitor

    You can use the Workspace Monitor to get an overview of which machine is contributing to a reconstruction job.

    Workspace Monitor

  5. In the Process Manager pane, on the Frankfurt_RS_3D card, enable the Show on Map option.

    Show on Map

    When it's ready, a processing tile will appear on the globe view. The color of the tile indicates its status: gray means that the project is pending, blue means that it is processing, and green means that it is complete. For this example, it takes about half an hour for the processing tile to appear.

  6. Click the Globe tab so you can view the processing tile when it is ready.
  7. Wait for the reconstruction process to run.

    For this example, it takes approximately 6 hours on the single example machine.

    The Process Manager pane will indicate when the process is complete.

    Process Manager pane showing process complete.

    On the globe view, the processing tile will appear green, indicating it is complete.

    Processing tile

  8. When the process is complete, on the ribbon, on the Reconstruction tab, in the Workspace group, click Stop Contribution.

    Stop Contribution button

    This will allow the system to take on local processing tasks again.

  9. Click OK.

    Next, you'll view the results.

  10. In the Process Manager pane, on the Frankfurt_RS_3D card, turn off Show on Map.
  11. In the Project Tree pane, click Visualization.

    Click Visualization.

  12. In the Project Tree pane, scroll down to the Reconstructions section. Expand Frankfurt_RS_3D and Products.
  13. Turn on visibility for the Mesh layer.

    Visibility turned on for the Mesh layer

    The Mesh layer appears on the globe view.

    Mesh layer

    By default, the 3D point cloud is only stored in the .las format. The Create layer button will trigger the creation of a scene point cloud layer (.slpk) as well.

  14. Next to Point Cloud, click Create layer.

    Create layer option

  15. When the point cloud layer creation process is complete, turn off visibility for the Mesh layer and turn on visibility for the Point Cloud layer.

    View the resulting point cloud.

  16. Explore the results.
  17. On the ribbon, on the Reconstruction tab, in the Results group, click Open Results Folder.

    This opens the Results folder in File Explorer. It contains the 3D point cloud in .laz files and the 3D mesh in i3s (SLPK) format. Since you clicked the Create layer button, you will also see the 3D point cloud in i3s (SLPK) format. Use the .slpk files to add the products to ArcGIS Online.

    If you need to deliver your reconstruction products in a different tiling scheme or projection, on the Reconstruction tab, click the Export button and choose an alternative export option.

  18. If you do not run the process, view the results.

In this tutorial, you have created an ArcGIS Reality Studio project, added a capture session, performed an initial alignment, measured ground control points, and refined the alignment. You evaluated the quality of the alignment and determined that it was acceptable. You used the alignment to create a reconstruction, and you used that reconstruction to create point cloud and 3D mesh outputs. These can be shared to ArcGIS Online or used with local applications on your computer. You can use a similar process in the reconstruction stage to create 2D products such as true orthophotos and digital surface models. The main difference for creating 2D outputs is that you would use the aerial nadir scenario and limit the camera session to nadir camera captures.
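If you want to script the sharing step mentioned above, the ArcGIS API for Python can upload a scene layer package (.slpk) and publish it as a hosted scene layer. The sketch below is a general pattern, not a step from this tutorial; the credentials, item details, and file path are placeholders that you would replace with your own.

```python
from arcgis.gis import GIS

# Sign in to ArcGIS Online (credentials and item details are placeholders).
gis = GIS("https://www.arcgis.com", "your_username", "your_password")

# Add the scene layer package as an item; the path is a placeholder for your
# reconstruction output, not a file created by this tutorial's exact steps.
slpk_item = gis.content.add(
    {"title": "Frankfurt 3D mesh", "type": "Scene Package", "tags": "reality, mesh"},
    data=r"D:\ReconstructionResults\Frankfurt_RS_3D_mesh.slpk",
)

scene_layer = slpk_item.publish()   # creates a hosted scene layer from the package
print(scene_layer.homepage)
```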

You can find more tutorials in the tutorial gallery.