Create a capture session

A capture session combines all the information from a single photo flight that is required for the alignment and reconstruction steps. Capture sessions can be built for imagery captured with nadir-only sensors or with multihead sensor systems, together with the corresponding positioning information for each image.

In a nadir sensor system, the sensor points straight down and captures imagery of the surface under it. The images collected this way are referred to as nadir images.

The following image is an example of nadir imagery:

Example nadir imagery

The following image is a diagram of a nadir camera cone and image footprint.

Nadir camera cone and image footprint diagram

In a multihead sensor system, sensors point in multiple directions, at angles forward and backward and to the sides. The images collected at an angle are referred to as oblique images. Multihead systems may also include a sensor to collect nadir images.

The following image is an example of oblique imagery.

Example oblique imagery

The following image is a diagram of a multihead sensor, showing the camera cones and image footprints.

Multihead sensor camera cone and image footprint diagram

Positioning information may be based on navigation data or on high-accuracy positions derived in an external aerotriangulation process.

The data you'll add to your capture session consists of the following:

  • 873 images captured with a multihead sensor system (IGI UrbanMapper)
  • An ASCII file including the positioning information per image (GNSS_IMU_whole_Area.csv)
  • A file including the necessary sensor specifications (Camera_template_Frankfurt_UM1.json)
  • A file geodatabase containing the geometry for the region of interest and a water body (AOI_and_Waterbody.gdb)

Download the data

The data for this tutorial takes up about 26 GB of disk space.

  1. Download the Frankfurt_City_Collection.zip file.
    Note:

    Depending on your connection speed, this 26 GB file may take a long time to download.

  2. Extract the .zip file to a folder on your local machine, for example D:\Datasets\Frankfurt_City_Collection.

Start a capture session

Next, you'll create the capture session.

  1. Start ArcGIS Reality Studio.
  2. On the Welcome screen, click New Capture Session.

    New Capture Session option

  3. In the Capture Session pane, for Capture Session Name, type Frankfurt_Flight_RS.
  4. For Orientation File Format, click ASCII Text File (.txt, .csv, etc).

    ASCII text file orientation file format

    A notice appears that the data must be in a supported orientation data format convention.

  5. For Orientation File Path, browse to the Frankfurt_City_Collection folder that you extracted. Select GNSS_IMU_whole_Area.csv and click OK.

    GNSS_IMU_whole_Area.csv file

  6. For Spatial Reference, click the browse button.

    Spatial Reference browse button

  7. In the Spatial Reference window, for Current XY, in the search box, type 25832 and press Enter.

    25832 in the search box

    The search for this Well Known ID (WKID) code returns the ETRS 1989 UTM Zone 32N coordinate system. This is the XY coordinate system used in the position file.

  8. In the list of results, click ETRS 1989 UTM Zone 32N.

    You've set the XY coordinate system. Next, you'll set the Z coordinate system.

  9. Click Current Z.

    Current Z option

  10. For Current Z, in the search box, type 7837 and press Enter.

    7837 in the search box

  11. In the list of results, click DHHN2016 height.

    You've set the Z coordinate system.

  12. In the Spatial Reference window, click OK.
  13. In the Data Parsing section, for Parse from row, type 22 and press Enter.

    Parse from row parameter

    The GNSS_IMU_whole_Area.csv orientation file that you imported is a comma-delimited text file. It includes a header section of 21 lines; the data that ArcGIS Reality Studio will use to process the images begins at line 22. Entering 22 in this box skips the header rows.

    Note:

    Another way to skip the header is to specify the character that begins comment rows. In this file, the # symbol is the comment character, so you could also skip the header by typing # in the Symbols used to ignore rows box.

    Once ArcGIS Reality Studio can read the file correctly, the number of detected orientations is listed in a green highlight box. In this case, 7,775 orientations are detected. These are the orientations collected during the flight. This count is greater than the 873 images used in the tutorial because the tutorial images are a subset of a larger collection. A short parsing sketch after these steps shows what the header settings correspond to.

  14. Click Next.

    Next button
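
If you are curious what the Parse from row setting corresponds to outside the software, the following minimal Python sketch reads the orientation file, skips the 21 header rows (or any row beginning with the # comment symbol), and counts the remaining orientation records. The path is the example extract location used earlier; this is an illustration only, not part of the ArcGIS Reality Studio workflow.

    import csv

    ORIENTATION_FILE = r"D:\Datasets\Frankfurt_City_Collection\GNSS_IMU_whole_Area.csv"

    def count_orientations(path, parse_from_row=22, comment_symbol="#"):
        """Count data rows, skipping the header the same two ways the dialog allows:
        starting at a given row, or ignoring rows that begin with a symbol."""
        count = 0
        with open(path, newline="") as f:
            for row_number, row in enumerate(csv.reader(f), start=1):
                if row_number < parse_from_row:
                    continue  # skip the 21-line header section
                if not row or row[0].lstrip().startswith(comment_symbol):
                    continue  # also skip comment rows, just in case
                count += 1
        return count

    print(count_orientations(ORIENTATION_FILE))  # the tutorial data contains 7,775 orientations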

Define the parameters of the orientation file

Different image orientation systems label the collected parameter data in different ways. In this case, the GNSS_IMU_whole_Area.csv file you imported contains the image name, X, Y, Z, Omega, Phi, and Kappa values in the same order that they appear in the Data Labeling table. You'll match each field to its position in the file; a short sketch after the steps below illustrates the resulting mapping.

  1. In the Data Labeling section, for Image Name, choose the first item in the list.

    First item in the list of images

    Place 1 in the file contains the image name, a code made up of parts separated by underscore characters.

  2. For X, choose the second item in the list.

    Second item in the list of images

    Place 2 in the file contains floating-point values.

    You'll continue mapping the field names to places in the data file.

  3. For Y, choose the third item in the list.
  4. For Z, choose the fourth item in the list.
  5. For Omega, choose the fifth item in the list.
  6. For Phi, choose the sixth item in the list.
  7. For Kappa, choose the seventh item in the list.

    Data labeling completed

    When you set the Kappa value, a green box appears in the Camera System Assignment section, showing the number of orientations assigned from the file.

  8. Skip the Camera Name field.
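
Conceptually, the Data Labeling step assigns each place (column position) in the orientation file to a named parameter. The minimal sketch below shows that mapping for the layout described above, with the image name first and the X, Y, Z, Omega, Phi, and Kappa values following; the sample row is made up, and the angle units are whatever the file uses. This is only an illustration of the convention, not code used by ArcGIS Reality Studio.

    from dataclasses import dataclass

    @dataclass
    class Orientation:
        image_name: str  # place 1: code with underscore-separated parts
        x: float         # place 2: easting (ETRS 1989 UTM Zone 32N)
        y: float         # place 3: northing (ETRS 1989 UTM Zone 32N)
        z: float         # place 4: height (DHHN2016)
        omega: float     # place 5: rotation angle, units as given in the file
        phi: float       # place 6
        kappa: float     # place 7

    def parse_row(fields):
        """Convert one comma-split data row into an Orientation record."""
        name, x, y, z, omega, phi, kappa = fields[:7]
        return Orientation(name, float(x), float(y), float(z),
                           float(omega), float(phi), float(kappa))

    # Made-up example row in the assumed layout:
    sample = "image_11900_0001,465000.0,5551000.0,950.0,0.1,-0.2,90.0".split(",")
    print(parse_row(sample))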

Relate the orientation data to the images

The orientation data file contains information that ArcGIS Reality Studio will use to reconstruct the scene. There are many camera and orientation tracking systems, and the relationship between the position data and the cameras is established in different ways, depending on the convention of the system that collected your images. The following are the two main conventions:

  • The ASCII orientation file may include a column with the camera names.
  • The image file name includes a string that identifies the camera.

In this tutorial, the image file names contain a string to identify the camera.
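
The minimal Python sketch below illustrates the second convention: each image file name is assigned to a camera by searching for a camera ID substring. The codes are the ones you'll enter in the steps that follow, and the example file name is made up; the function is an illustration only.

    # Camera ID substrings embedded in the image file names (entered in the steps below).
    CAMERA_IDS = {
        "Left": "_11000",
        "Forward": "_11900",
        "Nadir": "_NAD",
        "Backward": "_11600",
        "Right": "_11100",
    }

    def camera_for_image(file_name):
        """Return the camera whose ID substring appears in the image file name."""
        for camera, camera_id in CAMERA_IDS.items():
            if camera_id in file_name:
                return camera
        return None  # no known camera ID found in this file name

    print(camera_for_image("frankfurt_11900_000123.jpg"))  # made-up name; prints Forward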

  1. In the Camera System Assignment section, click the options button and choose Import Template.

    Import Template option

  2. Browse to the Frankfurt_City_Collection folder, select Camera_template_Frankfurt_UM1.json, and click OK.

    The template file

    The Camera System Assignment section updates to include a table for the camera names and ID values.

    Table added with camera names and IDs

  3. Click the Camera_Name row and click Delete.

    Delete button

    Camera_Name is the default entry and is not needed now that the table of camera names is populated.

    Next, you'll enter the codes that correspond to the cameras in the image file names.

  4. For Left, in the Camera ID column, type the code _11000.
  5. For Forward, in the Camera ID column, type the code _11900.
  6. For Nadir, in the Camera ID column, type the code _NAD.
  7. For Backward, in the Camera ID column, type the code _11600.
  8. For Right, in the Camera ID column, type the code _11100.

    The Camera System Assignment table

    The Camera System Assignment table now matches the camera names to the Camera ID codes embedded in the image file names.

    The Capture Session Selection section appears below the Camera System Assignment table.

    The Capture Session Selection section

    This section allows you to choose to process specific camera sessions or all camera sessions. In this tutorial, you'll process all of the camera sessions.

  9. Click CaptureSession to select all of the capture sessions.

    CaptureSession option

    The capture sessions are checked on.

    Capture sessions are checked on

    The warning icons beside each capture session indicate that the images are not yet linked to the orientation data. You'll fix that after the capture session has been created.

  10. Click Next.

Review the camera sessions

The Camera Sessions section allows you to review the parameters of the cameras used to capture the images.

  1. In the Camera Sessions section, click Forward_Frankfurt_Flight_RS.

    Forward camera parameters

    The next sections contain information about the camera used to collect the forward looking images. This information was included in the Camera_template_Frankfurt_UM1.json file that you imported earlier.

  2. Scroll down to see the data in the Sensor Definition section.

    Sensor definition data

    Each of the camera sessions listed has a corresponding table of data documenting the physical properties of the camera and lens system used to capture that set of images.

    Note:

    If the camera data had not been imported from the Camera_template_Frankfurt_UM1.json file, you could manually enter the data from your imagery provider.

  3. Optionally, click the other camera sessions and review their parameters.
  4. Click Finish.

    The capture session is constructed. This process will take a minute or so. The Project Tree pane appears.

    The Project Tree pane

    The Process Manager pane also appears. It shows the status of the current process.

    Process Manager pane

    The globe view appears, showing the locations of the camera captures.

    Globe view

Link capture sessions to the image files

Next, you'll connect the capture sessions you've selected to the image file data location.
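
Before linking, you can sanity-check the image folder by counting how many files carry each camera ID substring. The sketch below assumes the same codes as before and the example extract location; it is only a verification aid outside the software.

    from collections import Counter
    from pathlib import Path

    JPG_FOLDER = Path(r"D:\Datasets\Frankfurt_City_Collection\jpg")  # assumed extract location

    CAMERA_IDS = {"_11000": "Left", "_11900": "Forward", "_NAD": "Nadir",
                  "_11600": "Backward", "_11100": "Right"}

    counts = Counter()
    for image in JPG_FOLDER.rglob("*.jpg"):  # also searches subfolders
        for camera_id, camera in CAMERA_IDS.items():
            if camera_id in image.name:
                counts[camera] += 1
                break

    print(counts)  # for this dataset, Forward should report 160 images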

  1. In the Project Tree pane, look at the entry for Forward_Frankfurt_Flight_RS.

    Forward_Frankfurt_Flight_RS in the Project Tree

    The number of images is listed as 0. You'll connect the image data to the forward looking images.

  2. In the Project Tree pane, in the Forward_Frankfurt_Flight_RS section, click Add images.

    Add images option

  3. Browse to the Frankfurt_City_Collection folder, select the jpg folder, and click OK.

    The jpg folder where the images are stored

    The Process Manager pane shows the progress as the images are linked to their collection data.

    Process manager as images are linked

    When the process is complete, Forward_Frankfurt_Flight_RS shows 160 images.

    Now, you'll add the images to the next capture session.

  4. In the Project Tree pane, in the Nadir_Frankfurt_Flight_RS section, click Add images.

    Nadir capture session images

  5. In the Select images, folders or list files window, select the jpg folder and click OK.

    The jpg folder in the Select images, folders or list files window

    You'll repeat this process for each of the camera capture sessions.

  6. In the Project Tree pane, in the Backward_Frankfurt_Flight_RS section, click Add images.
  7. In the Select images, folders or list files window, select the jpg folder and click OK.
  8. In the Project Tree pane, in the Right_Frankfurt_Flight_RS section, click Add images.
  9. In the Select images, folders or list files window, select the jpg folder and click OK.
  10. In the Project Tree pane, in the Left_Frankfurt_Flight_RS section, click Add images.
  11. In the Select images, folders or list files window, select the jpg folder and click OK.

    After the capture sessions have been linked to their images, you can visualize the image footprints.

  12. In the Project Tree pane, click Visualization.

    Visualization tab

  13. In the Forward_Frankfurt_Flight_RS section, check Image Footprints.

    The footprints on for the Forward_Frankfurt_Flight_RS capture session

    The image footprints are shown in the globe view.

    Image footprints shown in globe view

  14. Uncheck Image Footprints.

Define the region of interest and add water bodies

The last two steps before aligning the images are to define the region of interest for the project and to identify where water bodies are located.

  1. On the ribbon, on the Home tab, in the Import section, click Geometries and choose Region of Interest.

    Region of Interest option

  2. In the Select a region of interest window, in the Computer section, browse to the Frankfurt_City_Collection folder.

    The geodatabase in the folder

  3. Double-click the AOI_and_Waterbody.gdb geodatabase to expand it. Click the Frankfurt_AOI feature class and click OK.

    The AOI feature class

    The Frankfurt_AOI polygon feature class is added to the globe view.

    The AOI on the globe view

    Specifying a region of interest geometry prevents unnecessary data from being processed, minimizing total processing time and storage requirements. A short sketch after these steps illustrates how an area of interest can be used to filter image footprints.

  4. On the ribbon, on the Home tab, in the Import section, click Geometries and click Water Body.

    Water Body option

  5. In the Select a water body geometry window, in AOI_and_Waterbody.gdb, click Frankfurt_waterbody and click OK.

    The Frankfurt_waterbody feature class

    The Frankfurt_waterbody polygon feature class is added to the globe view.

    The water body polygons on the globe view

    Specifying water body geometries flattens and simplifies areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

    The capture session has been fully defined. You can now save the project.

  6. On the ribbon, click Save Project.

    Save Project button

  7. In the Save Project As window, browse to a location with plenty of free disk space, type 2023-Frankfurt_Reality_Studio_Tutorial, and click Save.
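
To make the effect of the region of interest concrete, the sketch below (referenced earlier) uses the shapely library to check which image footprints intersect an area-of-interest polygon, so that only those images would need to be processed. The polygons are made up; this is not how ArcGIS Reality Studio reads the AOI_and_Waterbody.gdb geodatabase.

    from shapely.geometry import Polygon, box

    # Made-up area of interest polygon, in projected map coordinates.
    aoi = Polygon([(0, 0), (1000, 0), (1000, 800), (0, 800)])

    # Made-up image footprints, as would be derived from camera position and field of view.
    footprints = {
        "img_0001": box(-300, -300, 100, 100),    # overlaps a corner of the AOI
        "img_0002": box(400, 300, 900, 700),      # fully inside the AOI
        "img_0003": box(2000, 2000, 2400, 2400),  # far outside the AOI
    }

    relevant = [name for name, fp in footprints.items() if fp.intersects(aoi)]
    print(relevant)  # images whose footprints miss the AOI would be skipped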

You have defined the capture sessions, set the coordinate system and camera properties, linked the position and orientation data to the captured images, and saved the project. You are now ready to begin adjusting the images to start creating products from them.


Perform an alignment

The capture session was built from GNSS navigation data recorded during the photo flight. This exterior orientation information is typically not accurate enough to create products such as true orthos or 3D meshes of high geometric quality. To optimize the navigation data, you'll run an alignment. During alignment, also called aerotriangulation, individual images are connected by determining homologous points (tie points) between overlapping images. With many of these image measurements, the image block can be mathematically adjusted to refine the orientation parameters for each image. Additional accuracy can be obtained by manually measuring ground control points.
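
To make the idea of tie points and adjustment residuals concrete, the sketch below projects a 3D point through an idealized pinhole camera and computes the reprojection error against a measured image position. All numbers are made up, and a real bundle block adjustment also refines the camera positions and rotations for every image; this only illustrates the quantity that the adjustment minimizes.

    import numpy as np

    def project(point_3d, camera_center, rotation, focal_px, principal_point):
        """Project a 3D point to pixel coordinates with an ideal pinhole camera."""
        p_cam = rotation @ (point_3d - camera_center)  # world frame -> camera frame
        u = focal_px * p_cam[0] / p_cam[2] + principal_point[0]
        v = focal_px * p_cam[1] / p_cam[2] + principal_point[1]
        return np.array([u, v])

    # Made-up example values.
    rotation = np.diag([1.0, -1.0, -1.0])          # camera looking straight down
    camera_center = np.array([0.0, 0.0, 1000.0])   # flying height of 1000 m
    ground_point = np.array([12.0, -7.5, 0.0])     # candidate tie point on the ground
    focal_px, principal_point = 10000.0, (5000.0, 5000.0)

    predicted = project(ground_point, camera_center, rotation, focal_px, principal_point)
    measured = np.array([5120.3, 5074.6])          # where the point was matched in the image
    print(round(float(np.linalg.norm(predicted - measured)), 2), "px")  # reprojection error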

Create an alignment

To align the images, you must add an alignment to the project.

  1. On the ribbon, on the Home tab, in the Processing section, click New Alignment.
    New Alignment button
  2. In the Alignment pane, for Alignment Name, type Frankfurt_AT.

    Alignment Name parameter

  3. In the Camera Sessions section, check Dataset.

    This alignment will use all the capture sessions, so they should all be checked.

    All the capture sessions turned on

  4. In the Control Points section, click Import Control Points.

    Import Control Points option

  5. In the Select input file window, browse to the Frankfurt_City_Collection folder and open the GroundControlPoints folder. Select Ground_Control_Points.txt and click OK.

    Ground_Control_Points.txt file

  6. In the Control Points Import window, click the Spatial Reference browse button.
  7. In the XY Coordinate Systems Available box, type 25832 and press Enter.

    XY coordinate system

  8. Click ETRS 1989 UTM Zone 32N.
  9. Click the Current Z box. In the Z Coordinate Systems Available box, type 7837 and press Enter.

    Z coordinate system

  10. Click DHHN2016 height and click OK.
  11. For Choose a delimiter, accept the default delimiter, comma.
  12. Click Next.

    Next button

  13. Review the column labels.

    The column labels

    The default values are correct. A short sketch after these steps shows how a comma-delimited control point file like this one could be read outside the software.

  14. Click Import.

    The control points are added to the globe view.

    Control points on the globe view

  15. In the Alignment pane, in the Control Points section, check Dataset. Expand Dataset to see the new Ground_Control_Points data.

    The new Ground_Control_Points item

    The Standard Deviations section allows you to modify the given accuracy (a-priori standard deviations) of the image positions (XYZ position and rotation angles) and of the imported ground control points. For this tutorial, the default values are correct.

    The Region of Interest parameter allows you to specify a region to adjust. For this tutorial, you'll perform alignment on the entire dataset, so there is no need to set a region of interest.

  16. Click Create.

    Create button

    Clicking Create adds the Alignment tab to the ribbon. The alignment is ready to run.

    Running the alignment will start the automatic tie point matching and bundle block adjustment process. This is a computationally intensive process, and the duration of the processing will depend on your computer hardware.

    On a computer with 128 GB RAM, AMD Ryzen 24 core CPU @3.8 GHz, and Nvidia GeForce RTX4090 GPU, the process will take approximately 2 hours.

  17. Click Run.

    Run button

    In the Process Manager pane, the Alignment process status appears.

    Alignment status in Process Manager

  18. Expand the Alignment process to see the steps.

    The Alignment process

    The Process Manager allows you to keep track of the stages of the Alignment process and their status.

    This might be a good time to take a break or work on something else while the process runs.

    When the process finishes, you can see it listed in the Process Manager pane.

    The process is complete message

    Once the alignment finishes, the QA window appears. This window shows the key statistics of the bundle block adjustment.

    QA window when the alignment is complete
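
As a cross-check of the import settings, the sketch below (referenced earlier) reads a comma-delimited control point file and looks up the two coordinate systems you selected by their EPSG codes with the pyproj library. The column order (point ID, then X, Y, Z) is an assumption about this file based on the import dialog; the sketch is not part of the ArcGIS Reality Studio workflow.

    import csv
    from pyproj import CRS

    GCP_FILE = r"D:\Datasets\Frankfurt_City_Collection\GroundControlPoints\Ground_Control_Points.txt"

    # The EPSG codes selected in the import dialog.
    horizontal = CRS.from_epsg(25832)  # ETRS 1989 UTM Zone 32N
    vertical = CRS.from_epsg(7837)     # DHHN2016 height
    print(horizontal.name, "/", vertical.name)

    # Assumed column order: point ID, X, Y, Z, separated by the default comma delimiter.
    points = {}
    with open(GCP_FILE, newline="") as f:
        for row in csv.reader(f):
            try:
                points[row[0]] = (float(row[1]), float(row[2]), float(row[3]))
            except (ValueError, IndexError):
                continue  # skip a header, comment, or malformed row

    print(len(points), "ground control points read")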

Measure ground control points

You can measure ground control points before or after the initial alignment. Doing it after the alignment has the benefit that the software has already refined the image positions and can provide a better indication of where to measure.

  1. In the QA window, on the Overview tab, scroll down and expand Count.

    The Image Measurements column for the Ground Control Points row indicates that no image measurements have been done for the ground control points. You will add some now.

    Image Measurements with a value of 0

  2. Optionally, close the QA window.
  3. On the ribbon, on the Alignment tab, in the Tools section, click Image Measurements.

    Image Measurements button

    The measurement window appears. The left pane shows a globe view of the project area and a Control Points table with the available ground control points.

    Image Measurements overview

    Note:

    If the Alignment tab is not visible, in the Project Tree pane, scroll down to the Alignments section and click Frankfurt_AT.

    The Image pane shows a set of image measuring tool instructions.

    Image control point measuring instructions

  4. Review the information. When finished, close the information window.
  5. If the first row in the Control Points table is not highlighted, click the first row.

    First ground control point

    When you click the first row, the Image List section updates to show all of the images that see the point.

  6. Click the first image in the Image List section.

    The Image pane shows the first image of that list with a pink circle indicating the potential location of the ground control point.

    First image ground control point

    This image is not good for measuring ground control points, because the point is hidden by the tree canopy.

  7. Press F to move to the next image.

    Second image in the list

    This image shows the ground control point in the image, surrounded by the Projected point symbol.

    To measure a selected point, place the pointer on the marked ground control point where it appears in the image and click to measure. The measurement is added to the Image window as a green cross.

  8. Use the scroll wheel on your mouse to zoom closer to the ground control point. Click the center of the point.

    The center of the ground control point

    The new measured point location is added.

    New measured point location

  9. Use the scroll wheel on your mouse to zoom out.
  10. Under Image List, click the next image.

    The next image showing this ground control point opens in the Image window.

    Note:

    You can also press the Spacebar to accept the measurement and move to the next image in the list.

  11. Notice that in this image, the ground control point is difficult to distinguish from the pavement.
  12. Press F to move to the next image.

    The next image has a clear ground control point.

  13. In the Image window, click the ground control point.

    Ground control point

    The measured point is added.

    Second point added

  14. Press the Spacebar to accept the measurement and move to the next image in the list.

    After you add the second measured point, a new symbol appears in the next image. The point marked with red square brackets is a suggested point.

    Suggested point added to image

    A suggested point represents the calculated location of the ground control point based on the previous two measurements. A short sketch at the end of this section illustrates how a 3D location can be estimated from two image measurements.

  15. Press the Spacebar to accept the suggested point.
  16. Proceed through the remaining images using the following instructions:
    • If the ground control point is not visible in the image (for example, if it is hidden by a parked car or tree), press F to skip the image.
    • If the suggested point location appears correct, press the Spacebar to confirm it as a valid measurement.
    • If the suggested point location does not appear correct, click the location of the ground control point.

    The image list can be expanded to show the reprojection error associated with each measured point.

    The list after collecting measured ground control points

    The Use column has check marks to indicate the measured ground control points. If you make a mistake, you can uncheck a point and not use it.

    In the following example image, one of the points has much higher Reprojection Error values than the other ones.

    Bad point measurement highlighted

    You can click the row and go back to examine the image and re-collect the measurement, or you can discard this point.

  17. Uncheck the Use box for the bad point.

    The bad point unchecked

    Unchecking the point removes the point and clears the Reprojection Error values.

    Once all image measurements are collected for a given ground control point, ArcGIS Reality Studio automatically moves forward to the next ground control point in the Control Points list.

    To measure a different ground control point, click the row header for the point in the Control Points table on the lower left side of the Image measurement window.

    The row header to select a new ground control point

    Doing so opens a new set of images in the image list and shows the first image on the list in the Image pane.

    A new set of images for the newly selected ground control points

  18. Continue measuring ground control points. Use the same method to collect measurements for at least five of the other control points covering the area of interest specified by the Frankfurt_AOI geometry.
    Note:

    Some points, such as point 990007, were not clearly marked on the ground but were collected at a visually distinguishable location, such as the corner of a crosswalk.

    In the Frankfurt_City_Collection folder, the GroundControlPoints folder contains a set of images showing a green Measured Point marker at the location of the ground control point.

    Ground control point location reference images

    If you open the 990007 image file in this folder, you'll see that this ground control point was collected at the corner of a crosswalk. For each ground control point, view the corresponding image in this folder to verify the location before measuring.

    Ground control point at corner of crosswalk

    When conspicuous existing locations are used as ground control points, the surveyor usually notes the location in a set of field notes and takes a picture showing the GPS antenna at that location. The images in this folder simulate that sort of field data.

    Note:

    For best results, measure the location about five times for each sensor view (Left, Right, Forward, Backward, Nadir) for each of the ground control points. This will ensure that the views are correctly connected. The image list includes a column that identifies the camera for each image.

    After you have measured a set of images for a ground control point, review the image list for images that have high Reprojection Error values. Uncheck the Use box for these images.

    As you work through each of the ground control points in the Control Points table, the Status column will change from an open circle to a half-filled circle for the control points that have been measured.

    Control Points table in progress

    When all the control points have been measured, the Status field will show half-filled circles for each row and the Reprojection Error statistics for each control point will be visible.

    All control points measured

    If some control points have higher Reprojection Error statistics than others, you can click the header for the row in the Control Points table, and then in the Image List, search for and re-measure or remove images with high Reprojection Error values.

    Once you are satisfied with the control point measurements, you can change the role of a control point to a check point, to use for statistical reporting.
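
To illustrate how a suggested point can be predicted from the first two measurements (as mentioned earlier), the sketch below intersects two viewing rays, one per measured image, in a least-squares sense to obtain an approximate 3D location that could then be projected into the remaining images. The camera centers and ray directions are made up, and ArcGIS Reality Studio's internal computation is not published here; this shows only the general geometric idea.

    import numpy as np

    def intersect_rays(origins, directions):
        """Least-squares 3D point closest to a set of rays (camera center plus view direction)."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for origin, direction in zip(origins, directions):
            d = direction / np.linalg.norm(direction)
            P = np.eye(3) - np.outer(d, d)  # projects onto the plane perpendicular to the ray
            A += P
            b += P @ origin
        return np.linalg.solve(A, b)

    # Two made-up camera centers and the viewing rays toward the measured image positions.
    origins = [np.array([0.0, 0.0, 1000.0]), np.array([300.0, 0.0, 1000.0])]
    directions = [np.array([0.05, -0.02, -1.0]), np.array([-0.25, -0.02, -1.0])]

    print(intersect_rays(origins, directions))  # approximately [50. -20. 0.]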

Change a ground control point to a check point

Check points are used for evaluating and reporting on the accuracy of the alignment. Their 3D position and image residuals are estimated using the output image orientation for quality assurance purposes. You'll convert one of the ground control points to a check point.

  1. In the Control Points table, click the header for the fourth row (ground control point 990006).

    Fourth row

    The row for this ground control point is highlighted.

  2. In the toolbar at the top of the Control Points table, click Set Role and choose CP.

    CP option

    In the table, the role changes to CP.

    CP role

Refine the alignment

After adding and measuring control points, or changing other settings of the alignment, you will run the alignment again to refine the positions based on the new information. This re-runs the bundle-block adjustment, but it will be much faster than the initial alignment process.

  1. On the ribbon, on the Alignment tab, in the Process section, click Run.

    Run button

    The Process Manager opens and shows the progress on the alignment process. After one or two minutes, the process completes.

    The QA tool pane appears.

    The QA tool pane

    To check the quality of the alignment results, examine the statistics on the QA tool pane.

    For best results with this data, keep the following in mind:

    • The overall Sigma 0 value should be less than 1 px for a well-calibrated photogrammetric camera.
    • The RMS of the tie point reprojection error should also be less than 1 px.
    • The RMS of the horizontal and vertical object residuals for control points should be less than 1.5 GSD (12 cm).

    Also check the count data, such as the number of automatic tie points per image and image measurements per tie point, which indicate how well tie points are distributed across the project area and how well adjacent images are connected by common measurements. You can also review the tie point visualization in the globe view. A short sketch at the end of this section shows how the thresholds above can be expressed as simple checks.

    Note:

    These steps are meant to give you basic guidance for analyzing the alignment results. Doing an in-depth analysis of the quality requires knowledge about project requirements and specifications as well as knowledge about the quality of the input data.

  2. In the QA pane, click General Information and view the Sigma 0 value.

    The Sigma 0 value

    The value in this example is 0.7559, which is a good value for this dataset.

  3. On the right side of the QA pane, scroll down to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors section.

    Tie point reprojection errors

    The RMS value for the tie point reprojection errors in this example is 0.756, which is a good value for this dataset.

  4. On the right side of the QA pane, scroll up to the 3D Residuals section. View the Ground Control Points Residuals section.

    Ground Control Points Residuals section

    The RMS value for the Ground Control Points Residuals in this example is 0.151 meters, perhaps a little higher than the desired value of 0.12 meters but acceptable for this exercise.

  5. On the left side of the QA pane, scroll down and expand the Count section.

    Count statistics

    In this example, there are six ground control points with 390 image measurements and one check point with 30 image measurements.

  6. Optionally, review the other QA statistics and measurements.
  7. On the QA tool, click the Control Points tab.

    Control Points tab

    The Control Points table appears.

    Control points table open in the QA tool

    You can use this table to check the X, Y, and Z residuals for each control point. Unexpectedly large Delta XYZ values, compared to those of properly measured control points, indicate points that were probably measured incorrectly and may need to be re-measured.

    You can also review the geography of the actual project data (ground control points, automatic tie points, image positions).

  8. On the ribbon, on the Alignment tab, in the Display section, click Automatic Tie Points.

    Automatic Tie Points button

    The automatic tie points are drawn in the Display pane.

    The automatic tie points in the Display pane

  9. Click the Automatic Tie Points drop-down arrow and choose RMS of Reprojection Errors.

    RMS of Reprojection Errors option

    The Display pane updates to show the automatic tie points symbolized by the RMS of their reprojection errors.

    Updated symbology

  10. On the QA tool, click the Automatic Tie Points tab.

    The table shows the automatic tie points.

    Automatic tie points

    You can view and sort the data in this table to identify the automatic tie points with the highest error values.

  11. On the QA tool, click the Overview tab. On the right side, scroll to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors histogram.

    Automatic Tie Points Reprojection Errors histogram

    The symbology of the histogram matches the symbology of the globe view.

  12. On the ribbon, on the Alignment tab, in the Results section, click Report.

    Report button

  13. In the Create Alignment Report window, browse to a location to save the report. For Name, type Frankfurt_AT_report.

    Create Alignment Report window

  14. Click Save.

    The PDF report is saved on your computer. It provides a convenient way to share the QA statistics of the alignment.

    Sample report

  15. Close the QA tool and save the project.
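
The quality guidance earlier in this section can be thought of as simple threshold tests on the adjustment statistics. The sketch below (referenced earlier) computes an RMS from a list of residuals and compares the key values against the thresholds given in this tutorial. The residual lists are made-up placeholders for the values you read from the QA pane, and the 8 cm ground sample distance is implied by the 12 cm (1.5 GSD) figure above.

    import math

    def rms(values):
        """Root mean square of a list of residuals."""
        return math.sqrt(sum(v * v for v in values) / len(values))

    GSD_M = 0.08  # ground sample distance in meters, implied by 1.5 GSD = 12 cm

    # Placeholder values standing in for statistics read from the QA pane.
    sigma_0_px = 0.7559
    tie_point_errors_px = [0.6, 0.9, 0.8, 0.7, 0.75]
    control_point_residuals_m = [0.10, 0.151, 0.12, 0.09]

    checks = {
        "Sigma 0 < 1 px": sigma_0_px < 1.0,
        "Tie point reprojection RMS < 1 px": rms(tie_point_errors_px) < 1.0,
        "Control point residual RMS < 1.5 GSD": rms(control_point_residuals_m) < 1.5 * GSD_M,
    }
    for name, passed in checks.items():
        print(name, "->", "OK" if passed else "re-examine the measurements")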

You've performed an initial alignment, added control points, refined the alignment, and examined the alignment statistics. You also exported a PDF copy of the alignment statistics to document your work and share it with your stakeholders.

Next, you'll use the aligned data to create a reconstruction.


Perform a reconstruction

Now that the alignment process is complete and the results have been examined and determined to be high quality, you are ready to create output products. For this tutorial, you'll create a 3D point cloud and a 3D mesh.

Create a reconstruction

The first step to generate the products is to create a reconstruction.

  1. On the ribbon, click the Home tab. In the Processing section, click New Reconstruction.

    New Reconstruction button

  2. In the Reconstruction pane, for Reconstruction Name, type Frankfurt_RS_3D.

    Reconstruction Name parameter

    This reconstruction session will be used to create two 3D outputs.

  3. For Capture Scenario, click the drop-down list and choose Aerial Oblique.

    Aerial Oblique option

    Choosing a scenario sets some output products and processing settings.

    The Aerial Oblique setting is useful now because the sample data is a multihead capture session, and all the available imagery will be used to create the output 3D products. The Aerial Nadir setting is more useful when you are creating 2D products. It limits processing to the nadir images.

  4. In the Camera Session section, check the Frankfurt_AT alignment session that you created.

    Frankfurt_AT alignment session

    The alignment is selected.

    Selected alignment

  5. In the Products section, review the output products.

    The Point Cloud and Mesh products are highlighted.

    Point Cloud and Mesh output products

    The OSGB and SLPK mesh formats will be exported by default. You can check other formats for the output mesh if you choose.

  6. In the Optional section, for Quality, click Ultra.

    Ultra output quality

    The Ultra setting will run the 3D reconstruction at the native image resolution. This will take a longer time to process than the High quality option, but the results will look better. On a computer with 128 GB RAM, AMD Ryzen 24 core CPU @3.8 GHz, and Nvidia GeForce RTX4090 GPU, the process will take approximately 8 hours.

    You can choose the High quality option if you want the output to have slightly reduced detail and lower texture resolution.

  7. For Region of Interest, choose Frankfurt_AOI.

    Frankfurt_AOI option

    Region of Interest allows you to limit processing for your output products to the images relevant to your project.

  8. For the Water Body Geometries, choose Frankfurt_waterbody.

    Frankfurt_waterbody option

    The Water Body Geometries parameter is used to flatten and simplify areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.

  9. For Correction Geometries, accept the default value of None.
  10. Click Create.

    Create button

    This finishes the Reconstruction set up. The reconstruction is added to the Project Tree pane.

    The Reconstruction session in the Project Tree pane

Run the reconstruction

Now that the reconstruction has been set up, the next step is to run it. This will take some time, depending on your computer resources. On a computer with 128 GB RAM, AMD Ryzen 24 core CPU @3.8 GHz, and Nvidia GeForce RTX4090 GPU, the process will take approximately 8 hours.

  1. On the ribbon, on the Reconstruction tab, in the Processing section, click Run.

    Run button

    The Process Manager pane opens and shows the status of the reconstruction process.

    The Process Manager pane for the reconstruction session

    Note:

    The progress bar for the analysis step will start to advance after about 10 minutes.

    After the analysis step is finished, the globe view will show the processing progress as well. You can observe the individual stereo models being processed in dense matching. Later in the process, you'll see individual tiles of the point cloud and the mesh added to the globe view.

    Once the process has finished, the products are added to the Project Tree pane. You can use the Visualization tab to show or hide these products.

  2. Wait for the reconstruction process to run.
  3. On the ribbon, on the Reconstruction tab, click Open Results Folder.

    This opens the Results folder in Microsoft File Explorer. It contains the 3D point cloud in LAS and i3s (SLPK) formats, as well as the 3D mesh in OSGB and i3s (SLPK) formats. Use the .slpk files to add the products to ArcGIS Online. A short sketch after these steps shows one way to inspect the LAS point cloud outside ArcGIS Reality Studio.

  4. If you do not run the process, view the results.
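
If you want a quick look at the LAS point cloud output outside ArcGIS Reality Studio (as mentioned above), one option is the open-source laspy package, sketched below. The path and file name are assumptions; use the actual file names from your Results folder.

    import laspy
    import numpy as np

    LAS_FILE = r"D:\Datasets\Frankfurt_Results\point_cloud.las"  # assumed name and location

    las = laspy.read(LAS_FILE)  # reads the header and point records
    x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
    print("Points:", las.header.point_count)
    print("X range:", x.min(), "-", x.max())  # should fall within the Frankfurt_AOI extent
    print("Y range:", y.min(), "-", y.max())
    print("Z range:", z.min(), "-", z.max())  # DHHN2016 heights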

In this tutorial, you have created an ArcGIS Reality Studio project, added a capture session, performed an initial alignment, measured ground control points, and refined the alignment. You evaluated the quality of the alignment and determined that it was acceptable. You used the alignment to create a reconstruction, and you used that reconstruction to create point cloud and 3D mesh outputs. These can be shared to ArcGIS Online or used with local applications on your computer. You can use a similar process in the reconstruction stage to create 2D products such as true orthophotos and digital surface models. The main difference for creating 2D outputs is that you would use the aerial nadir scenario and limit the camera session to nadir camera captures.

You can find more tutorials in the tutorial gallery.