Create a capture session
A capture session combines all the information from a single photo flight that is required for the alignment and reconstruction steps. Capture sessions can be built for imagery captured with nadir-only sensors or with multihead sensor systems, together with the corresponding positioning information for each image.
In a nadir sensor system, the sensor points straight down and captures imagery of the surface under it. The images collected this way are referred to as nadir images.
The following image is an example of nadir imagery:
The following image is a diagram of a nadir camera cone and image footprint:
In a multihead sensor system, sensors point in multiple directions, at angles forward and backward and to the sides. The images collected at an angle are referred to as oblique images. Multihead systems may also include a sensor to collect nadir images.
The following image is an example of oblique imagery:
The following image is a diagram of a multihead sensor, showing the camera cones and image footprints:
Positioning information may be based on navigation information or high-accuracy positions derived in an external aerotriangulation process.
The data you'll add to your capture session consists of the following:
- 873 images captured with a multihead sensor system (IGI UrbanMapper)
- An ASCII file including the positioning information per image (GNSS_IMU_whole_Area.csv)
- A file including the necessary sensor specifications (Camera_template_Frankfurt_UM1.json)
- A file geodatabase containing the geometry for the region of interest and a water body (AOI_and_Waterbody.gdb)
Download the data
The data for this tutorial takes up about 26 GB of disk space.
- Download the Frankfurt_City_Collection.zip file.
Note:
Depending on your connection speed, this 26 GB file may take a long time to download.
- Extract the .zip file to a folder on your local machine, for example, D:\Datasets\Frankfurt_City_Collection.
Start a capture session
Next, you'll create the capture session.
- Start ArcGIS Reality Studio.
- On the Welcome screen, click New Capture Session.
- In the Capture Session pane, for Capture Session Name, type Frankfurt_Flight_RS.
- For Orientation File Format, click ASCII text file (.txt, .csv, etc.).
A notice appears indicating that the data must be in a supported orientation data format convention.
- For Orientation File Path, browse to the Frankfurt_City_Collection folder that you extracted. Select GNSS_IMU_whole_Area.csv and click OK.
- For Spatial Reference, click the browse button.
- In the Spatial Reference window, for Current XY, in the search box, type 25832 and press Enter.
The search for this well-known ID (WKID) code returns the ETRS 1989 UTM Zone 32N coordinate system. This is the XY coordinate system used in the position file.
- In the list of results, click ETRS 1989 UTM Zone 32N.
You've set the XY coordinate system. Next, you'll set the Z coordinate system.
- Click Current Z.
- For Current Z, in the search box, type 7837 and press Enter.
- In the list of results, click DHHN2016 height.
You've set the Z coordinate system.
- In the Spatial Reference window, click OK.
- In the Data Parsing section, for Parse from row, type 22 and press Enter.
The GNSS_IMU_whole_Area.csv orientation file that you imported is a comma-delimited text file. It includes a header section of 21 lines, while the data that ArcGIS Reality Studio will use to process the images begins at line 22. Entering 22 in this box skips the header rows.
Note:
Another way to skip the header is to specify the character that begins comment rows. In this file, the # symbol is the comment character, so you could also skip the header by typing # in the Symbols used to ignore rows box.
Once ArcGIS Reality Studio can read the file correctly, the number of detected orientations is listed in a green highlight box. In this case, 7,775 orientations are detected. These are the orientations collected during the flight. This is greater than the 873 images used in the tutorial because the tutorial images are a subset of a larger collection.
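The header-skipping behavior described above can be sketched in a few lines of Python. This is only an illustration, not part of ArcGIS Reality Studio; the comment character follows the tutorial's description, while the sample rows and file content are invented.

```python
import csv
import io

# Invented excerpt of an orientation file: comment rows begin with '#',
# data rows hold image name, X, Y, Z, Omega, Phi, Kappa.
sample = """# GNSS/IMU export
# project: Frankfurt
0123_11000_0001,465000.12,5550000.34,612.5,0.12,-0.03,89.7
0123_NAD_0002,465050.88,5550010.41,612.8,0.08,0.01,90.2
"""

def read_orientations(text, comment_char="#"):
    """Return data rows, skipping any row that starts with the comment character."""
    rows = []
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].startswith(comment_char):
            continue  # skip header/comment rows, as '#' does in the tutorial
        rows.append(row)
    return rows

orientations = read_orientations(sample)
print(len(orientations))  # 2 detected orientations
```

Skipping a fixed number of rows, as in the Parse from row box, would work the same way by ignoring the first 21 lines instead of testing for a comment character.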
- Click Next.
Define the parameters of the orientation file
There are multiple image orientation systems, and they label the collected parameter data in different ways. In this case, the GNSS_IMU_whole_Area.csv file you imported contains the image name, X, Y, Z, Omega, Phi, and Kappa values in the same order as they appear in the Data Labeling table. You'll match the fields to the data positions in the file.
- In the Data Labeling section, for Image Name, choose the first item in the list.
Place 1 in the file contains a code value whose segments are separated by underscore characters.
- For X, choose the second item in the list.
Place 2 in the file contains floating-point data.
You'll continue mapping the field names to places in the data file.
- For Y, choose the third item in the list.
- For Z, choose the fourth item in the list.
- For Omega, choose the fifth item in the list.
- For Phi, choose the sixth item in the list.
- For Kappa, choose the seventh item in the list.
After you set the Kappa value, a green box appears in the Camera System Assignment section showing the number of assigned orientations from the file.
- Skip the Camera Name field.
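The column-to-field mapping you just configured can be illustrated with a short sketch. The field order (image name, X, Y, Z, Omega, Phi, Kappa) matches the tutorial's file; the sample row is invented.

```python
# Field names in the order they appear in the orientation file,
# matching the Data Labeling assignments made above.
FIELDS = ["image_name", "x", "y", "z", "omega", "phi", "kappa"]

def label_row(values):
    """Map positional CSV values to named orientation fields."""
    record = dict(zip(FIELDS, values))
    # Everything except the image name is numeric.
    for key in FIELDS[1:]:
        record[key] = float(record[key])
    return record

row = "0123_NAD_0002,465050.88,5550010.41,612.8,0.08,0.01,90.2".split(",")
record = label_row(row)
print(record["kappa"])  # 90.2
```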
Relate the orientation data to the images
The orientation data file contains information that ArcGIS Reality Studio will use in reconstructing the scene. There are multiple camera and orientation tracking systems, and the relationship between the position data and the cameras is established in different ways, depending on the convention of the system that collected your images. The following are the two main ways:
- The ASCII orientation file may include a column with the camera names.
- The image file name includes a string that identifies the camera.
In this tutorial, the image file names contain a string to identify the camera.
- In the Camera System Assignment section, click the options button and choose Import Template.
- Browse to the Frankfurt_City_Collection folder, select Camera_template_Frankfurt_UM1.json, and click OK.
The Camera System Assignment section updates to include a table for the camera names and ID values.
Next, you'll enter the codes that correspond to the cameras in the image file names.
- For Left, in the Camera ID column, type the code _11000.
- For Forward, in the Camera ID column, type the code _11900.
- For Nadir, in the Camera ID column, type the code _NAD.
- For Backward, in the Camera ID column, type the code _11600.
- For Right, in the Camera ID column, type the code _11100.
The Camera System Assignment table now matches the camera names to the Camera ID codes embedded in the image file names.
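The matching that the software performs can be sketched as follows. The camera ID codes are the ones entered above; the example file names are invented.

```python
# Camera ID codes entered in the Camera System Assignment table.
CAMERA_IDS = {
    "_11000": "Left",
    "_11900": "Forward",
    "_NAD": "Nadir",
    "_11600": "Backward",
    "_11100": "Right",
}

def camera_for(filename):
    """Return the camera name whose ID code appears in the image file name."""
    for code, camera in CAMERA_IDS.items():
        if code in filename:
            return camera
    return None  # no camera code found in this file name

print(camera_for("F230512_11900_00042.jpg"))  # Forward
print(camera_for("F230512_NAD_00042.jpg"))    # Nadir
```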
The Capture Session Selection section appears below the Camera System Assignment table.
This section allows you to choose to process specific camera sessions or all camera sessions. In this tutorial, you'll process all of the camera sessions.
- Click the button for Frankfurt_Flight_RS to select the complete capture session, including all five camera sessions.
The camera sessions are checked.
- Click Next.
Review the camera sessions
The Camera Sessions section allows you to review the parameters of the cameras used to capture the images.
- In the Camera Sessions section, click Forward_Frankfurt_Flight_RS.
The next sections contain information about the camera used to collect the forward looking images. This information was included in the Camera_template_Frankfurt_UM1.json file that you imported earlier.
- Scroll down to see the data in the Sensor Definition section.
Each of the camera sessions listed has a corresponding table of data documenting the physical properties of the camera and lens system used to capture that set of images.
Note:
If the camera data had not been imported from the Camera_template_Frankfurt_UM1.json file, you could manually enter the data from your imagery provider.
- Optionally, click the other camera sessions and review their parameters.
- Click Finish.
The capture session is constructed. This process will take a minute or so. The Project Tree pane appears.
The Process Manager pane also appears. It shows the status of the current process.
The globe view appears, showing the locations of the camera captures.
Link capture sessions to the image files
Next, you'll connect the capture sessions you've selected to the image file data location. You'll perform this step for each camera session.
- In the Project Tree pane, expand the entry for Forward_Frankfurt_Flight_RS.
The current number of images is 0.
You'll connect the image data to the forward looking images.
- In the Project Tree pane, in the Forward_Frankfurt_Flight_RS section, click Add images.
- Browse to the Frankfurt_City_Collection folder, select the jpg folder, and click OK.
The Process Manager pane shows the progress as the images are linked to their collection data.
When the process is complete, Forward_Frankfurt_Flight_RS shows 160 images.
Now, you'll add the images to the next camera session.
- In the Project Tree pane, in the Nadir_Frankfurt_Flight_RS section, click Add images.
- In the Select images, folders or list files window, select the jpg folder and click OK.
You'll repeat this process for each of the camera sessions.
- In the Project Tree pane, in the Backward_Frankfurt_Flight_RS section, click Add images.
- In the Select images, folders or list files window, select the jpg folder and click OK.
- In the Project Tree pane, in the Right_Frankfurt_Flight_RS section, click Add images.
- In the Select images, folders or list files window, select the jpg folder and click OK.
- In the Project Tree pane, in the Left_Frankfurt_Flight_RS section, click Add images.
- In the Select images, folders or list files window, select the jpg folder and click OK.
After the capture sessions have been linked to their images, you can visualize the image footprints.
- In the Project Tree pane, click Visualization.
- In the Forward_Frankfurt_Flight_RS section, check Image Footprints.
The image footprints are shown in the globe view.
- Uncheck Image Footprints.
Define the region of interest and add water bodies
The last two steps before aligning the images are to define the region of interest for the project and to identify where water bodies are located.
- On the ribbon, on the Home tab, in the Import section, click Geometries and choose Region of Interest.
- In the Select a region of interest geometry window, in the Computer section, browse to the Frankfurt_City_Collection folder.
- Double-click the AOI_and_Waterbody.gdb geodatabase to expand it. Click the Frankfurt_AOI feature class and click OK.
The Frankfurt_AOI polygon feature class is added to the globe view.
Specifying a region of interest geometry prevents unnecessary data from being processed, minimizing total processing time and storage requirements.
- On the ribbon, on the Home tab, in the Import section, click Geometries and click Water Body.
- In the Select a water body geometry window, in AOI_and_Waterbody.gdb, click Frankfurt_waterbody and click OK.
The Frankfurt_waterbody polygon feature class is added to the globe view.
Specifying water body geometries flattens and simplifies areas within water bodies. These can be tricky to process and lead to undesirable outputs due to the reflective nature of water.
The capture session has been fully defined. You can now save the project.
- On the ribbon, click Save Project.
- In the Save Project As window, browse to a location with plenty of free disk space, type 2023-Frankfurt_Reality_Studio_Tutorial, and click Save.
You have defined the capture sessions, set the coordinate system and camera properties, linked the position and orientation data to the captured images, and saved the project. You are now ready to begin adjusting the images to start creating products from them.
Perform an alignment
The capture session was built from GNSS navigation data recorded during the photo flight. This exterior orientation information is typically not accurate enough to create products such as true orthos or 3D meshes of high geometric quality. To optimize the navigation data, you'll run an alignment. During alignment, also called aerotriangulation, individual images are connected by determining homologous points (tie points) between overlapping images. With many of these image measurements, the image block can be mathematically adjusted to refine the orientation parameters for each image. Additional accuracy can be obtained by manually measuring ground control points.
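The quantity the adjustment minimizes can be illustrated with a small sketch: for each tie point measurement, the reprojection error is the pixel distance between where the point was measured in an image and where the adjusted parameters project it. The coordinates below are invented.

```python
import math

def reprojection_error(measured, projected):
    """Pixel distance between a measured image point and its projection."""
    return math.hypot(measured[0] - projected[0], measured[1] - projected[1])

def rms(errors):
    """Root mean square of a list of reprojection errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented (measured, projected) pixel pairs for one tie point seen in three images.
pairs = [((1012.4, 884.1), (1012.9, 883.8)),
         ((455.0, 1290.2), (454.6, 1290.9)),
         ((2033.7, 97.5), (2033.1, 97.9))]
errors = [reprojection_error(m, p) for m, p in pairs]
print(round(rms(errors), 3))  # 0.709
```

Statistics such as the Sigma 0 value and the RMS reprojection errors reported later in this tutorial summarize errors of this kind over the whole image block.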
Create an alignment
To align the images, you must add an alignment to the project.
- On the ribbon, on the Home tab, in the Processing section, click New Alignment.
- In the Alignment pane, for Alignment Name, type Frankfurt_AT.
- In the Camera Sessions section, check Dataset.
This alignment will use all the capture sessions, so they should all be checked.
- In the Control Points section, click Import Control Points.
- In the Select input file window, browse to the Frankfurt_City_Collection folder and open the GroundControlPoints folder. Select Ground_Control_Points.txt and click OK.
- In the Control Points Import window, click the Spatial Reference browse button.
- In the XY Coordinate Systems Available box, type 25832 and press Enter.
- Click ETRS 1989 UTM Zone 32N.
- Click the Current Z box. In the Z Coordinate Systems Available box, type 7837 and press Enter.
- Click DHHN2016 height and click OK.
- For Choose a delimiter, accept the default delimiter, comma.
- Click Next.
- Review the column labels.
The default values are correct.
- Click Import.
The control points are added to the globe view.
- In the Alignment pane, in the Control Points section, check Dataset. Expand Dataset to see the new Ground_Control_Points data.
The Standard Deviations section allows you to modify the given accuracy (a priori standard deviations) of the image positions (XYZ position and rotation angles) and of the imported ground control points. For this tutorial, the default values are correct.
The Region of Interest parameter allows you to specify a region to adjust. For this tutorial, you'll perform alignment on the entire dataset, so there is no need to set a region of interest.
- Click Create.
Clicking Create adds the Alignment tab to the ribbon. The alignment is ready to run.
Running the alignment will start the automatic tie point matching and bundle block adjustment process. This is a computationally intensive process, and the duration of the processing will depend on your computer hardware.
On a computer with 128 GB of RAM, a 24-core AMD Ryzen CPU at 3.8 GHz, and an NVIDIA GeForce RTX 4090 GPU, the process will take approximately 2 hours.
- Click Run.
In the Process Manager pane, the Alignment process status appears.
- Expand the Alignment process to see the steps.
The Process Manager allows you to keep track of the stages of the Alignment process and their status.
This might be a good time to take a break or work on something else while the process runs.
When the process finishes, you can see it listed in the Process Manager pane.
Once the alignment finishes, the QA window appears. This window shows the key statistics of the bundle block adjustment.
Measure ground control points
You can measure ground control points before or after the initial alignment. Doing it after the alignment has the benefit that the software has already refined the image positions and can provide a better indication of where to measure.
- In the QA window, on the Overview tab, scroll down and expand Count.
The Image Measurements column for the Ground Control Points row indicates that no image measurements have been done for the ground control points. You will add some now.
- Optionally, close the QA window.
- On the ribbon, on the Alignment tab, in the Tools section, click Image Measurements.
The measurement window appears. The left pane shows a globe view of the project area and a Control Points table with the available ground control points.
Note:
If the Alignment tab is not visible, in the Project Tree pane, scroll down to the Alignments section and click Frankfurt_AT.
The Image pane shows a set of image measuring tool instructions.
- Review the information. When finished, close the information window.
- In the Control Points table, click the row number for the second row, point 990004.
When you click the row number for the second row, the Image List section updates to show all of the images that contain point 990004, and the first image is shown, along with a pink circle indicating the projected point location.
A ground control point may or may not be visible in any given image, because each was taken from a different location and angle. Trees, buildings, cars, or pedestrians may block the view of the point in some images. Glare or shadow may make a ground control point blend into the background of the image. When measuring, you can skip the images where the point is not visible.
The images you see may not appear in the order they are shown in the tutorial. For each image, you should determine whether the ground control point is visible. If it is not visible, you can skip the image by pressing the F key. If the point is visible, you will zoom in to it using the scroll wheel on your mouse, and click the center of the point in the image to measure the difference between its calculated location and its location in the image.
- Move the pointer over the image and use the scroll wheel of your mouse to zoom in to the pink circled point that represents the projected point location for this image.
This point is in an intersection, and some cars were in the intersection when the image was captured. Fortunately, in this image, the ground control point is visible as a light spot with a darker circle surrounding it.
- Click the center of the ground control point in the image.
The location where you clicked is now marked as a measured point.
The Status column of the table for this image updates with a green Measured Point symbol.
- Press the F key to move to the next image.
- Click the center of the ground control point in the image.
After this point is added, the Find Suggestions button is enabled. This tool is designed to assist you in measuring the ground control points. It uses the projected points and inspects the imagery to find locations that look like the point that you have marked in the current image.
- Click Find Suggestions.
The tool scans the images for this ground control point. This may take a minute or so.
- In the table, click the first suggestion.
The image is displayed with a red square box indicating the location of the suggested point.
The suggestion is good, so you will accept it.
- Click Accept to accept the suggestion.
The measured point is added and the next image is displayed. It also has a suggested point.
- Click Accept to accept the suggestion.
The next point does not have a suggestion, so you will manually add it by clicking the ground control point in the image, the way you did for the first two.
- Click the center of the ground control point in the image.
The measured point is added and the Find Suggestions tool becomes active again.
- Click Find Suggestions.
The tool scans the images for this ground control point. This may take a minute or so.
The tool makes suggestions for more of the images. You can use the table to navigate through these and manually accept each one by clicking the Accept button, or you can accept all of the suggestions by clicking the Accept All button.
- Click Accept All.
Now 46 of the 128 images showing ground control point 990004 have measurements.
Collect points for the Forward camera
You can scroll through the table to look at the distribution of measured points. The Camera column indicates which camera captured each image. You want to ensure that you have about five measurements for each camera. To do this, you will sort the table on the column values.
- Click the Camera column header.
Now the table of images is sorted by camera.
- Scroll down the table to see whether each camera is well represented.
Backward has several measured points. Forward has only one. Left has a few, but could use more.
- Click one of the Forward camera images to measure the control point position.
- Click the center of the ground control point in the image.
- Click Find Suggestions.
Several more images have suggested points.
- Click each of the images with suggested points, verify the suggested point matches a good location for a measured point, and click Accept.
You can also review the points and click Accept All.
Collect measurements for the Left camera
Next, you'll collect some measurements for the left camera.
- Scroll down to the Left camera images.
- Click one of the Left camera images to measure the control point position.
- If the image shows the ground control point, click it to add a measured point. If it does not, press the F key to move to the next image.
- After you've collected a measured point for a Left camera image, click Find Suggestions.
- Click each of the images with suggested points, verify the suggested point matches a good location for a measured point, and click Accept.
You can also review the points and click Accept All.
Now 111 of the images showing ground control point 990004 have measurements. Each camera is well represented. This is enough.
Collect measurements for another ground control point
You've collected measurements for ground control point 990004. The next step is to continue collecting measurements for the other ground control points. You should collect representative measurements for each of the cameras for at least five of the other ground control points. Use the same techniques you learned on the first ground control point.
- If the ground control point is not visible in the image (for example, if it is hidden by a car, building, or tree), press F to skip the image.
- If a suggested point location appears correct, click Accept.
- If the suggested point location does not appear correct, click the location of the ground control point.
- In the Control Points table, click the row number for the third row, point 990007.
When you click the row number for this row, the Image List section updates to show all of the images that contain point 990007, and the first image is shown, along with a pink circle indicating the projected point location.
Note:
Some ground control points, such as point 990007, were not clearly marked on the ground by a point but were collected at a visually distinguishable location, such as a corner of a crosswalk.
In the Frankfurt_City_Collection folder, the GroundControlPoints folder contains a set of images showing a green Measured Point marker at the location of the ground control point.
If you open the 990007 image file in this folder, you'll see that this ground control point was collected at the corner of a crosswalk. For each ground control point, view the corresponding image in this folder to verify the location before measuring.
When conspicuous existing locations are used as ground control points, the surveyor usually notes the location in a set of field notes and takes a picture showing the GPS antenna at that location. The images in this folder simulate that sort of field data.
- Use the scroll wheel on your mouse to zoom closer to the ground control point, then click the corner of the crosswalk.
- Continue measuring points.
Keep working until you have collected about five measurements for each of the five cameras for each of the ground control points.
Remember that a ground control point may or may not be visible in any given image, because each was taken from a different location and angle. Trees, buildings, cars, or pedestrians may block the view of the point in some images. Glare or shadow may make a ground control point blend into the background of the image. When measuring, you can skip the images where the point is not visible.
The images you see may not appear in the order they are shown in the tutorial. For each image, you should determine whether the ground control point is visible. If it is not visible, you can skip the image by pressing the F key. If the point is visible, you will zoom in to it using the scroll wheel on your mouse, and click the center of the point in the image to measure the difference between its calculated location and its location in the image.
The Control Points table shows statistics for the reprojection errors for each control point. If some control points have higher reprojection error statistics than others, you can click the header for the row in the Control Points table, and then in the Image List, search for and remeasure or remove images with high Reprojection Error values.
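The review strategy described above (sort the measurements by reprojection error, then flag the worst) can be sketched like this; the image names, error values, and 1.5 px threshold are all invented for illustration.

```python
# Invented per-image reprojection errors (pixels) for one control point.
measurements = {"IMG_0001.jpg": 0.42, "IMG_0002.jpg": 1.87,
                "IMG_0003.jpg": 0.65, "IMG_0004.jpg": 2.93}

# Sort worst-first, as you would sort the Image List in the tutorial.
worst_first = sorted(measurements.items(), key=lambda kv: kv[1], reverse=True)

# Flag candidates to remeasure or remove, using an assumed 1.5 px threshold.
flagged = [name for name, err in worst_first if err > 1.5]
print(flagged)  # ['IMG_0004.jpg', 'IMG_0002.jpg']
```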
Remove a measurement
If a control point has a high reprojection error, you may need to remove a measurement or remeasure it.
You can examine the statistics in the Control Points table.
In this example, 990002 has the highest Maximum Reprojection Error value.
The values you see in your table will depend on the measurements you make and will not match these sample images.
- Click the row header for point 990002.
- In the Image table, click the column header for Reprojection Error (xy).
The table is sorted by Reprojection Error (xy) value.
- Scroll to see the image with the highest Reprojection Error (xy) value.
- Click the high Reprojection Error (xy) value.
- Click Remove.
In the Control Points table, check that the values improved.
Change a ground control point to a check point
Check points are used for evaluating and reporting on the accuracy of the alignment. Their 3D position and image residuals are estimated using the output image orientation for quality assurance purposes. You'll convert one of the ground control points to a check point.
- In the Control Points table, click the row number for the fourth row (ground control point 990006).
The row for this ground control point is highlighted.
- On the toolbar at the top of the Control Points table, click Set Role and choose CP.
In the table, the role changes to CP, indicating this is a Check Point.
Refine the alignment
After adding and measuring control points, or changing other settings of the alignment, you will run the alignment again to refine the positions based on the new information. This reruns the bundle block adjustment, but it will be much faster than the initial alignment process.
- On the ribbon, on the Alignment tab, in the Process section, click Run.
The Process Manager opens and shows the progress on the alignment process. After one or two minutes, the process completes.
The QA tool pane appears.
To check the quality of the alignment results, examine the statistics on the QA tool pane.
For best results with this data, keep the following in mind:
- The overall Sigma 0 value should be less than 1 px for a well-calibrated photogrammetric camera.
- The RMS of the tie point reprojection error should also be less than 1 px.
- The RMS for the horizontal and vertical object residuals for control points should be less than 1.5 GSD (12 cm).
Also check the count data, such as the number of automatic tie points per image and image measurements per tie point, which indicate how well tie points are distributed in the project area and how well adjacent images are connected by a common measurement. You can also review the tie point visualization in the globe view.
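The 1.5 GSD threshold above implies a ground sample distance of about 8 cm for this dataset, since 1.5 × 8 cm = 12 cm. A quick arithmetic check:

```python
# The control point residual threshold is 1.5 GSD, stated as 12 cm,
# which implies a ground sample distance of about 0.08 m for this dataset.
gsd_m = 0.12 / 1.5            # ground sample distance in meters
threshold_m = 1.5 * gsd_m     # back to the 12 cm threshold

# The QA pane later in this tutorial reports an RMS residual of 0.079 m
# for the ground control points, which is below the threshold.
rms_residual_m = 0.079
print(rms_residual_m <= threshold_m)  # True
```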
Note:
These steps are meant to give you basic guidance for analyzing the alignment results. Doing an in-depth analysis of the quality requires knowledge about project requirements and specifications as well as knowledge about the quality of the input data.
- In the QA pane, click General Information and view the Sigma 0 value.
The value in this example is 0.7616, which is a good value for this dataset.
- On the right side of the QA pane, scroll down to the Reprojection Errors section, view the Automatic Tie Points Reprojection Errors section, and click the View table button.
The RMS value for the tie point reprojection errors in this example is 0.762, which is a good value for this dataset.
- On the right side of the QA pane, scroll up to the 3D Residuals section and view the Ground Control Points Residuals section.
The RMS value for the Ground Control Points Residuals in this example is 0.079 meters, acceptable for this exercise.
- On the left side of the QA pane, scroll down and expand the Count section.
In this example, there are six ground control points with 463 image measurements and one check point with 98 image measurements.
- Optionally, review the other QA statistics and measurements.
- On the QA tool, click the Control Points tab.
The Control Points table appears.
You can use this table to check the X, Y, and Z residuals for each control point. Unexpectedly large Delta XYZ values may indicate that points need to be remeasured.
You can also review the geography of the actual project data (ground control points, automatic tie points, image positions).
- On the ribbon, on the Alignment tab, in the Display section, click Automatic Tie Points.
The automatic tie points are drawn in the globe view.
If other elements, such as Camera Positions, are drawing over the Automatic Tie Points, in the Project Tree pane, click Visualization, expand Capture Sessions, expand Frankfurt_Flight_RS, and for each camera, turn off Camera Positions.
- Click the Automatic Tie Points drop-down arrow and choose RMS of Reprojection Errors.
The globe view updates to show the automatic tie points symbolized by the RMS of the reprojection errors.
- On the QA tool, click the Automatic Tie Points tab.
The table shows the automatic tie points.
You can view and sort the data in this table to identify the automatic tie points with the highest error values.
- On the QA tool, click the Overview tab. On the right side, scroll to the Reprojection Errors section and view the Automatic Tie Points Reprojection Errors histogram.
The symbology of the histogram matches the symbology of the globe view.
- On the ribbon, on the Alignment tab, in the Results section, click Report.
- In the Create Alignment Report window, browse to a location to save the report. For Name, type Frankfurt_AT_report.
- Click Save.
The PDF is saved on your computer and provides a way to share the QA statistics of the alignment.
- Close the QA tool and save the project.
You've performed an initial alignment, added control points, refined the alignment, and examined the alignment statistics. You also exported a PDF copy of the alignment statistics to document your work and share it with your stakeholders.
Next, you'll use the aligned data to create a reconstruction.
Perform a reconstruction
Now that the alignment process is complete and the results have been examined and determined to be high quality, you are ready to create output products. For this tutorial, you'll create a 3D point cloud and a 3D mesh.
Create a reconstruction
The first step to generate the products is to create a reconstruction.
- On the ribbon, click the Home tab. In the Processing section, click New Reconstruction.
- In the Reconstruction pane, for Reconstruction Name, type Frankfurt_RS_3D.
This reconstruction session will be used to create two 3D outputs.
- For Capture Scenario, click the drop-down list and choose Aerial Oblique.
Choosing a scenario sets some output products and processing settings.
The Aerial Oblique setting is useful now because the sample data is a multihead capture session, and all the available imagery will be used to create the output 3D products. The Aerial Nadir setting is more useful when you are creating 2D products. For optimal quality, 2D products should be produced using only nadir images.
- In the Camera Sessions section, check the Frankfurt_AT alignment session that you created.
The alignment is selected.
- In the Products section, review the output products.
The Point Cloud and Mesh products are highlighted.
The SLPK mesh format will be exported by default. You can check other formats for the output mesh if you choose.
- In the Workspace section, specify a local folder for the output of the reconstruction.
The results of the reconstruction process will be stored there. Ensure that there is enough disk space for the output.
- In the Optional section, for Quality, click Ultra.
The Ultra setting will run the 3D reconstruction at the native image resolution. This will take a longer time to process than the High quality option, but the results will look better. On a single computer with 128 GB RAM, AMD Ryzen 24 core CPU at 3.8 GHz, and Nvidia GeForce RTX4090 GPU, the process will take approximately 8 hours.
You can choose the High quality option if you want the output to have slightly reduced detail and lower texture resolution.
Note:
The reconstruction process is designed to support processing in a distributed environment, with a local network of workstations running ArcGIS Reality Studio serving as processing nodes. To run efficiently in such an environment, the process is split into individual tasks and the project is divided into manageable subprojects. To run a reconstruction on multiple nodes, you need to specify the following:
- A workspace, where the results of the reconstruction run are collected. The workspace needs to be accessible for each processing node.
- A temporary processing folder, which is used to store intermediate processing results for the automatically defined subprojects.
- For Region of Interest, choose Frankfurt_AOI.
The Region of Interest parameter limits processing of your output products to the images and area relevant to your project.
- For Water Body Geometries, choose Frankfurt_waterbody.
The Water Body Geometries parameter is used to flatten and simplify areas within water bodies. Because of its reflective nature, water is difficult to reconstruct and can lead to undesirable outputs.
- For Correction Geometries, accept the default value of None.
- Click Create.
This completes the reconstruction setup. The reconstruction is added to the Project Tree pane.
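The workspace step above asks you to ensure there is enough disk space for the reconstruction output. As a rough safeguard, you can check free space programmatically before submitting the job. This is a minimal sketch using the Python standard library; the folder path and the 60 GB threshold are placeholder assumptions, not values from the tutorial, so size the margin to your expected output volume.

```python
# Minimal sketch: check free disk space in the reconstruction workspace
# before submitting. Threshold and path below are illustrative assumptions.
import shutil


def has_enough_space(folder: str, required_gb: float) -> bool:
    """Return True if the drive holding `folder` has at least
    `required_gb` gigabytes free."""
    free_bytes = shutil.disk_usage(folder).free
    return free_bytes >= required_gb * 1024**3


# Example usage (hypothetical workspace path):
# if not has_enough_space(r"D:\Frankfurt_Workspace", 60):
#     raise SystemExit("Not enough free space for the reconstruction output.")
```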
Run the reconstruction
Now that the reconstruction has been set up, the next step is to run it. This will take some time, depending on your computer resources. On a single computer with 128 GB RAM, AMD Ryzen 24 core CPU at 3.8 GHz, and Nvidia GeForce RTX4090 GPU, the process will take approximately 8 hours. Adding processing nodes will make the process faster.
- On the ribbon, on the Reconstruction tab, in the Processing section, click Submit.
After you click Submit, the Process Manager will show the reconstruction process as pending.
- On the ribbon, on the Reconstruction tab, in the Workspace section, click Start Contribution.
The Process Manager now shows the status of the reconstruction process.
You can use the Workspace Monitor to get an overview of which machine is contributing to a reconstruction job. In this example, there is only one machine, but you can use multiple machines to process a reconstruction.
You can use the Job Monitor to get an overview of which task of a reconstruction job is running.
After the analysis step is finished, the globe view will show the processing progress as well. You can observe the individual stereo models being processed in dense matching. Later in the process, you'll see individual tiles of the point cloud and the mesh added to the globe view.
Once the process has finished, the products are added to the Project Tree pane. You can use the Visualization tab to show or hide these products.
- Wait for the reconstruction process to run.
The Process Manager will indicate when each process is done.
For this example, it takes approximately 8 hours on the single example machine.
- In the Project Tree pane, click Visualization.
- In the Project Tree pane, scroll down to the Reconstructions section, expand Frankfurt_RS_3D, expand Products, and check Point Cloud.
- Click the Globe tab to view your output.
If necessary, in the Project Tree pane, on the Visualization tab, uncheck other layers.
Optionally, turn off the point cloud layer and turn on the mesh layer, and explore the results.
- On the ribbon, on the Reconstruction tab, click Open Results Folder.
This opens the results folder in Microsoft File Explorer. It contains the 3D point cloud and the 3D mesh, both in I3S scene layer package (SLPK) format. Use the .slpk files to add the products to ArcGIS Online.
If you need to deliver your reconstruction products in a different tiling scheme or projection, on the Reconstruction tab, click the Export button and choose an alternative export option.
- If you do not run the process, view the results.
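The SLPK outputs mentioned above can be uploaded to ArcGIS Online as scene layer packages. The following sketch uses the ArcGIS API for Python; the item title, tags, and file name are illustrative assumptions, and the upload itself (shown commented out) requires the arcgis package and valid credentials.

```python
# Hedged sketch: preparing SLPK outputs for upload to ArcGIS Online.
# File names and tags below are placeholders, not values from the tutorial.
from pathlib import Path


def slpk_item_properties(slpk_path: str) -> dict:
    """Build the item properties dictionary for a scene layer package
    upload. 'Scene Package' is the ArcGIS Online item type for .slpk files."""
    return {
        "type": "Scene Package",
        "title": Path(slpk_path).stem,
        "tags": "reality, reconstruction, Frankfurt",
    }


# Example: build properties for a hypothetical mesh output.
props = slpk_item_properties("Frankfurt_RS_3D_mesh.slpk")

# Upload and publish (requires the arcgis package and credentials):
# from arcgis.gis import GIS
# gis = GIS("https://www.arcgis.com", "username", "password")
# item = gis.content.add(props, data="Frankfurt_RS_3D_mesh.slpk")
# item.publish()  # creates a hosted scene layer from the package
```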
In this tutorial, you have created an ArcGIS Reality Studio project, added a capture session, performed an initial alignment, measured ground control points, and refined the alignment. You evaluated the quality of the alignment and determined that it was acceptable. You used the alignment to create a reconstruction, and you used that reconstruction to create point cloud and 3D mesh outputs. These can be shared to ArcGIS Online or used with local applications on your computer. You can use a similar process in the reconstruction stage to create 2D products such as true orthophotos and digital surface models. The main difference for creating 2D outputs is that you would use the Aerial Nadir scenario and limit the camera session to nadir camera captures.
You can find more tutorials in the tutorial gallery.