Master the 3D Reconstruction Process: A Step-by-Step Guide


The journey from 2D photographs to 3D models follows a structured path.

This path consists of distinct steps that build upon one another to transform flat images into spatial information.

Understanding this pipeline is crucial for anyone trying to create high-quality 3D reconstructions.

Let me explain…

Most people think 3D reconstruction means:

  • Taking random photos around an object
  • Pressing a button in expensive software
  • Waiting for magic to occur
  • Getting perfect results each time
  • Skipping the basics

No thanks.

The most successful 3D reconstructions I have seen are built on three core principles:

  • They use pipelines that work with fewer images but position them better.
  • They ensure that users spend less time processing but achieve cleaner results.
  • They enable troubleshooting faster because users know exactly where to look.

Therefore, this hints at a valuable lesson:

Your 3D models can only be as good as your understanding of how they’re created.

Looking at this from a scientific perspective is really key.

Let’s dive right into it!



The Complete 3D Reconstruction Workflow

Let me highlight the 3D reconstruction pipeline with photogrammetry. The process follows a logical sequence of steps, as illustrated below.

What is important to notice is that every step builds upon the previous one. Therefore, the quality of each stage directly impacts the final result, which is essential to keep in mind!


With that in mind, let’s detail each step, focusing on both the theory and the practical implementation.

Natural Feature Extraction: Finding the Distinctive Points

Natural feature extraction is the foundation of the photogrammetry process. It identifies distinctive points in images that can be reliably located across multiple photographs.

These points serve as anchors that tie different views together.

🌱

Common feature extraction algorithms include:

Algorithm | Strengths | Weaknesses | Best For
SIFT | Scale and rotation invariant | Computationally expensive | High-quality, general-purpose reconstruction
SURF | Faster than SIFT | Less accurate than SIFT | Quick prototyping
ORB | Very fast, no patent restrictions | Less robust to viewpoint changes | Real-time applications

Let’s implement a simple feature extraction using OpenCV:

#%% SECTION 1: Natural Feature Extraction
import cv2
import numpy as np
import matplotlib.pyplot as plt

def extract_features(image_path, feature_method='sift', max_features=2000):
    """
    Extract features from an image using different methods.
    """

    # Read the image in color and convert to grayscale
    img = cv2.imread(image_path)
    if img is None:
        raise ValueError(f"Couldn't read image at {image_path}")
    
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # Initialize feature detector based on method
    if feature_method.lower() == 'sift':
        detector = cv2.SIFT_create(nfeatures=max_features)
    elif feature_method.lower() == 'surf':
        # Note: SURF is patented and may not be available in all OpenCV distributions
        detector = cv2.xfeatures2d.SURF_create(400)  # Adjust threshold as needed
    elif feature_method.lower() == 'orb':
        detector = cv2.ORB_create(nfeatures=max_features)
    else:
        raise ValueError(f"Unsupported feature method: {feature_method}")
    
    # Detect and compute keypoints and descriptors
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    
    # Create visualization
    img_with_features = cv2.drawKeypoints(
        img, keypoints, None, 
        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS
    )
    
    print(f"Extracted {len(keypoints)} {feature_method.upper()} features")
    
    return keypoints, descriptors, img_with_features

image_path = "sample_image.jpg"  # Replace with your image path

# Extract features with different methods
kp_sift, desc_sift, vis_sift = extract_features(image_path, 'sift')
kp_orb, desc_orb, vis_orb = extract_features(image_path, 'orb')

What I do here is run through an image and hunt for distinctive patterns that stand out from their surroundings.

These patterns create mathematical “signatures” called descriptors that remain recognizable even when viewed from different angles or distances. 

Think of them as unique fingerprints that can be matched across multiple photographs.

The visualization step reveals exactly what the algorithm finds important in your image.

# Display results
plt.figure(figsize=(12, 6))
    
plt.subplot(1, 2, 1)
plt.title(f'SIFT Features ({len(kp_sift)})')
plt.imshow(cv2.cvtColor(vis_sift, cv2.COLOR_BGR2RGB))
plt.axis('off')
    
plt.subplot(1, 2, 2)
plt.title(f'ORB Features ({len(kp_orb)})')
plt.imshow(cv2.cvtColor(vis_orb, cv2.COLOR_BGR2RGB))
plt.axis('off')
    
plt.tight_layout()
plt.show()

Notice how corners, edges, and textured areas attract more keypoints, while smooth or uniform regions remain largely ignored.

This visual feedback is invaluable for understanding why some objects reconstruct better than others.


Feature Matching: Connecting Images Together

Once features are extracted, the next step is to find correspondences between images. This process identifies which points in different images represent the same physical point in the real world. Feature matching creates the connections needed to determine camera positions.

I’ve seen countless attempts fail because the algorithm couldn’t reliably connect the same points across different images.

The ratio test is the silent hero that weeds out ambiguous matches before they poison your reconstruction.

#%% SECTION 2: Feature Matching
import cv2
import numpy as np
import matplotlib.pyplot as plt

def match_features(descriptors1, descriptors2, method='flann', ratio_thresh=0.75):
    """
    Match features between two images using different methods.
    """

    # Convert descriptors to appropriate type if needed
    if descriptors1 is None or descriptors2 is None:
        return []
    
    if method.lower() == 'flann':
        # FLANN parameters
        if descriptors1.dtype != np.float32:
            descriptors1 = np.float32(descriptors1)
        if descriptors2.dtype != np.float32:
            descriptors2 = np.float32(descriptors2)
            
        FLANN_INDEX_KDTREE = 1
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
        search_params = dict(checks=50)  # Higher values = more accurate but slower
        
        flann = cv2.FlannBasedMatcher(index_params, search_params)
        matches = flann.knnMatch(descriptors1, descriptors2, k=2)
    else:  # Brute Force
        # For ORB descriptors
        if descriptors1.dtype == np.uint8:
            bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
        else:  # For SIFT and SURF descriptors
            bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=False)
        
        matches = bf.knnMatch(descriptors1, descriptors2, k=2)
    
    # Apply Lowe's ratio test
    good_matches = []
    for match in matches:
        if len(match) == 2:  # Sometimes fewer than 2 matches are returned
            m, n = match
            if m.distance < ratio_thresh * n.distance:
                good_matches.append(m)
    
    return good_matches

def visualize_matches(img1, kp1, img2, kp2, matches, max_display=100):
    """
    Create a visualization of feature matches between two images.
    """

    # Limit the number of matches to display
    matches_to_draw = matches[:min(max_display, len(matches))]
    
    # Create match visualization
    match_img = cv2.drawMatches(
        img1, kp1, img2, kp2, matches_to_draw, None,
        flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS
    )
    
    return match_img

# Load two images
img1_path = "image1.jpg"  # Replace with your image paths
img2_path = "image2.jpg"
    
# Extract features using SIFT (or your chosen method)
kp1, desc1, _ = extract_features(img1_path, 'sift')
kp2, desc2, _ = extract_features(img2_path, 'sift')
    
# Match features
good_matches = match_features(desc1, desc2, method='flann')
    
print(f"Found {len(good_matches)} good matches")

The matching process works by comparing feature descriptors between two images, measuring their mathematical similarity. For every feature in the first image, we find its two closest matches in the second image and assess their relative distances.

If the closest match is significantly better than the second-best (as controlled by the ratio threshold), we consider it reliable.

# Visualize matches
img1 = cv2.imread(img1_path)
img2 = cv2.imread(img2_path)
match_visualization = visualize_matches(img1, kp1, img2, kp2, good_matches)
    
plt.figure(figsize=(12, 8))
plt.imshow(cv2.cvtColor(match_visualization, cv2.COLOR_BGR2RGB))
plt.title(f"Feature Matches: {len(good_matches)}")
plt.axis('off')
plt.tight_layout()
plt.show()

Visualizing these matches reveals the spatial relationships between your images.

Good matches form a consistent pattern that reflects the transformation between viewpoints, while outliers appear as random connections.

This pattern provides immediate feedback on image quality and camera positioning—clustered, consistent matches suggest good reconstruction potential.

🦥 Geeky Note: The ratio_thresh parameter (0.75) is Lowe’s original suggestion and works well in most situations. Lower values (0.6-0.7) produce fewer but more reliable matches, which is preferable for scenes with repetitive patterns. Higher values (0.8-0.9) yield more matches but increase the chance of outliers contaminating your reconstruction.

Beautiful. Now, let us move on to the main stage: the Structure from Motion node.

Structure From Motion: Placing Cameras in Space

Structure from Motion (SfM) reconstructs both the 3D scene structure and the camera motion from the 2D image correspondences. This process determines where each photo was taken from and creates an initial sparse point cloud of the scene.

Key steps in SfM include:

  1. Estimating the fundamental or essential matrix between image pairs
  2. Recovering camera poses (position and orientation)
  3. Triangulating 3D points from 2D correspondences
  4. Constructing a track graph to connect observations across multiple images (see the sketch right after this list)
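
Step 4 is worth a quick illustration of its own, since it is not covered by the code later in this section. Below is a minimal, hypothetical sketch that chains pairwise matches into multi-view tracks with a simple union-find; the input format (a dictionary of cv2.DMatch lists keyed by image-index pairs, as produced by match_features above) is an assumption for illustration, not a fixed API.

#%% Track graph sketch (hypothetical helper)
def build_tracks(pairwise_matches):
    """
    Chain pairwise matches into multi-view tracks with a simple union-find.

    pairwise_matches: dict mapping (img_i, img_j) -> list of cv2.DMatch,
    where queryIdx indexes keypoints of img_i and trainIdx those of img_j.
    Returns a list of tracks, each a set of (image_id, keypoint_idx) pairs.
    """
    parent = {}

    def find(node):
        # Find the root of a node, compressing the path along the way
        parent.setdefault(node, node)
        root = node
        while parent[root] != root:
            root = parent[root]
        while parent[node] != root:
            parent[node], node = root, parent[node]
        return root

    def union(a, b):
        parent[find(a)] = find(b)

    # Every match links two observations of the same physical point
    for (img_i, img_j), matches in pairwise_matches.items():
        for m in matches:
            union((img_i, m.queryIdx), (img_j, m.trainIdx))

    # Group observations by their root: each group is one candidate 3D point
    tracks = {}
    for node in list(parent):
        tracks.setdefault(find(node), set()).add(node)

    # Keep only tracks observed in at least two distinct images
    return [track for track in tracks.values() if len({img for img, _ in track}) >= 2]

Longer tracks (points observed in many images) generally triangulate more reliably and constrain the camera poses better during the later optimization.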

The essential matrix encodes the geometric relationship between two camera viewpoints, revealing how they’re positioned relative to one another in space.

This mathematical relationship is the foundation for reconstructing both the camera positions and the 3D structure they observed.

#%% SECTION 3: Structure from Motion
import cv2
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def estimate_pose(kp1, kp2, matches, K, method=cv2.RANSAC, prob=0.999, threshold=1.0):
    """
    Estimate the relative pose between two cameras using matched features.
    """

    # Extract matched points
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    
    # Estimate essential matrix
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method, prob, threshold)
    
    # Recover pose from essential matrix
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    
    inlier_matches = [matches[i] for i in range(len(matches)) if mask[i] > 0]
    print(f"Estimated pose with {np.sum(mask)} inliers out of {len(matches)} matches")
    
    return R, t, mask, inlier_matches

def triangulate_points(kp1, kp2, matches, K, R1, t1, R2, t2):
    """
    Triangulate 3D points from two views.
    """

    # Extract matched points
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    
    # Create projection matrices
    P1 = np.dot(K, np.hstack((R1, t1)))
    P2 = np.dot(K, np.hstack((R2, t2)))
    
    # Triangulate points
    points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    
    # Convert to 3D points
    points_3d = points_4d[:3] / points_4d[3]
    
    return points_3d.T

def visualize_points_and_cameras(points_3d, R1, t1, R2, t2):
    """
    Visualize 3D points and camera positions.
    """

    fig = plt.figure(figsize=(10, 8))
    ax = fig.add_subplot(111, projection='3d')
    
    # Plot points
    ax.scatter(points_3d[:, 0], points_3d[:, 1], points_3d[:, 2], c='b', s=1)
    
    # Helper function to create camera visualization
    def plot_camera(R, t, color):
        # Camera center
        center = -R.T @ t
        ax.scatter(center[0], center[1], center[2], c=color, s=100, marker='o')
        
        # Camera axes (showing orientation)
        axes_length = 0.5  # Scale to make it visible
        for i, c in zip(range(3), ['r', 'g', 'b']):
            axis = R.T[:, i] * axes_length
            ax.quiver(center[0], center[1], center[2], 
                      axis[0], axis[1], axis[2], 
                      color=c, arrow_length_ratio=0.1)
    
    # Plot cameras
    plot_camera(R1, t1, 'red')
    plot_camera(R2, t2, 'green')
    
    ax.set_title('3D Reconstruction: Points and Cameras')
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
    ax.set_zlabel('Z')
    
    # Attempt to make axes equal
    max_range = np.max([
        np.max(points_3d[:, 0]) - np.min(points_3d[:, 0]),
        np.max(points_3d[:, 1]) - np.min(points_3d[:, 1]),
        np.max(points_3d[:, 2]) - np.min(points_3d[:, 2])
    ])
    
    mid_x = (np.max(points_3d[:, 0]) + np.min(points_3d[:, 0])) * 0.5
    mid_y = (np.max(points_3d[:, 1]) + np.min(points_3d[:, 1])) * 0.5
    mid_z = (np.max(points_3d[:, 2]) + np.min(points_3d[:, 2])) * 0.5
    
    ax.set_xlim(mid_x - max_range * 0.5, mid_x + max_range * 0.5)
    ax.set_ylim(mid_y - max_range * 0.5, mid_y + max_range * 0.5)
    ax.set_zlim(mid_z - max_range * 0.5, mid_z + max_range * 0.5)
    
    plt.tight_layout()
    plt.show()

🦥 Geeky Note: The RANSAC threshold parameter (threshold=1.0) determines how strict we're about geometric consistency. I’ve found that 0.5-1.0 works well for controlled environments, but increasing to 1.5-2.0 helps with outdoor scenes where wind might cause slight camera movements. The probability parameter (prob=0.999) ensures high confidence but increases computation time; 0.95 is sufficient for prototyping.

The essential matrix estimation uses matched feature points and the camera’s internal parameters to calculate the geometric relationship between images.

This relationship is then decomposed to extract rotation and translation information, essentially determining where each photo was taken from in 3D space. The accuracy of this step directly affects everything that follows.


# This is a simplified example - in practice you'd use images and matches
# from the previous steps
    
# Example camera intrinsic matrix (replace with your calibrated values)
K = np.array([
        [1000, 0, 320],
        [0, 1000, 240],
        [0, 0, 1]
])
    
# For the first camera, we use identity rotation and zero translation
R1 = np.eye(3)
t1 = np.zeros((3, 1))
    
# Load images, extract features, and match as in previous sections
img1_path = "image1.jpg"  # Replace with your image paths
img2_path = "image2.jpg"
    
img1 = cv2.imread(img1_path)
img2 = cv2.imread(img2_path)
    
kp1, desc1, _ = extract_features(img1_path, 'sift')
kp2, desc2, _ = extract_features(img2_path, 'sift')
    
matches = match_features(desc1, desc2, method='flann')
    
# Estimate pose of second camera relative to first
R2, t2, mask, inliers = estimate_pose(kp1, kp2, matches, K)
    
# Triangulate points
points_3d = triangulate_points(kp1, kp2, inliers, K, R1, t1, R2, t2)

Once camera positions are established, triangulation projects rays from matched points in multiple images to find out where they intersect in 3D space.

# Visualize the result
visualize_points_and_cameras(points_3d, R1, t1, R2, t2)

These intersections form the initial sparse point cloud, providing the skeleton upon which dense reconstruction will later build. The visualization shows both the reconstructed points and the camera positions, helping you understand the spatial relationships in your dataset.

🌱 SfM works best with a good network of overlapping images. Aim for at least 60% overlap between adjacent images for reliable reconstruction.

Bundle Adjustment: Optimizing for Accuracy

There is an additional optimization stage that comes in within the Structure from Motion “compute node”.

This is known as bundle adjustment.

It is a refinement step that jointly optimizes camera parameters and 3D point positions. What this means is that it minimizes the reprojection error, i.e. the difference between observed image points and the projections of their corresponding 3D points.

Does this make sense to you? Essentially, this optimization is great because it allows you to:

  • improve the accuracy of the reconstruction
  • correct for accumulated drift
  • ensure global consistency of the model

At this stage, this should be enough to give you a good intuition of how it works.
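
To make the reprojection error concrete, here is a minimal, self-contained sketch (not the full multi-camera optimizer used by SfM tools): a synthetic scene where a single camera pose and a handful of 3D points are refined jointly with scipy.optimize.least_squares. The intrinsics, pose, and noise levels are made up purely for illustration.

#%% Bundle adjustment: minimal reprojection-error sketch
import cv2
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, rvec, tvec, K):
    """Project 3D points into the image with a rotation vector, translation, and intrinsics K."""
    rvec = np.asarray(rvec, dtype=np.float64).reshape(3, 1)
    tvec = np.asarray(tvec, dtype=np.float64).reshape(3, 1)
    projected, _ = cv2.projectPoints(points_3d, rvec, tvec, K, None)
    return projected.reshape(-1, 2)

def reprojection_residuals(params, n_points, observations, K):
    """Unpack [rvec, tvec, 3D points] and return the stacked reprojection residuals."""
    rvec, tvec = params[:3], params[3:6]
    points_3d = params[6:].reshape(n_points, 3)
    return (project(points_3d, rvec, tvec, K) - observations).ravel()

# Synthetic scene: made-up intrinsics, pose, and points
K = np.array([[1000, 0, 320], [0, 1000, 240], [0, 0, 1]], dtype=np.float64)
rng = np.random.default_rng(42)
points_true = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])
rvec_true, tvec_true = np.array([0.1, -0.05, 0.02]), np.array([0.2, 0.0, 0.1])
observations = project(points_true, rvec_true, tvec_true, K)

# Perturbed initial estimates, standing in for the raw SfM output
x0 = np.hstack([rvec_true + 0.05, tvec_true + 0.05,
                (points_true + rng.normal(0, 0.05, points_true.shape)).ravel()])

result = least_squares(reprojection_residuals, x0,
                       args=(len(points_true), observations, K))

initial_cost = 0.5 * np.sum(reprojection_residuals(x0, len(points_true), observations, K) ** 2)
print(f"Reprojection cost before adjustment: {initial_cost:.2f}")
print(f"Reprojection cost after adjustment:  {result.cost:.2f}")

Real bundle adjusters (such as the one inside COLMAP) do the same thing over hundreds of cameras and millions of points, typically with sparse Levenberg-Marquardt solvers.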

🌱 In larger projects, incremental bundle adjustment (optimizing after adding each new camera) can improve both speed and stability compared to a single global adjustment at the end.

Dense Matching: Creating Detailed Reconstructions

After establishing camera positions and sparse points, the final step is dense matching to create a detailed representation of the scene.

Dense matching uses the known camera parameters to match many more points between images, resulting in a complete point cloud. A minimal example follows the list of approaches below.

Common approaches include:

  • Multi-View Stereo (MVS)
  • Patch-based Multi-View Stereo (PMVS)
  • Semi-Global Matching (SGM)
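
To give a concrete feel for dense matching, here is a minimal sketch of the Semi-Global Matching idea using OpenCV's StereoSGBM on a rectified stereo pair. The image paths ("left.jpg", "right.jpg") are placeholders and the parameters are generic starting values, not tuned recommendations.

#%% Dense matching sketch: Semi-Global Matching with OpenCV
import cv2
import numpy as np

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise ValueError("Could not read the rectified stereo pair")

block_size = 5
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=block_size,
    P1=8 * block_size ** 2,    # penalty for small disparity changes (smoothness)
    P2=32 * block_size ** 2,   # penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# StereoSGBM returns fixed-point disparities scaled by 16
disparity = stereo.compute(left, right).astype(np.float32) / 16.0
print(f"Disparity range: {disparity.min():.1f} to {disparity.max():.1f}")

# With the Q matrix from stereo rectification, the disparity map can be
# reprojected to a dense 3D point cloud via cv2.reprojectImageTo3D(disparity, Q)

Full MVS pipelines such as COLMAP's patch_match_stereo generalize this pairwise idea to many overlapping views at once.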

Putting It All Together: Practical Tools

The theoretical pipeline is implemented in several open-source and commercial software packages. Each offers different features and capabilities:

Tool | Strengths | Use Case | Pricing
COLMAP | Highly accurate, customizable | Research, precise reconstructions | Free, open-source
OpenMVG | Modular, extensive documentation | Education, integration with custom pipelines | Free, open-source
Meshroom | User-friendly, node-based interface | Artists, beginners | Free, open-source
RealityCapture | Extremely fast, high-quality results | Professional, large-scale projects | Commercial

These tools package the various pipeline steps described above into a more user-friendly interface, but understanding the underlying processes is still essential for troubleshooting and optimization.

Automating the reconstruction pipeline saves countless hours of manual work.

The true productivity boost comes from scripting your entire process end-to-end, from raw photos to dense point cloud.

COLMAP’s command-line interface makes this automation possible, even for complex reconstruction tasks.

#%% SECTION 4: Complete Pipeline Automation with COLMAP
import os
import subprocess
import glob
import numpy as np

def run_colmap_pipeline(image_folder, output_folder, colmap_path="colmap"):
    """
    Run the entire COLMAP pipeline from feature extraction to dense reconstruction.
    """

    # Create output directories if they don't exist
    sparse_folder = os.path.join(output_folder, "sparse")
    dense_folder = os.path.join(output_folder, "dense")
    database_path = os.path.join(output_folder, "database.db")
    
    os.makedirs(output_folder, exist_ok=True)
    os.makedirs(sparse_folder, exist_ok=True)
    os.makedirs(dense_folder, exist_ok=True)
    
    # Step 1: Feature extraction
    print("Step 1: Feature extraction")
    feature_cmd = [
        colmap_path, "feature_extractor",
        "--database_path", database_path,
        "--image_path", image_folder,
        "--ImageReader.camera_model", "SIMPLE_RADIAL",
        "--ImageReader.single_camera", "1",
        "--SiftExtraction.use_gpu", "1"
    ]
    
    try:
        subprocess.run(feature_cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Feature extraction failed: {e}")
        return False
    
    # Step 2: Match features
    print("Step 2: Feature matching")
    match_cmd = [
        colmap_path, "exhaustive_matcher",
        "--database_path", database_path,
        "--SiftMatching.use_gpu", "1"
    ]
    
    try:
        subprocess.run(match_cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Feature matching failed: {e}")
        return False
    
    # Step 3: Sparse reconstruction (Structure from Motion)
    print("Step 3: Sparse reconstruction")
    sfm_cmd = [
        colmap_path, "mapper",
        "--database_path", database_path,
        "--image_path", image_folder,
        "--output_path", sparse_folder
    ]
    
    try:
        subprocess.run(sfm_cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Sparse reconstruction failed: {e}")
        return False
    
    # Find the biggest sparse model
    sparse_models = glob.glob(os.path.join(sparse_folder, "*/"))
    if not sparse_models:
        print("No sparse models found")
        return False
    
    # Sort by model size (using number of images as a proxy)
    largest_model = 0
    max_images = 0
    for i, model_dir in enumerate(sparse_models):
        images_txt = os.path.join(model_dir, "images.txt")
        if os.path.exists(images_txt):
            with open(images_txt, 'r') as f:
                num_images = sum(1 for line in f if line.strip() and not line.startswith("#"))
                num_images = num_images // 2  # Each image has 2 lines
                if num_images > max_images:
                    max_images = num_images
                    largest_model = i
    
    selected_model = os.path.join(sparse_folder, str(largest_model))
    print(f"Chosen model {largest_model} with {max_images} images")
    
    # Step 4: Image undistortion
    print("Step 4: Image undistortion")
    undistort_cmd = [
        colmap_path, "image_undistorter",
        "--image_path", image_folder,
        "--input_path", selected_model,
        "--output_path", dense_folder,
        "--output_type", "COLMAP"
    ]
    
    try:
        subprocess.run(undistort_cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Image undistortion failed: {e}")
        return False
    
    # Step 5: Dense reconstruction (Multi-View Stereo)
    print("Step 5: Dense reconstruction")
    mvs_cmd = [
        colmap_path, "patch_match_stereo",
        "--workspace_path", dense_folder,
        "--workspace_format", "COLMAP",
        "--PatchMatchStereo.geom_consistency", "true"
    ]
    
    try:
        subprocess.run(mvs_cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Dense reconstruction failed: {e}")
        return False
    
    # Step 6: Stereo fusion
    print("Step 6: Stereo fusion")
    fusion_cmd = [
        colmap_path, "stereo_fusion",
        "--workspace_path", dense_folder,
        "--workspace_format", "COLMAP",
        "--input_type", "geometric",
        "--output_path", os.path.join(dense_folder, "fused.ply")
    ]
    
    try:
        subprocess.run(fusion_cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Stereo fusion failed: {e}")
        return False
    
    print("Pipeline accomplished successfully!")
    return True

The script orchestrates a series of COLMAP operations that would normally require manual intervention at each stage. It handles the progression from feature extraction through matching, sparse reconstruction, and finally dense reconstruction, maintaining the correct data flow between steps. This automation becomes invaluable when processing multiple datasets or when iteratively refining reconstruction parameters.

# Replace with your image and output folder paths
image_folder = "path/to/images"
output_folder = "path/to/output"
    
# Path to COLMAP executable (may be just "colmap" if it's in your PATH)
colmap_path = "colmap"
    
run_colmap_pipeline(image_folder, output_folder, colmap_path)

One key aspect is the automated selection of the largest reconstructed model. In difficult datasets, COLMAP sometimes creates multiple disconnected reconstructions rather than a single cohesive model.

The script intelligently identifies and continues with the most complete reconstruction, using image count as a proxy for model quality and completeness.

🦥 Geeky Note: The --SiftExtraction.use_gpu and --SiftMatching.use_gpu flags enable GPU acceleration, speeding up processing by 5-10x. For dense reconstruction, the --PatchMatchStereo.geom_consistency true parameter significantly improves quality by enforcing consistency across multiple views, at the cost of longer processing time.

The Power of Understanding the Pipeline

Understanding the full reconstruction pipeline gives you control over your 3D modeling process. When you encounter issues, knowing which stage might be causing problems allows you to target your troubleshooting efforts effectively.

As illustrated, common issues and their sources include:

  1. Missing or incorrect camera poses: Feature extraction and matching problems
  2. Incomplete reconstruction: Insufficient image overlap
  3. Noisy point clouds: Poor bundle adjustment or camera calibration
  4. Failed reconstruction: Problematic images (motion blur, poor lighting)

The ability to diagnose these issues comes from a deep understanding of how each pipeline component works and interacts with the others.

Next Steps: Practice and Automation

Now that you understand the pipeline, it’s time to put it into practice. Experiment with the provided code examples and try automating the process for your own datasets.

Start with small, well-controlled scenes and gradually tackle more complex environments as you gain confidence.

Remember that the quality of your input images dramatically affects the final result. Take time to capture high-quality photographs with good overlap, consistent lighting, and minimal motion blur.

🌱

References and useful resources

I have compiled some interesting software, tools, and extended algorithm documentation for you:

Software and Tools

  • COLMAP – Free, open-source 3D reconstruction software
  • OpenMVG – Open Multiple View Geometry library
  • Meshroom – Free node-based photogrammetry software
  • RealityCapture – Commercial high-performance photogrammetry software
  • Agisoft Metashape – Commercial photogrammetry and 3D modeling software
  • OpenCV – Computer vision library with feature detection implementations
  • 3DF Zephyr – Photogrammetry software for 3D reconstruction
  • Python – Programming language ideal for 3D reconstruction automation

Algorithms

About the author

Florent Poux, Ph.D. 

Resources

  1. 🏆Awards: Jack Dangermond Award
  2. 📕Book: 3D Data Science with Python
  3. 📜Research: 3D Smart Point Cloud (Thesis)
  4. 🎓Courses: 3D Geodata Academy Catalog
  5. 💻Code: Florent’s Github Repository
  6. 💌3D Tech Digest: Weekly Newsletter