Visual Perception and Control of Underwater Robots, 1st Edition, by Yu
Product details:
- ISBN-10: 1000346617
- ISBN-13: 9781000346619
- Author: Yu
Table of Contents:
CHAPTER 1 Introduction
1.1 Research Background
1.2 Review of Underwater Visual Restoration
1.2.1 Formation of Underwater Image
1.2.2 Visual Restoration Based on Image Formation Model
1.2.3 Visual Restoration Based on Information Fusion
1.3 Review of Deep-Learning-Based Object Detection
1.3.1 Two-Stage Detector
1.3.1.1 RCNN
1.3.1.2 Fast RCNN
1.3.1.3 Faster RCNN
1.3.1.4 RFCN
1.3.2 Single-Stage Detector
1.3.2.1 YOLO
1.3.2.2 SSD
1.3.2.3 RetinaNet
1.3.2.4 RefineDet
1.3.3 Temporal Object Detection
1.3.3.1 Post-Processing
1.3.3.2 Cascade of Detection and Tracking
1.3.3.3 Feature Fusion Based on Motion Estimation
1.3.3.4 Feature Propagation Based on RNN
1.3.3.5 Temporally Sustained Proposal
1.3.3.6 Batch-Processing
1.3.4 Benchmarks of Object Detection
1.3.4.1 PASCAL VOC
1.3.4.2 MS COCO
1.3.4.3 ImageNet VID
1.3.4.4 Evaluation Metrics
1.4 Review of Underwater Stereo Measurement
1.5 Overview of the Subsequent Chapters
References
CHAPTER 2 Adaptive Real-Time Underwater Visual Restoration with Adversarial Critical Learning
2.1 Introduction
2.2 Review of Visual Restoration and Image-to-Image Translation
2.2.1 Traditional Underwater Image Restoration Methods
2.2.2 Image-to-Image Translation
2.3 GAN-Based Restoration with Adversarial Critical Learning
2.3.1 Filtering-Based Restoration Scheme
2.3.2 Architecture of the GAN-Based Restoration Scheme
2.3.3 Objective for GAN-RS
2.3.3.1 Adversarial Loss
2.3.3.2 DCP Loss
2.3.3.3 Underwater Index Loss
2.3.3.4 Full Loss
2.4 Experiments and Discussion
2.4.1 Details of ACL
2.4.1.1 Basic Settings
2.4.1.2 Multistage Loss Strategy
2.4.2 Compared Methods
2.4.3 Runtime Performance
2.4.3.1 Running Environment
2.4.3.2 Time Efficiency
2.4.4 Restoration Results
2.4.4.1 Visualization of Underwater Index
2.4.4.2 Comparison on Restoration Quality
2.4.4.3 Feature-Extraction Tests
2.4.5 Visualization of Discriminator
2.4.6 Discussion
2.5 Concluding Remarks
References
CHAPTER 3 An NSGA-II-Based Calibration for Underwater Binocular Vision Measurement
3.1 Introduction
3.2 Related Work
3.3 Refractive Camera Model
3.4 Akin Triangulation and Refractive Constraint
3.4.1 Akin Triangulation
3.4.2 Refractive Surface Constraint
3.5 Calibration Algorithm
3.5.1 A Novel Usage of Checkerboard
3.5.2 Analysis of the Binocular Housing Parameters
3.5.3 NSGA-II Algorithm
3.5.4 Process of the Calibration Algorithm
3.6 Experiments and Results
3.6.1 Experimental Setup
3.6.2 Results of Calibration
3.6.3 Experiments on Position Measurement
3.6.4 Experiments on Position Measurement
3.6.5 Discussion
3.7 Conclusion and Future Work
References
CHAPTER 4 Joint Anchor-Feature Refinement for Real-Time Accurate Object Detection in Images and Videos
4.1 Introduction
4.2 Review of Deep Learning-Based Object Detection
4.2.1 CNN-Based Static Object Detection
4.2.2 Temporal Object Detection
4.2.3 Sampling for Object Detection
4.3 Dual Refinement Network
4.3.1 Overall Architecture
4.3.2 Anchor-Offset Detection
4.3.2.1 From SSD to RefineDet, then to DRNet
4.3.2.2 Anchor Refinement
4.3.2.3 Deformable Detection Head
4.3.2.4 Feature Location Refinement
4.3.3 Multi-deformable Head
4.3.4 Training and Inference
4.4 Temporal Dual Refinement Networks
4.4.1 Architecture
4.4.2 Training
4.4.3 Inference
4.5 Experiments and Discussion
4.5.1 Ablation Studies of DRNet320-VGG16 on VOC 2007
4.5.1.1 Anchor-Offset Detection
4.5.1.2 Multi-deformable Head
4.5.1.3 Toward More Effective Training
4.5.2 Results on VOC 2007
4.5.3 Results on VOC 2012
4.5.4 Results on COCO
4.5.5 Results on ImageNet VID
4.5.5.1 Accuracy vs. Speed Trade-off
4.5.5.2 Comparison with Other Architectures
4.5.6 Discussion
4.5.6.1 Key Frame Scheduling
4.5.6.2 Further Enhancement of Refinement Networks
4.5.6.3 Refinement Networks for Real-World Object Detection
4.6 Concluding Remarks
References
CHAPTER 5 Rethinking Temporal Object Detection from Robotic Perspectives
5.1 Introduction
5.2 Review of Temporal Detection and Tracking
5.2.1 Temporal Object Detection
5.2.2 Tracking Metrics
5.2.3 Tracking-by-Detection (i.e., MOT)
5.2.4 Detection-SOT Cascade
5.3 On VID Temporal Performance
5.3.1 Non-reference Assessments
5.3.1.1 Recall Continuity
5.3.1.2 Localization Stability
5.3.2 Online Tracklet Refinement
5.3.2.1 Short Tracklet Suppression
5.3.2.2 Fragment Filling
5.3.2.3 Temporal Location Fusion
5.4 SOT-by-Detection
5.4.1 Small-Overlap Suppression
5.4.2 SOT-by-Detection Framework
5.5 Experiments and Discussion
5.5.1 Analysis on VID Continuity/Stability
5.5.1.1 Tracklet Visualization
5.5.1.2 Numerical Evaluation
5.5.2 SOT-by-Detection
5.5.2.1 Speed Comparison of NMS and SOS-NMS
5.5.2.2 SOT-by-Detection vs. Siamese SOT
5.5.3 Discussion
5.5.3.1 Detector-Based Improvement
5.5.3.2 Limitation of SOT-by-Detection
5.6 Concluding Remarks
References
CHAPTER 6 Reveal of Domain Effect: How Visual Restoration Contributes to Object Detection in Aquatic Scenes
6.1 Introduction
6.2 Review of Underwater Visual Restoration and Domain-Adaptive Object Detection
6.2.1 Underwater Visual Restoration
6.2.2 Domain-Adaptive Object Detection
6.3 Preliminary
6.3.1 Preliminary of Data Domain Based on Visual Restoration
6.3.1.1 Domain Generation
6.3.1.2 Domain Analysis
6.3.2 Preliminary of Detector
6.4 Joint Analysis on Visual Restoration and Object Detection
6.4.1 Within-Domain Performance
6.4.1.1 Numerical Analysis
6.4.1.2 Visualization of Convolutional Representation
6.4.1.3 Precision-Recall Analysis
6.4.2 Cross-Domain Performance
6.4.2.1 Cross-Domain Evaluation
6.4.2.2 Cross-Domain Training
6.4.3 Domain Effect on Real-World Object Detection
6.4.3.1 Online Object Detection in Aquatic Scenes
6.4.3.2 Online Domain Analysis
6.4.4 Discussion
6.4.4.1 Recall Efficiency
6.4.4.2 CNN’s Domain Selectivity
6.5 Underwater Vision System and Marine Test
6.5.1 System Design
6.5.2 Underwater Object Counting
6.5.3 Underwater Object Grasping
6.6 Concluding Remarks
References
CHAPTER 7 IWSCR: An Intelligent Water Surface Cleaner Robot for Collecting Floating Garbage
7.1 Introduction
7.2 Prototype Design of IWSCR
7.2.1 Configuration of IWSCR
7.2.2 Framework of Control System
7.3 Accurate and Real-Time Garbage Detection
7.4 Sliding Mode Controller for Vision-Based Steering
7.4.1 Dynamic Model of Underwater Vehicle
7.4.2 Formulation of the Vision-Based Steering
7.4.3 Design and Stability Analysis of Sliding Mode Controller
7.5 Dynamic Grasping Strategy for Floating Bottles
7.5.1 Kinematics and Inverse Kinematics of Manipulator
7.5.2 Description of the Feasible Grasping Strategy
7.6 Experiments and Discussion
7.6.1 Experimental Results of Garbage Detection
7.6.2 Experimental Results of SMC for Vision-Based Steering and Achievement of TTs
7.6.3 Discussion
7.7 Conclusion and Future Work
References
CHAPTER 8 Underwater Target Tracking Control of an Untethered Robotic Fish with a Camera Stabilizer
8.1 Introduction
8.2 System Design of the Robotic Fish with a Camera Stabilizer
8.2.1 Mechatronic Design
8.2.2 CPG-Based Motion Control
8.3 Active Vision Tracking System
8.4 RL-Based Target Tracking Control
8.4.1 Tracking Control Design
8.4.2 Performance Analysis of DDPG-Based Control System
8.5 Experiments and Results
8.5.1 Static and Dynamic Tracking Experiments
8.5.2 Discussion
8.6 Conclusions and Future Work