Appendix A

Multi-modal Force/Vision Sensor Fusion in 6-DOF Pose Tracking

Abstract—Sensor-based robot control allows manipulation in dynamic and uncertain environments. Vision can be used to estimate the 6-DOF pose of an object by model-based pose estimation methods, but the estimate is not accurate in all degrees of freedom. Force offers a complementary sensor modality, allowing accurate measurements of the local object shape when the tooltip is in contact with the object. As force and vision are fundamentally different sensor modalities, they cannot be fused directly. We present a method which fuses force and visual measurements using the positional information of the end-effector. By transforming the positions of the tooltip and the camera into a common coordinate frame and modeling the uncertainties of the visual measurement, the sensors can be fused together in an Extended Kalman filter. Experimental results show greatly improved pose estimates when the sensor fusion is used.
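To make the fusion prerequisite concrete, the following minimal sketch (Python with numpy; the frame names, offsets, and values are illustrative assumptions, not the authors' implementation) expresses a camera-frame pose estimate and the tooltip position in one common robot base frame via homogeneous transforms, assuming the forward kinematics and the hand-eye calibration are known.

    import numpy as np

    def homogeneous(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Known calibration/kinematics (assumed available; values are placeholders).
    T_base_ee = homogeneous(np.eye(3), np.array([0.5, 0.0, 0.3]))   # forward kinematics
    T_ee_cam = homogeneous(np.eye(3), np.array([0.0, 0.05, 0.02]))  # hand-eye calibration
    T_ee_tip = homogeneous(np.eye(3), np.array([0.0, 0.0, 0.15]))   # tooltip offset

    # A visual pose estimate of the object, expressed in the camera frame.
    T_cam_obj = homogeneous(np.eye(3), np.array([0.0, 0.0, 0.4]))

    # Express both measurements in the common base frame so they can be fused.
    T_base_obj = T_base_ee @ T_ee_cam @ T_cam_obj     # vision measurement in base frame
    p_base_tip = (T_base_ee @ T_ee_tip)[:3, 3]        # tooltip position in base frame

While the tooltip is in contact, p_base_tip lies on the object surface and therefore constrains T_base_obj; this shared frame is what makes a joint observation model possible.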
I. INTRODUCTION

Robot control in unstructured environments is a challenging problem. Simple position-based control is not adequate if the position of the workpiece is unknown during manipulation, as the uncertainties present in the robot task prevent the robot from following a preprogrammed trajectory. Sensor-based manipulation allows a robot to adapt to a dynamic and uncertain environment: with sensors, the uncertainties of the environment can be modeled and the robot can take actions based on the sensory input. In visual servoing the robot is controlled based on the input from a visual sensor. A 3-D model of the workpiece can be created, and the 6-DOF pose of the object can be determined by pose estimation algorithms. Visual servoing enables such tasks as tracking a moving object with an end-effector mounted camera. However, a single-camera visual measurement is often not accurate in all degrees of freedom. Only the object translations perpendicular to the camera axis can be determined […] synchronization of the positional information and the visual measurement; otherwise vision will give erroneous information while the end-effector is in motion.
In this paper, we present how vision and force can be fused together, taking into account the uncertainty of each individual measurement. A model-based pose estimation algorithm is used to extract the unknown pose of a moving target. The uncertainty of the pose depends on the uncertainty of the measured feature points in the image plane, and this uncertainty is projected into Cartesian space. A tooltip measurement is used to probe the local shape of the object by moving on the object surface while keeping a constant contact force.
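A common way to carry out such a projection, sketched below under the assumptions of a pinhole camera, a small-angle rotation parameterization, and i.i.d. pixel noise (this excerpt does not spell out the paper's exact formulation), is first-order propagation of the feature-point covariance through the Jacobian of the image projection with respect to the pose, giving Sigma_pose ≈ (J^T Sigma_img^-1 J)^-1.

    import numpy as np

    def project(pose, model_points, f=500.0):
        """Pinhole projection of 3-D model points under pose = (tx, ty, tz, rx, ry, rz)."""
        tx, ty, tz, rx, ry, rz = pose
        # Small-angle rotation approximation keeps the sketch short.
        R = np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])
        P = model_points @ R.T + np.array([tx, ty, tz])
        return (f * P[:, :2] / P[:, 2:3]).ravel()   # stacked (u, v) features

    def pose_covariance(pose, model_points, sigma_px=1.0, eps=1e-6):
        """First-order propagation: Sigma_pose ≈ (J^T Sigma_img^-1 J)^-1."""
        y0 = project(pose, model_points)
        J = np.zeros((y0.size, 6))
        for i in range(6):                  # numerical Jacobian d(features)/d(pose)
            d = np.zeros(6)
            d[i] = eps
            J[:, i] = (project(pose + d, model_points) - y0) / eps
        info = J.T @ J / sigma_px**2        # Sigma_img = sigma_px^2 * I
        return np.linalg.inv(info)

    # Illustrative usage: four non-coplanar model points, object 0.5 m ahead.
    model = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                      [0.0, 0.0, 0.1], [0.1, 0.1, 0.05]])
    Sigma_pose = pose_covariance(np.array([0.0, 0.0, 0.5, 0.0, 0.0, 0.0]), model)

For a typical configuration, the resulting 6x6 covariance is markedly larger along the camera axis than perpendicular to it, matching the observability argument above.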
An Extended Kalman filter (EKF) is then used to fuse the measurements over time, taking into account the uncertainty of each individual measurement. To our knowledge this is the first work using contact information to compensate for the uncertainty of visual tracking while the tooltip is sliding on the object surface.
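The sketch below illustrates the kind of sequential fusion an EKF performs once both measurements are expressed in the same frame. The measurement models, covariance values, and the simplification that the tooltip observes a known reference point on the object are illustrative assumptions, not the paper's actual observation model; because both models are linearized here, the linear Kalman update doubles as the EKF update step.

    import numpy as np

    def kf_update(x, P, z, H, R):
        """One Kalman measurement update: fuse measurement z with covariance R."""
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # State: object pose [x, y, z, roll, pitch, yaw].
    x = np.zeros(6)
    P = np.eye(6) * 1e-2

    # Vision observes the full pose; the large z-variance in R reflects the
    # weak depth observability discussed above (values are placeholders).
    H_vision = np.eye(6)
    R_vision = np.diag([1e-4, 1e-4, 1e-2, 1e-3, 1e-3, 1e-3])
    z_vision = np.array([0.01, -0.02, 0.42, 0.0, 0.0, 0.05])
    x, P = kf_update(x, P, z_vision, H_vision, R_vision)

    # While in contact, the tooltip observes the object translation accurately
    # (assuming, for brevity, that it touches a known reference point).
    H_tip = np.hstack([np.eye(3), np.zeros((3, 3))])
    R_tip = np.eye(3) * 1e-6
    z_tip = np.array([0.012, -0.018, 0.405])
    x, P = kf_update(x, P, z_tip, H_tip, R_tip)

After the second update, the translation uncertainty in P collapses along the previously weak camera-axis direction, which is the effect the experimental results in the paper report.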
II. RELATED WORK

Reduction of measurement errors and fusion of several sensory modalities using a Kalman filter (KF) framework is widely used in robotics, for example in 6-DOF pose tracking [1]. However, in the visual servoing context Kalman filters are typically used only for filtering uncertain visual measurements and do not take into account the positional information of the end-effector. Wilson et al. [2] proposed to solve the pose estimation problem for position-based visual servoing using the KF framework, as this balances the effect of measurement uncertainties. Lippiello et al. propose a method for combining visual information from several cameras and the pose of the end-effector together in a KF [3]. However, in their approaches the KF can be understood as a single iteration of an iterative Gauss-Newton procedure for pose estimation, and as such is not likely to give optimal results for the non-linear pose estimation problem.

Control and observation are dual problems. Combining force and vision is often done on the level of control [4], [5], [6]. As there is no common representation for the two sensor modalities, combining the information in one observation model is not straightforward. Previous work on combining haptic information with vision on the observation level primarily uses the two sensors separately: vision is used to generate a 3-D model of an object, and a force sensor to extract physical properties […]