1、附錄 AMulti-modal Force/Vision Sensor Fusion in 6-DOF Pose TrackingAbstract—Sensor based robot control allows manipulation in dynamic and uncertain environments. Vision can be used to estimate 6-DOF pose of an object by mo

2、del-based poseestimation methods, but the estimate is not accurate in all degrees of freedom. Force offers a complementary sensor modality allowing accurate measurements of local object shape when the tooltip is in conta

3、ct with the object. As force and vision are fundamentally different sensor modalities, they cannot be fused directly.We present a method which fuses force and visual measurements using positional information of the end-e

4、ffector. By transforming the position of the tooltip and the camera to a same coordinate frame and modeling the uncertainties of the visual measurement, the sensors can be fused together in an Extended Kalman filter. Exp

5、erimental results show greatly improved pose estimates when the sensor fusion is used.I. INTRODUCTIONRobot control in unstructured environments is a challenging problem. Simple position based control is not adequate,if t
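To make the common-frame idea from the abstract concrete, the sketch below chains homogeneous transforms so that both the camera-based object pose and the tooltip contact point are expressed in the robot base frame. The transform names (T_base_ee, T_ee_cam, T_ee_tip, T_cam_obj) and helper functions are illustrative assumptions, not identifiers from the paper.

    import numpy as np

    def make_transform(R, t):
        """Build a 4x4 homogeneous transform from rotation matrix R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Assumed inputs (hypothetical names):
    #   T_base_ee : end-effector pose in the robot base frame (forward kinematics)
    #   T_ee_cam  : camera pose relative to the end-effector (hand-eye calibration)
    #   T_ee_tip  : tooltip offset relative to the end-effector
    #   T_cam_obj : object pose estimated by the model-based visual method

    def object_pose_in_base(T_base_ee, T_ee_cam, T_cam_obj):
        # Visual estimate expressed in the base frame by chaining the transforms.
        return T_base_ee @ T_ee_cam @ T_cam_obj

    def tooltip_point_in_base(T_base_ee, T_ee_tip):
        # Contact point measured through the kinematics, also in the base frame.
        return (T_base_ee @ T_ee_tip)[:3, 3]

Once both quantities are expressed in the same frame, each can be treated as a measurement of the same object state and weighted by its own uncertainty.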

I. INTRODUCTION

Robot control in unstructured environments is a challenging problem. Simple position-based control is not adequate if the position of the workpiece is unknown during manipulation, as uncertainties present in the robot task prevent the robot from following a preprogrammed trajectory. Sensor-based manipulation allows a robot to adapt to a dynamic and uncertain environment. With sensors the uncertainties of the environment can be modeled and the robot can take actions based on the sensory input. In visual servoing the robot is controlled based on the sensory input from a visual sensor. A 3-D model of the workpiece can be created and the 6-DOF pose of the object can be determined by pose estimation algorithms. Visual servoing enables such tasks as tracking a moving object with an end-effector mounted camera. However, a single-camera visual measurement is often not accurate in all degrees of freedom. Only the object translations perpendicular to the camera axis can be determined [...] synchronization of the positional information and the visual measurement. Otherwise vision will give erroneous information while the end-effector is in motion.

In this paper, we present how vision and force can be fused together, taking into account the uncertainty of each individual measurement. A model-based pose estimation algorithm is used to extract the unknown pose of a moving target. The uncertainty of the pose depends on the uncertainty of the measured feature points in the image plane, and this uncertainty is projected into Cartesian space.
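A minimal sketch of this projection step, assuming a first-order (Jacobian-based) propagation of image-plane feature noise into a 6-DOF pose covariance; the function name and the least-squares form are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def project_feature_covariance(J, Sigma_uv):
        """Propagate image-plane feature noise to an approximate 6x6 pose covariance.

        J        : Jacobian of the stacked feature coordinates w.r.t. the pose (2N x 6)
        Sigma_uv : covariance of the measured feature points (2N x 2N)
        """
        J_pinv = np.linalg.pinv(J)            # least-squares pose sensitivity, 6 x 2N
        return J_pinv @ Sigma_uv @ J_pinv.T   # first-order propagation, 6 x 6

    # Example: N = 4 features with isotropic 1-pixel noise.
    J = np.random.randn(8, 6)
    Sigma_pose = project_feature_covariance(J, np.eye(8))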

A tooltip measurement is used to probe the local shape of the object by moving on the object surface and keeping a constant contact force. An Extended Kalman filter (EKF) is then used to fuse the measurements over time by taking into account the uncertainty of each individual measurement. To our knowledge this is the first work using contact information to compensate the uncertainty of visual tracking while the tooltip is sliding on the object surface.
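The EKF fusion step can be sketched as a generic measurement update applied to both modalities: the visual pose measurement would enter with the projected 6x6 covariance from the sketch above, while a tooltip contact would enter as a low-uncertainty constraint on the touched surface point. This is a minimal illustration under those assumptions, not the paper's exact filter design; the function and variable names are hypothetical.

    import numpy as np

    def ekf_update(x, P, z, R, h, H):
        """One EKF measurement update.

        x, P : current state estimate (object pose) and its covariance
        z, R : measurement and measurement covariance
        h    : measurement function, h(x) predicts the measurement
        H    : Jacobian of h evaluated at x
        """
        y = z - h(x)                           # innovation
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

Fusing the two sensors then amounts to calling the same update with different (z, R, h, H): a large covariance in the poorly observed directions for the visual measurement, and a small covariance along the contact normal for the tooltip.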

II. RELATED WORK

Reduction of measurement errors and fusion of several sensory modalities using a Kalman filter (KF) framework is widely used in robotics, for example in 6-DOF pose tracking [1]. However, in the visual servoing context Kalman filters are typically used only for filtering uncertain visual measurements and do not take into account the positional information of the end-effector. Wilson et al. [2] proposed to solve the pose estimation problem for position-based visual servoing using the KF framework, as this balances the effect of measurement uncertainties. Lippiello et al. propose a method for combining visual information from several cameras and the pose of the end-effector together in a KF [3]. However, in their approaches the KF can be understood as a single iteration of an iterative Gauss-Newton procedure for pose estimation, and as such is not likely to give optimal results for the non-linear pose estimation problem.
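For reference, one Gauss-Newton step for the nonlinear least-squares pose problem looks like the sketch below; running only a single pass (iters=1) corresponds roughly to the single linearized correction that a plain KF/EKF update performs, which is the limitation noted above. The symbols and function names are illustrative assumptions, not taken from [2] or [3].

    import numpy as np

    def gauss_newton_pose(x0, z, h, H_of, Sigma, iters=10):
        """Iteratively refine a pose estimate x by weighted Gauss-Newton.

        h(x)    : predicted measurements for pose x
        H_of(x) : measurement Jacobian evaluated at x
        Sigma   : measurement covariance used to weight the residuals
        """
        x = x0.copy()
        W = np.linalg.inv(Sigma)
        for _ in range(iters):
            r = z - h(x)                                   # residual
            H = H_of(x)
            # Weighted normal equations: (H^T W H) dx = H^T W r
            dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
            x = x + dx
        return x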

Control and observation are dual problems. Combining force and vision is often done at the level of control [4], [5], [6]. As there is no common representation for the two sensor modalities, combining the information in one observation model is not straightforward. Previous work on combining haptic information with vision at the observation level primarily uses the two sensors separately. Vision is used to generate a 3D model of an object and a force sensor to extract physical properties s
