
Learning Multirobot Joint Action Plans from Simultaneous Task Execution Demonstrations

Murilo Fernandes Martins, Dept. of Elec. and Electronic Engineering, Imperial College London, London, UK, murilo@ieee.org
Yiannis Demiris, Dept. of Elec. and Electronic Engineering, Imperial College London, London, UK, y.demiris@imperial.ac.uk

ABSTRACT

The central problem of designing intelligent robot systems which learn by demonstrations of desired behaviour has been largely studied within the field of robotics. Numerous architectures for action recognition and prediction of intent of a single teacher have been proposed. However, little work has been done addressing how a group of robots can learn by simultaneous demonstrations of multiple teachers. This paper contributes a novel approach for learning multirobot joint action plans from unlabelled data. The robots firstly learn the demonstrated sequence of individual actions using the HAMMER architecture. Subsequently, the group behaviour is segmented over time and space by applying a spatio-temporal clustering algorithm. The experimental results, in which humans teleoperated real robots during a search and rescue task deployment, successfully demonstrated the efficacy of combining action recognition at the individual level with group behaviour segmentation, spotting the exact moment when robots must form coalitions to achieve the goal, thus yielding reasonable generation of multirobot joint action plans.

Categories and Subject Descriptors

I.2.9 [Artificial Intelligence]: Robotics

General Terms

Algorithms, Design, Experimentation

Keywords

Learning by Demonstration, Multirobot Systems, Spectral Clustering

Cite as: Learning Multirobot Joint Action Plans from Simultaneous Task Execution Demonstrations, M. F. Martins, Y. Demiris, Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), van der Hoek, Kaminka, Lespérance, Luck and Sen (eds.), May 10-14, 2010, Toronto, Canada, pp. 931-938. Copyright (c) 2010, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

1. INTRODUCTION

A substantial amount of studies in Multirobot Systems (MRS) addresses the potential applications of engaging multiple robots to collaboratively deploy complex tasks such as search and rescue, distributed mapping and exploration of unknown environments, as well as hazardous tasks and foraging – for an overview of the field, see [13]. Designing distributed intelligent systems, such as MRS, is a profitable technology which brings benefits such as flexibility, redundancy and robustness, among others. Similarly, a substantial amount of studies has proposed numerous approaches to robot Learning by Demonstration (LbD) – for a comprehensive review, see [1]. Equipping robots with the ability to understand the context in which they interact, without the need to configure or program them, is an extremely desirable feature.

Figure 1: The P3-AT mobile robots used in this paper, equipped with onboard computers, cameras, laser and sonar range sensors.

Regarding LbD, the methods which have been proposed are mostly focussed on a single-teacher, single-robot scenario. In [7], a single robot learnt a sequence of actions demonstrated by a single teacher. In [12], the authors presented an approach where a human acted both as a teacher and collaborator to a robot. The robot was able to match the predicted resultant state of the human's movements to the observed state of the environment based on its underlying capabilities. A supervised learning method using Gaussian mixture models was presented in [4], in which a four-legged robot was teleoperated during a navigation task. Few studies have addressed the prediction of intent in adversarial multiagent scenarios, such as the work of [3], in which group manoeuvres could be predicted based upon existing models of group formation. In the work of [5], multiple humanoid robots requested a teacher's demonstration when facing unfamiliar states. In [14], the problem of extracting group behaviour from observed coordinated manoeuvres of multiple agents over time was addressed by using a clustering algorithm. The method presented in [9] allowed a single robot to predict the intentions of 2 humans based on spatio-temporal relationships. However, the challenge of designing an MRS in which multiple robots learn group behaviour by observation remains largely unaddressed.

Figure 3: Diagrammatic statement of the HAMMER architecture. Based on state s_t, multiple inverse models (I1 to In) compute motor commands (M1 to Mn), with which the corresponding forward models (F1 to Fn) form predictions regarding the next state s_t+1 (P1 to Pn), which are verified at s_t+1.

The human operators may perform certain actions sequentially or simultaneously, resulting in a combination of actions, while the robot has access to the joystick commands only.

In order to recognise actions from observed data and manoeuvre commands, this paper makes use of the Hierarchical Attentive Multiple Models for Execution and Recognition (HAMMER) architecture [7], which has been proven to work very well when applied to distinct robot scenarios. HAMMER is based upon the concept of multiple hierarchically connected inverse-forward models. In this architecture, an inverse model takes as inputs the observed state of the environment and the target goal(s), and outputs the motor commands required to achieve or maintain those goal(s). Forward models, on the other hand, take as inputs the observed state and motor commands, and output a prediction of the next state of the environment. As illustrated in Fig. 3, each inverse-forward pair produces a hypothesis by simulating the execution of a primitive behaviour; the predicted state is then compared to the observed state to compute a confidence value. This value represents how correct that hypothesis is, thus determining which robot primitive behaviour would result in the outcome most similar to the observed action.
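To make the prediction-verification cycle concrete, the following is a minimal sketch of one HAMMER iteration over a bank of inverse-forward pairs. The State, MotorCmd and confidence definitions are illustrative assumptions; the paper does not list its implementation.

    // Minimal sketch of one HAMMER prediction-verification cycle.
    // All types and the confidence measure are assumptions for illustration.
    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    struct State    { double x, y, theta; };   // simplified robot state
    struct MotorCmd { double v, w; };          // translational/rotational speed

    using InverseModel = MotorCmd (*)(const State&);                 // state -> command
    using ForwardModel = State (*)(const State&, const MotorCmd&);   // state + command -> prediction

    // Example primitive pair: "drive forward".
    MotorCmd invForward(const State&) { return {0.3, 0.0}; }
    State fwdKinematic(const State& s, const MotorCmd& m) {
        const double dt = 0.1;                 // one control cycle
        return {s.x + m.v * std::cos(s.theta) * dt,
                s.y + m.v * std::sin(s.theta) * dt,
                s.theta + m.w * dt};
    }

    // Confidence grows as the prediction error against the observed state shrinks.
    double confidence(const State& pred, const State& obs) {
        return 1.0 / (1.0 + std::hypot(pred.x - obs.x, pred.y - obs.y));
    }

    int main() {
        State st  {0.00, 0.0, 0.0};            // observed state at time t
        State st1 {0.03, 0.0, 0.0};            // observed state at time t+1

        std::vector<std::pair<InverseModel, ForwardModel>> pairs = {
            {invForward, fwdKinematic},        // one pair per primitive behaviour
        };
        for (const auto& [inv, fwd] : pairs) {
            MotorCmd m = inv(st);              // simulate the primitive
            State pred = fwd(st, m);           // predict the next state
            std::printf("confidence = %.3f\n", confidence(pred, st1));
        }
    }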

3. SYSTEM IMPLEMENTATION

The MRLbD approach proposed in this paper is demonstrated using the aforementioned platform for robot teleoperation, which consists of client/server software written in C++ to control the P3-AT robots (Fig. 1) utilised in the experiments, as well as an implementation of the HAMMER architecture for action recognition and a Matlab implementation of the SC algorithm similar to the one presented in [14]. An overview of the teleoperation platform can be seen in Fig. 4.

The server software comprises the robot cognitive capabilities and resides on the robot's onboard computer. The server is responsible for acquiring the sensor data and sending motor commands to the robot, whereas the client software runs on a remote computer and serves as the interface between the human operator and the robot.
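The paper does not specify the wire format of this client/server link; purely as a hypothetical illustration, the data exchanged on each cycle could be framed along these lines:

    // Hypothetical framing of the client/server exchange (not from the paper).
    #include <cstdint>

    #pragma pack(push, 1)
    struct SensorPacket {            // server -> client, once per cycle
        double   x, y, theta;        // odometry pose
        float    laser[181];         // laser ranges over 0..180 degrees
        float    sonar[16];          // sonar ring ranges
        uint32_t jpegSize;           // size of the JPEG frame that follows
    };
    struct CommandPacket {           // client -> server
        float v;                     // translational speed (m/s)
        float w;                     // rotational speed (rad/s)
    };
    #pragma pack(pop)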

3.1 The robot cognitive capabilities

Within the Robot cognitive capabilities block, the server communicates with the robot hardware by using the well-known robot control interface Player [6], a network server that works as a hardware abstraction layer to interface with a variety of robotic hardware.

Figure 4: Overview of the teleoperation platform developed in this paper. (Blocks: Human-robot interface – joystick, visualisation, robot control; Robot cognitive capabilities – environment perception, Player server, logging, robot hardware; Plan extraction – action recognition (HAMMER), group behaviour segmentation, multirobot plan; all connected over a WiFi network.)
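Player exposes each device through proxy classes over a network connection. A minimal libplayerc++ client loop in the spirit of the server's perception cycle might look as follows (the exact proxy set and update scheme used by the authors are assumptions):

    // Minimal Player client sketch (libplayerc++); proxy choice is an assumption.
    #include <libplayerc++/playerc++.h>

    int main() {
        PlayerCc::PlayerClient    robot("localhost", 6665); // default Player port
        PlayerCc::Position2dProxy pos(&robot, 0);           // odometry and motors
        PlayerCc::LaserProxy      laser(&robot, 0);         // e.g. the Sick LMS-200
        PlayerCc::SonarProxy      sonar(&robot, 0);         // the 16-sensor ring

        for (int i = 0; i < 100; ++i) {
            robot.Read();                        // blocks until fresh sensor data
            double x   = pos.GetXPos();          // current odometric pose
            double y   = pos.GetYPos();
            double yaw = pos.GetYaw();
            (void)x; (void)y; (void)yaw;         // pose would feed perception/logging
            pos.SetSpeed(0.2, 0.0);              // v = 0.2 m/s, w = 0 rad/s
        }
    }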

Initially, the internal odometry sensors are read. This data provides the current robot pose, which is updated as the robot moves around and is used as the ground-truth pose for calculating object poses and building the 2D map of the environment. Odometry sensors are known to inherently accumulate incremental errors and hence lead to inaccurate pose estimates; nevertheless, it is shown later, in Section 5, that this inaccuracy was immaterial to the results.
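The drift mentioned above follows directly from how odometry is integrated: each cycle adds a small displacement expressed in the current heading frame, so measurement errors compound over time. A standard dead-reckoning update (a textbook formulation, not code from the paper) is:

    // Textbook dead-reckoning update from one odometry increment.
    #include <cmath>

    struct Pose { double x, y, theta; };

    // dd: distance travelled this cycle (m); dtheta: heading change (rad).
    Pose integrate(Pose p, double dd, double dtheta) {
        p.x     += dd * std::cos(p.theta + dtheta / 2.0);  // midpoint heading
        p.y     += dd * std::sin(p.theta + dtheta / 2.0);
        p.theta += dtheta;
        return p;                 // small errors in dd/dtheta accumulate here
    }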

The image captured from the robot's camera (320x240 pixels, colour, at 30 frames per second) is compressed using the JPEG algorithm and sent to the client software over a TCP/IP connection on the Wi-Fi network.
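The paper does not name the JPEG implementation used; assuming an OpenCV-like library, the per-frame encoding step amounts to the following sketch:

    // Encode one camera frame as JPEG bytes ready for the TCP socket
    // (OpenCV is an assumption; the paper does not name its JPEG library).
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<unsigned char> encodeFrame(const cv::Mat& frame) {
        std::vector<unsigned char> jpeg;
        std::vector<int> params = {cv::IMWRITE_JPEG_QUALITY, 80}; // quality/bandwidth trade-off
        cv::imencode(".jpg", frame, jpeg, params);
        return jpeg;
    }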

Additionally, the image is also used to recognise objects based upon a known-objects database, using the approach presented in [15]. This algorithm detects the pose (Cartesian coordinates in 3D space, plus rotation about the respective axes) of unique markers. The known-objects database comprises a set of unique markers and the object each marker is attached to, along with offset values to compute the pose of the object from the detected marker's pose.
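This offset lookup is a rigid-body transform composition: the object pose is the detected marker pose composed with the stored offset. In the planar case (a simplification of the paper's full 3D pose, shown only to illustrate the idea):

    // Compose a marker pose with a stored offset to obtain the object pose
    // (planar SE(2) simplification of the paper's 3D pose representation).
    #include <cmath>

    struct Pose2D { double x, y, theta; };

    Pose2D compose(const Pose2D& marker, const Pose2D& offset) {
        const double c = std::cos(marker.theta), s = std::sin(marker.theta);
        return {marker.x + c * offset.x - s * offset.y,
                marker.y + s * offset.x + c * offset.y,
                marker.theta + offset.theta};
    }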

A short-memory algorithm, based upon confidence levels, was also implemented to enhance the object recognition: an object's pose is tracked for approximately 3 seconds after it was last seen. This approach proved extremely useful during the experiments, as the computer vision algorithm cannot detect markers from distances greater than 2 metres, and occlusion is likely to happen in real applications.
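A minimal sketch of such a short-memory mechanism, assuming a linear confidence decay (the paper states only the roughly 3-second window, not the decay law):

    // Track an object for ~3 s after its last sighting via a decaying confidence.
    struct TrackedObject {
        double x, y, theta;      // last estimated object pose
        double confidence;       // 1.0 when just seen, decays towards 0
    };

    // Call once per frame; dt is the time in seconds since the previous call.
    void updateMemory(TrackedObject& obj, bool seenThisFrame, double dt) {
        if (seenThisFrame)
            obj.confidence = 1.0;
        else
            obj.confidence -= dt / 3.0;   // reaches 0 about 3 s after last seen
        if (obj.confidence < 0.0)
            obj.confidence = 0.0;         // 0 means the object is forgotten
    }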

The Sick LMS-200 laser range scanner provides millimetre-accuracy distance measurements (up to 80 metres), ranging from 0 degrees (right-hand side of the robot) to 180 degrees (left-hand side). In addition, 16 sonar range sensors, placed in a ring configuration on the robot, provide moderately accurate distance measurements from 0.1 to 5 metres, each with a 30-degree field of view. Despite their lack of precision, the sonar sensors play a fundamental role in the overall outcome of the teleoperation platform: as the human operator has limited perception of the environment, particular manoeuvres (mainly when reversing the robot) may be potentially dangerous and result in a collision. Thus, obstacle avoidance is achieved using an implementation based upon the well-known VFH (Vector Field Histogram) algorithm [2]. However, the human operator is able to inhibit the sonar-based obstacle avoidance.
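The core of VFH [2] is a polar obstacle-density histogram built from the range readings, from which a direction of low density is then selected. A minimal sketch of the histogram step (only the flavour of the algorithm, with assumed data types):

    // Build a VFH-style polar obstacle-density histogram from range readings.
    #include <algorithm>
    #include <vector>

    struct Reading { double bearing; double range; };   // radians, metres

    std::vector<double> polarHistogram(const std::vector<Reading>& scan,
                                       int sectors, double maxRange) {
        const double kPi = 3.14159265358979323846;
        std::vector<double> h(sectors, 0.0);             // one density per sector
        for (const Reading& r : scan) {
            // Map a bearing in [-pi, pi) onto a sector index.
            int k = static_cast<int>((r.bearing + kPi) / (2.0 * kPi) * sectors);
            k = std::max(0, std::min(k, sectors - 1));
            // Nearer obstacles contribute more density than distant ones.
            h[k] += std::max(0.0, maxRange - r.range);
        }
        return h;                 // steering picks a low-density sector near the goal
    }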
