A System for Video Surveillance and Monitoring

The thrust of CMU research under the DARPA Video Surveillance and Monitoring (VSAM) project is cooperative multi-sensor surveillance to support battlefield awareness. Under our VSAM Integrated Feasibility Demonstration (IFD) contract, we have developed automated video understanding technology that enables a single human operator to monitor activities over a complex area using a distributed network of active video sensors. The goal is to automatically collect and disseminate real-time battlefield information to improve the situational awareness of commanders and their staff. Other military and federal law enforcement applications include providing perimeter security for troops, monitoring peace treaties and refugee movements from unmanned air vehicles, providing security for embassies and airports, and identifying suspected drug and terrorist hide-outs by collecting time-stamped imagery of everyone entering and exiting a building.

Automated video surveillance is an important research area in the commercial sector as well. Technology has reached a stage where mounting cameras to capture video imagery is cheap, but finding available human resources to sit and watch that imagery is expensive. Surveillance cameras are already prevalent in commercial establishments, with camera output being recorded to tapes that are either rewritten periodically or stored in video archives. After a crime occurs – a store is robbed or a car is stolen – investigators can go back and review the recordings, but by then it is usually too late. What is needed is continuous 24-hour monitoring and analysis of video surveillance data that alerts security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while there is still time to prevent the crime.

Keeping track of people, vehicles, and their interactions in an urban or battlefield environment is a difficult task. The role of VSAM video understanding technology in achieving this goal is to
automatically “parse” people and vehicles from raw video, determine their geolocations, and insert them into a dynamic scene visualization. We have developed robust routines for detecting and tracking moving objects. Detected objects are classified into semantic categories such as human, human group, car, and truck using shape and color analysis, and these labels are used to improve tracking through temporal consistency constraints. Further classification of human activity, such as walking and running, has also been achieved. Geolocations of labeled entities are determined from their image coordinates using either wide-baseline stereo from two or more overlapping camera views, or the intersection of viewing rays with a terrain model from monocular views. These computed locations feed into a higher-level tracking module that tasks multiple sensors with variable pan, tilt, and zoom to cooperatively and continuously track an object through the scene. All object hypotheses from all sensors are relayed as symbolic data packets back to a central operator control unit, where they are displayed on a graphical user interface that gives a broad overview of scene activities. These technologies have been demonstrated through a series of yearly demonstrations, using a testbed system developed on the urban campus of CMU.

Detection of moving objects in video streams is known to be a significant, and difficult, research problem. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient since only “moving” pixels need be considered.

There are three conventional approaches to moving object detection: temporal differencing, background subtraction, and optical flow. Temporal differencing is very adaptive to dynamic environments, but generally does a poor job of extracting all relevant feature pixels. Background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex and cannot be applied to full-frame video streams in real time without specialized hardware.
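To make the trade-off above concrete, here is a toy sketch of the first two approaches (optical flow is omitted). It is not the VSAM implementation: the "frames" are flat lists of grayscale intensities, and the threshold value is an assumed illustrative parameter.

```python
# Toy sketch of temporal differencing vs. background subtraction.
# "Frames" are flat lists of grayscale intensities; the threshold is
# an assumed illustrative parameter, not a value from the VSAM system.

T = 20  # assumed motion threshold, in intensity units

def temporal_difference(prev, curr, thresh=T):
    """Flag pixels whose intensity changed between consecutive frames."""
    return [abs(c - p) > thresh for p, c in zip(prev, curr)]

def background_subtract(background, curr, thresh=T):
    """Flag pixels that differ from a reference background model."""
    return [abs(c - b) > thresh for b, c in zip(background, curr)]

background = [10, 10, 10, 10, 10, 10]  # empty scene
frame1     = [10, 10, 90, 90, 10, 10]  # object covering pixels 2-3
frame2     = [10, 10, 90, 90, 10, 10]  # same object, now stationary

# Temporal differencing loses the object once it stops moving...
print(temporal_difference(frame1, frame2))
# → [False, False, False, False, False, False]

# ...while background subtraction still recovers the full silhouette,
# but would equally flag any lasting lighting change as "motion".
print(background_subtract(background, frame2))
# → [False, False, True, True, False, False]
```

The example shows exactly the failure modes named above: differencing misses stationary feature pixels, while subtraction extracts complete regions but cannot distinguish a stopped object from a changed background.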
CMU has developed and implemented three methods for moving object detection on the VSAM testbed. The first is a combination of adaptive background subtraction and three-frame differencing. This hybrid algorithm is very fast, and surprisingly effective – indeed, it is the primary algorithm used by the majority of the SPUs in the VSAM system. In addition, two new prototype algorithms have been developed to address shortcomings of this standard approach. First, a mechanism for maintaining temporal object layers has been developed to allow greater disambiguation of moving objects that stop for a while, are occluded by other objects, and then resume motion. One limitation that affects both this method and the standard algorithm is that they only work for static cameras, or for pan-tilt cameras operated in a “step-and-stare” mode. To overcome this limitation, a second extension has been developed to allow background subtraction from a continuously panning and tilting camera. Through clever accumulation of image evidence, this algorithm can be implemented in real time on a conventional PC platform. A fourth approach to detecting moving objects from a moving airborne platform has also been developed, under a subcontract to the Sarnoff Corporation. This approach is based on image stabilization using special video processing hardware.

The current VSAM IFD testbed system and suite of video understanding technologies are the end result of a three-year, evolutionary process. Impetus for this evolution was provided by a series of yearly demonstrations. The following tables provide a succinct synopsis of the progress made during the last three years in the areas of video understanding technology, VSAM testbed architecture, sensor control algorithms, and degree of user interaction. Although the program is over now, the VSAM IFD testbed continues to provide a valuable resource for the development and testing of new video understanding capabilities. Future work will be directed toward achieving the following goals:

- better understanding of human motion, including segmentation and tracking of
articulated body parts;
- improved data logging and retrieval mechanisms to support 24/7 system operations;
- bootstrapping functional site models through passive observation of scene activities;
- better detection and classification of multi-agent events and activities;
- better camera control to enable smooth object tracking at high zoom; and
- acquisition and selection of “best views”, with the eventual goal of recognizing individuals in the scene.
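Returning to the hybrid detector described earlier – three-frame differencing to flag pixels that changed against both of the two previous frames, plus an adaptive background model, updated only at non-moving pixels, to fill in the full object region – one step of the scheme can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the testbed code: the frames, the adaptation rate `ALPHA`, and the threshold `T` are made-up values.

```python
# Illustrative reconstruction of the hybrid detection scheme:
# three-frame differencing + adaptive background subtraction.
# ALPHA and T are assumed values, not the testbed's tuned parameters.

ALPHA = 0.9  # assumed background adaptation rate
T = 20       # assumed intensity threshold

def hybrid_step(f_prev2, f_prev1, f_curr, background):
    """One per-frame step: returns (motion mask, updated background)."""
    # Three-frame differencing: a pixel is moving only if it differs
    # from BOTH previous frames, which suppresses the "ghost" left at
    # an object's old position by a simple two-frame difference.
    moving = [
        abs(c - p1) > T and abs(c - p2) > T
        for p2, p1, c in zip(f_prev2, f_prev1, f_curr)
    ]
    # Background subtraction extends the mask to the full object extent.
    mask = [m or abs(c - b) > T for m, c, b in zip(moving, f_curr, background)]
    # Adapt the background only where nothing moved, so the model
    # does not absorb the object itself.
    new_bg = [
        b if m else ALPHA * b + (1 - ALPHA) * c
        for m, b, c in zip(mask, background, f_curr)
    ]
    return mask, new_bg

f0 = [10, 10, 10, 10, 10, 10]  # empty scene
f1 = [10, 90, 90, 10, 10, 10]  # object at pixels 1-2
f2 = [10, 10, 90, 90, 10, 10]  # object has moved right, to pixels 2-3
mask, bg = hybrid_step(f0, f1, f2, [10.0] * 6)
print(mask)  # → [False, False, True, True, False, False]
```

In the example, differencing alone flags only the leading edge (pixel 3); the background model recovers the rest of the object (pixel 2), which is consistent with the text's claim that the combination is both fast and surprisingly effective.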