Graduation Design (Thesis) Translation of Foreign Literature

School: School of Automation Engineering
Major: □ Automation  □ Measurement & Control Technology and Instruments
Name:
Student ID:
Attachments: 1. Translated text; 2. Original text.

Attachment 1: Translated Text

An Improved Speech Recognition Method for Intelligent Robots

2. Overview of speech recognition

Speech recognition has recently received more and more attention because of its important theoretical meaning and practical value. Up to now, most speech recognition has been based on conventional linear system theory, such as the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). With deeper study of speech recognition, it has been found that the speech signal is a complex nonlinear process; if research on speech recognition is to break through, nonlinear system theory must be introduced. Recently, with the development of nonlinear system theories such as artificial neural networks (ANN), chaos and fractals, it has become possible to apply these theories to speech recognition. The study in this paper is therefore based on ANN, with chaos and fractal theory introduced into the speech recognition process.

Speech recognition can be divided into two modes: speaker dependent and speaker independent. Speaker dependent means the pronunciation model is trained by a single person; the recognition rate for that person's commands is high, while other people's commands are recognized poorly or not at all. Speaker independent means the pronunciation model is trained by people of different ages, sexes and regions, so it can recognize the commands of a group of people. Generally, speaker independent systems are more widely used because users need no training. In a speaker independent system, extracting speech features from the speech signal is therefore a fundamental problem of the recognition system.

Speech recognition, which includes training and recognition, can be viewed as a pattern recognition task. Generally, the speech signal can be viewed as a time sequence characterized by a hidden Markov model. Through feature extraction, the speech signal is transformed into feature vectors that serve as observations. In the training procedure, these observations are fed into the estimation of the HMM model parameters. These parameters include the probability density functions of the observations in their corresponding states, the transition probabilities between states, and so on. After parameter estimation, the trained model can be applied to the recognition task: an input signal is identified as the corresponding word, and the accuracy can be evaluated. The whole process is shown in Fig. 1.

Fig. 1 Block diagram of the speech recognition system

3. Theory and method

Extracting speaker independent features from the speech signal is a fundamental problem of speech recognition systems. The most popular methods for solving this problem are Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC). Both methods are linear procedures based on the assumption that the speaker's features are caused by vocal tract resonances. These features form the basic spectral structure of the speech signal. However, the nonlinear information in the speech signal is not easily extracted by current linear feature extraction methods, so we use the fractal dimension to measure nonlinear speech fluctuations.

This paper investigates and implements a speech recognition system using both traditional LPCC and nonlinear multiscale fractal dimension feature extraction.

3.1 Linear Predictive Cepstral Coefficients

The linear prediction coefficient (LPC) is a parameter set obtained from linear prediction analysis of speech; it describes correlation characteristics between adjacent speech samples. Linear prediction analysis is based on the concept that a speech sample can be approximated by a linear combination of several past samples; according to the least-squares principle applied to the difference between the real samples in a short analysis frame and the predicted samples, a unique set of prediction coefficients is determined.

LPC coefficients can be used to estimate the cepstrum of the speech signal, a special processing method in short-time cepstral analysis. The system function of the vocal tract model is obtained by linear prediction analysis as

H(z) = 1 / (1 - Σ_{k=1}^{p} a_k z^{-k})   (1)

where p is the linear prediction order and a_k (k = 1, 2, ..., p) are the prediction coefficients. Let h(n) denote the impulse response and c(n) the cepstrum of h(n). Then (1) can be expanded as

ln H(z) = Σ_{n=1}^{∞} c(n) z^{-n}   (2)

Substituting (1) into (2), differentiating both sides with respect to z^{-1} turns (2) into (3), and equating coefficients of like powers of z^{-1} gives equation (4), i.e. the recursion

c(1) = a_1,
c(n) = a_n + Σ_{k=1}^{n-1} (k/n) c(k) a_{n-k},  1 < n ≤ p,
c(n) = Σ_{k=n-p}^{n-1} (k/n) c(k) a_{n-k},  n > p.   (5)

The cepstral coefficients calculated by (5) are called LPCC, where n is the LPCC order.

Before extracting the LPCC parameters, the speech signal should undergo pre-emphasis, framing, windowing and endpoint detection. The endpoint detection of the Chinese command word "Forward" is shown in Fig. 2; the speech waveform and LPCC parameter waveform of the word after endpoint detection are shown in Fig. 3.

Fig. 2 Endpoint detection of the Chinese command word "Forward"

Fig. 3 Speech waveform and LPCC parameter waveform of the Chinese command word "Forward" after endpoint detection

3.2 Speech fractal dimension computation

The fractal dimension is a quantitative value derived from the scaling relation in the fractal sense, and a measure of the self-similarity of a structure; the fractal measure is the fractal dimension [6-7]. From the viewpoint of measurement, the fractal dimension extends dimension from integers to fractions, breaking the restriction that the dimension of a general topological set must be an integer; fractional dimensions are mostly an extension of Euclidean geometric dimension.

There are many definitions of fractal dimension, e.g. similarity dimension, Hausdorff dimension, information dimension, correlation dimension, capacity dimension, box-counting dimension and so on. Among them the Hausdorff dimension is the oldest and most important; it is defined as in [3]:

D_H(F) = lim_{ε→0} ln M_ε(F) / ln(1/ε)

where M_ε(F) denotes how many units of size ε are needed to cover the subset F.

The speech waveform and fractal dimension waveform of the Chinese command word "Forward" after endpoint detection are shown in Fig. 4.

Fig. 4 Speech waveform and fractal dimension waveform of the Chinese command word "Forward" after endpoint detection

3.3 Improved feature extraction method

Considering the respective advantages of LPCC and the fractal dimension in representing the speech signal, we mix the two into the feature signal: the fractal dimension represents the self-similarity, periodicity and randomness of the speech waveform in time, while the LPCC features perform well in speech quality and recognition rate.

Because of the obvious advantages of artificial neural networks (nonlinearity, adaptability and strong self-learning ability), their good classification and input-output mapping abilities make them well suited to the speech recognition problem.

Because the number of input nodes of an ANN is fixed, the feature parameters are time-normalized before being input to the neural network [9]. In our experiments, the LPCC and the fractal dimension of each sample are passed separately through the time normalization network: the LPCC is 4 frames of data (LPCC1, LPCC2, LPCC3, LPCC4, each 14-dimensional), and the fractal dimension is normalized to 12 frames of data (FD1, FD2, ..., FD12, each one-dimensional), so that the feature vector of each sample has 4*14 + 12*1 = 68 dimensions; the first 56 dimensions are LPCC and the remaining 12 are fractal dimensions. Such a feature vector thus represents both the linear and the nonlinear characteristics of the speech signal.

Architectures and features of ASR

Automatic speech recognition (ASR) is a cutting-edge technology that allows a computer, or even a hand-held PDA (Myers, 2000), to identify words that are read aloud or spoken into any sound-recording device. The ultimate goal of ASR technology is 100% accuracy for all words intelligibly spoken by any person, regardless of vocabulary size, background noise or speaker variables (CSLU, 2002). However, most ASR engineers admit that the current accuracy level for a large vocabulary unit of speech is still below 90%. For example, Dragon's NaturallySpeaking or IBM's products report baseline recognition accuracy of only 60% to 80%, depending on accent, background noise and type of speech (Ehsani & Knodt, 1998). More expensive systems that outperform these two include Subarashii (Bernstein, et al., 1999), EduSpeak (Franco, et al., 2001), PhonePass (Hinks, 2001), the ISLE Project (Menzel, et al., 2001) and RAD (CSLU, 2003). ASR accuracy is expected to improve.

Among the several recognition approaches used in ASR products, the Hidden Markov Model (HMM) is considered the dominant algorithm and has proven the most effective for large units of speech (Ehsani & Knodt, 1998). A detailed description of how HMMs work is beyond the scope of this paper but can be found in any text on language processing; among the best are Jurafsky & Martin (2000) and Hosom, Cole, and Fanty (2003). Briefly, an HMM computes the probability that the input signal matches entries in a database containing recordings of hundreds of native phonemes (Hinks, 2003, p. 5). That is, an HMM-based recognizer computes how closely the phonemes of an input utterance approach the corresponding probabilistic model. A high score means good pronunciation and a low score poor pronunciation (Larocca, et al., 1991).

Although speech recognition has commonly been used for purposes such as business dictation and special-needs accessibility, its market share in language learning has increased dramatically in recent years (Aist, 1999; Eskenazi, 1999; Hinks, 2003). Early ASR-based software programs adopted template-based recognition systems that performed pattern matching using dynamic programming or other time normalization techniques (Dalby & Kewley-Port, 1999). These programs include Talk to Me (Auralog, 1995), the Tell Me More Series (Auralog, 2000), Triple-Play Plus (Mackey & Choi, 1998), New Dynamic English (DynEd, 1997), English Discoveries (Edusoft, 1998), and See It, Hear It, SAY IT! (CPI, 1997). Most of these programs provide no feedback on pronunciation accuracy beyond simply indicating which of the user's written dialogue choices is the closest pattern match; learners are not told how accurate their pronunciation is. In particular, Neri et al. (2002) comment on programs such as Talk to Me and Tel

A visual signal lets learners compare their intonation with that of the model speaker.

The accuracy of a learner's pronunciation is usually scored on a seven-point scale (the higher the better).

Words whose pronunciation is distorted are identified and clearly marked.

Attachment 2: Original Text (photocopy)

Improved speech recognition method for intelligent robot
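The LPC-to-cepstrum recursion derived in this section can be sketched directly in code. A minimal sketch, assuming the standard recursion form; the coefficient values used in the usage note are illustrative, not taken from the paper:

```python
def lpc_to_lpcc(a, n_ceps):
    """Convert LPC coefficients a (where a[k-1] = a_k of the model) into
    n_ceps cepstral coefficients via the standard recursion:
    c(1) = a_1; c(n) = a_n + sum_{k=1}^{n-1} (k/n) c(k) a_{n-k} for n <= p;
    c(n) = sum_{k=n-p}^{n-1} (k/n) c(k) a_{n-k} for n > p."""
    p = len(a)
    c = []
    for n in range(1, n_ceps + 1):
        if n <= p:
            cn = a[n - 1] + sum(k / n * c[k - 1] * a[n - k - 1]
                                for k in range(1, n))
        else:
            cn = sum(k / n * c[k - 1] * a[n - k - 1]
                     for k in range(n - p, n))
        c.append(cn)
    return c
```

For example, `lpc_to_lpcc([0.5, 0.2], 4)` extends a second-order prediction model to four cepstral coefficients, the first of which equals a_1 = 0.5 as the recursion requires.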
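To make the self-similarity measurement concrete, the sketch below estimates a box-counting style fractal dimension of a sampled waveform by covering its graph with square grids of decreasing side and fitting the slope of log N(ε) against log(1/ε). The grid sizes and amplitude normalization are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def box_counting_dimension(signal, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 1-D waveform graph."""
    x = np.asarray(signal, dtype=float)
    x = (x - x.min()) / (np.ptp(x) + 1e-12)       # normalise amplitude to [0, 1]
    n = len(x)
    counts, scales = [], []
    for boxes in box_sizes:                        # boxes per axis; eps = 1/boxes
        eps = 1.0 / boxes
        occupied = set()
        for i in range(n - 1):
            col = int((i / (n - 1)) / eps)         # grid column of this sample
            lo, hi = sorted((x[i], x[i + 1]))      # graph spans [lo, hi] here
            for row in range(int(lo / eps), int(hi / eps) + 1):
                occupied.add((min(col, boxes - 1), min(row, boxes - 1)))
        counts.append(len(occupied))
        scales.append(eps)
    # slope of log N(eps) vs log(1/eps) approximates the dimension
    slope, _ = np.polyfit(np.log(1 / np.array(scales)), np.log(counts), 1)
    return slope
```

A smooth curve such as a straight line should yield a value near 1, while a rougher, more self-similar waveform yields a larger fractional value.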
2. Overview of speech recognition

Speech recognition has received more and more attention recently due to its important theoretical meaning and practical value [5]. Up to now, most speech recognition has been based on conventional linear system theory, such as the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). With the deep study of speech recognition, it has been found that the speech signal is a complex nonlinear process. If the study of speech recognition is to break through, nonlinear system theory must be introduced to it. Recently, with the development of nonlinear system theories such as artificial neural networks (ANN), chaos and fractals, it has become possible to apply these theories to speech recognition. Therefore, the study of this paper is based on ANN, and chaos and fractal theories are introduced to process speech recognition.

Speech recognition is divided into two modes: speaker dependent and speaker independent. Speaker dependent refers to a pronunciation model trained by a single person; the identification rate for the training person's orders is high, while others' orders are identified at a low rate or cannot be recognized. Speaker independent refers to a pronunciation model trained by persons of different age, sex and region; it can identify a group of persons' orders. Generally, speaker independent systems are more widely used, since users need no training.

Speech recognition can be viewed as a pattern recognition task, which includes training and recognition. Generally, the speech signal can be viewed as a time sequence and characterized by the powerful hidden Markov model (HMM). Through feature extraction, the speech signal is transferred into feature vectors that act as observations. In the training procedure, these observations are fed into the estimation of the model parameters of the HMM. These parameters include the probability density functions for the observations in their corresponding states, the transition probabilities between states, and so on. After parameter estimation, the trained model can be applied to the recognition task. The whole process is shown in Fig. 1.

Fig. 1 Block diagram of speech recognition system

3 Theory and method

Extraction of speaker independent features from the speech signal is the fundamental problem of the speaker recognition system.
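Before any such features can be extracted, the raw signal has to pass through the front end described later in this section (pre-emphasis, framing, windowing and endpoint detection). A minimal sketch of that kind of front end follows; the frame length, hop, pre-emphasis factor and energy threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def preprocess(signal, frame_len=256, hop=128, alpha=0.95, energy_ratio=0.02):
    """Toy speech front end: pre-emphasis, framing, Hamming windowing and a
    simple short-time-energy endpoint detector."""
    x = np.asarray(signal, dtype=float)
    x = np.append(x[0], x[1:] - alpha * x[:-1])           # pre-emphasis filter
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])
    frames *= np.hamming(frame_len)                        # taper each frame
    energy = (frames ** 2).sum(axis=1)                     # short-time energy
    voiced = energy > energy_ratio * energy.max()          # crude endpoint mask
    return frames[voiced]
```

Feeding a recording of silence-speech-silence into `preprocess` returns only the frames inside the detected endpoints, which is where LPCC and fractal features would then be computed.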
The standard methodology for solving this problem uses Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC). Both these methods are linear procedures based on the assumption that speaker features have properties caused by the vocal tract resonances. These features form the basic spectral structure of the speech signal. However, the nonlinear information in the speech signal is not easily extracted by current linear feature extraction methods, so we use the fractal dimension to measure the nonlinear speech fluctuations.

This paper investigates and implements a speaker identification system using both traditional LPCC and nonlinear multiscaled fractal dimension feature extraction.

3.1 Linear Predictive Cepstral Coefficients

The linear prediction coefficient (LPC) is a parameter set which is obtained when we do linear prediction analysis of speech. It describes correlation characteristics between adjacent speech samples. Linear prediction analysis is based on the following basic concept: a speech sample can be estimated approximately by the linear combination of some past speech samples. According to the minimal square sum principle applied to the difference between the real speech samples in a certain short-time analysis frame and the predicted samples, a unique set of prediction coefficients is determined.

The LPC coefficients can be used to estimate the speech signal cepstrum. This is a special processing method in the analysis of the speech signal short-time cepstrum. The system function of the channel model is obtained by linear prediction analysis as

H(z) = 1 / (1 - Σ_{k=1}^{p} a_k z^{-k})   (1)

where p represents the linear prediction order and a_k (k = 1, 2, ..., p) represent the prediction coefficients. The impulse response is represented by h(n); suppose the cepstrum of h(n) is c(n). Then (1) can be expanded as

ln H(z) = Σ_{n=1}^{∞} c(n) z^{-n}   (2)

Substituting (1) into (2), differentiating both sides with respect to z^{-1}, and equating coefficients of like powers of z^{-1} gives the recursion

c(1) = a_1,
c(n) = a_n + Σ_{k=1}^{n-1} (k/n) c(k) a_{n-k},  1 < n ≤ p,
c(n) = Σ_{k=n-p}^{n-1} (k/n) c(k) a_{n-k},  n > p.   (5)

The cepstrum coefficients calculated in the way of (5) are called LPCC, where n represents the LPCC order.

Before we extract the LPCC parameters, we should carry out pre-emphasis, framing, windowing and endpoint detection on the speech signal. The endpoint detection of the Chinese command word "Forward" is shown in Fig. 2; the speech waveform of the word and the LPCC parameter waveform after endpoint detection are shown in Fig. 3.

3.2 Speech Fractal Dimension Computation

Fractal dimension is a quantitative value derived from the scale relation in the fractal sense, and also a measure of the self-similarity of a structure; the fractal measure is the fractal dimension [6-7]. From the viewpoint of measuring, the fractal dimension is extended from integer to fraction, breaking the limit that the general topological set dimension must be an integer; the fractal dimension, mostly fractional, is a dimension extension in Euclidean geometry.

There are many definitions of fractal dimension, e.g., similarity dimension, Hausdorff dimension, information dimension, correlation dimension, capability dimension, box-counting dimension, etc. Among them, the Hausdorff dimension is the oldest and also the most important; for any set it is defined as in [3]:

D_H(F) = lim_{ε→0} ln M_ε(F) / ln(1/ε)

where M_ε(F) denotes how many units ε are needed to cover the subset F.

In this paper, the box-counting dimension (DB) of F is obtained by partitioning the plane with square grids of side ε and counting the number of squares N(ε) that intersect the signal graph; it is defined as in [8]:

D_B(F) = lim_{ε→0} log N(ε) / log(1/ε)

The speech waveform of the Chinese command word "Forward" and the fractal dimension waveform after endpoint detection are shown in Fig. 4.

3.3 Improved feature extraction method

Considering the respective advantages of LPCC and the fractal dimension in expressing the speech signal, we mix both into the feature signal: the fractal dimension denotes the self-similarity, periodicity and randomness of the speech time waveform, while the LPCC feature is good for speech quality and yields a high identification rate.

Due to ANN's obvious advantages of nonlinearity, self-adaptability, robustness and self-learning, its good classification and input-output mapping abilities are suitable for resolving the speech recognition problem.

Due to the number of ANN input nodes being fixed, time regularization is carried out on the feature parameters before they are input to the neural network [9]. In our experiments, the LPCC and fractal dimension of each sample need to pass through the network of time regularization separately: the LPCC is 4-frame data (LPCC1, LPCC2, LPCC3, LPCC4, each frame parameter 14-D), and the fractal dimension is regularized to 12-frame data (FD1, FD2, ..., FD12, each frame parameter 1-D), so that the feature vector of each sample has 4*14 + 12*1 = 68 dimensions, of which the first 56 are LPCC and the remaining 12 are fractal dimensions.

Architectures and Features of ASR

ASR is a cutting-edge technology that allows a computer or even a hand-held PDA (Myers, 2000) to identify words that are read aloud or spoken into any sound-recording device. The ultimate purpose of ASR technology is to allow 100% accuracy with all words that are intelligibly spoken by any person regardless of vocabulary size, background noise, or speaker variables (CSLU, 2002). However, most ASR engineers admit that the current accuracy level for a large vocabulary unit of speech (e.g., the sentence) remains below 90%. For example, Dragon's NaturallySpeaking and IBM products report baseline recognition accuracy of only 60% to 80%, depending on accent, background noise and type of speech (Ehsani & Knodt, 1998), while more expensive systems such as Subarashii (Bernstein, et al., 1999), EduSpeak (Franco, et al., 2001), PhonePass (Hinks, 2001), the ISLE Project (Menzel, et al., 2001) and RAD (CSLU, 2003) perform better. ASR accuracy is expected to improve.

Among several types of speech recognizers used in ASR products, both implemented and proposed, the Hidden Markov Model (HMM) is one of the most dominant algorithms and has proven to be an effective method of dealing with large units of speech (Ehsani & Knodt, 1998). Detailed descriptions of how the HMM works go beyond the scope of this paper and can be found in any text concerned with language processing; among the best are Jurafsky & Martin (2000) and Hosom, Cole, and Fanty (2003). Put simply, an HMM-based recognizer computes the probability of a match between the input signal and a database containing recordings of hundreds of native phonemes (Hinks, 2003, p. 5); a high score means good pronunciation and a low score poor pronunciation (Larocca, et al., 1991).

While ASR has been commonly used for such purposes as business dictation and special needs accessibility, its market presence for language learning has increased dramatically in recent years (Aist, 1999; Eskenazi, 1999; Hinks, 2003). Early ASR-based software programs adopted template-based recognition systems which perform pattern matching using dynamic programming or other time normalization techniques (Dalby & Kewley-Port, 1999). These programs include Talk to Me (Auralog, 1995), the Tell Me More Series (Auralog, 2000), Triple-Play Plus (Mackey & Choi, 1998), New Dynamic English (DynEd, 1997), English Discoveries (Edusoft, 1998), and See It, Hear It, SAY IT! (CPI, 1997). Most of these programs offer no feedback on pronunciation accuracy beyond indicating the written dialogue choice that is the closest pattern match to the user's utterance; learners are not told how accurate their pronunciation is. In programs such as Talk to Me and Tell Me More, a visual signal allows learners to compare their intonation to that of the model speaker.
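The HMM match probability described above can be illustrated with the forward algorithm, which sums the likelihood of an observation sequence over all state paths. This is a minimal discrete-emission sketch; the two-state model and all probability values are toy numbers, not parameters from any of the systems cited:

```python
import numpy as np

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) for a discrete-emission HMM via the forward algorithm.
    pi: initial state probabilities, A[i, j]: transition probability from
    state i to j, B[i, k]: probability of emitting symbol k in state i."""
    alpha = pi * B[:, obs[0]]                 # initialise with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # propagate states, then emit
    return alpha.sum()                        # marginalise over final states

# Toy 2-state model (illustrative numbers only).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],                     # state 0 mostly emits symbol 0
              [0.2, 0.8]])                    # state 1 mostly emits symbol 1
print(forward_likelihood([0, 1, 1], pi, A, B))
```

A recognizer compares such likelihoods across its word models and picks the highest; in pronunciation scoring, the same likelihood serves as the quality measure.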