
Graduation Project Foreign Literature Translation

Research on a Multi-focus Image Fusion Algorithm Based on the Curvelet Transform

Abstract: Owing to the limited depth of focus of optical lenses, it is often difficult to obtain an image in which all the relevant objects are in focus. Multi-focus image fusion algorithms can solve this problem effectively. Based on an analysis of the most widely used multi-focus image fusion algorithms, this paper proposes a multi-focus image fusion algorithm based on the Curvelet transform. According to the different frequency regions produced by the Curvelet decomposition, the selection rules for the low-frequency and the high-frequency coefficients are discussed separately. In this paper the low-frequency coefficients are fused with NGMS (neighborhood gradient maximum selection) and the high-frequency coefficients with LREMS (local region energy maximum selection). The results show that the proposed algorithm can obtain an all-in-focus fused image and has a clear advantage over the other algorithms in both objective and subjective evaluation.

Keywords: Curvelet transform; multi-focus image; fusion algorithm

1. Introduction

Nowadays image fusion is widely applied in military, remote sensing, medical and computer vision fields. The main objective of image fusion is to combine information from two or more source images of the same scene so as to obtain one image containing the complete information. For example, the main problem with inexpensive cameras is that we cannot keep objects at different distances in focus at the same time in a single image. A multi-focus image fusion method is therefore needed to obtain a sharper, fully focused image.

Classical fusion algorithms include computing the average pixel gray-level value of the source images, the Laplacian pyramid, the contrast pyramid, the ratio pyramid and the discrete wavelet transform (DWT). However, averaging the pixel gray levels of the source images leads to undesirable side effects such as contrast reduction. The basic idea of the DWT-based methods is to decompose each source image, combine all the decompositions into a composite representation, and then recover the fused image through the inverse transform. This approach is clearly effective. However, the wavelet transform can only indicate the location of an edge; it cannot express the characteristics of the edge, and because its basis is isotropic it cannot represent edge direction. Because of these limitations of the wavelet transform, Donoho et al. proposed the Curvelet transform, which takes edges as its basic elements, is comparatively mature, and adapts well to image features. Moreover, the Curvelet transform is anisotropic and has better directionality, so it can provide more information for image processing.

From the principle of the Curvelet transform we know that, in addition to the multi-scale and locality properties of the wavelet transform, it also has directionality, and the support of its basis satisfies an anisotropic scaling relation. The Curvelet transform can represent image edges and smooth regions appropriately at the same inverse-transform precision. Following a study of fusion rules for the low-band and high-band Curvelet coefficients, this paper proposes the idea of using the NGMS method for the low-band coefficients and the LREMS method for the directional high-band coefficients.

2. The Second-Generation Curvelet Transform

Unlike the first generation, the second-generation Curvelet transform does not import the Ridgelet transform into its implementation; it directly gives a concrete expression for the Curvelet basis, and can therefore be regarded as the Curvelet transform in the true sense. The fundamental literature also gives it a fast discrete implementation algorithm.

The window function in the frequency domain is defined as:

(1)  U_j(r, θ) = 2^(-3j/4) · W(2^(-j) r) · V(2^(⌊j/2⌋) θ / (2π))

where W(r) and V(t) are one-dimensional window functions; V(t) is a real-valued function defined on [-1, 1]; they satisfy the admissibility conditions sum_j W^2(2^j r) = 1 and sum_l V^2(t - l) = 1, so the support of U_j in the frequency domain is close to a wedge-shaped region.

The two-dimensional Curvelet function is defined as:

(2)  φ_{j,l,k}(x) = φ_j( R_{θ_l} (x - x_k^{(j,l)}) )

with frequency scale 2^(-j), direction angle θ_l = 2π · 2^(-⌊j/2⌋) · l and position x_k^{(j,l)} = R_{θ_l}^{-1}( k_1 · 2^(-j), k_2 · 2^(-j/2) ), where R_θ denotes rotation by the angle θ.

At the "coarse" scale (the low-frequency domain), at position k the Curvelet function is defined as:

(3)  φ_{j_0,k}(x) = φ_{j_0}( x - 2^(-j_0) k )

Comparing formula (2) with formula (3) shows that the "coarse"-scale Curvelet function, unlike the Curvelet functions at the other scales, does not introduce a direction parameter. So the Curvelet transform is close to the wavelet transform in the low-frequency region, but at scale j the Curvelet transform divides the frequency plane into wedge-shaped regions at equal slope intervals.

The Curvelet transform of a two-dimensional continuous function is defined as:

(4)  c(j, l, k) = <f, φ_{j,l,k}> = ∫_{R^2} f(x) · conj(φ_{j,l,k}(x)) dx

and in the discrete case the Curvelet transform is defined as:

(5)  c^D(j, l, k) = Σ_{0 <= t_1, t_2 < n} f[t_1, t_2] · conj(φ^D_{j,l,k}[t_1, t_2])

where φ^D_{j,l,k} is the discrete Curvelet waveform, supported on the wedge determined by the window function, and f̂[n_1, n_2] is the two-dimensional discrete FFT of the signal.

Formula (5) shows that the image is decomposed by the fast discrete Curvelet transform using the FFT method:
(1) apply the two-dimensional FFT to the image to obtain the sequence f̂[n_1, n_2];
(2) for each scale j and direction l, resample (interpolate) f̂ to obtain the sheared sequence f̂[n_1, n_2 - n_1 tan θ_l];
(3) multiply the resampled sequence by the window function Ũ_j to obtain a new sequence, and apply the inverse two-dimensional FFT; this yields the discrete Curvelet transform coefficients at scale j, direction l and position k.

3. Research on the Image Fusion Algorithm

The image fusion process based on the Curvelet transform is: the two source images A and B are decomposed by the Curvelet transform; different fusion rules are then used to select among the Curvelet coefficients in the different bands, giving the Curvelet coefficients of the fused image; finally the inverse Curvelet transform yields the fused image, as shown in Figure 1.

Figure 1. Process of the image fusion algorithm based on the Curvelet transform (inputs: the Curvelet transform coefficients of image A and of image B)

A. Low-frequency coefficient fusion algorithm

In the low-frequency region the Curvelet transform is close to the wavelet transform; the low-frequency component contains the main energy of the image and determines its contour, so the visual quality of the fused image can be improved by correctly selecting the low-frequency coefficients. Existing fusion rules mainly include the maximum pixel method, the minimum pixel method, computing the average pixel gray-level value of the source images, the LREMS method and the local region deviation method [6].

The maximum pixel, minimum pixel and gray-level averaging methods do not take the local neighborhood correlation into account, so they do not give good fusion results. The local region energy method and the deviation method do consider the local neighborhood correlation, but they do not consider image edges and sharpness.

Considering these shortcomings, this paper proposes the NGMS method, which mainly describes the image detail information and the degree of focus. The sum of the modified Laplacian over the eight-neighborhood is used to evaluate image sharpness; it is defined as [9]:

(6)

B. High-frequency coefficient fusion algorithm

The Curvelet transform has rich directional characteristics, so it can precisely express the orientation of image edges, and the high-frequency coefficient regions express the edge-detail information of the image.

The pixel absolute-maximum method, the LREMS method, the local region deviation method, the directional contrast method and others have all been applied to the high-frequency coefficients. Because of the characteristics of the Curvelet transform, the LREMS method is adopted in this paper. Suppose the high-frequency coefficients of an image are CH; then the fusion rule is:

(7)  CHF(x, y) = CHA(x, y), if ECHA(x, y) >= ECHB(x, y); otherwise CHF(x, y) = CHB(x, y)

where CHA and CHB denote the Curvelet high-frequency coefficients of image A and image B, CHF(x, y) denotes the fused high-frequency coefficient at point (x, y), and ECHA(x, y) and ECHB(x, y) denote the local region energies of the Curvelet high-frequency coefficients of image A and image B at point (x, y).

4. Experimental Results and Analysis

In order to verify the correctness and validity of the algorithm, multi-focus images were used in the experiments shown in Figure 2 and Figure 3.

Figures (a) and (b) are the source images of Figure 2; figure (c) is the fusion result obtained with the wavelet transform; figures (d), (e) and (f) were all decomposed with the Curvelet transform but use different fusion rules. In figure (d), gray-level averaging of the source images is used in the low-band region and the LREMS method in the high-band region. In figure (e), which is the method of this paper, the NGMS method is used in the low-band region and the LREMS method in the high-band region. In figure (f), the LREMS method is used in both the low-band and the high-band regions. The same applies to Figure 3: figures (a) and (b) are the source images, figure (c) is the wavelet-transform result, and figures (d), (e) and (f) use the Curvelet transform with the rules described above.

The evaluation of fused images is still an unsolved problem; subjective visual effect and objective quantitative analysis are both used to assess the quality of the fused image. In terms of visual effect, both the Curvelet transform and the wavelet transform can obtain an all-in-focus image, but the blurred regions are handled better by the Curvelet transform than by the wavelet transform. Among the Curvelet-based methods, the local-region rules applied to the low-band and high-band regions outperform the others, and the algorithm proposed in this paper recovers the detailed texture of the in-focus regions and removes the blur.

Objective measures include entropy, cross entropy, average gradient, standard deviation and other indices; entropy, cross entropy and average gradient are used in this paper. The objective fusion results are shown in Table 1 and Table 2.

Figure 2. Experiment on multi-focus image fusion
Figure 3. Experiment on multi-focus image fusion
Table 1. Objective comparison of the multi-focus image fusion experiment of Figure 2
Table 2. Objective comparison of the multi-focus image fusion experiment of Figure 3

A: image fusion based on the wavelet transform.
B: gray-level averaging of the source images in the low-band region and the LREMS method in the high-band region (decomposed by the Curvelet transform).
C: the LREMS method in both the low-band and the high-band regions (decomposed by the Curvelet transform).
D: the NGMS method in the low-band region and the LREMS method in the high-band region (decomposed by the Curvelet transform; the method of this paper).

From the results obtained by our method we can conclude that it is superior to the wavelet transform method and to the other Curvelet-based methods in both objective and visual evaluation.

5. Conclusion

In this paper we presented a Curvelet-transform-based fusion algorithm in which the NGMS method is applied to the low-band coefficients and the LREMS method to the high-band coefficients. It has an advantage over the DWT and over the other fusion rules based on the Curvelet transform. The proposed approach therefore yields an effective method for multi-focus image fusion.

Original Text

Multi-focus image fusion algorithms research based on Curvelet transform

Qiang Fu, Fenghua Ren, Legeng Chen, Zhexin Xiao
Guilin University of Electronic Technology
fqrfh@guet.edu.cn

Abstract: Due to the limited depth-of-focus of optical lenses, it is often difficult to get an image that contains all relevant objects in focus. Multi-focus image fusion algorithms can solve this problem effectively. Based on the analysis of the most widely used multi-focus image fusion algorithms, a new Curvelet-transform-based multi-focus image fusion algorithm is proposed in this paper. According to the different frequency areas decomposed by the Curvelet transform, the selection rules of the low-frequency and high-frequency coefficients are discussed separately: the low-frequency coefficients are fused with the NGMS method and the high-frequency coefficients with the LREMS method. The results show that the proposed algorithm can obtain an all-in-focus fused image and has clear advantages over the other algorithms in both objective and subjective evaluation.

Keywords: Curvelet transform; multi-focus image; fusion algorithm

Ⅰ. Introduction

Nowadays, image fusion is broadly applied in military, remote sensing, medicine, computer vision and other fields. The main objective of image fusion is to combine information from two or more source images of the same scene to obtain an image with complete information. For example, the main problem of inexpensive cameras is that we cannot keep every object at a different distance in focus in the same scene. In this case, a multi-focus image fusion method is needed to get in-focus, sharp images [1-2].

Classical fusion algorithms include computing the average pixel-by-pixel gray-level value of the source images, the Laplacian pyramid, the contrast pyramid, the ratio pyramid, and the discrete wavelet transform (DWT). However, computing the average pixel gray-level value of the source images leads to undesirable side effects such as contrast reduction. The basic idea of the DWT-based methods is to perform decompositions on each source image, and then combine all these decompositions to obtain a composite representation, from which the fused image can be recovered by the inverse transform. This method is clearly effective, but the wavelet transform can only indicate the location of edges and cannot express their characteristics; since its basis is isotropic, it also cannot represent edge direction. Because of these limitations, Donoho et al. proposed the Curvelet transform, which takes edges as basic elements, is comparatively mature, and adapts to image features; moreover, it is anisotropic, has better directionality, and can provide more information for image processing.

Through the principle of the Curvelet transform we know that, in addition to the multi-scale and local characteristics of the wavelet transform, the Curvelet transform has a direction characteristic, and the support of its basis satisfies the anisotropic scaling relation. The Curvelet transform can appropriately represent the edges of an image and its smooth areas at the same inverse-transform precision. Following the study of fusion rules for the low-band and high-band coefficients, this paper proposes adopting the NGMS method for the low-band coefficients and the LREMS method for the different-direction high-band coefficients.

Ⅱ. The Second-Generation Curvelet Transform [8, 10-11]

The second-generation Curvelet transform differs from the first generation in that it does not import the Ridgelet transform into the implementation process, but directly presents a concrete expression for the Curvelet basis; it can be said to be the Curvelet transform in the true sense. The fundamental literature also gives it a fast discrete implementation algorithm. The window function in the frequency region is defined as:

(1)  U_j(r, θ) = 2^(-3j/4) · W(2^(-j) r) · V(2^(⌊j/2⌋) θ / (2π))

where W(r) and V(t) are one-dimensional window functions; V(t) is a real-valued function defined on [-1, 1]; they satisfy the admissibility conditions sum_j W^2(2^j r) = 1 and sum_l V^2(t - l) = 1, so the support of U_j in the frequency region is close to a wedge-shaped area.

The two-dimensional Curvelet function is defined as:

(2)  φ_{j,l,k}(x) = φ_j( R_{θ_l} (x - x_k^{(j,l)}) )

with frequency scale 2^(-j), direction angle θ_l = 2π · 2^(-⌊j/2⌋) · l and position x_k^{(j,l)} = R_{θ_l}^{-1}( k_1 · 2^(-j), k_2 · 2^(-j/2) ), where R_θ denotes rotation by the angle θ.

At the "coarse" scale (the low-frequency domain), at position k the Curvelet function is defined as:

(3)  φ_{j_0,k}(x) = φ_{j_0}( x - 2^(-j_0) k )

Comparing formula (2) and formula (3), we can see that the "coarse"-scale Curvelet function, relative to the Curvelet functions at the other scales, does not introduce a direction parameter. So the Curvelet transform is close to the wavelet transform in the low-band region, but at scale j the Curvelet transform carves the frequency plane up into wedge-shaped areas at equal slope intervals.

The Curvelet transform of a two-dimensional continuous function is defined as formula (4):

(4)  c(j, l, k) = <f, φ_{j,l,k}> = ∫_{R^2} f(x) · conj(φ_{j,l,k}(x)) dx

and the Curvelet transform in the discrete instance is defined as formula (5):

(5)  c^D(j, l, k) = Σ_{0 <= t_1, t_2 < n} f[t_1, t_2] · conj(φ^D_{j,l,k}[t_1, t_2])

where φ^D_{j,l,k} is the discrete Curvelet waveform, supported on the wedge determined by the window function, and f̂[n_1, n_2] is the two-dimensional discrete FFT of the signal.

Formula (5) shows that the image is decomposed by the fast discrete Curvelet transform with the FFT method:
(1) apply the two-dimensional FFT to the image to get the sequence f̂[n_1, n_2];
(2) for each scale j and direction l, resample (interpolate) f̂ to receive the sheared sequence f̂[n_1, n_2 - n_1 tan θ_l];
(3) multiply the resampled sequence by the window function Ũ_j to receive a new sequence, and apply the inverse two-dimensional FFT; accordingly we get the discrete Curvelet transform coefficients at scale j, direction l and position k.

Ⅲ. Research on the Image Fusion Algorithm

The process of the image fusion algorithm based on the Curvelet transform is: the two source images A and B are decomposed by the Curvelet transform; then different fusion rules are adopted to select among the different Curvelet transform coefficients, giving the Curvelet transform coefficients of the fused image; at last the inverse Curvelet transform is applied to get the fused image, as Figure 1 shows.

Figure 1. Process of the image fusion algorithm based on the Curvelet transform (inputs: the Curvelet transform coefficients of image A and of image B)

A. Low-frequency coefficient fusion algorithm

The Curvelet transform is close to the wavelet transform in the low-frequency region; the image component there contains the main energy and decides the image contour, so the visual effect of the fused image can be enhanced by correctly selecting the low-frequency coefficients. Existing fusion rules mainly include the max pixel method, the min pixel method, computing the average pixel gray-level value of the source images, the LREMS method, and the local region deviation method [6]. The max pixel method, the min pixel method and gray-level averaging do not take the local neighborhood correlation into account, so the fusion results are not good. The local region energy and deviation methods consider the local neighborhood correlation properly, but do not consider image edges and sharpness. Considering these defects, this paper proposes the NGMS method, which mainly describes the image detail information and the degree of focus. The sum of the modified Laplacian over the eight-neighborhood is used to evaluate image sharpness; it is defined as [9]:

(6)

B. High-frequency coefficient fusion algorithm

The Curvelet transform has rich directional characteristics, so it can precisely express the orientation of image edges, and the high-frequency coefficient regions express the edge-detail information of the image. The pixel absolute-maximum method, the LREMS method, the local region deviation method, the directional contrast method, etc., have been used for the high-frequency coefficients. The LREMS method is adopted in this paper based on the characteristics of the Curvelet transform. Suppose the image high-frequency coefficient is CH; then the fusion algorithm is:

(7)  CHF(x, y) = CHA(x, y), if ECHA(x, y) >= ECHB(x, y); otherwise CHF(x, y) = CHB(x, y)

where CHA and CHB express the Curvelet transform high-frequency coefficients of image A and image B, CHF(x, y) is the fused high-frequency coefficient at point (x, y), and ECHA(x, y) and ECHB(x, y) are the local region energies of the Curvelet transform high-frequency coefficients of image A and image B at point (x, y).

Ⅳ. Experimental Results and Analysis

In order to validate the correctness and validity of the algorithm, multi-focus images were used in the experiments of this paper, shown in Figure 2 and Figure 3.

Figures (a) and (b) are the source images in Figure 2; figure (c) is the result of using the wavelet transform to fuse the images; figures (d), (e) and (f) were all decomposed by the Curvelet transform, but their fusion rules are different: computing the average pixel gray-level value of the source images in the low-band area and the LREMS method in the high-band area are adopted in figure (d); the NGMS method in the low-band area and the LREMS method in the high-band area are adopted in figure (e) (this paper's method); the LREMS method in both the low-band and the high-band areas is adopted in figure (f). The same holds for Figure 3.

The evaluation of fused images is still an unsolved problem. Subjective visual effect and objective quantitative analysis are used to evaluate the quality of the fused image. With respect to subjective visual effect, both the Curvelet transform and the wavelet transform can get an all-in-focus panorama, but the fused image using the Curvelet transform is better than that using the wavelet transform; among the Curvelet-based methods, the local-region rules applied in the low-band and high-band areas are better than the other methods, and the method proposed in this paper obtains the detailed texture of the in-focus regions with the blur removed.

Objective measures include entropy, cross entropy, average gradient, standard deviation and other indices; entropy, cross entropy and average gradient are used in this paper. The image fusion results for these measures are shown in Table 1 and Table 2.

Figure 2. Experiment of multi-focus image fusion
Figure 3. Experiment of multi-focus image fusion
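The three objective measures reported in Table 1 and Table 2 (entropy, cross entropy and average gradient) are standard. The following is a minimal NumPy sketch, not the authors' code; the whole-image, window-free formulations and the 8-bit grayscale assumption are assumptions of this sketch:

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

def cross_entropy(ref, fused):
    """Cross entropy between the gray-level distributions of two images."""
    eps = 1e-12                        # guard against log2(0)
    p = np.bincount(ref.ravel(), minlength=256) / ref.size
    q = np.bincount(fused.ravel(), minlength=256) / fused.size
    mask = p > 0
    return float(-(p[mask] * np.log2(q[mask] + eps)).sum())

def average_gradient(img):
    """Mean local gradient magnitude, a common sharpness proxy."""
    f = img.astype(float)
    gx = f[:-1, 1:] - f[:-1, :-1]      # horizontal differences
    gy = f[1:, :-1] - f[:-1, :-1]      # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

Under these measures, a well-fused image should show higher entropy and average gradient, and lower cross entropy against the source images.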

Table 1. Multi-focus image fusion experiment of Figure 2: comparison of objective measures
Table 2. Multi-focus image fusion experiment of Figure 3: comparison of objective measures

A: image fusion based on the wavelet transform.
B: computing the average pixel gray-level value of the source images in the low-band area and the LREMS method in the high-band area (decomposed by the Curvelet transform).
C: the LREMS method in both the low-band and the high-band areas (decomposed by the Curvelet transform).
D: the NGMS method in the low-band area and the LREMS method in the high-band area (decomposed by the Curvelet transform; this paper's method).

From the above we can conclude that the results obtained by our method are superior to the wavelet transform method and to the other methods based on the Curvelet transform in both objective and visual evaluations.

Ⅴ. Conclusion

In this paper we presented an algorithm, based on the Curvelet transform, in which the NGMS method is used for the low-band coefficients and the LREMS method for the high-band coefficients. It has an advantage over the DWT and the other fusion rules based on the Curvelet transform. Therefore, our proposed approach leads to an effective method for multi-focus image fusion.

References

[1] Pajares G, Cruz J M, "A wavelet-based image fusion tutorial", Pattern Recognition, 2004, 37: 1855-1872.
[2] Donoho D L, Duncan M R, "Digital Curvelet transform: strategy, implementation and experiments", SPIE, 2000, 4056: 12-29.
[3] Starck J L, Candès E J, Donoho D L, "The Curvelet transform for image denoising", IEEE Transactions on Image Processing, 2002, 11(6): 670-684.
[4] Pusit Borwonwatanadelok, Wirat Rattanapitak, Somkait Udombunsakul, "Multi-focus image fusion based on stationary wavelet transform and extended spatial frequency measurement", IEEE, 2009, 77-81.
[5] Yinghua Lu, Xue Feng, Jingbo Zhang, Rujuan Wang, Kaiyuan Zheng, Jun Kong, "A multi-focus image fusion based on wavelet and region detection", IEEE, 2007, 9, 294-298.
[6] Jun Yang, Zhongming Zhao, "Multi-focus image fusion method based on curvelet transform", Opto-Electronic Engineering, 2007, 6(34): 67-71.
[7] Qiang Zhang, Baolong Guo, "Image fusion algorithm using curvelet transform", Journal of Jilin University (Engineering and Technology Edition), 2007, 3(37): 458-463.
[8] Qiang Zhang, Baolong Zhang, "Fusion of remote sensing images based on the second generation curvelet transform", Optics and Precision Engineering, 2007, 7(15): 1131-1135.
[9] Wei Sun, Ke Wang, Guoliang Yuan, Nan Wang, "A multi-focus image fusion algorithm in the complex wavelet domain", Journal of Image and Graphics, 2008, 5(13): 951-957.
[10] Candès E J, Donoho D L, "New tight frames of curvelets and optimal representations of objects with piecewise-C2 singularities", Comm. on Pure and Appl. Math., 2004, 57: 219-266.
[11] Candès E J, Donoho D L, Ying L, "Fast discrete curvelet transforms" [R]. California: Applied and Computational Mathematics.
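The two fusion rules of Section Ⅲ select, point by point, the coefficient from whichever source is locally "stronger". The sketch below illustrates them with NumPy on generic coefficient arrays from any two-band decomposition; it is not the authors' implementation, and the 3x3 window size and the sum-of-modified-Laplacian form used for the NGMS sharpness measure of eq. (6) are assumptions of this sketch:

```python
import numpy as np

def box_sum(a, win=3):
    """Sum of values over a win x win window around each pixel (edge-padded)."""
    pad = win // 2
    p = np.pad(a.astype(float), pad, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def lrems_fuse(cha, chb, win=3):
    """LREMS rule (eq. 7): keep the coefficient with the larger local region energy."""
    ea = box_sum(cha.astype(float) ** 2, win)
    eb = box_sum(chb.astype(float) ** 2, win)
    return np.where(ea >= eb, cha, chb)

def sml(img, win=3):
    """Windowed sum of the modified Laplacian: a sharpness measure (assumed NGMS form)."""
    f = np.pad(img.astype(float), 1, mode="edge")
    ml = (np.abs(2 * f[1:-1, 1:-1] - f[1:-1, :-2] - f[1:-1, 2:])
          + np.abs(2 * f[1:-1, 1:-1] - f[:-2, 1:-1] - f[2:, 1:-1]))
    return box_sum(ml, win)

def ngms_fuse(cla, clb, win=3):
    """NGMS-style low-band rule: keep the coefficient from the sharper region."""
    return np.where(sml(cla, win) >= sml(clb, win), cla, clb)
```

Given a forward multiscale decomposition of both source images, applying `ngms_fuse` to the low-band arrays and `lrems_fuse` to each high-band array, followed by the inverse transform, reproduces the overall pipeline of Figure 1 under these assumptions.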
