
Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, or pixels; pixel is the term used most widely to denote the elements of a digital image.
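In practice, a grayscale digital image is just such a finite grid of intensity values. A minimal Python sketch (the array values below are made up for illustration):

```python
# A tiny 3x4 "digital image": f(x, y) sampled on a discrete grid.
# Each element (pixel) has a location and a value; here the values
# are gray levels in the 8-bit range 0..255.
image = [
    [12, 40, 40, 200],
    [12, 43, 41, 198],
    [10, 41, 45, 205],
]

# The intensity (gray level) at a pair of coordinates (x, y):
print(image[1][3])   # -> 198

# Even the trivial task of computing the average intensity
# reduces the whole image to a single number.
pixels = [p for row in image for p in row]
print(sum(pixels) / len(pixels))
```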

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images, including ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including understanding, making inferences, and taking actions based on visual input. This area is itself a branch of artificial intelligence, whose objective is to emulate human intelligence; that field is still at an early stage of development, and progress has been much slower than anticipated. The field of image analysis (also called image understanding) lies between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening; a low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects; it is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, high-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call digital image processing in this book encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the automated analysis of text: acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters all fall within the scope of what we call digital image processing here. Making sense of the content of the page may be viewed as being in the domain of image analysis, or even computer vision, depending on the level of complexity implied. Digital image processing, as defined in this way, is employed in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy); synthetic images, used for modeling and visualization, are generated by computer.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in Fig. 1, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The shaded bands convey the fact that the bands of the EM spectrum are not distinct; rather, they transition smoothly from one to the other.

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
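As a concrete (if simplified) illustration of contrast enhancement, the occupied gray-level range can be stretched linearly onto the full 8-bit range; the pixel values below are hypothetical:

```python
# Linear contrast stretching: map the occupied range [lo, hi]
# onto the full 8-bit range [0, 255]. A narrow, washed-out band
# of values is spread out, so differences become easier to see.
pixels = [90, 100, 110, 120, 130]

lo, hi = min(pixels), max(pixels)
stretched = [round((p - lo) * 255 / (hi - lo)) for p in pixels]
print(stretched)   # -> [0, 64, 128, 191, 255]
```

Whether the stretched image "looks better" is, as the text notes, ultimately a subjective judgement.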

Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, by contrast, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is also used in later chapters as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.
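The successive-subdivision idea behind pyramidal representation can be sketched by repeatedly halving the resolution. A real wavelet or Gaussian pyramid applies proper smoothing filters first, so this plain 2x2 averaging is only a toy stand-in:

```python
# One pyramid level: halve the resolution by averaging each
# 2x2 block of pixels (integer division keeps values integral).
def downsample(image):
    return [
        [(image[2*i][2*j] + image[2*i][2*j + 1] +
          image[2*i + 1][2*j] + image[2*i + 1][2*j + 1]) // 4
         for j in range(len(image[0]) // 2)]
        for i in range(len(image) // 2)
    ]

level0 = [
    [10, 10, 20, 20],
    [10, 10, 20, 20],
    [30, 30, 40, 40],
    [30, 30, 40, 40],
]
level1 = downsample(level0)   # -> [[10, 20], [30, 40]]
level2 = downsample(level1)   # -> [[25]]
```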

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.
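A minimal sketch of one such morphological tool, binary dilation with a 3x3 structuring element (the input shape below is hypothetical):

```python
# Binary dilation: a pixel is set in the output if any pixel in its
# 3x3 neighbourhood is set in the input, so shapes grow by one pixel.
# (Erosion, its dual, would require all neighbours to be set.)
def dilate(img):
    rows, cols = len(img), len(img[0])
    return [
        [int(any(img[a][b]
                 for a in range(max(i - 1, 0), min(i + 2, rows))
                 for b in range(max(j - 1, 0), min(j + 2, cols))))
         for j in range(cols)]
        for i in range(rows)
    ]

shape = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(dilate(shape))   # the single pixel grows into a 3x3 block
```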

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward the successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections; regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications the two representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing; a method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or that are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for the recognition of individual objects.

So far we have said nothing about the need for prior knowledge, or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules; this is indicated in Fig. 2 by the double-headed arrows between the processing modules and the knowledge base, whereas single-headed arrows link the processing modules to one another.

Edge detection

Edge detection is a term used in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead, they are normally affected by one or several of the following effects: 1. focal blur caused by a finite depth-of-field and finite point spread function; 2. penumbral blur caused by shadows created by light sources of non-zero radius; 3. shading at a smooth object edge; 4. local specularities or interreflections in the vicinity of object edges.

A typical edge might, for instance, be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.

If one regards an edge as a place where the brightness changes over a certain number of pixels, edge detection essentially amounts to computing a derivative of this change in brightness. To simplify matters, we can first consider edge detection in one dimension, where the data are the intensities of a single line of pixels. To illustrate why edge detection is not a trivial task, consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.

If the intensity difference between the 4th and the 5th pixels were smaller, and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, unless the objects in the scene are particularly simple and the illumination conditions are well controlled, it is not always simple to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there is an edge between them. Indeed, this is one of the reasons why edge detection is not a trivial problem.
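The original figure with the signal values is not reproduced in this copy, so the sketch below uses a hypothetical signal with a jump between the 4th and 5th pixels to make the point concrete:

```python
# A hypothetical one-dimensional signal with a sharp intensity jump
# between the 4th and 5th pixels (1-based), as in the discussion.
signal = [5, 7, 6, 4, 152, 148, 149]

# First-order differences approximate the derivative; a large
# magnitude suggests an edge between two neighbouring pixels.
diffs = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
print(diffs)   # -> [2, -1, -2, 148, -4, 1]

# Any fixed threshold is a judgement call; 50 works here, but for a
# noisier signal with a weaker jump no single value would be "right".
edges = [i for i, d in enumerate(diffs) if abs(d) > 50]
print(edges)   # -> [3], i.e. between the 4th and 5th pixels
```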

There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image, usually the zero crossings of the Laplacian or of a non-linear differential expression. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied.

The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.
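A minimal sketch of the first stage of a search-based detector: estimating gradients with simple central-difference filters (published detectors differ mainly in using other kernels, e.g. Sobel or Prewitt) and combining them into a gradient magnitude:

```python
import math

# Gradient magnitude from central-difference estimates of the
# x- and y-derivatives; border pixels are left at zero.
def gradient_magnitude(img):
    rows, cols = len(img), len(img[0])
    mag = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (img[i][j + 1] - img[i][j - 1]) / 2.0   # x-direction
            gy = (img[i + 1][j] - img[i - 1][j]) / 2.0   # y-direction
            mag[i][j] = math.hypot(gx, gy)
    return mag

# A vertical step edge between dark (10) and bright (200) columns:
img = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
mag = gradient_magnitude(img)
print(mag[1])   # -> [0.0, 95.0, 95.0, 0.0]; large values flag the edge
```

A Sobel kernel would additionally smooth perpendicular to the derivative direction, making the estimate less sensitive to noise.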

Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise and to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.

If edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick, and some type of edge-thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition, and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, rounding off the gradient direction to multiples of 45 degrees, and finally comparing the gradient magnitude in the estimated gradient direction.
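The recipe just described can be sketched as follows; the gradient arrays are assumed to have been computed beforehand, and the example values are hypothetical:

```python
import math

# Non-maximum suppression on a discrete grid: round the gradient
# direction to a multiple of 45 degrees, then keep a pixel only if
# its gradient magnitude is not exceeded by either of its two
# neighbours along that direction.
def non_max_suppression(mag, gx, gy):
    rows, cols = len(mag), len(mag[0])
    step = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            angle = math.degrees(math.atan2(gy[i][j], gx[i][j])) % 180.0
            di, dj = step[int(((angle + 22.5) // 45) % 4) * 45]
            if mag[i][j] >= mag[i + di][j + dj] and \
               mag[i][j] >= mag[i - di][j - dj]:
                out[i][j] = mag[i][j]
    return out

# A horizontal gradient (gx > 0, gy = 0) across a blurred vertical
# edge: only the locally strongest response survives, so the thick
# response is thinned to a one-pixel-wide curve.
mag = [[0, 0, 0, 0, 0],
       [0, 30, 80, 30, 0],
       [0, 0, 0, 0, 0]]
gx = [[1.0] * 5 for _ in range(3)]
gy = [[0.0] * 5 for _ in range(3)]
print(non_max_suppression(mag, gx, gy)[1])   # -> [0.0, 0.0, 80, 0.0, 0.0]
```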

A commonly used approach to the problem of choosing appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below the lower threshold. This approach makes the assumption that edges are likely to lie along continuous curves, and allows us to follow a faint section of an edge we have previously seen, without marking every noisy pixel in the image as an edge. We still, however, have the problem of choosing appropriate thresholding parameters, and suitable threshold values may vary over the image.
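A one-dimensional sketch of hysteresis along a single traced path (the magnitude values are hypothetical); a full 2-D implementation would follow the edge curve through the image and trace in both directions from each start point:

```python
# Thresholding with hysteresis along one scan of edge-strength
# values: start marking when a value reaches the high threshold,
# keep marking while values stay at or above the low threshold.
def hysteresis(values, low, high):
    marked = [False] * len(values)
    tracing = False
    for i, v in enumerate(values):
        if v >= high:
            tracing = True       # strong response: an edge starts
        elif v < low:
            tracing = False      # dropped below low: edge ends
        marked[i] = tracing
    return marked

mags = [10, 60, 90, 55, 52, 20, 58, 15]
print(hysteresis(mags, low=50, high=80))
# -> [False, False, True, True, True, False, False, False]
```

The values 55 and 52 stay marked because they continue an edge that started at 90, while the isolated 58 is rejected; no single threshold could make both decisions at once. Note that this forward-only sketch misses the 60 adjacent to the 90, which bidirectional tracing would keep.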

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient. Detection of peaks in the second derivative, on the other hand, amounts to line detection, provided that the image operator is expressed at a proper scale: as noted above, a line is a double edge, with an intensity gradient on one side of the line and the opposite gradient on the other, so the presence of a line produces a very large change in the intensity gradient. To find lines, one can therefore search for zero crossings in the second derivative of the intensity gradient.
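In one dimension this can be sketched with the discrete Laplacian, f(i-1) - 2f(i) + f(i+1), as the second-derivative estimate (the signal below is hypothetical):

```python
# Second derivative of a hypothetical 1-D intensity ramp via the
# discrete Laplacian; an edge is located where it changes sign.
signal = [10, 10, 12, 60, 106, 110, 110]

second = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
          for i in range(1, len(signal) - 1)]
print(second)       # -> [2, 46, -2, -42, -4]

# Zero-crossings of the second derivative mark local maxima of the
# gradient, i.e. the centre of the intensity transition.
crossings = [i + 1 for i in range(len(second) - 1)
             if second[i] * second[i + 1] < 0]
print(crossings)    # -> [2]: sign change between pixels 2 and 3
```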

We can conclude that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus, we define a point in an image as an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term "edge segment" generally is used if the edge is short in relation to the dimensions of the image, and a key problem in segmentation is to assemble edge segments into longer edges. An alternative definition, if we elect to use the second derivative, is to define the edge points in an image as the zero crossings of its second derivative, with the definition of an edge otherwise the same as above. It should be noted that these definitions do not guarantee success in finding edges in an image.

64、維函數(shù)f(x,y),這里x和y是空間坐標,而在任何一對空間坐標(x,y)上的幅值f 稱為該點圖像的強度或灰度。當x,y和幅值f為有限的、離散的數(shù)值時,稱該圖像為數(shù)字圖像。數(shù)字圖像處理是指借用數(shù)字計算機處理數(shù)字圖像,值得提及的是數(shù)字圖像是由有限的元素組成的,每一個元素都有一個特定的位置和幅值,這些元素稱為圖像元素、畫面元素或像素。像素是廣泛用于表示數(shù)字圖像元素的詞匯。</p><p>  視覺是人類最高級的感知器官

65、,所以,毫無疑問圖像在人類感知中扮演著最重要的角色。然而,人類感知只限于電磁波譜的視覺波段,成像機器則可覆蓋幾乎全部電磁波譜,從伽馬射線到無線電波。它們可以對非人類習慣的那些圖像源進行加工,這些圖像源包括超聲波、電子顯微鏡及計算機產(chǎn)生的圖像。因此,數(shù)字圖像處理涉及各種各樣的應用領(lǐng)域。</p><p>  圖像處理涉及的范疇或其他相關(guān)領(lǐng)域(例如,圖像分析和計算機視覺)的界定在初創(chuàng)人之間并沒有一致的看法。有時用處理的

66、輸入和輸出內(nèi)容都是圖像這一特點來界定圖像處理的范圍。我們認為這一定義僅是人為界定和限制。例如,在這個定義下,甚至最普通的計算一幅圖像灰度平均值的工作都不能算做是圖像處理。另一方面,有些領(lǐng)域(如計算機視覺)研究的最高目標是用計算機去模擬人類視覺,包括理解和推理并根據(jù)視覺輸入采取行動等。這一領(lǐng)域本身是人工智能的分支,其目的是模仿人類智能。人工智能領(lǐng)域處在其發(fā)展過程中的初期階段,它的發(fā)展比預期的要慢的多,圖像分析(也稱為圖像理解)領(lǐng)域則處在圖

67、像處理和計算機視覺兩個學科之間。</p><p>  從圖像處理到計算機視覺這個連續(xù)的統(tǒng)一體內(nèi)并沒有明確的界線。然而,在這個連續(xù)的統(tǒng)一體中可以考慮三種典型的計算處理(即低級、中級和高級處理)來區(qū)分其中的各個學科。</p><p>  低級處理涉及初級操作,如降低噪聲的圖像預處理,對比度增強和圖像尖銳化。低級處理是以輸入、輸出都是圖像為特點的處理。中級處理涉及分割(把圖像分為不同區(qū)域或目標物

68、)以及縮減對目標物的描述,以使其更適合計算機處理及對不同目標的分類(識別)。中級圖像處理是以輸入為圖像,但輸出是從這些圖像中提取的特征(如邊緣、輪廓及不同物體的標識等)為特點的。最后,高級處理涉及在圖像分析中被識別物體的總體理解,以及執(zhí)行與視覺相關(guān)的識別函數(shù)(處在連續(xù)統(tǒng)一體邊緣)等。</p><p>  根據(jù)上述討論,我們看到,圖像處理和圖像分析兩個領(lǐng)域合乎邏輯的重疊區(qū)域是圖像中特定區(qū)域或物體的識別這一領(lǐng)域。這樣

69、,在研究中,我們界定數(shù)字圖像處理包括輸入和輸出均是圖像的處理,同時也包括從圖像中提取特征及識別特定物體的處理。舉一個簡單的文本自動分析方面的例子來具體說明這一概念。在自動分析文本時首先獲取一幅包含文本的圖像,對該圖像進行預處理,提?。ǚ指睿┳址缓笠赃m合計算機處理的形式描述這些字符,最后識別這些字符,而所有這些操作都在本文界定的數(shù)字圖像處理的范圍內(nèi)。理解一頁的內(nèi)容可能要根據(jù)理解的復雜度從圖像分析或計算機視覺領(lǐng)域考慮問題。這樣,我們定義

70、的數(shù)字圖像處理的概念將在有特殊社會和經(jīng)濟價值的領(lǐng)域內(nèi)通用。</p><p>  數(shù)字圖像處理的應用領(lǐng)域多種多樣,所以文本在內(nèi)容組織上盡量達到該技術(shù)應用領(lǐng)域的廣度。闡述數(shù)字圖像處理應用范圍最簡單的一種方法是根據(jù)信息源來分類(如可見光、X射線,等等)。在今天的應用中,最主要的圖像源是電磁能譜,其他主要的能源包括聲波、超聲波和電子(以用于電子顯微鏡方法的電子束形式)。建模和可視化應用中的合成圖像由計算機產(chǎn)生。</

71、p><p>  建立在電磁波譜輻射基礎(chǔ)上的圖像是最熟悉的,特別是X射線和可見光譜圖像。電磁波可定義為以各種波長傳播的正弦波,或者認為是一種粒子流,每個粒子包含一定(一束)能量,每束能量成為一個光子。如果光譜波段根據(jù)光譜能量進行分組,我們會得到下圖1所示的伽馬射線(最高能量)到無線電波(最低能量)的光譜。如圖所示的加底紋的條帶表達了這樣一個事實,即電磁波譜的各波段間并沒有明確的界線,而是由一個波段平滑地過渡到另一個波段

72、。</p><p>  圖像獲取是第一步處理。注意到獲取與給出一幅數(shù)字形式的圖像一樣簡單。通常,圖像獲取包括如設置比例尺等預處理。</p><p>  圖像增強是數(shù)字圖像處理最簡單和最有吸引力的領(lǐng)域?;旧?,增強技術(shù)后面的思路是顯現(xiàn)那些被模糊了的細節(jié),或簡單地突出一幅圖像中感興趣的特征。一個圖像增強的例子是增強圖像的對比度,使其看起來好一些。應記住,增強是圖像處理中非常主觀的領(lǐng)域,這一點很

73、重要。</p><p>  圖像復原也是改進圖像外貌的一個處理領(lǐng)域。然而,不像增強,圖像增強是主觀的,而圖像復原是客觀的。在某種意義上說,復原技術(shù)傾向于以圖像退化的數(shù)學或概率模型為基礎(chǔ)。另一方面,增強以怎樣構(gòu)成好的增強效果這種人的主觀偏愛為基礎(chǔ)。</p><p>  彩色圖像處理已經(jīng)成為一個重要領(lǐng)域,因為基于互聯(lián)網(wǎng)的圖像處理應用在不斷增長。就使得在彩色模型、數(shù)字域的彩色處理方面涵蓋了大量基

74、本概念。在后續(xù)發(fā)展,彩色還是圖像中感興趣特征被提取的基礎(chǔ)。</p><p>  小波是在各種分辨率下描述圖像的基礎(chǔ)。特別是在應用中,這些理論被用于圖像數(shù)據(jù)壓縮及金字塔描述方法。在這里,圖像被成功地細分為較小的區(qū)域。</p><p>  壓縮,正如其名稱所指的意思,所涉及的技術(shù)是減少圖像的存儲量,或者在傳輸圖像時降低頻帶。雖然存儲技術(shù)在過去的十年內(nèi)有了很大改進,但對傳輸能力我們還不能這樣說,

75、尤其在互聯(lián)網(wǎng)上更是如此,互聯(lián)網(wǎng)是以大量的圖片內(nèi)容為特征的。圖像壓縮技術(shù)對應的圖像文件擴展名對大多數(shù)計算機用戶是很熟悉的(也許沒注意),如JPG文件擴展名用于JPEG(聯(lián)合圖片專家組)圖像壓縮標準。</p><p>  形態(tài)學處理設計提取圖像元素的工具,它在表現(xiàn)和描述形狀方面非常有用。這一章的材料將從輸出圖像處理到輸出圖像特征處理的轉(zhuǎn)換開始。</p><p>  分割過程將一幅圖像劃分為組成

76、部分或目標物。通常,自主分割是數(shù)字圖像處理中最為困難的任務之一。復雜的分割過程導致成功解決要求物體被分別識別出來的成像問題需要大量處理工作。另一方面,不健壯且不穩(wěn)定的分割算法幾乎總是會導致最終失敗。通常,分割越準確,識別越成功。</p><p>  表示和描述幾乎總是跟隨在分割步驟的輸后邊,通常這一輸出是未加工的數(shù)據(jù),其構(gòu)成不是區(qū)域的邊緣(區(qū)分一個圖像區(qū)域和另一個區(qū)域的像素集)就是其區(qū)域本身的所有點。無論哪種情況

77、,把數(shù)據(jù)轉(zhuǎn)換成適合計算機處理的形式都是必要的。首先,必須確定數(shù)據(jù)是應該被表現(xiàn)為邊界還是整個區(qū)域。當注意的焦點是外部形狀特性(如拐角和曲線)時,則邊界表示是合適的。當注意的焦點是內(nèi)部特性(如紋理或骨骼形狀)時,則區(qū)域表示是合適的。則某些應用中,這些表示方法是互補的。選擇一種表現(xiàn)方式僅是解決把原始數(shù)據(jù)轉(zhuǎn)換為適合計算機后續(xù)處理的形式的一部分。為了描述數(shù)據(jù)以使感興趣的特征更明顯,還必須確定一種方法。描述也叫特征選擇,涉及提取特征,該特征是某些感

78、興趣的定量信息或是區(qū)分一組目標與其他目標的基礎(chǔ)。</p><p>  識別是基于目標的描述給目標賦以符號的過程。如上文詳細討論的那樣,我們用識別個別目標方法的開發(fā)推出數(shù)字圖像處理的覆蓋范圍。</p><p>  到目前為止,還沒有談到上面圖2中關(guān)于先驗知識及知識庫與處理模塊之間的交互這部分內(nèi)容。關(guān)于問題域的知識以知識庫的形式被編碼裝入一個圖像處理系統(tǒng)。這一知識可能如圖像細節(jié)區(qū)域那樣簡單,在

79、這里,感興趣的信息被定位,這樣,限制性的搜索就被引導到尋找的信息處。知識庫也可能相當復雜,如材料檢測問題中所有主要缺陷的相關(guān)列表或者圖像數(shù)據(jù)庫(該庫包含變化檢測應用相關(guān)區(qū)域的高分辨率衛(wèi)星圖像)。除了引導每一個處理模塊的操作,知識庫還要控制模塊間的交互。這一特性上面圖2中的處理模塊和知識庫間用雙箭頭表示。相反單頭箭頭連接處理模塊。</p><p><b>  邊緣檢測</b></p>

80、;<p>  邊緣檢測是圖像處理和計算機視覺中的術(shù)語,尤其在特征檢測和特征抽取領(lǐng)域,是一種用來識別數(shù)字圖像亮度驟變點即不連續(xù)點的算法。盡管在任何關(guān)于分割的討論中,點和線檢測都是很重要的,但是邊緣檢測對于灰度級間斷的檢測是最為普遍的檢測方法。</p><p>  雖然某些文獻提過理想的邊緣檢測步驟,但自然界圖像的邊緣并不總是理想的階梯邊緣。相反,它們通常受到一個或多個下面所列因素的影響:1.有限場景深

81、度帶來的聚焦模糊;2.非零半徑光源產(chǎn)生的陰影帶來的半影模糊;3.光滑物體邊緣的陰影;4.物體邊緣附近的局部鏡面反射或者漫反射。</p><p>  一個典型的邊界可能是(例如)一塊紅色和一塊黃色之間的邊界;與之相反的是邊線,可能是在另外一種不變的背景上的少數(shù)不同顏色的點。在邊線的每一邊都有一個邊緣。</p><p>  在對數(shù)字圖像的處理中,邊緣檢測是一項非常重要的工作。如果將邊緣認為是一

82、定數(shù)量點亮度發(fā)生變化的地方,那么邊緣檢測大體上就是計算這個亮度變化的導數(shù)。為簡化起見,我們可以先在一維空間分析邊緣檢測。在這個例子中,我們的數(shù)據(jù)是一行不同點亮度的數(shù)據(jù)。例如,在下面的1維數(shù)據(jù)中我們可以直觀地說在第4與第5個點之間有一個邊界:</p><p>  如果光強度差別比第四個和第五個點之間小,或者說相鄰的像素點之間光強度差更高,就不能簡單地說相應區(qū)域存在邊緣。而且,甚至可以認為這個例子中存在多個邊緣。除非

83、場景中的物體非常簡單并且照明條件得到了很好的控制,否則確定一個用來判斷兩個相鄰點之間有多大的亮度變化才算是有邊界的閾值,并不是一件容易的事。實際上,這也是為什么邊緣檢測不是一個簡單問題的原因之一。</p><p>  有許多用于邊緣檢測的方法,它們大致可分為兩類:基于搜索和基于零交叉.基于搜索的邊緣檢測方法首先計算邊緣強度,通常用一階導數(shù)表示,例如梯度模;然后,用計算估計邊緣的局部方向,通常采用梯度的方向,并利用

84、此方向找到局部梯度模的最大值。基于零交叉的方法找到由圖像得到的二階導數(shù)的零交叉點來定位邊緣。通常用拉普拉斯算子或非線性微分方程的零交叉點,我們將在后面的小節(jié)中描述.濾波做為邊緣檢測的預處理通常是必要的,通常采用高斯濾波。</p><p>  已發(fā)表的邊緣檢測方法應用計算邊界強度的度量, 這與平滑濾波有本質(zhì)的不同. 正如許多邊緣檢測方法依賴于圖像梯度的計算, 他們用不同種類的濾波器來估計x-方向和y-方向的梯度.&

85、lt;/p><p>  一旦我們計算出導數(shù)之后,下一步要做的就是給出一個閾值來確定哪里是邊緣位置。閾值越低,能夠檢測出的邊線越多,結(jié)果也就越容易受到圖片噪聲的影響,并且越容易從圖像中挑出不相關(guān)的特性。與此相反,一個高的閾值將會遺失細的或者短的線段。</p><p>  如果邊緣閾值應用于正確的的梯度幅度圖像,生成的邊緣一般會較厚,某些形式的邊緣變薄處理是必要的。然而非最大抑制的邊緣檢測,邊緣曲

86、線的定義十分模糊,邊緣像素可能成為邊緣多邊形通過一個邊緣連接(邊緣跟蹤)的過程。在一個離散矩陣中,非最大抑制階梯能夠通過一種方法來實現(xiàn),首先預測一階導數(shù)方向、然后把它近似到45度的倍數(shù)、最后在預測的梯度方向比較梯度幅度。</p><p>  一個常用的這種方法是帶有滯后作用的閾值選擇。這個方法使用不同的閾值去尋找邊緣。首先使用一個閾值上限去尋找邊線開始的地方。一旦找到了一個開始點,我們在圖像上逐點跟蹤邊緣路徑,當

87、大于門檻下限時一直紀錄邊緣位置,直到數(shù)值小于下限之后才停止紀錄。這種方法假設邊緣是連續(xù)的界線,并且我們能夠跟蹤前面所看到的邊緣的模糊部分,而不會將圖像中的噪聲點標記為邊緣。但是,我們?nèi)匀淮嬖谶x擇適當?shù)拈撝祬?shù)的問題,而且不同圖像的閾值差別也很大。</p><p>  其它一些邊緣檢測操作是基于亮度的二階導數(shù)。這實質(zhì)上是亮度梯度的變化率。在理想的連續(xù)變化情況下,在二階導數(shù)中檢測過零點將得到梯度中的局部最大值。另一方

88、面,二階導數(shù)中的峰值檢測是邊線檢測,只要圖像操作使用一個合適的尺度表示。如上所述,邊線是雙重邊緣,這樣我們就可以在邊線的一邊看到一個亮度梯度,而在另一邊看到相反的梯度。這樣如果圖像中有邊線出現(xiàn)的話我們就能在亮度梯度上看到非常大的變化。為了找到這些邊線,我們可以在圖像亮度梯度的二階導數(shù)中尋找過零點。</p><p>  總之,為了對有意義的邊緣點進行分類,與這個點相聯(lián)系的灰度級變換必須比在這一點的背景上變換更為有效

89、。由于我們用局部計算進行處理,決定一個值是否有效的選擇方法就是使用門限。因此,如果一個點的二維一階導數(shù)比指定的門限大,我們就定義圖像中的此點是一個邊緣點。術(shù)語“邊緣線段”一般在邊緣與圖像的尺寸比起來很短時才使用。分割的關(guān)鍵問題是如何將邊緣線段組合成更長的邊緣。如果我們選擇使用二階導數(shù),則另一個可用的定義是將圖像中的邊緣點定義為它的二階導數(shù)的零交叉點。此時,邊緣的定義同上面講過的定義是一樣的。應注意,這些定義并不能保證在一幅圖像中成功地找
