Progress in Computers

Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2004

Maurice Wilkes
Computer Laboratory
University of Cambridge

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE publications …

Consolidation in the 1960s

By the late 50s or
early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high-level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else …

These chips made
a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on, people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work
at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a
paper by Patterson and Ditzel entitled 'The Case for the Reduced Instruction Set Computer'.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of 'computer farm'.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set, as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is translated into this internal code for execution.

In this summing
up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 instruction set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64-bit address space. In this, it followed the lead of the designers of the MIPS R4000 and the Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand, but I am very ready to …

AMD have defined a 64-bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is
what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on
offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now …

The degree of integration is measured by the feature size which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron, held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law (or Sod's law, as it is usually called in the UK). Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some
law or principle …

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From that time on, the high density CMOS silicon chip was cock of the roost. Shrinkage went on until millions of transistors could be put on a single chip, and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.
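The lecture does not say which prediction methods were meant, but a classic hardware scheme from that era is the two-bit saturating counter, which changes its prediction only after two successive surprises. A minimal sketch, with an illustrative branch history (the class name and sample data are mine, not from the lecture):

```python
# Sketch of a 2-bit saturating-counter branch predictor, one classic
# form of the branch prediction described above. Real hardware keeps a
# table of such counters indexed by instruction address; one counter
# is enough to show the idea.

class TwoBitPredictor:
    """States 0-1 predict 'not taken'; states 2-3 predict 'taken'."""

    def __init__(self):
        self.state = 2  # start weakly predicting 'taken'

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Saturate at 0 and 3, so a single anomalous outcome does
        # not flip a well-established prediction.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A typical loop branch: taken nine times, then falls through once.
history = [True] * 9 + [False]
predictor = TwoBitPredictor()
hits = 0
for actual in history:
    if predictor.predict() == actual:
        hits += 1
    predictor.update(actual)
print(f"{hits}/{len(history)} branches predicted correctly")  # 9/10
```

The single misprediction at loop exit illustrates why such predictors were so successful: regular branches quickly drive the counter to a saturated state and stay there.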
Equally surprising is what it has been found possible to put on a single-chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic
process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time, US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years the SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title 'Roadmap' was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.
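The arithmetic behind that target is worth making explicit: doubling every eighteen months over a 15-year horizon means ten doublings, roughly a thousandfold increase in components per chip. A quick sketch (the starting count of 10 million transistors is an illustrative assumption, not a Roadmap figure):

```python
# Growth implied by the Roadmap target: components per chip
# doubling every 18 months, tracked over the 15-year horizon.

months_per_doubling = 18
horizon_years = 15

doublings = horizon_years * 12 / months_per_doubling  # 10 doublings
growth = 2 ** doublings                               # 1024x

print(f"{doublings:.0f} doublings -> {growth:.0f}x more components per chip")

# Year-by-year projection from an assumed 10 million transistors.
start = 10_000_000
for year in (0, 3, 6, 9, 12, 15):
    count = start * 2 ** (year * 12 / months_per_doubling)
    print(f"year {year:2d}: {count:,.0f} transistors")
```

Ten doublings is a factor of 1024, which is why a 15-year Roadmap horizon spans three orders of magnitude in component count.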
In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement, and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or r…

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled 'The CMOS end-point and related topics in Computing', delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work or, if they did work, would not be any faster. In fact, the physical limitations that are now beginning to make themselves felt do not arise through a shortage of electrons, but because the insulating layers on …

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that 'the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research break-throughs in most technical areas'. This was the most specific statement about the Red Brick Wall that had so far come from the SIA.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion …