Talk:Von Neumann architecture/Archive 1


Merge Proposal

No: "Von Neumann machine" has a wider implication than "Von Neumann architecture". The latter is an article on computer architecture concepts. That article itself points out that the term "Von Neumann machine" means more than the computer architecture.

Von Neumann Machines also applies to concepts in molecular nanotechnology and theoretical starship designs (see: Von Neumann Probe).

The article on "Von Neumann Machines" is not about computer architectures. Trying to include it in a page about computer arcitecture is akin to putting an article about McDonalds in an article about Cows.

—Preceding unsigned comment added by Beowulf314159 (talkcontribs) 12:52, 12 December 2005

Ehh... Well about half of the article on Von Neumann Machines IS about computer architecture. Honestly I've never heard of self-replicating machines being called "von Neumann machines," so I guess this is just my ignorance combined with a terribly referenced article... -- uberpenguin 20:01, 12 December 2005 (UTC)
I guess that makes us even - I'd never heard of the term Von Neumann machine being used to refer to a Turing machine before, and only thought of them in terms of self-replicating systems. Beowulf314159 21:14, 12 December 2005 (UTC)
My take: (1) Von Neumann architecture should remain more-or-less as-is, about the computer architecture. (2) The nanotech content of the current Von Neumann machine article should either be (2a) merged into one of the other nanotech articles (e.g., Molecular nanotechnology), or (2b) used to create an entirely new article (e.g., Von Neumann machine (nanotechnology)). I don't know much about nanotech and/or the "Von Neumann machine" aspect thereof, so I can't say which is more appropriate. If there is significant stuff specific to the "Von Neumann machine" term, then a separate article makes sense. If "Von Neumann machine" is just an alternate name for "molecular nanotechnology machine", then I'd say merge. (3) The final Von Neumann machine page should be a standard disambiguation page. One dab target should be Von Neumann architecture. The other should be the article resulting from Step 2 above. See also Quuxplusone's comments at Talk:Von Neumann machine#Is this a dab page?. --DragonHawk 04:26, 13 December 2005 (UTC)
With a little digging and comparing, it appeared that the Von Neumann machine article really was nothing more than a DAB page with material rolled into it from the pages that it pointed to - save for the comment someone had made about the propriety of attaching John von Neumann's name to the "Von Neumann architecture". That was easily addressed by mentioning it and adding a link to the article on John von Neumann - where the issue is discussed.
There appears to be no specific article about molecular nanotechnology as Von Neumann machines, and macroscopic self-replicating machines are covered in Clanking replicators and Von Neumann probes.
With a little care, the Von Neumann machine article could be scaled back to a dab page, citing both discrete uses of the term and providing links to the various pages (Von Neumann architecture, John von Neumann, Clanking replicator, Molecular Nanotechnology, and Von Neumann probes) with little loss of information. What information was taken out of the article is already replicated verbatim in the Von Neumann architecture page.
Beowulf314159 13:37, 13 December 2005 (UTC)
For the record, I just finished moving content out of von Neumann machine, and into von Neumann architecture and Self-replicating machine. von Neumann machine is now a proper dab page. --DragonHawk 00:50, 17 August 2006 (UTC)

Good job. Thanks! --70.189.73.224 01:27, 22 August 2006 (UTC)

A link to the http://wiki.riteme.site/wiki/Von_Neumann_machine article should be added. —Preceding unsigned comment added by 80.128.243.28 (talkcontribs) 06:43, 10 June 2004

Actually, that article should be merged into this one. -- uberpenguin 16:35, 12 December 2005 (UTC)
Done. --68.0.120.35 07:22, 15 March 2007 (UTC)

Call me dumb

...but doesn't this architecture describe the general structure of all computers used today? If so, could somebody put this rather significant observation in the intro? Thanks. ---Ransom (--71.4.51.150 18:44, 6 August 2007 (UTC))

Most computers these days DO present a von Neumann architecture model at the machine language level, although almost all are a hybrid of von Neumann and Harvard at the hardware level (e.g., separate instruction and data caches). Pure Harvard architectures are still common in microcontrollers and digital signal processors because of their specialized purposes. Highly parallel computers can be hard to classify. -- RTC 22:11, 6 August 2007 (UTC)

Princeton architecture

Wikipedia:Redirect#What_needs_to_be_done_on_pages_that_are_targets_of_redirects.3F suggests

"Normally, we try to make sure that all "inbound redirects" are mentioned in the first couple of paragraphs of the article."

Therefore, since Princeton architecture is a redirect to Von Neumann architecture, we should mention "Princeton architecture" in the first couple of paragraphs of this article.

I say "von Neumann architecture" when I try to emphasize the fact that the program is stored in memory, as well as all kinds of other important-to-understand facts.

I think I would prefer to say that a "von Neumann architecture" is an entire category of things -- everything that suffers from the "von Neumann bottleneck". In particular, both the "Princeton architecture" and the "Harvard architecture" are included in that category. Then a "Princeton architecture" is "a von Neumann architecture with a single memory". Also a "Harvard architecture" is "a von Neumann architecture with 2 memories, where the program is stored in one memory, and most of the data is stored in the other memory".

(I've also seen DSPs that have 3 or 4 different memories that could be read simultaneously -- would you also call those Harvard architectures, or is there another name for that?).

So we could have a section "types of von Neumann architectures", listing the Princeton architecture, the Harvard architecture, the Turing machine, ... any others?

On the other hand, some people seem to think that a "von Neumann architecture" and a "Princeton architecture" are identically the same thing. Then "Harvard architecture" is something a little bit different from that one thing. If we decide to go this way, then the intro paragraph should have a little parenthetical "(also called the Princeton architecture)" statement.

--70.189.73.224 01:27, 22 August 2006 (UTC)

Re: "Also a "Harvard architecture" is "a von Neumann architecture with 2 memories..."
In my understanding this is definitely NOT correct. One of the points of the original "von Neumann draft paper" describing the architecture was to point out the advantages of using the same memory (i.e. 1 memory) so that instructions could be treated exactly like data. This is not typical of standard Harvard architecture machines (although some "modified Harvard architecture" machines do provide for access to the instruction memory as data). -- RTC 22:46, 23 August 2006 (UTC)
Yes, well, definitions change with time. Do you have a better name for "computer architectures that suffer from the von Neumann bottleneck"?
Do you think that a "von Neumann architecture" and a "Princeton architecture" are identically the same thing? --70.189.73.224 05:14, 24 September 2006 (UTC)
I would respond to your second question: Yes, that "Princeton" and "von Neumann" *are* identical (except for colloquial usage). And I would respond to your first question: A better name for "computer architectures that suffer from the von Neumann bottleneck" is a "stored-program architecture", of which von Neumann (Princeton) and Harvard architectures are two examples.
This article's own definition of "von Neumann Architecture", as stated in the very first sentence, is:
"The von Neumann architecture is a computer design model that uses a processing unit and a single separate storage structure to hold both instructions and data."
That is synonymous with my understanding, and it is in direct contradiction with the definition of the Harvard architecture. I think the contradiction could be resolved in several different ways:
1) Make a clear demarcation within this article in two broad sections: The first section would deal with the unique features of a computer architecture in which both data and instructions share a single address space. This section would state early on that such an architecture is also known as the "Princeton architecture".
The second section would explicitly state that it is shifting gears to adopt a *different* definition of "von Neumann architecture", to mean "stored-program architecture", of which both the Harvard and Princeton architectures are special cases.
2) Focus this article entirely on the usage of the term "von Neumann" in reference to all stored-program computers in general (both Princeton and Harvard).
Filter out every mention of a single address space for both instructions and data, and place it in a separate article entitled "Princeton architecture". Place a re-direct or some other reference in this article pointing to the new "Princeton" article, with a short explanation of the ambiguity in terminology.
3) Focus this article entirely on the usage of the term "von Neumann" in reference specifically to an architecture in which instructions and data share a single address space. Mention that another word for this is "Princeton architecture".
Move everything about the characteristics of stored-program architectures in general to a separate "Stored-program architecture" article.
I think it is very important in an encyclopedic article to make every possible effort to avoid unnecessary ambiguities, or where such ambiguities are unavoidable, to explain the contradiction as completely as possible. Goosnarrggh (talk) 14:58, 29 May 2008 (UTC)

Citation on article title

An editor has added a citation to the First Draft report after the words "von Neumann architecture" in the first sentence of the lead. The citation did not provide any explanation of what was being cited. I've reverted this change. In general citations should follow complete sentences (i.e. subject-verb combinations), as only complete sentences present facts that are verifiable ("He did this" deserves sourcing but putting a citation on "he" alone doesn't make sense). A citation attached to an instance of a term (in this case "von Neumann architecture") should preferably only be used as a citation for the usage or origin of the term itself. (For instance, in the ENIAC article, some edits repeatedly altered the expansion of the acronym, so the meaning of the word "ENIAC" itself deserved sourcing to an authority.) The First Draft does not use the term "von Neumann architecture" so it's rather inappropriate here. I have no problem with the First Draft being mentioned somewhere in the lead. I just have a problem with it being stuffed into the first sentence sans explanation. That's not good article writing. Robert K S (talk) 18:37, 31 July 2008 (UTC)

I would support what Robert K S has done, although there can be a case for citations mid-sentence. It seems to me that it is important to distinguish between the First Draft document and the phrase Von Neumann architecture which subsequently arose from it. Alan Turing's 1945 design of the ACE (Automatic Computing Engine) in his paper Proposed Electronic Calculator (delivered to the Executive Committee of the UK National Physical Laboratory in February 1946) conforms, in all important respects, to what later became known as the von Neumann Architecture. As Jack Copeland has pointed out, Turing's was a relatively complete specification of a stored-program digital computer, whereas the First Draft was much more abstract and was said by Harry Huskey, who drew up the first designs of EDVAC, to be of no help to him.[1] TedColes (talk) 16:10, 2 August 2008 (UTC)

References

  1. ^ Copeland, Jack (2006), "Colossus and the Rise of the Modern Computer", in Copeland, B. Jack (ed.), Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, p. 108, ISBN 978-0-19-284055-4

"implements a Turing machine"

It implements a register machine (which is Turing-equivalent), not a Turing machine. (Where's the tape? Where are the alphabet and states?) —Preceding unsigned comment added by 71.142.52.74 (talk) 04:59, 8 October 2008 (UTC)

Interpretation

This article reads as if this was a one-off architecture only used in the 40s. There should be mention that (though with improvements) it is still the basis for most modern computer architecture. —Preceding unsigned comment added by 209.30.228.224 (talk) 20:53, 28 January 2009 (UTC)

Bottleneck section

The last paragraph of the section on the von Neumann bottleneck has some things that I find kind of confusing.

First:

Backus's proposed solution has not had a major influence.
It doesn't say what his "proposed solution" is.

Then:

Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like Fortran, but internally, that is still what computers spend much of their time doing.
I'm not sure how the programming paradigm at all changes the "pushing words back and forth" situation. The programming language seems pretty peripheral to this. The "internally, ..." part recognizes this. But I think it should be stronger. Pretty much all the computer does is move words around.
But the introduction of more indirection and higher level code actually makes part of the problem as stated a lot worse, as the original quote says (emphasis added):
Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.
The part that I've bolded seems to be about indirection. I'd say that functional and object-oriented languages make this a lot more true now, when everything ends up being a pointer to something on the heap...

128.151.69.131 (talk) 16:45, 25 April 2008 (UTC)
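
To make the "where to find it" point above concrete, here is a minimal C sketch (the names are illustrative, not from the article): both functions compute the same sum, but the heap-linked version moves an extra pointer through the bottleneck for every useful word, which is exactly the indirection traffic Backus describes.

 #include <stddef.h>

 struct node { int value; struct node *next; };

 /* contiguous array: nearly every word moved is significant data */
 int sum_array(const int *a, size_t n)
 {
     int s = 0;
     for (size_t i = 0; i < n; i++)
         s += a[i];
     return s;
 }

 /* heap-linked list: every element also drags a "where to find it"
    word (the next pointer) through the bottleneck */
 int sum_list(const struct node *p)
 {
     int s = 0;
     for (; p != NULL; p = p->next)
         s += p->value;
     return s;
 }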

"The performance problem is reduced by a cache between the CPU and the main memory . . . ." There's also Direct Memory Access (DMA), which takes those pointers and does large transfers quickly without involving the CPU or its cache at all. –97.100.244.77 (talk) 20:23, 27 April 2009 (UTC)

I think that, in this section, the sentence about overcoming the bottleneck with 'branch prediction' is questionable. Branch prediction was invented to overcome stalls due to long processing pipelines. When a branch is detected late in the pipe, it causes the subsequent instructions to be cancelled. I see no connection with the von Neumann bottleneck. Zack73 (talk) 16:50, 15 September 2008 (UTC)

Early von Neumann-architecture computers

This article says that 'The terms "von Neumann architecture" and "stored-program computer" are generally used interchangeably, and that usage is followed in this article'. Yet we have two headings of 'Early von Neumann-architecture computers' and 'Early stored-program computers'. Would it not be more sensible to merge these two, and perhaps limit the list to machines brought into use before 1955? --TedColes (talk) 17:12, 10 June 2009 (UTC)

What does the amount of data an instruction processes or the number of operands have to do with sequential architecture in the opening paragraph? DGerman (talk) 20:38, 15 October 2009 (UTC)

please correct the diagram

Colleagues, I'd like to point out that in von Neumann's scheme, the Control Unit never writes to memory! It can only read (commands). The only way to change a memory slot's contents is to copy the accumulator to the slot, so it is the ALU, not the CU, that actually writes to memory. Please correct the diagram; I've got no time now to do so. Thanks! DrCroco (talk) 10:57, 20 April 2009 (UTC)

I'd fix it but I don't know how. It is wrong and it bothers me too. 71.214.223.133 (talk) 05:05, 28 November 2009 (UTC)

I think this is confusing, too. The only explanation that occurs to me is that the author of the diagram wanted the arrow from Memory to Control Unit to represent the transfer of instructions, with the arrow in the opposite direction representing "orders for CC to transfer its own connection with M to a different point in M, with the purpose of getting its next order from there;" as von Neumann puts it.

Right, in modern terms this would be the address bus, not the data bus that comes from the control, to specify the next instruction. At least explain in text, but the diagram needs to not get too busy. Actually the diagram was taken out now; see comments below. W Nowicki (talk) 17:08, 26 May 2011 (UTC)

Influence that the von Neumann Architecture may have had historically

I lecture in Computer Science and particularly Operating Systems -- while updating a tutorial on security, this thought occurred to me.

Adding what I believe may be a relevant observation: when considering the problem of buffer overflow exploits, it is precisely the shared instructions and data of the von Neumann model that contributes to the viability of this exploit. I submit that the vision of computer memory as shared between code and data may have been a factor in why the Multix architecture with an execute permission bit took so long to be accepted. OK, it's obvious now that memory can be shared between data and instructions whilst being dedicated to either one or the other at any moment in time, but did the perceived requirements of the model blind people to the possibility?

WRT Multix, the papers are Karger and Schell [1974] and Karger and Schell [2002]; one of the points that they made was that, had the Multix design been adopted generally, the buffer overflow attacks that characterised the late 1990s would not have been possible. Degingrich.au (talk) 04:49, 15 October 2009 (UTC)

Obvious question is what is Multix? There was an important operating system named Multics and one called Unix. Do you mean one of these or both? Page protect bits seem an aspect of the hardware, although in Multics' case it was intertwined since it was before the idea of a portable operating system. But you do have a point that the security downside of treating instructions as data should be discussed. W Nowicki (talk) 17:08, 26 May 2011 (UTC)
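
For illustration, here is a deliberately unsafe C sketch of the exploit class discussed above (a toy under stated assumptions, not any particular real attack): because instructions and data share one memory, bytes copied past the end of a stack buffer can overwrite the saved return address and later be fetched as instructions; an execute permission bit on memory pages blocks that final step.

 #include <string.h>

 /* toy example only -- never use strcpy on untrusted input */
 void vulnerable(const char *input)
 {
     char buf[16];
     /* no bounds check: input longer than 15 bytes plus the
        terminator overruns buf and can clobber the saved return
        address; without an execute bit, the attacker's bytes can
        then run as instructions */
     strcpy(buf, input);
 }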

Vast majority of computers?

In the vast majority of modern computers, the same memory is used for both data and program instructions.

Is this really true? The vast majority of modern computers are probably some kind of embedded computers. Embedded quite often uses Flash or (E)EPROM for program storage, and a separate very small RAM for data. 193.11.235.35 (talk) 05:49, 23 May 2010 (UTC)

Many microcontrollers indeed have a Harvard architecture [1]. I don't know if that invalidates the statement though. Tijfo098 (talk) 13:15, 18 May 2011 (UTC)
Maybe qualify by saying "most general-purpose computers", since indeed more and more embedded electronics devices include one or more simple programmed controllers. W Nowicki (talk) 17:08, 26 May 2011 (UTC)

For information, Stored-program computer, which was formerly redirected to this article, has been (re-)created. Maybe an expert could determine whether there is sufficient distinction to warrant a separate article. Dave.Dunford (talk) 15:32, 10 November 2010 (UTC)

Yes, seems to be enough material to support two, but then the statement that this article uses the two terms interchangeably seems very out of place and should be removed. Instead try to summarize the distinction. W Nowicki (talk) 17:08, 26 May 2011 (UTC)

Scope

Schematic of the von Neumann architecture. The Control Unit and Arithmetic Logic Unit form the main components of the Central Processing Unit (CPU)

Von Neumann architecture can mean two related, although different, things. It can mean (broadly) "stored-program computer" with a unified store for programs and data as this article defines it (as opposed to Harvard architecture), see [2] [3] for RS, but it can also more narrowly refer to some specific way to interconnect the CPU/memory/IO as was done in the EDVAC and IAS [4] [5]. The IBM 7094 for instance has a von Neumann architecture in the first sense, but not in the 2nd one, having a multiplexer and I/O channels. [6]. The picture to the right is a suitable illustration for the 2nd sense, but not the first, which can have pretty much any interconnect. Finally, VN architecture can have a third sense, derived from the 2nd: a certain primitive instruction set architecture, e.g. having no index registers [7]. Tijfo098 (talk) 13:04, 18 May 2011 (UTC)

Yes, good points, and the article needs to clarify and distinguish between related articles. There already is a disambig page Von Neumann machine, to which I would add IAS machine to the list, since that was often called the "von Neumann machine" too. Actually the IAS machine article has a list of similar machines, so perhaps we could point to that as the article on roughly your third case: the series of machines in that era having a similar architecture. We definitely need to have at least one diagram in the article; perhaps say "example of a von Neumann architecture" or something like that? Another idea would be to add one or more additional diagrams of related organizations. How about system bus for example? W Nowicki (talk) 17:08, 26 May 2011 (UTC)

'Early von Neumann-architecture computers' and 'Early stored-program computers'

Why are there these two different sections? Also the section 'Non-von Neumann processors' says the first of these dates from 1986. This is confusing as earlier in the article, von Neumann architecture is contrasted with Harvard architecture, implying that Harvard architecture is a non-von Neumann architecture. Clarification is needed. --TedColes (talk) 17:48, 25 August 2011 (UTC)

Book: "Nucleus of the Digital Age"

Here's a very good review of the book Turing's Cathedral by George Dyson (Pantheon), describing the history of the von Neumann architecture: 

 {{cite
 |url=http://online.wsj.com/article/SB10001424052970204909104577237823212651912.html
 |title=The Nucleus of the Digital Age
 |author=Konstantin Kakaes
 |publisher=[[The Wall Street Journal]]
 |date=2012-03-03}}

At some point, I'd like to find one or two places to work it into the article. — Loadmaster (talk) 16:23, 8 March 2012 (UTC)

Article problem

The discussion of what exactly a Von Neumann computer does seems to not have much to do with the title of this article, Von Neumann architecture -- that is, a CPU with Harvard architecture does exactly the same things. Tempshill 19:57, 15 Nov 2003 (UTC)

That is because most people use "von Neumann architecture" to refer to a wider class of architectures which includes the Harvard architecture and other similar CPU+RAM architectures.
Random examples:
The von Neumann Architecture of Computer Systems
Non von Neumann Architectures
The article is currently mistakenly focusing on a very narrow definition of von Neumann architecture, which is typically only used when explicitly contrasting with the Harvard architecture.
Similarly, the term "von Neumann bottleneck" usually has nothing to do with the bus, and everything to do with the inefficiency of the overall CPU+RAM organization. For example, cellular automata can sort N numbers in O(N) time, but a machine with a von Neumann bottleneck will require O(N log N) time. Matt Cook (talk) 02:17, 16 April 2012 (UTC)
Those are two poorly written articles. They encourage (as here) a fallacy of the excluded middle because they set up a false dichotomy between Von Neumann machines and massively parallel non-CPU machines. This is wrong on two counts.
Simply, there are three architectures in question here, not two: Von Neumann, Harvard (and Modified), then non-CPU architectures where the processing isn't carried out by any "central" processor. The two articles cited set up a contrast between the first and third, but they don't mention the second at all. Their implied comparison is between the CPU and non-CPU groups, which (as you rightly note) places the Harvard group alongside the Von Neumann group. Yet this isn't a claim that Harvard is Von Neumann; it's simply failing to identify or describe Harvard as a group at all.
Secondly, the Von Neumann / Harvard distinction is a matter of bus, not processor, and is orthogonal to the single CPU / massively multiple CPU distinction. Choosing single/multiple/massively multiple processors is a separate decision to how to interconnect them. In particular, high multiple processor architectures also tend towards the Harvard approach of separated buses (because it's just easier) rather than shared Von Neumann buses. So in contrast to your assumption here, the Harvards in these two refs are closer to the multiple CPUs than they are to the Von Neumanns.
Finally there's the issue that Von Neumann is well-defined (in the narrow interpretation, as contrasted with Harvard) and has been so for decades before these two quite minor papers. Andy Dingley (talk) 09:41, 16 April 2012 (UTC)

suggest split: von Neumann bottleneck

The von Neumann bottleneck article is currently a redirect to von Neumann architecture. I suggest that we WP:SPLIT out the "Von Neumann bottleneck" section of this article into a full article of its own, leaving behind a WP:SUMMARY. (Later, we could move bottleneck-related text in the random-access memory article to that "Von Neumann bottleneck" article). Should the new article discussing this bottleneck be titled "Von Neumann bottleneck", or would some other title ("memory bottleneck", etc.) be better? --DavidCary (talk) 11:52, 3 August 2011 (UTC)

I second this proposal. I feel the name should remain "Von Neumann Bottleneck" to honour John Backus's original article where he says,

"I propose to call this tube the von Neumann bottleneck." doi:10.1145/359576.359579

p.r.newman (talk) 14:29, 16 April 2012 (UTC)
Such a split would be an excellent idea. They really are different things with similar names. It needs to be titled Von Neumann bottleneck because that's the name most people use when referring to it. We should also add a "not to be confused with Von Neumann architecture" note to it and a "not to be confused with Von Neumann bottleneck" note to this article. --Guy Macon (talk) 15:48, 16 April 2012 (UTC)

Babbage

Babbage's work isn't mentioned anywhere in the article; his name only comes up "accidentally" in a quote about von Neumann: "He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing— in so far as not anticipated by Babbage ... "
A bit ironic that a section detailing "who knew what and published when" between 1935 to 1937 fails to mention that Babbage described his analytical engine 100 years earlier. Ssscienccce (talk) 09:18, 1 September 2012 (UTC)

The "fundamental conception" is the notion of storing instructions in the same memory as data, so that a program can load or create new code to then execute (which, admittedly, is also possible if you have a (modified?) Harvard architecture machine that includes instructions to write to the instruction memory). The analytical engine as described by Babbage, as far as I know, would not have supported that, so, unless there's a paper in which Babbage did discuss that concept, the fundamental concept of a von Neumann architecture machine was, in fact, not anticipated by Babbage. Guy Harris (talk) 18:06, 1 September 2012 (UTC)
From [ http://www.turing.org.uk/turing/scrapbook/computer.html ]:
"Charles Babbage's 1840s Analytical Engine ... didn't incorporate the vital idea which is now exploited by the computer in the modern sense, the idea of storing programs in the same form as data and intermediate working. His machine was designed to store programs on cards, while the working was to be done by mechanical cogs and wheels. There were other differences — he did not have electronics or even electricity, and he still thought in base-10 arithmetic. But more fundamental is the rigid separation of instructions and data in Babbage's thought."
An interesting side note: being able to move information between code and data is so useful (and it is so easy to add) that pretty much every modern Harvard architecture machine incorporates some way to do it, thus making them modified Harvard architecture machines. That being said, none of them have the main disadvantage of a pure Von Neumann architecture, which is that you cannot access code and data simultaneously. Of course you really can if your system has separate code and data cache, so the differences are pretty much irrelevant for CPUs other than microcontrollers and antiques. --Guy Macon (talk) 20:10, 1 September 2012 (UTC)
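
As a concrete instance of that "some way to do it", here is a minimal C sketch assuming an AVR microcontroller and the avr-libc pgmspace API, which reads bytes back out of the flash program memory as data (the identifiers below are illustrative):

 #include <avr/pgmspace.h>

 /* the table lives in the instruction (flash) memory, not in RAM */
 const char table[] PROGMEM = "stored in program memory";

 char read_table(unsigned char i)
 {
     /* a dedicated instruction (LPM on AVR) moves a byte from code
        space into the data path */
     return pgm_read_byte(&table[i]);
 }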

"Misnomer"

This new paragraph does not state what the "misnomer" is, presumably the claim that von Neumann was "the father of the computer". It is also rather abrupt in tone and not in line with the later quotation in which Stanley Frankel said "I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing— in so far as not anticipated by Babbage ... " --TedColes (talk) 17:26, 22 December 2013 (UTC)

Eckert-Mauchly-Von Neumann architecture?

Regarding this edit, I would like to discuss the proposed change from "Von Neumann architecture" to "Eckert-Mauchly-Von Neumann architecture". As I wrote in my edit comment, I am not going to reject the idea out of hand, but we need more than one paper by Mark Burgess before we make such a controversial change, so I am opening up a discussion about it per WP:BRD. --Guy Macon (talk) 04:52, 20 January 2014 (UTC)

"This edit" was an attempt to 1) clean up the previous edits by User:Computernerd354 and 2) at least provide a reference for the use of "Eckert" with the architecture. If we need more than one paper by Burgess before we make said change, we need to back out all the "Eckert architecture"/"Eckert-Mauchly architecture" stuff, which I just did. Guy Harris (talk) 06:52, 20 January 2014 (UTC)
No need for angry-sounding edit summaries. I saw an edit, determined that the source wasn't good enough to support the edit, and reverted it. There is no requirement that I fix any other problems in the article. I would have fixed those edits had I noticed them, but they slipped by me. Your new version is a big improvement, BTW. --Guy Macon (talk) 07:39, 21 January 2014 (UTC)

Capitalization of "von" in "von Neumann architecture"

There seems to be continued confusion as to how to deal with the pesky "von" in this article. It is, of course, proper to lowercase the "V" in "von" when the word is used mid-sentence. However, various editors have taken their zeal a step too far by adding the {{lowercase}} template, making the title of the article "von Neumann architecture", or decapitalizing the "V" in "von" when it begins a section title or a sentence. This is no more proper than referring to the Steinbeck novel as of Mice and Men. Robert K S 13:40, 4 September 2007 (UTC)

"Von Neumann" is a proper name, used to signify the architecture. In engineering textbooks going back to 1982, "Von" is usually capitalized. In English writing style, when a person's name is assigned to the proper name of a thing, all words are capitalized, as in "The Da Vinci Project" and not the awkward "The da Vinci Project". The test is, does the usage aid or hinder understanding? Practically, when 'Von' is not capitalized as in "this machine is a von Neumann architecture", it raises questions to new readers as to the relevance of the word 'von' and whether or not 'von' is a typo. It is not useful that way and hinders understanding. Corwin8 (talk) 19:52, 15 August 2011 (UTC)
Let's say you're right; I also came to the page to fix this... again commenting years later. Von Neumann (disambiguation) has other examples that use lower case. comp.arch (talk) 01:10, 18 June 2014 (UTC)

Harvard architecture "more modern" than von Neumann architecture?

The machine that inspired the use of the term "Harvard architecture", the Harvard Mark I, first ran in 1944; the First Draft of a Report on the EDVAC by von Neumann, which inspired the use of the term "von Neumann architecture", was published in 1945.

In what fashion is the Harvard architecture "more modern than" the von Neumann architecture? Processors in which there are separate level 1 instruction and data caches, and separate data and address lines to those caches, are a relatively modern development, but that's, at best, a modified Harvard architecture, and arguably is better called a "split cache architecture", with the only difference between it and a von Neumann architecture being, at most, a requirement to flush the instruction cache following a store operation that modifies an in-memory instruction (and split-cache x86 processors, due to backwards-compatibility requirements, don't even need that). Guy Harris (talk) 08:03, 9 November 2014 (UTC)

And Turing's report on the Automatic Computing Engine also dates to 1945. Guy Harris (talk) 09:38, 9 November 2014 (UTC)
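
On the split-cache point above, here is a minimal sketch of the required flush, assuming GCC or Clang (the helper name publish_code is hypothetical): after a store that creates instructions, the instruction cache must be synchronized with the data cache before the new code is fetched.

 #include <stddef.h>
 #include <string.h>

 /* hypothetical helper: make freshly stored bytes at [buf, buf+len)
    visible to instruction fetch on a machine with split caches */
 void publish_code(void *buf, const void *code, size_t len)
 {
     memcpy(buf, code, len);   /* stores go through the data cache */
     /* synchronize the instruction cache with the data cache;
        typically a no-op on x86, required on e.g. ARM */
     __builtin___clear_cache((char *)buf, (char *)buf + len);
 }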

Was ENIAC a von Neumann-architecture computer?

At least two IP addresses have added ENIAC to the list of von Neumann architecture computers. According to the article for ENIAC, it was originally programmed with switches and cables (not von Neumann). Then the ability to put programs into the function table, which was a set of switches (read-only by the machine), was added; that would make it a Harvard architecture machine, not a von Neumann architecture machine. That was described in A Logical Coding System Applied to the ENIAC (Electronic Numerical Integrator and Computer) from 1948. Then a core memory was added in 1953; I don't know whether words from that memory could control the operation of the machine or not, but, even if they did, that wouldn't make ENIAC a von Neumann architecture machine until 1953. Guy Harris (talk) 05:45, 10 April 2015 (UTC)

I believe that you are correct. The 1945 ENIAC wasn't a stored-program computer of any kind; it was programmed with knobs and plugboards.[8][9] The 1948 ENIAC was a stored-program computer using read-only memory -- a Harvard architecture.[10]
(Unrelated but interesting:[11][12][13])
As for the 1953 ENIAC, does anyone have access to A static magnetic memory system for the ENIAC (ACM '52 Proceedings of the 1952 ACM national meeting (Pittsburgh) Pages 213-222 ACM New York, NY, USA ©1952 doi 10.1145/609784.609813)?[14] --Guy Macon (talk) 10:48, 10 April 2015 (UTC)
Yes, I just bought it. The article says "At present the ENIAC has only 20 words of internal memory, in the form of electronic accumulators. This small memory capacity limits the size of the problem that can be programmed without repeated reference to the external punched-card memory." (which is in the summary that you can get for free), which seems to suggest that it's additional data memory. In section IV "The Memory System for the ENIAC" it says that one of the requirements is "that the control signals used in the memory be derived from the standard control pulses in the ENIAC", which suggests that it didn't itself generate control signals, i.e. it couldn't contain code.
Articles such as this one from 1961 don't speak of the core memory as a particularly significant change, unlike the description of the function table read-only code memory:
Programming new problems meant weeks of checking and set-up time, for the ENIAC was designed as a general-purpose computer with logical changes provided by plug-and-socket connections between accumulators, function tables, and input-output units. However, the ENIAC's primary area of application was ballistics--mainly the differential equations of motion.
In view of this, the ENIAC was converted into an internally stored fixed-program computer when the late Dr. John von Neumann of the Institute for Advanced Study at Princeton suggested that code selection be made by means of switches so that cable connections could remain fixed for most standard trajectory problems. After that, considerable time was saved when problems were changed.
The ENIAC performed arithmetic and transfer operations simultaneously. Concurrent operation caused programming difficulties. A converter code was devised to enable serial operation. Each function table, as a result of these changes, became available for the storage of 600 two-decimal digit instructions.
Those revolutionary modifications, installed early in 1948, converted ENIAC into a serial instruction execution machine with internal parallel transfer of decimal information. The original pluggable connections came to be regarded as permanent wiring by most BRL personnel.
so I suspect the machine remained a Harvard architecture machine after the core memory was added. Guy Harris (talk) 18:29, 10 April 2015 (UTC)

Removing redundant diagram

There are two diagrams, both with the caption "Von Neumann architecture scheme", that are almost identical. The only real difference, other than color, is that the input and output devices are shown connecting directly to the accumulator in the arithmetic logic unit in the second example, rather than to the unit as a whole. I believe that is less accurate. Hence, I am removing the reference to the second diagram. - AlanUS (talk) 20:53, 7 January 2016 (UTC)

Agreed. Andy Dingley (talk) 22:44, 7 January 2016 (UTC)
The one you left in was a newer addition to the page; you removed the older one. I consider the one you left there to be better than the one you removed, so I think you did the right thing here. Guy Harris (talk) 22:55, 7 January 2016 (UTC)

Hello fellow Wikipedians,

I have just modified 3 external links on Von Neumann architecture. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 19:36, 11 December 2017 (UTC)

Works, but it's better to use the patent templates, so I did that for the patent reference. Guy Harris (talk) 20:07, 11 December 2017 (UTC)

Recent edits

Recently, this article has been edited by Camion (talk · contribs · deleted contribs · logs · filter log · block user · block log). Camion is a new Wikipedia editor, and clearly wants to improve the encyclopedia, so we want to encourage him/her -- we need more good editors.

Alas, Camion's first edits have some problems. Nowhere near as bad as the garbage I was putting out when I first started editing Wikipedia in 2006 (I edited as an IP from January to June, so thankfully my worst edits are hard to find in the history), but still not good.

Again, I do not want to discourage Camion from learning more about Wikipedia and becoming a valued editor, but in particular, Camion first changed

Self-modifying code has largely fallen out of favor, since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes.

to

Self-modifying code has largely fallen out of favor (in the sens "is not recommended"), since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes. However, it can remain used to compress the code (and uncompress it as the time of loading, or to obfuscate it in order to protect anti piracy mechanisms against reverse engineering.

With the edit summary "self modifying code can uncompress it or protect it against reverse engineering"[15]

I saw several problems with this which I will detail below, but at the time I picked one problem and reverted with the edit summary "Not self-modifying. The code that does the compression is separate from the code that gets uncompressed."[16]

This is based upon the fact that self-modifying code that decompresses itself is almost unknown, even if you go back to when authors tried (and failed) to use it for Commodore 64 and Apple ][ copy protection.

A couple of days later, Camion reintroduced his edit with a modification:

Self-modifying code has largely fallen out of favor (in the sens "is not recommended"), since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes. However, it can remain used to obfuscate it in order to protect anti piracy mechanisms against reverse engineering."

with the edit summary "correction, then:[17]

There are still a few problems with this edit.

  • The first problem is easy to correct, and if it were the only problem I would have simply fixed it. The original edit had a "(" without a matching ")" and a spelling error ("sens" instead of "sense"). Camion needs to click on "preview" and check his spelling and grammar before clicking on "publish page".
  • "Fallen out of favor" works just fine by itself. There is no need to explain what it means. Furthermore, it didn't fall out of favor because it isn't recommended (which it isn't), but because, as compilers improved and microprocessors got memory protection, self-modifying code simply would not compile and run (see the sketch after this list).
  • "to obfuscate it in order to protect anti piracy mechanisms against reverse engineering." is correct as far as it goes (if a bit awkwardly worded), but fails to mention that this was pretty much only done in the days when people were writing hand-assembled code for the 8088, 6502, and Z80. By the time the first Pentium came out, self-modifying code had been pretty much abandoned by everybody. It never really worked anyway -- to this day the most clever copy protection is usually broken within hours. On the PC side of things, different techniques are now used, with mixed results. See Digital Rights Management. On the microcontroller side of things, the smart developer who wants to protect ARM Cortex code uses a chip with TrustZone security. Other chips have fuses that prevent a third party from accessing code memory.

Now I could insert a paragraph that explains some or all of this, but I don't see any real benefit to the reader who wants to know about the Von Neumann architecture.

Because of all of this, I am once again reverting the edit. I hate to do this to a new editor, but the edit does not meet our quality standards. --Guy Macon (talk) 13:31, 26 October 2018 (UTC)

I'm not a new contributor at all, but I'm sorry about the fact that English is not my mother tongue. By the way, about the code obfuscation, I wasn't thinking that much about anti-copy protection but more about protocol protection like in Skype which is really tricky to reverse engineer and was never cracked in hours. -- Camion (talk) 06:35, 27 October 2018 (UTC)
No problem regarding language. Any problems from English not being your mother tongue are easily fixed.
Regarding skype reverse engineering, we know that some hackers reverse engineered Skype, but we don't know how long it took (they didn't publish but instead used it to find out more about Skype users.) Some have speculated that it was a government that did it first.
A complete reverse engineering of Skype is described at
http://www.oklabs.net/skype-reverse-engineering-the-long-journey/
where the author says it took him 6 months of study to fully understand the entire mechanism. That was in 2012.
Six years earlier in 2006 http://www.oklabs.net/wp-content/uploads/2012/06/bh-eu-06-Biondi.pdf described all of the mechanisms behind Skype protection in detail. It is a fascinating read. The section labeled "Anti-dumping tricks" shows that they used the usual encrypted transition code to decrypt the ciphered code, then run the deciphered code -- no self-modifying code was involved.
--Guy Macon (talk) 10:49, 27 October 2018 (UTC)