Von Neumann Architecture


Totally off topic, but can someone explain to me in the simplest terms, because I don't understand big technical stuff, what the Von Neumann architecture is, and the bottleneck as well? I've got a big IPT test tomorrow and I really don't know what it is. Thanks in advance, help me!! lol
Ok, let's try to clear this up:
"The bandwidth, or the data transfer rate, between the CPU and memory is very small in comparison with the amount of memory." -> The processor is quite slow when compared to the amount of RAM at its disposition.

"Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it." -> Much data that goes through the processor from and to RAM is not data at all, but just pointers to where to fetch or store data. The "words" Backus talks about here are the instructions contained in the program.

What happens in a Von Neumann computer is the following cycle:

1. FETCH (get program instructions from RAM)
2. DECODE (what do the instructions mean?)
3. FETCH OPERANDS (get the data to fill in the variables in the instructions)
4. EXECUTE (perform the actions of the instruction)
5. UPDATE INSTRUCTION POINTER (keep track of which instruction we are executing)
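The five steps above can be sketched as a toy simulator. The machine, its instruction set (LOAD/ADD/STORE/HALT) and the memory layout here are all invented for illustration; the point is only that program and data sit in the same memory, and every step goes over the same processor-memory connection.

```python
# A toy von Neumann machine: program and data share one memory,
# and every step is a trip over the same memory connection.
# The opcodes (LOAD/ADD/STORE/HALT) are made up for this sketch.

def run(memory):
    acc = 0    # accumulator register
    ip = 0     # instruction pointer
    while True:
        op, arg = memory[ip]          # 1. FETCH the instruction from memory
        # 2. DECODE: decide what the opcode means
        if op == "HALT":
            return acc, memory
        # 3. FETCH OPERANDS: another trip to the very same memory
        if op == "LOAD":
            acc = memory[arg]         # 4. EXECUTE
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        ip += 1                       # 5. UPDATE INSTRUCTION POINTER

# Cells 0-3 hold the program, cells 5-7 hold data: same memory for both.
mem = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT", 0), None, 2, 3, 0]
acc, mem = run(mem)
print(acc, mem[7])   # 5 5
```

Notice that the instruction words carry addresses (5, 6, 7), not the data itself: exactly the "traffic about where to find it" that Backus complains about.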

This is what happens over and over, clock cycle after clock cycle, and it happens mostly serially. Programs and data alike are stored in RAM, and all the traffic to and from RAM goes through the processor. Thus the processor-RAM connection is the bottleneck that constrains the overall speed of a Von Neumann computer.

Remember that there have been quite a few developments since Von Neumann proposed his architecture and since Backus criticised it. Things like caches (temporary intermediate storage), for instance, are new.
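To get a feel for what a cache buys you, here is a toy direct-mapped cache that just counts hits and misses. The sizes (8 lines of 4 words each) are made up and real caches are far bigger, but the mapping works the same way.

```python
# A toy direct-mapped cache: 8 lines, each holding a block of 4 words.
# Sizes are arbitrary; the mechanism, not the numbers, is the point.

LINES, BLOCK = 8, 4

def simulate(addresses):
    cache = [None] * LINES           # which memory block each line holds
    hits = misses = 0
    for addr in addresses:
        block = addr // BLOCK        # which memory block this word is in
        line = block % LINES         # which cache line that block maps to
        if cache[line] == block:
            hits += 1                # already cached: no trip to RAM
        else:
            misses += 1              # through the bottleneck to RAM
            cache[line] = block
    return hits, misses

# Sequential access reuses each fetched block 4 times...
print(simulate(range(32)))          # (24, 8)
# ...while striding by the block size misses on every access.
print(simulate(range(0, 128, 4)))   # (0, 32)
```

So a cache only hides the bottleneck when the access pattern cooperates; it does not remove it.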

However, the basic problem remains: computation time is dominated by the time taken to shift data between RAM and the CPU. Obviously this problem grows with the size of RAM. "But more RAM makes my machine _faster_!" you might object. True: RAM is faster than your hard disk, but it is painfully slow nevertheless.

Computers are good at all sorts of tasks, but bad at tasks that humans (or other biological information processing systems) are good at. Computers are good at maths and chess; humans are good at catching balls. Guess what kind of computer could calculate the trajectory of a ball flying in a parabolic arc at slightly non-linear speed _in real time_, while having to match that movement with a limb apt to not just hit the ball, but catch it? Think your desktop can do it? No way, think XServe cluster. Computers can't even (yet) reliably decode an image to find the 3D contours of objects in real time to, e.g., steer a car. Humans can, because our brains do not have a Von Neumann bottleneck. Forget Mac vs PC benchmarks: computers are painfully slow when compared to biological information processing systems. Real-life reaction time: that is where the bottleneck can be felt.
Cat said:
Ok, let's try to clear this up:
"The bandwidth, or the data transfer rate, between the CPU and memory is very small in comparison with the amount of memory." -> The processor is quite slow when compared to the amount of RAM at its disposition.

Good explanation, Cat, but this is the one thing I'm not sure I agree with. Doesn't that basically mean that the processor is _faster_ compared to the available memory?

This is a major problem with the current crop of processors. You've got a multi-GHz processor (very fast) coupled with memory that is barely half that speed (if you're lucky). The implications this has for performance are enormous, since in the typical fetch-decode-execute cycle the processor has to wait for data to come from main memory. This means that for a few clock ticks, the processor is sitting there doing nothing.
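To put rough numbers on that waiting, here is a toy cycle count under invented latencies: 1 cycle of actual work per instruction, 10 cycles per trip to main memory, and 1 cycle if a cache answers instead. The figures are illustrative, not measurements of any real chip.

```python
# Toy cycle model for the fetch-decode-execute loop.
# Latencies are invented for illustration only.

def total_cycles(instructions, mem_accesses_per_instr, mem_latency):
    work = instructions * 1                                   # executing
    waiting = instructions * mem_accesses_per_instr * mem_latency
    return work + waiting

# 1000 instructions, 2 memory accesses each (fetch + operand):
print(total_cycles(1000, 2, 10))  # 21000 cycles, mostly waiting on RAM
print(total_cycles(1000, 2, 1))   # 3000 cycles if a cache answers instead
```

In the slow-memory case, over 95% of the time is spent waiting, which is exactly the "sitting there doing nothing" described above.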
That's a further problem. Purely theoretically, without looking at the relative clock speeds of RAM and processor, the bottleneck is the connection between those two. "The processor is quite slow when compared to the amount of RAM at its disposition" is not meant as a statement about RAM clock speed vs processor clock speed, but about the bandwidth between processor and RAM and the size of RAM. Obviously they cannot easily be compared without taking the relative clock speeds and busses into account, but those are implementational details.

Considering those, we can see that processors have gone from 32MHz to 3200MHz in ten years. RAM has gone from 4MB to 4000MB in ten years. Both by roughly a factor of a hundred. Ever faster processors have addressed ever more memory, so the problem remains relatively constant. What do we see? To really speed up the computer, we now have caches, everywhere. Design is getting a little more intelligent, but the solutions are always more or less the same. Instead of reading from the hard disk, we read from RAM, and now instead of reading from RAM we read from cache. Just wait until you can get a computer with only persistent RAM and 64MB caches ... the bottleneck is _still_ going to be there.

Dual processors/cores or clustering are another way of speeding things up; both CPUs and GPUs are doing this. A massive application of this paradigm might give more results than inserting smaller but faster (and ever more expensive) intermediate storage everywhere.
Thank you so much Cat and Viro, it really helped me out for the exam even though I probably failed it. Thanks again!