WikiProject Computer science (Rated C-class, High-importance)
Anyone want to explain this a bit more in the article? What is the practical meaning of this? And is this a "real" law, as in a law of physics, or an "approximate" law, like Moore's Law, or a "folklore law" like Murphy's? -- Tarquin
I just put up the general case of Amdahl's law with an explanation of what it means. Another example could still be put in to show the law of diminishing returns, which is really the "big picture" you're looking for, but I want to see the current version stabilize first. MShonle 02:43, 24 Jan 2004 (UTC)
I was a little confused about this law; usually Wikipedia explains things so nicely, but this article was a little hard to understand :). This link does some explaining as well, but is very long-winded: http://www.phy.duke.edu/brahma/beowulf_book/node21.html --ShaunMacPherson 19:24, 8 Apr 2004 (UTC)
The link above does not work anymore; here is another one, including PDF and PS versions:
http://www.phy.duke.edu/~rgb/Beowulf/beowulf_book/
Do I misunderstand, or does the meaning of F flip-flop in the middle of this article? Yuck. GWO 15:35, 5 May 2004 (UTC)
I think the problem with explaining this "law" is that it is too trivial. If you express it in durations rather than speedups and proportions then it boils down to something so obvious that nobody would even call it a law. Let me try:
Consider a process with multiple phases, such as getting out of the house in the morning. Let's call the total duration of the process T, and the duration of one phase (such as eating breakfast) t. The duration of all other phases is then T-t. If we speed up the chosen phase by a factor of S, the time it takes to complete it will be t/S, and the total duration will become:
(T-t) + t/S
How hard is that?
You get the usual form of the law by taking the ratio of T to the above expression and writing P for t/T.
-- bdolicki 14:35, 9 May 2005 (UTC)
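To spell out that last step (nothing new here, just the same symbols rearranged, with P = t/T), the speedup of the whole process is
<math>\text{speedup} = \frac{T}{(T-t) + t/S} = \frac{1}{(1-P) + P/S},</math>
which is the usual statement of Amdahl's law.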
I'd like to make the observation that the addition of the Generalized Amdahl's law entry on this page isn't actually general, but applies only in the special case where different sections of the program are sped up. It would seem that this applies more toward software optimization (i.e. optimizing one function to be 2x faster, and another function to be 3x faster) than it does to generalized speed-up.
If a program is sped-up in two different ways (i.e. part of the program is vectorized, and part of the program is threaded to run on multiple processors) then the generalized rule that is mentioned falls apart if there is any overlap between the two methods (i.e. some vector code is also parallelized across multiple processors).
A generalized rule (i.e. one that applies both to parallelization and software optimization) would probably be unnecessarily complex, but for two dimensions of parallelism, the speedup is:
S = 1 / ((1 - P1 - P2 + P12) + (P1 - P12)/S1 + (P2 - P12)/S2 + P12/(S1*S2))
Where:
P1 is the proportion of the program that can be improved by method #1 (i.e. multi-processing),
S1 is the speed-up applied for improvement #1,
P2 is the proportion of the program that can be improved by method #2 (i.e. vectorization),
S2 is the speed-up applied for improvement #2,
P12 is the proportion of the program where method #1 and #2 overlap,
S is the total resulting speed-up.
Note that the range over which P12 is valid is necessarily constrained to the interval [max(0, P1 + P2 - 1), min(P1, P2)].
It is also interesting to note that 4 processors with a 4-wide vector will out-perform 16 processors when P1 = P2 and there is any non-overlapping parallelism at all (P12 < P1). If there is only overlapping parallelism (P12 = P1 = P2), then the speedup is the same as with 16 processors.
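A quick numerical check of the two-method expression above, as a sketch in Python (the function names and the example numbers P1 = P2 = 0.9, P12 = 0.85 are illustrative choices, not taken from the comment):

def speedup_two_methods(p1, s1, p2, s2, p12):
    # Fraction untouched by either method, improved by #1 only, by #2 only, and by both.
    serial = 1.0 - p1 - p2 + p12
    only1 = (p1 - p12) / s1
    only2 = (p2 - p12) / s2
    both = p12 / (s1 * s2)
    return 1.0 / (serial + only1 + only2 + both)

def amdahl(p, s):
    # Classic single-method Amdahl speedup.
    return 1.0 / ((1.0 - p) + p / s)

p1 = p2 = 0.9
p12 = 0.85
print(speedup_two_methods(p1, 4, p2, 4, p12))  # 4 cores x 4-wide vectors: about 7.8
print(amdahl(p1, 16))                          # 16 cores, no vectorization: 6.4
# Setting p12 = p1 = p2 (only overlapping parallelism) makes the two results equal.

This matches the observation above: with some non-overlapping parallelism the 4x4 configuration wins, and with full overlap the two are identical.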
I made an SVG version of the graph at the top, but it probably has some flaws. It's at Image:AmdahlsLaw.svg—anyone want to improve and put it in? Daniels220 (talk) 19:10, 13 April 2008 (UTC)
The whole of Amdahl's argument is a dead triviality. I had never heard of it, but all of you were totally successful in creating a fairly complicated article about it.
Please, first understand the argument; second, write an article about it.
The formulae and the fog of complications in this article are useful for demonstrating to the mathematically less educated readers that we are much more clever. But in fact, we are not.
prohlep (talk) 18:04, 10 August 2008 (UTC)
Amdahl's law is well known in computer architecture, and I think this Wikipedia entry is useful. However, I don't think that the Limitations section is correct. I searched for information about superlinear speedup, and from what I could see it is limited to cases where the sequential solution to the problem was flawed and a more efficient parallel algorithm was developed. However, it does not imply that it is possible to achieve greater than linear speedup. The notes about differences in cache structure also do not directly contradict the argument that Amdahl made. Since there are no citations in the Limitations section, I am removing it, because I do not believe it is correct. If someone thinks I am mistaken, please include citations so that readers will have some basis for evaluating the validity of the claim. Wikiuser1239 (talk) 21:21, 16 March 2009 (UTC)
I think Amdahl's law can be misleading. It can only be applied when you look at a single isolated program that always does the same thing. You can't say that it is unnecessary to develop CPUs with more than 32 cores because beyond that you don't get enough improvement. Currently there are 52 processes running on my system, which means I can only let them all run at exactly the same time if I have at least 52 cores. You also have to face the fact that in the future we will want more and more eye candy, speech recognition, ... All of that results in even more processes, which can almost all be parallelized. Also, things like servers will always get better performance if they have more cores, given that bandwidth is not an issue, because the server usually creates a new thread for every request. Kurr0Zaki (talk) 09:17, 26 October 2009 (UTC)
Hi, may I suggest a simpler version of the introduction, which explains the significance of the law and also sets it in a more general context:
Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and defines a theoretical maximum for the ability of systems to scale up. "Scaling up" refers to the ability to do more work by adding more resources. The critical parameter in Amdahl's law is the percentage of work which cannot be parallelized - in other words, work which must be performed by one central component and cannot be divided between resources. The basic finding of the law is that as long as there is a non-zero percentage of work which cannot be parallelized, the system will not be able to scale beyond a certain theoretical maximum. The law also shows how to compute the theoretical maximum scale depending on the percentage, and some other parameters.
Amdahl's law is most often applied to parallel computing. In this context, the speedup of a program using multiple processors is limited by the time needed for the "sequential fraction" of the program to run - this is the percentage of work which cannot be parallelized. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×, as the diagram illustrates.
Anne.naimoli (talk) 12:27, 13 September 2011 (UTC)
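For what it's worth, the 20× figure in the proposed text is just Amdahl's formula applied to the numbers already given there (95% parallelizable, hence a 5% sequential fraction):
<math>\lim_{N\to\infty} \frac{1}{(1-0.95) + 0.95/N} = \frac{1}{0.05} = 20.</math>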
The application of this would tie well to the concepts of Critical Path Method for project management. The tracing of the shortest path identifies the shortest time of execution. Please note a link to http://en.wikipedia.org/wiki/Critical_path_method — Preceding unsigned comment added by JimDelRossi (talk • contribs) 22:17, 23 January 2012 (UTC)
The explanations in this article are pretty horrible. In fact other articles are also not so good, because of errors, but this one isn't too bad: http://software.intel.com/en-us/articles/amdahls-law-gustafsons-trend-and-the-performance-limits-of-parallel-applications/ (though there is an incorrect sign in one part). If we use P for the proportion of time spent on the parallelisable part of the task, then we can write P = 1 - S, so that the expression is really quite easily seen to be Speedup(N) = 1/(S + P/N) - offset terms. Then the details do become really rather obvious. The first section of this article is far too confusing and, if needed at all, should come after the basic expression shown here. If the offset terms (due to communications) are ignored, then if N is allowed to "proceed to infinity" it is easy to see that the maximum Speedup is 1/S. As commented, the performance/price ratio deteriorates if attempts are made to use more than 1/S processors/threads/cores. I don't have time to sort this myself, but the article really could be tidied up a lot. David Martland (talk) 09:37, 9 March 2012 (UTC)
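A minimal sketch of that expression in Python, using the same 5% sequential fraction as the example earlier on this page (the comm_cost term stands in for the "offset terms" mentioned above; its exact form is an assumption made for illustration, not something the comment specifies):

def speedup(n, serial, comm_cost=0.0):
    # Amdahl's law, Speedup(N) = 1 / (S + P/N) with P = 1 - S,
    # plus an optional communication overhead that grows with N.
    parallel = 1.0 - serial
    return 1.0 / (serial + parallel / n + comm_cost * n)

for n in (1, 2, 4, 16, 256, 4096):
    print(n, round(speedup(n, 0.05), 2))
# Approaches 1/S = 20 as N grows. With comm_cost > 0 the curve instead
# peaks and then falls, which is one reason using many more than 1/S
# processors is rarely worth the price.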
This article (itself) is longer, has more math, and more graphs than Amdahl's own original article. 198.123.56.223 (talk) 18:52, 12 April 2012 (UTC)