Amdahl's argument is important in two ways. First, it provides some perspective on the unrealistically high expectations that many people have for parallelism. Second, it shows that the truly big gains in parallel programming involve changing the application's algorithm to reduce serial bottlenecks.
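To make the argument concrete, Amdahl's law in its standard form bounds the achievable speedup (the symbols here are ours, not the paper's: p is the fraction of serial execution time that can be parallelized, and N is the processor count):

\[
S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.
\]

Even with unlimited processors, the speedup can never exceed 1/(1 - p), which is why shrinking the serial fraction pays off far more than adding hardware.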
Those who dispute the relevance of Amdahl's law point out that N processors can in fact execute a program in less than 1/N of the time the serial version takes. This phenomenon, referred to as superlinear speedup, is attributable to differences between the serial and parallel versions of the code: the parallel version may, for example, make better use of cache memory or eliminate some data initializations. A more pointed criticism is that Amdahl's law ignores the effect that an increase in problem size can have on a computation's characteristics. As noted in the text and the case studies, larger problem sizes might well change the number and extent of serial bottlenecks.
In spite of its weaknesses, Amdahl's law remains a simple, fast way to think about the potential for achieving performance gains through parallelism. Even more importantly, it can be applied (albeit crudely) in situations where only serial timing information is available.
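As a sketch of that crude application, the short C program below (hypothetical illustration code, not from the text; the function name amdahl_speedup and the sample value p = 0.95 are ours) takes a parallelizable fraction estimated from a serial profile and tabulates the predicted best-case speedups:

#include <stdio.h>

/* Predicted speedup under Amdahl's law for n processors, given the
 * fraction p of serial execution time that can be parallelized.
 * Hypothetical helper for illustration only. */
static double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* Example: a serial profile suggests 95% of the run time is
     * spent in parallelizable code (p = 0.95). */
    double p = 0.95;
    int n;

    for (n = 1; n <= 1024; n *= 2)
        printf("%4d processors: predicted speedup %.2f\n",
               n, amdahl_speedup(p, n));

    /* Asymptotic bound: 1 / (1 - p), no matter how many
     * processors are added. */
    printf("limit as n -> infinity: %.2f\n", 1.0 / (1.0 - p));
    return 0;
}

For p = 0.95 the prediction climbs quickly at first but flattens toward the 20x limit, which is exactly the behavior the law is meant to expose.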
G.M. Amdahl, "Validity of the Single-Processor Approach to Achieving Large-Scale Computing Capabilities," Proc. AFIPS Spring Joint Computer Conf., 1967, pp. 483-485.