The efficiency is actually a property of each reaction alone. But it's hard to calculate from a single one (there are tools that estimate it from each amplification curve, e.g. LinRegPCR, but they are not commonly used).
So in theory the efficiency should be constant within a dilution series, and from several known concentration points you can calculate it.
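For reference, this is the standard calculation: fit Ct against log10 of the input amount and convert the slope. A minimal Python sketch with made-up Ct values for a 1:5 series:

```python
import numpy as np

# Hypothetical 1:5 dilution series: relative input amounts and measured Ct values.
rel_conc = np.array([1, 1/5, 1/25, 1/125, 1/625])
ct = np.array([18.2, 20.6, 23.0, 25.3, 27.7])   # made-up example data

# Fit Ct against log10(input); the slope of the standard curve gives the efficiency.
slope, intercept = np.polyfit(np.log10(rel_conc), ct, 1)
efficiency = 10 ** (-1 / slope) - 1              # 1.0 == 100 %

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
# slope ~ -3.40 here, i.e. efficiency ~ 97 %
```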
So, imagine you use a 1:5 dilution series and get a bad efficiency. It's known that a non-linear curve and/or a too-high apparent efficiency (above 100 %) means that some concentration points are amplifying with a different efficiency than others. Usually the least diluted one is inhibited or otherwise compromised.
So, within the series you used (i.e. undiluted cDNA and its dilutions), it probably means the undiluted point is "screwed". You switch to 1:8 dilutions and now the curve looks good. Does that mean the undiluted sample is no longer inhibited? Of course it doesn't.
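To see why an inhibited top point produces a "too high" apparent efficiency, here's a small simulation (assuming a simple log-linear Ct model and an arbitrary 1.5-cycle inhibition delay on the undiluted well):

```python
import numpy as np

def apparent_efficiency(rel_conc, ct):
    # Efficiency from the slope of the standard curve, as above.
    slope, _ = np.polyfit(np.log10(rel_conc), ct, 1)
    return 10 ** (-1 / slope) - 1

rel_conc = np.array([1, 1/5, 1/25, 1/125])
ct_clean = 18.0 - 3.32 * np.log10(rel_conc)   # ideal series: E = 100 %
ct_inhibited = ct_clean.copy()
ct_inhibited[0] += 1.5                         # inhibitor delays the undiluted well

print(f"clean:     {apparent_efficiency(rel_conc, ct_clean):.0%}")      # ~100 %
print(f"inhibited: {apparent_efficiency(rel_conc, ct_inhibited):.0%}")  # ~136 %
```

One delayed point flattens the slope, and the whole series reports an impossible efficiency even though the other three reactions are fine.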
By making different dilutions of the original sample, you are not addressing the problem (unless the problem was a bad dilution in the first place, which is also possible) but only the output.
Of course, from a non-linear dilution curve you (usually) can't calculate efficiency at all. So to get at least an idea, you need to make it linear somehow. BUT... that still means some of the points in the original series are amplifying with a different efficiency than others, which leads to different extrapolations of the initial template concentration.
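And that extrapolation is very sensitive to the efficiency: the fold difference between two samples is (1 + E)^ΔCt, so the same ΔCt gives very different answers depending on which efficiency you plug in. A quick illustration with made-up numbers:

```python
# How the assumed efficiency changes the back-calculated fold difference
# between two samples separated by delta_ct cycles (hypothetical values).
delta_ct = 3.0
for eff in (1.00, 0.90, 1.36):                 # 136 % is the distorted value above
    fold = (1 + eff) ** delta_ct
    print(f"E = {eff:.0%}: {fold:.1f}-fold difference")
# E = 100 %: 8.0-fold, E = 90 %: 6.9-fold, E = 136 %: 13.1-fold
```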
That's why the rule should be: if your dilution curve is linear and spans a defined range, then all samples of the same kind within that range can be assumed to have this defined efficiency. Outside that range... you don't know.
So making several dilution series until you get one that "looks good enough" doesn't solve anything. It just hides the "bad" points out of sight.
It's a bit like "fixing" the too-high number of patients dying in your clinic by moving those about to die out into the hallway. They will be dead anyway, but your numbers will look better.