I am not sure I understand exactly what you are saying, but I believe in some circumstances you may be working with numerical values which you know to have errors much larger than a few ulps.
For example, suppose you're working with data that you've obtained from some physical measurement, and your knowledge of the measurement process means you know it will have around 1% relative error.
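A minimal sketch of what that looks like in a test, using Python's standard `math.isclose` (the specific numbers are made up for illustration; the 1% figure is just the assumed measurement tolerance):

```python
import math

# Suppose a sensor reading is known to carry ~1% relative error.
# Comparing it against a reference value at ulp-level precision would
# be meaningless; instead, assert agreement at the level the
# measurement actually supports.
measured = 9.73   # hypothetical sensor reading
expected = 9.81   # reference value it is being checked against

# rel_tol reflects the known 1% measurement error, not float precision.
assert math.isclose(measured, expected, rel_tol=0.01)

# The same comparison at a float-precision tolerance would fail:
assert not math.isclose(measured, expected, rel_tol=1e-9)
```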
Or another example: suppose you are solving a linear system of equations Ax=b with highly precise right-hand-side data b (relative error below the machine epsilon \epsilon), but the condition number of the matrix is large. Then perhaps it is only meaningful to expect the solution to be accurate to within k(A) \epsilon, where k(A) is the condition number of A.
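A rough demonstration of that effect (pure-Python sketch with a hypothetical nearly singular 2x2 matrix; the infinity-norm condition number is computed from the explicit 2x2 inverse, and the bound shown is the usual first-order perturbation bound, not a tight one):

```python
# An ill-conditioned 2x2 system solved by Cramer's rule: a tiny
# relative perturbation of b produces a much larger relative change
# in x, but one still bounded (to first order) by cond(A) * relerr(b).

def solve2(a11, a12, a21, a22, b1, b2):
    # Cramer's rule for a 2x2 system.
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)

a11, a12, a21, a22 = 1.0, 1.0, 1.0, 1.0001   # nearly singular matrix

# Infinity-norm condition number via the explicit 2x2 inverse
# A^-1 = (1/det) * [[a22, -a12], [-a21, a11]].
det = a11 * a22 - a12 * a21
norm_a = max(abs(a11) + abs(a12), abs(a21) + abs(a22))
norm_ainv = max(abs(a22) + abs(a12), abs(a21) + abs(a11)) / abs(det)
cond = norm_a * norm_ainv   # roughly 4e4 for this matrix

x1, x2 = solve2(a11, a12, a21, a22, 2.0, 2.0001)   # solution is (1, 1)
# Perturb the first entry of b by a relative 1e-6 and re-solve.
y1, y2 = solve2(a11, a12, a21, a22, 2.0 * (1 + 1e-6), 2.0001)

relerr_b = (2.0 * 1e-6) / 2.0001
relerr_x = max(abs(y1 - x1), abs(y2 - x2)) / max(abs(x1), abs(x2))

# The error in x (~2e-2) dwarfs the 1e-6 perturbation of b, yet it
# stays within the first-order bound cond(A) * relerr(b) (~4e-2).
assert relerr_x > 1000 * relerr_b
assert relerr_x <= cond * relerr_b
```

So a sensible test for such a solver asserts accuracy relative to k(A) \epsilon, not relative to \epsilon alone.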
(I haven't thought about this stuff for a while, so please correct me if any of this is wrong.)
edit: here's another reason. Suppose there's a heavy tradeoff between runtime and accuracy for some algorithm, so for some application it makes sense to use a cheap but somewhat inaccurate approximation. Then you're probably going to have, and expect, far larger errors. But you might still be able to get some kind of (worst-case?) bound on the result, and want to assert something about that in test cases or sanity checks.
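For instance (a toy sketch: `cheap_exp` is a made-up stand-in for such a fast-but-inexact routine, and the 1% figure is an assumed worst-case bound for this particular approximation on [0, 1]):

```python
import math

def cheap_exp(x, n=64):
    # Toy "fast but inaccurate" approximation: (1 + x/n)**n tends to
    # e**x as n grows; with n fixed at 64 it is cheap to evaluate but
    # carries a relative error of roughly x**2 / (2 * n).
    return (1.0 + x / n) ** n

# Sanity check: instead of demanding ulp-level agreement with
# math.exp, assert the worst-case relative error we actually expect
# (~x**2 / 128, i.e. under 1% on [0, 1]), sampled on a grid.
worst = max(abs(cheap_exp(x / 100) - math.exp(x / 100)) / math.exp(x / 100)
            for x in range(101))
assert worst < 0.01
```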
https://en.wikipedia.org/wiki/Unit_in_the_last_place
http://docs.oracle.com/javase/7/docs/api/java/lang/Math.html