Some time ago I wrote a post about how high code coverage may not necessarily mean much.
This time I want to expand on that and talk about missing requirements. It's clear that high code coverage doesn't mean your code is well tested, but a common argument for using it is that it will reveal gaps that still need tests. The implicit assumption there is that all the requirements have been captured in the code. What if that's not true?
As Robert Glass says, "it is difficult to test for something that simply isn't there. Some testing approaches, such as coverage analysis, help us test all segments of a program to see if they function properly. But if a segment is not there, its absence cannot be detected by coverage approaches."
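Glass's point can be made concrete with a small sketch. The scenario, function names, and numbers below are all hypothetical: suppose a requirement says that orders over $100 get a 10% discount, and that premium members get an additional discount, but the developer never implemented the premium-member clause.

```python
def order_total(subtotal: float) -> float:
    """Apply the bulk discount.

    Hypothetical requirement: orders over $100 get 10% off, and premium
    members get an extra discount. The premium-member branch was never
    written, so there is no line for a coverage tool to flag.
    """
    if subtotal > 100:
        return subtotal * 0.90
    return subtotal

# These two tests execute every line and every branch that exists,
# so the coverage report shows 100%.
assert abs(order_total(200) - 180.0) < 1e-9   # bulk discount applied
assert abs(order_total(50) - 50.0) < 1e-9     # no discount below $100
```

Coverage here is "complete", yet a whole requirement is absent. The tool can only measure the code that was written; it has no way to report on a branch that doesn't exist.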
Glass also makes the good point that even other approaches, such as code reviews, "may or may not so readily spot pieces of code that are totally missing."
That is why, no matter the methodology, if requirements are missed, logic will be missing from the resulting software. And, more likely than not, these omissions will go unnoticed until the software reaches production. Once they hit production, they carry the highest cost to fix (as compared to being caught early in development).