I read a good article today on lessons about code reviews drawn from open source software:
Contemporary Peer Review in Action: Lessons from Open Source Development
Unfortunately, you need to be an IEEE member/subscriber to access it, but if you do have access, read these lessons.
The core ideas are pretty much these: (1) reviews must be small; (2) reviews must be done by experts; otherwise they don't offer much value.
From my experience, most developers wanted feedback and took it well to improve their code. On the negative side, however, I've seen techniques used to work around a process that requires code review - and that's where its purpose was defeated.
The main technique I've seen is this: avoid the developer who gives the most feedback and send the review to an "auto-approver" developer instead. This simply bypasses the process, as there is essentially zero interest in getting feedback or making the code better.
Another technique is to send the review to new hires, with the excuse of ramping them up, but with the intent of not having the design or code questioned at all.
Of course, if a reviewer unexpectedly "annoys" the developer with valid concerns, the developer can just reply "won't fix" and have the concern captured in a bug that will never get prioritized.
This issue becomes even more critical if technical leaders employ these techniques to "get things done".
How do we get developers to stop using these techniques and do the right thing: send reviews to the experts and wait for their feedback? I wonder whether these developers are actually invested or just prioritize other things over quality.