Writing lots of unit tests, or thinking how your code can be tested before writing it is not TDD.
We know. No one said otherwise.
But unit testing is the basis of TDD, hence the bit where I said [i]"Unit Testing is always a good thing, especially when properly automated - but full TDD isn't always useful."[/i]
Have you lot ever worked on a project that was fully and adequately specced before development started?
I wasn't referring to any of your posts GrahamS
Have you lot ever worked on a project that was fully and adequately specced before development started?
Nope, but I've worked on plenty of safety-critical systems (medical, military, financial) where they don't like it much when stuff goes wrong.
Right, so the way you develop safety-critical stuff isn't the way you develop non-critical web-based apps, say 😉
the project in question issued a diktat saying we had to have 90% unit test coverage
Not unusual - I believe there are quite a few open-source projects that now demand 80-90% coverage.
A framework that requires lots of getters and setters sounds a bit broken. But yes, that could be a reason
Fairly common in Java web frameworks, JSF and IceFaces for example require it.
Not unusual - I believe there are quite a few open-source projects that now demand 80-90% coverage.
Which as a diktat is stupid - I'd like 100% coverage on the important stuff, and closer to 0% on the boilerplate.
The lead developer on the project in question issued a diktat saying we had to have 90% unit test coverage. The only way to achieve that was to unit test methods that just set a member. A valuable use of client time and money? I think not.
Personally I'd probably have used something like [url=http://www.unitils.org/summary.html]unitils[/url] to automatically test the getters/setters (or just written my own using BeanUtils).
Huge code coverage increase, minimal impact on time, budget or morale 🙂
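To make that concrete, here is a minimal sketch of what an automated getter/setter round-trip test can look like using only the JDK's java.beans introspection. This is my own illustration, not unitils' actual API - the DepositAccount bean and the int-only restriction are assumptions to keep the sketch short:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BeanRoundTripTest {

    // Hypothetical bean under test - stands in for any data POJO.
    public static class DepositAccount {
        private int balance;
        private int accountId;
        public int getBalance() { return balance; }
        public void setBalance(int balance) { this.balance = balance; }
        public int getAccountId() { return accountId; }
        public void setAccountId(int accountId) { this.accountId = accountId; }
    }

    /** Sets each int property via its setter, reads it back via its getter. */
    public static boolean roundTripsOk(Object bean) {
        try {
            for (PropertyDescriptor pd : Introspector
                    .getBeanInfo(bean.getClass(), Object.class)
                    .getPropertyDescriptors()) {
                if (pd.getReadMethod() == null || pd.getWriteMethod() == null) continue;
                if (pd.getPropertyType() != int.class) continue; // sketch only handles int
                pd.getWriteMethod().invoke(bean, 42);
                if (!pd.getReadMethod().invoke(bean).equals(42)) return false;
            }
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTripsOk(new DepositAccount()) ? "OK" : "FAIL");
    }
}
```

One such reflective helper covers every data POJO in the project, which is where the "minimal impact on time" comes from.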
Right, so the way you develop safety-critical stuff isn't the way you develop non-critical web-based apps, say
Yeah I'm mainly talking about proper software [u]engineering[/u] of important systems, not just hacking about on the web 😉
Which as diktat is stupid - I'd like 100% coverage on the important stuff, and closer to 0% on the boilerplate.
The guiding principle of unit testing [i]should[/i] be "test everything which can possibly break" and as demonstrated the "boilerplate" can break.
But yeah, I agree that testing the bits that are more likely to break is a good approach on systems that are non-critical and don't have the budget for proper full testing.
The guiding principle of unit testing should be "test everything which can possibly break" and as demonstrated the "boilerplate" can break. But yeah, I agree that testing the bits that are more likely to break is a good approach on systems that are non-critical and don't have the budget for proper full testing.
You're assuming end-to-end responsibility for the entire system, though - if the client demands I use, say, Hibernate, I'm not about to test it. If it goes wrong it's his problem, not mine!
And technically speaking it's not "hacking about on the web", it's "well-paid minimal-risk programming". 🙂
if the client demands I use, say, Hibernate, I'm not about to test it. If it goes wrong it's his problem, not mine!
All the more reason to test YOUR boilerplate code.
Yeah I'm mainly talking about proper software engineering of important systems, not just hacking about on the web
I meant enterprise systems with a web front end, not silly websites.
If time is at a premium I'd rely on system testing. My principles involve writing code that is so clear and simple that every block of code is obvious as to whether or not it works just by looking at it. Integration is where you are most likely to get bugs, which is where system testing comes in.
If I'd commissioned a system and they were spending my money testing to see if the line a = b; does indeed result in a being equal to b, I'd be bloody annoyed.
That's what we ended up doing with the 90% coverage thing.
But we do seem to agree with what I said originally, which is that you would set your working practices according to the customer's situation.
You're assuming end-to-end responsibility for the entire system, though - if the client demands I use, say, Hibernate, I'm not about to test it. If it goes wrong it's his problem, not mine!
Hibernate would be covered by normal Verification and Validation (V&V) of Off-The-Shelf (OTS) software, as per the Capability Maturity Model or similar.
So no you wouldn't write unit tests to test Hibernate. That would be daft. But you would (ideally) still test your classes that interface with Hibernate (though if they were really just data POJOs with no logic then I'd probably test them automatically as mentioned).
I meant enterprise systems with a web front end, not silly websites.
I know, I'm just tweaking your leash for suggesting I have no valuable experience to add 😉
But we do seem to agree with what I said originally, which is that you would set your working practices according to the customer's situation.
We are. Compromise is part of the job.
(Incidentally, I've done 100% code coverage projects! Now that sucks!)
Java Gurus?
Is this a bit like the Kama Sutra? 😯
(though if they were really just data POJOs with no logic then I'd probably test them automatically as mentioned).
Why bother at all? So you could get a 100% coverage figure?
I know, I'm just tweaking your leash for suggesting I have no valuable experience to add
That in turn was in jest as I am sure you understand.
I once wrote a booking system for a hotel on my own under extremely difficult circumstances. Finished the last thing at 3am one morning, emailed the lady to say the code was done and it was deployed, thinking we'd start some testing. 11am the following day, got in to the office and there were 4 call centre girls taking bookings on it!
Worked absolutely fine, just a couple of minor bugs. All due to my transparent code 🙂
Incidentally, I've done 100% code coverage projects! Now that sucks!
Still no guarantee of bug-free code - shhhhh - but don't tell the PM 😉
Why bother at all? So you could get a 100% coverage figure?
Because, as demonstrated, even simple, no-logic, no-validation getter/setters can have bugs (and can be quite prone to them, as people will copy-paste them and they will be barely glanced at during all but the most diligent code review).
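The "earlier example" referred to isn't quoted in this thread, but the classic copy-paste setter bug looks something like this (the DepositAccount class and its fields are illustrative, not code from the thread):

```java
public class DepositAccount {
    private int balance;
    private int accountId;

    public int getBalance() { return balance; }
    public void setBalance(int balance) { this.balance = balance; }

    public int getAccountId() { return accountId; }
    // Copy-pasted from setBalance and not fully edited:
    // it assigns the wrong field, so the account ID is silently lost.
    public void setAccountId(int accountId) { this.balance = accountId; }

    public static void main(String[] args) {
        DepositAccount a = new DepositAccount();
        a.setAccountId(7);
        // The round trip fails: prints 0, not 7.
        System.out.println(a.getAccountId());
    }
}
```

A one-line round-trip unit test (set the property, assert the getter returns it) catches this immediately, and it is exactly the kind of line nobody spots in a code review.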
If you think testing getter/setters makes no sense in Java then try my earlier example but this time in C#:
[code]class DepositAccount
{
public int Balance { get; set; }
public int AccountID { get; set; }
}[/code]
Now [I]that[/I] has nothing worth testing.
Because, as demonstrated, even simple, no-logic, no-validation getter/setters can have bugs
Only if you really try to put them in there!
What about peer reviews? Buddy programming? Component level testing? All alternatives that have less overhead imo.
Only if you really try to put them in there!
Typos and copy-paste errors happen. In my experience, complex code can sometimes have fewer bugs in it than the simple stuff - purely because the complex stuff catches people's eye, it makes them think, and so it will be carefully examined during code reviews, walkthroughs and subsequent coding.
All alternatives that have less overhead imo.
I wouldn't cite any of them as alternatives. They all complement unit testing.
And what "overhead"? I already pointed out you could always automate the simple no-logic testing. Even if you don't and you choose to write it by hand, what are we really talking about: a couple of extra lines in your unit tests for that class. Probably take less than a minute to write.
If you unit test and system test, you've done a 100% thorough job. If you only system test, you've done a 99% thorough job in maybe half the time.
Like I say, it depends on the project and the customer. And what you consider a unit.
This discussion needs an exit clause.
A system level test will barely touch the sides in terms of coverage.
And in terms of overhead, I'd much rather a silly bug was quickly found by the developer that wrote it without anyone else seeing it, rather than it making it up to component/system level where some tester finds it, raises a bug report, which a PM has to assign and monitor, and some other developer has to investigate, fix then provide a test showing it was fixed and fill out bug tracking information.
Anyway Exit clause:
[code]if (time.IsBedtime())
    exit(Status.SLEEPY);[/code]
What about peer reviews?
- It's easier to review tests than code.
Buddy programming?
- One for the price of two!
Component level testing?
- ????
[code]static {
    status = "GoodNightJohnBoy";
}[/code]
Ones where the entire project isn't completely and fully specified or even understood up front. Which is most, in my experience.
Agreed - but that is one of the drivers for XP/Agile and TDD: short iteration releases to the client mitigate this problem. The 'client' may only be internal, such as a local domain expert who wants to see how the release is progressing and to address any issues that have come to light from an inconsistency or incompleteness in the external client's specification. Delivering early allows time for this to be addressed before the final release to the client.
With TDD you write tests to prove the functionality that you are delivering for this release - the code does not have to be perfect and the interfaces do not have to be complete; the priority is only to satisfy the tests.
Obviously the developer is probably under time pressure, so instead of making the code perfect she should make the tests perfect.
Then you tackle the functionality required for the next release, so you write tests to prove that functionality, refactoring as you go along to ensure the code remains/reaches a decent state. The refactoring is protected by the tests being in place, without the tests the refactoring is risky.
If the required functionality changes along the release cycle you may need to refactor the tests, or you may need to refactor them if functionality moves between classes.
The point of pair programming is to have two people thinking up the tests, which, like debugging, is often more efficient than leaving it to one person. Writing the actual code is the trivial part.
Having a suite of automatically run unit tests and, where possible, automatically run system tests, leads to much more stable and mature systems. If a bug is found the first thing to do is to write the missing test that would have exposed it, then to fix it.
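That "write the missing test first, then fix" cycle can be sketched with a hypothetical off-by-one bug (plain asserts rather than a real test framework, just to keep the example self-contained - both method names are mine):

```java
public class RegressionFirst {
    // Buggy version as shipped: meant to count integers in [lo, hi] inclusive,
    // but forgets the +1, so countInclusiveBuggy(3, 5) returns 2 instead of 3.
    static int countInclusiveBuggy(int lo, int hi) { return hi - lo; }

    // Step 1: write the test the bug report implies - it fails against the buggy
    // version (red). Step 2: fix the code so the test passes (green), and the
    // test stays in the suite forever to stop the bug coming back.
    static int countInclusiveFixed(int lo, int hi) { return hi - lo + 1; }

    public static void main(String[] args) {
        System.out.println(countInclusiveBuggy(3, 5)); // prints 2 - wrong
        System.out.println(countInclusiveFixed(3, 5)); // prints 3 - right
    }
}
```

The point is the ordering: the failing test is written before the fix, so you know the test actually exposes the bug.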
Code reviews have much less relevance nowadays as it is more difficult to grock what the code is doing from just reading it. Peer reviews, where one developer explains the code and functionality to another, are much more useful.
Making excuses such as TDD is not appropriate to this project or there isn't time to write the tests is just making shortcuts and increasing the risk of developing a project that is going to fail.
TDD also often forces a better design as components have to be written in a manner where they can be tested in isolation and therefore have clearly defined interfaces and roles/responsibilities.
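A small sketch of what that design pressure produces - dependencies pushed behind an interface so the class can be tested in isolation. The Clock/SessionTimer names are my own illustration:

```java
public class IsolationDemo {
    // Testable design: time is an interface the class depends on,
    // not a hard-wired call to System.currentTimeMillis().
    interface Clock { long nowMillis(); }

    static class SessionTimer {
        private final Clock clock;
        private final long start;
        SessionTimer(Clock clock) { this.clock = clock; this.start = clock.nowMillis(); }
        long elapsedMillis() { return clock.nowMillis() - start; }
    }

    public static void main(String[] args) {
        // A hand-rolled stub clock makes the test deterministic - no sleeping.
        long[] now = {1000L};
        SessionTimer timer = new SessionTimer(() -> now[0]);
        now[0] = 1500L;
        System.out.println(timer.elapsedMillis()); // prints 500
    }
}
```

A class that can only be tested by spinning up the whole system is usually a sign its responsibilities and interfaces were never pinned down.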
The tests are to test logic; testing getters and setters is pedantic. If possible, classes should be made immutable anyway. Testing that all properties are set after construction can be surprisingly useful, however 🙂
It is also often useful to write (integration?) tests to prove that a library/framework behaves as you are expecting in your application, not to just assume that it works that way because the documentation says so. It also protects you from breakages when you update that framework.
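A concrete (if JDK-level rather than framework-level) example of such an assumption-pinning test: many people assume "a,,".split(",") yields three elements, but split discards trailing empty strings unless given a negative limit. A tiny test records the behaviour you rely on and will shout if an upgrade ever changes it:

```java
public class FrameworkAssumptionTest {
    public static void main(String[] args) {
        // Assumption check: does split() keep trailing empty strings? (It doesn't.)
        String[] parts = "a,,".split(",");
        if (parts.length != 1) throw new AssertionError("expected 1, got " + parts.length);
        // With a negative limit the trailing empties ARE kept.
        if ("a,,".split(",", -1).length != 3) throw new AssertionError("expected 3");
        System.out.println("assumptions hold");
    }
}
```

The same pattern applies to Hibernate, JSF or any other dependency: pin down the behaviour your code actually leans on, in a test you can re-run after every upgrade.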
Obviously the developer is probably under time pressure, so instead of making the code perfect [b]she [/b]should make the tests perfect.
Sexist!
🙂
Code reviews have much less relevance nowadays as it is more difficult to grock what the code is doing from just reading it
I would disagree in my area of enterprise application development. Mostly you are just tying APIs together, and the code itself is very noddy.
As for buddy programming - two developers often work more than twice as fast as one, ime. No time for daydreaming, surfing STW or general slacking.
Turner guy seems to be talking about large systems undergoing continuous development and maintenance within an organisation. Like I keep saying, these approaches are all tools you can deploy [b]if appropriate[/b].
I have also used TDD style for smaller projects and love the feeling of solidity you get from it - the time taken writing the tests is easily compensated for by the lack of the traditional debugging.
Before using that style I would like to walk through new code in the debugger, as recommended in "Writing Solid Code", but find the need less now. And when using 'technologies' like XSLT, using TDD-style tests to prove the logic you want is invaluable, as it often doesn't work as you think it should from looking at the code.
With mickey mouse languages like Java and C# I feel TDD is essential, but have used it with C++ to good effect.
Async stuff is a problem still, but it is better than nothing.
The tests are to test logic; testing getters and setters is pedantic.
Yes, yes it is. Aren't tests supposed to be pedantic??
I don't test to prove that it is more or less basically right, give or take.
Likewise I use a compiler set to -pedantic, I am pedantic about fixing warnings and I use pedantic tools like lint, StyleCop, BullsEye, etc.
Code reviews have much less relevance nowadays as it is more difficult to grock what the code is doing from just reading it. Peer reviews, where one developer explains the code and functionality to another, are much more useful.
The trouble with peer-reviews is that it is like someone reading you an essay they wrote, rather than reading it yourself. It is easier to understand their intent, but much harder to spot basic errors.
If the code is hard to grock then it is poorly written and documented. A future maintenance programmer may not have the luxury of the original author on hand to explain it - so it should be understandable without him/her.
Exactly.
I like to write code that's so obvious it doesn't need documentation apart from on a high level.
grock
?
Grok, surely. And if you fail at that level of basic geekdom, I don't want you anywhere near my code!
And don't talk to me about coding standards that make you javadoc every single method, member, ... 🙂
code that's so obvious it doesn't need documentation apart from on a high level
So it's undocumented as well as untested....
.....oh the horror, the HORROR! 🙂
So it's undocumented as well as untested....
But working perfectly, delivered before the deadline and under budget to a very happy client... How is such sorcery possible?!
Grok, surely
that was a trap to catch the true geeks who would correct the spelling error...
The trouble with peer-reviews is that it is like someone reading you an essay they wrote, rather than reading it yourself. It is easier to understand their intent, but much harder to spot basic errors.
Whereas just reading code used to be useful in the days of simpler coding styles/languages, nowadays it can be less easy to see the full intent of the code by just looking at it, with active objects, event handling, etc.
Therefore having the author explain the code and intent of the system to the reviewer(s), possibly at her terminal, tends to offer high value as well as low overhead, and is worth doing. Old-style code-reading is rarely worthwhile.
I'm not a geek, that's why I'm good at my job 🙂
it can be less easy to see the full intent of the code by just looking at it, with active objects, event handling, etc.
Possibly but you should still be able to make sense of a single class/component without knowing the full context of the system. If you can't then that suggests it is poorly designed, doesn't follow SOLID principles and will be a maintenance headache later in the life-cycle.
I've code reviewed everything from ARM assembler and bit-bashed I2C drivers in C, up to enterprise applications in Java, C++ and C#. If someone needs to take my hand to lead me through it then it is crap code.
Welcome to the world of off-shoring, where variables and comments come in a variety (usually jumbled together) of languages.
I'm not a geek, that's why I'm good at my job
You must be a geek otherwise you would not recognise the word, and I must be less of a geek as, although I recognised the word, I could not spell it correctly 🙂
If someone needs to take my hand to lead me through it then it is crap code.
They are not leading you through the code that is in front of you; they are responding to questions about how the system works. You could sit there looking for all references to that object/method and work it all out, but having a more interactive session is more time-efficient. With everyone under time pressure nobody wants to sit there for hours pondering over some code listings, and they won't. And there is little point in human readers repeating what an automatic tool or compiler setting could achieve.
Anyway, with TDD styles the important thing is the public interfaces, not necessarily the underlying code, which may be put together in less than ideal conditions to meet the next deadline and will be refactored afterwards. The tests prove that, although the code is sub-optimal, it meets its goals - and that is what you want after all.