Podcast 041
[incomplete]
Intro, advertising
[00:57]
Spolsky: I've had dreams where I'm debugging, and then I have to, like, dream the bug itself; I have to invent the bug to make the dream work out.
Atwood: Yeah, [laughs]
Spolsky: I guess I should bring our listeners up to speed, because at some point they will have turned us on. This week's guest host is Robert Martin, better known as Uncle Bob; he's a consultant and author in the fields of agile software development and object-oriented design. Thanks for being with us, Bob.
Martin: Oh, thank you for having me on and letting me yell at you guys.
Spolsky: Before we get started I do want to apologize a little bit for episode 38, my arguments at one point in that show were 'ad hominem' and personal and there was no call for that and I apologize, but I am glad that you agreed to be on the show today and come set us straight.
Martin: Well, apology accepted, and let's move on.
Spolsky: Alright, cool. So what's new on StackOverflow?
Atwood: Well, it's been a little bit of a big week for us. We've actually moved data centers; we're now with Peak Internet in Corvallis, Oregon, which coincidentally happens to be where one of the team members, Geoff Dalgas, also lives, so he's our human remote access card, which is awesome. That went relatively smoothly. We did have a few DNS blips; I don't think I appreciated, when you have a truly global audience, how weird that can get. We set our time-to-live to the lowest interval we could, in advance, and it went relatively smoothly, but...
Spolsky: Whenever I've done that, there's always, like, 1% of people who complain about the DNS not resolving because they have a broken DNS server somewhere.
Atwood: Yeah, internally we called it 'bounty', but I think 'featured' was the word that made more sense; maybe that's wrong. But that's where it is. And also, if you go to the questions header, there's a sort where you can see featured as well.
Spolsky: Yeah, I didn't realize you were calling it that. I was looking under unanswered.
Atwood: Yeah, it should be under unanswered as well; people have pointed out that it's kind of silly that it's not...
Spolsky: Because they are unanswered; that's kind of what it means.
Atwood: Yeah, they are technically unanswered by definition. Anyway, that's all the StackOverflow news, I just wanted to get that out of the way.
[07:00]
Spolsky: So let's start mopping up the mess from StackOverflow episode number 38. [laughs] Still, 3 weeks later. I want to play one clip from that episode. I don't want to go into too much detail, but I just want to play one clip of something Jeff said on that show.
Episode 38 clip, Atwood: But there's multiple axes you're working on here; quality is just one axis. And I find, sadly, to be completely honest with everybody listening, quality really doesn't matter that much, in the big scheme of things... There was this quote from Frank Zappa: "Nobody gives a crap if we're great musicians." And it really is true.
Spolsky: I just want to interrupt the clip here to point out that at this point I immediately started trying to search the Internet to find that Frank Zappa joke with the table and the wood leg and all that kind of stuff. And so, I take no responsibility for anything you were saying, because I was looking for the joke.
Martin: [laughs]
Atwood: [laughs]
Spolsky: See how I managed to like...
Martin: That was the clip that I wanted to start with by the way, so thank you for that.
Spolsky: Let me finish playing it now.
Episode 38 clip, Atwood: And it really is true. The people that appreciate Frank Zappa's music aren't going, "that guitar was really off." They're hearing the whole song; they're hearing the music, they're not really worried whether your code has the correct object interfaces, or if it's developed in a pure way, or written in Ruby or PHP... they don't really care about that stuff. We do internally, but it's important to balance that, I think, and I think that gets missed a lot, which is, maybe, the point you're getting at.
Episode 38 clip, Spolsky: Yeah.
Episode 38 clip, Atwood: I think over time, more and more, I've become really lax on my thinking about this, because what matters is what you deliver to the customer, and how happy the customer is with what you've delivered. There's many, many ways to get there.
Spolsky: So Jeff, what did you mean there when you were like, I don't care about quality, quality is a waste of time, and so on?
Atwood: Well, obviously I don't think you should take it literally. I wasn't literally saying we don't care about quality...
Martin: [laughs]
Atwood: [laughs] ...but I think in the context it's about the axes, right? So, to me the root issue is: if you deliver a software product that nobody likes or wants to use, it really doesn't matter how high quality your code is. That's really the bottom line. And I think I've learned this from WordPress, because WordPress is a fantastic tool, but the code is the worst code you can possibly imagine. First of all, it's written in PHP, which is already a problem, right? On top of that, it's crazy PHP; it'll melt your brain if you look at it too long. Everybody who looks at it comes back like they've looked into the horror and it has looked back into them. But ultimately it doesn't matter, because there is this fantastic community around WordPress; there's all these people hacking on it, all these people using it, and it's doing all these great things out in the world. And that really changed the way I view code quality: again, as these axes of things you're trying to balance. You want a product that people want to use and enjoy using, that has this great community around it. Because you can fix code quality; you cannot fix 'nobody gives a crap about your product'. It's unfixable. So, to focus on code quality to the detriment of 'Do people give a crap about my product?' is really the wrong way to go. And I think that's how I'd phrase it.
[10:19]
Martin: And, well, I'd agree with that, but it's a false dichotomy. Nobody wants to create a product that people don't care about. On the other hand, you also want to make sure that your products can survive, and one of the problems that I face, as a consultant, is going into companies where code quality is so bad that the management can't get anything done. No features can be added, every time they touch the system it breaks in 50 places, and every estimate for any task is weeks long because everyone is so fearful of touching this code. As for what you said of the PHP code - I'm not familiar with WordPress so I can't comment on that - but they have a community of people who know it deeply and are enthused about it; they must be working in it and getting to know it, and apparently they are not destroying it. But many, many products get destroyed through this horrible lack of quality, because the teams are focusing on the features and not on what's under the hood.
Spolsky: I think there's, um... We're using the word quality, and the more I thought about this, the more I realized that there are about 8 different levels of what we mean when we say code quality, and there are different tools for addressing quality at all these different levels. So I think... one level of code quality is: does the code do what the programmer intended for it to do? In other words, they wrote a loop; does it do what they intended, or did they make a mistake in it? That's a very low, very granular level of quality, and, you know, you can design for that and test for that and so forth. At a higher level, the question is: was what the programmer wanted it to do even the right thing to have it do? Maybe they're expressing themselves correctly in code, and it doesn't appear to have a bug, but actually what they are trying to do is not the right thing to do, because maybe it conflicts with something else that they didn't think about, or it just doesn't look right when you actually run it. And then there are all the other dimensions, and I think these are kind of what Jeff was getting at with the axes. Usability is obviously a very important thing; I don't think anyone would doubt that usability is an important aspect of quality, and the only way to test that, really, is usability testing, and that's a completely different kind of activity from a lot of other code testing activities. And then there's sort of suitability to task - scalability: will it run, will it run fast, can you put it on real servers, will it crash all the time. And then there's a whole realm of modifiability - can the code be changed easily, or have we painted ourselves into various corners?
Martin: And so, let me tell you a story of a company I know of. They produced a product in the '80s and '90s and it was a C debugger and it was fantastic. I don't know if you were doing any C coding at that time...
Spolsky: Yeah.
Martin: ...but a bunch of us got this tool and it was like being born again, because you could debug your C code in text form instead of in binary. It would interpret the text, and you could set breakpoints in it, and you could look at the heap, and so forth. It was very, very clever, because it would only debug certain modules in text; other modules could be compiled down to relocatable object code and would execute native. We loved this tool; it was a terrific tool. But this happened right at the time when C++ was getting popular, and so a lot of us moved to the C++ world, and we were going right back to the company saying, OK, where's the C++ version? It took them a while, but they eventually came out with a C++ version, and it took 45 minutes to load and then crashed. We complained bitterly about this, and they said, we're going to fix this in the next version. The next version took 6 months to deliver, and it took 45 minutes to load and then crashed. After that the company went away. They had a wonderful tool, it was terrific, and they could not make the change to a different language; they couldn't upgrade that product. I talked to one of the guys who worked there several years later - I ran into him at a conference - and he said, "Yeah, we'd rushed to market, we'd gotten out real early, and we'd made a horrible mess. There was just no way we were going to take that mess and migrate it to the new language; it was just all broke."
Spolsky: Yeah, but at the same time the compiler vendors were starting to put source debuggers in their products, I assume. So, they also had competition.
Martin: This was like '90-'91. I was working at Rational at the time on the first release of Rose, God help me.
Atwood: [laughs] We forgive you!
Spolsky: [laughs] So how's that doing?
Spolsky: So let's call that the axis of modifiability. Because when you look at... when Jeff says the ultimate thing is 'Is the user happy?', that has to be taken across time. They have to be happy, and they have to continue to be happy in the future, and if the code can't be modified...
Martin: Yes. That is one of the values of code: that it can be changed and modified over time. It's not good enough that it does what it's supposed to do and meets its requirements; it also has to migrate with the needs of the customer.
Spolsky: What I remember from my code in the '80s is that you couldn't even do things the right way if you wanted to. You couldn't write clean code because it wouldn't fit. It was just not possible. The instance I remember is looking at Borland Quattro Pro for Windows, which came out to compete against Excel for Windows, which I was working on. Excel was written in C and Quattro Pro was written in C++, and they actually marketed that. The fact that they had used object-oriented programming as their programming technique was actually listed as a marketing benefit in those days.
Martin: What a mistake! [laughs]
Spolsky: Yeah, it didn't work, because nobody cared what it was written in. Actually, the mainstream chip then was the 80386, and the 80386 had this segmented memory architecture. You had near pointers and far pointers; a near pointer could only be 16 bits and a far pointer was 32 bits.
Martin: I want to find the guy who designed that system.
Spolsky: Right, that was what we all wasted our time on. So, the interesting story here is that when you're writing C++ - the first C++ compiler, I think all the C++ compilers, for that architecture - whenever you had a v-table, in other words your method pointers, in order to make life simple they just used 32-bit pointers for everything. So everything was a 32-bit pointer, a far pointer, which gave you, basically, the full 32 bits of address space, for your function calls at least. And this was much more important in C++ than it was in C; in C you would make enormous efforts to keep your pointers local, as much as possible.
Martin: Yes.
[17:19]
Spolsky: And the real reason was that it turns out that using a 32-bit pointer was just orders of magnitude slower than a 16-bit pointer. Because you had to load the left 16 bits, which was a segment, and every time you did that, the CPU went off into space and did all kinds of memory management for you, without telling you what it was doing, and it took a really, really long time. The net result, actually, was that the Borland C++ products - there were two of them: Quattro Pro for Windows, and Paradox for Windows was their database - their start-up time was on the order of minutes; they showed you a progress indicator while they were launching, and their performance was just unbearable, in those days. Excel launched in a matter of seconds using very, very optimized C code. In fact, the code in Excel was astonishing: instead of using near pointers or far pointers, they came up with this idea of a based pointer, where in C you actually had to load the segment register yourself, even though you're in C. It's the worst of all possible worlds, and it's just a way of going in and making your code, like, utterly horrific.
Martin: You see, there I'll take you to task. Because, although you're right that that's a complicating issue and it would make for lots of problems for the programmer, you could still keep your code clean. It doesn't mean that you have to throw away any of your discipline; you just have to deal with this external complexity of the memory mapping of the machine. But you could still keep it well organized, you could still keep names well done, you could still keep your methods small, you could still practice reasonable craftsmanship quality. Right?
Spolsky: Yeah, they never did any of that stuff. [laughs]
Martin: Well, no, of course not...
Spolsky: You could.
Atwood: A lot of these rules... You know, Bob, I was reading the SOLID principles, and I realized that I had actually linked to one of the SOLID principles in an article I had written called 'Curly's Law: Do One Thing', where Tim Ottinger was talking about outliving the great variable shortage, where people were using one variable to do... basically re-using variables. Which is silly, because you can just create as many as you want; you don't need to reuse variables. And that tied into the Single Responsibility Principle, I thought, which was: have the variable do one thing. And a lot of these principles are good principles, and I think they boil down to great guidelines. But the more I write code, the more I think that writing code is like writing in general, which is really difficult for a lot of people - and this is, you know, structured writing that is enforced, sometimes, by a compiler, so this is a theoretically easier form of writing. But there are no real rules that can make you a good writer, right? Nobody can sit down and say, "I'm going to read these rules, and at the end of this list I will be an excellent writer."
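The 'Curly's Law' idea Jeff mentions - one variable, one meaning - can be shown in a tiny sketch. This is a hypothetical Java example; all names are invented for illustration, not taken from any real codebase:

```java
// Illustrative contrast for "Curly's Law: Do One Thing" applied to
// variables: give each purpose its own name instead of reusing one
// variable for two different meanings.
class Invoice {
    // Reused-variable style: "total" means the pre-tax amount first,
    // then silently changes meaning to the after-tax amount.
    static double totalReused(double price, int quantity, double taxRate) {
        double total = price * quantity;  // meaning #1: pre-tax subtotal
        total = total * (1 + taxRate);    // meaning #2: after-tax total
        return total;
    }

    // One-thing-per-variable style: each name keeps a single meaning
    // for its whole lifetime, so the code reads like its intent.
    static double totalOneThing(double price, int quantity, double taxRate) {
        double subtotal = price * quantity;           // pre-tax only
        double grandTotal = subtotal * (1 + taxRate); // after-tax only
        return grandTotal;
    }
}
```

Both methods compute the same value; the second just never lets a name drift between meanings, which is the point of the guideline.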
Martin: [inaudible]... that Strunk and White should not have written "The Elements of Style".
Atwood: No, I think they should and I think the rules are important but I think they tend more...
Martin: Involved? Yes, there is a talent involved. There is some deep thing that's wrong with programmers that makes them good programmers. But there's also a set of techniques that those of us who have been programming for over 40 years have learned over time and can share. And Strunk and White is a good example of that. It's: here's a couple of ways to make your papers look regular and normal, or here's a set of principles that you can follow when you have problems, that will help you get out of them. You can't just be ad hoc.
Spolsky: Bob, can you explain again the Single Responsibility principle? Because I don't think I understand it right.
Martin: The Single Responsibility Principle is actually a very old principle; I think it was coined by Bertrand Meyer a long time ago. The basic idea is simple: if you have a module or a function or anything, it should have one reason to change. And by that I mean that if there are sources of change out there, only one source of change should impact it. So, a simple example: we have an employee - this is the one I use all the time...
Spolsky: Wait, wait, hold on, let me stop you for a second. Change, you mean like at run-time?
Martin: No. At development time.
Spolsky: You mean changes to code. There should be one source of entropy in the world which causes you to have to change the source code for that thing.
Martin: Yeah. That's the idea. Do you ever achieve that? No.
Spolsky: OK, we'll get to that in a second.
Martin: You try to get your modules so that if a feature changes, a module might change, but no other feature change will affect that module. You try to get your modules so partitioned that when a change occurs in the requirements, the minimum possible number of modules is affected. So the example I always use is the employee class. Should an employee know how to write itself to the database, how to calculate its pay, and how to write an employee report? If you had all of that functionality inside an employee class, then when the accountants change the business rules for calculating pay, you'd have to modify the code in the employee. If the bean counters change the format of the report, you'd have to go in and change this class. Or if the DBAs change the schema, you'd have to go in and change this class. I call classes like this dependency magnets; they change for too many reasons.
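Bob's employee example can be sketched in code. This is a hypothetical Java rendering - all class and method names are invented for illustration - where each responsibility gets its own class, so each class has exactly one reason to change:

```java
// Splitting the "employee" responsibilities Bob describes: pay
// calculation, reporting, and persistence each live in their own
// class, instead of one dependency-magnet Employee class.

class Employee {                 // plain data: name, rate, hours
    final String name;
    final double hourlyRate;
    final double hoursWorked;

    Employee(String name, double hourlyRate, double hoursWorked) {
        this.name = name;
        this.hourlyRate = hourlyRate;
        this.hoursWorked = hoursWorked;
    }
}

// Changes only when the accountants change the pay rules.
class PayCalculator {
    double calculatePay(Employee e) {
        return e.hourlyRate * e.hoursWorked;
    }
}

// Changes only when the bean counters change the report format
// (for instance, swapping two columns).
class EmployeeReporter {
    String reportLine(Employee e) {
        return e.name + "\t" + e.hoursWorked;
    }
}

// Changes only when the DBAs change the schema (stubbed here).
class EmployeeRepository {
    void save(Employee e) { /* write to the database */ }
}
```

With this split, a column swap in the report touches `EmployeeReporter` and nothing else, which is the payoff Bob describes in the exchange that follows.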
Spolsky: Is that... how is that bad?
[22:58]
Spolsky: So is that really... I mean, it seems to me that if the schema of the employee changed - because, let's say, there is some new federal reporting requirement, and now we have to track this new thing about employees - it seems to me that would actually impact a few places. That would impact how it's written to the database, that would impact what the report shows, that would impact maybe even the payroll calculation.
Martin: It might, might not.
Spolsky: .... So I have to touch some subset, the subset that has actually changed.
Martin: But here's the counter-example. The bean counters decide that they want two columns in the report swapped. They just want them moved, one on the left and the other to the right.
Spolsky: So I only change the report, the employeeReporter.
Martin: Right. So it's just the report that has changed, and you've got to go into this module and just change the string manipulation for this report. [inaudible] ... the employee record is changed.
Spolsky: This sounds to me like... if you were in a very large C++ environment and you were highly concerned with not triggering full builds because a .h file changed, this would be a reasonable concern. But I'm still not buying that it's a humongous problem. I mean, OK, the two things in the report change, and you've got all these people that "depend on" the employee, and something about the employee has changed because the way it's reported has changed, and they re-compile, and nothing has really changed there.
Martin: Yes. Now, so, first of all you're right: the C++ world is the hell where this principle was really born. Because any small change to a header file kicks off a massive build, and in the '80s and '90s a massive build could be two hours, or more than that.
Spolsky: Yeah, but we fixed that by getting better linkers, or by using dynamic linking, or by...
Martin: But what we haven't fixed is the problem of independent deployability. So, you have a system that is composed of several different JAR files, for example, and you would like to be able to release those JAR files independently of one another. But if there are dependencies that sneak through those JAR files, such that you have to re-build a JAR file and then re-deploy the other JAR files that depend upon it, you can't independently deploy the modules. So this is an issue of componentization; you want to deploy all that stuff independently.
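One common way to stop dependencies sneaking through JAR boundaries, in the spirit of what Bob describes, is to invert them through an interface owned by the depending module. This is a hypothetical Java sketch; the module layout and every name here are assumptions for illustration, not anything from the show:

```java
// BillingService (imagine it in module A) depends only on the
// Notifier interface that module A owns; EmailNotifier (imagine it
// in module B) implements that interface. Module B can then be
// rebuilt and redeployed without rebuilding module A.

interface Notifier {                       // owned by module A
    String notifyCustomer(String message);
}

class BillingService {                     // module A: knows only the interface
    private final Notifier notifier;

    BillingService(Notifier notifier) {
        this.notifier = notifier;
    }

    String bill(String customer, double amount) {
        return notifier.notifyCustomer(customer + " owes " + amount);
    }
}

class EmailNotifier implements Notifier {  // module B: replaceable independently
    public String notifyCustomer(String message) {
        return "EMAIL: " + message;        // stand-in for a real email send
    }
}
```

Because the dependency arrow points from B to A's interface, not from A to B's concrete class, the two JARs no longer have to ship in lockstep.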
Spolsky: We're moving in the other direction, which is... I think the world started... you know, there's the saying "DLL hell" on Windows; the DLLs were created on Windows...
Martin: .NET fixed that, you know. There's no DLL hell anymore.
Spolsky: Well, that's just because there's lots of copies of stuff.
Martin: [laughs]
Spolsky: [laughs] Well, what we're discovering is we can't even ship... our product runs on Unix, relies on Apache, relies on a database server, relies on a particular version of Mono because it's .NET code - and we've just discovered that it's a complete waste of time to try to run with whatever version of Mono the user has on their system, or whatever version of Apache. We're better off actually just bundling one big gigantic hairball that we give people, that has everything they need, because at least we have some control. We know it's been tested with those versions of those components. It sounds to me like... is it really a goal to be able to deploy things independently? Doesn't that just create a deployment-matrix testing hell, where you've got lots of different versions of things out there, and you want to at least check that they work against each other?
Martin: I'm completely with you. I want to control the hairball myself. I would much rather package all the versioned JAR files together and ship them in one great big clump.
Spolsky: Heck, I would even give the client the compiler that we used to build it, in case...
Martin: And I have worked on projects where we did exactly that. But the idea of independent deployability carries a whole lot further. Not only do you want to be able to independently deploy to customers, if possible; you'd also like to be able to independently deploy within your teams. You would like to be able to kick off a build, build your component, and, without affecting everybody else, run your tests.
Spolsky: Right, right.
Martin: You'd like to be able to work in a team environment, where many teams are working together, and the dependencies through all the components are minimized, so that everyone understands: when you build this one, you don't have to build that one, and when you build that one, you don't have to build this one.
Spolsky: There's no question that there are certainly cases where... there are a lot of instances... for example, think of plug-ins. Does WordPress have a... Jeff, does WordPress have any kind of plug-in architecture?
Atwood: [sighs] Oh my gosh, are you kidding me? It's huge!
Spolsky: So, no matter how bad the WordPress code is, they got that part right, which is that you can write plug-ins for it.
Atwood: Well, not really! [laughs] Let me give you an example, to be very specific about what you're talking about. So, they have a plug-in ecosystem, it's very vibrant; any plug-in you can possibly imagine has been created, which is awesome. But the problem for me is that I'm running WordPress on Windows. Which works.
Spolsky: Well, nobody does that. Come on!
Atwood: Well, it works, and it's supported by WordPress, but not every random plug-in author tests on Windows, right? It's not their fault. Maybe they're Linux users, or they never use Windows; I don't blame them. But their plug-ins will just, inexplicably, not work, in really bizarre ways that don't make any sense.
Spolsky: Yeah, the filename with the slashes and...
Atwood: Like, when I was looking for captcha plug-ins for the blog, to reduce spam in the comments, I had to go through, literally, like six or seven. I got super frustrated before I found one that would work on Windows. So, yeah. For what it's worth.
[29:29]
Spolsky: Bob, tell me what you do about... I think that some of these things are important engineering principles if you understand what they do, but I think sometimes they fall into the hands of people who don't really know what they do, or don't really know why you are doing them. These people become doctrinaire about doing them 100% of the time, even when they don't understand what they're supposed to accomplish.
Atwood: Let me interrupt real briefly there and add just one point. At what point does having the rules make things worse? Like, you have somebody who reads Strunk and White and becomes, like Joel was saying, very doctrinaire: you must do X and you must do Y. Writing is a very fluid process where there are a lot of ways to get there.
Martin: Have you read papers that are perfectly formatted, and they follow all the Strunk and White rules, and they're crap?
Spolsky: Or you can't understand what they're trying to say.
Martin: Or they're stupid. Or the point they're making is stupid. Or, yeah. Certainly, if you take a set of principles like this and that is all you follow, you will create a different kind of mess - a very well formatted mess. That is clearly not the point. The point is that if you know these principles and have the talent to write good software, you will be able to apply them. You will look at the code and go: oh yeah, there are too many dependencies coming into this module; look, when I change this, that changes; I should break that.
Atwood: One thing I have observed is that there are really two kinds of programmers. You have the kind that observe what they are doing and adjust what they are doing - in other words, they are thinking about what they are doing. And then you have the developers that pretty much don't think about what they are doing. So if you throw a rule set at a thoughtful developer, they will get something out of it. But I think the type of developer that is just going to write this crap code is sort of immune to these rules, because they are not thinking about what they are doing at some fundamental level. They are just, like, whatever it takes to get it done, and then they move on to the next thing. They are not thinking, how can I do that better next time? It's sad. You mentioned these corporate environments. Ugh.
Martin: The corporate environments are awful, and I have no interest in the second set of developers, except to the extent that they might become members of the first set. I think one of the problems that we have as an industry is that we have way too many people slinging code. And we should probably reduce the number of people slinging code to the group that cares about it.
Spolsky: Hear, hear.
Atwood: That brings up the other problem with corporate environments: a lot of these products you are building internally - and Joel has talked about this many, many times, but it really is true, having worked at large companies, even when you are working with teams that really want to do the right thing - you are building products that will never see the light of day in any meaningful way. They are only used by internal people for very narrow things. These are products that would never survive in the outside world. They are just that bad: they have bad features, they are not usable, they don't meet a real need, even internally to the business. Not a real, viable need. But somehow they amble along and make their way through the pipeline. So you end up working on these products.
Spolsky: It's sad.
[32:50]
Atwood: Yeah, it's really sad. And this is why Joel and I spend a lot of time urging people to get jobs in the software industry. If you really love this stuff, if you're a super-thoughtful developer, you're in the wrong place, right? If you're working in a company where a product's never going to see the light of day, it's like, how good can you really make it? WordPress has to be good because it's living in the real world. These other products that you're building internally don't have to be good; they don't even, sometimes, have to exist! It's tricky - how deep do you want to go to examine the root cause of that problem? Because you may not like what you see there. At least that was my reaction working in corporate America.
Martin: I am constantly amazed at the unfortunate level of expertise of the people working in large software groups. And there are exceptions to that; there are some groups, some teams, that are just terrific. But the vast majority of the code out there is really, really bad.
Atwood: And I think one thing that would help that, even in terms...
Spolsky: But, wait. Although that is true, it's also the case that, in the year 2009, we have this gigantic software infrastructure that makes our lives better, and 95% of that is running on 'bad code'.
Martin: Yes!
Spolsky: And, it's doing something. It's in some way making our lives better and allowing us to send Tweets from the subway, and contact our loved ones more reliably, and track our customers and provide better customer service, and count our money without hiring book-keepers to sit and type things into those adding machines with the paper tape.
Atwood: Well, actually, let me tell you this. Bob, the first time I saw you was at SD West in 2006; that was when I met Steve McConnell, so that was an exciting time for me. I saw you talking, and one thing I remember you saying - and I remember this very clearly - was, "Why is open-source software so much better?"
Martin: Yeah, I say that quite a bit.
Atwood: Yes, and the reason it's better is kind of what I'm getting at: it's because it's living in the real world, where real people have to work with it. It's not isolated in these islands, so it has to be good.
Spolsky: Would you actually say that the code quality is better, or just that the functionality of the products that you get is better?
Martin: I've seen both. I've seen open-source products where the code quality is not terrific, although most of the time the quality of the code in living open-source projects is not bad, compared to what you see in other places. The reason I think open-source software is better is that the people who write it care about it. They're not writing it for any kind of gain other than the satisfaction of writing something good. So there are many, many products, for example, that do live in the real world but are just bad. And people pay money for them and they're still bad; I won't name any names.
Atwood: I don't think those will survive forever though. Eventually that'll get corrected.
Martin: They've survived a very very long time, but I won't name any names.
Atwood: [laughs] Right.
Spolsky: I want to talk a little bit about test-driven development, because that was a big issue that we were talking about. And also, one part of the record that I do want to correct. What triggered me on that rant - I couldn't find this; I tried, I went back and listened to the whole show and I couldn't find it - when you were on Scott Hanselman's show, he just sort of had a half-moment in which he mentioned that once some programmer that was working for him tried to accomplish 100% test-driven development - use test-driven development and have tests for all new code, like literally 100% - and that's really what set me off. Because my experience everywhere, and I've done loads and loads of test-driven development, has been that there's a point at which getting from 0% coverage to 50% coverage is pretty easy; getting to 75% (I don't know what the real numbers are) is hard but worth it; but getting from 80% to 90% to 95% to even 100% is extremely hard, especially if you have graphical user interfaces, or if you have any kind of real-time networking. At some point the costs go up dramatically to get those last few percent. Am I completely off-the-wall there?
Martin: No, I actually don't think you are. I was reading through the transcript today and I was reading again how you were talking about 100% coverage and that's something I agree with. I don't want people to have 100% coverage, although I think it's a good goal to shoot for. I don't want people... I don't think people will ever get to 100%.
Spolsky: The thing that worried me about that is the idea that if I had a programmer working for me who had ten things they could do, let's say they had (this is a real hypothetical) 99% code coverage, and there were ten things they could do, and one of them is to get from 99% to 100%, and the second one is to deliver a whole other feature to a customer, and the third would be to improve the usability dramatically by doing a few usability tests, and so forth. And they chose to go to 100% instead of considering the whole panoply of other things that they could be doing with that time. That would worry me as a sort of obsessive-compulsive disorder, basically.
Martin: [laughs] I think I agree with you, on that, as well. There comes a breakpoint where you think 'well alright, there's just no point in trying to test this one thing'. And usually you're right, it happens around user interfaces or some other snake-y part of the code. Personally, I keep the number around 90-something, 90-ish.
Spolsky: 90% of methods or 90% of lines of code, or...
Martin: Lines. 90% of lines.
Spolsky: Let me play a question by Andrew Davis because that's going to lead into this...
Davis: Hi guys this is Andrew from Infinity Software in western Australia. I've a rule of thumb with unit tests that you should only write them for non-trivial pieces of software that have really well defined inputs and outputs – basically numbers going in and numbers coming out. For Stack Overflow it probably means the only area I can think of where you should have unit tests is maybe the stuff that calculates reputation. Aside from that, no. Jeff, I hope that makes you feel a little bit better. Don't bother trying to write it for GUI code, that is a black hole of time, hours will go in and nothing productive will ever come out. See you guys.
Martin: OK, so I completely disagree with that!
Atwood: Let me take that in a slightly related direction real quickly because I actually had dinner last night with Jon Galloway and Kevin Dente, who are on the Herding Code podcast. We talked about this briefly. I like to look at coding as reducing pain; whenever I have pain coding I try to fix it. I figure, 'what can I do to reduce this pain that I'm experiencing?' Because, when I'm not experiencing pain I feel like, OK, we're doing good. There's no pain points. One of the pain points that we have in the system on Stack Overflow now, specifically, is that our rule set is getting really complicated in terms of the way we allocate reputation, the rules for how you can turn a post into a community wiki post. There are a lot of rules in the system about the way things happen and we've started to actually forget them ourselves, which I view as a pain point. We've started to actually forget the way the system's supposed to work because it's getting complicated. So, one use for testing in this situation wouldn't necessarily be to validate the behaviors, although obviously and hopefully it would, but as a form of documentation. We could go in and say here are the unit tests that define all the rules around how you get reputation, and how you get badges, and all that stuff. So if we forget we could... The code is the documentation, right? The tests document the way the application is supposed to work. That's just an aside about one area I've thought about where it actually would help us to have unit tests. Anyway, go ahead Bob, take it away.
[41:08]
Martin: Well, yes. That's an extremely important point. Unit tests are usually very simple bits of code; they describe in excruciating detail and with unerring accuracy how the code they are testing works. There is nothing ambiguous about them, and they're written in a language that the programmer understands, so they are almost the perfect spec. If you want to know how a module works you look at the unit tests that describe that module. Hopefully those unit tests exist. If a set of unit tests doesn't exist, for example because the writer of the unit tests followed the advice of the caller, then you won't have that spec; what you'll have is a couple of statements of the spec, but you won't have the spec itself. So, my guidelines for unit tests are very simple: if you've got a function like a getter or a setter you don't have to test it, it's just too stupid to test. Everything else you're going to test.
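A minimal sketch of the "tests as spec" idea Martin and Atwood are discussing, in Python. The reputation events and point values here are invented for illustration; they are not Stack Overflow's actual rules. The test names read like the rules they document:

```python
import unittest

# Hypothetical reputation rules, standing in for the kind of logic
# Atwood describes; the point values are invented for illustration.
def reputation_change(event):
    """Return the reputation delta for a single voting event."""
    table = {
        "question_upvoted": 5,
        "answer_upvoted": 10,
        "answer_accepted": 15,
        "post_downvoted": -2,
    }
    return table.get(event, 0)

class ReputationRulesSpec(unittest.TestCase):
    """Each test doubles as documentation of one rule."""

    def test_upvoted_answer_earns_more_than_upvoted_question(self):
        self.assertGreater(reputation_change("answer_upvoted"),
                           reputation_change("question_upvoted"))

    def test_downvote_costs_reputation(self):
        self.assertLess(reputation_change("post_downvoted"), 0)

    def test_unknown_events_change_nothing(self):
        self.assertEqual(reputation_change("page_viewed"), 0)
```

If the team forgets how a rule works, reading (or running) the spec answers the question unambiguously, which is the documentation role Atwood is after.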
Spolsky: Should you even have setters and getters, can we finally get rid of those? [laughs] Sorry. I thought getters and setters were just an artifact of C++ really.
Martin: Or Java, or languages in which people decided that they wanted to make their variables private. OK, fine. Any little function like that that has no possibility that it can fail, OK don't write a test for that. But if you've got an if-statement you should write a test around that if-statement. You should make sure that you've made the decision correctly. It doesn't take you much time to do that.
Spolsky: I think we're all kind of agreed that there's a trivial ground, let's call it the getters and the setters, that's not worth testing; there's a middle ground of logic, business rules, functions that do functional things and can get a little bit complicated, where it's enormously beneficial. But then there's this other world, let's take GUIs for example. My experience with GUIs is that the first big problem, if you're a believer in test-first development specifically, is that it's kind of unpredictable what the GUI's going to do in the small. So the classic example is I'm going to display the following sentence in the following dialog box, and I can write a test which goes to the dialog box and sends a message to the Windows control that has the label in it and checks the text of that label. And that will pretty much always pass; this is essentially a getter at this point, sorry, a setter. I set the label and then I check that the label has that text. And what's really interesting is, what are the things that are going to fail? Well, the label wasn't big enough, it got chopped off on the screen, maybe it wrapped around in some funny way, or it just wound up being in an illogical place or it didn't line up with something.
Atwood: Just as an aside, that's actually a really important case because people who have high DPI settings. This is really a common failure mode for GUIs where they didn't think about the fact that this is going to be used under different DPIs. So that's a real problem.
Spolsky: So, my problem is how can I develop this test so that it tests what's showing up on the screen as opposed to what's the underlying logic, which I know is right. I could try to create, in a bitmap editing program, a bitmap of what I thought the dialog box should look like but I am going to be off by one. I'm going to be off by a pixel because Windows is going to decide to word-wrap on that word and not on the word that I thought it would. So, I think I'm probably precluded, I think it's probably an impossibility to do test first. But maybe I can do test afterwards, so I run it once and I look at it and I say “that's OK” and I store that bitmap somewhere because it really is...
Martin: And there are tools that do exactly this! I don't happen to like them, but they are there.
Spolsky: Oh, yeah. There are a lot of tools for this. And this is what I think I was getting to when I said that a large percentage of things will fail because I have done this on large projects and then discovered that you take it to another machine with a different DPI setting, for example, and all the tests, basically everything that has a bitmap in it, the bitmap has now changed.
Martin: Or the point you raised in one of the other podcasts was that you move a menu and all the tests you've done broke. And I actually have clients who have thousands upon thousands of tests that run through the GUI, and every time anybody changes anything trivial on the GUI thousands of tests break. And they have adopted the only approach they can, at this point: nobody changes the GUI!
Spolsky: [laughs]
Atwood: And that seems harmful to the consumer, ultimately, right?
Martin: Yeah. Well, that company actually went out of business.
Spolsky: [laughs]
Atwood: [laughs]
Spolsky: So what do I do, what do I do, Uncle Bob? Tell me what to do.
Martin: So, obviously you can't do that. The goal here is to test as much as you possibly can without getting into the trap of testing everything through the GUI. The mistake that these people made was that they tested the whole system through the GUI. What do you want to test through the GUI? You want to test the GUI through the GUI. And that's it. So I wouldn't mind getting one of these tools that does all the automatic bitmapping and checking, whatever, as long as the only tests that went through it were tests of the GUI: not a single business rule, no validation, nothing else, just is the wiring of the GUI correct.
[46:42]
Spolsky: I see, so you're saying they're not even testing that the app does what it's supposed to do; they're just testing that if the app changes a 24 to a 26, it shows up here.
Martin: Yeah. I don't even want the app connected. I want a dummy app connected.
Spolsky: Okay.
Martin: Write something stupid on the back end, and hook it up to the same GUI, so there's no temptation to test what's behind the GUI.
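A toy sketch of the "dummy app behind the GUI" idea Martin describes. Everything here is invented for illustration; the stub backend returns canned answers so the only thing under test is whether the GUI layer wires the answer into its widget correctly:

```python
class StubBugTracker:
    """The 'something stupid on the back end': canned answers, no business logic."""

    def open_bug_count(self, project):
        return 3  # fixed value, so the test is about wiring, not rules

class StatusBar:
    """Stand-in for a GUI widget that renders a one-line summary."""

    def __init__(self, backend):
        self.backend = backend
        self.text = ""

    def refresh(self, project):
        # The only behavior under test: does the widget route the
        # backend's answer into its label correctly?
        count = self.backend.open_bug_count(project)
        self.text = f"{count} open bugs in {project}"

bar = StatusBar(StubBugTracker())
bar.refresh("demo-project")
# bar.text is now "3 open bugs in demo-project"
```

Because the backend is a stub, a passing test says nothing about business rules, which is exactly the point: those get tested separately, below the GUI.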
Spolsky: This assumes like a really, really kind of large level of separation between the GUI and the quote-unquote, what you're calling app, but I'm calling maybe the core...
Martin: Yeah, it assumes a good bit of that.
Spolsky: I've worked on so many apps where I wish we could do that. People don't understand FogBugz, it's a bug tracking system, it's feature tracking, it's got a lot of stuff there. And what they don't understand about it is, how we spend so much time on it, when really internally it's a few tables. There's the Bug table, there's the User table and there's the Project table. To a programmer it looks like it must be really simple, and one of the reasons is that there is an enormous amount of code in the GUI, in making certain things come out in the user interface. If you tried to cut FogBugz subcutaneously and kind of remove the face of the GUI and separate it from the actual functionality, I think 80% of that code would be up there in the UI.
Martin: Well, that's an interesting supposition. And what makes you think that? There must be windows that are communicating with each other; there must be cells that if you change a cell in one place it changes in another. There must be a cross-coupling inside that GUI.
Spolsky: Uh, yeah. It's all very complicated. [laughs]
Martin: And is that where that cross-coupling belongs? Or are those communication channels that are actually business rules that ought to be tested separately?
Spolsky: Agreed.
Martin: I don't know if that's true or not, but maybe it is.
Spolsky: If I had to think of-- actually, we did this, because we just went through the exercise of creating an API for people to use FogBugz, which really is access to the underlying non-GUI stuff. And that API is very, very large, very complicated, and has a very very big surface area, basically. I think it's well-designed; we know how to design these things from years of experience, and it will be very effective for our customers who want to do things not through the GUI but through a programmatic interface to FogBugz. The number of points--I mean, a GUI is like a face, right, and the number of connections between the face and the brain is huge.
Martin: Sure.
Spolsky: The number of nerves that are there, and there is a lot of stuff that appears to be happening solely on the GUI level. So for example, a lot of user interface conveniences which are happening not, which don't really represent any kind of business rules, they're sort of... the classic example is a social security field that makes it easy to type a nine-digit number, that kind of stuff.
Martin: Sure, sure.
Spolsky: It's GUI sugar, so to speak.
Martin: Sure. The social security field that makes it easy and puts the dashes in is something that doesn't have to live in the GUI and you could test.
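A small sketch of Martin's point, with a hypothetical formatting function: the dash-inserting logic is pure string manipulation, so it can live in a module outside the GUI and be unit tested directly, while the GUI merely calls it:

```python
def format_ssn(raw: str) -> str:
    """Insert dashes into a 9-digit social security number.

    Pure logic with defined input and output, so it needs no GUI to test.
    """
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 9:
        raise ValueError("expected exactly 9 digits")
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

# The GUI field just calls format_ssn on each edit; the rule itself
# is verified here, not through the screen.
assert format_ssn("123456789") == "123-45-6789"
```

This is the kind of "GUI sugar" that turns out not to be GUI code at all once it is pulled into its own function.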
Spolsky: Right, right. And anyway, those things are all components that we just got somewhere. [laughs]
Atwood: Let me use an example. I think Joel, a while ago you read, what's that classic book about the UNIX way of developing, did Eric Raymond write it?
Spolsky: Yeah, Eric Raymond, The Art of UNIX Programming, I think, or...
Atwood: Right, right. And one of the things you mentioned was that the classic UNIX way of developing a GUI was you start with a command-line app, which has a defined set of inputs and outputs, pipes, basically text going in and out of it.
Spolsky: A lot of command-line arguments.
Atwood: And then you put a GUI on top of that, so then you have perfect separation. You don't necessarily have a great app [laughs] which is the deeper problem, but you have perfect separation, because you can test the command line independently and test the GUI independently, couldn't you, in that scenario?
Martin: Yeah, certainly. And by the way, that's the Git approach, which... I'm deeply in love with Git. [laughs]
Spolsky: I think that Git is awesome. That's a very UNIX-y approach, to start with a command line and worry about the GUI later, and I think there's a lot of problems in the UNIX world because of that, because they can never quite get the user interface good enough, because it's depending on...
Atwood: They're very different things. This is the problem, right? The command-line world, opening a terminal... I was reading a blog entry where someone said if you just disabled the terminal on every UNIX programmer's desktop worldwide, you would immediately have a massive increase in the quality of the GUIs, because the immediate thing they do is they drop to the command line, and [think] "oh, I can just do this through grep"; they don't think about "how would a person who doesn't want to go through the terminal do this?" They don't think that way, so you end up with crappy GUIs and great command-line apps.
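A miniature of the UNIX-style split Atwood describes: a pure core that maps defined inputs (arguments plus stdin text) to defined outputs (exit code plus stdout text), testable with no GUI attached. The toy line-counting command here is invented for illustration:

```python
def run_command(args, stdin_text):
    """Pure core of a command-line tool: text in, (exit code, text) out."""
    if args == ["count-lines"]:
        # Count newline-terminated lines, like a tiny wc -l.
        return 0, f"{stdin_text.count(chr(10))}\n"
    # Unknown command: nonzero exit code, no output.
    return 1, ""

# Both the terminal front end and any GUI would be thin shells over
# run_command, so the core and each face can be tested independently.
code, out = run_command(["count-lines"], "a\nb\nc\n")
# code is 0 and out is "3\n"
```

The separation is perfect precisely because the core never knows which face called it, which is also why, as Atwood notes, it guarantees nothing about the GUI being any good.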
[50:53]
Spolsky: Let me take another question here. This is from Tim Kington.
Kington: Hi Joel and Jeff. This is Tim Kington and I'm from Columbus, Ohio. I really enjoy the podcast. I was listening to your discussion on testing a couple weeks ago, and I wanted to say that I think true test-driven development, and by this I mean writing your tests before you write the code, has a large benefit that you didn't talk about, and that's that it can significantly improve your design, because you approach your APIs from the perspective of the client. Since you write the client code first, you wind up thinking about what would be easy to use, and not so much what would be easy to implement. And I was interested in hearing what you think about that.
Martin: This is really a powerful point.
Atwood: Um hmm, yeah, it's great. Walking a mile in another developer's shoes is an incredibly -- it's a really underrated benefit, I agree. So, take it away, Bob.
Martin: Oh, well, the fellow makes a very good point, that, if you are writing your tests first, it forces you into the mindset of the kind of separation that we were talking about. If you knew that you had to put dashes in between the social security number fields, you could write the test for that first. And then you would write the code for it, but the test would force you to create a module that did that separation, and then the GUI could call that module.
Atwood: Um hmm.
Martin: So, the mere act of writing tests first puts you in the mindset of separating things, simply so they can be tested. But that separation pays off manifold.
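Test-first in miniature, illustrating Kington's point. The test below is written as if the code did not exist yet, so the API (a hypothetical `parse_tags` function; the name is invented here) comes out shaped the way a client would want to call it, and only then is the implementation written to satisfy it:

```python
# Step 1: written first, from the client's point of view.
def test_tags_parse_from_space_separated_string():
    tags = parse_tags("python unit-testing tdd")
    assert tags == ["python", "unit-testing", "tdd"]

# Step 2: the implementation exists only to make the test pass.
def parse_tags(raw: str):
    """Split a space-separated tag string into a list, dropping empties."""
    return [t for t in raw.split() if t]

test_tags_parse_from_space_separated_string()  # passes once step 2 exists
```

Because the call site came first, the function ended up easy to use rather than easy to implement, which is the design benefit the caller is pointing at.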
Atwood: Well isn't it more like eating your own dog food? Like at Fog Creek, do you guys use your own API?
Spolsky: Yeah, we're going to, I don't think we're going to -- I think once it's done we're not going to add another feature to the core app ever again. [laughs]
Atwood: [laughs] But, that's the reaction I got out of it. It was just weird to be in the situation of consuming your own APIs, you really...
Martin: You are not consuming the API because you haven't written the API yet. You are writing tests that...
Atwood: Well, in order to test..
Martin: ...call an API that has not yet been...
Atwood: Well, in my case I'd already written it so I was sort of backfilling tests, but in general, it's also design.. but it's great, anything you can do to eat your own dog food whether it's -- I mean, testing, it's a great way to do that as well, unit testing. Like unit tests for your API, Joel. Obviously that's a place you would probably want unit tests.
[53:12]
Spolsky: There's something here I have, you know, I think that good developers, um, there's sort of a range of developers. There's bad developers, who, no matter how much you teach them they're never going to get this stuff right. There's really good developers who, even if they don't use test-driven development or, what was it, behavior-driven development, basically, where, almost like design, you know, that form of design, there are really really good developers who will actually probably design things right just through their intuition or their experience, and then there's some middle world of people who will be helped, substantially, by thinking about, "how am I going to unit test this" and "how can I create a test for this thing, I better break it up into these particular ways."
[65:02]