Spolsky: Hey, it's the podcast, with Joel Atwood and Jeff Spolsky. I'm your host, Joel Atwood.
Atwood: And I'm your buddy, Jeff Spolsky.
Atwood: Good, you know, we got a request for our show to be all about Twitter. Just, the whole show. Talk about Twitter, talk about how we use Twitter, what Twitter's good for, how to make money using Twitter.
Spolsky: Do you know how to make money using Twitter? You spam bloggers. There's gonna be something in there for the PR people.
Atwood: Yeah, so probably we shouldn't talk.
Spolsky: Because we talk about it too much?
Atwood: I think I bring it up occasionally. I blame me, frankly. This is the one area... usually I blame you.
Spolsky: The one thing you can say about Twitter is that once you're into it, you're into it, you wanna talk a lot about it, and if you're not into it you're just like "shut the hell up already".
Atwood: That's right!
Spolsky: I get it. It's your instant message service on your cellphone or something. I know. I know.
Atwood: Yeah. Exactly. So I have a different topic we could talk about. Actually before I go into that topic let's talk a little bit about Stack Exchange. So we had our first meeting with Aaron, and you didn't tell me that Aaron has like 10,000 reputation on Stack Overflow.
Spolsky: That's why we hired him.
Atwood: I know. I was very impressed. So I'm not going to out him, but you can find him if you're a Stack Overflow user you can find him and you can bother him about Stack Exchange and how it's going to work and all that stuff.
Spolsky: Well he does follow the Stack Exchange tag on meta.stackoverflow.com so it's a good way to put in your feature requests and ideas and questions about Stack Exchange.
Atwood: That's right. No, I definitely encourage that. Just tag everything "Stack Exchange". Eventually I think Stack Exchange will host its own discussion about itself. But until then Meta is totally an appropriate place to do that. Just make sure it's tagged "Stack Exchange" and I'm sure Aaron will look at it.
Atwood: So we're still good on our timeline as far as...what was the date on Stack Exchange?
Spolsky: Well, we were going to go into beta on September 1st and I think that right now evidence-based scheduling is giving him a more than fifty percent chance of doing that, so...
Spolsky: ...maybe just under.
Atwood: Well, you know I have another type of scheduling. I'm going to launch some software for it. It's going to be "sixtoeightweeks.com". You can sign up there and get an account.
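[The evidence-based scheduling Joel mentions is, at its core, a Monte Carlo simulation over a developer's historical estimate accuracy. A minimal sketch of the idea, with entirely hypothetical numbers and function names:]

```python
import random

def simulate_ship_probability(history, remaining_estimates, deadline_hours, trials=10_000):
    """Monte Carlo sketch of evidence-based scheduling: scale each remaining
    estimate by a velocity (actual/estimate ratio) drawn from the developer's
    own history, and count how often the total fits before the deadline."""
    velocities = [actual / estimate for estimate, actual in history]
    hits = 0
    for _ in range(trials):
        total = sum(est * random.choice(velocities) for est in remaining_estimates)
        if total <= deadline_hours:
            hits += 1
    return hits / trials

# Hypothetical history: (estimated hours, actual hours) for past tasks.
history = [(4, 6), (8, 8), (2, 5), (16, 20), (4, 4)]
# Chance of finishing the remaining work within 60 hours:
print(simulate_ship_probability(history, [8, 16, 4, 12], deadline_hours=60))
```

[This is why the tool reports a probability ("more than fifty percent chance") rather than a single ship date.]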
Adam Goucher: Hi guys, this is Adam Goucher and I'm a tester in Toronto. The Fog Creek way of hiring programmers has been well documented both online and through this podcast. What I haven't seen is any discussion of how testers are hired at Fog Creek and how they fit into either of your views of how software should be built. Thanks, bye.
Spolsky: Yeah, testers. That's kind of an interesting question because the neat thing about good testers -- so first of all let's clarify, I guess we've had this beaten into us. Test Driven Development has nothing to do with testing, different thing altogether. We can completely put that aside. Happens to use the same word, okay? That's all. But the kind of testers that I'm talking about here are people whose job is to make the software break, and find out when it did and tell the development team so the customers don't see it break. And what's interesting about testers is that they come from a much, much more diverse set of backgrounds than programmers. The good testers that I've known very rarely come from a computer science background. Sometimes they do. They may or may not be programmers. There's a lot of room for non-programmer testers who basically do black box testing, where they just sort of bang on the code with a user's perspective and try to find areas in which they think it will work in a non... user... happy-making fashion. And every tester just sort of has a different story of how they got into it and what their background is and why they like doing it. The two testers at Fog Creek, I can't even really explain how they got into it. I don't know if they could either. One of them, for example -- one of the testers that we recently hired, this guy Sam, who's a great tester. We just had a summer intern here who had a friend, and his friend was sort of interested in getting into the computer business. That's kind of all I heard. And I said, "Well, alright, if he's smart." And we interviewed him and it sure sounded like he was smart and he could learn things quickly. And he was and he could and he became a great tester. On the other hand, sometimes you have testers that you think might be really good and they have all the qualifications, but for whatever reason when you sit them down in front of code they just never find any kind of bugs.
There seems to be a real dramatic difference in productivity of testers, even more than with programmers. When I worked at Juno Online Services... ah, boy... somebody's going to get offended. Oh, well. There were four, five testers on the team, I think? I can't remember exactly because the dropoff was real quick. There was Jill and there was all the other testers. And Jill found 70-90% of the bugs that were found by the team. And I think I remember doing a calculation that we would rather have Jill two days a week than the rest of the team. Like I would give up the rest of the team if we could have her for Mondays and Tuesdays looking for bugs.
Atwood: Okay, but what, do you think that's something particular about her? Or... why?
Spolsky: I just don't know. She was really good at looking at something with a critical eye. I guess that's the only way I can put it. She would bang on something and just say, "Hmm, I wonder what happens if I try to break this in this particular way." And she just noticed those things quickly and she was just really energetic about exercising the product to find the bugs. And it's kind of interesting, there are certain classes-- like the testers that we have here, they find certain classes of bugs. Those of you that are thinking about automated testing-- none of the bugs that good testers find are caught by automated tests. Some of them can be, in principle. But mostly it's like, "Why doesn't this line up? Why doesn't this make sense? Why did I just log on and now I'm not logged on? Why, when I put an open quotation mark and no close quotation mark into this field, does the server come to a screeching halt?"
Atwood: These are more like usability testing. So really, are you saying you're getting two for the price of one? You're also getting some usability testing of, like, what you're doing doesn't make sense-- it's not wrong, but it doesn't make sense.
Spolsky: Well, you certainly get a certain level of that and it's valuable to do usability testing with real users who are in your actual target audience as well. You do definitely get some of that. But what you're really getting is... sort of, a lot of fit and finish testing. And also a lot of edge case testing. You know, one of the things that-- there's a big mentality difference between programmers and testers. And the mentality difference is that the programmer spends all day long trying to get the code to work. And so, they're writing some code and it doesn't work. And then they write some more code and it doesn't work. And then they fix the code and it still doesn't work. And they fix the code and it still doesn't work. And they finally get-- and each time, they're testing one thing to see if their code that they just wrote works. Right, like they change their code, and then they do a test, and they change their code and they do a test. And even if you're doing Test Driven Development, that's what you're doing. Because you wrote a test and you change your code, and you automatically run a test, and you change your code and you automatically run a test. And then once that test passes, whether it's manual or automatic, once that test passes you feel like you're done. You're like, "Tada!" Okay, ping! Move on. And then what you've actually said is there is one condition under which your code appears to work under this one very special set of circumstances. But as a programmer you then tend to just kind of move on. And then you give that to a tester and the first thing they're going to do is they're going to take that one piece of code and they're going to try it under 30 different circumstances. They're going to fill in all the fields and they're going to leave out all the fields and they're going to just try all kinds of edge cases and they're going to try all kinds of interesting and fairly consistent easy to learn ways of breaking code. 
Like if you have a form you need to test, try leaving things out. Try putting things in. Try overfilling all the fields. Try filling the fields with garbage. Try filling the fields with Unicode. Try filling the fields with a lot of spaces. Try filling the fields with accents and single quotes and double quotes and all kinds of special characters which might have special meaning. And you do all that and what's amazing is that this always finds 17 bugs no matter how experienced the developer is. They just keep generating code with the same bugs again and again and again. "Oh, yeah, I didn't test Unicode because I don't know, I just thought it would work, grumble grumble grumble."
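[The checklist Joel rattles off can be sketched as a simple table-driven test. The form handler below is purely hypothetical -- a real tester would aim this list at the application's actual form:]

```python
# Hypothetical form handler standing in for the code under test.
def submit_form(name, comment):
    if not name.strip():
        raise ValueError("name is required")
    return f"Saved comment from {name}"

# One entry per category from the checklist above.
EDGE_CASES = [
    ("empty field",        ""),
    ("all spaces",         " " * 500),
    ("overfilled field",   "x" * 10_000),
    ("single quotes",      "O'Brien's 'test'"),
    ("unbalanced quote",   '"no closing quote'),
    ("accents",            "caf\u00e9 na\u00efve r\u00e9sum\u00e9"),
    ("unicode",            "\u2603\u2603\u2603"),  # snowmen
]

for label, value in EDGE_CASES:
    try:
        submit_form(value, value)
        print(f"{label}: accepted")
    except Exception as exc:
        print(f"{label}: rejected with {type(exc).__name__}")
```

[Each row either passes or surfaces one of those "same bugs again and again": missing validation, truncation, quote-escaping mistakes, or encoding failures.]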
Atwood: You gotta use the snowman, paste in the snowman and see what happens.
Spolsky: Yeah, heck, the snowman is just the beginning of things, the little Unicode character for the snowman. There's also, obviously-- there are all kinds of characters, and a good tester will have, somewhere, a list of these characters that are worth testing in various fields.
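[A few examples of the kind of list Joel means -- characters that routinely break naive string handling. The specific picks here are illustrative, not an official list; the round-trip check is the simplest thing a tester might automate:]

```python
# Characters worth pasting into every text field.
SAMPLES = {
    "snowman":      "\u2603",                    # the classic U+2603
    "combining":    "e\u0301",                   # 'e' + combining acute accent
    "right-to-left": "\u05e9\u05dc\u05d5\u05dd", # Hebrew, flips text direction
    "astral plane": "\U0001F600",                # outside the BMP; breaks UTF-16-naive code
    "zero-width":   "a\u200bb",                  # invisible, defeats "looks equal" checks
}

for label, s in SAMPLES.items():
    # Minimal sanity check: the string must survive a UTF-8 round trip.
    assert s.encode("utf-8").decode("utf-8") == s
    print(f"{label}: {len(s)} code point(s), {len(s.encode('utf-8'))} UTF-8 byte(s)")
```

[Storage layers that silently mangle any of these -- dropping combining marks, truncating astral-plane characters, stripping zero-width characters -- are exactly the bugs that "I just thought it would work" leaves behind.]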