 

Podcast 080

 

Jeff:     I was just sort of ranting a little bit on Twitter about, eh... my experience with GitHub has been pretty negative so far. Part of it is just me, but...

Joel:    Yeah, they probably hate you there.

Jeff:     No, no no. I think what they do is cool. It's just, I find it... We're getting a lot of noise out of GitHub. So what's hosted there is the WMD editor that we had to reverse engineer, that Dana Robinson did pretty much all the work on. Dana has been tied up with some other stuff. In the meantime, a bunch of people have forked off the project, which is fine, I don't really have a problem with that, but it makes it really hard to figure out what is going on with the project, because you have Dana working, then stopping for an extended period of time. Then you have a bunch of people that sort of picked up and started doing semi-random stuff.

Joel:    Wait, you just let them check it in? Or what, what, where, what?

Jeff:    What do you mean? No, no, no. They have their forks, their own private forks. Well, not private, that's actually the problem, I think. So anyway, in talking to people on Twitter about this, it's like, everyone that approaches WMD wants to help, wants to figure it out, and they have this immediate hurdle of, well, what's the current version, you know? And if you go to derobins's branch you'll see, here's derobins and he did some work, and then you'll see a bunch of other people's work, and you're like, which one of these is the good one?

Joel:    That's the nature of any open-source project. There's supposed to be some kind of, like, shall we call it a parent, to use a metaphor.

Jeff:    Yes

Joel:    Who exhibits some kind of parenthood.

 

Jeff:    Yes, now that's definitely true, and I agree with that. But I also think that there's a symptom here, in that part of GitHub's model is they're not free, they're paid, which, again, I have no problem with. I have no problem with paying for stuff, I have no problem with it. The artificial distinction that they use is you cannot make stuff private unless you pay. I think this is a little bit annoying, because what it means is people who are playing around will come in and get, they'll check out WMD. Well, not check out, but whatever the correct term is, pull I guess, and they will show up in the timeline for WMD, even if they have no intention of...

Jeff and Joel: ....

Joel: ..as pulling

 

Jeff: Apparently, I mean, that's what I'm hearing on Twitter. That's what I'm kinda objecting to, that there's always some sort of random stuff in the timeline. <Joel talking in the meantime>. You know, I don't care... Okay, here's my thinking. And people correct me if I'm wrong about this, because I'm somewhat new to distributed version control, so I could be completely wrong, but what I'm thinking is, you know, I only want to know about your timeline if you have intent -- your fork, if you will, or your pull -- if you have intent to fold back in.

Joel: It's not up to the person, you see, I think that in an open source project <unclear: it doesn't> work that way.

Jeff: I think this is all very side-effecty. What I'm saying is these people come in with free accounts, they can't mark their stuff private, therefore it always shows up in the timeline if they even touch WMD at all. What makes the timeline... <Joel interrupting>

Joel: What's your WMD, what's your account on GitHub?

Jeff: I actually just deleted it, because I wasn't using it...

Joel: Where's WMD on GitHub?

Jeff: Just do a Google search for WMD GitHub.

Joel: There's a whole bunch. There's derobins's WMD at master.

Jeff: There's a whole bunch, this is the whole problem. <laughing> This is what I'm trying to tell you. So derobins's is theoretically the correct one, in that that's the one we use, but there are newer ones.

Joel: Okay, this is fine. Who cares if there are newer ones? This is a branch though. "Functionality removed that we not needed"

Jeff: There's a lot of changes going on with other people, and it's confusing. I mean, this is my problem. <laughing> I think it's really confusing.

Joel: This is <Jeff talking: part of that is..> C# WMD. This is the WMD..

Jeff: No, this is the Javascript, the client.

Joel: This is the official, well, the original, not really official, forked ten million different ways.

Jeff: Well, that's what I'm getting at. The whole situation is a little weird. Part of it is bad parenting. And granted, Dana hasn't been around, I haven't been around. So I have actually moved... <interrupted>

Joel: Wait a minute, where have you been? 

Jeff: Well, I just haven't had any time to work on the Javascript stuff at all.

 

<around the 19 minute mark>

 Jeff: And one thing I'll tell you right off the bat, the state machine was waaaaay more code. Way more code. You're talking, like, 25 lines of code! <laughing, astonished>

Joel: Yeah, but that could be.. <interrupted>

Jeff: Could have been done in the Regex in like, three lines. I'm just sayin'!

Joel: Yeah, <interrupted>

Jeff: So that's the downside, it's a *ton* of code!

Joel: Yeah, but that code is all legible. <laughs a lot>

Jeff: Hummmmmm. I don't know. I mean it's debatable. <talks over Joel> It gets somewhat debatable. It's definitely faster. There's no question that it's faster because you're doing three Regexes in this case to do the <interrupted>

Joel: It's much easier, it's easier to debug too.

Jeff: Naaaah, I dunno. The way I was seeing this code <laugh> there's actually a bug, sadly, in the normalize routine he contributed, I'm gonna have to roll it back, umm, because it's not actually removing the newlines at the end of the code like it's supposed to <laugh>. Uh, but I don't know, I was a little taken aback... <interrupted>

Joel: So why don't you just fix it?

Jeff: Well, because, th-, th-, the thing is it's a bunch, like 25 lines of code, I have to look at and understand, versus like three. <laughs> It's actually quite a bit more complicated. <talks over Joel>

Joel: Yeah, Fine. <whatever>

Jeff: I mean, I can give you the code if you want to look at it. It's not *super* complicated code, but it's 25 lines of code.

Joel: Right. <exasperated>

Jeff: I mean, lines of code -- the more lines of code, the more bugs, man.
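The kind of fix Jeff is alluding to might look something like this -- a purely hypothetical sketch, not the actual WMD source, of a regex-style normalize step replacing a hand-rolled state machine:

```javascript
// Hypothetical sketch (not the real WMD normalize routine): strip the
// run of trailing newline characters from the end of a code block in a
// single regex pass, instead of ~25 lines of state-machine code.
function normalizeCodeBlock(text) {
  // Replace any run of trailing \n or \r characters with nothing.
  return text.replace(/[\r\n]+$/, "");
}
```

Whether the three-line version or the state machine is more debuggable is exactly the debate Jeff and Joel are having.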

<end classic segment>

(25:50)

 

Hey Joel and Jeff, this is Dave from the tri-state area.  I'm calling with a question for you guys.  I work at a large software company with a humongous code base.  Well in surplus of 50 million lines, and we have every language in there from FORTRAN to C++ to Javascript to God knows what else.  And there's a heavy kind of pervasive philosophy all around the company whenever I ask for documentation.  The usual answer I get is, "documentation? What documentation? The source code is the documentation!" I find this kind of irksome as a new guy over there, still trying to get my way around the environment. But what are your thoughts on this kind of approach?  Do you think that it helps you to learn more about the environment you're working in? Or impede development? Or what? Anyway, I'd love to hear your thoughts on this, and thanks for a great podcast and a great site.

Joel: Yeah.

Jeff: Documentation. It's exciting.

Joel: Did he say "dongumentation?"  Okay, this just makes me want to quit my job as a programmer even more.

Jeff: What, having to answer the question about documentation?

Joel: Just even thinking about this, yeah.

Jeff: Well, I think, I don't know, this is the theme I come back to a lot, but I think having worked in a large company, discoverability was the number one problem, and I just feel like the more you can rely on an external code base versus an internal code base, or have some sort of open-source thing where you're contributing as a company to some open-source thing that's sort of larger than your company... I just think there's... it's too difficult to attack this stuff internally, basically.  The reason that people would resist, well now I gotta write documentation for this internal thing, y'know, when I could be building this internal thing. And y'know if you create documentation, it's only going to be visible internally.  So what kind of benefit is even a large, large...

Joel:    That's not even, okay, come on, that doesn't even... I'm going to have to disagree with you here, because it sounds like what you're about to say is, well, of course this problem doesn't have a... you're starting to become one of these open-source weenies, or that this problem doesn't happen on the Macintosh.

Jeff: No, no, no, no.

Joel: We don't have this problem on Linux.  On Linux, it's awesome to write documentation, because everybody reads it.  The whole world can read your awesome documentation, whereas who wants to write documentation for 50,000 Microsoft programmers to read?  That's boring and lame.

Jeff: Well let me backtrack a little bit. That wasn't exactly what I was saying.  What I was saying is that most large companies, the audience for anything at a typical large company is what, like 50 people?

Joel: No, because it's two and a half people who are going to be using your code, who are going to have to be working on your code, and you don't know when they're coming and when they're not. But you're right, it's just a tiny number of people.

Jeff: That's why it's hard to get excited about it.  When I was documenting stuff internally, I said okay, I'll document it for the three people who are ever going to look at this, ever.

Joel: Exactly. And you go to some major effort to write some documentation, and you're like, I finally documented that. Three years later, the code is changing every possible which way. In the meantime, nobody's read your documentation, because they don't like to read, they'd rather watch television, and they can always just ask you. And you're like, well, did you read the documentation? And they're like, oh, yeah, I didn't think that was going to be right. So you're right, documentation is an impossible... or specifically documenting code...

Jeff: Wait, wait, wait, I got an idea, I got a perfect idea. I think what would have much better value in a company, a large company, is unit testing. Unit testing can be a form of documentation. (Joel sighs.) No? No? I mean, if you're going to sit down and write documentation, I think you'll get much more value out of writing actual unit tests.  And particularly because he's talking about large code bases, do you remember when I referred to unit tests as scaffolding around some grand old building?

Joel: But, but, but, he's talking about... the problem he's talking about, I think, he said he's kinda new at the company, and he's trying to get his mind around 50 billion lines of code they have. The problem there, what he really is asking for is, like, isn't there a textbook I can read, where somebody will get me all started on all this complicated code and figure out where things are and all that kinda stuff. And I don't think there is a code base in the world that has documentation that is gonna make the job easy for you, of learning your way around a large code base. It's just hard. It's like when you're a doctor, or when you're training to become a doctor, you spend a couple of years learning, like, human anatomy, and that's documentation for a big complicated system, in a sense. And you read it, and there's lots of textbooks, and they teach you things, and after a couple years of studying the documentation and studying the human body in various other ways, you finally have kind of a good grasp on how that complicated system works. And it's the same thing with a code base, except you don't even have, y'know, an anatomy textbook, so it's even harder, I guess.

Jeff:  You just need a native guide. I guess an inventory would be more useful in that scenario. Although I'm still going to go with my unit testing. I still think it would be helpful to just read through some of the unit tests. Plus, if he's going to be changing some of this code...?

Joel:  That's documentation of a very small... of a unit.  It doesn't really tell you that this subdirectory contains a whole bunch of files which are input to this function, in that place, which generates automatically a parser that can handle the watchamacallit thing that you use for this gigantic module that you don't have to know about because we haven't used that code in 15 years.  It's still there because we have a particular customer that's apparently using it, we're not sure, we're afraid to ask, there's no reason to delete it.  So if you have a large body of code, if you've ever tried to take over a large body of code, or to just start working on a large body of code, it's like impossible. You can't, y'know, it's really hard.

Jeff:  Yeah. Well, your recommendation was to pick some tiny bug...

Joel: Yes, that was my recommendation, I was just going to mention that again, so I guess this'll probably come up on every single podcast.  It's that the best way to start on a large body of code is to just be assigned a whole bunch of random little bugs all over the place, and just somehow figure them out.  It's gonna be hard, but eventually you'll just get better and better at understanding what's in the code and where it should be.  That doesn't... now, it sounds like I'm making excuses for not documenting your work, and there are different levels of documentation.  So you got documentation in the small.  Unit tests are awesome if you want to document a tiny little piece of... like, if you had very, very detailed unit tests for MarkDown, your MarkDown parser, or your WMD parser, you could look at them to resolve ambiguities.  You'd be like, hmm, there is an ambiguity here as to whether a line of dashes is a horizontal rule or it means heading-one.  And then you would go look for dashes in the unit tests and you'd see if anybody had made any unit tests ... you'd discover that they hadn't, and you'd say, well, I guess that's not documented.
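Joel's dashes example can be made concrete with a toy sketch (hypothetical; not real WMD or MarkDown code) -- the point being that the unit test, once written, *is* the documentation of the ambiguous case:

```javascript
// Toy sketch of the ambiguity Joel describes: is a line of dashes a
// horizontal rule, or the underline marking the line above as a heading?
// The rule implemented here is an assumption for illustration only:
// dashes directly under a non-blank text line = heading underline;
// dashes on their own = horizontal rule.
function classifyDashLine(lines, i) {
  if (!/^-{3,}\s*$/.test(lines[i])) return "text";
  const prev = i > 0 ? lines[i - 1].trim() : "";
  return prev.length > 0 ? "heading-underline" : "hr";
}
```

A test asserting `classifyDashLine(["Title", "---"], 1)` is a heading underline pins the behavior down; if no such test exists, then, as Joel says, that behavior is simply not documented.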

Jeff: Well, somebody pointed out that part of my cognitive friction with testing on MarkDown# was that the way I was testing was too large. Actually, doing the input/output testing was considered too large to be a true unit test. Unit tests are supposedly smaller. It's like another kind of testing.

Joel: But if you have a well-developed body of unit tests, for a well-developed body of code, and the unit testers are doing a great job, and they've been studying it for 200 years, and they've spent a year in a cave with Robert, with Uncle Bob Martin, actually side-by-side, what's the word, pair programming, and they've got awesome unit tests for their body of code, then in that circumstance, wouldn't you expect that the unit tests would be about the same size as the code base, if not larger?

Jeff: They'd be pretty large.

Joel: They might be larger than the code base.

Jeff: Yeah.

Joel: Okay, how does that help?  Go read that other thing, which is just an alternate expression of the same code base, in a different language, or a different format...

Jeff: Well, maybe I was thinking ahead in terms of not technically reading it, but just making a change and then being able to

Joel: Oh, yeah, that for sure is a very useful, that's useful and that's valuable...

Jeff: I would say that any time spent on documentation, to me, is really hard to justify.  At least, with unit testing it's still somewhat hard to justify, but I can see the benefit of, like, a new programmer who doesn't know anything about your code, can come in and make a change, and have some reassurance that, okay, I didn't break everything.

Joel: Yeah.

Jeff: I may have broken something small, but at least it wasn't something covered by one of the major unit tests that we have, or even the...  whatever the kind of testing I'm doing on MarkDown#. I still don't fully understand the distinction, but uh, y'know, input/output type testing.

Joel: Yeah.  There is definitely a feeling among programmers that there's never enough documentation of the code they've been told to go work on, ever. And there's also a pretty clear reluctance to ever write any documentation, because documentation in and of itself almost never gets written.  In fact, if you follow a team of programmers just kind of working naturally, they might document something they're about to code, as a way of understanding what they're about to do, and then they'll check that in as the documentation, and what they do is maybe 25% different.  And that code is going to change 14 different times, and that piece of documentation is still going to be checked in - that wrong piece of documentation.

Jeff: Yes, it doesn't stay in lock-step with the code. At least, the unit tests, if you break something, you kind of have to...

Joel: You kind of have to, yeah.  But the unit tests don't tell you enough.  They tell you about things in the small that you could figure out by looking at the code, or by reading the comment in front of the function that explains what the function does.  I dunno, sometimes they may clarify something, and they may be useful.  There's just a whole bunch of clues that you're gonna have to get.  There's some other stuff that's sometimes kind of weird like, I found that if you have a database, and you don't carefully document every column, that after a year or two you start to have a really, really brittle world.  So somebody'll make, um... whatever your application is, some table that has the most columns and is the most central to your application.  Y'know it's like the StackOverflow questions, or the user table, or whatever. It's got 48 different columns, and it's really, really kind of crucial, and 15 of those columns are a little bit mysterious.  Somebody put them in because they wanted to basically hang their data onto a user, or a question or whatever...  And if you don't ever document those, you just sort of throw them in there, then what you'll find is that people will write code, and it'll create new users without setting appropriate defaults, and other people won't keep those columns up to date, and just stuff will break, because those columns are not well-understood.

Jeff:  Right.  So maybe what you're trying to say is just document the core, the center.

Joel: Uh, document the data structures, at least, is the most crucial ...  data structures, and your tables and columns and stuff like that.  The most important thing is just to have very, very tight documentation of...

Jeff:  But start at the center, I think that's a good observation.  Like, find the center and document the crap out of that.  And, the center in terms of data structures, specifically.  That's a good idea.

Joel: And you can even, at some point you wanna have the new developer's guide, which you should maintain up to date.  There's something we've done, I don't know if we're still doing it, but a policy I used to have is there'd be a new developer's guide that'd say, here's how you get a checkout, here's how you set up the tools that you're going to be needing, just to compile, and here's how to get you to the point at which you can edit any file in our source code and cause there to be a compiled version and test it and debug it under your debugger.  And maybe even deploy it.  So like, the minimum, like, how the hell do I work with this code.  Not even, what does it do, or where is the code, or whatever, just how do I work with this body of code in this situation: how do I check things out, what passwords do I have, what compilers do I need, what tools do I need in my PATH, what environment variables do I have to set, all that kind of stuff.  And just like everything else, that stuff gets out of date pretty quickly, and nobody maintains it.  So you have a rule that the new guy has to use that documentation to get started, and every time they find a mistake, the new guy is responsible to fix it.

Jeff: That's a good idea.

Joel: So, at least every time a new guy joins, it gets refreshed, to be up to date.

Jeff: Yes. I like that.  Well, I think we have some good tips answering that question...

Joel: But just the idea of documentation makes me want to cry, because it really is impossible to... And when I think about writing verbosely, like the way you and I write our blog posts, where you actually try to explain everything in a way that somebody who's patient and reads will understand it, and then you see the way people have gotten to be reading on the internet, where they're just skipping forward, they're ignoring paragraphs, they're just jumping from one pretty bullet list to the next pretty bullet list, they're in Twitter mentality, they don't sit patiently and read your documentation. Even if it is going to save them, they will not read it. They will just skip to, y'know, interesting little pictures and blobs and blurbs and stuff like that on the page.

Jeff: You know Joel, I didn't even listen to any of that, 'cause I was browsing the internet.

 

...

 

 

Robert: (T+ITC=39:30) Hi Jeff, Joel. In an effort to drive improved task estimation in the future, the dev team I work on has tried a few times to start the habit of logging time against issues in our tracker. Unfortunately, like many New Year's resolutions, the habit quickly gets forgotten before enough useful information is collected. At the Cambridge dev days my colleague asked one of the FogBugz guys how their team was motivated to keep such information up to date. The answer was that they were keen to test and tune their own evidence based scheduling code. Obviously this is a special case and doesn't transfer well to other projects. So in the spirit of getting away from "6 to 8 weeks", and towards a more engineered approach, what blend of carrot and stick do you recommend to motivate or manipulate our developers into maintaining an accurate work log? Or are we going about this all the wrong way? Thanks, Rob.
 
Jeff: That's right up your alley Joel.
 
Joel: Yes! Okay, how do you... what was that? I was reading my email. Well, I wasn't really, I was just sort of glancing over each one, jumping from topic to topic. The question is, "How do you track time?"
 
Jeff: I think this is a good question. Garbage in, garbage out. If you don't have enough information about how long it took to complete things you can't get better data out.
 
Joel: The first thing we discovered is you don't need precise information. In other words, it doesn't have to be "you know, I spent 24 minutes on this and 76 minutes on that." It can be accurate to plus or minus, like, a 1/4 day, 1/2 day, a day, those kinds of things. And I've gotten the best results personally, like the least amount of work you have to do, I think, assuming that you don't have a tool -- there are tools that will watch what you check in, and that's not a bad idea. So if you have a tool that watches what you check in, and in the check-in you designate what feature you were working on as part of that check-in, like what bug it was for, what feature it was for, and the tool just assumes basically that all the time spent was doing that, you'll probably get really, really good results without any additional work. So all you have to remember is that when you check stuff in, you have to flag it with a feature number. That's the easiest way. Now, absent that, I personally have had really good luck with, at the end of the day, just going into a spreadsheet or whatever and allocating eight hours to the things that I knew I worked on that day. So what I would do is, as the day went on, I would make a little note to myself, "alright, I worked on feature 27 and feature 28 and feature 31," and at the end of the day just sort of decide how to spread 8 hours across those three features. Like, "Well, this one took me most of the time, so I'll give it 6, and I'll give the other two 1 hour each." And it's a little bit sloppy, but that's okay -- you're not looking for timer-accuracy, unless you're billing these things to clients, which is unlikely. And almost all of the reasonable algorithms you look at, whether it's evidence based scheduling or something simpler, will generally withstand lack of precision and accuracy as long as you get something kind of approximately right.
Because what we're really trying to do is figure out those things that you thought would take one hour ended up taking four weeks, or those things that you thought would be done in September were actually done in May and that's what's important to learn from.
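The check-in-tagging idea Joel describes can be sketched in a few lines; the data shapes here are invented for illustration, and real tools would read timestamps from the version control log:

```javascript
// Hypothetical sketch: charge all wall-clock time since the previous
// check-in to the feature tagged in the current check-in. Times are
// hours since midnight for simplicity; the first entry just marks the
// start of the day and is charged to nothing.
function timePerFeature(checkins) {
  const totals = {};
  for (let i = 1; i < checkins.length; i++) {
    const elapsed = checkins[i].time - checkins[i - 1].time;
    const feature = checkins[i].feature;
    totals[feature] = (totals[feature] || 0) + elapsed;
  }
  return totals;
}

// Example: start at 9:00, check in feature 27 at 1:00 PM, feature 28 at 5:00 PM.
const totals = timePerFeature([
  { time: 9 },
  { time: 13, feature: "27" },
  { time: 17, feature: "28" },
]);
```

The only discipline this requires of the developer is tagging each check-in with a feature or bug number.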
 
Jeff: (T+ITC=42:55) Well, I'm curious about this check-in thing, 'cause that seems so painless, the whole check-in approach. What would they be giving up by going with that really simple route?
 
Joel: Nothing. I talk about this in the article about evidence based scheduling that I wrote on my blog. But the basic assumption is, let's take a huge simplification, let's just imagine. Let's assume that I am a programmer -- so there are a lot of simplifications in here. And let's assume that during the course of the day, mostly in the morning until a little after lunch, I'm implementing a big old feature, and then in the afternoon I'm fixing a big old bug from yesterday's feature. So I've got a big old feature and a big old bug, and now if I record that as 4 hours and 4 hours, that's going to be pretty accurate. Now let's just say that from 10:00 to 10:15 I had to deal with some idiotic thing, who knows what it was, like I had to re-install my VMware workstation thingamajiggy, or, like right before today's phone call, Skype decided not to recognize the microphone, or whatever the situation may be. And so you might think to yourself, "Oh gosh, I want to record that I just wasted 15 minutes on some stupid thing," as separate from working on that feature this morning, because it's not really right to take that wasted time and charge it against that feature, and you're like, "that feature should have only taken 2 hours." But the correct thing to do is actually just to charge that time to the feature, even though it was not related to that feature.
 
Jeff: That's ok; that comes out ok?
 
Joel: That comes out ok because what you want is elapsed, wall-clock time. You don't want CPU time. Let's say that of the four hours that I spent in the morning on that feature
 
Jeff: So you're really just capturing how much task switching you're doing.
 
Joel: But you don't need to capture that. So you've got this feature that you do, it took you all morning, it took you from 9:00 to 1:00 to do it because you spent 15 minutes writing the code, and the rest of the 4 hours dealing with idiotic shit that happens all day long because life sucks.
 
Jeff: But that's the beauty of this, I see what you're saying, this totally works
 
Joel: It's the 4 hours that matters, not the 15 minutes.
 
Jeff: But that's just the way that your work day goes, that's standard. It just takes you that long to do a 15-minute feature.
 
Joel: Any 15 minute feature is going to take that long.
 
Jeff: You have all these task-switches that you have to do.
 
Joel: Or whatever it is, because people interrupt me and I'm responsible for the espresso machine.
 
Jeff: The only advantage then to breaking it out would be if you're trying to figure out if your team is doing too much task switching.
 
Joel: But you can just look; you can do that one day. What really happens, is that the reason everything seems to take forever is that everybody says "Well that seems like a 15 minute feature", and then they put down 15 minutes when they know perfectly well that in real life they would maybe do it in 15 minutes and then they would surf the web for an hour and then some emergency would happen and they would have to re-install their compiler, and then the phone would ring, and then they'd have an interview.
 
Jeff: Or Skype would stop working.
 
Joel: And Skype would stop working, all that stuff. And so, what you actually want to do is measure wall-clock time. And say, "Okay, fine, four hours. 'Cause I can't count it against any other feature, so I'll count it against this feature." And now you've got all these 15-minute features that you estimated at 15 minutes and they're really taking 4 hours, and you start to realize that stuff takes 16 times longer than you thought; or however long it really takes.
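The arithmetic here is the core "evidence" in evidence based scheduling, reduced to its simplest form -- per-task velocity, estimate divided by actual wall-clock time (the numbers below are just Joel's worked example):

```javascript
// Sketch: velocity = estimated hours / actual wall-clock hours.
// A "15 minute" feature (0.25h) that really took 4 hours has velocity
// 0.0625, i.e. work takes 16x longer than estimated.
function velocity(estimateHours, actualHours) {
  return estimateHours / actualHours;
}
```

A history of velocities like this is what lets a scheduler turn optimistic estimates into realistic ship dates.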
 
Jeff: And then the next question to ask, which would require a little bit more data collection is like "How do we actually fix that?" And you would just ask, like you said, you wouldn't really need fancy data collection tools, you could just ask "Why did this take 4 hours?" Like, wow.
 
Joel: Yeah, and sometimes it's totally legit that it took four hours. You're doing stuff that has to get done. It doesn't feel like it's part of that feature, and sometimes it happens, and that's the beauty of evidence based scheduling, I think: sometimes it happens, and sometimes it doesn't. Sometimes you just sit down, and you take 15 minutes, and it's done, and then you move on to the next feature and you don't go read Hacker News or blog.stackoverflow.com or whatever it is that you might have wasted time on.
 
Jeff: Right. Well, I think that was a great answer. It was right up your alley. I'm encouraged that we got a nice simple answer out of that, which was: as you check in, just assume all the time since the previous check-in. That's awesome, it's great. I'm ready to sign up.
 
Joel: Ok cool. Well we'll put that into your version control system. Ok here we go, ok this is a good question.
 
Chap (T+ITC=48:00): Hey Joel and Jeff, this is Chap Ambrose in Philly. I was wondering how you guys prioritize features and functionality for your products, how you decide what to spend your time on and what's worth doing, and also what the differences are between how StackOverflow does that and, Joel, Fog Creek and FogBugz and all that. Thanks.
 
Jeff: That's a really good question. I think you should start. Because, your product being way more commercial, I'm really curious about how you guys do that.
 
Joel: Okay. It's kinda weird, because sometimes it's a gut kinda reaction. But in general, whenever we're about to start a major new release, we get around the table a bunch of people from every possible... well, first what we do is we make a list of every major item on the backlog that we're even considering, and that's usually a couple of pages. And we don't spec them out at that point, we just sort of list them, and historically with FogBugz we usually wind up with something like a hundred things on that list that we're considering, relatively major things that we might want to do. The next thing we do is the developer team -- you get a bunch of developers together, and you go through that list very quickly, doing what we call T-shirt sizing, where for each feature you say Tiny, Small, Medium, Large, Extra Large, XX-Large, XXX-, you know, you can put in a lot of X's.
 
Jeff: (laughs) The more X's the better. The more Z's as well.
 
Joel: Yes. "I use scientific notation for the size of my T-shirt." Anyway, and then we roughly translate those into one day, one week, one month, three months, six months, that kind of thing. And it doesn't have to be accurate at this point, because this is not estimating, this is just deciding "We can either do this, or five of these things." And it only has to be very, very approximate at this point. So we come up with a cost for each of these features, in time.
 
 
Jeff: Can I interrupt... I've found that we do internally do t-shirt sizing, and sometimes we're radically off.
 
Joel: But are you ever off by like, like...
 
Jeff: Sometimes we're, I'd say, an order of magnitude off.
 
Joel: It's ok to be an order of magnitude off as long as things are relatively accurate.
 
Jeff: Ok
 
Joel: So you never say, "This is extra large and that's extra tiny." All right, so the extra tiny might actually have been kind of large, but the extra large thing would be monumental in that case. You know what I mean? You do definitely know, "This would be a major renovation of these four things, and that's just a quick little one-off, possibly only three lines of code."
 
Jeff: Ok, I don't want to interrupt, so please proceed.
 
Joel: So you get these little costs for things that are very approximate, and then what we do is, we get a team of people to sit around a table, with representatives from basically every branch of government -- so we have salespeople, marketing, developers, testers, program managers, executive management, pretty much we try to get everybody involved at that point --
 
Jeff: Mmm-hmmm.
 
Joel: We try to have representatives from everyone involved. And we give them all a certain number of "dollars" to "buy" features. And there's a price for each feature. So the really large features might be, you know, $0.50, and the really small features might be a dime (i.e., $0.10), and the tiny features might be a penny (i.e., $0.01), and they get [for example] $2.50 to spend any way they want.
 
Jeff: Mmm-hmmm.
 
Joel (51:49): Or you can give them a dollar; it's simpler. (And [then] you make the really large features, you know, $0.50, and the smaller features $0.25.) And then everybody has to basically go through this list of features, and "purchase" their own selection of what they want to purchase with their dollar.
 
Jeff: Mmm-hmmm.
 
Joel (52:03): And then you add up how much money was spent on each feature, and you divide it by the cost, so...
 
Jeff: How many people are involved in this process?
 
Joel: Maybe... It depends on how big your company or your team is, but we've...
 
Jeff: In your case...
 
Joel: We've done it with... There was a version of FogBugz where we did this with nine or ten people, I think.
 
Jeff: Wow.
 
Joel: Ummm...Again, both of these exercises, the t-shirt sizing and then the paying for features, are like afternoon exercises. It's not like...
 
Jeff: Well, but you guys are cheating a little bit, because you guys use FogBugz internally. I mean, what if you were doing this for a product that you didn't even really use?
 
Joel: Ahh...
 
Jeff: I mean, that would be tougher right? You'd have to have, like, actual users come in and have them do this right?
 
Joel: No, because your salespeople know what people are asking for.
 
Jeff: Ok, so the salespeople... Wow, we're putting the salespeople in charge of stuff? Now that's scary.
 
Joel (52:50): Well, 'cuz the salespeople definitely... yeah, I mean, they have their things where they know, when they're trying to sell something, they keep getting questions A, B, and C, and that's why they can't get their commission and make the sale. And they're going to spend all their money on features A, B, and C.
 
Jeff: <laughs>
 
Joel: And the programmers, all they ever want to do is rewrite the whole thing in Python, or Ruby or something, which is of no benefit to anybody whatsoever, but makes them happy, so they're going to spend their money on... So that's why you want a selection of different people with different attitudes, you know...
 
Jeff: Sure.
 
Joel: ...to pay for things.
 
Joel (53:19): So anyway, what you wind up with then is, you take how much has been spent on each feature and divide it by its cost, and you see which things got the most funding relative to what they cost. So you might have something that only costs $1 but it got $3 worth of funding, 'cuz everybody wants it so much, and you may have something else which only costs a dime but got zero funding 'cuz nobody thought it was important. And so you sort by that, and those things end up at the bottom of the list, and now you have a prioritized sort order of things that you want to do, and you just sort of bite off some amount of time, and you do as much stuff as you have time for, in that order of priority.
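The whole exercise Joel walks through can be sketched in a few lines: stakeholders allocate their budget across priced features, the allocations are summed, and features are ranked by funding divided by cost. Every feature name, price, and vote below is invented for illustration.

```python
# A toy version of the "buy a feature" exercise Joel describes.
# All features, costs, and votes here are made up.

# Price of each candidate feature, scaled so big things cost more.
costs = {
    "offline mode":    0.50,
    "better search":   0.25,
    "rewrite in Ruby": 0.50,
    "tweak tooltip":   0.01,
}

# Each stakeholder gets $1.00 to spread across features however they like.
votes = {
    "sales": {"offline mode": 0.50, "better search": 0.50},
    "dev":   {"rewrite in Ruby": 0.50, "better search": 0.50},
    "pm":    {"better search": 0.60, "offline mode": 0.40},
}

# Sum up how much total funding each feature attracted.
funding = {feature: 0.0 for feature in costs}
for allocation in votes.values():
    for feature, amount in allocation.items():
        funding[feature] += amount

# Rank by funding relative to cost: heavily over-funded features
# float to the top, unfunded ones sink to the bottom.
ranked = sorted(costs, key=lambda f: funding[f] / costs[f], reverse=True)

for feature in ranked:
    print(f"{feature}: cost ${costs[feature]:.2f}, "
          f"funded ${funding[feature]:.2f}")
```

With these made-up numbers, "better search" wins (everyone funded a cheap feature) and "tweak tooltip" sinks to the bottom with zero funding, which is exactly the sorting behavior Joel describes.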
 
Jeff (53:43): Well, I think that's... I think I've [you've?] written about that before. I mean, that's a fun kind of planning exercise too, to have money to spend on features.
 
Joel: It's kind of... It works great for, you know, getting from version 3 to version 4, right? It's good to have some customers at that point already, uh... (as opposed to starting from scratch)
 
Jeff: I'm a little uncomfortable... that the salespeople are representing the customers makes me a little uncomfortable.
 
Joel: Well, you've also got a program manager who is supposed to be responsible for your user interface and stuff like that, and you've got executives. People care about making a good product for their customers. You hopefully have some people, their representatives, whose job it is to understand what the customers are trying to do and what it is that they want.
 
Joel: Now there's a much... like I said, this is a great way to go from version 3 to version 4. You already have some customers, the salespeople are already talking to them, and you're just looking for your next round of features.
 
Jeff: Mmm-hmmm.
 
(54:44)
 
...

 

Last Modified: 4/10/2012 11:17 AM
