Peter Coffee raises a good question: at what point does a software/hardware project go from "tons of bugs" to "good to go"?

As for me, I have to ask: who makes these decisions, and what say do we software developers have in the matter?

Top-tier toolmakers may not be their own best testers.

By Peter Coffee
Like many others who'd long awaited Microsoft's Visual Studio 2005, I felt more than a little let down when that development suite shipped without the full-spectrum collaboration tools that we'd been told to expect among its most distinctive improvements. Microsoft now assures us that the Team Foundation Server (TFS) is forthcoming: sources tell eWEEK's Darryl Taft to expect it in the first quarter of this year.
I get a queasy feeling, though, from a combination of comments by Visual Studio Team System Lead Program Manager Jeff Beehler, who told us all on his blog last week that (i) "we've been fixing tons of bugs" and (ii) "we're only fixing the most critical of issues to help prevent regressions."
Does that give anyone else a sense of "uh-oh"? There's plenty of room for debate about the precise behavior of bug discovery rates as the number of remaining defects in code shrinks down, but I don't know of any model that estimates a sharp and sudden cutoff between "tons of bugs" and "good to go."
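To put a shape on that point: the classic reliability-growth models, such as Goel-Okumoto, predict a defect-discovery rate that decays smoothly toward zero as the pool of residual defects shrinks, with no knee in the curve where "tons of bugs" flips to "good to go." A minimal Python sketch of that model follows; the choice of model and the parameter values (500 latent defects, a 0.1/week detection rate) are illustrative assumptions of mine, not figures from Microsoft or from any real project.

    import math

    def expected_defects_found(t, a, b):
        # Goel-Okumoto: expected cumulative defects discovered by time t,
        # where a = total latent defects and b = per-defect detection rate.
        return a * (1.0 - math.exp(-b * t))

    def discovery_rate(t, a, b):
        # Instantaneous discovery rate lambda(t) = a * b * exp(-b * t),
        # which tapers gradually rather than cutting off sharply.
        return a * b * math.exp(-b * t)

    a, b = 500.0, 0.1  # hypothetical numbers for illustration only
    for week in (0, 4, 8, 16, 32, 64):
        found = expected_defects_found(week, a, b)
        rate = discovery_rate(week, a, b)
        print(f"week {week:2d}: ~{found:3.0f} of {a:.0f} defects found, "
              f"rate {rate:4.1f}/week")

Run it and the rate falls from 50 bugs/week at the start to a fraction of a bug/week by week 64: a long, smooth tail, which is exactly why "we've been fixing tons of bugs" sits uneasily next to an imminent ship date.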
Even assuming that the quality of the code will meet our hopes--and even, perhaps, exceed our expectations--I was struck by another assumption implied in comments about the imminent Team Foundation Server release. Darryl's story mentions a blog post by Microsoft developer division VP "Soma" Somasegar, citing the degree to which the team that's building the company's life cycle tools is using those tools itself--"eating its own dog food," as the common saying goes.
Read the rest of Peter's column.
http://www.eweek.com/article2/0,1895,1914426,00.asp