Friday, March 13, 2009

Several of the sessions I've attended here at SD West 2009 cover the same theme: software quality. There are many practices that can ensure quality; most of them boil down to the same thing - feedback.

  1. Requirements analysts can build prototypes to get rapid feedback from usability tests before building any code - time frame: hours or days

  2. Developers can write a unit test before each bit of production code to get quick feedback (see the first sketch after this list) - time frame: minutes

  3. Business analysts can pair with QA roles to develop acceptance criteria / acceptance tests for each user story - time frame: hours or days

  4. Developers can run continuous integration test suites to get feedback on whether they broke existing functionality - time frame: seconds or minutes

  5. Executives and product owners can communicate their vision for the product or feature, so the team can work toward common goals. Team members can then constantly validate what they are working on and make design choices based on a shared understanding of priorities - time frame: continuous

  6. QA teams can do exploratory testing on working code to find subtler bugs, usability problems, spelling and grammar mistakes - time frame: hours or days

  7. Developers can peer review each other's code to find code smells and spot potential bugs early - time frame: hours

  8. Developers in some languages can use strong type systems to their advantage, strongly typing parameters and return values to validate input and output (see the second sketch after this list) - time frame: instant

  9. Developers can use static analysis tools that help find places where code is likely to do something other than what was intended. Some of these tools ship with your compiler or IDE, but you can always get more detailed ones, and you can write your own to enforce local coding standards. These tools can be part of your continuous integration - time frame: seconds or minutes

  10. Agile teams can estimate task sizes to identify risks as early as possible - time frame: 1 hour

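To make item 2 concrete, here is a minimal sketch of test-first development in Java with JUnit 4. The PriceCalculator class and its discount rule are made up for illustration - the point is the order of the steps:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 - written first: this fails (it won't even compile)
// until PriceCalculator exists and implements the rule.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountToOrdersOverOneHundred() {
        assertEquals(180.00, new PriceCalculator().discountedPrice(200.00), 0.001);
    }

    @Test
    public void leavesSmallOrdersAlone() {
        assertEquals(50.00, new PriceCalculator().discountedPrice(50.00), 0.001);
    }
}

// Step 2 - just enough production code to make the tests pass.
class PriceCalculator {
    double discountedPrice(double price) {
        return price > 100.00 ? price * 0.90 : price;
    }
}
```

The feedback loop is the point: watch the test fail, write just enough code to make it pass, refactor, repeat - a cycle measured in minutes.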

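And for item 8, a sketch of what strongly typed parameters and return values buy you: wrap a primitive in a small value type that validates on construction, and the compiler rejects bad call sites instantly. EmailAddress is a hypothetical example type:

```java
// A value type that validates on construction: once you hold an
// EmailAddress, you know it is well-formed. A method declared as
//     void sendWelcomeMessage(EmailAddress to)
// cannot even be called with a raw, unvalidated String.
public final class EmailAddress {
    private final String value;

    public EmailAddress(String value) {
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("Not an email address: " + value);
        }
        this.value = value;
    }

    public String asString() {
        return value;
    }
}
```
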
One thing that was emphasized over and over was that developer and tester are two very different roles. Developers want to write code that works; testers want to break it. Developers will always need that foil: someone trying to break their code with techniques just as creative as the ones used to write it in the first place.


TDD is a method of development and code design, not a method of testing. People often get this mixed up because of the name. Even if TDD reduces your bug count by 95%, your QA team will have plenty of work to fill their time, doing what should have been their role in the first place - exploratory testing: testing the unknown-unknowns (everyone who said this first felt the need to apologize for the Rumsfeldian phrase - you know they must really mean what they are saying, because nobody goes around quoting Donald Rumsfeld just for the fun of it).

Tuesday, March 10, 2009

SOLID with Uncle Bob


This morning I attended a course on class and component design by Robert C. Martin.

Class Design

You can read about the SOLID Principles here, but he emphasized a couple of things that would help in applying the principles:

  1. Objects don't really model the "real world", because the real world lies to us: real-world objects would violate the Single Responsibility Principle, and the Liskov Substitution Principle means real-world IS-A relationships do not imply inheritance between classes, only "is-substitutable-for" relationships (see the sketch after this list)

  2. I asked if this conflicts with Domain-Driven Design in any way. It doesn't, because the DDD people still model your domain in a fine-grained enough way to satisfy the Single Responsibility Principle

  3. You won't be able to predict what will and will not change, so don't spend too much effort trying. Instead, use TDD to develop simpler class structures, and refactor toward Open/Closed later, as requirement changes come up

  4. These are heuristics - instead of trying in vain to follow them all of the time, you should design "above the line" and "below the line" classes

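The classic illustration of the IS-A point in item 1 is the Rectangle/Square example. Geometrically a square is a rectangle, but as classes, Square is not substitutable for Rectangle - a sketch:

```java
class Rectangle {
    protected int width;
    protected int height;

    public void setWidth(int width)   { this.width = width; }
    public void setHeight(int height) { this.height = height; }
    public int area()                 { return width * height; }
}

// The real world says a square IS-A rectangle, but this subclass
// breaks substitutability: setting one side silently changes the other.
class Square extends Rectangle {
    @Override
    public void setWidth(int width) {
        this.width = width;
        this.height = width;
    }

    @Override
    public void setHeight(int height) {
        setWidth(height);
    }
}
```

Any code written against Rectangle's contract now misbehaves when handed a Square: set the width to 5 and the height to 4, and area() returns 16 instead of the 20 the contract promises.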

Above the Line:

  • abstract classes

  • business logic

  • closed for modification, except for the relatively uncommon case where you are changing the business rules encapsulated in them

  • open for extension, by concrete classes below the line

  • abstract factories

  • more stable code, changed less often, and harder to change because more code depends on it


Below the Line:

  • implementations of the abstractions above the line

  • messy things, like dependencies on external services, databases, GUIs, printers

  • e.g. changing a dependency from a database call to a web service call would not change the business logic above the line; you would just change a factory somewhere to sub in the new dependency for the old one (see the sketch after these bullets)

  • factory implementations

  • less stable code, which changes often because you want it to change often, and is easier to change because little depends on it

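Here is a minimal sketch of that database-to-web-service swap, with hypothetical names. The business logic sees only the abstraction above the line; only the factory knows about the concrete classes below it:

```java
class Customer { }  // stub domain object for this sketch

// Above the line: the abstraction the business logic depends on.
interface CustomerRepository {
    Customer findById(long id);
}

// Below the line: the messy implementation details.
class DatabaseCustomerRepository implements CustomerRepository {
    public Customer findById(long id) {
        return null; // imagine SQL here
    }
}

class WebServiceCustomerRepository implements CustomerRepository {
    public Customer findById(long id) {
        return null; // imagine an HTTP call here
    }
}

// Swapping the database for the web service means changing this one
// factory; the business logic above the line never notices.
class RepositoryFactory {
    static CustomerRepository create() {
        return new WebServiceCustomerRepository(); // was: new DatabaseCustomerRepository()
    }
}
```
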

Interesting point: if an abstract factory method accepts an enum parameter to determine the runtime type of the return value, then the Above the Line code knows about the implementing types, which is not desirable. If you just pass in a string, then you have weak typing, at least in that one place, but you'll be following SOLID (a sketch follows). Ok, I lied about this point being interesting.
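
To make that concrete, a hypothetical sketch of the string-keyed version - a registry means above-the-line callers never name a concrete type, while the string key means a typo fails at runtime rather than at compile time:

```java
import java.util.HashMap;
import java.util.Map;

interface Shape { }  // the assumed above-the-line abstraction

public class ShapeFactory {

    public interface ShapeMaker {
        Shape make();
    }

    // An enum key here (CIRCLE, SQUARE, ...) would force above-the-line
    // callers to compile against the full list of implementations; a
    // string key keeps them ignorant of the concrete types.
    private final Map<String, ShapeMaker> makers = new HashMap<String, ShapeMaker>();

    public void register(String name, ShapeMaker maker) {
        makers.put(name, maker);
    }

    public Shape create(String name) {
        ShapeMaker maker = makers.get(name);
        if (maker == null) {
            throw new IllegalArgumentException("Unknown shape: " + name);
        }
        return maker.make();
    }
}
```

Below-the-line code would register its implementations at startup, so only it ever names the concrete classes.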

Component Design - depending on your language, this refers to packages, JAR files, DLLs, projects - whatever you call the group of classes you build and deploy together

  • No cycles in the dependency graph - C# enforces this for us; other languages do not

  • The other three main rules - the Common Closure Principle (CCP), the Reuse/Release Equivalence Principle (REP), and the Common Reuse Principle (CRP) - contradict each other. Early in a project's life, use more CCP. As a project matures, you'll need to move classes into different components based on REP and CRP

  • Components should be independently deployable. Versioned components are a good thing - release a new version while keeping the old version up and running for a while, during which its dependents release new versions of themselves that use the newer dependency

  • Independent deployability does not mean you must deploy independently to production; it just means different teams can develop components separately, deploy them to test servers separately, and have independent CI builds.