Agile Brazil 2010 – First day

I just had the opportunity to participate in Agile Brazil, our national conference on agile methods. It was worth attending: I heard some good talks and also enjoyed traveling to Porto Alegre. In this post, I’ll summarize some of the talks from the first day of the conference.

Martin Fowler opened the event with a keynote encompassing three topics: the essence of agile, the value of software design, and continuous integration and delivery. Agile methods appeared as an attempt to change the chaotic situation software development was in during the 90s: late projects, unhappy users, buggy software, etc. People from the Smalltalk community, Scrum, FDD, XP and others got together and created the Manifesto for Agile Software Development. Martin mentioned the semantic diffusion problem – how ideas can quickly get diluted – so it’s important to keep restating the key points. He then compared agile with plan-driven approaches to software development. The former emphasizes adaptive planning (1 – release early and often; 2 – learn), whereas the latter values predictive planning (success = conformance to the plan). Predictive planning has its strengths, but it also has weaknesses, and its major drawback is depending on requirements stability. The question is: in software development, how stable can our requirements be? Plan-driven approaches try to stabilize requirements by using change control, sign-offs and up-front requirements gathering. In adaptive planning, by contrast, the plan is seen as a tool rather than a prediction: “a late change in requirements is a competitive advantage” (Mary Poppendieck). Adaptive planning gives rise to evolutionary design, which involves self-testing code, continuous integration, refactoring and simple design. Martin also compared the two approaches with regard to people versus process. Agile methods put people first, while plan-driven approaches put processes first: they design the process first and then “slot the people in”, failing to realize that “a bad process will beat a good person every time” (W. Edwards Deming).

Martin went on to his next short talk, about the value of software design. People sometimes cut down on quality to add more features to a piece of software in less time. Quality has two dimensions: external quality and internal quality. The former is visible to users and customers and comprises things like a pleasant user interface and few defects. The latter comprises good modular design. The reason we care about internal quality is what Martin calls the design stamina hypothesis. It comes down to how much effort it takes to add new features: with good design, over time we can keep adding features quickly and cheaply. He also talked about code base complexity (essential complexity and accidental complexity) and the metaphor of technical debt. In particular, he emphasized the technical debt quadrant and reminded us that a mess is not technical debt. Just like in our finances, sometimes it’s necessary to take on a prudent amount of debt. “The very best team will produce technical debt, we need to manage the quality of our debt”, he concluded.

Martin’s last talk was about Continuous Integration and Delivery, which I have already commented on in a previous post.

Alisson Vale talked about Kanban, a process based on the Toyota Production System whose main idea is to limit the Work in Progress (WIP). He talked about the differences between manufacturing and knowledge work, and said the essence of this difference lies in the concept of variability. In manufacturing, variability is always harmful and needs to be avoided. In knowledge work, variability is inherent to the activity and we need to adapt to it. Kanban’s principles can be applied to software, but the way they are implemented needs to be adapted to the context and nature of the work. When we speak about Kanban in software development, we have three guidelines: 1 – make work visible, 2 – limit WIP and 3 – help work flow. Alisson also stressed that Kanban is not a methodology for process management; it requires a pre-existing process/methodology. Kanban is a starting point for continuous improvement.
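To make the “limit WIP” guideline more concrete, here is a minimal sketch of my own (not from Alisson’s talk) of a board column that refuses to pull new work once its limit is reached:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A minimal, hypothetical sketch of a Kanban column with a WIP limit:
// new work can only be pulled in while there is capacity left.
public class KanbanColumn {
    private final String name;
    private final int wipLimit;
    private final Deque<String> items = new ArrayDeque<>();

    public KanbanColumn(String name, int wipLimit) {
        this.name = name;
        this.wipLimit = wipLimit;
    }

    /** Tries to pull a work item into this column; refuses when the WIP limit is reached. */
    public boolean pull(String item) {
        if (items.size() >= wipLimit) {
            System.out.println(name + " is at its WIP limit (" + wipLimit + "), cannot pull: " + item);
            return false;
        }
        items.add(item);
        System.out.println(name + " pulled: " + item);
        return true;
    }

    /** Finishes the oldest item, freeing capacity so work can flow again. */
    public String finishOldest() {
        return items.poll();
    }

    public static void main(String[] args) {
        KanbanColumn inProgress = new KanbanColumn("In Progress", 2);
        inProgress.pull("Story A");
        inProgress.pull("Story B");
        inProgress.pull("Story C");   // rejected: the limit makes the bottleneck visible
        inProgress.finishOldest();    // finishing work frees capacity...
        inProgress.pull("Story C");   // ...and now the item flows in
    }
}
```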

David Hussman talked about products and people over process and technology. He gave an informal talk with some good advice. First, he pointed out the idea of learning by comparing and indicated three “agile” books: The Black Swan, Freakonomics and Blink. He also mentioned The Checklist Manifesto – how to get things right. David said we have three kinds of problems: the simple (e.g. preparing a cake), the complicated (e.g. sending a rocket) and the complex (e.g. being a parent – though for this last one we have the biblical advice in Proverbs 22:6). Complicated problems are difficult but have a solution. Complex ones have no defined solution. Agile methods help us deal with complex problems. What do we know about the people we’re making software for? We have two aspects to consider: Discovery (what and why) and Delivery (how and when). Are we building the right thing? Are we doing it in time? Most companies are only concerned with the last question. Our focus should be on continuous product learning, and the question we should keep asking ourselves is: “why are we doing what we are doing?” David concluded with the equation: value = why / how.

Paulo Camara and Daniel Vieira Magalhães, from Ci&T, talked about how it is possible to have a lean software architecture. The idea is to avoid BDUF and “see the whole”. Is it useful to have an architect? In their view, yes, it is, but what we need is an architect with a different profile: the person who brings the necessary skills (technical knowledge) and experience. This architect should be 100% hands-on, a leader rather than a boss, an example, a reference, and his major ambition should be to become dispensable (this reminded me of this article). On their projects they like to run what they call “setup sprints”, one- or two-week sprints whose objective is to address the “high stake constraints”. For the production sprints, the focus is on collaborative design: architecture activities are tasks associated with stories, and architecture needs to be a competence of the whole team. Lastly, they quoted some lean principles found in Mary and Tom Poppendieck’s book Lean Software Development (eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible, etc.). I was really glad to see agile gaining popularity in a CMMI level 5 company.

Rodrigo de Toledo and Daniel Teixeira talked about their experience of three years using Scrum on a research and development project for the oil industry. The system to be developed was for integrated 3D visualization for exploration and production. The project started in 2006 and they spent one year just discussing what the system would be like – one year with no software! By that time, some team members had the opportunity to learn Scrum and set out to use it on the project. Still, they had a challenge: how to do R&D with Scrum? They came up with research stories. These stories had limited scope; they were time-boxed (e.g. one or two weeks to research) and paper-boxed (e.g. read the three most relevant papers on the topic). The acceptance criteria for these stories included documentation generated in a wiki and the capability to assess the corresponding development story.

Mauricio Aniche talked about common deviations when applying Test-Driven Development (TDD). He started off by reminding the audience that TDD is about design and that to do TDD effectively it’s necessary to make dependencies explicit. He conducted a survey asking many developers, both experienced and beginners with TDD, how often (all the time, rarely, never, etc.) they fell into certain deviations. The deviations he asked about were: 1 – not seeing the test fail, 2 – forgetting to refactor, 3 – refactoring another part of the code, 4 – not starting with the simplest test first, 5 – running only the current test, 6 – writing complex tests, 7 – not implementing the simplest code to make the test run, 8 – giving unclear names to tests and 9 – not refactoring the test code. The most common deviations were: writing complex tests, forgetting to refactor and refactoring another piece of code. He concluded by saying we should take our experience into account when deciding whether or not to break a TDD principle, but we should be careful, since breaking the rules all the time can be dangerous. More information about his survey can be found here.
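As a refresher on the discipline those deviations break, here is a tiny, hypothetical JUnit example (mine, not Mauricio’s) following the classic cycle: simplest test first, see it fail, simplest code to make it pass, then refactor – including the test code and its names:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A made-up example of the TDD cycle: start with the simplest test, watch it
// fail (red), write the simplest code to pass it (green), then refactor,
// keeping test names clear and intention-revealing.
public class ShoppingCartTest {

    // Step 1: the simplest test first -- run it and see it fail before writing code.
    @Test
    public void totalOfEmptyCartIsZero() {
        assertEquals(0, new ShoppingCart().total());
    }

    // Step 2: only after the first test passes, add the next simplest case.
    @Test
    public void totalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(10);
        cart.add(25);
        assertEquals(35, cart.total());
    }
}

// Step 3: the simplest production code that makes the tests pass;
// refactoring (of both production and test code) comes once the bar is green.
class ShoppingCart {
    private int total = 0;

    void add(int price) {
        total += price;
    }

    int total() {
        return total;
    }
}
```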

To conclude the first day of the conference, Bruno Pedroso presented his ideas about mixing the following techniques: GTD, Scrum, Pomodoro and TDD. After giving an overview of each technique, he described the fractal nature of each one. For him, GTD focuses on “me”, Scrum focuses on “us”, Pomodoro focuses on “now” and TDD focuses on “code”. We should balance static and dynamic quality, as well as the periods when we think and when we do, and the principles are: 1) don’t mix, 2) make things explicit and 3) conclude often.

Software design in the 21st century

Yesterday I was able to take part in a nice event about software development. I had the opportunity to hear ThoughtWorks’ chief scientist Martin Fowler talk about three current trending topics in our industry: Domain Specific Languages (DSLs), REST, and Continuous Integration and Delivery. All the topics were in one way or another familiar to me (except for continuous deployment, which I had only heard of), and I quite liked the way Fowler conveyed the overall idea of each one.

Domain Specific Languages

Domain specific languages have been around for quite a while, but it seems the software community has not yet taken full advantage of them. DSLs are everywhere, and examples include CSS, Rake, SQL, EasyMock, FIT and Graphviz, just to mention a few. Fowler used an example based on the design of a state machine to illustrate where a DSL might come in handy. He showed some code for the machine in plain old object-oriented Java style and explained the shortcomings of that approach: poor code expressiveness and lack of flexibility due to static typing. To be able to change the state machine at runtime, he then showed the design using XML. That way, one could declaratively specify the machine’s behavior and the code would be more readable. The drawback of the XML approach is its verbosity (opening and closing tags). He then went on to explain how a DSL (either external or internal [he used Ruby as the host language]) could be used to better model the machine. Fowler gave his definition of a DSL: a computer programming language of limited expressiveness focused on a particular domain. DSLs should come with a strong semantic model. He stressed that the value of a DSL lies in its readability rather than its writability, thus allowing domain experts to read it and strike up a rich conversation.
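To give an idea of what an internal DSL might look like in Java (Fowler used Ruby; this is my own rough sketch, not his code), here is a fluent builder whose method chain reads like a declarative description of the machine while building a semantic model – a transition table – behind the scenes:

```java
import java.util.HashMap;
import java.util.Map;

// A rough sketch of an internal DSL for a state machine using a fluent
// builder. The chained calls read almost like sentences, and the resulting
// transition table can be inspected or changed at runtime.
public class StateMachineSketch {

    static class StateMachine {
        private final Map<String, Map<String, String>> transitions = new HashMap<>();
        private String current;

        StateMachine startingAt(String state) {
            this.current = state;
            return this;
        }

        // "in(state).on(event).goTo(target)" is one sentence of the DSL.
        TransitionClause in(String state) {
            return new TransitionClause(this, state);
        }

        void fire(String event) {
            Map<String, String> fromCurrent = transitions.getOrDefault(current, Map.of());
            current = fromCurrent.getOrDefault(event, current);
        }

        String currentState() {
            return current;
        }
    }

    static class TransitionClause {
        private final StateMachine machine;
        private final String from;
        private String event;

        TransitionClause(StateMachine machine, String from) {
            this.machine = machine;
            this.from = from;
        }

        TransitionClause on(String event) {
            this.event = event;
            return this;
        }

        StateMachine goTo(String target) {
            machine.transitions
                   .computeIfAbsent(from, k -> new HashMap<>())
                   .put(event, target);
            return machine;
        }
    }

    public static void main(String[] args) {
        // A hypothetical controller, loosely inspired by Fowler's state machine example.
        StateMachine controller = new StateMachine()
                .startingAt("idle")
                .in("idle").on("doorClosed").goTo("active")
                .in("active").on("lightOn").goTo("unlockedPanel")
                .in("unlockedPanel").on("panelClosed").goTo("idle");

        controller.fire("doorClosed");
        System.out.println(controller.currentState()); // prints "active"
    }
}
```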

Steps to REST

Fowler’s next short talk was about REST. He used the Richardson Maturity Model, describing the three levels of improvement (resources, HTTP verbs and hypermedia controls) toward better RESTful architectures. The idea of REST is to expose web services as resources using the infrastructure already provided by the HTTP protocol. He commented that it is foolish to turn everything into a resource, and that we should think about the principles of object orientation (such as encapsulation) when choosing which resources are worth exposing. The contents of the presentation are here.
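As an illustration of the resources-plus-verbs level of the model, here is a small sketch of my own (assuming the JAX-RS API on the classpath; the doctor-slots example echoes Fowler’s write-up of the maturity model, but the code is not from the talk):

```java
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.net.URI;

// A hypothetical level-2 design: a noun-based resource (a doctor's open slots)
// manipulated only through HTTP verbs, with HTTP status codes carrying the outcome.
@Path("/doctors/{doctor}/slots")
@Produces(MediaType.APPLICATION_XML)
public class SlotResource {

    // GET /doctors/mjones/slots?date=20100104  -> list the open slots for that day
    @GET
    public String openSlots(@PathParam("doctor") String doctor,
                            @QueryParam("date") String date) {
        return "<openSlotList/>"; // lookup omitted in this sketch
    }

    // POST /doctors/mjones/slots/1234  -> book a specific slot;
    // success is signalled with 201 Created plus the URI of the new appointment.
    @POST
    @Path("{slotId}")
    public Response book(@PathParam("doctor") String doctor,
                         @PathParam("slotId") String slotId,
                         String appointmentRequestXml) {
        URI appointment = URI.create("/appointments/" + slotId);
        return Response.created(appointment).entity("<appointment/>").build();
    }
}
```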

Continuous Integration and Delivery

The last talk was about Continuous Integration and Delivery. In all but the simplest software projects, several people work on the code base simultaneously (often in isolated feature branches) and, as a result, sometime down the road the need for integration arises and the Big Scary Merge is on its way. Powerful merging tools are available, but they only address textual conflicts and cannot tackle semantic integration problems. The fact is: integration is hard… so we should be doing it often. Fowler’s rule of thumb is: everyone on the team should integrate with the mainline at least once a day. As a result, merge problems become rare and we guarantee integrated, working software in the development area. That’s pretty good, but the software is only useful when it is in production. So, how do we get into production quicker? The key concept is the notion of a build pipeline, which can also minimize human intervention in software deployment.
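As a toy illustration of the pipeline idea (my own sketch, not from the talk), each commit is promoted through a fixed sequence of automated stages and stops at the first failure, so only builds that pass every stage get close to production:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// A hypothetical build pipeline: stage names mapped to automated checks,
// run in order; the first failing stage stops the promotion of the build.
public class BuildPipelineSketch {

    public static void main(String[] args) {
        Map<String, BooleanSupplier> stages = new LinkedHashMap<>();
        stages.put("commit stage (compile + unit tests)", () -> true);
        stages.put("automated acceptance tests", () -> true);
        stages.put("deploy to staging and smoke test", () -> true);
        stages.put("one-click deploy to production", () -> true);

        for (Map.Entry<String, BooleanSupplier> stage : stages.entrySet()) {
            System.out.println("Running: " + stage.getKey());
            if (!stage.getValue().getAsBoolean()) {
                System.out.println("Stage failed; the build goes no further.");
                return;
            }
        }
        System.out.println("Build passed every stage and is ready for release.");
    }
}
```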

Books

The following books were recommended:

Book review: Writing Effective Use Cases


“Writing Effective Use Cases”, by Alistair Cockburn, is an excellent read for those who want to learn the art of writing good behavioral specifications in the form of use cases. The book is fairly easy to read; it contains plenty of examples of use cases and also presents good tips and pitfalls to avoid when writing a use case. In this post, I’ll summarize some of the things I learned from the book.

The whole point of the use case technique is to describe the interactions between actors needed to accomplish a goal, while protecting the stakeholders’ interests. Use cases are usually written in simple text form (more casual or more formal, depending on the needs and characteristics of the project) and can be used to describe the behavior of any “system”, be it a piece of software, an enterprise or a business process (this is the “scope” of the use case). They can also be written viewing the system either as a white box (considering the inner workings) or as a black box (considering only the external interface). The former view is more often used to describe business processes, while the latter is more often used to describe the functionality of a software system to be designed.

The author uses a set of graphical symbols to denote the scope (computer system, organization, component) and level of the use cases (summary, user-goal, subfunction). The idea of levels is an interesting one because it allows use cases to be treated as an ever-unfolding story – a piece of behavior can be a simple step in a use case or a use case of its own (to increase the level, ask “why”; to decrease it, ask “how”). Especially useful is the advice of writing a few summary use cases that connect all the other user-goal level use cases, thus providing overall context and a good starting point for whoever wants to quickly grasp the whole picture.
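To give an idea of the form, here is a short made-up example of a user-goal level use case in a casual style (my own illustration, not taken from the book):

  • Use case: Buy a book online (scope: online bookstore, black box; level: user-goal)
  • Main success scenario: the customer searches the catalog, adds a book to the cart, provides shipping and payment information, and the system confirms the order and sends a confirmation e-mail.
  • Extensions: if the payment is rejected, the system asks for another payment method; if the book is out of stock, the system offers to notify the customer when it becomes available.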

To make the most out of the writing process, the author recommends working breadth-first, from lower to higher precision. This way, it’s easier to manage one’s energy and not quickly get overwhelmed by the many details of the extension conditions and extension handling sections.

It’s also emphasized that use cases are not all of the requirements; they are only the behavioral requirements. There are other requirements, such as business rules, performance, protocols, UI, data formats, etc. However, use cases do act like glue connecting all the other requirements. The author illustrates this through the “Hub-and-Spoke” model of requirements, which sees use cases as the hub of a wheel, with the other requirements being spokes that lead in different directions; that’s why some processes have a strong focus on use cases.

The author provides several reminders and checklists for improving the quality of a use case (those are nicely summarized at the beginning and at the end of the book). Among those, I quote the following three questions to be asked regarding every use case:

  • To the sponsors and users:
    • “is this what you want?”
    • “will you be able to tell, upon delivery, whether you got this?”  (acceptance tests are great for addressing this one)
  • To the developers:
    • “can you implement this?”

In a nutshell, the book is full of useful recommendations, and what it presents can surely be applied to improve requirements elicitation in any project, whether you use use cases, user stories or any other technique.