Book review: Managing Humans

I’ve finished reading Michael Lopp’s book “Managing Humans: Biting and Humorous Tales of a Software Engineering Manager.”

The book is full of anecdotes describing the world of a software engineering manager. Despite the rather colloquial writing style (with occasional bad language), the book covers several facets of management and is full of thoughts useful both for managers and for those who deal with managers (that is, all of us!).

It is divided into three parts: 1) The Management Quiver, 2) The Process is the Product, and 3) Versions of You.

Below I transcribe some of my favorite quotes from the book, along with my comments.

  • “Managers who don’t have a plan to regularly talk to everyone on their team are deluded.” Everyone likes their opinions to be heard and considered. When managers don’t talk to their employees and listen carefully to what they have to say, besides disheartening the team, the company is bound to lose great talent and ideas.
  • “The organization’s view of your manager is their view of you.” Therefore, it’s quite important to work out your relationship with your manager. Where does your manager want to go? What do they want to achieve? What is important to them? As an employee, you need to figure all that out and make sure your manager is making progress, because if they succeed, you eventually will too. (Conversely, a manager who wants to be successful needs to consider these same questions with regard to each member of the team.)
  • The author talks a lot about meetings and the different kinds of people who take part in them. In meetings, there are two major sorts of characters that need to be identified: players and pawns. The former are directly interested in the outcome of the meeting, while the latter typically contribute nothing. Knowing how to identify and deal with these roles is important in order to make the most of a meeting.
  • In the last part of the book, the author describes the various meeting creatures (another taxonomy of the sorts of people found in a meeting). Especially important is the synthesizer, the person who can gather all the scattered information people throw out and turn it into clear-cut sentences that everyone understands. In my company, during our sprint planning meetings (we’re running Scrum), we usually refine the backlog with the whole team, scrutinizing each user story. In this process, several people make comments, and we can spend quite some time discussing a single story. When the time comes to give the estimate, it’s extremely useful to have one person act as the synthesizer, summing up what the story actually consists of.
  • Information conduit – “for each piece of information you see, you must correctly determine who on your team needs that piece of information to do their job”. A manager who is selfish with information simply won’t be fully trusted by the team. Worse, as the author points out, “in the absence of information, people will create their own.” I also found quite important the advice to add a bit of personal context to each piece of information we pass on. For example, how many times have you sent out an e-mail with a link to something interesting you read, without adding anything to it, just the raw link? I’ve done it many times. Now I try to at least summarize what I am forwarding and, preferably, give some personal opinion.

Overall, I would say that the idea of using tales and humor to illustrate the ins and outs of management was a good one, though it could have been better explored. Nevertheless, the book is full of illustrative stories and good advice for anyone interested in the topic (there’s also a humorous management glossary at the end whose explanations are quite direct and helpful).

Test Automation for the Persistence Layer with FIT, DBUnit and HSQLDB – MundoJava 38

Issue 38 of MundoJava magazine is out! In this issue I wrote an article about test automation for the persistence layer using FIT, DBUnit and HSQLDB.

The article is the result of experience from a project I worked on recently. It was a simple desktop Java system whose goal was to read data from a bunch of database tables and configuration files and generate several output files formatted in a specific way. We had a simple persistence layer with DAOs implemented with JPA/Hibernate.

The entire project was developed with automated tests (unfortunately, most of it did not use TDD 😦). In order to test the DAOs (some of which had rather complex SQL queries), we had integration tests written with JUnit and DBUnit along with an in-memory HSQLDB database. For simple DAOs (and their respective domain objects), that approach worked quite well. However, for the more complicated ones, we started to have a few hard-to-understand tests, since a lot of fixture code was required to set up the data the tests needed.

So, we wondered whether there was a way to express the integration tests for our DAOs more readably. Our project dealt with accountancy data, so we thought it would be perfect if we could express some of our tests as tables. That was when FIT came to mind. FIT is a tool that lets you write automated tests in a tabular format (it uses HTML tables). Just what we wanted!

We were thus able to successfully convert our automated JUnit tests for the DAOs into highly readable HTML fixture tables. Now let’s look in more detail at our previous strategy and at how it changed with the introduction of FIT.

Using DBUnit, our HSQLDB test database was populated with the test data from an XML dataset file before the execution of each integration test. Then, we had to construct the list of objects that would be the expected result of the call to the DAO method under test. Besides being difficult to read (though we saw some improvement by using test data builders), those objects evidently duplicated a lot of the data in the XML dataset.
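For reference, a DBUnit flat XML dataset describes one database row per element, with columns as attributes. Ours looked roughly like this (the table and column names here are illustrative, not taken from the article):

```xml
<!-- DBUnit flat XML dataset: each element names a table and represents
     one row; each attribute is a column value.
     Table and column names below are made up for illustration. -->
<dataset>
  <account id="1" name="Acme Corp" balance="1500.00"/>
  <account id="2" name="Foo Ltd"   balance="250.00"/>
  <entry   id="10" account_id="1"  amount="-99.90"/>
</dataset>
```

DBUnit loads such a file (e.g., via its FlatXmlDataSet class) and inserts the rows into the test database before each test, typically with a clean-insert operation.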

Our approach with FIT was to put all the necessary setup data together with the test itself in the HTML document (each DAO has its corresponding tests in an HTML file), eliminating the XML datasets. The first section of the document, the configuration section, starts with the tables containing the data to be inserted into the test database. The last section of the document contains the test scenarios with the actual tests and expected results. The HTML test document was implemented using the Flow Mode of fitlibrary’s DoFixture. Inside the Flow Mode, for the first section of the document we used the SetUpFixture (also from fitlibrary). The corresponding Java glue code for the SetUpFixture, in turn, used DBUnit’s DefaultDataSet to programmatically build the dataset with the contents of the HTML setup tables (which were then inserted into the HSQLDB database just as before). We ended up creating a few utility classes to enable DBUnit to configure the test database with the data from FIT tables.
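To give an idea of the shape of that first section, a setup table in the HTML document looked something like this (the fixture name, table and columns are illustrative, not the article’s actual code):

```html
<!-- Configuration section of the FIT document: setup data for the
     test database. In fitlibrary's Flow Mode, a table like this is
     handled by a SetUpFixture whose Java glue code turns each data
     row into a row of a DBUnit dataset.
     Names below are made up for illustration. -->
<table>
  <tr><td colspan="3">set up accounts</td></tr>
  <tr><td>id</td><td>name</td><td>balance</td></tr>
  <tr><td>1</td><td>Acme Corp</td><td>1500.00</td></tr>
  <tr><td>2</td><td>Foo Ltd</td><td>250.00</td></tr>
</table>
```

Subsequent tables in the same document exercise the DAO methods and state the expected results, so each HTML file reads as a self-contained, executable specification.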

Even though FIT was designed for the automation of acceptance tests (hopefully with customer collaboration), we found it quite effective for testing the data access code of data-driven applications. In the end, we had highly expressive integration tests that read like executable specifications.

In the article, our approach is fully described using a working example and showing all the related code. Comments and suggestions are always welcome!

Caelum Day in Rio

Last Saturday I participated in Caelum Day in Rio, a software development event with great presentations and several speakers.
Phillip Calçado gave the keynote “All I wish I knew before I became a tech leader” (my own translation of the Portuguese title). He reinforced that in software development only one thing guarantees survival: delivering value (all the time, at the right moment, before it is too late). He mentioned that one of the most important tasks of a tech leader is to avoid the unexpected. To do that, it’s necessary to build barriers, whose objective is to encourage the feedback cycle. Barriers come in five layers: 1) Development, 2) Integration, 3) Verification, 4) Validation and 5) Production. The barriers at each of these layers can be achieved through the use of established best practices, to name a few: TDD, fast builds, continuous integration, a close relationship with the customer, the kickoff-play-walkthrough model, DDD, incremental and frequent delivery, and simulation environments. Phillip also emphasized that barriers will be broken, but what we want is to know when that happens.

Fabio Kung talked about cloud computing, a topic much in fashion in our industry lately. He went over its several facets: Infrastructure as a Service – IaaS (e.g., Amazon EC2: machines, hardware, network, etc.), Platform as a Service – PaaS (e.g., Google App Engine), and Software as a Service – SaaS (e.g., GMail, Google Docs). The great majority of cloud computing relies on virtualization. With virtualization, we can solve common problems related to waste, provisioning and costs. With cloud computing, we can go a little further and also tackle the problems of capacity planning, maintenance and availability. What I found interesting is that only 1% of the world’s major applications run in the cloud, mainly because they need control over their own infrastructure. The great niche for cloud computing is small and medium-sized applications, where one doesn’t want to worry about infrastructure and platform issues. It sure was a very enlightening talk, full of interesting stuff. BTW, Kung also showed some funny videos from GoGrid, which can be seen here.

I also watched some cool short presentations: Paulo Silveira on the Java Persistence API, and Rafael Martinelli on Adobe Flex. Sergio Junior and Luiz Costa talked about RESTful web services in Java, and Caue Guerra talked about how the adoption of Ruby on Rails in Brazil has increased tremendously in the last few years (especially in 2009). Guilherme Silveira talked about nice features of the Java web framework VRaptor 3.

To finish the event, Nico Steppat delivered a great presentation about NoSQL and non-relational databases (examples include SimpleDB, CouchDB, MongoDB and BigTable). Nico talked about how difficult it is to scale out relational databases and how non-relational databases can effectively address this problem. He cited Brewer’s CAP Theorem, which says that a shared-data system can have at most two of the following three properties: Consistency, Availability and Partition tolerance. Relational databases favor consistency and availability, whereas non-relational databases are better suited to the other two combinations (consistency plus partition tolerance, or availability plus partition tolerance).