Public competitive examinations in IT – MundoJ 45

My new article in MundoJ magazine (formerly MundoJava) is out, after about a year without publishing. In this article, my colleague Rafael Pereira and I discuss an interesting but rather unusual topic: public competitive examinations (the best translation I found for “concurso público”, in Portuguese) in IT. The idea for the article came from the desire to help announce the publication of Rafael’s new book, whose subject is Java for public competitive examinations.

Unlike in many other countries, here in Brazil candidates for a job position at public companies must pass these kinds of exams, and they are highly competitive. In the article, we go over the advantages and (a few) disadvantages of working at public companies, with emphasis on the IT industry. At the end, we present some questions from real exams with answers and comments (the questions concern primarily object orientation, design patterns, Java SE and Java EE). We appreciate any feedback on the article.


Happy Birthday: one year!

This blog turns one year old today. It took me a while to start it, mainly because I didn’t know if I would have time to write (interesting) things regularly. Well, it turns out I’ve really enjoyed the experience so far.

Among other benefits, the blog forces me to write down book reviews, which makes me revisit and ponder a little on what I read. I also like to summarize the talks I attend at software development events, as well as Christian messages. For this second year, I’d like to increase the number of general posts about something other than books and events.

The blog still has modest numbers: 22 posts, 30 comments and 1,976 views (tracked by WordPress). I hope to improve on those.

“I will exalt you, my God the King;
I will praise your name for ever and ever.” (Psalm 145:1)

Test Automation for the Persistence Layer with FIT, DBUnit and HSQLDB – MundoJava 38

Edition 38 of MundoJava magazine is out! In this issue I wrote an article about test automation for the persistence layer using FIT, DBUnit and HSQLDB.

The article is the result of an experience on a project I worked on recently. It was a simple desktop Java system whose goal was to read data from a bunch of database tables and configuration files and generate several output files formatted in a specific way. We had a simple persistence layer with DAOs implemented with JPA/Hibernate.

The entire project was developed with automated tests (unfortunately, most of it did not use TDD 😦 ). In order to test the DAOs (some of which had rather complex SQL queries), we had integration tests written with JUnit and DBUnit along with an in-memory HSQLDB database. For simple DAOs (and their respective domain objects), that approach worked quite well. However, for the more complicated ones, we started to have a few hard-to-understand tests, since a lot of fixture code was required to set up the data needed for the tests.

So, we wondered if there was a way to express the integration tests for our DAOs more readably. Our project dealt with accounting data, so we thought it would be perfect if we could express some of our tests as tables. That was when FIT came to mind. FIT is a tool that allows one to write automated tests in a tabular format (it uses HTML tables). Just what we wanted!

So, we were able to successfully convert our automated JUnit tests of the DAOs into highly readable HTML fixture tables. Now let’s talk in more detail about what our previous strategy was and how it changed with the introduction of FIT.

Using DBUnit, our HSQLDB test database was populated with the test data from an XML dataset file before the execution of each integration test. Then, we would have to construct the list of objects that would be the expected result of the call to the DAO’s method under test. Besides being difficult to read (though we saw some improvement by using test builders), those objects evidently duplicated a lot of the data in the XML dataset.
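To give an idea of this original strategy, a DBUnit flat XML dataset looks roughly like the sketch below (the table and column names here are hypothetical, just for illustration — each element maps to a table row, and each attribute to a column value):

```xml
<!-- Hypothetical flat XML dataset: one element per row, attributes are columns -->
<dataset>
    <account id="1" number="1001" description="Cash" balance="2500.00"/>
    <account id="2" number="2001" description="Accounts Payable" balance="-800.00"/>
    <entry id="1" account_id="1" amount="150.00" entry_date="2009-10-01"/>
</dataset>
```

The expected objects built in Java for each test duplicated exactly this kind of data, which is where the maintenance pain came from.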

Our approach with FIT was to put all the necessary setup data together with the test itself in the HTML document (each DAO has its corresponding tests in an HTML file), removing the XML datasets. The first section of the document, the configuration section, starts with the tables containing the data to be inserted in the test database. The last section of the document contains the test scenarios with the actual tests and expected results. The HTML test document was implemented using the Flow Mode of FitLibrary’s DoFixture. Inside the Flow Mode, for the first section of the document we used the SetUpFixture (also from FitLibrary). The corresponding Java glue code for the SetUpFixture, in turn, used DBUnit’s DefaultDataSet to programmatically build the dataset with the contents of the HTML setup tables (which were then inserted in the HSQLDB database just like before). We ended up creating a few utility classes to enable DBUnit to configure the test database with the data from FIT tables.
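For illustration, a setup table in the configuration section of such an HTML document might look roughly like this (the fixture name and column names are hypothetical; the first row names the setup action and the second row names the columns, following the usual SetUpFixture layout):

```html
<!-- Hypothetical SetUpFixture table: each data row becomes a row in the test database -->
<table>
  <tr><td colspan="3">set up accounts</td></tr>
  <tr><td>number</td><td>description</td><td>balance</td></tr>
  <tr><td>1001</td><td>Cash</td><td>2500.00</td></tr>
  <tr><td>2001</td><td>Accounts Payable</td><td>-800.00</td></tr>
</table>
```

The test scenario tables at the end of the document follow the same tabular style, with the expected query results as rows.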

Even though FIT was designed for the automation of acceptance tests (hopefully with customer collaboration), we found it quite effective for testing the data access code of data-driven applications. In the end, we had highly expressive integration tests that looked like executable specifications.

In the article, our approach is fully described using a working example and showing all the related code. Comments and suggestions are always welcome!

Caelum Day in Rio

Last Saturday I attended Caelum Day in Rio, a software development event with great presentations and several speakers.
Phillip Calçado gave the keynote “All I wish I knew before I became a tech leader” (my own translation of the Portuguese title). He reinforced that in software development only one thing guarantees survival: delivering value (all the time, at the right moment, before it is too late). He mentioned that one of the most important tasks of a tech leader is to avoid the unexpected. To do that, it’s necessary to build barriers, whose objective is to encourage the feedback cycle. Barriers come in five layers: 1. Development, 2. Integration, 3. Verification, 4. Validation and 5. Production. The barriers in each of these layers can be achieved through established best practices, to name a few: TDD, fast builds, continuous integration, a close relationship with the customer, the kickoff-play-walkthrough model, DDD, incremental and frequent delivery, and simulation environments. Phillip also emphasized that barriers will be broken, but what we want is to know when that happens.

Fabio Kung talked about cloud computing, a topic currently in fashion in our industry. He went over its several layers: Infrastructure as a Service – IaaS (e.g. machines, hardware, network; Amazon EC2), Platform as a Service – PaaS (e.g. Google App Engine), and Software as a Service – SaaS (e.g. GMail, Google Docs). The great majority of cloud computing relies on virtualization. With virtualization, we can solve common problems related to waste, provisioning and costs. With cloud computing, we can go a little further and also tackle the problems of capacity planning, maintenance and availability. What I found interesting is that only 1% of the world’s major applications run in the cloud, mainly because they need control over their own infrastructure. The great niche of cloud computing is small and medium-sized applications, where one doesn’t want to worry about infrastructure and platform issues. It sure was a very enlightening talk, full of interesting stuff. BTW, Kung also showed some funny GoGrid videos, which can be seen here.

I also watched some cool short presentations: Paulo Silveira on the Java Persistence API, and Rafael Martinelli on Adobe Flex. Sergio Junior and Luiz Costa talked about RESTful web services in Java, and Caue Guerra talked about how the adoption of Ruby on Rails in Brazil has increased tremendously in the last few years (especially in 2009). Guilherme Silveira talked about nice features of the Java web framework VRaptor 3.

To close the event, Nico Steppat delivered a great presentation about NoSQL and non-relational databases (examples include SimpleDB, CouchDB, MongoDB and BigTable). Nico talked about how difficult it is to scale out relational databases and how non-relational databases can effectively address this problem. He cited Brewer’s CAP theorem, which says that a shared-data system can have at most two of the following three properties: consistency, availability and partition tolerance. Relational databases are good at achieving consistency and availability, whereas non-relational databases are better suited to the other two combinations.

New edition of MundoJava magazine


The new edition of MundoJava magazine is finally out!

This edition turned out to be very cool since it’s the first one dedicated to dynamic languages on the JVM, particularly JRuby. We dedicated three articles to this topic, each of which I summarize next.


This article, written by Demetrius Nunes, talks about how to build highly testable desktop Swing applications with the Model-View-Presenter (MVP) architectural pattern and Test-Driven Development (TDD). The author uses the NetBeans IDE to build the GUI, taking advantage of the great drag-and-drop utilities offered by Matisse (NetBeans 6.0’s GUI builder). In addition, RSpec is used to specify and test the presentation logic of the user interface. RSpec is a behavior-driven development tool that allows one to write behavior-oriented unit tests in Ruby. The process of building a typical GUI comprises the following steps: 1) write the tests for the Presenter (the class that centralizes the presentation logic of the GUI); 2) see the tests fail; 3) write the Presenter (using mocks for the view and model); 4) see the tests pass; 5) refactor. At the end of the process, we have our GUI fully tested without even having the GUI itself! The GUI can then be visually created using the excellent resources of NetBeans. The whole MVP/TDD approach is a great technique for Swing application development. Extra benefits, like readability and writability, stem from using a more concise and expressive language like Ruby to write our tests.
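The core idea — a presenter that can be fully tested without any real GUI — can be sketched in plain Java as well (all names below are hypothetical, and a hand-rolled fake view stands in for a mocking tool):

```java
// A minimal MVP sketch: the presenter holds the presentation logic and talks to
// the view only through an interface, so it can be tested without a real GUI.
interface LoginView {
    String getTypedUsername();
    void showGreeting(String message);
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) { this.view = view; }

    // Called by the view when the user clicks the login button
    void onLoginClicked() {
        view.showGreeting("Hello, " + view.getTypedUsername() + "!");
    }
}

// Hand-rolled fake view used in tests instead of a Swing component
class FakeLoginView implements LoginView {
    String lastGreeting;
    public String getTypedUsername() { return "alice"; }
    public void showGreeting(String message) { lastGreeting = message; }
}

public class MvpSketch {
    public static void main(String[] args) {
        FakeLoginView view = new FakeLoginView();
        new LoginPresenter(view).onLoginClicked();
        System.out.println(view.lastGreeting); // prints "Hello, alice!"
    }
}
```

Only after the presenter is tested this way does the real Swing view get built (visually, with Matisse), implementing the same interface.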


In this article, my colleague Alex Marques Campos and I talk about how we can extend our Java applications using Ruby and the Java Scripting API. First, we give a Ruby-in-a-nutshell description for Java developers unfamiliar with the language: history, main features, basic types, OO, inheritance and mixins, dynamic typing, metaprogramming, reflection and DSLs. Next, we describe the Java Scripting API (JSR-223). The API was added to the JDK in 2006, and its goal is to standardize the use of scripting languages on the Java platform through a small set of classes and interfaces developers need to know. We describe how to use the JRuby scripting engine to integrate our Java applications with the Ruby language.
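As a rough illustration of the API (not the article's actual code), evaluating a Ruby snippet from Java boils down to looking up an engine by name and calling `eval`; the "jruby" engine is only found if the JRuby jars are on the classpath, which the sketch below handles defensively:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class RubyScriptingDemo {

    // Evaluates the given Ruby script, or returns null if no JRuby
    // engine is available on the classpath.
    static Object evalRuby(String script) throws ScriptException {
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("jruby");
        if (engine == null) {
            return null; // JRuby jars not on the classpath
        }
        return engine.eval(script);
    }

    public static void main(String[] args) throws ScriptException {
        Object result = evalRuby("[1, 2, 3].inject(:+)");
        System.out.println(result == null ? "JRuby engine not found" : result);
    }
}
```

The same `ScriptEngineManager` lookup works for any JSR-223 engine; only the engine name and the script language change.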


Last but not least, Fabio Kung covers in his article a range of topics regarding JRuby and Java, including: JRuby performance issues, the advantages of running Ruby code on the JVM and how we can take advantage of them, and how and why to run Ruby on Rails web applications on Java technology.

Hello world!

Hello everybody! This is the first post of my blog. I intend to share my views on software development in general as well as Christian messages and meditations. I hope you all enjoy!

“For the Lord is good and his love endures forever; his faithfulness continues through all generations.” (Psalm 100:5)