Managing Geneva's Law Courts - from Cobol to Perl


Laurent Dami, laurent.dami at


The information system for Geneva's law courts was written in Cobol about 20 years ago. A complete rewrite in Perl/Catalyst/mod_perl is under way. The project is mission-critical, runs over several years, involves more than 20 people, and has strong deadlines to meet because of major changes in the organization of justice in Switzerland.


This paper was written in answer to the theme of the YAPC::EU::2009 conference, "Corporate Perl" : it is a testimony of a Perl project within a professional context, namely the Geneva courts of law. The slides are available at slideshare.

The project is of considerable importance because it is mission-critical, spans several years, involves more than 20 people, and faces strong deadlines imposed by major changes in the organization of justice in Switzerland.

The paper is mostly non-technical, concentrating on the project motivation, history and results. Nevertheless, some pointers to technical components will be given in the ARCHITECTURE section. A number of these components have been published on CPAN and therefore are publicly accessible; for private components, further details can be obtained by contacting the author.


Geneva, Switzerland

Geneva is one of the 26 "cantons" (states) of Switzerland. It has a small territory (15.88 km2) with a high density of population (450'000 inhabitants, the second-largest city of Switzerland). In addition, it attracts many commuters from surrounding France and from the neighbouring canton of Vaud; the extended agglomeration contains about 1.2 million inhabitants.

The economy is mostly oriented towards services. Geneva hosts a number of international organizations, and is a center for international trade and financial activities.

All these factors have an impact on the number of lawyers established in town : 1'600, or 355 lawyers per 100'000 inhabitants. This figure is unusually high : in comparison, the Swiss average is 101 and the French average is 76; it can be taken as an indication that judicial activity in Geneva is also above average.

Geneva courts of law

In Switzerland's federal system, each canton has its own legislative, executive and judicial authorities.

Geneva currently has 37 judicial authorities, all located together in the center of the town. Their organization is governed by a cantonal law. The procedural law (which governs the steps through which a case is handled) is also currently cantonal, but will become federal at the beginning of 2011, which will induce a major reorganization of the Geneva courts in the coming months.

Currently, the courts employ 95 judges and 410 permanent collaborators; in addition, several hundred other people are involved in occasional activities, such as assisting or replacing judges. The 2008 budget was 105 million CHF, or 1.26% of the global budget of the State of Geneva.

The volume of activities is about 80'000 cases/year, spread among penal cases (22%), civil cases (45%), administrative cases (7%), and other miscellaneous cases (26%).

Information systems for courts in Geneva and Switzerland

In the 1980s, the Swiss high court and a few cantons independently started to build applications for managing their courts; these were generally the places with the highest volume, like Zurich, Geneva and Vaud. Because of differences in organization and timing, and because of the decentralized nature of the Swiss system, these applications were never shared between cantons.

The Geneva application belongs to that first wave; it was developed in Cobol in 1985 and has been regularly updated since then. However, it now needs to be replaced because of technical obsolescence and its inadequacy for new needs (integration, data exchange, electronic document management, etc.).

In the 1990s, two products for managing courts emerged on the Swiss market, and nowadays are implemented in most cantons. These products are highly parameterized in order to be adaptable to the different needs of various courts in various cantons.

In 2005, Geneva evaluated both products as candidates for replacing its obsolete Cobol application. Both would have brought a number of improvements in technology and functionality, but the conclusion was nevertheless negative, for the following reasons :

After this evaluation, the decision was taken to renew the Geneva application in a stepwise fashion : the old and the new application would cohabit, sharing the same database, and functionalities would migrate in chunks until the whole thing had been rewritten and the Cobol application could be turned off.


History and future plans

The need to renew the Geneva judicial information system was identified and analyzed in the late 1990s, but it took several years to elaborate a project and obtain the funds. Meanwhile, outside that project, a number of peripheral tools were built in Perl, in read-only mode, for extracting data, generating reports and statistics, hyperlinking various pieces of information, and office automation. These tools quickly gained popularity and provided a first successful track record for Perl within the Geneva judicial system.

The official renewal project started in 2001. The goal of the first phase was to introduce office and collaboration tools, for helping people to produce, store and retrieve judgements, and for getting rid of the obsolete office application on the mainframe. A local integrator company won the contract and started assembling various components from a major vendor of collaborative software and electronic document management systems; but these efforts ended in failure, mainly because the software could not sustain the load of storing all judgements produced by the Geneva courts. After the crisis, a Perl solution came to the rescue, and was later extended for the publication of judgements on the internet (presented at YAPC::EU::06). This closed the first phase of the project.

The objective of the second phase was to rewrite the business application, namely storing all data associated with court cases, producing documents, managing deadlines, etc. Initially this was meant to be done in Java, as Java was and still is the official development platform for the State of Geneva. However, after long discussions, backed by the success stories of the previous Perl implementations mentioned above and by a convincing prototype written in Perl/Catalyst, it was finally possible to convince all levels of management that a Perl project would probably consume fewer resources and less time. A reduced team started working by the end of 2006, and a full-sized team was gathered by the end of 2007, after an open, official call for tenders from several companies.

As of this writing, the second phase is still under way. Following our stepwise philosophy, parts of the new application are in production, and several functionalities are still in development. This phase is further complicated by the fact that the Swiss procedural laws will change radically on January 1st, 2011, and that the system will have to accommodate both the old and the new law at the same time.

Many further steps, both technical and functional, are planned for the coming years :


A number of principles were established when the decision was made to rewrite the application.

Stepwise rewriting

As already stated, the old and new applications cohabit, working on the same database, in order to avoid any risky big bang operation. The advantage is that features can be designed, implemented, tested and introduced in separate chunks. The disadvantage is that the data model can hardly be changed as long as the Cobol application is running; additions of tables or columns are possible, but refactoring of existing data should be avoided. However, in a few exceptional cases we nevertheless decided to change the data and adapt the Cobol code accordingly; fortunately this is still possible because a reduced Cobol maintenance team is still active.

Web application

We decided quite early that the new application would be based on Web technologies, instead of client-server. At that time this choice was not totally obvious, because the flexibility brought by DHTML/Ajax mechanisms was not as well-known as it is today. Some fundamental motivating factors were :

Because of this choice, and since the Cobol application is called DM, the new application was baptized DM-Web.

Optimising user efficiency

The old application, optimized over more than 20 years, is very efficient for experienced users. Newcomers are intimidated by the old-fashioned terminal screen and by the arcane key sequences for performing operations, but old hands can blindly enter their data in a very short time.

When throughput is important, a modern application with mouse, colors and pictures may look sexy for a couple of days, but can very well exasperate users once they realize that they need more time than before to perform the same operations. This sad story has already occurred twice within the State of Geneva, when big and expensive software products were bought for accounting and for human resources management : these products brought many improvements in data collection and centralization, but in the trenches people took more time than before to enter the same data, and staff had to be reinforced in several places.

To avoid that syndrome, particular care has been taken to optimize operations in the new application, especially through keyboard navigation, and through extensive use of DHTML/Ajax technology.


The old application already had a sophisticated parameterization mechanism called the action generator : all possible procedural acts within a case are codified in a collection of action definition tables. The action generator interprets those tables to implement something similar to a big state transition diagram : given the current state of the case and the access level of the current user, a number of actions are allowed; each action may modify the state and generate some documents.

Actually, this is a kind of workflow application, and therefore some people looking at this from the outside suggested that buying a workflow product would do the job. However, some experiments showed that customizing a generic workflow product for the specific needs of justice would be a daunting task, and a survey in Switzerland and surrounding countries confirmed that, to our knowledge, no court ever chose a generic workflow product to run its activities.
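To give the flavour of this mechanism, here is a toy sketch in plain Perl; the table contents, state names and access levels are invented for the example, not the real codification :

```perl
use strict;
use warnings;

# Hypothetical excerpt of an action definition table : for each case
# state, the actions allowed per access level, and their effects.
my %actions = (
    open => {
        clerk => { register_party => { next_state => 'open' },
                   summon         => { next_state => 'hearing',
                                       document   => 'summons' } },
        judge => { close          => { next_state => 'closed',
                                       document   => 'judgement' } },
    },
    hearing => {
        judge => { adjourn => { next_state => 'open' },
                   rule    => { next_state => 'closed',
                                document   => 'judgement' } },
    },
);

# The "action generator" interprets the table like a state
# transition diagram :
sub perform {
    my ($case, $user_level, $action) = @_;
    my $allowed = $actions{ $case->{state} }{$user_level}
        or die "no action allowed in state '$case->{state}'";
    my $spec = $allowed->{$action}
        or die "action '$action' not allowed here";
    $case->{state} = $spec->{next_state};
    push @{ $case->{documents} }, $spec->{document} if $spec->{document};
    return $case;
}

my $case = { state => 'open', documents => [] };
perform($case, 'clerk', 'summon');
perform($case, 'judge', 'rule');
print "$case->{state} @{$case->{documents}}\n";   # closed summons judgement
```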

Outside of the action generator, a number of specificities of each court were hard-coded in the old application. As a matter of fact, the Cobol code was maintained in two separate branches (one for civil cases, one for penal cases), with many common points but also many subtle differences. Needless to say, this had a negative impact on maintenance, so for the rewrite we decided to reunify everything into one single application, with an additional layer of parameterization describing each court in detail (how it is organized, which kinds of cases it can handle, etc.). Such parameters are tree-like in nature, so instead of squeezing them into a collection of database tables, we store them in a big YAML file, validated through a specific tree parser.
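As an illustration, a court description in that parameter file might look like the following fragment; all keys and values here are invented for the example, the real YAML file being much richer and checked by the tree parser :

```yaml
courts:
  TPI:                        # hypothetical court code
    label: Tribunal de première instance
    sections:
      - civil
      - commercial
    case_kinds:
      civil:      {prefix: C, deadlines: {appeal: 30}}
      commercial: {prefix: M, deadlines: {appeal: 10}}
```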



The multi-year budget allotted to the initial project was CHF 7'650'000; this budget is still running and covers current development.

The change of procedural law in all of Switzerland, initially planned for 2010 but now postponed to January 1st, 2011, adds a considerable burden to development and parameterization efforts; therefore a new project was submitted, and was approved a few weeks ago, with a budget of CHF 4'140'000.


Counting people involved in the project is not as straightforward as one might think, because they are attached to various organizational units, and some of them share their time among several activities; however, we can say that information systems for Geneva justice currently employ about 30 people, 20 of whom work full-time. Here are some more precise figures :


Successful track record

As mentioned above, Perl had been successfully used for increasingly important purposes within the organization. The first components were put into production quite informally, but after some years an internal negotiation process resulted in Perl being accepted as an official component of our architecture, known and maintained as such.

Agile environment

After the decision was taken to rewrite the application, it was realized quite early that analysis, design and development would require agile, iterative methods : many requirements and specific cases are very hard to capture during the early phases, and only come to light after users start working with the software.

Several features of Perl are useful assets in such a situation :

Dynamic language

Some parts of the application, especially for data extraction and document production, may evolve at a very fast pace. In the Cobol application, this was addressed by a two-layer architecture :

Perl's dynamic features allow us to easily achieve similar flexibility, by dynamically loading some code parts on demand, and dynamically checking for modification dates on those components. The loaded code can be elegantly hooked up to the main core software, by dynamic creation of subclasses or methods. Furthermore, mechanisms like closures are very handy for generating families of functions or methods, in a structured and efficient way.
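A minimal sketch of these two techniques; the file handling, class name and accessor fields are hypothetical, invented for the example :

```perl
use strict;
use warnings;

# On-demand loading with a freshness check : reload a code fragment
# only if its file changed since the last load (in reality the path
# would be absolute and errors handled more carefully).
my %last_load;   # file => mtime at last load

sub load_if_modified {
    my ($file) = @_;
    my $mtime = (stat $file)[9]
        or return;                       # file not found : keep old code
    if (!$last_load{$file} || $mtime > $last_load{$file}) {
        do $file or die "cannot load $file: " . ($@ || $!);
        $last_load{$file} = $mtime;
    }
}

# Closures can generate whole families of methods at once; here a
# family of simple accessors is installed into a class.
package Case;
for my $field (qw/plaintiff defendant state/) {
    no strict 'refs';
    *{"Case::$field"} = sub {
        my $self = shift;
        $self->{$field} = shift if @_;
        return $self->{$field};
    };
}

package main;
my $case = bless {}, 'Case';
$case->state('open');
print $case->state, "\n";   # prints "open"
```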


The project of course re-uses a fair number of modules from the Comprehensive Perl Archive Network (CPAN), and has also contributed a few modules to CPAN. Needless to say, many additional modules were written for our specific needs, but were not published because they are of no interest outside our context.

DM-Web follows the model-view-controller paradigm, with the help of the well-known Catalyst framework. Catalyst acts as the master controller, receiving HTTP requests and dispatching them to appropriate classes.


Models are located in a separate distribution, outside of Catalyst, so that they can be also used by other programs, like batch jobs.

Database access

Requests to the database of course go through the excellent DBI module. Unfortunately, our database (Livelink Collection Server, formerly called Basis+) has no native Perl driver, so we have to use the vendor's JDBC driver, together with a bridge supplied by DBD::JDBC : Perl requests are transmitted to a Java proxy process, which in turn talks to the database server.
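Assuming the standard DBD::JDBC DSN syntax, a connection sketch looks like this; the host, port and JDBC URL below are placeholders, not the production values :

```perl
use strict;
use warnings;

# Hypothetical connection parameters for the DBD::JDBC proxy setup.
my $proxy_host = 'localhost';       # where the Java proxy runs
my $proxy_port = 9001;              # port the proxy listens on
my $jdbc_url   = 'jdbc:vendor:basis://dbserver:4000/DM';  # placeholder

# DBD::JDBC DSNs point at the Java proxy process, which holds the
# real JDBC connection to the database server :
my $dsn = "dbi:JDBC:hostname=$proxy_host;port=$proxy_port;url=$jdbc_url";

# In the real application :
#   my $dbh = DBI->connect($dsn, $user, $password, { RaiseError => 1 });
print "$dsn\n";
```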

The Java proxy works quite well but, not surprisingly, this architecture has an impact on performance. Before starting the project we did some measurements and considered that the response times would be reasonable, but this was mainly based on studies of transactions; later on we found that some pages were too slow to render, because of the time needed to access some frequently needed reference data, like menu items, code lists, etc. The solution was to build a mirror database in DBD::SQLite containing a read-only copy of those reference tables; this made some queries 100 times faster. The ORM layer (described below) has a dispatch mechanism to decide which requests should go to the main database and which to its mirror.
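The dispatch idea can be sketched as follows; the table names are invented, and real handles would of course come from DBI->connect rather than the plain strings used here to show the routing decision :

```perl
use strict;
use warnings;

# Read-only reference tables are served from the local SQLite mirror,
# everything else from the main database (illustrative table names).
my %in_mirror = map { ($_ => 1) } qw/T_CODES T_MENUS T_COURTS/;

sub dbh_for_table {
    my ($table, $main_dbh, $mirror_dbh) = @_;
    return $in_mirror{uc $table} ? $mirror_dbh : $main_dbh;
}

print dbh_for_table('T_CODES', 'main', 'mirror'), "\n";   # mirror
print dbh_for_table('T_CASE',  'main', 'mirror'), "\n";   # main
```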

Another drawback of the proxy architecture is that DBD::JDBC does not expose the full JDBC API, so some options that would be convenient for fine tuning are not accessible to Perl programs.

Our future plan is to migrate to another database vendor with a native Perl driver, but this cannot be done until all the Cobol code has been turned off.

Object-Relational mapping

Back in 2005, the major object-relational mapping (ORM) modules on CPAN were Class::DBI, Rose::DB, Alzabo and Tangram. None of those was considered appropriate for the project, for reasons that would take too long to expand here in detail (the design space for ORMs is quite complex); to summarize, these modules had in common the property of being more object-centric than relational-centric, and therefore offered insufficient room for fine-tuning of requests (kinds of joins, sets of columns, detailed criteria for the where clause, etc.).

The ORM for the project, DBIx::DataModel, started in 2005. Some of its distinctive features are :

After a couple of months of internal use, DBIx::DataModel was first published on CPAN in September 2005. More details on its architecture can be found in DBIx::DataModel::Doc::Design. This generic architecture has been complemented with some private, project-specific layers, for example for implementing switches between main and mirror databases, and for exploiting some cursor manipulation methods specific to the JDBC driver.

Meanwhile, another ORM, DBIx::Class, had appeared in August 2005 and quickly became the leader within the Perl community. Like DBIx::DataModel, DBIx::Class relies on SQL::Abstract for expressing SQL queries in terms of Perl data structures, and therefore some improvements to that module were made on a collaborative basis. A couple of other points are common, but some fundamental design aspects are quite different, so despite the fact that we are outside of the main community, we have no plans to switch the project to DBIx::Class.

Data Validation

Data validation is performed through the module Data::Domain (presented at YAPC::EU::2007). That module was designed especially for validating trees of data, possibly with internal dependencies.

Business logic

The business logic is implemented in a collection of classes, outside of the Catalyst realm. Most of them implement common operations like searching, inserting or updating data, or computing algorithms (deadlines, for example). These classes coordinate the database and validation layers.

Other classes in the business logic layer are responsible for parsing and interpreting all parameterization data, checking it for coherence and translating it into standard structures accessible to the other modules.

Finally, there are also classes implementing security and authorization rules. Unfortunately this cannot be handled by any standard authorization module, because the rules are quite specific and complex : access to a case file can be granted at several levels, which may depend not only on the current state of the file, but also on its history.
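As a toy illustration of such history-dependent rules; the levels, rule contents and naming conventions below are invented, and far simpler than the real ones :

```perl
use strict;
use warnings;

# Access levels : 0 = none, 1 = read, 2 = read/write (invented scale).
sub access_level {
    my ($user, $file) = @_;
    # the judge currently assigned to the case gets full access ...
    return 2 if $file->{current_judge} eq $user;
    # ... a judge formerly assigned keeps read access (history matters) ...
    return 1 if grep { $_ eq $user } @{ $file->{past_judges} };
    # ... and closed files become readable to clerks of the same court
    return 1 if $file->{state} eq 'closed'
             && $user =~ /^clerk_$file->{court}/;
    return 0;
}

my $file = { current_judge => 'judge_a',
             past_judges   => ['judge_b'],
             state         => 'closed',
             court         => 'TPI' };
print access_level('judge_b', $file), "\n";   # 1
```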


Following the Catalyst architecture, HTTP requests are dispatched to a collection of controller classes, structured along the main URL parts, and collaborating together through the mechanism of chained actions. Controllers do little work on their own, mainly repackaging incoming data to pass it to the model classes, and then forwarding the results to the view classes.

As an extension to the basic Catalyst action mechanism, incoming form data is filtered through CGI::Expand, so that controller methods directly deal with tree structures (clients may also supply the data in JSON or YAML format, which is convenient for having the same method answer an Ajax call).
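The effect of this filtering can be illustrated with a simplified re-implementation; CGI::Expand itself handles more cases, such as array indices :

```perl
use strict;
use warnings;

# Simplified illustration of what CGI::Expand does : flat form
# parameters with dotted names become a tree of hashrefs.
sub expand_flat_params {
    my ($flat) = @_;
    my %tree;
    for my $path (keys %$flat) {
        my @parts = split /\./, $path;
        my $last  = pop @parts;
        my $node  = \%tree;
        $node = $node->{$_} ||= {} for @parts;   # walk/create the path
        $node->{$last} = $flat->{$path};
    }
    return \%tree;
}

my $tree = expand_flat_params({ 'party.name' => 'Dupont',
                                'party.role' => 'defendant',
                                'court'      => 'TPI' });
print $tree->{party}{name}, "\n";   # Dupont
```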

Form handling methods are written in pairs, where one method handles form display and the other handles form processing; the dispatch is done automatically, depending on the HTTP method or on some special tunnelling parameters in the query.


Clients may explicitly request results in JSON, YAML or XML format; but the default view is HTML generated by the Template toolkit.

Some dynamic pages also use Jemplate on the client side to react to user actions; but this is going to be deprecated within the project. Initially Jemplate appeared to be a wonderful idea, but since it is not 100% compatible with Template, and never will be, it turned out to be quite difficult to maintain both Jemplate and Template fragments in the same project.

To address efficiency concerns, the application makes heavy use of DHTML/Ajax. The main Javascript components are :


Like any project of a certain size, we needed a collection of tools to structure the common work and sharing of knowledge within the team. Here is our list, which will be no surprise for a Perl audience :

In addition, we have started exploring Perl::Critic for quality assurance, and Alien::Selenium for automating tests of the application front-end; however, both are still at an exploratory stage and have not yet been industrialized within the team.


Perl has proven to be a viable choice for an important, mission-critical project. Although some people were initially skeptical about, or even hostile to, this choice, they had to acknowledge that at no point did the technology or tools create any difficulty, and that Perl is indeed equipped with everything needed for building a modern, professional application.

Personally, I am more convinced than ever that, given the constraints and resources of the project, we were able to deliver much more than what would have been achieved with more conventional technologies. This claim is based on informal comparison with other projects within the State of Geneva.

The difficulties encountered so far in the project are more on a managerial level.

First of all, recruitment was of course more difficult. Unfortunately Perl is seldom taught in schools and universities, so people who know the language generally learned it in the trenches ... but not necessarily together with software engineering best practices. However, a pleasant finding was that experienced developers from other backgrounds could learn Perl within the project and become productive in a reasonable time.

The other difficulty is not tied to the technology, but to the nature of the application : in fact, it should rather be called an application framework, instantiated differently in every court. As a result, many aspects and behaviours depend heavily on the parameterization, and it is quite difficult to foresee and test all combinations of features. The automatic test suite and the scenarios for human acceptance tests are both insufficient at the moment, but improving them requires a lot of effort.