Wednesday, December 5, 2018

New Data Resource: DB.nomics

Some French agencies (including the central bank) have rolled out a useful new data resource called DB.nomics. The site acts like a "European FRED," gathering a large variety of official data sources into a single data provider. And like FRED, it comes at the wonderful price of free. (The advantage of being backed by central banks is that they have money to burn...) I have been looking at hooking into external data providers as a side project, and DB.nomics looks like an excellent option for most economic analysis purposes.

(Editorial note: I was hit by a cold last week, and have to catch up on various things. I expect that I will have a publishing pause until next week. I was working on my PCA tutorial, but I want to take time on that.)

I have been looking at downloading data from official sources using Python, based on the SDMX data protocol. (I was using the pandasdmx package.) Most of the official sources now have an SDMX interface (although not all are configured in the Python package). The problem is that each provider has its own classification scheme, and a series can be defined by up to ten metadata fields. Mapping that onto an existing database's time series scheme can be a challenge. Furthermore, each provider implements a slightly different query scheme.
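To give a flavour of the mapping problem: SDMX identifies a series by an ordered list of dimension values joined by dots, and each provider defines its own dimensions and ordering. The sketch below is illustrative only (the helper name is mine), though the ECB exchange-rate dimensions shown are a real example of the pattern:

```python
# Hypothetical sketch: assembling an SDMX series key from metadata fields.
# Each provider prescribes its own dimension ordering, which is part of
# what makes mapping series onto a local database scheme painful.

def build_sdmx_key(dimensions, ordering):
    """Join dimension values in the provider's prescribed order."""
    return ".".join(dimensions.get(name, "") for name in ordering)

# The ECB's daily USD/EUR spot rate is keyed by five dimensions:
ecb_exr_order = ["FREQ", "CURRENCY", "CURRENCY_DENOM", "EXR_TYPE", "EXR_SUFFIX"]
series = {"FREQ": "D", "CURRENCY": "USD", "CURRENCY_DENOM": "EUR",
          "EXR_TYPE": "SP00", "EXR_SUFFIX": "A"}
print(build_sdmx_key(series, ecb_exr_order))  # D.USD.EUR.SP00.A
```

A different provider would use different dimension names, a different count of fields, and a different ordering, so this translation layer has to be rewritten per source.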

(Another issue is that the downloads can be relatively slow. The web programmers who developed the SDMX protocol managed to find the most inefficient data scheme possible. It may be that the backend implementation is more efficient than meets the eye.)

The beauty of DB.nomics is that there is only a single front end to deal with. Furthermore, the web tools allow researchers without computing skills to browse the DB.nomics website to find the exact query needed to access data.

DB.nomics also has support for research replication, so that it is easy to find the exact data set used by other researchers. This may be increasingly important for anyone who deals with academic economic research.

As a disclaimer, I have only started looking at DB.nomics. The first point of concern I can see is capacity: how much data can a single user download? For a small research shop, I doubt that this would be a concern, but you would obviously want to save the data locally so that you only need to download required data once. (A typical workflow for economists is to download a time series for some form of publication, either internal or external. The data are put into a chart, and the chart may be re-run hundreds of times before final publication. You do not want to query for the time series on every single chart run.)
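The save-locally workflow can be sketched as a thin caching wrapper. Everything below is hypothetical (the helper names and the series identifier are illustrative; `fetch` stands in for whatever actually queries DB.nomics):

```python
# A minimal local-cache sketch: download a series once, save it to disk,
# and serve the saved copy on every subsequent chart run.
import json
import os

def get_series(series_id, fetch, cache_dir="cache"):
    """Return cached data if present; otherwise call fetch() and save."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, series_id.replace("/", "_") + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    data = fetch(series_id)
    with open(path, "w") as f:
        json.dump(data, f)
    return data

# 'fetch' would wrap the actual DB.nomics query; a stub stands in here.
stub = lambda sid: {"id": sid, "values": [1.0, 2.0, 3.0]}
first = get_series("AMECO/ZUTN/EA19", stub)   # downloads and caches
second = get_series("AMECO/ZUTN/EA19", stub)  # served from disk
```

In practice you would also want a way to force a refresh (for example, deleting the cached file when new official data are released), but the principle is the same: one download per series, many chart runs.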

The next issue is the speed of updates. Based on an extremely limited survey, the data appeared to be reasonably up-to-date, although it could easily be a day or two behind official release. As long as you are aware of the data you are using, this is not too big a problem. In the worst case, you just need to patch in the latest value of the series before publication. The real problem is if series stop updating for months at random intervals, which forces users to continuously cross-check data every single time. (I saw this problem on other data providers that I prefer not to name.)
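The cross-checking burden can be reduced with a mechanical staleness test: flag any series whose newest observation is older than the lag you would expect for its frequency. A small illustrative sketch (the function and the lag threshold are my own assumptions, not anything DB.nomics provides):

```python
# Flag series whose last observation is suspiciously old, so that a
# human only has to inspect the flagged ones before publication.
from datetime import date, timedelta

def is_stale(last_obs, today, max_lag_days):
    """True if the newest observation is older than the allowed lag."""
    return (today - last_obs) > timedelta(days=max_lag_days)

# A monthly series last updated at end-September looks suspect by December,
# while one updated at end-November does not (45-day allowance assumed).
print(is_stale(date(2018, 9, 30), date(2018, 12, 5), max_lag_days=45))   # True
print(is_stale(date(2018, 11, 30), date(2018, 12, 5), max_lag_days=45))  # False
```

The appropriate `max_lag_days` depends on the series: a daily market series should trip the alarm after a few days, while an annual fiscal series might legitimately sit for months.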

The FRED database of the St. Louis Fed is probably easier to use, at least for working with individual time series. If you need to download tables of data (e.g., national accounts data sets), DB.nomics may be a much easier database to work with (using an API).

Funnily enough, I have been working on the interface to DB.nomics for a client, and do not actually have an interface for my own database. I hope to rectify that sooner or later, and my charts will have "downloaded via DB.nomics" in the caption.

To summarise the advantages (and the open questions):
  • An easy-to-use web interface that final users can use to track down data themselves for import.
  • A single API for the supported data providers.
  • Tools for data set replication.
  • Data table support.
  • Users will have to judge data download speed and capacity limits, as well as timeliness, based on their own needs.
Finally, I guess we should all give a big merci beaucoup to the French agencies involved.

(c) Brian Romanchuk 2018


  1. Hi Brian,

I'm from the DBnomics team. We want to thank you for your thorough review of DBnomics; your remarks are very interesting.
    We want to improve the user experience going forward, and this type of feedback really helps us.
    Your reservations are all valid, especially what you say about the speed of updates. We do our best to update data as soon as it is published by the source provider.
    What do you think would be a good way to reassure users that the data on DBnomics is as fresh as its source?
    We'd love to have your opinion on ways to keep improving DBnomics.

    Would you like to talk and share that with us? That would be really helpful.

    Here's my email:

    There's also our forum:
    As well as a chat room:


    Johan Richer

    1. Hello, thanks for the response. Right now, I’m doing research for a book, and typically cutting off data to December 2018, so being up-to-date is not something I would notice...

I had been sidetracked by other projects, and so have not been using DBnomics. That should be changing shortly, and I will provide more feedback then.

