Software citations now available in Zenodo

by Lars Holm Nielsen on January 10, 2019

We are proud to announce the release of enhancements that significantly facilitate scientific software citation and discovery. These represent the successful outcome of the Asclepias project, funded by the Alfred P. Sloan Foundation and involving the American Astronomical Society, the NASA ADS bibliographic index, and the Zenodo repository.

NASA Astrophysics Data System

The NASA ADS now extracts and indexes cited software repositories published with the DataCite registry, making them discoverable through its platform and resulting in new metrics for software use and reuse in astronomical research.
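To make this concrete, software records can be found through the same search API that ADS exposes for literature. The sketch below is illustrative, not from the post: it builds a query restricted to software records using the public ADS search endpoint; the author name and token are placeholders you would supply yourself.

```python
# Illustrative sketch: building an ADS search API query for software
# records. The endpoint and field names follow the public ADS API; the
# bearer token is a placeholder, not a real credential.
from urllib.parse import urlencode

ADS_SEARCH_URL = "https://api.adsabs.harvard.edu/v1/search/query"

def build_software_query(author, rows=25):
    """Build the URL and headers for an ADS query limited to software records."""
    params = {
        "q": f'author:"{author}" doctype:software',
        "fl": "bibcode,title,doi,citation_count",
        "rows": rows,
    }
    headers = {"Authorization": "Bearer <YOUR_ADS_TOKEN>"}  # placeholder token
    return f"{ADS_SEARCH_URL}?{urlencode(params)}", headers

url, headers = build_software_query("Astropy Collaboration")
print(url)
```

Sending this request with a valid token returns the matching software records, including their citation counts, in the usual ADS JSON response.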



Zenodo, using citation brokering software built for the Asclepias project, receives software citations from ADS and other sources, and displays them on the repository's software records.
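A citation event handled by such a broker can be pictured as a link between a citing paper and a cited software record. The payload below is a hypothetical sketch, loosely following the Scholix link schema; the DOIs and provider are made up for illustration and are not from the post.

```python
# Hypothetical, Scholix-style citation event: a paper (Source) cites a
# software record on Zenodo (Target). All identifiers are placeholders.
citation_event = {
    "RelationshipType": {"Name": "Cites"},
    "Source": {  # the citing paper
        "Identifier": {"ID": "10.3847/xxxx", "IDScheme": "doi"},
        "Type": {"Name": "literature"},
    },
    "Target": {  # the cited software record
        "Identifier": {"ID": "10.5281/zenodo.0000000", "IDScheme": "doi"},
        "Type": {"Name": "software"},
    },
    "LinkProvider": [{"Name": "NASA ADS"}],
    "LinkPublicationDate": "2019-01-10",
}

def is_software_citation(event):
    """True if the event records a citation whose target is software."""
    return (event["RelationshipType"]["Name"] == "Cites"
            and event["Target"]["Type"]["Name"] == "software")

print(is_software_citation(citation_event))  # -> True
```

A broker receiving such links can aggregate them per target DOI, which is what makes the citation counts on software landing pages possible.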

Zenodo citations

Discover and cite software

These infrastructure updates provide important new tools for authors of research papers and developers of scientific software.

  • Authors can now find a wider selection of citable software records in ADS, which can be incorporated directly into their manuscripts using the ADS tools that astronomers rely upon.

  • Developers who preserve their software by creating persistent, citable entries, e.g. in Zenodo, can see their software's reuse metrics directly on the record's landing page.

  • Astronomers, who are often both authors and developers, can use ADS to track all of their publication and software research outputs.

Tracking citations

ADS and Zenodo rely on a combination of full-text content from publishers and metadata from the DataCite and Crossref registries to provide these services. High-fidelity indexing and tracking of software objects in the literature thus depends on careful, intentional work by developers, authors, and publishers when archiving and citing these digital objects. With these enhancements, ADS and Zenodo now provide citation minting and tracking capabilities that comply with the recommendations of the FORCE11 Software Citation Implementation Working Group.


New (and old) licenses are here!

by Alex Ioannidis on November 22, 2018

Today, we're pleased to announce the extension of the available selection of licenses that you can apply to your Zenodo uploads. The extra licenses include older and newer versions of existing licenses, as well as completely new entries.

New licenses

Our database now contains more than 400 licenses that you can search for and select when you create your upload. These new licenses come from an additional source that we now harvest for license metadata.

If you are interested in a machine-readable list of all the licenses, you can retrieve one via our Licenses REST API. If you cannot find the license you want for your new upload, please drop us a message via our support form.
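Consuming that API can be sketched as follows. This is an illustrative example: the `GET /api/licenses` endpoint is the one mentioned above, but the exact shape of the response envelope (`hits.hits`) is an assumption based on Zenodo's other list endpoints, and the sample payload at the bottom is made up.

```python
# Sketch of consuming the Zenodo Licenses REST API. The response
# envelope ("hits" -> "hits" -> records with an "id") is an assumption,
# not guaranteed by the post.
import json
from urllib.request import urlopen

LICENSES_URL = "https://zenodo.org/api/licenses"

def extract_license_ids(page):
    """Return the license identifiers from one page of API results."""
    return [hit["id"] for hit in page["hits"]["hits"]]

def fetch_license_ids(query=""):
    """Fetch one page of licenses matching `query` (performs a network call)."""
    with urlopen(f"{LICENSES_URL}?q={query}") as resp:
        return extract_license_ids(json.load(resp))

# Offline example against a response shaped like the assumed envelope:
sample = {"hits": {"hits": [{"id": "MIT"}, {"id": "GPL-3.0-or-later"}]}}
print(extract_license_ids(sample))  # -> ['MIT', 'GPL-3.0-or-later']
```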

Major database incident

by Lars Holm Nielsen on August 21, 2018

What happened?

On Friday August 17 at 11:45 UTC the Zenodo database crashed because a volume on the database server filled up. During the automatic restart the database crashed again, corrupting tables, which proved extremely difficult to recover from. It took three senior engineers, working around the clock for close to 2 days and 12 hours, to bring Zenodo back online on Sunday August 19 at 23:30 UTC.

How did it happen?

The root cause was an import of 2 million new grants into our grants database, which generated a very high insertion rate. The database server has built-in protective mechanisms that automatically extend the disk volumes when they approach being full, but due to the exceptionally high insertion rate these mechanisms did not fire fast enough. The full disk caused the database to crash, after which it automatically restarted and tried to recover. During this recovery phase the database crashed again, corrupting parts of the database.
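The kind of threshold check behind such an auto-extend mechanism can be sketched as below. This is purely illustrative and not Zenodo's actual tooling: if free space on the database volume drops under a margin, an extension is requested. Under a burst of inserts, the interval between two checks can be enough to fill the disk before the extension fires, which is the failure mode described above.

```python
# Illustrative sketch (not Zenodo's actual mechanism): decide whether a
# volume needs extending based on its remaining free-space fraction.
import shutil

def needs_extension(path, free_fraction_threshold=0.10):
    """True if the volume holding `path` has less than the threshold free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total < free_fraction_threshold

# With a threshold of 0, no volume can ever be flagged as too full:
print(needs_extension("/", free_fraction_threshold=0.0))  # -> False
```

A real mechanism would run this check on a timer; the incident shows why the check interval and the headroom margin must be sized for the worst-case write rate, not the average one.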

Is it fixed?

Through hours of manual reconstruction work we have recovered the entire database up to 20 minutes before the crash. We are still working on recovering the remaining 20 minutes from other sources and expect to finish recovery of the full database by the end of the week.

Is my record affected?

Only a handful of records submitted in the 20 minutes before the crash are affected. We have temporarily disabled these records until we have been able to thoroughly verify their integrity. If your record has been disabled, you'll see a page similar to this one.

Why was the downtime so long?

Our prime concern is to avoid data loss, so we chose the harder, longer recovery path in order to minimise data loss rather than maximise uptime. We currently back up our entire database every 12 hours to disk and every week to tape. The latest backup prior to the incident was at 05:00 UTC, so restoring it would have meant more than 6 hours of data loss. We therefore decided instead to repair the corrupted parts of the database, causing significantly longer downtime than a restore from backup would have. The repair was possible because we could re-validate the database against our search index and web server access logs.

Were my files at risk?

No. The files (the payload of every record) are stored outside the database, which only stores the records' metadata. Files are stored in multiple copies on multiple servers before a new record is completed in the database. However, accessing and serving the files requires the database record that points to them. In addition, we write submission information packages (SIPs) to disk which contain both the metadata and the uploaded files; however, these SIPs are not written immediately, and thus do not help recover data from just before a crash.

What did we learn?

It is said that every challenge we face makes us stronger! Even though we have numerous protection and self-healing mechanisms at each level of the stack, they were not enough and did not protect against this combination of failures. We are taking this incident very seriously and are thoroughly investigating what happened, what can be improved to guard against more failure modes, and how to make recovery quicker. One option we are looking at is a replica configuration that would isolate crash corruption to a single node.