Omnivore has announced its shutdown, so users have limited time to export their data! The team provides a short guide on exporting and converting your data to CSV (https://docs.omnivore.app/using/exporting.html), though unfortunately the label option did not work for me. Since Omnivore offers an export in JSON format, we can use the jq tool they mention to convert the data into a structured CSV, making it easier to import into other tools.
Disclaimer: This article was written with the help of AI for the command explanations, and after some experimenting with jq, the final command was also arrived at with ChatGPT.
I’ll walk you through converting Omnivore’s JSON export to CSV using jq, a powerful command-line JSON processor.
Prerequisites
- jq: A command-line JSON processor. If you don’t have it installed, you can download it from the jq website (https://jqlang.github.io/jq/download/).
- Bash terminal: This can be found on most Unix-based systems or via Git Bash on Windows.
Step 1: Export Your Data from Omnivore
Head to Omnivore's export tool and download your data in JSON format. These files will typically contain all your saved links, notes, and any tags associated with each item.
Save these JSON files in the folder where you’ll be working, as we’ll use jq to parse them.
Step 2: Set Up the jq Command to Convert JSON to CSV
Omnivore offers JSON as one of its export options, but to import your bookmarks into a service like Raindrop.io (which is what I chose next), we need a CSV file with a header row. Here’s how to set up jq to reshape Omnivore’s JSON data into a CSV structure.
Omnivore’s JSON typically has fields like url, title, labels (tags), and savedAt. We’ll extract these fields and add a header row for the CSV format.
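If you want to confirm exactly which fields your own export contains before converting, a quick jq query over the first entry will list them (this assumes the export is a top-level JSON array, which matches the command below):

jq '.[0] | keys' *.json

Any field name it prints can be added to the extraction array in the next step.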
Run This Command
Note that on Windows, I moved the jq.exe file to C:\Program Files\Git\usr\bin and added it to my PATH environment variables for Bash to recognise the command.
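To confirm Bash can now find jq, a quick version check helps:

jq --version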
In your terminal, navigate to the folder containing the JSON file and run the following command:
jq -r '["url", "title", "tags", "created"], (.[] | [.url, .title, (.labels | join(", ")), .savedAt]) | @csv' *.json > omnivore_data.csv
Command Breakdown:
- Headers: ["url", "title", "tags", "created"] defines the header row of the CSV file.
- Data Extraction: (.[] | [.url, .title, (.labels | join(", ")), .savedAt]) extracts each bookmark's url, title, labels (joined with commas for multiple tags), and savedAt date.
- @csv: Converts the extracted data to CSV format, handling the necessary quoting for fields with spaces or commas.
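To see that quoting behaviour in isolation, you can pipe a tiny row through @csv yourself (string fields are double-quoted, so embedded commas stay inside their column):

echo '[["hello, world","plain"]]' | jq -r '.[] | @csv'

This prints "hello, world","plain" on one line.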
This command will output a CSV file named omnivore_data.csv with your data neatly formatted. You can adjust the field list accordingly if you need other fields.
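One caveat from my experimenting: if an entry has no labels at all, .labels may be null and join will error out. A defensive variant of the same command (a sketch using jq's // operator to fall back to an empty array) is:

jq -r '["url", "title", "tags", "created"], (.[] | [.url, .title, ((.labels // []) | join(", ")), .savedAt]) | @csv' *.json > omnivore_data.csv

Also note that jq runs the filter once per input file, so passing several JSON files at once repeats the header row once per file; run the command per file or merge the files first if that matters for your import.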
Example Output:
The CSV file will look something like this:
url,title,tags,created
https://mantine.dev,"Mantine","react, UI","2024-08-04T22:32:24.347Z"
https://developer.mozilla.org,"MDN Web Docs","documentation, reference","2024-08-10T12:15:30.234Z"
Each line represents a bookmark entry with the following columns:
- url: The link’s URL.
- title: The title of the saved page.
- tags: Comma-separated tags for each link.
- created: The date the link was saved (in ISO 8601 format).
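Before importing, a quick sanity check on the result doesn't hurt (a rough check only: wc counts the header row too, and any multi-line field inside quotes will skew the number):

head -n 3 omnivore_data.csv
wc -l omnivore_data.csv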
Happy bookmarking!
Top comments (2)
I created an Omnivore export file through the Omnivore website. However, the file contains several thousand feed items that I never deleted. I am only interested in transferring my read it later items. Is there a way to take the export file and mass delete all feed items so only saved articles will appear in Readwise after the import?
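One possible approach, in the spirit of the conversion command above, is to filter the JSON before converting. This sketch assumes feed items can be recognised by a label (the label name "RSS" and the file name export.json here are hypothetical; inspect your export to find the real marker) and keeps only the entries without it:

jq 'map(select((.labels // []) | index("RSS") | not))' export.json > saved_only.json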
I just created a converter from Omnivore JSON exports to HTML bookmarks, as many applications can import browser bookmarks. It works entirely in the browser, with no upload.
omnivore2bookmarks.surge.sh/