
csv2api – Parse CSV with filtering and sending to API


The csv2api parser reads a CSV file with raw data, filters the records, identifies the fields to be changed, and sends a request to update the data to the specified endpoint of your REST API.

All actions take place according to the settings in the configuration file.

Features:

  • 100% free and open source.
  • Works with any size of input CSV file.
  • Use any configuration file format: JSON, YAML, TOML, or HCL (Terraform).
  • Ability to keep the configuration file (and, in the future, the input data file) on a remote server with HTTP access; it will be read as if it were in a folder on your local machine.
  • Configure any request body for the REST API endpoint directly in the configuration file (in a clear declarative style).
  • Extensive options for filtering incoming data from a CSV file.
  • Provides extensive capabilities for constructing multiple filters to accurately perform actions on selected fields.

⚑️ Quick start

First, download and install Go. Version 1.20 (or higher) is required.

Installation is done by using the go install command:

```bash
go install github.com/koddr/csv2api@latest
```

πŸ’‘ Note: See the repository's Releases page if you want to download a ready-made deb, rpm, apk, or Arch Linux package.

GNU/Linux and macOS users can also install it via Homebrew:

```bash
# Tap a new formula:
brew tap koddr/tap

# Installation:
brew install koddr/tap/csv2api
```

Next, run csv2api with the -i option to generate the initial config.yml and data.csv files in the current directory:

```bash
csv2api -i
```

Prepare your config and input data files:

  • In your config.yml:
    • Make sure that the first column name in the columns_order section is a primary key (PK) for your process.
    • Set up your API (base URL, token, endpoints, etc) in the api section.
    • Set up the filter for your fields in the filter_columns section.
    • Set up fields to be updated in the update_fields section.
  • In your input data.csv:
    • Make sure that the first line of your CSV file contains the correct field names.
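To make the steps above concrete, here is a purely illustrative sketch of what a config.yml could look like. Only the section names (columns_order, api, filter_columns, update_fields) come from this README; the keys, nesting, and values inside each section are hypothetical, so check the Wiki for the real structure:

```yaml
# Illustrative sketch only – the actual schema is documented in the project Wiki.
columns_order:
  - id        # first column name: the primary key (PK) for the process
  - status
  - tags

api:
  base_url: https://api.example.com   # hypothetical base URL
  token: YOUR_API_TOKEN
  endpoint: /transactions/update

filter_columns:
  - name: status                      # keep only rows matching this filter
    value: paid

update_fields:
  - name: tags                        # field to update via the REST API
    value: [paid]
```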

πŸ’‘ Note: See the repository's Wiki page to understand the structure of the config and input data files.

And now, run csv2api with options:

```bash
csv2api \
  -c /path/to/config.yml \
  -d /path/to/data.csv \
  -e CONFIG
```

Done! πŸŽ‰ Your transactions have been processed:

```text
Hello and welcome to csv2api! πŸ‘‹

– According to the settings in './config.yml', 5 transactions were filtered out of 10 to start the process.
– Only 3 transactions got into the final set of actions to be taken... Please wait!

 βœ“ Field 'tags' with values '[paid]' in the transaction '2' has been successfully updated (HTTP 200)
 βœ“ Field 'tags' with values '[paid]' in the transaction '8' has been successfully updated (HTTP 200)
 βœ“ Field 'tags' with values '[paid]' in the transaction '10' has been successfully updated (HTTP 200)

– Saving filtered transactions to CSV file './filtered-1686993960.csv' in the current dir... OK!

All done! πŸŽ‰ Time elapsed: 0.11s
```

🐳 Docker-way to quick start

If you don't want to physically install csv2api on your system, feel free to use our official Docker image and run it from an isolated container:

```bash
docker run --rm -it -v ${PWD}:${PWD} -w ${PWD} koddr/csv2api:latest [OPTIONS]
```

🧩 Options

| Option | Description                                                                     | Is required? | Default value |
| ------ | ------------------------------------------------------------------------------- | ------------ | ------------- |
| `-i`   | set to generate initial config (`config.yml`) and example data (`data.csv`) files | no           | `false`       |
| `-c`   | set path to your config file                                                    | yes          | `""`          |
| `-d`   | set path to your CSV file with input data                                       | yes          | `""`          |
| `-e`   | set prefix used in your environment variables                                   | no           | `CONFIG`      |

✨ Use case

In my work, I often have to work with large amounts of raw data in CSV format.

Usually it goes like this:

  1. Unload a file with data from one system.
  2. Clean up this file, removing duplicates and unnecessary columns.
  3. Make some changes in some columns of some rows.
  4. Map the processed lines from the CSV file to the database structure fields.
  5. Write a function to walk through the CSV file and form the request body.
  6. Write an HTTP client that will send requests to the REST API endpoint.
  7. Send the prepared request body to the REST API endpoint in another system for the specified DB records.

And that's not counting the fact that the target REST API (where the request with the processed data is sent) does not always expect the same parameters in the request body.

To ease this whole process, I created this parser, which takes any data file as input, performs the conversions and filtering, and is controlled by a single configuration file.

Just prepare the data, set the configuration to your liking, run csv2api and wait a bit! Yes, it's that simple.

πŸ† A win-win cooperation

And now, I invite you to participate in this project! Let's work together to create the most useful tool for developers on the web today.

  • Issues: ask questions and submit your features.
  • Pull requests: send your improvements to the current codebase.

Your PRs & issues are welcome! Thank you 😘

⚠️ License

csv2api is free and open-source software licensed under the Apache 2.0 License, created and supported with 🩡 for people and robots by Vic Shóstak.