Shows how items scraped from an example news site are saved to a SQL database via a Scrapy pipeline.
This includes custom stats as well as the collection of any SQL errors that occur during inserts. The complete stats of the spider's crawl are then pushed into the SQL database from the spider's closed method.
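A pipeline along these lines could look like the following sketch. The table and column names (`articles`, `title`, `url`) and the `sql/errors` stat key are illustrative assumptions, and `sqlite3` stands in for the MariaDB connector so the example is self-contained; with MariaDB you would connect via `mariadb.connect(...)` and use `%s` placeholders instead of `?`.

```python
import sqlite3


class SqlItemPipeline:
    """Stores scraped items in a SQL database and collects insert errors.

    Sketch only: uses sqlite3 so it runs anywhere. For MariaDB, swap the
    connection for mariadb.connect(...) and catch mariadb.Error instead.
    """

    def __init__(self, db_path="news.db"):
        self.db_path = db_path

    def open_spider(self, spider):
        self.conn = sqlite3.connect(self.db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS articles (title TEXT, url TEXT)"
        )
        self.sql_errors = []  # collected error messages, one per failed insert

    def process_item(self, item, spider):
        try:
            self.conn.execute(
                "INSERT INTO articles (title, url) VALUES (?, ?)",
                (item.get("title"), item.get("url")),
            )
        except sqlite3.Error as exc:
            # Record the error instead of aborting the crawl.
            self.sql_errors.append(str(exc))
        return item

    def close_spider(self, spider):
        self.conn.commit()
        # Push the custom counter into Scrapy's stats collector so it
        # shows up in the final crawl stats dictionary.
        spider.crawler.stats.set_value("sql/errors", len(self.sql_errors))
        self.conn.close()
```

The pipeline deliberately swallows SQL errors per item so that one bad row does not stop the crawl; the error count surfaces later through the stats.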
The stats are returned as a dictionary, which is serialized to JSON and then stored in the database using its JSON support.
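A minimal sketch of that step, assuming the spider calls a helper from its `closed` method with `self.crawler.stats.get_stats()` (Scrapy's API for the stats dictionary). Note that the stats dict contains `datetime` values such as `start_time`, so `json.dumps` needs a `default` handler. Again `sqlite3` stands in for MariaDB, which can store the payload in a native JSON column.

```python
import json
import sqlite3


def dump_stats(stats_dict, db_path="news.db"):
    """Serialize a crawl-stats dictionary to JSON and insert it into the DB.

    In the spider this would be called from closed() with
    self.crawler.stats.get_stats(). Stored as TEXT here; MariaDB can use
    a JSON column and query the payload natively.
    """
    # default=str converts non-serializable values (e.g. datetimes) to strings.
    payload = json.dumps(stats_dict, default=str)
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS crawl_stats (stats TEXT)")
    conn.execute("INSERT INTO crawl_stats (stats) VALUES (?)", (payload,))
    conn.commit()
    conn.close()
    return payload
```

In the spider, the hook would look roughly like `def closed(self, reason): dump_stats(self.crawler.stats.get_stats())`.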
The MariaDB documentation is essential reading on combining SQL and JSON. Other SQL databases offer comparable functionality and cover it in their own documentation.