For an active platform that is scraped multiple times, it's useful to know which entries have been manually adjusted or corrected, so that their metadata is not overwritten with a flawed or incomplete version.
An ideal solution, IMO, would be to store the raw scraped data separately from manually corrected/user-provided data. The two could then be merged every time an update is performed.
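A minimal sketch of what that merge could look like, assuming each entry is a JSON object keyed by a stable ID (the file names and function names here are hypothetical, not part of the project):

```python
import json

def merge_entry(scraped: dict, user: dict) -> dict:
    """Merge a freshly scraped entry with user-provided corrections.

    User-provided fields always win; scraped data only fills the gaps.
    """
    merged = dict(scraped)
    merged.update(user)  # user corrections take precedence
    return merged

def rebuild_database(scraped_path: str, overrides_path: str) -> dict:
    with open(scraped_path) as f:
        scraped = json.load(f)    # raw scraper output, regenerated on every run
    with open(overrides_path) as f:
        overrides = json.load(f)  # hand-maintained corrections, never overwritten

    return {
        entry_id: merge_entry(data, overrides.get(entry_id, {}))
        for entry_id, data in scraped.items()
    }
```

With this layout the scraper can freely regenerate its own file on every run, while the overrides file is only ever touched by hand.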
An alternative solution would be for each piece of metadata to record the source of its information, so that such unwanted overrides can be skipped.
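A per-field variant of that idea might look like the following sketch, where every value carries a source tag (the field layout and names are illustrative assumptions, not the actual schema):

```python
def apply_scraped_value(entry: dict, field: str, value, scraper: str) -> None:
    """Write a scraped value only if the field wasn't set manually."""
    current = entry.get(field)
    if current is not None and current.get("source") == "manual":
        return  # manually corrected: never override with scraped data
    entry[field] = {"value": value, "source": scraper}

# A manual correction survives a re-scrape, while missing fields get filled in.
entry = {"title": {"value": "Fixed Title", "source": "manual"}}
apply_scraped_value(entry, "title", "Scraped Title", "scraper-x")
apply_scraped_value(entry, "year", 1994, "scraper-x")
assert entry["title"]["value"] == "Fixed Title"
assert entry["year"]["value"] == 1994
```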
This is quite an interesting issue. However, I honestly fail to see a feasible way to implement this without disrupting the current pipelines in a major way.
E.g.:
We could add an "audit report" [1] property to the JSON schema that records every action taken on an entry, including which scraper generated it and how, so the generation process stays reproducible. On top of this "initial" step, one could append further records describing user interventions on those JSONs (see the sketch below).
At this point, though, I don't see how we can keep the JSONs human-editable as they are now.
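To make the audit-report idea concrete, here is a minimal sketch; the property name, record layout, and helper function are assumptions for illustration, not an existing part of the schema:

```python
from datetime import datetime, timezone

def record_action(entry: dict, actor: str, action: str, fields: list[str]) -> None:
    """Append one audit record describing who changed what, and how."""
    entry.setdefault("audit_report", []).append({
        "actor": actor,    # e.g. "scraper:example" or "user:alice"
        "action": action,  # e.g. "scrape", "manual-edit"
        "fields": fields,  # which properties the action touched
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

entry = {"title": "Some Game"}
record_action(entry, "scraper:example", "scrape", ["title"])
entry["title"] = "Some Game (corrected)"
record_action(entry, "user:alice", "manual-edit", ["title"])
# An updater could then skip any field whose most recent record is a manual edit.
```

The downside is exactly the one noted above: once every entry drags an audit trail around, the JSONs stop being comfortable to edit by hand.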