Metadata Proposal for Docs #166
@nodejs/tsc @nodejs/documentation
We'll need to make sure this process doesn't add any work for releasers. (I don't think it would, but writing it here just in case.) This will also be a good opportunity, hopefully, to fix our version picker quirks, at least for future versions of Node.js.
I like this a lot, although of course we'll see what kinds of unforeseen practical problems (if any) arise in the course of implementation. I wonder if 20.x and forward is more realistic than 18.x and forward. I wouldn't complain if we got this working sooner than 20.x though. Can we try to determine which parts of this can be done incrementally and which need to happen all at once? I'm trying to understand how many steps are involved here. (And if it's one big step, that's OK, but of course we'll want to automate everything because keeping the docs in sync with the current version will be an annoying problem otherwise.)
Is the idea that this would work on the current nodejs.org as well as on nodejs.dev, or is the vision here that the nodejs.dev tech/build stack replaces what's on nodejs.org and that's a prerequisite for this to work?
In theory, it could also work on nodejs.org, which enters the topic of "The build process": we could outsource the tooling created on the nodejs.dev repo (which should be pretty much independent of whatever static-framework stuff you use). Yes, a few tweaks would be needed, but in the end, we could reuse the HTML generation part of the existing tooling.
I foresee 4 major steps:
That's it. Basically, the migration itself can be done en masse safely.
Indeed, I was trying to think about retroactively updating back to v18, as v18 is the first version of the API docs that is mostly Markdown-conforming. (I'm referring to the v18 git tree; also, on that tree it seems like all the doc pages follow the current doc specs, at least for the metadata, hence migrating at once would be seamless.)
Proposal Updates

I'm going to update the main proposal, adding the following missing sections:
Really great proposal! A lot of topics are covered, which is really great as this gives a good overview of everything that will require some work. Just a few questions:
Following @Trott's comments, I would agree that v20 would be the best time to have it. It will be short notice for the other versions before that. But do we want to provide retroactive docs for versions before v20? If yes, which versions? Should we have all the LTS lines covered? A lot of questions from my side :)
As I mentioned before, the building tools will allow you to build just a subset of files if you want. I don't think HTML, PDF and JSON generation should be part of the core of the tooling, but they could be added on top of it, such as:

```js
import docTooling from '...';

const result = docTooling.generateDocs();

return myPdfLibrary...
```

We could add all kinds of output generation on top, but the core tooling is responsible for creating a JavaScript object tree with the "metadata" and content aggregated. Initially, the idea is for it to be a JSX Buffer (MDX), but we could also just return the result as a JavaScript object with the metadata and content, and then have a plugin that generates MDX, as we would have, for example, for HTML, PDF, JSON... E.g. (of the object) for the `promises` example:

```json
{
  "promises": {
    ... all the metadata fields,
    "details": "the content from the Markdown file"
  }
}
```
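The plugin idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual tooling API: names like `generateDocs`, `jsonPlugin` and `htmlPlugin` are invented for this example.

```javascript
// Hypothetical sketch: the core tooling returns a plain object tree with
// metadata and content aggregated; output generators are layered on top.
function generateDocs() {
  // In the real tooling this would be aggregated from the YAML/Markdown files.
  return {
    promises: {
      name: 'promises',
      stability: 2,
      details: 'the content from the Markdown file',
    },
  };
}

// An output plugin is just a function from the object tree to a string.
const jsonPlugin = (tree) => JSON.stringify(tree, null, 2);
const htmlPlugin = (tree) =>
  Object.values(tree)
    .map((mod) => `<section id="${mod.name}">${mod.details}</section>`)
    .join('\n');

const tree = generateDocs();
const json = jsonPlugin(tree);
const html = htmlPlugin(tree);
```

The point of the design is that the core stays format-agnostic: adding a PDF or MDX output would mean adding one more small plugin function, not touching the aggregation logic.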
This is not a responsibility for this proposal.
YAML is more accessible to write than JSON and easier to read; also, there is less overhead during the transition period. JSON is just a JavaScript object notation and is not really human-friendly (to a certain point) (IMHO).
If it is not compliant, it wouldn't even build (it would give an error), but this should not be a responsibility of the tooling; it could be part of the build process by using tools such as Remark and ESLint, for example.
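As a sketch of what such a compliance step could look like (the required-field list here is hypothetical, not the actual schema), the build could simply refuse to proceed on invalid metadata:

```javascript
// Hypothetical sketch of a metadata compliance check run before the build.
// The required-field list is illustrative only.
const REQUIRED_FIELDS = ['name', 'import', 'stability'];

function validateMetadata(metadata) {
  const errors = [];
  for (const field of REQUIRED_FIELDS) {
    if (!(field in metadata)) {
      errors.push(`missing required field: ${field}`);
    }
  }
  return errors;
}

const ok = validateMetadata({
  name: 'promises',
  import: 'node:fs/promises',
  stability: 2,
});
const bad = validateMetadata({ name: 'promises' });
```

A real implementation would more likely validate against a proper schema (e.g. via a JSON Schema validator) as part of the lint step, but the failure mode is the same: non-compliant metadata stops the build.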
I think that's debatable; YAML can be very hard for humans as well (e.g. multiline strings are non-intuitive, the type guessing makes it so that sometimes one mistakes a string for a number, etc.). Other markup languages, such as TOML or JSON, do not have those problems. I'm not saying those are deal breakers for using YAML, or that we should not consider YAML for this use case, but I think we should not disregard the problems of that syntax.
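For illustration, here are the two pitfalls mentioned above in concrete (hypothetical) YAML; the exact behavior depends on whether the parser follows YAML 1.1 or 1.2 tag resolution:

```yaml
# Type guessing: unquoted scalars may not be the type you expect.
version: 1.10   # resolved as the number 1.1, not the string "1.10"
country: NO     # YAML 1.1 parsers resolve this as boolean false (the "Norway problem")

# Multiline strings: block style subtly changes the result.
details: >
  Folded style joins
  these lines with spaces.
summary: |
  Literal style keeps
  the line breaks.
```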
Gladly, none of those apply to our schema 😛
Every markup language has its pros and cons. I just personally (please take it with a grain of salt) believe that, in this case, the pros of using YAML win out.
Thanks for the comprehensive proposal! I think this
and will definitely help me understand/consume what you are suggesting.
It seems like the "move the YAML to a separate file" part can happen pretty much at any time as long as someone is willing to update the relevant tooling. Would it be beneficial to do this right away so that there's one less structural change to make the rest of this proposal happen? |
Hmm, the way the YAML is structured right now in the Markdown, extracting it would possibly have no benefits. At least to a certain degree, the proposed YAML structure needs to be implemented first. I also think I got tasked with making a demo repository with example contents 🤔
@ovflowd we had discussed an example of what the directory would look like for a single API; is that what you meant about a demo repository with example contents?
Yup, pretty much!
I had a meeting with @mhdawson, and here's the execution plan for this proposal:
Original source: https://docs.google.com/document/d/1pRa7mqfW79Hc_gDoQCmjjVZ_q9dyc2i7spUzpZ1CW5k |
@mhdawson I'm going to proceed with the demo (example) (mentioned here #166 (comment)) very possibly during December. |
Following the discussion during the last next-10 meeting, it could be great to create another meeting / discussion channel and keep only a status update during the next-10 meeting.
@ovflowd it's been a few months, so I don't completely remember the context, but I think I meant to ask whether we could be more specific than just "object": instead, an object with a list of properties, each of which is of type X.
@ovflowd I've added the tsc-agenda label. The next meeting is at 9 ET on Wednesday, June 14th, so if you can make that time we can plan to have you present an update then.
I'd say that's what the types are about: to describe the actual methods of each class and their properties? 🤔 The example above is just a sample snippet taken from one of our API docs.
Also, thanks @mhdawson, I'll attempt to attend this week's TSC meeting then. (June 14th)
cc @nodejs/tsc for anyone that didn't join the meeting today, to give a last round of feedback, as in the meeting we agreed to move this proposal to its next stage:
Please 👎 if you still have strong disagreements with the proposal (please read the latest comments, as the body of the issue has not been updated yet).
I'll leave one more day to see if we get any rejection, but it seems that so far the TSC is OK with this. If we don't get any rejection by the end of Saturday (UTC), this proposal moves to its next step, as mentioned in the document above.
@ovflowd Do you consider the issue description to be up-to-date? I'm asking because in your previous comment you said it was not updated yet, but it still hasn't been updated according to GitHub. I'd like to be sure I review the right version of the proposal. |
Hey @targos, as I mentioned, the issue body/description is not updated. The next step is to update it and to update the demos.
On a side note, any TSC member can still object to the proposal during the next stages, just like any other collaborator. We are only looking for consensus that the proposal is ready to be discussed with the broader collaborator base at this point :) |
Exactly as Tobias described 🙃 |
Removing agenda tag at suggestion of Claudio, will re-add once there is something to discuss again. |
cc @nodejs/next-10 as the Node.js Website Redesign is virtually done, we're now focusing on transforming it into a Monorepo. Then I'll start working on the fundamental redesign of the API Docs Website. This would also include a revamp of the current build tooling. Note that these changes won't touch any of the source Markdown files. At the moment, the changes mentioned above will be an intermediate step that would allow us to implement the "Metadata Proposal for Docs". The stages at the moment are:
FYI: This Description is Outdated! (Needs update)
At our Collaborator Summit 2022 edition, we discussed a series of proposed changes to the current way we structure the metadata of our API docs. This proposal will eventually deprecate specific changes proposed here.
Within this issue, the proposal will be referred to as the "API metadata proposal".
The API Metadata Proposal
Proposal Demo: https://github.com/ovflowd/node-doc-proposal
Introduction
What is this proposal about? Our API docs currently face a few issues, ranging from maintainability and organization to the extra tooling required to make them work. These are namely the following:

- Relying on custom tooling rather than an established ecosystem such as `unified`, making it harder to debug, update or change how things are done

There are many other issues within the current API docs, from non-standard conventions to ensuring that rules are appropriately applied, from maintaining those files to creating sustainable docs that are inclusive for newcomers and well detailed.
The Proposal
This proposal, at its core, boils down to 4 simple changes; for example, `doc/api/modules/fs/promises.metadata.yml` has `doc/api/modules/fs/promises.en.content.md` as its content counterpart.
Re-structuring the existing file directory
In this proposal, the tree of files gets updated by adopting a node approach (pun intended) for how we structure the files of our API docs and how we name them.
Notably, these are the significant changes:

- All modules will reside within `modules`; globals will, for example, reside within `globals`
- Miscellaneous pages would reside within a `misc` folder, but this is open for debate as this is not a crucial point
- The name of each directory within `modules` is the name of the `module` (top-level) import. For example, "File Systems" would be "`fs`", resulting in `doc/api/modules/fs`
- `node:fs/promises` would be `doc/api/modules/node/fs/promises`, e.g., `doc/api/modules/node/fs/promises/file-handle.yaml`, whereas for the `promises` import itself, it would be `doc/api/modules/node/fs/promises.yaml`
- Note that in the first case `promises` is a folder and in the second a YAML file; that's because we're following a Node approach, just like a Binary-Tree

Accomplishing this change
This can be quickly done by an automated script that breaks down the existing files and generates the new ones. Using a script for tree shaking and creating this node approach would, in the best scenario, work for all the current files existing in our `doc/api` folder and, in the worst-case scenario, for 98% of the files, based on the consistency of adoption and how closely modules follow these patterns.

Extracting the metadata
As mentioned before, the Markdown files should be free of the actual metadata, only containing the description, introduction (when needed), examples (both for CJS and MJS), more in-depth details of when a class/method should be used, and external references that might be useful.
Extracting the metadata allows our contributors and maintainers to focus on writing quality documentation and not get lost in the specificities of the metadata.
What happens with the extracted metadata?
It will be added to a dedicated YAML file containing all the metadata of a particular class, for example. (We created a new tooling infrastructure that would facilitate getting this done.)
The metadata structure will be explained in another section below.
The extraction and categorization process can be automated for all modules and classes, reducing (and erasing) the manual work needed to adopt this proposal.
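As a sketch of what the automated extraction could look like, assuming the current convention in the Node.js docs where metadata lives in `<!-- YAML ... -->` comment blocks inside the Markdown (the helper name and sample document are hypothetical):

```javascript
// Hypothetical sketch: pull the embedded `<!-- YAML ... -->` blocks out of a
// Markdown document so they can be written to a dedicated metadata file.
function extractYamlBlocks(markdown) {
  const blocks = [];
  const pattern = /<!--\s*YAML\n([\s\S]*?)-->/g;
  let match;
  while ((match = pattern.exec(markdown)) !== null) {
    blocks.push(match[1].trim());
  }
  // Return the metadata blocks and the Markdown with them removed.
  const content = markdown.replace(pattern, '').trim();
  return { blocks, content };
}

const doc = [
  '## fsPromises.readFile(path)',
  '<!-- YAML',
  'added: v10.0.0',
  '-->',
  'Reads the entire contents of a file.',
].join('\n');

const { blocks, content } = extractYamlBlocks(doc);
```

The extracted blocks would then be parsed as YAML, merged per class/module, and written to the new `*.yaml` files, while the cleaned `content` becomes the new Markdown file.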
Enforcing the Adoption of best practices
The actual content of the Markdown files will be "enforced" by Documentation reviewers and WGs for specific Node.js parts, possibly through the adoption of this PR.
The Metadata (YAML) schema
Similarly to the existing YAML schema, it would be structured like this:
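Since the original snippet is not reproduced here, the following is a hypothetical sketch of such a metadata file, assembled from the field names in the specification table in this proposal; all values are invented for illustration:

```yaml
# Hypothetical example of doc/api/modules/node/fs/promises.yaml,
# using only field names from the specification table; values are invented.
name: promises
import: node:fs/promises
stability: stable          # Enum; possible values still to be discussed
tags: fs.promises.tags     # Lang ID resolved via the i18n files
source: lib/internal/fs/promises.js
history:
  - type: added
    details: fs.promises.history.added
    versions:
      - "v10.0.0"
methods:
  - name: readFile
    stability: stable
    history: []
    returns:
      - type: Promise
        details: fs.promises.readFile.returns
    params:
      - name: path
        optional: false
        types:
          - type: string
            details: fs.promises.readFile.params.path
```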
The structure above makes it easy to organise the metadata of each method available within a Class and to quickly describe the types, return types, parameters and history of a method, a Class, or anything related.
I18n and ICU on YAML files
The structure is also i18n-friendly, as precise text details that should not be defined within the Markdown file can be easily referenced using the ICU format. These details can be accessed in files that sit at the same level as a specific module. For the example above, `doc/api/modules/node/fs/promises.en.i18n.json` contains entries that follow the ICU format, such as:

Specification Table
The table below demonstrates the entire length of the proposed YAML schema.
Note.: All the properties of type `Enum` will have their possible values discussed in the future, as this is just a high-level specification proposal.

Top Level Properties

| Property | Type |
| --- | --- |
| `name` | String |
| `import` | String |
| `stability` | Enum |
| `tags` | Lang ID |
| `history` | Array&lt;History&gt; |
| `methods` | Array&lt;Method&gt; |
| `constants` | Array&lt;Constant&gt; |
| `source` | String |

History

| Property | Type |
| --- | --- |
| `type` | Enum |
| `pullRequest` | String |
| `issue` | String |
| `details` | Lang ID |
| `versions` | Array&lt;String&gt; |
| `when` | String |

Method

| Property | Type |
| --- | --- |
| `name` | String |
| `stability` | Enum |
| `tags` | Lang ID |
| `history` | Array&lt;History&gt; |
| `returns` | Array&lt;ReturnType\|Enum&gt; |
| `params` | Array&lt;MethodParam&gt; |

MethodParam

| Property | Type |
| --- | --- |
| `name` | String |
| `optional` | Boolean |
| `defaults` | Array&lt;ParameterDefault&gt; |
| `types` | Array&lt;ParameterType\|Enum&gt; |

ReturnType, ParameterType, ParameterDefault

| Property | Type |
| --- | --- |
| `details` | Lang ID |
| `type` | Enum |
Incorporating the Metadata within the Markdown files
As each Class has numerous methods (possibly constants) and more, the parser needs to know where to attach the data within the final generated result when, for example, building for the web.
This would be quickly done by using Markdown-compatible Heading IDs.
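For instance, assuming a heading-ID extension such as the `{#custom-id}` syntax supported by several Markdown tools (the proposal does not fix the exact syntax, so this is illustrative), an author could rewrite the visible heading text freely while its ID stays stable:

```markdown
<!-- The visible heading text can change; the ID is what gets mapped
     to the corresponding YAML entry. -->
## `fsPromises.readFile(path[, options])` {#fspromisesreadfile}
```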
The parser would map each YAML entry's `name` field to the associated Heading ID, allowing you to write the Heading text as you wish while still keeping the Heading ID intact.

Naming for Markdown files
To ensure that we have a 1:1 mapping between YAML and Markdown, the Markdown files should reside in the same folder as the YAML ones and have the same name, the only difference being that the Markdown files have the `.md` extension in lowercase and are suffixed by their language, e.g. `.en.md`.

Note.: By default, the Markdown files will default to the `.en.md` extension.

The Build Process
Generating the final result in a tangible, readable format for humans and IDEs is no easy feat.
The new tooling build process would consist of two different outputs:
Example of the file structure
An essential factor in easing the visualization of how this proposal would change the current folder structure is to show an example of how it would look with all the changes applied. The snippet below is an illustration of how it would look.
Note.: The root directory below would be `doc/api`.
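Since the original snippet is not reproduced here, the following is a hypothetical illustration of the resulting tree, inferred from the restructuring and naming rules earlier in this proposal (the exact files shown are invented):

```text
doc/api
├── globals
│   └── ...
└── modules
    └── node
        └── fs
            ├── promises
            │   ├── file-handle.yaml
            │   └── file-handle.en.md
            ├── promises.yaml
            ├── promises.en.md
            └── promises.en.i18n.json
```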
The Navigation (Markdown) Schema
Navigating through the API docs is as essential as displaying the content correctly. The idea here is to allow each `module` to define its own Navigation entries and then generate the whole Navigation by aggregating all the navigation files.

Book of Rules for the Navigation System
- Each level of the tree defines its own navigation file (`navigation.md`)
- The build tooling receives the top-level navigation entry point (`build-docs --navigation-entry=doc/api/v18/navigation.md`)
Note.: The Navigation source would be in Markdown, using a Markdown List format with a maximum of X indentation levels.
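As a hedged illustration (the entries and link targets below are invented, not part of the proposal), such a Markdown-list navigation source might look like:

```markdown
- [File System](modules/node/fs)
  - [Promises API](modules/node/fs/promises)
    - [Class: FileHandle](modules/node/fs/promises/file-handle)
- [Globals](globals)
```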
The Schema of Navigation
The code snippet below shows all examples of the Schema and how it would be generated in the end.
File: `doc/api/v18/en.navigation.md`

File: `doc/api/v18/modules/en.navigation.md`

File: `doc/api/v18/modules/fs/en.navigation.md`
Example output in Markdown
It is essential to mention that the final output of the Navigation would be Markdown, which the build tools can use to generate output in MDX, plain HTML, or JSON.
Conclusion
As explained before, the proposal has several benefits and would significantly add to our codebase. Knowing that the benefits range from tooling, build process, maintainability, and adoption to ease of documentation, translations, and even more, this proposal is fated to succeed! Also, all the items explained here can be automated, ensuring a smooth transition process.