
Feature proposal: @include blocks #2456

Closed
mnemnion opened this issue Feb 19, 2024 · 10 comments

@mnemnion

Rationale

There are various circumstances and ways in which one might want to programmatically generate some Markdown. As discussed in #2455, the @eval block covers some of these cases, but with certain limitations.

A motivating example is auto-generating a @docs block for, e.g., all subtypes of an abstract type or the members of an enum. There are also cases where generating the Markdown might be expensive, such as benchmarking, where one might not want it to run every time the documentation is generated.

Proposal

Introduce an @include block, with one or more file paths. The semantics are that Documenter treats the content of those files as though that text appeared in that place in the document, with an extra newline between files if there's more than one.

Like so:

```@include
enum_docs.md
expensive_benchmark.md
```

I'd argue that the paths should be relative to the /docs folder rather than to the including file's path: this is easier to implement and (probably) clearer, with the minor advantage that @include blocks can be moved without changing the paths.

I'm not familiar enough with Documenter internals to suggest an implementation here, but it's surely simpler than anything that would involve changes to @eval. I would personally find it useful, and I'd guess I wouldn't be the only one to use it.

@fredrikekre
Member

Seems significantly easier and more customizable to just write your own preprocessor and call it in make.jl before makedocs?
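As a rough sketch of what that could look like (package, type, and file names below are placeholders, not anything Documenter provides):

```julia
# docs/make.jl -- rough sketch; MyPackage and AbstractThing are placeholders.
using Documenter
using MyPackage
using InteractiveUtils: subtypes

# Generate a @docs block listing every subtype of an abstract type, so the
# list never goes stale, and write it into docs/src/ before building.
gendir = joinpath(@__DIR__, "src", "generated")
mkpath(gendir)
open(joinpath(gendir, "subtype_docs.md"), "w") do io
    println(io, "```@docs")
    for T in subtypes(MyPackage.AbstractThing)
        println(io, "MyPackage.", nameof(T))
    end
    println(io, "```")
end

makedocs(sitename = "MyPackage.jl", pages = ["index.md", "generated/subtype_docs.md"])
```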

@mnemnion
Author

Easier? Nope, not seeing that. It's not so much the first splice, as it is finding the boundaries of the splice and splicing again.

Doable? Yeah, I could manage. Figure out some way to mark the start and end lines uniquely without affecting the render, generate the new version of the file, read in the old file, look for the line markers, cut, paste, write.
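Concretely, the splice step would look something like this; marker strings and paths are placeholders, and HTML comments are used as markers since they don't show up in the rendered page:

```julia
# Rough sketch of the marker-based splice described above.
function splice_between_markers(path, replacement;
                                start_marker = "<!-- GENERATED:BEGIN -->",
                                stop_marker  = "<!-- GENERATED:END -->")
    text = read(path, String)
    pre, rest = split(text, start_marker; limit = 2)  # everything before the start marker
    _, post   = split(rest, stop_marker; limit = 2)   # everything after the end marker
    write(path, string(pre, start_marker, '\n', replacement, '\n', stop_marker, post))
end

# e.g. replace the marked region of a handwritten page with generated Markdown
splice_between_markers("docs/src/manual.md", read("docs/generated/enum_docs.md", String))
```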

I think for most people, the workflow you're proposing here makes sense for an entire page, but not as much for part of a page. Compare the workflow I describe above with "have some Markdown in a file, put it in an @include block".

So yeah, I think it would make a useful feature, which is why I proposed it. Complementary to the @raw and @eval blocks, and Julian enough: it's what include does in the language.

I'm not a contributor, just some fella who files issues, and that's likely to be all that I have time for at this stage. It would be an enhancement, is all.

@mnemnion
Author

There's also an ergonomic issue with splicing generated text literally into a handwritten document. If the user has opened that document, they probably did so to edit it. It's all too easy not to notice that some part of the text is inside the splice and to make changes to it. Then the build system silently clobbers the edits. If they didn't commit or keep the file open in the editor, the changes are just gone.

@goerz
Member

goerz commented Feb 20, 2024

I'm also not quite convinced that @include would be a widely useful enough feature to have in Documenter itself. However, it could be implemented as a plugin. It's not that hard to write plugins that handle custom

```@something
```

blocks, where something is anything of your choosing (like include). The Documenter internals can take some time getting used to, but you can probably get quite far by following the examples of existing plugins that define custom blocks, like DocumenterMermaid or DocumenterCitations (or hit us up on Slack for some pointers).

You might also be able to use the plugin approach to solve your original problem: instead of generating some Markdown and then having a plugin that includes the generated file, have a plugin for some custom project-specific block that converts directly to the relevant Markdown AST.
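To give a sense of the shape of such a plugin, here is a rough skeleton following the pattern of DocumenterMermaid. The exact Selectors/Expanders names, the order value, and the MarkdownAST calls are from memory and may differ between Documenter versions, so treat it as a starting point rather than a working implementation:

```julia
# Rough skeleton of a custom-block plugin, modeled on DocumenterMermaid.
# NOTE: the Selectors/Expanders names, the `order` value, and the MarkdownAST
# calls are assumptions that may not match your Documenter version exactly.
module DocumenterInclude

using Documenter
using Documenter: Selectors, Expanders
import Markdown
import MarkdownAST

abstract type IncludeBlocks <: Expanders.ExpanderPipeline end

# Hook into the expander pipeline early enough that blocks coming from the
# included files (e.g. @docs) are still expanded by the built-in expanders;
# the right `order` value depends on the built-in pipeline.
Selectors.order(::Type{IncludeBlocks}) = 1.0
Selectors.matcher(::Type{IncludeBlocks}, node, page, doc) =
    Expanders.iscode(node, r"^@include")

function Selectors.runner(::Type{IncludeBlocks}, node, page, doc)
    # The body of the ```@include block is one path per line, here taken to be
    # relative to the docs/ root as proposed above.
    for path in split(node.element.code, '\n'; keepempty = false)
        text = read(joinpath(doc.user.root, strip(path)), String)
        included = convert(MarkdownAST.Node, Markdown.parse(text))
        for child in collect(included.children)
            MarkdownAST.unlink!(child)               # detach from the temporary tree
            MarkdownAST.insert_before!(node, child)  # splice in front of the @include block
        end
    end
    MarkdownAST.unlink!(node)  # drop the original @include code block
end

end
```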

@mnemnion
Author

I might give a plugin a shot. If they don't run early enough in the pipeline to transform an @include into a @docs block, there isn't a lot of point; do you happen to know whether they do?

I do think "include this piece of documentation in my documentation" is more of a core feature than a plugin feature, but working code is always better than some guy's opinion. Having other priorities for the indefinite future is fine, as far as I'm concerned. A feature proposal is not a demand, and I hope it didn't come across that way.

@goerz
Member

goerz commented Feb 20, 2024

If they don't run early enough in the pipeline to transform an @include into a @docs block, there isn't a lot of point; do you happen to know whether they do?

I actually think that's possible. You can certainly choose the order of expanders. A plugin can hook itself into an arbitrary position in the expander pipeline (these "pipelines" are based on the Selectors feature in Documenter; it's a bit idiosyncratic, but once you wrap your head around it, it works fine). And yes, the output of one pipeline step can still be processed by a subsequent pipeline step. It might require diving a bit into the internals, though.

If all of this is still just in response to #2454, though, I think you might be going a little overboard with this. It would be better to work on a PR that just fixes the @docs block. It definitely should pick up on dynamic docstrings automatically, so if you or someone else could fix that bug, there might be no need for this kind of customization.

P.S.: the built-in expanders are in expander_pipeline.jl; see also the docs for ExpanderPipeline. You'll see that plugins like DocumenterMermaid hook into this pipeline.

@mnemnion
Author

I actually think that's possible. You can certainly choose the order of expanders.

Excellent, that does make it more likely that I'll actually write this.

If all of this is still just in response to #2454, though, I think you might be going a little overboard with this.

It isn't, although you're right that #2454 is probably a reasonably self-contained bug which would make a good first PR. No promises, it's very much a time-permitting sort of thing, but I'm not at all opposed to balancing out all my issues with the occasional PR.

I have two features in my own documentation which would use @include. One is generating @docs lists of subtypes and enums; the latter would need dynamic documentation to be functional, but the former would work now. This would have a few advantages: I would never forget to add new subtypes to the docs list, the enum docs would stay in order when they get rearranged, and if I added a subtype or enum without a docstring, this would cause an error, prompting me to finish the job.

The other is that there are two tables which I want to share between the README and the docs. A Julia script which extracts them from the README and puts them in files, so I can @include them into the documentation, would be easy to write and maintain, whereas one which splices them in directly would be less so, for reasons I've already covered.

"Expensive benchmarks which I don't want to have to run every time I build the docs" is also on the roadmap. For those it would probably be feasible to generate the entire page from the script, but I would prefer to generate the tables and flamegraphs as self-contained files, so that the documentation is a Markdown file, not strings in an unrelated script.

@goerz
Member

goerz commented Feb 20, 2024

The other is that there are two tables which I want to share between the README and the docs. A Julia script which extracts them from the README and puts them in files

A plugin that specifically includes the main README would probably be welcome. That's a very common use case, and some projects have scripts that process the README into docs/src/index.md or something. Similarly for a CHANGELOG. Those types of scripts could very well be plugin-provided blocks, so that everyone doesn't have to roll their own solution.
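The minimal version of such a script is tiny; something like this in docs/make.jl (the paths are the usual defaults but still assumptions about a particular project layout, and the fixup step is just a placeholder):

```julia
# docs/make.jl -- copy the README into the docs as the landing page (sketch).
readme = read(joinpath(@__DIR__, "..", "README.md"), String)
# Project-specific fixups would go here, e.g. stripping badges or rewriting
# relative links so they work from within the generated site.
write(joinpath(@__DIR__, "src", "index.md"), readme)
```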

Expensive benchmarks which I don't want to have to run every time I build the docs

That's something I use Literate.jl for, which you might think of as a more powerful version of @eval. For benchmarks, I don't want to run them on CI, since CI nodes are underpowered and shared, but on my own workstation. I just run Literate.markdown with execute=true, and then commit the resulting .md file to the docs/src folder of my main projects so that it shows up in the documentation. This is a bit of a manual process, of course, so the benchmarks / expensive examples only get updated periodically (like for a new release), not every time the documentation builds.
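For the record, that workflow is roughly the following (the input script name is a placeholder):

```julia
# Run locally (not on CI), then commit the generated .md under docs/src.
using Literate
Literate.markdown(joinpath(@__DIR__, "benchmarks.jl"), joinpath(@__DIR__, "src");
                  execute = true)
```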

@goerz
Member

goerz commented Feb 20, 2024

Since this proposal probably won't be resolved in Documenter itself, I'm going to tentatively close the issue (anyone, feel free to reopen if we decide we do want this as a core feature).

If you end up doing a plugin for this, we can revisit whether it should be ported into Documenter as a general feature. And of course, don't hesitate to reach out with questions.

goerz closed this as completed Feb 20, 2024
@mortenpi
Member

This sounds like a duplicate of #499. I think doing it as a plugin in the first iteration sounds like the right way to go about it.
