Feature: incorporate pip-run as pip run #3971

Open
jaraco opened this issue Sep 16, 2016 · 66 comments
Labels
type: enhancement Improvements to functionality

Comments

@jaraco
Member

jaraco commented Sep 16, 2016

In an effort to obviate easy_install (including implicit invocations such as setup_requires directives), I've created rwt. If you're not familiar with rwt, take a look at that readme, which describes its purpose and gives some examples.

I've been using rwt in my daily workflows and I'm finding it quite powerful as a bootstrapping tool. However, it suffers from the same bootstrapping problem as setuptools and pip and virtualenv -- you have to install it before you can use it, which breaks 'one step' workflows and adds a step to the instructions you give to someone (install Python, install pip, install rwt, run this command).

I'd like to consider bundling rwt with pip as the pip run command. Since rwt is mostly a thin wrapper around pip and its parameters are passed directly to pip install, it makes sense as a pip command, and adding it as a command to pip would solve the bootstrapping issue. The functionality would also then be available during pip install, potentially eliminating the setup_requires reliance on easy_install (or at least supporting an easy_install-free option).

So I'm proposing two things:

  1. Bundle rwt as pip run and make it the pypa-endorsed mechanism for on-demand dependency resolution.
  2. In pip install when building a source dist, invoke setup.py using rwt.

In the short term, the pip run command can be used for bootstrapping or test runs or whatever ad-hoc requirements one might have.

> pip run requests
...
>>> import requests
>>> requests.get(...)

discarded approach

Once this functionality is more established and packagers can rely on all pip versions having it, they could eliminate the reliance on distutils altogether and use whatever dependencies they require to build their package. Consider the pbr use case. Instead of invoking setuptools.setup() in their setup.py, they do this:

__requires__ = ['pbr']

import pbr
pbr.setup()

Where pbr might have dependencies on setuptools or distutils and invoke that behavior, or perhaps use another build system. Additionally, pip might be able to eschew the responsibility of supplying setuptools. If a script wants to rely on setuptools, it can require that:

__requires__ = ['setuptools']

import setuptools
setuptools.setup(...)

That last bit (pip without setuptools bundled) makes my heart glow a little. Think it over, maybe try out rwt, and let me know what you think. I plan to put together a PR after addressing any concerns.

@RonnyPfannschmidt
Contributor

@jaraco broken link

@dholth
Member

dholth commented Sep 16, 2016

That's beautiful. I've long suspected that environments based on "what a particular console script requires" might be a great alternative to environments based on "paths relative to a copy of the Python executable", but you went and did it!

@dstufft
Member

dstufft commented Sep 16, 2016

So I don't have a really strong feeling about this. It doesn't feel like something I'd ever use personally, but I already have virtualenvwrapper, so when I want a temporary environment I just do mktmpenv, and whenever I deactivate or exit my shell that environment goes away automatically without needing any sort of runtime munging. This generally seems to work better for me than something like pip run would, because it's not unusual for me to do something like:

$ pip install requests
$ python
>>> import requests
>>>  # Realize I forgot I needed cachecontrol
$ pip install cachecontrol
>>> import requests, cachecontrol

Or something like that. With explicit temporary environment creation/deletion (via deactivate/exit) that is trivial to do. With rwt it seems like I'd have to go and edit my pip run command to do pip run requests cachecontrol and get a whole new environment again. It also works for things where I want a temporary script, since it sets my current directory to the temporary virtual environment so I can use that area as a scratch pad that gets automatically cleaned up.

It's entirely possible (likely even) that there's some workflow or use case that I don't personally use that would make this useful though.

However, I don't think it's something we'd want to recommend as a replacement for setup_requires. Obviously people can do whatever they want in their setup.py (which is how setup_requires works at all today). We already have PEP 518 to use as an alternative to setup_requires (it's just not implemented yet). Looking at the source code for rwt, it appears __requires__ must be a static list (which makes sense -- it needs to be readable without executing the file), but I think the UX of something that must be static sitting inside a setup.py is not great. People are going to expect to be able to do normal Python things with it, since it's Python syntax, and that's going to fail.

So I guess overall my suggestion about how to implement the feature would be to ditch the Python syntax and do something like:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Requires: setuptools>=1.0
# Requires: cffi>=1.0

import setuptools
setuptools.setup(...)

I think that makes it more clear that this is not Python syntax and you can't expect to do things like string interpolation or what have you inside of it.
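For illustration, here is a minimal sketch of how such comment headers could be read without importing or executing the file; the helper name and regex are assumptions, not an existing pip or rwt API:

import re
from pathlib import Path

# Hypothetical parser for "# Requires: <specifier>" header comments; it scans
# the file as text, so nothing in the script is executed.
REQUIRES_RE = re.compile(r"^#\s*Requires:\s*(.+?)\s*$")

def read_comment_requires(script_path):
    requirements = []
    for line in Path(script_path).read_text(encoding="utf-8").splitlines():
        match = REQUIRES_RE.match(line)
        if match:
            requirements.append(match.group(1))
    return requirements

# read_comment_requires("setup.py") -> ["setuptools>=1.0", "cffi>=1.0"]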

Given all of that though, I am -1 on recommending/using it for a setup_requires replacement and -0 on including the pip run feature, only because I struggle to think of a case where I'd personally use it.

@dholth
Member

dholth commented Sep 16, 2016

This might be a great way to implement PEP 518. Surely all the interesting stuff is unrelated to how the dependencies are read. Recall that you would parse a .toml and look for the ['build-system']['requires'] key.
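As a rough illustration (not what pip itself ships), reading those keys is only a few lines; this sketch assumes Python 3.11+ for the standard-library tomllib module:

import tomllib

def build_requires(pyproject_path="pyproject.toml"):
    # PEP 518 places the build dependencies under [build-system] "requires".
    with open(pyproject_path, "rb") as f:
        data = tomllib.load(f)
    return data.get("build-system", {}).get("requires", [])

# build_requires() -> e.g. ["setuptools", "wheel"]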

It looks like it would probably run into the "pip --target does not work on global pip in Debian" problem caused by their "--user is the default" patch.

I dread using programs that are written in Python even though I love developing programs in Python, and it's because of virtualenv. Virtualenv is for development. When you just want to run youtube-dl, there shouldn't be a python / ImportError / pip install cycle.

@jaraco
Member Author

jaraco commented Sep 16, 2016

I chose to use __requires__ because that's also the mechanism that pkg_resources uses to enforce that a given script meets its requirements. Of course, pkg_resources doesn't require that it be a static list, but I'd be more than willing to document clearly that limitation (which I do consider a feature, disallowing variation).
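A minimal sketch of the static-extraction idea, using the ast module; this is illustrative only and not necessarily how rwt implements it:

import ast

def extract_requires(source):
    # Parse, never execute: only a literal list assigned to __requires__ at
    # module level is accepted; anything dynamic raises ValueError.
    for node in ast.parse(source).body:
        if isinstance(node, ast.Assign):
            names = [t.id for t in node.targets if isinstance(t, ast.Name)]
            if "__requires__" in names:
                return ast.literal_eval(node.value)
    return []

print(extract_requires("__requires__ = ['pbr']\nimport pbr\npbr.setup()"))
# -> ['pbr']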

ditch the Python syntax and do something like

Ugh. I don't want to invent yet another DSL for package management, especially one that mixes MIME headers, shell comments, and requirement syntax. I want the declarations to be data and in a widely-accepted format (i.e. Python or JSON).

We already have PEP 518 to use as an alternative to setup_requires

I do recall having read PEP 518, but PEP 518 falls short of the goals of rwt and has a narrower focus. rwt seeks to solve a more general problem. And in fact, rwt may be compatible with the implementation for PEP 518. If pip can read the PEP 518 metadata file, rwt could still be used to create the temporary context in which those packages are made available with something like:

rwt --with-pep518-build dist-info/metadata.json -- setup.py install

Not that you would necessarily want to do it that way, but you could.

I struggle to think of a case where I'd personally use it

As seasoned developers, we're comfortable with tooling and don't mind having and maintaining environments with pip and setuptools and virtualenv and virtualenvwrapper and buildout and pbr and ... But when I'm working with a less experienced Python developer (maybe a developer on a C project, or maybe a project manager, or maybe a bug tracker where I don't know who might want to run it), I'd like a supported mechanism that gives them the ability to rapidly run a script or project with minimal fuss. I'd like to be able to give them a single command to run and have it work in a large number of environments. If I give instructions on installing dependencies, I have to refer them to documentation and they have to learn the caveats (access controls, system versus user environments, virtual environments including activation, and cleanup). This technique instead trades all of that complexity with a concise invocation. All they need is a minimally viable Python environment (with pip).

It's also important that this mechanism supports a single file, so that a script can be represented in a Gist or as a downloadable script.

Consider this example where I've replaced setup_requires and tests_require and pytest-runner with test requirements nicely defined and encapsulated, but separate from the project metadata. Of course, since Travis runs are themselves ephemeral, those features are more valuable for someone using the test as a recipe for running tests locally (without having to mimic Travis-CI).

That said, I'm okay with it not being something you would want to use. If you prefer for the state to accumulate in virtualenvs on your machine and virtualenvwrapper helps facilitate that, I'm all in favor of continuing to use those. I don't see rwt as ever obviating those use cases.

I am -1 on recommending/using it for a setup_requires replacement.

That's understandable. And I'm happy to defer that decision to the preferences of the pip maintainers. I present the idea here for consideration.

@jaraco
Member Author

jaraco commented Sep 16, 2016

It looks like it would probably run into the "pip --target does not work on global pip in Debian" problem caused by their "--user is the default" patch.

Yes, it does. I've found that I've had to manually install pip on Xenial systems to bypass the issue.

@pfmoore
Member

pfmoore commented Sep 16, 2016

Having thought about it, I like the idea as a standalone utility (which I might use occasionally, but see below) but I'm not sure it fits that well as a pip subcommand. It sort of feels more like it's related to virtualenv than to pip.

Personally, I have a ve powershell function that wraps virtualenv. When I want to experiment, I typically do ve -temp and then pip install what I need (basically my own equivalent of virtualenvwrapper). I keep thinking about adding an argument to ve to auto-install stuff, just to remove one step. But either way it's easy enough to do what I want. But point taken that this is from the POV of someone comfortable with command line tools.

Using rwt to run a script and auto-manage its dependencies is quite a nice idea. I'd probably only use it for "one off" stuff, though - for scripts I wanted to distribute I'd likely use zipapp and bundle dependencies in.

@dstufft
Member

dstufft commented Sep 16, 2016

To be clear, other than being -1 on using/recommending this as a setup_requires replacement (I don't count using the isolation mechanism while pulling data from pyproject.toml as "using" it in this context, since it'd just be an implementation detail at that point), I don't feel super strongly about the feature itself in either a positive or negative way. If folks think they'd find it generally useful I wouldn't personally be opposed to it; I am not great at "stepping outside" my own knowledge and I don't do much with beginners. Perhaps it'd make sense to poke some of the people who do a lot of work with beginners and see what their reactions to this idea are?

I don't want to invent yet another DSL for package management, especially one that mixes Mime headers and shell comments and requirement syntax. I want the declarations to be data and in a widely-accepted format (i.e. Python or JSON).

So my biggest problem with __requires__ = [...] is that being a restricted subset of Python inside a regular Python file makes it an attractive nuisance. People are going to expect to be able to dynamically compute this unless they happen to have read and understood the documentation about it. You can't document your way out of a usability problem, and I feel like this restricted-Python-within-Python is just that. That pkg_resources uses it for its runtime stuff sort of makes this worse, because it also invalidates the expectation of anyone who has used that feature that you can do dynamic things in it. I don't really care what the exact syntax is, but putting it in a comment completely removes people's ability to metaprogram their dependency declarations and removes that particular footgun (unless they do something even crazier like adding an encoding that does magical interpolation or something... but at that point you're well off the beaten path enough to know you're doing something unsupported).

That said, I'm okay with it not being something you would want to use. If you prefer for the state to accumulate in virtualenvs on your machine and virtualenvwrapper helps facilitate that, I'm all in favor of continuing to use those. I don't see rwt as ever obviating those use cases.

Sure! I don't think that every thing that exists in pip needs to be something I'd personally use either :). My "I don't think I'd use this" is mostly about me not having a good frame of reference to judge this particular feature by and whether it's useful to include or not. My only real concern about adding a pip run command is whether or not it passes the "will this be useful to enough people to justify the mental and maintenance overhead of another command". The downloads for rwt are fairly small:

Installer        Downloads
bandersnatch     3126
requests         65
Browser          48
Unknown          29
pip              17
z3c.pypimirror   7

Though as you mentioned, this does have the same bootstrapping problem that pip/setuptools/virtualenv has, and there is certainly a discoverability problem too. Perhaps rebranding as pip-run (or keeping it as rwt, even) and linking to it from the pip/virtualenv/setuptools/packaging.python.org docs, and seeing if it gains traction before adding it to pip itself, might be a reasonable way to go about it? Or maybe just soliciting more feedback from groups who might find it useful? I dunno!

@dholth
Member

dholth commented Sep 16, 2016

I feel strongly about features like this. I think integrating this and/or something more like pipsi into pip would be transformative - that we would see more Python application development in general.

Consider the humble npm install -g useful_program. The way npm installs things means useful_program is never broken just because you installed a second program, and as a result npm is a popular way to distribute command line tools that are not primarily libraries. Of course, with Python the isolation mechanism would be different and we would not recursively isolate dependencies of dependencies.

Why not distribute a .pyz in the downloads section of your WordPress blog instead? Sometimes that works, but PyPI is awesome.

We do love our interactive shells in Python. In a system like rwt you would start with the script and say "give me a shell with that script's dependencies". Backwards compared to virtualenv where you start with the dependencies and eventually get the script. Which direction is best depends on what you are doing.

Also, if we build rwt into pip, maybe --target will start working on Debian again.

I'm not worried about __requires__ = [a for a in range(10)] shenanigans; rwt should be able to provide an immediate and clear error message if someone tries it.

I assume @jaraco is familiar with my own attempt at short bootstrap code https://bitbucket.org/dholth/enscons/src/tip/setup.py?fileviewer=file-view-default

@dholth
Member

dholth commented Sep 16, 2016

By the way, does rwt handle cases like: setuptools 27 is installed, but requires = ['setuptools==26']?

@jaraco
Member Author

jaraco commented Sep 16, 2016

Yup

$ python -m easy_install --version
setuptools 27.2.0 from /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (Python 3.6)
$ python -m rwt setuptools==26.0.0 -- -m easy_install --version
Loading requirements using setuptools==26.0.0
setuptools 26.0.0 from /private/var/folders/c6/v7hnmq453xb6p2dbz1gqc6rr0000gn/T/rwt-bn3_nm11 (Python 3.6)

@dstufft
Member

dstufft commented Sep 16, 2016

I don't think this feature is related to pipsi (though I also think that something like pipsi inside of pip itself would be great). This seems to be more tailored towards one-off implicit, temporary environments whereas pipsi is tailored more towards per-command, permanent environments.

I do think there is something here we need to figure out, though, for any potential pipsi, rwt, or PEP 518 implementation: how is pip going to handle isolation? Currently pip and virtualenv do not have a dependency on each other (and that was done on purpose, to allow people to use one without the other). However, all of these things need some mechanism for an isolated install, so it starts to become attractive to bolt the two things together more -- that, or we need something like what rwt is currently doing, which is sort of ephemeral environments based on PYTHONPATH. I think this sort of thing is also the reason why @pfmoore feels that it's more related to virtualenv than pip. If our minimum were 3.3+ we could pretty reasonably just depend on the venv module and be done with it, but 2.7 is still a thorn there (we're dropping 2.6 in the future, so that's OK).

I don't think that pip run is going to affect what Debian does with --target.

Anyways, my opinion on pip run is what's above :)

@pfmoore
Member

pfmoore commented Sep 17, 2016

Agreed - for me the interesting thing about rwt is the isolation aspect, and that feels more like virtualenv/venv than pip.

I would be very interested in a discussion on how rwt does isolation (I haven't even looked into it myself, yet[1]) and whether that's appropriate as a standard method for pip. IMO, it may even be worth considering an informational PEP "standard approach to providing script-specific dependencies". That would be useful for people writing zipapps, as well.

[1] It may be that it's nothing more clever than "create a temp dir, do pip install -t, set sys.path appropriately, clean up afterwards".

@jaraco
Member Author

jaraco commented Oct 13, 2016

I would be very interested in a discussion on how rwt does isolation

I've added a section in the readme on how it works. The isolation is extremely simple - relying on pip install -t to install a given version and relying on PYTHONPATH semantics to put those packages at the front of the sys.path, giving them precedence. So, yes, nothing particularly clever.
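A bare-bones sketch of that approach (temporary directory, pip install --target, PYTHONPATH); pip-run's actual implementation handles more details than this:

import os
import subprocess
import sys
import tempfile

def run_with_requirements(requirements, command):
    with tempfile.TemporaryDirectory(prefix="pip-run-") as target:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--target", target, *requirements]
        )
        env = dict(os.environ)
        # Prepend the temp dir so its packages take precedence over site-packages.
        env["PYTHONPATH"] = os.pathsep.join([target, env.get("PYTHONPATH", "")])
        return subprocess.call([sys.executable, *command], env=env)

# run_with_requirements(["requests"], ["-c", "import requests; print(requests.__version__)"])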

However, there are some additional details. For example, in jaraco/pip-run#1, I captured an issue where namespace packages wouldn't work properly on Python 3.2 and earlier because the -nspkg.pth files wouldn't be executed, so a hack was put in place to cause those .pth files to be executed even in the target directory.

There is other code in the codebase for relying on pkg_resources to programmatically manipulate sys.path for run-time dependency injection, but I'm not using that anywhere at the moment, and relying instead on subprocess invocation.

@jaraco jaraco closed this as completed Apr 11, 2017
@lock lock bot added the auto-locked Outdated issues that have been locked by automation label Jun 3, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Jun 3, 2019
@jaraco jaraco reopened this Feb 8, 2020
@jaraco
Member Author

jaraco commented Feb 8, 2020

I'd like to revive this discussion. Since the original proposal, the implementation has been refined with a much clearer set of advertised use-cases, mainly enabling those one-off troubleshooting or installation demos.

Here are some reasons why pip-run is different from virtualenv and why integration is valuable:

  • Unlike virtualenv, pip-run is associated with the version of Python running it, similar to how venv works. It must be installed separately in each Python environment in which it is used. As a result, it's inconvenient to require a separate step for each environment to make it available.
  • It's tightly integrated with pip. It requires pip to run and uses pip's own install command syntax (the args to 'run' are the same as the args to 'install').
  • The user doesn't often know in advance that they'll need it. By providing it by default, alongside pip, one can expect a novice user to have it available and rely on it being present to help them troubleshoot. If they have 'pip install', they have 'pip run'.

I'd like to re-introduce the integration via vendoring initially, so that pip-run and pip run could iterate independently, but I expect that eventually (within months) pip-run would be sunset in favor of pip run.

Would the maintainers of pip entertain a revival of #3979? What would it take to be accepted?

@jaraco jaraco changed the title Feature: incorporate rwt as pip run Feature: incorporate pip-run as pip run Feb 8, 2020
@pfmoore
Member

pfmoore commented Feb 8, 2020

I've had some situations recently where pip-run would have been useful and I was surprised how much of a hurdle it was to need to get pip-run separately. As a result I've gone from a -0 on this feature to a cautious +1. I still feel that it's taking pip down the "one tool to do everything" route, but from a "practicality beats purity" angle, I'm inclined to think that it's worth doing even so.

Would the maintainers of pip entertain a revival of #3979?

Personally, yes.

What would it take to be accepted?

I'd say, if you submit an updated PR, and none of the other maintainers raises objections, then (subject to review of the PR, obviously) I'd be happy to accept it.

@jaraco jaraco added state: awaiting PR Feature discussed, PR is needed and removed auto-locked Outdated issues that have been locked by automation labels Mar 15, 2020
@jaraco jaraco mentioned this issue Mar 29, 2020
@jaraco jaraco removed the state: awaiting PR Feature discussed, PR is needed label Apr 5, 2020
@jaraco
Member Author

jaraco commented Dec 9, 2022

Unfortunately, it's mostly the UI.

I think there's room for improvement here. I've opened up a couple of issues in the project.

The other big reason is the fact that it creates a new environment every run. So pip-run -q numpy pandas -- -c "print(12)" takes around 20 seconds on my PC, with everything cached as wheels in pip, whereas py -c "print(12)" is too fast to measure. Some sort of environment caching would be a massive usability win for me.

That's interesting. I get a lot better performance.

 ~ $ time pip-run -q numpy -- -c pass
        1.84 real         1.23 user         0.41 sys

It probably depends a lot on network speed, disk speed, and CPU, because it's live-resolving dependencies on PyPI and live expanding them. Still, 1.8s is a long time to wait for a script to start up, especially if you start it more than once. Concerns about caching and performance are being discussed in jaraco/pip-run#52. Of all of these concerns, I think this one is the most difficult to reconcile with other goals (namely accuracy vs stale caches).
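To illustrate the persistence idea, one naive way to reuse an environment is to key it on the requirement set; this is only a sketch, not how PIP_RUN_MODE=persist is necessarily implemented, and it illustrates exactly the stale-cache trade-off noted above:

import hashlib
import os
import subprocess
import sys

def persistent_target(requirements, cache_root="~/.cache/pip-run-demo"):
    # Same sorted requirement set -> same directory; installed only on first use.
    key = hashlib.sha256("\n".join(sorted(requirements)).encode()).hexdigest()[:16]
    target = os.path.expanduser(os.path.join(cache_root, key))
    if not os.path.isdir(target):
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--target", target, *requirements]
        )
    return target  # prepend to PYTHONPATH as in the earlier sketch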

(I should probably have engaged more with the pip-run project. Unfortunately, it's right on the cusp of being almost what I want, but just sticking with the status quo of temporary virtualenv isn't quite bad enough, so I always ended up just leaving it).

To be sure, virtualenvs have at the same time gotten better, especially with the py launcher, which will auto-activate ./venv.

For me, however, virtualenv breaks the ephemeral barrier - it adds implicit debt, requiring the user to find a home for it, then provision it, then clean it up. Pip-run is the only command you can give to a user to do that and not leave them with unmanaged state.

* `pip-run` - Well, that _is_ this solution. As I say, I personally dislike some of the UI choices (one of which is the name...), and maybe "rebranding" as `pip run` would give us a chance to change that. Also, being part of pip would give it a level of prominence which might help it get viewed as the "official" solution for the "run a single-file script with non-stdlib dependencies" use case. But yes, I'll happily admit that it's more of a change in "branding" than an actual change in functionality.

I branded as pip-run with the intention of contributing it here. The main reason for integrating it with pip is because it's needed everywhere pip is needed (in any given environment where one may wish to temporarily extend the environment with more packages).

I don't want this solution to be "official" in the sense that it's exclusionary of other solutions. I want it to be broadly useful for my use cases and use-cases I don't care about. I don't see it as a competitor to venv or even pipx (which as you've observed fits a very different niche). I do want it to be prominent enough that people can use it when it suits them (or when directed) without having to mutate their system. If it would help, I'd be okay with pip run-provisional or pip run-unofficial (as ugly as that would be) in exchange for it being available in most environments.

I'm not sure I fully understand this concern, though. What's the issue with the "pip run" branding?

Basically, I think that once you hit the "big enough to package" level, we have great solutions - as long as you are comfortable with Python's packaging tools. But for people who just "write scripts" (which would include all of my work colleagues, for a start), people who look at the steep learning curve for Python packaging and run away screaming, and for experienced developers who simply want to write a one-off script, there's really only pip-run in my experience. And the fact that it's not better known suggests to me that it needs some sort of boost or improvement. Whether that's being built into pip, or a major UI/publicity overhaul, I don't know. But including it (or functionality equivalent to it) in pip would certainly help with visibility.

I continue to market it and tell people about it and demo it. It gets a good enough reception, but one of the drawbacks is that those who would find it useful have already built up habits that are good enough. They probably are uneasy about installing something into their user or system Python environments, and if they're not, then they're probably just as happy to install their requirements into that environment. In my mind, it's dramatically more useful when it already exists in the environment.

I believe the issues in pip-run now capture the biggest concerns. I'm hoping we can address those and then explore exposing the functionality as pip run, even if just provisionally.

@pfmoore
Member

pfmoore commented Dec 9, 2022

I'm not sure I fully understand this concern, though. What's the issue with the "pip run" branding?

The point was that pip-run is your project, and as such reflects the use cases that you consider important. In the context of my comment, pip-run isn't exactly "another solution in this space", although if it was made into a pip subcommand, design decisions would become the responsibility of the pip maintainers, and might therefore change. So it's possible that pip-run is a slightly different solution than pip run would end up being, simply because the respective project owners of pip-run and pip run might have different priorities and goals. "Branding" probably isn't the right term - but "ownership" suggests more conflict than I wanted to imply.

It was a minor point, though.

@xavfernandez
Member

xavfernandez commented Dec 19, 2022

One specific point, @xavfernandez was rather strongly against the __requires__ feature.

There are two points I (still partially) dislike with this option:

  • the first is the magical Python value that isn't really magical (i.e. it is a Python variable, but since it is parsed statically it cannot be dynamic [and we wouldn't want it to be dynamic]). With the different alternative solutions from jaraco/pip-run#44 (Shared redesign of embedded requirements feature), I'd say this point isn't really relevant anymore and has been taken care of.
  • the second is that it still feels like it is constructing a new ad hoc standard for pip/pip-run, while we've been trying to avoid those these last years.
    I'd feel more comfortable if the selected option was built/selected with the goal of being standardised and usable by other installers.
    The good thing with the 4th option (restricting comments to only (PEP 508?) requirements) is that it avoids pip-specific options and keeps the door open for such standardization; a rough sketch of that option follows below.
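A rough sketch of that fourth option: each embedded line must parse as a plain requirement (here via the third-party packaging library), so installer-specific options are rejected by construction. The function name and error handling are illustrative:

from packaging.requirements import InvalidRequirement, Requirement

def validate_embedded_requirements(lines):
    parsed = []
    for line in lines:
        try:
            parsed.append(Requirement(line))
        except InvalidRequirement as exc:
            raise SystemExit(f"Not a valid PEP 508 requirement: {line!r} ({exc})")
    return parsed

# validate_embedded_requirements(["requests>=2.28", "rich"]) succeeds, while an
# installer-specific line such as "--index-url https://example.invalid" is rejected.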

@pfmoore
Member

pfmoore commented Dec 19, 2022

I'd feel more comfortable if the selected option was built/selected with the goal of being standardised and usable by other installers.

+1 on this. I implemented the 4th option as you describe for pipx, so if pip-run adopted it, we'd be in a good place to propose it as a standard. Of course, we could just as well pick a different syntax, and implement it in pipx and pip-run - the main point is that 2 implementations is a good indication that proposing it as a standard won't be too controversial 🙂

@jaraco
Member Author

jaraco commented Dec 26, 2022

The latest release of pip-run (9.3) implements behaviors that I believe will satisfy the UX concerns that pfmoore raised. In particular:

  • The double-dash is no longer required. pip-run will infer the beginning of the python arguments if a python script is found.
  • Support for a limited requirements section in comments.
  • The install environment for any set of inputs can be persisted across invocations by setting PIP_RUN_MODE=persist.
  • Runs quiet by default.

I assume these changes address the biggest concerns with the UI. Is there anything else that needs consideration?

@pfmoore
Member

pfmoore commented Dec 26, 2022

The double-dash is no longer required. pip-run will infer the beginning of the python arguments if a python script is found.

This sounds fragile, to be honest. Is the exact mechanism by which the split point is inferred documented? I know I was the one who disliked the double dash, but I'm just as uncomfortable with "do what I mean" semantics. I think this is a genuinely hard UI design problem, and my instinct is that what works best depends on your use case. What works best for "run a temporary Python interpreter session with requests and packaging installed" is very different from "run this script which has its dependencies embedded in it", IMO.

The install environment for any set of inputs can be persisted across invocations by setting PIP_RUN_MODE=persist

I would expect this to be something that could be set by a command line flag as well as an environment variable. If we did move this to be a pip subcommand, I'd also expect it to be configurable in the pip config file [1]. As this is essentially a cache, it would also be good to have some cache management (so the user can explicitly remove the cache if something changes, for example). Again, if this gets included in pip, I'd expect this to be managed via something like pip cache (which is currently for the wheel cache only, but would be the "obvious" place for a pip run environment cache).

Footnotes

  1. As a standalone pip-run command, I'd still prefer the option of a config file, but I don't feel as strongly about that case, and ultimately that one's not my call anyway.

@pfmoore
Member

pfmoore commented Dec 26, 2022

Runs quiet by default.

How do you disable quiet mode? pip-run --help doesn't document the command line options specific to pip-run, so it's not at all obvious what you should do here. Adding -v doesn't work, for some reason (even though if you include -q -v on a pip command line, they cancel each other out...)

@pfmoore
Member

pfmoore commented Dec 27, 2022

On reflection, I'm coming to the conclusion that the biggest problem for tools like pipx, build, and a standalone pip-run is the difficulty of exposing the plethora of pip options to the user via their UI, in a way that allows them to set up a suitable pip command to run in a subprocess.

It would be very easy to claim that pip's UI is too complex, and that's the root problem here, but that's just sidestepping the issue. The key point is that we (pip) maintain that users should call pip from their own program via a subprocess call, and yet we don't actually provide a CLI that's particularly easy to use that way.

I don't have any good answers here, to be honest. Maybe what's needed is a wrapper library that handles calling pip in a subprocess and exposes a strictly limited set of options - basically just the ones that might be needed when doing a "simple" install into the current environment. Something like the following:

def pip_install(
    requirements: list[Requirement],
    req_file: os.PathLike,
    *,
    indexes: list[URL] = [PyPI_URL],
    find_links: list[URL|os.PathLike] = [],
    verbosity: int = 0, # zero is "-q", 1 is "normal", 2+ is various levels of "-v"
    allow_sdists: bool = False, # By default, only install wheels (via --only-binary :all:)
) -> subprocess.CompletedProcess

That's a simple enough API that tools can expose the options via their own CLI, and yet it covers the core functionality. I'm returning a CompletedProcess object, so clients can't display output "as it's written", but as a first draft I'm going to say YAGNI to anything more complex. People wanting fine control should run pip directly, and deal with the full CLI. There are also questions around whether such an API should respect the user's pip config file and/or environment variables, which would need to be sorted out.
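For concreteness, here is a rough sketch of how a wrapper along those lines might shell out to pip; the parameter handling is simplified relative to the signature above, and this is not an existing pip API:

import subprocess
import sys

def pip_install(requirements, *, indexes=(), find_links=(), verbosity=1,
                allow_sdists=False):
    cmd = [sys.executable, "-m", "pip", "install"]
    if verbosity == 0:
        cmd.append("-q")
    elif verbosity > 1:
        cmd.extend(["-v"] * (verbosity - 1))
    for i, url in enumerate(indexes):
        # First index replaces PyPI; any others are extra indexes.
        cmd.extend(["--index-url" if i == 0 else "--extra-index-url", url])
    for link in find_links:
        cmd.extend(["--find-links", link])
    if not allow_sdists:
        cmd.extend(["--only-binary", ":all:"])  # wheels only by default
    cmd.extend(requirements)
    return subprocess.run(cmd, capture_output=True, text=True)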

We could even offer such an API in pip itself - the idea would not be so much to start offering "pip as a library" features, but simply to make a statement on what command line options we expect to be "normal" for clients just wanting to offer "install something" capabilities.

What do people think of this? Is it something that tools might use (cc @pypa/pip-committers @jaraco @pypa/pipx-committers @pypa/build-maintainers)? It's pointless putting effort into something like this if no-one would use it. But maybe it would give people an option that's easier to maintain than a "let the user include any pip options they want" capability.

@cs01
Member

cs01 commented Dec 27, 2022

From pipx's perspective, I would always prefer an official API like this, provided it can do everything we need it to do. In this case, it can't (yet), since people want pipx to respect pip config files and environment variables. However, pipx already exists and is stable, so this certainly isn't urgent for pipx even if the API did have all the features pipx needs -- i.e. it's questionable whether it would actually get used by pipx. But I think it certainly would find plenty of uses in the Python community. In fact I am sure I've seen someone already build a "pip as a library" wrapper around pip subprocess calls. (Edit: this may have been what I was thinking of: https://github.com/frostming/unearth by @frostming. It's for finding and downloading packages, not installing them. If you're looking for use cases, this is one.)

From pip's perspective, if this delivers "pip run"-like functionality (or "pipx run"-like?), I think that is a step in the right direction too. Ultimately I'd like to see pipx bundled into pip (as npm ships with npx) or its functionality absorbed into pip (making pipx redundant).

@pfmoore
Member

pfmoore commented Dec 27, 2022

it can't (yet), since people want pipx to respect pip config files and environment variables

Do you have details of precisely which pip config files/environment variables people want to be respected? The big problem here is that there's simply no point in wrapping all of pip's options, as that just leaves us with another equally complex interface. Whereas if we can identify a (much) smaller set of options that callers like pipx actually need, there's a potential benefit. As an example, I can't imagine pipx working properly if a user specified --target in their pip config. So I'd say that pipx should explicitly not support that option, either directly or in config. But the pip project can't make that decision on behalf of pipx, so if you say you support your users supplying that option, I have to accept that choice.

What I'm trying to do here is work out if there's a suitably limited subset of pip's options that would be sufficient for users (of pip) like pipx. What I'm hearing is that no, you want to expose all pip options. Which makes the idea of a "pip wrapper" unlikely to be worth the effort.

Ultimately I'd like to see pipx bundled into pip (as npm ships with npx) or its functionality absorbed into pip (making pipx redundant).

My problem is that at the moment, the big frustration for me is that tools like pipx and pip-run are coming up with messy UIs to allow the user to control pip, basically exposing pip's command line options and config mechanisms as a subset of their own CLI. That doesn't seem like a reasonable approach to me, but the only alternative people seem to be suggesting is "let's merge this functionality into pip". And frankly, that seems to me to be a very bad reason for adding functionality to pip (and one that as a maintainer, I don't really want to support).

I don't know much (nothing, really) about npm/npx, so I don't know how they solved this issue. I do note that npm install appears to have very few options - nothing to specify alternative package sources, to set network proxies, to set timeouts/retries, to set cache options or trusted hosts, etc. So my feeling is that they don't have the problem that we're hitting here, and hence they won't have any experience we can draw on. It's arguable (and I've argued it more than once!) that this is a self-inflicted problem, and if pip simply didn't have all of these options, using it as a component in other applications wouldn't be such a pain. But unfortunately, that's not something we can do much about - whatever the reasons, we do have all these options, and removing them would break our users' workflows.

@jaraco
Member Author

jaraco commented Dec 27, 2022

How do you disable quiet mode? pip-run --help doesn't document the command line options specific to pip-run, so it's not at all obvious what you should do here. Adding -v doesn't work, for some reason (even though if you include -q -v on a pip command line, they cancel each other out...)

The changelog provides some guidance, and your instinct is right that -v will override, and indeed that's the behavior I observe:

 ~ $ pip-run -v tempora
Collecting tempora
  Using cached tempora-5.1.0-py3-none-any.whl (15 kB)
Collecting pytz
  Using cached pytz-2022.7-py2.py3-none-any.whl (499 kB)
Collecting jaraco.functools>=1.20
  Using cached jaraco.functools-3.5.2-py3-none-any.whl (7.3 kB)
Collecting more-itertools
  Using cached more_itertools-9.0.0-py3-none-any.whl (52 kB)
Installing collected packages: pytz, more-itertools, jaraco.functools, tempora
Successfully installed jaraco.functools-3.5.2 more-itertools-9.0.0 pytz-2022.7 tempora-5.1.0
Python 3.11.1 (main, Dec 23 2022, 09:28:24) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> ^D

Currently, the only pip-run-specific command line parameter is --help. All other parameters are passed through to pip install or python (or more precisely sys.executable).

Is it possible that you're not seeing any output either because you've enabled persist mode or you already have the specified dependencies resolved in the current environment?

@pradyunsg
Member

Ultimately I'd like to see pipx bundled into pip (as npm ships with npx) or its functionality absorbed into pip (making pipx redundant).

I'd like to see that too FWIW!

I don't know much (nothing, really) about npm/npx, so I don't know how they solved this issue.

When you install NodeJS, you get npm and npx. npm is an entire workflow tool for a project -- allowing you to run project-specific development tools with scripts (https://docs.npmjs.com/cli/v8/using-npm/scripts), publish the project, install + manage the dependencies + runtime environment etc. npx is a tool to download arbitrary-ish packages from the package index (also called "npm", heh) and run them. :)

https://docs.npmjs.com/cli/v9/commands/npx has a good overview of the exact behaviours.

@pfmoore
Member

pfmoore commented Dec 27, 2022

I assume these changes address the biggest concerns with the UI. Is there anything else that needs consideration?

Getting back to this question, we need to look at, not the UI of pip-run, but the UI of pip run, if that command were to be added. To clarify what I mean here, pip has a group of "general options" which apply to all commands (network options like --proxy appear here) and "command specific options" like --no-deps and --find-links (which are per-command, with some, like --find-links, shared by all "requirement commands").

We'll need to define the precise option structure for pip run, which will presumably be similar, but not identical, to that of pip install (for example, --target makes no sense for pip run). At the moment, the external pip-run command ignores this issue, because it simply punts on all such decisions, by saying "we pass all options before -- to pip".

I'd imagine that pip run would want to be treated as a "requirements command", so it gets options like --find-links. But I'm not clear on what install-specific options it would need to copy, nor am I clear what options it would need that are specific to pip run (the "persist the environment" option seems likely to fall into this category).

Personally, I think we're close enough that we need to explore in detail the ways in which this isn't going to just be "copy the pip-run code". Back in this comment @jaraco suggested vendoring pip-run, but I do not think that's the right way of doing this. Having pip call pip-run, which in turn calls pip, feels both clumsy and costly, as well as being messy for support purposes.

@pradyunsg
Member

pradyunsg commented Dec 27, 2022

I do not think that's the right way of doing this. Having pip call pip-run, which in turn calls pip, feels both clumsy and costly, as well as being messy for support purposes.

I personally feel more strongly -- this would be among the worst ways of doing this.

This has been suggested for pip-audit as well, and IMO the answer is the same as it was for that: the functionality we want to add to pip would have to be absorbed into pip (and adapted within the context of pip itself as necessary, ideally without major breakage), and all future development of that functionality would need to move into pip, with the external implementation effectively ceasing development.

I especially do not want there to be a different public PyPI package which is where "new" functionality and other things are developed, before bringing them into pip run/pip audit. I also don't want the same group of people maintaining "backports" either -- I think having two pieces to get the same functionality is a bad idea. :)

The rationale is primarily based on reducing the effort around co-ordinating development, reducing duplicated effort, not delegating entire subsets of pip's functionality to a project maintained outside of it (I have UX concerns around that), and the vast potential for user confusion, especially if the implementations diverge for any reason.

@pfmoore
Member

pfmoore commented Dec 27, 2022

Is it possible that you're not seeing any output either because you've enabled persist mode or you already have the specified dependencies resolved in the current environment?

Not as far as I know. Looking a bit further, I was running pip-run packaging and it seems it's picking up the copy of packaging that's installed in the pipx managed environment where I have pip-run installed. Which is definitely not what I expected to happen.

It looks like pip-run runs the user's code in the Python environment that pip-run is installed in, not the current Python environment of the user. I guess I can understand why, but it doesn't make much sense for the user. For example:

❯ pip install shadwell
Collecting shadwell
  Using cached shadwell-0.1-py3-none-any.whl (7.2 kB)
Installing collected packages: shadwell
Successfully installed shadwell-0.1
❯ pip-run -v shadwell
Collecting shadwell
  Using cached shadwell-0.1-py3-none-any.whl (7.2 kB)
Installing collected packages: shadwell
Successfully installed shadwell-0.1
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z

Note that although shadwell is installed in the user's environment, pip-run installs it again.

I don't know what should happen as far as pip-run is concerned (although I will note that "pip-run shouldn't be installed globally via pipx" is not an answer that I'd be happy with) but I do know that this is the sort of messy issue to debug that I wouldn't want to see happen with a "pip run" command (and which I suspect would if we went down the vendoring route).

@jaraco
Member Author

jaraco commented Dec 27, 2022

My problem is that at the moment, the big frustration for me is that tools like pipx and pip-run are coming up with messy UIs to allow the user to control pip, basically exposing pip's command line options and config mechanisms as a subset of their own CLI. That doesn't seem like a reasonable approach to me, but the only alternative people seem to be suggesting is "let's merge this functionality into pip". And frankly, that seems to me to be a very bad reason for adding functionality to pip (and one that as a maintainer, I don't really want to support).

pip-run doesn't expose a subset of pip install options. It passes through all of them with minimal processing. This simple approach is rather elegant in my opinion, and the only messy part of the UI is that a user must separate the pip args from the python args with a double-dash. I found this syntax slightly annoying at first, but I was already somewhat acclimated to it through the use of tox, and I've since found it not at all messy. Once I'd spent just a few weeks using it, I learned to love it and found uses for it that I had not expected. That's the beauty of its design - it's agnostic to the implementation details of either pip or python, and makes available the full suite of current and future features (to the extent they make sense). It decouples itself from the nuances of the underlying tool and directs the user to use pip-run just like they would use pip install.

I'm not proposing to merge pip-run into pip in order to solve the UI issues. I expect the UI to be the same except s/pip-run/pip run/. My intention was for pip-run to provide a preview of the functionality that would become pip run.

Having pip call pip-run, which in turn calls pip, feels both clumsy and costly, as well as being messy for support purposes.

In the previously-proposed pull request, pip doesn't call pip-run. The integration is that pip-run is vendored and implements pip run. So it's much less clumsy than described.

The run implementation does then subprocess to pip, as that's the only interface provided by pip, but I'd be open to invoking the pip install behavior in-process. I figured it would be better to first have a proof of concept that works and demonstrates the integration.

all future development of that functionality would need to move into pip; with the external implementation effectively ceasing development.

I'd be open to this. My intention was that there would likely be a small period of overlap where pip-run continues to exist to support older versions of pip (especially in the interim where pip has the code but hasn't released the functionality).

I'd expected that I would have an outsized stake in maintaining the 'run' command within pip, but that its design would be influenced more by users than by myself.

@jaraco
Member Author

jaraco commented Dec 27, 2022

It looks like pip-run runs the user's code in the Python environment that pip-run is installed in, not the current Python environment of the user. I guess I can understand why, but it doesn't make much sense for the user.

This concern makes a lot more sense when pip-run is part of pip, as the user would expect it to run in whatever environment pip was invoked.

Correct, you probably shouldn't use pipx to install pip-run... unless you're just using it as a script runner and you want it to have that mostly clean environment. One of the advantages is being able to readily invoke python with dependencies across versions:

$ py -3.7 -m pip-run shadwell -- -c pass
$ py -3.8 -m pip-run shadwell -- -c pass

(but for that to work, pip-run needs to be present alongside pip in each Python)

@pfmoore
Member

pfmoore commented Dec 27, 2022

https://docs.npmjs.com/cli/v9/commands/npx has a good overview of the exact behaviours.

And it seems to support my suspicion that npm/npx simply don't have all of the config options on the command line that pip has. Maybe that's something we could consider - drop a bunch of command line options in favour of only allowing them to be set via pip config or environment variables? Then having pip subprocesses "work the same as" the command line pip comes for free, because the config applies everywhere.

I can hear them all collecting the tar and feathers for me now... 🙂

@pfmoore
Member

pfmoore commented Dec 27, 2022

One of the advantages is being able to readily invoke python with dependencies across versions:

... which would be almost completely negated for me by the need to have pip-run installed in every environment I happened to be using.

@jaraco
Member Author

jaraco commented Dec 27, 2022

... which would be almost completely negated for me by the need to have pip-run installed in every environment I happened to be using.

Which is the whole point of this issue. Since users have pip installed in every environment, they would have pip run installed in every environment.

@cs01
Member

cs01 commented Dec 27, 2022

https://docs.npmjs.com/cli/v9/commands/npx has a good overview of the exact behaviours.

And it seems to support my suspicion that npm/npx simply don't have all of the config options on the command line that pip has. Maybe that's something we could consider - drop a bunch of command line options in favour of only allowing them to be set via pip config or environment variables? Then having pip subprocesses "work the same as" the command line pip comes for free, because the config applies everywhere.

I can hear them all collecting the tar and feathers for me now... 🙂

I like this. It makes the mental model of the code simpler. It makes maintenance simpler. It seems like a potentially decent trade-off between being able to ship a feature that provides user value and being able to maintain sanity as a maintainer. (I say this as someone with basically no skin in the game, though, as I'm not a pip maintainer, and since having a child I have been unable to do any open source work.)

@pradyunsg
Member

pradyunsg commented Dec 27, 2022

I'd expected that I would have an outsized stake in maintaining the 'run' command within pip, but that its design would be influenced more by users than by myself.

/me nods. TBH, it'd be weird if this weren't the case. ;)


From #3971 (comment):

I'd rather not re-consider all UI aspects that pip-run has established over the years.

I empathize and largely agree -- with the caveat that I've not used pip-run personally and don't know whether its current CLI is the best direction to go here. I really do like the idea of s/pip-run/pip run/ "just working" in the future, but I don't want us to lock ourselves into a situation where pip-run's CLI isn't something that pip can adopt in a straightforward manner -- it's easier to change the CLI design while it's pip-run, living outside pip and needing to be manually installed, than it would be for pip run to change its CLI.

All this to say: I feel similarly, but I am also wary of pigeon-holing ourselves into a tricky/difficult-to-maintain thing for all of us. :)

@pradyunsg
Member

pradyunsg commented Dec 27, 2022

FWIW, why was rwt renamed to pip-run?

I'm imagining that some of the UX concerns raised here would've been easier to deal with if we didn't have to actively aim for an in-place substitution; and I just realised that this rename was the point where that constraint was effectively added to this effort.

@pfmoore
Member

pfmoore commented Dec 27, 2022

This simple approach is rather elegant in my opinion

I don't consider this consequence to be "elegant", I'm afraid:

❯ pip-run --target xyxy shadwell
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import shadwell
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'shadwell'

(It's even worse if the user has PIP_TARGET set as an environment variable for some reason...)

I'm not proposing to merge pip-run into pip in order to solve the UI issues. I expect the UI to be the same except s/pip-run/pip run/. My intention was for pip-run to provide a preview of the functionality that would become pip run.

Then I guess I'm -1 on this proposal. I think pip run needs to have a command line option structure that matches all of the other pip commands. As a proof of concept of the idea, forwarding options to pip is fine, but it doesn't make sense when the command is (part of) pip.

The integration is that pip-run is vendored and implements pip run.

As @pradyunsg has noted, that's a non-starter. Apart from anything else, I just saw the list of dependencies that pip-run installs. I have no idea why, but pydantic is in there, and that includes compiled extensions, so that's not acceptable either. I'm all in favour of tools freely using 3rd party functionality, but pip has much stricter requirements, and any implementation of pip run would have to conform to those (minimal dependencies, no compiled extensions).

@jaraco
Member Author

jaraco commented Dec 27, 2022

I don't consider this consequence to be "elegant", I'm afraid:

❯ pip-run --target xyxy shadwell
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import shadwell
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'shadwell'

Just because you happened to find an edge case that's inelegant isn't indicative that the interface is inherently broken. Indeed, pip has the same bug:

 draft $ pip install -t out -t out2 shadwell -q
 draft $ env PYTHONPATH=out py -c "import shadwell"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'shadwell'

There's a case to be made that perhaps pip-run should explicitly disallow --target because it supplies that parameter, but the same argument should be made that pip itself should disallow multiple --target options since it can only honor one. That's actually the behavior I would have expected - an error due to conflicting options (options that I'd not expect a user of pip-run to encounter unless they're looking for trouble). Should we consider removing the install command from pip because users can contrive a use case that produces unexpected output?

As @pradyunsg has noted, that's a non-starter. Apart from anything else, I just saw the list of dependencies that pip-run installs. I have no idea why, but pydantic is in there, and that includes compiled extensions, so that's not acceptable either. I'm all in favour of tools freely using 3rd party functionality, but pip has much stricter requirements, and any implementation of pip run would have to conform to those (minimal dependencies, no compiled extensions).

The dependencies were added after it became apparent that pip was unlikely to adopt this functionality. For a long time, I maintained pip-run without any dependencies. I'd imagine any renewed effort to integrate the behavior would need to roll back those changes or re-implement it.

As an aside, Pydantic was added because the CPython core devs recommended it as a solution to a problem I presented at the Python Language Summit.


Unlike pip-run, pipx works great as an installed application, and can even install itself, but not without some bootstrapping. Here's how I use pip-run to bootstrap pipx:

pip-run pipx -- -m pipx install pipx

Imagine how nice it would be for pipx to be able to present that as an installation technique (with pip run generally available).


It really feels like there's no path to a solution here. I've made several concerted efforts to align this project with the visions of the PyPA, to address the critiques, and to come up with a solution that fits, but I continue to meet resistance from members who haven't even tried it or who seem to have only tried it in anger. I thought Paul would be excited that I addressed all four top critiques about the UI, even if not in the way he would have done it.

I appreciate that Paul has actually tried using pip-run, despite it having been presented for experimentation for years, but it doesn't feel like anybody has really tried to understand the general value that pip-run promises.

I'm finding this discussion too hostile and emotionally taxing. I repeatedly get my hopes up only to be let down with more roadblocks. I'm going to unsubscribe from this conversation. If the team can decide that they're prepared to accept pip-run as pip run in some form (and in roughly what form), please feel free to loop me back in.

@pradyunsg
Copy link
Member

FWIW, I think --target is a bad splodge in pip's interface for more reasons than one, and it'd be better to just avoid involving that in this discussion.

@pfmoore
Copy link
Member

pfmoore commented Dec 27, 2022

@jaraco, apologies if I came across as hostile. I think we simply have very different views of what a pip run command should be. Yours is informed by your rwt/pip-run utility, whereas mine is much more closely targeted at my need to be able to run one-off scripts (often ad-hoc ones) with dependencies from PyPI, but without the overhead of a virtualenv. I'll freely admit that my use case is much closer to pipx run, especially when my PR pypa/pipx#916 gets landed. But I think that simplicity and a very clear focus are beneficial here, so even though pip-run offers a superset of that capability, I'm cautious of compromising on the (to me) core use case for extra generality.
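
As a concrete illustration of that use case: the kind of one-off script I mean can carry its own dependency list, which is a convention pip-run already supports via a __requires__ declaration at the top of the script (the file name and URL below are made up for the example):

# fetch_status.py - a throwaway script that declares its own dependencies
__requires__ = ['requests']

import requests

print(requests.get('https://example.com').status_code)

Something like pip-run -- fetch_status.py (or, if this proposal landed, pip run -- fetch_status.py) would then install requests into a temporary environment and run the script, with no virtualenv to create or clean up afterwards. (The exact invocation is as per pip-run's documentation; the pip run spelling is hypothetical.)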

In the interests of being explicit, but hopefully not as a way of dumping more requirements on this work [1], my key requirements for a pip run command are:

  1. It must be a "first class" pip subcommand. That means, it follows the normal pip option structure, its code is largely part of the main pip codebase, and it follows all of the usual constraints on pip code (minimal vendored dependencies, no C extensions, etc).
  2. Its main focus should be on the "run a script with its dependencies" use case, and that should be as simple and foolproof as possible. I'm OK with additional features, but not at the expense of usability for running a script.
  3. If we make the not-unreasonable assumption that this becomes the de facto way to run a Python script with dependencies, it should be fast - and if this means caching environments, then we should do so (a rough sketch of one possible caching approach follows this list). I do not want pip run to be the reason we start seeing comments like "I tried to write a simple script in Python and it takes 2 minutes to run" (because it installs numpy, pytorch and pandas every time you run it...).
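
To make point 3 concrete, here is a minimal sketch of one way environment caching could work: key a cache directory on a hash of the requirement set, install into it only on a miss, and reuse it on later runs. This is purely illustrative - the cache location, hashing scheme, and function names are assumptions, not how pip or pip-run actually behave:

import hashlib
import os
import subprocess
import sys
from pathlib import Path

# Assumed cache location; a real implementation would follow platform conventions.
CACHE_ROOT = Path.home() / ".cache" / "pip-run-envs"

def cached_env(requirements):
    """Install the requirement set once, keyed by its hash, and reuse it afterwards."""
    key = hashlib.sha256("\n".join(sorted(requirements)).encode()).hexdigest()
    target = CACHE_ROOT / key
    if not target.exists():
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--target", str(target), *requirements]
        )
    return target

def run(requirements, args):
    """Run the interpreter with the cached environment prepended to PYTHONPATH."""
    env = dict(os.environ)
    extra = str(cached_env(requirements))
    existing = env.get("PYTHONPATH")
    env["PYTHONPATH"] = extra if not existing else extra + os.pathsep + existing
    return subprocess.call([sys.executable, *args], env=env)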

I also think we need to be clear that once merged, "ownership" of the design and ongoing evolution of the pip run command will lie with the pip maintainers. I don't want to be combative, but I fear that if we don't make that point explicit, we may end up with further difficult debates as the functionality evolves.

If the team can decide that they're prepared to accept pip-run as pip run in some form (and in roughly what form), please feel free to loop me back in.

Again, in the interests of being explicit, I don't personally expect pip-run to ever be acceptable as it stands as a pip run command [2]. I'm definitely in favour of pip having some form of pip run command, but the constraint that it needs to be (in essence) pip-run is too much for me. I'm not quite sure what to make of your comment "in some form" - I've got the distinct impression that the command line option structure isn't up for negotiation, for example, so I don't see how we can usefully describe "what form" we'd accept pip-run without some guidelines from you. Unless requirements like the ones I gave above are what you're looking for, in which case they can act as a starting point.

FWIW, I think --target is a bad splodge in pip's interface for more reasons than one, and it'd be better to just avoid involving that in this discussion.

Yeah, agreed. It was the easiest one to demonstrate, is all. My point is that certain pip install options make no sense for pip run, and I would not expect them to be accepted. Other examples would be --prefix, or --user, or maybe --dry-run. I don't think it's sufficient to have the pip install subprocess be responsible for reporting such things as errors.

On a somewhat related note, I will say that I've been getting increasingly frustrated with core Python's poor support for users who want to publish simple scripts with a few dependencies - basically the target audience for a pip run command. Making that a packaging issue, and as a result pushing it out of the core's scope and onto the PyPA to address, is a key part of why packaging tools have such problems with bootstrapping. It also leaves pip bearing a lot of the burden simply because it's the only packaging tool shipped with Python (so that adding functionality to pip is a "back door" way of getting it into the stdlib).

And yes, I do see that this is part of the "general value that pip-run provides", as suggested by @jaraco - but IMO it should be in core, so that we don't have to struggle with the bootstrapping and usability issues we are encountering here. Attempts to actually address this in core, such as PEP 582, struggle to gain traction [3] and end up in limbo because core devs aren't interested, and packaging experts can't do anything directly as it needs core changes.

If I'm absolutely honest, I think that something like pip run itself belongs in the core (or in the stdlib, like runpy). Maybe it needs a standardised interface to "run a package installer", but that could be defined. The real problem is that no-one in the core team is likely to champion such an effort, and without that, it will end up being a drain on the PyPA's already stretched resources.

Footnotes

  1. I believe I've already implied, if not actually stated, all of these, so I hope they aren't new.

  2. And with that in mind, if that is a general feeling, I'd be grateful if you would consider renaming the command back to rwt, to avoid confusion.

  3. I have my own issues with PEP 582, but they aren't relevant here.
