New Source File and Lock Specification Approach #316
base: main

Changes from all commits:
dafb50d
da18f01
0ad6f7b
0e9b2a6
48df28e
3fd7106
`conda_lock/src_parser/__init__.py`:

```diff
@@ -5,18 +5,30 @@
 import typing
 
 from itertools import chain
-from typing import Dict, List, Optional, Tuple, Union
+from typing import (
+    AbstractSet,
+    Dict,
+    Iterable,
+    List,
+    Optional,
+    Sequence,
+    Set,
+    Tuple,
+    Union,
+)
 
 from pydantic import BaseModel, validator
 from typing_extensions import Literal
 
-from conda_lock.common import ordered_union, suffix_union
+from conda_lock.common import suffix_union
 from conda_lock.errors import ChannelAggregationError
 from conda_lock.models import StrictModel
 from conda_lock.models.channel import Channel
 from conda_lock.virtual_package import FakeRepoData
 
 
+DEFAULT_PLATFORMS = {"osx-64", "linux-64", "win-64"}
+
 logger = logging.getLogger(__name__)
@@ -42,7 +54,9 @@ class _BaseDependency(StrictModel):
     optional: bool = False
     category: str = "main"
     extras: List[str] = []
-    selectors: Selectors = Selectors()
+
+    def to_source(self) -> "SourceDependency":
+        return SourceDependency(dep=self)  # type: ignore
 
 
 class VersionedDependency(_BaseDependency):
@@ -59,32 +73,66 @@ class URLDependency(_BaseDependency):
 Dependency = Union[VersionedDependency, URLDependency]
 
 
+class SourceDependency(StrictModel):
+    dep: Dependency
+    selectors: Selectors = Selectors()
```
Comment on lines +76 to +78:

Reviewer: Can you explain in words what the idea behind a `SourceDependency` is?

Author: Sure (I can also add a comment). We want the `LockSpecification` to be a mapping from environment (platform right now, but potentially other attributes in the future) to a list of dependencies. Those dependencies don't need selectors, because selectors are only used when constructing the `LockSpecification`, to determine whether a dep is required for an environment, or which of multiple versions to use. Thus, a `SourceDependency` represents a dependency plus any additional info associated with it from a source file. Right now that's just selectors, but in the future we may have other limiters like a min or max Python version.

Reviewer: Would something like this make sense as a docstring? [proposed docstring not preserved] Please rewrite in case I misunderstood the details.

Author: That's perfect! Will add.
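The docstring the reviewer proposed did not survive extraction. Based on the explanation above, a sketch of what it might say (the wording is illustrative, not the reviewer's actual text):

```python
class SourceDependency(StrictModel):
    """A Dependency plus the extra metadata attached to it in a source file.

    The metadata (currently just platform selectors) is consulted only while
    constructing the LockSpecification, e.g. to decide whether the dependency
    applies to a given platform or which of several versions to use. The
    finished LockSpecification maps each platform to plain Dependency
    objects, so selectors do not need to live on Dependency itself.
    """

    dep: Dependency
    selectors: Selectors = Selectors()
```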
```diff
+
+
+class Package(StrictModel):
+    url: str
+    hash: str
+
+
+class SourceFile(StrictModel):
+    file: pathlib.Path
+    dependencies: List[SourceDependency]
+    # TODO: Should we store the auth info in here?
+    channels: List[Channel]
+    platforms: Set[str]
+
+    @validator("channels", pre=True)
+    def validate_channels(cls, v: List[Union[Channel, str]]) -> List[Channel]:
+        for i, e in enumerate(v):
+            if isinstance(e, str):
+                v[i] = Channel.from_string(e)
+        return typing.cast(List[Channel], v)
+
+    def spec(self, platform: str) -> List[Dependency]:
+        from conda_lock.src_parser.selectors import dep_in_platform_selectors
+
+        return [
+            dep.dep
+            for dep in self.dependencies
+            if dep.selectors.platform is None
+            or dep_in_platform_selectors(dep, platform)
+        ]
+
+
 class LockSpecification(BaseModel):
-    dependencies: List[Dependency]
+    dependencies: Dict[str, List[Dependency]]
     # TODO: Should we store the auth info in here?
     channels: List[Channel]
-    platforms: List[str]
     sources: List[pathlib.Path]
     virtual_package_repo: Optional[FakeRepoData] = None
 
+    @property
+    def platforms(self) -> List[str]:
+        return list(self.dependencies.keys())
+
     def content_hash(self) -> Dict[str, str]:
         return {
             platform: self.content_hash_for_platform(platform)
-            for platform in self.platforms
+            for platform in self.dependencies.keys()
         }
 
     def content_hash_for_platform(self, platform: str) -> str:
         data = {
             "channels": [c.json() for c in self.channels],
             "specs": [
                 p.dict()
-                for p in sorted(self.dependencies, key=lambda p: (p.manager, p.name))
-                if p.selectors.for_platform(platform)
+                for p in sorted(
+                    self.dependencies[platform], key=lambda p: (p.manager, p.name)
+                )
             ],
         }
         if self.virtual_package_repo is not None:
```
```diff
@@ -105,34 +153,97 @@ def validate_channels(cls, v: List[Union[Channel, str]]) -> List[Channel]:
     return typing.cast(List[Channel], v)
 
 
-def aggregate_lock_specs(
-    lock_specs: List[LockSpecification],
-) -> LockSpecification:
-
-    # unique dependencies
+def aggregate_deps(grouped_deps: List[List[Dependency]]) -> List[Dependency]:
+    # List unique dependencies
     unique_deps: Dict[Tuple[str, str], Dependency] = {}
-    for dep in chain.from_iterable(
-        [lock_spec.dependencies for lock_spec in lock_specs]
-    ):
+    for dep in chain.from_iterable(grouped_deps):
         key = (dep.manager, dep.name)
-        if key in unique_deps:
-            # Override existing, but merge selectors
-            previous_selectors = unique_deps[key].selectors
-            previous_selectors |= dep.selectors
-            dep.selectors = previous_selectors
         unique_deps[key] = dep
 
-    dependencies = list(unique_deps.values())
-    try:
-        channels = suffix_union(lock_spec.channels or [] for lock_spec in lock_specs)
-    except ValueError as e:
-        raise ChannelAggregationError(*e.args)
+    return list(unique_deps.values())
+
+
+def aggregate_channels(
+    channels: Iterable[List[Channel]],
+    channel_overrides: Optional[Sequence[str]] = None,
+) -> List[Channel]:
+    if channel_overrides:
+        return [Channel.from_string(co) for co in channel_overrides]
+    else:
+        # Ensure channels are correctly ordered
+        try:
+            return suffix_union(channels)
+        except ValueError as e:
+            raise ChannelAggregationError(*e.args)
+
+
+def parse_source_files(
+    src_file_paths: List[pathlib.Path], pip_support: bool = True
+) -> List[SourceFile]:
+    """
+    Parse a sequence of dependency specifications from source files
+
+    Parameters
+    ----------
+    src_files :
+        Files to parse for dependencies
+    pip_support :
+        Support pip dependencies
+    """
+    from conda_lock.src_parser.environment_yaml import parse_environment_file
+    from conda_lock.src_parser.meta_yaml import parse_meta_yaml_file
+    from conda_lock.src_parser.pyproject_toml import parse_pyproject_toml
```
Comment on lines +193 to +195:

Reviewer: Is there any particular reason to put these imports inside the function? For more standardized code, I'd prefer to have imports at the top of the file unless there's a good reason.

Author: Those modules import from this module, so top-level imports would be circular.

Reviewer: Ah, yes, circular dependencies in Python are really annoying. Are they using them just for type hints? If so, then they are not true import cycles. In that case, you can do

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ... import SourceDependency, ...
```

and the types won't be imported at runtime, avoiding the cycle. (Very nice, I see that you already know this trick! 😄) If it's not just for type hints, then there may be some genuinely circular logic occurring. For instance, if […]

Author: It's not just for type hints. I will move them to a new module.

Reviewer: I forgot to mention in my previous comment that Python does permit certain types of circular imports. Running […] So there is another strategy: rather than fix the cycles, try to import modules lazily, i.e. replace […]
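Since the thread leans on the `TYPE_CHECKING` trick, here is a minimal self-contained sketch of it; the module and function names are hypothetical, not taken from conda-lock:

```python
# parsers.py needs SourceDependency only for type annotations.
# Assume spec.py imports parsers.py at runtime, so a plain top-level
# "from spec import SourceDependency" here would complete an import cycle.
from typing import TYPE_CHECKING, List

if TYPE_CHECKING:
    # Executed only by static type checkers (mypy, pyright, ...);
    # skipped at runtime, so the circular import never happens.
    from spec import SourceDependency


def count_source_deps(deps: "List[SourceDependency]") -> int:
    # The annotation is a string, so SourceDependency need not be
    # defined when this module is imported or the function is called.
    return len(deps)
```

The string annotation (or a module-wide `from __future__ import annotations`) is what keeps the name from being evaluated at runtime; only the type checker resolves it.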
```diff
+    src_files: List[SourceFile] = []
+    for src_file_path in src_file_paths:
+        if src_file_path.name in ("meta.yaml", "meta.yml"):
+            src_files.append(parse_meta_yaml_file(src_file_path))
+        elif src_file_path.name == "pyproject.toml":
+            src_files.append(parse_pyproject_toml(src_file_path))
+        else:
+            src_files.append(
+                parse_environment_file(
+                    src_file_path,
+                    pip_support=pip_support,
+                )
+            )
+    return src_files
+
+
+def make_lock_spec(
+    *,
+    src_file_paths: List[pathlib.Path],
+    virtual_package_repo: FakeRepoData,
+    channel_overrides: Optional[Sequence[str]] = None,
+    platform_overrides: Optional[Set[str]] = None,
+    required_categories: Optional[AbstractSet[str]] = None,
+    pip_support: bool = True,
+) -> LockSpecification:
+    """Generate the lockfile specs from a set of input src_files. If required_categories is set filter out specs that do not match those"""
+    src_files = parse_source_files(src_file_paths, pip_support)
+
+    # Determine Platforms to Render for
+    platforms = (
+        platform_overrides
+        or {plat for sf in src_files for plat in sf.platforms}
+        or DEFAULT_PLATFORMS
+    )
+
+    spec = {
+        plat: aggregate_deps([sf.spec(plat) for sf in src_files]) for plat in platforms
+    }
+
+    if required_categories is not None:
+        spec = {
+            plat: [d for d in deps if d.category in required_categories]
+            for plat, deps in spec.items()
+        }
+
     return LockSpecification(
-        dependencies=dependencies,
-        # Ensure channel are correctly ordered
-        channels=channels,
-        # uniquify metadata, preserving order
-        platforms=ordered_union(lock_spec.platforms or [] for lock_spec in lock_specs),
-        sources=ordered_union(lock_spec.sources or [] for lock_spec in lock_specs),
+        dependencies=spec,
+        channels=aggregate_channels(
+            (sf.channels for sf in src_files), channel_overrides
+        ),
+        sources=src_file_paths,
+        virtual_package_repo=virtual_package_repo,
     )
```
Author: Both `make_lock_spec` and `parse_source_files` are specifically related to parsing source files and don't have a big effect on the rest of the program, so I moved them to `conda_lock/src_parser/__init__.py` to reduce the amount of code in this file (since it's about 1,500 lines of code).
to reduce the amount of code in this file (since its about 1500 lines of code).There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks a lot for all this work!!! It would still really help for reviewing to have these refactor steps in separate commits so that I could view the substantive changes separately. (In general it's much easier to follow several smaller logical commits than one massive one.)
I'm sincerely very eager to see this through quickly, but my schedule looks difficult at the moment. I'll see what I can do, but apologies in advance if I'm slow to respond.
Author: @maresb I tried to break this PR down into a couple of commits to make it a bit easier. I had some trouble breaking down the last couple of commits, since their contents are very much tied together. But if it's still difficult to look through, let me know.
Reviewer: Thanks for the additional commits! This is much better for review.

It could still be even better: the best would be one single logical change per commit. Please don't change this particular commit now, but to explain what I mean, your first commit, "Move Function Sub PR", could be further broken down into: […] because this is the level to which I need to deconstruct the changes to see what's going on. (Currently I have to diff each function removed from `conda_lock.py` against each function added to `__init__.py` in order to see exactly what changed, so a verbatim cut-from-one-file, paste-in-the-other is easier to process as a single logical change.)
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I see, good point. I'm used to writing large PRs in general. In the future, I can definitely break down my commits even further. Would you like me to modify the last large commit of this PR? My concern with modifying that commit is that I'm not sure how to break it down without having some commit be broken. I normally try to ensure that every commit exposed to master is a somewhat-working impl of the app or library.
Reviewer: Large PRs are fine; it's just large commits that are difficult to understand. Your commits don't need to be perfect, and I'm asking for a fairly high standard. I can explain how I try to write commits: […]

Keeping every commit working is a good rule in general, but in some situations I think it's fine to break something in one commit and fix it in a subsequent one. (For example, in one commit I might remove functionality X, and then in the next commit add functionality Y, which replaces X. This way the new details of Y aren't confused with the old details of X.)

What also helps is to stage partial changes. For example, in the process of implementing X, I may modify some type hints in another part of the code. In this case, I can stage and commit the type hints in a separate commit, so that my implementation of X remains focused.

The most complicated technique is to rebase code you've already committed, but that is really a lot of work.

A few concrete suggestions for how you could break up the main commit: […] (one suggestion concerned `ruamel`).

I think I can handle the large commit as-is, but I would need to find an uninterrupted block of time to work through the whole thing at once. If you manage to break up the commit, I can probably finish reviewing it sooner.
Author: Have you spent a lot of time looking at the last two commits, "Initial Version of SourceFile Approach" and "Adding Test Yaml Files"? If not, I can try to split those further.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think you already looked through the first 2 commits thoroughly, so I will leave them alone.
Reviewer: Yes; in fact, if you create a separate PR for those, I think we can already merge them, since they are minor refactoring changes.

For the big commit I was thinking: since this is a major change, Marius will have to review it after me, so it may be worthwhile to invest extra time in making it readable.