[Support][Yacht] Concerning the lambdas and the sequence in which exercises are solved #3205

Closed
fabiorzfreitas opened this issue Nov 17, 2022 · 9 comments

Comments

@fabiorzfreitas

I got to Yacht after completing all previously available Bool, Numbers and Basic exercises. It felt weird to me that this sequence didn't correspond to the first n exercises in the first row on the track's overview.

I couldn't seem to grasp the solution intended for Yacht. If I were to design one from scratch, I thought I wouldn't be using the given variables at all, so I decided to look at community solutions just to get mine started.

I was a bit shocked to find that most of the most-starred solutions involved using a lambda for each given variable. Thanks to learnpython.org, lambdas are not an entirely new concept to me, but I don't feel knowledgeable enough to use them in my code without being taught through a Concept or the instructions for an exercise.

This had me thinking that I might be solving the exercises in the wrong order and had ended up skipping lambdas and more along the way, although I couldn't find any order except for the one in the Syllabus, which doesn't correspond to the overview.

The other possibility is that I'm expected to fully understand lambdas with no prior mention of them, in which case I believe lambdas deserve a proper lesson!

@BethanyG
Member

BethanyG commented Nov 19, 2022

Hi @fabiorzfreitas 👋🏽

Thank you for filing this issue and sharing your perspectives on the Python track ordering and our (very much a work in progress!!) syllabus. We really appreciate the feedback.

First things first:

  1. lambdas are not required for any exercise on the Python track as it stands. They make life easier in a couple of circumstances (mostly in the two or three exercises that use sorting and filtering functions) - but they aren't strictly necessary even then. In fact, lambdas are rarely needed for mainstream real-world Python code at all. This is why we didn't prioritize writing a learning exercise about them. So don't worry or feel bad that you haven't dug into them.

  2. Just because a bunch of people think using lambdas the way the starred solutions do is cool doesn't mean that those solutions are exemplary. Fun fact: the Python style guide (PEP 8) explicitly calls out assigning a lambda to a variable name as a no-no. Lambdas are intended to be anonymous; naming them defeats the purpose:
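
A quick sketch of the PEP 8 recommendation (the names here are only for illustration):

```python
words = ["Banana", "apple", "Cherry"]

# Fine: an anonymous, throwaway key function
print(sorted(words, key=lambda word: word.lower()))

# Discouraged by PEP 8: binding a lambda to a name
cube = lambda x: x ** 3

# Preferred: a def statement, which gives the function a real name
# in tracebacks and string representations
def cube(x):
    return x ** 3
```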

My solution to Yacht doesn't use lambdas. Neither do DenLi's or chskzl's. Our repo example solution is also lambda-free (although it has a questionable use of functools.partial). And I am sure there are more lambda-free solutions among all the lambda-heavy ones.


For the other points:

I was a bit shocked to find most of the most starred involved using lambda for each given variable. Due to learnpython.org, lambdas are not an entirely new concept to me, but I don't feel knowledgeable enough to use them in my code without being taught through a Concept or the instructions for an exercise.


I was surprised too!

And it is completely valid to feel like you want specific instruction before trying something advanced.

We never intended to push you into a specific strategy with this. Practice exercises are intended to be open-ended and challenge the student to try multiple strategies from where they are in their learning journey - there isn't any one "expected" answer or any "required" techniques.

For the most part (with a few exceptions), we encourage students to explore as many methods as they'd like in making the tests pass and solving the exercise. This can involve multiple iterations that range from super-basic to really convoluted and esoteric. Once a student is done with those iterations, they're free to publish one or all of them to community solutions. In this case, many, many students chose to show off their lambda strategies.

The thing with our community solutions is that they aren't curated. They also aren't checked to make sure that they use specific techniques, or even that they solve the most recent version of the exercise.

We have attempted to do some flagging in the UI:

So when I go to look at community solutions, I typically filter on those that pass the most recent tests. I might even sort on "most recent". But neither of those things guarantees excellent code.



It felt weird to me that this sequence didn't correspond to the first n exercises in the first row on the track's overview.
...
This had me thinking I might be solving the exercises in a wrong order and that I ended up skipping learning lambdas and more on the way, although I couldn't find any order except from the one at the Syllabus, which doesn't correspond to the overview.


Apologies for that syllabus vs overview dissonance.

We used to have it the other way around, where that first row in the overview was grouped the way the syllabus tree groups things. But that created massive confusion as well, so we reverted to putting all of the learning exercises in the top row of the exercise chart, followed by all of the practice exercises in ascending order of difficulty. The reason we group the learning exercises together is that students are allowed to opt out of them entirely by turning off learning mode, and having them disabled as a group looks better than the alternative. But I agree that either strategy can be confusing. We're working on it.



Thank you for sticking through to the end! I know that was a LOT. 😅 I wanted to leave you with one last thing:

Our new community process for bugs and support going forward is to have you post in our community forum, which is full of enthusiastic and friendly people who are eager to discuss things and help you out. I am absolutely sure you'd find many people to talk Yacht strategies with .... and maybe a few who could share what the magic is with all those lambdas 😄


Thanks again for sharing your thoughts, and I hope I had some useful answers for you 💙

@fabiorzfreitas
Copy link
Author

Hi @BethanyG,

First of all, allow me to thank you for this very gentle yet thorough response. I was a bit afraid I would be "yelled at" or treated harshly, as often happens on the internet when you're not very knowledgeable about a given subject. This makes me want to engage even more with the community!


On to the answer itself: I'm feeling much better about lambdas now. What you just taught me fits perfectly with a mentoring session I had today, where I started to undo my misconception that the fewer lines of code, the better (in particular, I was trying to fit as much as I could into a single line, which meant lots and lots of unneeded iterations and clause checks).


Concerning the community solutions and the example in the repo, I believe there's an opportunity for improvement here: this is the first time I've been told there are example solutions. Perhaps every exercise should have a curated example pinned as the first community solution. This example solution shouldn't be a test per se (i.e. the student is not required to provide a solution that resembles the example), but it could illustrate what the student could have done given only what they already know.

For instance, in my first mentoring session I found that whenever I needed to iterate over both the items of a list and their indexes, I would always use the cumbersome for i in range(len(mylist)) with if mylist[i] and so on, instead of learning and using more idiomatic tools such as enumerate(). And here's my point: I had already seen enumerate() in a few community solutions before; I just figured it was a fancy tool I shouldn't worry about yet. Things would've been very different if I had seen enumerate() in a pinned example solution. Curation can be that game-changing!
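
A small illustration of the difference (mylist here is just a made-up example):

```python
mylist = ["sand", "shells", "driftwood"]

# Index-based loop: works, but you have to index back into the list
for i in range(len(mylist)):
    print(i, mylist[i])

# enumerate() hands you (index, item) pairs directly
for i, item in enumerate(mylist):
    print(i, item)
```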


Finally, I may have a suggestion for the Syllabus vs Overview dissonance as well, although that would probably mean proposing a change beyond a single track's scope: perhaps we should add another dimension to the overview and start thinking about it in rows and columns instead of a single sequence. This opens a lot of possibilities, like having empty spots on the grid in order to be able to draw arbitrary lines grouping exercises (e.g. horizontal lines dividing by difficulty).


Phew, that was a lot. I'm aware my suggestions could be new issues, but I thought I'd rather propose them here first before cluttering the very busy Issues tab!

Thanks again for your time and effort to help me, this exchange certainly improved my experience with Exercism!


tl;dr: My questions were answered and I'd like to suggest that 1) every exercise has a curated example solution pinned as the first in the community tab and 2) exercises in Overview are grouped in columns and rows instead of a single multi-line row.

@AlvesJorge

AlvesJorge commented Jul 20, 2023

I was about to start my own issue about this, but I'll piggyback on this one.

The reason everyone is using lambdas is the pre-written code.
The solution should obviously not use named lambdas, but the starting code already has these categories written out as constants assigned to None.
Of course people doing the exercise are drawn to assigning them to something as well, instead of rewriting them as named functions.

Also, I think it's nonsense to have the second argument passed as "yacht.CATEGORY"; the category name as a string would give the person doing the exercise more freedom to pick and choose how to use it.

I thought I couldn't use normal functions because I didn't realize you could pass yacht.METHOD_NAME for a defined function, like DenLi did.
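
Roughly this pattern, as I understand it (a sketch only, not DenLi's actual code):

```python
# yacht.py -- instead of a named lambda, bind the category constant
# to a regular function defined with def
def _yacht(dice):
    return 50 if len(set(dice)) == 1 else 0

YACHT = _yacht  # the tests can then pass yacht.YACHT as the category

def score(dice, category):
    # category is simply whatever callable the constant points at
    return category(dice)
```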

I feel even the example solution is very convoluted and strange. Assigning the categories to numbers which are then found in a list? This sounds like a dictionary with extra steps.

Sorry that I'm a little worked up; it's just that I was helping someone with this exercise and they were very confused by all the lambda solutions.

@AlvesJorge

Just to add to what I said, I would also like to point out that the user of this yacht.py module should not need to know that the module has a function/constant defined that should be passed as the second argument. A string with the name of the category should be what's expected; the module should then handle mapping that string to an operation.
Also, raising an error is easier if the string isn't mapped to anything than if the module doesn't contain a function/variable with the name provided by the module user.
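
Something along these lines (just a rough sketch of the idea, with made-up helper names):

```python
# yacht.py -- callers pass the category as a plain string
def _ones(dice):
    return dice.count(1)

def _little_straight(dice):
    return 30 if sorted(dice) == [1, 2, 3, 4, 5] else 0

# The mapping from category names to operations stays inside the module.
_CATEGORIES = {
    "ones": _ones,
    "little straight": _little_straight,
    # ... and so on for the remaining categories
}

def score(dice, category):
    if category not in _CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return _CATEGORIES[category](dice)
```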

@AlvesJorge

If there's consensus on what I wrote above, I can create a pull request with a fix. I need other POVs because I feel I definitely have a bias here.

@BethanyG
Member

Hi @AlvesJorge 👋🏽

Thank you for chiming in on this discussion! A few quick points before I get into the details:

  1. Currently, we're not accepting community contributions for the Repo. You can read about the details here. Had you opened a new issue, it would have been closed automatically.
  2. The upshot of point 1 is that discussions like this have moved to our community forum, so that a wider selection of the community can see and participate in the discussion. So, in the future, please open a thread there.
  3. Practice exercises (like Yacht, Clock, Gigasecond, and others not linked directly to concepts) are actually pulled from a common repo, Problem Specifications. While tracks do have leeway in implementation details, the core exercise definition and test data come from Problem Specs - so exercise redesigns are also proposed there.
  4. Even if this is more about the Python implementation than the problem itself, redesigning the problem for the track will invalidate all solutions listed in community solutions. That's 9,114 solutions currently. And this is a bit of a problem, since we don't have a way of "zeroing out" the category. So we'll have all of those solutions still there when the new problem launches, and students will still be looking at and emulating them.
  5. Practice exercise example solutions are not in any way considered exemplars, or even appropriate. They are only intended to prove the exercise is solvable and passes all the tests. Their chief purpose is for testing and CI, so the example solution in this case actually doesn't use enums, and it also invokes some weird partial-evaluation stuff.

So, given the points above, let me dig into it.

The TL;DR: This is an old exercise that needs some love, but we're still struggling with how to update it without invalidating 9,114 already existing solutions.

Yacht is far from my favorite exercise, and I have seriously considered deprecating it. At the very least, I think it should be unlinked from the bools concept, and moved to a much later position in the track. I am totally unclear as to why so many students think lambdas are appropriate here.

If you look at the notes at the top of the stub from the earlier version of the exercise, the constants were an attempt at prompting students to play with/practice the use of enums. But since Yacht was written to accommodate Python 2.7.x -- Python 3.0.x (enums were introduced in Python 3.4), the constants were added as a way of having the effect of an Enum without needing to subclass a non-existing Enum.

As "silly" as assigning an arbitrary value to a constant is, that's what enums do. They're considered an important data structure in other programming languages, and are fairly popular in some Python circles as well.

Now that Python versions below Python 3.7 have been sent to the end-of-life bin, we could change the stub to be more aggressively enum-focused. Or, we could remove the constants altogether from the stub. My concern there is that students would be left guessing as to how to start solving the problem, since the tests don't really point at enums as the solution. We'd also need to re-work the test file imports -- but that is fairly straightforward, I think. But since this exercise is templated to pull from problem specifications, there is also a template to re-do and some testing to make sure that things don't go sideways. At the very least, we probably need an instruction append that points students at the enum docs, along with a little bit about the logic of using enums.
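
For a sense of what a more enum-focused stub could point toward (purely a sketch of the idea, not the actual stub or example solution):

```python
from enum import Enum, auto

class Category(Enum):
    """Yacht scoring categories as proper enum members."""
    ONES = auto()
    LITTLE_STRAIGHT = auto()
    YACHT = auto()
    # ... remaining categories

def score(dice, category):
    if category is Category.YACHT:
        return 50 if len(set(dice)) == 1 else 0
    if category is Category.LITTLE_STRAIGHT:
        return 30 if sorted(dice) == [1, 2, 3, 4, 5] else 0
    if category is Category.ONES:
        return dice.count(1)
    # ... handle the remaining categories
    return 0
```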

I think right now, given the amount of maintenance needed, the easiest and quickest change is to bury the exercise farther down the tree, and buy some time to rework it in the least disruptive manner. An alternative would be to deprecate the current exercise version, and replace it with a similarly named but new version that better enforces either enum practice, or implementing the game with simple strings.

But overall, I will have to ask/think about it. Some mentors have argued strongly to keep the exercise. I think that's ... OK.... but I want to avoid giving beginners a foot-gun. And those lambdas are a big foot-gun.

Alright. I hope that sets some context. 😄 Thanks for reading to the bottom!

@AlvesJorge

Thanks for the very detailed answer; I hadn't even considered the actual implications of changing the exercise.
Funnily enough, I had never even heard of enums before.
Next time I'll take my discussions to the forum instead!
I'm at least glad to know that there's definitely room to question the top solutions and the structure of the exercises themselves ^^

@BethanyG
Member

Adding a Note

I have un-hooked Yacht from the bools concept on the concept tree, and moved it toward the end of the "easy" category of exercises. See PR 3471 for the details.

Meanwhile, I have opened issue #3472 to think through re-working the exercise so that lambdas are not encouraged quite so much.

Thank you to everyone who has chimed in here! 💙
