
Effect of Hack proposal on code readability #225

Closed
arendjr opened this issue Sep 20, 2021 · 299 comments
Labels
documentation (Improvements or additions to documentation), question (Further information is requested)

Comments

@arendjr
Copy link

arendjr commented Sep 20, 2021

Over the past weekend, I did some soul searching on where I stand between F# and Hack, which led me into a deep-dive that anyone interested can read here: https://arendjr.nl/2021/09/js-pipe-proposal-battle-of-perspectives.html

During this journey, something unexpected happened: my pragmatism had led me to believe Hack was the more practical solution (even if it didn't quite feel right), but as I dug into the proposal's examples, I started to dislike the Hack proposal to the point where I think we're better off not having any pipe operator than having Hack's.

And I think I managed to articulate why this happened: as the proposal's motivation states, an important benefit of using pipes is that it allows you to omit naming of intermediate results. The F# proposal allows you to do this without sacrificing readability, because piping into named functions is still self-documenting as to what you're doing (this assumes you don't pipe into complex lambdas, which I don't like regardless of which proposal). The Hack proposal's "advantage" however is that it allows arbitrary expressions on the right-hand side of the operator, which has the potential to sacrifice any readability advantages that were to be had. Indeed, I find most of the examples given for this proposal to be less readable than the status quo, not more. Objectively, the proposal adds complexity to the language, but it seems the advantages are subjective and questionable at best.

I'm still sympathetic towards F# because its scope is limited, but Hack's defining "advantage" is more of a liability than an asset to me. And if I have to choose between a language without any pipe operator or one with Hack, I'd rather not have any.

So the main issue I would like to raise is: is there any objective evidence on the Hack proposal's impact on readability?

@ken-okabe
Copy link

ken-okabe commented Sep 20, 2021

Actually, it's bad practice to tweak the binary operator itself for any purpose that essentially breaks its algebraic structure; without messing up the math, we can compose the "advantage" onto the basic operator if we want to.

Mathematically,
HackStyle = MinimalStyle * PartialApplication (the "advantage" you feel)
where * is the function-composition operator.

@kiprasmel made an excellent presentation; did you read it?
#202 (comment)

With it (partial application + F# pipes), you get the same functionality as you'd get with Hack, but 1) without the need for an additional operator |>>, 2) the partial application functionality would work outside the scope of pipeline operators (meaning the whole language), which is great, as opposed to the special token of Hack that only works in the context of pipeline operators, and 3) all other benefits of F# over Hack.
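As a rough sketch of that equivalence (both lines use proposed syntax that does not exist in JavaScript today: the PFA placeholder ? and the pipe operator |>; add is a made-up helper):

const add = (a, b) => a + b;

// F#-style pipe + partial application: add(1, ?) evaluates to a unary function,
// which the pipe then applies to the piped value.
const viaMinimal = 5 |> add(1, ?) |> String(?);   // "6"

// Hack-style pipe expressing the same steps with its topic token:
const viaHack = 5 |> add(1, ^) |> String(^);      // "6"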

Another insight
#205 (comment)
@SRachamim

Hack style is a bad proposal that tries to combine two great proposals: F# |> + ? partial application.
For me there's no question. F# is the only way to go.

In software design, we must know:

  • One Task at a Time Code (a function should perform only a single task)
  • use function composition

or more generally,
KISS principle

When we know the two are equal:

  • HackStyle
  • MinimalStyle * PartialApplication

The latter should be chosen for robustness. And finally, regarding:

So the main issue I would like to raise is: is there any objective evidence on the Hack proposal's impact on readability?

I've just posted
Concern on Hack pipes on algebraic structure in JavaScript #223
This issue actually discusses the impact on readability, so please read it if you have time.

@arendjr
Copy link
Author

arendjr commented Sep 20, 2021

@stken2050 Thanks for pitching in. So what I got from your post is that the usage of lambda expressions on the RHS of the pipe operator would open up the possibility of accidentally referencing variables that happen to be in scope, and in the worst case developers might try to abuse that scope for implicit state management between invocations. That's indeed a good argument against the usage of lambdas there, and it is also another argument against the Hack proposal specifically, as its ability to embed arbitrary expressions makes the likelihood of such maintainability issues even larger. Hope I understood correctly :)
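As a hypothetical sketch of that risk (proposed Hack syntax; all names invented), a pipeline body that quietly reads and mutates surrounding scope as implicit state might look like:

let lastSeenId;   // outer variable abused as implicit state between invocations

const processBatch = items =>
  items
    |> ^.filter(item => item.id !== lastSeenId)   // silently reads the outer scope
    |> ((lastSeenId = ^.at(-1)?.id), ^);          // silently mutates it mid-pipeline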

@ken-okabe
Copy link

ken-okabe commented Sep 20, 2021

So what I got from your post is that the usage of lambda expressions on the RHS of the pipe operator would open up the possibility of accidentally referencing variables that happen to be in scope, and in the worst case developers might try to abuse that scope for implicit state management between invocations.

Actually, yes, and I'm very sorry that I misunderstood your context.

In any case, I firmly believe that if we pursue some advantage on top of the operator, we should compose it: first we provide the minimal and pure operator |>, then, using that operator, we add advanced features, which might become |>>.

@mAAdhaTTah
Copy link
Collaborator

@arendjr One clarification from your blog post: the Hack proposal doesn't introduce +> as part of the current proposal. That's a potential follow-on proposal. Hack also doesn't kill the partial application proposal, which could advance as an orthogonal proposal (see #221 for some info on its advancement issues).

Otherwise, the only somewhat-amusing note is I find all of your examples with Hack pipe an improvement over the status quo versions. The only one I'd speak to specifically is the npmFetch version, which I much prefer over the baseline. If that was broken onto multiple lines, rather than one single line, it would separate the data transformation work being done in the first two steps from the API call on the third step in a way I find much clearer & easier to understand.

Beyond that, I appreciate the examples & exploration. clip is a great example of an API written in more mainstream idiomatic JavaScript that would be harmed by the F# proposal, as it would require either a pipe-specific API or arrow function wrapping.

@arendjr
Copy link
Author

arendjr commented Sep 20, 2021

@arendjr One clarification from your blog post: the Hack proposal doesn't introduce +> as part of the current proposal. That's a potential follow-on proposal. Hack also doesn't kill the partial application proposal, which could advance as an orthogonal proposal (see #221 for some info on its advancement issues).

Yes, good clarification!

Otherwise, the only somewhat-amusing note is I find all of your examples with Hack pipe an improvement over the status quo versions. The only one I'd speak to specifically is the npmFetch version, which I much prefer over the baseline. If that was broken onto multiple lines, rather than one single line, it would separate the data transformation work being done in the first two steps from the API call on the third step in a way I find much clearer & easier to understand.

For that example specifically, I would rather write it like this:

const { escapedName } = npa(pkgs[0]);
const json = await npmFetch.json(escapedName, opts);

No pipe necessary at all, and in my opinion it’s clearer than both options given in the example. It also achieves the separation between processing and actual fetching you were aiming for.

And this is getting to the heart of why I feel the Hack proposal might be actively harmful. It encourages people to “white-wash” bad code as if it’s now better because it uses the latest syntax. But not only do I not find it an improvement, we have better options available today.

And people will mix and match this with nesting at will. It will not rid us of bad code, but it will open up new avenues for bad code we had not seen before. That’s why in the end I think we’re better off without than with.

For the F# proposal, I don’t see so much potential for abuse, hence why I’m still sympathetic to it. But if we could prohibit lambdas from even being used with it, I would probably be in favor. It’d make it even less powerful, but I do believe this is a case where less is more.

Beyond that, I appreciate the examples & exploration. clip is a great example of an API written in more mainstream idiomatic JavaScript that would be harmed by the F# proposal, as it would require either a pipe-specific API or arrow function wrapping.

Yeah, absolutely. I would gladly make the adjustment if the F# proposal were to be accepted, but it’s true it’s a pain point. I do think once libraries had their time to adjust, we might come out for the better, but I cannot deny it will cause short-term friction.

If the Hack proposal were accepted however, things might initially appear more smooth. But once we have to deal with other people’s code that uses pipes willy-nilly, I fear we may regret it forever.

@mAAdhaTTah
Copy link
Collaborator

mAAdhaTTah commented Sep 20, 2021

And this is getting to the heart of why I feel the Hack proposal might be actively harmful. It encourages people to “white-wash” bad code as if it’s now better because it uses the latest syntax. But not only do I not find it an improvement, we have better options available today.

And people will mix and match this with nesting at will. It will not rid us of bad code, but it will open up new avenues for bad code we had not seen before. That’s why in the end I think we’re better off without than with.

This is a reasonable take, even if I disagree with the cost/benefit calculation.

For the F# proposal, I don’t see so much potential for abuse, hence why I’m still sympathetic to it. But if we could prohibit lambdas from even being used with it, I would probably be in favor. It’d make it even less powerful, but I do believe this is a case where less is more.

I think this makes the use-case for F# unreasonably narrow, which would really limit the utility of the operator.

Yeah, absolutely. I would gladly make the adjustment if the F# proposal were to be accepted, but it’s true it’s a pain point. I do think once libraries had their time to adjust, we might come out for the better, but I cannot deny it will cause short-term friction.

My general perspective is forcing mainstream JS to adapt to this API will be more painful than asking functional JS to adapt to Hack. Note that if you were to adapt to the curried style you reference in the blog post, you would no longer be able to use clip outside of a pipe without a very odd calling convention (clip(length, options)(text)) or using a curry helper.

@arendjr
Copy link
Author

arendjr commented Sep 20, 2021

Note that if you were to adapt to the curried style you reference in the blog post, you would no longer be able to use clip outside of a pipe without a very odd calling convention (clip(length, options)(text)) or using a curry helper.

My idea was to just check the type of the first argument: if it's a number, return a lambda; if it's a string, continue as before. It's JS, so why not :) But alternatively, I could just ask the FP folks to use import { clip } from "text-clipper/fp" and expose the new signature there.
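A rough sketch of that first idea (hypothetical; this is not the actual text-clipper API, and clipImpl stands in for the existing implementation):

function clip(first, second, options) {
  if (typeof first === 'number') {
    // clip(length, options): return a unary function, convenient for F#-style pipes
    return text => clip(text, first, second);
  }
  // clip(text, length, options): behave exactly as before
  return clipImpl(first, second, options);
}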

@SRachamim
Copy link

SRachamim commented Sep 20, 2021

I'm still surprised to see claims against the minimal/F# style, saying that it will force curried style. It won't, just as .then and .map won't. Why don't we introduce ^ in the then and map methods as well?

The PFA proposal is a universal solution for those who worry about the arrow-function noise in all three cases: map, then, and pipe. If PFA is not ready, and we don't want minimal/F# without PFA, then let's wait instead of introducing an irreversible Hack pipe.

Pipe, in any resource you'll find, is defined as a composition of functions. From UNIX to Haskell, that's what it is. I really think we should put this proposal on hold ASAP and continue discussing it.

@js-choi added the documentation and question labels on Sep 20, 2021
@mAAdhaTTah
Copy link
Collaborator

@SRachamim The loss of tacit programming is explicitly a complaint against Hack (see #215 and #206). If the intention is to just use lambdas for everything, then F# is at a significant disadvantage vs Hack.

See #221 for concerns about PFA's advancement.

@shuckster
Copy link

I think it's worth trying to define what "readability" can mean across paradigms and across time.

Functional Programmers worry about mathematical consistency: g -> f is the same as f(g).

  • In F#, this looks like: g |> f
  • In Hack, it's g |> f(^)

You really don't have to be on one side or the other to appreciate that Hack robs the JavaScript FP community of a certain kind of transferable mathematical thinking. "Readability" to them means something very different from "avoiding temporary variables". F# affords another kind of functional composition to JavaScript.

Should it have it? JS is of course a multi-paradigm language, but despite the current declarative push it's no secret that it's still very much used in an imperative way, so perhaps Hack suits it "better"?

But JavaScript also has -- by pure luck of history thanks to its original author -- first-class functions. It's baby steps away from being completely FP friendly, and it has an active FP community that seems acutely aware of this.

I'm really trying to avoid falling on one side or the other here, but after that throat-clearing I contend that:

  • Hack solves "soft" readability issues for imperative steps that are better solved with existing syntax and patterns, especially the "unrolling" of logic into dedicated functions / across temporary-variables
  • F# solves "hard" FP readability issues and opens-up more in terms of transferable knowledge from the discipline of mathematics / other FP languages

In other words, Hack is the readability ceiling for imperative programming, whereas F# raises the ceiling for compositional readability for those working in FP, either now or in the future.

@SRachamim
Copy link

SRachamim commented Sep 20, 2021

@SRachamim The loss of tacit programming is explicitly a complaint against Hack (see #215 and #206). If the intention is to just use lambdas for everything, then F# is at a significant disadvantage vs Hack.

See #221 for concerns about PFA's advancement.

Lambda is an (apparently scary) name for something that we're all comfortable with: a function. Functions are already everywhere: class methods are functions, and so are the callbacks passed to map, filter, reduce, and then. This is a simple concept. But more importantly, it's the essence of the pipe concept.

Every concern you'll have about the minimal/F# proposal can be applied anywhere else you use a function. So why don't we tackle it all with a universal PFA solution?

Why don't we use promise.then(increment(^)) instead of promise.then(increment)? You see, the Hack style is a verbose proposal that tries to tackle two things at once.

And as I said, if PFA is stuck, then it's not a good reason to introduce Hack. We should either wait, or avoid pipe at all (or introduce minimal/f# style anyway).

@mAAdhaTTah
Copy link
Collaborator

Functional Programmers worry about mathematical consistency: g -> f is the same as f(g).

Does this imply that Elixir isn't mathematically consistent because x |> List.map(fn num -> 1 + num end) is the same as Enum.map(x, fn num -> 1 + num end)?

@voronoipotato
Copy link

Elixir pipes allow you to take a lambda or function. Whether it pipes to the first value or the last value is an interesting stylistic question, and I get what you're referring to. In JavaScript our functions can be thought of as taking a tuple. This adds the wrinkle of: are you injecting into the first value of the tuple, or the last? Since there is an isomorphism between (a * b * c) -> d and a -> b -> c -> d, Elixir is still mathematically consistent, in my opinion, with the general idea, because we have various flip functions which could, if need be, take an a -> (b * c) -> d and turn it into (b * c) -> a -> d. They're algebraically equivalent, so we don't really need to worry about it: if we get the a -> (b * c) -> d, we can just use a flip function to get (b * c) -> a -> d, and vice versa.
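A tiny sketch of that equivalence in plain JavaScript (the helper names are made up):

const flip = fn => (b, a) => fn(a, b);      // swap a binary function's arguments
const curry2 = fn => a => b => fn(a, b);    // (a, b) -> c   ~   a -> b -> c

const map = (xs, f) => xs.map(f);           // "data in the first argument" shape
const mapLast = curry2(flip(map));          // f => xs => xs.map(f): "data last" shape

map([1, 2, 3], n => n + 1);                 // [2, 3, 4]
mapLast(n => n + 1)([1, 2, 3]);             // [2, 3, 4]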

What is a concern is whether I have to learn a new construct to reason about the new operator. The functional community will simply avoid placeholder pipes, because they're pipes that don't take functions, and we think in terms of passing functions around. You can call it superstitious or close-minded, but it is what it is. I don't think the FP community will ever use placeholder pipes, because they are difficult for us to reason about in terms of edge cases. I actually agree with @arendjr in that I would prefer to have no pipe rather than Hack pipes, and many, many functional devs I have talked to this week have agreed with that.

As excited as we are about pipes, we want to avoid things which are going to make code harder to read and harder to reason about in terms of potential consequences. Functions and lambdas are easy to understand and read; expressions, by contrast, are very open-ended in what they can accomplish and what they mean. Expressions are very powerful, but we FP devs often explicitly trade power for the ability to think clearly. In my opinion JavaScript is already very open-ended, and some creative constraints can make things easier to use.

@mAAdhaTTah
Copy link
Collaborator

Whether it pipes to the first value or the last value is an interesting stylistic question, and I get what you're referring to.

Yeah, my general point is that whether the syntax of a given language specifically matches the syntax of the math is less important than whether it adheres to the values/principles of that math.

we think in terms of passing functions around.

Nothing about Hack pipe prevents you from passing functions around. x |> f vs x |> f(^) is immaterial to the underlying math, and is immaterial to your desire to just do function passing. x |> f implies that f is called; x |> f(^) makes that call explicit. It doesn't change the actual behavior.

Functions and lambdas are easy to understand and read, expressions by contrast are very open ended in what they can accomplish and what they mean.

The body of a lambda, specifically arrow functions in JavaScript, is an expression. Most of the open-ended things you can do with Hack pipe you can do in the body of a lambda (save await / yield); Hack enables you to do that in the same scope as the pipeline, instead of the nested scope of the lambda/arrow. Neither pipe prevents you from using expressions, but because expressions are so central to the language, enabling their ease would be significantly more beneficial to the broader language & ecosystem than it will be harmful to your ability to chain together function calls.
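For illustration, a sketch of the scoping point (proposed Hack syntax; fetchUser is a made-up async helper): the pipe body is an ordinary expression evaluated in the surrounding async function's scope, so await can appear inline, whereas the same step written as an arrow is its own nested function scope.

// Hack pipe: each step runs in the enclosing async function's scope.
const profile = userId
  |> await fetchUser(^)
  |> ^.profile;

// Arrow wrapper: a nested function scope; an await inside it would require the
// arrow itself to be async, and the step would then yield a Promise, not the value.
const step = id => fetchUser(id);   // returns a Promise that still has to be awaited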

@arendjr
Copy link
Author

arendjr commented Sep 20, 2021

Neither pipe prevents you from using expressions, but because expressions are so central to the language, enabling their ease would be significantly more beneficial to the broader language & ecosystem than it will be harmful to your ability to chain together function calls.

[Citation needed]
I'm literally arguing the other way around. I even proposed it might be better to prohibit F# pipes from piping into lambdas, to prevent exactly this (which you rejected). I'm okay if you want to argue that this gives neither pipe proposal a reason to exist, but please don't use this as an argument for why Hack would supposedly be better.

@voronoipotato
Copy link

The problem is that in real life I have to deal with other people's code, and expressions are pretty open-ended. If you constrain expressions to exactly what a lambda does, that's great, but now I have a lambda that isn't obviously a lambda, which is confusing. Therefore, yes, in my codebase with myself alone Hack pipes are fine, but in real life I have to deal with my peers, and I will tell them "Please avoid the placeholder pipes, they are confusing and may not work as you think" because people in practice will do all kinds of stuff that are, well, confusing. My problem with placeholder pipes is explicitly that they are too open-ended. The very thing you espouse as a killer feature makes it, to me, a DOA feature.

After all this discussion, for example, I still don't know if anyone fully understands the scope of an expression pipe. If my coworker declares a var in an expression pipe, is that global, or local like a function? Will my coworker know? They will not, because I don't know, and neither will most people. It's an entirely new construct with new expectations that could be frankly anything. That's why I don't think this proposal works well for JavaScript: it's actually subtly complicated, and JavaScript is already quite complex. Remember, even if you can give me a nice succinct answer to this question, consider that people will forget, because it's unintuitive to have expressions have a scope. Do those placeholder expressions have a this? Can I add attributes to it or treat it like an object? There are so many ways that people will use placeholder pipes to make weird cursed code, and we both know it.

By contrast, with function pipes I can just refactor any sufficiently cursed lambda to a named function and know that the scope will be exactly the same, period. If it's a lambda, I can just say, "It's a lambda," and they get it. I don't think awaiting mid-pipe is actually a good feature, and I think it will just be hard to read because awaits normally go at the beginning of the line. So @arendjr, if you're worried about cursed lambdas, they would be trivial to refactor into named functions. Probably even an action in VS Code and your favorite editor to do that. Whether such a thing is even strictly possible with expressions is frankly unanswerable, especially when you consider the scope of all the things which may yet be added to the language.

@mAAdhaTTah
Copy link
Collaborator

[Citation needed]

All of the built-in types, all of the Web APIs: none of them are designed around unary functions. Even a tool as basic as setTimeout is going to require you to wrap it in a lambda in order to use it in F# pipes. If we're trying to introduce new syntax to the language, requiring all of those APIs to be wrapped in functions to be useful significantly impairs the usefulness of the operator.
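For instance (a sketch only; both lines use proposed syntax, and delayMs and poll are made-up names):

// F#-style pipe: setTimeout is not unary, so it has to be wrapped in an arrow.
const handle = delayMs |> (ms => setTimeout(() => poll(), ms));

// Hack-style pipe: the familiar call shape is used as-is, with ^ marking the piped value.
const handle2 = delayMs |> setTimeout(() => poll(), ^);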


If my coworker declares a var in an expression pipe, is that global, or local like a function?

It would be a SyntaxError because var is a statement, not an expression.

@SRachamim
Copy link

@mAAdhaTTah You are worried about unary functions on a pipe, but why aren't you worried about unary functions on every other method, like then? Why is it OK to accept a lambda there?

If you worry about unary functions, why not push the PFA proposal, which would allow you to use setTimeout without a lambda on both a pipe and a then (and actually everywhere)?

This claim is not a reason to push the Hack proposal.

@mAAdhaTTah
Copy link
Collaborator

Why is it OK to accept a lambda there?

then is a method; it's not new syntax.

If you worry about unary functions, why not push the PFA proposal

I already referred you to #221 to discuss the issues PFA had advancing.

@SRachamim
Copy link

SRachamim commented Sep 20, 2021

@mAAdhaTTah New syntax must not imply inventing new, unexpected ways of reasoning. A great syntactic sugar is one which leverages existing, familiar constructs. You can describe the minimal proposal as a |> f === f(a). It's simple, and doesn't require anyone to learn anything new. It's just |>. Compare it with the Hack proposal, which is an entirely new concept, not only in the JavaScript world but in programming in general!

And again, some variation of PFA is the solution to your concern about curried/unary functions. Don't leak it to other proposals. Where PFA proposal stands should not affect the nature of a pipe.

Pipe should be a simple function composition, whatever function means in JavaScript. If you feel functions in JavaScript need another syntactic sugar - that's a different proposal!

@arendjr
Copy link
Author

arendjr commented Sep 20, 2021

All of the built-in types, all of the Web APIs: none of them are designed around unary functions. Even a tool as basic as setTimeout is going to require you to wrap it in a lambda in order to use it in F# pipes. If we're trying to introduce new syntax to the language, requiring all of those APIs to be wrapped in functions to be useful significantly impairs the usefulness of the operator.

You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place.

What you’re suggesting is a new syntax for any call expression in the language, without asking the question of whether we should want that. Adding two arbitrarily interchangeable syntaxes for the same thing is not a benefit to me; it just complicates things. It complicates the language for newcomers, it adds endless discussion about which style should be preferred when and where, it promotes hard-to-read code for which better alternatives exist today, and ultimately it has very little to show for it.

Someone on Reddit replied to me:

“I don't look forward to trying to debug a broken npm package and see it's written with pipes for every function call.”

And yet that seems to be exactly what you’re encouraging.

So I did ask this question to myself, and I think that No, we should not want this.

And yet I keep reading statements from champions making sweeping generalizations that this is “beneficial to the broader language” or “beneficial to everyone” and I don’t feel represented by this. And frankly I suspect there might be a large underrepresented, silent majority that will not feel their concerns were represented if this proposal is accepted.

@voronoipotato
Copy link

voronoipotato commented Sep 20, 2021

I am confident there are a lot of unexplored edge cases of expressions which my coworkers or teammates will use, and abuse. For example, okay, so they are expressions, not statements; I get it now. But what is this? What about super? What about properties? If we consider the this scope of a pipe, is that even well defined in the spec? There are just countless unanswered questions in my opinion, and what we gain is the ability to write riddles to punish our future selves for hubris. Not to mention that from here on, any time an expression is added to the language, we have to carefully consider how it should work with placeholder pipes, what it can and cannot do. Placeholder pipes are, in my opinion, to be avoided, as we should not trust that future features will behave well with them, because the design space is far too large and novel. An awkward case I just thought of with expression pipes is setting a property, which is an expression, so what happens is that the value being set gets piped through, counterintuitively.

var x = 10 |> this.whatever = 'yeah'
// x is 'yeah' and yeah you can easily forget to put in the placeholder entirely and imo it is not at all obvious. 
var y = {lets: 'go' } |> ^.lets = 'hooray'
//y is 'hooray' and is no longer an object

This is extremely unintuitive, and will definitely blend the mind of a newbie into fresh paste. By contrast, in a function or lambda it's all about returning stuff; there's a clearly communicated path, and you can even use braces and a return statement to be explicit (and I often do). In the first example, I don't even know what I'd expect, but probably either 10 to be passed through unaffected, or undefined. You better believe my coworkers who don't understand pipes will put every valid expression in there. At least I kind of understand functions; they are simpler, while expressions can be so many things. There is pre-existing guidance on how/when to use functions; I have no such guidance for placeholders.

@noppa
Copy link
Contributor

noppa commented Sep 20, 2021

var x = 10 |> this.whatever = 'yeah'

From the readme

A pipe body must use its topic value at least once. For example, value |> foo + 1 is invalid syntax, because its body does not contain a topic reference. This design is because omission of the topic reference from a pipe expression’s body is almost certainly an accidental programmer error.

So it would be pretty difficult to have this mistake go unnoticed. Any sort of linter would point it out and trying to run the code would just crash with a syntax error.

var y = {lets: 'go' } |> ^.lets = 'hooray'

I don't see how the F# version is any different

var y = {lets: 'go' } |> _ => _.lets = 'hooray'

I'm also wondering where yall find these crazy coworkers that bend over backwards to write as unintelligible code as possible, and can't be leveled with.

@voronoipotato
Copy link

The difference is that with a lambda, people expect the intent to be returning a value. What value are you intending to return there? By contrast, an expression can be a lot of things. The root of it is, you're treating two very different things as if they're equivalent. I guarantee you that when this gets released we will see ^.lets = 'hooray' on Stack Overflow with placeholders, and we won't with lambdas, because lambdas have been in the language for several years now, and function expressions several years before that. People have a general intuition of what is expected with function expressions and lambdas; that will have to be learned with placeholder expressions.

Specifically, they're your bootcamp grads, fresh out of college kids, mom and pop shops, and sometimes your boss who mostly writes Java/C# but thinks javascript is easy. You can explain things to them, but then someone new will come with the same misconception, or perhaps the same person who forgot, so I'd really appreciate it if we make sure to value making things intuitive the first go around. If you think I'm being absurd, okay, but like it's going to happen, and it would be a lot less likely to happen if we leverage an existing construct that is more generally understood.

@aadamsx
Copy link

aadamsx commented Sep 20, 2021

You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place.

Right, it's not like when they added OOP syntax sugar to JS they expected everyone to wrap all their existing code in Classes or everyone to use Classes for all new code. It's an option if you want to write that way. They didn't pull back from OOP because devs would have to write code differently to interface with it.

Same goes for adding FP syntax sugar, why do we step back from true FP principles like function composition and currying because someone might have to call a curried function or wrap their code? If you don't want to use the features -- you don't have to! But don't kneecap the language with Hack because to use it you have to write new/different code! This argument doesn't make much sense.

@mAAdhaTTah
Copy link
Collaborator

You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place.

I'm not. I am arguing that the universe of things you can put in a Hack pipe is far greater than the universe of things you can put in an F# pipe. I would by extension argue that the universe of things that benefit from the Hack pipe is far greater than from the F# pipe.

What you’re suggesting is a new syntax for any call expression in the language, regardless of asking the question whether we should want that.

I'm not sure what you mean by this, especially insofar as x |> f(^) looks familiar to anyone who has written f(x). The whole goal of Hack pipe is to avoid a novel call expression.

“I don't look forward to trying to debug a broken npm package and see it's written with pipes for every function call.”

Amusingly enough, this is how I feel every time I pull up a codebase that uses Ramda! I used that library extensively for like 2 years, it's an absolute nightmare to debug because there's no reasonable place to put breakpoints, and it's impossible to explain to non-functional coders because there are too many concepts to explain. So I don't use it anymore, which is a shame, cuz there are significant benefits to the library but they're impossible to integrate without taking on a lot of cognitive overhead.

With syntax we can put a breakpoint in between individual steps in the pipe, similar to how you can put a breakpoint after the arrow in a one-line arrow function. Including an explicit placeholder means I can hover it like a variable & get its type in VSCode. It's easier to debug when it's integrated into the language proper, and we still get many of the benefits of Ramda's pipe / compose.

Let me provide an illustrative example, loosely modeled on an actual example of something I had to do at work. At a high level, I have some user input that I need to combine with some back-end data which I then need to ship off to another API. Using built-in fetch, the code looks something like this:

// Assume this is the body of a function
// `userId` & `input` are provided as parameters
const getRes = await fetch(`${GET_DATA_URL}?user_id=${userId}`)
const body = await getRes.json();
const data = {...body.data, ...input }
const postRes = await fetch(`${POST_DATA_URL}`, {
  method: "POST",
  body: JSON.stringify({ userId, data })
});
const result = await postRes.json();
return result.data

This sort of data fetching, manipulation, & sending (all async) is a very common problem to solve. Any of the fetch calls could be other async work (writing to the file system, asking the user for input from a CLI, requesting bluetooth access (!), interacting with IndexedDB) – all of these APIs are built around promises. Sure, I could linearize it with intermediate variables as I did but these variables are basically useless to me except as inputs to the next step (I actually wrote res as the first variable and had to go back & change it once I got to the second fetch). There was an extensive discussion about this in #200 (see my comment here for another real-world example).

Sure, maybe in the real world, I'd use axios, and maybe I'd wrap up some of these functions up into an API client. Even then, we'd have like:

const fetchedData = await api.getData(userId);
const mergedData = { ...fetchedData, ...input };
const sentData = await api.sendData(userId, mergedData);
return sentData;

Compare all of that to Hack:

// The `fetch` example:
return userId
  |> await fetch(`${GET_DATA_URL}?user_id=${^}`)
  |> await ^.json()
  |> { ...^.data, ...input }
  |> await fetch(`${POST_DATA_URL}`, {
       method: "POST",
       body: JSON.stringify({ userId, data: ^ })
     })
  |> await ^.json()
  |> ^.data

// Or the axios example:
return userId
  |> await api.getData(^)
  |> { ...^, ...input }
  |> await api.sendData(userId, ^);

This makes it significantly clearer to see, step by step, what's going on. Admittedly, I could combine steps (maybe the data merging can happen inline with the API call; it would save me from writing mergedData) or use better variable names (the data suffix everywhere annoys me), but at its core, this is what the pipe operator is for: linearizing a nested sequence of expressions.

There are even like small cases, where I've got a small function call with maybe an expression in it, and then I need to call one other function after it to post-process. I would love to be able to just do |> Object.keys(^) (this is a common one for me) after evaluating that function + expression, rather than having to put Object.keys in front and wrap everything, which now puts the order-of-operations in the wrong direction – Object.keys reads first, but evaluates last. I'm not intending to use it everywhere, but this one-off case is more readable than the base case because the code now evaluates in the same order it's read.
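A small sketch of that one-off case (proposed Hack syntax; countBy and files are made up):

// Status quo: Object.keys reads first but evaluates last.
const langs = Object.keys(countBy(files, file => file.language));

// With the pipe, the code evaluates in the same order it is read.
const langs2 = countBy(files, file => file.language) |> Object.keys(^);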

@arendjr
Copy link
Author

arendjr commented Oct 4, 2021

@mAAdhaTTah

I think this is a very narrow view of what pipelining is and how it can be helpful.

I actually agree with you here: it is. And it's the way I like it :)

I don't think the fact that the subject changes throughout the pipeline is particularly material to whether something is a pipeline or is useful as a pipeline. If you start working with any of the functional ADTs, you'll definitely find your subject changing, especially if there are Task or IO involved.

Sure, but this is JavaScript. We don't really do ADTs here. I don't consider myself part of the FP crowd, and I never advocated for tacit programming. I'm slightly sympathetic towards the F# proposal because I see a narrow use case for it, but you're right: it is a narrow use case, and given the downsides I'm skeptical we should add it. But Hack doesn't alter my position that I only see a narrow use case. Just because it suddenly allows usage everywhere doesn't convince me that we should use it everywhere.

If it's about ADTs, I much prefer the way Rust handles them. Not by going overboard to the functional side, but with a very plain "conventional" method-like notation. Incidentally, Rust as a language is totally fine without pipes. And so am I :)

@mAAdhaTTah
Copy link
Collaborator

That's the whole point of a declarative |> as implemented in almost any other programming language.

As mentioned ad nauseam, those programming languages have features that make writing & debugging a declarative |> very different from doing so in JavaScript, and expecting that we can just port those things whole cloth into JavaScript without modification or any of the relevant features that make writing & debugging that code easier is a fallacy.


I actually agree with you here: it is. And it's the way I like it :)

Sure, if you want to maintain that narrow view, you're welcome to, just as you're welcome to ban the Hack pipe from your codebase because you don't believe in pipelines. But I think this is a narrow-minded view of the language.

Sure, but this is JavaScript. We don't really do ADTs here. I don't consider myself part of the FP crowd, and I never advocated for tacit programming.

We do, actually, use ADTs in JavaScript. But they're not widespread because there's a bunch of features, like point-free pipelining, that make them difficult to use without going whole cloth into functional programming. What I expect (and as I explained in my previously linked comment) is for ADTs to become more broadly useful outside of tacit programming with a Hack pipe, because I'm not an advocate for tacit programming either.

@SRachamim
Copy link

SRachamim commented Oct 4, 2021

That's the whole point of a declarative |> as implemented in almost any other programming language.

As mentioned ad nauseam, those programming languages have features that make writing & debugging a declarative |> very different from doing so in JavaScript, and expecting that we can just port those things whole cloth into JavaScript without modification or any of the relevant features that make writing & debugging that code easier is a fallacy.

JavaScript is not auto-curried, but that's not a barrier. The |> pipe operator is still what it is: reversed function application. Whatever function means in JavaScript, that is what we should live with. And it's fine. Many examples here are not idiomatic pipes. We should not encourage them by changing the definition of the |> pipe operator.

Keep it simple (KISS): a |> f === f(a). Barely need a spec, easier to implement, easier to teach. And we'll have the option to add more syntactic sugars later like PFA, auto-currying, symbolic .then or expressive debugger (none of them are required for the minimal proposal, but they are always an option, each addresses a different issue in the language in a universal way instead of specifically in |>).

@noppa
Copy link
Contributor

noppa commented Oct 4, 2021

@mAAdhaTTah I know you were talking about breakpoints specifically with point-free use of Promise .then, and I completely agree that it's not great. However, a bit offtopic with regards to F# pipelines and point-free, I'll mention that the F# folks have done some really great work in this area: dotnet/fsharp#11957.

If we got that variant of the pipeline, I'd fully expect that we'd at some point be able to step through the pipeline steps and hopefully "step into" the function that is created in RHS when the LHS is passed in there.

@tabatkins
Copy link
Collaborator

@arendjr

I’m sorry, but you’re misrepresenting my argument. I never said people chaining methods or using pipe() today are writing bad code, nor did I say variable names are or should be mandatory. In fact, I am somewhat offended by your tone, because it reflects an appearance that you want to be rid of the argument for a lack of understanding.

Apologies for giving off a bad tone. My argument stands, though: huge bodies of common practice suggest that naming individual steps is rarely desired or used by people using today's variations on code-linearization (method-chaining and pipe()); when it is, it's generally done by cutting the chain at a point where the value is semantically meaningful to the program, assigning to a temp variable, and then working with that variable in a following statement.

For method-chaining, this is the only realistic way to name intermediate values, and it seems to be fine. In theory pipe() users can insert an arrow function in the middle of a pipe solely to name the topic value, but as far as I can tell this practice is rare-to-nonexistent; arrow funcs appear to be used solely when they're needed to express a step that can't be written in point-free style.
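For illustration, a sketch of that practice (users, formatRow, and the data shape are made up): the chain is cut where the value is semantically meaningful to the program, rather than at every step.

// Cut the chain at a value worth naming...
const activeUsers = users.filter(user => user.active);

// ...then keep working with it in a following statement, without naming each
// intermediate result along the way.
const report = activeUsers
  .map(user => formatRow(user.profile))
  .join('\n');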

So to restate my argument: either all of this is bad code, and we should encourage people to name intermediate values a lot more with the pipeline operator (which requires some stronger justification), or all of that code is fine, and the current proposal's lack of topic naming will be similarly fine. I don't think there's a reasonable path between these, where the existing code is fine with minimal intermediate variable naming but the pipe operator code needs to encourage/require more intermediate variable naming.

It’s the Hack proposal’s ability to put arbitrary expressions on the RHS that I take issue with from a readability perspective.

Expressions are not meaningfully more complicated than functions. In particular, you can put arbitrary expressions in a function call, as arguments; all that tacit style demands is that the top-level operation that actually uses the topic value be presented as a function. And with arrow funcs, that's not even a meaningful restriction.

Again, your point could be made if it was clear from practice that people using pipe() often use arrow funcs with meaningful argument names to aid in code readability, but as far as I can tell that's rarely the case, and instead it's generally a placeholder name anyway.


The overall issue here is that your argument, carried from the OP through the rest of the thread, is presented as if it's against pipeline, but it's actually a fully generic argument against any code-linearization construct that doesn't name the topic variable. Going back to the Deno example that keeps getting tossed around:

const file = await Deno.open('intro.txt');
const data = await Deno.readAll(file);
const text = new TextDecoder('utf-8').decode(data);
console.log(text)

If these APIs weren't async, they could and almost certainly would be written as a method chain, in practice:

const text = Deno.open('intro.text')
	.readAll()
	.decode(new TextDecoder('utf-8'));
console.log(text);

The exact API shape could vary, of course, but a file exposing a "readAll()" method and a buffer exposing a "decode()" method are both the sorts of very common API design choices one wouldn't bat an eye at normally, and it would lead to this perfectly reasonable and readable code. If JS had made a Rust-style syntax choice for its await operator, these could even stay async without breaking the method chaining:

const text = Deno.open('intro.text').await
	.readAll().await
	.decode(new TextDecoder('utf-8'));
console.log(text);

Instead, due to some design choices that JS and the Web Platform have made, we have prefix-await and a TextDecoder which takes a buffer rather than the other way around, which happens to render this code more awkward to write, such that using several independent statements is currently the most readable. But these were fairly arbitrary choices; they could have been made another way, and if they had, we'd think this was a perfectly fine thing to write.

So, if you want to argue that the same code written with the pipe operator:

const text = await Deno.open('intro.text')
	|> await Deno.readAll(^)
	|> new TextDecoder('utf-8').decode(^);
console.log(text);

is bad, you have to do some pretty significant work arguing why it's different from the above hypothetical method-chaining example that we could (and do, in similar libraries) write today. Or bite the bullet and argue that they're both bad (bad enough that we should be encouraging a different coding style via language design, in spite of this pattern's heavy existing usage).

Either of these arguments is possible to make, but I believe both are very difficult to argue convincingly. What I'm pushing back against is treating pipelined code as a fundamentally new concept to be wary of.

@runarberg
Copy link

Just to throw it out there. If debugging is this big of an issue, why hasn't there been a proposal for, say, Function.aside:

.map(Function.aside(() => { debugger; }))

or in the above examples:

await Promise.resolve('intro.txt')
  .then(Deno.open)
  .then(Deno.readAll)
  .then(Function.aside((x) => { console.log(x); debugger; }))
  .then(data => new TextDecoder('utf-8').decode(data))
  .then(console.log)

Which is roughly:

Function.aside = (fn) => (x) => {
  fn(x)
  return x;
};

Prior art includes Rust's inspect and RxJS's tap. If this is a common enough pattern, perhaps it should be included in the Function helpers proposal.

@voliva
Copy link

voliva commented Oct 4, 2021

So, if you want to argue that the same code written with the pipe operator [...] is bad, you have to do some pretty significant work arguing why it's different from the above hypothetical method-chaining example that we could (and do, in similar libraries) write today. Or bite the bullet and argue that they're both bad (bad enough that we should be encouraging a different coding style via language design, in spite of this pattern's heavy existing usage).

@tabatkins to me, it is different. I also agree my argument is probably not very strong, but I think it's still a valid concern.

My position on Deno's is that the original code should be preserved, without trying to shoehorn a pipe in there. As I've said previously, giving each intermediate step a name gives me information, so that I don't need to "internally parse" what the expression on the right-hand side does. I see "file", "data" and "text" and I know what's happening without even having to think about promises.

The chainable api you proposed also has this advantage.

const text = Deno
  .open('intro.text')
  .readAll()
  .decode(new TextDecoder('utf-8'));
console.log(text);

Just by reading "open", "readAll" and "decode", which are neatly aligned, I can already imagine what's going on. Adding in Rust's await does add a bit of verbosity, but it's ignorable.

Pipes, however:

'intro.txt'
  |> await Deno.open(^)
  |> await Deno.readAll(^)
  |> new TextDecoder('utf-8').decode(^)
  |> console.log(^)

On every step there's much more going on when I try to read it. I need to read through the whole line, find the ^ character, imagine the previous value in there, and understand that e.g. I'm opening it and awaiting to get a value which has a completely different type from the previous one, which I must remember for the next step. If I know beforehand what each step is doing I can easily skip this just by focusing on the method names (even then the TextDecoder is a bit hard to focus on), but on a first read this style forces me to parse the whole thing to understand what's happening on every step.

Granted this argument is weak for this case, and probably after some time working with Hack pipes they would become easier for me to read by getting used to them, but I can imagine other examples where this issue becomes more and more relevant (places where the placeholder is used twice, expressions containing lambdas with the placeholder inside, or nested pipes). Hack pipes allow and sometimes promote these kinds of usage, which I think are detrimental.

To me, pipes should instead be used in places where you'd want function composition, and/or for libraries which enhance something on every step (e.g. RxJS, where you pipe an Observable through operators and each one of them transforms it into another Observable).

@mAAdhaTTah
Copy link
Collaborator

@runarberg That seems reasonable and would be worth opening an issue in that repo to discuss further.

@arendjr
Copy link
Author

arendjr commented Oct 4, 2021

I think @voliva already made a good representation of my argument here.

@shuckster
Copy link

It's relevant because Hack pipe would allow you to tap in a breakpoint anywhere in the pipeline in your devtools or your editor without needing to modify the code or jump through hoops to inspect the behavior.

I'm not going to argue that this wouldn't be a nice feature, but I would not call log/debugger-statements "jumping through hoops". It's a completely unremarkable practice to human-binary-search your way around problem code with breadcrumbs.

Obviously, any talk about debugging comes with a huge dose of YMMV, but since it's usually clear where bugs start and end, doing an HBS over the problem area is what I normally see done. I believe this is because we're broadly (YMMV!) less concerned with the wrongness of a value at a specific point than with how it got to be wrong in the first place.

I don't mind littering my code with breadcrumbs either but I can't even count the amount of times I've actually PR'd or actually shipped those breadcrumbs.

That happens, but committing breadcrumbs is a small tax against the fixing of the bug in the first place, and is not something that a pipeline operator of any flavour is going to save us from.

Pre-commit hooks, eslint-plugins, and logging utilities that only produce output in prod, either via a flag, or by being behind a conditional that can be tree-shaken away during transpilation. These are not practices that are going to be made redundant by a pipeline operator.

More importantly, if I'm in the browser devtools, I need to go back to my code, edit, refresh, reproduce the steps to get the bug, then start debugging, vs just being able to add a breakpoint in the devtools.

But you're really not prevented from adding a breakpoint in the devtools, are you? You're just prevented from placing it specifically. Which can be useful. But it's also useful to place a breakpoint before the problem area to watch the behaviour of the code unfold by stepping through it.

This is generally my whole problem with point-free code: it's incredibly difficult to use the devtools to debug them, and you often resort to this kind of breadcrumbing or other tools (like ramda-debug) to debug them.

But a nice side-effect of point-free is that those functions can sit in their own files and have unit tests set against them. Again, big YMMV, but going into a unit test to confirm the behaviour of a function can expedite solving a problem, and such tests can be run independently of your browser session.

Sometimes not of course. We want to check the behaviour at the point-of-use because the unique integration is where the problem is. But again, HBS'ing over such things with breadcrumbs is pretty BAU stuff and very transferable.

Again, I'm not saying it isn't handy to pick a specific breakpoint with Hack. But we have decades of shared experience already with debugging functions, which is why it's easy for all of us to come up with examples of how to do it.

(I'm also more than a little curious as to why some of our devtools aren't completely up to the task of setting breakpoints in a 6yo ES6 language feature, or just chaining generally, although we're off-topic enough as it is.)

@lightmare
Copy link

To me the debugging argument seems more about a deficiency of the debugger, than the syntax. With arrow functions, a newline after => allows you to put a breakpoint inside. If you could place a breakpoint on a statement, rather than a line, you wouldn't need that newline. And it would be pretty useful in more situations than breaking into one-line arrow functions. For example:

if (!good(value)) return "bad";
// how to break on the return to inspect the value?

@noppa
Copy link
Contributor

noppa commented Oct 5, 2021

@lightmare Inline breakpoints are a thing, so that's not really the issue

(Screenshot: setting an inline breakpoint in Chrome DevTools.)

The problem is that when you have .then(foo), where do you put that breakpoint? Putting it at .then is not useful, because the value that goes through then does not even exist yet at that point (since it's async). You could put it inside the implementation of foo, but what if foo is called from lots of places and you only really care about this particular invocation of foo? So you change that to .then(x => foo(x)), and then you can put a breakpoint there.

@lightmare
Copy link

The problem is that when you have .then(foo),

I see. But that issue does not apply to the pipeline operator in any form (because it doesn't "build" the whole chain in advance like a .then sequence does). Even with F# style you place the breakpoint at a certain step in the pipeline, and then step into the function.

@noppa
Copy link
Contributor

noppa commented Oct 5, 2021

Yes, this whole breakpoint discussion started with the suggestion that we don't need await support with pipelines because we have .then, to which @mAAdhaTTah pointed out that (paraphrasing) then+point-free is annoying to debug. So this was more about await vs .then than Hack vs F# per se. Or perhaps you could say it's Hack/F# vs Minimal.

@SRachamim
Copy link

SRachamim commented Oct 5, 2021

Just to offer another perspective and try to emphasize how simple a minimal |> is:

Worrying about where to put a debugger on a |> reverse function application:

a |> f |> g |> h

is like worrying about where to put a debugger on a () regular function application:

h(g(f(a)))

With the exception that |> reverse function application (being an infix operator) could have smarter debugging tools.

@noppa
Copy link
Contributor

noppa commented Oct 5, 2021

Well, that is absolutely something that I do worry about today, and one of the reasons why I'd almost always rather split that into

const effed = f(a)
const geed = g(effed)
const hd = h(geed)

even if those variable names add no value.

That said, I don't think there would be any problem debugging synchronous minimal pipelines either.

@SRachamim
Copy link

SRachamim commented Oct 5, 2021

My point is that any concern about debugging ergonomics on a |> reverse function application is not a concern of the |> pipeline operator. It's an existing issue even in the most basic and ancient feature of the language: () function application. So addressing this issue in a different proposal like Function.tap, Function.spy or Function.trace will fix it in all contexts, be it a |> f |> g |> h or h(g(f(a))). IMHO it should not block the minimal proposal.
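A sketch of such a helper (tap is a conventional name here, not an existing built-in; f, g, h, and a are placeholders):

const tap = fn => value => {
  fn(value);        // run the side effect (logging, a debugger statement, ...)
  return value;     // pass the value through unchanged
};

// Works in nested function application today...
const out = h(tap(console.log)(g(f(a))));
// ...and would work identically in a minimal pipeline:
//   a |> f |> g |> tap(console.log) |> h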

We should do the same with any other concern, like the ergonomics of await (either by acknowledging it already has a declarative counterpart, .then, or by proposing a different syntactic sugar: p |>> f === p.then(f)).

@tabatkins
Copy link
Collaborator

tabatkins commented Oct 5, 2021

@SRachamim "minimal" F#-style (just plain function application, no special cases for await/yield) has been rejected twice over already; several committee members said they would give blocking objections if the proposal didn't handle await properly, and F#-style in general has been rejected (thus the current state of the proposal).

I'm going to ask you to stop suggesting it; talking about it in this repository is an unproductive waste of time. Further mentions of it will be marked as off-topic (and continuing to file off-topic comments can eventually lead to CoC enforcement, tho I don't anticipate that being an issue here).

@js-choi
Collaborator

js-choi commented Oct 5, 2021

Heya, this is a friendly generalized reminder to everyone to please follow the TC39 Code of Conduct. Please be respectful, professional, friendly, and patient with each other, as per the Code. We all should try to assume that everyone else is acting in good faith, and we should be open to learning from each other. Let’s keep it polite and civil. ^_^

Thank you, everyone!

@mAAdhaTTah
Collaborator

@arendjr

I want to start by thanking you for your initial investigation, the overall conversation, and engaging with and developing our ideas on this. Having discussed the Hack pipe a lot over the past few weeks, I know it can be incredibly frustrating to engage in these conversations and not be heard, so I appreciate the good-faith back and forth.

I want to start with the nice things cuz you're not going to like my conclusion: if that's the best argument against the readability of Hack pipe, I'm pretty comfortable with the idea that Hack pipe is a significant net readability win. Having toyed with these examples, the thing I keep coming back to is this line:

Earlier in this thread, someone mentioned how they dislike variable names such as data and value. But given their context, I tend to find names like that very helpful. In the above example, it's obvious data is the file contents. In an example where we are performing an HTTP request, it's obvious data is the body (either a JSON POST body, or a response body, but this too tends to be clear from context).

[emphasis mine]

It is contradictory that variable names like data and value, which commonly do not communicate anything of... value (:trollface:) and whose meaning is explicitly inferred from context, could be net more readable than a placeholder whose name also does not communicate anything of value but whose context, both in source & consumption, is much more limited.
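To make that contrast concrete with a tiny hedged example (invented names, not code from the thread):

// A temp variable whose name only says "some data lives here":
const data = new TextDecoder('utf-8').decode(bytes);
console.log(data);

// The Hack placeholder is equally uninformative as a "name", but it is scoped to
// exactly one pipeline step (proposal syntax, not runnable today):
// bytes |> new TextDecoder('utf-8').decode(^) |> console.log(^)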

Using these two code snippets:

const text = Deno.open("intro.txt")
  .readAll()
  .decode(new TextDecoder("utf-8"));
console.log(text);

'intro.txt'
  |> await Deno.open(^)
  |> await Deno.readAll(^)
  |> new TextDecoder('utf-8').decode(^)
  |> console.log(^)

Setting aside the API changes required to make the first snippet work (both the method chaining & removal of async), these two snippets communicate the same amount of information. If the awaits were removed from the latter, the two would even be a similar number of characters. The first snippet also has the same "subject changes" you had issues with in the pipeline examples (going from fd -> Buffer -> string). I'm hard-pressed to believe the difference in readability between these two is material.

Comparing that to the base temp variable version, I still think this is a clear readability win because it explicitly communicates its fundamental nature as a pipeline.
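For reference, a rough reconstruction (not the exact snippet quoted earlier in the thread) of that temp-variable version, using the Deno APIs as they existed in 2021:

const file = await Deno.open('intro.txt');
const buffer = await Deno.readAll(file);          // Deno.readAll as available in 2021-era Deno
const text = new TextDecoder('utf-8').decode(buffer);
console.log(text);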

Lastly, this evolution of the pipeline with your new requirement:

const prefixIfModified = file => Deno.mtime(file) > recentlyModified ? 'UPDATED: ' : '';
const decodeFile = async file => await Deno.readAll(file) |> new TextDecoder('utf-8').decode(^);

'intro.txt'
  |> await Deno.open(^)
  |> `${prefixIfModified(^)}${await decodeFile(^)}`
  |> console.log(^)

This is a huge win over the temp variable version with current APIs. If we started with that version, then stuck an if towards the end to add the prefix, maybe that's not a huge deal. But as that snippet evolves, and additional features are added, continuing to do so imperatively is going to expand the complexity and require importing a lot more mental context to understand.
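For contrast, a rough sketch (not a quoted snippet) of the same requirement written imperatively, reusing the hypothetical Deno.mtime API and the recentlyModified value from the pipeline version above:

const file = await Deno.open('intro.txt');
const prefix = Deno.mtime(file) > recentlyModified ? 'UPDATED: ' : '';  // Deno.mtime is hypothetical, as above
const buffer = await Deno.readAll(file);
const text = new TextDecoder('utf-8').decode(buffer);
console.log(`${prefix}${text}`);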

By comparison, breaking down the transformation into a set of steps, extracting & implementing those steps, and composing them back together is an incredibly powerful (and readable!) approach to solving a problem. It keeps the amount of context you need to hold in working memory down, as each step can be understood in isolation and directly communicates what exactly is happening (renaming decodeFile to getFileContents might be an improvement on this front too). Having Hack pipes in your toolbox both encourages you to think about solving problems like this and encourages you to maintain that structure (and thus the long-term readability) as the code evolves. Temp variables are not going to encourage you to maintain a pipeline as a pipeline.

I think the major concern this doesn't address is developers over-contorting code that isn't really a pipeline into a pipeline. Having spent some time trawling through my code at work (a fairly pedestrian React/Nextjs application), I didn't find a ton of places where I could make those kinds of contortions. The places I saw potential usage of Hack pipe are either "in the small" (e.g. a 2-3 expression sequence with useless temp vars or nested expressions) or explicitly "pipeline problems" (e.g. the async sequence I mentioned above, or a data transformation preparing BE data for FE view code). That said, I accept this as a potential downside but personally think the risk is much smaller than the reward, given what I see above as clear & significant readability improvements.

In general, to expand on Tab's point, Hack pipe makes the clarity & readability of method chaining available to free functions without needing to shoehorn APIs into that style. Given the similarity to existing, readable concepts in JavaScript, I don't find the case against its readability convincing and see the improvements to the above snippets as a strong case for Hack pipe's net readability improvements. I don't necessarily expect this to be fully satisfactory to those who don't see the above as improvements, but I figured it would be worth laying out the case one last time before stepping back from this particular debate.
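As a final hedged illustration of that point (users, isActive, filter, sortBy, and take are assumed free-function utilities in the style of lodash, not built-ins):

// Free functions today force nesting (or wrapping the API so it can be method-chained):
console.log(take(sortBy(filter(users, isActive), 'name'), 10));

// With Hack pipe the same free functions read like a chain as-is (proposal syntax, not runnable today):
// users
//   |> filter(^, isActive)
//   |> sortBy(^, 'name')
//   |> take(^, 10)
//   |> console.log(^)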

Again, appreciate the good faith engagement on this topic!

@arendjr
Author

arendjr commented Oct 7, 2021

@mAAdhaTTah Thanks for the extensive reply! I believe your position is balanced and well-reasoned even if I don’t agree with it. I’m still slightly worried about the schism I expect this to create between the people who will embrace Hack pipes and those who will reject them for readability, familiarity, or other reasons. That said, I don’t have much else to add :)

Cheers! ❤️

(Should we close this issue?)

@mAAdhaTTah
Collaborator

@arendjr Yeah, if we've fully explored this issue, you can close it. Cheers! ❤️


Note to @lozandier: if you return and still want to respond, my suggestion would be to comment on #215 (I think that one is most relevant to what we were discussing) and link back to what you're responding to or create a separate issue.

@arendjr arendjr closed this as completed Oct 7, 2021
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 15, 2021