
Why typeclasses? #30

Open
shelby3 opened this issue Jan 29, 2017 · 201 comments


shelby3 commented Jan 29, 2017

Is reduced verbosity the only major benefit of typeclasses over just passing in a set of functions?

With typeclasses, where bar is a member of Function, we can write:

foo(x: A) where Function<A>
   bar(x)

Without typeclasses we could do:

foo(x: A, y: Function<A>)
   y.bar(x)

I understand @keean proposed other functionality for the where clause, but as for the main functionality of providing pluggable (i.e. dependency injection) interfaces, what is the big benefit that justifies creating a new language?


keean commented Jan 30, 2017

I think the question is: why not just pass all the interfaces as arguments? You can of course do this; after all, type classes are implemented as implicit dictionary arguments.

The advantage of type classes is that they are implicit, so you get shorter, more readable function signatures. They can also be inferred, so you don't have to explicitly write the constraints. You don't want to have to explicitly thread y through every function call where you pass type A just because you might want to call bar at some point. Further, when you realise you need to call bar on a value of type A, you don't want to have to refactor every call in the program to pass an implementation.
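Since the thread targets a TypeScript-like setting, the explicit-dictionary encoding under discussion can be sketched as follows. The `Show`/`greet` names are hypothetical, purely for illustration:

```typescript
// A "typeclass" encoded as an explicit dictionary of functions.
interface Show<A> {
  show(x: A): string;
}

// Explicit style: the dictionary must be threaded through every
// function that might eventually need `show`.
function greet<A>(x: A, showA: Show<A>): string {
  return "hello, " + showA.show(x);
}

// An "instance" for numbers, selected manually at each call site.
const showNumber: Show<number> = { show: (x) => x.toFixed(2) };

greet(42, showNumber); // "hello, 42.00"
```

With typeclasses, the compiler would choose `showNumber` implicitly from the argument type, so the call would shrink to `greet(42)`, and no call sites would need refactoring if a deep callee later starts using `show`.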


shelby3 commented Jan 30, 2017

Thanks for the reply. I appreciate those advantages. I am just thinking about near-term priorities. I think achieving the "perfect" language is a major endeavor that needs a significant amount of funding and time. I've been thinking that your goal is maximum genericity (i.e. the ability to write some code that can be parameterized over many types and use cases) and minimum explicitness (although in other cases you argue for more explicitness where I argue for less).

My near-term goal is not that. That is far too much for me to think about and digest at this point. I am wanting some improvements on JavaScript so I can go use the same code base both on the client and the (Node.js) server. For example a cryptographic library.

  • promises (ES6 has this)
  • anonymous unions (TypeScript has this)
  • guards or match (TypeScript has this)
  • integer types (u8, u16, u32, u64)
  • structures (handles all the boilerplate on top of TypedArrays and ArrayBuffers)
  • everything as an expression
  • Python-like code block indenting instead of curly brace delimited blocks

I've been thinking about achieving a much simpler transpiler to TypeScript that can meet those near-term objectives.

Later can experiment with moving to higher levels of genericity and less explicitness, if warranted by the use cases that are prominent. And with more funding and momentum behind the initial simpler transpiler.

Further, when you realise you need to call bar on a value of type A, you don't want to have to refactor every call in the program to pass an implementation.

That works only if every data type is allowed just one implementation per typeclass, but as we discussed in the past (with @skaller), that also has limitations. Otherwise, you could get unintended implicit functionality. So explicit passing has its benefits.
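The trade-off described here can be illustrated with explicit dictionaries in TypeScript (a hypothetical `Ord` interface, not from any library): one type can have several implementations, which globally-coherent typeclasses forbid.

```typescript
// One type, two orderings. With globally-coherent typeclasses there
// could be only one Ord<number> instance; with explicit dictionaries
// the caller picks whichever implementation it wants.
interface Ord<A> {
  less(a: A, b: A): boolean;
}

const ascending: Ord<number> = { less: (a, b) => a < b };
const descending: Ord<number> = { less: (a, b) => a > b };

function sortBy<A>(xs: A[], ord: Ord<A>): A[] {
  return [...xs].sort((a, b) =>
    ord.less(a, b) ? -1 : ord.less(b, a) ? 1 : 0
  );
}

sortBy([3, 1, 2], ascending);  // [1, 2, 3]
sortBy([3, 1, 2], descending); // [3, 2, 1]
```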


Edit: Dec. 4, 2017

  • integrated low-level C-like coding with the high-level language and memory allocation mgmt (aka GC)

@shelby3 wrote:

Perhaps we will eventually think of some low-complexity abstraction that gives us non-marshaled integration for tree data structures between HLL and LLL, and perhaps we can even prove borrowing and lifetimes statically. But I suspect we can not without heading towards the complexity of C++ and Rust.

@keean wrote:

I would not write a web service in 'C' or even 'Rust' ever, this is really ideal ground for a garbage collected language. I would use Node (TypeScript), Django(Python) or 'Go'.

@keean wrote:

My view is there needs to be a high-level-language and a low-level-language that share enough memory-layout and data-structure that data can easily be passed backward and forward.

@shelby3 wrote:

I am wanting to move away from such clusterfucks of never ending aggregation of complexity.

@shelby3 wrote:

My quest in @keean’s repository has been to get the advantage of GC and perhaps some other aspects of FP, such as first-class functions and closures, in a simple language that can also do some low-level things such as unboxed data structures. To further erode the use cases where C/C++ would be chosen, relegating that clusterfucked C++ language tsuris (and perhaps Rust also, but the jury is still out on that) more and more to the extreme fringe of use cases. For example, it is ridiculous that I must employ a cumbersome Node.js Buffer library to build an unboxed data structure.

@shelby3 wrote:

[…] so that code which would otherwise be written in for example TypeScript and C/Emscripten would not have to be rewritten later in Lucid. More optimum compiled targets could be developed such as WebAssembly which leverage the full power of the design laid out above, including more optimum compilation of the C-like code. One of my test use cases would be to translate my Blake C code to Lucid.


keean commented Jan 30, 2017

Regarding one implementation per class, that is the advantage of type classes :-) If you want more you need to pass an object or function (because you want the implementation to depend on the value of the object not the type).

To me it seems you can do nearly everything you want in typescript, so why not use it? I am working on a project in typescript at the moment, and for the ES6 stuff plus types it's good enough.

Regarding goals, I agree maximum genericity is my goal, and minimum boilerplate. However, I also want to prioritise readability and understandability over conciseness, as you spend more time debugging and maintaining code than writing it. That said, I am happy for explicitness to be optional, which includes type inference (type annotations optional) and typeclass-constraint inference (so constraints/bounds are optional). The only place we need types would be modules, where we want a stable interface that enables separate compilation.


shelby3 commented Jan 30, 2017

Regarding one implementation per class, that is the advantage of type classes :-) If you want more you need to pass an object or function (because you want the implementation to depend on the value of the object not the type).

If one universal implementation for each universal class were an absolute advantage, or even feasible, then we wouldn't need scoping for (type) name conflicts in programming. It is impossible to guarantee one global implementation for the universe, because total orders don't exist.

I am not arguing that it can not be a feature that is desirable in some cases in a partial order.

To me it seems you can do nearly everything you want in typescript, so why not use it?

I guess you didn't read my list above, or didn't realize that the items I listed which TypeScript doesn't do are extremely important for the work I need to do.

However I also want to prioritise readability and understandability over conciseness

Ditto myself. I take this even further than you as I am thinking all types shall be explicitly declared, except in an assignment where either the destination or source is explicit, but not required for both. Since passing function arguments is assignment, then anonymous lambda functions can be implicitly typed by the function's argument type.


keean commented Jan 30, 2017

I thought TypeScript included ES6 features, and has integer types as well as structures (classes). The only one it does not have is "everything is an expression", I think, so it seems close to what you want?

Regarding type classes, it is clear that each function can only have one implementation for any given set of argument types. In other words, we can only define one implementation for (+)<Int, Int>.


shelby3 commented Jan 30, 2017

Regarding type classes, it is clear that each function can only have one implementation for any given set of argument types.

Disagree (my reason was already stated). But I don't want to repeat that argument now. I have said all I want to say about it for now. It isn't a priority issue for me at the moment. We can revisit it later, if we ever get to the point where I want to implement a "typeclass" language.

Edit: we had a long discussion about this in another thread in 2016.


shelby3 commented Jan 30, 2017

I thought TypeScript included ES6 features, and has integer types as well as structures (classes).

Afaik, there are no integer types in ES6. Are you thinking of 64-bit math methods in ESNEXT?

http://kangax.github.io/compat-table/esnext/

Interfaces and classes do not pack into ArrayBuffers. Perhaps you were thinking of the following proposal which has gained no traction:

http://wiki.ecmascript.org/doku.php?id=harmony:typed_objects

In 2016, I wrote some very complex (untested) JavaScript code to improve on that concept and provide a polyfill. But it has some drawbacks. I need to revisit that code to refresh my memory.
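For context on why ES6 alone doesn't suffice here: JavaScript numbers are IEEE-754 doubles, exact only up to 2^53, so 64-bit integer math silently loses precision. (BigInt, standardized later in ES2020 and therefore postdating this discussion, is the eventual native fix; this sketch assumes an ES2020 compile target.)

```typescript
// Doubles represent integers exactly only up to 2**53.
const big = 2 ** 53;          // 9007199254740992
const lost = big + 1 === big; // true — the +1 is rounded away

// BigInt (ES2020) provides exact arbitrary-precision integers.
const b = 2n ** 53n;
const exact = b + 1n !== b;   // true — no precision loss
```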


shelby3 commented Jan 30, 2017

I looked again at Nim, but it is getting too far from JavaScript (it wants to be a systems language competing with C, C++, Rust, and Go). For example, it loses JavaScript's Promises, ArrayBuffers, etc.

And it doesn't offer 64-bit integer support on JavaScript.

And I'd much prefer the JavaScript output be recognizable, so the JavaScript debugger can be used effectively.


shelby3 commented Jan 30, 2017

I think I envision a robust way to transpile from a syntax and type system I want into comprehensible TypeScript. For example, I can use TypeScript's getters and setters to access the fields by name of a DataView on an ArrayBuffer. When not packed into a binary ArrayBuffer, 64-bit integers can be represented with TypeScript's tuples.
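A minimal sketch of what such getter/setter accessors over a DataView could look like. The record name, fields, offsets, and little-endian layout below are illustrative assumptions, not anything from the thread:

```typescript
// Packed struct { id: u32, balance: u32 } over an ArrayBuffer.
// Getters/setters hide the DataView offsets from the caller.
class Account {
  static readonly SIZE = 8; // two u32 fields, 4 bytes each
  private view: DataView;

  constructor(buffer: ArrayBuffer, byteOffset = 0) {
    this.view = new DataView(buffer, byteOffset, Account.SIZE);
  }

  get id(): number { return this.view.getUint32(0, true); }
  set id(v: number) { this.view.setUint32(0, v, true); }

  get balance(): number { return this.view.getUint32(4, true); }
  set balance(v: number) { this.view.setUint32(4, v, true); }
}

const buf = new ArrayBuffer(Account.SIZE);
const account = new Account(buf);
account.id = 7;
account.balance = 1000;
// Field reads/writes go straight to the underlying bytes — no copies.
```

A transpiler could generate accessor classes like this from a struct declaration, which is essentially the approach described above.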


keean commented Jan 31, 2017

It may be slower than normal JS objects due to packing issues, or it may be faster due to non-string property lookup. It certainly seems the right way to process binary network data, but I would probably use native JS objects everywhere memory layout is not important.


shelby3 commented Jan 31, 2017

but I would probably use native JS objects everywhere memory layout is not important.

Agreed. I didn't intend to imply otherwise. Not only network data, but file data. You know I am working on a blockchain design.

Edit: Also memory data. With a blockchain, you need the UTXO set stored in RAM, and it becomes huge.


keean commented Jan 31, 2017

nodejs can mmap files from JavaScript, which would work well with this too. You don't have too many choices in-browser; you are limited to Blobs with a streaming interface, and the file IO API is not supported by all browsers.


shelby3 commented Jan 31, 2017

nodejs can mmap files from JavaScript, which would work well with this too.

Right, agreed. Interacting with binary data in JavaScript on Node.js goes through very verbose APIs. I'd prefer a language-native solution.
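To make the verbosity concrete, here is roughly what working with a small packed record looks like with Node's Buffer API (the record layout is invented for illustration):

```typescript
// A packed record { version: u16, length: u32, checksum: u32 }:
// every field access is a typed method call with a manual byte offset,
// exactly the boilerplate a native struct syntax would eliminate.
const buf = Buffer.alloc(10);
buf.writeUInt16LE(2, 0);      // version  at offset 0
buf.writeUInt32LE(1024, 2);   // length   at offset 2
buf.writeUInt32LE(0xdead, 6); // checksum at offset 6

const version = buf.readUInt16LE(0);
const length = buf.readUInt32LE(2);
const checksum = buf.readUInt32LE(6);
```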


shelby3 commented Jan 31, 2017

You don't have too many choices in-browser

I wrote "client side" but that doesn't mean we are limited to the browser. I am thinking Android and iOS.


shelby3 commented Jan 31, 2017

Most in the blockchain realm use a systems programming language with good binary control, such as C or C++. But this is painful in other aspects: there is a lot of verbosity that isn't always needed, you need complex libraries such as Boost for asynchrony, there is no GC by default, etc. I think it is overkill.

I am searching for a better balance. Rapid coding with less needless verbosity, yet still be able to handle binary data and integer math without verbose APIs. And also some (explicit) typing for correctness checks and better readability (open source).


shelby3 commented Jan 31, 2017

Others have integrated C/C++ with JavaScript via Emscripten, separating out the code that requires that control from the code that can be done more elegantly in JavaScript, but this IMO isn't ideal.

Most code doesn't need to be that heavily optimized (or at least not the first priority). I am thinking there should be something in between a toy language such as JavaScript and a full blown systems language such as C++.


keean commented Jan 31, 2017

node-webkit could be an option then (npm nw). You can still compile typescript ( for example: https://basarat.gitbooks.io/typescript/content/docs/quick/nodejs.html )


shelby3 commented Jan 31, 2017

node-webkit could be an option then (npm nw).

Good point and agreed if Node.js APIs are needed client-side.


keean commented Jan 31, 2017

Well if you want to read and write binary files, it's pretty much required. In a browser, FileReader ( http://caniuse.com/#search=FileReader ) is well supported, but FileWriter is not ( http://caniuse.com/#search=FileWriter ).


shelby3 commented Jan 31, 2017

Well if you want to read and write binary files, it's pretty much required.

Obviously Node.js isn't the only way to read and write files on Android and iOS (and even can access it via JavaScript in a WebBrowser instance). Whether one wants to carry all the baggage of node-webkit just for file APIs is not a certainty. But yeah, it is apparently one of the options.


shelby3 commented Jan 31, 2017

My focus is on the notion that we shouldn't be forced to use a systems language just to do integer operations and packed binary structures. We should be able to combine these capabilities with a rapid-coding paradigm that has GC, simple-as-pie asynchronous programming, and less optimal but less painful ways to, for example, manipulate strings (because we don't always need that to be highly optimized).

C++ is too tedious and overly detailed when in most cases we don't need that. Yet we shouldn't have to revert to C/C++ just to get integer operations and packed binary structures.

Premature optimization often means projects that don't get finished.


shelby3 commented Jan 31, 2017

Yeah performance is battery life and that is important on mobile. But projects that get to market are more important than projects which don't.


keean commented Jan 31, 2017

For Android and iOS there are things like Cordova, is that the sort of thing you are thinking about?


shelby3 commented Jan 31, 2017

Yeah there are also Nativescript and Tabrisjs.

But I am not sure I'd choose those, because I may also want to think about desktop apps as well.

This is a wide open field of experimentation, because we also have convergence of the laptop/desktop and mobile on the horizon perhaps...


keean commented Jan 31, 2017

Using HTML5, JavaScript and IndexedDB you can already write software that runs across mobile, pad, laptop, desktop, and big-screen TVs. It works pretty well, except for annoying corner cases, like iOS limiting IndexedDB to 50MB of storage. It's definitely fast enough for a lot of applications, and HTML/CSS makes a good-looking UI reasonably easy.


shelby3 commented Jan 31, 2017

Writing to the least common denominator controlled by gatekeepers who squat on and stall innovation seems to me to be a losing paradigm. What about SD cards, cameras, various multi-touch swipe gestures, etc... Consumers care about features not about how you achieved it.


keean commented Feb 1, 2017

Well you have to write your own "drivers" for each platform. On Android you have to write in Java to call all the native APIs, but you can call Java you have written from JavaScript if you create an android application shell in Java. Same for iOS (9+) where you can call native APIs from objective-c or Swift, and you can call your own objective-c or Swift code from JavaScript, so you would need a different objective-c or Swift wrapper app. These wrapper apps need to get into the respective app-stores, which means getting Google/Apple to approve your code. You can distribute outside of the app store on both devices, but it requires the user to jump through hoops (android you need to enable a device setting to allow non-app-store apps to install, iOS you need to "trust" the developer of each non-app-store app in the device settings).


shelby3 commented Feb 1, 2017

Well you have to write your own "drivers" for each platform. On Android you have to write in Java to call all the native APIs, but you can call Java you have written from JavaScript if you create an android application shell in Java. Same for iOS (9+) where you can call native APIs from objective-c or Swift, and you can call your own objective-c or Swift code from JavaScript, so you would need a different objective-c or Swift wrapper app.

Agreed and I was fully aware of that. But good you've summarized for readers.

These wrapper apps need to get into the respective app-stores, which means getting Google/Apple to approve your code. You can distribute outside of the app store on both devices, but it requires the user to jump through hoops

These companies are trying to create walled gardens and be gatekeepers. Me thinks the free market is going to route around these Coasian costs. They are trying to carefully position themselves with "safety" and "convenience" to acquire monopoly control.

In fact, that is one of my other longer-term goals.

If you have an open source app which acts as a base for all other apps and doesn't abuse as a gatekeeper, then it could alleviate this problem. Once everyone has that base app installed, then it takes care of overriding all these toll bridges. Toll because the costs are paid but indirectly.

(android you need to enable a device setting to allow non-app-store apps to install, iOS you need to "trust" the developer of each non-app-store app in the device settings).

Clicking OK on a confirmation dialog is no problem for users. It is when they make the enabling action modal and hidden that they achieve insidious control.

Popping up a red alert, "many other users have reported issues with this app" (with a link off to details) would be a sufficient free market solution to safety.

Even Google realizes that ads can't sustain the company. So they are going to have to become monopolistic. Open source should slay this, as it did to Microsoft Windows.

Android defeated iOS in global installed units (not dollar weighted) marketshare because it was more open. Ditto to Android if they make it difficult for users to have freedom.

As I assume you know, Android still lags (or perhaps near parity with) iOS on a dollar weighted basis (such as $ spent on apps), because iOS simply worked better. For example the egregious latency with Android audio was unacceptable for some scenarios and many people aren't even aware of issues like this. Apple's cathedral-style engineering integration was superior, but eventually bazaar-style open source catches up. Takes a while.


shelby3 commented Feb 7, 2017

@keean I am not actively coding again yet (only on the 19th day of my 8-week intensive 4-drug phase of TB treatment).

I find this comment to be relevant:

Maybe what Rust really needs is fewer programmers writing Rust crates and features, and more programmers who use those features to write end-user programs that won’t be redistributed.

This.

The Rust community looks, from the outside, a bit ingrown and subject to a kind of “can you top this (complexity)” syndrome. I don’t mean this observation in a hostile way; if I were eyeball-deep in the development of a young language I would probably behave the same way – and arguably I did, back in Python’s earlier days when I was briefly on that devteam.

But yes, I think more feedback from people who just want to get stuff done and have a lower complexity tolerance would be good for the language.

Complexity is even more insidious than that, I’m sad to say. It’s not enough to avoid being excessively clever. Even if you restrain yourself, complexity will still creep up on you anyway. You need to actively fight it whenever you can, and periodically review your “complexity budget.”

Rust has already, in my opinion as a production user, overdrawn the complexity budget in one place. This happened accidentally, very early on. Specifically, it’s the interaction between several Rust features: (1) The auto-deref features provided by Deref. (2) The split between “owned” types like String and reference/slice types like &str. (3) The unification-based type inference that allows you to write let i: u32 = "10".parse()? and automatically choose the right version of parse. (4) The Into trait (and related traits), which allows you to write functions that take arguments of type Into<String>, which essentially means, “Oh, just give me any type that advertises conversion to a String.”

Any one of these features individually makes Rust much nicer to program in. But if several of them all gang up in the same piece of code, a novice Rust user will get burnt. If I’m reading between the lines correctly, this might actually be what bit you with bind. The bind function takes an argument of type ToSocketAddrs, which is basically one of those Into-like traits which allow you to pass in several different ways of representing a socket address. It’s cute and clever (yuck), and it works most of the time. But if it gets combined with (1-3) above, it’s going to generate crappy error messages. The fix is relatively simple for an experienced Rust programmer, if annoying: Add explicit type declarations all over the place. But at work, my guideline is “just don’t use Into-style conversion traits in your APIs unless you have a damn good reason. They don’t actually improve API ergonomics.”

If this is the only place where Rust exceeds its complexity budget, well, users will just learn this one bad interaction and go about their lives. C++ has dozens of warts worse than this. But any further expansions of Rust in this area need to be handled very carefully, and production Rust users need to participate in the RFC process.

But let’s dig deeper.

There are two libraries in the Rust space which worry me: Diesel and Tokio. Diesel looks like an ORM, but it’s really not—it’s just a typed version of the relational algebra which can dump output into Rust data types. It results in really concise and efficient code once it’s working. But the error messages are just horrendous (though not yet in modern C++ territory, though that’s nothing to be proud of). Diesel has chosen to push Rust’s type system to its limits in the name of speed and expressiveness. I’m not sure it’s worth it. We had a long debate at work and I paired on Diesel code with one of our other Ruby backend guys, and he said the tradeoffs with Diesel’s error messages were worth it. I’m not 100% sold.

Where I’m more concerned is tokio. As everybody has told you, tokio is central to the Rust async I/O story, and lots of popular crates are moving to tokio-based backends (though most will still export synchronous APIs). And from what I’m hearing, tokio is currently generating bad error messages for some common use cases. In my opinion, this needs to be fixed—and the core team is discussing pushing up a language feature that’s been in the RFC process for a while now, which will hopefully make the error messages much clearer.

Still, I’m holding off on tokio-based async I/O for at least 6 months in production code, to see how this all plays out.

And the comments complaining about the JavaScript ecosystem chaos such as this one:

The situation with JavaScript and Node.js is more complex. At the time of writing, there are 399,773 packages on npmjs.com. And yet, there are really obvious categories with no good alternatives. For example, I needed a CLI option parser the other day, and I needed it to support git-style subcommands. I searched npmjs.com for 30 minutes and spoke to our in-house Node guru, and everybody said, “No, every single Node package for your use case is terrible.” This makes me sad and frustrated. I eventually found something that was, frankly, pretty bad and cudgeled it until it worked.


shelby3 commented Feb 7, 2017

And as I had pointed out on the Rust forum back in 2016, all the complexity doesn't really buy us much, because most programming errors are not within the class of safety that Rust checks for (yet those safety checks are very debilitating to code under), as reiterated by this comment:

What you described is probably true for a lot of user-space code, but it is not true for the kernel. If you look at Linux kernel CVEs, you will see that they have nothing to do with buffer overflows, stack smashes, or use-after-free vulnerabilities. They are more likely to be logical errors, which can be exploited under some conditions. Even when you hear of a data race condition found in the kernel, it is unlikely to be caused by a missing lock. In most cases, it is a logical error of some kind. For example, here is the patch for a recently discovered data race in Linux (CVE-2016-5195): https://lkml.org/lkml/2016/10/19/860. As you see, it was not about locking, but about properly checking different flags.

Moreover, Linux developers actively use a static analyzer to catch the most common mistakes in their code. So Rust can’t help there much. Also, it is completely unrealistic to write a general purpose kernel without unsafe code. Even if you look at the Rust standard library, you will find a lot of unsafe code there (e.g. all collections). When you work with hardware at low-level, you have to do a lot of things that can be potentially unsafe. But let’s suppose that it was possible to write the whole kernel in a safe programming language, would it eliminate security exploits? Of course, not.

You may look at the long list of CVEs for many web frameworks written in safe languages (such as PHP, Python, Ruby). By its nature, the kernel works at a higher privilege than the user code, which means many logical errors in the kernel can be exploited to do something that the user should not be allowed to do. So writing a safe kernel is a far more complex task than writing a kernel in a safe language. First of all, you have to have a formal definition of what you mean by “safe” and then decide how you are going to prove that. It is very tricky even for a very small toy kernel.


keean commented Jun 15, 2020

@shelby3 You can be not afraid to offend socialists, and I agree that analysing the reasons some projects fail could be relevant, although probably better in a project management topic. However I think you are going too far in shifting this towards a discussion of the virus, or economics. The danger is other people with differing views will start to engage on these topics, and the actual PL stuff will get lost in the noise. I think you could have made your point perfectly well without needing to mention those other topics. Now I am going off-topic and adding noise to the discussion too. Let's try and get back on-topic.


shelby3 commented Jun 15, 2020

@keean I removed the offending text and inserted the following:

[I removed this text to appease @Ichoran, please refer to the Edit History to find it]


Ichoran commented Jun 15, 2020

@keean - I am concerned with the attitude that one has to accept a particular (abrasive or indifferent) mindset in order to be a welcome contributor. @shelby3 edited out some particularly strongly-worded off-topic stuff (that some people may have found offensive) but not, critically, the attitude that dismisses working with people who dislike said stuff.

However, I accept that this isn't a personal project of his. I'll simply refrain from engaging with him. I'm happy to work with people with all sorts of different views, or to not know what their views are, in order to accomplish something worthwhile, as long as they'll do the same. Shelby's indicated that's not really how he views things, which is up to him, but who I interact directly with is up to me.

(Two exceptions: @shelby3, when discussing technical issues that have arisen in another project, how you feel about the management or goals of that project is not relevant, except possibly as an apology for why you can't master your emotions sufficiently to engage with the technical details. No content needs to be given in that case, just "I dislike them so much that I can't do this." If you are going to engage with the technical issue, how you feel about governance or whatever does not aid any argument; it's a logical fallacy (of the ad hominem type; sometimes it's called the Genetic Fallacy, e.g. https://owl.purdue.edu/owl/general_writing/academic_writing/logic_in_argumentative_writing/fallacies.html). So even that digression was unnecessary, much less all the other stuff that followed.

Edit--I see in another comment you've ascribed to me positions I don't hold. There's a very big difference between saying that under no circumstances can you offend someone and going out of your way to bring up topics that commonly cause contention and offense. Maybe my view is clear now, maybe not, but I'm not holding you to anything so I don't see a good case to keep talking about it.)

I will be on-topic from here on when I post, save for very brief clarifications if my off-topic stuff has been misunderstood in a problematic way.


Ichoran commented Jun 15, 2020

@keean - I'm not familiar with Agda's implicit typeclass scheme, but Scala does it this way also--if you want to use a different ordering, you just pass in a different Ord (actually, Ordering in Scala). In Scala 2, this capability introduced an unpleasant ambiguity in parsing because neither you nor the compiler could straightforwardly tell whether an argument was filling in an implicit typeclass or calling the returned value as a function. But in Scala 3 that's fixed because you have to use a keyword (using) to supply a custom implicit argument.

The problem with Scala's scheme is that without global coherence there may be multiple implicit typeclasses in scope that could be relevant, but clearly some of them make much more sense than others. Knowing where to look for implicits, letting the compiler make the obvious choice when more than one is in scope, and making what is obvious to one programmer discoverable by other programmers, has been a source of considerable pain for Scala developers. We've mostly borne the pain cheerfully because of the incredible utility one gets in return, but there are improvements there too in Scala 3 (too technically detailed for me to care to get into them, but you can read about the entire reimagined implicit-and-typeclass scheme [here](https://dotty.epfl.ch/docs/reference/contextual/motivation.html), starting at The New Design about halfway down). Though I have some quibbles with some details, overall I think it's a substantial improvement.

Having used both Rust and Scala extensively at this point, I must say I considerably prefer even the Scala 2 approach over Rust's. Rust interfaces are littered with "Oops, what if I don't want the default implementation?" methods for sorting or whatever. Of the two scenarios, being able to get custom functionality when you're using typeclasses, at the expense of some difficulty with finding the right one in certain obvious cases, seems much better to me in practice. Rust's solution (newtyping) works okay if the thing is a bare type, but when it's a generic type parameter, you can very easily run into major runtime performance penalties in addition to the tedious boilerplate.

It's possible that with sufficiently different language features that there could be no downside to a flexible scheme like Scala's, or no need for anything but global uniqueness. The extant examples have downsides both ways, though. And while I find Haskell not to my liking, plenty of people do and don't seem particularly bothered by uniqueness constraints, so I can't make a strong claim that my preferences are universally shared. But hopefully I've explained why my preferences are what they are.

@shelby3
Author

shelby3 commented Jun 16, 2020

@Ichoran I have no way to message you privately, so I will just have to put this in public. I want you to know that I appreciate very much your interaction and the contribution you have made to the brainstorming process in the various issues threads here.

I am tolerant of your criticism. No harm. Nothing significant was censored. I understand your viewpoint that you would prefer to have a purely technological discussion and never mention organizational culture. I understand you would prefer I change my habits at times when necessary to avoid creating tension. I understand you want harmony and production as a highest priority. Seemingly all admirable stances.

At this stage I think my priority is on frank discussion of truths, not on building community. Because right now at the earliest stage, any mistakes in design and/or planned organizational structure could be critical mistakes. Community comes later and really isn’t relevant right now. At the community stage, I might be kicked out, lol. I usually am.

I happen to believe that sometimes being unpopular is synonymous with being unafraid to discuss frankly.

Anyway at least everyone knows I will not be involved in any project that has an organizational structure like Rust with its Baskin-Robbins 31 flavors of leaderless organization. Or JavaScript and its 400,000 libraries. One of the main criticisms against both ecosystems is the lack of stability and official libraries. It’s chaos.

EDIT: I learned from when I created WordUp in the late 1980s which garnered ~30% marketshare of the AtariST word processor market and CoolPage(.com) in the late 1990s that garnered ~1% share of the entire Internet by the turn of the century, that building community is about making things easy-to-use. And since then with my involvement in the cryptocurrency ecosystem I have learned that enabling rampant speculation is the sure-fire way to onboard. Everyone loves a good Ponzi scheme. Think MLM and Amway, which spread like viruses. Building community is not some admirable thing — it’s manipulation of the sheepeople. I like Trilema in the sense that he’s a realist and calls a spade a spade. Socialists are liars who pretend everything is admirable whilst they’re manipulating the situation or are “useful idiots” who are clueless about what they’re really propagating.

EDIT#2: another way to make things popular and onboard a large community is to make them addictive such as Tetris, Farmville, or upsizing soft drinks with increased concentrations of high fructose corn syrup and caffeine.

The Story of Tetris | Gaming Historian

Community building is about manipulating the dopamine receptors (pleasure centers) of the neurological system and appealing to the ancestral hindbrain.

EDIT#3: I think the main discord that comes later in community building a burgeoning PL ecosystem is due to the impossibility of making a PL that fulfills every use case perfectly. And then tension about imperfect tradeoffs and subjective issues. At that point, unlike others who argue to discourage forking and splitting the community, I would be of the opinion that may the best fork win. I think I would be a leader who will listen to all views and then try to make sure they are elucidated coherently so that we’re all clear on the tradeoffs. Then if I were a leader, I would make a decision. I would not put it up for a vote. I would let people vote with their feet. Anyway, that’s how I would run a project. And I would never allow a CoC on my project or fork. If someone is being disruptive to the point of filling threads with endless off-topic noise, I would simply make sure I have a platform which enables me to press “filter” and anyone who wants to use my filter is free to do so. IOW, decentralized curation. No one is 100% censored, but yet we can squelch the noise. Win-win. I am eager to implement this. And thus eager to finish this PL so I can get on with programming. Well I would use a vote if I was ambivalent about a decision that seemed to have no clear objective choice.

EDIT#4: the other way to make something popular is to scare people. Fight-or-flight is a primordial urge of humans, which this current virus phenomenon exemplifies. Imagine that a virus which kills 0.001% of the population can cause humans to shut down 30% of their GDP and launch the onset of severe economic deprivation due to Minsky Moment deleveraging and the chaos that results from initial economic distress.

EDIT#5: I actually did not read @Ichoran’s messages from today before I wrote the above. So now I want to respond to this bit:

possibly as an apology for why you can't master your emotions sufficiently to engage with the technical details

Conflating criticism of an ideology with emotions is not astute. Criticism of ideology is not ad-hominem.

I am concerned with the attitude that one has to accept a particular (abrasive or indifferent) mindset in order to be a welcome contributor.

That’s totalitarian-speak. You will need a 1984 thought-police to fill the power vacuum of CoC since nobody can quite agree on exactly what is correct behavior. Your “concern” is that you can’t control the behavior of others. That is intolerant browbeating in an attempt to disguise it as virtuous. I am criticizing your ideology also. I refuse to join a project where people can’t speak freely. Those who get offended are the ones who can’t separate their emotions from meaningful decisions about substance.

I suppose you find some value to the Rust project and perhaps perceive my strong opinions about it to be arrogant. I think Rust is a move backwards and it is useful for exemplifying what not to do. I don’t know if there is any feature I would want to take from Rust. I think I have learned nothing at all from the project other than what I do not want to do (with a possible exception for the crates design) — although thinking about how I do not want a total program order on data race safety helped push me towards Actor-like partitions so I guess Rust did influence me in an important way but in the negative sense. And nearly ditto for JavaScript, except that I recognize it was a necessary evolutionary transition and it demonstrates the importance of simplicity and zero compile-time.

I will repeat that I think technologically Rust will fail because it attempted to merge high-level and low-level programming (in addition to the organizational reason I think it will fail). [EDIT: note how much slower Go is on benchmarks because it doesn’t attempt to be pedantic about low-level performance and create a clusterfuck of complexity with the high-level features — which are admittedly also spartan] And I am observing that mistake and deciding instead to focus on the high-level and only add the ability to share binary packed data with the low-level FFI.

Scala was a big leap and learning experience for me which is much appreciated. But I have by now come to realize that Scala is not principled. It’s a kitchen-sink language, e.g. OOP, subclassing, implicit conversions, and type classes.

Learning about Pony (thanks to @Ichoran for telling me about it) was also very important in the development of the design I have so far, because it caused me to think about Actor-like partitions versus green threads versus Rust and the GC and data race safety issues. I think I have devised something better than both of Go and Pony in theory. We’ll see...

@Ichoran

Ichoran commented Jun 16, 2020

@shelby3 - Organizational culture can be discussed on an issue devoted to such. I think such conversations are beneficial for a project if conducted calmly and with evidence or at least careful argumentation to back up all points.

@shelby3
Author

shelby3 commented Jun 16, 2020

@shelby wrote (replying to myself again):

I remember first reading about Rust way back maybe it was 2010 or so. They were contemplating some design with assertion of invariants on the entrance and exit from methods of an object. I wrote a public post explaining why that was unsound and did not provide the properties of correctness. So I had an early indication that Mozilla was going to stumble into a tar pit, because of my prior experience with that organization.

The paradigm was Typestate. Here’s some links I found about that history:

I found the following comment from the original Rust creator to be illustrative of my point about all that lifetimes borrowing, total ordering tsuris in Rust being not worth it:

https://hub.packtpub.com/rusts-original-creator-graydon-hoare-on-the-current-state-of-system-programming-and-safety/

When asked about safety, Hoare believes that though we are slowly taking steps towards better safety, the overall situation is not getting better. He attributes this to the building of a number of new complex computing systems. He said, “complexity beyond comprehension means we often can’t even define safety, much less build mechanisms that enforce it.”

Also I recall the discussion with @jdh30 wherein he argued that Rust’s so called zero cost abstraction is not even from a GC standpoint when cost of real world algorithms is considered.

EDIT: ironically one of the early goals was concurrency, yet Rust still doesn’t have a stable concurrency story:

https://www.infoq.com/news/2012/08/Interview-Rust/

EDIT#2: I am correct that the main impetus for Rust is the power vacuum created by C++’s design-by-committee morass:

GH: Our target audience is "frustrated C++ developers". I mean, that's us. So if you are in a similar situation we're in, repeatedly finding yourself forced to pick C++ for systems-level work due to its performance and deployment characteristics, but would like something safer and less painful, we hope we can provide that.

Here’s my current thought process on that which I sort of already outlined upthread. I explained this to @keean in a private message yesterday.

I want TypeScript but with sized and unsigned integer types, binary packed structures and retain references. So a reference is just an ID. No references to references. And you can not reference a primitive type (a reference is also a primitive type).

Thus binary compatible with a low-level language for when you want to twiddle the bits. C might be just fine. Then do all our high-level programming in the high-level language. Only a very small percent of your code needs low-level optimization. Use a profiler. Work smarter.

I also want to add type classes and remove OOP. And I want to add a concurrency paradigm which is sort of the best of Pony and Go combined which I have named ALP = Actor Like Partition. So compatible with green threads or Promise. I can transpile to either, but with a native compiler then I can do the highly optimized GC I have designed for the ALP.

EDIT#3: I think perhaps the culture of Mozilla which is intent on making it as safety oriented as possible to gain marketshare (thus the we-know-better-than-thou-culture) was driven from a goal to displace Windows as the monopoly (but it morphed into a Frankenstein):

https://www.zdnet.com/article/javascript-creator-eich-my-take-on-20-years-of-the-worlds-top-programming-language/

According to Eich and Wirfs-Brock, "The rallying cry articulated by Marc Andreessen at Netscape meetings was 'Netscape plus Java kills Windows'." In May 1995, as Sun announced Java, Netscape outlined its plan to license Java for its browser.

I was thinking that yesterday and now reading the above I gained some confirmation.

@shelby3
Author

shelby3 commented Jun 16, 2020

@Ichoran

The problem with Scala's scheme is that without global coherence there may be multiple implicit typeclasses in scope that could be relevant, but clearly some of them make much more sense than others. Knowing where to look for implicits, and letting the compiler make the obvious choice when more than one is in scope, and making what is obvious to one programmer discoverable by other programmers, has been a source of considerable pain for Scala developers. We've mostly borne the pain cheerfully because of the incredible utility one gets in return, but there are improvements there too in Scala 3 (too technically detailed for me to care to get into them, but you can read about the entire reimagined implicit-and-typeclass scheme [here](https://dotty.epfl.ch/docs/reference/contextual/motivation.html), starting at The New Design about halfway down. Though I have some quibbles with some details, overall I think it's a substantial improvement.)

An essential fault appears to remain in Scala 3’s implementation of type classes. That is, AFAIK the implicit resolution is not transitive turtles-all-the-way-down the call hierarchy according to the function hierarchy? So you can apparently end up with inconsistent implicits in the call hierarchy if you inject an instance as an argument but somewhere else in the call hierarchy an implicit is resolved implicitly to a different instance?

AFAICS my technological proposal avoids that design error. (oh but remember according to you, I am not capable of making technological contributions here, lol)

Having used both Rust and Scala extensively at this point, I must say I considerably prefer even the Scala 2 approach over Rust's. Rust interfaces are littered with "Oops, what if I don't want the default implementation?" methods for sorting or whatever.

And yet you criticize my generative essence hypothesis for why Rust is the way it is. And criticize me for expressing it. And equate that analysis with a lack of control over my emotions, lol. 😆

I guess you did not even appreciate that upthread recently I was defending your arguments in a couple of instances where I thought @keean was not fully appreciating your points. Yet I am the arrogant one. 🚔 👮

If you only knew that I was writing in a very amicable tone about you yesterday in a private message to @keean. That I appreciate and value you, and that it would be unfortunate if you left. The problem it seems is you are intolerant of strong derogatory opinions about projects or ideologies — I am not criticizing individuals. I cited individuals only as examples where they exemplify an ideology to support my hypothesis of an overriding ideology at an organization.

Of the two scenarios, being able to get custom functionality when you're using typeclasses, at the expense of some difficulty with finding the right one in certain obvious cases, seems much better to me in practice.

And I posit will be even better with my proposal for partial orders. But maybe I have missed some key points that will only become evident later and bust my bubble of optimism.

Rust's solution (newtyping) works okay if the thing is a bare type, but when it's a generic type parameter, you can very easily run into major runtime performance penalties in addition to the tedious boilerplate.

Pita.

It's possible that with sufficiently different language features that there could be no downside to a flexible scheme like Scala's, or no need for anything but global uniqueness.

It seems there could at least be more flexibility in choosing the tradeoff between how coarsely grained the implicit total ordering and how much explicitness one wants. I wrote that upthread already in my response to @andrewcmyers yesterday.

@keean I think @Ichoran makes a very important implicit point here. Rust may end up giving type classes a bad reputation. We need to hurry up if we think we can do better before type classes become spurned.

The extant examples have downsides both ways, though. And while I find Haskell not to my liking, plenty of people do and don't seem particularly bothered by uniqueness constraints, so I can't make a strong claim that my preferences are universally shared. But hopefully I've explained why my preferences are what they are.

I don’t want Haskell. Haskell is principled on equational reasoning, pure FP with type classes. I prefer an imperative paradigm because I think of programs as an execution order not as an equational model. Math doesn’t have state, the real world does.

@keean
Owner

keean commented Jun 16, 2020

@shelby3 Rust is doing okay, it's the most loved programming language: https://appetiser.com.au/blog/the-most-loved-and-hated-programming-languages-according-to-developers

@shelby3 shelby3 mentioned this issue Jun 16, 2020
@shelby3
Author

shelby3 commented Jun 16, 2020

@keean

@shelby3 Rust is doing okay, it's the most loved programming language: https://appetiser.com.au/blog/the-most-loved-and-hated-programming-languages-according-to-developers

Don’t believe everything you read. Note Scala was one of the most disliked PLs on their survey. Python and TypeScript seem to have the highest combined rankings of Most Popular and Most Loved, which concurs with the PL design I want to create (combining syntax features of Python and transpiling to TypeScript). Go and Swift are in the next tier for both Most Popular and Most Loved.

(Also in the SO survey I see that most people do not contribute much if at all to open source, so reliance on community is overrated. @keean our age bracket is only 3 – 4% of programmers.)

There’s a lot of hype and propaganda causing people to maybe be interested in Rust but let’s wait until they experience the pain points…which for people coming from C++ might be tolerable but probably/perhaps not those coming from Scala or other decent high-level language such as Python.

Rust will continue to have a market until someone makes a better high-level language that integrates with a better C. If someone creates that (and I’d like to but I won’t say I will because I’m old and unhealthy), then I think Rust may lose its luster. Also Go may improve in the interim, which may cannibalize interest in Rust. I would be delighted if someone else would create the PL I want. But I’ve been waiting for ~10 years so I am not very hopeful that anyone else will. I also think the problem with C++ is that it tried to be Java[Simula] + C instead of just a better C and leave the high-level stuff to something better designed for high-level. Mixing both high-level and low-level detracts from the optimum focus of each.

“Most loved” is apparently not synonymous with ‘most used’:

https://www.zdnet.com/article/developers-love-rust-programming-language-heres-why/

In fact, Rust has been voted the most-loved language for the past four years in Stack Overflow's annual developer surveys, even though 97% of respondents haven't used it. So how has it become the most-loved programming language?

[…]

"The short answer is […hype, hype, hype…]," explains Jake Goulding on Stack Overflow's blog.

[…]

Goulding is the co-founder of Rust consultancy Integer 32, so he has a vested interest in Rust's success, but he's also not alone in taking a shine to the young language.


Brave defies Google's moves to cripple ad-blocking with new 69x faster Rust engine:

Brave's answer, which it argues massively improves browser performance, is found in Rust, the Mozilla-hatched programming language that was in part created by Eich.

[…]

Brave now claims to have delivered a "69x average improvement" in its ad-blocking tech using Rust in place of C++.

Oh so let’s dig into the details of that lie:

https://brave.com/improved-ad-blocker-performance/

Our previous algorithm relied on the observation that the vast majority of requests are passed through without blocking. It used the Bloom Filter data structure that tracks fragments of requests that may match and quickly rule out any that are clean. Alas, blocked trackers are not that uncommon. We reused the dataset from Cliqz ad-blocker performance study that collected requests across top 500 domains and up to 3 pages of each domain. Out of the 242,944 requests, 39% were blocked when using the popular combination of EasyList and EasyPrivacy.

We therefore rebuilt our ad-blocker taking inspiration from uBlock Origin and Ghostery’s ad-blocker approach. This focuses on a tokenization approach specific to ad-block rule matching against URLs and rule evaluation optimised to the different kinds of rules. We implemented the new engine in Rust as a memory-safe, performant language compilable down to native code and suitable to run within the native browser core as well as being packaged in a standalone Node.js module. Overall, we found that:

  • The new algorithm with optimised set of rules is 69x faster on average than the current engine.

@shelby3
Author

shelby3 commented Jun 16, 2020

EDIT: Perhaps the most important insight I learned from the following endeavor was that Microsoft schemed to keep JavaScript as limited as possible so that it would not compete with C# and .Net. And even to this day the marching orders for TypeScript are that it is only “a syntactic sugar for JavaScript” being also a superset of ES2015 (formerly ES6) and must not introduce exotic features which do not transpile more or less intact, including apparently the highly requested higher-kinded typing feature, e.g. for F-bounded polymorphism, c.f. also. This is what has opened the opportunity for someone like us to build on top of TypeScript or ES2015 features that “should” have been there a long time ago. IMO, this is why Scala.js is so popular. I mean features that serious programmers have wanted for a decade or more.

Reading Brendan Eich’s historical account.

  1. Design by committee … well perhaps the best thing that happened to JavaScript is that the committee was mostly inactive or stuck in political gridlock for 10 years, so that by the time they did add features they had more perspective on clear priorities:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=3

    By the year 2000, JavaScript was widely used on the Web but Netscape was in rapid decline and Eich had moved on to other projects. Who would lead the evolution of JavaScript into the future? In the absence of either a corporate or individual Benevolent Dictator for Life, the responsibility for evolving JavaScript fell upon the ECMAScript standards committee. This transfer of design responsibility did not go smoothly. There was a decade-long period of false starts, standardization hiatuses, and misdirected efforts as the committee tried to find its own path forward evolving the language. All the while, usage of JavaScript rapidly grew […]

    And they thus avoided introducing some proprietary, byzantine ideas into the standard:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=57

    And it was Brendan Eich, the original leader, who returned to the scene and got the ball rolling again:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=61

    And a new guy Allen Wirfs-Brock (who has hence become a key figure) from Microsoft came rushing in to maintain the Nash equilibrium and restrain Eich’s somewhat overzealous ambition:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=65

    He recognized JavaScript’s role on the Web as being a significant instance of Richard Gabriel’s [1990] “Worse Is Better” concept. It was a minimalist creation that had grown in a piecemeal manner to become deeply ingrained in the fabric of the World Wide Web. In contrast, the ES4 effort appeared to Wirfs-Brock to be what Gabriel calls a “do the Right Thing” project that was unlikely to reach fruition and, if it did, would be highly disruptive to the Web. He concluded that the technically responsible thing to do would be to try to get ECMAScript evolution back onto a path of incremental evolution.

    And the first mention of Graydon Hoare (originator of Rust) in the politics that ensued:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=67

    Eich, Dyer, and Graydon Hoare countered that ES4₂’s type system was the foundation needed for a more stable, secure, and performant browser programming environment.

    BINGO!

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=65

    Cormac Flanagan, in a 2019 personal communication, speculates that Adobe’s withdrawal was really a recognition of the problems with ES4₂. His postmortem thoughts also include the following:

    • The substantial language extension planned for ES4 was (in retrospect) a high-risk, non-conservative approach.
    • There was cutting edge language [technology] involved in the standardization process, particularly around the addition of the static type system (10+ years later [in 2019], there are still hard unsolved research and performance problems [Greenman et al. 2019]). The publication of “Space-Efficient Gradual Typing” at TFP’07 [Herman et al. 2011], inspired by performance concerns in ES4, is perhaps a reflection of the researchy nature of this work.
    • The ‘buy-in’ concerns around ES4 in TC39, while problematic, were never fatal.
    • The ML reference specification was a workable idea, although discarded for later editions. In retrospect, it might have been better to start with a reference specification for ES3.

    […]

    Douglas Crockford [2008c], in a blog post, attributed the failure of ES4₂ to excessive unproven innovation:

    It turns out that standard[s] bodies are not good places to innovate. That’s what laboratories and startups are for. Standards must be drafted by consensus. Standards must be free of controversy. If a feature is too murky to produce a consensus, then it should not be a candidate for standardization. It is for a good reason that “design by committee” is a pejorative. Standards bodies should not be in the business of design. They should stick to careful specification, which is important and difficult work.

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=125

    TypeScript [Microsoft 2019] is a free Microsoft language product that originally targeted ES5 with ES6+ features and later added ES2015 as a compilation target. TypeScript’s most important feature is an optional statically analyzable type system and type annotations which compile into idiomatic dynamically typed JavaScript code. In 2020, TypeScript is the de facto standard for writing type-annotated JavaScript [Greif and Benitte 2019].

    The production use of transpilers, especially Babel and Typescript, was part of a large cultural transformation within many JavaScript development teams. In those teams, JavaScript is treated similarly to a conventional, ahead-of-time compiled language with development and deployment build toolchains rather than as a dynamic execution environment that loads and directly executes a programmer’s original source code.

  2. Brendan tried to remove implicit conversions but it was too late:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=38

    Brendan Eich recalls that he hoped to include his JavaScript 1.2 changes to the == operator semantics that eliminated its type coercions. Shon Katzenberger successfully argued that it was too late to make such a change given the large number of existing Web pages it would break. Eich reverted to the original semantics in the JavaScript 1.3 release of SpiderMonkey.

  3. Hahaha:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=39

    Brendan Eich [2006b] later expressed his opinion of the naming issue: ECMAScript was always an unwanted trade name that sounds like a skin disease.

  4. That’s cool they gave Douglas Crockford an entire section which he deserves for identifying and evangelizing JSON:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=48

  5. I take issue with this claim:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=49

    Over the course of the first half of the 2000s various organizations built Web applications using these and similar techniques. But this Web application style did not become widely known until Google used it to implement GMail, Google Maps, and other applications.

    I built easy-to-use DHTML features into CoolPage in 2000 and CoolPage had ~1% reach of the entire Internet that year according to Altavista:

    http://web.archive.org/web/20000902162950/http://www.coolpagehelp.com/

    Significantly improved by 2002:

    http://web.archive.org/web/20020726184455/http://coolpagehelp.com:80/

    CPH was created by Anthony C. Turrisi aka TonyT.

  6. This is correct. I also independently discovered the pattern, then later read about it on Crockford’s website:

    https://dl.acm.org/doi/pdf/10.1145/3386327#page=50

    Douglas Crockford is often credited with popularizing the module pattern but it was likely independently discovered by many JavaScript programmers.

@sighoya

sighoya commented Jun 16, 2020

@shelby3 wrote:

I also think that is the problem with C++ is that it tried to be Java + C instead of just a better C and leave the high-level stuff to something better designed for high-level.

The problem with C++ is that it eats anything that smells like a feature.

@shelby3
Author

shelby3 commented Jun 16, 2020

Does anyone really like teamwork? Gavin McInnes at TEDxBrooklyn ← funny

5:16 "You know who likes teamwork? Incompetent people."

One of the truest statements you'll ever hear in life.

[…]

There's a reason why lazy high school kids want to work in groups instead of by themselves during projects. One person does all the work, the rest just tag along.

Note he did not mention mutual defense. That requires teamwork and is essential to surviving periodic societalcide.

@shelby3
Author

shelby3 commented Jun 17, 2020

@sighoya wrote:

@shelby3 wrote:

I also think that is the problem with C++ is that it tried to be Java + C instead of just a better C and leave the high-level stuff to something better designed for high-level.

The problem with C++ is that it eats anything that smells like a feature.

Yeah but I think my point remains. They add those features for low-level performance optimizations or high-level programming capabilities, because they want it to be both a performant low-level PL and a high-level PL.

I know you all know the following but I will explain for other future readers.

Yet attempting to be both makes it more complex to excel at either end of the spectrum.

Firstly the humorous but sordid tale, C++ Frequently Questioned Answers:

[6.7] How long does it take to learn OO/C++?

FAQ: In 6-12 months you can become proficient, in 3 years you are a local mentor. Some people won't make it - those can't learn, and/or they are lazy. Changing the way you think and what you consider "good" is hard.

FQA: In 6-12 months you can become as proficient as it gets. It is impossible to "know" C++ - it keeps surprising one forever. For example, what does the code cout << p do when p is a volatile pointer? Hint: as experienced people might expect, there's an unexpected implicit type conversion involved.

While some people are better at learning than others, it is also true that some languages are easier to learn and use than others. C++ is one of the hardest, and your reward for the extra effort spent learning it is likely to be extra effort spent using it. If you find it hard to work in C++, trying another language may be a good idea.

Before you subvert the way you think about programming and your definition of "good" in this context to fit C++, it might be beneficial to ask the common sense again. For example, does compilation time really cost nothing (is development time that cheap, are there compilation servers with 100 GHz CPUs around)? Is run time really priceless (don't user keystrokes limit out speed, how much data are we processing anyway)? How efficient a C++ construct really is in your implementation (templates, exceptions, endless copying & conversion)? The reasoning behind C++ may be consistent, but the assumptions almost never hold.

Learning OO has nothing to do with learning C++, and it is probably better to learn OO using a different language as an example. The OO support in C++ is almost a parody on OO concepts. For example, encapsulation is supposed to hide the implementation details from the user of a class. In C++, the implementation is hidden neither at compile time (change a private member and you must recompile the calling code) nor at run time (overwrite memory where an object is stored and you'll find out a lot about the implementation of its class - although in an unpleasant way).

One example is pointer arithmetic, which enables low-level optimizations in some cases, yet at the high level it breaks the encapsulation that would otherwise prevent buffer overruns.
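To make this concrete, here is a minimal C++ sketch (the `Account` class and `peek_balance` are hypothetical names for illustration): pointer arithmetic over an object's raw bytes can read a `private` member without going through its accessor, so encapsulation is a compile-time convention only, not a runtime guarantee. The offset-0 assumption holds because the class is standard-layout.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical class that hides its state behind 'private'.
class Account {
    long balance = 100;
public:
    long get() const { return balance; }
};

// Walking the object's raw bytes recovers the "hidden" member without
// calling get(). For a standard-layout class like this, the first data
// member is guaranteed to live at offset 0.
long peek_balance(const Account& a) {
    long leaked;
    std::memcpy(&leaked, reinterpret_cast<const char*>(&a), sizeof leaked);
    return leaked;
}
```

The same byte-level access that makes this possible is what allows writes to run past a buffer's end, which is why languages that enforce encapsulation at runtime also tend to prevent overruns.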

Another example for C++ is the concept of rvalues and all the optimization complexity needed to avoid copying return values:

https://en.wikipedia.org/wiki/Copy_elision#Return_value_optimization
https://shaharmike.com/cpp/rvo/
https://bastian.rieck.me/blog/posts/2015/return_value_optimization/

Then the complexity of the corner cases where it doesn’t work:

https://www.linkedin.com/pulse/c-return-value-optimization-dipanjan-das-roy/
https://stackoverflow.com/questions/19792135/return-value-optimizations-and-side-effects/19792614#19792614
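A minimal sketch of the distinction, assuming a C++17-or-later compiler (the `Tracked`, `make`, and `make_named` names are illustrative): the struct counts copies to make elision observable, returning a prvalue benefits from elision that C++17 made mandatory, while the named-return case (NRVO) remains one of the corner cases where elision is permitted but not required.

```cpp
#include <cassert>

// Copy-counting struct so that elision (or its absence) is observable.
struct Tracked {
    static int copies;
    int value;
    explicit Tracked(int v) : value(v) {}
    Tracked(const Tracked& o) : value(o.value) { ++copies; }
};
int Tracked::copies = 0;

// Returning a prvalue: since C++17 copy elision here is guaranteed,
// so the copy constructor never runs.
Tracked make() { return Tracked(42); }

// Named return value (NRVO) is still only an optional optimization --
// the compiler may or may not elide the copy on these returns.
Tracked make_named(bool flag) {
    Tracked a(1), b(2);
    if (flag) return a;
    return b;
}
```

That asymmetry (guaranteed elision for prvalues, discretionary for named values) is exactly the kind of rule a C++ programmer must internalize to know whether a return is free or costs a copy.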

So we can sort of emulate type classes with templates but without the implicit functionality:

https://functionalcpp.wordpress.com/2013/08/16/type-classes/

Yet templates are Turing-complete, so we end up with template metaprogramming, which makes it unpredictable what templates even do: you have to run the compiler to see whether they even halt, or whether they resolve in a predictable manner as you refactor code for maintenance and new features.
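The template emulation mentioned above can be sketched roughly like this (`Show` and `display` are illustrative names mirroring Haskell's `Show` class): a traits template plays the role of the class declaration and specializations play the role of instances, but nothing is passed implicitly and there is no coherence checking, which is exactly the missing "implicit functionality".

```cpp
#include <string>

// The "type class": no default definition, so unsupported types
// simply fail to compile at the point of use.
template <typename T>
struct Show;

// The "instances": one explicit specialization per supported type.
template <>
struct Show<int> {
    static std::string show(int x) { return std::to_string(x); }
};

template <>
struct Show<bool> {
    static std::string show(bool b) { return b ? "true" : "false"; }
};

// A "constrained" generic function: usable only where Show<T> exists.
// The dictionary is resolved syntactically at each call site rather
// than threaded implicitly by the compiler as in Haskell.
template <typename T>
std::string display(const T& x) { return Show<T>::show(x); }
```

Because resolution is purely syntactic, error messages arrive as instantiation failures deep inside the template rather than as a clean "no instance for Show T" diagnostic.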

AFAICS, the complexity clusterfuck is significantly due to the goal of trying to merge high-level and low-level.

I’ve read that even the creator of C++ doesn’t understand all of the language thoroughly.

I used C++ to code CoolPage and Art-o-matic around the turn of the century but that was before much of the complexity became mainstream such as heavy use of template programming and the Boost libraries, etc. I thus never learned all that. I essentially stopped using C++ when I halted all development of those applications circa 2003.

I transitioned to JavaScript, PHP, ActionScript, HaXe, then directly Scala on the tip from @jdegoes (entirely skipping learning anything about Java). Before C++ I was highly expert with C and Intel and Motorola assembly. Learning Scala exposed me to Haskell, which I resisted because it was so foreign to me given I had done low-level imperative programming since age 13 starting from Apple II BASIC (or let’s say it made my head hurt because I didn’t get it and nobody was explaining it circa 2009 in a way that would click for me).

I wrote much of WordUp in the late 1980s in 68000 assembly until I discovered C. Then I went on a tear with my new found productivity with C producing WordUp 3.0 (together with my employee/colleague Mike Fulton) before burning out. I remember consuming the K&R C book afair in one evening and being on a programming tear the next morning. C just made sense to me coming from assembler because I had consumed a Radio Shack book on Microprocessors at age 13 when I was laid up in bed from a high ankle sprain from football with nothing else to do. So I was coming into programming from a desire to want to understand how the hardware functioned.

I enrolled at the University to pursue BSEE with a math minor but dropped out when I was spending 10+ hours a day in the library doing my own research and programming at night leaving nearly no time for studying given I was also heavily into drinking and sports. Up until my 3rd year, I maintained a 4.0 GPA by not attending classes and cramming on the night before exams.


I’m not a Rust expert having not used it to write any code (other than maybe a few toy snippets). I’ve read the documentation and read a considerable amount of expert discussion (from both sides including the defenses from the Rust fanboys[1]). Rust seems to have many cases where the desire for both high-level and low-level features creates complexity that wouldn’t be the case if it had just focused on being a data race safety and “zero-cost abstraction” low-level language. For example the complexity around closures which @keean raised. Others such as the Rust discussion I cited up-thread have enumerated other examples. Again I will reiterate that Rust seems to be an improvement over C++ complexity, at least if total program order lifetime borrowing is not factored in, but it can’t be factored out as that is the essential USP for Rust.

Rust is only slightly more performant on benchmarks than C/C++, yet it is considerably more performant than Go:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust.html

I might be interested in a low-level PL that stripped out the type classes, closures and all the obfuscating, implicit conversions cuteness from Rust and focused on being a faster and more safe C.

If Rust had focused on something I really need, I might be a fan. But as it stands right now I do not need the tsuris of Rust for high-level programming. And I don’t need for low-level programming, the complexity of Rust’s ecosystem and high-level conflation. I want the next low-level language I will master after C to be stable and hyper-focused on seamless integration (without FFI marshalling overhead) with whichever high-level PL has formed a market Schelling point.

For example I might be willing to tolerate the lifetime borrowing complexities and annotation noise (given the apparent improvement in performance and some safety guarantees it provides) if there wasn’t the conflation with the high-level feature of type parameter polymorphism (aka generics). A low-level language doesn’t need type parametrisation.

I suppose after creating a high-level programming language then I need to tackle creating a fork of Rust and new low-level focused PL. Ha. Dreaming, if I was only 25 years old again I could possibly do that. No chance of that anymore. 🌔

Does anyone else have a different perspective?

[1] Intelligent guys who make intelligent technical arguments, but they miss the big picture point. They are so religiously attached to the notion that Rust is some hypersonic, hypersafe killer app that will soon conquer the world. And that unrealistic (“aspirational class”) ambition colors and feeds their confirmation bias blindspot — the world doesn’t need a conflation of high-level ease with low-level performance resulting in a complex Frankenstein pita.

@shelby3
Copy link
Author

shelby3 commented Jun 18, 2020

@andrewcmyers

@andrewcmyers

For some examples of more powerful ways to integrate pattern matching into OO languages, without breaking data abstraction in the way that functional pattern matching typically does, see PredJ and JMatch.

Have not looked at this yet.

Apologies to ask you for extra effort, but is it possible to elucidate the significance (beyond just being some syntax sugar) and essential concept for JMatch without myself needing to try to figure out how red-black tree balancing, CPS, iteration, ADTs, forward and backward model abstractions, etc. all play together to offer something new and important? I suppose this somehow integrates/unifies iteration and pattern matching in some holistic model. Could you help me grasp it?

Can I somehow receive a tl;dr that creates that “a ha” moment which inspires understanding?

I just don’t have the time nor mental energy to digest all the verbiage of the paper. Maybe someone else here does? I got through the first page and half and it was not getting me close enough to holistic understanding to keep me motivated to push on. Is there any way to get more directly to the point or is the model that complex such that some degree of understanding can’t be conveyed with a paragraph and a simple example? I can’t even decipher what the code example is supposed to be accomplishing. I’m not detecting the key point of it from the example on the home page.

Probably someone who has more experience with the specific use case of your example will have more context with which to form an understanding. I don’t think I’ve ever coded red-black trees. Of course I am familiar with the concept of walking the branches of a tree.

Also I am not proficient in Java although I am in Scala so I do not know if the following is legal Java code:

boolean contains(Object elem) {
  if (left != null && value < elem) return left.contains(elem);
  if (value.equals(elem)) return true;
  if (right != null && value > elem) return right.contains(elem);
} iterates(elem) {
  foreach (left != null && left.contains(Object e)) yield e;
  yield value;
  foreach (right != null && right.contains(Object e)) yield e;
}

Thus I don’t comprehend the semantics of the iterates(elem)… there — when and where is it invoked? I don’t recall seeing that syntax in Scala, although perhaps I missed it or have forgotten over the years since I last used Scala.

Also there is a reference to logic programming which I vaguely assimilate although I have never used Prolog for example. @keean has. I am aware of the backtracking feature in Prolog because Keean has mentioned it for example in the context of a model of typeclass resolution and type system unification algorithms.

I understand the concept of a predicate for filtering a set or defining a domain:

[image illustrating a predicate defining a set]

I understand by forward and backward you may be referring to the relation between the domain and codomain?

Looking at this with a clearer mind and more sleep and ignoring the white paper and focusing solely on the code examples on the JMatch homepage:

  1. AFAICS the “Containment predicate is also a tree iterator” unification of contains and iteration of contained can be abstracted with an interface that provides a new iterator and a findNext. Thus contains instantiates a new iterator and invokes findNext on it returning the boolean of whether was found. Whereas iteration of the contained will repetitively invoke findNext.

  2. The “Balancing a red-black tree with pattern matching” is a clear example of how constraints expressed as pattern matching are much more coherent and succinct compared to the equivalent conditional expressions code required to accomplish the same algorithm. I presume the JMatch compiler compiles the pattern matching syntax into the optimal conditional expressions code.

  3. The “A client loop selecting elements matching a pattern” exemplifies that pattern matching on iteration more coherently and succinctly expresses the invariants of the algorithm compared to the equivalent conditional expressions code required to accomplish the same algorithm.
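Point 1 can be sketched as follows (a hedged illustration in C++ rather than JMatch; `Node`, `for_each`, and `contains` are names I am inventing here): a single in-order traversal serves both as the iterator and as the basis for the containment predicate, so one definition of the walk covers both uses.

```cpp
#include <functional>

// Minimal binary tree node for the sketch.
struct Node {
    int value;
    const Node* left = nullptr;
    const Node* right = nullptr;
};

// The "iterator": an in-order traversal that visits every element once.
inline void for_each(const Node* n, const std::function<void(int)>& f) {
    if (!n) return;
    for_each(n->left, f);
    f(n->value);
    for_each(n->right, f);
}

// Containment is just a search over the same iteration, so the
// traversal logic is written once and shared by both operations.
inline bool contains(const Node* n, int elem) {
    bool found = false;
    for_each(n, [&](int v) { if (v == elem) found = true; });
    return found;
}
```

JMatch goes further by letting the predicate itself run in a reverse mode, but the shared-traversal structure is the part this sketch captures.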

I don’t know if there is additional significance or novelty to your work, because I have not tried again to digest the research paper.

I do agree with the utility of these features and would be interested to explore them.

@andrewcmyers
Copy link

I think you're getting the gist of it. Pattern matching can be viewed as a kind of reverse-mode computation in which values are deconstructed instead of constructed. JMatch allows any method to declare reverse modes that can be used to pattern-match (one or more times) against a set of arguments. What it means to match can be specified with a logical formula, so pattern-matching is not tied to the concrete representation of values as in Haskell, OCaml, etc.

@sighoya
Copy link

sighoya commented Jun 19, 2020

@keean, so your idea is to do likewise for records.
Records are datatypes and get bound to fields via typeclasses:

data MyStructInstance = MyStructInstance

class struct StructInstance where
    assocType A
    assocType B
    a :: A
    b :: B

instance struct MyStructInstance where
    A=Int
    B=Float
    a=2
    b=3.0

-- Generic Typeclass Bound
fun :: struct s => s -> s.B
fun s = s.b  

-- Trait Object
data TraitObject = forall s. (struct s) => TraitObject s
fun :: [TraitObject] -> TraitObject
fun ts = ts!!0

All this kind of stuff can then be generated with normal record like syntactic sugar:

data struct A B = struct {fieldA::A, fieldB::B}

@shelby3
Copy link
Author

shelby3 commented Jun 19, 2020

[…] Rust seems to have many cases where the desire for both high-level and low-level features creates complexity that wouldn’t be the case if it had just focused on being a data race safety and “zero-cost abstraction” low-level language. For example the complexity around closures which @keean raised. Others such as the Rust discussion I cited up-thread have enumerated other examples. Again I will reiterate that Rust seems to be an improvement over C++ complexity, at least if total program order lifetime borrowing is not factored in, but it can’t be factored out as that is the essential USP for Rust.

Rust is only slightly more performant on benchmarks than C/C++, yet it is considerably more performant than Go:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust.html

I might be interested in a low-level PL that stripped out the type classes, closures and all the obfuscating, implicit conversions cuteness from Rust and focused on being a faster and more safe C.

If Rust had focused on something I really need, I might be a fan. But as it stands right now I do not need the tsuris of Rust for high-level programming. And I don’t need for low-level programming, the complexity of Rust’s ecosystem and high-level conflation. I want the next low-level language I will master after C to be stable and hyper-focused on seamless integration (without FFI marshalling overhead) with whichever high-level PL has formed a market Schelling point.

For example I might be willing to tolerate the lifetime borrowing complexities and annotation noise (given the apparent improvement in performance and some safety guarantees is provides) if there wasn’t the conflation with the high-level feature of type parameter polymorphism (aka generics). A low-level language doesn’t need type parametrisation.

I suppose after creating a high-level programming language then I need to tackle creating a fork of Rust and new low-level focused PL. Ha. Dreaming, if I was only 25 years old again I could possibly do that. No chance of that anymore. 🌔

The creator of Node.js has created Deno to correct the accumulated crud in Node.js. He remarks that Go is a more performant PL for a server but the JavaScript server still has some valid use cases (e.g. sandboxed):

https://youtu.be/M3BM9TB-8yA?t=174

Ryan Dahl mentions the clusterfuck of module importing, require, package.json, index.js, etc., which BTW was what I told the Node.js project to simplify and move to ES6 modules deprecating that crud and got me banned from their Github issues some years ago. Design-by-committee can never shred the crud and start anew. Ryan also mentions that his design wisdom has improved with age. 😜

Ryan loves TypeScript, which furthers my expectation that an improvement on TypeScript would be popular.

The Node.js replacement Deno is advocating Rust as the low-level language for the (probably much) less than ~20% (i.e. Pareto principle aka power-law distribution of resources) of your code that needs to be fast.

https://youtu.be/1gIiZfSbEAE?t=723

Yet the rudimentary integration across the FFI is apparently not mutually typed:

https://youtu.be/HjdJzNoT_qg?t=1379

They’ve coded the entire Deno executable in Rust, but that’s understandable given all of it needs to be fast as a system layer. Their mainstream options were essentially C++ or Rust, so it makes sense they chose Rust per my prior comments in this thread. But for the majority of users who just want to code some lower-level subroutines in a high performance, low-level PL, the conflation of higher-level Rust features with the lower-level requirements adds unnecessary complexity and unsoundness.

EDIT: indeed I was correct that Ryan considered C++ as the other option:

https://youtu.be/M3BM9TB-8yA?t=1406

My current priority (if I can get healthy enough to consistently think clearly) is to improve upon TypeScript for the high-level language part.

Again Scala was a breath of fresh air for me and a significant improvement over what I had used before it. But then I ran into the aforementioned issues of its non-principled, “kitchen-sink” design (being a jack of all trades rather than hyperfocused on a superior paradigm, with implicit conversions not removed, and without a simpler and more sound type system, etc.), and Scala's lack of unsigned and sized integers, lack of non-disjoint, structural (aka anonymous) unions (Scala 3 adds these), the crap concurrency support with inability of Scala.js to integrate with async / await, lack of green threads on the server, the monolithic GC for all threads (ditto Go and Node.js/Deno which is another aspect I want to improve), etc...

Pony is/was an experimental PL that provided some interesting insights into what is and isn't needed for an Actor-like partitions (ALP) paradigm, but I posit it’s not the right design and will very likely die on the vine. Sorry to be so frank and overtly opinionated, but they didn’t incorporate green threads and their GC scheme is not performant for intense sharing and if you're not going to do intense sharing then the reference capabilities do not need to be as complex and I posit the GC scheme can be made radically more performant with a paradigm-shift.

I wrote more details about this in the WD-40 thread #35.


P.S. Yet another example of how design-by-committee creates a clusterfuck, setting up Unicode character shortcuts for ellipsis, right and left quote marks in Kubuntu is byzantine, although Shift+Ctrl+U+[code] worked fine in Mint Mate and on standard Ubuntu:

https://askubuntu.com/questions/1095339/how-can-i-input-unicode-characters-in-kate-and-konsole-on-kubuntu-18-04
https://bugs.kde.org/show_bug.cgi?id=103788#c15

Maybe eventually I will find the compose key combinations for all the Unicode characters I want:

https://en.wikipedia.org/wiki/Compose_key#cite_note-xorg-12
https://askubuntu.com/questions/34932/where-can-i-find-the-full-list-of-compose-combinations-for-my-locale

And then employ keyboard shortcuts to make them easier to remember and type:

https://forum.kde.org/viewtopic.php?f=66&t=97493#p206186
[doesn't integrate with compose key ostensibly because of the aforelinked KDE bug]

@shelby3
Copy link
Author

shelby3 commented Jun 19, 2020

(Also in the SO survey I see that […] @keean our age bracket is only 3 – 4% of programmers.)

Ryan also mentions that his design wisdom has improved with age. 😜

“Uncle” Bob Martin points out in the “The Future of Programming” that the rate of increase of programmers is growing so fast that more than half of the programmers have less than 5 years experience and are young. Apparently the younger the programmers, the more heavily tilted towards males the programmer demographic is.

https://youtu.be/ecIWPzGEbFc?t=3055

The implication is that PLs and frameworks are becoming less disciplined because the programmer demographic is becoming less disciplined (young age, low-level of experience, high testosterone, more energy than wisdom, etc).

@Ichoran
Copy link

Ichoran commented Jun 20, 2020

Despite @shelby3's accurate understanding of my technical points, which I appreciate, he's misunderstood my non-technical points almost without fail (e.g. mistaking the logical thoroughness of breaking down a situation into cases as an implicit accusation of the negative parts of each case). I do want to clarify that I am an extremely strong proponent of freedom of expression, just not of having ideology litmus tests backed up by strong rhetoric to try to drive away people with valuable technical skills.


@keean - I don't know if you've programmed in Rust, but along with your comment about Rust being a most-loved language, it's useful to know why Rust is most-loved. It's not just because they've learned from history in a number of areas (e.g. crates, docs, community) and created either a best-in-class or a refreshingly-nice-alternative (for those people who want to use Rust). It also solves a problem that many systems-level programmers have, and for which there were zero good solutions, which is how to use mutability safely.

Not every new language has to allow the same thing. Immutability is another approach, though if you think the borrow-checker is annoying, try putting someone up against either the borrow-checker or the entire field of pure FP ("so, you see, just use State, except no not really, use IO instead...but anyway, monads don't commute, so we need monad transformers (uh, actually, a lot of them), except that final tagless is a better way to go, except..."), the borrow-checker both admits much higher performance without insane compiler wizardry, and is a lot easier to swallow. (Admittedly it solves many fewer problems, but for that problem it's absolutely brilliant.)

I've done a lot of programming in C++ over the years. Although I occasionally get frustrated with Rust not allowing something that is "obviously" safe, it is VASTLY outweighed by the number of times that it stops me from doing something obviously "safe", meaning actually not safe at all, and I didn't think it through carefully enough. It's almost as big an advantage for me as having strong types instead of none.

My point in writing this is not to say that you have to bite off the whole borrow checker thing. My point is that Rust has that covered, and it's a very compelling feature, and if you go head-to-head with it, you have to either come up with something incredible, or you will lose. (That goes for multithreading, too...it's not enough to be 30% better at multithreading if Rust can beat you by 2x on single threads.)

If you don't need to go head-to-head, though, because you're targeting a different area (e.g. not the-very-fastest-speed), then the design constraints are a bit less acute. Competing with Go is vastly easier than competing with Rust on a technical level. (Sociologically it may be harder...I'm not sure.)

All this makes me wonder if you've read about Unison, specifically its abstraction construct which it calls abilities.

@keean
Copy link
Owner

keean commented Jun 20, 2020

@Ichoran I have programmed in Rust, and I have published the iterator chapter from Stepanov's "Elements of Programming" translated to Rust as best I can, here on GitHub.

Rust has some nice ideas, but fails in the execution for me. The borrow checker is ad-hoc, and has no systematic design, which means it misses obviously safe cases, and the workarounds add complexity to the language. The type system also seems thrown together with no use of well-known concepts from the functional world that solve the problems (like universal and existential quantification). Again this results in corner cases with undefined behaviours, or unsoundness where the type system and the language semantics don't match up. I agree that the safe concurrency is nice, but it is frustrating that it does not allow common use cases like two write pointers to non-overlapping sub-regions of the same region. I would like to see algorithms like quick-sort accepted as safe. I am reasonably sure that there are some identities missing from Rust's analysis. If I was going to take on Rust I would want to have solutions to the above problems before doing so. I have a feeling all of these problems are solvable, but not without breaking changes, so the question is, will Rust make them, are they even looking at this? How different from Rust would a language designed around getting the above type theory correct look?

Again, for me Go's lack of generics makes it a non-starter. It's like going back to 'C' from 'C++', at first you appreciate the simplicity, and straightforwardness, but by the time you write your third linked list implementation for a different type, you are over it. They are adding generics to Go, but it's going to have added complexity because it's not in the original design, and they have to work around certain things. For me lack of interfaces for operators adds needless complexity to generics.

The thing is that neither of these languages are really a "target". They all tackle different aspects of the same problem space. The language I am looking at could be used for some problems you would use Go for but not all, and some problems you would use Rust for but not all.

I guess in the end I want to design the language I want, one that fixes the issues I have with all the other languages I have programmed in. Designing a language is a big task, so I keep hoping someone else will do it for me. When I try a new language, or a new language is created, I get excited that this might be the one, so I start trying to implement solutions to common problems from my experience. I have written software in Assembly, C, C++, Logo, Java, Pascal, Haskell, Ada, Perl, Python, JavaScript, TypeScript, Prolog, Go, Rust, and maybe some I have forgotten. All of these have some parts I like, and some I dislike. The work in Stepanov's "Elements of Programming" is the best treatment of generics I have come across, but I find the use of C and pointers problematic. Ada does well without pointers and the generics are better than C++ templates, but it has some difficult restrictions that make efficient programs hard to write, like the inability to return a writable reference to an array cell, even if you are sure the array is still in scope.

@shelby3
Copy link
Author

shelby3 commented Jun 20, 2020

@Ichoran and @keean I have continued the tangential discussion about Rust in the WD-40 issues thread #35 where it belongs.

I rebutted or let’s say responded to @Ichoran in the aforelinked reply. EDIT: I also replied to @keean’s latest comment at the aforelinked.

@thickAsABrick
Copy link

Thanks, @sighoya

@keean
Copy link
Owner

keean commented Jun 22, 2020

@sighoya sorry deleted by accident. Asked to be restored, now waiting...

@shelby3
Copy link
Author

shelby3 commented Jun 23, 2020

@sighoya

In case you haven’t already noticed, I created a new thread interim while we are waiting for Github to hopefully restore that thread. I sure hope it isn’t lost. There is a few years of valuable discussion in that thread.

Sorry, to mention you again, but the original thread seems gone, here you can block mentions from other users: Link

Are you sure his ongoing problem was mentions from other users? You and I had stopped mentioning him, but 3 hours after our cluster of mentions apologizing to him he came back and said he was still being emailed. Did Github auto-subscribe him to the thread? He said he tried to unsubscribe from the thread. Here is what he wrote last in the accidentally deleted thread #35:

My apologies for asking: is there something I can do at my end to stop these emails? I tried unsubscribe, but it does not seem to be helping?

@shelby3
Copy link
Author

shelby3 commented Sep 17, 2020

The big break in computer languages

An orthogonal problem is the type system: in most object-derived systems, there is a complex type system with at least single inheritance. This leads to an error a former customer made: we /completely/ modelled a tournament in the form of a type hierarchy.

Net result? When you wanted to change it, you had to change everything. When you wanted to add something new, you had to apply it to everything. We re-invented spaghetti code, only this time it was spaghetti data structures.

Instead of abstracting and simplifying, we made it more complex. Bummer!

Yeah, this is why Rust and Go don’t have class inheritance [aka subclassing]. Good call by both design teams.

Absolutely, inheritance in large projects tend to cause so many problems and makes it difficult to understand and follow! OOP with composition and interfaces is all you need.
Except for the lack of Sum types and Generics :D

[…]

At that point I am not even sure what is the point of OOP. Since SQL tables as a single “type” are useful for a tremendous range of purposes, while I never tried systems programming, if I would try I would probably use any “list of thingies with named properties” idiom that comes my way, be that a hash table or a struct.

OOP was pretty much invented for inheritance, at least the ways I was taught at school, these awesome chains of concepts that a BMW inherits from Car and Car from Vehicle, i.e. basically what David described was taught as good design at my school… but if it is not, then why even bother? I take an SQL table or the language equivalent thereof, hashtable, struct, whatever, name it Car, some of the fields will be Make and Model and call it a day.

Essentially I have to find the sweet spot in the conceptual category-subcategory tree, which in this case is car. Inheritance was meant to be able to move up and down on this tree, but the tree is not cast in stone because the Chevrolet company can acquire Daewoo and next week the Daewoo Matiz is called Chevrolet Matiz, then I am sure as hell not having any object class called BMW: that will be data, easily changed not part of the data structure!

Encapsulation is a better idea but unless I have massive, database-like data structures (which in real life I always do but system programmers maybe not), how am I going to automatically test any function that works not only with its own parameters but pretty much everything else it can find inside the same object? I mean great, objects cut down global variable hell to a protected variable minihell that is far easier to eyeball but is it good enough for automated testing? I think not.

I am afraid to write things like this, because only a narrow subset of my profession involves writing code and as such I am not a very experienced programmer so I should not really argue with major CS concepts. Still… for example Steve Yegge had precisely this beef with OOP: you are writing software, yet OOP really seems to want to make you build something unchangeable, fixed, cast in stone, like hardware.

OOP was hugely hyped, especially in the corporate world by Java marketers, who extolled the virtues of how OOP and Java would solve all their business problems.

As it turns out, POP (protocol-oriented programming) is the better design, and so all modern languages are using it. POP’s critical feature is generics, so it’s baffling as to why Go does not have generics.

Basically, rather than separating structures into class hierarchies, you assign shared traits to structures in a flat hierarchy. You can then pull out trait-based generics to execute some rather fantastical solutions that would otherwise require an incredible degree of copying and pasting (a la Go).

This then allows you to interchangeably use a wide variety of types as inputs and fields into these generic functions and structures, in a manner that’s very efficient due to adhering to data-oriented design practices.

It’s incredibly useful when designing entity-component system architectures, where components are individual pieces of data that are stored in a map elsewhere; entities consist of multiple components (but rather than owning their components directly, they hold ID’s to their components), and are able to communicate with other entities; and systems, which are the traits that are implemented on each entity that is within the world map. Enables for some incredible flexibility and massively parallel solutions, from UIs to game engines.

Entities can have completely different types, but the programmer does not need to be aware of that, because they all implement the same traits, and so they can interact with each other via their trait impls. And in structuring your software architecture in this way, you ensure that specific components are only ever mutably borrowed when it is needed, and thus you can borrow many components and apply systems to them in parallel.


9 participants