v3 proposal #2982

Open · wants to merge 95 commits into main
Conversation

@dead-claudia (Member) commented Oct 13, 2024

I want your feedback. Please use it, abuse it, and tell me what issues you find. Bugs, ergonomic issues, weird slowdowns, I want to know it all. Feel free to drop a comment here on whatever's on your mind, so I can easily track it.

Also, @MithrilJS/collaborators please don't merge this yet. I want feedback first. I also need to create a docs PR before this can get merged.

Miscellaneous TODOs

Don't count on these happening anytime soon. Not all of these are even guaranteed to happen. They're mostly reminders for me.

Description

This is my v3 proposal. It's pretty detailed and represents a comprehensive overhaul of the API. It synthesizes much of my API research. And yes, all tests pass. 🙂

I preserved the commit history, if you're curious how I got here. (Reviewers: you can ignore that. Focus on the file diff - you'll save yourself a lot of time.)

If you want to play around, you can find artifacts here: https://github.com/dead-claudia/mithril.js/releases/tag/v3.0.0-alpha.2

This is not published to npm, so you'll have to use a GitHub tag/commit link. If you just want to mess around in a code playground, here's a link for you.

Quick line-by-line summary

If you're short on time and just want a very high-level summary, here's a comprehensive list, grouped by type.

Highlights:

  • 9.12 KB to 7.23 KB, or a little over 20% smaller.
  • Memory overhead reduced by as much as 35% in Chromium (20% in Firefox) compared to v2.
  • A base of ES2018 is assumed, including syntax.
  • IE compatibility is no more, and most IE hacks have been removed.
  • No more magic attributes to fuss with!

Additions:

  • New vnode: m.layout((dom) => ...), scheduled to be invoked after render
  • New vnode: m.remove((dom) => ...), scheduled to be invoked after the render in which it's removed from
  • New vnode: m.retain(), retains the current vnode at that given position
  • New vnode: m.set(contextKeys, ...children), sets context keys you can get via the third parameter in components
  • New vnode: m.use([...deps], ...children), works similar to React's useEffect
  • New vnode: m.init(async (signal) => ...), callback return works just like event listeners
  • New utility: tracked = m.tracked() to manage delayed removal
  • New utility: m.throttler()
  • New utility: m.debouncer()

Changes:

  • Vnodes have been reduced in size (from 10 fields to 6) and gained much shorter property names (to reduce code size).
  • key: attribute → m.keyed(view, (item) => [key, vnode])
  • m.buildPathname(template, query) → m.p(template, query)
  • m.buildQueryString(query) → m.query(query)
  • In event listeners, return false to prevent redraws.
  • Event listeners can return promises that are awaited before redrawing. Whatever it resolves to is taken as the return value.
  • If setting an attribute throws, the error is caught and logged unless removeOnThrow: false is passed to m.render.
  • If rendering a view throws, the view is removed and the error is caught and logged unless removeOnThrow: false is passed to m.render.
  • m.route(elem, defaultRoute, {...routes}) -> m.route(prefix, view)
  • m.mount(...) now returns a redraw function that works like m.redraw, but only for that one root. redraw is also set in the context via the redraw key.
  • m.render(...) now accepts an options object for its third parameter. redraw sets the redraw function (and is made available in the context).
  • Components are now either (attrs, old, context) => vnode functions or an (attrs, null, context) => ... function returning that
  • m.request(url, opts) → m.fetch(url, opts), which accepts all window.fetch options plus onprogress, extract, and responseType.

Removals:

  • All submodules other than mithril/stream - the main library no longer needs the DOM to load
  • The built-in router no longer does its own route dispatching
  • m.trust - use innerHTML instead
  • m.censor
  • Magic attributes
  • m.vnode(...) - use m.normalize if it's truly necessary
  • m.mount(root, null) - use m.render(root, null) instead
  • m.redraw() -> either redraw function returned from m.mount or the redraw context key
  • oninit
  • oncreate/onupdate - use m.layout(...) instead
  • onremove - use m.remove(...) instead
  • onbeforeupdate - use m.retain() instead
  • onbeforeremove - use m.tracked() utility to manage parents instead
  • m.parsePathname
  • m.parseQueryString
  • ev.redraw = false - return/resolve with false or reject/throw from event handlers
  • Support for symbols in style attribute objects - those now throw. (This won't likely impact many.)

Miscellaneous:

  • Optimized mithril/stream to require a lot less allocation - I don't have precise numbers, but it should both be faster and require less memory.
  • Redid the benchmark program.

Benchmarks

For the raw benchmark data and specifics on how the memory benchmark was performed, see this gist. I initially posted the full data here, but apparently it's gotten too big for GitHub Issues - I got the error message "There was an error posting your comment: Body is too long" when I tried.

Memory benchmarks

v2.2.8:

  • Chromium:
    • window final retained size: 1001416 bytes
    • Vnode root final retained size: 188020 bytes
    • Average retained size per item: 7211.0 bytes
  • Gecko:
    • GC root final retained size: 4184729 bytes
    • Vnode root final retained size: 388088 bytes
    • Average retained size per item: 38398 bytes

This PR:

  • Chromium:
    • window final retained size: 862976 bytes
    • Vnode root final retained size: 115996 bytes
    • Average retained size per item: 5812.6 bytes
  • Gecko:
    • GC root final retained size: 4004569 bytes
    • Vnode root final retained size: 310128 bytes
    • Average retained size per item: 36576 bytes

Raw DOM:

  • Chromium:
    • window final retained size: 611780 bytes
    • Average retained size per item: 3275.3 bytes
  • Gecko:
    • GC root final retained size: 3394065 bytes
    • Average retained size per item: 30412 bytes

Raw empty:

  • Chromium:
    • window size: 287292 bytes
  • Gecko:
    • GC root size: 383289 bytes

CPU benchmarks

Notes:

  • I show both estimated median ("Med.") and 99th percentile ("P99") since those are what best predict end user experience.
  • "v2" refers to v2.2.9, while "v3" refers to the code in this PR.

Chromium (Blink):

| Benchmark | Med. v2 | Med. v3 | Change | P99 v2 | P99 v3 | Change |
| --- | --- | --- | --- | --- | --- | --- |
| do nothing | 4997551 | 4659861 | | 2120021 | 2033344 | |
| route match | 3614782 | 580969 | -83.9% | 1468955 | 261742 | -82.2% |
| route non-match | 3713216 | 2602891 | -29.9% | 1513114 | 1134868 | -25.0% |
| path generate with string interpolations | 618107 | 938630 | +51.9% | 257342 | 448619 | +74.3% |
| path generate with number interpolations | 627418 | 1050786 | +67.5% | 259070 | 457516 | +76.6% |
| path generate no interpolations | 3733366 | 2063431 | -44.7% | 1545921 | 964335 | -37.6% |
| construct simpleTree | 36281 | 55630 | +53.3% | 14875 | 25021 | +68.2% |
| construct nestedTree | 4399797 | 3791234 | -13.8% | 1843460 | 1790158 | -2.89% |
| construct mutateStylesPropertiesTree | 1307 | 2187 | +67.3% | 539 | 1115 | +107% |
| construct repeatedTree | 4470342 | 3824937 | -14.4% | 1837281 | 1802728 | -1.88% |
| construct shuffledKeyedTree | 2294 | 5754 | +151% | 993 | 2772 | +179% |
| render simpleTree | 11453 | 12179 | +6.34% | 4792 | 4744 | -1.00% |
| render nestedTree | 9837 | 10939 | +11.2% | 3900 | 4958 | +27.1% |
| render mutateStylesPropertiesTree | 76.4 | 101 | +31.7% | 45.4 | 53.0 | +16.7% |
| render repeatedTree | 76066 | 46428 | -39.0% | 38602 | 20138 | -47.8% |
| render shuffledKeyedTree | 684 | 727 | +6.22% | 108 | 76.0 | -29.6% |
| add/remove simpleTree | 1650 | 901 | -45.4% | 613 | 313 | -49.0% |
| add/remove nestedTree | 1567 | 906 | -42.2% | 553 | 329 | -40.4% |
| add/remove mutateStylesPropertiesTree | 108 | 99.2 | -8.49% | 61.0 | 53.0 | -13.1% |
| add/remove repeatedTree | 6005 | 3975 | -33.8% | 2177 | 1053 | -51.6% |
| add/remove shuffledKeyedTree | 594 | 567 | -4.58% | 264 | 261 | -1.06% |
| mount simpleTree | 1382 | 994 | -28.0% | 529 | 363 | -31.4% |
| mount all | 47.8 | 47.0 | -1.67% | 29.2 | 28.2 | -3.42% |
| redraw simpleTree | 11006 | 11959 | +8.66% | 4565 | 5164 | +13.1% |

Firefox (Gecko):

| Benchmark | Med. v2 | Med. v3 | Change | P99 v2 | P99 v3 | Change |
| --- | --- | --- | --- | --- | --- | --- |
| do nothing | 14823314 | 14375924 | | 5925703 | 6371058 | |
| route match | 8043812 | 1232765 | -84.7% | 3522805 | 526285 | -85.1% |
| route non-match | 8043243 | 4615883 | -42.6% | 3475679 | 2173199 | -37.5% |
| path generate with string interpolations | 627252 | 2137280 | +241% | 138999 | 897648 | +546% |
| path generate with number interpolations | 632250 | 2135230 | +238% | 130060 | 889377 | +584% |
| path generate no interpolations | 8434450 | 4320984 | -48.8% | 3731146 | 1700677 | -54.4% |
| construct simpleTree | 29233 | 47591 | +62.8% | 12956 | 19637 | +51.6% |
| construct nestedTree | 9499372 | 10018884 | +5.47% | 5250779 | 4386436 | -16.5% |
| construct mutateStylesPropertiesTree | 1969 | 2383 | +21.0% | 811 | 753 | -7.15% |
| construct repeatedTree | 9270115 | 9779863 | +5.50% | 4198525 | 4489073 | +6.92% |
| construct shuffledKeyedTree | 6065 | 3755 | -38.1% | 2358 | 1675 | -29.0% |
| render simpleTree | 11100 | 10263 | -7.54% | 6174 | 6129 | -0.72% |
| render nestedTree | 9619 | 9585 | -0.35% | 5484 | 5723 | +4.36% |
| render mutateStylesPropertiesTree | 74.0 | 80.0 | +8.11% | 25.6 | 36.2 | +41.4% |
| render repeatedTree | 59826 | 53520 | -10.5% | 27791 | 23875 | -14.1% |
| render shuffledKeyedTree | 266 | 221 | -17.0% | 89.4 | 74.6 | -16.6% |
| add/remove simpleTree | 1567 | 1065 | -32.0% | 480 | 482 | +0.33% |
| add/remove nestedTree | 1557 | 1084 | -30.4% | 437 | 382 | -12.7% |
| add/remove mutateStylesPropertiesTree | 77.8 | 76.6 | -1.52% | 36.8 | 31.1 | -15.6% |
| add/remove repeatedTree | 11523 | 8843 | -23.3% | 4135 | 4206 | +1.71% |
| add/remove shuffledKeyedTree | 332 | 301 | -9.34% | 132 | 139 | +5.15% |
| mount simpleTree | 1222 | 819 | -33.0% | 240 | 320 | +33.3% |
| mount all | 37.0 | 31.0 | -16.2% | 21.8 | 17.2 | -21.1% |
| redraw simpleTree | 10148 | 10376 | +2.25% | 5475 | 6912 | +26.3% |

Motivation and Context

This resolves a number of ergonomic issues around the API, crossing out many long-standing feature requests and eliminating a number of gotchas.

Related issues:

  • Fixes https://github.com//issues/1937 by enabling it to be done in userland, mostly via `m.render(elem, vnode, {removeOnThrow: true})`
  • Resolves https://github.com//issues/2310 by dropping `m.trust`
  • Resolves https://github.com//issues/2505 by not resolving routes
  • Resolves https://github.com//issues/2531 by not resolving routes
  • Fixes https://github.com//issues/2555
  • Resolves https://github.com//issues/2592 by dropping `onbeforeremove`
  • Fixes https://github.com//issues/2621 by making each mount independent of other mount points
  • Fixes https://github.com//issues/2645
  • Fixes https://github.com//issues/2778
  • Fixes https://github.com//issues/2794 by dropping internal `ospec`
  • Fixes https://github.com//issues/2799
  • Resolves https://github.com//issues/2802 by dropping `m.request`

Related discussions:

  • Resolves https://github.com//discussions/2754
  • Resolves https://github.com//discussions/2775 by not resolving routes
  • Implements https://github.com//discussions/2912
  • Implements https://github.com//discussions/2915 by throwing
  • Resolves https://github.com//discussions/2916 by dropping `m.request`
  • Implements https://github.com//discussions/2917 with some minor changes
  • Implements https://github.com//discussions/2918
  • Implements https://github.com//discussions/2919
  • Implements https://github.com//discussions/2920
  • Resolves https://github.com//discussions/2922 by dropping `m.request`
  • Resolves https://github.com//discussions/2924 by dropping `m.request`
  • Implements https://github.com//discussions/2925
  • Resolves https://github.com//discussions/2926 by dropping `m.request`
  • Resolves https://github.com//discussions/2929 by not doing it (doesn't fit with the model)
  • Resolves https://github.com//discussions/2931 by not resolving routes
  • Resolves https://github.com//discussions/2934 by dropping `m.request`
  • Resolves https://github.com//discussions/2935 by not resolving routes
  • Partially implements https://github.com//discussions/2936
  • Partially implements https://github.com//discussions/2937, intentionally skips rest
  • Resolves https://github.com//discussions/2941 by not resolving routes
  • Implements https://github.com//discussions/2942 by offering a configurable `route` context key
  • Implements https://github.com//discussions/2943
  • Implements https://github.com//discussions/2945
  • Resolves https://github.com//discussions/2946 by making the router a component instead (thus making it implicit)
  • Implements https://github.com//discussions/2948 by throwing
  • Resolves https://github.com//discussions/2950 by dropping `m.request`

Things this does *not* resolve:

  • https://github.com//issues/2256 (This needs some further digging to lock down precisely what needs done)
  • https://github.com//issues/2315
  • https://github.com//issues/2359
  • https://github.com//issues/2612
  • https://github.com//issues/2623
  • https://github.com//issues/2643
  • https://github.com//issues/2809
  • https://github.com//issues/2886

New API

This is all in the style of https://mithril.js.org/api.html. I also include comparisons to v2. Collapsed so you can skip past it without scrolling for days.

New API overview

Vnodes

Vnodes have changed massively under the hood. Now, everything is packed into a much smaller object (down from 10 fields to 6) when normalized. This means that in large apps, you can get away with way more components before you see perf issues. (Significantly reducing memory overhead is a partial goal of this change.)

Each type below also contains a quick explainer of the underlying normalized vnode object representing them. If a field isn't documented below, it's either not used or an implementation detail.

Note that vnode.m & m.TYPE_MASK is using the bitwise AND operator to select only the low 4 bits of the vnode.m field of the normalized vnode. vnode.m is also guaranteed to be non-negative except for m.retain(), in which the whole field is always -1. All other bits (from offset 4 to offset 30) are an implementation detail unless stated otherwise. At the time of writing, 9 of the remaining 27 bits are currently in use.
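To make the masking concrete, here's a minimal sketch. It assumes a vnode normalized via m.normalize (mentioned in the removals list); the selector is made up for illustration.

const vnode = m.normalize(m("div.box"))

// Only the low 4 bits identify the vnode type; everything above them is internal.
if ((vnode.m & m.TYPE_MASK) === m.TYPE_ELEMENT) {
	console.log("element vnode for selector", vnode.t)
}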

Note that once Mithril receives a vnode, the vnode could get modified.

onbeforeupdate → m.retain()

Return m.retain() if you want to retain the previous vnode. Also, components always receive the previous attributes via their second argument (null on first render).

Here's an overoptimized, overabstracted counter to show the difference:

This PR:

function CounterDisplay(attrs, old) {
	if (old && attrs.current === old.current) {
		return m.retain()
	}

	return m("div.counter", [
		m("button.counter-increment", {onclick: attrs.onincrement}, "🔼"),
		m("div.counter-value", attrs.current),
		m("button.counter-decrement", {onclick: attrs.ondecrement}, "🔽"),
	])
}

function Counter() {
	let count = 0
	return () => m(CounterDisplay, {
		current: count,
		onincrement() { count++ },
		ondecrement() { count-- },
	})
}

v2:

const CounterDisplay = {
	onbeforeupdate: (vnode, old) => vnode.attrs.current !== old.attrs.current,
	view: ({attrs}) => m("div.counter", [
		m("button.counter-increment", {onclick: attrs.onincrement}, "🔼"),
		m("div.counter-value", attrs.current),
		m("button.counter-decrement", {onclick: attrs.ondecrement}, "🔽"),
	]),
}

function Counter() {
	let count = 0
	return {
		view: () => m(CounterDisplay, {
			current: count,
			onincrement() { count++ },
			ondecrement() { count-- },
		}),
	}
}

Normalized vnode fields:

  • vnode.m is specially set to -1.
  • On render, this vnode becomes a clone of whatever vnode it's retaining, provided it's not a hole (boolean, null, or undefined). For instance, if the vnode it was retaining is an m("div"), it gets its vnode.d field set to the rendered DOM element.

oncreate/onupdate/onremove → m.layout/m.remove

DOM-related lifecycle methods have been moved to vnodes. Here's an example using the FullCalendar Bootstrap 4 plugin (an old version of this is currently used in the docs) to show the difference:

Both versions use the same setup as currently shown in the README on npm, and wrap it the same way Mithril's docs currently wrap it.

This PR, option 1: nest the m.layout and m.remove

import { Calendar } from "@fullcalendar/core"
import bootstrapPlugin from "@fullcalendar/bootstrap"
import dayGridPlugin from "@fullcalendar/daygrid"

function FullCalendar(attrs) {
	let calendar

	return () => m("div", [
		m.layout((elem) => {
			if (calendar == null) {
				calendar = new Calendar(elem, {
					plugins: [
						bootstrapPlugin,
						dayGridPlugin,
					],
					themeSystem: "bootstrap",
					initialView: "dayGridMonth"
				})
				attrs.onready(calendar)
			}

			calendar.render()
		}),

		m.remove(() => calendar.destroy()),
	])
}

function Demo() {
	let calendar

	return () => [
		m("h1", "Calendar"),
		m(FullCalendar, {onready(c) { calendar = c }}),
		m("button", {onclick() { calendar.prev() }}, "Mithril.js Button -"),
		m("button", {onclick() { calendar.next() }}, "Mithril.js Button +"),
	]
}

m.mount(document.body, () => m(Demo))

This PR, option 2: use vnode.d to get the element

import { Calendar } from "@fullcalendar/core"
import bootstrapPlugin from "@fullcalendar/bootstrap"
import dayGridPlugin from "@fullcalendar/daygrid"

function FullCalendar(attrs) {
	let calendar
	let div

	return () => [
		div = m("div"),

		m.layout(() => {
			if (calendar == null) {
				calendar = new Calendar(div.d, {
					plugins: [
						bootstrapPlugin,
						dayGridPlugin,
					],
					themeSystem: "bootstrap",
					initialView: "dayGridMonth"
				})
				attrs.onready(calendar)
			}

			calendar.render()
		}),

		m.remove(() => calendar.destroy()),
	]
}

function Demo() {
	let calendar

	return () => [
		m("h1", "Calendar"),
		m(FullCalendar, {onready(c) { calendar = c }}),
		m("button", {onclick() { calendar.prev() }}, "Mithril.js Button -"),
		m("button", {onclick() { calendar.next() }}, "Mithril.js Button +"),
	]
}

m.mount(document.body, () => m(Demo))

v2:

import { Calendar } from "@fullcalendar/core"
import bootstrapPlugin from "@fullcalendar/bootstrap"
import dayGridPlugin from "@fullcalendar/daygrid"

function FullCalendar() {
	let calendar

	return {
		view: () => m("div"),
		oncreate(vnode) {
			calendar = new Calendar(vnode.dom, {
				plugins: [
					bootstrapPlugin,
					dayGridPlugin,
				],
				themeSystem: "bootstrap",
				initialView: "dayGridMonth"
			})
			vnode.attrs.onready(calendar)
			calendar.render()
		},
		onupdate: () => calendar.render(),
		onremove: () => calendar.destroy(),
	}
}

function Demo() {
	let calendar

	return {
		view: () => [
			m("h1", "Calendar"),
			m(FullCalendar, {onready(c) { calendar = c }}),
			m("button", {onclick() { calendar.prev() }}, "Mithril.js Button -"),
			m("button", {onclick() { calendar.next() }}, "Mithril.js Button +"),
		],
	}
}

m.mount(document.body, Demo)

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_LAYOUT for m.layout(...), m.TYPE_REMOVE for m.remove(...).
  • vnode.a holds the callback for both types.

m(selector, attrs, children), JSX <selector {...attrs}>{children}</selector>

Same as before, but with a few important differences:

  • No magic lifecycle methods. Use relevant vnodes as needed. (For similar reasons, m.censor is removed.)
  • Event handlers await their callbacks before deciding whether to redraw, and they use return false to prevent redraw rather than to "capture" the event (use m.capture(ev) for that).
  • The element's identity is no longer the tag name, but the whole selector. If that changes, the element is replaced, even if the two selectors share a tag name.
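For instance, here's a tiny sketch of that last rule; the class names are made up:

// Toggling between these replaces the underlying <div> rather than updating it
// in place, because the whole selector (not just the tag) is the identity.
m("div.expanded")
m("div.collapsed")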

This PR:

// Simple case
m("div.class#id", {title: "title"}, ["children"])

// Simple events
m("div", {
	onclick(e) {
		console.log(e)
	},
})

// Prevent redraw
m("div", {
	onclick(e) {
		return false
	},
})

// Redraw after doing something async
m("button.save", {
	async onclick(e) {
		m.capture(e)
		try {
			await m.fetch(m.p("/save/:id", {id: attrs.id}), {
				body: JSON.stringify({value: e.target.value}),
			})
		} catch (e) {
			error = e.message
		}
	},
}, "Save")

v2:

// Simple case
m("div.class#id", {title: "title"}, ["children"])

// Simple events
m("div", {
	onclick(e) {
		console.log(e)
	},
})

// Prevent redraw
m("div", {
	onclick(e) {
		e.redraw = false
	},
})

// Redraw after doing something async
m("button.save", {
	async onclick(e) {
		e.redraw = false
		e.preventDefault()
		e.stopPropagation()
		try {
			await m.request("/save/:id", {
				params: {id: attrs.id},
				body: JSON.stringify({value: e.target.value}),
			})
		} catch (e) {
			error = e.message
		}
		m.redraw()
	},
}, "Save")

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_ELEMENT.
  • vnode.t holds the selector, "div.class#id" in this case.
  • vnode.a holds the attributes. Note that if both the selector and attributes have a class/className attribute, they're now normalized to class, not className.
  • vnode.c holds the (normalized) children.
  • vnode.d holds the DOM element, once rendered.

m(Component, attrs, children), JSX <Component {...attrs}>{children}</Component>

Usage is the same as before, but with one difference: no magic lifecycle methods. You'll need to use special attributes to expose inner DOM nodes, lifecycle, and state, and you'll need to use lifecycle vnodes outside the component for everything else. (For similar reasons, m.censor is removed.)

The component definition differs a lot, though, and that'll be covered later.

Note: be careful with attrs.children - that takes precedence over children arguments in both m(Component) and m(".selector") vnodes, and so you may have to extract it as done below.

This PR:

function Greeter({children, ...attrs}) {
	return m("div", attrs, ["Hello ", children])
}

// consume it
m(Greeter, {style: "color: red;"}, "world")

// renders to this HTML:
// <div style="color: red;">Hello world</div>

v2:

// define a component
const Greeter = {
	view(vnode) {
		return m("div", vnode.attrs, ["Hello ", vnode.children])
	},
}

// consume it
m(Greeter, {style: "color: red;"}, "world")

// renders to this HTML:
// <div style="color: red;">Hello world</div>

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_COMPONENT.
  • vnode.t holds the component, Greeter in this case.
  • vnode.a holds the attributes. Any children passed in are merged in as {...attrs, children}, but they are not normalized.
  • vnode.c holds the instance vnode, once rendered.

Holes: null, undefined, false, and true

Holes work exactly the same as before, with all the same rules. And like before, they're normalized to null.

"text"

Text vnodes work exactly the same as before, with all the same rules. Anything neither a hole, a fragment array, nor a normalized vnode is stringified. Symbols can even be stringified like before.

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_TEXT.
  • vnode.a holds the (stringified) text.

[...children], m(m.Fragment, ...children), JSX <>{children}</>

Unkeyed fragments work the same as before, but with three differences:

  • If you truly want or need a pre-normalized vnode, you were probably using m.fragment(...). Use m.normalize([...]) or m(m.Fragment, ...) instead - there's no longer a dedicated normalized fragment vnode factory.
  • m.Fragment actually looks like a component, rather than looking like an element selector (and being faked as a component in TypeScript types).
  • Fragments can't have lifecycle methods, not even through m(m.Fragment, ...).

Skipping examples, since it's the same across both aside from the above.

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_FRAGMENT.
  • vnode.c holds the (normalized) children.

m.keyed(list, view), m.keyed([...entries])

Keyed fragments have changed. Instead of returning an array of vnodes with key properties, you use m.keyed(...). In this new model, keys are more implicit, and while there are a few more brackets and parentheses, it's a bit easier to follow due to the added context of "this is a keyed list, not a normal one" given by the m.keyed call.

This PR:

const colors = ["red", "yellow", "blue", "gray"]
let counter = 0

function getColor() {
	const color = colors[counter]
	counter = (counter + 1) % colors.length
	return color
}

function Boxes() {
	const boxes = new Map()
	let nextKey = 0

	function add() {
		boxes.set(nextKey++, getColor())
	}

	function remove(key) {
		boxes.delete(key)
	}

	return () => [
		m("button", {onclick: add}, "Add box, click box to remove"),
		m(".container", m.keyed(boxes, ([key, box]) => [
			key,
			m(".box",
				{
					"data-color": box.color,
					onclick() { remove(key) },
				},
				m(".stretch")
			)
		])),
	]
}

v2:

const colors = ["red", "yellow", "blue", "gray"]
let counter = 0

function getColor() {
	const color = colors[counter]
	counter = (counter + 1) % colors.length
	return color
}

function Boxes() {
	const boxes = new Map()
	let nextKey = 0

	function add() {
		boxes.set(nextKey++, getColor())
	}

	function remove(key) {
		boxes.delete(key)
	}

	return {
		view: () => [
			m("button", {onclick: add}, "Add box, click box to remove"),
			m(".container", Array.from(boxes, ([key, box]) => (
				m(".box",
					{
						key,
						"data-color": box.color,
						onclick() { remove(key) },
					},
					m(".stretch")
				)
			))),
		],
	}
}

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_KEYED.
  • vnode.c holds the key to (normalized) child map.

m.use([...deps], ...children)

For cases where you want to reset something based on state, this makes that way easier for you, and it's much more explicit than a single-item keyed list. (In fact, it was such a common question during v0.2 and v1 days that we had to eventually document it.) This new factory provides a better story, one that should also hopefully facilitate hooks-like use cases as well.

This PR:

m.mount(rootElem, function() {
	let match

	if (m.match(this.route, "/")) {
		return m(Home)
	}

	if (match = m.match(this.route, "/person/:id")) {
		return m(Layout, m.use([match.id], m(Person, {id: match.id})))
	}

	this.route.set("/")
})

v2:

m.route(rootElem, "/", {
	"/": Home,
	"/person/:id": {
		render: ({attrs}) => m(Layout, [m(Person, {id: attrs.id, key: attrs.id})]),
	},
	// ...
})

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_USE.
  • vnode.a holds the collected dependency array (deps can be any iterable, not just an array).
  • vnode.c holds the (normalized) children.

m.set({key: value, ...}, ...children)

This type is entirely new, without a v2 equivalent. You've probably noticed a bunch of this.route and this.redraw() spattered everywhere. Well, this.redraw() is provided implicitly by the runtime, but the new m.route(...) directly uses m.set({route}, view()) internally.
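Here's a minimal sketch of setting and reading a context key. The theme key and both component names are made up; the only assumption is that children of m.set can read the keys from the component context, per the additions list above.

function ThemedButton(attrs, old, context) {
	// `context.theme` comes from the nearest enclosing `m.set`.
	return m("button", {class: `btn-${context.theme}`}, attrs.children)
}

function Toolbar() {
	return () => m.set({theme: "dark"}, [
		m(ThemedButton, "Save"),
		m(ThemedButton, "Cancel"),
	])
}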

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_USE.
  • vnode.a holds the collected dependency array (deps can be any iterable, not just an array).
  • vnode.c holds the (normalized) children.

m.inline(view)

This type is entirely new, without a v2 equivalent. It's equivalent to the following component, but it's specially optimized internally (and is its own vnode type).

function Inline({view}) {
	// Call with the context as both `this` and the first argument. Doing it this way reduces how
	// many arrow functions you need, and it also provides an easier way to read context state.
	return view.call(this, this)
}

m.inline = (view) => m(Inline, {view})

It's used internally by the new m.route(prefix, view). See that helper for more details on how this plays out in practice.
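As a small usage sketch (assuming this renders somewhere inside m.route(...), so the route context key from the routing section below is available):

// Highlight the "Home" link when it's the current path, without splitting the
// surrounding view into a separate component just to reach the context.
m("nav", [
	m.inline(({route}) => m("a", m.link("/home"), [
		route.path === "/home" ? "Home (current)" : "Home",
	])),
])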

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_INLINE.
  • vnode.a holds the view function.
  • vnode.c holds the instance vnode, once rendered.

m.mount(element, component) → m.mount(element, view)

This PR:

const state = {
	count: 0,
	inc() { state.count++ },
}

function Counter() {
	return m("div", {onclick: state.inc}, state.count)
}

m.mount(document.body, () => m(Counter))

v2:

const state = {
	count: 0,
	inc() { state.count++ },
}

const Counter = {
	view: () => m("div", {onclick: state.inc}, state.count)
}

m.mount(document.body, Counter)

m.redraw() → context.redraw()

This PR, option 1: use from a component

This way makes for easier local cleanup.

function Counter() {
	let count = 0
	const timer = setInterval(() => {
		count++
		this.redraw()
	}, 1000)
	return () => [
		m("div", count),
		m.remove(() => clearInterval(timer))
	]
}

m.mount(document.body, () => m(Counter))

This PR, option 2: use from a mount view callback

let count = 0

m.mount(document.body, function (isInit) {
	if (isInit) {
		setInterval(() => {
			count++
			this.redraw()
		}, 1000)
	}
	return m("div", count)
})

v2:

function Counter() {
	let count = 0
	const timer = setInterval(() => {
		count++
		m.redraw()
	}, 1000)
	return {
		onremove: () => clearInterval(timer),
		view: () => m("div", count),
	}
}

m.mount(document.body, Counter)

Routing: m.route(root, defaultRoute, {...routes})m.route(prefix, view)

It's more verbose, but there are unique benefits to doing it this way.

  • Adding and tweaking routes requires no more code movement than a magic routes object did. If you want to regain that style, you can still do it with a simple wrapper (see the sketch after this list).
  • Abstracting routes is as simple as creating a function. The framework doesn't need any special functionality for that.
  • Allowing route matching anywhere means you can just as easily do routing only where you need to. You don't need to specially wrap everything just to make it work. (I've partly run into this trap with other routing frameworks.)
  • You don't need as much effort to glue together a component and a route that doesn't quite perfectly match what that view component expects. (I've been bit by this many times.)
  • If your default route varies based on context, this lets you do that.
  • Conditional routes are trivial. Just use if statements to guard your routes.
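Here's a hedged sketch of the "simple wrapper" mentioned in the first bullet. Nothing in it is part of the proposal itself - it's plain userland code built on route.match and route.set, and the route names and components are placeholders.

// Rebuilds the v2-style routes-object ergonomics on top of the new router.
function routeTable(defaultRoute, routes) {
	return ({route}) => {
		for (const [template, view] of Object.entries(routes)) {
			const match = route.match(template)
			if (match) return view(match)
		}
		route.set(defaultRoute)
	}
}

// Usage, mirroring the v2 shape:
m.mount(document.body, () => m.route("#!", routeTable("/home", {
	"/home": () => m(Home),
	"/user/:id": ({id}) => m(ShowUser, {id}),
})))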

The main goal here is just to get out of your way as much as possible: give you the helpers needed to easily solve routing yourself with only basic abstractions, instead of trying to magically solve it for you and ultimately failing because the abstraction didn't quite work as you needed it to.

Note that prefixes are now required - there is no longer a default. They help for documentation, so you can just look at the call and know right away what the prefix is.

Also, m.match/route.match templates are cached just like hyperscript selectors. Updating those too often may result in memory leaks.

This PR, option 1: define using static strings and/or regexps

m.mount(document.body, () => m.route("#!", ({route}) => {
	let match

	if (route.path === "/home") {
		return m(Home)
	}

	if (match = /^\/user\/([^/]+)$/.exec(route.path)) {
		return m(ShowUser, {id: decodeURIComponent(match[1])})
	}

	route.set("/home") // navigate to default route
}))

This PR, option 2: use route.match(...) (equivalent to m.match(route, path)) to match route templates

This method has been benchmarked to oblivion. Hundreds of routes aren't a problem. If anything, they're less of a problem now.

m.mount(document.body, () => m.route("#!", ({route}) => {
	let match
	if (route.match("/home")) return m(Home)
	if (match = route.match("/user/:id")) return m(ShowUser, {id: match.id})
	route.set("/home") // navigate to default route
}))

v2:

m.route.prefix = "#!" // Technically redundant as it's the v2 default
m.route(document.body, "/home", {
	"/home": Home,
	"/user/:id": ShowUser,
})

Note that while the above uses the first argument, the view is actually passed into an m.inline, so it receives all the context, not just the route key, and it also receives it as both this and the first parameter. So this is equivalent to the second option:

m.mount(document.body, () => m.route("#!", function () {
	let match

	if (this.route.match("/home")) {
		return m(Home)
	}

	if (match = this.route.match("/user/:id")) {
		return m(ShowUser, {id: match.id})
	}

	// And finally, set your default path here.
	this.route.set("/home")
}))

route.set(path)

route.set("/home")

Current route

// Returns the full current route, including query and hash
const fullCurrentRoute = route.current
// Returns the current path, including query and hash
const currentPath = route.path
// Returns the current query parameters in a `URLSearchParams` instance.
const currentQuery = route.query

m(m.route.Link, ...) → m.link(href, opts?)

This PR:

// Simple static routes can just be given directly
m("a", m.link("/home"), "Go to home page")

// If you want parameters, use `m.p`.
m("a", m.link(m.p("/user/:id", {id: user.id})), user.name)

// If you want to replace the URL state on click, you can pass the same options you can pass to
// `route.set(url, opts)`.
m("a.wizard-exit", {ping: m.p("/event", {a: "wizard-exit", s: wizardState})}, [
	m.link("/things/list", {replace: true}),
	"Exit create wizard",
])

v2:

// Simple static routes can just be given directly
m(m.route.Link, {href: "/home"}, "Go to home page")

// If you want parameters, use `m.p`.
m(m.route.Link, {href: "/user/:id", params: {id: user.id}}, user.name)

// If you want to replace the URL state on click, you can pass the same options you can pass to
// `route.set(url, opts)`.
m(m.route.Link, {
	class: "wizard-exit",
	ping: m.buildPathname("/event", {a: "wizard-exit", s: wizardState}),
	href: "/things/list",
	replace: true,
}, "Exit create wizard")

What happened to route resolvers?

Instead of accepting components, everything is essentially a render function from v2. But for onmatch, the only two legitimate uses for it are auth and lazy route loading. There's better ways to handle both:

  • For lazy route loading, there's a Comp = m.lazy({fetch: () => import("./path/to/Comp.js")}) helper (see the sketch after this list).
  • For auth, lazily rendering a view is the wrong abstraction. It needs to check for auth from a cached token, prompt on immediate failure, and render the original view on successful auth in either case.
  • On the website, it currently mentions data pre-loading as a use case. That shouldn't use onmatch. Instead, the view for it should be rendered right away, just with loading placeholders for parts that aren't yet ready to show. Trying to force that into a lazy loading system would just cause blank screen flickering and other stuff that at best doesn't look pretty and at worst causes accessibility issues.
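A sketch of how the lazy-loading helper from the first bullet might slot into a route. The module path and route template are made up, and exactly how the imported module maps to a component isn't covered here:

const UserSettings = m.lazy({fetch: () => import("./views/UserSettings.js")})

m.mount(document.body, () => m.route("#!", ({route}) => {
	let match
	if (match = route.match("/settings/:section")) {
		return m(UserSettings, {section: match.section})
	}
	route.set("/settings/profile")
}))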

m.buildPathname(template, params) → m.p(template, params)

It's the same API, just with a shorter name as it's expected to be used a lot more. It also has an entirely new implementation that's almost twice as fast when parameters are involved, to match that same expected jump in usage.

m.fetch, m.link, and route.set expect raw strings and don't accept template parameters, unlike their v2 equivalents.

Here's another example of how it could end up used.

// Issues a fetch to `PUT /api/v1/users/1?name=test`
const result = await fetch(m.p("/api/v1/users/:id", {id: 1, name: "test"}), {
	method: "PUT",
})
console.log(result)

m.buildQueryString(object) → m.query(object)

Same as before, just with a shorter name to complement m.p. It also reuses some of the optimizations created for m.p.

// "a=1&b=2"
const querystring = m.query({a: "1", b: "2"})

m.request(url, opts?) → m.fetch(url, opts)

The new m.fetch accepts all the same arguments as window.fetch, plus a few other options:

  • opts.responseType: Selects the type of value to return (default: "json")
    • "json": Parse the result as JSON and return the parsed result
    • "formdata": Parse the result as multipart/form-data and return the parsed FormData object
    • "arraybuffer": Collect the result into an ArrayBuffer object and return it
    • "blob": Collect the result into a Blob object and return it
    • "text": Collect the result into a UTF-8 string and return it
    • "document": Parse the result as HTML/XML and return the parsed Document object
  • opts.extract: Instead of using responseType to determine how to extract it, you can provide a (response) => result function to extract the result. Only called on 2xx and takes precedence over opts.responseType.
  • opts.onprogress: Pass a (current, total) => ... function to get called on download progress.

Unfortunately, upload progress isn't currently monitorable using fetch: whatwg/fetch#607. However, this wasn't possible in any previous version of Mithril without using config(xhr) { xhr.upload.onprogress = ... } directly in options, so this isn't much of a removal.

m.fetch returns a promise that resolves according to opts.responseType and opts.extract on 2xx, rejects with an error on anything else. Errors returned through rejections have the following properties:

  • message: The returned UTF-8 text, or the status text if empty. If an error was thrown during the m.fetch call, its message is duplicated here instead.
  • status: The status code, or 0 if it failed before receiving a response.
  • response: The response from the inner fetch call, or undefined if the inner fetch call itself failed to return a response.
  • cause: If an error was thrown during the m.fetch call, this is set to that error.
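Here's a hedged sketch of how these options compose; the endpoint and the logging are made up for illustration:

async function loadReport() {
	try {
		// Standard `window.fetch` options like `method` pass straight through.
		const doc = await m.fetch("/reports/latest", {
			method: "GET",
			responseType: "document",
			onprogress(current, total) {
				console.log(`downloaded ${current} of ${total} bytes`)
			},
		})
		return doc.title
	} catch (e) {
		// Non-2xx responses (and thrown errors) reject with the fields listed above.
		console.error(e.status, e.message)
		throw e
	}
}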

m.throttler()

This is a general-purpose bi-edge throttler, with a dynamically configurable limit. It's much better than your typical throttle(f, ms) because it lets you easily separate the trigger and reaction using a single shared, encapsulated state object. That same separation is also used to make the rate limit dynamically reconfigurable on hit.

Create as throttled = m.throttler(ms) and do if (await throttled()) return to rate-limit the code that follows. The result is one of three values, to allow you to identify edges:

  • Leading edge: undefined
  • Trailing edge: false, returned only if a second call was made
  • No edge: true

Call throttled.update(ms) to update the interval. This not only impacts future delays, but also any current one.

To dispose, like on component removal, call throttled.dispose().

If you don't specify a delay, it defaults to 500ms on creation, which works well enough for most needs. There is no default for throttled.update(...) - you must specify one explicitly.

Important note: due to the way this is implemented in basically all runtimes, the throttler's clock might not tick during sleep, so if you do await throttled() and immediately sleep in a low-power state for 5 minutes, you might have to wait another 10 minutes after resuming to a high-power state.
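A tiny sketch of the dynamic reconfiguration described above (the 2000ms figure is arbitrary):

const throttled = m.throttler() // 500ms default
// ...later, e.g. after detecting a slow backend:
throttled.update(2000) // also stretches any delay currently in progress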

Example usage:

function Search() {
	const throttled = m.throttler()
	let results, error
	return () => [
		m.remove(throttled.dispose),
		m("input[type=search]", {
			oninput: async (ev) => {
				// Skip redraw if rate limited - it's pointless
				if (await throttled()) return false
				error = results = null
				this.redraw()
				try {
					results = await m.fetch(m.p("/search", {q: ev.target.value}))
				} catch (e) {
					error = e.message
				}
			},
		}),
		results && results.map((result) => m(SearchResult, {result})),
		!error || m(ErrorDisplay, {error}),
	]
}

Here's the v2 equivalent of the above code, to show how this (and async event listeners) help reduce how much you need to write.

function Search() {
	let timer, results, error

	async function doSearch(query) {
		error = results = null
		try {
			results = await m.request("/search", {params: {q: query}})
		} catch (e) {
			error = e.message
		}
		m.redraw()
	}

	return {
		onremove: () => clearTimeout(timer),
		view: () => [
			m("input[type=search]", {
				oninput: (ev) => {
					if (timer) {
						ev.redraw = false // Skip redraw while throttled
					} else {
						timer = setTimeout(() => { timer = null }, 500)
						doSearch(ev.target.value)
					}
				},
			}),
			results && results.map((result) => m(SearchResult, {result})),
			!error || m(ErrorDisplay, {error}),
		],
	}
}

m.debouncer()

A general-purpose bi-edge debouncer, with a dynamically configurable limit. It's much better than your typical debounce(f, ms) because it lets you easily separate the trigger and reaction using a single shared, encapsulated state object. That same separation is also used to make the rate limit dynamically reconfigurable on hit.

Create as debounced = m.debouncer(ms) and do if (await debounced()) return to rate-limit the code that follows. The result is one of three values, to allow you to identify edges:

  • Leading edge: undefined
  • Trailing edge: false, returned only if a second call was made
  • No edge: true

Call debounced.update(ms) to update the interval. This not only impacts future delays, but also any current one.

To dispose, like on component removal, call debounced.dispose().

If you don't specify a delay, it defaults to 500ms on creation, which works well enough for most needs. There is no default for debounced.update(...) - you must specify one explicitly.

Important note: due to the way this is implemented in basically all runtimes, the debouncer's clock might not tick during sleep, so if you do await debounced() and immediately sleep in a low-power state for 5 minutes, you might have to wait another 10 minutes after resuming to a high-power state.

Example usage:

function SaveButton() {
	const debounced = m.debouncer()
	let results, error
	return (attrs) => [
		m.remove(debounced.dispose),
		m("input[type=text].value", {
			async oninput(ev) {
				// Only act on the trailing edge; skip the redraw otherwise - it's pointless
				if ((await debounced()) !== false) return false
				try {
					await m.fetch(m.p("/save/:id", {id: attrs.id}), {
						body: JSON.stringify({value: ev.target.value}),
					})
				} catch (e) {
					error = e.message
				}
			},
		}),
		results && results.map((result) => m(SearchResult, {result})),
		!error || m(ErrorDisplay, {error}),
	]
}

Here's the v2 equivalent of the above code, to show how this (and async event listeners in general) help reduce how much you need to write.

function SaveButton() {
	let timer, results, error, value

	return {
		onremove: () => clearTimeout(timer),
		view: (vnode) => [
			m("input[type=text].value", {
				oninput(ev) {
					ev.redraw = false // Skip redraw while waiting
					clearTimeout(timer)
					timer = setTimeout(async () => {
						timer = null
						error = results = null
						try {
							results = await m.request("/save/:id", {params: {id: vnode.attrs.id}})
						} catch (e) {
							error = e.message
						}
						m.redraw()
					}, 500)
					value = ev.target.value
				},
			}),
			results && results.map((result) => m(SearchResult, {result})),
			!error || m(ErrorDisplay, {error}),
		],
	}
}

onbeforeremove → m.tracked(redraw, initial?), m.trackedList(redraw, initial?)

In v2, for managing animations, you'd do something like this:

const Fader = {
	view: (vnode) => Boolean(vnode.attrs.show) && m("div", {
		onbeforeremove: (vnode) => new Promise((resolve) => {
			vnode.dom.addEventListener("transitionend", resolve, {once: true})
			vnode.dom.classList.add("fade-out")
		}),
	}, "Hi"),
}

function AnimatedTodoList() {
	const items = new Map()
	let nextItem = ""
	let id = 0

	return {
		view: () => m(".todo-list", [
			m(".todo-list-add", [
				m("input[type=text].next-item-value", {
					value: nextItem,
					oninput(ev) { nextItem = ev.target.value },
				}),
				m("button.next-item-add", {
					onclick() { items.set(id++, nextItem) },
				}, "Add"),
			]),
			m(".todo-list-items", Array.from(items, ([key, value]) => (
				m(".todo-item", {
					key,
					onbeforeremove: (vnode) => new Promise((resolve) => {
						vnode.dom.ontransitionend = (ev) => {
							resolve()
							return false
						}
						vnode.dom.classList.add("fade-out")
					}),
				}, [
					m(".item-value", value),
					m("button.remove", {
						onclick() {
							items.delete(key)
						},
					}, "Remove"),
				])
			))),
		]),
	}
}

In this PR, things are done a bit differently as it's much lower-level: you don't remove right away, but instead you first trigger the animation, and then you remove. Tracking the virtual list needed for this is complicated, but that's where m.tracked() and m.trackedList() come in - they track it for you. You just need to make sure to render what they say is live in the given moment.

Here's the equivalent for this PR:

function Fader() {
	const trackHit = m.tracked(this.redraw)

	return (attrs) => m.keyed(trackHit(attrs.show), (handle) => [
		handle.key,
		m("div", {
			class: handle.signal.aborted && "fade-out",
			ontransitionend: handle.release,
		}, "Hi"),
	])
}

function AnimatedTodoList() {
	const items = m.trackedList(this.redraw)
	let nextItem = ""
	let id = 0

	return () => m(".todo-list", [
		m(".todo-list-add", [
			m("input[type=text].next-item-value", {
				value: nextItem,
				oninput(ev) { nextItem = ev.target.value },
			}),
			m("button.next-item-add", {
				onclick() { items.set(id++, nextItem) },
			}, "Add"),
		]),
		m(".todo-list-items", m.keyed(items.live(), (handle) => [
			handle.key,
			m(".todo-item", {
				class: handle.signal.aborted && "fade-out",
				ontransitionend: handle.release,
			}, [
				m(".item-value", handle.value),
				m("button.remove", {onclick: handle.remove}, "Remove"),
			]),
		])),
	])
}

Separating this from the framework also helps bring better structure. For page transitions, you could do something like this:

function Page(attrs) {
	return m("div", {
		class: this.pageHandle.signal.aborted && "fade-out",
		ontransitionend: this.pageHandle.release,
	}, [
		attrs.children,
	])
}

const defaultRoute = "/home"
const routes = {
	"/home": () => m(Home),
	"/user/:id": ({id}) => m.m(ShowUser, {id}),
}

function App() {
	const trackHit = m.tracked(this.redraw)

	return () => m.route("#!", ({route}) => {
		let match
		const template = Object.keys(routes).find((t) => (match = route.match(t)))

		return m.keyed(trackHit(template), (handle) => Boolean(handle.value) && [
			handle.key,
			m.set({pageHandle: handle}, m(Page, routes[handle.value](match)))
		])
	})
}

m.mount(document.body, () => m(App))

This replacement also comes with a unique advantage: you get a pre-made signal that you can use to immediately abort stuff like fetches before the node is removed.

function ShowUser({id}) {
	let user, error

	queueMicrotask(async () => {
		const response = await fetch(m.p("/api/user/:id", {id}), {signal: this.pageHandle.signal})
		try {
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return () => [
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

m.tracked() API

  • tracked = m.tracked(redraw, initial?): Create a tracked value. The default initial value is undefined.
  • handles = tracked(state): Track an incoming state value. Returns a list of handles.

m.trackedList() API

  • tracked = m.trackedList(redraw, initial?): Create a tracked value.
  • handles = tracked(state): Track an incoming state value. Returns a list of handles
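The bullets above are terse, so here's a small sketch inferred from the examples in this section (not a complete API reference): each live entry comes with a handle carrying its key and value, an abort signal that flips once the entry is removed, and release()/remove() methods.

function Notices() {
	const items = m.trackedList(this.redraw)
	let id = 0

	return () => [
		m("button", {onclick() { items.set(id++, "Saved!") }}, "Notify"),
		m.keyed(items.live(), (handle) => [
			handle.key,
			m(".notice", {
				// `signal.aborted` turns true once the entry is removed, which is
				// when the exit animation class should apply.
				class: handle.signal.aborted && "fade-out",
				// Tell the tracker the exit animation finished.
				ontransitionend: handle.release,
				onclick: handle.remove,
			}, handle.value),
		]),
	]
}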

oninit → m.init(callback)

In general, oninit is redundant, even in v2. However, m.init retains enough uses that it's not entirely redundant.

To take the ShowUser example from before:

function ShowUser({id}) {
	let user, error

	queueMicrotask(async () => {
		const response = await fetch(m.p("/api/user/:id", {id}), {signal: this.pageHandle.signal})
		try {
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return () => [
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

Let's assume there wasn't a this.pageHandle in the context:

function ShowUser({id}) {
	const ctrl = new AbortController()
	let user, error

	queueMicrotask(async () => {
		const response = await fetch(m.p("/api/user/:id", {id}), {signal: ctrl.signal})
		try {
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return () => [
		m.remove(() => ctrl.abort()),
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

This could save a couple lines by using m.init.

function ShowUser({id}) {
	let user, error

	return () => [
		m.init(async (signal) => {
			const response = await fetch(m.p("/api/user/:id", {id}), {signal})
			try {
				if (response.ok) {
					user = await response.json()
				} else {
					error = await response.text()
				}
			} catch (e) {
				error = e.message
			}
		}),
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

The closest v2 equivalent would be this, similar to the above variant without m.init:

function ShowUser({id}) {
	const ctrl = new AbortController()
	let user, error

	queueMicrotask(async () => {
		const response = await fetch(m.p("/api/user/:id", {id}), {signal: ctrl.signal})
		try {
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		m.redraw()
	})

	return {
		onremove: () => ctrl.abort(),
		view: () => [
			user != null && m(".user-info", [
				m(".user-name", user),
			]),
			error != null && m(".error", error),
			user == null && error == null && m(".loading"),
		],
	}
}

It's a bit niche, but it can save code in certain situations. It also has the added benefit of scheduling the callback so it doesn't block frame rendering. (This was done before with queueMicrotask, but common idioms aren't usually aware of that.)

Note that the similar Comp = m.lazy({fetch: () => import("./path/to/Comp.js")}) helper does not do the same thing; it's instead aiming to minimize load delay, letting the initial tree be built during script fetch - there are different perf considerations there.

How Has This Been Tested?

I've written a bunch of new tests, and every current test (to the extent I've kept them) passes. At the time of writing, there are about 219k tests, but all but about 8k of those are the near-exhaustive m.fetch tests.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

  • My code follows the code style of this project.
  • I have added tests to cover my changes.
  • All new and existing tests passed.
  • My change requires a documentation update, and I've opened a pull request to update it already: I need to write that first, and that may be a bit.
  • I have read https://mithril.js.org/contributing.html.

  • Perf is somewhat improved in spots, but generally the same otherwise.
  • Cuts only about 2% of the bundle, but removes a pretty bad pain point in the code. Plus, inserting arbitrary HTML outside a container is usually a recipe for disaster in terms of styling.
  • It's simpler that way and I don't have to condition it.
  • It's redundant and has been for ages. A v3 increment gives me the ability to minimize surprise.
  • People can just use equality and regexps as needed.
  • Also, rename it `m` to make dev stack traces a little more readable, and remove a useless layer of indirection.
  • This makes that class fully monomorphic and also a bit safer.
  • That's not been there for years.
  • The long comment at the top of it explains the motivation.
  • It's now been made redundant with `m.tracked`.
@17dec commented Oct 21, 2024

I was worried ES2018 might break Pale Moon, but I just tested the counter example and it seems to work. 👍

@panoply commented Oct 21, 2024

I've been waiting for this proposal @dead-claudia. Been some years you've been hacking on it, cool to see it reach this level. Can't wait to get some time and dig into it.

@kfule (Contributor) commented Oct 21, 2024

Great job. I will try it out later.
Regarding performance, you might want to compare it to v2.0.4 as well, as v2.0.4 still seems to have a lot of users. Also, as mentioned in #2983, the bug fixes and improvements after v2.0.4 have caused a performance regression.

@StephanHoyer (Member) commented Oct 24, 2024

First, thanks again @dead-claudia for the work you put into that PR.

I really like the way you're heading with the move away from magic-props to special vnodes. This will greatly improve composability on the lifecycle front.

I still don't fully understand how some of the new special vnodes work, esp. m.retain, m.set, m.use, m.init, m.throttler and m.debouncer. Maybe you can give a few more examples of these.

What I'm skeptical about

  • drop of m.request: I really like the api and flexibility
  • m.q/m.p: why abbreviate that?
  • the new style of routing (but not sure here, because I think I do not fully grasp that)

Another thing that I'm worried about is the amount of breaking changes and the backwards compatibility. As you might have noticed, the mithril community is not in great shape. Most of the main contributors (community and code) have moved away to other projects, and a lot of the remaining ones have built massive apps that rely on a stable API, which is a big strength of mithril and allows for such big projects.
So a massive change in the API might not help them and might drive them further away from the community.
And for new users, I'm not sure if the new (more academic) API is easier to understand than the old one, which was much more straightforward.

So maybe it would be a good idea to keep some of the old ergonomics and deprecate them for at least one major version, so people have time to switch to the newer way of doing stuff. And maybe it turns out that it does not live up to its expectations and people like the old API more.

I would at least keep m.redraw, m.route and m.request and the lifecycle-attrs. Sure this would mean a few kb more, but who cares if we have 6kb or 10kb. They also can partly live in a separate compat-package.

So hope this helps.

@StephanHoyer (Member) commented:

One thing I forgot to ask:

What's the SSR story of v3? I would really like to discontinue mithril-node-render and see it integrated into the main framework. And the question that comes next: How is the support for hydration of SSR apps?

  • …e `m.use`: `m.use` is specially designed to be used a fair amount, and simulating it is relatively slow, so it's worth making a built-in.
  • `m.key` -> `m.keyed` provides a simpler interface, one that you don't have to type `key` out explicitly for anymore. (Most frameworks require you to explicitly add a "key" attribute, while this approaches it more like a glorified `Map`.)
  • It's very common to want context and not care about attributes. What's *not* common is wanting the old attributes and not the new. So that's what this optimizes for.
@kevinfiol (Contributor) commented:

This is the best early Christmas present 🥲

@dead-claudia (Member, Author) commented:

Updated this PR with some changes and a much better explainer of the new API.

@dead-claudia (Member, Author) commented:

@StephanHoyer

> First, thanks again @dead-claudia for the work you put into that PR.

Thanks! 🙏

> I really like the way you're heading with the move away from magic-props to special vnodes. This will greatly improve composability on the lifecycle front.

> I still don't fully understand how some of the new special vnodes work, esp. m.retain, m.set, m.use, m.init, m.throttler and m.debouncer. Maybe you can give a few more examples of these.

Updated the initial comment to explain these better, with detailed examples that took about a day or two to work out. In the process, I had to modify m.tracked() as I identified a usability issue in it.

> What I'm skeptical about

>   • drop of m.request: I really like the api and flexibility

Re-added that, just with a different name (to help people more easily notice there's a difference).

>   • m.q/m.p: why abbreviate that?

Unabbreviated m.q into m.query since my initial reasoning (to align with m.p) was admittedly weak.

m.p is abbreviated as it's expected to be used a lot. Where v2's m.request, m.route.Link, and m.route.set accept a separate parameters object to interpolate into a route pattern, I've extracted all that into a separate m.p utility that just spits out an interpolated path. It's abbreviated because I anticipate it to be used a lot with fetching and links.

If you're wondering about why I optimized it so far, check the initial comment of src/std/path-query.js - that explains why I did it.

  • the new style of routing (but not sure here, because I think I do not fully grasp that)

Fixed with some explainers in the initial comment. It's pretty easy to squint and see the similarities.

Another thing that I'm worried about is the number of breaking changes and the backwards compatibility. As you might have noticed, the mithril community is not in great shape. Most of the main contributors (community and code) have moved away to other projects, and a lot of the remaining ones have built massive apps that rely on a stable API, which is a big strength of mithril and allows for such big projects. So a massive change in the API might not help them and drive them further away from the community.

To be fair, this isn't our first rodeo. And we did go through a similar transition from v0.2 to v1.

And for new users, I'm not sure if the new (more academic) API is easier to understand than the old one, which was much more straightforward.

More academic APIs have proven to be exactly the right type of abstraction needed for many problems. While the API in my PR is more academic, I also tried to make it simpler.

This feedback did lead me to move from m.key vnodes to m.keyed fragment vnodes, though. So there were some fair criticisms.

Will call out that I'm pulling a page out of the lessons learned from all the ECMAScript design discussions, including max-min classes. There are three major lessons that stick out the most to me:

  1. The smaller the feature, the more likely it is to gain adoption.
  2. The simpler the feature, the less likely it is people will make mistakes with it.
  3. The narrower the feature, the less likely it will be misused.

Academic APIs can sometimes produce more gotchas than they get rid of, but pragmatic use of academic styles of APIs can often get you 90% of the way there with 10% of the effort. And it's that pragmatic use that was my ultimate goal.

So maybe it would be a good idea to keep some of the old ergonomics and deprecate them for at least one major version, so people have time to switch to the newer way of doing stuff. And maybe it turns out that it does not live up to its expectations and people end up liking the old API more.

I would at least keep m.redraw, m.route and m.request and the lifecycle-attrs. Sure this would mean a few kb more, but who cares if we have 6kb or 10kb. They also can partly live in a separate compat-package.

Ironically, it's those that you named (mod m.request) that are the very things I wanted to switch up the most.

  • m.redraw() → this.redraw() isn't much of a change in practice. Most people aren't dealing with multiple mount points, so it will almost be sed-able.
  • m.route(...) is mostly a one-time thing. v2 routes necessarily can't be spread out very far, so users are only porting one set of things. I did make it a point to retain the same route patterns, modulo :rest... being replaced with *rest, and it's very possible to codemod most v2 m.route(elem, defaultRoute, {...routes}) to this PR's m.mount(elem, () => m.route(prefix, ({route}) => ...)) - see the sketch after this list.
  • The lifecycle attributes were causing massive composability issues. I almost raised concerns about this back before the v1 redesign was even released.
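
Here's that codemod shape as a hedged before/after sketch (the prefix value, Home/Settings, and the body of the route callback are placeholders; only the outer composition is taken from the list above):

// v2: mount point, default route, and route table in one call
m.route(document.body, "/", {
    "/": Home,
    "/settings": Settings,
})

// v3 proposal: mount and routing compose instead
m.mount(document.body, () => m.route("#!", ({route}) => {
    // dispatch on `route` here and return the matching view
    // (e.g. via the route-matching helpers discussed further down)
}))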

I did restore m.request in the form of m.fetch. And I locked down its error behavior a lot better.

So hope this helps.

Oh, this feedback was immensely helpful.


One thing I forgot to ask:

What's the SSR story of v3? I would really like to discontinue mithril-node-render and see it integrated into the main framework. And the question that comes next: how is the support for hydration of SSR apps?

There's none currently, but I'm open to adding such a story for that. I'd rather see it deferred until after this PR, though.

I want a hard guarantee that it doesn't unnecessarily allocate here. To do otherwise could introduce uncertainty into tests.
@JAForbes
Copy link
Collaborator

JAForbes commented Nov 5, 2024

@Claudia Meadows I just had a proper read through of the proposal. Thank you for all the hard work you've put into this!

There's a lot here to discuss, and even if I don't agree with all of the proposals, it is so nice to have a proposal to discuss!

First main point: I personally think v3 should just be a clean-up release of v2 - extremely minor changes, tightening up the v2 API, e.g. dropping support for classes/POJOs. No new abstractions/concepts. I think if anything this should be a v4 proposal. That would give us a little bit more breathing room to consider the API surface without holding off a v3 release that removes some footguns.

I think there's a lot of things here that don't need to be in mithril core. At least not in an initial release. I think, for example, the debounce/throttle stuff is way more elegant with streams, and it's kind of streams' wheelhouse. If we added those methods to the stream API I think it would be a lot cleaner than the proposal example.

I also think as a high level goal we should strive to not have too many top level methods / properties on the m namespace. Each time we add a method or property it should be after a lot of dogfooding and community demand. Mithril's API should always feel small. I think we shouldn't just optimize for the number of kilobytes. We should optimize for the number of concepts.


I really like the direction of removing magic properties from VNodes. But it feels a little strange having effects in the view; they get re-evaluated every render, and it adds noise to the markup (in my opinion at least). If something isn't returning/producing a vtree, I don't see why it should sit in the view.

Could they instead sit on the vnode:

const Example = v => {
  v.create(() => console.log('hello') )
  v.remove(() => console.log('goodbye') )

  return () => m('div')
}

I think this would also establish a precedent for having more component-scoped APIs that auto cleanup / cancel. E.g. v.stream / v.request.


I like v.init, not sure about the naming. I think having a built in util that provides an abort signal when the component is unmounted is an extremely useful util to add to core, and it is easy to understand too.

But I think it should work even if called after init. E.g. you should be able to create arbitrary effects and bind cleanup to that signal in an event listener, in another lifecycle hook, whenever you want. It is an essential building block for other userland abstractions and something I always missed in previous versions (juggling multiple onremoves was painful).

There is a bit of API overlap with onremove and an injected AbortSignal in oninit, I feel like we could unify that a bit better.


I don't think we should support the functional components variation. I think that will be a huge footgun, and it's one of the worst API design mistakes React ever made. Writing those extra () => characters solves so many problems, and I think it would be a fantastic simplification if mithril didn't have component variations anymore and just had one way to write components.

Love the simplified closure component interface though (return () => ...). I think we should do that in v3 for sure.


I think create/update should be separate. I don't see how having it in one hook simplifies things as any persistent binding needs to sit in a separate scope anyway.


I'm not sure about m.use, I think key: [fields].join('|') does the job and doesn't open the can of worms of having probably the worst abstraction from React (useEffect). Mithril doesn't have an answer to useEffect, so I can see why people would ask for it but that doesn't mean we should give them what they know. I also don't see why this couldn't land later. I really don't think mithril should go down the path of reference equality diffing of attributes / state. I also don't see why this couldn't be a separate npm module to test adoption.


I like offering those route matching methods and ditching route resolver. I really strongly believe we shouldn't offer "just use a regex" as a solution.

I don't like the second proposal as the default happy path; it's too imperative. But I think offering those utils would be extremely useful.

Additionally can we skip m.lazy and just support async functions that return { default: Component }? as in

m.route({
  '/': Home,
  '/settings': () => import('./pages/settings.js')
})

Also I didn't see if there was a nested routes example, any thoughts there? I think if we're going to do a route api refresh it should support nesting. Perhaps v.route could be scoped to the subroute, or route.match returns a subroute instance? I think this needs some careful design/discussion. I think you touched on it with m.set but it would be good to see a more complete example.


I think we can drop m.route as an entry point and just always use m.mount, keep m.route as just a view level utility. I think we should really strive for simplifying call signatures / usages.


I like dropping XHR in favour of fetch internally for m.request, but I don't see a huge win in changing the m.request API. If we're not offering any sugar on top of fetch, may as well just document using native fetch and use .then(() => m.redraw()). I think keeping m.request as is makes sense until there's sufficient justification.


I think async event handlers are great (didn't we already land that in v2?). I don't think we need m.capture; I think in your comparison it is so clear what is going on when you have those 3 lines. When I see m.capture it's a bit of a black box; it's not enough complexity to warrant a util, I think, at least not in core.


I'm not sure about the m.keyed API. I think this is one case where we should have a magic property on the vnode. In my code I'll end up having m.keyed all over the place; it will create a lot of visual noise. It's going to be really painful performing the upgrade on legacy codebases, as it's not just an alias; it's introducing a nested scope with parens, etc. I don't think the tradeoff is worth it.


I see where you're going with this.redraw() but part of the motivation for closure components is to avoid the this context. Could it instead be v.redraw()? I feel if we're going to use this we should only use this. It shouldn't be encouraged to mix in this with other styles.

In my codebases I pretend this isn't even a language feature and I'm much happier for it.


Can we rename m.set to something like m.context, as that is what it's generally called within the ecosystem now? It would make it a little easier for newcomers to find the functionality. And could the context be a second arg to all components instead of using this?


m.link is a definite improvement. I like in sin you can just use native anchor elements but mithril should be a little more conservative and m.link strikes the right balance.


I think m.tracked is so low level and complicated and onbeforeremove is one of the best features ever added to mithril.

Can we keep it as is but use the new lifecycle API?

Slightly boosts performance (by around 5-10%) and slightly saves on bundle size
@dead-claudia
Copy link
Member Author

dead-claudia commented Nov 7, 2024

@JAForbes I'll address the rest of your comment if/when I find time, but I do want to address a couple parts now:

I really like the direction of removing magic properties from VNodes. But it feels a little strange having effects in the view; they get re-evaluated every render, and it adds noise to the markup (in my opinion at least). If something isn't returning/producing a vtree, I don't see why it should sit in the view.

Could they instead sit on the vnode:

const Example = v => {
  v.create(() => console.log('hello') )
  v.remove(() => console.log('goodbye') )

  return () => m('div')
}

I think this would also establish a precedent for having more component-scoped APIs that auto cleanup / cancel. E.g. v.stream / v.request.

What are your thoughts on injecting them in the context alongside this.redraw() (which is already there)? Something like this.onlayout(f) and this.onremove(f)? Worth noting that I could have those scheduled without even needing the vnode, and I did consider it, but ended up just not doing it because the vnode-based alternative was simpler to code at the time. (By the time I did this, I hadn't yet written the code for context.)
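
To make the question concrete, that variant would read something like this (hypothetical hook names; whether they'd live on the context alongside this.redraw(), and whether they'd be callable from the closure body, is exactly what's being asked):

function Example() {
    // hypothetical context-injected hooks, mirroring the v.create/v.remove sketch above
    this.onlayout((dom) => console.log("hello", dom))
    this.onremove(() => console.log("goodbye"))

    return () => m("div")
}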

I do have an m.inline vnode to provide access to that outside components, so that's not an issue.

I don't think we should support the functional components variation. I think that will be a huge footgun, and it's one of the worst API design mistakes React ever made. Writing those extra () => characters solves so many problems, and I think it would be a fantastic simplification if mithril didn't have component variations anymore and just had one way to write components.

Love the simplified closure component interface though (return () => ...). I think we should do that in v3 for sure.

Unfortunately, we don't really have a simpler or cheaper abstraction than a function for a component. Keep in mind, there's nothing like React-style hooks here. Also, I will point out that there are unique optimization benefits beyond just memory consumption for doing it this way. V8 in particular optimizes functions by reference, not by code body.

@dead-claudia
Copy link
Member Author

@JAForbes I'll go ahead and address most of the rest:


I think there's a lot of things here that don't need to be in mithril core. At least not in an initial release. I think, for example, the debounce/throttle stuff is way more elegant with streams, and it's kind of streams' wheelhouse. If we added those methods to the stream API I think it would be a lot cleaner than the proposal example.

I get where you're coming from on adding it to streams, but pragmatically speaking, in most web applications, debouncing and throttling are applied very early in the pipeline. So early, in fact, that people commonly just do debounce(fn, ms) where fn is what does the actual dispatch into the stream or store.
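
As a plain-JS illustration of that pattern (nothing Mithril-specific here; store.dispatch is just a stand-in for whatever consumes the value):

// Trailing-edge debounce applied right where the event enters the pipeline
function debounce(fn, ms) {
    let timer
    return (...args) => {
        clearTimeout(timer)
        timer = setTimeout(() => fn(...args), ms)
    }
}

const search = debounce((text) => store.dispatch({type: "search", text}), 250)

m("input", {oninput: (e) => search(e.target.value)})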

For one concrete example, https://www.npmjs.com/package/redux-debounced gets less than 10k weekly downloads, while this SO answer on how to debounce Redux actions simply uses a debounce helper.

In the world of React Hooks, debouncing also similarly occurs very early, like immediately after the variable state is defined. You use the debounced variable instead of the raw state variable. But even that answer also recommends people just do something like useMemo(() => debounce(onFoo, ms), [onFoo]) or useMemo(() => debounce(setState, ms), [setState]) instead.

Debouncing streams really only makes sense if you're passing events directly into a stream, and in the front end, this is a very rare thing to see if you're not procedurally subscribing to events on elements.

As for why in core? Debouncing and throttling are extremely common operations. It's so common, there's a current WHATWG proposal just to add it to event emitters as an option as well as to WHATWG's (very WIP) observable spec as a method. People have been wanting a native debouncing and throttling API for years. The basic concept itself is about 15 years old, and the name itself is borrowed from a similar mechanical issue switches have faced for many decades.


I think m.tracked is so low level and complicated and onbeforeremove is one of the best features ever added to mithril.

Can we keep it as is but use the new lifecycle API?

Unfortunately, I found it very hard to defend porting onbeforeremove, for multiple reasons.

  1. It'd be very complicated to support from a technical perspective. The internals of the new reconciliation algorithm rely extremely heavily on the stack and global state for tracking the current parent, and in order to properly remove a parent whose child is blocking its removal, I'd have to either bloat the vnode or add a weak map of node to {parent, refCount} pairs. And if even one node has its removal delayed, I have to link every removed element and all their ancestors so I don't prematurely remove those from the DOM. This would likewise result in a significant increase in bundle size, even after removing m.tracked/m.trackedList.
  2. From an API perspective, child components have no way to know they're slated for removal but blocked due to a parent's onbeforeremove. onbeforeremove on the child simply doesn't run. And in this state, you specifically need to ignore user clicks and keyboard input, as any input is almost certainly erroneous. This is solved with m.tracked/m.trackedList.
  3. Stuff like transitions require raw DOM operations in v2. Exposing the data flow like this allows users to stay within the confines of the framework and just use vnodes to declaratively wire everything up.

I see where you're going with this.redraw() but part of the motivation for closure components is to avoid the this context. Could it instead be v.redraw()? I feel if we're going to use this we should only use this. It shouldn't be encouraged to mix in this with other styles.

In my codebases I pretend this isn't even a language feature and I'm much happier for it.


Can we rename m.set to something like m.context, as that is what it's generally called within the ecosystem now? It would make it a little easier for newcomers to find the functionality. And could the context be a second arg to all components instead of using this?

I'm okay with changing the name of the factory if there's sufficient support for it. However, the choice of this was very deliberate. (TL;DR: performance + end developer UX)

When I first opened this PR, I used an (attrs, old, context) => vnode as the main component body. But, I quickly ran into problems with this, especially when it came to routing, but also when it came to other utilities like m.use (before I made it a proper built-in vnode).

  • When you need context, it's rare that you'd also need the old vnode.
  • When you need the old vnode, it's rare that you also need context.

More than once, with that argument order, I forgot to add the old parameter in exactly that case... while working on this very PR. If I'm having issues with that, others most certainly will as well.

So, to minimize bug risk, both arguments need to be in some way optionally accessible. There are two ways to do that: pass it all in an object (say, tree = view({attrs, prev, context})), or pass one of the two via this. I chose the second for a few reasons:

  1. Less boilerplate accessing those two properties. One can just be accessed via this, and the other only needs a second argument. If you don't need the one in this, just don't use this. If you don't need the one in the argument, just omit the argument.
  2. Better performance by default. I'm allocating one less object per component, and if users never need the old attributes in callbacks, they don't have to pay for all that memory each time.
  3. Better TypeScript JSX compatibility. When checking JSX element properties, TypeScript checks the first argument. I've tried for over 5 years to get them to add the flexibility needed for literally any of our versions, but I've not seen anything move forward.

And I chose to have this be the context and the second argument be the old attributes for a couple of reasons (see the sketch after this list):

  1. this has long been considered the most idiomatic place to put any sort of execution context in JS, especially when this isn't just the context object in an instance method. Virtual DOM frameworks like React using components or special functions to access it has always been the exception, not the norm. The norm is frameworks like Express, Koa, and Mocha, each passing contextual information in this. It's Node.js passing the source event emitter as this in event listener callbacks. It's browsers passing event.currentTarget as this when invoking event listeners. In all of these, this is used as a context object, but it isn't an instance object in the object-oriented sense.
  2. If you make the second argument be the old attributes, it's easy to just look at the arguments and instantly know they're attributes. One less thing you have to think about as you crunch out components or stare at dizzying amounts of code.
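
Here's the sketch referenced above: a hypothetical component using that shape (the names and the attrs.feed callback are illustrative, not part of the proposal):

function Prices(attrs, old) {
    // `this` is the context; `old` is the previous attributes and can simply
    // be omitted from the parameter list when it isn't needed
    if (old !== undefined && old.symbol !== attrs.symbol) {
        console.log("symbol changed from", old.symbol, "to", attrs.symbol)
    }
    return m("button", {
        onclick: () => {
            // out-of-band updates go through the context's redraw
            attrs.feed.subscribe(() => this.redraw())
        },
    }, "Watch ", attrs.symbol)
}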

I like v.init, not sure about the naming. I think having a built in util that provides an abort signal when the component is unmounted is an extremely useful util to add to core, and it is easy to understand too.

But I think it should work even if called after init. E.g. you should be able to create arbitrary effects and bind cleanup to that signal in an event listener, in another lifecycle hook, whenever you want. It is an essential building block for other userland abstractions and something I always missed in previous versions (juggling multiple onremoves was painful).

There is a bit of API overlap with onremove and an injected AbortSignal in oninit, I feel like we could unify that a bit better.

I'll admit I don't like m.init's current form. I was hoping someone could help me figure out a better solution, one that's better than if (old !== undefined) queueMicrotask(init).

I will point out that there are two performance constraints that I'm trying to meet:

  1. I want it to defer its initializer callback until after paint. In an animation frame, waiting an extra microtask does the trick, as promise jobs don't run until the browser is about to observably paint the current DOM state. In reality, this paint step is really it serializing state to then send to a shared compositor process, which makes for a perfect opportunity to concurrently kick off downloads and the like as the JS code is no longer trying to build a tree.
  2. Unconditional provision of an AbortController will cause memory usage and startup time to suffer greatly. A single AbortController weighs about the same as 2-3 entire DOM elements, and it has a somewhat heavy allocation process. I need these to not be allocated by default.
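
For reference, here's roughly how the current form reads in use, hedged against the constraints above (native fetch is used to avoid assuming anything about m.fetch's options; the endpoint is a placeholder, and the signal is assumed to abort if the component is removed before the work finishes):

function UserCard(attrs) {
    let user
    return () => m("div",
        m.init(async (signal) => {
            // deferred until after the first paint; aborts if the component is removed
            const res = await fetch(`/api/user/${attrs.id}`, {signal})
            user = await res.json()
            // resolving triggers a redraw, the same way async event handlers do
        }),
        user ? m("h2", user.name) : m("p", "Loading…"),
    )
}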

I think create/update should be separate. I don't see how having it in one hook simplifies things as any persistent binding needs to sit in a separate scope anyway.

I've been torn on this. There are arguments both ways on this one. Both mechanisms simplify one thing but make the other more complicated. I wonder if it'd be best to just provide both create/update and layout hooks - it'd all just be going into the same queue anyway.


I'm not sure about m.use, I think key: [fields].join('|') does the job and doesn't open the can of worms of having probably the worst abstraction from React (useEffect). Mithril doesn't have an answer to useEffect, so I can see why people would ask for it but that doesn't mean we should give them what they know. I also don't see why this couldn't land later. I really don't think mithril should go down the path of reference equality diffing of attributes / state. I also don't see why this couldn't be a separate npm module to test adoption.

To be clear, this isn't a direct analogue to useEffect. It looks incredibly similar, but it's a vnode and can only work with other vnodes. If anything, it's just an evolution of an existing pattern.
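
In other words, it's closer to the key: [fields].join('|') trick mentioned above than to a hook: the wrapped children get reset when a dependency changes. A hedged sketch (UserProfile/UserPosts are placeholders):

// Hedged sketch: when attrs.userId changes identity, the children below are
// assumed to be re-initialized; otherwise they're kept across redraws.
m.use([attrs.userId],
    m(UserProfile, {id: attrs.userId}),
    m(UserPosts, {id: attrs.userId}),
)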


I like offering those route matching methods and ditching route resolver. I really strongly believe we shouldn't offer "just use a regex" as a solution.

I don't like the second proposal as the default happy path; it's too imperative. But I think offering those utils would be extremely useful.

If you have a better alternative that preserves the flexibility, I'm all ears.


Additionally can we skip m.lazy and just support async functions that return { default: Component }? as in

m.route({
  '/': Home,
  '/settings': () => import('./pages/settings.js')
})

I screwed up that example, but it's also a bit simplistic. The correct API is m.lazy({fetch: () => ...}). And m.lazy also supports pending and error views. So there's more to it than that.
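
So, something along these lines (whether fetch needs to unwrap the module's default export, and what the pending/error option names are, isn't spelled out here, so treat those parts as placeholders):

const Settings = m.lazy({
    fetch: () => import("./pages/settings.js"),
    // pending and error views are also supported; the option names below are placeholders
    // pending: () => m("p", "Loading settings…"),
    // error: (err) => m("p", "Failed to load settings"),
})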

Also I didn't see if there was a nested routes example, any thoughts there? I think if we're going to do a route api refresh it should support nesting. Perhaps v.route could be scoped to the subroute, or route.match returns a subroute instance? I think this needs some careful design/discussion. I think you touched on it with m.set but it would be good to see a more complete example.

I glossed over that, but the intent is that you could do something like this for easy subroutes:

let match
if (match = this.route.match("/prefix/:route...")) {
    return doSubroute(match.route)
}

Basically, subroutes are just a matter of abstraction.


I think we can drop m.route as an entry point and just always use m.mount, keep m.route as just a view level utility. I think we should really strive for simplifying call signatures / usages.

The problem is the need to redraw whenever someone clicks the back button. That's why m.route can't fully disappear.


I like dropping XHR in favour of fetch internally for m.request, but I don't see a huge win in changing the m.request API. If we're not offering any sugar on top of fetch, may as well just document using native fetch and use .then(() => m.redraw()). I think keeping m.request as is makes sense until there's sufficient justification.

m.fetch mainly exists to wrap the return value (so you don't have to awkwardly decode it) and add progress events (a feature XHR has that fetch doesn't).


I think async event handlers are great (didn't we already land that in v2?). I don't think we need m.capture; I think in your comparison it is so clear what is going on when you have those 3 lines. When I see m.capture it's a bit of a black box; it's not enough complexity to warrant a util, I think, at least not in core.

The v3 async event handlers can auto-redraw when the handler resolves. v2 does not have this functionality.

And m.capture is to capture the functionality that's currently return false. The return value is just to save some hassle if all you need to do is ignore an event.
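
Side by side, the two behaviors look like this (saveDraft and lastScrollTop are placeholders; the redraw-on-resolve and return-false semantics are as described above):

// Redraws automatically once the handler's promise resolves
m("button", {onclick: async () => { await saveDraft() }}, "Save draft")

// Returning false opts this event out of the auto-redraw
m("div", {onscroll: (e) => {
    lastScrollTop = e.target.scrollTop
    return false
}})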


I'm not sure about the m.keyed API. I think this is one case where we should have a magic property on the vnode. In my code I'll end up having m.keyed all over the place; it will create a lot of visual noise. It's going to be really painful performing the upgrade on legacy codebases, as it's not just an alias; it's introducing a nested scope with parens, etc. I don't think the tradeoff is worth it.

Keep in mind this is intended for multi-child fragments. I am interested in learning more about how this could impact you, though.
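
For reference, the multi-child case reads roughly like this (hedged sketch; todos and the li body are placeholders):

// Each list item maps to a [key, vnode] pair, Map-style, instead of a key: attribute
m("ul",
    m.keyed(todos, (todo) => [
        todo.id,
        m("li", todo.title),
    ]),
)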
